Jun 12, 2023

Query Augmentation for LLMs

Generative Question Answering with #LLMs and (local) Knowledge Bases

Architecting an AI platform around LLMs is exciting for both the research community and enterprise AI designers. If we can augment a general model with local knowledge for generative question answering (#GQA), we can improve on what a general LLM delivers on its own. At Uplift Array, we infuse LLMs with custom knowledge from our wellness and behavioral science experts.

Kader Sakkaria and the team are excited about this architecture's transformational impact. Retrieval augmentation with #LangChain libraries is a robust solution to many well-known #LLM issues. LLMs suffer from several output quality problems today, and likely will for the foreseeable future. Model focus is especially hard to manage: it is difficult to predict whether the model ever "saw" the relevant data during training. Chaining in local sources of knowledge, "stuffing" retrieved context into queries, and tuning the chain type go a long way toward addressing this. Vectorizing and indexing #OpenAI embeddings often need significant optimization as well, but the payoff is natural language retrieval access while the vectors stay inside the enterprise environment.
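As a rough illustration of the retrieve-and-"stuff" pattern, here is a minimal sketch in plain Python. The bag-of-words embedding and the three-item knowledge base are toy stand-ins of my own (in practice you would use real #OpenAI embeddings and a vector store kept in the enterprise environment), but the flow is the same: embed the query, rank local documents by similarity, and stuff the top matches into the prompt sent to the LLM.

```python
import math
from collections import Counter

# Toy local knowledge base (stand-in for an enterprise vector store).
KNOWLEDGE_BASE = [
    "Regular sleep schedules improve academic retention.",
    "Coaching check-ins every two weeks reduce student drop-out.",
    "Mindfulness exercises lower exam anxiety.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def stuff_prompt(query: str) -> str:
    """'Stuff' the retrieved passages directly into the model prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = stuff_prompt("How can coaches improve student retention?")
print(prompt)
```

The "stuff" strategy simply concatenates everything retrieved into one prompt; when the context exceeds the model's window, the other chain types mentioned above (e.g. map-reduce) split the work across multiple calls instead.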

There is a lot of potential for generative AI to help students, coaches, and counselors increase academic retention. #UpliftArray #ai #machinelearning #deeplearning #deeplearningai #queryoptimization #mapreduce #gpt #chatgpt #huggingface

https://arxiv.org/pdf/2305.14283.pdf

