Retrieval Augmented Generation, more commonly known as RAG, is a technique for combining LLMs and databases in order to maximise their respective strengths while mitigating their shortcomings.
LLMs offer intuitive interactions and advanced functionality, but they frequently hallucinate and their outputs cannot easily be audited, which risks misleading users with unverifiable statements.
To navigate this challenge while keeping the benefits of LLMs, RAG imposes a structure in which the LLM acts as a mediator between the user and the data stored in the database: when the user asks a question, the LLM creates a relevant query for the database and then, using exclusively the data it receives back, answers the user's initial question. Any answer returned to the user therefore originates from the owned knowledge base, rather than being a hallucination.
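The mediator flow described above can be sketched as follows. This is a minimal, illustrative example, not a real RAG implementation: the names (`build_query`, `retrieve`, `answer`, `KNOWLEDGE_BASE`) are assumptions, and the two LLM steps are stubbed with simple keyword logic so the example stays self-contained and runnable. The key property shown is that the final answer is assembled exclusively from retrieved data, and the system refuses rather than invents when nothing matches.

```python
# A tiny stand-in knowledge base; in practice this would be a real database.
KNOWLEDGE_BASE = [
    "The refund window for standard orders is 30 days.",
    "Express shipping takes 2 business days within the EU.",
    "Gift cards are non-refundable once activated.",
]


def build_query(question: str) -> set:
    """Stand-in for the LLM turning the user's question into a database query."""
    stopwords = {"what", "is", "the", "a", "for", "are", "how", "do", "you"}
    return {w.strip("?.").lower() for w in question.split()} - stopwords


def retrieve(query: set, kb: list) -> list:
    """Return only the documents that overlap with the query terms."""
    return [doc for doc in kb if query & {w.strip(".").lower() for w in doc.split()}]


def answer(question: str, kb: list) -> str:
    """Answer using exclusively the retrieved data, never free generation."""
    hits = retrieve(build_query(question), kb)
    if not hits:
        # Refusing is the RAG alternative to hallucinating an answer.
        return "No supporting data found."
    return " ".join(hits)


print(answer("What is the refund window?", KNOWLEDGE_BASE))
```

In a production system the two stubbed steps would each be an LLM call (one to write the query, one to phrase the answer), and `retrieve` would hit a real search index or vector store, but the structure of the loop is the same.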
Rules-based AI is used in this process to enrich the database with derived knowledge, meaning the LLM can answer much broader and more valuable questions because it now has access to a more insightful pool of data. Rules-based AI also drastically improves query speed, enabling the LLM to return answers faster.
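One way to picture this enrichment step is simple forward chaining: hand-written rules are applied to the stored facts until no new facts are derived, so the database already contains the conclusions before the LLM ever queries it. The fact triples and the rule below are illustrative assumptions, not part of the original text.

```python
def enrich(facts: set) -> set:
    """Apply hand-written rules repeatedly until no new facts are produced."""
    rules = [
        # Hypothetical rule: any shipped order is eligible for tracking.
        lambda fs: {
            (s, "trackable", "yes")
            for (s, p, o) in fs
            if p == "status" and o == "shipped"
        },
    ]
    changed = True
    while changed:
        derived = set().union(*(rule(facts) for rule in rules)) - facts
        changed = bool(derived)
        facts |= derived
    return facts


# Illustrative facts as (subject, predicate, object) triples.
facts = {("order_123", "status", "shipped"), ("order_123", "carrier", "DHL")}
enriched = enrich(set(facts))
```

Because the derivation happens ahead of time, answering "can I track order 123?" becomes a single lookup rather than multi-step reasoning at query time, which is where the speed benefit comes from.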
This approach maintains the LLM's strength of answering the questions that matter most to the user while ensuring the replies it gives are factual.