May 08, 2024
3 min read


A public open-source repo for all things Retrieval-Augmented Generation (RAG)

The RAG repo is a compilation of useful techniques for achieving razor-sharp accuracy and keeping your bots & assistants up-to-date.

Why RAG?

LLMs are a key technology powering chatbots, assistants, web applications, front- and back-end services, etc. Unfortunately, several major challenges are inherent to the nature of LLMs:

  • LLMs extrapolate when facts aren’t available, leading to responses full of false information.
  • They provide out-of-date or generic information.
  • They often generate responses based on non-authoritative sources.
  • They give inaccurate responses due to terminology confusion, architecture specifics, etc.

LLMs can be compared to over-enthusiastic employees who DON’T want to stay up-to-date with the latest company events & policies.

In contrast, RAG is like an over-enthusiastic employee who ACTUALLY STAYS up-to-date, obeys all the rules, and is aware of every single policy change.

Main Goal

The main goal of the RAG repo is to showcase solutions to some common roadblocks encountered while building AI systems, bots & assistants.

What you’ll find

Here are some topics you’ll find in the RAG repo:

  1. Data processing - Best tips & tricks on how to process various data formats and structures.
  2. Chunking - Different chunking techniques & useful strategies for chunk augmentation.
  3. RAG pipelines - Implementation of robust RAG pipelines with query routing, hybrid search combining different algorithms, re-rankers, etc.
  4. Embeddings - Best tips on improving data representations using open- and closed-source embedding models.
  5. Vector Stores - Overview & implementation of well-recognized vector stores, with guidance on choosing one for your use case.
  6. Retrieval - Implementation & comparison of advanced retrieval strategies.
  7. Prompting - Experiments with base system prompts for achieving razor-sharp accuracy.
  8. Routing - The most extensive adaptive routing guide for different user intents & dynamic decision-making.
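To give a flavor of how these pieces fit together, here is a minimal, self-contained sketch of the core RAG loop: chunk documents, embed the chunks, retrieve the most relevant ones for a query, and assemble an augmented prompt. The hashing-based "embedding" below is a stand-in for a real embedding model (e.g. an open-source sentence transformer), used only so the example runs without external dependencies; the function names and parameters are illustrative, not taken from the repo itself.

```python
# Minimal RAG loop sketch: chunk -> embed -> retrieve -> augment.
# The embedding here is a toy hashing scheme, NOT a real model.
import hashlib
import math


def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Fixed-size character chunking with overlap between chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def embed(text: str, dims: int = 64) -> list[float]:
    """Toy embedding: each token hashes into one dimension; L2-normalized."""
    vec = [0.0] * dims
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank chunks by cosine similarity to the query embedding."""
    q = embed(query)
    scored = sorted(
        chunks,
        key=lambda c: sum(a * b for a, b in zip(q, embed(c))),
        reverse=True,
    )
    return scored[:top_k]


# Build an augmented prompt from the retrieved context.
docs = ("RAG grounds LLM answers in retrieved documents. "
        "Chunking splits documents into pieces.")
pieces = chunk(docs)
context = retrieve("How does chunking work?", pieces)
prompt = ("Answer using only this context:\n"
          + "\n".join(context)
          + "\n\nQ: How does chunking work?")
```

In a real pipeline, each stage above becomes swappable: the chunker, the embedding model, the vector store behind `retrieve`, and the re-ranker on top of it, which is exactly the modularity the topics above explore.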

In other words, you’ll learn how to transform LLMs into ultra-precise, lightning-fast, cost-effective AI powerhouses.


Feel free to contact me & leave your feedback. I really appreciate it.

Remember, RAG is here to stay, and systematic exploration will be the key to its success!

Dreaming of something innovative? It's time to bring that dream to life!

Let's create incredible products together! I'm just an email away.