RAG (Retrieval-Augmented Generation) is one of the hottest topics in AI today. It lets you get the outputs you want from a model without fine-tuning it or modifying its underlying layers and weights. Think of it like this: you send your query along with specific instructions and supporting context to the AI, and it returns results aligned with them. These instructions and context together form what we call a prompt, and the effectiveness of your RAG system largely depends on how well you design that prompt.
Another critical component of RAG is fetching relevant information from your source database. This data helps you build a more effective prompt. To achieve this, you need to retrieve data that is contextually similar to the user’s query, a process known as semantic similarity search. This is where a vector database comes into play. As discussed in our previous posts, vector embeddings play a crucial role here: transformers and other NLP models accept embedded vectors of tokens as input, and these embeddings encode semantic meaning. If you’re not familiar with vector embeddings or transformers, we recommend checking the Understanding Transformers page on Styrish AI.
At a high level, RAG is about finding contextually or semantically similar data to a user’s query. Text in plain English, or any human-readable language, needs to be converted into high-dimensional numeric vectors. Each token is assigned a numeric representation, and the closeness between two vectors indicates how contextually related the tokens are. This is the core concept that powers semantic retrieval in RAG systems.
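To make this concrete, here is a minimal Python sketch of measuring closeness with cosine similarity. The tiny 4-dimensional vectors are made-up toy values for illustration; real embeddings have hundreds of dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity = dot product divided by the product of the
    # vector magnitudes; values near 1.0 mean "pointing the same way".
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional vectors (purely illustrative, not real embeddings).
cat = np.array([0.9, 0.1, 0.3, 0.2])
kitten = np.array([0.85, 0.15, 0.35, 0.25])
car = np.array([0.1, 0.9, 0.2, 0.7])

print(cosine_similarity(cat, kitten))  # high score: semantically close
print(cosine_similarity(cat, car))     # lower score: less related
```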
In the exercise below, we will demonstrate how embedding vectors are generated and used to fetch contextually relevant data. We will use some freely available models and APIs to showcase this workflow. These approaches are very similar to what you would use in real-world RAG projects. If you go with AWS services, for example Amazon Bedrock, there are managed components that cover the same steps.
1. Generate embeddings
Here, I have used sentence-transformers to generate the embeddings. Sentence Transformers are lightweight models that convert sentences or paragraphs into dense vector embeddings, enabling semantic search and similarity comparisons for tasks like RAG.
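As a minimal sketch, the snippet below encodes a few example documents with the widely used all-MiniLM-L6-v2 model. That model choice is mine for this demo, not a requirement; any sentence-transformers model works.

```python
from sentence_transformers import SentenceTransformer

# Load a pre-trained sentence-transformer model.
# all-MiniLM-L6-v2 is small and fast, and outputs 384-dim vectors.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "RAG combines retrieval with generation.",
    "Vector databases store dense embeddings.",
    "Paris is the capital of France.",
]

# encode() returns one dense vector per input sentence.
# normalize_embeddings=True makes inner product equal cosine similarity,
# which simplifies the FAISS setup in the next step.
embeddings = model.encode(documents, normalize_embeddings=True)
print(embeddings.shape)  # (3, 384)
```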
2. Store embeddings in a vector database (FAISS)
Now we store these embeddings in our vector database. I have used FAISS (Facebook AI Similarity Search), an open-source similarity-search library. If you use AWS, you can use Amazon OpenSearch Service as the vector database, and there are other options for this purpose as well. Remember, one of the core mechanisms for finding semantic similarity is cosine similarity.
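Continuing the sketch above, the embeddings can be indexed in FAISS like this. Because they were normalized at encoding time, an exact inner-product index (IndexFlatIP) gives cosine similarity directly; that index choice is an assumption for this demo, and FAISS offers many others.

```python
import faiss
import numpy as np

# FAISS expects float32 arrays.
embeddings = np.asarray(embeddings, dtype="float32")
dim = embeddings.shape[1]  # 384 for all-MiniLM-L6-v2

# IndexFlatIP performs exact inner-product search. Since the vectors
# were L2-normalized, inner product equals cosine similarity.
index = faiss.IndexFlatIP(dim)
index.add(embeddings)
print(index.ntotal)  # number of stored vectors
```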
3. Perform semantic search
It’s time to perform the search. Let’s see the magic in action!
In the screenshots below, we pass two different input queries to our RAG system. You’ll notice that for both queries, the system retrieves contextually relevant and meaningful results, demonstrating the power of semantic search using embeddings and a vector database. I have fetched the top-3 results; you can easily change this to return more or fewer.
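Continuing the same sketch, a top-3 search against the FAISS index might look like this (the query string is just an example):

```python
# Encode the user query with the same model, normalized the same way.
query = "How does retrieval-augmented generation work?"
query_vec = model.encode([query], normalize_embeddings=True).astype("float32")

k = 3  # number of nearest neighbours to fetch; tune as needed
scores, ids = index.search(query_vec, k)

# Print each hit with its cosine-similarity score.
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[doc_id]}")
```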
Once the contextually relevant data is retrieved, you can append it to a carefully designed prompt and pass it to the LLM. The model then generates the desired output based on the user’s query. This process, combining retrieval of relevant context with prompt-guided generation, forms the complete RAG workflow.
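To close the loop, here is a hedged sketch of stitching the retrieved chunks into a prompt. The template is illustrative, and llm_generate is a hypothetical placeholder for whatever LLM call your stack uses (OpenAI, Bedrock, a local model, etc.).

```python
# Assemble the retrieved chunks into a context block for the prompt.
retrieved_chunks = [documents[doc_id] for doc_id in ids[0]]
context = "\n".join(f"- {chunk}" for chunk in retrieved_chunks)

prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}\n"
    "Answer:"
)

# llm_generate is a hypothetical placeholder for your LLM invocation.
# answer = llm_generate(prompt)
print(prompt)
```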