
RAG (Retrieval-Augmented Generation): A Production Setup Guide

A comprehensive guide to building production-ready RAG pipelines — from chunking strategies and hybrid retrieval to hallucination mitigation and 2025 innovations.

admin · May 3, 2026

Large Language Models are powerful — but they hallucinate. They confidently generate plausible-sounding answers from training data that may be outdated, incomplete, or simply wrong. Retrieval-Augmented Generation (RAG) solves this by grounding the model in real, verifiable data before it generates a single token.

In this guide, we'll build a production-ready RAG system from scratch: data preparation, embedding, retrieval, generation, and the optimizations that separate a demo from a reliable production system.

What is RAG?

RAG is a hybrid AI architecture that combines a retriever (search component) with a generator (LLM). Instead of relying solely on parametric memory, the model first retrieves relevant documents from an external knowledge base, then generates its answer using that context.

Example: A customer support bot receives the question _"How do I reset my API key?"_. Instead of guessing, it:

1. Embeds the query into a vector
2. Searches the company's documentation database
3. Retrieves the 3 most relevant paragraphs
4. Sends them as context to the LLM
5. The LLM generates an answer grounded in actual documentation

The result: accurate, up-to-date, and verifiable answers.
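The five steps above can be sketched end to end. This is a deliberately minimal illustration: a toy bag-of-words stand-in for the embedding model, a stubbed LLM call, and made-up sample docs (`embed`, `fake_llm`, and the documents are all illustrative, not a real API):

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term counts (stand-in for a real model)
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "To reset your API key, open Settings and click Regenerate Key.",
    "Billing invoices are emailed on the first of each month.",
    "Rate limits apply per API key, not per account.",
]

def retrieve(query, k=2):
    # Rank all documents by similarity to the query, keep the top k
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def fake_llm(prompt):
    # Placeholder for a real LLM call
    return "Answer grounded in: " + prompt[:60]

context = retrieve("How do I reset my API key?")
answer = fake_llm("Context:\n" + "\n".join(context) +
                  "\nQuestion: How do I reset my API key?")
```

Swap in a real embedding model and LLM client and this skeleton becomes the pipeline described above.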

Core Components

| Component | Purpose | Tools |
|---|---|---|
| Data Pipeline | Document ingestion, cleaning, chunking | LangChain, LlamaIndex |
| Embedding Model | Convert text to vector representations | OpenAI Ada v2, Sentence-BERT |
| Vector Database | Index and search embeddings | FAISS, Pinecone, Weaviate, Chroma |
| Retriever | Find the most relevant chunks | BM25, dense search, hybrid |
| LLM Generator | Produce answers from retrieved context | GPT-4, Llama 3, Claude |
| Re-ranker | Reorder results by relevance | Cohere Rerank, BGE Reranker |

Data Preparation & Chunking

The quality of your raw data determines the ceiling of your entire RAG system. Garbage in, garbage out — this is even more true for retrieval than for training.

Step 1: Clean Your Data

  • De-duplication — Remove duplicate paragraphs. Otherwise the model sees the same information as multiple "independent" sources, inflating confidence in potentially wrong data.
  • Normalization — Fix Unicode errors, inconsistent casing, encoding issues, and stray formatting artifacts.
  • Metadata extraction — Preserve document titles, section headers, dates, and source URLs. This metadata becomes invaluable for filtering and attribution.

Step 2: Chunking Strategy

Chunking is how you split documents into pieces that become individual entries in your vector database. This single decision has an outsized impact on retrieval quality.

| Strategy | Chunk Size | Pros | Cons |
|---|---|---|---|
| Fixed-size | 256–512 tokens | Simple, predictable | Cuts mid-sentence |
| Sentence-based | Varies | Respects boundaries | Uneven sizes |
| Semantic | Varies | Groups related content | More complex |
| Recursive | 512 tokens with overlap | Good balance | Requires tuning |

Rule of thumb: Start with recursive chunking at 512 tokens with 50-token overlap. Measure retrieval recall on a test set, then adjust.

Larger chunks capture more context but introduce noise. Smaller chunks are precise but may lose surrounding meaning. The overlap ensures no information falls into the gap between chunks.
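To make the overlap mechanics concrete, here is a minimal sketch that splits a token list into fixed-size chunks with overlap (words stand in for tokens; a real pipeline would count tokens with the tokenizer of your embedding model):

```python
def chunk_words(words, size=512, overlap=50):
    """Split a token list into fixed-size chunks; each chunk repeats the
    last `overlap` tokens of the previous chunk so no boundary context is lost."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [words[i:i + size] for i in range(0, max(len(words) - overlap, 1), step)]

# 1200 toy tokens -> chunks starting at positions 0, 462, 924
tokens = [f"w{i}" for i in range(1200)]
chunks = chunk_words(tokens, size=512, overlap=50)
```

Each chunk starts 462 tokens after the previous one, so consecutive chunks share a 50-token window.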

Retrieval: The Heart of RAG

The retriever is the most critical component. If it fails to find the right documents, even the best LLM will produce a wrong answer — just more eloquently.

Sparse Search (BM25)

BM25 is keyword-based search. It's fast, interpretable, and excels at exact matches. If a user searches for "PostgreSQL connection pooling", BM25 will reliably find documents containing those exact terms.
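For intuition, here is a compact from-scratch version of the Okapi BM25 scoring formula. Production systems use tuned library implementations, but the underlying math is the same (the sample corpus is illustrative):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs_terms, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs_terms)
    avgdl = sum(len(d) for d in docs_terms) / N
    # Document frequency of each query term across the corpus
    df = {t: sum(1 for d in docs_terms if t in d) for t in query_terms}
    scores = []
    for doc in docs_terms:
        tf = Counter(doc)
        s = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # Term-frequency saturation (k1) and length normalization (b)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

docs = [
    "postgresql connection pooling with pgbouncer".split(),
    "indexing strategies for mysql".split(),
]
scores = bm25_scores("postgresql connection pooling".split(), docs)
```

The first document matches all three query terms and scores well above the second, which matches none.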

Dense Search (Embeddings)

Dense retrieval uses neural embeddings to capture semantic meaning. The query _"How to manage database connections efficiently"_ will match documents about connection pooling even without those exact words.

Hybrid Search (Best of Both)

As of 2025, hybrid retrieval delivers the highest recall:

1. Run BM25 and dense search in parallel
2. Combine results using Reciprocal Rank Fusion (RRF)
3. Pass the merged results through a re-ranker for final ordering

This approach catches both exact keyword matches and semantic similarities.
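Reciprocal Rank Fusion itself is only a few lines. A sketch using the commonly cited constant k=60 (the document IDs are illustrative):

```python
def rrf_merge(rankings, k=60):
    """Fuse several ranked result lists: each doc scores sum(1 / (k + rank))."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc3", "doc1", "doc7"]
dense_hits = ["doc1", "doc4", "doc3"]
fused = rrf_merge([bm25_hits, dense_hits])
```

Documents that appear in both lists (doc1, doc3) accumulate score from each and rise to the top, which is exactly the behavior hybrid search wants.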

Python
from langchain.retrievers import EnsembleRetriever, BM25Retriever
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings

# Sparse retriever
bm25 = BM25Retriever.from_documents(docs, k=10)

# Dense retriever
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(docs, embeddings)
dense = vectorstore.as_retriever(search_kwargs={"k": 10})

# Hybrid: 50/50 weight
hybrid = EnsembleRetriever(
    retrievers=[bm25, dense],
    weights=[0.5, 0.5],
)

Multi-Vector Retrieval

An advanced technique: store multiple embedding vectors per chunk. For example, embed both the chunk content and a generated summary. Different query types will match different representations of the same content, improving recall across diverse question styles.
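A minimal sketch of the bookkeeping this requires, assuming a hypothetical index layout in which every embedding entry carries its parent chunk ID; after nearest-neighbour search, vector hits are collapsed back to unique chunks (all names here are illustrative):

```python
from collections import OrderedDict

# Multi-vector index: each entry maps one embedding to its parent chunk id.
index = [
    ("vec_content_c1", "c1"),  # embedding of the raw chunk text
    ("vec_summary_c1", "c1"),  # embedding of a generated summary of the same chunk
    ("vec_content_c2", "c2"),
]

def dedupe_to_parents(hits):
    """After nearest-neighbour search, collapse vector hits to unique chunk IDs,
    preserving the order in which parents first appear."""
    seen = OrderedDict()
    for _, parent_id in hits:
        seen.setdefault(parent_id, True)
    return list(seen)

# Suppose the search returned both vectors of c1 plus one vector of c2:
parents = dedupe_to_parents([
    ("vec_summary_c1", "c1"),
    ("vec_content_c1", "c1"),
    ("vec_content_c2", "c2"),
])
```

The key design point: recall improves because either representation can match the query, but the de-duplication step ensures the LLM still sees each chunk only once.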

Building a Simple RAG Pipeline

Here's a complete, minimal RAG pipeline using LangChain and FAISS:

Python
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# 1. Load documents
loader = DirectoryLoader("./docs", glob="**/*.md")
documents = loader.load()

# 2. Chunk documents
splitter = RecursiveCharacterTextSplitter(
    chunk_size=512,
    chunk_overlap=50,
    separators=["\n\n", "\n", ". ", " "],
)
chunks = splitter.split_documents(documents)

# 3. Create vector store
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(chunks, embeddings)

# 4. Build retriever
retriever = vectorstore.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 4},
)

# 5. Create RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)

# 6. Query
result = qa_chain.invoke("What is KV Cache in LLM inference?")
print(result["result"])
for doc in result["source_documents"]:
    print(f"  Source: {doc.metadata['source']}")

Production Considerations

A RAG demo takes a day. A production RAG system takes weeks. Here's what separates them.

Latency vs. Accuracy

Every layer you add (retrieval → re-ranking → generation) adds latency. Balance this with:

  • Multi-stage retrieval — First pass: cheap, broad filter (BM25 over 1000 docs → top 100). Second pass: expensive, precise re-ranker (top 100 → top 5).
  • Async pipeline — Start retrieval and LLM warm-up in parallel.
  • Precomputed embeddings — Cache embeddings for frequent queries.
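The multi-stage idea can be sketched as a generic cascade. The two scorers below are toy stand-ins for BM25 and a cross-encoder re-ranker; only the structure (cheap broad filter, then expensive re-rank of the survivors) is the point:

```python
def multi_stage(query, docs, cheap_score, expensive_score, k1=100, k2=5):
    """Two-pass retrieval: a cheap filter narrows the pool,
    an expensive re-ranker orders only the survivors."""
    pool = sorted(docs, key=lambda d: cheap_score(query, d), reverse=True)[:k1]
    return sorted(pool, key=lambda d: expensive_score(query, d), reverse=True)[:k2]

# Toy scorers (stand-ins for BM25 and a cross-encoder re-ranker)
cheap = lambda q, d: len(set(q.split()) & set(d.split()))
expensive = lambda q, d: cheap(q, d) / (1 + abs(len(d.split()) - len(q.split())))

docs = ["reset api key in settings", "api rate limits explained", "billing faq overview"]
top = multi_stage("reset api key", docs, cheap, expensive, k1=2, k2=1)
```

The cost saving comes from never running the expensive scorer on documents the cheap pass has already ruled out.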

Hallucination Mitigation

Even with retrieval, hallucination isn't eliminated — it's reduced. To push it further:

  • Confidence scoring — Measure retrieval similarity scores. If the best match is below a threshold (e.g., cosine similarity < 0.7), return "I don't have enough information" instead of a generated answer.
  • Source attribution — Always return source documents alongside the answer. Let users verify.
  • Grounding validation — Use a second LLM call to verify that the generated answer is actually supported by the retrieved context.
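The confidence-scoring pattern boils down to a threshold check before generation. A sketch, assuming hits arrive as (chunk, cosine similarity) pairs sorted best-first and `generate` wraps your LLM call:

```python
REFUSAL = "I don't have enough information to answer that."

def answer_or_refuse(hits, generate, threshold=0.7):
    """hits: list of (chunk, cosine_similarity) pairs, sorted best-first.
    Refuse rather than guess when the best match is weak or absent."""
    if not hits or hits[0][1] < threshold:
        return REFUSAL
    context = "\n".join(chunk for chunk, _ in hits)
    return generate(context)

# Stub generator; the example chunks and scores are illustrative
gen = lambda ctx: "Grounded answer using: " + ctx.split("\n")[0]
strong = [("Reset keys under Settings.", 0.84), ("Billing docs.", 0.41)]
weak = [("Unrelated paragraph.", 0.31)]
```

Only queries whose best retrieval score clears the threshold ever reach the LLM; everything else gets an honest refusal.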

Monitoring

In production, track:

| Metric | What It Tells You |
|---|---|
| Retrieval recall | Are relevant docs being found? |
| Answer faithfulness | Is the answer grounded in sources? |
| Latency (P50/P99) | Is the system fast enough? |
| User feedback | Thumbs up/down on answers |

Scaling

  • Index sharding — Split your vector index across multiple nodes for large document collections.
  • Tiered storage — Keep frequently accessed embeddings in memory, archive old ones to disk.
  • Incremental indexing — Don't re-embed everything when you add new documents; append to the existing index.

2025 Innovations

HetaRAG

Combines vector database, knowledge graph, full-text index, and relational database into a single retrieval layer. Instead of relying solely on vector similarity, it can traverse relationships and apply structured filters — breaking through the limitations of pure vector search.

Federated RAG

For sensitive domains like healthcare and finance: run RAG across distributed data sources without sharing raw data. Each data owner runs local retrieval; only the retrieved chunks (or their summaries) are sent to the generator. Privacy-preserving RAG at scale.

Adaptive Retrieval

Dynamically adjusts retrieval depth based on query complexity. Simple factual questions get a fast, shallow search. Complex analytical questions trigger deeper retrieval with re-ranking and multi-hop reasoning. This optimizes both latency and cost.
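A toy version of such a routing decision, using a crude length-and-keyword heuristic (real systems typically train a query classifier instead; the keyword list and thresholds here are illustrative):

```python
def retrieval_plan(query):
    """Crude heuristic: longer or analytical questions get deeper retrieval."""
    words = len(query.split())
    analytical = any(w in query.lower() for w in ("compare", "why", "trade-off", "impact"))
    if analytical or words > 12:
        # Deep path: wide retrieval, re-ranking, multi-hop reasoning
        return {"k": 20, "rerank": True, "multi_hop": True}
    # Shallow path: fast top-k lookup, no re-ranker
    return {"k": 4, "rerank": False, "multi_hop": False}

simple = retrieval_plan("What is a KV cache?")
analytical_q = retrieval_plan("Compare the impact of chunk size on recall and latency")
```

The simple factual question takes the cheap path; the analytical one triggers the full pipeline.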

Conclusion

RAG is the most practical way to make LLMs reliable in production. The architecture is conceptually simple — retrieve, then generate — but the engineering details matter enormously. Start with the basics: clean data, good chunking, hybrid retrieval. Then iterate based on real metrics.




Oğuzhan Berke Özdil

I have been connected to computers since childhood. On this website, I share what I learn and experience while trying to build a strong foundation in software. I completed my BSc in Computer Science at AGH University of Krakow and I am currently pursuing an MSc in Computer Science with a focus on AI & Data Analysis at the same university.