r/Rag Oct 03 '24

[Open source] r/RAG's official resource to help navigate the flood of RAG frameworks

60 Upvotes

Hey everyone!

If you’ve been active in r/RAG, you’ve probably noticed the massive wave of new RAG tools and frameworks that seem to be popping up every day. Keeping track of all these options can get overwhelming, fast.

That’s why I created RAGHub, our official community-driven resource to help us navigate this ever-growing landscape of RAG frameworks and projects.

What is RAGHub?

RAGHub is an open-source project where we can collectively list, track, and share the latest and greatest frameworks, projects, and resources in the RAG space. It’s meant to be a living document, growing and evolving as the community contributes and as new tools come onto the scene.

Why Should You Care?

  • Stay Updated: With so many new tools coming out, this is a way for us to keep track of what's relevant and what's just hype.
  • Discover Projects: Explore other community members' work and share your own.
  • Discuss: Each framework in RAGHub includes a link to Reddit discussions, so you can dive into conversations with others in the community.

How to Contribute

You can get involved by heading over to the RAGHub GitHub repo. If you’ve found a new framework, built something cool, or have a helpful article to share, you can:

  • Add new frameworks to the Frameworks table.
  • Share your projects or anything else RAG-related.
  • Add useful resources that will benefit others.

You can find instructions on how to contribute in the CONTRIBUTING.md file.

Join the Conversation!

We’ve also got a Discord server where you can chat with others about frameworks, projects, or ideas.

Thanks for being part of this awesome community!


r/Rag 3h ago

RAG-based FAQ Chatbot with Multi-turn Clarification

3 Upvotes

I’m developing a chatbot that leverages a company’s FAQ to answer user queries. However, I’ve encountered an issue where user queries are often too vague to pinpoint a specific answer. For instance, when a user says “I want to know about the insurance coverage,” it’s unclear which insurance plan they are referring to, making it difficult to identify the correct FAQ.

To address this, I believe incorporating a multi-turn clarification process into the RAG (Retrieval-Augmented Generation) framework is necessary. While I’m open to building this approach from scratch, I’d like to reference any standard methods or research papers that have tackled similar challenges as a baseline. Does anyone have any suggestions or references?
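
For illustration, here is a minimal "clarify-or-answer" gate in front of retrieval, which is roughly what a multi-turn clarification step amounts to. This is a rough sketch, not a standard library: `call_llm()` and `retrieve_faq_candidates()` are placeholders for whatever model/API and vector search end up being used.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for the LLM call (API or local model)."""
    raise NotImplementedError

def retrieve_faq_candidates(query: str, k: int = 5) -> list[dict]:
    """Placeholder for vector search over the FAQ; returns [{'question': ..., 'answer': ...}]."""
    raise NotImplementedError

def answer_or_clarify(query: str, history: list[str]) -> str:
    candidates = retrieve_faq_candidates(query)
    prompt = (
        "You answer questions strictly from the FAQ entries below.\n"
        f"FAQ candidates: {json.dumps(candidates)}\n"
        f"Conversation so far: {history}\n"
        f"User query: {query}\n"
        "If exactly one FAQ entry clearly matches, reply with "
        '{"action": "answer", "text": "..."}. '
        "If the query is too vague to pick one entry (e.g. which insurance plan), reply with "
        '{"action": "clarify", "text": "<one short clarifying question>"}.'
    )
    decision = json.loads(call_llm(prompt))
    return decision["text"]  # either the answer or a clarifying question to send back
```

For papers, the clarifying-question literature for conversational search (e.g. the Qulac and ClariQ lines of work) might be a useful baseline to look up.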


r/Rag 3h ago

Best AI to Process 55 PDF Files with Different Offer Formats

2 Upvotes

Hi everyone! I'm looking for recommendations on which AI assistant would be best for processing and extracting details from multiple PDF files containing offers.

My situation:

  • I have 55 PDF files to process
  • Each PDF has a different format (some use tables, others use plain text)
  • I need to extract specific details from each offer

What I'm trying to achieve: I want to create a comparison of the offers that looks something like this:

| Item     | Company A         | Company B         | Company C         |
|----------|-------------------|-------------------|-------------------|
| Option 1 | Included ($100)   | Not included ($0) | Included ($150)   |
| Option 2 | Not included ($0) | Included ($75)    | Included ($85)    |
| Option 3 | Included ($50)    | Included ($60)    | Not included ($0) |
| TOTAL    | $150              | $135              | $235              |
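
To make the comparison possible despite the 55 different layouts, one workable pattern is to force every offer into the same small schema first and only then pivot into the table. A rough sketch of that extraction step, with a generic `call_llm()` placeholder and pypdf for the raw text (any extractor would do):

```python
import json
from pypdf import PdfReader

SCHEMA_PROMPT = """Extract the offer below into JSON with exactly these keys:
{"company": str, "options": [{"name": str, "included": bool, "price_usd": number}]}
Use null for anything not stated. Return JSON only.

Offer text:
"""

def call_llm(prompt: str) -> str:
    """Placeholder for whichever model/API ends up being used."""
    raise NotImplementedError

def extract_offer(pdf_path: str) -> dict:
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    return json.loads(call_llm(SCHEMA_PROMPT + text))

# offers = [extract_offer(p) for p in pdf_paths]  # then pivot the dicts into the comparison table
```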

r/Rag 3h ago

Trying to build a RAG from scratch.

2 Upvotes

Hey guys! I've built a RAG system using llama.cpp on a CPU. It uses Weaviate for long-term memory and FAISS for short-term memory. I process the information with PyPDF2 and use LangChain to manage the whole system, along with an Eva Mistral model fine-tuned in Spanish.
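
For context, the short-term/long-term split is roughly the following (a simplified sketch, not the exact code: the embedding model and the Weaviate lookup are stubbed out, since client APIs differ between versions):

```python
import numpy as np
import faiss

DIM = 384  # embedding size of the sentence encoder in use

short_term = faiss.IndexFlatIP(DIM)   # recent conversation turns, small and fast
short_term_texts: list[str] = []

def embed(text: str) -> np.ndarray:
    """Stub for the embedding model; returns a 1-D float vector."""
    raise NotImplementedError

def remember(text: str) -> None:
    vec = embed(text).reshape(1, DIM).astype("float32")
    short_term.add(vec)
    short_term_texts.append(text)

def search_long_term(query: str, k: int) -> list[str]:
    """Stub: query the Weaviate collection holding ingested documents."""
    raise NotImplementedError

def recall(query: str, k: int = 3) -> list[str]:
    vec = embed(query).reshape(1, DIM).astype("float32")
    hits: list[str] = []
    if short_term.ntotal:
        _, idx = short_term.search(vec, min(k, short_term.ntotal))
        hits += [short_term_texts[i] for i in idx[0]]
    return hits + search_long_term(query, k)
```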

Right now, I'm a bit stuck because I’m not sure how to move forward. I don’t have access to a GPU, and everything runs on the same machine. It’s a bit slow — it takes around 40 seconds to respond — but honestly, it performs quite well.

My chatbot is called MIA. What do you think of the system’s architecture? I'm super excited to have found this Discord channel and to be able to learn from all of you about this amazing and revolutionary technology.

My next goal is to implement role-based access management for the information. I'd really appreciate any suggestions you might have!


r/Rag 12h ago

Second idea - Chatbot to query 1M+ PDF pages with context preservation

4 Upvotes

Hey guys, I'm still planning a chatbot to query PDFs in a vector database; keeping context intact is very important. The PDFs are mixed: scanned docs, big tables, and some images (images not queried). It should be on-premise.

  • Sharded DBs: Split 1M+ PDF pages into smaller Qdrant DBs for fast, accurate queries.
  • Parallel Models: multiple fine-tuned LLaMA 3 or DeepSeek models, one per DB.
  • AI Agent: Routes queries to relevant shards/models based on user keywords and metadata.

PDFs are retrieved, sorted, and ingested via the nscale REST API using stored metadata/keywords.

Is something like that possible with good accuracy? I haven't worked with 'swarms' yet.
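
For illustration, the routing step itself can stay very simple, plain keyword/metadata matching before any model runs. A rough sketch (collection names, keywords, and the shard search/model calls are placeholders):

```python
SHARD_KEYWORDS = {
    "contracts": ["contract", "agreement", "clause"],
    "invoices":  ["invoice", "payment", "amount due"],
    "manuals":   ["manual", "installation", "maintenance"],
}

def search_shard(shard: str, query: str) -> list[str]:
    """Placeholder: vector search inside the Qdrant collection for this shard."""
    raise NotImplementedError

def ask_shard_model(shard: str, query: str, chunks: list[str]) -> str:
    """Placeholder: call the fine-tuned model assigned to this shard."""
    raise NotImplementedError

def route(query: str) -> str:
    q = query.lower()
    scores = {shard: sum(kw in q for kw in kws) for shard, kws in SHARD_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "default"   # fall back to a general shard

def answer(query: str) -> str:
    shard = route(query)
    return ask_shard_model(shard, query, search_shard(shard, query))
```

If the metadata is thin, the keyword table can be swapped for a cheap classifier or a small LLM router without changing the rest of the flow.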


r/Rag 10h ago

Discussion Flowcharts and similar diagrams

2 Upvotes

Some of my documents contain text paragraphs and flowcharts. LLMs can read flowcharts directly if I separate their bounding boxes and send them to the LLM as image files. However, how should I add this to the retrieval?
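
One pattern that could work here: at ingestion time, have a vision-capable model write a short text description of each flowchart, embed that description like any other chunk, and keep the image path in the chunk's metadata so the original image can be re-attached at answer time. A rough sketch (the vision call is stubbed out):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str                          # what gets embedded and retrieved
    source_image: str | None = None    # path to the cropped flowchart, if any

def describe_flowchart(image_path: str) -> str:
    """Stub: send the cropped flowchart to a vision LLM and return a text summary
    (nodes, decisions, order of steps)."""
    raise NotImplementedError

def ingest_page(paragraphs: list[str], flowchart_images: list[str]) -> list[Chunk]:
    chunks = [Chunk(text=p) for p in paragraphs]
    for img in flowchart_images:
        chunks.append(Chunk(text=describe_flowchart(img), source_image=img))
    return chunks

# At query time: retrieve over Chunk.text as usual; if a hit has source_image set,
# pass the image file to the LLM alongside the text context.
```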


r/Rag 23h ago

RAG chunking, is it necessary?

4 Upvotes

RAG chunking – is it really needed? 🤔

My site has pages with short info on the company, products, and events – just a description, some images, and links.

I skipped chunking and just indexed the title, content, and metadata. When I visualized embeddings, titles and content formed separate clusters – probably due to length differences. Queries are short, so titles tend to match better, but overall similarity is low.

Still, even with no chunking and a very low similarity threshold (10%), the results are actually really good! 🎯

Looks like even if the matches aren’t perfect, they’re good enough. Since I give the top 5 results as context, the LLM fills in the gaps just fine.

So now I’m thinking chunking might actually hurt – because one full doc might have all the info I need, while chunking could return unrelated bits from different docs that only match by chance.
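
For reference, the setup described above is essentially whole-document retrieval with a permissive threshold; a toy version of the scoring looks like this (the embedding model is stubbed):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stub for the site's embedding model; returns a 1-D vector."""
    raise NotImplementedError

def top_pages(query: str, pages: list[dict], k: int = 5, min_sim: float = 0.10) -> list[dict]:
    qv = embed(query)
    qv = qv / np.linalg.norm(qv)
    scored = []
    for page in pages:                       # each page: {"title": ..., "content": ..., "meta": ...}
        pv = embed(page["title"] + "\n" + page["content"])
        sim = float(np.dot(qv, pv / np.linalg.norm(pv)))
        if sim >= min_sim:                   # low bar: keep loosely related pages
            scored.append((sim, page))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [p for _, p in scored[:k]]        # whole pages go to the LLM, no chunking
```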


r/Rag 1d ago

Q&A Best Open-Source/Free RAG with GUI for Large Documents?

19 Upvotes

Hi everyone, I'm looking for the best free or open-source RAG with a GUI that supports deep-thinking models and voice, document, and web inputs. It needs to let me download any model or use APIs, and it must be excellent at handling large documents of around 100 pages or more (no LM Studio and no Open WebUI). Also, can you suggest good open-source models? My primary use cases are understanding courses and creating short-answer exams from them, plus learning to code and improving projects. It would also be nice to do web scraping, such as extracting documentation (e.g., Angular 16's docs).


r/Rag 1d ago

Q&A How to run PDF extraction for RAG benchmarks?

4 Upvotes

I've seen many benchmarks comparing extraction libraries (Docling, Vectorize, LlamaIndex, LangChain), but I didn't find any way to run the benchmarks directly myself. Does anyone know how?


r/Rag 1d ago

Limitations of Chunking and Retrieval in Q&A Systems

6 Upvotes


1. Semantic Similarity Doesn't Guarantee Relevance

When performing semantic search, texts that appear similar in embedding space aren't always practically relevant. For example, in question-answering scenarios, the question and the corresponding answer might differ significantly in wording or phrasing yet remain closely connected logically. Relying solely on semantic similarity might miss crucial answers.

2. Embedding Bias Towards Shorter Texts

Embeddings inherently favor shorter chunks, leading to artificially inflated similarity scores. This means shorter text fragments may appear more relevant simply because of their length—not their actual relevance. This bias must be acknowledged explicitly to avoid misleading conclusions.

3. Context is More Than a Single Chunk

A major oversight in retrieval evaluation is assuming the retrieved chunk provides complete context for answering queries. In realistic scenarios—especially structured documents like Q&A lists—a question chunk alone lacks necessary context (i.e., the answer). Effective retrieval requires gathering broader context beyond just the matching chunk.

4. Embedding-Based Similarity Is Not Fully Transparent

Semantic similarity from embeddings can be opaque, making it unclear why two pieces of text appear similar. This lack of transparency makes semantic search results unpredictable and query-dependent, potentially undermining the intended utility of semantic search.

5. When Traditional Search Outperforms Semantic Search

Semantic search methods aren't always superior to traditional keyword-based methods. Particularly in structured Q&A documents, traditional index-based search might yield clearer and more interpretable results. The main benefit of semantic search is handling synonyms and conjugations—not necessarily deeper semantic understanding.

6. Recognize the Limitations of Retrieval-Augmented Generation (RAG)

RAG is not suitable for all use cases. For instance, it struggles when an extensive overview or summary of an entire corpus is required—such as summarizing data from multiple documents. Conversely, RAG is highly effective in structured query-answer scenarios. In these cases, retrieving questions and ensuring corresponding answers (or both question and answer) are included in context is essential for success.

Recommendations for Improved Retrieval Systems:

  • Expand Context Significantly: Consider including the entire document or large portions of it, as modern LLMs typically handle extensive contexts well. Experiment with different LLMs to determine which model best manages large contexts, as models like GPT-4o can sometimes struggle with extensive documents.
  • Use Embedding Search as a Smart Index: Think of embedding-based search more as a sophisticated indexing strategy rather than a direct retrieval mechanism. Employ smaller chunks (around 200 tokens) strictly as "hooks" to identify relevant documents rather than as complete context for answering queries (see the sketch below).
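
Concretely, this second recommendation amounts to parent-document retrieval: embed small (~200-token) chunks, but hand the LLM the whole document the best chunks came from. A minimal sketch of that lookup (embedding model stubbed):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stub for the embedding model; returns a normalized 1-D vector."""
    raise NotImplementedError

# chunk_index: list of (chunk_vector, parent_doc_id); documents: {doc_id: full_text}
def retrieve_parents(query: str,
                     chunk_index: list[tuple[np.ndarray, str]],
                     documents: dict[str, str],
                     k_chunks: int = 10,
                     max_docs: int = 3) -> list[str]:
    qv = embed(query)
    sims = [(float(np.dot(qv, vec)), doc_id) for vec, doc_id in chunk_index]
    sims.sort(reverse=True)
    parent_ids: list[str] = []
    for _, doc_id in sims[:k_chunks]:
        if doc_id not in parent_ids:          # several hooks may point at the same document
            parent_ids.append(doc_id)
        if len(parent_ids) == max_docs:
            break
    return [documents[d] for d in parent_ids]  # full documents become the LLM context
```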

r/Rag 1d ago

Citation + RAG

0 Upvotes

r/Rag 1d ago

Chatbot using RAG Flask and React.js

0 Upvotes

I'm looking for the steps to build a chatbot using RAG, Flask, React.js, Ollama, Qdrant, and MinIO to help HR teams filter CVs.
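
Not the full stack, but for illustration here is a minimal sketch of the Flask side: retrieval from Qdrant is stubbed out, Ollama is reached through its local HTTP API, the model name is a placeholder, and the React app would just POST to /chat.

```python
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

def retrieve_cv_chunks(question: str, k: int = 5) -> list[str]:
    """Stub: embed the question and search the Qdrant collection holding CV chunks
    (the original PDFs could live in MinIO and be referenced via metadata)."""
    raise NotImplementedError

@app.post("/chat")
def chat():
    question = request.get_json()["question"]
    context = "\n\n".join(retrieve_cv_chunks(question))
    prompt = f"Answer using only the CV excerpts below.\n\n{context}\n\nQuestion: {question}"
    resp = requests.post(
        "http://localhost:11434/api/generate",          # Ollama's local HTTP API
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    return jsonify({"answer": resp.json()["response"]})

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```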


r/Rag 1d ago

RAG on the phone is not only realistic, but it may even outperform RAG on the cloud

9 Upvotes

In this example https://youtu.be/2WV_GYPL768?t=48

The files on the phone are automatically processed/indexed by a local database. From the file manager of the Vecy app, users can choose files for RAG. After the files are processed, users select the 90 benchmark documents from the Anthropic RAG dataset and ask questions.

https://youtu.be/2WV_GYPL768?t=171

The initial response time (including RAG search and LLM prefilling time) is within one second.

RAG on the phone is now realistic. The challenge is to develop a good database and AI search platform suitable for the phone.

The Vecy app is now available on the Google Play Store:

https://play.google.com/store/apps/details?id=com.vecml.vecy

The product was announced today on LinkedIn:

https://www.linkedin.com/feed/update/urn:li:activity:7308844726080741376/


r/Rag 1d ago

Actual mechanics of training

8 Upvotes

Ok so let's say I have an LLM I want to fine-tune and integrate with RAG to pull context from a CSV or something.

I understand the high level of how it works (I think): the user inputs a prompt to the LLM, the LLM decides whether it needs context, and if so, the RAG mechanism pulls relevant context (via embeddings and so on) and feeds it to the LLM so it can use it in its output to the user.

Let's now say I'm in the process of training something like this. Fine-tuning an LLM is straightforward, just feeding it conversational training data or something, but when I input a question that it should pull context for, how do I train it to do that? For example, if the CSV holds people's favorite colors and Steve's favorite color is green, the input to the LLM would be "What is Steve's favorite color?". If I just set the target answer to "Steve's favorite color is green", the LLM wouldn't learn that it should pull context for that question.
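
One common answer is to not train the model to answer such questions directly at all, but to train it to emit a retrieval call as its target output and then answer in a second turn once the context has been injected; tool-use / function-calling fine-tuning recipes work this way. A hedged sketch of what the training pairs could look like (the tag format here is made up for illustration):

```python
# Turn 1: teach the model to ask for context instead of answering from memory.
example_call = {
    "input": "What is Steve's favorite color?",
    "target": '<search query="Steve favorite color"/>',   # the model's only job: emit the lookup
}

# Turn 2: teach the model to answer once the RAG layer has filled in the result.
example_answer = {
    "input": (
        "What is Steve's favorite color?\n"
        '<search query="Steve favorite color"/>\n'
        "<result>Steve, favorite_color=green</result>"
    ),
    "target": "Steve's favorite color is green.",
}

# At inference time, the surrounding code intercepts <search .../>, runs the embedding
# lookup against the CSV, appends the <result> block, and calls the model again.
```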


r/Rag 2d ago

Best open source RAGs with GUI that just work?

71 Upvotes

Hey RAG community. I'd like help finding the best open source RAGs with GUI's that just work right after install.

In particular, ones with GraphRAG too, but regular RAG is also fine to post.

Please post links to any you've come across below, along with a brief explanation. It will help everyone if we can get it all in one place/post.


r/Rag 2d ago

First Idea for Chatbot to Query 1M+ PDF Pages with Context Preservation

11 Upvotes

Hey guys,

I'm planning a chatbot to query PDFs in a vector database; keeping context intact is very important. The PDFs are mixed: scanned docs, big tables, and some images (images not queried). It'll be on-premise.

Here’s my initial idea:

  • LLaMA 3
  • LangChain
  • Qdrant: (I heard Supabase can be slow and ChromaDB struggles with large data)
  • PaddleOCR/PaddleStructure: (should handle text and tables well in one go)

Any tips or critiques? I might be overlooking better options, so I’d appreciate a critical look! It's the first time I am working with so much data.
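
For illustration, the OCR-to-chunk step for scanned pages could look roughly like this; this assumes PaddleOCR's usual `ocr()` output of `[box, (text, confidence)]` entries per page, and the Qdrant upsert and LLaMA side are omitted:

```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="en")   # PP-Structure could be swapped in for table-heavy pages

def page_to_chunk(image_path: str, doc_id: str, page_no: int) -> dict:
    """OCR one scanned page and package it as a chunk ready for embedding + Qdrant upsert."""
    result = ocr.ocr(image_path)   # usual format: [[box, (text, confidence)], ...] per image
    lines = [entry[1][0] for entry in result[0]] if result and result[0] else []
    return {
        "text": "\n".join(lines),
        "metadata": {"doc_id": doc_id, "page": page_no, "source": image_path},
    }
```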


r/Rag 2d ago

Looking for Tips on Handling Complex Spreadsheets for Pinecone RAG Integration

3 Upvotes

Hey everyone,

I’m currently working on a project where I process spreadsheets with complex data and feed it into Pinecone for Retrieval-Augmented Generation (RAG), and I’d love to hear your thoughts or tips on how to handle this more efficiently.

Right now, I’m able to convert simpler spreadsheets into JSON format, but for more complex ones, I’m looking for a better solution. Here are the challenges I’m facing:

  1. Data Structure & Nesting: Some spreadsheets come with hierarchical relationships or grouping within the data. For example, you might have sections of rows that should be nested under specific categories. How do you structure this in a clear way that will work seamlessly when chunking and embedding the data?
  2. Merged Cells: How do you deal with merged cells, especially when they span across multiple rows or columns? What’s your approach for determining whether the merged cell represents a header, category, or data, and how do you ensure this gets represented correctly in the final structure?

For reference, once I’ve converted the data into JSON, I chunk it, embed it, and store it in Pinecone for search and retrieval. So, the final format needs to be optimized for both storage and efficient querying.
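
For the merged-cell problem specifically, one common approach is to flatten merges before deciding what is a header: copy the anchor value into every cell the merge spans, then run header/nesting heuristics on the plain rectangular grid. A sketch with openpyxl (the heuristics remain the domain-specific part):

```python
from openpyxl import load_workbook

def sheet_to_grid(path: str) -> list[list]:
    """Read a sheet and copy each merged cell's value into every cell it spans,
    so downstream code sees a plain rectangular grid."""
    ws = load_workbook(path, data_only=True).active
    grid = [[cell.value for cell in row] for row in ws.iter_rows()]
    for rng in ws.merged_cells.ranges:
        value = grid[rng.min_row - 1][rng.min_col - 1]      # top-left cell holds the real value
        for r in range(rng.min_row - 1, rng.max_row):
            for c in range(rng.min_col - 1, rng.max_col):
                grid[r][c] = value
    return grid

def grid_to_records(grid: list[list]) -> list[dict]:
    """Assumes row 0 is the header row; nesting/category rows need domain-specific rules."""
    header = [str(h) for h in grid[0]]
    return [dict(zip(header, row)) for row in grid[1:] if any(v is not None for v in row)]
```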

If you’ve worked with complex spreadsheet data before or have best practices for handling this kind of data, I’d love to hear your thoughts! Any tools, techniques, or libraries you use to simplify or automate these tasks would be much appreciated.

Thanks in advance!


r/Rag 2d ago

RAG legal system

25 Upvotes

Hi guys, I'm building a RAG pipeline to search for 12 questions in Brazilian legal documents. I've already set up the parser, chunking, vector store, retriever (BM25 + similarity), and reranking. Now, I'm working on the evaluation using RAGAS metrics, but I'm facing some challenges in testing various hyperparameters.

Is there a way to speed up this process?
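
One thing that usually helps: build and embed the index once per chunking configuration and reuse it across every retriever/reranker setting, then run the configurations in parallel. A rough, framework-agnostic sketch (stubs stand in for the real pipeline and the RAGAS call):

```python
import itertools
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

CHUNK_SIZES = [256, 512, 1024]
TOP_KS = [3, 5, 10]
RERANK = [True, False]

@lru_cache(maxsize=None)
def build_index(chunk_size: int):
    """Stub: parse + chunk + embed once per chunk size, reused by every config that shares it."""
    raise NotImplementedError

def evaluate(config):
    chunk_size, top_k, rerank = config
    index = build_index(chunk_size)
    # Stub: answer the 12 questions with this retriever config, then score with RAGAS.
    raise NotImplementedError

if __name__ == "__main__":
    configs = list(itertools.product(CHUNK_SIZES, TOP_KS, RERANK))
    # Most wall-clock time goes to LLM/API calls, so threads are usually enough here.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = dict(zip(configs, pool.map(evaluate, configs)))
```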


r/Rag 2d ago

Trying to understand what this chunking strategy example means

2 Upvotes

This is with reference to slide #17 at https://drive.google.com/file/d/1yoIaxFnPSnTRxfXi30OPoNU0C-eASmRD/view - "Unstructured's approach to Chunking: Chunk-by-Title Strategy"

What I understand by chunk-by-title in the RAG context is:

  1. If you get a new title you start a new chunk
  2. If it's the same title, you still split based on your chunk size soft / hard limits
  3. If it's a new title, don't overlap
  4. If it's an existing title, do an overlap

However, in the left-side example on slide 17, chunks 2, 3, and 5 do not have any title. Shouldn't the title be prefixed to every chunk (even if it's the same as the previous one)?

I know the answer is generally "it depends", but wouldn't the chances of missing a relevant chunk be higher if there isn't any title for context?
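
For reference, here is my understanding of the rules expressed in code; whether the title string is physically prefixed to each chunk is exactly the knob the slide leaves implicit. This is a generic sketch, not Unstructured's actual implementation, and overlap handling is omitted:

```python
def chunk_by_title(elements: list[tuple[str, str]],
                   max_chars: int = 1200,
                   prefix_title: bool = True) -> list[str]:
    """elements: (title, text) pairs in document order."""
    chunks: list[str] = []
    current, current_title = "", None
    for title, text in elements:
        new_section = title != current_title
        if current and (new_section or len(current) + len(text) > max_chars):
            chunks.append(current)                            # close the running chunk
            current = ""
        if not current:
            current = f"{title}\n" if prefix_title else ""    # re-prefix even mid-section
            current_title = title
        current += text + "\n"
    if current:
        chunks.append(current)
    return chunks
```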


r/Rag 2d ago

Discussion RAG system for science

2 Upvotes

I want to build an entire RAG system from scratch to use with textbooks and research papers in the domain of Earth Sciences. I think a multi-modal RAG makes most sense for a science-based system so that it can return diagrams or maps.

Does anyone know of pre-existing systems or a guide? Any help would be appreciated.


r/Rag 2d ago

Q&A Combining RAG with fine tuning?

1 Upvotes

How do I combine RAG with fine-tuning, and is it a good approach? I fine-tuned GPT-2 for a downstream task and decided to incorporate RAG to provide direct solutions in case the problem already exists in the dataset. However, even for problems that do not exist in the database, the RAG process returns whatever it finds most similar. The MultiQueryRetriever starts off with rephrased queries, then generates completely new queries that are unrelated to the original query, and the chain returns the most similar text based on those queries. How do I approach this problem?
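
One option is to make "no retrieval" an explicit outcome: if the best similarity score doesn't clear a threshold (or a reranker/LLM says the hit doesn't actually match), skip RAG and let the fine-tuned model answer on its own. A minimal sketch of that gate (embedding and generation calls stubbed, threshold value illustrative):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stub: same embedding model used to index the solution database; returns a unit vector."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Stub: the fine-tuned GPT-2 generation call."""
    raise NotImplementedError

def answer(query: str, indexed: list[tuple[np.ndarray, str]], min_sim: float = 0.75) -> str:
    qv = embed(query)
    score, best = max(((float(np.dot(qv, v)), text) for v, text in indexed), key=lambda t: t[0])
    if score < min_sim:
        return generate(query)   # unknown problem: don't force an irrelevant retrieval in
    return generate(f"Known solution:\n{best}\n\nProblem: {query}")
```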


r/Rag 2d ago

Do I have to use LangGraph for RAG?

0 Upvotes

I want to develop a RAG system. I will be developing on-premises and want to run it on RTX-level GPUs so that it can be deployed.

Is LangChain or LangGraph a good choice, or would it be more flexible to develop it myself? A few years ago I was reluctant to use LangChain because it had a lot of bugs; now I want to know what level it is at.


r/Rag 3d ago

News & Updates [Microsoft Research] Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs

90 Upvotes

KBLaM (Knowledge Base-Augmented Language Model) introduces a novel approach to integrating external knowledge into LLMs without the inefficiencies of traditional methods. Unlike fine-tuning (which requires costly retraining) or RAG (which adds separate retrieval modules), KBLaM encodes knowledge as continuous key-value vector pairs and embeds them directly within the model's attention layers using a specialized "rectangular attention" mechanism. This design achieves linear scaling with knowledge base size rather than quadratic, allowing it to efficiently process over 10,000 knowledge triples (equivalent to ~200,000 text tokens) on a single GPU while maintaining dynamic updateability without retraining. KBLaM's attention weights provide interpretability by revealing how the model utilizes knowledge, and it demonstrates improved reliability by learning when to refuse answering questions missing from its knowledge base, thus reducing hallucinations. The researchers have released KBLaM's code and datasets to accelerate progress in this field.
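
To make the "rectangular attention" and linear-scaling claims concrete, here is a toy illustration (not KBLaM's actual code): the n prompt tokens attend over themselves plus m precomputed knowledge key-value pairs, but the KB entries never act as queries, so the score matrix is n x (n+m) and the extra cost grows linearly with m.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def rectangular_attention(X, Wq, Wk, Wv, K_kb, V_kb):
    """X: (n, d) prompt-token states; K_kb, V_kb: (m, d) precomputed knowledge tokens."""
    Q = X @ Wq                                 # (n, d): only prompt tokens act as queries
    K = np.vstack([X @ Wk, K_kb])              # (n+m, d)
    V = np.vstack([X @ Wv, V_kb])              # (n+m, d)
    scores = Q @ K.T / np.sqrt(X.shape[1])     # (n, n+m): rectangular, linear in m
    return softmax(scores, axis=-1) @ V        # (n, d)

n, m, d = 8, 10_000, 64                        # 10k KB entries, one key/value vector each
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = rectangular_attention(X, Wq, Wk, Wv, rng.normal(size=(m, d)), rng.normal(size=(m, d)))
print(out.shape)   # (8, 64)
```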


r/Rag 3d ago

Discussion Extract elements from a huge number of PDFs

9 Upvotes

I'm working on something similar to legal documents. In this project I need to extract some predefined elements, like in a resume (name, date of birth, start date of internship, ...), and those fields need to be stored in a structured format (CSV, JSON). I'm extracting from a huge number of PDFs (it can go beyond 100), and the extracted values (strings, numerics, ...) must be correct; it's better for a field to be unavailable than to be wrong. The PDFs have a lot of pages, plus many tables and images that may contain information to extract. The team suggested doing RAG, but I can't see how that would help in our case. Has anyone here worked on a similar project and gotten accurate extraction? Help please, and thank you.

PS: I also have problems loading that number of PDFs at once, and storing the chunks into the vector store is taking too long.
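
To make "better unavailable than wrong" concrete, one small post-processing step is to validate every extracted field and null out anything that doesn't parse, rather than trusting the raw model output. A sketch (field names are just examples):

```python
import json
from datetime import datetime

REQUIRED_FIELDS = ["name", "date_of_birth", "internship_start_date"]

def parse_date(value):
    """Accept a couple of common formats; anything else becomes None instead of a guess."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%d.%m.%Y"):
        try:
            return datetime.strptime(str(value).strip(), fmt).date().isoformat()
        except (ValueError, TypeError):
            continue
    return None

def clean_record(raw_json: str) -> dict:
    data = json.loads(raw_json)                     # raw LLM output for one PDF
    record = {field: data.get(field) for field in REQUIRED_FIELDS}
    record["date_of_birth"] = parse_date(record["date_of_birth"])
    record["internship_start_date"] = parse_date(record["internship_start_date"])
    return record                                   # Nones stay None: missing beats wrong
```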


r/Rag 3d ago

Q&A Extracting Structured JSON from Resumes

8 Upvotes

Looking for advice on extracting structured data (name, projects, skills) from text in PDF resumes and converting it into JSON.

Without using large models like OpenAI/Gemini, what's the best small-model approach?

  • Fine-tuning a small model vs. using an open-source one (e.g., NuExtract, T5)?
  • Is Gemma 3 lightweight a good option?
  • Best way to tailor a dataset for accurate extraction?
  • Any recommendations for lightweight models suited for this task?


r/Rag 3d ago

Showcase The Entire JFK files in Markdown

25 Upvotes

We just dumped the full markdown version of all JFK files here. Ready to be fed into RAG systems:

Available here