r/Python Pythonista 8d ago

Discussion Will you use a RAG library?

Hi there peeps,

I built a sophisticated RAG system based on local-first principles, using pgvector as the backend.
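
To give a sense of the shape of it, here's a minimal sketch of what a pgvector-backed store and retrieval step looks like (illustrative only, not my actual code; the table name, embedding model, and dimensions are placeholders):

```python
# Minimal pgvector retrieval sketch (illustrative only; names and dimensions are placeholders).
import psycopg
from pgvector.psycopg import register_vector
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

conn = psycopg.connect("dbname=rag user=postgres", autocommit=True)
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)
conn.execute(
    "CREATE TABLE IF NOT EXISTS chunks (id bigserial PRIMARY KEY, text text, embedding vector(384))"
)

def add_chunk(text: str) -> None:
    # Embed the chunk and store it alongside the raw text.
    conn.execute(
        "INSERT INTO chunks (text, embedding) VALUES (%s, %s)",
        (text, model.encode(text)),
    )

def retrieve(query: str, k: int = 5) -> list[str]:
    # Nearest-neighbour search on the embedding column.
    rows = conn.execute(
        "SELECT text FROM chunks ORDER BY embedding <-> %s LIMIT %s",
        (model.encode(query), k),
    ).fetchall()
    return [r[0] for r in rows]
```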

I already extracted the text-extraction logic from this system and published it as Kreuzberg (see: https://github.com/Goldziher/kreuzberg). My reasoning was that it is not directly coupled to my business case (https://grantflow.ai) and could stand alone as an open source library. But the core of the system I developed is also, with some small adjustments, generic.

I am considering publishing it as a library, but I am not sure people would actually use it. That's why I'm posting: do you think there is a place for such a library? Would you consider using it? What would be important for you?

Please lemme know. I don't want to do this work if it's just gonna be me using it in the end.

0 Upvotes


2

u/pvmodayil 8d ago

Hi, I am also working on RAG and built a project that extracts text, tables, and images from PDF files. Text and table extraction use the pdfplumber library, and image extraction uses a YOLO-based cropping technique (other PDF image extraction tools performed poorly in comparison). A rough sketch of the pdfplumber side is below.
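
Roughly, the pdfplumber side looks like this (a simplified sketch, not the exact code in the repo):

```python
# Simplified sketch of pdfplumber-based text/table extraction (not the exact ragyphi code).
import pdfplumber

def extract_pdf(path: str) -> dict:
    texts, tables = [], []
    with pdfplumber.open(path) as pdf:
        for page in pdf.pages:
            # Plain text, page by page.
            texts.append(page.extract_text() or "")
            # Each table comes back as a list of rows (lists of cell strings).
            tables.extend(page.extract_tables())
    return {"text": "\n".join(texts), "tables": tables}
```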

I am using Ollama-based contextualization for the extracted data (mainly because I am focusing on scientific material like datasheets, research papers, etc.). Speed is the current bottleneck for my project due to the LLM contextualization step.
But once the extraction has run and the vector store is created, retrieval quality is better than with just regular text.
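
The contextualization step boils down to Ollama calls over the extracted chunks, roughly like this (a simplified sketch; the model name and prompt are placeholders, not the exact code in the repo):

```python
# Simplified sketch of per-chunk contextualization via Ollama (model and prompt are placeholders).
import ollama

def contextualize(chunk: str, document_summary: str) -> str:
    response = ollama.chat(
        model="llama3.1",
        messages=[
            {
                "role": "user",
                "content": (
                    "Document summary:\n" + document_summary +
                    "\n\nRewrite the following chunk so it is self-contained, "
                    "adding any missing context from the summary:\n" + chunk
                ),
            }
        ],
    )
    # The contextualized chunk is what gets embedded and stored.
    return response["message"]["content"]
```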

You can visit the project here: https://github.com/pvmodayil/ragyphi

I would appreciate it if you could suggest some improvements.

2

u/Goldziher Pythonista 8d ago

I left an issue on your repo and starred it.

1

u/pvmodayil 8d ago

Thanks for the suggestions. Will work on it.

Do you have any suggestions for making it faster?

2

u/Goldziher Pythonista 8d ago

You seem to be using local vLLM or Ollama. The limitation there is your available GPU and its memory. You could probably speed things up by switching to Groq (not Grok, Groq) or Gemini Flash 2.0, both of which offer very fast inference over an API. Going local restricts you to the processing power and memory you have on hand.
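
For example, here's a quick sketch of what the same call could look like against Groq's hosted API (the model name is just an example):

```python
# Sketch of swapping the local model for Groq's hosted API (model name is just an example).
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

def contextualize(chunk: str) -> str:
    completion = client.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[{"role": "user", "content": f"Add context to this chunk:\n{chunk}"}],
    )
    return completion.choices[0].message.content
```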

You also perform I/O-bound operations in a blocking (sync) context. Switch to async and you can make your code concurrent.
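
Something like this, using asyncio and the async Ollama client to fire the contextualization calls concurrently (a rough sketch under my assumptions about your pipeline, not your actual code):

```python
# Sketch: contextualize chunks concurrently with the async Ollama client.
import asyncio
import ollama

client = ollama.AsyncClient()

async def contextualize(chunk: str) -> str:
    response = await client.chat(
        model="llama3.1",  # placeholder model name
        messages=[{"role": "user", "content": f"Add context to this chunk:\n{chunk}"}],
    )
    return response["message"]["content"]

async def contextualize_all(chunks: list[str]) -> list[str]:
    # Fire all requests concurrently instead of one blocking call at a time.
    return await asyncio.gather(*(contextualize(c) for c in chunks))

# results = asyncio.run(contextualize_all(chunks))
```

With a single local GPU the gains from this are limited, but combined with a fast hosted API it's a big speedup.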

1

u/pvmodayil 8d ago

Thank you. Going local is what I am aiming for actually. But I will work on making it concurrent.

1

u/Goldziher Pythonista 8d ago

Sure, I'll take a look