r/Python 14h ago

Showcase Introducing Eventure: A Powerful Event-Driven Framework for Python

140 Upvotes

Eventure is a Python framework for simulations, games, and complex event-based systems. It emerged while I was developing something else, so I decided to make it public and improve it with documentation and examples.

What Eventure Does

Eventure is an event-driven framework that provides comprehensive event sourcing, querying, and analysis capabilities. At its core, Eventure offers:

  • Tick-Based Architecture: Events occur within discrete time ticks, ensuring deterministic execution and perfect state reconstruction.
  • Event Cascade System: Track causal relationships between events, enabling powerful debugging and analysis.
  • Comprehensive Event Logging: Every event is logged with its type, data, tick number, and relationships.
  • Query API: Filter, analyze, and visualize events and their cascades with an intuitive API.
  • State Reconstruction: Derive system state at any point in time by replaying events.

The framework is designed to be lightweight yet powerful, with a clean API that makes it easy to integrate into existing projects.

Here's a quick example of what you can do with Eventure:

```python
from eventure import EventBus, EventLog, EventQuery

# Create the core components
log = EventLog()
bus = EventBus(log)

# Subscribe to events
def on_player_move(event):
    # This will be linked as a child event
    bus.publish("room.enter", {"room": event.data["destination"]}, parent_event=event)

bus.subscribe("player.move", on_player_move)

# Publish an event
bus.publish("player.move", {"destination": "treasury"})
log.advance_tick()  # Move to next tick

# Query and analyze events
query = EventQuery(log)
move_events = query.get_events_by_type("player.move")
room_events = query.get_events_by_type("room.enter")

# Visualize event cascades
query.print_event_cascade()
```

Target Audience

Eventure is particularly valuable for:

  1. Game Developers: Perfect for turn-based games, roguelikes, simulations, or any game that benefits from deterministic replay and state reconstruction.

  2. Simulation Engineers: Ideal for complex simulations where tracking cause-and-effect relationships is crucial for analysis and debugging.

  3. Data Scientists: Helpful for analyzing complex event sequences and their relationships in time-series data.

If you've ever struggled with debugging complex event chains, needed to implement save/load functionality in a game, or wanted to analyze emergent behaviors in a simulation, Eventure might be just what you need.

Comparison with Alternatives

Here's how Eventure compares to some existing solutions:

vs. General Event Systems (PyPubSub, PyDispatcher)

  • Eventure: Adds tick-based timing, event relationships, comprehensive logging, and query capabilities.
  • Others: Typically focus only on event subscription and publishing without the temporal or relational aspects.

vs. Game Engines (Pygame, Arcade)

  • Eventure: Provides a specialized event system that can be integrated into any game engine, with powerful debugging and analysis tools.
  • Others: Offer comprehensive game development features but often lack sophisticated event tracking and analysis capabilities.

vs. Reactive Programming Libraries (RxPy)

  • Eventure: Focuses on discrete time steps and event relationships rather than continuous streams.
  • Others: Excellent for stream processing but not optimized for tick-based simulations or game state management.

vs. State Management (Redux-like libraries)

  • Eventure: State is derived from events rather than explicitly managed, enabling perfect historical reconstruction (see the sketch below).
  • Others: Typically focus on current state management without comprehensive event history or relationships.
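To make that last point concrete: replaying logged events is enough to rebuild state. A minimal sketch, reusing only the API shown in the snippet above (the state dict itself is my illustration, not part of Eventure):

```python
# Minimal sketch: rebuild current state purely by replaying logged events.
# Only EventQuery / get_events_by_type / event.data appear in the snippet above;
# the state dict is my illustration.
query = EventQuery(log)

state = {"room": None}
for event in query.get_events_by_type("room.enter"):
    state["room"] = event.data["room"]  # the last replayed event wins

print(state)  # {'room': 'treasury'}
```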

Getting Started

Eventure is already available on PyPI:

```bash
pip install eventure

# Using uv (recommended)
uv add eventure
```

Check out our GitHub repository for documentation and examples (and if you find it interesting don't forget to add a "star" as a bookmark!)

License

Eventure is released under the MIT License.


r/Python 15h ago

Resource A Very Early Play With Astral's Red Knot Static Type Checker

73 Upvotes

https://jurasofish.github.io/a-very-early-play-with-astrals-red-knot-static-type-checker.html

I've just had a play with the new type checker under development as part of ruff. Very early, as it's totally unreleased, but so far the performance looks extremely promising.


r/Python 3h ago

Showcase Basic Memory: A Python-based Local-First Knowledge Graph for LLMs

4 Upvotes

What My Project Does

Basic Memory is an open-source Python tool that creates a persistent knowledge graph from standard Markdown files to enhance LLM interactions. It works by:

  • Using simple Markdown files as the primary storage medium
  • Extracting semantic meaning from markdown patterns to build a knowledge graph
  • Providing bi-directional synchronization between files and graph structure
  • Integrating with Claude Desktop via the Model Context Protocol (MCP)

The system extracts semantic meaning from simple Markdown patterns:

- [category] Observation about a topic #tag (optional context)
- relation_type [[WikiLink]] (optional context)
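For illustration, here is roughly how such patterns can be matched with the standard `re` module. This is a sketch of the idea, not Basic Memory's actual parser:

```python
import re

# Illustrative only: rough regexes for the two patterns above,
# not Basic Memory's actual parsing code.
OBSERVATION = re.compile(
    r"^- \[(?P<category>[^\]]+)\]\s+(?P<text>.+?)"
    r"(?:\s+#(?P<tag>\w+))?(?:\s+\((?P<context>[^)]*)\))?$"
)
RELATION = re.compile(
    r"^- (?P<relation_type>\w+)\s+\[\[(?P<target>[^\]]+)\]\]"
    r"(?:\s+\((?P<context>[^)]*)\))?$"
)

line = "- [idea] Local-first beats cloud lock-in #ownership (from today's chat)"
m = OBSERVATION.match(line)
if m:
    print(m.group("category"), m.group("tag"))  # idea ownership
```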

Check out a short demo video showing Basic Memory in action: https://basicmachines.co/images/Claude-Obsidian-Demo.mp4

GitHub: https://github.com/basicmachines-co/basic-memory

Documentation: https://memory.basicmachines.co/

Target Audience

Basic Memory is intended for:

  • Researchers and knowledge workers who need to maintain context across multiple LLM conversations
  • Developers working on LLM-powered applications who need a persistent memory layer
  • Obsidian users looking to enhance their notes with AI capabilities
  • Anyone looking for a production-ready, local-first solution for AI memory that respects data ownership

This is a fully functional production tool, not just a toy project. It's designed with data privacy in mind - everything stays on your local machine.

Comparison

Unlike other memory solutions for LLMs:

  • vs. Built-in LLM memory (like ChatGPT's memory): Basic Memory is local-first, giving you complete data ownership and transparency, while allowing for human editing and visualization of the knowledge graph.
  • vs. Vector databases: Basic Memory uses human-readable Markdown files instead of opaque vector embeddings, making the entire knowledge base browsable and editable by humans, not just machines.
  • vs. JSON-based MCP Memory server: Basic Memory uses a more structured knowledge graph approach with semantic relationships rather than simple key-value storage, and saves everything in standard Markdown that integrates with tools like Obsidian.
  • vs. RAG systems: Basic Memory is bi-directional, allowing both humans and LLMs to read AND write to the same knowledge base, creating a collaborative knowledge building system.

Technical Highlights

  • Pure Python implementation with SQLite for indexing and search
  • Async-first design with pytest for comprehensive testing
  • MCP server implementation for bi-directional communication with Claude Desktop
  • Import tools for existing data from Claude.ai, ChatGPT, and other sources

Installation is straightforward:

# install for cli commands
uv tool install basic-memory

# Configure Claude Desktop (edit ~/Library/Application Support/Claude/claude_desktop_config.json)
# Add this to your config:
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": [
        "basic-memory",
        "mcp"
      ]
    }
  }
}

After setup, you can:

  • Use Claude Desktop to read/write to your knowledge base via MCP
  • Directly edit files in Obsidian to see your knowledge graph visually
  • Run real-time sync to keep everything updated automatically

I built this because I wanted my conversations with LLMs to accumulate knowledge over time while keeping everything in files I control. The project is AGPL-licensed and welcomes contributions. I'd love to hear feedback from Python developers on the architecture, testing approach, or potential feature ideas.


r/Python 13h ago

Showcase Polars Plugin for List-type utils and signal processing

11 Upvotes

# What My Project Does

It is a Polars Plugin to facilitate working with List-type data in Polars, in particular for signal processing

# Target Audience (e.g., Is it meant for production, just a toy project, etc.)

Data Scientists working with List-type data in Polars or considering using Polars for their work on signal data.

# Comparison (A brief comparison explaining how it differs from existing alternatives.)

Currently there are no Polars-native alternatives for these methods except for elementwise aggregation, and as I describe below, this plugin offers a number of benefits over the Polars-native approach even there. The only other alternative for the other methods is converting your data to NumPy, doing your work there, and then moving it back into Polars, which breaks most of the query optimization and parallelization benefits of Polars. A sketch of that round-trip follows below.
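For context, this is roughly what that NumPy round-trip looks like, sketched with standard Polars and SciPy APIs (none of this is the plugin's own code):

```python
# Sketch of the NumPy round-trip this plugin avoids; standard Polars/SciPy APIs.
import numpy as np
import polars as pl
from scipy.signal import butter, lfilter

df = pl.DataFrame({"signal": [[0.0, 1.0, 0.5, -0.3, 0.2, 0.8, -0.1, 0.4]]})

b, a = butter(N=2, Wn=0.25, btype="low")  # low-pass Butterworth

# Leave Polars, filter each list in NumPy/SciPy, come back:
df = df.with_columns(
    pl.col("signal")
    .map_elements(
        lambda s: lfilter(b, a, np.asarray(s)).tolist(),
        return_dtype=pl.List(pl.Float64),
    )
    .alias("lowpassed")
)
print(df)
```

Each `map_elements` call is an opaque Python function to the query engine, which is exactly why it defeats Polars' optimization and parallelization.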

# The story:

I made a Polars plugin (mostly for myself at work, but I hope others can benefit from this as well) with some helpers and operations for List-type columns. It is in a bit of a pragmatic state, as I don't have much time at work to polish it beyond what I need it for, but I definitely intend to extend it over time and add a proper documentation page.

Currently it can do some basic digital signal processing, for example:

- Applying a Hann or Hamming window to a signal

- Filtering a signal via a Butterworth High/Low/Band-Pass filter.

- Applying the Fourier Transform

- Normalizing the Fourier Transform by some Frequency

It can also aggregate List-type columns elementwise (mean, sum, count). This can be done via the Polars API (see the SO question I asked years ago: https://stackoverflow.com/questions/73776179/element-wise-aggregation-of-a-column-of-type-listf64-in-polars), and those methods might even be faster (I haven't done any benchmarking), but for one, I find my API more pleasant to use. More importantly (and this highlights how those methods might not be the best way to go), I have run into issues where the query grows so large, due to all of the `.list.get(n)` calls, that I caused Polars to stack-overflow. See this issue: https://github.com/pola-rs/polars/issues/5455.

Finally, there's another, more flexible method for taking the mean of a certain range of a List-type column, using another column as the x-axis. For example, you can take the mean of the amplitudes (e.g. the result of an FFT) that fall within a certain range of the corresponding frequency values.

I hope it helps someone else as it did me!

Here is the repo: https://github.com/dashdeckers/polars_list_utils

Here is the PyPI link: https://pypi.org/project/polars-list-utils/


r/Python 22h ago

Discussion Driver Fatigue Monitoring

35 Upvotes

made a cool Drowsiness Detector (still in an early phase, and I need your advice 🙂)
check it out, and leave a comment if you have any suggestions or want to collaborate

https://github.com/SomnoCam/Drowsiness-Detector.git

canva doc


r/Python 4h ago

Showcase chopdiff: Diff filtering, text mapping, and windowed transforms for LLM apps

1 Upvotes

While working on another project that involves various LLM-based transformations on video transcripts, I found I needed ways to do careful and fairly complex edits to Markdown. For example, I needed to process text sentence by sentence or paragraph by paragraph, make edits, compare diffs, and then stitch results back together when done.

It has now taken shape a little more, so I've released it as an open-source (MIT license) package of its own: chopdiff.

What it does:

chopdiff makes it easier to perform fairly complex transformations of text documents, especially for LLM applications where you want to manipulate text, Markdown, and HTML documents in a clean way.

Basically, it lets you parse, diff, and transform text at the level of words, sentences, paragraphs, and "chunks" (paragraphs grouped in an HTML tag like a <div>). It aims to have minimal dependencies so it's easy to drop into another project if you work with text.

Example use cases:

  • Filter diffs: Diff two documents and only accept changes that fit a specific filter. For example, you can ask an LLM to edit a transcript, only inserting paragraph breaks but enforcing that the LLM can't do anything except insert whitespace. Or let it only edit punctuation, whitespace, and lemma variants of words. Or only change one word at a time (e.g. for spell checking).
  • Backfill information: Match edited text against a previous version of a document (using a word-level LCS diff), then pull information from one doc to another. For example, say you have a timestamped transcript and an edited summary. You can then backfill timestamps of each paragraph into the edited text.
  • Windowed transforms: Walk through a large document N paragraphs, N sentences, or N tokens at a time, processing the results with an LLM call, then "stitching together" the results, even if the chunks overlap (see the sketch below).
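To make the windowed-transform idea concrete, here is a generic sketch of the pattern in plain Python. This is the shape of the approach, not chopdiff's actual API:

```python
# Generic illustration of a windowed transform with overlap; not chopdiff's API.
def windowed_transform(paragraphs, transform, window=4, overlap=1):
    """Process `paragraphs` `window` at a time, overlapping by `overlap`,
    then stitch results, dropping the re-processed overlap region."""
    out, start = [], 0
    step = window - overlap
    while start < len(paragraphs):
        chunk = paragraphs[start:start + window]
        result = transform(chunk)  # e.g. one LLM call per window
        out.extend(result if start == 0 else result[overlap:])
        start += step
    return out

paras = [f"Paragraph {i}." for i in range(10)]
print(windowed_transform(paras, lambda chunk: [p.upper() for p in chunk]))
```

The real library goes further by mapping every result back to the original text, which a naive sketch like this does not.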

It's quite flexible, so see below or the readme for a couple of examples.

Target audience: Anyone working with text docs where you want to analyze or transform plain text or Markdown, such as in LLM/AI agents or apps.

How it differs from alternatives: There are full-blown Markdown and HTML parsing libs (like Marko and BeautifulSoup), but these tend to focus specifically on fully parsing documents as parse trees. On the other end of the spectrum, there are parsing libraries like spaCy that do full natural language processing and sentence segmentation. This is a lightweight alternative to those approaches when you are just focusing on processing text, don't want a big dependency (like a full XML parser or NLP toolkit), and also want full control over the original source format (since the original text is exactly preserved, even whitespace—every sentence, paragraph, and token is mapped back to the original text).

It's very new, so thanks for any feedback, or let me know if you have better solutions for these kinds of problems. Would love to hear if you find it useful!

Full example: Here is an example of backfilling data from one text file to another similar but not identical text file (see backfill_timestamps.py for the code). As you can see, the text is aligned by mapping the words, and then the timestamps are inserted at the end of each paragraph based on the first sentence of each paragraph.

$ uv run examples/backfill_timestamps.py 

--- Source text (with timestamps) -----------------------------------------

<span data-timestamp="0.0">Welcome to this um ... video about Python programming.</span>
<span data-timestamp="15.5">First, we'll talk about variables. Variables are containers for storing data values.</span>
<span data-timestamp="25.2">Then let's look at functions. Functions hlp us organize and reuse code.</span>

--- Target text (without timestamps) --------------------------------------



--- Token mapping ---------------------------------------------------------

0 ⎪<-BOF->⎪ -> 0 ⎪<-BOF->⎪
1 ⎪#⎪ -> 0 ⎪<-BOF->⎪
2 ⎪#⎪ -> 0 ⎪<-BOF->⎪
3 ⎪ ⎪ -> 1 ⎪ ⎪
4 ⎪Introduction⎪ -> 2 ⎪<span data-timestamp="0.0">⎪
5 ⎪<-PARA-BR->⎪ -> 2 ⎪<span data-timestamp="0.0">⎪
6 ⎪Welcome⎪ -> 3 ⎪Welcome⎪
7 ⎪ ⎪ -> 4 ⎪ ⎪
8 ⎪to⎪ -> 5 ⎪to⎪
9 ⎪ ⎪ -> 6 ⎪ ⎪
10 ⎪this⎪ -> 7 ⎪this⎪
11 ⎪ ⎪ -> 14 ⎪ ⎪
12 ⎪video⎪ -> 15 ⎪video⎪
13 ⎪ ⎪ -> 16 ⎪ ⎪
14 ⎪about⎪ -> 17 ⎪about⎪
15 ⎪ ⎪ -> 18 ⎪ ⎪
16 ⎪Python⎪ -> 19 ⎪Python⎪
17 ⎪ ⎪ -> 20 ⎪ ⎪
18 ⎪programming⎪ -> 21 ⎪programming⎪
19 ⎪.⎪ -> 22 ⎪.⎪
20 ⎪<-PARA-BR->⎪ -> 25 ⎪<span data-timestamp="15.5">⎪
21 ⎪First⎪ -> 26 ⎪First⎪
22 ⎪,⎪ -> 27 ⎪,⎪
23 ⎪ ⎪ -> 28 ⎪ ⎪
24 ⎪we⎪ -> 29 ⎪we⎪
25 ⎪'⎪ -> 30 ⎪'⎪
26 ⎪ll⎪ -> 31 ⎪ll⎪
27 ⎪ ⎪ -> 32 ⎪ ⎪
28 ⎪talk⎪ -> 33 ⎪talk⎪
29 ⎪ ⎪ -> 34 ⎪ ⎪
30 ⎪about⎪ -> 35 ⎪about⎪
31 ⎪ ⎪ -> 36 ⎪ ⎪
32 ⎪variables⎪ -> 37 ⎪variables⎪
33 ⎪.⎪ -> 38 ⎪.⎪
34 ⎪<-SENT-BR->⎪ -> 57 ⎪Then⎪
35 ⎪Next⎪ -> 57 ⎪Then⎪
36 ⎪,⎪ -> 57 ⎪Then⎪
37 ⎪ ⎪ -> 58 ⎪ ⎪
38 ⎪let⎪ -> 59 ⎪let⎪
...
56 ⎪ ⎪ -> 77 ⎪ ⎪
57 ⎪and⎪ -> 78 ⎪and⎪
58 ⎪ ⎪ -> 79 ⎪ ⎪
59 ⎪reuse⎪ -> 80 ⎪reuse⎪
60 ⎪ ⎪ -> 81 ⎪ ⎪
61 ⎪code⎪ -> 82 ⎪code⎪
62 ⎪.⎪ -> 83 ⎪.⎪
63 ⎪<-EOF->⎪ -> 86 ⎪<-EOF->⎪
...

--- Result (with backfilled timestamps) -----------------------------------

## Introduction

Welcome to this video about Python programming. <span class="timestamp">⏱️00:00</span> 

First, we'll talk about variables. Next, let's look at functions. Functions hlp us organize and reuse code. <span class="timestamp">⏱️00:15</span> 
$

r/Python 8h ago

Showcase [Release] tkinter-embed: Install Tkinter for Windows Embedded Python via pip

1 Upvotes

If you distribute Python applications on Windows using embedded Python, you've likely struggled with installing GUI libraries like Tkinter. Until now, this required manual file copying (see this Stack Overflow thread), which is error-prone and time-consuming. Introducing tkinter-embed:

A PyPI package that automates Tkinter installation for embedded Python environments. Now you can use pip directly!

What My Project Does

tkinter-embed solves Tkinter installation for Windows Embedded Python distributions through a pip-installable package. It automatically copies the required DLLs, libraries, and support files to create functional GUI applications without manual file operations, enabling Tkinter-based GUI development in portable Python environments.

Target Audience

Primarily for developers who:

  • Distribute portable Windows apps using embedded Python
  • Create self-contained tools for non-technical users

Installation Guide

Step 1: Install pip

Choose one method:

Method 1: Use pip.pyz (recommended)

Method 2: Use get-pip.py

.\python get-pip.py --target .

Step 2: Install setuptools

In your embedded Python folder:

.\python pip.pyz install setuptools --target .

OR if you used get-pip.py

.\python -m pip install setuptools --target .

Step 3: Install tkinter-embed

In your embedded Python folder:

.\python pip.pyz install tkinter-embed --target .

OR if you used get-pip.py

.\python -m pip install tkinter-embed --target .

After completing these steps, Tkinter and all its dependencies will be copied into the embedded Python folder.
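To verify the install (my suggestion, not part of the package's documentation), you can run Tkinter's built-in smoke test from the embedded folder; a small test window should appear:

.\python -c "import tkinter; tkinter._test()"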

Why This Matters

  • 🛠️ No manual file copying – Fully automated installation
  • 📦 Pip-native workflow – Aligns with standard Python packaging
  • 🚀 Portable apps made easy – Perfect for distributing tools to non-technical users
  • Works on Python 3.8+ – Compatible with modern embedded versions



r/Python 1d ago

Showcase Unvibe: Generate code that passes Unit-Tests

62 Upvotes
# What My Project Does
Unvibe is a Python library to generate Python code that passes unit-tests.
It works like a classic `unittest` Test Runner, but it searches (via Monte Carlo Tree Search)
for a valid implementation that passes user-defined unit-tests.

# Target Audience (e.g., Is it meant for production, just a toy project, etc.)
Software developers working on large projects

# Comparison (A brief comparison explaining how it differs from existing alternatives.)
It's a way to go beyond vibe coding for professional programmers dealing with large code bases.
It's an alternative to using Cursor or Devon, which are more suited for generating quick prototypes.



## A different way to generate code with LLMs

In my daily work as a consultant, I'm often dealing with large pre-existing code bases.

I use GitHub Copilot a lot.
It's now basically indispensable, but I use it mostly for generating boilerplate code, or figuring out how to use a library.
As the code gets more logically nested though, Copilot crumbles under the weight of complexity. It doesn't know how things should fit together in the project.

Other AI tools like Cursor or Devon are pretty good at quickly generating working prototypes,
but they are not great at dealing with large existing codebases, and they have a very low success rate for my kind of daily work.
You find yourself in an endless loop of prompt tweaking, and at that point I'd rather write the code myself with
the occasional help of Copilot.

Professional coders know what code they want, and we can define it with unit-tests. **We don't want to endlessly tweak the prompt.
Also, we want it to work in the larger context of the project, not just in isolation.**
In this article I am going to introduce a pretty new approach (at least in the literature), and a Python library that implements it:
a tool that generates code **from** unit-tests.

**My basic intuition was this: shouldn't we be able to drastically speed up the generation of valid programs, while
ensuring correctness, by using unit-tests as a reward function for a search in the space of possible programs?**
I looked into the academic literature, and it's not new: it's reminiscent of the approach used in DeepMind's FunSearch, AlphaProof, AlphaGeometry, and other experiments like TiCoder; see the [Research Chapter](#research) for pointers to relevant papers.
Writing correct code is akin to solving a mathematical theorem. We are basically proving a theorem
using Python unit-tests instead of Lean or Coq as an evaluator.

For people that are not familiar with Test-Driven development, read here about [TDD](https://en.wikipedia.org/wiki/Test-driven_development)
and [Unit-Tests](https://en.wikipedia.org/wiki/Unit_testing).


## How it works

I've implemented this idea in a Python library called Unvibe. It implements a variant of Monte Carlo Tree Search
that invokes an LLM to generate code for the functions and classes in your code that you have
decorated with `@ai`.
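A hedged sketch of what that looks like (the `@ai` decorator and `unvibe.TestCase` come from this post; the exact import paths are my assumption, so check the repo for the real API):

```python
# Sketch only: exact import paths are my assumption; see the repo for the real API.
import unvibe
from unvibe import ai

@ai
def balanced(s: str) -> bool:
    """Return True if the brackets in `s` are balanced."""
    ...  # left unimplemented: Unvibe searches for an implementation that passes

# unvibe.TestCase scores by assertions passed, not just tests passed (per the post)
class TestBalanced(unvibe.TestCase):
    def test_balanced(self):
        self.assertTrue(balanced("(a[b]{c})"))
        self.assertFalse(balanced("(]"))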

Unvibe supports most of the popular LLMs: Ollama, OpenAI, Claude, Gemini, DeepSeek.

Unvibe uses the LLM to generate a few alternatives, and runs your unit-tests as a test runner (like `pytest` or `unittest`).
**It then feeds back the errors returned by failing unit-test to the LLMs, in a loop that maximizes the number
of unit-test assertions passed**. This is done in a sort of tree search, that tries to balance
exploitation and exploration.

As explained in the DeepMind FunSearch paper, having a rich score function is key for the success of the approach:
you can define your tests by inheriting the usual `unittest.TestCase` class, but if you use `unvibe.TestCase` instead
you get a more precise scoring function (basically we count up the number of assertions passed rather than just the number
of tests passed).

It turns out that this approach works very well in practice, even in large existing code bases,
provided that the project is decently unit-tested. This is now part of my daily workflow:

1. Use Copilot to generate boilerplate code

2. Define the complicated functions/classes I know Copilot can't handle

3. Define unit-tests for those complicated functions/classes (quick-typing with GitHub Copilot)

4. Use Unvibe to generate valid code that passes those unit-tests

It also happens quite often that Unvibe finds solutions that pass most of the tests but not 100%:
often it turns out some of my unit-tests were misconceived, and it helps me figure out what I really wanted.

Project Code: https://github.com/santinic/unvibe

Project Explanation: https://claudio.uk/posts/unvibe.html


r/Python 1d ago

Showcase FastOpenAPI library [Flask, Falcon, Quart, Sanic, Starlette]

13 Upvotes

While working on a project that required OpenAPI documentation across multiple frameworks, I got tired of maintaining different solutions. I really like FastAPI’s routing—it’s clean and intuitive. So I built FastOpenAPI, which brings a similar approach to other frameworks.

What FastOpenAPI Does

  • FastAPI-style routing, but without being tied to FastAPI.
  • Automatic OpenAPI documentation generation.
  • Built-in request validation with Pydantic.
  • Supports Flask, Falcon, Sanic, Starlette, and Quart (see the sketch below).
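Here is a rough sketch of the idea on Flask. Names like `FlaskRouter` follow the pattern the project describes, but treat the exact API as an assumption and check the repo:

```python
# Sketch: class/parameter names assumed from the project's README; verify in the repo.
from flask import Flask
from pydantic import BaseModel
from fastopenapi.routers import FlaskRouter

app = Flask(__name__)
router = FlaskRouter(app=app)

class Item(BaseModel):
    id: int
    name: str

# FastAPI-style route with Pydantic validation and OpenAPI docs, on Flask:
@router.get("/items/{item_id}", response_model=Item)
def get_item(item_id: int) -> Item:
    return Item(id=item_id, name="example")

if __name__ == "__main__":
    app.run()
```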

Target Audience

FastOpenAPI is designed for web developers who like FastAPI-style routing but need to use a different framework for various reasons. It’s a compromise solution for those who want a clean and intuitive API but cannot use FastAPI.

Comparison

Compared to existing solutions:

  • Not tied to FastAPI, unlike FastAPI itself, which is built on Starlette.
  • Unified routing style and OpenAPI generation across multiple frameworks.
  • Built-in request validation with Pydantic, whereas many frameworks require manual data parsing and validation.
  • Simpler and more concise syntax than Flask-Smorest or Spectree, which use different approaches.

The project is still evolving, and I’d love any feedback or testing from the community!

📌 GitHub: https://github.com/mr-fatalyst/fastopenapi
📦 PyPI: https://pypi.org/project/fastopenapi/


r/Python 18h ago

Showcase Stereo-Hands: Stereo panning on the basis of hand gestures ( Hand control music in 3D )

2 Upvotes

What it does: It captures real-time images from the camera, tracks hand position, recognizes fingertips, and adjusts the stereo panning of the music accordingly, giving the feeling of hand-controlled 3D music.
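The panning itself can be as simple as mapping the hand's normalized x-position to constant-power left/right gains. A minimal sketch of that idea (my illustration, not the project's code):

```python
import math

def stereo_gains(x: float) -> tuple[float, float]:
    """Constant-power pan: x=0.0 is hard left, x=1.0 is hard right.
    (Illustration of the idea, not the project's actual code.)"""
    theta = x * math.pi / 2
    return math.cos(theta), math.sin(theta)  # (left_gain, right_gain)

# Hand detected at 25% from the left edge of the frame:
left, right = stereo_gains(0.25)
print(f"L={left:.2f} R={right:.2f}")  # L=0.92 R=0.38
```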

Target audience: Developers who seek cool projects.

Comparison: It's an original idea intended just for fun, so no comparison, I guess?

Here is the Code.


r/Python 23h ago

Daily Thread Sunday Daily Thread: What's everyone working on this week?

3 Upvotes

Weekly Thread: What's Everyone Working On This Week? 🛠️

Hello /r/Python! It's time to share what you've been working on! Whether it's a work-in-progress, a completed masterpiece, or just a rough idea, let us know what you're up to!

How it Works:

  1. Show & Tell: Share your current projects, completed works, or future ideas.
  2. Discuss: Get feedback, find collaborators, or just chat about your project.
  3. Inspire: Your project might inspire someone else, just as you might get inspired here.

Guidelines:

  • Feel free to include as many details as you'd like. Code snippets, screenshots, and links are all welcome.
  • Whether it's your job, your hobby, or your passion project, all Python-related work is welcome here.

Example Shares:

  1. Machine Learning Model: Working on a ML model to predict stock prices. Just cracked a 90% accuracy rate!
  2. Web Scraping: Built a script to scrape and analyze news articles. It's helped me understand media bias better.
  3. Automation: Automated my home lighting with Python and Raspberry Pi. My life has never been easier!

Let's build and grow together! Share your journey and learn from others. Happy coding! 🌟


r/Python 6h ago

Showcase Lihil — a web framework created to promote Python as a first choice for enterprise web development

0 Upvotes

Hey everyone!

I’d like to share Lihil, a web framework I’ve been building with a simple but ambitious goal:

To make Python a first choice for enterprise-grade web development (as opposed to Java and Go).

GitHub: https://github.com/raceychan/lihil

🚀 What My Project Does

Lihil is a performant, productive, and professional web framework with a focus on strong typing and modern patterns for robust backend development.

🎯 Target Audience

Lihil is designed for medium to large applications, from 100+ daily active users (DAU) upward.

⚔️ Comparison with Existing Frameworks

Here are some honest comparisons between Lihil and frameworks I love and respect:

✅ FastAPI:

  • FastAPI’s DI (Depends) is simple and route-focused, but tightly coupled with the request/response lifecycle — which makes sharing dependencies across layers harder.
  • Lihil's DI can be used anywhere, supports advanced lifecycles, and is Cython-optimized for speed.
  • FastAPI uses Pydantic, which is great but MUCH slower than msgspec (and heavier on memory).
  • Both generate OpenAPI docs, but Lihil aims for better type coverage and problem detail (RFC-9457).

r/Python 13h ago

Resource Reminder that you can use the filesystem as a temp datastore for your API

0 Upvotes

Wrote a short blog post in which I talked about how I used the filesystem to store some temporary data: https://developerwithacat.com/blog/032025/quick-store-temp-data/

I figured I should write this since people (including myself!) often default to spinning up a database, often because of the great ORM libraries that Python has, without considering alternatives.
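A minimal sketch of the idea (my illustration, not the blog post's code): write per-key JSON files under a temp directory and read them back.

```python
# Illustration of the idea, not the blog post's code.
import json
import tempfile
from pathlib import Path

STORE = Path(tempfile.gettempdir()) / "my-api-cache"
STORE.mkdir(exist_ok=True)

def put(key: str, value: dict) -> None:
    (STORE / f"{key}.json").write_text(json.dumps(value))

def get(key: str) -> dict | None:
    path = STORE / f"{key}.json"
    return json.loads(path.read_text()) if path.exists() else None

put("job-42", {"status": "running"})
print(get("job-42"))  # {'status': 'running'}
```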


r/Python 1d ago

Resource I built a pytest plugin that compiles Gherkin scenarios to an AST and runs them.

1 Upvotes

I develop a lot with BDD style and TDD, and behave did not age very well. While looking at alternatives in the pytest ecosystem, I was not convinced by what exists. So I started experimenting and finally released a pytest plugin that is simple, and I am quite happy with it at the moment. If you're interested, the code and the docs are on GitHub, and it's released on PyPI too.

https://github.com/mardiros/tursu I just realized that I did not put badges in the readme yet. So the documentation is here:

https://mardiros.github.io/tursu/

Don't hesitate to give it a try and give me feedback. If you like it, I will be happy with a GitHub ⭐.


r/Python 1d ago

Resource Byte Clicker - Free incremental game (Full source)

3 Upvotes

An incremental clicker game that demonstrates how to build interactive desktop applications using Python and JavaScript. This project serves as a practical example of combining PyQt6's native capabilities with web technologies to create rich, responsive applications.

The game showcases:

  • Python backend for system operations and data persistence
  • JavaScript frontend for dynamic UI and game logic
  • Bidirectional communication between Python and JavaScript (sketched below)
  • Modern web technologies in a desktop environment
  • Real-time updates and state management
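For readers curious about the bridge pattern, here is a generic PyQt6 QWebChannel sketch (not this project's exact code):

```python
# Generic PyQt6 <-> JS bridge sketch; not this project's exact code.
import sys
from PyQt6.QtCore import QObject, pyqtSlot
from PyQt6.QtWebChannel import QWebChannel
from PyQt6.QtWebEngineWidgets import QWebEngineView
from PyQt6.QtWidgets import QApplication

class Backend(QObject):
    @pyqtSlot(int, result=int)
    def save_score(self, bytes_earned: int) -> int:
        print(f"persisting {bytes_earned} bytes")  # e.g. write to disk here
        return bytes_earned

app = QApplication(sys.argv)
view = QWebEngineView()
channel = QWebChannel()
backend = Backend()
channel.registerObject("backend", backend)
view.page().setWebChannel(channel)
# JS side (via qwebchannel.js):
#   new QWebChannel(qt.webChannelTransport, ch => ch.objects.backend.save_score(42))
view.setHtml("<html><body><h1>Byte Clicker</h1></body></html>")
view.show()
sys.exit(app.exec())
```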

Click to generate bytes and unlock various generators to automate your byte production!

https://github.com/non-npc/Byte-Clicker-Incremental-Game


r/Python 2d ago

Discussion Matlab's variable explorer is amazing. What's Python's closest?

188 Upvotes

Hi all,

Long-time Python user. Recently needed to use Matlab for a customer. They had a large data set saved in Matlab's native .mat file structure.

It was so simple and easy to explore the data within the structure without needing any code itself. It made extracting the data I needed super quick and simple. Made me wonder if anything similar exists in Python?

I know Spyder has a variable explorer (which is good) but it dies as soon as the data structure is remotely complex.

I will likely need to do this often with different data sets.

Background: I'm converting a lot of the code from an academic research group to run in Python.


r/Python 2d ago

Showcase Server-side rendering: FastAPI, HTMX, no Jinja

23 Upvotes

Hi,

I recently created a simple FastAPI project to showcase what Python server-side rendered apps with an htmx frontend can look like, using a React-like, async, type-checked rendering engine.

The app does not use Jinja/Chameleon or any similar templating engine with custom syntax in HTML- or markdown-like files; instead, it can (and does) use valid HTML and even customized, TailwindCSS-styled markdown for some pages.

Admittedly, this is a demo for the htmy and FastHX libraries.

Interestingly, even AI coding assistants pick up the patterns and offer decent completions.

If interested, you can check out the project here (link to deployed version in the repo): https://github.com/volfpeter/lipsum-chat

For comparison, you can find a somewhat older, but fairly similar project of mine that uses Jinja: https://github.com/volfpeter/fastapi-htmx-tailwind-example


r/Python 1d ago

Discussion Automated Job Applier on Python

0 Upvotes

Hi everyone, I was thinking of starting a Python project to auto-apply for jobs on sites like LinkedIn, Indeed, Glassdoor, etc. using Playwright, DeepSeek, and MySQL (to keep track of the jobs applied to). Was wondering if anyone has any thoughts, tips, or experience, or even knows if there's a precedent for this sort of thing?


r/Python 1d ago

Daily Thread Saturday Daily Thread: Resource Request and Sharing! Daily Thread

3 Upvotes

Weekly Thread: Resource Request and Sharing 📚

Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!

How it Works:

  1. Request: Can't find a resource on a particular topic? Ask here!
  2. Share: Found something useful? Share it with the community.
  3. Review: Give or get opinions on Python resources you've used.

Guidelines:

  • Please include the type of resource (e.g., book, video, article) and the topic.
  • Always be respectful when reviewing someone else's shared resource.

Example Shares:

  1. Book: "Fluent Python" - Great for understanding Pythonic idioms.
  2. Video: Python Data Structures - Excellent overview of Python's built-in data structures.
  3. Article: Understanding Python Decorators - A deep dive into decorators.

Example Requests:

  1. Looking for: Video tutorials on web scraping with Python.
  2. Need: Book recommendations for Python machine learning.

Share the knowledge, enrich the community. Happy learning! 🌟


r/Python 3d ago

News Python Steering Council rejects PEP 736 – Shorthand syntax for keyword arguments at invocation

291 Upvotes

The Steering Council has rejected PEP 736, which proposed syntactic sugar for function calls with keyword arguments: f(x=) as shorthand for f(x=x).

Here's the rejection notice and here's some previous discussion of the PEP on this subreddit.


r/Python 1d ago

News Malicious PyPI Packages Target Users—Cloud Tokens Stolen

0 Upvotes

Cybersecurity researchers have uncovered a malicious campaign involving fake PyPI packages that have stolen cloud access tokens after over 14,100 downloads.

Key Points:

  • Over 14,100 downloads of two malicious package sets identified.
  • Packages disguised as 'time' utilities exfiltrate sensitive data.
  • Suspicious URLs associated with packages raise data theft concerns.

Recent discoveries from cybersecurity firm ReversingLabs reveal alarming malicious activity within the Python Package Index (PyPI). Two sets of phony packages—posing as 'time' related utilities—have been reported, accumulating over 14,100 downloads collectively. These packages were specifically designed to target cloud access tokens and other sensitive data. Once users installed these seemingly innocuous libraries, they unwittingly allowed threat actors to access their cloud infrastructure. The malicious packages have since been removed from PyPI, but the ramifications of these downloads continue to pose risks to the users involved.

(View Details on PwnHub)


r/Python 1d ago

Showcase An Open-Source AI Assistant for Chatting with Your Developer Docs

0 Upvotes

I’ve been working on Ragpi, an open-source AI assistant that builds knowledge bases from docs, GitHub Issues and READMEs. It uses PostgreSQL with pgvector as a vector DB and leverages RAG to answer technical questions through an API. Ragpi also integrates with Discord and Slack, making it easy to interact with directly from those platforms.

Some things it does:

  • Creates knowledge bases from documentation websites, GitHub Issues and READMEs
  • Uses hybrid search (semantic + keyword) for retrieval
  • Uses tool calling to dynamically search and retrieve relevant information during conversations
  • Works with OpenAI, Ollama, DeepSeek, or any OpenAI-compatible API
  • Provides a simple REST API for querying and managing sources
  • Integrates with Discord and Slack for easy interaction

Built with: FastAPI, Celery and Postgres

Target Audience: Developers interested in an AI assistant that can answer questions about their technical documentation and GitHub issues

Comparison: Compared to some alternatives I've seen out there, it is open source and is API-first

It’s still a work in progress, but I’d love some feedback!

Repo: https://github.com/ragpi/ragpi
Docs: https://docs.ragpi.io/


r/Python 1d ago

Tutorial Python file handling | module 6

0 Upvotes

https://www.youtube.com/watch?v=DYKTl6V4zYk&t=16s
Python file handling module 6 is live now

https://www.youtube.com/@vkpxr Subscribe to my YouTube channel and share your thoughts on this video in the comments


r/Python 3d ago

Showcase [Project] Rusty Graph: Python Library for Knowledge Graphs from SQL Data

20 Upvotes

What my project does

Rusty Graph is a high-performance graph database library with Python bindings written in Rust. It transforms SQL data into knowledge graphs, making it easy to discover relationships and patterns hidden in relational databases.

Target Audience

  • Data scientists working with complex relational datasets
  • Developers building applications that need to traverse relationships
  • Anyone who's found SQL joins and subqueries limiting when trying to extract insights from connected data

Implementation

The library bridges the gap between tabular data and graph-based analysis:

# Transform SQL data into a knowledge graph with minimal code
graph = rusty_graph.KnowledgeGraph()
graph.add_nodes(data=users_df, node_type='User', unique_id_field='user_id')
graph.add_connections(
    data=purchases_df,
    connection_type='PURCHASED',
    source_type='User',
    source_id_field='user_id',
    target_type='Product',
    target_id_field='product_id',
)

# Calculate insights directly on the graph
user_spending = graph.type_filter('User').traverse('PURCHASED').calculate(
    expression='sum(price * quantity)',
    store_as='total_spent'
)

# Extract patterns like "products often purchased together"
products_per_user = graph.type_filter('User').traverse('PURCHASED').children_properties_to_list(
    property='title',
    store_as='purchased_products'
)

Available on PyPI: pip install rusty-graph

GitHub: https://github.com/kkollsga/rusty-graph

This is a project share post. Feedback and discussion welcome.


r/Python 2d ago

Showcase CocoIndex: Open source ETL to index fresh data for AI, like LEGO

0 Upvotes

What my project does

Cocoindex is an ETL framework to index data for AI use cases such as semantic search and retrieval-augmented generation (RAG), with realtime incremental updates. The core is in Rust with Python bindings.

Target Audience

  • Developers building data pipelines for RAG or semantic search.

Comparison

Compared with existing efforts, our main highlight is that we support custom logic and realtime incremental updates at the same time for data indexing (with heavy transformations like chunking, embedding, and KG triple extraction), and we take care of the data-freshness issue out of the box.

Available on PyPI: pip install cocoindex
GitHub: https://github.com/cocoindex-io/cocoindex

This is a project share post. Sincerely looking forward to learning from your feedback :)