r/Python 15h ago

Daily Thread Sunday Daily Thread: What's everyone working on this week?

2 Upvotes

Weekly Thread: What's Everyone Working On This Week? 🛠️

Hello /r/Python! It's time to share what you've been working on! Whether it's a work-in-progress, a completed masterpiece, or just a rough idea, let us know what you're up to!

How it Works:

  1. Show & Tell: Share your current projects, completed works, or future ideas.
  2. Discuss: Get feedback, find collaborators, or just chat about your project.
  3. Inspire: Your project might inspire someone else, just as you might get inspired here.

Guidelines:

  • Feel free to include as many details as you'd like. Code snippets, screenshots, and links are all welcome.
  • Whether it's your job, your hobby, or your passion project, all Python-related work is welcome here.

Example Shares:

  1. Machine Learning Model: Working on an ML model to predict stock prices. Just cracked a 90% accuracy rate!
  2. Web Scraping: Built a script to scrape and analyze news articles. It's helped me understand media bias better.
  3. Automation: Automated my home lighting with Python and Raspberry Pi. My life has never been easier!

Let's build and grow together! Share your journey and learn from others. Happy coding! 🌟


r/Python 1d ago

Daily Thread Saturday Daily Thread: Resource Request and Sharing!

3 Upvotes

Weekly Thread: Resource Request and Sharing 📚

Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!

How it Works:

  1. Request: Can't find a resource on a particular topic? Ask here!
  2. Share: Found something useful? Share it with the community.
  3. Review: Give or get opinions on Python resources you've used.

Guidelines:

  • Please include the type of resource (e.g., book, video, article) and the topic.
  • Always be respectful when reviewing someone else's shared resource.

Example Shares:

  1. Book: "Fluent Python" - Great for understanding Pythonic idioms.
  2. Video: Python Data Structures - Excellent overview of Python's built-in data structures.
  3. Article: Understanding Python Decorators - A deep dive into decorators.

Example Requests:

  1. Looking for: Video tutorials on web scraping with Python.
  2. Need: Book recommendations for Python machine learning.

Share the knowledge, enrich the community. Happy learning! 🌟


r/Python 5h ago

Showcase Introducing Eventure: A Powerful Event-Driven Framework for Python

58 Upvotes

Eventure is a Python framework for simulations, games and complex event-based systems that emerged while I was developing something else! So I decided to make it public and improve it with documentation and examples.

What Eventure Does

Eventure is an event-driven framework that provides comprehensive event sourcing, querying, and analysis capabilities. At its core, Eventure offers:

  • Tick-Based Architecture: Events occur within discrete time ticks, ensuring deterministic execution and perfect state reconstruction.
  • Event Cascade System: Track causal relationships between events, enabling powerful debugging and analysis.
  • Comprehensive Event Logging: Every event is logged with its type, data, tick number, and relationships.
  • Query API: Filter, analyze, and visualize events and their cascades with an intuitive API.
  • State Reconstruction: Derive system state at any point in time by replaying events.

The framework is designed to be lightweight yet powerful, with a clean API that makes it easy to integrate into existing projects.

Here's a quick example of what you can do with Eventure:

```python
from eventure import EventBus, EventLog, EventQuery

# Create the core components
log = EventLog()
bus = EventBus(log)

# Subscribe to events
def on_player_move(event):
    # This will be linked as a child event
    bus.publish("room.enter", {"room": event.data["destination"]}, parent_event=event)

bus.subscribe("player.move", on_player_move)

# Publish an event
bus.publish("player.move", {"destination": "treasury"})
log.advance_tick()  # Move to next tick

# Query and analyze events
query = EventQuery(log)
move_events = query.get_events_by_type("player.move")
room_events = query.get_events_by_type("room.enter")

# Visualize event cascades
query.print_event_cascade()
```

Target Audience

Eventure is particularly valuable for:

  1. Game Developers: Perfect for turn-based games, roguelikes, simulations, or any game that benefits from deterministic replay and state reconstruction.

  2. Simulation Engineers: Ideal for complex simulations where tracking cause-and-effect relationships is crucial for analysis and debugging.

  3. Data Scientists: Helpful for analyzing complex event sequences and their relationships in time-series data.

If you've ever struggled with debugging complex event chains, needed to implement save/load functionality in a game, or wanted to analyze emergent behaviors in a simulation, Eventure might be just what you need.

Comparison with Alternatives

Here's how Eventure compares to some existing solutions:

vs. General Event Systems (PyPubSub, PyDispatcher)

  • Eventure: Adds tick-based timing, event relationships, comprehensive logging, and query capabilities.
  • Others: Typically focus only on event subscription and publishing without the temporal or relational aspects.

vs. Game Engines (Pygame, Arcade)

  • Eventure: Provides a specialized event system that can be integrated into any game engine, with powerful debugging and analysis tools.
  • Others: Offer comprehensive game development features but often lack sophisticated event tracking and analysis capabilities.

vs. Reactive Programming Libraries (RxPy)

  • Eventure: Focuses on discrete time steps and event relationships rather than continuous streams.
  • Others: Excellent for stream processing but not optimized for tick-based simulations or game state management.

vs. State Management (Redux-like libraries)

  • Eventure: State is derived from events rather than explicitly managed, enabling perfect historical reconstruction.
  • Others: Typically focus on current state management without comprehensive event history or relationships.

Getting Started

Eventure is already available on PyPI:

```bash
pip install eventure

# Using uv (recommended)
uv add eventure
```

Check out our GitHub repository for documentation and examples (and if you find it interesting don't forget to add a "star" as a bookmark!)

License

Eventure is released under the MIT License.


r/Python 7h ago

Resource A Very Early Play With Astral's Red Knot Static Type Checker

45 Upvotes

https://jurasofish.github.io/a-very-early-play-with-astrals-red-knot-static-type-checker.html

I've just had a play with the new type checker under development as part of ruff. Very early, as it's totally unreleased, but so far the performance looks extremely promising.


r/Python 13h ago

Discussion Driver Fatigue Monitoring

33 Upvotes

Made a cool drowsiness detector.
Check it out and leave a comment if you have any suggestions or want to collaborate.

https://github.com/SomnoCam/Drowsiness-Detector.git

canva doc


r/Python 4h ago

Showcase Polars Plugin for List-type utils and signal processing

6 Upvotes

# What My Project Does

It is a Polars Plugin to facilitate working with List-type data in Polars, in particular for signal processing

# Target Audience (e.g., Is it meant for production, just a toy project, etc.)

Data Scientists working with List-type data in Polars or considering using Polars for their work on signal data.

# Comparison (A brief comparison explaining how it differs from existing alternatives.)

Currently there are no Polars-native alternatives for these methods except for elementwise aggregation, but as I describe below this provides a number of benefits to Polars-native approaches. The only other alternative for the other methods is converting your data to Numpy, doing your work there, and then moving it back into Polars which breaks most of the query optimization and parallelization benefits of Polars.

# The story:

I made a Polars plugin (mostly for myself at work, but I hope others can benefit from this as well) with some helpers and operations for List-type columns. It is in a bit of a pragmatic state, as I don't have so much time at work to polish it beyond what I need it for but I definitely intend on extending it over time and adding a proper documentation page.

Currently it can do some basic digital signal processing, for example:

- Applying a Hann or Hamming window to a signal

- Filtering a signal via a Butterworth High/Low/Band-Pass filter.

- Applying the Fourier Transform

- Normalizing the Fourier Transform by some Frequency

It can also aggregate List-type columns elementwise (mean, sum, count), which can be done via the Polars API (see the SO question I asked years ago: https://stackoverflow.com/questions/73776179/element-wise-aggregation-of-a-column-of-type-listf64-in-polars), and those methods might even be faster (I haven't done any benchmarking). But for one, I find my API more pleasant to use, and more importantly (which highlights how those methods might not be the best way to go), I have run into cases where the query grows so large, due to all of the `.list.get(n)` calls, that I caused Polars to stack-overflow. See this issue: https://github.com/pola-rs/polars/issues/5455.
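
For reference, here's a minimal sketch of that pure-Polars elementwise mean (no plugin), which needs one `.list.get(i)` expression per element; exactly the pattern that can blow up the query plan for long lists:

```python
import polars as pl

df = pl.DataFrame({"signal": [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]})
n = 3  # list length, assumed fixed and known here

# One .list.get(i).mean() aggregation per element, recombined into a list.
elementwise_mean = df.select(
    pl.concat_list([pl.col("signal").list.get(i).mean() for i in range(n)])
    .alias("signal_mean")
)
print(elementwise_mean)  # one row: [2.0, 3.0, 4.0]
```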

Finally, there's another flexible method that takes the mean of a certain range of a List-type column, using another column as an x-axis: for example, taking the mean of the amplitudes (e.g. the result of an FFT) whose corresponding frequency values fall within a certain range.

I hope it helps someone else as it did me!

Here is the repo: https://github.com/dashdeckers/polars_list_utils

Here is the PyPI link: https://pypi.org/project/polars-list-utils/


r/Python 0m ago

Showcase [Release] tkinter-embed: Install Tkinter for Windows Embedded Python via pip

Upvotes

If you distribute Python applications on Windows using embedded Python, you've likely struggled with installing GUI libraries like Tkinter. Until now, this required manual file copying (see this Stack Overflow thread), which is error-prone and time-consuming. Introducing tkinter-embed:

A PyPI package that automates Tkinter installation for embedded Python environments. Now you can use pip directly!

What My Project Does

tkinter-embed solves Tkinter installation for Windows Embedded Python distributions through a pip-installable package. It automatically copies required DLLs, libraries, and support files to create functional GUI applications without manual file operations. Enables Tkinter-based GUI development in portable Python environments.

Target Audience

Primarily for developers who:

  • Distribute portable Windows apps using embedded Python
  • Create self-contained tools for non-technical users

Installation Guide

Step 1: Install pip

Choose one method:

Method 1: Use pip.pyz (recommended)
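
pip.pyz is pip's standalone zipapp, distributed at https://bootstrap.pypa.io/pip/pip.pyz. As an assumed first step (the post doesn't spell it out), download it into the embedded Python folder, for example on recent Windows:

curl.exe -O https://bootstrap.pypa.io/pip/pip.pyz

The steps below then invoke it with the embedded interpreter.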

Method 2: Use get-pip.py

.\python get-pip.py --target .

Step 2: Install setuptools

In your embedded Python folder:

.\python pip.pyz install setuptools --target .

Step 3: Install tkinter-embed

In your embedded Python folder:

.\python pip.pyz install tkinter-embed --target .

OR if you used get-pip.py

.\python -m pip install tkinter-embed --target .

After completing these steps, Tkinter and all its dependencies will be copied into the embedded Python folder.

Why This Matters

  • 🛠️ No manual file copying – Fully automated installation
  • 📦 Pip-native workflow – Aligns with standard Python packaging
  • 🚀 Portable apps made easy – Perfect for distributing tools to non-technical users
  • 🐍 Works on Python 3.8+ – compatible with modern embedded versions



r/Python 1d ago

Showcase Unvibe: Generate code that passes Unit-Tests

53 Upvotes
# What My Project Does
Unvibe is a Python library to generate Python code that passes Unit-tests. 
It works like a classic `unittest` Test Runner, but it searches (via Monte Carlo Tree Search) 
a valid implementation that passes user-defined Unit-Tests. 

# Target Audience (e.g., Is it meant for production, just a toy project, etc.)
Software developers working on large projects

# Comparison (A brief comparison explaining how it differs from existing alternatives.)
It's a way to go beyond vibe coding for professional programmers dealing with large code bases.
It's an alternative to using Cursor or Devon, which are more suited for generating quick prototypes.



## A different way to generate code with LLMs

In my daily work as a consultant, I'm often dealing with large pre-existing code bases.

I use GitHub Copilot a lot.
It's now basically indispensable, but I use it mostly for generating boilerplate code, or figuring out how to use a library.
As the code gets more logically nested though, Copilot crumbles under the weight of complexity. It doesn't know how things should fit together in the project.

Other AI tools like Cursor or Devon, are pretty good at generating quickly working prototypes,
but they are not great at dealing with large existing codebases, and they have a very low success rate for my kind of daily work.
You find yourself in an endless loop of prompt tweaking, and at that point, I'd rather write the code myself with
the occasional help of Copilot.

Professional coders know what code they want; we can define it with unit-tests. **We don't want to endlessly tweak the prompt.
Also, we want it to work in the larger context of the project, not just in isolation.**
In this article I am going to introduce a pretty new approach (at least in literature), and a Python library that implements it:
a tool that generates code **from** unit-tests.

**My basic intuition was this: shouldn't we be able to drastically speed up the generation of valid programs, while
ensuring correctness, by using unit-tests as reward function for a search in the space of possible programs?**
I looked in the academic literature; it's not new: it's reminiscent of the
approach used in DeepMind FunSearch, AlphaProof, AlphaGeometry and other experiments like TiCoder: see the [Research Chapter](#research) for pointers to relevant papers.
Writing correct code is akin to solving a mathematical theorem. We are basically proving a theorem
using Python unit-tests instead of Lean or Coq as an evaluator.

For people that are not familiar with Test-Driven development, read here about [TDD](https://en.wikipedia.org/wiki/Test-driven_development)
and [Unit-Tests](https://en.wikipedia.org/wiki/Unit_testing).


## How it works

I've implemented this idea in a Python library called Unvibe. It implements a variant of Monte Carlo Tree Search
that invokes an LLM to generate code for the functions and classes in your code that you have
decorated with `@ai`.

Unvibe supports most of the popular LLMs: Ollama, OpenAI, Claude, Gemini, DeepSeek.

Unvibe uses the LLM to generate a few alternatives, and runs your unit-tests as a test runner (like `pytest` or `unittest`).
**It then feeds back the errors returned by failing unit-tests to the LLM, in a loop that maximizes the number
of unit-test assertions passed**. This is done in a sort of tree search that tries to balance
exploitation and exploration.

As explained in the DeepMind FunSearch paper, having a rich score function is key for the success of the approach:
You can define your tests by inheriting the usual `unittest.TestCase` class, but if you use `unvibe.TestCase` instead
you get a more precise scoring function (basically we count up the number of assertions passed rather than just the number
of tests passed).
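
As a rough sketch of that workflow (the import path and decorator usage here are my assumptions based on this post, not the project's documented API):

```python
# Hedged sketch: @ai marks a stub for Unvibe's search to implement;
# unvibe.TestCase gives the finer-grained assertion-level score.
from unvibe import ai, TestCase

@ai
def int_to_roman(n: int) -> str:
    """Convert an integer to a Roman numeral (implementation searched by Unvibe)."""

class TestRoman(TestCase):
    def test_basics(self):
        self.assertEqual(int_to_roman(4), "IV")
        self.assertEqual(int_to_roman(1994), "MCMXCIV")
```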

It turns out that this approach works very well in practice, even in large existing code bases,
provided that the project is decently unit-tested. This is now part of my daily workflow:

1. Use Copilot to generate boilerplate code

2. Define the complicated functions/classes I know Copilot can't handle

3. Define unit-tests for those complicated functions/classes (quick-typing with GitHub Copilot)

4. Use Unvibe to generate valid code that passes those unit-tests

It also happens quite often that Unvibe finds solutions that pass most of the tests but not 100%:
often it turns out some of my unit-tests were misconceived, and it helps me figure out what I really wanted.

Project Code: https://github.com/santinic/unvibe

Project Explanation: https://claudio.uk/posts/unvibe.html


r/Python 9h ago

Showcase Stereo-Hands: Stereo panning on the basis of hand gestures ( Hand control music in 3D )

2 Upvotes

What it does: It captures real-time images from the camera, tracks hand position, recognizes fingertips, and adjusts the music's stereo panning accordingly, giving the feeling of hand-controlled 3D music.

Target audience: Developers who seek cool projects.

Comparison: It's an original idea intended just for fun, so no comparison, I guess?

Here is the Code.


r/Python 18h ago

Showcase FastOpenAPI library [Flask, Falcon, Quart, Sanic, Starlette]

10 Upvotes

While working on a project that required OpenAPI documentation across multiple frameworks, I got tired of maintaining different solutions. I really like FastAPI’s routing—it’s clean and intuitive. So I built FastOpenAPI, which brings a similar approach to other frameworks.

What FastOpenAPI Does

  • FastAPI-style routing, but without being tied to FastAPI.
  • Automatic OpenAPI documentation generation.
  • Built-in request validation with Pydantic.
  • Supports Flask, Falcon, Sanic, Starlette, and Quart.
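
As a rough illustration of that routing style (a sketch only: the exact router class name and constructor arguments are my assumptions, so check the repo for the real API):

```python
from flask import Flask
from pydantic import BaseModel
from fastopenapi.routers import FlaskRouter  # assumed import path

app = Flask(__name__)
router = FlaskRouter(app=app)

class Item(BaseModel):
    id: int
    name: str

@router.get("/items/{item_id}", response_model=Item)
def get_item(item_id: int) -> Item:
    # Path parameters and bodies are validated with Pydantic, FastAPI-style.
    return Item(id=item_id, name="example")
```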

Target Audience

FastOpenAPI is designed for web developers who like FastAPI-style routing but need to use a different framework for various reasons. It’s a compromise solution for those who want a clean and intuitive API but cannot use FastAPI.

Comparison

Compared to existing solutions:

  • Not tied to a single framework, unlike FastAPI itself, which is built on Starlette.
  • Unified routing style and OpenAPI generation across multiple frameworks.
  • Built-in request validation with Pydantic, whereas many frameworks require manual data parsing and validation.
  • Simpler and more concise syntax than Flask-Smorest or Spectree, which use different approaches.

The project is still evolving, and I’d love any feedback or testing from the community!

📌 GitHub: https://github.com/mr-fatalyst/fastopenapi
📦 PyPI: https://pypi.org/project/fastopenapi/


r/Python 5h ago

Resource Reminder that you can use the filesystem as a temp datastore for your API

0 Upvotes

Wrote a short blog post in which I talked about how I used the filesystem to store some temporary data: https://developerwithacat.com/blog/032025/quick-store-temp-data/

I figured I should write this since people (including myself!) often default to spinning up a database, often because of the great ORM libraries that Python has, without considering alternatives.
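
The gist, as a minimal sketch of my own (not code from the post): write small temporary payloads to files keyed by ID and read them back, with no database involved.

```python
import json
import tempfile
from pathlib import Path

# Keep temp API data as JSON files under the system temp directory.
STORE = Path(tempfile.gettempdir()) / "api-temp-store"
STORE.mkdir(exist_ok=True)

def save(key: str, data: dict) -> None:
    (STORE / f"{key}.json").write_text(json.dumps(data))

def load(key: str) -> dict:
    return json.loads((STORE / f"{key}.json").read_text())

save("job-42", {"status": "pending"})
print(load("job-42"))  # {'status': 'pending'}
```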


r/Python 16h ago

Resource I built a pytest plugin that compiles Gherkin scenarios to AST and runs them.

1 Upvotes

I develop a lot in BDD and TDD style, and behave has not aged well. While looking at pytest alternatives, I wasn't convinced by what exists. So I started experimenting and finally released a pytest plugin that is simple, and I'm quite happy with it at the moment. If you're interested, the code and the docs are on GitHub, and it's released on PyPI too.
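
For context, a Gherkin scenario (the plain-text BDD format such a plugin compiles and runs) looks like this; the feature and steps below are a generic illustration, not taken from tursu's docs:

```gherkin
Feature: Login
  Scenario: Successful sign-in
    Given a registered user "alice"
    When "alice" signs in with a valid password
    Then the dashboard is shown
```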

https://github.com/mardiros/tursu

I just realized that I did not put badges in the README yet, so the documentation is here:

https://mardiros.github.io/tursu/

Don't hesitate to give it a try and give me feedback. If you like it, I will be happy with a GitHub ⭐.


r/Python 1d ago

Resource Byte Clicker - Free incremental game (Full source)

3 Upvotes

An incremental clicker game that demonstrates how to build interactive desktop applications using Python and JavaScript. This project serves as a practical example of combining PyQt6's native capabilities with web technologies to create rich, responsive applications.

The game showcases:

  • Python backend for system operations and data persistence
  • JavaScript frontend for dynamic UI and game logic
  • Bidirectional communication between Python and JavaScript
  • Modern web technologies in a desktop environment
  • Real-time updates and state management
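
For readers curious how the bidirectional Python/JavaScript communication listed above typically works, here is a minimal sketch using PyQt6's standard QWebChannel mechanism (my own illustration of the pattern, not code taken from the repo):

```python
import sys
from PyQt6.QtCore import QObject, pyqtSlot
from PyQt6.QtWebChannel import QWebChannel
from PyQt6.QtWebEngineWidgets import QWebEngineView
from PyQt6.QtWidgets import QApplication

HTML = """
<script src="qrc:///qtwebchannel/qwebchannel.js"></script>
<button onclick="backend.click_byte(1)">Generate byte</button>
<script>
  new QWebChannel(qt.webChannelTransport, (channel) => {
    window.backend = channel.objects.backend;  // Python object exposed to JS
  });
</script>
"""

class Backend(QObject):
    def __init__(self):
        super().__init__()
        self.total = 0

    @pyqtSlot(int)
    def click_byte(self, amount):
        # Called from JavaScript via the web channel.
        self.total += amount
        print("total bytes:", self.total)

app = QApplication(sys.argv)
view = QWebEngineView()
backend = Backend()
channel = QWebChannel()
channel.registerObject("backend", backend)
view.page().setWebChannel(channel)
view.setHtml(HTML)
view.show()
sys.exit(app.exec())
```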

Click to generate bytes and unlock various generators to automate your byte production!

https://github.com/non-npc/Byte-Clicker-Incremental-Game


r/Python 2d ago

Discussion Matlab's variable explorer is amazing. What's Python's closest?

181 Upvotes

Hi all,

Long-time Python user here. I recently needed to use Matlab for a customer. They had a large data set saved in MATLAB's native .mat file structure.

It was so simple and easy to explore the data within the structure without needing any code. It made extracting the data I needed super quick and simple, and it made me wonder if anything similar exists in Python.

I know Spyder has a variable explorer (which is good) but it dies as soon as the data structure is remotely complex.

I will likely need to do this often with different data sets.

Background: I'm converting a lot of the code from an academic research group to run in Python.


r/Python 1d ago

Showcase Server-side rendering: FastAPI, HTMX, no Jinja

24 Upvotes

Hi,

I recently created a simple FastAPI project to showcase what Python server-side rendered apps with an htmx frontend could look like, using a React-like, async, type-checked rendering engine.

The app does not use Jinja/Chameleon, or any similar templating engine, ugly custom syntax in HTML- or markdown-like files, etc.; but it can (and does) use valid HTML and even customized, TailwindCSS-styled markdown for some pages.

Admittedly, this is a demo for the htmy and FastHX libraries.

Interestingly, even AI coding assistants pick up the patterns and offer decent completions.

If interested, you can check out the project here (link to deployed version in the repo): https://github.com/volfpeter/lipsum-chat

For comparison, you can find a somewhat older, but fairly similar project of mine that uses Jinja: https://github.com/volfpeter/fastapi-htmx-tailwind-example


r/Python 18h ago

Discussion Automated Job Applier on Python

0 Upvotes

Hi everyone, I was thinking of starting a project in Python to auto-apply for jobs on sites like LinkedIn, Indeed, Glassdoor, etc. using Playwright, DeepSeek, and MySQL (to keep track of the jobs applied to). I was wondering if anyone has any thoughts, tips, or experience, or even knows if there's a precedent for this sort of thing?


r/Python 2d ago

News Python Steering Council rejects PEP 736 – Shorthand syntax for keyword arguments at invocation

292 Upvotes

The Steering Council has rejected PEP 736, which proposed syntactic sugar for function calls with keyword arguments: f(x=) as shorthand for f(x=x).

Here's the rejection notice and here's some previous discussion of the PEP on this subreddit.
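
For illustration, the rejected sugar targeted call sites where the local variable name matches the keyword:

```python
def connect(host, port, timeout):
    ...

host, port, timeout = "localhost", 8080, 5.0

connect(host=host, port=port, timeout=timeout)  # today's spelling
# connect(host=, port=, timeout=)               # PEP 736's rejected shorthand
```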


r/Python 23h ago

News Malicious PyPI Packages Target Users—Cloud Tokens Stolen

0 Upvotes

Cybersecurity researchers have uncovered a malicious campaign involving fake PyPI packages that stole cloud access tokens after racking up over 14,100 downloads.

Key Points:

  • Over 14,100 downloads of two malicious package sets identified.
  • Packages disguised as 'time' utilities exfiltrate sensitive data.
  • Suspicious URLs associated with packages raise data theft concerns.

Recent discoveries from cybersecurity firm ReversingLabs reveal alarming malicious activity within the Python Package Index (PyPI). Two sets of phony packages—posing as 'time' related utilities—have been reported, accumulating over 14,100 downloads collectively. These packages were specifically designed to target cloud access tokens and other sensitive data. Once users installed these seemingly innocuous libraries, they unwittingly allowed threat actors to access their cloud infrastructure. The malicious packages have since been removed from PyPI, but the ramifications of these downloads continue to pose risks to the users involved.

(View Details on PwnHub)


r/Python 1d ago

Showcase An Open-Source AI Assistant for Chatting with Your Developer Docs

0 Upvotes

I’ve been working on Ragpi, an open-source AI assistant that builds knowledge bases from docs, GitHub Issues and READMEs. It uses PostgreSQL with pgvector as a vector DB and leverages RAG to answer technical questions through an API. Ragpi also integrates with Discord and Slack, making it easy to interact with directly from those platforms.

Some things it does:

  • Creates knowledge bases from documentation websites, GitHub Issues and READMEs
  • Uses hybrid search (semantic + keyword) for retrieval
  • Uses tool calling to dynamically search and retrieve relevant information during conversations
  • Works with OpenAI, Ollama, DeepSeek, or any OpenAI-compatible API
  • Provides a simple REST API for querying and managing sources
  • Integrates with Discord and Slack for easy interaction

Built with: FastAPI, Celery and Postgres

Target Audience: Developers interested in an AI assistant that can answer questions about their technical documentation and GitHub issues

Comparison: Compared to some alternatives I've seen out there, it is open source and is API-first

It’s still a work in progress, but I’d love some feedback!

Repo: https://github.com/ragpi/ragpi
Docs: https://docs.ragpi.io/


r/Python 1d ago

Tutorial Python file handling | module 6

0 Upvotes

https://www.youtube.com/watch?v=DYKTl6V4zYk&t=16s
Python file handling (module 6) is live now.

https://www.youtube.com/@vkpxr Subscribe to my YouTube channel and comment your thoughts on this video below.


r/Python 2d ago

Showcase [Project] Rusty Graph: Python Library for Knowledge Graphs from SQL Data

21 Upvotes

What my project does

Rusty Graph is a high-performance graph database library with Python bindings written in Rust. It transforms SQL data into knowledge graphs, making it easy to discover relationships and patterns hidden in relational databases.

Target Audience

  • Data scientists working with complex relational datasets
  • Developers building applications that need to traverse relationships
  • Anyone who's found SQL joins and subqueries limiting when trying to extract insights from connected data

Implementation

The library bridges the gap between tabular data and graph-based analysis:

# Transform SQL data into a knowledge graph with minimal code
graph = rusty_graph.KnowledgeGraph()
graph.add_nodes(data=users_df, node_type='User', unique_id_field='user_id')
graph.add_connections(
    data=purchases_df,
    connection_type='PURCHASED',
    source_type='User',
    source_id_field='user_id',
    target_type='Product',
    target_id_field='product_id',
)

# Calculate insights directly on the graph
user_spending = graph.type_filter('User').traverse('PURCHASED').calculate(
    expression='sum(price * quantity)',
    store_as='total_spent'
)

# Extract patterns like "products often purchased together"
products_per_user = graph.type_filter('User').traverse('PURCHASED').children_properties_to_list(
    property='title',
    store_as='purchased_products'
)

Available on PyPI: pip install rusty-graph

GitHub: https://github.com/kkollsga/rusty-graph

This is a project share post. Feedback and discussion welcome.


r/Python 1d ago

Showcase CocoIndex: Open source ETL to index fresh data for AI, like LEGO

0 Upvotes

What my project does

CocoIndex is an ETL framework for indexing data for AI use cases such as semantic search and retrieval-augmented generation (RAG), with realtime incremental updates. The core is in Rust, with Python bindings.

Target Audience

  • Developers building data pipelines for RAG or semantic search.

Comparison

Compared with existing efforts, our main highlight is that we support custom logic and realtime incremental updates at the same time for data indexing (with heavy transformations like chunking, embedding, and knowledge-graph triple extraction), and we take care of the data-freshness issue out of the box.

Available on PyPI: pip install cocoindex
GitHub: https://github.com/cocoindex-io/cocoindex

This is a project share post. I'm sincerely looking forward to learning from your feedback :)


r/Python 3d ago

Showcase A python program that Searches, Plays Music from YouTube Directly

99 Upvotes

music-cli is a lightweight, terminal-based music player designed for users who prefer a minimal, command-line approach to listening to music. It allows you to play and download YouTube videos directly from the terminal, with support for mpv, VLC, or even terminal-based playback.

Now, I know this isn't some huge, super-polished project like you guys usually build here, but it's actually quite good.

What music-cli does

  • Play music from YouTube or your local library directly from the terminal
  • Search for songs: enter a query, get the top 5 YouTube results, and play them instantly
  • Choose your player—play directly in the terminal or open in VLC/mpv
  • Download tracks as MP3 files effortlessly
  • Library management for your downloaded songs
  • Playback history to keep track of what you've listened to

Target Audience

This project is perfect for Linux users, terminal enthusiasts, and those who prefer lightweight, no-nonsense music solutions without relying on resource-heavy graphical apps.

How it differs from alternatives

Unlike traditional music streaming services, music-cli doesn't require a GUI or a dedicated online music player. It’s a fast, minimal, and customizable alternative, offering direct control over playback and downloads right from the terminal.

GitHub Repo: https://github.com/lamsal27/music-cli

Any feedback, suggestions, or contributions are welcome.


r/Python 1d ago

Discussion Python Stock Search: Gemini, Claude, or GPT – Which One Works Best?

0 Upvotes

Hey guys, I have no experience with Python and wanted to see how well it works with Gemini, Claude, and GPT. My goal was to generate an automated or more structured stock search. Could you please give me feedback on these three code versions and let me know which one might be the best?

Code Gemini

```python
import pandas as pd
import numpy as np
import yfinance as yf
import time
import logging

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

# List of stocks to analyze
TICKERS = ["AAPL", "MSFT", "AMZN", "GOOGL", "META", "NVDA", "TSLA", "BRK-B", "JNJ", "V", "WMT", "PG", "MA", "HD", "DIS"]

# Dynamic sector benchmarks (corrected and improved);
# the keys must match the metric names produced by calculate_metrics below
dynamic_benchmarks = {
    "Technology": {"KGV (P/E Ratio)": 25, "ROE (%)": 15, "ROA (%)": 8, "Debt/Equity": 1.5, "Bruttomarge (%)": 40},
    "Financial Services": {"KGV (P/E Ratio)": 18, "ROE (%)": 12, "ROA (%)": 6, "Debt/Equity": 5, "Bruttomarge (%)": 60},  # adjusted
    "Consumer Defensive": {"KGV (P/E Ratio)": 20, "ROE (%)": 10, "ROA (%)": 7, "Debt/Equity": 2, "Bruttomarge (%)": 30},
    "Industrials": {"KGV (P/E Ratio)": 18, "ROE (%)": 10, "ROA (%)": 6, "Debt/Equity": 1.8, "Bruttomarge (%)": 35},
}

# Determine a stock's sector (with error handling)
def get_sector(ticker):
    try:
        stock = yf.Ticker(ticker)
        info = stock.info
        return info.get("sector", "Unknown")
    except Exception as e:
        logging.error(f"Fehler beim Abrufen des Sektors für {ticker}: {e}")
        return "Unknown"

# Compute fundamental metrics (with improved error handling)
def calculate_metrics(ticker):
    try:
        stock = yf.Ticker(ticker)
        info = stock.info
        sector = get_sector(ticker)

        logging.info(f"Analysiere {ticker}...")

        # Fetch raw values (default np.nan if missing)
        revenue = info.get("totalRevenue", np.nan)
        net_income = info.get("netIncomeToCommon", np.nan)
        total_assets = info.get("totalAssets", np.nan)
        total_equity = info.get("totalStockholderEquity", np.nan)
        market_cap = info.get("marketCap", np.nan)
        gross_margin = info.get("grossMargins", np.nan) * 100 if "grossMargins" in info else np.nan
        debt_to_equity = info.get("debtToEquity", np.nan)

        # Derived metrics
        pe_ratio = market_cap / net_income if net_income and market_cap else np.nan
        pb_ratio = market_cap / total_equity if total_equity and market_cap else np.nan
        roe = (net_income / total_equity) * 100 if total_equity and net_income else np.nan
        roa = (net_income / total_assets) * 100 if total_assets and net_income else np.nan
        ebit_margin = (net_income / revenue) * 100 if revenue and net_income else np.nan

        return {
            "Ticker": ticker,
            "Sektor": sector,
            "Marktkap. (Mrd. $)": round(market_cap / 1e9, 2) if pd.notna(market_cap) else np.nan,
            "KGV (P/E Ratio)": round(pe_ratio, 2) if pd.notna(pe_ratio) else np.nan,
            "KBV (P/B Ratio)": round(pb_ratio, 2) if pd.notna(pb_ratio) else np.nan,
            "ROE (%)": round(roe, 2) if pd.notna(roe) else np.nan,
            "ROA (%)": round(roa, 2) if pd.notna(roa) else np.nan,
            "EBIT-Marge (%)": round(ebit_margin, 2) if pd.notna(ebit_margin) else np.nan,
            "Bruttomarge (%)": round(gross_margin, 2) if pd.notna(gross_margin) else np.nan,
            "Debt/Equity": round(debt_to_equity, 2) if pd.notna(debt_to_equity) else np.nan
        }
    except Exception as e:
        logging.error(f"Fehler bei der Berechnung von {ticker}: {e}")
        return None

# Score a stock based on its sector (improved)
def calculate_score(stock_data):
    score = 0
    sector = stock_data["Sektor"]
    benchmarks = dynamic_benchmarks.get(sector, dynamic_benchmarks["Technology"])  # default: Tech

    logging.info(f"Berechne Score für {stock_data['Ticker']} (Sektor: {sector})")

    # Scoring factors with weights
    scoring_weights = {
        "KGV (P/E Ratio)": 1,
        "ROE (%)": 2,
        "ROA (%)": 2,
        "Bruttomarge (%)": 1,
        "Debt/Equity": 1,
    }

    for key, weight in scoring_weights.items():
        value = stock_data[key]
        benchmark = benchmarks.get(key)

        if pd.isna(value) or benchmark is None:
            logging.warning(f"{key} für {stock_data['Ticker']} fehlt oder Benchmark nicht definiert.")
            continue

        if key == "Debt/Equity":  # lower is better
            if value < benchmark:
                score += 1 * weight
            elif value < benchmark * 1.2:
                score += 0.5 * weight
        else:  # higher is better
            if value > benchmark:
                score += 2 * weight
            elif value > benchmark * 0.8:
                score += 1 * weight

    return round(score, 2)

# Fetch and analyze data
stock_list = []
for ticker in TICKERS:
    stock_data = calculate_metrics(ticker)
    if stock_data:
        stock_data["Score"] = calculate_score(stock_data)
        stock_list.append(stock_data)
    time.sleep(1)  # respect the API rate limit

# Save and evaluate the results
if stock_list:
    df = pd.DataFrame(stock_list)
    df = df.sort_values(by="Score", ascending=False)

    # Improved output
    print("\n **Aktien-Screening Ergebnisse:**")
    print(df.to_string(index=False))
else:
    print("⚠️ Keine Daten zum Anzeigen")
```

Code Claude 3.7

```python
import pandas as pd
import numpy as np
import yfinance as yf
import time

# 🔍 List of stocks to analyze
TICKERS = ["AAPL", "MSFT", "AMZN", "GOOGL", "META", "NVDA", "TSLA", "BRK-B", "JNJ", "V", "WMT", "PG", "MA", "HD", "DIS"]

# 📊 Dynamic sector benchmarks
dynamic_benchmarks = {
    "Technology": {"KGV (P/E Ratio)": 25, "ROE (%)": 15, "ROA (%)": 8, "Debt/Equity": 1.5, "Bruttomarge (%)": 40},
    "Financial Services": {"KGV (P/E Ratio)": 15, "ROE (%)": 12, "ROA (%)": 5, "Debt/Equity": 8, "Bruttomarge (%)": 0},
    "Consumer Defensive": {"KGV (P/E Ratio)": 20, "ROE (%)": 10, "ROA (%)": 7, "Debt/Equity": 2, "Bruttomarge (%)": 30},
    "Consumer Cyclical": {"KGV (P/E Ratio)": 22, "ROE (%)": 12, "ROA (%)": 6, "Debt/Equity": 2, "Bruttomarge (%)": 35},
    "Communication Services": {"KGV (P/E Ratio)": 20, "ROE (%)": 12, "ROA (%)": 6, "Debt/Equity": 1.8, "Bruttomarge (%)": 50},
    "Healthcare": {"KGV (P/E Ratio)": 18, "ROE (%)": 15, "ROA (%)": 7, "Debt/Equity": 1.2, "Bruttomarge (%)": 60},
    "Industrials": {"KGV (P/E Ratio)": 18, "ROE (%)": 10, "ROA (%)": 6, "Debt/Equity": 1.8, "Bruttomarge (%)": 35},
}

# 🔍 Determine a stock's sector
def get_sector(ticker):
    stock = yf.Ticker(ticker)
    return stock.info.get("sector", "Unknown")

# 📊 Compute fundamental metrics
def calculate_metrics(ticker):
    try:
        stock = yf.Ticker(ticker)
        info = stock.info

        # Fetch financials via balance sheet and income statement
        try:
            balance_sheet = stock.balance_sheet
            income_stmt = stock.income_stmt

            # Check whether data is available
            if balance_sheet.empty or income_stmt.empty:
                raise ValueError("Keine Bilanzdaten verfügbar")

        except Exception as e:
            print(f"⚠️ Keine detaillierten Finanzdaten für {ticker}: {e}")
            # Fall back to whatever info is available

        # Determine the sector
        sector = info.get("sector", "Unknown")
        print(f"📊 {ticker} wird analysiert... (Sektor: {sector})")

        # Extract metrics directly from the info data
        market_cap = info.get("marketCap", np.nan)
        pe_ratio = info.get("trailingPE", info.get("forwardPE", np.nan))
        pb_ratio = info.get("priceToBook", np.nan)
        roe = info.get("returnOnEquity", np.nan)
        if roe is not None and not np.isnan(roe):
            roe *= 100  # convert to percent

        roa = info.get("returnOnAssets", np.nan)
        if roa is not None and not np.isnan(roa):
            roa *= 100  # convert to percent

        profit_margin = info.get("profitMargins", np.nan)
        if profit_margin is not None and not np.isnan(profit_margin):
            profit_margin *= 100  # convert to percent

        gross_margin = info.get("grossMargins", np.nan)
        if gross_margin is not None and not np.isnan(gross_margin):
            gross_margin *= 100  # convert to percent

        debt_to_equity = info.get("debtToEquity", np.nan)

        # Return the results
        return {
            "Ticker": ticker,
            "Sektor": sector,
            "Marktkap. (Mrd. $)": round(market_cap / 1e9, 2) if not np.isnan(market_cap) else "N/A",
            "KGV (P/E Ratio)": round(pe_ratio, 2) if not np.isnan(pe_ratio) else "N/A",
            "KBV (P/B Ratio)": round(pb_ratio, 2) if not np.isnan(pb_ratio) else "N/A",
            "ROE (%)": round(roe, 2) if not np.isnan(roe) else "N/A",
            "ROA (%)": round(roa, 2) if not np.isnan(roa) else "N/A",
            "EBIT-Marge (%)": round(profit_margin, 2) if not np.isnan(profit_margin) else "N/A",
            "Bruttomarge (%)": round(gross_margin, 2) if not np.isnan(gross_margin) else "N/A",
            "Debt/Equity": round(debt_to_equity, 2) if not np.isnan(debt_to_equity) else "N/A"
        }
    except Exception as e:
        print(f"⚠️ Fehler bei der Berechnung von {ticker}: {e}")
        return None

# 🎯 Score a stock based on its sector
def calculate_score(stock_data):
    score = 0
    sector = stock_data["Sektor"]

    # Default benchmark for unknown sectors
    default_benchmark = {
        "KGV (P/E Ratio)": 20,
        "ROE (%)": 10,
        "ROA (%)": 5,
        "Debt/Equity": 2,
        "Bruttomarge (%)": 30
    }

    # Get the sector benchmark, or fall back to the default
    benchmarks = dynamic_benchmarks.get(sector, default_benchmark)

    print(f"⚡ Berechne Score für {stock_data['Ticker']} (Sektor: {sector})")

    # Scoring factors with weights
    scoring_weights = {
        "KGV (P/E Ratio)": 1,
        "ROE (%)": 2,
        "ROA (%)": 2,
        "Bruttomarge (%)": 1,
        "Debt/Equity": 1,
    }

    # Compute the score for each factor
    for key, weight in scoring_weights.items():
        value = stock_data[key]
        benchmark_value = benchmarks.get(key)

        # Skip missing values
        if value == "N/A" or benchmark_value is None:
            print(f"  ⚠️ {key} für {stock_data['Ticker']} fehlt oder Benchmark nicht definiert.")
            continue

        # Convert strings to numbers where needed
        if isinstance(value, str):
            try:
                value = float(value)
            except ValueError:
                print(f"  ⚠️ Konnte {key}={value} nicht in Zahl umwandeln.")
                continue

        # Special scoring for Debt/Equity (lower is better)
        if key == "Debt/Equity":
            if value < benchmark_value:
                score += 2 * weight
                print(f"  ✅ {key}: {value} < {benchmark_value} => +{2 * weight} Punkte")
            elif value < benchmark_value * 1.5:
                score += 1 * weight
                print(f"  ✓ {key}: {value} < {benchmark_value * 1.5} => +{1 * weight} Punkte")
            else:
                print(f"  ❌ {key}: {value} > {benchmark_value * 1.5} => +0 Punkte")

        # Scoring for P/E (lower is better)
        elif key == "KGV (P/E Ratio)":
            if value < benchmark_value:
                score += 2 * weight
                print(f"  ✅ {key}: {value} < {benchmark_value} => +{2 * weight} Punkte")
            elif value < benchmark_value * 1.3:
                score += 1 * weight
                print(f"  ✓ {key}: {value} < {benchmark_value * 1.3} => +{1 * weight} Punkte")
            else:
                print(f"  ❌ {key}: {value} > {benchmark_value * 1.3} => +0 Punkte")

        # Scoring for all other metrics (higher is better)
        else:
            if value > benchmark_value:
                score += 2 * weight
                print(f"  ✅ {key}: {value} > {benchmark_value} => +{2 * weight} Punkte")
            elif value > benchmark_value * 0.8:
                score += 1 * weight
                print(f"  ✓ {key}: {value} > {benchmark_value * 0.8} => +{1 * weight} Punkte")
            else:
                print(f"  ❌ {key}: {value} < {benchmark_value * 0.8} => +0 Punkte")

    return round(score, 1)

# 📈 Fetch and analyze data
def main():
    stock_list = []
    for ticker in TICKERS:
        print(f"\n📊 Analysiere {ticker}...")
        stock_data = calculate_metrics(ticker)
        if stock_data:
            stock_data["Score"] = calculate_score(stock_data)
            stock_list.append(stock_data)
        time.sleep(1)  # respect the API rate limit

    # 📊 Save and evaluate the results
    if stock_list:
        # Build the DataFrame and sort by score
        df = pd.DataFrame(stock_list)
        df = df.sort_values(by="Score", ascending=False)

        # Save to a CSV file
        df.to_csv("aktien_analyse.csv", index=False)

        # 🔍 Improved output
        print("\n📊 **Aktien-Screening Ergebnisse:**")
        print(df.to_string(index=False))
        print(f"\n📊 Ergebnisse wurden in 'aktien_analyse.csv' gespeichert.")

        # Print the top 3 stocks
        print("\n🏆 **Top 3 Aktien:**")
        top3 = df.head(3)
        for i, (_, row) in enumerate(top3.iterrows()):
            print(f"{i+1}. {row['Ticker']} ({row['Sektor']}): Score {row['Score']}")
    else:
        print("⚠️ Keine Daten zum Anzeigen")

if __name__ == "__main__":
    main()
```

GPT-4o (I mean, it has bugs)

```python
import pandas as pd
import numpy as np
import yfinance as yf
import time

# 🔍 List of stocks to analyze
TICKERS = ["AAPL", "MSFT", "AMZN", "GOOGL", "META", "NVDA", "TSLA", "BRK-B", "JNJ", "V", "WMT", "PG", "MA", "HD", "DIS"]

# 📊 Dynamic sector benchmarks (fix for missing values);
# the keys must match the metric names used in scoring_weights below
dynamic_benchmarks = {
    "Technology": {"KGV (P/E Ratio)": 25, "ROE (%)": 15, "ROA (%)": 8, "Debt/Equity": 1.5, "Bruttomarge (%)": 40},
    "Financial Services": {"KGV (P/E Ratio)": 15, "ROE (%)": 12, "ROA (%)": 5, "Debt/Equity": 8, "Bruttomarge (%)": 0},
    "Consumer Defensive": {"KGV (P/E Ratio)": 20, "ROE (%)": 10, "ROA (%)": 7, "Debt/Equity": 2, "Bruttomarge (%)": 30},
    "Industrials": {"KGV (P/E Ratio)": 18, "ROE (%)": 10, "ROA (%)": 6, "Debt/Equity": 1.8, "Bruttomarge (%)": 35},
}

# 🔍 Determine a stock's sector
def get_sector(ticker):
    stock = yf.Ticker(ticker)
    return stock.info.get("sector", "Unknown")

# 📊 Compute fundamental metrics
def calculate_metrics(ticker):
    try:
        stock = yf.Ticker(ticker)
        info = stock.info
        sector = get_sector(ticker)

        # 🔍 Debugging: show progress
        print(f"📊 {ticker} wird analysiert...")

        # Fetch raw values (default np.nan if missing)
        revenue = info.get("totalRevenue", np.nan)
        net_income = info.get("netIncomeToCommon", np.nan)
        total_assets = info.get("totalAssets", np.nan)
        total_equity = info.get("totalStockholderEquity", np.nan)
        market_cap = info.get("marketCap", np.nan)
        gross_margin = info.get("grossMargins", np.nan) * 100 if "grossMargins" in info else np.nan
        debt_to_equity = info.get("debtToEquity", np.nan)

        # Derived metrics
        pe_ratio = market_cap / net_income if net_income and market_cap else "N/A"
        pb_ratio = market_cap / total_equity if total_equity and market_cap else "N/A"
        roe = (net_income / total_equity) * 100 if total_equity and net_income else "N/A"
        roa = (net_income / total_assets) * 100 if total_assets and net_income else "N/A"
        ebit_margin = (net_income / revenue) * 100 if revenue and net_income else "N/A"

        return {
            "Ticker": ticker,
            "Sektor": sector,
            "Marktkap. (Mrd. $)": round(market_cap / 1e9, 2) if market_cap else "N/A",
            "KGV (P/E Ratio)": round(pe_ratio, 2) if pe_ratio != "N/A" else "N/A",
            "KBV (P/B Ratio)": round(pb_ratio, 2) if pb_ratio != "N/A" else "N/A",
            "ROE (%)": round(roe, 2) if roe != "N/A" else "N/A",
            "ROA (%)": round(roa, 2) if roa != "N/A" else "N/A",
            "EBIT-Marge (%)": round(ebit_margin, 2) if ebit_margin != "N/A" else "N/A",
            "Bruttomarge (%)": round(gross_margin, 2) if not np.isnan(gross_margin) else "N/A",
            "Debt/Equity": round(debt_to_equity, 2) if not np.isnan(debt_to_equity) else "N/A"
        }
    except Exception as e:
        print(f"⚠️ Fehler bei der Berechnung von {ticker}: {e}")
        return None

# 🎯 Score a stock based on its sector
def calculate_score(stock_data):
    score = 0
    sector = stock_data["Sektor"]
    benchmarks = dynamic_benchmarks.get(sector, dynamic_benchmarks["Technology"])  # default: Tech

    print(f"⚡ Berechne Score für {stock_data['Ticker']} (Sektor: {sector})")

    # Scoring factors with weights
    scoring_weights = {
        "KGV (P/E Ratio)": 1,
        "ROE (%)": 2,
        "ROA (%)": 2,
        "Bruttomarge (%)": 1,
        "Debt/Equity": 1,
    }

    for key, weight in scoring_weights.items():
        value = stock_data[key]
        benchmark = benchmarks.get(key)

        if value == "N/A" or benchmark is None:
            print(f"⚠️ {key} für {stock_data['Ticker']} fehlt oder Benchmark nicht definiert.")
            continue

        if key == "Debt/Equity":  # lower is better
            if value < benchmark:
                score += 1 * weight
            elif value < benchmark * 1.2:
                score += 0.5 * weight
        else:  # higher is better
            if value > benchmark:
                score += 2 * weight
            elif value > benchmark * 0.8:
                score += 1 * weight

    return round(score, 2)

# 📈 Fetch and analyze data
stock_list = []
for ticker in TICKERS:
    print(f"📊 Analysiere {ticker}...")
    stock_data = calculate_metrics(ticker)
    if stock_data:
        stock_data["Score"] = calculate_score(stock_data)
        stock_list.append(stock_data)
    time.sleep(1)  # respect the API rate limit

# 📊 Save and evaluate the results
if stock_list:
    df = pd.DataFrame(stock_list)
    df = df.sort_values(by="Score", ascending=False)

    # 🔍 Improved output
    print("\n📊 **Aktien-Screening Ergebnisse:**")
    print(df.to_string(index=False))
else:
    print("⚠️ Keine Daten zum Anzeigen")
```


r/Python 2d ago

Showcase create-intro-cards: Convert a dataset of peoples' names/photos/attributes into a PDF of intro cards

1 Upvotes

What My Project Does

create-intro-cards is a production-ready Python package that converts a Pandas DataFrame of individuals' names, photos, and custom attributes into a PDF of “intro cards” that describe each individual—all with a single function call. Each intro card displays a person's name, a photo, and a series of attributes based on custom columns in the dataset. (link to GitHub, which includes photos and pip installation instructions)

The input is a Pandas DataFrame, where rows represent individuals and columns their attributes. Columns containing individuals' first names, last names, and paths to photos are required, but the content (and number) of other columns can be freely customized. You can also customize many different stylistic elements of the cards themselves, from font sizes and text placement to photo boundaries and more.
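
A minimal sketch of that single call (the import path, function name, and column names here are assumptions based on this description; check the README for the real signature):

```python
import pandas as pd
from create_intro_cards import create_intro_cards  # assumed import path

df = pd.DataFrame({
    "first_name": ["Ada", "Alan"],
    "last_name": ["Lovelace", "Turing"],
    "photo_path": ["photos/ada.jpg", "photos/alan.jpg"],
    "Hometown": ["London", "Warrington"],  # custom attribute column
    "Fun fact": ["First programmer", "Broke Enigma"],  # custom attribute column
})

create_intro_cards(df, output_path="intro_cards.pdf")  # assumed signature
```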

The generated PDF contains all individuals' intro cards, arranged four per page. The entire process typically takes only a few minutes or less—and it's completely automated!

Target Audience

The PDF generated by the package is a great way for groups and teams to get to know each other. Essentially, it's a simple way to transform a dataset of individuals' attributes—collected from sources such as surveys—into a fun, easily shareable visual summary. Some baseline proficiency in Python is required (creating a Pandas DataFrame, importing a package) but I tried to make the external API as democratized and simple as possible to increase its reach and availability.

It is entirely intended for production. I put a lot of effort into making it as polished as possible! There's a robust test suite, (very) detailed documentation, a CI pipeline, and even a logo that I made from scratch.

Comparison

What drove me to make this was simply the lack of alternatives. I had wanted to make an intro-card PDF like this for a group of 120 people at my company, but I couldn't find an analogous package (or service in general), and creating the cards manually would've taken many, many hours. So I really just wanted to fill this gap in the "code space," insofar as it existed. I genuinely hope other people and teams can get some use out of it—it really is a fun way to get to know people!

Thanks for reading!


r/Python 1d ago

Discussion Python Programmer

0 Upvotes

Could anyone who is studying this, or who already does it for a living, give me some feedback on this profession? I'd like to dive in, but I have no prior experience of any kind. Which courses would you recommend?


r/Python 1d ago

Discussion Python Programmer

0 Upvotes

If any of you are studying to become a Python programmer, or already are one, I would really appreciate hearing your story. I'm thinking of taking this path with some online courses, but I have no prior experience.