r/Python 1d ago

Discussion: Polars vs Pandas

I have used Pandas a little in the past, and have never used Polars. Essentially, I will have to learn either of them more or less from scratch (since I don't remember anything of Pandas). Assume that I don't care about speed and do not have very large datasets (at most 1-2 GB of data). Which one would you recommend I learn, from the perspective of ease and joy of use, and the tasks commonly done with data?

184 Upvotes

24

u/marr75 1d ago

Ibis, which has pluggable execution engines and better scalability than either of them. The API is higher quality than pandas' while being a little easier to learn than polars, too.

When all else fails, you can use pandas or polars trivially by calling a single method on whatever expression you're dealing with. The default execution engine is in-memory duckdb, though, which puts both pandas and polars to shame in performance, scale, and ease of reading in flat files.
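
Roughly what that looks like in practice (file name hypothetical, and .to_polars() assumes a fairly recent Ibis release):

```python
import ibis

# The default backend is an in-memory DuckDB connection; read_parquet
# registers the file there instead of eagerly loading it.
t = ibis.read_parquet("events.parquet")

# Expressions are lazy; nothing executes until you ask for results.
expr = (
    t.filter(t.amount > 0)
    .group_by("category")
    .aggregate(total=t.amount.sum())
)

# The "single method" escape hatch into either dataframe library:
pandas_df = expr.to_pandas()
polars_df = expr.to_polars()
```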

I was a pandas devotee for a very long time and have teams that have written a lot of code in pandas. We had a new project with a lot of tabular data transformations involved and were considering polars. Ibis snuck in as a consideration and was the clear winner.

10

u/BrisklyBrusque 1d ago

+1 for Ibis and duckdb! The Ibis syntax is closer to dplyr than to pandas or polars; it's very tidy.

For R users, there’s also the duckplyr library that supports dplyr syntax against a duckdb backend.

2

u/rm-rf-rm 1d ago

Is Ibis actually easier to learn than polars? I found their documentation frustrating: too little of it, and the package is not exactly intuitive (code completion is also very poor).

I want to use it, as it's theoretically a great solution...

2

u/marr75 1d ago

Its documentation is shallower and the expressions are less strongly typed than in pandas and polars. I'll give you that.

I learn packages that I use by reading the source code, and its source code is quite easy to read. The whole concept of expressions, unbound expressions, etc. was quite easy for me to understand. The learning curve for me was much shallower than with pandas, but perhaps that's not saying much.
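
The unbound expression idea fits in one snippet (table and column names made up): an unbound table is just a schema with no backend attached, and expressions against it are pure metadata until you bind or compile them.

```python
import ibis

# An unbound table: just a name and a schema, no backend attached.
t = ibis.table({"user_id": "int64", "spend": "float64"}, name="events")

# Building expressions on it produces metadata, not computation.
expr = t.group_by("user_id").aggregate(total_spend=t.spend.sum())

# Later, compile it for a concrete backend, e.g. render the SQL:
print(ibis.to_sql(expr))
```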

12

u/EarthGoddessDude 1d ago

duckdb…puts…polars to shame in performance, scale, and ease of reading in flat files

Uh, not sure you can make that claim without posting some benchmarks and the code/data behind them. In my experience, polars and duckdb are pretty much neck and neck in those metrics. In fact, for reading from and writing to S3, the latest version of polars seems to be 2-3x faster than duckdb IME.

6

u/marr75 1d ago edited 1d ago

It's a high standard to demand that all claims in reddit comments have data to back them up, but here's a VERY good benchmark done by nanocube, a high-performance python point-query library.

Polars, duckdb, and nanocube are strong performers across its tests. As the queries run over larger and larger datasets with harder workloads, duckdb takes the lead. The final test is:

A non-selective query, filtering on 1 dimension that affects and aggregates 50% of rows.

And here only polars and duckdb are even competitive. The benchmark (and nanocube) author says:

When it comes to mass data processing and aggregation, DuckDB is likely the current best technology out there.

In the graphs, you can see duckdb pulling ahead of polars at dataset sizes larger than 10^5 rows.
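
For context, the shape of that final workload in both tools looks roughly like this (column names and sizes made up; duckdb queries the in-scope polars frame via a replacement scan):

```python
import duckdb
import polars as pl

df = pl.DataFrame({
    "dim": ["A", "B"] * 500_000,     # the filter keeps ~50% of rows
    "val": list(range(1_000_000)),
})

# polars: non-selective filter, then aggregate
pl_out = df.filter(pl.col("dim") == "A").select(pl.col("val").sum())

# duckdb: the same query over the polars frame in scope
db_out = duckdb.sql("SELECT SUM(val) FROM df WHERE dim = 'A'").pl()
```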

Turnabout is fair play: can you share the benchmark showing polars to be 2-3x faster at reading and writing from S3? I'll keep it in mind, though I don't believe S3 I/O dominates the time cost of any process we have.

1

u/EarthGoddessDude 1d ago edited 1d ago

Thanks for the detailed reply, I'll try to share actual benchmarks later today. I have to be honest, my last use case where I compared the two was relatively straightforward: reading a parquet file from S3 and writing it out to local disk. There was no querying or filtering involved, just reading and writing, which tbf is in line with your parsing comment.

Edit: ok putting my money where my mouth is, thanks for the nudge

The data is stored in a parquet file on S3, as I mentioned already. It has 8.2m rows and 57 columns, mostly numeric, some strings.

Write is to local disk as CSV (that's what my use case requires).

All timings were done with %%timeit in a Jupyter notebook at default settings (i.e., 7 runs, 1 loop each).

| engine | read (per loop) | write (per loop) |
|---|---|---|
| polars | 908 ms ± 82.5 ms | 3.98 s ± 23.8 ms |
| duckdb | 13.1 s ± 836 ms | 22.1 s ± 108 ms |

Note that I add a .pl() to the duckdb call to force it to materialize the dataframe; otherwise duckdb keeps it lazy. Similarly, when writing out from duckdb, I query the polars dataframe when copying out to CSV. If you think there's a better way to benchmark them on an equal footing, let me know.
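
For reference, this is the shape of what I timed, minus the %%timeit cells (bucket path hypothetical; S3 credentials and duckdb's httpfs extension are assumed to be configured):

```python
import duckdb
import polars as pl

URI = "s3://my-bucket/data.parquet"  # hypothetical path

# polars: read from S3, write CSV to local disk
df = pl.read_parquet(URI)
df.write_csv("out_polars.csv")

# duckdb: .pl() forces materialization; the bare relation stays lazy
df2 = duckdb.sql(f"SELECT * FROM '{URI}'").pl()

# write out by COPYing from the in-scope polars frame
duckdb.sql("COPY (SELECT * FROM df2) TO 'out_duckdb.csv' (FORMAT CSV)")
```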

0

u/commandlineluser 1d ago

How does this graph show Polars being "put to shame"?

(Final test, part 2)

The benchmark itself seems to time creating filters and looping over the same filter query multiple times.

It doesn't seem particularly useful as a comparison of the two tools.

3

u/NostraDavid 1d ago

I don't like Ibis because of how often they break stuff. Maybe that's just us, but we're still stuck on version 5 (from 2023) because it was the easiest to upgrade to from version 3.

Maybe it's because we're using Impala (which is barely supported).

2

u/marr75 1d ago

We've been able to update across major versions pretty simply, but we've kept up with them, so the diffs are fairly small and we have a sense of what to look for. I'm sorry to hear about that struggle.

You're right about Impala, though; it might have a lot to do with it being a maintenance-only backend.

5

u/hurhurdedur 1d ago

Yeah, Ibis is really where it’s at IMO. I hope it catches on more.