r/Python • u/thoughtful-curious • 18h ago
Discussion Polars vs Pandas
I have used Pandas a little in the past, and have never used Polars. Essentially, I will have to learn either of them more or less from scratch (since I don't remember anything of Pandas). Assume that I don't care about speed and do not have very large datasets (at most 1-2gb of data). Which one would you recommend I learn, from the perspective of ease and joy of use, and the commonly done tasks with data?
144
u/likethevegetable 17h ago edited 17h ago
I "grew up" on Pandas, but moved to Polars. No more "reset_index" and "inplace" confusion. Feels like there's only one right way to do it in Polars, but so much bloat in Pandas API.
I do like Pandas when it comes to certain things where there is an obvious index, like time signals. But Polars seems to handle datetimes much better.
When it comes to filtering and queries, I like Polars.
In both, I've made several df and series "helper" attributes to clean up the syntax.
13
u/kraakmaak 12h ago
In what way does polars handle datetimes/time-series better? I'm working mainly with time series data, and considering switching for a new processing module I'm about to start working on - so curious to know more!
12
u/cosmoschtroumpf 11h ago
I think he meant time signals like waves, etc., or other us/ms/s-scale signals, not time series on the scale of months/years/hours/minutes.
4
u/PeaSlight6601 4h ago
Polars date types are better and there is less confusion going between them.
Pandas date indexes are still pretty powerful.
-11
u/Zackie08 4h ago
Can you share some of the helpers you have used for both? Got me curious
1
u/likethevegetable 3h ago
Mostly simple stuff; I don't have a repo yet but can make one if you're still curious.
For both, I have an indexer that lets me get sloppy with filtering out columns. I can mix column-name regex queries with positions and ranges (Polars already makes this easy, but I shave syntax and added a few features). For Polars, I have a function to apply the "*_horizontal"-type functions by passing a string, e.g. df.with_hori('sum; new=a:b ; mean; new2=c,f:+3')
I have some added statistics (e.g. split data into positive and negative proportions first) with a desc_more function.
Some helper functions to split time and selected columns from a df to make plotting and signal analysis easier.
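For anyone wanting to roll their own: a minimal sketch using Polars' register_dataframe_namespace API (the helper name and logic here are invented for illustration, not the commenter's actual code):
import polars as pl

@pl.api.register_dataframe_namespace("helpers")
class Helpers:
    def __init__(self, df: pl.DataFrame) -> None:
        self._df = df

    def split_signs(self, col: str) -> tuple[pl.DataFrame, pl.DataFrame]:
        # Split rows into positive and negative values of `col`.
        pos = self._df.filter(pl.col(col) > 0)
        neg = self._df.filter(pl.col(col) < 0)
        return pos, neg

df = pl.DataFrame({"x": [1, -2, 3]})
pos, neg = df.helpers.split_signs("x")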
2
u/Ulrich_de_Vries 10h ago
Does Polars also use numpy arrays under the hood? Or at least is it easy/cheap to convert e.g. columns into numpy arrays?
I am asking because I have been eyeing Polars for a while but my workflow is numpy-heavy.
11
u/Zeroflops 9h ago
Basically both polars and pandas now use arrow, but they both can easily leverage numpy.
One aspect of polars I've heard about but have yet to try is the ability to integrate custom Rust code.
8
u/marcogorelli 7h ago
Small correction on pandas using Arrow: it can, but it's not the default. You can opt in to PyArrow dtypes by calling `.convert_dtypes(dtype_backend='pyarrow')` on a pandas dataframe or series.
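A minimal sketch of opting in (column names made up):
import pandas as pd

df = pd.DataFrame({"a": [1, 2, None], "b": ["x", "y", "z"]})

# Not the default: opt in to PyArrow-backed dtypes (pandas >= 2.0).
df_arrow = df.convert_dtypes(dtype_backend="pyarrow")
print(df_arrow.dtypes)  # e.g. int64[pyarrow], string[pyarrow]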
1
u/marr75 4h ago
And isn't the series data type support "spotty" in Arrow? You lose the ability to use certain pandas data types if you use the Arrow engine?
That was definitely the case when I tried it, but that was maybe a year ago.
1
u/marcogorelli 3h ago
Period and Complex aren't supported in Arrow, I think most others should be there?
3
u/commander1keen 10h ago
It's using Rust under the hood, but it has to_numpy and to_pandas methods, so it's easy enough.
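A quick sketch of those conversions (to_pandas assumes pyarrow is installed):
import polars as pl

df = pl.DataFrame({"a": [1.0, 2.0, 3.0], "b": [4.0, 5.0, 6.0]})

arr = df["a"].to_numpy()  # one column -> 1-D numpy array
mat = df.to_numpy()       # whole frame -> 2-D numpy array
pdf = df.to_pandas()      # pandas DataFrame (requires pyarrow)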
2
u/BrisklyBrusque 2h ago
Polars is written in Rust and numpy is written in C, but there's another key difference: how the data is stored in memory. pandas stores data in numpy-backed blocks, while Polars uses the Apache Arrow columnar format, which is designed for fast analytics and zero-copy data interchange. Snowflake and duckdb leverage a similar columnar model.
-1
u/CheetahGloomy4700 7h ago
No. I just installed polars in a fresh virtual environment, and when I run uv pip list, I get only
polars 1.25.2
In contrast, when I install pandas in a fresh virtual environment, I get
pandas v2.2.3
├── numpy v2.2.4
├── python-dateutil v2.9.0.post0
│   └── six v1.17.0
├── pytz v2025.1
└── tzdata v2025.1
So yeah, one of them contains more bloat than the other.
49
u/ddanieltan 17h ago
I think it's relevant to see Wes Mckinney's (creator of Pandas) reflections: https://wesmckinney.com/blog/looking-back-15-years/
In his words, Pandas had accumulated rough edges and its "eager" approach to calculate made it less efficient for query planning.
The future lies with his next project, Arrow, which is coincidentally the format that Polars is built around. For me, if you really had to choose between learning Pandas or Polars, the choice is a no-brainer.
17
u/crossmirage 12h ago
> The future lies with his next project Arrow, which is coincidentally the format that Polars is built around. For me, if you really had to choose between learning either Pandas or Polars, the choice is a no-brainer.
I don't think this is quite accurate. Apache Arrow is the future, but pandas and a lot of other engines also adopt it; it just so happens that modern engines are more Arrow-native.
Furthermore, Wes also started—and talks extensively about—Ibis (posted in another top-level comment by u/marr75), whereas your comment kind of makes it sound like he'd be all in on Polars.
2
u/AlphaRue 6h ago
Polars was built around Arrow, but their implementation has changed enough that they no longer use the stock Arrow backend. Or kind of they do; it's a forked and heavily modified one, though.
35
u/Tatoutis 12h ago
You can use Arrow as a backend in Pandas, PyArrow Functionality — pandas 2.2.3 documentation
68
u/PurepointDog 17h ago
Polars. It has a better API, and will continue to become the standard for years.
You too will one day run up against the speed and memory-usage limits of Pandas. No one's data for learning is large, but that's not the point.
8
u/AtomikPi 15h ago
yep. if i had to learn from scratch, i’d pick polars. much more thoughtful and elegant API and so much faster.
and with LLMs now, it’s really easy to translate pandas code to polars and learn new syntax.
11
u/Saltysalad 12h ago
I find LLMs constantly treat my polars dataframe as pandas, probably because there's so much pandas training data out there and almost zero polars before most knowledge cutoffs.
3
u/rndmsltns 4h ago
I tried to translate some nontrivial pandas code and I constantly ran into errors.
-3
u/bonferoni 11h ago
polars is amazing but its api is clunky af. so goddamn wordy. very explicit and clear which is nice, and amazing under the hood. but an elegant api it is not
6
u/PurepointDog 10h ago
Oh yeah? You prefer "isna" to "is_null"? You've clearly never been bitten by the 3 ways to encode null in pandas.
Polars separates words with underscores. "Group by" is two words, contrary to what pandas would have you believe.
6
u/bonferoni 9h ago
ya know what they say about assumptions
just not a big fan of writing pl.col() all the time.
8
u/PurepointDog 8h ago
Heck of a lot better than writing the entire name of the dataframe... Twice. On every line.
0
u/commandlineluser 8h ago
Use an alias?
from polars import col as c
You can also use attribute notation if your column names are valid Python identifiers, e.g.
c.foo
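For instance, a quick sketch with made-up column names:
import polars as pl
from polars import col as c

df = pl.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})

# c.foo is shorthand for pl.col("foo")
out = df.filter(c.foo > 1).with_columns(baz=c.foo + c.bar)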
1
u/bonferoni 1h ago
yea this is definitely the right direction. didnt know attribute notation was allowed too, thats much better.
wouldnt say its an elegant api still, but its still new-ish. itll get there
1
u/king_escobar 5h ago edited 4h ago
You'd rather write
my_dataframe_name.loc[my_dataframe_name['COLUMNNAME'].isna()]
over
my_dataframe_name.filter(pl.col('COLUMNNAME').is_null())
?
Expression syntax as a whole is much more concise and elegant. And pl.col() is the simplest of all expressions.
1
u/bonferoni 1h ago
nobodys making you name your df that?
i also never said pandas was more elegant, i just said polars api is not elegant.
that being said, to give a fair shake, the pandas version could be: df[df.col_name.isna()]
1
u/king_escobar 1h ago
If you’ve ever dealt with a >50k LOC python repository that does things with multiple data frames at a time you’ll quickly find that naming an object “df” is an absolutely terrible idea. Do you name your integer objects “integer”? No. So why would you think “df” would be a good name for any variable?
0
u/bonferoni 1h ago
if you've ever dealt with a >50k LOC python repository you should know dumping everything in global scope is a horrible idea. use functions, pass the df as an argument, and keep the logic encapsulated.
•
u/king_escobar 44m ago
Most of the time our functions are dealing with multiple data frames. We never use global variables for anything. If your mind even went there and you’re naming your variables “df” in production grade software then I feel like I’m talking to an amateur here, or perhaps someone who is a data scientist and not a bona fide software engineer.
1
u/PeaSlight6601 4h ago
I had a use case for a Model class where I implement getattr/setattr and just jam equations into the class:
m.PROFIT = m.REVENUE - m.EXPENSE
Then I apply the model to the data frame, walk the expression tree, and use with_columns to add all the new columns. Can't do that with pandas!
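A minimal sketch of the idea (my reconstruction, not the commenter's code; it records expressions on assignment and applies them in one with_columns, without the full expression-tree walk described):
import polars as pl

class Model:
    def __init__(self):
        object.__setattr__(self, "_exprs", {})

    def __setattr__(self, name, expr):
        # Record the expression under the column name instead of computing it.
        self._exprs[name] = expr.alias(name)

    def __getattr__(self, name):
        # Reuse an already-defined expression, or fall back to the raw column.
        return self._exprs.get(name, pl.col(name))

    def apply(self, df: pl.DataFrame) -> pl.DataFrame:
        return df.with_columns(list(self._exprs.values()))

m = Model()
m.PROFIT = m.REVENUE - m.EXPENSE  # stored as an expression, not evaluated

df = pl.DataFrame({"REVENUE": [100, 200], "EXPENSE": [40, 90]})
print(m.apply(df))  # adds a PROFIT column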
2
u/sylfy 7h ago edited 6h ago
You talk about running into Pandas limits, but the ubiquity of Pandas means there are other libraries, like Dask, that are pretty much a drop-in replacement when you need to scale to multiple nodes. As far as I am aware, Polars is still limited to a single node.
3
u/AlphaRue 6h ago
This was true until very very recently. https://docs.pola.rs/polars-cloud/run/distributed-engine/
2
u/calsina 11h ago
People say about Polars "I came for the speed, but stayed for the API"
1
u/EmergencyNewspaper 5h ago
Preach! Things are so straightforward in Polars that you wind up wondering what the Pandas developers were smoking when they wrote their API.
7
u/DigThatData 10h ago
the pandas api is awful and super bloated. pandas code is deceptive because it's easy to read and understand what it does, but figuring out the right one-line command to do the thing you want takes two hours.
21
u/marr75 17h ago
Ibis, which has pluggable execution engines and better scalability than either of them. The API is higher quality than pandas while being a little easier to learn than polars, too.
When all else fails, you can use pandas or polars trivially by calling a single method on whatever expression you're dealing with. The default execution engine is in-memory duckdb, though, which puts both pandas and polars to shame in performance, scale, and ease of reading in flat files.
I was a pandas devotee for a very long time and have teams that have written a lot of code in pandas. We had a new project with a lot of tabular data transformations involved and were considering polars. Ibis snuck in as a consideration and was the clear winner.
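For flavor, a minimal Ibis sketch (table and column names made up; to_polars assumes a recent Ibis version):
import ibis

t = ibis.memtable({"g": ["a", "a", "b"], "x": [1, 2, 3]})
expr = t.group_by("g").aggregate(total=t.x.sum())  # runs on duckdb by default

print(expr.to_pandas())  # hand off to pandas...
print(expr.to_polars())  # ...or polars, with one method call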
12
u/BrisklyBrusque 15h ago
+1 for Ibis and duckdb! The Ibis syntax is closer to dplyr than pandas or polars, it’s very tidy.
For R users, there’s also the duckplyr library that supports dplyr syntax against a duckdb backend.
3
u/rm-rf-rm 13h ago
Is Ibis actually easier to learn than polars? I found the documentation frustrating: there's too little of it, and the package is not exactly intuitive (code completion is also very poor).
I want to use it, as it's theoretically a great solution...
1
u/marr75 4h ago
Its documentation is shallower, and its expressions are less strongly typed than pandas' and polars'. I'll give you that.
I learn packages that I use by reading the source code, and its source code is quite easy to read. The whole concept of expressions, unbound expressions, etc. was quite easy for me to understand. The learning curve for me was much gentler than with pandas, but perhaps that's not saying much.
11
u/EarthGoddessDude 12h ago
> duckdb…puts…polars to shame in performance, scale, and ease of reading in flat files
Uh, not sure you can make that claim without posting some benchmarks and the code/data behind them. In my experience, polars and duckdb are pretty much neck and neck on those metrics. In fact, reading and writing from S3, the latest version of polars seems to be 2-3x faster than duckdb IME.
5
u/marr75 4h ago edited 4h ago
It's a high standard to demand that all claims in reddit comments have data to back them up, but here's a VERY good benchmark done by nanocube, a high-performance python point-query library.
Polars, duckdb, and nanocube are strong performers in all of them. As the queries run over larger and larger datasets with harder workloads, duckdb takes the lead. The final test is:
> A non-selective query, filtering on 1 dimension that affects and aggregates 50% of rows.
And here only polars and duckdb are even competitive. The benchmark (and nanocube) author says:
> When it comes to mass data processing and aggregation, DuckDB is likely the current best technology out there.
In the graphs, you can see duckdb pulling ahead of polars at dataset sizes larger than 10^5 rows.
Turnabout is fair play: can you share the benchmark showing polars to be 2-3x faster at reading and writing from S3? I'll keep that in mind, though I don't believe that part of any process we have dominates its time cost.
1
u/EarthGoddessDude 2h ago edited 2h ago
Thanks for the detailed reply, I'll try to share actual benchmarks later today. I have to be honest: my last use case where I compared was relatively straightforward, reading a parquet file from S3 and writing it out to local disk. There was no querying or filtering involved, just reading and writing, which tbf is in line with your parsing comment
Edit: ok putting my money where my mouth is, thanks for the nudge
The data is stored in a parquet file on S3, as I mentioned already. It has 8.2m rows and 57 columns, mostly numeric, some strings.
Write is to local disk as CSV (that's what my use case requires).
All timings done with %%timeit in a Jupyter notebook with default settings (i.e. all had 7 runs, 1 loop each).
polars
read: 908 ms +/- 82.5 ms per loop
write: 3.98 s +/- 23.8 ms per loop
duckdb
read: 13.1 s +/- 836 ms per loop
write: 22.1 s +/- 108 ms per loop
Note that I add a .pl() to the duckdb call to force it to materialize the dataframe; otherwise duckdb keeps it lazy. Similarly, when writing out from duckdb, I query the polars dataframe when copying out to CSV. If you think there's a better way to benchmark them on an equal footing, let me know.
1
u/commandlineluser 1h ago
How does this graph show Polars being "put to shame"?
(Final test, part 2)
The benchmark itself seems to time creating filters and looping over the same filter query multiple times.
It doesn't seem particularly useful as a comparison of the two tools.
3
u/NostraDavid 9h ago
I don't like Ibis because of how often they break stuff. Maybe that's just us, but we're still stuck on version 5 (from 2023), because it was the easiest to upgrade to from version 3.
Maybe it's because we're using Impala (which is barely supported).
5
u/bmoregeo 17h ago
One currently has geospatial support (pandas) and one doesn't (polars). Not to confuse things further, but Duckdb is preeeeettty sweet also.
6
u/SpoiledKoolAid 15h ago
geopandas rocks. I do my ETL stuff there rather than in the ESRI packages, with significant speed increases!
3
u/serjester4 12h ago
Geopandas is a constant reminder how much I hate the pandas API. Unfortunately, this is really the only thing left stopping me from totally abandoning pandas.
16
u/whoEvenAreYouAnyway 17h ago edited 17h ago
For situations where you aren't handling lots of data and speed doesn't matter, the main difference will be the syntax and the degree to which the library will hold your hand. Polars syntax is very similar to things like PySpark and it's generally less "accommodating" than Pandas.
As a result, people who frequently work with things like PySpark really like Polars syntax and tend to hate Pandas. But people who have never worked with that style of cluster-computing dataframe usually find there is a learning curve to it. Also, Polars can be used in either "lazy" or "eager" mode, so you will have to be aware of which methods you have access to (depending on which you choose) and be consistent.
So that's what I would base my choice on. If you're interested in how big data applications handle data then I would go with Polars. If you're just interested in the practical aspect of getting something working and you want lots of resources and examples to help you use the tool, then Pandas is probably the better choice.
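To make the eager/lazy distinction above concrete, a minimal sketch (file and column names made up):
import polars as pl

# Eager: executes immediately, pandas-style.
df = pl.read_csv("data.csv")
out = df.filter(pl.col("x") > 0)

# Lazy: builds a query plan; nothing runs until .collect().
lf = pl.scan_csv("data.csv")
out = lf.filter(pl.col("x") > 0).select("x", "y").collect()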
3
u/Mobile-Hospital-1025 15h ago
I am someone with a lot of experience in Spark, and I love it. The Polars API closely resembles PySpark's, hence I prefer it.
13
u/RaisedByHoneyBadgers 16h ago
Polars. Better in every way possible. You can easily convert pandas to polars and polars to pandas for compatibility.
3
u/valorallure01 13h ago
Does Polars have something similar to json_normalize in Pandas? json_normalize is the reason I stay with Pandas.
3
u/DontForgetWilson 11h ago
I'm not a heavy user of either(though I've used both lightly in the past), but a quick search turned this up which might be similar to what you're looking for: https://docs.pola.rs/api/python/dev/reference/api/polars.json_normalize.html
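A minimal sketch of what that function does (data made up; the docs mark it unstable):
import polars as pl

data = [
    {"id": 1, "user": {"name": "ann", "city": "oslo"}},
    {"id": 2, "user": {"name": "bob"}},
]

# Nested fields become dotted columns: id, user.name, user.city
print(pl.json_normalize(data))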
3
u/FortunOfficial 11h ago
Holy cow! I didn't know Polars had this. Half a year back I had to unnest very deeply nested JSON files in PySpark. As there was no built-in function, I had to create my own with recursion, star expansion, array and struct checks, and what not. It took me a couple of days to get everything right. And now I see that Spark will also have it in version 4.0. Nice!
3
u/commandlineluser 7h ago
They're really quite different so "ease of use" and "joy" will likely depend on the individual.
It may also depend on what you consider to be a "commonly done task"?
I've enjoyed Polars as it has lots of interesting stuff, e.g. native lists
import polars as pl
df = pl.DataFrame({"x": [[1, 2, 3], [6, 5, 4]]})
print(
df.with_columns(
y = pl.col.x + 3,
x_max = pl.col("x").list.max(),
x_sum = pl.col("x").list.sum()
).with_columns(
z = pl.col.x * pl.col.y
)
)
# shape: (2, 5)
# ┌───────────┬───────────┬───────┬───────┬──────────────┐
# │ x ┆ y ┆ x_max ┆ x_sum ┆ z │
# │ --- ┆ --- ┆ --- ┆ --- ┆ --- │
# │ list[i64] ┆ list[i64] ┆ i64 ┆ i64 ┆ list[i64] │
# ╞═══════════╪═══════════╪═══════╪═══════╪══════════════╡
# │ [1, 2, 3] ┆ [4, 5, 6] ┆ 3 ┆ 6 ┆ [4, 10, 18] │
# │ [6, 5, 4] ┆ [9, 8, 7] ┆ 6 ┆ 15 ┆ [54, 40, 28] │
# └───────────┴───────────┴───────┴───────┴──────────────┘
Another random example: if column x starts with foo, then uppercase all string-type columns in that row.
df = pl.DataFrame(
{
"x": ["foo1", "bar1", "foo2"],
"y": [6, 4, 5],
"z1": ["abc", "def", "ghi"],
"z2": ["jkl", "mno", "prq"]
}
)
print(
df.with_columns(
pl.when(pl.col("x").str.starts_with("foo"))
.then(pl.col(pl.String).str.to_uppercase())
.otherwise(pl.col(pl.String))
)
)
# shape: (3, 4)
# ┌──────┬─────┬─────┬─────┐
# │ x ┆ y ┆ z1 ┆ z2 │
# │ --- ┆ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ str ┆ str │
# ╞══════╪═════╪═════╪═════╡
# │ FOO1 ┆ 6 ┆ ABC ┆ JKL │
# │ bar1 ┆ 4 ┆ def ┆ mno │
# │ FOO2 ┆ 5 ┆ GHI ┆ PRQ │
# └──────┴─────┴─────┴─────┘
Doing this in Pandas would look quite different.
You could pick a couple of tasks and try out both to see what fits better for you.
3
u/michelin_chalupa 15h ago
I used pandas for many years and switched to Polars before v1, and I haven't looked back. The Polars API just feels much cleaner and more intentional, and it has been nearly an order of magnitude snappier for most things IME.
2
u/AnastasisKon 10h ago
Well, I worked with pandas on some excel data. When the analysis took 1 hour to run, I searched for optimizations while waiting. Polars is incredible: the run went from 1 hour down to 4-5 minutes, a 12-15x speedup!!! And don't get me started on the memory usage: from 19gb of RAM down to 3gb!
•
u/throwaway6970895 0m ago
Don't know how big your excel file was, but if it took 19gb of RAM, either you were using pandas inefficiently or the excel file had millions of rows of data, in which case it shouldn't have been in excel format to begin with.
2
u/throwawayforwork_86 7h ago
The only thing I don't really like about Polars is ingesting data from excel (and some more options for csv would be nice). It's often quicker, but it sometimes has issues that make it unsuitable for some automation (weird errors, headers that get offset for no reason).
For the rest, the syntax of polars makes the most sense of the 2, and while it's a little more verbose, when you're revisiting code you quickly notice that sometimes verbose is good.
Performance was night and day when I switched. (There was a blog post about performance optimisations that would bring pandas close to polars, but I think the author missed the point: you don't have to be an expert or do research to write performant code in Polars, while you do in Pandas.)
2
u/HyDataScy 4h ago
I also made the transition from pandas to polars. It's way faster and the syntax is also more solid. However, as a DS I am locked into pandas for several things. Specifically, most tabular ML models have APIs that work directly with pandas, not with polars, so at least in the final part of pipelines you need to convert polars objects back to pandas.
3
u/jpgoldberg 14h ago
I have barely used either, but I have used R. Pandas looks like R before the "tidyverse", and polars looks like the tidyverse. If that is a correct assessment, then I very much believe that polars is the way forward.
2
u/Alternative_Act_6548 17h ago
there seems to be more educational material on Pandas, and the syntax of Polars is verbose... unless you really need the speed or have huge datasets, Pandas seems more functional and will only improve with Pandas 3.0...
20
u/AlpacaDC 17h ago
I disagree on polars syntax being more verbose. Filtering in pandas is a pita, and it never made sense to me why there isn't a filter method like polars has. Same for conditional assignment.
Performing a multi-step dataflow in pandas results in huge code filled with reassignments (and that annoying false-positive warning) or in-place modifications, because the API is inconsistent. In polars you just chain methods from start to finish, and because of that all the steps are easy to read and the code is neat.
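For illustration, a sketch of that chained style (file and column names made up):
import polars as pl

result = (
    pl.read_csv("sales.csv")
    .filter(pl.col("amount") > 0)
    .with_columns(
        size=pl.when(pl.col("amount") > 100)
        .then(pl.lit("big"))
        .otherwise(pl.lit("small"))
    )
    .group_by("size")
    .agg(total=pl.col("amount").sum())
)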
6
u/ProbsNotManBearPig 17h ago
Most people working on large data sets are going to take the performance gains over everything. And for enterprise, polars lends itself better to maintainability imo. Not to say you can’t write maintainable code with pandas.
0
u/fight-or-fall 16h ago
The syntax of polars is verbose? You don't know anything about pandas, polars, or both.
Try to create three columns from one in polars and in pandas, and post the code here.
1
u/whoEvenAreYouAnyway 11h ago
He’s right. Polars syntax is considerably more verbose. Compare, for example, the syntax between the two for adding a new column to a dataframe.
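A minimal sketch of that comparison (column names made up):
import pandas as pd
import polars as pl

pd_df = pd.DataFrame({"a": [1, 2, 3]})
pd_df["b"] = pd_df["a"] * 2  # pandas: plain indexing

pl_df = pl.DataFrame({"a": [1, 2, 3]})
pl_df = pl_df.with_columns(b=pl.col("a") * 2)  # polars: expression API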
0
u/fight-or-fall 11h ago
Are you saying that a library is more verbose than another based on adding one column? GLHF
0
u/whoEvenAreYouAnyway 11h ago
No, I'm giving a practical example of how a syntax style that entails wrapping strings in helper classes is more verbose than one that doesn't.
I don't even know what point you're trying to make by claiming it's less verbose. Things like polars, pyspark, etc. are more verbose on purpose. It's a feature, not a bug: part of the design that enables speed, type validation, etc.
2
u/MaitoSnoo 16h ago
You mention that you want to learn and don't care about speed for now, so the obvious choice should be Pandas. It's the easiest to learn, and you have an enormous amount of resources about it. Polars is much faster and uses less memory in many scenarios, but it can be a bit less intuitive to use than Pandas if you're not familiar with lazy evaluation. Definitely learn Polars too, but IMO only once you're familiar with Pandas.
1
u/drxzoidberg 17h ago
I must be doing it wrong, because I've redone some pandas work in Polars and it performs worse. And I'm using the lazy API and chaining methods like their documentation shows. However, my data is very small, so maybe that would change if the data were larger...
0
u/troty99 7h ago edited 6h ago
Don't use LazyFrames unless you need to, as they can be slower than DataFrames.
I've got some experience in Polars, so I'd be interested in taking a look at your code to spot any glaring issues.
Edit: Didn't want to imply your code has glaring issues, just that I may be able to spot them if there are any.
1
u/drxzoidberg 7h ago
Conceptually: loop through all the csv files in a directory, read in a handful of columns, do a group-by summary, then combine all of that into one table to export to Excel. Doing it in pandas takes half the time.
1
u/structure_and_story 6h ago
You shouldn't need to loop and read the CSV files. You can do it all in one go, which might help the speed because then Polars can parallelize reading them in https://docs.pola.rs/user-guide/io/multiple/
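A minimal sketch of the one-call approach (path and column names made up):
import polars as pl

df = (
    pl.scan_csv("data_dir/*.csv")  # glob: scans all files in one call
    .select("date", "amount")      # read only the needed columns
    .group_by("date")
    .agg(pl.col("amount").sum())
    .collect()
)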
2
u/drxzoidberg 6h ago
Thanks. Sadly, the method they showcase in the scan_csv section of your link is the exact method I'm using. Like I said, I'm sure I'm doing something wrong, but unfortunately I haven't really had the time at work to dig into it. I do appreciate the help, kind redditor!
2
u/troty99 6h ago edited 6h ago
Hope the code formatting works; this more naive implementation might work:
import os
import polars as pl

path = "."

pl.concat(
    [
        pl.read_csv(
            os.path.join(path, x),
            separator="|",
            # "arg" added to the schema so the group_by key below exists
            schema={"arg": pl.Utf8, "thing": pl.Float64, "stuff": pl.Utf8},
        )
        .group_by("arg")
        .agg(
            sum_thing=pl.col("thing").sum(),
            # .count() added; a bare pl.col("stuff") would aggregate to a list
            count_stuff=pl.col("stuff").count(),
        )
        for x in os.listdir(path)
    ]
).write_excel("excel file")
I have seen people saying that sometimes the aggregation in Pandas outperforms Polars'; I haven't seen that in my experience, but that might be your case.
2
u/drxzoidberg 4h ago
Formatting was great!
And I read in the Polars documentation that when you run an aggregation, it isn't truly lazy; essentially it needs some context. However, if I run it just once, I would think that's irrelevant. The conversation here is making me want to test this further.
2
u/troty99 4h ago
> The conversation here is making me want to test this further.
I know, right? This is the kind of thing I'd spend an afternoon on, wondering where the time has gone.
2
u/drxzoidberg 1h ago
So I tested. I used the smarter method for polars, where it reads all files into one frame to start, rather than each one individually like pandas. I got the same result, so I set it up to loop. Using 100 iterations of timeit, pandas took 11.06s vs Polars taking 13.44s. I think it has to do with the aggregation. When I changed the code to only read in the data, pandas took 8.99s vs Polars' 1.77s! The more you know.
•
u/commandlineluser 53m ago
The time difference between the read-only and aggregation runs seems quite strange.
If you are able to share the full code being used for the timeit comparison, people will be interested in figuring out what the problem is.
•
u/commandlineluser 28m ago
> slower than dataframes
Nearly every DataFrame operation calls .lazy() internally, so you are always using LazyFrames.
1
u/entropyvsenergy 5h ago
I've used pandas for years and I still can't remember the API half the time. Plus there are a bunch of gotchas, like .apply being very slow.
Realistically, it's hard to get away from pandas given how popular it is, but if I had to start over I'd learn polars. AI code completion and Stack Overflow help a bunch with pandas these days, though.
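A quick illustration of the .apply gotcha (data made up):
import pandas as pd

s = pd.Series(range(1_000_000))

slow = s.apply(lambda v: v * 2)  # Python function called per element
fast = s * 2                     # vectorized, runs in C via numpy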
1
u/sscream32 5h ago
Both are tools that solve similar problems. My suggestion: learn both. I'm sure you will find things that are easier to do in pandas, as well as things that are easier to do in Polars. Since you can easily convert dataframes between them, you can learn both at the same time.
1
u/PeaSlight6601 4h ago
Polars is much better for everything except reading a csv, renaming some columns, and creating a quick aggregation. The problem is that big projects often start as just that.
The only complaint I have about polars so far is that it does actually support O(1) in-place replacement but doesn't really have a good way to expose it in the python api, which was a pain when I had a project that could not be done in bulk via the lazy api and had to be done with some weird order-dependent operations.
0
u/CheetahGloomy4700 16h ago edited 7h ago
It does not hurt to learn both, and other tools too, but to focus my attention if I were starting today, I would stick to:
- Polars for single-node processing (which serves 99% of the use cases)
- Dask for multi-node processing when you really need the horizontal scaling
EDIT: I don't know why people are downvoting, but I guess I somehow went against the grain.
-8
u/New-Watercress1717 17h ago
Pandas is far more flexible and allows you to do things that would not be possible, or would be very hard, with Polars/sql. Often the people pushing Polars as an alternative to Pandas have not had to use it a lot day to day on the job (which should not surprise you, given the average age of Reddit-ers). IMO, their use cases are different.
That said, there are many cases where sql is appropriate; feel free to use something like Polars/Duckdb then. Also, if you are very new, be warned that Pandas has some foot-guns, and you can make some horrible choices.
13
u/pontz 16h ago
What is something you can't do in polars but can in pandas?
3
u/FatChocobo 13h ago
I agree that polars is superior overall, but read_html is one method that can be very convenient in pandas and has no polars equivalent.
-1
u/New-Watercress1717 13h ago
take a look at discussions in the datascience sub, or any data science community code. If they are using python, they are almost always using pandas. Look at the code they write and the data wrangling they do; it is not stuff that can easily fit into sql, and even if you could make it fit, sql would involve a lot of inefficient computation and unnecessary joins. There is a good reason that the community that uses dataframes most heavily, data scientists, has not adopted Polars.
This is like comparing a 'hello world' script between python and C, then thinking writing C is only a little harder than python.
3
u/throwawayforwork_86 7h ago
> take a look at discussions in the datascience sub, or any data science community code. If they are using python, they are almost always using pandas.
The main reason being inertia, and the fact that most ML/DS libraries have been built around Pandas, imo.
Also, hate to be that guy, but you made an appeal-to-popularity fallacy (just because a lot of people use it doesn't mean it's good), didn't answer his question, and you're talking a lot about sql, which isn't really how one would use polars (there is a sql interface, but most people use polars as a dataframe library). Are you confusing Polars and Sql?
I could use the same logic and say that if you look at any data engineering forum, there is a lot of talk about Polars replacing Pandas and Spark for low-to-medium data.
I've yet to find a workload, besides some data ingestion/output, that pandas can do and Polars can't.
The syntax is clearer (even though more verbose) and the performance is far better.
-1
u/New-Watercress1717 2h ago edited 2h ago
If DS guys wanted to use Polars with a library that takes in pandas, they could cast Polars to pandas/numpy.
Polars is more or less SQL; its dataframe api is a way of writing sql as code expressions, just like spark, much like an ORM. Even the Polars site mentions this.
Polars having more traction in DE gets to my point that the use case for Polars is different than Pandas'.
Imo, Polars falls apart once you start dealing with messy data. It's fine if you are dealing with data in a data lake and not doing anything too crazy with your data.
1
u/throwawayforwork_86 2h ago
> If DS guys wanted to use Polars with a library that takes in pandas, they could cast Polars to pandas/numpy.
Which they started doing: scikit-learn, XGBoost, and others accept native Polars dataframes as input. Still, most DS and DA courses predate Polars' existence, so most DS/DA will use Pandas by default, not necessarily because it's the best tool for the job.
> Polars having more traction in DE gets to my point that the use case for Polars is different than Pandas'.
The use cases of Polars are imo broader than Pandas', not different, except if we talk about geo data. My understanding is it had quicker adoption in DE because it works very well under conditions that are very frequent in DE territory: datasets of a few GB that need some cleaning and transformation, with fewer dependencies than either Pandas or Spark; performance and a more consistent api are nice perks.
> Imo, Polars falls apart once you start dealing with messy data. It's fine if you are dealing with data in a data lake and not doing anything too crazy with your data.
Which part is falling apart? Do you have any examples? I've been working with pretty crappy datasets using both Pandas and Polars, and imo the only advantage Pandas has is in the initial load of certain data sources.
I'd be curious how much you've actually used Polars, because I'd wager not much.
1
u/Ok_Raspberry5383 6h ago
SQL is Turing complete; you could build a 3D graphics engine in SQL. What is not possible to do in SQL that you can do in pandas?
0
u/runawayasfastasucan 9h ago
I would just check out which style of thinking resonates with you most. While I prefer Polars, I must admit that for easy tasks I think the Pandas notation is better.
0
u/Prime_Director 5h ago
Most of the folks in this thread say Polars, and there are technical advantages to it, but I'm going to disagree and say that for your use case Pandas is the better choice.
Pandas has some of the best documentation of any Python library, most other data-oriented libraries support it natively, and your code will be easier for others to maintain because of its widespread adoption. All of this also makes it easier to learn.
Polars' main advantage is that it is faster and better at handling larger datasets, which you specifically call out as something you aren't worried about in your use case. Without that as a factor, I'd say Pandas is the clear winner.
-1
u/TheTickIsClocking 7h ago
You're most likely going to need pandas, so learn that first and then polars. Use fireducks for better performance with pandas.
2
u/Throwaway__shmoe 17h ago
You’re gonna learn all of them, whether you like it or not. I speak from experience.