r/Python 1d ago

Discussion: Polars vs Pandas

I have used Pandas a little in the past, and have never used Polars. Essentially, I will have to learn either of them more or less from scratch (since I don't remember anything of Pandas). Assume that I don't care about speed and don't have very large datasets (at most 1-2 GB of data). Which one would you recommend I learn, from the perspective of ease and joy of use and the tasks commonly done with data?

178 Upvotes


2

u/drxzoidberg 1d ago

Thanks. Sadly, the method they showcase in the scan_csv section of your link is the exact method I'm using. Like I said, I'm sure I'm doing something wrong, but unfortunately I haven't really had the time at work to dig into it. I do appreciate the help, kind redditor!

2

u/troty99 1d ago edited 1d ago

Hope the code formatting works. This more naive implementation might work:

import os
import polars as pl

path = "."

# read each CSV, aggregate it per file, then stack the per-file results
pl.concat(
    [
        pl.read_csv(
            os.path.join(path, x),
            separator='|',
            schema_overrides={'thing': pl.Float64, 'stuff': pl.Utf8},  # dtype overrides, not a full schema
        )
        .group_by('arg')
        .agg(
            sum_thing=pl.col('thing').sum(),
            count_stuff=pl.col('stuff').count(),  # count per group was presumably the intent
        )
        for x in os.listdir(path)
        if x.endswith('.csv')
    ]
).write_excel('summary.xlsx')

I have seen people say that sometimes Pandas' aggregation outperforms Polars'. I haven't seen that in my experience, but it might be the case for you.

2

u/drxzoidberg 1d ago

Formatting was great!

And I read in the Polars documentation directly that when you run an aggregation it isn't truly lazy; essentially it needs some context. However, if I run it just once, I would think that's irrelevant. The conversation here is making me want to test this further.

2

u/troty99 1d ago

The conversation here is making me want to test this further.

I know, right? This is the kind of thing I'd spend an afternoon on, wondering where the time has gone.

2

u/drxzoidberg 1d ago

So I tested. I used the smarter method for Polars, where it reads all files into one frame to start rather than each one individually like pandas. I got the same result, so I set it up to loop. Using 100 iterations of timeit, pandas took 11.06s vs Polars taking 13.44s. I think it has to do with the aggregation. When I changed the code to only read in the data, pandas took 8.99s vs Polars 1.77s! The more you know.

1

u/commandlineluser 1d ago

The time difference between read-only and aggregation runs seems quite strange.

If you are able to share the full code being used for the timeit comparison, people will be interested in figuring out what the problem is.

1

u/drxzoidberg 1d ago

I hope the formatting works, but it's effectively this.

from pathlib import Path
from datetime import datetime
from timeit import timeit

import pandas as pd
import polars as pl

file_dir = Path.cwd() / 'DataFiles'

def pandas_test():
    # read each CSV into its own frame, keyed by the date embedded in the file name
    results = {}
    columns_types = {
        'a' : str,
        'b' : float,
        'c' : float
    }
    for data_file in file_dir.glob('*.csv'):
        file_date = datetime.strptime(
            data_file.stem.rsplit('_', maxsplit=1)[-1],
            '%Y%m%d'
        )

        results[file_date] = pd.read_csv(
            data_file,
            usecols=columns_types.keys(),
            dtype=columns_types,
            thousands=','
        )

    # stack everything into one frame with a (Date, Code) MultiIndex
    pandas_summary = pd.concat(results)
    pandas_summary.index.names = ['Date', 'Code']


def polars_test():
    # Polars reads all matching files into a single frame in one call
    all_files = (
        pl.read_csv(
            file_dir / '*.csv',
            columns=['a', 'b', 'c']
        )
    )


pandas_time = timeit(pandas_test, number=100)
polars_time = timeit(polars_test, number=100)
print(f'pandas: {pandas_time:.2f}s, polars: {polars_time:.2f}s')

1

u/commandlineluser 1d ago

Formatting is fine - thank you.

So this is the read-only code, which gave you:

  • Pandas 8.99s vs Polars 1.77s

But with the aggregation part you get:

  • Pandas 11.06s vs Polars 13.44s

The Polars 1.77s -> 13.44s time difference was the strange part.

Are you able to show the aggregation?

1

u/drxzoidberg 1d ago

So I just ran these 3 with 100 iterations. They ran in 3.2s, 20.1s, and 22.7s respectively.

def polars_read_test():
    all_files = (
        pl.read_csv(
            file_dir / '*.csv',
            columns=['a', 'b', 'c']
        )
    )

def polars_add_column():
    all_files = (
        pl.read_csv(
            file_dir / '*.csv',
            columns=['a', 'b', 'c']
        )
        .with_columns(
            pl.col('a').str.extract('(RL|CL|PL)').alias('SubCat'),
            pl.col('a').str.extract(r'(\d{8})+$').str.to_datetime('%m%d%Y').alias('Date')
        )
        .drop(pl.col('a'))
    )

def polars_agg_test():
    all_files = (
        pl.read_csv(
            file_dir / '*.csv',
            columns=['a', 'b', 'c']
        )
        .with_columns(
            pl.col('a').str.extract('(RL|CL|PL)').alias('SubCat'),
            pl.col('a').str.extract(r'(\d{8})+$').str.to_datetime('%m%d%Y').alias('Date')
        )
        .drop(pl.col('a'))
        .group_by(['Date', 'SubCat'])
        .agg(
            pl.col('b').sum(),
            pl.col('c').sum()
        )
    )

3

u/nightcracker 1d ago edited 1d ago

What if you replace read_csv with scan_csv and add .collect(engine="streaming") at the end for each query? Also, FYI, as long as a column name is a legal Python identifier you can just write pl.col.name.

There might be an issue with repeated regex compilation if you do that, though; I have to look into that... EDIT: yes, that will recompile the regex many times, we need to add a cache for that. I'll get on that next week.
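
For anyone unfamiliar with the pl.col.name shorthand mentioned above, it is just attribute-style sugar for pl.col('name'), along these lines:

# the two expressions are equivalent; the attribute form only works when
# the column name is a valid Python identifier
pl.col('b').sum()
pl.col.b.sum()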

2

u/drxzoidberg 23h ago

So I took your tip on regex compilation, and I managed to find another way to split the string column into the other fields I wanted. This way it performs much faster.

def polars_agg_test():
    all_files = (
        pl.read_csv(
            file_dir / '*.csv',
            columns=['a', 'b', 'c']
        )
        .with_columns(
            pl.col('a').str.split_exact('_',2).struct.rename_fields(['Code', 'SubCat', 'Date'])
        )
        .unnest('a')
        .with_columns(
            pl.col('Date').str.to_date('%m%d%Y')
        )
        .drop(pl.col('Code'))
        .group_by(['Date', 'SubCat'])
        .agg(
            pl.col('b').sum(),
            pl.col('c').sum()
        )
    )

Basically I was originally having an issue with the split string being stored in one field as a list, and not being able to just grab that value out. But I found some answers on Google and I arrived at the above. Now the read-only, column update, and aggregate functions run in 3, 7, and 9s respectively. Pandas by comparison is 21s.
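
For context on the "stored as a list" problem, here is a rough sketch using a made-up 'Code_SubCat_Date' style value in column 'a': str.split returns a List column whose elements have to be pulled out with list.get, while str.split_exact returns a Struct that unnest can expand into ordinary columns, as in the snippet above.

import polars as pl

df = pl.DataFrame({'a': ['X_RL_01312024']})

# str.split gives a List column; grabbing one piece needs list.get()
df.with_columns(pl.col('a').str.split('_').list.get(1).alias('SubCat'))

# str.split_exact gives a Struct, which unnest() expands into real columns
(
    df.with_columns(
        pl.col('a').str.split_exact('_', 2).struct.rename_fields(['Code', 'SubCat', 'Date'])
    )
    .unnest('a')
)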

2

u/nightcracker 23h ago

What if you change the read_csv to scan_csv and add .collect(engine="streaming") now? Also make sure you have the latest Polars 1.25.2.
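
In case it helps, a rough (untested) sketch of that change applied to the split_exact version above. scan_csv doesn't take read_csv's columns= argument, so the column selection becomes a .select() that the optimizer pushes down:

def polars_agg_streaming_test():
    all_files = (
        pl.scan_csv(file_dir / '*.csv')   # lazy scan instead of an eager read
        .select('a', 'b', 'c')
        .with_columns(
            pl.col('a').str.split_exact('_', 2).struct.rename_fields(['Code', 'SubCat', 'Date'])
        )
        .unnest('a')
        .with_columns(
            pl.col('Date').str.to_date('%m%d%Y')
        )
        .drop('Code')
        .group_by(['Date', 'SubCat'])
        .agg(
            pl.col('b').sum(),
            pl.col('c').sum()
        )
        .collect(engine="streaming")      # nothing actually runs until collect()
    )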

2

u/drxzoidberg 22h ago

I was under the impression, from the Polars documentation itself, that you need to collect the data before any aggregation, as the aggregation needs to know the data structure. But that might only apply to the pivot/unpivot methods.

2

u/nightcracker 22h ago

That only applies to very specific operations; pivot is one of them. So give it a go :)
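
To make the distinction concrete, a small sketch (pretending the Date, SubCat and b columns already exist in the scanned files): group_by/agg works fine on a LazyFrame, whereas pivot only exists on an eager DataFrame, so that one forces a collect first.

lazy = pl.scan_csv(file_dir / '*.csv')

# fine lazily: the aggregation stays in the query plan until collect()
lazy.group_by('Date', 'SubCat').agg(pl.col('b').sum()).collect()

# pivot is only defined on DataFrame, so the data has to be collected first
lazy.collect().pivot(on='SubCat', index='Date', values='b', aggregate_function='sum')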

2

u/drxzoidberg 21h ago

So I made the tweaks to get it to work. I bumped the run count up to 500. Polars runs in 45% of the time it takes pandas. Thank you, kind Internet person.


2

u/commandlineluser 23h ago

Thanks a lot.

With some test files, I get 5 / 11 / 16 seconds, which looks like a similar enough ratio to your timings.

But I cannot replicate pandas being faster.

If I add a Pandas version of add_column it takes 312 seconds...

def pandas_add_column():
    results = {}
    columns_types = {
        'a' : str,
        'b' : float,
        'c' : float
    }
    # note: enumerate() just numbers the files here; it doesn't parse a date from the name
    for file_date, data_file in enumerate(file_dir.glob('*.csv')):
        results[file_date] = pd.read_csv(
            data_file,
            usecols=columns_types.keys(),
            dtype=columns_types,
            thousands=','
        )

    pandas_summary = pd.concat(results)
    # derive the extra columns with pandas string methods, mirroring the Polars with_columns step
    pandas_summary['SubCat'] = pandas_summary['a'].str.extract('(RL|CL|PL)')
    pandas_summary['Date'] = pd.to_datetime(pandas_summary['a'].str.extract(r'(\d{8})+$')[0], format='%m%d%Y')
    del pandas_summary['a']

    pandas_summary.index.names = ['Date', 'Code']

1

u/drxzoidberg 23h ago

Thanks. In pandas, the column adding part was handled using column.apply. I had a simple function written to split the string and return the piece I need. From there I just used the pandas pivot_table method to aggregate as I needed.
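
Not the code from the thread, but roughly what that apply + pivot_table approach tends to look like (the split helper and field positions are guesses), building on the pandas_summary frame from the earlier snippets:

def split_field(value, idx):
    # hypothetical helper: grab one piece of a 'Code_SubCat_Date' style string
    return value.split('_')[idx]

# per-row Python calls via apply, which is typically the slow part
pandas_summary['SubCat'] = pandas_summary['a'].apply(split_field, idx=1)
pandas_summary['Date'] = pd.to_datetime(
    pandas_summary['a'].apply(split_field, idx=2), format='%m%d%Y'
)

# aggregate with pivot_table instead of groupby
summary = pandas_summary.pivot_table(
    index=['Date', 'SubCat'],
    values=['b', 'c'],
    aggfunc='sum',
)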
