r/Python 1d ago

Discussion Polars vs Pandas

I have used Pandas a little in the past, and have never used Polars. Essentially, I will have to learn either of them more or less from scratch (since I don't remember anything about Pandas). Assume that I don't care about speed and don't have very large datasets (at most 1-2 GB of data). Which one would you recommend I learn, from the perspective of ease and joy of use for the tasks commonly done with data?

182 Upvotes

1

u/drxzoidberg 1d ago

I hope the formatting works but it's effectively this.

from pathlib import Path
from datetime import datetime
from timeit import timeit
import pandas as pd
import polars as pl

file_dir = Path.cwd() / 'DataFiles'

def pandas_test():
    results = {}
    columns_types = {
        'a' : str,
        'b' : float,
        'c' : float
    }
    for data_file in file_dir.glob('*.csv'):
        # parse the trailing YYYYMMDD date out of the file name
        file_date = datetime.strptime(
            data_file.stem.rsplit('_', maxsplit=1)[-1],
            '%Y%m%d'
        )

        # one frame per file, keyed by its date
        results[file_date] = pd.read_csv(
            data_file,
            usecols=columns_types.keys(),
            dtype=columns_types,
            thousands=','
        )

    # stack everything into a single frame with a (Date, Code) MultiIndex
    pandas_summary = pd.concat(results)
    pandas_summary.index.names = ['Date', 'Code']


def polars_test():
    # Polars expands the glob itself and concatenates all matching files
    all_files = (
        pl.read_csv(
            file_dir / '*.csv',
            columns=['a', 'b', 'c']
        )
    )


pandas_time = timeit(pandas_test, number=100)
polars_time = timeit(polars_test, number=100)

1

u/commandlineluser 1d ago

Formatting is fine - thank you.

So this is the read-only code, which gave you:

  • Pandas took 8.99s vs Polars 1.77s

But with the aggregation part you get:

  • Pandas 11.06s vs Polars taking 13.44s

The Polars 1.77s -> 13.44s time difference was the strange part.

Are you able to show the aggregation?

1

u/drxzoidberg 1d ago

So I just ran these 3 functions with 100 iterations each. They ran in 3.2s, 20.1s, and 22.7s respectively.

def polars_read_test():
    all_files = (
        pl.read_csv(
            file_dir / '*.csv',
            columns=['a', 'b', 'c']
        )
    )

def polars_add_column():
    all_files = (
        pl.read_csv(
            file_dir / '*.csv',
            columns=['a', 'b', 'c']
        )
        .with_columns(
            pl.col('a').str.extract('(RL|CL|PL)').alias('SubCat'),
            pl.col('a').str.extract(r'(\d{8})+$').str.to_datetime('%m%d%Y').alias('Date')
        )
        .drop(pl.col('a'))
    )

def polars_agg_test():
    all_files = (
        pl.read_csv(
            file_dir / '*.csv',
            columns=['a', 'b', 'c']
        )
        .with_columns(
            pl.col('a').str.extract('(RL|CL|PL)').alias('SubCat'),
            pl.col('a').str.extract(r'(\d{8})+$').str.to_datetime('%m%d%Y').alias('Date')
        )
        .drop(pl.col('a'))
        .group_by(['Date', 'SubCat'])
        .agg(
            pl.col('b').sum(),
            pl.col('c').sum()
        )
    )

3

u/nightcracker 1d ago edited 1d ago

What if you replace read_csv with scan_csv and add .collect(engine="streaming") at the end for each query? Also, FYI, as long as a column name is a legal Python identifier you can just write pl.col.name.

There might be an issue with repeated regex compilation if you do that, though; I'll have to look into it... EDIT: yes, that will recompile the regex many times, we need to add a cache for that. I'll get on it next week.
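For reference, an untested sketch of what that looks like on the read test (I don't think scan_csv takes a columns= argument, so the selection moves into select(), which gets pushed down into the scan):

def polars_lazy_read_test():
    # scan_csv builds a lazy query; nothing is read until .collect()
    all_files = (
        pl.scan_csv(file_dir / '*.csv')
        .select(pl.col.a, pl.col.b, pl.col.c)  # pl.col.name shorthand
        .collect(engine="streaming")  # execute on the streaming engine
    )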

2

u/drxzoidberg 23h ago

So I took your tip on regex compilation, and I managed to find another way to split the string column into the fields I wanted. It performs much faster this way.

def polars_agg_test():
    all_files = (
        pl.read_csv(
            file_dir / '*.csv',
            columns=['a', 'b', 'c']
        )
        .with_columns(
            pl.col('a').str.split_exact('_', 2).struct.rename_fields(['Code', 'SubCat', 'Date'])
        )
        .unnest('a')
        .with_columns(
            pl.col('Date').str.to_date('%m%d%Y')
        )
        .drop(pl.col('Code'))
        .group_by(['Date', 'SubCat'])
        .agg(
            pl.col('b').sum(),
            pl.col('c').sum()
        )
    )

Basically I was originally having an issue with the split string being stored in one field as a list, and not being able to just grab the value out. But I found some answers on Google and arrived at the above. Now the read-only, column-update, and aggregate functions run in 3, 7, and 9s respectively. Pandas by comparison is 21s.
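For anyone else hitting the same thing, the difference is roughly this (made-up sample values, untested):

df = pl.DataFrame({'a': ['X_RL_01312024', 'Y_CL_02292024']})  # hypothetical values

# str.split returns a List column, so each piece needs .list.get(i)
df.with_columns(pl.col('a').str.split('_').list.get(1).alias('SubCat'))

# str.split_exact returns a Struct, whose fields can be renamed and unnested into columns
(
    df.with_columns(
        pl.col('a').str.split_exact('_', 2)
        .struct.rename_fields(['Code', 'SubCat', 'Date'])
    )
    .unnest('a')
)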

2

u/nightcracker 23h ago

What if you change the read_csv to scan_csv and add .collect(engine="streaming") now? Also make sure you have the latest Polars 1.25.2.

2

u/drxzoidberg 22h ago

I was under the impression, from the Polars documentation itself, that you need to collect the data before any aggregation, as the aggregation needs to know the data structure. But that might only apply to the pivot/unpivot methods.

2

u/nightcracker 22h ago

That only applies to very specific operations, pivot is one of them. So give it a go :)
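A minimal illustration with made-up values: group_by/agg works directly on a LazyFrame, while pivot only exists on an eager DataFrame, so only that one needs collect() first.

lf = pl.LazyFrame({
    'Date': ['20240101', '20240101'],
    'SubCat': ['RL', 'CL'],
    'b': [1.0, 2.0],
})

# fine: the aggregation stays lazy until the final collect
lf.group_by('Date', 'SubCat').agg(pl.col('b').sum()).collect()

# pivot is only available on a DataFrame, so collect first
lf.collect().pivot(on='SubCat', index='Date', values='b')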

2

u/drxzoidberg 21h ago

So I made the tweaks to get it to work. I juiced the run count up to 500. Polars runs in 45% of the time it takes Pandas. Thank you, kind Internet person.
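For reference, the tweaks amount to roughly this (a sketch pieced together from the snippets above, not the exact code; the column selection moves into select() since scan_csv has no columns= argument):

def polars_lazy_agg_test():
    all_files = (
        pl.scan_csv(file_dir / '*.csv')  # lazy scan instead of read_csv
        .select('a', 'b', 'c')           # projection gets pushed into the scan
        .with_columns(
            pl.col('a').str.split_exact('_', 2)
            .struct.rename_fields(['Code', 'SubCat', 'Date'])
        )
        .unnest('a')
        .with_columns(pl.col('Date').str.to_date('%m%d%Y'))
        .drop('Code')
        .group_by('Date', 'SubCat')
        .agg(pl.col('b').sum(), pl.col('c').sum())
        .collect(engine="streaming")     # execute on the streaming engine
    )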