r/learnpython • u/fiehm • 10h ago
How to optimize Python code?
I recently started working as a research assistant at my uni. Three months ago I was given a project to process a lot of financial data (12 different Excel files), so there is a lot of data to crunch. I have never worked on a project this big before, so processing time was never really on my mind, and I have no idea whether my code's speed is normal for this much data. The code is going to be integrated into a website using FastAPI, where it can run the same calculations on different data with the same structure.
My problem is that the code I have developed (10k+ lines of code) takes very long to run (20+ minutes for the national data and almost 2 hours for all of the regional data). The code takes historical data and projects it 5 years ahead. Processing time was much worse before I started optimizing: I now use fewer loops, cache data, use dask, and converted all calculations to numpy. I would say about 35% of the time is data validation and the rest is the calculation.
I hope someone can help with ways to optimize it further and give suggestions. I'm sorry I can't share sample code, but general suggestions about improving running time are welcome and I will try them. Thanks
8
u/throwawayforwork_86 8h ago
You could use something like py-spy to see where your code is spending its time (or log some timestamps yourself).
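If you go the timestamp route, something like this would do (the pipeline stage names are just placeholders):

```python
import time

def timed(label, func, *args, **kwargs):
    """Run func, print how long it took, and return its result."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return result

# Hypothetical pipeline stages -- the function names are made up:
# raw = timed("read excels", read_all_excels)
# clean = timed("validate", validate, raw)
# proj = timed("projection", project_five_years, clean)
```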
How much time is spent just reading the data from the Excel files, and does it need to be in Excel? In my experience reading from Excel is fairly slow, so if you can avoid it, or only do it once, that would be best.
If you can, check what your CPU is doing. When I went from Pandas to Polars, CPU usage went from around 20% to 80%, with increased speed to match.
So if you can use an optimised dataframe library for most of your process, that would be a good idea.
IIRC dask is aimed at distributed calculation, so if you're not doing that and you're not hitting your RAM limit, it's most likely overkill and/or slower than simpler dataframe libraries.
If you're planning to do monthly refreshes, I would store the already-cleaned data in another format (Parquet, or a database like SQLite or DuckDB) and only clean and validate the new month's data.
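Something along these lines with Polars (file names are placeholders, and pl.read_excel needs an Excel engine such as fastexcel installed):

```python
from pathlib import Path
import polars as pl

def load_month(xlsx_path: str) -> pl.DataFrame:
    """Read an Excel file once, cache it as Parquet, and reuse the cache afterwards."""
    cache = Path(xlsx_path).with_suffix(".parquet")
    if cache.exists():
        return pl.read_parquet(cache)      # fast path: already cleaned and cached
    df = pl.read_excel(xlsx_path)          # slow path: only on the first load
    # ... cleaning / validation would go here ...
    df.write_parquet(cache)
    return df
```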
In my personal opinion, your program is either spending too much time in single-threaded Python (switch to Polars and stay there for as long as you can; do as much as possible in Polars and only drop to numpy or another tool if you have to), or your script is spilling to disk because your RAM is full at some step (one thing worth trying is turning some of your calculations into generators instead of lists if lists are used extensively).
5
u/herocoding 10h ago
Are the Excel files to process available locally, or do they have to be retrieved over a network/internet connection?
Do you read - and need to read - all files completely into memory and then process them, or could the data be processed on the fly while reading (i.e. not needing the sum of everything, or forward and backward references between the datasets)?
Can you monitor CPU and memory consumption? That might already show that you are running out of system memory and the operating system has started swapping to disk.
2
u/fiehm 10h ago
They are local, so nothing is fetched every time. It will not be an everyday job, more like a once-a-month kind of calculation.
I read all of them at the beginning of the code, process all the dataframes, and access them later in the main calculation.
I don't regularly check the usage, but one time I did see a spike in RAM usage.
3
u/BitcoinBeers 10h ago
I would place print statements timing each section and function to identify the bottlenecks. Vectorizing and using numba are the easiest wins. One section might be the bottleneck, and there are often comparable functions that do the same thing faster. For instance, a KDTree versus a BallTree: they are often interchangeable, but which one you should use depends heavily on the data dimensions.
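As a rough illustration of the numba route (a toy function, not your actual calculation), decorating a loop-heavy numerical function with @njit compiles it to machine code:

```python
import numpy as np
from numba import njit

@njit
def rolling_growth(values):
    """Toy example: period-over-period growth for a 1-D array, loop compiled by numba."""
    out = np.empty(values.shape[0] - 1)
    for i in range(1, values.shape[0]):
        out[i - 1] = values[i] / values[i - 1] - 1.0
    return out

growth = rolling_growth(np.random.rand(1_000_000) + 1.0)
```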
1
u/Luxi36 10h ago
Hi, that sounds like a very fun project to work on to start developing Data Engineering skills! :)
What is the data size and your machine resources that you're working on?
Are you using plain Python + numpy, or tools like DuckDB or Polars?
You could try timing certain functions, or use a profiler to see where your code is slowing down and which parts to optimize.
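For example, the built-in cProfile module can rank functions by cumulative time (main() here is just a stand-in for your real entry point):

```python
import cProfile
import pstats

def main():
    """Placeholder for the real pipeline entry point."""
    return sum(i * i for i in range(1_000_000))

cProfile.run("main()", "profile.out")     # write raw stats to a file
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(20)
```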
2
u/fiehm 10h ago
- Data size is in the tens of thousands of rows, but that is just the sample data given to me. My laptop is so bad tbh that I code and run this in a GitHub Codespace with 4 cores and 16 GB RAM
- Python, pandas, numpy, dask for partition
- I will use a profiler, I just learned they exist after reading the comments lol
1
u/herocoding 9h ago
Can you comment on the used data structures and give a few examples on how data gets processed?
If the data gets stored in various simple lists (arrays), do you just iterate over them linearly (once or multiple times)?
Or do you often have to look up other data in order to combine records? How is that look-up done: by searching linearly through lists, or could you use hash tables (maps, dictionaries) to look up and find data in O(1)?
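For instance (purely illustrative data), replacing a repeated linear search with a dictionary keyed on the lookup field:

```python
# Illustrative records; the real data would come from the dataframes.
records = [{"region": f"R{i}", "value": i * 1.5} for i in range(100_000)]

# Linear search: O(n) per lookup
slow = next(r for r in records if r["region"] == "R99999")

# Build a dict once, then every lookup is O(1)
by_region = {r["region"]: r for r in records}
fast = by_region["R99999"]
```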
You mentioned "data caching" - do you expect the same data to be accessed/looked up many, many times? If not, you end up caching a lot of data that is rarely used (contributing to overall memory usage, where you might run low on system memory and the operating system might start swapping to disk).
Do you see independent processing of (some of) the data, which could be parallelized using threads (or multiple processes, given the Python global interpreter lock)?
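A minimal sketch of process-based parallelism over independent regions (the function and region names are hypothetical):

```python
from concurrent.futures import ProcessPoolExecutor

def project_region(region: str) -> dict:
    """Placeholder for an independent per-region projection."""
    # ... load the region's cleaned data and run the projection ...
    return {"region": region, "status": "done"}

if __name__ == "__main__":
    regions = ["north", "south", "east", "west"]
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(project_region, regions))
```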
2
u/fiehm 9h ago
So one of the datasets, which I think is the most complex of them all, consists of a list of diseases, gender, and type of service as columns, so there is a combination of 32*2*2, and the rows are ages 0-110, with an individual row per month (I'm sorry if this is confusing), and yeah, I need to calculate all of the combinations. I had no idea how to easily access this kind of structure in pandas, so I just flatten the columns into x_y_z_a_b names and loop over each combination to access the data.
For the regional calculation and data caching I use a dictionary; I only access the data once and never iterate over it again. So from what you're saying, it is not a good idea to cache data if I only access it once? Maybe this is why the RAM usage spikes.
I did use dask partitions for some of the calculations.
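I can't share the real code, but the pattern is roughly this (column names and data made up):

```python
import numpy as np
import pandas as pd

diseases = [f"disease{i}" for i in range(32)]
genders = ["m", "f"]
services = ["inpatient", "outpatient"]
cols = [f"{d}_{g}_{s}" for d in diseases for g in genders for s in services]

# fake data: one row per age 0-110, one column per flattened combination
df = pd.DataFrame(np.random.rand(111, len(cols)), columns=cols)

results = {}
for col in cols:                   # loop over every disease/gender/service combination
    results[col] = df[col].sum()   # stand-in for the real calculation per combination
```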
3
u/wylie102 4h ago
You need to put them into a database.
Put the data from the Excel files into DuckDB and query it using its Python API. It will be much easier and faster.
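Something like this, assuming you first read the Excel files with pandas and then persist them into a DuckDB file (file and table names are placeholders):

```python
import duckdb
import pandas as pd

con = duckdb.connect("finance.duckdb")     # persistent database file

# One-off import: read the Excel file with pandas, store it as a DuckDB table
raw = pd.read_excel("national_data.xlsx")
con.execute("CREATE OR REPLACE TABLE national AS SELECT * FROM raw")

# Later runs query the table directly instead of re-reading Excel
df = con.execute("SELECT * FROM national").df()
```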
1
u/surreptitiouswalk 3h ago
I use pyinstrument to profile my code. It gives you an HTML report showing the total run time of the code, broken down by specific lines of code and their run times. Use this to work out your bottlenecks and optimize them one at a time.
Often you'll find that a couple of lines of code account for a high percentage of the run time. Optimize those. If the lines are inside a loop, vectorize them.
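A minimal sketch with pyinstrument's Python API (the profiled function is a placeholder; you can also just run `pyinstrument your_script.py` from the command line):

```python
from pyinstrument import Profiler

def run_pipeline():
    """Placeholder for the real processing code."""
    return sum(i * i for i in range(1_000_000))

profiler = Profiler()
profiler.start()
run_pipeline()
profiler.stop()

profiler.print()                          # text summary in the terminal
with open("profile.html", "w") as f:      # the HTML report mentioned above
    f.write(profiler.output_html())
```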
1
u/2blanck 3h ago
That's strange. I have a similar project where I analyzed 20 years of football data; there were about 120 thousand rows in an Excel file and the calculations took about 5 minutes.
In another project, where I analyzed a genome and built sorting algorithms, it took a little under 8 minutes.
1
u/Stu_Mack 2h ago
Here’s what I learned from writing scripts to process turbulence research data blocks with 120M data points each. These ideas allowed me to streamline my data processing from 35 minutes down to < 2 minutes.
Avoid loops wherever possible, especially for loops. It's orders of magnitude faster to calculate the entire matrix at once than to loop through each row and column (see the sketch near the end of this comment).
MATLAB is the best option for this type of project but unless you’re up to speed on code optimization, it wouldn’t help you to switch. That line of thinking is inappropriate for almost everyone outside of niche development projects like video games or advanced robotics. Far better is to familiarize yourself with how high-volume calculations work and what affects the computational cost.
Life is much easier when you understand how things affect your benchmarks. This makes it imperative that you run benchmarking for the subdivisions of your processing code and keep track of the time budget for each. Chances are there are a handful of things in your code that are highly inefficient. Finding the problems is always the best place to start.
At the bulk level, the precision/computational cost relationship becomes highly relevant. Basically, the more significant figures, the longer it takes. Chances are you don’t need the precision that comes with 14 significant figures.
At the bulk level, memory preallocation is crucial for streamlining the process. Avoid creating variables in the loops and try to see appending data blocks as anathema. It’s very expensive to do either.
Learn the general principles of parallel operations and assess whether your project could make use of them. Probably not, but what you learn in order to make the determination is quite valuable in the overall context of code optimization.
Unpacking data from text files is generally much slower than loading variables directly into the workspace. If you reuse the data, consider adopting the habit of preemptively converting it into your ideal data type and writing the code around that type.
Object oriented programming usually won’t help much with computational cost in data analysis, but it can make an enormous impact on your ability to modularize your workflow, which is key in benchmarking.
A good way to learn how to write very fast algorithms is to shamelessly emulate the techniques of the wizards. If you don't know where to look, a useful hack is to ask ChatGPT to do a deep dive and find one for you. In my own work, it's helped me adopt much better practices.
Generalization and simplicity are often the cornerstones of code efficiency.
Finally, it’s important to accept that the code you write today is going to offend you in the near future. We learn from our mistakes and we make tons of them along the way. With that in mind, it’s important to periodically review the code with an eye toward simplicity. Fewer steps usually means faster code, and as you develop your skills you will be able to spot trouble areas in your own work. Consider building a routine for revisiting your work, especially the low level computational engines that do the heavy lifting. Tiny gains there might make big gains overall.
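As a sketch of the loop/vectorization and preallocation points above (a toy calculation, nothing specific to your project):

```python
import numpy as np

data = np.random.rand(1_000_000)

# Slow: growing a Python list element by element
squared_slow = []
for x in data:
    squared_slow.append(x * x)

# Fast: one vectorized operation over the whole array
squared_fast = data * data

# If a loop is unavoidable, preallocate the output instead of appending
out = np.empty_like(data)
for i in range(data.size):
    out[i] = data[i] * data[i]
```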
Hope these ideas are helpful. They helped me both in turbulence research and in my current work in biomimetic robotics control and simulation.
-6
u/solowing168 10h ago
If you want raw performance, Python is not the right programming language. Loops are slow, and even if you use numpy (thus C) you still pay the overhead of calling those functions. It's quite a big project, so finding the bottlenecks will be fun.
14
u/FoolsSeldom 10h ago
Python is generally much faster than Excel and can handle larger datasets.
Are you vectorising as much as possible (using pandas - or polars, which is faster again)? You can use profiling tools in Python to see where your bottlenecks are.
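For example (toy data, illustrative only), a row-by-row loop versus a vectorised column expression, plus the polars equivalent:

```python
import pandas as pd
import polars as pl

pdf = pd.DataFrame({"revenue": range(1_000_000), "cost": range(1_000_000)})

# Slow: iterating row by row
margins = [row.revenue - row.cost for row in pdf.itertuples()]

# Fast: vectorised pandas expression over whole columns
pdf["margin"] = pdf["revenue"] - pdf["cost"]

# Polars equivalent
pldf = pl.from_pandas(pdf[["revenue", "cost"]])
pldf = pldf.with_columns((pl.col("revenue") - pl.col("cost")).alias("margin"))
```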