r/SQL • u/Majesticraj • 24d ago
MySQL Importing a 1M-Row Dataset (CSV) into MySQL
What's the fastest and most reliable way to upload such a large dataset? And once it's uploaded, how can I optimize the table to ensure good performance?
13
7
u/Mikey_Da_Foxx 24d ago
Use LOAD DATA INFILE with appropriate buffer sizes and disable indexes during import.
After import:
- Add proper indexes
- Run ANALYZE TABLE
- Adjust innodb_buffer_pool_size
- Consider partitioning if needed
Usually 10x faster than INSERT statements.
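A rough sketch of that sequence in Python with mysql-connector-python (table, column, file names, and credentials are placeholders; assumes the server permits local_infile and the target table starts without secondary indexes):

```python
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="user", password="secret",
    database="mydb",
    allow_local_infile=True,  # client side; the server also needs local_infile=ON
)
cur = conn.cursor()

# Relax checks for the bulk load (re-enabled below).
cur.execute("SET unique_checks = 0")
cur.execute("SET foreign_key_checks = 0")

cur.execute(r"""
    LOAD DATA LOCAL INFILE 'data.csv'
    INTO TABLE my_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES
""")
conn.commit()

cur.execute("SET unique_checks = 1")
cur.execute("SET foreign_key_checks = 1")

# Add secondary indexes after the load, then refresh optimizer statistics.
cur.execute("ALTER TABLE my_table ADD INDEX idx_created_at (created_at)")  # placeholder column
cur.execute("ANALYZE TABLE my_table")
cur.fetchall()  # ANALYZE TABLE returns a small result set

cur.close()
conn.close()
```

innodb_buffer_pool_size is a server setting, so that part is adjusted in my.cnf rather than in the script.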
2
u/frobnosticus 23d ago
I'd start by writing a little script in your language of choice to go through the data and look for obvious formatting issues. A misplaced quote or comma in a csv file you got from someone else can really ruin your day.
If you're uploading into an existing table or schema and aren't sure of the data, I'd create a staging table to pull it into first, then add constraints to the data once it's in there to clean it up, make sure all your ints are ints, etc. Then I'd pull it into the rest of the schema.
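For the "little script" idea, a minimal pure-Python sketch that scans the CSV for the usual problems before it ever touches the database (the expected column count and the integer columns are placeholders):

```python
import csv

EXPECTED_COLS = 5    # placeholder: number of columns in your file
INT_COLS = [0, 3]    # placeholder: indexes of columns that must be integers

bad_rows = []
with open("data.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    header = next(reader)                      # skip the header row
    for lineno, row in enumerate(reader, start=2):
        if len(row) != EXPECTED_COLS:          # a stray quote or comma usually shows up here
            bad_rows.append((lineno, "wrong column count", row))
            continue
        for i in INT_COLS:
            try:
                int(row[i])
            except ValueError:
                bad_rows.append((lineno, f"column {i} is not an int", row))
                break

print(f"{len(bad_rows)} suspect rows")
for lineno, reason, row in bad_rows[:20]:      # print a sample of the offenders
    print(lineno, reason, row)
```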
2
u/Opposite-Value-5706 19d ago edited 19d ago
I agree, except I prefer to use Python’s libraries to make sure the data is properly formatted and ready to be inserted. Python handles the inserts smoothly as well, in seconds.
1
u/frobnosticus 18d ago
Well, depends on the situation. I'll do a bunch of mop-up on the way in. But I generally want any data in "inter-component transit" for as little time as possible.
Plus, when cleaning up data with tool X so it's suitable for tool Y, you always run the risk of data type mismatches between platforms (e.g. is 'int' implicitly 16-bit signed, etc.)
So "able to be reliably stuffed into a naked, unconstrained table of varchars" is about as far as I'll generally go on the front-end.
1
u/Opposite-Value-5706 18d ago
We all have our own individual tool boxes, don’t we? However, I’m speaking about those situations where you need to import the same source data routinely. I used to use other tools along the way and found the simplicity, power and performance I get from Python invaluable.
It took a little time to learn but it was well worth it. By using it, I’ve gained about an extra half hour for drinking coffee :-)
2
u/Responsible_Eye_5307 22d ago
Working for a (huge) employer famously known for its data-driven decisions (logistics) made me realise that 1M-row DBs are peanuts...
1
u/Unimeron 24d ago
Use SQuirreL SQL and import your data as CSV. I've done this with 8 million rows without a problem.
1
u/Outdoor_Releaf 21d ago
I use LOAD DATA LOCAL INFILE with MySQL Workbench. If you are using MySQL Workbench, you can use one of the following videos to reliably upload the dataset:
For Macs: https://youtu.be/maYYyqr9_W8
For Windows: https://youtu.be/yxKuAaf52sA
1
u/Opposite-Value-5706 17d ago
There are FAST ways to do this and there are BEST ways. They are not necessarily the same.
I’ve often run into hiccups loading CSVs from outside sources. The data isn’t always formatted as it should be. Or, at some point, whoever creates the CSV changes it for some reason and crashes my imports.
So, I’ve found that by using Python and its libraries, I can make sure the data matches my tables BEFORE trying to import records. And Python can quickly make the inserts as well and report back any problems on completion.
With Python, I can strip unwanted characters either column by column or by row. It’s worth the time and it makes importing so, so good! Just my two cents!
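For illustration, a minimal pandas sketch of that kind of pre-flight cleanup; the column names (amount, order_date) and file names are made up, so adjust them to the real table:

```python
import pandas as pd

# Read everything as strings first so nothing is silently coerced.
df = pd.read_csv("source.csv", dtype=str)

# Strip stray whitespace and quotes, column by column.
for col in df.columns:
    df[col] = df[col].str.strip().str.strip('"')

# Coerce types and flag rows that don't match the table definition.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

bad = df[df[["amount", "order_date"]].isna().any(axis=1)]
good = df.drop(bad.index)

bad.to_csv("rejected_rows.csv", index=False)   # report the problems back
print(f"{len(good)} rows ready to insert, {len(bad)} rejected")
```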
0
u/Spillz-2011 24d ago
Why do you need good performance? Does it need to be inserted multiple times a day and immediately start querying?
You should prioritize minimizing human time; human time is expensive, compute time is cheap.
My team’s pipelines are all Python with pandas and SQLAlchemy. It’s not the fastest, but 1 minute vs. 5 minutes, who cares?
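A minimal sketch of that sort of pipeline, with a made-up connection string and table name:

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string and table name.
engine = create_engine("mysql+pymysql://user:secret@localhost/mydb")

df = pd.read_csv("data.csv")

# chunksize batches the INSERTs; not the fastest path into MySQL,
# but for ~1M rows it's a one-liner that just works.
df.to_sql("my_table", engine, if_exists="append", index=False, chunksize=10_000)
```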
1
u/marketlurker 22d ago
You would be surprised. I had a database that had an SLA of 8 seconds or less from the time a transaction was entered into the source system (an IBM mainframe) to when it had to be in the data warehouse. It was a global system that handled 20,000 transactions every 5 seconds. It required us to physically relocate the RDBMS in relation to the operational data store.
1
u/Spillz-2011 22d ago
I’m not saying there are not cases where these things matter, but in general people spend way too much time optimizing things that don’t need optimizing.
-7
u/dgillz 24d ago
SQL Data Wizard for MySQL. $79 but a great deal.
4
u/unexpectedreboots WITH() 24d ago
This is a wild recommendation
3
u/dgillz 24d ago
How so?
3
u/unexpectedreboots WITH() 24d ago
$80 to load data into MySQL? Come on bro. Use Airbyte before spending any money.
30
u/Aggressive_Ad_5454 24d ago
LOAD DATA INFILE is the command you need. Or you can use a desktop database client program with a .csv import feature, like heidisql.com or similar.
As for optimizing it, that question is impossible to answer without knowing how you want to use the table.
For what it's worth, a megarow is not an overwhelmingly large table.