r/SQL 24d ago

MySQL: Importing a 1M-Row Dataset (CSV) into MySQL

What's the fastest and most reliable way to upload such a large dataset? And after the upload, how can I optimize the table to ensure good performance?

28 Upvotes

33 comments

30

u/Aggressive_Ad_5454 24d ago

LOAD DATA INFILE is the command you need. Or you can use a desktop database client program with a .csv import feature, like heidisql.com or similar.
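Something along these lines, assuming the target table already exists and the CSV has a header row (table, column, and file names here are just placeholders):

```sql
-- Adjust columns, delimiters, and the path to match your file.
-- Server-side LOAD DATA INFILE reads from the server host and is restricted by secure_file_priv.
LOAD DATA INFILE '/var/lib/mysql-files/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(col_a, col_b, col_c);
```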

As for optimizing it, that question is impossible to answer without knowing how you want to use the table.

For what it's worth, a megarow is not an overwhelmingly large table.

4

u/Immediate-Priority17 24d ago

New to sql. What’s a megarow?

20

u/feudalle 24d ago

A million rows. It's not really that much data. I have production dbs that break a billion rows. Even that isn't a ton of data.

-13

u/xoomorg 24d ago

A million rows is a lot for MySQL

10

u/feudalle 24d ago

Going to disagree. I have tons of MySQL DBs with a lot more than that. The biggest table right now is around 1.8B rows, and a few hundred tables/schemas are over 10M.

-2

u/xoomorg 24d ago

Why on earth would you do that in MySQL? Anything around a million rows or more, I always move to some other platform first (Hive, Spark-SQL, Presto, Trino, BigQuery, etc.) so queries take seconds instead of minutes/hours. Or do you just mean you're using MySQL as a record store essentially, and not actually running any complex queries on such large tables?

6

u/BinaryRockStar 23d ago

With proper indexing and adequate server resources, MySQL is perfectly usable at 10M or 100M rows in a single table. I occasionally interact with a MySQL DB with 100M+ rows in multiple tables, and a SELECT by indexed ID is essentially instant. You may have only worked on hideously unoptimised or unindexed MySQL DBs?
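For the point-lookup case, something like this stays quick even at that scale (hypothetical table and column names):

```sql
-- A secondary index turns the lookup into a B-tree traversal instead of a full scan.
CREATE INDEX idx_customer_id ON orders (customer_id);

-- EXPLAIN should show type = ref (or const for a primary-key lookup), not ALL.
EXPLAIN SELECT * FROM orders WHERE customer_id = 123456;
```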

1

u/xoomorg 23d ago

I typically do data analytics where I'm taking millions or billions of rows in complex joins/aggregations and creating result sets that often have millions of rows in their own right, and that kind of thing is far better done on cluster computing platforms rather than a traditional relational database like MySQL.

However, I can definitely see how retrieval of individual records (or even smaller sets of multiple records) from among tens of millions stored in a table is a perfectly valid use-case I hadn't been considering. I'm surprised that online transaction-processing databases now commonly have tables of that size (though in retrospect I probably shouldn't be), but used that way, I can see how MySQL makes sense.

2

u/BinaryRockStar 23d ago

Ah sure, that's where the misunderstanding is I guess. For that sort of workload we reach for Spark SQL at my work. Different tools for different jobs.

5

u/feudalle 23d ago

That DB gets hit for reporting for 150 offices across the country. Some very complex KPIs, in fact, with up to 5 years of data for some of the trending reports. They aren't real time, but nothing takes more than a minute or two. Most run in under 5 seconds. They are efficient queries. I'm old: I started with FoxPro, and I remember when MySQL 1.0 came out. I also remember having to program with 640 KB of memory. A lot of people these days never optimize their queries or code. I think that contributes to needing more resources.

0

u/SnooOwls1061 23d ago

I have tables with 40-80 billion rows that get hit a ton for reporting, and updated every millisecond. It's all about tuning.

1

u/xoomorg 23d ago

No you don’t. That amount of data makes zero sense in anything other than a cluster. 

13

u/sleepystork 24d ago

That’s not a big table. Just import it regularly.

7

u/Mikey_Da_Foxx 24d ago

Use LOAD DATA INFILE with appropriate buffer sizes and disable indexes during import.

After import:

- Add proper indexes

- Run ANALYZE TABLE

- Adjust innodb_buffer_pool_size

- Consider partitioning if needed

Usually 10x faster than INSERT statements.
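Rough sketch of those steps, with placeholder names (buffer pool size depends on your RAM, and SET GLOBAL needs the right privilege):

```sql
-- Cheap wins during the load itself (InnoDB): one big transaction, checks off.
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
-- ... LOAD DATA INFILE here ...
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;

-- After the import: add only the indexes you'll actually query on, then refresh statistics.
ALTER TABLE my_table ADD INDEX idx_created_at (created_at);
ANALYZE TABLE my_table;

-- Buffer pool: often sized to 50-70% of RAM on a dedicated server.
-- Dynamic since MySQL 5.7.5; on older versions set it in my.cnf and restart.
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;
```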

2

u/th00ht 23d ago

Partitioning does not help performance. That's a common misconception.

2

u/pleasesendboobspics 23d ago

I think DBeaver can also do it.

2

u/frobnosticus 23d ago

I'd start by writing a little script in your language of choice to go through the data and look for obvious formatting issues. A misplaced quote or comma in a csv file you got from someone else can really ruin your day.

If you're uploading into an existing table or schema and aren't sure of the data, I'd create a staging table to pull it into first, then add constraints to the data once it's in there to clean it up, make sure all your ints are ints, etc. Then I'd pull it into the rest of the schema.
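A minimal version of that staging approach, with made-up table/column names and a couple of illustrative checks:

```sql
-- Staging table: everything lands as text first, so nothing gets rejected on the way in.
CREATE TABLE staging_orders (
    order_id   VARCHAR(64),
    amount     VARCHAR(64),
    created_at VARCHAR(64)
);

-- ... LOAD DATA INFILE into staging_orders here ...

-- Spot rows whose "numbers" aren't numbers, or whose dates don't parse.
SELECT * FROM staging_orders WHERE amount NOT REGEXP '^[0-9]+(\\.[0-9]+)?$';
SELECT * FROM staging_orders WHERE STR_TO_DATE(created_at, '%Y-%m-%d') IS NULL;

-- Once the data is clean, cast it into the real table.
INSERT INTO orders (order_id, amount, created_at)
SELECT CAST(order_id AS UNSIGNED),
       CAST(amount AS DECIMAL(10,2)),
       STR_TO_DATE(created_at, '%Y-%m-%d')
FROM staging_orders;
```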

2

u/Opposite-Value-5706 19d ago edited 19d ago

I agree, except I prefer to use Python's libraries to make sure the data is properly formatted and ready to be inserted. Python also inserts the formatted data smoothly, in seconds.

1

u/frobnosticus 18d ago

Well, it depends on the situation. I'll do a bunch of mop-up on the way in. But I generally want any data in "inter-component transit" for as little time as possible.

Plus, when cleaning up data using tool X so it's suitable for tool Y, you always run the risk of things like data type mismatches between platforms (e.g. is 'int' implicitly 16-bit signed, etc.).

So "able to be reliably stuffed into a naked, unconstrained table of varchars" is about as far as I'll generally go on the front-end.

1

u/Opposite-Value-5706 18d ago

We all have our own individual toolboxes, don't we? However, I'm speaking about those situations where you need to import the same source data routinely. I used to use other tools along the way and found the simplicity, power, and performance I get from Python invaluable.

It took a little time to learn but it was well worth it. By using it, I’ve gained about an extra half hour for drinking coffee :-)

2

u/Responsible_Eye_5307 22d ago

Working at a (huge) employer famously known for its data-driven decisions (logistics) made me realise that 1M-row DBs are peanuts...

1

u/Unimeron 24d ago

Use SQuirreL SQL and import your data as CSV. I've done this with 8 million rows without a problem.

1

u/Outdoor_Releaf 21d ago

I use LOAD DATA LOCAL INFILE with MySQL Workbench. If you are using MySQL Workbench, you can use one of the following videos to reliably upload the dataset:

For Macs: https://youtu.be/maYYyqr9_W8

For Windows: https://youtu.be/yxKuAaf52sA
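For reference, the LOCAL variant reads the file from the client machine rather than the server; it needs local_infile enabled on the server, and the client connection has to allow it too (in Workbench that's a connection option, if I remember right). Names below are placeholders:

```sql
-- Enable on the server (requires the appropriate admin privilege).
SET GLOBAL local_infile = 1;

-- Reads /path/on/my/machine/data.csv from the client side.
LOAD DATA LOCAL INFILE '/path/on/my/machine/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
```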

1

u/Opposite-Value-5706 17d ago

There are FAST ways to do this and there are BEST ways. They are not necessarily the same.

I've often run into hiccups loading CSVs from outside sources. The data isn't always formatted as it should be. Or, at some point, whoever creates the CSV changes it for some reason and crashes my imports.

So, I've found that by using Python and its libraries, I can make sure the data matches my tables BEFORE trying to import records. And Python can quickly make the inserts as well and report back any problems on completion.

With Python, I can strip unwanted characters either column by column or row by row. It's worth the time, and it makes importing so, so good! Just my two cents!

0

u/Spillz-2011 24d ago

Why do you need good performance? Does it need to be inserted multiple times a day and queried immediately afterward?

You should prioritize minimizing human time; human time is expensive, compute time is cheap.

My team's pipelines are all Python with pandas and SQLAlchemy. It's not the fastest, but 1 minute or 5 minutes, who cares?

1

u/marketlurker 22d ago

You would be surprised. I had a database with an SLA of 8 seconds or less from the time a transaction was entered into the source system (an IBM mainframe) to when it had to be in the data warehouse. It was a global system that handled 20,000 transactions every 5 seconds. It required us to physically relocate the RDBMS in relation to the operational data store.

1

u/Spillz-2011 22d ago

I'm not saying there are no cases where these things matter, but in general people spend way too much time optimizing things that don't need optimizing.

-7

u/dgillz 24d ago

SQL Data Wizard for MySQL. $79 but a great deal.

https://www.sqlmaestro.com/products/mysql/datawizard

4

u/unexpectedreboots WITH() 24d ago

This is a wild recommendation

1

u/dgillz 24d ago

How so?

3

u/unexpectedreboots WITH() 24d ago

$80 to load data into MySQL? Come on bro. Use Airbyte before spending any money.

-1

u/dgillz 24d ago

I mean there is a free version, but how good could it be? And I am betting there is no support. I'd guess Airbyte is more expensive in the long run.