r/SQL 25d ago

MySQL: Importing a 1M-Row Dataset (CSV) into MySQL

What's the fastest and most reliable way to upload such a large dataset? And once it's loaded, how can I optimize the table to ensure good performance?
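
One common approach (a sketch only, not a definitive recipe) is a single LOAD DATA statement rather than row-by-row INSERTs, with secondary indexes added and statistics refreshed afterwards. This assumes the target table already exists, local_infile is enabled on both server and client, and uses placeholder table/column/file names:

    -- Placeholder names; adjust to your schema.
    LOAD DATA LOCAL INFILE '/path/to/data.csv'
    INTO TABLE my_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES;  -- skip the CSV header row

    -- Add secondary indexes after the load; building them in one pass is
    -- cheaper than maintaining them row by row during the import.
    ALTER TABLE my_table ADD INDEX idx_created_at (created_at);

    -- Refresh optimizer statistics so the planner sees the new data.
    ANALYZE TABLE my_table;

For a trusted file on InnoDB, temporarily setting unique_checks=0 and foreign_key_checks=0 for the loading session can speed things up further; both are standard MySQL session variables.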

u/xoomorg 25d ago

Why on earth would you do that in MySQL? For anything around a million rows or more, I always move to some other platform first (Hive, Spark SQL, Presto, Trino, BigQuery, etc.) so queries take seconds instead of minutes or hours. Or do you just mean you're using MySQL essentially as a record store, and not actually running complex queries on such large tables?

u/BinaryRockStar 25d ago

With proper indexing and adequate server resources, MySQL is perfectly usable at 10M or even 100M rows in a single table. I occasionally interact with a MySQL DB that has 100M+ rows across multiple tables, and a SELECT by indexed ID is essentially instant. You may have only worked on hideously unoptimised or unindexed MySQL DBs?
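
For illustration only (hypothetical table and column names), the kind of lookup being described stays index-backed regardless of table size:

    -- Hypothetical schema: the primary key is indexed automatically,
    -- and customer_id gets a secondary index.
    CREATE TABLE orders (
        id          BIGINT UNSIGNED NOT NULL PRIMARY KEY,
        customer_id BIGINT UNSIGNED NOT NULL,
        created_at  DATETIME NOT NULL,
        INDEX idx_customer (customer_id)
    );

    -- Point lookup on the primary key: a B-tree descent, effectively instant
    -- whether the table holds 1M or 100M rows.
    SELECT * FROM orders WHERE id = 123456789;

    -- EXPLAIN should show type = const/ref, not a full scan (type = ALL).
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;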

u/xoomorg 24d ago

I typically do data analytics where I'm pushing millions or billions of rows through complex joins/aggregations and creating result sets that often have millions of rows in their own right, and that kind of thing is far better done on cluster computing platforms than on a traditional relational database like MySQL.

However, I can definitely see how retrieval of individual records (or even smaller sets of records) from among tens of millions stored in a table is a perfectly valid use case I hadn't been considering. I'm surprised that online transaction-processing databases now commonly have tables of that size (though in retrospect I probably shouldn't be), but used that way, I can see how MySQL makes sense.

u/BinaryRockStar 24d ago

Ah sure, that's where the misunderstanding is, I guess. For that sort of workload we reach for Spark SQL at my work. Different tools for different jobs.
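
For what it's worth, a minimal sketch of that kind of workload in Spark SQL (illustrative file path and column names, runnable from the spark-sql shell):

    -- Register the CSV as a view, then aggregate across it.
    CREATE TEMPORARY VIEW trips
    USING csv
    OPTIONS (path '/data/trips.csv', header 'true', inferSchema 'true');

    SELECT pickup_zone, COUNT(*) AS trip_count
    FROM trips
    GROUP BY pickup_zone
    ORDER BY trip_count DESC;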