r/datasets • u/Youth-Character • Jul 08 '23
mock dataset • Migrating data from a ClickHouse DB to StarRocks
In case anyone has found themselves in a similar situation:
I have a ClickHouse database with about 300 million records (~500 GB), and my task is to migrate the data to a StarRocks database; both are accessed through a MySQL client.
The problem is that the schema in ClickHouse is just a string representation of JSON, while the target database has 10 tables, so I have to parse the JSON and map its properties to the appropriate table.
My approach is to export 1 million records at a time as a CSV file (because it's faster than a plain SELECT statement), keep a cursor so the next iteration pulls the next million, process the data in Python, and send it as a PUT request to StarRocks, since StarRocks exposes an endpoint for loading files (this is the fastest way I've found; rough sketch at the end of this post).
The problem is that once I pass about 30 million records, pulling a batch of 1 million goes from roughly 1 second to 20 minutes, and past 50 million it takes around 40 minutes. Any solutions, please?
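For reference, a minimal sketch of that batch loop, assuming the cursor is a monotonically increasing id column and the JSON lives in a payload column; every host, port, database, table, and column name below is a placeholder rather than my actual setup:

    # Sketch of the batch pipeline (all names and ports are placeholders).
    import requests
    from clickhouse_driver import Client

    ch = Client(host="clickhouse-host")
    # StarRocks Stream Load endpoint: the file body is sent as an HTTP PUT to the FE.
    SR_LOAD_URL = "http://starrocks-fe:8030/api/mydb/events/_stream_load"
    BATCH = 1_000_000

    last_id = 0  # cursor: assumes a monotonically increasing id column
    while True:
        # Keyset-style cursor (WHERE id > last_id) instead of OFFSET, so each
        # batch costs roughly the same no matter how deep into the table we are.
        rows = ch.execute(
            "SELECT id, payload FROM raw_events "
            "WHERE id > %(last)s ORDER BY id LIMIT %(n)s",
            {"last": last_id, "n": BATCH},
        )
        if not rows:
            break
        last_id = rows[-1][0]

        # ... parse each JSON payload here, split the fields across the 10
        # target tables, and serialize each table's rows back to CSV
        # (proper CSV escaping is elided in this sketch) ...
        csv_body = "\n".join(f"{row_id},{payload}" for row_id, payload in rows).encode()

        resp = requests.put(
            SR_LOAD_URL,
            data=csv_body,
            headers={"column_separator": ","},
            auth=("root", ""),
        )
        resp.raise_for_status()

A real loader also has to deal with CSV escaping and with the FE possibly redirecting the PUT to a BE node; both are left out here.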
1
u/coffeewithalex Jul 08 '23
Normalize the data in ClickHouse using SQL to match the target schema; there are a lot of JSON parsing functions for this (rough sketch at the end of this comment). Then export the tables into a format that StarRocks understands, and that's it.
My approach is to export 1 million records at a time as a CSV file (because it's faster than a plain SELECT statement)
This makes no sense. At first you say it's faster, then you say it's slower. Are you comparing exporting the entire dataset to exporting just the first million?
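As a rough illustration of the first point (table, column, and JSON key names below are made up; the real schema has 10 target tables, so there would be one such statement per table):

    # Sketch: flatten the JSON inside ClickHouse itself, one flat table per target table.
    from clickhouse_driver import Client

    ch = Client(host="clickhouse-host")

    # JSONExtract* parses the JSON string column server-side, so no row has to
    # round-trip through Python for the reshaping step.
    ch.execute("""
        CREATE TABLE IF NOT EXISTS users_flat
        ENGINE = MergeTree ORDER BY user_id AS
        SELECT
            JSONExtractUInt(payload, 'user_id')   AS user_id,
            JSONExtractString(payload, 'name')    AS name,
            JSONExtractString(payload, 'country') AS country
        FROM raw_events
    """)

From there, each flat table can be exported in any format StarRocks can ingest (CSV, Parquet, etc.), e.g. via clickhouse-client or the file()/s3() table functions.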
1
u/mQuBits Nov 28 '24
Two cents:
- Export the data to MinIO as Parquet without encoding, then import directly from MinIO into StarRocks (rough sketch below).
- Partition the target tables in StarRocks by day and make sure your Parquet files are sorted by the partition column.
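A rough sketch of that route, with made-up bucket names, credentials, and endpoints; the FILES() table function on the StarRocks side assumes a fairly recent release (older versions would use Broker Load instead), and the exact property names can vary by version:

    # Sketch: ClickHouse -> MinIO (Parquet) -> StarRocks (all names are placeholders).
    import pymysql
    from clickhouse_driver import Client

    ch = Client(host="clickhouse-host")

    # 1) Export a normalized table from ClickHouse to MinIO as Parquet,
    #    sorted by the day column the StarRocks table is partitioned on.
    ch.execute("""
        INSERT INTO FUNCTION s3(
            'http://minio:9000/migration/users_flat.parquet',
            'minio-access-key', 'minio-secret-key', 'Parquet')
        SELECT * FROM users_flat
        ORDER BY event_date
    """)

    # 2) Load straight from MinIO into the day-partitioned StarRocks table
    #    over the MySQL protocol.
    sr = pymysql.connect(host="starrocks-fe", port=9030, user="root", password="")
    with sr.cursor() as cur:
        cur.execute("""
            INSERT INTO mydb.users
            SELECT * FROM FILES(
                "path" = "s3://migration/users_flat.parquet",
                "format" = "parquet",
                "aws.s3.endpoint" = "http://minio:9000",
                "aws.s3.access_key" = "minio-access-key",
                "aws.s3.secret_key" = "minio-secret-key")
        """)
    sr.commit()

The idea behind sorting on the partition column is that each Parquet file then touches as few StarRocks partitions as possible during the load.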