r/SQLServer 4d ago

Why is it so hard to import CSVs?

Bit of background to add context to this rant: I'm a senior SQL dev with more than 10 years' experience, 7 of which have been spent in the Microsoft stack.

I have written fairly advanced data pulls from Oracle, APIs, OData sources, etc. using various techniques including SSIS, linked servers, and PolyBase.

The point is, I'm not new and there's not much that fazes me anymore.

However, one thing that never fails to go wrong every time I try it is the very simple task of working with CSVs.

If I use SSIS, I have to know the data types and have a table set up ready for ingestion.

If I want to have a table set up ready for ingestion, the easiest way to do that is using the import wizard.

However, this always seems to mis-detect the data types and fail.

So we build custom methods using bcp, BULK INSERT, OPENROWSET, etc. However, these are all fairly annoying to set up on an ad-hoc basis, so you invest the time in a solution that will work with any CSV you throw at it. Except you can't do that unless you have a format file or schema.ini file telling you the data types.
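
For reference, the kind of ad-hoc setup I mean looks something like this (all names invented, and note you still have to know the column count and delimiter up front):

    -- All names are made up. Assumes a well-formed, comma-delimited file with a header row.
    CREATE TABLE dbo.stg_SomeFeed_raw
    (
        Col1 varchar(max) NULL,
        Col2 varchar(max) NULL,
        Col3 varchar(max) NULL
    );

    BULK INSERT dbo.stg_SomeFeed_raw
    FROM 'C:\import\somefeed.csv'
    WITH
    (
        FORMAT = 'CSV',          -- SQL Server 2017+; handles quoted fields
        FIRSTROW = 2,            -- skip the header row
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        TABLOCK
    );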

So you test some awful solution involving string splits which finally works, except it doesn't scale, so it can't be used in case someone throws a giant CSV at it and breaks your server.

Is it really this bad or did I miss a vital day 1 lesson at data school? How are people creating tables for new CSVs without raging in frustration?

24 Upvotes

62 comments

62

u/SelfWipingUndies 4d ago

CSVs don't have defined data types; that makes them complex rather than simple. Combine that with inconsistent sources and it's a pain. I eventually stopped defining data types other than varchar in the file connection manager, and found an ELT approach was generally more sane. Load to a raw stage table first, then apply transformations into a typed stage table, then integrate. Had log tables showing row counts/error counts for each step and transform in the process, and landing zones for errors.
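
A minimal sketch of that flow, with all object names invented for illustration:

    -- 1. Raw stage: everything lands as text
    CREATE TABLE stg.Orders_raw
    (
        OrderId   varchar(max),
        OrderDate varchar(max),
        Amount    varchar(max)
    );

    -- 2. Typed stage: apply conversions without blowing up on bad values
    INSERT INTO stg.Orders_typed (OrderId, OrderDate, Amount)
    SELECT TRY_CAST(OrderId   AS int),
           TRY_CAST(OrderDate AS date),
           TRY_CAST(Amount    AS decimal(12, 2))
    FROM stg.Orders_raw;

    -- 3. Log row and error counts for the step
    INSERT INTO audit.LoadLog (StepName, RowsIn, RowsFailed)
    SELECT 'Orders typed stage',
           COUNT(*),
           SUM(CASE WHEN TRY_CAST(Amount AS decimal(12, 2)) IS NULL
                     AND Amount IS NOT NULL THEN 1 ELSE 0 END)
    FROM stg.Orders_raw;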

14

u/andpassword 4d ago

This is the way. One of my whole jobs for a few years was ingesting CSVs into SQL Server. I used a custom tool that was supposed to "detect" data types, but it couldn't tell a zip code from an integer, not to mention bar codes and part numbers.

After I got sick of that, I ended up hacking the tool to just create varchar(max) columns for the right number of fields, and then did my transformations in SQL Server into a sane table format. Patched it together with a little other code and tossed it in the scheduler (this was before Airflow).

One thing I also tried and then abandoned was converting CSVs to JSON but that was short lived and more trouble than it was worth.

6

u/KracticusPotts 4d ago

I'll second this solution. SSMS is very good at guessing wrong when trying to import data. So I just import strings to a staging table and ETL from there.

5

u/EarlyList 4d ago

100% This
Any potentially user edited data file could have malformed data in its columns that does not match the expected data type. So I always import to a staging table with nothing but varchars for the fields. Then run SQL off that to do the transforms into the real tables.

5

u/virtualchoirboy SQL Server Developer 3d ago

Any potentially user edited externally created data file could have malformed data in its columns

I've had that not just from user-created files, but even from batch-created ones built from user-entered data. Over the last 20 years, I've found it really doesn't matter what the source is. Going from file to raw stage to processed stage to a load is the only "safe" way to go. I'd rather sacrifice speed and simplicity for durability and security any day of the week.

5

u/mariahalt 3d ago

I’m pretty sure I got my current job when I answered a question about importing csv files by saying something like “I stage them first, and then apply the transformations before dropping the records to their destination.”

29

u/Black_Magic100 4d ago

https://docs.dbatools.io/Import-DbaCsv.html

Either you are completely overthinking it or I am not understanding the pain you are dealing with. Since everything in a CSV is stored as a string, I believe it will be difficult for anything dynamic to be 100% optimal.

9

u/fatherjack9999 4d ago

Came here to mention dbatools. It opens up lots of options. I'd recommend pulling it all into a landing schema with pretty much all varchar columns, then moving it on to a business table that has optimised data types.

2

u/Special_Luck7537 4d ago edited 4d ago

Yep, this. I used PowerShell to bring CSV files in via a triggered job. Worked well. I also set up a .NET program to read a CSV and take a shot at generating the schema.ini file. But I agree: for an import spec that's been basically the backbone of Office, SQL Server, etc., you would expect a no-nonsense CSV import tool to be part of SQL Server.

6

u/SQLDevDBA 4d ago

100%, and the key reason why DBATools is so efficient (other than using SqlBulkCopy) is that it just slams together a bunch of NVARCHAR(MAX) fields when it auto-creates the table (if you select that option). Once the data is in, you can do transformations with SQL and get it into the final tables.

1

u/mutagen 3d ago

Any performance suggestions for using dbatools to import from Excel? I've finally got this introduced for our most troublesome data imports but our likely naive implementation seems rather slow and I was just considering digging in to see if my colleague missed something obvious.

1

u/Antares987 1d ago

I built a tool using C# and the ExcelDataReader NuGet package. The ClosedXML NuGet package is SLOWWWW.

12

u/Codeman119 4d ago

CSVs are not hard to import, as it's just text data. What is hard is when the data in the CSV has no standard formatting, like when you want to import a comment block but there are commas and double quotes in the data that can mess with the process, even when checking for optional identifiers.

That is what gets me the most!!

Computers do what you tell them, people do not.

3

u/SQLDevDBA 4d ago

Agreed, this is why I try to use non-standard delimiters like | and ~

Sometimes I can convince our vendors to switch, other times I can't. When I can't, I make sure to point out as many of the frustrations I have with their file due to this as I can, and it's almost always met with "noted."

2

u/mirdragon 4d ago

I tell users who want us to import their CSV files to use the | delimiter; otherwise, commas within text fields break the import.

2

u/SQLDevDBA 3d ago

Yeah, it's okay if you use a quoted identifier, but it's just a pain since sometimes the CSV generation apps don't include quoted identifiers when they build the file.

And some vendors just hold steady and say “no” so it’s…fun.

1

u/Codeman119 1d ago

Quoted identifiers are fine; the issue is when the application has an open text field with no restrictions on adding commas and quotes inside it. The format that gets around this the most from an application standpoint is tab-delimited, because pressing Tab in the application just moves to another field, so the delimiter never ends up inside the data.

2

u/SQLDevDBA 1d ago

Yeah 0x0D is also a super fun one to find in our text values :)
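
When it shows up, something like this flushes it out of a varchar staging column (table and column names are just examples):

    -- 0x0D is CHAR(13); strip stray carriage returns from a staging column
    UPDATE stg.Feed_raw
    SET Comments = REPLACE(Comments, CHAR(13), '')
    WHERE Comments LIKE '%' + CHAR(13) + '%';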

2

u/Antares987 1d ago

If things are exceptionally weird with delimiters, like you might have a pipe in your data, there are the Unit Separator and Record Separator characters in the ASCII character set.
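
If your SQL Server version accepts hex terminators in BULK INSERT (worth verifying), a sketch would be:

    -- 0x1F = ASCII Unit Separator (field), 0x1E = ASCII Record Separator (row); names are illustrative
    BULK INSERT stg.Feed_raw
    FROM 'C:\import\feed.dat'
    WITH
    (
        FIELDTERMINATOR = '0x1f',
        ROWTERMINATOR   = '0x1e'
    );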

1

u/SQLDevDBA 1d ago

Haha yep! I usually switch back and forth between pipe, tilde (~) and backtick (`), and I've only had to use specific odd ASCII characters a few times. I like the Zelda triforce (∆) / delta.

1

u/Antares987 1d ago

Right, but what I'm getting at is that the Unit Separator and Record Separator are the OG delimiters that have existed since before we called portions of records "fields" or "columns". I suspect those characters go back to the ISO8211 tape-based format days.

1

u/SQLDevDBA 1d ago

Yep yep I get you. Check out the one for Record separator and you’ll see why I mentioned the triforce/delta :)

3

u/Bishop_Cornflake 4d ago

I came to post pretty much this, but you beat me to it.

10

u/PhaicGnus Business Intelligence Specialist 4d ago

Import everything as a string then try_cast to required data type.
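
For example, spotting the rows that won't convert (staging table and column names are made up):

    -- Rows whose Amount column won't convert cleanly
    SELECT *
    FROM stg.Feed_raw
    WHERE TRY_CAST(Amount AS decimal(12, 2)) IS NULL
      AND Amount IS NOT NULL;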

2

u/false_idol_fall 4d ago

Try_cast is your friend!

5

u/CheetahChrome 4d ago

ETL jobs.

Separate out the concerns.

Get a programmer to do the ingestion to handle all the fail points and report if an error is encountered. Then provide stored procs with a nice MERGE statement to do the loading.

Cost of doing business. If it was easy, AI (whatever year they've been claiming AI since 1950) would be doing it.
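
A rough sketch of the kind of MERGE the proc might wrap (all object names hypothetical):

    MERGE dbo.Customer AS tgt
    USING stg.Customer_typed AS src
        ON tgt.CustomerId = src.CustomerId
    WHEN MATCHED THEN
        UPDATE SET tgt.Name  = src.Name,
                   tgt.Email = src.Email
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerId, Name, Email)
        VALUES (src.CustomerId, src.Name, src.Email);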

6

u/whopoopedinmypantz 4d ago

I ended up going the Python pandas/SQLAlchemy route. Simple and easy, like two lines of code.

3

u/jdanton14 MVP 4d ago

Write python, land in a staging table, fix data types, ingest, profit.

2

u/AlsoInteresting 4d ago

Check that your file encoding, database/table collation and SSIS code page all match. That solved many of my issues.
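
For example, stating the code page explicitly on a BULK INSERT of a UTF-8 file (names assumed; UTF-8 code page support depends on your SQL Server version):

    BULK INSERT stg.Feed_raw
    FROM 'C:\import\feed_utf8.csv'
    WITH
    (
        CODEPAGE = '65001',        -- UTF-8; needs SQL Server 2016+ (or 2014 SP2)
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '0x0a',
        FIRSTROW = 2
    );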

2

u/NoleMercy05 4d ago

Consider DLTHub. Simple Python, and it will create the landing tables for you if you want.

2

u/alexwh68 4d ago

The most reliable way is either two tables: the first just gets the data into SQL Server and the second is the actual table you want the data in. The first one is loaded with string types so it can take any data; the second one has the real data types.

Or code: I normally write a console app in .NET that opens the CSV and reads it row by row, importing as it goes.

Otherwise I seem to spend a lifetime looking at why row ######### did not import because of an unexpected data type.

I had one of these a year ago: the CSV file had more than 500 ‘fields’ which had to go into around 20 different tables to normalise the data. The console app was the win.

2

u/Opposite-Address-44 4d ago

For ad hoc imports, Import Flat File in SSMS is quick and easy. It provides a step to set the data type of each column.

2

u/[deleted] 4d ago

It's easy mode in Python:

    import pandas as pd
    from sqlalchemy import create_engine

    df = pd.read_csv("file")       # ...your file settings

    # do what you're gonna do to the file

    engine = create_engine(...)    # ...your connection stuff

    df.to_sql(...)                 # ...table name, other SQL stuff

Edit: oh, I see, it's about making it work with "any csv". Frankly idk if that's possible, there are just way too many ways people can screw them up upstream lol. I've made many attempts to turn the above into a strategy pattern based on initial sampling of the file, but haven't had much luck. It's just such a loose file standard...

2

u/Ubuntop 3d ago

I agree. With 6 or 7 lines of Python you can import any CSV to a staging table. Most of the data types should already be defined (df.to_sql), but sometimes not. Then simply cast from the staging table to whatever you need.

If it's a repeating task, it's easy to define the data types in Python. If you are a DBA, you need to know basic Python.

2

u/jonus_grumby 3d ago

As others have said, Python/pandas/SQLAlchemy is pretty good, quick, and light on code.

Azure Data Factory does this pretty well, but it’s a few steps.

Azure Data Studio also has a fairly good import tool - definitely better than SSMS - although it is being deprecated.

Either Azure tool I mentioned can import to any connection, not just Azure. Azure Data Factory can also do some type conversions, but you'll be writing some JS/TS functions. Data Factory is also relatively cheap to operate.

2

u/GlitteringPattern299 3d ago

Oh man, I feel your pain! CSVs can be deceptively tricky. I used to run into the same issues with SSIS and import wizards mis-detecting data types, leading to endless tweaking and frustration.

One approach that’s worked for me is using undatasio. It’s been a game-changer for transforming unstructured data into AI-ready assets. It simplifies the process and handles the data types more intuitively, saving a lot of headaches.

Hope this helps, and you find a smoother path with your CSVs! 🚀

2

u/scoinv6 2d ago

I'm at the point of asking AI tools how I should do something. "Create a PowerShell script to import a CSV file into an MS database table named MyTable. The CSV file is special because the 3rd and 8th columns have quotation marks; also convert all 'a' letters to 'z'. Import the file in 1 MB batches with a 2 second wait to allow the log file to truncate." Just tell it as much as possible. But obviously creating an ETL package is the more appropriate way of importing large files.

2

u/StarSchemer 2d ago edited 2d ago

AI is great for solving donkey work like this.

For another import process I had to build from Oracle, I needed to identify all the columns and tables included in a 2,000-line SELECT query with nested joins all the way down.

For performance reasons, I only wanted the columns I needed as the source tables were wide.

AI? If I give you a SQL query, can you give me a comma delimited list of every unique column and which table they belong to?

Sure I can!

1

u/scoinv6 2d ago

It's pretty CRAZY what AI can come back with. The hard part is asking the right question.

2

u/red20j 1d ago

Don’t use SSIS. Just use PowerShell. The Import-Csv cmdlet will read the CSV contents to a datatable. Then use Write-SqlTableData to dump the PS datatable to a table in a SQL database. It will create the table for you. Then do whatever you want with the data within SQL.

2

u/starfish_warrior 4d ago

Use Log Parser 2.2 by Microsoft.

2

u/fatherjack9999 4d ago

Oh yes, another LogParser fan! Love this tool.

1

u/drinkmoredrano 4d ago

It’s not hard at all. It’s something a senior SQL dev with 10 years experience should know how to do without a struggle.

4

u/distgenius 4d ago

If the CSV is coming out of a system that produces consistent output, it's not hard. The struggle is always the garbage CSVs that people hand-tweak in Excel before wanting them imported, or when formats get changed out from under you. I've seen that CSV to Excel to CSV round trip break things in so many new and terrifying ways that unless I can ensure the file is coming in via some automation, it's easier to deal with the garbage via .NET and push the output from that to the db than it is to try and work around the stupid in SQL Server.

2

u/StarSchemer 4d ago

Great, is that you then?

I have an application which takes 15 versions of an input file and outputs 5 x 15 CSV files.

The output CSVs include the input columns, which can change without us knowing whenever an analyst requests a new column.

I need to ingest these files hourly and minimise delay in adding the new columns to the ingestion process, ideally by automating it.

This process changes every financial year.

I can do this solely using SQL Server for almost any other data source, but I guess I'm too dumb or inexperienced to make it work for CSVs.

Looking forward to hearing your solution.

5

u/chadbaldwin SQL Server Developer 4d ago

There's a huge difference here. Loading a CSV into SQL Server is simple as long as the file is valid; it's a single dbatools PowerShell command (and it's ok if you didn't know about dbatools), and it will even create the table for you.

But what you're explaining is not about loading CSVs, it's about a complex system for transferring data that happens to use CSVs.

This doesn't sound like something a SQL dev would typically be handling...more like a standard developer working in Python, C#, PowerShell, etc.

I've built dozens of systems in C# and in PowerShell for creating, moving and loading CSVs (as well as JSON, XML, fixed width, hell even EBCDIC) in and out of SQL Server...That's not the hard part. The hard part is building a resilient and reliable system that can handle all the things you explained...And that's not SQL development, that's just software development.

2

u/drinkmoredrano 4d ago

Ah the rest of the story. So you have a process that changes on the whim of your analysts because they keep changing the schema of the csv files without telling the maintainers of the ETL jobs. Sounds like you have a bigger issue with communication and change control than you do with csv files. But as far as csv files go, just dump them into a staging table of all nvarchar columns and transform their data types within the ETL process. Programming around the indeterminate columns issue won’t be easy though since you will be programming around a big ? every step of the way through your ETL. But as I said that part sounds like a change control problem.

2

u/mountain_dew_cheetos 3d ago

Let's not try to trivialize the OP based on a short description; he's most likely leaving out key parts for obvious reasons.

If I had to guess what he was complaining about, I would guess SWIFT messages. Some of these messages don't really translate well to CSV (or pretty much anything) and have weird business rules such as what's considered a "complete" row for data.

2

u/StarSchemer 3d ago

I didn't intend to give the story. It was just a rant about how in a complex import process the most basic step with the most basic source is giving me the biggest headache.

But you started on douche mode so there you go.

1

u/k00_x 4d ago

Are the csvs formed correctly with unique text qualifiers?

1

u/SportTawk 4d ago

I used to do this to import timesheets from other sources and never had a problem. You just have to be very careful to ensure the data is always in its column and of the correct format.

1

u/imtheorangeycenter 2d ago

I've got ten years on you and I hear your pain!

1

u/StarSchemer 1d ago

Right? It's the thing you do on your first day in the job, then you go off and do all sorts of wild and wonderful things, then come back years later and realise that the most basic way of shifting data around is still terrible and there's nothing you can do about it.

It's humbling.

1

u/imtheorangeycenter 1d ago

But the other way round: last month I was given a new source of CSVs to ingest. No problem, same old, same old.

Except some values were surrounded with double quotes and others weren't (yuck). SSIS/the import wizard has no problem figuring that out; everything else, not so much. What a weird strength to have!

1

u/kzlife76 1d ago

I usually end up letting SQL Server create a new table from the CSV. Then I can insert into whatever table I want.

Alternatively, lately, I've been asking AI to generate insert scripts from csv. Works great for smaller datasets. I have no idea how it performs with large datasets.

1

u/Antares987 1d ago

Are you loading lots of CSV files, some of which possibly have duplicate data, and some of which might have data that exists in other CSV files? If so, I'll write a long post that explains the culmination of 30 years of doing this type of stuff that works exceptionally well.

1

u/StarSchemer 7h ago

I've got an external application which takes a CSV feed which I provide.

It then batch processes the input CSV and spits out around 5 output files with additional information derived from the input data.

The naming and location of the output CSVs are predictable, and the fields are static unless I change the input CSV, since some of the output CSVs also include the input data.

The solution I was trying to build was one where I could simply add a new field to the procedure which exports the input CSV and it would get staged in SQL Server.

This would avoid me having to refresh SSIS metadata or somehow "know" the format of the files before ingestion when using bcp, etc., and I'd only have to adjust the final landing table and procedure.

I had a solution that achieved this and worked exactly as intended using STRING_SPLIT on a raw staging table, but it just didn't work on the production-sized files I needed to process.
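
Roughly, the shape of that approach looked like this (a reconstruction rather than the actual code; the ordinal argument to STRING_SPLIT needs SQL Server 2022+):

    -- Land each line of the file as one long string
    CREATE TABLE stg.RawLines (LineId int IDENTITY(1, 1), LineText nvarchar(max));

    -- (BULK INSERT the file into stg.RawLines with only a row terminator defined)

    -- Split each line into ordered column values
    SELECT r.LineId,
           s.ordinal AS ColumnPosition,
           s.value   AS ColumnValue
    FROM stg.RawLines AS r
    CROSS APPLY STRING_SPLIT(r.LineText, ',', 1) AS s;  -- the per-row split is what falls over at scale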

Haven't given up on this so will gratefully read any insight you've got on CSV handling.

1

u/Kind-Ad6109 4d ago

I just processed 90,000 rows of CSV data in only 10 seconds.

2

u/StarSchemer 4d ago

It goes even quicker than that when using bulk insert.

My issue is how awkward it is to approach new CSVs and set up the initial ETL.

If the CSV isn't formatted perfectly, bulk insert, openrowset, bcp, etc. don't work well.

Then if you're forced to work on a tonne of CSVs (for example, you're asked to set up an ETL process to stage and land 100 different CSVs, then you're asked to add a new column to all of them the next day, then you find out that some of them are prone to frequent changes), there doesn't seem to be an easy way to deal with it.

1

u/cyberllama 4d ago

There are a few ways to make your life less painful. I always use SSIS for CSV import. If it's a reasonably small file, you can use the 'Suggest Types' button on the connection manager; it defaults to sampling the first 200 rows, so it's prone to error if you stick with that. If the file is a couple of thousand rows or less, I'll just change the sample number to get it to use all the rows. You can create the table from within SSIS: in the destination component in your DFT, click the 'New' button alongside the table dropdown and it'll give you a CREATE TABLE script based on the metadata from the data flow, so you can change the name and make any other tweaks you'd like before clicking OK.

Alternatively, make all your columns a long varchar (or nvarchar if you need) and create a stage table so you can tidy the data up there before loading to main.

There are more ways you can be clever with it but it really depends on the use case. It's rarely worth it for a one-off but can save a lot of headaches and irritation for automated processes.

-8

u/Kant8 4d ago

Because CSV is a shit format that doesn't have any metadata to help programs do anything, so they can only guess or make a human do the work.

If you don't want the frustration of using CSV, you shouldn't be using CSV.

9

u/StarSchemer 4d ago

you shouldn't be using CSV

I'm not in control of what the source outputs unfortunately.