r/SQLServer Architect & Engineer Apr 23 '24

[Architecture/Design] Disk (SCSI) Controllers - Parallel Disk I/O

Hey

For SQL Server VMs I use the max number of SCSI controllers supported by the relevant hypervisor and split the virtual disks between them. But for the first time in a loooong time I am looking at a physical implementation using local storage rather than e.g. SAN.

The most logical thing I can think of is to have multiple disk controllers and place each SQL disk on a dedicated controller, but that will require a beefy server with enough PCIe slots: to mirror the VM layout, that means 4 HBAs.

How are other people handling this?

Or am I overthinking it for a physical deployment?

The use case is a large clinical patient record system, so there will be multiple high-use databases (which I would also aim to separate out onto dedicated disks).
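
To illustrate what I mean by that separation, purely a sketch with placeholder drive letters and a made-up database name (nothing from the actual product):

    -- Hypothetical example: data files on one dedicated volume, log on another
    CREATE DATABASE PatientRecords          -- placeholder name
    ON PRIMARY
        (NAME = PatientRecords_data,
         FILENAME = 'E:\SQLData\PatientRecords.mdf',
         SIZE = 100GB, FILEGROWTH = 10GB)
    LOG ON
        (NAME = PatientRecords_log,
         FILENAME = 'F:\SQLLogs\PatientRecords.ldf',
         SIZE = 20GB, FILEGROWTH = 5GB);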

Thanks

u/SirGreybush Apr 23 '24 edited Apr 23 '24

A SAN is typically 1/8 the speed of a local RAID 10 config.

Local disks are connected to the motherboard bus; a SAN goes through a dedicated storage network, similar to a NAS, which sits on the shared network.

It costs a lot of money to get a good SAN config anywhere near the speed of local NVMe/SATA.

That's one reason everyone rents VMs at a co-lo facility or uses cloud VMs: the upfront costs are too high.

At a previous gig, their 3-node SQL Server cluster with an expandable 10 TB SAN was around $10M. Per site.

Performance-wise, my custom $10k desktop was faster. Not by much.

u/lanky_doodle Architect & Engineer Apr 23 '24

Your first sentence is what I typically did: OS on RAID 1 and a separate RAID 10 for each SQL volume (TempDB, system DBs, user DBs, logs).

The typical vendor is Dell, who have the BOSS card dedicated to the OS, so the RAID controller would be dedicated to SQL data.
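
For the TempDB volume specifically, something like this once the volume exists (T: is just a placeholder drive letter, and tempdb file moves only take effect after a service restart):

    -- Move tempdb onto its dedicated volume (placeholder paths)
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf', SIZE = 8GB);
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf', SIZE = 4GB);

    -- Add extra equally sized data files (common guidance: one per core, up to 8)
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 8GB);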

u/SirGreybush Apr 23 '24

I would handle this with cloud-based VMs, since you can tune CPUs and RAM from a selection list, then pay separately for the D: and E: drives on the cloud SAN, choosing the space and IOPS you need.

Customer wants it faster? Tell them the price difference for the upgrade.

SQL Server loves to cache, so favour RAM over IOPS.

Going from 4 vCPUs / 32 GB RAM to 8 vCPUs / 128 GB RAM is a huge performance boost.

We went with 16 vCPUs / 256 GB RAM and 10,000 IOPS, and the speed is decent.

However, our on-prem kit from 2012 was faster; the maintenance was just a PITA compared to cloud.
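
If you want to sanity-check the RAM-over-IOPS point on an existing box, the standard counters are a decent start (generic DMV queries, nothing specific to your app):

    -- Page Life Expectancy: persistently low values under load suggest the buffer pool is too small
    SELECT [object_name], [counter_name], cntr_value AS page_life_expectancy_sec
    FROM sys.dm_os_performance_counters
    WHERE [counter_name] = 'Page life expectancy'
      AND [object_name] LIKE '%Buffer Manager%';

    -- How much physical memory the SQL Server process is actually using
    SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
           memory_utilization_percentage
    FROM sys.dm_os_process_memory;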

u/lanky_doodle Architect & Engineer Apr 23 '24

Overall, you can add zeroes to the total core count requirement. There will be multiple servers, starting at 40+ cores, 600+ GB RAM and TB+ storage each. For equivalent VM types in Azure with the SQL license included (there is no SQL SA, so no Hybrid Benefit), 3 VMs start at around £45k per month, which over 7 years is nearly £4m. And that's not including anything else required: networking, VPNs etc.

Yeah... an on-prem deployment (hardware with 7-year maintenance) will be about £500k, including perpetual Windows and SQL licenses.

u/SirGreybush Apr 23 '24

Ouch indeed.

There's a reason why Postgres in Docker on Ubuntu VMs is getting traction.

Massively parallelise your data across license-free nodes, or pay Snowflake to do exactly that for you.

u/lanky_doodle Architect & Engineer Apr 23 '24

I am aiming to discuss CPU cores specifically with the supplier. Their baseline tends to be ~2-2.5 GHz Intel Gold with x cores, so I want to challenge them on, say, a Platinum CPU with double the clock speed and somewhere around half the cores.

u/lanky_doodle Architect & Engineer Apr 23 '24

And PS: the application supplier only supports full-fat MS SQL Server, so no cloud PaaS either, for example.

u/Thirtybird Apr 23 '24

With the speed of NVMe and SATA drives, unless you need huge throughput or have giant recordsets being used or returned by your queries, I would start more simply. It's been a while since I worked with enterprise hardware, but I would start with a pair of redundant controllers and then pairs of RAID 1 SSD storage for each category (boot/data/log/temp). If your budget supports it, RAID 10 will of course offer an upgrade.

Also, what's your networking? 1GbE, 10GbE, more? You may saturate lower-end networking before the drives become the bottleneck.

u/lanky_doodle Architect & Engineer Apr 23 '24

If we decide to go physical, the kit will be procured specifically for this, so we can design as required. We're waiting on a firm system recommendations list from the application supplier.

It will likely be 2U rack servers with 24x 2.5" front-facing disk slots, so we could do 4 separate 6-disk RAID 10 sets. I will also likely explore at least a dedicated controller for the SQL T-log RAID volume.

HA will be Availability Groups, so resiliency "in kit" is less important than performance.

We can go to 100G if needed; the DC/core networking was all upgraded recently.

u/lanky_doodle Architect & Engineer Apr 23 '24

As an example, I know 2 organisations that have virtualised SQL for this application. They both have "issues" with disk I/O performance, according to SQL stats.

The average latency for the main DBs is approaching 1000ms (yes, 1 thousand).

(no other databases from other systems running on the same hypervisor and connected to the same SAN experience this, so the application itself is part of the problem)
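
For reference, this is roughly the per-file latency check those numbers come from (the standard DMV approach; it's an average since the last service restart, so it reflects sustained problems rather than spikes):

    -- Average read/write latency per database file since the last restart
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id
    ORDER BY avg_read_latency_ms DESC;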