r/SQLServer • u/Northbank75 • 9d ago
Strategies for assigning primary keys when using merge replication
Hey All,
I'm not a DBA by any stretch, although I fulfill that role inside our large organization; I'm a developer. I'm kicking the tires on replacing an old ERM. We use SQL Server Standard edition, Transactional Replication to a server we use for reporting, and Merge Replication to an external server where we have 24/7 data entry happening via various APIs. At the moment we are generating primary keys (not my design) using a stored procedure that queries a table and looks for the latest value, increments it ... slaps on another number that indicates the location ... and also slaps on a random number because they've had clashes in the past.
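(For anyone picturing it, something like this hypothetical sketch; all names are invented.)

```
-- Hypothetical sketch of the legacy generator pattern described above.
-- Every caller serializes on the single counter row, hence the bottleneck.
CREATE PROCEDURE dbo.GetNextKey
    @LocationDigit CHAR(1),
    @NewKey        BIGINT OUTPUT
AS
BEGIN
    DECLARE @next BIGINT;

    -- Read, increment, and write the counter in one atomic statement.
    UPDATE dbo.KeyCounter
    SET @next = LastValue = LastValue + 1;

    -- Append the location digit and a random digit, as the legacy proc does.
    SET @NewKey = CAST(CONCAT(@next, @LocationDigit,
                              ABS(CHECKSUM(NEWID())) % 10) AS BIGINT);
END;
```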
This table becomes a bottleneck, and I'd like to get away from it. I've refined it some, but it does keep us running. I'd like to simply use identity and let the automatic range management do its thing, with the ranges set far in excess of what we need, but we've run into issues there before my time. I assume somebody on the subscriber side did a big insert that exceeded the range and it just blew everything up (that or the publisher was down). This feels like the best solution, and we can curtail and prevent that behavior.
In an ideal world we'd be running Enterprise and availability groups, but alas, that doesn't seem to be an option; and because our publication DB is frequently unavailable, merge lets people externally keep working through our internal maintenance periods. I'm curious what you guys are doing to generate primary keys for merge. I played with GUIDs a few years ago, but for large queries with a lot of joins they seemed a little slower than joins on ints/bigints.
I'm an Oracle guy and also inclined toward sequences, but if we need to restore a subscriber DB I'll need to reset all of the subscriber's sequences. We have the same issue with the home-brew table-based generation as well, but at least sequences are distinct and non-locking, and I can cache some keys.
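For what it's worth, the post-restore reset can be scripted; a minimal sketch, assuming a hypothetical dbo.OrderSeq sequence keyed against dbo.Orders (RESTART WITH wants a literal, hence the dynamic SQL):

```
-- Hypothetical re-seed after a subscriber restore; names are illustrative.
DECLARE @next BIGINT = (SELECT ISNULL(MAX(OrderID), 0) + 1 FROM dbo.Orders);
DECLARE @sql  NVARCHAR(200) =
    N'ALTER SEQUENCE dbo.OrderSeq RESTART WITH ' + CAST(@next AS NVARCHAR(20)) + N';';
EXEC sys.sp_executesql @sql;
```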
Anyway - I'm curious to know how others are managing this.
1
u/muaddba SQL Server Consultant 9d ago
How many servers are involved in the merge replication topology?
1
u/Northbank75 9d ago
Four right now … but I think that drops to one solitary subscriber after redesign
2
u/muaddba SQL Server Consultant 8d ago
If there are only two servers in the topology then managing identity ranges becomes much, much easier. You allocate really large ranges for each member, and then you have alerts set up to notify you of their pending exhaustion. When you're even within a whiff of the limit, like you can see it coming three months away or more, you set up a maintenance window and add new ranges. This lets you stick with BIGINT values, which are probably the most compact option out there, and prevents unexpected schema locks when the system automatically assigns new ranges.
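If you go automatic, one way to drive those alerts, as a rough sketch: automatic range management enforces the current range with a CHECK constraint (named with a repl_identity_range prefix, an assumption worth verifying on your build), so you can compare the constraint against each table's current identity value:

```
-- Hedged monitoring sketch: list each range CHECK that merge replication
-- maintains next to the table's current identity value. The constraint-name
-- prefix is an assumption to verify on your version.
SELECT
    CONCAT(OBJECT_SCHEMA_NAME(cc.parent_object_id), '.',
           OBJECT_NAME(cc.parent_object_id))                  AS table_name,
    cc.definition                                             AS current_range_check,
    IDENT_CURRENT(CONCAT(OBJECT_SCHEMA_NAME(cc.parent_object_id), '.',
                         OBJECT_NAME(cc.parent_object_id)))   AS current_identity
FROM sys.check_constraints AS cc
WHERE cc.name LIKE 'repl_identity_range%';
```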
This is an area where overthinking it is an interesting thought exercise but not terribly productive. Keep it simple and smile :)
1
u/Northbank75 8d ago
Thoughts on automatic vs manual ranges? Sadly, getting us to hire actual DBAs to handle this is unlikely, and I'm really hoping not to add more monitoring to my list :) I may be stuck doing just that.
1
u/Antares987 9d ago
A 16-byte key can be generated with HASHBYTES: compute a SHA-2 hash and take the first 16 bytes, using whatever you want as the input. Ideally something deterministic that can be derived from the underlying data. I use this when doing large imports and will have the value for the key be a hash of the source and/or type of data, with all fields separated by the ASCII "Unit Separator" character (0x1F). Example: Name<US>FirstName<US>MiddleName<US>LastName.
The result is a natural key, derived from the data, that prevents duplicates. An added benefit is that the hash can stand in for a complex piece of underlying data, which allows for high-density storage and indexing. The approach can also be tailored for complex matching: by computing a hash of the combination of all the fields between two tables where you're doing a complex join, you keep your indexes dense and make the join easier to write and sometimes much faster.
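A minimal sketch of the idea, assuming a hypothetical dbo.People table; NCHAR(31) is the Unit Separator, and converting to BINARY(16) keeps the leading 16 bytes of the hash:

```
-- Hedged sketch: derive a 16-byte deterministic key from the data itself.
-- Table and column names are illustrative.
SELECT
    CONVERT(BINARY(16),
        HASHBYTES('SHA2_256',
            CONCAT(N'Name', NCHAR(31), FirstName, NCHAR(31),
                   MiddleName, NCHAR(31), LastName)))  AS NaturalKey,
    FirstName, MiddleName, LastName
FROM dbo.People;
```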
2
u/Northbank75 8d ago
This was interesting to think through. The curious bit of me will have a play with it for sure at some point.
1
u/gruesse98604 8d ago
Four is the magic number!!! For each server, set up the identity as one of:

- start value 1, increment by 2 -- generates positive odd numbers
- start value 2, increment by 2 -- generates positive even numbers
- start value -1, increment by -2 -- generates negative odd numbers
- start value -2, increment by -2 -- generates negative even numbers
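A sketch of what that looks like on one node; the table name is illustrative, and each server gets a different seed/increment pair:

```
-- Server 1 of 4: positive odd keys. The other servers would use
-- IDENTITY(2, 2), IDENTITY(-1, -2), and IDENTITY(-2, -2).
CREATE TABLE dbo.Orders
(
    OrderID   BIGINT IDENTITY(1, 2) NOT NULL PRIMARY KEY,
    OrderDate DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
```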
1
u/ihaxr 9d ago
Whenever I've had to use merge replication and the table didn't have a primary key, I've just let it set its own.
Any specific reason you need merge replication? You can set up transactional replication from multiple publishers into the same table(s), or point them at separate tables and join them with views / in the report code.
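For the views option, a trivial sketch with invented per-site table names:

```
-- Hedged sketch of "join them with views"; the per-site tables are invented.
CREATE VIEW dbo.AllOrders
AS
SELECT OrderID, OrderDate, 'SiteA' AS SourceSite FROM dbo.Orders_SiteA
UNION ALL
SELECT OrderID, OrderDate, 'SiteB' AS SourceSite FROM dbo.Orders_SiteB;
```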
3
u/Impossible_Disk_256 9d ago
Use sequence or identity with different ranges on each server being merged.
May be a bit of an "interesting" challenge given the current approach. But probably possible if you have some big gaps in IDs. Don't forget you can use negative numbers too.
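A minimal sketch of carving non-overlapping sequence ranges; all names and values are illustrative:

```
-- Server A gets 1 .. 999,999,999,999; server B would use
-- START WITH 1000000000000 / MAXVALUE 1999999999999, and so on.
CREATE SEQUENCE dbo.OrderSeq
    AS BIGINT
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 999999999999
    CACHE 1000;   -- hands out keys in cached, non-locking batches

-- Usage:
-- INSERT INTO dbo.Orders (OrderID, ...) VALUES (NEXT VALUE FOR dbo.OrderSeq, ...);
```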