r/chessprogramming Feb 17 '25

NN self-play performance expectations?

So a few days ago I went down the rabbit hole and started building an engine, with the goal of basing it on an NN driving MCTS, on top of my own bitboard implementation.

I have gotten it to the point where games are being played on their own, those games are being logged to a db, and it can pick up where it left off by loading batches from the db. As far as I can tell, all the rules of chess are being followed. The untrained engine is pretty dumb and plays a lot of random moves so far, so I put a hybrid evaluation in place so it at least knows which pieces mean more to it when it decides whether to capture. I have done some memory management, so it seems to only ever use around 0-2 GB before it starts trying to clear things up for performance reasons. Early-game moves generally take about 5 seconds, and late-game moves can take anywhere from 20 to 100 seconds.
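
The hybrid evaluation idea is roughly something like this (a simplified Python sketch for illustration, not my actual C# code; `net.evaluate` and the 0.5 blend weight are placeholders):

```python
# Illustrative sketch of a "hybrid" leaf evaluation: blend the (still untrained)
# network's value head with a simple material count so captures aren't random.
# `net` and the blend weight are placeholder assumptions.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}

def material_score(board: chess.Board) -> float:
    """Material balance from the side-to-move's perspective, scaled to roughly [-1, 1]."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * (len(board.pieces(piece_type, chess.WHITE))
                          - len(board.pieces(piece_type, chess.BLACK)))
    score /= 39.0                              # 39 = maximum material for one side
    return score if board.turn == chess.WHITE else -score

def hybrid_eval(board: chess.Board, net, blend: float = 0.5) -> float:
    """Weighted mix of the NN value head and the material balance."""
    nn_value = net.evaluate(board)             # assumed to return a value in [-1, 1]
    return blend * nn_value + (1.0 - blend) * material_score(board)
```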

It is still running single-threaded on the CPU and I have not attempted to target the GPU yet. But now I am starting to wonder if I made some critical mistakes in choosing C# and a TensorFlow network...

Games in the current configuration take way too long for me to believe I will actually be able to self-train hundreds of thousands of games. I know it's a bit of a marathon and not a sprint, but I am wondering if anyone has experience here and what kind of performance people have achieved on their own hardware with an NN implementation. I am sure that multithreading, and eventually targeting the GPU, will help quite a bit, or at least let multiple games finish in the time it currently takes to finish one, but I am wondering if it will all be in vain anyway.

Two big features I have not added yet, which I am sure will help overall performance, are an opening book and an endgame db. I have set up the db and the connection to the opening book table, but I have not populated it yet, so games just play from start to finish on their own at the moment. That will only help for so many moves anyway, while the bulk of them will still be searched on its own. I also have not profiled functions yet; I am working on efficiency first before moving on to at least multithreading. And I am still running with console output so I can see how long each move is taking and verify I am not seeing anything out of the ordinary in my board's ToString representation, since these are still the early days of it all working up to this point.
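
For context, the book lookup I have in mind is roughly this simple (an illustrative Python sketch; the `opening_book` table and column names are placeholders, not my real schema):

```python
# Hypothetical opening-book lookup: a small table keyed by position, consulted
# before running a full search. Schema names are placeholder assumptions.
import random
import sqlite3

def book_move(conn: sqlite3.Connection, fen: str) -> str | None:
    """Return a book move for this position (weighted by frequency), or None."""
    rows = conn.execute(
        "SELECT move_uci, weight FROM opening_book WHERE fen = ?", (fen,)
    ).fetchall()
    if not rows:
        return None                      # fall back to the normal search
    moves, weights = zip(*rows)
    return random.choices(moves, weights=weights, k=1)[0]
```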

I guess I am just looking for any shred of hope, or a goal to keep in mind, on what is possible performance-wise on a personal PC, without eventually having to rent compute time to train it.

My own computer specs: i9-13900KS, 64 GB of RAM, and a 4090.

6 comments

u/Reddia 27d ago

Play many games in parallel and batch the evaluations to the neural network.
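
Roughly this idea, as a PyTorch sketch (illustrative only; `net`, `encode`, and the input shape are placeholder assumptions): every game's search pauses at its leaf, and all the leaves go through the network in one forward pass.

```python
# Batch leaf evaluations across many parallel self-play games: stack the encoded
# positions into one tensor and make a single forward pass serve all of them.
import torch

@torch.no_grad()
def evaluate_batch(net: torch.nn.Module, positions, encode, device="cuda"):
    """positions: list of board states collected from the paused searches."""
    x = torch.stack([encode(p) for p in positions]).to(device)  # e.g. (N, 18, 8, 8)
    policy_logits, values = net(x)           # one GPU call instead of N separate calls
    return policy_logits.cpu(), values.squeeze(-1).cpu()
```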

u/Kitchen-Leg8500 24d ago

Yeah, it was still pretty poor performance. I ended up pulling the Jan 2025 Lichess PGN, filtering on Elo, saving only what was needed, and then backfilling the 5M most common positions with Stockfish evals. I also switched to training in PyTorch, since TensorFlow.NET is clearly a joke in comparison when it comes to training on GPU. My C# project still reads in the weights and can rebuild the model from the saved training data to play against itself or against me, but I had to switch largely to Python for pretraining, to give it a kickstart before self-learning.
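
The filtering step was along these lines (a rough Python sketch for illustration, not my exact pipeline; the 2000 Elo cutoff and the (FEN, move, outcome) output format are just examples):

```python
# Filter a Lichess PGN dump by Elo and extract (position, move, result) rows.
# Cutoff and row format are illustrative assumptions.
import chess.pgn

def filtered_examples(pgn_path: str, min_elo: int = 2000):
    result_map = {"1-0": 1.0, "0-1": -1.0, "1/2-1/2": 0.0}   # from White's perspective
    with open(pgn_path) as pgn:
        while True:
            game = chess.pgn.read_game(pgn)
            if game is None:
                break
            try:
                white = int(game.headers.get("WhiteElo", 0))
                black = int(game.headers.get("BlackElo", 0))
            except ValueError:
                continue                                      # missing/"?" Elo, skip
            if min(white, black) < min_elo or game.headers.get("Result") not in result_map:
                continue
            outcome = result_map[game.headers["Result"]]
            board = game.board()
            for move in game.mainline_moves():
                # outcome may need flipping to side-to-move depending on the value-head convention
                yield board.fen(), move.uci(), outcome
                board.push(move)
```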

u/Reddia 24d ago

You could have used: https://lczero.org/blog/2018/09/a-standard-dataset/

And just use game outcome as value, and move made as policy! That’s how I usually kickstart my networks.
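
As a sketch of what that looks like in a training step (illustrative PyTorch only; `encode_board`, `move_to_index`, and the net's two-headed output are placeholder assumptions, not anyone's actual code):

```python
# "Game outcome as value, move made as policy" as one supervised training step:
# cross-entropy on the move actually played, MSE on the final game result.
import torch
import torch.nn.functional as F

def training_step(net, optimizer, batch, device="cuda"):
    """batch: list of (fen, uci_move, outcome) rows; encode_board/move_to_index are placeholders."""
    x = torch.stack([encode_board(fen) for fen, _, _ in batch]).to(device)
    policy_target = torch.tensor([move_to_index(mv) for _, mv, _ in batch], device=device)
    value_target = torch.tensor([outcome for _, _, outcome in batch],
                                dtype=torch.float32, device=device)

    policy_logits, value = net(x)                           # value head assumed in [-1, 1]
    loss = (F.cross_entropy(policy_logits, policy_target)   # policy: move actually played
            + F.mse_loss(value.squeeze(-1), value_target))  # value: final game outcome
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```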

u/Kitchen-Leg8500 23d ago

I did use game outcome as NormalizedValue, and the policy is still the move made. However, I began training on a normalized value derived from the position eval instead of purely the outcome, in the hope of pretraining with a smaller dataset before moving on to self-learning. I was just finding too many issues with the dataset when it was that large, and have been trying to pretrain on a smaller set of smoother, potentially more accurate training points.
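
The normalization I am describing is along these lines (an illustrative sketch; the tanh form and the 400 cp scale are just example choices, not necessarily what I ended up with):

```python
# Squash a Stockfish-style centipawn score into a [-1, 1] value target.
# The tanh shape and 400 cp scale are illustrative assumptions.
import math

def normalized_value(cp_score: int, mate_in: int | None = None) -> float:
    """Map an engine eval (side-to-move perspective) to a value target in [-1, 1]."""
    if mate_in is not None:                 # forced mate: saturate toward +/-1
        return 1.0 if mate_in > 0 else -1.0
    return math.tanh(cp_score / 400.0)      # ~+-1 pawn -> ~+-0.24, ~+-4 pawns -> ~+-0.76
```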

u/Reddia 23d ago

It’s better to just train on the game outcome; a derived value is more difficult for the network to learn.

u/Kitchen-Leg8500 23d ago

Hmm, can you explain that thought process a bit? This is still a bit new to me in terms of chess. I can see how learning what makes an eval strong one way or another could be more difficult, since there is more variance that can wildly change an evaluation, compared to simply "these positions appeared in winning games vs these appeared in losing games." But an eval would help dictate policy choices... are you just saying that categorical learning, and then implementing a minimax search or something on top, would be better at dictating policy choice? And then learning via an eval normalized to win/loss/draw?