Neural networks in games


I used to program neural networks for control systems, and my biggest bottleneck was computational power: the network’s inference had to fit within the time budget of each control loop. This problem has only gotten worse now that neural networks have a ridiculous number of layers and interconnections. Think about how much computational power OpenAI needed to play one game of Dota 2!
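To make that control-loop constraint concrete, here is a minimal sketch of how you might profile inference against a loop deadline. The layer sizes and the 1 ms budget are illustrative assumptions, not figures from any real system:

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small policy network: 64 inputs -> two hidden layers -> 8 outputs.
# All sizes are made up for the demo.
layer_shapes = [(64, 128), (128, 128), (128, 8)]
weights = [rng.standard_normal(shape) * 0.1 for shape in layer_shapes]

def forward(x):
    """Plain fully connected forward pass with ReLU hidden layers."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)
    return x @ weights[-1]

# Time many forward passes to estimate the per-call cost.
x = rng.standard_normal(64)
n_runs = 1000
start = time.perf_counter()
for _ in range(n_runs):
    forward(x)
per_call_ms = (time.perf_counter() - start) / n_runs * 1e3

budget_ms = 1.0  # illustrative control-loop budget
print(f"inference: {per_call_ms:.3f} ms per call, budget: {budget_ms} ms")
```

If the per-call time doesn't fit the budget (with room left over for everything else the loop must do), the network is too big for that controller, regardless of how well it was trained.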

The larger the neural net, the more training it needs. It also needs a very diverse training set; otherwise it overfits to a small subset of what it will actually experience “in the wild,” and that leads to terrible results.
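A toy illustration of that failure mode, using a high-degree polynomial fit as a stand-in for a high-capacity network (all numbers here are made up for the demo): fit a flexible model on a narrow slice of a function, then evaluate it outside that slice.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Training set": a narrow slice of the true function sin(x), with a bit of noise.
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(x_train) + rng.normal(0.0, 0.01, x_train.size)

# High-capacity model: a degree-9 polynomial fit to that narrow slice.
coeffs = np.polyfit(x_train, y_train, deg=9)

# Error on the training slice is small...
train_err = np.abs(np.polyval(coeffs, x_train) - np.sin(x_train)).max()

# ...but "in the wild" (x in [0, 4]) the extrapolation typically degrades sharply.
x_wild = np.linspace(0.0, 4.0, 50)
wild_err = np.abs(np.polyval(coeffs, x_wild) - np.sin(x_wild)).max()

print(f"max error on the training slice: {train_err:.4f}")
print(f"max error in the wild:           {wild_err:.4f}")
```

The model looks excellent on the data it was trained on and falls apart outside it, which is exactly what an over-trained game agent does when players put it in situations its training set never covered.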

We simply don’t have the need for this kind of complexity in games right now, and running a pre-trained, real-time neural net of any sophistication would easily consume an entire GPU on its own.

Remember, even scripted AI can become too “clever” and outsmart players in FPS games. Players don’t realize this and assume the AI is cheating. If anything, we’ve had to dumb game AI down rather than scale it up.

I think AI needs a revolutionary leap beyond dumb-simple neural net architectures, toward systems that can learn on the fly like humans. Games, in turn, need a revolutionary leap in social interactions between entities before we see the magic of these two worlds combining (social interaction is where most of human intelligence is centered, after all). At this rate, I think this will start happening in the latter half of the 2020s, with some bad prototypes appearing a few years from now.

