Neural networks may be used to create a new level of realism in computer games.
Neural networks were described here yesterday. In a nutshell, a neural network is a vast number of electronic neurons that form a network, linking to each other in a way that is meant to mimic how the brain operates.
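For the curious, the basic idea can be sketched in a few lines of code. This is a hypothetical, minimal illustration (not any particular system): each artificial neuron sums weighted inputs from the neurons before it and passes the result through a nonlinearity.

```python
import numpy as np

def layer(x, W, b):
    # Each artificial neuron sums its weighted inputs, adds a bias,
    # and squashes the result through a nonlinearity (here, tanh).
    return np.tanh(W @ x + b)

# A tiny two-layer network: 3 inputs -> 4 hidden neurons -> 2 outputs.
# The weights are random here; in practice they are learned from data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.5, -1.0, 2.0])   # an example input
y = layer(layer(x, W1, b1), W2, b2)
```

Training consists of adjusting the weight matrices `W1` and `W2` so the outputs match examples the network is shown, which is where deep learning comes in.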
Neural networks are used in cutting-edge AI, for example in combination with deep learning to create autonomous cars.
But in the world of computers, innovations often have their roots in applications designed for video games. Or at least, technology built for video games finds wider application – so, for example, DeepMind, the AI company now owned by Alphabet, which created the AI system that beat one of the world’s best players at the ancient Chinese game Go, was set up by Demis Hassabis, a former video games developer.
Now a new paper – Phase-Functioned Neural Networks for Character Control – has outlined a way to create much more realistic animation in video games using technology that builds on neural networks – what it calls a ‘Phase-Functioned Neural Network’.
The paper’s authors say “Our controller requires very little memory, is fast to compute at runtime, and generates high quality motion in many complex situations. We also present a technique for fitting terrains from virtual environments to separately captured motion data. This is used to train our system so it can naturally traverse rough terrains at runtime.”
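To give a flavour of what the name suggests: rather than using one fixed set of weights, the network’s weights are themselves a function of a cyclic ‘phase’ – roughly, the position within the character’s motion cycle. A hypothetical sketch of that idea (the function names and sizes here are illustrative, not taken from the paper’s code) might blend a few stored weight matrices with a cyclic spline:

```python
import numpy as np

def phase_weights(phase, control_points):
    # Illustrative sketch: blend four stored weight matrices with a
    # cyclic Catmull-Rom spline, indexed by the phase of the motion
    # cycle (0 to 2*pi). The blended matrix then acts as the weights
    # of an ordinary feed-forward layer.
    k = len(control_points)
    t = (phase / (2 * np.pi)) * k
    i1 = int(t) % k
    i0, i2, i3 = (i1 - 1) % k, (i1 + 1) % k, (i1 + 2) % k
    w = t - int(t)  # fractional position between control points
    A, B, C, D = (control_points[i] for i in (i0, i1, i2, i3))
    # Catmull-Rom spline evaluation
    return (B
            + 0.5 * w * (C - A)
            + w**2 * (A - 2.5 * B + 2 * C - 0.5 * D)
            + w**3 * (1.5 * (B - C) + 0.5 * (D - A)))

# Four stored weight matrices for a layer with 8 inputs and 8 outputs.
rng = np.random.default_rng(1)
weights = [rng.normal(size=(8, 8)) for _ in range(4)]

x = rng.normal(size=8)
W = phase_weights(np.pi / 3, weights)   # weights at this phase
y = np.tanh(W @ x)                      # one phase-dependent layer
```

The appeal for games is that only the small blending step runs per frame, which fits the authors’ claim that the controller needs little memory and is fast to compute at runtime.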
But the real point here is that the resulting animation looks really, really cool.
This particular method may or may not take off, but it gives a sneak preview of how neural networks may change the graphics we see in video games – which in turn will feed into virtual reality, and may help lead to greater convergence between video games and more passive TV entertainment.