Can machine learning beat human expertise – even when it comes to displaying creativity and original thought?
Google’s DeepMind has reported another win in its march toward AI dominance in strategy games: AlphaStar is the first AI system to defeat a top professional StarCraft II player, MaNa.
The game, StarCraft II, is a classic real-time strategy title. It demands a delicate balance between long-term strategy (building structures, mining resources) and the tactical micro-management of the player’s units in battle against a rival. Add an enormous map with a large selection of units and resources, much of which is hidden from the player behind a “fog of war” (so the player must scout to uncover the blacked-out parts of the arena), and the game becomes a formidably complicated task. In fact, until now it was considered impossible for a computer to master.
So how was this incredible feat achieved? At first, DeepMind’s algorithm learned to play by imitating human players. Then it played tournaments against copies of itself, trying new and improved strategies, benchmarking the results and discarding the variants that proved less successful. The training took a mere couple of weeks of wall-clock time, but by running many games in parallel on large compute clusters, each virtual player accumulated the equivalent of up to 200 years of play against its virtual rivals. The best of these virtual players were then selected to face MaNa, one of the strongest professional StarCraft II players in the world. AlphaStar won with a stunning 5:0 result!

Just like DeepMind’s AlphaGo (featured in a must-watch Netflix documentary), the new AlphaStar agent revealed completely new, creative playing strategies that had been unknown even to the most experienced human StarCraft players, some with more than 20 years of playing time.
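The league-style training loop described above can be sketched in miniature. This is purely illustrative: the real AlphaStar League trains neural-network agents with reinforcement learning on actual StarCraft II games, whereas here each “agent” is just a single number, the match rule is an invented toy, and all constants (the hidden optimum, the population size, the mutation scale) are assumptions for the sake of the demo. The shape of the loop, however, mirrors the article: start from an imitation-derived baseline, play everyone against everyone, keep the winners, and replace the losers with perturbed copies of the winners.

```python
import random

def play(a, b):
    # Toy match rule (assumption): the agent whose strategy value is
    # closer to a hidden optimum wins. A stand-in for a real game.
    OPTIMUM = 0.7
    return 1 if abs(a - OPTIMUM) < abs(b - OPTIMUM) else 0

def league_training(pop_size=8, generations=60, seed=0):
    rng = random.Random(seed)
    # "Imitation" phase stand-in: seed all strategies near a
    # hypothetical human baseline of 0.3.
    league = [0.3 + rng.gauss(0, 0.05) for _ in range(pop_size)]
    for _ in range(generations):
        # Round-robin tournament: every agent plays every other agent.
        wins = [0] * pop_size
        for i in range(pop_size):
            for j in range(pop_size):
                if i != j:
                    wins[i] += play(league[i], league[j])
        # Benchmark and select: rank by wins, discard the weaker half,
        # then refill the league with mutated copies of the survivors.
        ranked = sorted(range(pop_size), key=lambda k: wins[k], reverse=True)
        survivors = [league[k] for k in ranked[: pop_size // 2]]
        league = survivors + [s + rng.gauss(0, 0.05) for s in survivors]
    return league

final = league_training()
```

After enough generations, the surviving strategies drift away from the imitation baseline toward the optimum, which is the toy analogue of the league discovering strategies no human seeded it with.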