Can you tell the difference between real and fake?
In the artificial intelligence (AI) universe, an approach called deep learning, a branch of machine learning, has been extremely successful. It achieves this by deploying artificial neural networks, a software architecture that allows algorithms to learn and improve from one iteration to the next.
Generative Adversarial Networks (GANs) are a good example. A GAN “knows” how to take existing visual data and transform it into new data.
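The adversarial idea can be sketched in miniature. The following toy example (my own illustration, not Nvidia's StyleGAN) pits a one-line “generator” against a logistic-regression “discriminator” on a simple 1-D Gaussian “dataset”: the discriminator learns to tell real samples from fakes, while the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a Gaussian the generator must imitate.
REAL_MEAN, REAL_STD = 4.0, 1.25
def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

# Generator: a linear map from noise z to a sample, x = wg*z + bg.
wg, bg = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(wd*x + bd).
wd, bd = 0.1, 0.0

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, batch = 0.05, 64
for step in range(4000):
    z = rng.normal(size=batch)
    fake = wg * z + bg
    real = sample_real(batch)

    # Discriminator ascends on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(wd * real + bd)
    d_fake = sigmoid(wd * fake + bd)
    wd += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    bd += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascends on log D(fake) (the non-saturating GAN loss).
    d_fake = sigmoid(wd * fake + bd)
    wg += lr * np.mean((1 - d_fake) * wd * z)
    bg += lr * np.mean((1 - d_fake) * wd)

# After training, the generator's output distribution should have drifted
# toward the real data's mean of 4.0.
samples = wg * rng.normal(size=10000) + bg
print(float(np.mean(samples)))
```

Real GANs like StyleGAN follow the same two-player training loop, only with deep convolutional networks producing megapixel images instead of one scalar.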
Philip Wang, a software engineer, rented a server for $150 and implemented StyleGAN, an algorithm developed and published by Nvidia, a NASDAQ-listed AI hardware and platform company. He used images of people from a readily available dataset and trained the model to create a new fake face every time the browser page is refreshed.
Put simply, the technology can fabricate new faces that don’t exist, based on a huge database of real images.
Take these faces below as an example: are they real or fake?
You guessed it: the photos above are not of real people; they're not even real faces. They're just collections of pixels that the human viewer recognises as a face, and therefore a person.
You can check it out for yourself here. Simply refresh the website with F5, or just press the Enter key, to get a new GAN image.
Though impressive, the StyleGAN algorithm is actually limited: it offers little control over the results, and it doesn't always produce natural-looking faces. As yet, it doesn't understand facial features such as muscles, or how skin changes with emotion. It also doesn't generate a brand-new face from scratch; it draws on an existing database of real faces.
The video below demonstrates the fake-face generation process:
However, other AI algorithms are more advanced.
What does this emerging technology mean for society? The implications are vast: it could be a huge win for designers and illustrators. But it may also force us to think twice about what's real and what's not. For example, will we start seeing fewer human models, at least in digital ads? A GAN can create the perfect model for a campaign and use it again and again, an approach that is fairly easy and far cheaper than employing a real model.
Will we believe such photos or ads? Do we actually care? Will advertisers have to declare "the presenter is not a human"? Will copyrights be issued, and royalties paid, to an algorithm?
Where will it lead us? Will we have virtual personal assistants in the future, ones that can accompany us in good times and bad, and compensate for our weaknesses with ease?