07/24/2018 / By Edsel Cook
Researchers have just created yet another terrifying artificial intelligence (AI) tool with the potential to sow confusion and spread misinformation throughout mass media. According to a report in The Register, machine-learning experts have devised a computer system that can change the movement of a person’s face in a video.
It is far worse than Deepfakes when it comes to the potential for trouble. Deepfakes, of course, is an earlier AI-based image alteration technique that mapped the faces of well-known people onto the bodies of porn stars.
The new technology takes the face of its target and manipulates the image into following the movements and expressions of a different source. It can be used to fabricate footage of people making statements they never actually made. Take the well-known face of Vladimir Putin and drive it with the equally distinct facial mannerisms of Barack Obama, for instance, and the result is a video clip of Putin delivering Obama’s speech.
The paper describing the image-synthesizing neural network claims that it achieves realistic results. This is troubling, to say the least.
While disgusting, videos altered by Deepfakes were at least easy enough to recognize as fakes. The eyebrows did not match the forehead, for one, and the movements were noticeably clumsy.
A similar image-mapping project combined lip-syncing with an audio clip to create fake videos. Again, the lip movements did not match the audio 100 percent of the time.
The new machine-learning system is reputedly much more advanced than either of those two. It is the first such AI that can capture the source’s head position and rotation, facial expressions, and eye blinking and gaze, and transfer them all onto a portrait video of the target.
According to their paper, the artificial neural network reconstructs a face from its most salient features. It tracks head and facial movements, allowing it to capture the facial expressions in every frame of both the source and target videos.
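For illustration, here is a minimal sketch of what per-frame face tracking of that kind can look like, built on the off-the-shelf dlib and OpenCV libraries rather than the dense face tracker the researchers actually use; the pretrained model filename is dlib’s standard 68-point landmark file.

```python
# A minimal per-frame landmark tracker (illustrative only; the paper uses
# a far denser parametric 3D face tracker, not simple 2D landmarks).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# dlib's standard pretrained 68-point landmark model, downloaded separately.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def track_landmarks(video_path):
    """Yield the 68 (x, y) facial landmarks for each frame of a video."""
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue  # no face detected in this frame
        shape = predictor(gray, faces[0])
        yield [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    capture.release()
```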
A facial-representation procedure computes a compact set of parameters describing the face in both videos. The parameters recovered from the source face are then slightly modified before being mapped onto the target face.
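The paper fits a full parametric 3D face model; the schematic below uses invented field names simply to show which quantities would be carried over from the source and which would be kept from the target.

```python
# Schematic of the parameter-transfer step. The field names are
# assumptions; the actual system fits a parametric 3D face model with
# identity, expression, pose, eye-gaze, and illumination parameters.
from dataclasses import dataclass, replace

@dataclass
class FaceParams:
    identity: list      # who the person is (the shape of the face)
    illumination: list  # lighting in the scene
    expression: list    # per-frame facial-expression coefficients
    head_pose: tuple    # per-frame head position and rotation
    eye_gaze: tuple     # per-frame gaze direction and blink state

def transfer(source: FaceParams, target: FaceParams) -> FaceParams:
    """Keep the target's identity and lighting, but drive the frame with
    the source actor's expression, head pose, and eye gaze."""
    return replace(
        target,
        expression=source.expression,
        head_pose=source.head_pose,
        eye_gaze=source.eye_gaze,
    )
```

In other words, everything that defines who appears on screen stays with the target, while everything that defines what the face is doing comes from the source actor.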
The rendering itself is handled by an artificial neural network called a “generative adversarial network,” which is trained against the tracked frames of the target video.
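In broad strokes, that adversarial training looks like the following PyTorch sketch, in which a generator learns to turn the modified face rendering into a photorealistic frame while a discriminator learns to tell its output from real target footage. The toy networks and the random stand-in data are assumptions for illustration; the paper’s actual rendering network is far more elaborate.

```python
# A heavily simplified conditional-GAN training loop (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Sequential(       # conditioning image -> output frame
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(   # frame -> single real/fake logit
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Stand-in for ~2,000 pairs of (synthetic conditioning frame, real frame).
dataset = [(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))]

for conditioning, real_frame in dataset:
    real_label = torch.ones(1, 1)
    fake_label = torch.zeros(1, 1)

    # Discriminator step: distinguish real target frames from fakes.
    fake_frame = generator(conditioning)
    d_loss = (bce(discriminator(real_frame), real_label)
              + bce(discriminator(fake_frame.detach()), fake_label))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator while staying close to the
    # real frame (an L1 reconstruction term keeps the output faithful).
    g_loss = (bce(discriminator(fake_frame), real_label)
              + F.l1_loss(fake_frame, real_frame))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```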
The AI can learn from just a minute of footage. Given only about 2,000 frames of reference, it can create fake images, driven by the source frames, that pass for the real thing.
Right now, the AI is limited to altering the facial expressions of the target. It cannot replicate the target’s upper body, and it gets confused by constantly changing backgrounds.
Saner minds have previously called attention to the dangers of this technology. Fake videos made by the AI could be used to mislead people and could even threaten political security.
The researchers largely glossed over these valid concerns, though they did note in their paper that videos would need invisible watermarking and other means of verifying their authenticity.
“I’m aware of the ethical implications of those reenactment projects,” claimed Justus Thies, a Technical University of Munich (TUM) researcher who served as one of the co-authors of the paper. “That is also a reason why we published our results. I think it is important that the people get to know the possibilities of manipulation techniques.”
Thies pointed out that facial reenactment has been in use for a long time, the best-known examples being the dubbing and post-production processes of movies.
Stay informed about the threats posed by malignant artificial intelligence at FutureScienceNews.com.
Sources include:

TheRegister.co.uk