
Microsoft Moving to Combat Disinformation After Rise of ‘Deepfakes’

David Paul



The company has released two tools to help tackle the spread of false information and to help educate the public about it.

Tech giant Microsoft has announced tools that detect deepfake manipulation in images and videos to combat the spread of disinformation.

The release comes just months before the 2020 US presidential election between Donald Trump and Joe Biden, a contest ripe for the technology to be misused.

Deepfake tech uses artificial intelligence (AI) to manipulate and alter images and videos so that one person appears to be someone else. The AI is fed still images of one person and video footage of another.

It then generates a new video featuring the former’s face in the place of the latter’s, with matching expressions, lip-synchs and other movements.

There is the possibility that the technology could be used by malicious actors to spread disinformation about either Trump or Biden before the election, potentially skewing the final vote.

Microsoft’s new ‘Video Authenticator’ tool analyses photos and videos and provides a ‘confidence score’ indicating how likely it is that the media was artificially manipulated.

Microsoft also hopes tools built into the new technology, such as an interactive quiz, will help the public learn how to spot deepfakes themselves.


In a company blog, Microsoft said: “We expect that methods for generating synthetic media will continue to grow in sophistication.

“As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods.

“Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media.

“There are few tools today to help assure readers that the media they are seeing online came from a trusted source and that it wasn’t altered.”

The first of the two tools will be built into Microsoft Azure and enable a content producer to add digital hashes and certificates to a piece of content. These then live with the content as “metadata wherever it travels online,” Microsoft said.

The second is a reader – which can exist as a browser extension or in other forms – that checks the certificates and matches the hashes, “letting people know with a high degree of accuracy that the content is authentic and that it hasn’t been changed, as well as providing details about who produced it,” the statement read.
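The producer/reader workflow described above can be sketched in a few lines of Python. This is a simplified illustration, not Microsoft's actual Azure API: the `publish` and `verify` functions are hypothetical names, and an HMAC with a shared secret stands in for the real certificate-based signatures.

```python
import hashlib
import hmac

# Stands in for the producer's certificate/private key in the real system.
SECRET_KEY = b"producer-signing-key"


def publish(content: bytes) -> dict:
    """Producer side: attach a hash and a signature as metadata."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "hash": digest, "signature": signature}


def verify(package: dict) -> bool:
    """Reader side: recompute the hash and check it against the metadata."""
    digest = hashlib.sha256(package["content"]).hexdigest()
    expected_sig = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == package["hash"] and hmac.compare_digest(
        expected_sig, package["signature"]
    )


package = publish(b"original video bytes")
assert verify(package)  # untouched content passes the check

package["content"] = b"tampered video bytes"
assert not verify(package)  # any alteration breaks the hash match
```

Because the hash travels with the content as metadata, any change to the bytes after publication produces a mismatch, which is what lets a reader flag altered media.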

Microsoft says the tools will initially be available to political and media organisations “involved in the democratic process”.


The use of deepfakes has also concerned social media companies, with Facebook vowing in January 2020 to “crack down” on such videos being posted on its site.

Facebook’s head of global policy management, Monika Bickert, said the company would ban the videos and all types of manipulated media that mislead its users.

“While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” Bickert said.

David Paul

Staff Writer, DIGIT
