Trending: Deepfakes

Deepfakes are videos that use AI and large amounts of data (the more data the better) to manipulate images. The technique can also be applied to still images or audio, but what defines a deepfake is the combination of AI and data used to drastically alter a likeness. The AI behind deepfakes is meant to mimic the human brain using algorithms based on “deep learning” neural networks. At its inception, only people with access to an expansive bandwidth of computing power were able to create deepfakes, but in the course of a year, deepfakes have become so realistic and easy to make that it’s difficult to distinguish what’s real from what’s fake, and pretty much anyone with a computer can now create one. This has led to some beneficial uses as well as some potentially harmful content.

One example of a positive use of deepfakes is recreating a deceased loved one’s image or voice from a large database of previously recorded video and audio content, allowing descendants to watch or listen in the future and see what their relatives were like during their lifetime. Deepfakes can be harmful when someone’s face or voice is used to create revenge porn in order to defame them, or when the president’s face and voice are used to put out a fake news video, maybe declaring that the U.S. has been attacked when it actually hasn’t, or spreading false political statements. This, of course, can pose a threat to national security. Deepfakes may also be created for a multitude of other good and bad purposes, such as extortion, fabricated crimes, entertainment, and parody.

One of the top examples of a deepfake I found from multiple sources online was a video that seemed to show former president Barack Obama but was actually performed by Jordan Peele. Barack Obama’s face was edited in production to follow the movement of Jordan Peele’s mouth so that it looked like Obama was really speaking, while Peele’s voice was altered to sound exactly like Obama’s. It was shocking to watch, because if I hadn’t known ahead of time that it was a deepfake, I think I may have believed the video really was of Barack Obama at first. That was Jordan Peele’s goal: warning people about the dangers of deepfakes. For this video, other deepfake examples, and additional political concerns regarding deepfakes, click here to watch.


I did a little research into other examples of audio deepfakes and found a really good satirical YouTube account that creates short audio clips imitating Donald Trump’s voice. You can watch one here.

Two of the main dangers of deepfakes are the ability to instantly share them on social media and low media literacy, and the two go hand in hand. Ordinary adults and digital experts alike can have difficulty discerning what’s real and what’s trustworthy. We are all guilty of occasionally sharing “fake news” stories and sources! When it comes to spotting deepfakes or doing something about them, things get a bit tricky. The first solution is creating new AIs and algorithms to detect deepfakes, but there are a couple of challenges. First, deepfakes are rapidly evolving and becoming more realistic, so it would be tough for a detection algorithm to keep up with that steady growth in skill. Second, this technology would have to be implemented at the very top of the distribution channel, either when deepfakes are created or right when they are posted online, along with identifying anonymous deepfake creators; otherwise they can spread like wildfire over the internet. Another way deepfakes will most likely become regulated is through Congress, although there will probably be some constitutional implications, such as freedom of speech. Lastly, social media sites are making strides in handling harmful content posted on their platforms, and I can definitely see these sites implementing their own algorithms and rules to control deepfakes and their publishers in the future.
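To give a feel for what “an algorithm to detect deepfakes” means in practice, here is a purely illustrative toy sketch. Real detectors are deep neural networks trained on actual pixel data; this sketch stands in entirely made-up, hypothetical per-frame features (a “blink rate” and an “artifact score”) generated as synthetic numbers, and trains a simple perceptron to label each frame real or fake. Nothing here reflects how any production detector actually works; it only shows the classify-each-frame idea.

```python
import random

random.seed(0)  # make the synthetic data reproducible

def synthetic_frame(is_fake):
    """Return hypothetical features [blink_rate, artifact_score] and a label.

    These distributions are invented for illustration only: we pretend fake
    frames blink less and show stronger blending artifacts than real ones.
    """
    if is_fake:
        return [random.gauss(0.2, 0.05), random.gauss(0.8, 0.1)], 1
    return [random.gauss(0.5, 0.05), random.gauss(0.2, 0.1)], 0

def train_perceptron(data, epochs=20, lr=0.1):
    """Fit a linear decision boundary with the classic perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

train = [synthetic_frame(i % 2 == 0) for i in range(200)]
w, b = train_perceptron(train)

test = [synthetic_frame(i % 2 == 0) for i in range(100)]
accuracy = sum(classify(w, b, x) == y for x, y in test) / len(test)
print(f"toy detector accuracy on synthetic frames: {accuracy:.2f}")
```

Even this toy version hints at the arms-race problem described above: the moment fake frames are generated so that their features overlap the real distribution, a fixed classifier like this one stops working and has to be retrained.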

As I was thinking of ways deepfakes could apply to marketing, PR, journalism, and advertising, I immediately thought of using a celebrity’s or influencer’s likeness to create promotional content without actually having to pay these people to physically come in to record or film content for a company. This could be more cost-efficient than flying a celebrity or influencer to your HQ and paying for their hotel, food, and other transportation costs in addition to compensating them for their work. Instead, deepfakes might allow companies and public figures to agree on a one-time payment for the use of a likeness, or to negotiate some other similar compensation contract (e.g., royalties). This is important because it could make promoting a business more attainable for low-budget companies, such as start-ups trying to make their product or service known to target markets.

One issue I could see arising from the use of deepfakes in marketing is deception, particularly in advertising. Using someone’s likeness in a deceptive ad would not bode well for the company or for the person whose image is being used. To prevent this from happening, I think deepfakes will require disclosure to audiences, detailed contracts, extreme attention to the law, and even more attention to each step of the production process.

What are your thoughts on deepfakes?
