Going off of the Snowden piece I wrote two weeks ago, I would like to focus on Deepfake Technology this week.
Deepfake technology uses machine learning to produce audio or video that makes someone appear to say or do something they never actually said or did.
An example of this would be a viral video of a politician posted on Twitter. The politician may be seen making an incredibly controversial or incriminating remark. The entire world begins to freak out, only to learn hours later that the video was never real to begin with.
Do yourself a favor and check out this quick three-minute video on Deepfake:
This shit is real, rapidly improving, and increasingly hard to detect.
While the technology first gained notoriety as a way to swap celebrities’ faces into porn videos (as covered in the video above), it is now being used in a variety of other arenas. Namely politics.
Combine that with the rapid evolution of social media, Twitter in particular, into the default channel for political marketing and outreach, and it isn’t hard to see why this is a growing problem.
What I am wondering is this: if deepfakes become a common, ongoing concern, is there even technology capable of detecting and combating them?
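For the curious, detection research does exist, and here is a rough Python sketch of the simplest common approach: pull frames out of a video and score each one with an image classifier trained to spot synthetic faces, then average the scores. Everything below is illustrative only; the weights path and function names are placeholders I made up, and a real detector would be trained on a forensics dataset such as FaceForensics++.

```python
# Minimal sketch of frame-level deepfake detection (illustrative, not a real detector).
# Assumes a binary "real vs. fake" image classifier; the weights file is a placeholder.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T


def load_detector(weights_path=None):
    # ResNet-18 backbone with a single-logit head.
    # weights_path is hypothetical -- it would point to a model fine-tuned
    # to distinguish real faces from synthetic ones.
    model = models.resnet18()
    model.fc = torch.nn.Linear(model.fc.in_features, 1)
    if weights_path:
        model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model


def fake_probability(video_path, model, every_nth=30):
    # Sample every Nth frame, classify it, and average the per-frame scores.
    preprocess = T.Compose([
        T.ToPILImage(),
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(x)).item())
        idx += 1
    cap.release()
    # Average frame-level "fake" probability, or None if no frames were read.
    return sum(scores) / len(scores) if scores else None
```

The catch is that per-frame classification misses temporal tells like unnatural blinking or lip-sync drift, which is why serious detectors also look across frames, and why this remains an arms race as the generators keep improving.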
As the video mentions, there are some good uses of the technology, namely recreating voices for people who have lost theirs.
But the question remains: are we going too far here?
It seems the technology does more harm than good, and we are only beginning to see what it can do.
While the pace of technological advancement around the world is undoubtedly fascinating, at times I feel we are pushing things a step too far. Perhaps Deepfake is an example of that.
Thoughts?