The risks associated with emerging technology can take years to materialize, and early warnings can seem like fear-mongering or science fiction. However, the pace of AI development is shrinking the time available both to understand new technology and to deal with its risks.
One of the most discussed and ethically fraught areas of AI is deepfakes – computer-generated imitations of other people's faces and voices. Several tools recently announced by big tech companies like Google and Microsoft are making it easier for anyone to create their own deepfakes.
Microsoft, at its recent annual developer conference, announced Azure AI Speech, which allows users to write a script and immediately transform it into a video presented by an avatar.
While the tool initially allowed users to select only from a preset list of avatars, the company left the door open for custom avatars that resemble specific individuals.
These avatars can also be connected directly to generative AI tools through Microsoft's partnership with OpenAI, allowing them to respond to questions from users.
Google-owned YouTube is testing an AI-powered tool called Dream Track that will allow users to hum a few notes and have them converted into a song in the style of a well-known musician. Currently, the system is preset to deliver songs in the style of nine musicians, including Sia, Demi Lovato, and John Legend, all of whom have agreed to work with Google on the project.
Much of this technology goes right to the heart of the conversation around the ownership of one’s face and voice, which was a central component of the recent SAG-AFTRA strike.
What is clear is that technological evolution is far outpacing the ethical discussions around the safe use of these tools, making all kinds of worrying misuse much easier for bad actors to achieve.