PORTLAND, Ore. (KOIN) – With advancements in artificial intelligence technology, Oregon’s largest private employer — Intel — has a team aiming to restore online trust through AI detection and to find responsible ways to use the tech.
Dr. Ilke Demir, a senior staff research scientist at Intel Labs, leads Intel’s trust-in-media team. She points out that AI is already being used in domains such as microchip manufacturing and could be a force for good in others, such as education, by creating interactive spaces for kids.
In November 2022, Intel announced its FakeCatcher, which the company claims can detect fake video with 96% accuracy. Demir helped design the tool alongside Umur Ciftci from the State University of New York at Binghamton.
“Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did,” Demir said in a statement released by Intel.
“Deepfakes are spreading political misinformation, impersonation,” Demir told KOIN 6 News, noting the videos can create digital, personal and societal distrust.
According to the research scientist, the unintended consequences of AI are too heavily emphasized, but she says it’s still important not to trust the technology blindly.
In the future, Demir thinks AI will not be limited to 2-D screens.
“I think it will actually take us into 3-D worlds where we can express ourselves better, where we can have those interactions and these communications and media, entertainment, games,” Demir said.
Demir says AI should be used as a tool for enhancing human work and experiences, saying the technology is an elevated form of automation.
“As long as the limitations of AI approaches are known, as long as what they can do and what they cannot do is clearly and transparently explained to the public, I don’t think there is anything to fear,” Demir said.
Advancements in AI range from ChatGPT creating text-based works to deepfake videos depicting realistic-looking and -sounding humans.
But as far as creating humanoid robots, Demir says imitating some human features is a difficult feat.
“I think we are far from ‘Westworld,’” Demir said. “Deploying all of those [AI approaches] into a humanoid and having those synthetic skin and having human movement – even…being able to realize my gestures…is very hard.”
Demir added, “The generalization of those AI approaches is not actually that fine-tuned for personal replication in a multi-modal setting – multi-modal meaning my speech, my face, my hair, my body movement – all of these are not really easy to replicate in the whole multi-modal fashion.”
When it comes to copyright concerns with AI, Demir pointed to the Coalition for Content Provenance and Authenticity, or C2PA, which several companies have joined, including Intel, Adobe and Microsoft. The coalition is developing a standard for tracing media back to its original source, including when copyrighted media is used.