Meta will begin labeling images created by OpenAI, Midjourney and other artificial intelligence products, Nick Clegg, the company’s president of global affairs, announced in an interview with ABC’s “Good Morning America” on Tuesday.

The labels, set to roll out in the coming months, will identify AI-generated images posted on Facebook, Instagram and Threads, Clegg said. Images created by Meta's own tools will also be labeled, Clegg added.

“I hope this is a big step forward in trying to make sure that people know what they’re looking at, and they know where what they’re looking at comes from,” Clegg told GMA. “As the difference between human and synthetic content gets blurred, people want to know where the boundary lies.”

The labels, however, will not be a “perfect solution,” Clegg acknowledged, citing the scale and complexity of AI-generated content on the platforms.

Meta cannot currently identify AI-generated audio and video produced using outside tech platforms, Clegg said in a blog post on Tuesday. To address this issue, Meta will add a feature that allows users to voluntarily label audio or video as AI-generated when they upload it to a platform, Clegg said.

The risks posed by AI-generated content have stoked wide concern in recent weeks.

Fake, sexually explicit AI-generated images of pop star Taylor Swift went viral on social media late last month, garnering millions of views. In response, the White House called on Congress and tech firms to take action.

“While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people,” White House Press Secretary Karine Jean-Pierre told ABC News White House correspondent Karen Travers.

Another incident last month drew attention to election risks posed by AI. A fake robocall impersonating President Joe Biden’s voice discouraged individuals from voting in the New Hampshire Primary.

As Americans head to the polls in 2024, tech companies should take action to assure users that they will be able to identify whether online content is authentic, Clegg said.

PHOTO: Nick Clegg, President of Global Affairs at Meta, speaks at an event of the World Economic Forum (WEF), Jan. 18, 2024, in Davos, Switzerland.

Hannes P Albert/Picture Alliance via Getty Images, FILE

“In an election year like this, it’s incumbent upon us as an industry to make sure we do as much as the technology allows to provide as much visibility to people so they can distinguish between what’s synthetic and what’s not synthetic,” Clegg added.

In September, a bipartisan group of senators proposed a bill that would ban the use of deceptive AI content falsely portraying candidates for federal office in political ads.

When asked whether Meta supports the bill, Clegg said he backs legislation regulating AI but did not specifically comment on the Senate measure.

“I think it’s right that you have certain guardrails in place to make sure that there’s proper transparency about how these big AI models are built, to make sure that they’re properly stress tested so they’re as safe as they can be,” Clegg said. “Yes, I think there’s definitely a role for governments.”

Meta plans to label AI-generated images through next year because “a number of important elections are taking place around the world,” Clegg said in the blog post. The extended period of time will afford Meta an opportunity to evaluate its efforts.

“What we learn will inform industry best practices and our own approach going forward,” Clegg said.