Can You Identify AI Content?

Morgan O'Rourke | March 19, 2026

A CNET survey found that only 44% of U.S. adults who use social media are confident they can tell the difference between real content and content that was created or altered by AI, potentially leaving them vulnerable to disinformation, fraud and abuse. Almost three-quarters of social media users (72%) take steps to determine if images or videos are legitimate. These actions include looking for visual cues like out-of-place lighting or shadows, distorted hands or skin textures, and odd backgrounds (60%); scanning for labels indicating content was AI-generated (30%); searching for the image or video elsewhere online (25%); and using deepfake detection tools (5%). However, 25% of social media users do nothing—including 36% of Boomers and 29% of Generation X.

This uncertainty has contributed to negative attitudes concerning AI content, with 28% saying that it provides little to no value. Half of those surveyed believe AI content needs better labeling, and 36% think it should be better regulated on social media with labeling requirements, frequency caps or restrictions on which accounts are permitted to post such material. One in five adults (21%) thought AI content should be prohibited from social media platforms altogether.

Morgan O'Rourke is editor in chief of Risk Management and vice president of content and publications for the Risk & Insurance Management Society, Inc. (RIMS).