When Everything Is Fake, What’s the Point of Social Media?
How AI-Generated Videos Are Reshaping Reality, Young Minds & the Ecosystem of Trust
By Abijohn.com
“For years, the internet has been a place where people go to feel connected. But if everything online starts to feel fake… people will start retreating back into what’s physically provable.” — Kashyap Rajesh, Encode (TIME)
The recent explosion of hyper-realistic AI-generated videos—especially with tools like Sora 2—has triggered a silent crisis of trust. As deepfakes blur the line between genuine and synthetic, we face not just a media problem, but an ecological, social, developmental, and regulatory one.
Below are 50 ways this transformation could affect humans, arranged by domain, with commentary on what it means and where we might go. We then examine whether AI can be regulated, and how we might begin building systems of resilience.
I. Erosion of the Digital Ecosystem & Trust (1-10)
1. Collapse of authenticity – When viewers can’t tell whether a video is real or AI-generated, they may gradually come to distrust all visual media. As one article notes, if everything on your “For You” feed might be AI “slop,” you lose faith in the feed itself. (TIME)
2. Algorithmic fatigue – Social-media platforms thrive on engagement, but when synthetic content floods feeds, users may feel overwhelmed by “too much spectacle,” leading to disengagement or avoidance of platforms altogether.
3. Credibility vacuum for creators – Genuine creators will struggle to prove their work is real and human-made, reducing the value of human effort and raising barriers for small creators.
4. Amplified misinformation – Fake videos can depict crises, conflicts, or disasters that never happened, sabotaging newsrooms and fact-checking systems. (jgspl.org)
5. Brand reputation risk – Companies may face backlash if synthetic video content linked to them goes viral and is perceived as misleading. (Aragon Research)
6. Social-media reputational cascades – One fake video can trigger chains of discussion, suspicion, or panic, with real-life consequences (stock-market dips, activism, law-enforcement responses).
7. Trust drain across platforms – Users may assume all videos are fake and thus dismiss real footage of crime, activism, or disasters. This “liar’s dividend” means truth-tellers lose credibility.
8. Digital attention scarcity – Amid endless synthetic content, the human brain becomes desensitized; distinguishing novel real stories from AI noise gets harder, shrinking attention spans.
9. Cultural memory contamination – Over time, recordings and social memory may come to include synthetic events that never occurred, altering collective recollection and history.
10. Platform-ecosystem shift – Platforms may migrate from user-generated video to ultra-filtered or highly curated content, raising access barriers and limiting diversity.
II. Social & Interpersonal Impacts (11-20)
11. Emotional alienation – If friends’ videos, family moments, or shared memories can be faked, people may withdraw from sharing or trusting intimate media, reducing connection.
12. Authentic-relationship erosion – Dating, friendships, and social proof (stories, reels) rely on authenticity; when authenticity is questioned, social bonds weaken.
13. Identity confusion in young users – Children who grow up seeing synthetic selves may struggle to differentiate a real identity from a manufactured persona, affecting self-image.
14. Peer-comparison distortion – Influencers may use AI video to fake lifestyles, setting unrealistic standards for youth who believe what they see is “real.”
15. Bullying and deepfake harassment – AI videos can depict minors in fabricated, compromising situations, material that can be used for bullying or extortion.
16. Family memory disruption – Home videos, celebrations, milestones: all of these can be recreated or manipulated, undermining trust in what counts as a personal archive.
17. Collective anxiety and paranoia – As people worry whether what they see is real, anxiety seeps into everyday experience: did that video of a riot really happen?
18. Shared-reality fracture – Friends, communities, even families may disagree on what is “real” because one side believes the evidence is synthetic. This fractures consensus.
19. Peer-to-peer commerce vulnerability – Young people who rely on social proof (videos) for side hustles may be misled by fake endorsements, fake product videos, or AI scams.
20. Reduced role-model transparency – When celebrities or influencers use synthetic media, youth may lose trust in role models and in the notion of “making it by hard work.”
III. Youth & Developmental Effects (21-30)
21. Early skepticism of media – Kids will grow up assuming video content is unreliable, potentially undermining media literacy and trust in school materials, e-learning videos, and the like.
22. Digital self-representation confusion – As deepfakes become standard, kids may feel pressure to present “perfect,” AI-enhanced selves, generating anxiety and identity dissonance.
23. Diminished attention spans – Synthetic videos often maximize sensationalism; young viewers habituated to that style may struggle with slower, real content (e.g., documentaries).
24. Moral disengagement – If videos of wrongdoing can be faked, youth may become cynical about justice, accountability, and testimony, believing evidence is worthless.
25. Career-aspiration distortion – Seeing AI-perfected influencer lives may make ordinary careers appear less appealing, shifting ambitions toward a “viral persona” rather than skill-building.
26. Fake-achievement anxiety – When AI videos show “overnight success,” genuine effort may feel undervalued, causing frustration and dropout from traditional paths.
27. Trust erosion in educational media – Teachers who rely on video content may face credibility challenges if students assume clips are AI-manipulated.
28. Desensitization to violence – Synthetic violence may become normalized, reducing empathy and increasing tolerance of graphic content.
29. Degraded social cues – AI-generated expressions and interactions may distort children’s learning of real social signalling, creating social-skills gaps.
30. Consumer vulnerability – Young people may be targeted with fake product videos, fake brand endorsements, or AI-driven commerce traps before they develop discernment.
IV. Ecological & Environmental Impacts (31-40)
31. Greenwashing escalation via deepfakes – AI videos can simulate environmental commitments, forest regrowth, or clean-energy successes, misleading stakeholders and distorting climate action. (World Economic Forum)
32. Resource-intensive generation – Training and running text-to-video models devours enormous amounts of compute and energy, contributing to carbon emissions and electronic-waste burdens (a back-of-envelope sense of scale follows this list).
33. Wildlife misrepresentation – Fake wildlife and ecological videos can mislead public perception of species, conservation status, and habitats. (conbio.onlinelibrary.wiley.com)
34. Biodiversity misinformation – AI clips may show “revived” extinct species or false conservation successes, diverting attention from real ecological crises.
35. Reduced trust in nature footage – If real nature-documentary footage is suspected of being fake, public support for conservation may decline.
36. Supply-chain fraud – Commercial ecological claims backed by fake videos (e.g., a mine “closing,” a river “cleaned”) can mislead investors and communities.
37. Plastic pollution of digital content – Like junk visuals clogging social-media feeds, synthetic “AI slop” saturates digital ecosystems, drowning out legitimate environmental messaging. (Wikipedia)
38. Habitat targeting via fake viral videos – Viral AI clips showing endangered creatures might drive tourist influxes to fragile habitats, harming local ecology.
39. Data-centre proliferation – Rising demand for AI video generation means more data centres, more cooling, and more land use, raising the environmental footprint.
40. Attention leakage from real-world activism – Viral fake videos can instantly divert attention from real protests, environmental campaigns, or natural-disaster recovery efforts.
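To give point 32 a rough sense of scale, here is a back-of-envelope sketch. Every figure in it (GPU count, per-GPU power draw, run length, grid carbon intensity) is an assumption chosen purely for illustration, not a measurement of any real text-to-video model:

```python
# Back-of-envelope estimate of one hypothetical training run.
# All figures below are illustrative assumptions, not measurements.
num_gpus = 1_000         # assumed accelerator count
power_kw = 0.7           # assumed average draw per GPU, in kW
hours = 30 * 24          # assumed one-month training run
kg_co2_per_kwh = 0.4     # assumed grid carbon intensity

energy_kwh = num_gpus * power_kw * hours
print(f"Energy: {energy_kwh:,.0f} kWh (~{energy_kwh / 1_000:,.0f} MWh)")
print(f"Emissions: ~{energy_kwh * kg_co2_per_kwh / 1_000:,.0f} t CO2")
# -> Energy: 504,000 kWh (~504 MWh); Emissions: ~202 t CO2
```

Even under these deliberately modest assumptions, a single run lands in the hundreds of megawatt-hours, and serving generated video at viral scale multiplies that footprint daily.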
V. Political, Legal & Governance Effects (41-50)
41. Electoral manipulation – Fake videos can depict candidates doing things they never did, altering perceptions and disrupting democratic processes. (RAND Corporation)
42. International conflict escalation – AI videos showing fabricated attacks or war crimes could provoke cross-border responses before verification is possible.
43. Legal-evidence invalidation – Video evidence, long treated as a gold standard, may be routinely challenged as synthetic content becomes ubiquitous, undermining justice.
44. Regulatory lag – Technology evolves faster than regulation; generative-video tools are outpacing the laws meant to govern them. (ScienceDirect)
45. Copyright and identity-theft threats – AI videos often use likenesses without consent, raising complex intellectual-property and personal-rights challenges. (Wikipedia)
46. Rise of AI-enabled propaganda – State and non-state actors can generate highly realistic videos to spread disinformation at scale and at low cost.
47. Normalized surveillance society – Synthetic videos blur the boundary between real footage and manipulated media, making surveillance, profiling, and attribution harder.
48. Regulatory fragmentation – Without global alignment, regulatory regimes will vary, enabling “AI jurisdiction shopping” by bad actors.
49. Marketplace-integrity degradation – Money-making scams built on fake influencer videos or fake product endorsements become harder to police.
50. Trust tax on technology adoption – Widespread skepticism may slow adoption of autonomous vehicles, smart cities, drones, and other technologies: “if the video could be fake, what else is?”
Can AI be regulated?
Yes—but it will require multi-layered, global, enforceable frameworks.
Key paths forward:
- Mandatory watermarking & provenance metadata: Technologies like C2PA content credentials can tag AI videos with verifiable digital fingerprints. But since Sora 2 watermarks are already being circumvented, enforcement is critical (a minimal sketch of how provenance verification works appears after this list). (Axios)
- Disclosure laws: Platforms should be required to label AI-generated content clearly. Some jurisdictions, such as the EU with its AI Act, are beginning to address this.
- Liability frameworks: The creators, distributors, and hosts of synthetic videos should bear legal liability for the harm they generate.
- Education & media literacy: Young people and citizens must learn to question what they see, spot anomalies, and treat digital media critically.
- Energy & environmental governance: AI-model training should be audited for environmental cost, and models that generate video at massive scale should meet sustainability standards.
- International coordination: Given the borderless nature of internet video, global treaties may be required to regulate generative-video tools and their misuse.
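To make the provenance idea above concrete, here is a minimal sketch of what verification might look like on a platform’s side. This is not the real C2PA API: the manifest fields, the `sign_manifest` and `verify_provenance` helpers, and the HMAC shared secret are all hypothetical simplifications of the content-credentials concept (a hash of the media bytes bound to signed claims about its origin); real C2PA manifests use X.509 certificate chains.

```python
import hashlib
import hmac
import json

def sign_manifest(media_bytes: bytes, claims: dict, key: bytes) -> dict:
    """Bind a hash of the media to origin claims, then sign the bundle."""
    manifest = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "claims": claims,  # e.g. {"generator": "...", "ai_generated": True}
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(media_bytes: bytes, manifest: dict, key: bytes) -> bool:
    """True only if the manifest is untampered AND matches these bytes."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(manifest.get("signature", ""), expected):
        return False  # claims were edited after signing
    return unsigned["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()

key = b"demo-signing-key"     # hypothetical; real systems use PKI, not HMAC
video = b"...video bytes..."  # stand-in for an uploaded file
manifest = sign_manifest(video, {"generator": "Sora 2", "ai_generated": True}, key)

if verify_provenance(video, manifest, key):
    label = "AI-generated" if manifest["claims"]["ai_generated"] else "camera-original"
    print(f"Verified provenance; label: {label}")
else:
    print("Warning: provenance missing, tampered, or mismatched")
```

The design point the sketch illustrates is that a label is only as strong as its binding: edit the claims, swap the video, or strip the manifest, and verification fails. That is why disclosure rules need enforced, platform-side checks rather than voluntary tags.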
But regulation alone won’t solve the underlying trust problem.
Conclusion
If everything we scroll through might be fake, we risk losing not just social media’s value but our sense of connection, our shared reality, and our faith in evidence. Whether the arena is the social-ecological ecosystem, youth development, or the foundations of democracy, the rise of synthetic media changes everything.
Yet beneath the disruption lies opportunity: when people grow tired of what’s fake, they will value what’s real, and demand it all the more insistently.
The real question isn’t whether everything will become fake, but what we’ll choose to believe when authenticity is rare. And maybe, in a future of synthetic spectacle, that will make the authentic moments all the more precious.
Sources
- “When Everything Is Fake, What’s the Point of Social Media?” — TIME Magazine.
- “3 Ways Regulation Can Prevent Deepfake Greenwashing” — World Economic Forum.
- “Risks and benefits of artificial intelligence deepfakes” — Hynek, N., 2025. ScienceDirect.
- “Social, legal, and ethical implications of AI-Generated …” — Ma’arif, A., 2025. ScienceDirect.
- “AI Generated Fake Images & Threats to Conservation” — Guerrero-Casado, J., 2025. conbio.onlinelibrary.wiley.com.
- “Social Media Manipulation in the Era of AI” — RAND Corporation, 2024.