Digital Rights and Deepfake Proliferation

Joshua Evangelista

Innovation Path: Runner-up

Digital rights are increasingly recognized as fourth-generation human rights, especially in light of the rapid advancement of artificial intelligence and the digitalization of society. Major tech companies such as Google and Facebook dominate the digital landscape, often leveraging user data for profit. Furthermore, AI can reinforce and amplify societal biases, manipulating public opinion and undermining individual autonomy (Botes, 2023). Emerging technologies like deepfakes pose additional risks by generating convincing but false content, such as fabricated videos of political figures making controversial statements, which can severely threaten democratic institutions and public trust (Coeckelbergh, 2023). To address the intersecting challenges of privacy, equity, and innovation, this brief proposes the creation of a U.S. Digital Rights and Innovation Act (DRIA): a dynamic federal framework modelled on proven policy responses and grounded in academic research. The DRIA addresses two priorities: securing digital rights for all and combating the harms of deepfakes.

Digital Rights for All

The United States should implement federal data privacy legislation that guarantees all residents the right to access, delete, and control their personal information. The legislation would be modelled after the European Union’s General Data Protection Regulation (GDPR), which has enhanced transparency and user trust in digital ecosystems (Greenleaf, 2018), and would address critical gaps in current state-level laws such as the California Consumer Privacy Act (CCPA).

A federal standard must:

  1. Strengthen Enforcement: Provide robust mechanisms to ensure compliance and avoid
    the enforcement ambiguities seen in the CCPA.
  2. Clarify Regulations: Offer precise guidelines for businesses and require
    transparency about third-party data sharing.
  3. Bridge the Legal-Tech Divide: Foster collaboration between legal and technology teams
    to operationalize privacy protections effectively. This can be done by having each team share its expertise in terms that someone with a non-technical background can understand.
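To make the access, delete, and control rights concrete for a technical audience, the sketch below models them as operations on a data store. This is a minimal illustration under assumed names (the `PersonalDataStore` class and its methods are hypothetical, not drawn from any statute or existing system); a real implementation would also need authentication, audit logging, and propagation of deletions to third parties.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Illustrative in-memory store supporting GDPR-style data-subject rights."""
    records: dict = field(default_factory=dict)  # user_id -> personal data fields

    def access(self, user_id: str) -> dict:
        """Right of access: return a copy of everything held about the user."""
        return dict(self.records.get(user_id, {}))

    def update(self, user_id: str, key: str, value) -> None:
        """Right to control/rectification: let the user set or correct a field."""
        self.records.setdefault(user_id, {})[key] = value

    def delete(self, user_id: str) -> bool:
        """Right to erasure: remove all of the user's data; report whether any was held."""
        return self.records.pop(user_id, None) is not None
```

For example, after `store.update("u1", "email", "a@example.com")`, a request via `store.access("u1")` returns the stored fields, and `store.delete("u1")` erases them.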

Deepfake Protection

The proliferation of deepfakes poses significant threats to freedom of expression and democracy in the United States. AI-generated content can spread misinformation, manipulate public perception, and disproportionately harm marginalized communities, particularly rural and less educated areas where digital literacy is lower. Without the ability to discern real from fake, individuals in these communities may develop false beliefs, leading to social and economic harm (Barber, 2023).

To mitigate these risks, the following policy measures should be implemented:

1) Dual Policy Response

  • Permissible Use: Deepfakes created for satire, parody, or entertainment where their
    artificial nature is obvious should remain protected under free speech.
  • Restricted Use: Deepfakes designed to deceive, such as manipulated political speeches
    or fraudulent financial announcements (e.g., a fake video of Donald Trump announcing tariff removals to manipulate markets), should be banned or heavily regulated to prevent harm.

2) Public Education & Digital Literacy

  • Expand initiatives like MIT’s “Detect Fakes” toolkit to marginalized communities,
    equipping individuals with skills to identify deepfakes. For instance, lip movements
    are one of the most obvious indicators: unnatural or mismatched lip movements often
    signal that a video has been manipulated.
  • Teach critical media literacy, including source verification and cross-referencing with
    trusted outlets. Knowing which sources are credible is a skill that should be taught
    throughout the United States.

3) Technological Countermeasures

  • Image & Video Verification Tools: Platforms should integrate AI detection tools to flag
    suspected deepfakes. Verification and content-flagging systems like those on Twitter and
    Wikipedia already exist and should be adopted more widely.
  • Content Labeling & Flagging: Mandate clear disclaimers on synthetic media to prevent
    deception.
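The two countermeasures above can be combined in a simple decision rule: media that the uploader declares as synthetic gets the mandated disclaimer, while undeclared media whose detector score exceeds a threshold is flagged for review. The sketch below is illustrative only; the function name, the score scale, and the threshold value are assumptions, and real deepfake detectors produce noisier signals than a single confidence number.

```python
def label_media(detector_score: float, declared_synthetic: bool,
                flag_threshold: float = 0.8) -> str:
    """Assign a disclosure label to a piece of media (illustrative sketch).

    detector_score: hypothetical deepfake-detector confidence in [0, 1]
    declared_synthetic: True if the uploader disclosed the content as AI-generated
    """
    if declared_synthetic:
        return "labeled: AI-generated content"   # mandated disclaimer on synthetic media
    if detector_score >= flag_threshold:
        return "flagged: suspected deepfake"     # queue for human review
    return "unlabeled"
```

For instance, an undeclared clip scoring 0.9 would be flagged for review, while a declared parody is simply labeled regardless of its score.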

References

Barber, A. (2023). Freedom of expression meets deepfakes. Synthese, 202(2). https://doi.org/10.1007/s11229-023-04042-0

Botes, M. (2023). Autonomy and the social dilemma of online manipulative behavior. AI and Ethics, 3(1), 315–323. https://doi.org/10.1007/s43681-022-00157-5

Coeckelbergh, M. (2023). Democracy, epistemic agency, and AI: Political epistemology in times of artificial intelligence. AI and Ethics, 3(4), 1341–1350. https://doi.org/10.1007/s43681-022-00279-9

Greenleaf, G. (2018). Global data privacy laws 2017: 120 national data privacy laws, including Indonesia and Turkey. Privacy Laws & Business International Report.