Personhood credentials: Artificial intelligence (AI) and the urgent need to distinguish who is a real human user online

Ravikant Agrawal

--

Fig 1: Image source: https://arxiv.org/pdf/2408.07892

The rise of artificial intelligence (AI) has introduced significant challenges for online interactions, particularly around deception. As malicious actors increasingly exploit anonymity to commit fraud and spread misinformation, distinguishing real users from sophisticated AI-generated entities has become harder. Personhood credentials (PHCs) offer a promising way to address this concern: these digital credentials let users verify that they are genuine individuals while safeguarding their privacy, thereby enhancing trust in online environments.

1. Challenges with AI-powered deception

With access to increasingly capable AI, malicious actors can potentially orchestrate more effective deceptive schemes. Two trends contribute to these schemes’ potential impact:

Fig 2: Image source: https://arxiv.org/pdf/2408.07892

Taken together, these two trends suggest that AI may make deceptive activity more convincing (through increased indistinguishability) and easier to carry out (through increased scalability).

2. Gaps in current solutions for countering AI-powered deception

There are many tools currently used to reduce deceptive and malicious activity online, particularly when the activity is AI-powered. The table below shows existing tools for countering AI-powered deception and their main deficits:

Fig 3: Image source: https://arxiv.org/pdf/2408.07892

3. Solution option: Defining personhood credentials (PHC)

Personhood credentials (PHCs) empower users and services to counter deception. Adding the option to verify with a PHC could enhance users’ ability to protect their privacy and services’ ability to counter deception.

To counter scalable deception while maintaining user privacy, PHC systems must meet two foundational requirements:

  1. Credential limits: The PHC issuer aims to issue only one credential per person and provides ways to mitigate the impact of credential transfer or theft. Two key considerations here: a) issuers check the one-per-person requirement at enrollment; b) credentials expire or require regular re-authentication.
  2. Unlinkable pseudonymity: PHCs let a user interact with services anonymously through a service-specific pseudonym; the user’s digital activity is untraceable by the issuer and unlinkable across service providers, even if service providers and issuers collude. Three key considerations here: a) minimal identifying information is stored during enrollment; b) minimal disclosure during usage; c) unlinkability by default.
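The white paper’s actual constructions rely on anonymous-credential cryptography and zero-knowledge proofs; purely as a toy illustration of the unlinkability property (all names here are hypothetical, not from the paper), a per-service pseudonym can be derived from a user-held secret so that pseudonyms are stable within a service but unlinkable across services:

```python
import hmac
import hashlib
import secrets

def derive_pseudonym(credential_secret: bytes, service_id: str) -> str:
    """Derive a service-specific pseudonym from a user-held secret.

    The same secret yields a stable pseudonym for a given service, but
    without the secret, pseudonyms for different services cannot be linked.
    """
    return hmac.new(credential_secret, service_id.encode(), hashlib.sha256).hexdigest()

# One secret per user; in a real PHC system this would be bound to a
# one-per-person credential issued after enrollment checks.
secret = secrets.token_bytes(32)

p_a = derive_pseudonym(secret, "service-a.example")
p_b = derive_pseudonym(secret, "service-b.example")

print(p_a != p_b)   # distinct pseudonyms per service
print(p_a == derive_pseudonym(secret, "service-a.example"))  # stable per service
```

A real system would also need the issuer to be unable to compute these pseudonyms, which is what the zero-knowledge machinery in the paper provides; this sketch shows only the unlinkability-by-default behavior.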

They work as follows:

Fig 4: Image source: https://arxiv.org/pdf/2408.07892

PHCs improve on and complement existing approaches to countering AI-powered deception online.

4. Potential challenges for a robust PHC system implementation

To achieve their benefits, PHC systems must be designed and implemented with care. PHCs’ impacts should be carefully managed in the following four areas:

  1. Equitable access to digital services that use PHCs
  2. Free expression supported by confidence in the privacy of PHCs
  3. Checks on power of service providers and PHC issuers
  4. Robustness to attack and error by different actors in the PHC ecosystem
Fig 5: Image source: https://arxiv.org/pdf/2408.07892

5. Benefits enabled with PHCs

PHCs give digital services a tool to reduce the efficacy and prevalence of deception, especially in the form of:

  1. Sockpuppets: deceptive actors purporting to be “people” that do not actually exist.
  2. Bot attacks: networks of bots controlled by malicious actors to carry out automated abuse (e.g., breaking site rules and evading suspension by creating new accounts).
  3. Misleading agents: AI agents misrepresenting whose goals they serve.
Fig 6: Image source: https://arxiv.org/pdf/2408.07892
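As a hypothetical sketch (not a mechanism specified in the paper) of how credential limits blunt bot attacks: because each person holds at most one credential, a service can cap actions such as account creation per verified pseudonym, so a bot operator cannot mass-register accounts by simply creating new identities:

```python
from collections import defaultdict

# Cap chosen for illustration only.
MAX_ACCOUNTS_PER_PSEUDONYM = 3

accounts = defaultdict(int)  # accounts created per verified pseudonym

def try_create_account(pseudonym: str) -> bool:
    """Allow account creation only while the pseudonym is under its cap."""
    if accounts[pseudonym] >= MAX_ACCOUNTS_PER_PSEUDONYM:
        return False
    accounts[pseudonym] += 1
    return True

results = [try_create_account("pseudonym-123") for _ in range(5)]
print(results)  # → [True, True, True, False, False]
```

The key point is that evading suspension by re-registering no longer works: new accounts still trace back to the same one-per-person pseudonym.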

6. Next steps for consideration

The internet is inadequately prepared for the challenges highly capable AI may pose. Governments, technologists, and standards bodies could, in close consultation with the public, consider the following ideas to address the threat of AI-powered scalable deception online:

  1. Invest in the development and piloting of personhood credentialing systems, e.g., explore building PHCs incrementally atop existing credentials such as digital driver’s licenses.
  2. Encourage adoption of personhood credentials, e.g., determine services for which PHCs ought to be substitutable for ID verification.
Fig 7: Image source: https://arxiv.org/pdf/2408.07892

7. Conclusion

This article largely summarizes the white paper titled “Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online.” The authors provide a compelling examination of how personhood credentials can mitigate online deception while preserving user privacy. By addressing the challenges posed by malicious actors, their work paves the way for more secure and authentic online interactions.

For a comprehensive understanding and detailed insights, refer to the full white paper on arXiv: https://arxiv.org/pdf/2408.07892
