Securing the AI Future: The Role of Decentralized Identities
Super excited to co-author this article with Eugenio Reggianini, an identity giga brain and a tech evangelist.
Artificial Intelligence (AI) is revolutionizing industries by automating tasks, providing insights, and enhancing productivity. However, as AI systems become more prevalent, concerns about the authenticity and security of digital information arise. Decentralized Identities (DIDs) offer a promising solution to these challenges by providing a reliable and user-centric identity system for humans and machines.
Governments and regulatory bodies are increasingly emphasizing the importance of open and decentralized approaches to digital identity and AI. For example, the European Union’s eIDAS (Electronic Identification, Authentication, and Trust Services) framework aims to create a secure and trustworthy environment for electronic transactions across EU member states through the European Digital Identity Wallet. Similarly, the EU’s proposed AI Act seeks to ensure that AI systems are developed and used in a way that respects fundamental rights, safety, and transparency. These regulatory frameworks underscore the need for decentralized identity solutions to ensure AI systems are accountable and transparent. By integrating DIDs, AI systems can better comply with regulatory requirements, enhancing trust in AI-generated outputs.
Big tech companies often establish corporate networks that may not align with the AI open-source community’s goals. These corporate networks can lead to centralized control, limiting innovation and collaboration.
The open-source community has been a driving force behind software innovation, including AI development. However, existing open-source licenses can be problematic: often either too permissive, allowing unrestricted use and modification, or too restrictive, limiting commercial applications. Digital identities can play a key role by providing a verifiable identity system that tracks contributions and incentives, e.g. community-built open AGI efforts by @sentient_agi or the Initial Model Offering (IMO) by @OraProtocol. A privacy-focused identity system can ensure that developers receive appropriate credit and compensation, while businesses can securely build on open-source innovations.
With Generative AI, anyone can become a news agency, creating content ranging from text and images to music and code. This also raises concerns about the authenticity of digital information, intellectual property rights, and freedom. Marc Andreessen says detecting deepfakes is not going to work because AI is getting better every day, so the solution is to certify content as real. The ability to generate realistic but fake content can undermine trust in digital media, erode information integrity, and enable mass manipulation at extreme scale. The web3 industry could benefit hugely from integrating DIDs into AI systems, creating a decentralized verification engine, on-chain and off-chain, over the qualitative analysis of data computed by AI models in Dapps.
Let us double down on some of the key use cases of identity powering AI systems:
1. Enabling identities for personal agents and machines
In on-chain network attestations, establishing a trustworthy verification system for sources is critical. DIDs can be used to create decentralized verification logic in which users, bots, and machines are assigned identifiers that verify their credibility. This can enhance trust in the data and insights generated by AI models, ensuring they are based on reliable, verified sources, and a hierarchical identity management system will improve coordination and reliability across different identifiers and complex AI networks.

A specifically fitting case is personal intelligence (Pi) agents that can authenticate themselves, train on your personality, and access your accounts, acting like a chief of staff for an individual. DIDs can enable such capabilities by providing a secure and verifiable identity system for AI agents, allowing them to operate autonomously while maintaining accountability and trustworthiness in their interactions.

Going a step further, in a parent-child scenario, a parent AI agent can oversee a set of subordinate (child) AI agents acting on its behalf; with pairwise keys, DIDs can ensure that each agent’s actions are traceable and accountable. In a business context such as financial services, investors could manage multiple AI agents, e.g. a trading agent, or a compliance agent performing specific tasks like transaction monitoring and reporting. The market is confirming this need: companies such as Biconomy are launching a decentralized agent network (DAN). Revoking an agent’s or machine’s identity, or just revoking certain permissions, is also an important consideration, which can be enabled with advanced digital identity solutions like Privado ID.
Picture Courtesy: Personal intelligence in action from pi.ai
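The parent-child delegation described above can be sketched in code. The following is a minimal, dependency-free Python sketch; the DID strings, class name, and HMAC-based proof are illustrative assumptions, and a production system would use a real DID method with asymmetric keys (e.g. Ed25519) rather than a shared secret:

```python
import hashlib
import hmac

class AgentIdentity:
    """Sketch of a parent agent issuing pairwise identifiers and
    permission grants to child agents, with per-permission revocation."""

    def __init__(self, did, master_secret):
        self.did = did                # e.g. "did:example:parent"
        self._secret = master_secret  # held only by the parent
        self.revoked = set()          # {(child_did, permission)}

    def pairwise_id(self, counterparty):
        """Derive a distinct identifier per counterparty so activity
        cannot be correlated across services (pairwise keys)."""
        digest = hmac.new(self._secret, counterparty.encode(), hashlib.sha256)
        return "did:example:pairwise:" + digest.hexdigest()[:16]

    def grant(self, child_did, permission):
        """Issue a traceable grant record (stand-in for a verifiable
        credential signed by the parent)."""
        payload = f"{self.did}|{child_did}|{permission}"
        proof = hmac.new(self._secret, payload.encode(), hashlib.sha256).hexdigest()
        return {"issuer": self.did, "subject": child_did,
                "permission": permission, "proof": proof}

    def revoke(self, child_did, permission):
        """Revoke a single permission without revoking the whole identity."""
        self.revoked.add((child_did, permission))

    def is_valid(self, grant):
        """A grant is valid if its proof checks out and it is not revoked."""
        if (grant["subject"], grant["permission"]) in self.revoked:
            return False
        payload = f"{grant['issuer']}|{grant['subject']}|{grant['permission']}"
        expected = hmac.new(self._secret, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, grant["proof"])
```

For example, an investor’s parent agent could grant a child agent a “trade” permission, verify it on every action, and revoke just that permission later, leaving the child’s other duties intact.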
2. AI & DID for DAOs: operating self-sufficient organizations through on-chain reputation systems
AI agents could also streamline the execution of governance operations for DAOs. In this context, DIDs could empower DAO token holders, serve as a form of reward for trusted issuers, unlock premium revenue opportunities for Dapps, and reduce the risk of Sybil attacks.
Specifically, you may see agents running ML models operate Sub-DAOs, accountable for their actions (as in the Maker Endgame) and managing specific tasks or projects, which requires a reliable verification system for voting and grant execution. DIDs can provide this verification, ensuring that only eligible members participate in governance activities and that decisions are transparently documented and executed.
Empowering on-chain attestation as verification can unlock trustworthy AI reputation, which, in the context of DAOs, can be applied to distribute governance incentives across trustworthy stakeholders and potentially unlock additional revenue for the most reliable agents.
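The credential-gated, reputation-weighted governance described here can be illustrated with a short Python sketch. The `Member` fields, the `"dao-member"` credential name, and the `1.0 + reputation` weighting scheme are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    did: str
    credentials: set = field(default_factory=set)  # e.g. {"dao-member"}
    reputation: float = 0.0                        # earned via attestations

def tally(votes, required="dao-member"):
    """Credential-gated, reputation-weighted voting sketch: only holders
    of the required credential count, and each vote is weighted by the
    member's on-chain reputation."""
    result = {"yes": 0.0, "no": 0.0, "rejected": 0}
    for member, choice in votes:
        if required not in member.credentials:
            result["rejected"] += 1        # Sybil / non-member filtered out
            continue
        weight = 1.0 + member.reputation   # base vote plus reputation bonus
        result["yes" if choice else "no"] += weight
    return result
```

In this scheme a Sybil identity without the membership credential is simply rejected, while a long-standing contributor’s vote carries extra weight, matching the idea of routing governance incentives toward trustworthy stakeholders.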
Confirming this market need, DAOBase, an AI-driven data and infrastructure platform tailored for DAOs, leverages advanced algorithms to help DAOs and voters establish their on-chain and off-chain reputation layers. It integrates seven blockchains, covering over 160,000 DAOs, 6 million voters, and more than 200 certified partners.
3. Content authentication to prevent deepfakes and fraud using Gen AI: leveraging identities with reputation
Integrating digital identities into AI systems can provide a reliable verification mechanism, ensuring that outputs are traceable to their sources. This traceability is crucial for maintaining authenticity and protecting against fake Gen AI content. By assigning DIDs to humans and machines, we can establish a robust framework for verifying the origin and integrity of AI-generated content. A publisher’s identity with a (positive and negative) reputation score can let readers decide to what degree they should trust the content. Taking the reputation solution a step further, “Context Based Unique Identifiers,” explained by @sebastian Rodriguez, could reduce the risk of a unique permanent digital identifier by providing a different “alias” of that identifier to each service provider. This system is crucial for media platforms and news agencies to filter out manipulated content, maintain the integrity of digital information, and protect public trust. Content creators can link their work to verifiable identities, securing the chain of trust and fighting deepfakes, as explained in the Privado ID blog.
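The verify-origin-and-integrity flow can be sketched as follows. HMAC stands in for a real DID-key signature to keep the sketch dependency-free, and the DID strings, reputation scores, and `min_score` threshold are illustrative assumptions:

```python
import hashlib
import hmac

def attest_content(content, publisher_did, key):
    """Bind a piece of content to a publisher identity by hashing it and
    proving the (publisher, hash) pair with the publisher's key."""
    content_hash = hashlib.sha256(content).hexdigest()
    proof = hmac.new(key, f"{publisher_did}|{content_hash}".encode(),
                     hashlib.sha256).hexdigest()
    return {"publisher": publisher_did, "hash": content_hash, "proof": proof}

def verify_content(content, attestation, key, reputation, min_score=0):
    """Check integrity, provenance, and publisher reputation in one pass."""
    if hashlib.sha256(content).hexdigest() != attestation["hash"]:
        return False  # content was altered after publication
    payload = f"{attestation['publisher']}|{attestation['hash']}"
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["proof"]):
        return False  # proof does not match the claimed publisher
    # reader's policy: only trust publishers above a reputation threshold
    return reputation.get(attestation["publisher"], 0) >= min_score
```

A media platform could run `verify_content` on every submission: manipulated content fails the hash check, impersonation fails the proof check, and low-reputation publishers fall below the reader’s chosen threshold.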
An additional dimension of leveraging identity is to build and maintain integrity via Reinforcement Learning from Human Feedback (RLHF) loops while filtering out low-quality or toxic feedback in AI networks. A Sybil-resistant user reputation system is crucial for such a review and rating system. By rewarding genuine and valuable user feedback and penalizing toxic activity, sustained improvement in the AI model’s capabilities can be ensured.
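A minimal sketch of this reputation-gated feedback loop, assuming a simple score in [0, 1] per DID and an illustrative threshold and step size (real systems would derive scores from attestations):

```python
def filter_feedback(feedback, reputation, min_rep=0.5):
    """Keep only feedback from identities above a reputation threshold,
    a simple Sybil-resistance gate in front of the RLHF pipeline."""
    return [f for f in feedback if reputation.get(f["did"], 0.0) >= min_rep]

def update_reputation(reputation, did, helpful, step=0.1):
    """Reward genuine feedback, penalize toxic or low-quality activity,
    clamping scores to the [0, 1] range."""
    current = reputation.get(did, 0.5)
    if helpful:
        reputation[did] = min(1.0, current + step)
    else:
        reputation[did] = max(0.0, current - step)
```

Freshly created Sybil identities start below the threshold, so their feedback never reaches the training loop, while contributors whose feedback proves valuable gradually earn more influence.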
4. Marketplace for new content creation with verified users
AI systems are running out of high-quality data to train LLMs. Researchers estimate that by 2026 we will exhaust high-quality text data for training LLMs, a trend that could slow AI progress. This is also emphasized by Mustafa Suleyman, the CEO of Microsoft AI. User-generated content (UGC) across different focus areas (specific skills like medical surgery, experience like combat, domain knowledge in wildlife), plus access to proprietary content, will become hot licensing and acquisition targets for the further development of AI models. Solutions like the Personal Data Bridge by Verida, which lets users share their private data, could enable the next generation of hyper-focused agents.
Inspiration could also be taken from what Vana is building with its Data Liquidity Pools: Vana has trained over 700,000 AI models, and one of its success stories is the Reddit Data DAO, showcasing how 140,000+ users contributed their data and received incentives.
In that context, a two-layered verification process can be envisioned: the first layer authenticates that the user owns the capabilities and skills necessary for the content generation, and the second, provided by the LLM, verifies the authenticity of the generated content.
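The two layers can be sketched as a small pipeline. The credential names are hypothetical, and the second-layer check is a placeholder heuristic standing in for an LLM-based authenticity verifier:

```python
def two_layer_verify(contributor_credentials, content, required_skill):
    """Two-layer check for a UGC marketplace: (1) does the contributor
    hold a verifiable credential for the required skill? (2) does the
    content pass an authenticity check? Returns (accepted, reason)."""
    # Layer 1: credential check against the contributor's verified skills
    if required_skill not in contributor_credentials:
        return False, "missing skill credential"
    # Layer 2: placeholder for an LLM-based authenticity verifier;
    # here we only reject empty submissions for illustration
    if not content.strip():
        return False, "content failed authenticity check"
    return True, "accepted"
```

A data marketplace could call this before accepting a contribution, so that, say, surgical procedure notes are only purchasable from contributors holding a verified medical-surgery credential.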
5. Seamless experience across AI powered wearable IoT devices
AI breakthroughs have launched the search for the next iPhone. Several startups, such as Humane.com and Tab.ai, are developing consumer AI devices that will become part of our daily lives, and OpenAI is in talks with LoveFrom to build the “iPhone of artificial intelligence.” A portable and secure user identity across these devices could leverage contract relayers associated with the unique IDs / DIDs of wearable devices, which may become an entry point for authentication and the potential execution of on-chain / off-chain services.
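One simple way to picture the relayer side of this is a registry binding each wearable’s unique ID to its owner’s DID. The class and method names here are assumptions for the sketch; a real deployment would anchor bindings on-chain and authenticate devices cryptographically:

```python
from typing import Optional

class DeviceRegistry:
    """Illustrative relayer-side registry binding wearable device IDs to
    a user DID, so any registered device can serve as an authentication
    entry point for on-chain / off-chain services."""

    def __init__(self):
        self._bindings = {}  # device_uid -> user_did

    def register(self, user_did, device_uid):
        """Bind a device to its owner's DID."""
        self._bindings[device_uid] = user_did

    def revoke(self, device_uid):
        """Unbind a lost or retired device without touching the user DID."""
        self._bindings.pop(device_uid, None)

    def resolve(self, device_uid) -> Optional[str]:
        """Relayer resolves which user an authenticated device acts for
        before executing services on their behalf."""
        return self._bindings.get(device_uid)
```

The same user DID can back a pin, a pendant, and a pair of glasses at once, and losing one device means revoking one binding rather than rotating the whole identity.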
Summarizing
Digital interactions are growing exponentially each day, and so is the need for effective digital identities. Integrating DIDs into AI systems is a critical step towards creating a secure, transparent, and trustworthy digital ecosystem. By assigning identities to AI modules, ensuring clear roles and responsibilities, implementing Sybil-resistance mechanisms, and leveraging delegated identifiers, we can address many of the existing and upcoming challenges of digital infrastructure. We think blockchain could also be an important puzzle piece helping to build the trust infrastructure needed for self-sovereign identity systems, such as credential revocation, key rotation, and trust registries.