Identifiable and Verifiable AI | Unlocking safe agentic use-cases with Privado.id and ORA
A point of view co-authored by @0xAlecJames (@ORAProtocol) and @ravikantagrawal (@PrivadoId): How Onchain AI Oracle by ORA and agent identity (‘Know Your Agent’) by Privado.id enable practical use-cases.
We’re certainly in the midst of the first true AI agent boom. Satya Nadella, CEO of Microsoft, predicts AI agents will become the primary way we interact with computers, understanding our needs and proactively helping with tasks. Y Combinator suggests that vertical agents could grow to 10x the size of the SaaS industry. Already this year, we are seeing significant attention flow to the development of agentic frameworks and tooling, with several early agents taking crypto users by storm.
While all that is exciting, Dario Amodei, CEO of Anthropic, has discussed the importance of responsible scaling policies for AI systems and the need to address potential risks as AI becomes more powerful. Sustainable growth for AI will require transparency, auditability, and security guarantees. Verifiability and identity are crucial components of this approach.
Verifiable AI
Verifiable AI inference is one mechanism for bringing accountability and transparency to AI interactions. Inference is the stage of the agentic pipeline where an AI model produces an output, which may be consumed as information or used to take an action.
There are several frameworks for providing verifiable AI inference: zkML, opML and PoS solutions. ORA uses Optimistic Machine Learning (opML) as an efficient, scalable, and secure middle ground for its approach to decentralized, verifiable inference.
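To make the optimistic model concrete, here is a minimal TypeScript sketch of how an opML-style flow works: an inference result is posted with commitments to the model and input, and it finalizes only if nobody disputes it within a challenge window. All type and function names here are illustrative assumptions, not ORA’s actual API.

```typescript
// Minimal sketch of an opML-style optimistic verification flow.
// Names and fields are illustrative, not ORA's actual interface.

type InferenceClaim = {
  modelHash: string; // commitment to the model weights
  inputHash: string; // commitment to the prompt/input
  output: string;    // claimed inference result
  postedAt: number;  // unix timestamp (ms) when posted onchain
};

// Hypothetical dispute period, e.g. 24 hours.
const CHALLENGE_WINDOW_MS = 24 * 60 * 60 * 1000;

// A claim is optimistically accepted: it finalizes unless a challenger
// proves a different output via a fraud proof during the window.
function isFinalized(claim: InferenceClaim, disputed: boolean, now: number): boolean {
  return !disputed && now - claim.postedAt >= CHALLENGE_WINDOW_MS;
}

// A challenger re-runs the computation off-chain; in a real system a
// mismatch escalates to an onchain bisection / fraud-proof game.
function shouldDispute(claim: InferenceClaim, recomputedOutput: string): boolean {
  return recomputedOutput !== claim.output;
}
```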
The best way to understand why verifiability is important is to consider the current AI memecoin meta. Supposedly autonomous agents are amassing audiences on X and launching tokens at some stage in their journey. How do you know that the agent itself decided to launch a token? How can you check that it isn’t the team LARPing as the agent and launching a token for their own gain? Verifiable AI inference can demonstrate that information or actions originating from an AI are truly the result of its own inspiration and decision-making. Verifiable inference proves there is no human behind the agent.
Additionally, verifiability creates a means for auditing AI inference, allowing actors to trace back why an agent acted in a certain way and to authenticate the origin of that action. This matters for proving that an agent is behaving as its developer/owner intended, and it has relevance for all of AI as we monitor, test, and benchmark the positive and negative qualities of new models. Anthropic itself recently highlighted the need for transparent AI testing mechanisms, so that the models used to test other models for their ability to deceive or otherwise negatively influence humans aren’t themselves susceptible to the very qualities they test for.
Agent Identity
AI agents will take on tedious tasks like managing routine emails, completing expense reports, booking travel, and handling repetitive e-commerce activities. For agents to reach this level of usage, the infrastructure to verify, delegate, audit, and manage them must be ready to support the next wave of innovation.
Until now, agents’ utility has been limited by their inability to authenticate themselves and sign transactions. Privado ID is working on several approaches to agent identity and governance that can enable many more practical use cases. For personal AI agents to deliver frictionless experiences, the “Know Your AI Agent” (KYA) process must be:
- Trustworthy
- Cost-effective
- Universally applicable
Decentralized Identifiers (DIDs) are highly suitable for AI agents, providing self-sovereignty, interoperability, and robust cryptographic functionality. By utilizing a cryptographically verifiable identifier, an AI agent can receive attestations from other identifiable entities, whether they are humans or other agents. Leveraging this approach, Privado ID is creating an agent identity framework that ensures a balance between transparency and privacy while delivering scalable and compliant identity solutions.
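As a rough illustration of this structure, the TypeScript sketch below models an agent identity as a DID plus a set of attestations issued by other identifiable entities. The field names and trust check are assumptions made for illustration, not Privado ID’s actual schema, and signature verification is omitted for brevity.

```typescript
// Illustrative shape of an agent identity record: a DID plus attestations
// from other identifiable entities (humans or agents).
// Field names are hypothetical, not Privado ID's schema.

type Attestation = {
  issuerDid: string;             // DID of the entity vouching for this agent
  claim: Record<string, string>; // e.g. { developer: "ExampleHealthCo" }
  signature: string;             // issuer's signature over the claim
};

type AgentIdentity = {
  did: string;       // e.g. "did:example:agent:123"
  publicKey: string; // key the agent signs messages/transactions with
  attestations: Attestation[];
};

// A relying party accepts an agent only if a trusted issuer attests to it.
// (A real check would also verify each attestation's signature.)
function isTrusted(agent: AgentIdentity, trustedIssuers: Set<string>): boolean {
  return agent.attestations.some((a) => trustedIssuers.has(a.issuerDid));
}
```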
Additionally, with the onchain verification capabilities provided by Privado ID, the zk-proofs generated by users or agents can be verified by smart contracts, making Web3 use-cases more efficient and programmable.
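For instance, a client (or another contract composing with it) might check a proof against a deployed verifier along these lines, sketched here with ethers.js v6. The verifier address, ABI, and function signature are hypothetical placeholders, not Privado ID’s actual deployment.

```typescript
// Hedged sketch: checking a zk-proof against an onchain verifier contract.
// Address, ABI, and function name are placeholders for illustration.
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("https://rpc.example.org");

// Hypothetical Groth16-style verifier interface.
const verifierAbi = [
  "function verifyProof(uint256[2] a, uint256[2][2] b, uint256[2] c, uint256[] input) view returns (bool)",
];

const verifier = new ethers.Contract(
  "0x0000000000000000000000000000000000000000", // placeholder: deployed verifier address
  verifierAbi,
  provider
);

async function checkAgentProof(
  a: [bigint, bigint],
  b: [[bigint, bigint], [bigint, bigint]],
  c: [bigint, bigint],
  publicInputs: bigint[]
): Promise<boolean> {
  // A smart contract can call the same verifier function directly,
  // which is what makes these identity checks programmable onchain.
  return verifier.verifyProof(a, b, c, publicInputs);
}
```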
Verifiability and Identity, Combined
Together, verifiability and identity facilitate a more efficient and secure programmatic landscape for AI agents to interact in: one in which users and protocols can automatically prioritize agents that are acting genuinely, while restricting bad actors. Further, this allows the creation of onchain apps and services that cater to specific types of agents, as designated by their identity.
Identity comes into play at the very start of an interaction with an agent: protocols, users, and other agents can validate that an agent is what it claims to be. Verifiability comes into play at the end, allowing entities to interact with an agent’s assets, actions, and information in a trustless way. Together they provide an end-to-end layer of accountability for agent interactions.
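Reusing the illustrative types from the sketches above, the end-to-end gate might look like the following: identity is checked before any interaction, and the agent’s output is only acted on once its inference claim verifies. Again, this is a hedged sketch under those assumptions, not a shipped API.

```typescript
// Combined gate: KYA identity check up front, verifiable inference check
// before acting on the output. Reuses the hypothetical AgentIdentity,
// isTrusted, and InferenceClaim definitions from the earlier sketches.

async function interactWithAgent(
  agent: AgentIdentity,
  trustedIssuers: Set<string>,
  claim: InferenceClaim,
  verifyInference: (c: InferenceClaim) => Promise<boolean>
): Promise<string> {
  // 1. Identity first: is this agent attested by an issuer we trust?
  if (!isTrusted(agent, trustedIssuers)) {
    throw new Error("Agent failed the KYA identity check");
  }
  // 2. Verifiability last: did the claimed output really come from the
  //    committed model and input (e.g. a finalized opML claim or zk-proof)?
  if (!(await verifyInference(claim))) {
    throw new Error("Inference claim could not be verified");
  }
  // Only now do we treat the output as trustworthy and act on it.
  return claim.output;
}
```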
Example Scenario
Take this example to ground what we’ve been talking about in real interactions. There has already been exploration of the efficacy of current LLMs in healthcare. In the future, we may see personal healthcare agents emerge to diagnose users (a human-agent interaction scenario) and work with them to build a treatment routine across a number of domains (physical, mental, etc.).
Identity will be crucial in this scenario. Users may only trust agents that are developed by well-known, trusted healthcare providers. Privado ID’s KYA identity layer will allow users to filter for the specific agents they want to interact with, and programmatically prevent any interactions with agents that don’t fit those categories. This will be crucial in preventing unintended interactions between users and agents in areas where the consequences are significant, such as healthcare treatment.
Verifiability is also crucial here. Verifiable AI allows a user to trace back how an agent arrived at its treatment recommendation: what the inputs were and which models were used. Further, it allows users to check that the healthcare recommendation they receive came from an agent specifically designed for that task, removed from human error. Verifiability gives users confidence in trusting agents.
We can extend this example further by considering a user’s personal agent interacting with this healthcare agent (an agent-agent interaction scenario). Perhaps a user has tasked their personal assistant agent with getting their diagnosis and healthcare treatment plan. A high-quality personal assistant agent will also require identity and verifiability in order to interact with another agent. The programmatic nature of onchain identity and verifiability means that if these requirements are suitably met, the interaction can also happen very efficiently.
For any inputs or queries, please feel free to reach out to either of us on X and we would be happy to exchange thoughts.