PersonalAgentKit
Learn about the Verida PersonalAgentKit
The Verida PersonalAgentKit provides integrations with AI frameworks so they can be easily embedded into your applications.
The first release provides LangGraph tools that integrate easily with any LangChain / LangGraph application.
The Verida PersonalAgentKit supports the following tools:
Profiles — Allow the LLM to know about your external profiles (e.g. Google or Telegram accounts)
Query — Allow the LLM to query your data (e.g. query your emails, message history, YouTube favourites, etc.)
Get Record — Allow the LLM to fetch a specific record (e.g. a specific file or a specific email)
Chat Search — Allow the LLM to perform a keyword search of your chat message threads
Datastore Search — Allow the LLM to perform a keyword search across any of your supported data (e.g. find all emails that mention "utility bills")
Universal Search — Allow the LLM to perform a keyword search across all your data (e.g. find all mentions of "devcon 2025" across your messages, emails and favourites)
User Information — Allow the LLM to know more about the user account (DID) on the Verida network and what data permissions it can access.
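As a rough illustration of how a tool-calling LLM uses these tools, the sketch below shows the kind of structured call an LLM might emit for the Query tool. The field names and selector syntax are assumptions for illustration, not the PersonalAgentKit's actual schema:

```typescript
// Hypothetical shape of a Query tool call emitted by the LLM.
// Field names and selector syntax are illustrative only.
interface QueryToolCall {
  tool: string;
  datastore: string;                       // which data type to query (e.g. emails)
  selector: Record<string, unknown>;       // filter over records, database-selector style
  sort: Record<string, "asc" | "desc">[];  // ordering of results
  limit: number;                           // cap on how many records come back
}

const call: QueryToolCall = {
  tool: "query",
  datastore: "emails",
  selector: { subject: { $regex: "invoice" } },
  sort: [{ sentAt: "desc" }],
  limit: 20,
};

// prints: query -> emails (limit 20)
console.log(`${call.tool} -> ${call.datastore} (limit ${call.limit})`);
```

The selector and sort fields matter in practice: an LLM that ignores them (see the model notes below) will fetch unfiltered, unordered data and produce weaker answers.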
Which LLM to use?
Not all LLMs are created equal. To use these tools effectively, the LLM must understand each tool's description and parameters well enough to call the tools correctly in response to a user's prompt.
Here is what we have learned about leading LLMs to date:
Llama 3.3-70B — Recommended. Works well. Open source.
Claude Haiku 3.5 — Recommended. Works very well.
OpenAI (gpt-4o-mini, gpt-4o) — Not recommended. It fails to leverage the selector and sort parameters of the Query tool for no obvious reason.
LLM Provider QuickLinks
The following LLM providers support the OpenAI API format, so they can be used easily with the OpenAI LangGraph plugin.
Get a RedPill AI key (Highly secure, runs Llama 3.3-70B in a TEE, slow)
Get an Anthropic key (for Claude)
Get a Groq key (Groq is fast)
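In practice, "supports the OpenAI API format" means swapping providers usually comes down to changing the base URL, API key and model name. A minimal sketch (the Groq values are taken from its public documentation; treat all values as assumptions to verify against each provider's docs):

```typescript
// Sketch: OpenAI-compatible providers differ mainly in base URL, key and model.
interface ProviderConfig {
  baseURL: string; // provider's OpenAI-compatible API root
  apiKey: string;  // secret key from the provider's dashboard
  model: string;   // model identifier as the provider names it
}

// OpenAI-format providers expose chat completions at this path.
function chatEndpoint(cfg: ProviderConfig): string {
  return `${cfg.baseURL}/chat/completions`;
}

const groq: ProviderConfig = {
  baseURL: "https://api.groq.com/openai/v1", // per Groq's OpenAI-compatibility docs
  apiKey: "YOUR_API_KEY",                    // placeholder, never hard-code real keys
  model: "llama-3.3-70b-versatile",
};

console.log(chatEndpoint(groq));
```

Because only the config differs, you can trial RedPill, Anthropic (via its OpenAI-compatible endpoint) or Groq against the same LangGraph agent code.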
LLM Context Length Notes
We recommend using LLMs with large context windows for best results.
LLMs have different context lengths that limit how much user data can be sent to them for processing. The current suite of tools does not limit the amount of user data sent to the LLM, so in some instances the LLM will throw an error saying its context limit was reached.
A future enhancement would be to provide a configurable character limit for the user data in tool responses. We accept PRs! :)
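Such a limit could be a small wrapper applied to each tool response before it is handed to the LLM. A minimal sketch of the idea (truncateToolResponse is a hypothetical helper, not part of the kit today):

```typescript
// Hypothetical helper: cap a tool response at maxChars before it reaches
// the LLM, appending a marker so the model knows data was cut off.
function truncateToolResponse(text: string, maxChars: number): string {
  if (text.length <= maxChars) return text;
  const marker = "\n[truncated]";
  // Note: very small limits (below the marker length) still return the marker.
  return text.slice(0, Math.max(0, maxChars - marker.length)) + marker;
}

console.log(truncateToolResponse("a".repeat(50), 20)); // 20 chars, ends in marker
console.log(truncateToolResponse("short", 20));        // unchanged: "short"
```

A character limit is a crude proxy for tokens, but it is model-agnostic and avoids pulling a tokenizer into the toolkit.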
LLM Privacy Considerations
Centralized LLM services (OpenAI, Anthropic, Groq etc.) can access your prompts and any data sent to them.
Verida's APIs operate within a secure confidential compute environment, ensuring that user data is only exposed via API requests with the permissions granted by the user.
When you connect an LLM to the Verida APIs, the LLM can't access all the user data, but it can access any data returned by the APIs. For example, if you request a summary of your last 24 hours of emails, those emails will be sent to the LLM for processing, but the LLM can't automatically access all your emails.
This is very important to understand, because you are trusting these centralized LLM services not to expose your prompts or the data sent to their LLMs. This data could be exposed by malicious employees or as a result of a third-party hack on that service.
How to Secure your LLM Requests
There are two key ways you can eliminate the security risks associated with centralized LLM services:
Operate the LLM on a local device that you control
Use an LLM from a third-party service that runs within a Trusted Execution Environment (TEE), such as RedPill AI (very secure, but also very slow in our testing as of 26 Mar 2025).