
PersonalAgentKit

Learn about the Verida PersonalAgentKit


The Verida PersonalAgentKit provides integrations with AI frameworks, making it simple to add personal data capabilities to your applications.

For the first release, we provide LangGraph tools that integrate easily with any LangChain / LangGraph application.

The Verida PersonalAgentKit supports the following tools:

  1. Profiles — Allow the LLM to know about your external profiles (e.g. your Google or Telegram accounts)

  2. Query — Allow the LLM to query your data (e.g. your emails, message history, YouTube favourites, etc.)

  3. Get Record — Allow the LLM to fetch a specific record (e.g. a specific file or a specific email)

  4. Chat Search — Allow the LLM to perform a keyword search of your chat message threads

  5. Datastore Search — Allow the LLM to perform a keyword search across any of your supported data (e.g. find all emails that mention "utility bills")

  6. Universal Search — Allow the LLM to perform a keyword search across all your data (e.g. find all mentions of "devcon 2025" across your messages, emails and favourites)

  7. User Information — Allow the LLM to know more about the user account (DID) on the Verida network and what data permissions it can access.
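Here is a minimal sketch of wiring these tools into a LangGraph agent. The package name (`@verida/personalagentkit`) and the `getVeridaTools` factory are illustrative assumptions, not the confirmed API; check the PersonalAgentKit repository for the actual exports and authentication options.

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
// Hypothetical import: consult the PersonalAgentKit repository for the
// real package name and tool factory it exports.
import { getVeridaTools } from "@verida/personalagentkit";

async function main() {
  // The tools authenticate against the Verida Data APIs with an API key
  // (auth token) scoped to the data the user has granted access to.
  const tools = await getVeridaTools(process.env.VERIDA_API_KEY!); // hypothetical signature

  // Claude Haiku 3.5 is one of the recommended models for tool calling (see below).
  const llm = new ChatAnthropic({ model: "claude-3-5-haiku-latest" });

  const agent = createReactAgent({ llm, tools });

  const result = await agent.invoke({
    messages: [
      { role: "user", content: "Summarise my emails from the last 24 hours" },
    ],
  });
  console.log(result.messages.at(-1)?.content);
}

main().catch(console.error);
```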

When the tools make requests to access user data, that data is queried within the Verida confidential compute environment (see Privacy & Security). Due to the enhanced privacy model, there are currently performance considerations (see API Performance).

Which LLM to use?

Not all LLMs are created equal. When you use these tools, the LLM must be capable of understanding the tool descriptions and parameters in order to call the tools correctly and usefully respond to a user's prompt.

Here is what we have learned about leading LLMs to date:

  1. Llama 3.3-70B — Recommended. Works well. Open source.

  2. Claude Haiku 3.5 — Recommended. Works very well.

  3. OpenAI (gpt-4o-mini, gpt-4o) — Not recommended. It fails to leverage the selector and sort parameters from the Query tool for no obvious reason.

LLM Provider QuickLinks

The following LLM providers support the OpenAI API format, so they can be easily used with the OpenAI LangGraph plugin:

  • Get an OpenAI API key
  • Get a RedPill AI key (highly secure, runs Llama 3.3-70B in a TEE, slow)
  • Get an Anthropic key (for Claude)
  • Get a Groq key (Groq is fast)
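Since these providers expose the OpenAI API format, you can point LangChain's `ChatOpenAI` client at any of them by overriding the base URL. A minimal sketch, shown here with Groq's endpoint and its Llama 3.3-70B model; swap in another provider's base URL, key and model name as needed:

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Any OpenAI-compatible provider can be used by overriding the base URL.
// Shown with Groq serving Llama 3.3-70B; substitute the URL, API key and
// model name for RedPill AI or another provider.
const llm = new ChatOpenAI({
  model: "llama-3.3-70b-versatile",
  apiKey: process.env.GROQ_API_KEY,
  configuration: {
    baseURL: "https://api.groq.com/openai/v1",
  },
});
```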

LLM Context Length Notes

We recommend using LLMs with large context lengths for best results.

LLMs have different context lengths that limit how much user data can be sent to them for processing. The current suite of tools doesn't place any limit on the length of user data sent to the LLM, so in some instances the LLM will throw an error saying its context length limit was reached.

A future enhancement would be to provide a configurable character limit for the user data tool responses. We accept PRs! :)
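As a rough illustration of that enhancement, the sketch below wraps each tool so its response is truncated before it reaches the LLM. The wrapper and its default limit are hypothetical, not part of the current PersonalAgentKit.

```typescript
import {
  DynamicStructuredTool,
  type StructuredToolInterface,
} from "@langchain/core/tools";

// Hypothetical wrapper: truncate a tool's response to a configurable
// character limit so it can't blow out the LLM's context window.
// The default limit is arbitrary; tune it to your model.
function withCharLimit(
  original: StructuredToolInterface,
  maxChars = 16_000
): DynamicStructuredTool {
  return new DynamicStructuredTool({
    name: original.name,
    description: original.description,
    schema: original.schema as any, // reuse the original parameter schema
    func: async (input: Record<string, unknown>) => {
      const output = await original.invoke(input);
      const text = typeof output === "string" ? output : JSON.stringify(output);
      return text.length > maxChars
        ? `${text.slice(0, maxChars)}\n[response truncated at ${maxChars} characters]`
        : text;
    },
  });
}

// Usage: const cappedTools = tools.map((t) => withCharLimit(t));
```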

LLM Privacy Considerations

Centralized LLM services (OpenAI, Anthropic, Groq, etc.) can access your prompts and any data sent to them.

Verida's APIs operate within a secure confidential compute environment, ensuring that user data is only exposed via API requests with the permissions granted by the user.

When you connect an LLM to the Verida APIs, the LLM can't access all of the user's data, but it can access any data returned by the APIs. For example, if you request a summary of your last 24 hours of emails, those emails will be sent to the LLM for processing, but the LLM can't automatically access all your emails.

This is very important to understand, because you are trusting these centralized LLM services not to expose your prompts or the data sent to their LLMs. These could be exposed by malicious employees or as a result of a third-party hack on the service.

How to Secure your LLM Requests

There are two key ways you can eliminate the security risks associated with centralized LLM services:

  1. Operate the LLM on a local device that you control (see the sketch below)

  2. Use an LLM from a third-party service that runs within a Trusted Execution Environment (TEE), such as RedPill AI (very secure, but also very slow in our testing as of 26 Mar 2025)
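For the first option, here is a minimal sketch using LangChain's `ChatOllama` to run a model entirely on hardware you control. It assumes Ollama is installed and the model has already been pulled; substitute any local OpenAI-compatible server you prefer.

```typescript
import { ChatOllama } from "@langchain/ollama";

// Runs the model entirely on hardware you control, so prompts and any
// user data returned by the Verida tools never leave your machine.
// Assumes Ollama is running locally with the model already pulled
// (e.g. `ollama pull llama3.3:70b`).
const llm = new ChatOllama({
  model: "llama3.3:70b",
  baseUrl: "http://localhost:11434", // default Ollama endpoint
});
```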
