AI Prompts
Verida’s LLM APIs enable your application to run Large Language Model prompts. Depending on the scope granted, these prompts can range from basic text generation to context-enriched prompts powered by user data. Below is an overview of the available LLM endpoints, the required scopes, and their respective credit costs.
1. LLM Prompt
Summary: Runs an LLM prompt without access to user data. This is ideal for basic text generation, summarization, or other language tasks where user data context is not needed.
Credits Usage: 2 credits
Scope Required:
api:llm-prompt
Example Usage
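The request below is a minimal TypeScript sketch. The endpoint path (`/llm/prompt`) and the single `prompt` body field are assumptions based on common REST conventions; confirm both against the LLM Prompt Documentation.

```typescript
// Minimal sketch: run a basic LLM prompt (no user data).
// The endpoint path and body fields are assumptions, not confirmed here.
const authToken = process.env.VERIDA_AUTH_TOKEN ?? "";

const response = await fetch("https://api.verida.ai/api/rest/v1/llm/prompt", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // The auth_token must include the api:llm-prompt scope
    Authorization: `Bearer ${authToken}`,
  },
  body: JSON.stringify({
    prompt: "Summarize the benefits of decentralized identity in two sentences.",
  }),
});

console.log(await response.json());
```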
Response (example):
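A plausible response payload, with an illustrative `result` field name (not confirmed by this page):

```json
{
  "result": "Decentralized identity lets users hold and control their own credentials instead of relying on centralized providers. This reduces breach exposure and gives applications verified data without storing it themselves."
}
```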
For full request/response structures and parameters, see the LLM Prompt Documentation.
2. LLM Agent Prompt
Summary: Runs an LLM agent prompt that has access to user data. This allows you to leverage personal datastore or database records (based on your granted scopes) as context for the LLM, enabling personalized recommendations, summaries, or insights. This is a very intensive request, as it can involve multiple LLM calls and multiple data queries.
Credits Usage: 5 credits
Scope Required:
api:llm-agent-prompt
Example Usage
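Again a hedged TypeScript sketch; the `/llm/agent` path and body fields are assumptions. The key difference from the basic endpoint is that the token must also carry the data scopes the agent will query.

```typescript
// Minimal sketch: run an LLM agent prompt that can query user data.
// Endpoint path and body fields are assumptions; see the endpoint docs.
const authToken = process.env.VERIDA_AUTH_TOKEN ?? "";

const response = await fetch("https://api.verida.ai/api/rest/v1/llm/agent", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Token needs api:llm-agent-prompt PLUS the datastore/database
    // scopes (e.g. ds:r:file) the agent should be able to query.
    Authorization: `Bearer ${authToken}`,
  },
  body: JSON.stringify({
    prompt: "What are the three most common topics in my recent emails?",
  }),
});

console.log(await response.json());
```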
Response (example):
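An illustrative response, assuming the same `result` field shape as the basic endpoint:

```json
{
  "result": "Your recent emails most often cover project scheduling, invoice reminders, and travel bookings."
}
```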
Note: The LLM agent uses the provided context to query datastores or databases your token is authorized to access. Ensure you request the necessary datastore or database scopes (e.g. `ds:r:file` or similar) alongside `api:llm-agent-prompt`.
For more advanced usage and available parameters, consult the LLM Agent Prompt Documentation.
3. LLM Profile Prompt
Summary: Generates or refines a user profile by analyzing user data. Ideal for creating advanced personalized experiences, such as dynamic user profiles, recommendation engines, or curated content based on a user’s personal data footprint.
Credits Usage: 10 credits
Scope Required:
api:llm-profile-prompt
Example Usage
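A hedged sketch along the same lines; the `/llm/profile` path and request fields are assumptions to verify against the LLM Profile Prompt Documentation.

```typescript
// Minimal sketch: generate a user profile from personal data.
// Endpoint path and body fields are assumptions; see the endpoint docs.
const authToken = process.env.VERIDA_AUTH_TOKEN ?? "";

const response = await fetch("https://api.verida.ai/api/rest/v1/llm/profile", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Token needs api:llm-profile-prompt plus read access to the
    // datastores/databases the profile should be built from.
    Authorization: `Bearer ${authToken}`,
  },
  body: JSON.stringify({
    prompt: "Build a profile of this user's professional interests.",
  }),
});

console.log(await response.json());
```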
Response (example):
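An illustrative response; the `profile` field name and structure are assumptions:

```json
{
  "profile": {
    "interests": ["decentralized identity", "data privacy", "web3 development"],
    "summary": "A developer focused on privacy-preserving, user-owned data applications."
  }
}
```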
Tip: Because `api:llm-profile-prompt` can involve analyzing potentially large volumes of user data, it consumes more credits (10 credits). Make sure the user has granted, and your token includes, the necessary access to the relevant datastores or databases.
For detailed query structures, limitations, and best practices, check the LLM Profile Prompt Documentation.
Best Practices
- Minimal Scope Principle: Only request the LLM scope(s) you truly need. For instance, if you only require basic text generation, request `api:llm-prompt` rather than the more credit-expensive `api:llm-agent-prompt` or `api:llm-profile-prompt`.
- Efficient Prompting: Write concise prompts and consider the size of any included user context to avoid unnecessary token or data usage.
- Store Results Securely: If you need to retain LLM-generated data, store it in a datastore or database with the appropriate permissions.
- Token & Scope Validation: Always confirm your `auth_token` includes the correct LLM scope before invoking these endpoints.
- Handle Errors Gracefully: LLM endpoints may return standard HTTP error codes or custom error messages. Review responses carefully and provide clear feedback to users if something goes wrong (see the sketch after this list).
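As a sketch of the last two practices above, this hypothetical helper surfaces HTTP errors with enough detail to show users meaningful feedback. The endpoint path and `result` field are assumptions carried over from the earlier examples.

```typescript
// Sketch of defensive error handling around an LLM endpoint call.
// Endpoint path and response field names are assumptions.
async function runLlmPrompt(authToken: string, prompt: string): Promise<string> {
  const response = await fetch("https://api.verida.ai/api/rest/v1/llm/prompt", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${authToken}`,
    },
    body: JSON.stringify({ prompt }),
  });

  if (!response.ok) {
    // 401/403 typically mean a missing or insufficient scope on the
    // auth_token; surface the body so callers can show useful feedback.
    const detail = await response.text();
    throw new Error(`LLM request failed (${response.status}): ${detail}`);
  }

  const data = await response.json();
  return data.result; // illustrative field name
}
```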