API Spec
Venice’s text inference API implements the OpenAI API specification, ensuring compatibility with existing OpenAI clients and tools. This document outlines how to integrate with Venice using this familiar interface.
Base Configuration
Required Base URL
All API requests must use Venice’s base URL:
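```
https://api.venice.ai/api/v1
```

This value is consistent with the /api/v1 endpoint paths listed below; confirm the current base URL against Venice's API reference.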
Client Setup
Configure your OpenAI client with Venice’s base URL:
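A minimal sketch using the official OpenAI Python SDK; the key value is a placeholder for your Venice API key, and the base URL is the one shown above:

```python
from openai import OpenAI

# Point the standard OpenAI client at Venice's API instead of OpenAI's.
client = OpenAI(
    api_key="YOUR_VENICE_API_KEY",  # placeholder: use your Venice API key
    base_url="https://api.venice.ai/api/v1",
)
```

Because Venice implements the OpenAI specification, no other client changes are needed; the SDK sends the key as a standard Bearer token.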
Available Endpoints
Models
- Endpoint: /api/v1/models
- Documentation: Models API Reference
- Purpose: Retrieve available models and their capabilities
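For example, using the client configured above (a sketch; the response fields follow the OpenAI SDK's types):

```python
# Retrieve the model list and print each model's identifier.
models = client.models.list()
for model in models.data:
    print(model.id)
```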
Chat Completions
- Endpoint: /api/v1/chat/completions
- Documentation: Chat Completions API Reference
- Purpose: Generate text responses in a chat-like format
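A sketch of a basic request; the model id below is a placeholder, so list models first to find a valid identifier:

```python
response = client.chat.completions.create(
    model="example-model-id",  # placeholder: use an id from /api/v1/models
    messages=[
        {"role": "user", "content": "Say hello in one sentence."},
    ],
)
print(response.choices[0].message.content)
```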
System Prompts
Venice provides default system prompts designed to ensure uncensored and natural model responses. You have two options for handling system prompts:
- Default Behavior: Your system prompts are appended to Venice’s defaults
- Custom Behavior: Disable Venice’s system prompts entirely
Disabling Venice System Prompts
Use the venice_parameters option to remove Venice's default system prompts:
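A sketch with the Python SDK, assuming the include_venice_system_prompt flag; because venice_parameters is not part of the OpenAI specification, the SDK requires it to be passed through extra_body:

```python
response = client.chat.completions.create(
    model="example-model-id",  # placeholder: use an id from /api/v1/models
    messages=[
        {"role": "system", "content": "You are a laconic assistant."},
        {"role": "user", "content": "Hello"},
    ],
    # Venice-specific options go in extra_body so the SDK forwards them verbatim.
    extra_body={
        "venice_parameters": {"include_venice_system_prompt": False},
    },
)
```

With Venice's default prompts disabled, only the system message supplied in the request is applied.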
Best Practices
- Error Handling: Implement robust error handling for API responses
- Rate Limiting: Be mindful of rate limits during the beta period
- System Prompts: Test both with and without Venice’s system prompts to determine the best fit for your use case
- API Keys: Keep your API keys secure and rotate them regularly
Differences from OpenAI’s API
While Venice maintains high compatibility with the OpenAI API specification, there are some Venice-specific features and parameters:
- venice_parameters: Additional configuration options specific to Venice
- System Prompts: Different default behavior for system prompt handling
- Model Names: Venice-specific model identifiers