Reference to Video lets you lock in the appearance of characters, objects, and scenes so your AI-generated videos stay visually consistent. Instead of hoping the model interprets your prompt correctly, you provide visual anchors — reference images that tell the model exactly what your subject looks like.
This feature is available on Kling O3 models in the Venice Video Studio.
When to use Reference to Video
Use Reference to Video when you need:
- Character consistency — the same person or character across multiple shots
- Product accuracy — a real product that must look identical to the original
- Scene continuity — a specific environment or background across generations
- Multi-character scenes — multiple distinct characters interacting without blending
For simple text-to-video or image-to-video where consistency isn’t critical, the standard Kling models work well without references.
Core concepts
Reference to Video uses three types of visual input that work together:
| Input | Required | Purpose | How to reference in prompt |
|---|---|---|---|
| Elements | At least one visual input* | Lock a character or object’s identity | @Element1, @Element2, etc. |
| Scene Reference Images | At least one visual input* | Set the environment, style, and mood | @Image1, @Image2, etc. |
| Start Frame | At least one visual input* | Control the first frame of the video | N/A (set via upload) |
| End Frame | No | Control the last frame of the video | N/A (set via upload) |
*At least one of: start frame, elements, or scene reference images is required.
Elements
An Element is a character or object you want to keep visually stable throughout the video. Each element consists of:
- Frontal Image (required per element) — a clear, front-facing photo of the subject. This is the primary identity anchor. Think of it as the “passport photo” of your character or product.
- Reference Images (1–3, optional) — additional angles of the same subject (side view, 45-degree angle, back). These help the model understand the subject in 3D space. If not provided, the frontal image is automatically used as the reference.
You can add up to 7 elements per generation (limited by combined total). Reference them in your prompt using @Element1, @Element2, etc.
Scene Reference Images
Scene references define the “stage” where the action takes place. They influence:
- Lighting and color palette
- Architecture and environment details
- Overall visual style and mood
You can add up to 4 scene images. Reference them as @Image1, @Image2, etc. in your prompt.
Limitations
The total number of images across all input types is limited:
| Limit | Value |
|---|---|
| Minimum required | At least 1 visual input (start frame, element, or scene image) |
| Combined total (first frame + last frame + elements + scene images) | 7 maximum |
| Elements (without start/end frame) | 7 maximum |
| Elements (with start or end frame) | 3 maximum |
| Scene reference images | 4 maximum |
| Reference images per element | 1–3 |
Example scenarios:
- 7 elements + 0 scene images = 7 ✓ (no frames)
- 5 elements + 2 scene images = 7 ✓ (no frames)
- First frame (1) + 3 elements + 3 scene images = 7 ✓
- First frame (1) + last frame (1) + 3 elements + 2 scene images = 7 ✓
- First frame (1) + 4 elements = ✗ (max 3 elements with frame)
- First frame (1) + last frame (1) + 4 elements = ✗ (max 3 elements with frames)
Each element requires a frontal image. If you don’t provide reference images for an element, the frontal image is automatically used as the reference.
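These limits are straightforward to check before building a request. The sketch below is illustrative only — the function name and messages are ours, not part of the Venice API:

```python
def validate_inputs(elements, scene_images, start_frame=False, end_frame=False):
    """Check Reference to Video input limits before submitting a generation."""
    frames = int(start_frame) + int(end_frame)
    total = frames + elements + scene_images
    if total < 1:
        return "Need at least one visual input"
    if total > 7:
        return f"Combined total {total} exceeds 7"
    if frames and elements > 3:
        return "Max 3 elements when using a start or end frame"
    if scene_images > 4:
        return "Max 4 scene reference images"
    return "OK"

print(validate_inputs(elements=7, scene_images=0))                    # no frames: allowed
print(validate_inputs(elements=4, scene_images=0, start_frame=True))  # rejected: max 3 with a frame
```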
Multi-shot mode
Multi-shot lets you break a single generation into multiple scenes, each with its own prompt and duration. Elements and scene references carry across all shots, maintaining consistency. The total duration across all shots cannot exceed 15 seconds.
Step-by-step guide (Video Studio)
1. Open Video Studio and select the model
Go to venice.ai/video. In the Model Browser on the left, select one of the Kling O3 Reference to Video models:
- Kling O3 Pro R2V — higher quality, longer generation time (~6 min)
- Kling O3 Standard R2V — faster, more cost-effective for iteration
2. Add Elements
You must provide at least one visual input to generate a video: a start frame, an element, or a scene reference image. In the Input Panel, you’ll see the Elements section. Click Add Element to create an element for characters or objects you want to keep visually consistent.
For each element:
- Click the Frontal tile to upload a clear, front-facing image of your character or object
- Optionally click Add under Reference Images to upload additional angles (1–3)
Repeat for additional characters or objects (up to 7 elements total, or 3 if using start/end frames).
The combined total of first frame, last frame, elements, and scene images cannot exceed 7. See Limitations for details.
Best reference images: Use well-lit photos with a clean background. Provide front, side, and 45-degree angle views for the strongest identity lock. Make sure all reference images share the same visual style (don’t mix photorealistic and anime).
3. Add Scene Reference Images (optional)
Below the Elements section, you’ll see Scene Reference Images. Upload images that define the environment you want — a specific location, lighting setup, or art style.
These are tagged automatically as @Image1, @Image2, etc.
4. Upload a Start Frame (optional)
If you want to control the exact first frame of your video, switch to the Image input type and upload a start frame. You can also optionally set an end frame.
5. Write your prompt
In the prompt field, describe the action you want while referencing your elements and scene images using the @ tags:
@Element1 walks through the streets of @Image1, looking up at the buildings.
The camera slowly tracks from behind, revealing the city skyline.
For multi-character scenes:
@Element1 and @Element2 enter the cafe in @Image1 from opposite sides.
@Element1 waves and walks toward @Element2, who is sitting at a corner table.
6. Adjust video settings
Open Video Settings to adjust:
| Setting | Options | Default |
|---|---|---|
| Duration | 3s – 15s | 5s |
| Aspect Ratio | 16:9, 9:16, 1:1 | 16:9 |
| Generate Audio | On/Off | Off |
Audio generation adds native sound effects, dialogue, and ambient audio synchronized to the video. It increases cost by ~25%.
7. Generate
Click Generate Video. Kling O3 typically takes 4–6 minutes depending on the model tier and duration. You can queue multiple generations and browse results in the Video Gallery.
Multi-shot storyboarding
For narrative sequences, use multi-shot mode to define separate scenes within a single generation.
- In the prompt area, click Add Shot to create additional shots
- Write a separate prompt for each shot
- Set the duration for each shot (3–15s each, total ≤ 15s)
Elements and scene references persist across all shots automatically:
Shot 1 (5s): @Element1 stands at the edge of @Image1, looking out at the horizon.
Slow camera push forward.
Shot 2 (5s): Close-up of @Element1's face as they turn toward the camera.
Soft natural lighting, shallow depth of field.
Shot 3 (5s): @Element1 walks away from camera into the distance.
Wide cinematic shot, golden hour lighting.
Multi-shot total duration cannot exceed 15 seconds. For example, three 5-second shots = 15s maximum.
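The duration rules above can be expressed as a simple check; this helper is ours, for illustration only:

```python
MAX_TOTAL_SECONDS = 15

def check_shots(durations):
    """Validate per-shot (3-15s) and total (<=15s) durations for multi-shot mode."""
    if any(not 3 <= d <= 15 for d in durations):
        return False
    return sum(durations) <= MAX_TOTAL_SECONDS

print(check_shots([5, 5, 5]))   # True: three 5-second shots hit the 15s cap exactly
print(check_shots([5, 5, 10]))  # False: 20s total exceeds the cap
```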
Prompting tips
Structure your prompt
Follow this pattern for reliable results:
[subject with @Element tag] + [action] + [environment with @Image tag] + [camera movement] + [lighting/style]
Example:
@Element1 hops happily across the candy ground of @Image1, stops to look at a
giant lollipop, tilts its head curiously. Cinematic tracking shot, soft warm lighting.
Keep prompts 50–150 words
Shorter prompts lack detail. Longer prompts introduce contradictions. Aim for the sweet spot.
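If you build prompts programmatically, the word-count guideline is easy to enforce; this heuristic helper is ours, not part of any API:

```python
def in_sweet_spot(prompt, low=50, high=150):
    """Heuristic check against the 50-150 word prompt-length guideline."""
    return low <= len(prompt.split()) <= high

print(in_sweet_spot("a short test prompt"))  # False: well under 50 words
```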
Use simple camera language
The model responds best to straightforward camera directions:
| Use | Avoid |
|---|---|
| slow camera push forward | dolly zoom with rack focus transition |
| tracking shot from behind | complex handheld parallax movement |
| close-up | extreme macro with tilt-shift bokeh |
| wide cinematic shot | anamorphic ultra-wide establishing crane shot |
Use consistent vocabulary
If you describe a character wearing “a red jacket” in one prompt, don’t switch to “crimson coat” in the next. The model treats different words as different intent.
Place camera instructions early
Put the camera direction near the beginning of the prompt for more reliable results:
Cinematic tracking shot of @Element1 walking through @Image1, leaves
blowing in the wind, golden afternoon light.
Pricing
Reference to Video models use duration-based pricing:
| Model | Per second (no audio) | Per second (with audio) |
|---|---|---|
| Kling O3 Pro R2V | $0.112 | $0.140 |
| Kling O3 Standard R2V | $0.112 | $0.140 |
Example: A 10-second video with audio = 10 × $0.14 = **$1.40**
Use the Video Quote API for exact pricing before generation.
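For rough budgeting before calling the quote API, the per-second rates from the table above can be applied directly (helper and constant names are ours):

```python
# Per-second rates from the pricing table above (both R2V tiers share them)
RATE_PER_SECOND = {"no_audio": 0.112, "with_audio": 0.140}

def estimate_cost(seconds, audio=False):
    """Rough cost estimate in USD; use the Video Quote API for exact pricing."""
    rate = RATE_PER_SECOND["with_audio" if audio else "no_audio"]
    return round(seconds * rate, 2)

print(estimate_cost(10, audio=True))  # matches the $1.40 example above
print(estimate_cost(5))               # a 5-second silent clip
```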
API usage
Reference to Video is also available via the Venice API. See the Video Queue API for full details.
Python
import requests
response = requests.post(
    "https://api.venice.ai/api/v1/video/queue",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "kling-o3-pro-reference-to-video",
        "prompt": "@Element1 walks through @Image1, camera tracking from behind",
        "duration": "8",
        "aspect_ratio": "16:9",
        "audio": True,
        "elements": [
            {
                "frontal_image_url": "https://example.com/character-front.jpg",
                "reference_image_urls": [
                    "https://example.com/character-side.jpg",
                    "https://example.com/character-angle.jpg"
                ]
            }
        ],
        "image_urls": [
            "https://example.com/scene-background.jpg"
        ]
    }
)
queue_id = response.json()["id"]
Node.js
const response = await fetch("https://api.venice.ai/api/v1/video/queue", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "kling-o3-pro-reference-to-video",
    prompt: "@Element1 walks through @Image1, camera tracking from behind",
    duration: "8",
    aspect_ratio: "16:9",
    audio: true,
    elements: [
      {
        frontal_image_url: "https://example.com/character-front.jpg",
        reference_image_urls: [
          "https://example.com/character-side.jpg",
          "https://example.com/character-angle.jpg"
        ]
      }
    ],
    image_urls: [
      "https://example.com/scene-background.jpg"
    ]
  })
});
const { id: queueId } = await response.json();
cURL
curl https://api.venice.ai/api/v1/video/queue \
  -H "Authorization: Bearer $VENICE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kling-o3-pro-reference-to-video",
    "prompt": "@Element1 walks through @Image1, camera tracking from behind",
    "duration": "8",
    "aspect_ratio": "16:9",
    "audio": true,
    "elements": [
      {
        "frontal_image_url": "https://example.com/character-front.jpg",
        "reference_image_urls": [
          "https://example.com/character-side.jpg",
          "https://example.com/character-angle.jpg"
        ]
      }
    ],
    "image_urls": [
      "https://example.com/scene-background.jpg"
    ]
  }'
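Generation is asynchronous: the queue call returns an id that you poll until the video is ready. The sketch below assumes a GET status endpoint at the queue URL and status/video_url response fields — these are illustrative assumptions, not confirmed API details; see the Video Queue API reference for the actual contract:

```python
import time
import requests

def wait_for_video(queue_id, api_key, interval=15, timeout=600):
    """Poll until the queued generation completes.

    NOTE: the status URL and the 'status'/'video_url' fields are assumed
    here for illustration; consult the Video Queue API docs for the real ones.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        r = requests.get(
            f"https://api.venice.ai/api/v1/video/queue/{queue_id}",  # assumed path
            headers={"Authorization": f"Bearer {api_key}"},
        )
        job = r.json()
        if job.get("status") == "completed":
            return job.get("video_url")
        if job.get("status") == "failed":
            raise RuntimeError("generation failed")
        time.sleep(interval)
    raise TimeoutError("video not ready within timeout")
```

Kling O3 generations typically take 4–6 minutes, so a 15-second polling interval keeps request volume low without adding noticeable latency.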
Element schema
Each element in the elements array accepts:
| Field | Type | Required | Description |
|---|---|---|---|
| frontal_image_url | string | Yes | Clear front-facing image URL |
| reference_image_urls | string[] | No | Additional angle URLs (1–3). If omitted, the frontal image is used as the reference. |
The API also supports video_url for video-based elements, but this is not currently available in the Video Studio UI.
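A small builder can enforce the schema above before you assemble the request body; the helper name is ours, for illustration only:

```python
def make_element(frontal_image_url, reference_image_urls=None):
    """Build one entry for the 'elements' array, enforcing the schema above."""
    if not frontal_image_url:
        raise ValueError("frontal_image_url is required")
    refs = reference_image_urls or []
    if len(refs) > 3:
        raise ValueError("at most 3 reference images per element")
    element = {"frontal_image_url": frontal_image_url}
    if refs:
        element["reference_image_urls"] = refs
    return element

# With no reference images, the API falls back to the frontal image
print(make_element("https://example.com/front.jpg"))
```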
Troubleshooting
| Problem | Likely cause | Fix |
|---|---|---|
| Generate button is disabled | No visual inputs provided | Add at least one visual input: start frame, element, or scene reference image |
| "Number of images exceeds the limit" error | Too many combined inputs | Total of first frame + last frame + elements + scene images must be ≤ 7 |
| Character face changes between shots | Different or missing frontal image | Use the same frontal image consistently, keep description identical |
| Camera movement feels random | Multiple or conflicting camera instructions | Use a single camera instruction, place it early in the prompt |
| Style shifts between generations | Inconsistent scene references or mixed styles | Reuse the same scene images, keep style keywords consistent |
| Elements blend together in multi-character scenes | Vague spatial instructions | Be explicit about each element’s position: “foreground left”, “entering from right” |
| Background looks distorted | Cluttered or complex scene reference image | Use clean, high-quality scene reference images |
| Motion looks unnatural | Too many actions in one prompt | Simplify the action, use shorter duration, one action per shot |
Test with a 3–5 second clip before committing to longer durations. Shorter clips maintain better consistency and let you iterate faster.