Reference to Video lets you lock in the appearance of characters, objects, and scenes so your AI-generated videos stay visually consistent. Instead of hoping the model interprets your prompt correctly, you provide visual anchors — reference images that tell the model exactly what your subject looks like. This feature is available on Kling O3 models in the Venice Video Studio.

When to use Reference to Video

Use Reference to Video when you need:
  • Character consistency — the same person or character across multiple shots
  • Product accuracy — a real product that must look identical to the original
  • Scene continuity — a specific environment or background across generations
  • Multi-character scenes — multiple distinct characters interacting without blending
For simple text-to-video or image-to-video where consistency isn’t critical, the standard Kling models work well without references.

Core concepts

Reference to Video uses three types of visual input that work together:
| Input | Required | Purpose | How to reference in prompt |
|---|---|---|---|
| Elements | At least one visual input* | Lock a character or object’s identity | @Element1, @Element2, etc. |
| Scene Reference Images | At least one visual input* | Set the environment, style, and mood | @Image1, @Image2, etc. |
| Start Frame | At least one visual input* | Control the first frame of the video | N/A (set via upload) |
| End Frame | No | Control the last frame of the video | N/A (set via upload) |

*At least one of: start frame, elements, or scene reference images is required.

Elements

An Element is a character or object you want to keep visually stable throughout the video. Each element consists of:
  • Frontal Image (required per element) — a clear, front-facing photo of the subject. This is the primary identity anchor. Think of it as the “passport photo” of your character or product.
  • Reference Images (1–3, optional) — additional angles of the same subject (side view, 45-degree angle, back). These help the model understand the subject in 3D space. If not provided, the frontal image is automatically used as the reference.
You can add up to 7 elements per generation (limited by combined total). Reference them in your prompt using @Element1, @Element2, etc.

Scene Reference Images

Scene references define the “stage” where the action takes place. They influence:
  • Lighting and color palette
  • Architecture and environment details
  • Overall visual style and mood
You can add up to 4 scene images. Reference them as @Image1, @Image2, etc. in your prompt.

Limitations

The total number of images across all input types is limited:
| Limit | Value |
|---|---|
| Minimum required | At least 1 visual input (start frame, element, or scene image) |
| Combined total (first frame + last frame + elements + scene images) | 7 maximum |
| Elements (without start/end frame) | 7 maximum |
| Elements (with start or end frame) | 3 maximum |
| Scene reference images | 4 maximum |
| Reference images per element | 1–3 |
Example scenarios:
  • 7 elements + 0 scene images = 7 ✓ (no frames)
  • 5 elements + 2 scene images = 7 ✓ (no frames)
  • First frame (1) + 3 elements + 3 scene images = 7 ✓
  • First frame (1) + last frame (1) + 3 elements + 2 scene images = 7 ✓
  • First frame (1) + 4 elements = ✗ (max 3 elements with frame)
  • First frame (1) + last frame (1) + 4 elements = ✗ (max 3 elements with frames)
Each element requires a frontal image. If you don’t provide reference images for an element, the frontal image is automatically used as the reference.
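These limits are easy to check client-side before submitting a generation. The following is a minimal illustrative sketch of the rules above (the function name and return shape are our own, not part of any Venice SDK):

```python
def validate_visual_inputs(num_elements, num_scene_images,
                           has_start_frame=False, has_end_frame=False):
    """Check a Reference to Video input combination against the documented
    limits. Illustrative helper only. Returns (ok, reason)."""
    num_frames = int(has_start_frame) + int(has_end_frame)
    total = num_frames + num_elements + num_scene_images
    if total == 0:
        return False, "at least one visual input is required"
    if total > 7:
        return False, "combined total exceeds 7"
    if num_frames > 0 and num_elements > 3:
        return False, "max 3 elements when using a start or end frame"
    if num_elements > 7:
        return False, "max 7 elements"
    if num_scene_images > 4:
        return False, "max 4 scene reference images"
    return True, "ok"
```

This reproduces the example scenarios above: 7 elements with no frames passes, while a start frame plus 4 elements fails the 3-element rule even though the combined total is under 7.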

Multi-shot mode

Multi-shot lets you break a single generation into multiple scenes, each with its own prompt and duration. Elements and scene references carry across all shots, maintaining consistency. The total duration across all shots cannot exceed 15 seconds.

Step-by-step guide (Video Studio)

1. Open Video Studio and select the model

Go to venice.ai/video. In the Model Browser on the left, select one of the Kling O3 Reference to Video models:
  • Kling O3 Pro R2V — higher quality, longer generation time (~6 min)
  • Kling O3 Standard R2V — faster, more cost-effective for iteration

2. Add Visual Inputs (at least one required)

You must provide at least one visual input to generate a video: a start frame, an element, or a scene reference image. In the Input Panel, you’ll see the Elements section. Click Add Element to create an element for characters or objects you want to keep visually consistent. For each element:
  1. Click the Frontal tile to upload a clear, front-facing image of your character or object
  2. Optionally click Add under Reference Images to upload additional angles (1–3)
Repeat for additional characters or objects (up to 7 elements total, or 3 if using start/end frames).
The combined total of first frame, last frame, elements, and scene images cannot exceed 7. See Limitations for details.
Best reference images: Use well-lit photos with a clean background. Provide front, side, and 45-degree angle views for the strongest identity lock. Make sure all reference images share the same visual style (don’t mix photorealistic and anime).

3. Add Scene Reference Images (optional)

Below the Elements section, you’ll see Scene Reference Images. Upload images that define the environment you want — a specific location, lighting setup, or art style. These are tagged automatically as @Image1, @Image2, etc.

4. Upload a Start Frame (optional)

If you want to control the exact first frame of your video, switch to the Image input type and upload a start frame. You can also optionally set an end frame.

5. Write your prompt

In the prompt field, describe the action you want while referencing your elements and scene images using the @ tags:
@Element1 walks through the streets of @Image1, looking up at the buildings.
The camera slowly tracks from behind, revealing the city skyline.
For multi-character scenes:
@Element1 and @Element2 enter the cafe in @Image1 from opposite sides.
@Element1 waves and walks toward @Element2, who is sitting at a corner table.

6. Configure settings

Open Video Settings to adjust:
| Setting | Options | Default |
|---|---|---|
| Duration | 3s – 15s | 5s |
| Aspect Ratio | 16:9, 9:16, 1:1 | 16:9 |
| Generate Audio | On/Off | Off |
Audio generation adds native sound effects, dialogue, and ambient audio synchronized to the video. It increases cost by ~25%.

7. Generate

Click Generate Video. Kling O3 typically takes 4–6 minutes depending on the model tier and duration. You can queue multiple generations and browse results in the Video Gallery.

Multi-shot storyboarding

For narrative sequences, use multi-shot mode to define separate scenes within a single generation.
  1. In the prompt area, click Add Shot to create additional shots
  2. Write a separate prompt for each shot
  3. Set the duration for each shot (3–15s each, total ≤ 15s)
Elements and scene references persist across all shots automatically:
Shot 1 (5s): @Element1 stands at the edge of @Image1, looking out at the horizon.
Slow camera push forward.

Shot 2 (5s): Close-up of @Element1's face as they turn toward the camera.
Soft natural lighting, shallow depth of field.

Shot 3 (5s): @Element1 walks away from camera into the distance.
Wide cinematic shot, golden hour lighting.
Multi-shot total duration cannot exceed 15 seconds. For example, three 5-second shots = 15s maximum.
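The per-shot and total duration constraints can be sketched as a small check (illustrative only; the function name is our own):

```python
def validate_shots(durations):
    """Check multi-shot durations: each shot 3-15 seconds, total <= 15
    seconds. Illustrative helper only. Returns (ok, reason)."""
    if not durations:
        return False, "at least one shot is required"
    if any(d < 3 or d > 15 for d in durations):
        return False, "each shot must be 3-15 seconds"
    if sum(durations) > 15:
        return False, "total duration exceeds 15 seconds"
    return True, "ok"
```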

Prompting tips

Structure your prompt

Follow this pattern for reliable results:
[subject with @Element tag] + [action] + [environment with @Image tag] + [camera movement] + [lighting/style]
Example:
@Element1 hops happily across the candy ground of @Image1, stops to look at a
giant lollipop, tilts its head curiously. Cinematic tracking shot, soft warm lighting.
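If you generate prompts programmatically, the pattern above maps naturally to a small template helper (a sketch of our own, not an official utility; the @ tags are supplied by the caller):

```python
def build_prompt(subject, action, environment, camera, style):
    """Assemble a prompt following the documented pattern:
    [subject] + [action] + [environment] + [camera] + [lighting/style].
    Illustrative helper only."""
    return f"{subject} {action} {environment}. {camera}, {style}."

prompt = build_prompt(
    subject="@Element1",
    action="hops happily across the candy ground of",
    environment="@Image1",
    camera="Cinematic tracking shot",
    style="soft warm lighting",
)
```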

Keep prompts 50–150 words

Shorter prompts lack detail. Longer prompts introduce contradictions. Aim for the sweet spot.

Use simple camera language

The model responds best to straightforward camera directions:
| Use | Avoid |
|---|---|
| slow camera push forward | dolly zoom with rack focus transition |
| tracking shot from behind | complex handheld parallax movement |
| close-up | extreme macro with tilt-shift bokeh |
| wide cinematic shot | anamorphic ultra-wide establishing crane shot |

Use consistent vocabulary

If you describe a character wearing “a red jacket” in one prompt, don’t switch to “crimson coat” in the next. The model treats different words as different intent.

Place camera instructions early

Put the camera direction near the beginning of the prompt for more reliable results:
Cinematic tracking shot of @Element1 walking through @Image1, leaves
blowing in the wind, golden afternoon light.

Pricing

Reference to Video models use duration-based pricing:
| Model | Per second (no audio) | Per second (with audio) |
|---|---|---|
| Kling O3 Pro R2V | $0.112 | $0.140 |
| Kling O3 Standard R2V | $0.112 | $0.140 |
Example: a 10-second video with audio costs 10 × $0.14 = $1.40. Use the Video Quote API for exact pricing before generation.
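The duration-based pricing above reduces to a simple multiplication. A minimal sketch (the model keys here are placeholders of our own, not official API identifiers; always confirm with the Video Quote API):

```python
# Per-second rates in USD, from the pricing table above.
# Keyed by (model, audio_enabled); model names are illustrative placeholders.
PRICE_PER_SECOND = {
    ("kling-o3-pro-r2v", False): 0.112,
    ("kling-o3-pro-r2v", True): 0.140,
    ("kling-o3-standard-r2v", False): 0.112,
    ("kling-o3-standard-r2v", True): 0.140,
}

def estimate_cost(model, duration_seconds, audio=False):
    """Rough client-side estimate. Use the Video Quote API for exact pricing."""
    return round(PRICE_PER_SECOND[(model, audio)] * duration_seconds, 3)
```

Note that enabling audio ($0.140 vs. $0.112 per second) matches the ~25% surcharge mentioned in the settings step.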

API usage

Reference to Video is also available via the Venice API. See the Video Queue API for full details.

Python

import requests

# Queue a Reference to Video generation. @Element1 in the prompt maps to
# elements[0]; @Image1 maps to image_urls[0].
response = requests.post(
    "https://api.venice.ai/api/v1/video/queue",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "kling-o3-pro-reference-to-video",
        "prompt": "@Element1 walks through @Image1, camera tracking from behind",
        "duration": "8",
        "aspect_ratio": "16:9",
        "audio": True,
        "elements": [
            {
                # Required identity anchor for @Element1
                "frontal_image_url": "https://example.com/character-front.jpg",
                # Optional additional angles (1-3)
                "reference_image_urls": [
                    "https://example.com/character-side.jpg",
                    "https://example.com/character-angle.jpg"
                ]
            }
        ],
        # Scene reference images, referenced as @Image1, @Image2, etc.
        "image_urls": [
            "https://example.com/scene-background.jpg"
        ]
    }
)

# Keep the queue ID to retrieve the finished video later
queue_id = response.json()["id"]

Node.js

const response = await fetch("https://api.venice.ai/api/v1/video/queue", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "kling-o3-pro-reference-to-video",
    prompt: "@Element1 walks through @Image1, camera tracking from behind",
    duration: "8",
    aspect_ratio: "16:9",
    audio: true,
    elements: [
      {
        frontal_image_url: "https://example.com/character-front.jpg",
        reference_image_urls: [
          "https://example.com/character-side.jpg",
          "https://example.com/character-angle.jpg"
        ]
      }
    ],
    image_urls: [
      "https://example.com/scene-background.jpg"
    ]
  })
});

const { id: queueId } = await response.json();

cURL

curl https://api.venice.ai/api/v1/video/queue \
  -H "Authorization: Bearer $VENICE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kling-o3-pro-reference-to-video",
    "prompt": "@Element1 walks through @Image1, camera tracking from behind",
    "duration": "8",
    "aspect_ratio": "16:9",
    "audio": true,
    "elements": [
      {
        "frontal_image_url": "https://example.com/character-front.jpg",
        "reference_image_urls": [
          "https://example.com/character-side.jpg",
          "https://example.com/character-angle.jpg"
        ]
      }
    ],
    "image_urls": [
      "https://example.com/scene-background.jpg"
    ]
  }'

Element schema

Each element in the elements array accepts:
| Field | Type | Required | Description |
|---|---|---|---|
| frontal_image_url | string | Yes | Clear front-facing image URL |
| reference_image_urls | string[] | No | Additional angle URLs (1–3). If omitted, the frontal image is used as the reference. |
The API also supports video_url for video-based elements, but this is not currently available in the Video Studio UI.
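Since `reference_image_urls` is optional and falls back to the frontal image, a payload builder can simply omit the field when no extra angles are provided. A small illustrative helper (our own sketch, not an official SDK):

```python
def make_element(frontal_image_url, reference_image_urls=None):
    """Build one entry for the `elements` array. Illustrative helper only.
    When no reference images are given, the field is omitted and the API
    falls back to the frontal image."""
    if reference_image_urls is not None and not 1 <= len(reference_image_urls) <= 3:
        raise ValueError("reference_image_urls must contain 1-3 URLs")
    element = {"frontal_image_url": frontal_image_url}
    if reference_image_urls:
        element["reference_image_urls"] = list(reference_image_urls)
    return element
```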

Troubleshooting

| Problem | Likely cause | Fix |
|---|---|---|
| Generate button is disabled | No visual inputs provided | Add at least one visual input: start frame, element, or scene reference image |
| “Number of images exceeds the limit” error | Too many combined inputs | Total of first frame + last frame + elements + scene images must be ≤ 7 |
| Character face changes between shots | Different or missing frontal image | Use the same frontal image consistently, keep description identical |
| Camera movement feels random | Multiple or conflicting camera instructions | Use a single camera instruction, place it early in the prompt |
| Style shifts between generations | Inconsistent scene references or mixed styles | Reuse the same scene images, keep style keywords consistent |
| Elements blend together in multi-character scenes | Vague spatial instructions | Be explicit about each element’s position: “foreground left”, “entering from right” |
| Background looks distorted | Cluttered or complex scene reference image | Use clean, high-quality scene reference images |
| Motion looks unnatural | Too many actions in one prompt | Simplify the action, use shorter duration, one action per shot |
Test with a 3–5 second clip before committing to longer durations. Shorter clips maintain better consistency and let you iterate faster.