Hailuo Video API

Hailuo API for developers.

Generate text-to-video clips and first-frame image-to-video tasks through ImaRouter's OpenAI-style /v1/videos endpoint. Use MiniMax Hailuo model variants with async polling, clear size rules, and production-ready task handling.


Creative Direction

Visual references for commercial API workflows

These stills come from external IMA creative assets and are used here as art direction reference for image-led or campaign-style motion workflows.

Luxury product still used as visual direction for a Hailuo prompt-led video workflow.

Prompt Direction

Luxury Atmosphere Frame

A premium commercial still like this is useful for Hailuo prompt writing because it forces the prompt to describe subject, lighting, surface detail, and camera behavior precisely.

Structured visual reference used as a first-frame style example for Hailuo image-to-video.

First Frame

Structured Motion Start

Hailuo 2.3 Fast supports first-frame image input. A clean opening frame like this helps when the product needs more deliberate motion than freeform text-only generation.

Models

Hailuo 02 / 2.3 / 2.3 Fast

Three documented MiniMax Hailuo variants exposed through the same /v1/videos task interface

Modes

T2V and first-frame I2V

Prompt-led generation plus first-frame image-guided video on the Fast variant

Duration

6s and 10s

Current docs show 6 or 10 second tasks, with some 1080p combinations capped at 6 seconds

Size

512p / 768p / 1080p

Available sizes depend on the chosen Hailuo model variant

Task flow

Submit + poll

Create a task on /v1/videos, then retrieve the result from /v1/videos/{task_id}

Input image

first_frame_image

MiniMax-Hailuo-2.3-Fast requires metadata.first_frame_image for image-guided generation
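The variant differences above come down to request shape. A minimal sketch of the two documented payload shapes, side by side; the field names follow the /v1/videos schema described on this page, and the image URL is a placeholder, not a real asset:

```javascript
// Prompt-led text-to-video task (MiniMax-Hailuo-02 or MiniMax-Hailuo-2.3).
const textToVideoTask = {
  model: "MiniMax-Hailuo-2.3",
  prompt: "Slow orbit around a product on reflective black stone, soft haze",
  duration: 6,
  size: "768p"
};

// First-frame image-guided task (Fast variant only): same endpoint,
// but metadata.first_frame_image is required.
const imageGuidedTask = {
  model: "MiniMax-Hailuo-2.3-Fast",
  prompt: "Preserve the opening composition, add gentle camera drift",
  duration: 6,
  size: "768p",
  metadata: {
    first_frame_image: "https://example.com/hero-frame.png" // placeholder URL
  }
};
```

Both objects are submitted to the same /v1/videos endpoint; only the model value and the metadata block differ.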

Available Endpoints

Start building with the Hailuo API

Multiple endpoints for text-to-video, image-to-video, fast preview flows, and async job retrieval. This section is laid out more like a product catalog than raw docs so users can scan what to use first.

New · Core

Endpoint

Text-to-Video Task

/v1/videos

Unified endpoint · Text-to-video · MiniMax-Hailuo-02 · MiniMax-Hailuo-2.3

Create a Hailuo text-to-video task by setting model to MiniMax-Hailuo-02 or MiniMax-Hailuo-2.3, then submitting prompt, duration, and size.

Best for: Use this for prompt-led concept generation, launch visuals, short-form ad tests, or any workflow that does not need an input image.

New

Endpoint

First-Frame Image-to-Video Task

/v1/videos

Unified endpoint · First-frame input · MiniMax-Hailuo-2.3-Fast · Image-guided

Create a Hailuo image-guided task with model set to MiniMax-Hailuo-2.3-Fast and pass the first frame through metadata.first_frame_image.

Best for: Cases where the opening composition already matters, such as product reveals, creator shots, or controlled motion from an approved hero frame.
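As a hedged sketch, a create call for this image-guided path might look like the following. The endpoint and field names mirror the docs on this page; the helper names and the error handling are illustrative assumptions:

```javascript
// Build the documented request body for the Fast image-guided path.
function buildFirstFrameBody(prompt, firstFrameUrl, duration = 6, size = "768p") {
  return {
    model: "MiniMax-Hailuo-2.3-Fast",
    prompt,
    duration,
    size,
    metadata: { first_frame_image: firstFrameUrl }
  };
}

// Submit the task; the returned payload carries the task id to poll later.
async function createFirstFrameTask(apiKey, prompt, firstFrameUrl) {
  const response = await fetch("https://api.imarouter.com/v1/videos", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify(buildFirstFrameBody(prompt, firstFrameUrl))
  });
  if (!response.ok) {
    throw new Error(`Task creation failed: ${response.status}`);
  }
  return response.json();
}
```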

New

Endpoint

Task Status

/v1/videos/{task_id}

Polling · Async task · Hosted output · Production flow

Poll a submitted Hailuo task until the render completes, fails, or returns the hosted output URL in the task payload.

Best for: Needed for production applications that queue tasks, surface progress, and persist the final output once the run completes.

Get started today

Ready to integrate Hailuo?

Try the API directly in the console, or reach out to the team for onboarding, pricing, and enterprise setup.

API Documentation

How to get access to Hailuo API

Hailuo in ImaRouter follows the documented OpenAI-style video task flow: submit to /v1/videos with the chosen MiniMax model, keep the returned task id, and then poll /v1/videos/{task_id} until the video is ready.

Selected endpoint

/v1/videos

The main differences between Hailuo variants are the allowed size matrix and whether the request needs metadata.first_frame_image. The polling pattern stays the same.

Use this for prompt-led concept generation, launch visuals, short-form ad tests, or any workflow that does not need an input image.

const apiKey = process.env.IMAROUTER_API_KEY;

async function createHailuoVideo() {
  const createResponse = await fetch("https://api.imarouter.com/v1/videos", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      model: "MiniMax-Hailuo-2.3",
      prompt: "Luxury skincare bottle on reflective black stone, soft haze, dramatic specular highlights, slow orbit camera, premium launch-film lighting",
      duration: 6,
      size: "768p"
    })
  });

  if (!createResponse.ok) {
    throw new Error(`Task creation failed: ${createResponse.status}`);
  }

  const task = await createResponse.json();
  const taskId = task.task_id ?? task.id;

  // Poll until the task completes or fails; cap attempts so a stuck task cannot loop forever.
  for (let attempt = 0; attempt < 100; attempt++) {
    await new Promise((resolve) => setTimeout(resolve, 3000));

    const statusResponse = await fetch(`https://api.imarouter.com/v1/videos/${taskId}`, {
      headers: {
        "Authorization": `Bearer ${apiKey}`
      }
    });

    const taskState = await statusResponse.json();

    if (taskState.status === "failed") {
      throw new Error(taskState.error ?? "Hailuo generation failed");
    }

    if (taskState.status === "completed") {
      return taskState.metadata?.url ?? taskState.video?.url ?? taskState.output?.[0]?.url;
    }
  }

  throw new Error("Hailuo task did not complete within the polling window");
}

Async flow

  1. Choose the model first: MiniMax-Hailuo-02, MiniMax-Hailuo-2.3, or MiniMax-Hailuo-2.3-Fast.

  2. Submit the task to /v1/videos with prompt, duration, size, and metadata.first_frame_image when the Fast image-guided path is needed.

  3. Store the returned task id in your backend or pass it back to the frontend for progress tracking.

  4. Poll /v1/videos/{task_id} until the task completes, then read the hosted output URL from the completed payload.

What Makes It Different

What makes the Hailuo API different

This section is laid out to read more like a product narrative than a feature list. Each row shows a capability, why it matters, and what that looks like in a real workflow.

Capability

OpenAI-style video task flow

Hailuo on ImaRouter uses the same /v1/videos create-and-poll pattern as the rest of the public media interface instead of exposing provider-native endpoint sprawl.

That keeps your backend integration simpler if the product later adds more video models, retries, or routing logic.

Example scenario

A video generation product adds Hailuo beside Seedance and Wan without having to rebuild its polling or task-state system.

Capability

Three model variants with distinct tradeoffs

The public docs expose MiniMax-Hailuo-02, MiniMax-Hailuo-2.3, and MiniMax-Hailuo-2.3-Fast rather than one monolithic Hailuo mode.

Developers can choose a lower-friction prompt-first path or a more constrained first-frame-guided path depending on the workflow.

Example scenario

A team uses Hailuo 2.3 for general prompt-led concepting and Hailuo 2.3 Fast only when the approved opening frame must stay intact.

Capability

First-frame image guidance on Fast

MiniMax-Hailuo-2.3-Fast supports a documented metadata.first_frame_image input, which is useful when freeform generation is too loose.

This gives product teams a cleaner way to anchor composition, subject position, and opening-shot identity without inventing a separate image-to-video API surface.

Example scenario

A brand content workflow starts from an approved product hero image, then turns it into motion while preserving the opening composition.

Capability

Clear duration and size constraints

The docs make the duration and size matrix explicit. That matters because some 1080p combinations are capped at 6 seconds and the Fast variant expects a narrower size range.

You can validate requests before submission and avoid building a UX that promises unsupported output combinations.

Example scenario

A frontend disables 10-second 1080p selections for the variants that do not support them instead of letting users submit doomed jobs.
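A client-side validator along those lines can be sketched as below. The size matrix and the blanket 1080p/10-second cap are assumptions reconstructed from the constraints described on this page (the docs say only some 1080p combinations are capped), so confirm the exact matrix against the live documentation before shipping:

```javascript
// Assumed per-variant size support, reconstructed from this page's
// constraint descriptions; verify against the live docs.
const SIZE_MATRIX = {
  "MiniMax-Hailuo-02": ["512p", "768p", "1080p"],
  "MiniMax-Hailuo-2.3": ["768p", "1080p"],
  "MiniMax-Hailuo-2.3-Fast": ["768p", "1080p"]
};

// Reject impossible requests before they reach the queue.
function validateRequest({ model, size, duration }) {
  const sizes = SIZE_MATRIX[model];
  if (!sizes) {
    return { ok: false, reason: `Unknown model: ${model}` };
  }
  if (!sizes.includes(size)) {
    return { ok: false, reason: `${model} does not support ${size}` };
  }
  if (duration !== 6 && duration !== 10) {
    return { ok: false, reason: "Duration must be 6 or 10 seconds" };
  }
  // Conservative over-approximation: block every 1080p 10-second combination.
  if (size === "1080p" && duration === 10) {
    return { ok: false, reason: "1080p runs are capped at 6 seconds" };
  }
  return { ok: true };
}
```

A frontend can call this on every form change and disable the submit button (with the reason as a tooltip) instead of letting doomed jobs reach the queue.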

Unified API Platform

Three model variants for different use cases

Pick the right balance of quality, speed, and cost for your workflow. The section stays data-driven, but the presentation is closer to a clean product comparison table.

| Feature | MiniMax-Hailuo-02 | MiniMax-Hailuo-2.3 (Recommended) | MiniMax-Hailuo-2.3-Fast |
| --- | --- | --- | --- |
| Best for | General prompt-first video generation | Current prompt-led generation | First-frame-guided image-to-video |
| Speed | Async task flow | Async task flow | Fast image-guided path |
| Quality | Balanced baseline option | Stronger current Hailuo variant for T2V | Constrained but practical for anchored opening frames |
| Cost | Entry tier | Mid-tier | Mid-tier to premium |
| Recommended use | Broadest basic text-to-video path, including 512p support | Newer Hailuo generation path for prompt-led video without first-frame input | When you need metadata.first_frame_image and tighter opening-frame control |
| API endpoints | /v1/videos | /v1/videos | /v1/videos, /v1/videos/{task_id} |

Use Cases

Industries using the Hailuo API

This section keeps the same reusable data model, but the presentation is closer to a grid of industry cards than to long narrative boxes.

Creative tools, growth teams, and AI video apps

Prompt-first concept generation

Turn text prompts into short ad concepts, launch shots, and social clips without attaching a source frame.

Hailuo 02 and Hailuo 2.3 both fit products that want a simple prompt-to-video starting point under one async API.

Brand teams and ecommerce creatives

First-frame-guided brand motion

Start from an approved hero image and generate motion while preserving the opening visual state more tightly than pure prompt generation.

Hailuo 2.3 Fast is the documented fit when first-frame guidance matters to the workflow.

Performance marketers and agencies

Short-form ad testing

Generate several short commercial directions with different prompts, pacing, and visual tone before picking the winning concept.

The 6-second and 10-second task model is practical for ad testing because it keeps outputs short and comparable.

Creator platforms and social apps

Creator-style portrait motion

Animate people, product intros, and editorial shots into cleaner motion while keeping the backend on one OpenAI-style task flow.

Hailuo is a good fit when the product wants simple submit-and-poll logic rather than many provider-native moving parts.

Platform teams and multimodel builders

Video model routing expansion

Add Hailuo as one option in a wider video stack without introducing a different job model from the rest of the site's media APIs.

ImaRouter's consistent task flow makes it easier to add or remove models while keeping the integration shape stable.

Internal tools and enterprise builders

Constraint-aware enterprise UX

Expose only the supported duration and size combinations for each model variant so operators are not sending impossible requests.

The documented Hailuo matrix is valuable because it lets teams validate requests before they hit the queue.

Examples

Hailuo API examples

Prompt directions paired with visual reference frames. Use them as inspiration for landing pages, creator tooling, commercial mockups, or API playground defaults.

Luxury product image used as style direction for a Hailuo prompt-first example.

Luxury launch film

Prompt-first commercial direction

A strong Hailuo prompt describes the product, lighting, surface behavior, and camera path directly instead of relying on vague cinematic buzzwords.

Luxury fragrance bottle on reflective black stone, soft haze, dramatic specular highlights, slow orbit camera, premium launch-film lighting

text-to-video · luxury · product film
Structured visual used as a first-frame reference example for Hailuo image-guided generation.

First-frame portrait motion

First-frame-guided motion

This is the kind of request where Hailuo 2.3 Fast is more useful than freeform prompt generation because the opening frame already matters.

Preserve the opening portrait composition, add soft wind-driven hair motion, gentle forward camera drift, and polished editorial pacing

first frame · portrait · editorial
Lifestyle scene used as inspiration for a Hailuo short-form motion example.

Lifestyle reveal shot

Grounded short-form motion

A useful direction for creator tools, lifestyle products, and onboarding clips where the result should feel more grounded than abstract.

Warm residential morning scene, natural walking pace, soft window light, candid body language, subtle handheld realism, creator-friendly framing

lifestyle · creator · social clip
Demo

Automotive night hero

Commercial motion test

Good for short commercial runs where the goal is a clearly art-directed ad concept rather than uncontrolled visual novelty.

Automotive hero film, wet asphalt reflections, controlled lateral camera drift, deep blue night palette, premium ad pacing, restrained lens flare

automotive · night · commercial

How To Use This API

How to use Hailuo API

This quick-start walkthrough is written to rank for integration-style searches while staying concise enough for busy developers and operators.

  1. Choose the Hailuo variant

    Pick MiniMax-Hailuo-02, MiniMax-Hailuo-2.3, or MiniMax-Hailuo-2.3-Fast based on whether the workflow is prompt-first or first-frame-guided.

  2. Validate size and duration

    Check the allowed duration and size matrix before submission, especially if the workflow needs 1080p or the Fast image-guided variant.

  3. Prepare the prompt or first frame

    Use a direct prompt for T2V or provide metadata.first_frame_image when the opening composition needs to be anchored.

  4. Submit the task

    Send the request to /v1/videos with model, prompt, duration, size, and metadata when the chosen Hailuo variant requires it.

  5. Poll and store the output

    Use the returned task id to poll /v1/videos/{task_id}, then download or persist the hosted output URL once the task completes.
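The poll-and-store step can be sketched with an attempt cap so a stuck task cannot spin forever. The status names and output-URL fields mirror the earlier example on this page and should be treated as assumptions to verify against the live docs:

```javascript
// Read the hosted output URL from a completed task payload.
// The candidate field names are assumptions mirrored from this page.
function extractOutputUrl(task) {
  return task.metadata?.url ?? task.video?.url ?? task.output?.[0]?.url ?? null;
}

// Poll /v1/videos/{task_id} until completion, failure, or the retry cap.
async function pollTask(apiKey, taskId, { intervalMs = 3000, maxAttempts = 100 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await fetch(`https://api.imarouter.com/v1/videos/${taskId}`, {
      headers: { "Authorization": `Bearer ${apiKey}` }
    });
    const task = await response.json();

    if (task.status === "completed") {
      return extractOutputUrl(task);
    }
    if (task.status === "failed") {
      throw new Error(task.error ?? "Hailuo generation failed");
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Task ${taskId} did not finish within ${maxAttempts} polls`);
}
```

Once `pollTask` resolves, download the file behind the URL or persist the URL itself, since hosted outputs may not be retained indefinitely.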

FAQ

Frequently asked questions about Hailuo API

FAQs stay compact and skimmable here. The content is still data-driven for SEO, but the layout is cleaner and less visually heavy.

What is Hailuo API?

Hailuo API is ImaRouter's OpenAI-style task interface for MiniMax Hailuo video generation, covering MiniMax-Hailuo-02, MiniMax-Hailuo-2.3, and MiniMax-Hailuo-2.3-Fast.

Does Hailuo support image-to-video?

Yes. The public docs expose MiniMax-Hailuo-2.3-Fast with metadata.first_frame_image for first-frame-guided video generation.

What endpoint does Hailuo use in ImaRouter?

Hailuo uses the same OpenAI-style video task endpoints as the rest of the public media interface: submit on /v1/videos and poll on /v1/videos/{task_id}.

What durations are supported?

The documented Hailuo task schema supports 6-second and 10-second runs, with some 1080p combinations limited to 6 seconds.

What sizes are supported?

MiniMax-Hailuo-02 supports 512p, 768p, and 1080p. MiniMax-Hailuo-2.3 and MiniMax-Hailuo-2.3-Fast are documented with 768p and 1080p support, with additional limits on some combinations.

Do I need a separate endpoint for the Fast image-guided mode?

No. The docs keep Hailuo on the same /v1/videos endpoint. The difference is the model value and the required metadata.first_frame_image input.

How do I get the final video URL?

Poll /v1/videos/{task_id} until the task reaches completed, then read the hosted output URL from the completed task payload.

Why use ImaRouter for Hailuo instead of wiring providers one by one?

ImaRouter combines model routing, five-modality coverage, transparent pricing, automatic failover, and faster new-model onboarding so teams do not have to integrate and monitor providers one by one.

Model Directory

Browse the full model market before you choose your route.

Use the `/models` catalog to scan providers, modalities, reasoning support, context windows, and pricing metadata from a local OpenRouter snapshot. It is the fastest way to compare what exists before you decide which models should be prioritized on ImaRouter.

Get Started

Add Hailuo to your product without building a custom video task layer

Use one OpenAI-style /v1/videos flow for MiniMax Hailuo prompt generation and first-frame-guided motion, then expand the same pattern across the rest of your routed video stack. Use one API surface for 200+ models across five modalities, with transparent routing, automatic failover, and fast new-model onboarding.