
Models

Explore the active model market, rendered from a local OpenRouter snapshot.

This page reads from a local JSON snapshot synced from OpenRouter, so the catalog stays fast, indexable, and stable. Use it to browse current model coverage by provider, modality, reasoning support, context window, and pricing metadata.
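The snapshot-driven browsing described above can be sketched as a simple filter over a list of records. This is a minimal illustration under an assumed schema: the field names (`slug`, `context`, `modalities`, `prompt_price`) are hypothetical stand-ins, not the actual snapshot format.

```python
# Sample records mimicking a local model-snapshot file (assumed shape).
snapshot = [
    {"slug": "inflection/inflection-3-pi", "context": 8_000,
     "modalities": ["text"], "prompt_price": 2.5},
    {"slug": "google/gemini-flash-1.5-8b", "context": 1_000_000,
     "modalities": ["text", "image"], "prompt_price": None},
]

def filter_models(models, modality=None, min_context=0):
    """Return models matching a modality and a minimum context window."""
    return [
        m for m in models
        if m["context"] >= min_context
        and (modality is None or modality in m["modalities"])
    ]

# e.g. image-capable models with at least a 100K-token context window
image_models = filter_models(snapshot, modality="image", min_context=100_000)
```

In a real sync, `snapshot` would come from `json.load` over the local file; the filtering logic stays the same.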


Results

Showing 48 of 683 matching models

Snapshot source: OpenRouter. Synced April 21, 2026 at 8:00 AM. Page 8 of 15.

This route is built from local JSON so the catalog stays stable for browsing and SEO. If you need a specific model on ImaRouter, treat this page as a discovery reference and then contact the team for availability.

Text

Inflection

Inflection: Inflection 3 Pi

Inflection 3 Pi powers Inflection's [Pi](https://pi.ai) chatbot, including backstory, emotional intelligence, productivity, and safety. It has access to recent news, and excels in scenarios like customer support and roleplay. Pi has been trained to mirror your tone and style: if you use more emojis, so will Pi! Try experimenting with various prompts and conversation styles.

Text

Context

8K

Group

Other

Pricing preview

Input Price: $2.5 /M tokens

Output Price: $10 /M tokens

Slug

inflection/inflection-3-pi
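The "$X /M tokens" figures in each pricing preview convert to a per-request cost estimate with simple arithmetic. A minimal sketch, using the Inflection 3 Pi prices above ($2.5/M input, $10/M output):

```python
def request_cost(input_tokens, output_tokens,
                 input_price_per_m, output_price_per_m):
    """Cost in dollars for one request, given per-million-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# A 2,000-token prompt with a 500-token reply:
cost = request_cost(2_000, 500, 2.5, 10)  # $0.005 in + $0.005 out = $0.01
```

The same function applies to any entry on this page that publishes display pricing.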

Text

Inflection

Inflection: Inflection 3 Productivity

Inflection 3 Productivity is optimized for following instructions. It is better for tasks requiring JSON output or precise adherence to provided guidelines. It has access to recent news. For emotional intelligence similar to Pi, see [Inflection 3 Pi](/inflection/inflection-3-pi). See [Inflection's announcement](https://inflection.ai/blog/enterprise) for more details.

Text

Context

8K

Group

Other

Pricing preview

Input Price: $2.5 /M tokens

Output Price: $10 /M tokens

Slug

inflection/inflection-3-productivity

Text

Unknown provider

Google: Gemini 1.5 Flash 8B

Gemini Flash 1.5 8B is optimized for speed and efficiency, offering enhanced performance in small prompt tasks like chat, transcription, and translation. With reduced latency, it is highly effective for real-time and large-scale operations. This model focuses on cost-effective solutions while maintaining high-quality results. [Click here to learn more about this model](https://developers.googleblog.com/en/gemini-15-flash-8b-is-now-generally-available-for-use/). Usage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).

Text, Image

Context

1M

Group

Gemini

Pricing preview

No display pricing published in the current snapshot.

Slug

google/gemini-flash-1.5-8b

Text

NextBit

TheDrummer: Rocinante 12B

Rocinante 12B is designed for engaging storytelling and rich prose. Early testers have reported:

- Expanded vocabulary with unique and expressive word choices
- Enhanced creativity for vivid narratives
- Adventure-filled and captivating stories

Text

Context

32.8K

Group

Qwen

Pricing preview

Input Price: $0.17 /M tokens

Output Price: $0.43 /M tokens

Slug

thedrummer/rocinante-12b

Text

Unknown provider

EVA Qwen2.5 14B

A model specializing in RP and creative writing, based on Qwen2.5-14B and fine-tuned with a mixture of synthetic and natural data. It is trained on 1.5M tokens of role-play data and fine-tuned on 1.5M tokens of synthetic data.

Text

Context

32.8K

Group

Qwen

Pricing preview

No display pricing published in the current snapshot.

Slug

eva-unit-01/eva-qwen-2.5-14b

Text

Unknown provider

Liquid: LFM 40B MoE

Liquid's 40.3B Mixture of Experts (MoE) model. Liquid Foundation Models (LFMs) are large neural networks built with computational units rooted in dynamic systems. LFMs are general-purpose AI models that can be used to model any kind of sequential data, including video, audio, text, time series, and signals. See the [launch announcement](https://www.liquid.ai/liquid-foundation-models) for benchmarks and more info.

Text

Context

32.8K

Group

Other

Pricing preview

No display pricing published in the current snapshot.

Slug

liquid/lfm-40b

Text

Unknown provider

Magnum v2 72B

From the maker of [Goliath](https://openrouter.ai/models/alpindale/goliath-120b), Magnum 72B is the seventh in a family of models designed to achieve the prose quality of the Claude 3 models, notably Opus & Sonnet. The model is based on [Qwen2 72B](https://openrouter.ai/models/qwen/qwen-2-72b-instruct) and trained with 55 million tokens of highly curated roleplay (RP) data.

Text

Context

32.8K

Group

Qwen

Pricing preview

No display pricing published in the current snapshot.

Slug

anthracite-org/magnum-v2-72b

Text

Unknown provider

Meta: Llama 3.2 90B Vision Instruct

The Llama 90B Vision model is a top-tier, 90-billion-parameter multimodal model designed for the most challenging visual reasoning and language tasks. It offers unparalleled accuracy in image captioning, visual question answering, and advanced image-text comprehension. Pre-trained on vast multimodal datasets and fine-tuned with human feedback, the Llama 90B Vision is engineered to handle the most demanding image-based AI tasks. This model is perfect for industries requiring cutting-edge multimodal AI capabilities, particularly those dealing with complex, real-time visual and textual analysis. Click here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md). Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

Text, Image

Context

131.1K

Group

Llama3

Pricing preview

No display pricing published in the current snapshot.

Slug

meta-llama/llama-3.2-90b-vision-instruct

Text

Cloudflare

Meta: Llama 3.2 1B Instruct

Llama 3.2 1B is a 1-billion-parameter language model focused on efficiently performing natural language tasks, such as summarization, dialogue, and multilingual text analysis. Its smaller size allows it to operate efficiently in low-resource environments while maintaining strong task performance. Supporting eight core languages and fine-tunable for more, Llama 3.2 1B is ideal for businesses or developers seeking lightweight yet powerful AI solutions that can operate in diverse multilingual settings without the high computational demand of larger models. Click here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md). Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

Text

Context

60K

Group

Llama3

Pricing preview

Input Price: $0.027 /M tokens

Output Price: $0.2 /M tokens

Slug

meta-llama/llama-3.2-1b-instruct

Text

DeepInfra

Meta: Llama 3.2 11B Vision Instruct

Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks combining visual and textual data. It excels in tasks such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a massive dataset of image-text pairs, it performs well in complex, high-accuracy image analysis. Its ability to integrate visual understanding with language processing makes it an ideal solution for industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven customer service, and research. Click here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md). Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

Text, Image

Context

131.1K

Group

Llama3

Pricing preview

Input Price: $0.245 /M tokens

Output Price: $0.245 /M tokens

Slug

meta-llama/llama-3.2-11b-vision-instruct

Text

Venice

Meta: Llama 3.2 3B Instruct (free)

Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with the latest transformer architecture, it supports eight languages, including English, Spanish, and Hindi, and is adaptable for additional languages. Trained on 9 trillion tokens, the Llama 3.2 3B model excels in instruction-following, complex reasoning, and tool use. Its balanced performance makes it ideal for applications needing accuracy and efficiency in text generation across multilingual settings. Click here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md). Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

Text

Context

131.1K

Group

Llama3

Pricing preview

Input Price: $0 /M tokens

Output Price: $0 /M tokens

Slug

meta-llama/llama-3.2-3b-instruct

Text

Cloudflare

Meta: Llama 3.2 3B Instruct

Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with the latest transformer architecture, it supports eight languages, including English, Spanish, and Hindi, and is adaptable for additional languages. Trained on 9 trillion tokens, the Llama 3.2 3B model excels in instruction-following, complex reasoning, and tool use. Its balanced performance makes it ideal for applications needing accuracy and efficiency in text generation across multilingual settings. Click here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md). Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

Text

Context

80K

Group

Llama3

Pricing preview

Input Price: $0.051 /M tokens

Output Price: $0.34 /M tokens

Slug

meta-llama/llama-3.2-3b-instruct

Text

DeepInfra

Qwen2.5 72B Instruct

Qwen2.5 72B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.
- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.
- Long-context support up to 128K tokens, with generation of up to 8K tokens.
- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

Usage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).

Text

Context

32.8K

Group

Qwen

Pricing preview

Input Price: $0.12 /M tokens

Output Price: $0.39 /M tokens

Slug

qwen/qwen-2.5-72b-instruct

Text

Unknown provider

NeverSleep: Lumimaid v0.2 8B

Lumimaid v0.2 8B is a finetune of [Llama 3.1 8B](/models/meta-llama/llama-3.1-8b-instruct) with a "HUGE step up dataset wise" compared to Lumimaid v0.1. Sloppy chats output were purged. Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).

Text

Context

131.1K

Group

Llama3

Pricing preview

No display pricing published in the current snapshot.

Slug

neversleep/llama-3.1-lumimaid-8b

Text, Reasoning

Unknown provider

OpenAI: o1-mini

The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1). Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.

Text

Context

128K

Group

GPT

Pricing preview

No display pricing published in the current snapshot.

Slug

openai/o1-mini

Text

Unknown provider

OpenAI: o1-mini (2024-09-12)

The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1). Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.

Text

Context

128K

Group

GPT

Pricing preview

No display pricing published in the current snapshot.

Slug

openai/o1-mini-2024-09-12

Text

Unknown provider

OpenAI: o1-preview

The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1). Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.

Text

Context

128K

Group

GPT

Pricing preview

No display pricing published in the current snapshot.

Slug

openai/o1-preview

Text

Unknown provider

OpenAI: o1-preview (2024-09-12)

The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1). Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.

Text

Context

128K

Group

GPT

Pricing preview

No display pricing published in the current snapshot.

Slug

openai/o1-preview-2024-09-12

Text

Unknown provider

Mistral: Pixtral 12B

The first multi-modal, text+image-to-text model from Mistral AI. Its weights were [launched via torrent](https://x.com/mistralai/status/1833758285167722836).

Text, Image

Context

4.1K

Group

Mistral

Pricing preview

No display pricing published in the current snapshot.

Slug

mistralai/pixtral-12b

Text

Unknown provider

Reflection 70B

Reflection Llama-3.1 70B is trained with a new technique called Reflection-Tuning that teaches an LLM to detect mistakes in its reasoning and correct course. The model was trained on synthetic data.

Text

Context

131.1K

Group

Llama3

Pricing preview

No display pricing published in the current snapshot.

Slug

mattshumer/reflection-70b

Text

Cohere

Cohere: Command R+ (08-2024)

command-r-plus-08-2024 is an update of the [Command R+](/models/cohere/command-r-plus) with roughly 50% higher throughput and 25% lower latencies as compared to the previous Command R+ version, while keeping the hardware footprint the same. Read the launch post [here](https://docs.cohere.com/changelog/command-gets-refreshed). Use of this model is subject to Cohere's [Usage Policy](https://docs.cohere.com/docs/usage-policy) and [SaaS Agreement](https://cohere.com/saas-agreement).

Text

Context

128K

Group

Cohere

Pricing preview

Input Price: $2.5 /M tokens

Output Price: $10 /M tokens

Slug

cohere/command-r-plus-08-2024

Text

Cohere

Cohere: Command R (08-2024)

command-r-08-2024 is an update of the [Command R](/models/cohere/command-r) with improved performance for multilingual retrieval-augmented generation (RAG) and tool use. More broadly, it is better at math, code and reasoning and is competitive with the previous version of the larger Command R+ model. Read the launch post [here](https://docs.cohere.com/changelog/command-gets-refreshed). Use of this model is subject to Cohere's [Usage Policy](https://docs.cohere.com/docs/usage-policy) and [SaaS Agreement](https://cohere.com/saas-agreement).

Text

Context

128K

Group

Cohere

Pricing preview

Input Price: $0.15 /M tokens

Output Price: $0.6 /M tokens

Slug

cohere/command-r-08-2024

Text

DeepInfra

Sao10K: Llama 3.1 Euryale 70B v2.2

Euryale L3.1 70B v2.2 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k). It is the successor of [Euryale L3 70B v2.1](/models/sao10k/l3-euryale-70b).

Text

Context

131.1K

Group

Llama3

Pricing preview

Input Price: $0.85 /M tokens

Output Price: $0.85 /M tokens

Slug

sao10k/l3.1-euryale-70b

Text

Unknown provider

Qwen: Qwen2.5-VL 7B Instruct

Qwen2.5 VL 7B is a multimodal LLM from the Qwen Team with the following key enhancements:

- SoTA understanding of images of various resolution & ratio: Qwen2.5-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
- Understanding videos of 20min+: Qwen2.5-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
- Agent that can operate your mobiles, robots, etc.: with the abilities of complex reasoning and decision making, Qwen2.5-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
- Multilingual support: to serve global users, besides English and Chinese, Qwen2.5-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.

For more details, see this [blog post](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub repo](https://github.com/QwenLM/Qwen2-VL). Usage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).

Text, Image

Context

32.8K

Group

Qwen

Pricing preview

No display pricing published in the current snapshot.

Slug

qwen/qwen-2.5-vl-7b-instruct

Text

Unknown provider

Google: Gemini 1.5 Flash Experimental

Gemini 1.5 Flash Experimental is an experimental version of the [Gemini 1.5 Flash](/models/google/gemini-flash-1.5) model. Usage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms). #multimodal Note: This model is experimental and not suited for production use-cases. It may be removed or redirected to another model in the future.

Text, Image

Context

1M

Group

Gemini

Pricing preview

No display pricing published in the current snapshot.

Slug

google/gemini-flash-1.5-exp

Text

Unknown provider

Lynn: Llama 3 Soliloquy 7B v3 32K

Soliloquy v3 is a highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 2 billion tokens of roleplaying data, Soliloquy v3 boasts a vast knowledge base and rich literary expression, supporting up to 32k context length. It outperforms existing models of comparable size, delivering enhanced roleplaying capabilities. Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).

Text

Context

32.8K

Group

Llama3

Pricing preview

No display pricing published in the current snapshot.

Slug

lynn/soliloquy-v3

Text

Unknown provider

Yi 1.5 34B Chat

The Yi series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). This is an upgraded version of the Yi 34B model: it is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.

Text

Context

4.1K

Group

Yi

Pricing preview

No display pricing published in the current snapshot.

Slug

01-ai/yi-1.5-34b-chat

Text

Unknown provider

AI21: Jamba 1.5 Mini

Jamba 1.5 Mini is the world's first production-grade Mamba-based model, combining SSM and Transformer architectures for a 256K context window and high efficiency. It works with 9 languages and can handle various writing and analysis tasks as well as or better than similar small models. This model uses less memory and processes longer texts faster than previous designs. Read their [announcement](https://www.ai21.com/blog/announcing-jamba-model-family) to learn more.

Text

Context

256K

Group

Other

Pricing preview

No display pricing published in the current snapshot.

Slug

ai21/jamba-1-5-mini

Text

Unknown provider

AI21: Jamba 1.5 Large

Jamba 1.5 Large is part of AI21's new family of open models, offering superior speed, efficiency, and quality. It features a 256K effective context window, the longest among open models, enabling improved performance on tasks like document summarization and analysis. Built on a novel SSM-Transformer architecture, it outperforms larger models like Llama 3.1 70B on benchmarks while maintaining resource efficiency. Read their [announcement](https://www.ai21.com/blog/announcing-jamba-model-family) to learn more.

Text

Context

256K

Group

Other

Pricing preview

No display pricing published in the current snapshot.

Slug

ai21/jamba-1-5-large

Text

Unknown provider

Microsoft: Phi-3.5 Mini 128K Instruct

Phi-3.5 models are lightweight, state-of-the-art open models. These models were trained with Phi-3 datasets that include both synthetic data and filtered, publicly available website data, with a focus on high-quality and reasoning-dense properties. Phi-3.5 Mini uses 3.8B parameters and is a dense decoder-only transformer model using the same tokenizer as [Phi-3 Mini](/models/microsoft/phi-3-mini-128k-instruct). The models underwent a rigorous enhancement process, incorporating supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context, and logical reasoning, Phi-3.5 models showcased robust and state-of-the-art performance among models with fewer than 13 billion parameters.

Text

Context

128K

Group

Other

Pricing preview

No display pricing published in the current snapshot.

Slug

microsoft/phi-3.5-mini-128k-instruct

Text

DeepInfra

Nous: Hermes 3 70B Instruct

Hermes 3 is a generalist language model with many improvements over [Hermes 2](/models/nousresearch/nous-hermes-2-mistral-7b-dpo), including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. Hermes 3 70B is a competitive, if not superior finetune of the [Llama-3.1 70B foundation model](/models/meta-llama/llama-3.1-70b-instruct), focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.

Text

Context

131.1K

Group

Llama3

Pricing preview

Input Price: $0.3 /M tokens

Output Price: $0.3 /M tokens

Slug

nousresearch/hermes-3-llama-3.1-70b

Text

Venice (Beta)

Nous: Hermes 3 405B Instruct (free)

Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. Hermes 3 405B is a frontier-level, full-parameter finetune of the Llama-3.1 405B foundation model, focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills. Hermes 3 is competitive, if not superior, to Llama-3.1 Instruct models at general capabilities, with varying strengths and weaknesses attributable between the two.

Text

Context

131.1K

Group

Llama3

Pricing preview

Input Price: $0 /M tokens

Output Price: $0 /M tokens

Slug

nousresearch/hermes-3-llama-3.1-405b

Text

DeepInfra

Nous: Hermes 3 405B Instruct

Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. Hermes 3 405B is a frontier-level, full-parameter finetune of the Llama-3.1 405B foundation model, focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills. Hermes 3 is competitive, if not superior, to Llama-3.1 Instruct models at general capabilities, with varying strengths and weaknesses attributable between the two.

Text

Context

131.1K

Group

Llama3

Pricing preview

Input Price: $1 /M tokens

Output Price: $1 /M tokens

Slug

nousresearch/hermes-3-llama-3.1-405b

Text

Unknown provider

OpenAI: ChatGPT-4o

OpenAI ChatGPT 4o is continually updated by OpenAI to point to the current version of GPT-4o used by ChatGPT. It therefore differs slightly from the API version of [GPT-4o](/models/openai/gpt-4o) in that it has additional RLHF. It is intended for research and evaluation. OpenAI notes that this model is not suited for production use-cases as it may be removed or redirected to another model in the future.

Text, Image

Context

128K

Group

GPT

Pricing preview

No display pricing published in the current snapshot.

Slug

openai/chatgpt-4o-latest

Text

DeepInfra (Turbo)

Sao10K: Llama 3 8B Lunaris

Lunaris 8B is a versatile generalist and roleplaying model based on Llama 3. It's a strategic merge of multiple models, designed to balance creativity with improved logic and general knowledge. Created by [Sao10k](https://huggingface.co/Sao10k), this model aims to offer an improved experience over Stheno v3.2, with enhanced creativity and logical reasoning. For best results, use with Llama 3 Instruct context template, temperature 1.4, and min_p 0.1.

Text

Context

8.2K

Group

Llama3

Pricing preview

Input Price: $0.04 /M tokens

Output Price: $0.05 /M tokens

Slug

sao10k/l3-lunaris-8b
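The sampler settings recommended for Lunaris (temperature 1.4, min_p 0.1) translate directly into a chat request body. A hedged sketch: the payload shape follows the common OpenAI-compatible convention, and `min_p` is a provider-specific extension supported by some routers and backends, not part of the core OpenAI schema.

```python
# Illustrative OpenRouter-style request body applying the recommended
# Lunaris sampler settings. "min_p" availability depends on the provider.
payload = {
    "model": "sao10k/l3-lunaris-8b",
    "messages": [{"role": "user", "content": "Tell me a short story."}],
    "temperature": 1.4,
    "min_p": 0.1,
}
```

The dict would normally be serialized with `json.dumps` and sent as the POST body to the provider's chat completions endpoint.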

Text

Unknown provider

Aetherwiing: Starcannon 12B

Starcannon 12B v2 is a creative roleplay and story writing model, based on Mistral Nemo, using [nothingiisreal/mn-celeste-12b](/nothingiisreal/mn-celeste-12b) as a base, with [intervitens/mini-magnum-12b-v1.1](https://huggingface.co/intervitens/mini-magnum-12b-v1.1) merged in using the [TIES](https://arxiv.org/abs/2306.01708) method. Although more similar to Magnum overall, the model remains very creative, with a pleasant writing style. It is recommended for people who want more variety than Magnum, yet more verbose prose than Celeste.

Text

Context

12K

Group

Mistral

Pricing preview

No display pricing published in the current snapshot.

Slug

aetherwiing/mn-starcannon-12b

Text

Azure

OpenAI: GPT-4o (2024-08-06)

The 2024-08-06 version of GPT-4o offers improved performance in structured outputs, with the ability to supply a JSON schema in the response_format parameter. Read more [here](https://openai.com/index/introducing-structured-outputs-in-the-api/). GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities. For benchmarking against other models, it was briefly called ["im-also-a-good-gpt2-chatbot"](https://twitter.com/LiamFedus/status/1790064963966370209).

Text, Image, File

Context

128K

Group

GPT

Pricing preview

Input Price: $2.5 /M tokens

Output Price: $10 /M tokens

Slug

openai/gpt-4o-2024-08-06
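The structured-output support noted in the GPT-4o (2024-08-06) entry works by attaching a JSON schema to the request's `response_format` field. A minimal sketch of the request body, following OpenAI's chat completions convention at the time of writing (verify field names against current API docs; the schema content here is an invented example):

```python
# Illustrative request body: constrain the model to emit JSON matching
# a schema. "strict": True asks the API to enforce the schema exactly.
payload = {
    "model": "openai/gpt-4o-2024-08-06",
    "messages": [
        {"role": "user", "content": "Extract the city from: 'I live in Oslo.'"}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "city_extraction",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
        },
    },
}
```

With a payload like this, the response's message content is a JSON string conforming to the supplied schema, which the caller can parse directly.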

Text

Unknown provider

Mistral Nemo 12B Celeste

A specialized story writing and roleplaying model based on Mistral's NeMo 12B Instruct. Fine-tuned on curated datasets including Reddit Writing Prompts and Opus Instruct 25K. This model excels at creative writing, offering improved NSFW capabilities, with smarter and more active narration. It demonstrates remarkable versatility in both SFW and NSFW scenarios, with strong Out of Character (OOC) steering capabilities, allowing fine-tuned control over narrative direction and character behavior. Check out the model's [HuggingFace page](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9) for details on what parameters and prompts work best!

Text

Context

32K

Group

Mistral

Pricing preview

No display pricing published in the current snapshot.

Slug

nothingiisreal/mn-celeste-12b

Text

Unknown provider

Meta: Llama 3.1 405B (base)

Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This is the base 405B pre-trained version. It has demonstrated strong performance compared to leading closed-source models in human evaluations. To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).

Text

Context

131.1K

Group

Llama3

Pricing preview

No display pricing published in the current snapshot.

Slug

meta-llama/llama-3.1-405b

Text

Unknown provider

01.AI: Yi Large FC

Yi Large Function Calling (FC) is a specialized model with tool-use capability. The model can decide whether to call a tool based on the tool definitions passed in by the user, and the call is generated in the specified format. It's applicable to various production scenarios that require building agents or workflows.

Text

Context

16.4K

Group

Yi

Pricing preview

No display pricing published in the current snapshot.

Slug

01-ai/yi-large-fc

Text

Unknown provider

01.AI: Yi Large Turbo

Yi Large Turbo is a high-performance, cost-effective model offering powerful capabilities at a competitive price. It's ideal for a wide range of scenarios, including complex inference and high-quality text generation. Check out the [launch announcement](https://01-ai.github.io/blog/01.ai-yi-large-llm-launch) to learn more.

Text

Context

4.1K

Group

Yi

Pricing preview

No display pricing published in the current snapshot.

Slug

01-ai/yi-large-turbo

Text

Unknown provider

01.AI: Yi Vision

Yi Vision is a model for complex visual tasks, providing high-performance understanding and analysis based on multiple images. It's ideal for scenarios that require analysis and interpretation of images and charts, such as image question answering, chart understanding, OCR, visual reasoning, education, research report understanding, or multilingual document reading.

Text, Image

Context

16.4K

Group

Yi

Pricing preview

No display pricing published in the current snapshot.

Slug

01-ai/yi-vision

Text

Unknown provider

Google: Gemini 1.5 Pro Experimental

Gemini 1.5 Pro Experimental is a bleeding-edge version of the [Gemini 1.5 Pro](/models/google/gemini-pro-1.5) model. Because it's currently experimental, it will be **heavily rate-limited** by Google. Usage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms). #multimodal

Text, Image

Context

1M

Group

Gemini

Pricing preview

No display pricing published in the current snapshot.

Slug

google/gemini-pro-1.5-exp

Text

NovitaAI

Meta: Llama 3.1 8B Instruct

Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 8B instruct-tuned version is fast and efficient. It has demonstrated strong performance compared to leading closed-source models in human evaluations. To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).

Text

Context

16.4K

Group

Llama3

Pricing preview

Input Price: $0.02 /M tokens

Output Price: $0.05 /M tokens

Slug

meta-llama/llama-3.1-8b-instruct
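Since this entry publishes per-million-token prices, a rough per-request cost estimate is simple arithmetic. A minimal sketch, using the snapshot prices shown above ($0.02/M input, $0.05/M output); snapshot pricing can drift, so verify against live pricing before relying on it:

```python
def estimate_cost(input_tokens, output_tokens,
                  input_price_per_m=0.02, output_price_per_m=0.05):
    """Estimate request cost in USD from per-million-token prices.

    Defaults reflect this snapshot's pricing for
    meta-llama/llama-3.1-8b-instruct; confirm current prices first.
    """
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# 50k input tokens + 10k output tokens
print(round(estimate_cost(50_000, 10_000), 6))  # 0.0015
```

The same function works for any entry on this page by passing that entry's listed input and output prices.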

Text

Unknown provider

Meta: Llama 3.1 405B Instruct

The highly anticipated 400B class of Llama 3 is here! Clocking in at 128k context with impressive eval scores, the Meta AI team continues to push the frontier of open-source LLMs. Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 405B instruct-tuned version is optimized for high quality dialogue use cases. It has demonstrated strong performance compared to leading closed-source models including GPT-4o and Claude 3.5 Sonnet in evaluations. To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).

Text

Context

131.1K

Group

Llama3

Pricing preview

No display pricing published in the current snapshot.

Slug

meta-llama/llama-3.1-405b-instruct

Text

DeepInfra

Meta: Llama 3.1 70B Instruct

Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 70B instruct-tuned version is optimized for high quality dialogue use cases. It has demonstrated strong performance compared to leading closed-source models in human evaluations. To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).

Text

Context

131.1K

Group

Llama3

Pricing preview

Input Price: $0.4 /M tokens

Output Price: $0.4 /M tokens

Slug

meta-llama/llama-3.1-70b-instruct

Text

Unknown provider

Dolphin Llama 3 70B 🐬

Dolphin 2.9 is designed for instruction following, conversation, and coding. This model is a fine-tune of [Llama 3 70B](/models/meta-llama/llama-3-70b-instruct). It demonstrates improvements in instruction, conversation, coding, and function calling abilities when compared to the original. Uncensored and stripped of alignment and bias, it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at [erichartford.com/uncensored-models](https://erichartford.com/uncensored-models). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).

Text

Context

8.2K

Group

Llama3

Pricing preview

No display pricing published in the current snapshot.

Slug

cognitivecomputations/dolphin-llama-3-70b

Text

Unknown provider

Mistral: Codestral Mamba

A 7.3B parameter Mamba-based model designed for code and reasoning tasks.

- Linear-time inference, allowing for theoretically infinite sequence lengths
- 256k token context window
- Optimized for quick responses, especially beneficial for code productivity
- Performs comparably to state-of-the-art transformer models in code and reasoning tasks
- Available under the Apache 2.0 license for free use, modification, and distribution

Text

Context

256K

Group

Mistral

Pricing preview

No display pricing published in the current snapshot.

Slug

mistralai/codestral-mamba


Need a model request?

Use the market snapshot for discovery, then ask ImaRouter for rollout.

If a model matters for your product, send the slug, expected traffic, target region, and latency expectations. The team can confirm support status, onboarding priority, or a migration path to an equivalent route on ImaRouter.
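Before sending a request, the local JSON snapshot can be filtered to collect candidate slugs. A minimal sketch under an assumed schema (a JSON array of objects with hypothetical `slug`, `group`, and `context` fields — the real snapshot file may be shaped differently, so check it first):

```python
import json

def find_models(snapshot_path, group=None, min_context=0):
    """Filter a local model snapshot by group and context window.

    Assumes each entry carries 'slug', 'group', and 'context'
    (in tokens); adjust field names to the actual snapshot schema.
    """
    with open(snapshot_path) as f:
        models = json.load(f)
    return [
        m["slug"] for m in models
        if (group is None or m.get("group") == group)
        and m.get("context", 0) >= min_context
    ]
```

For example, `find_models("models.json", group="Llama3", min_context=100_000)` would surface slugs like `meta-llama/llama-3.1-70b-instruct` to include in a support request.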

Contact

support@imarouter.com

Best for model availability questions, onboarding priority, routing strategy, and enterprise rollout planning.
