Try Qwen3 Online

Interact with our AI model directly in this live demo

🎉 New: Qwen3 Released

Qwen3: Think Deeper, Act Faster

Qwen3 is the latest large language model series developed by the Qwen team at Alibaba Cloud.
Powerful language models with hybrid thinking capabilities.

🔍 Advanced AI with support for 119 languages



What is Qwen3

Qwen3 is the latest addition to the Qwen family of large language models, featuring both dense and Mixture-of-Experts (MoE) models of various sizes.

  • Hybrid Thinking Modes
    Seamlessly switch between thinking mode (for complex reasoning) and non-thinking mode (for efficient responses).
  • Multilingual Support
    Support for 119 languages and dialects with strong capabilities for instruction following and translation.
  • Agentic Capabilities
    Precise integration with external tools for complex agent-based tasks and leading performance among open-source models.
Benefits

Why Choose Qwen3

Qwen3 offers significant improvements over previous models with enhanced reasoning capabilities and human preference alignment.

  • Advanced Reasoning
    Significantly enhanced reasoning capabilities for mathematics, coding, and commonsense logical reasoning.
  • Human Alignment
    Improved alignment with human preferences for more natural, helpful responses.
  • Versatile Model Options
    A range of dense and Mixture-of-Experts (MoE) models, from 0.6B to 235B parameters, so you can match the model to your hardware and use case.

How to Use Qwen3

Get started with Qwen3 in a few simple steps:

  • Choose a model size that fits your hardware: dense models from 0.6B to 32B, or MoE models (30B-A3B, 235B-A22B).
  • Load the model with a supported framework such as Hugging Face Transformers, vLLM, Ollama, or llama.cpp.
  • Pick a thinking mode: enable thinking for complex reasoning, or disable it for fast, direct responses.
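As a minimal sketch of these steps using Hugging Face Transformers (the model id Qwen/Qwen3-0.6B and the generation settings are illustrative assumptions; pick the size and parameters that fit your setup):

```python
# Minimal Qwen3 quickstart sketch.
# Assumptions: a recent transformers release with Qwen3 support, torch installed,
# and the "Qwen/Qwen3-0.6B" Hugging Face repo id (swap in the size you need).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B"  # assumption: any Qwen3 checkpoint you can run

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place weights on available GPU(s) or CPU
)

messages = [{"role": "user", "content": "Give me a short introduction to Qwen3."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```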

Key Features of Qwen3

Qwen3 offers state-of-the-art capabilities for a wide range of AI applications.

Hybrid Thinking Modes

Switch between thinking mode for complex reasoning and non-thinking mode for quick responses.

Multilingual Support

Support for 119 languages and dialects with strong translation capabilities.

Long Context Windows

Process up to 128K tokens of context for comprehensive understanding of large documents.

Tool Usage

Seamless integration with external tools for enhanced capabilities as an agent.

Complex Reasoning

Enhanced capabilities for mathematics, coding, and logical reasoning tasks.

Flexible Deployment

Deploy on various platforms with support for quantization and optimized inference.

Stats

Qwen3 Performance

Leading capabilities across various benchmarks.

  • Supported Languages: 119+
  • Context Length: 128K tokens
  • Model Options: 8+ variants

Testimonials

What Users Say About Qwen3

Hear from developers and researchers who have integrated Qwen3 into their projects.

David Chen

AI Researcher

Qwen3's hybrid thinking mode has revolutionized our research. We can now tackle complex reasoning tasks with unprecedented accuracy and transparency.

Rachel Kim

NLP Engineer

The multilingual capabilities of Qwen3 are exceptional. Supporting 119 languages has allowed us to create truly global applications with minimal effort.

Marcus Thompson

Full-Stack Developer

Qwen3's MoE models offer an incredible balance of performance and efficiency. We can now run advanced AI capabilities even with limited resources.

Sofia Garcia

AI Product Manager

The tool-use capabilities of Qwen3 have transformed our agent-based applications. It integrates seamlessly with our existing systems and delivers accurate results.

James Wilson

ML Engineer

Qwen3's reasoning capabilities are outstanding. We've seen significant improvements in code generation and mathematical problem-solving tasks.

Anna Zhang

Startup Founder

The flexibility of Qwen3 deployment options has been crucial for our startup. We started with the smaller models and scaled up as our user base grew.

FAQ

Frequently Asked Questions About Qwen3

Have another question? Contact us on Discord or by email.

1. What is Qwen3 and how does it differ from previous versions?

Qwen3 is the latest large language model series from the Qwen team at Alibaba Cloud. It features both dense models (0.6B to 32B) and Mixture-of-Experts models (30B-A3B, 235B-A22B). Compared to previous versions, Qwen3 offers enhanced reasoning capabilities, better human preference alignment, and hybrid thinking modes.

2. What are the system requirements to run Qwen3?

System requirements vary depending on the model size. Smaller models like Qwen3-0.6B can run on consumer hardware, while larger models require more powerful GPUs. Quantized versions (Int4, Int8) are available to reduce memory requirements. You can also use cloud-based deployment options.

3. How can I control the thinking mode in Qwen3?

You can control Qwen3's thinking mode by setting 'enable_thinking=True/False' when using the tokenizer's apply_chat_template method. You can also use /think and /no_think instructions in your prompts to dynamically switch modes during multi-turn conversations.
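A hedged sketch of both switches, reusing the tokenizer and model from the quickstart above (the prompts are illustrative; enable_thinking is the flag described in this answer):

```python
# Sketch: controlling the hybrid thinking mode.
# Assumes the tokenizer/model loaded in the quickstart; enable_thinking is the
# switch this FAQ answer describes, passed through apply_chat_template.
messages = [{"role": "user", "content": "Solve 23 * 47 step by step."}]

# Thinking mode on: the model produces an internal reasoning trace before answering.
thinking_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,
    return_tensors="pt",
).to(model.device)

# Thinking mode off: faster, direct responses for simple queries.
fast_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,
    return_tensors="pt",
).to(model.device)

# In multi-turn chat you can instead append /think or /no_think to a user turn:
messages.append({"role": "user", "content": "Now just give the final answer /no_think"})
```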

4. Which frameworks support Qwen3?

Qwen3 is supported by multiple frameworks including Hugging Face Transformers, ModelScope, vLLM, SGLang, llama.cpp, Ollama, LMStudio, and MLX on Apple Silicon. This provides flexibility for deployment across various environments.
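As an illustrative sketch of one such path: vLLM and SGLang expose an OpenAI-compatible server, so a locally served Qwen3 model can be called with the standard openai Python client (the endpoint URL and the model id Qwen/Qwen3-8B below are assumptions; match them to your own server):

```python
# Sketch: calling a locally served Qwen3 model through an OpenAI-compatible API.
# Assumptions: a vLLM or SGLang server already running on localhost:8000,
# started with the "Qwen/Qwen3-8B" model id (adjust both to your deployment).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local OpenAI-compatible endpoint
    api_key="EMPTY",                      # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-8B",
    messages=[{"role": "user", "content": "Summarize what Qwen3 is in two sentences."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```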

5. How can I use Qwen3 for tool integration and agent capabilities?

Qwen3 excels at tool integration. We recommend using Qwen-Agent, which provides wrappers for tool use with MCP support. Alternatively, you can use frameworks like SGLang, vLLM, Transformers, llama.cpp, or Ollama with appropriate configurations.
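As one hedged illustration of the alternative route (a sketch, not the Qwen-Agent API itself): an OpenAI-compatible server configured for tool calling accepts tool definitions through the standard chat-completions interface. The get_weather tool below is a hypothetical example, and the endpoint and model id are the same assumptions as above:

```python
# Sketch: tool calling against an OpenAI-compatible Qwen3 endpoint.
# Assumptions: the local server from the previous sketch, configured for tool
# calling; "get_weather" is a hypothetical example tool, not a built-in.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Qwen/Qwen3-8B",
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
)

# If the model decides to call the tool, the call (name + JSON arguments) arrives
# here; your code executes it and returns the result in a follow-up turn.
print(response.choices[0].message.tool_calls)
```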

6. What is the license for Qwen3 models?

All Qwen3 open-source models are licensed under Apache 2.0. You can find the license files in the respective Hugging Face repositories. This allows for both research and commercial use.

Start Building with Qwen3 Today

Experience the power of advanced AI with hybrid thinking capabilities.