Qwen Studio

Pioneering the Future of AI Research

We develop and open-source advanced large language models and interpretability toolkits, driving innovation in AI research and development.

Explore Our Models
Deep Research

Latest Innovations & Open-Source Releases

Introducing Qwen-Scope: LLM Interpretability Toolkit

Interpretability research has emerged as a critical area for understanding LLM behaviors, informing performance optimization, and enabling more controllable model outputs. Today, we are excited to introduce Qwen-Scope, an interpretability toolkit built for the Qwen3 and Qwen3.5 series models. Specifically, we inserted and trained Sparse Autoencoders (SAEs) within the models' hidden layers. By...
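To make the SAE idea concrete, here is a minimal sketch of the standard sparse-autoencoder pattern applied to a layer's hidden states: an overcomplete ReLU encoder produces sparse feature activations, a linear decoder reconstructs the original activation, and training balances reconstruction error against an L1 sparsity penalty. All dimensions, initializations, and the `l1_coef` value are illustrative assumptions, not Qwen-Scope's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: model hidden dim -> overcomplete feature dictionary.
d_model, d_sae = 64, 512
W_enc = rng.normal(0, 0.02, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.02, (d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_forward(h):
    """Encode hidden states into sparse features, then reconstruct them."""
    f = np.maximum(h @ W_enc + b_enc, 0.0)   # ReLU -> non-negative, sparse features
    h_hat = f @ W_dec + b_dec                # linear decoder
    return f, h_hat

def sae_loss(h, l1_coef=1e-3):
    """Reconstruction MSE plus an L1 penalty that encourages sparse features."""
    f, h_hat = sae_forward(h)
    recon = np.mean((h - h_hat) ** 2)
    sparsity = l1_coef * np.mean(np.abs(f))
    return recon + sparsity

# Stand-in for a batch of residual-stream activations captured at one layer.
h = rng.normal(size=(8, d_model))
f, h_hat = sae_forward(h)
print(f.shape, h_hat.shape)  # (8, 512) (8, 64)
```

In a real toolkit the encoder/decoder would be trained with gradients on activations captured from the frozen base model; the sketch above only shows the forward pass and loss that such training would optimize.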

Qwen3.6-27B: A Dense Multimodal Model

Following the launch of Qwen3.6-Plus and Qwen3.6-35B-A3B, we are excited to open-source Qwen3.6-27B — a dense 27-billion-parameter multimodal model at the scale the community has been asking for most. Like its predecessors, it supports both multimodal thinking and non-thinking modes, and it delivers flagship-level agentic coding performance, surpassing the previous-generation open-source flagship...

Early Preview: Qwen3.6-Max-Preview

Following the release of Qwen3.6-Plus, we are sharing an early preview of our next proprietary model: Qwen3.6-Max-Preview. Compared to Qwen3.6-Plus, this preview release brings stronger world knowledge and instruction following, along with significant agentic coding improvements across a wide range of benchmarks. As a preview, the model is still under active development — we are continuing to...

Connect with Qwen Studio

Have questions about our research, models, or toolkits? We'd love to hear from you. Fill out the form below and our team will get back to you shortly.

Stay Ahead with Qwen Studio Updates

Subscribe to our newsletter for the latest breakthroughs in AI research, new model releases, and interpretability toolkit advancements from Qwen Studio.

Join Our Community

Get exclusive insights directly to your inbox.

We respect your privacy. Unsubscribe at any time.