News

Unsloth
unsloth.ai > docs > models > glm-5.1

GLM-5.1 - How to Run Locally | Unsloth Documentation

4+ hour, 4+ min ago  (844+ words) Run the new GLM-5.1 model by Z.ai on your own local device! GLM-5.1 is Z.ai's new open model. Compared with GLM-5, it delivers major improvements in coding, agentic tool use, reasoning, role-play, long-horizon agentic tasks, and overall chat quality. The…

Unsloth
unsloth.ai > docs > models > gemma-4

Gemma 4 - How to Run Locally | Unsloth Documentation

5+ day, 47+ min ago  (997+ words) Run Google's new Gemma 4 models locally, including E2B, E4B, 26B-A4B, and 31B. Gemma 4 is Google DeepMind's new family of open models, including E2B, E4B, 26B-A4B, and 31B. These multimodal, hybrid-thinking models support 140+ languages, up to 256K context, and come in both dense and MoE variants. E2B and E4B also support image…

Unsloth
unsloth.ai > docs > models > gemma-4 > train

Gemma 4 Fine-tuning Guide | Unsloth Documentation

5+ day, 4+ hour ago  (717+ words) Train Gemma 4 by Google with Unsloth. You can now fine-tune Google's Gemma 4 E2B, E4B, 26B-A4B and 31B with Unsloth. Support includes all vision, text, audio and RL fine-tuning. Fine-tune Gemma 4 via our free Google Colab notebooks: If you want to preserve reasoning ability, you…

Unsloth
unsloth.ai > docs > new > studio > start

Get started with Unsloth Studio | Unsloth Documentation

3+ week, 2+ hour ago  (324+ words) A guide for getting started with the fine-tuning studio, data recipes, model exporting, and chat. Unsloth Studio is a local, browser-based GUI for fine-tuning LLMs without writing any code. It wraps the training pipeline in a clean interface that handles…

Unsloth
unsloth.ai > docs > new > studio > install

Unsloth Studio Installation | Unsloth Documentation

2+ week, 6+ day ago  (528+ words) Learn how to install Unsloth Studio on your local device. Unsloth Studio works on Windows, Linux, WSL and MacOS. You should use the same installation process on every device, although the system requirements may differ by device. Training: Supported on…

Unsloth
unsloth.ai > docs > new > studio

Introducing Unsloth Studio | Unsloth Documentation

3+ week, 49+ min ago  (966+ words) Run and train AI models locally with Unsloth Studio. Today, we're launching Unsloth Studio (Beta): an open-source, no-code web UI for training, running and exporting open models in one unified local interface. Run GGUF and safetensors models locally on Mac,…

Unsloth
unsloth.ai > docs > new > studio > chat

How to Run models with Unsloth Studio | Unsloth Documentation

3+ week, 1+ hour ago  (539+ words) Run AI models, LLMs and GGUFs locally with Unsloth Studio. Unsloth Studio lets you run AI models 100% offline on your computer. Run model formats like GGUF and safetensors from Hugging Face or from your local files. Works on all MacOS,…

Unsloth
unsloth.ai > docs > models > nemotron-3

NVIDIA Nemotron 3 Nano - How To Run Guide | Unsloth Documentation

3+ week, 1+ day ago  (702+ words) Run & fine-tune NVIDIA Nemotron 3 Nano locally on your device! NVIDIA releases Nemotron-3-Nano-4B, a 4B open hybrid MoE model that follows Nemotron-3-Super-120B-A12B and Nemotron-3-Nano-30B-A3B. The Nemotron family is designed for fast, accurate coding, math, and agentic workloads. They feature…

Unsloth
unsloth.ai > docs

Unsloth Docs | Unsloth Documentation

3+ week, 1+ day ago  (411+ words) Train your own model with Unsloth, an open-source framework for LLM fine-tuning and reinforcement learning. Our docs will guide you through running & training your own model locally. Get started Our GitHub New Qwen3.5 Small & Medium LLMs are here! Run the new…

Unsloth
unsloth.ai > docs > models > qwen3.5 > fine-tune

Qwen3.5 Fine-tuning Guide | Unsloth Documentation

1+ mon, 3+ day ago  (624+ words) Learn how to fine-tune Qwen3.5 LLMs with Unsloth. You can now fine-tune the Qwen3.5 model family (0.8B, 2B, 4B, 9B, 27B, 35B-A3B, 122B-A10B) with Unsloth. Support includes both vision and text fine-tuning. Qwen3.5-35B-A3B bf16 LoRA works on 74GB VRAM. Unsloth makes Qwen3.5 train 1.5× faster and uses 50% less VRAM than FA2 setups. Qwen3.5 bf16 LoRA VRAM use: 0.8B: 3GB • 2B: 5GB • 4B: 10GB • 9B: 22GB • 27B: 56GB Fine-tune…