⚙️ Skill Framework · ★ 36k+ GitHub Stars · fine-tuning, llm, framework

LLaMA-Factory – LLM Fine-Tuning Framework

Unified fine-tuning framework for 100+ LLMs with WebUI

View on GitHub ↗ · Official Website ↗
Category: Skill Framework
GitHub Stars: 36k+
License: Apache-2.0
Tags: fine-tuning, llm, framework

What Is LLaMA-Factory?

LLaMA-Factory is an open-source fine-tuning framework with 36k+ GitHub stars: a unified toolkit for adapting 100+ LLMs, with a WebUI for configuring training runs without code.

As a fine-tuning framework, LLaMA-Factory is designed to help developers and teams adapt open-weight models to their own data with reliable, tested training recipes. It handles the complexity of model loading, parameter-efficient training methods, and experiment configuration, so engineers can focus on data quality and evaluation instead of plumbing.

The project is maintained on GitHub at github.com/hiyouga/LLaMA-Factory and is actively developed with a strong open-source community. With 36k+ stars, it is one of the most widely adopted tools in its category.

LLaMA-Factory is the most comprehensive open-source fine-tuning toolkit for LLMs. It supports every major PEFT method (LoRA, QLoRA, DoRA, full fine-tuning) on 100+ model architectures via a single unified interface. If you're fine-tuning Llama, Qwen, Mistral, or DeepSeek models, this is where to start — the WebUI makes supervised fine-tuning accessible to ML engineers without a research background.

— AI Nav Editorial Team

Getting Started with LLaMA-Factory

Install LLaMA-Factory by following the official README; the recommended path is to clone the repository and install it with pip in editable mode, after which the `llamafactory-cli` command becomes available.
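
A minimal setup sketch, assuming the source-install flow described in the project README; the exact extras in `.[torch,metrics]` vary by hardware and release, so treat them as an example rather than a fixed recipe:

```bash
# Clone and install in editable mode, as recommended in the official README;
# verify the extras against the version you check out.
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"

# Launch the LLaMA Board WebUI for no-code fine-tuning configuration.
llamafactory-cli webui
```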

💡 Tip: Check the Releases page for the latest stable version and migration notes, and Discussions for community Q&A.

Key Features

  • 🎯 Fine-Tuning — Customize pre-trained models on domain-specific data for improved accuracy and specialization.
  • 🤖 Broad Model Support — Fine-tune open-weight LLMs such as Llama 3, Qwen2, Mistral, and DeepSeek for text generation and reasoning.
  • ⚙️ Modular Framework — Extensible architecture with plugin support; customize and extend for your specific use case.
  • 🔓 Open Source — Apache-2.0 licensed; inspect, fork, modify, and self-host with no vendor lock-in.

Pros & Cons

Pros

  • One-stop fine-tuning for 100+ models including Llama, Mistral, Qwen, and Gemma
  • Supports LoRA, QLoRA, DoRA, ORPO, DPO, and full fine-tuning
  • LLaMA Board web UI for no-code model training configuration
  • Memory-efficient: QLoRA fine-tunes 7B models on 8GB VRAM (see the config sketch at the end of this section)

Cons

  • Full fine-tuning of large models still requires high-end GPU clusters
  • Dataset preparation and formatting require careful attention to templates
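
The memory-efficiency claim above comes from 4-bit QLoRA. Below is a hedged sketch of how that mode is typically switched on in a training config; the `quantization_bit` key mirrors the QLoRA example configs shipped in the repository and should be verified against your installed version.

```bash
# Hypothetical fragment appended to an SFT config to turn LoRA into 4-bit QLoRA;
# key names follow the repo's examples/train_qlora configs (verify before use).
cat >> my_sft_config.yaml <<'EOF'
finetuning_type: lora
quantization_bit: 4   # load frozen base weights in 4-bit
EOF
```

Quantizing the frozen base weights to 4 bits while training only the low-rank adapters is what lets a 7B run fit in roughly 8GB of VRAM.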

Use Cases

LLaMA-Factory is widely used across the open-model fine-tuning ecosystem. Here are the most common scenarios:

🏗️ Domain-Specific Model Customization

Fine-tune open-weight models on proprietary or domain data so they answer with the terminology and accuracy your application needs.

📚 Instruction Tuning & Chat Assistants

Turn base checkpoints into instruction-following assistants via supervised fine-tuning on Alpaca- or ShareGPT-format datasets.

🤖 Preference Alignment

Apply DPO or ORPO on top of a supervised fine-tuned model to align outputs with human preferences.

🔌 Low-Resource Fine-Tuning

Use LoRA or QLoRA to adapt 7B-class models on a single consumer GPU instead of a training cluster.

Known Limitations & Gotchas

  • Multi-node distributed training requires additional configuration beyond single-GPU setups
  • The extensive configuration options can be overwhelming — start with the WebUI before tackling YAML configs
  • Model evaluation after fine-tuning requires external tooling (not built into the main training pipeline)
  • Some advanced PEFT methods (GaLore, APOLLO) are experimental and not yet production-validated

Frequently Asked Questions

What is LLaMA-Factory?
LLaMA-Factory is an open-source framework for efficient fine-tuning of large language models. It supports LoRA, QLoRA, and full fine-tuning for 100+ model architectures with a simple YAML configuration.
What is the minimum GPU needed for LLaMA-Factory?
QLoRA fine-tuning of a 7B model requires approximately 8GB VRAM (RTX 3070 or better). Full fine-tuning of a 7B model needs 24GB+ VRAM. Multi-GPU training is supported via DeepSpeed.
How do I fine-tune a model with LLaMA-Factory?
Prepare your dataset in the Alpaca or ShareGPT format, create a YAML config specifying model path, dataset, and LoRA parameters, then run `llamafactory-cli train config.yaml`. The LLaMA Board GUI provides a visual alternative; a config sketch follows this FAQ.
Which models can LLaMA-Factory fine-tune?
LLaMA-Factory supports Llama 3/2, Mistral, Qwen2, Gemma 2, Phi-3, ChatGLM, Baichuan, DeepSeek, Yi, InternLM, and 100+ more. See the full list in the GitHub documentation.
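
For a concrete starting point, here is a hedged sketch of the fine-tuning workflow described above. The config keys are taken from the LoRA example configs in the repository and may shift between releases; the model path and hyperparameters are illustrative only, and the bundled `alpaca_en_demo` dataset stands in for your own data.

```bash
# Write a minimal LoRA SFT config; keys mirror the examples/train_lora configs
# in the repository. Custom datasets are registered in data/dataset_info.json;
# here we use the bundled alpaca_en_demo dataset instead.
cat > llama3_lora_sft.yaml <<'EOF'
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
dataset: alpaca_en_demo
template: llama3
cutoff_len: 1024
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
logging_steps: 10
save_steps: 500
bf16: true
EOF

# Launch supervised fine-tuning via the CLI.
llamafactory-cli train llama3_lora_sft.yaml
```

The trained adapter is written to `output_dir`; the repository's examples directory shows matching configs for inference and export.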