
vLLM – High-Throughput Inference

High-throughput LLM serving with PagedAttention

View on GitHub ↗ · Official Website ↗
Category: Skill Framework
GitHub Stars: 38k+ (community adoption)
License: Apache-2.0
Tags: llm, inference, serving

What Is vLLM?

vLLM is an open-source, high-throughput LLM inference and serving engine originally developed by researchers at UC Berkeley. With 38k+ GitHub stars, it has become the de facto standard for production LLM serving, used by companies including Nvidia and Microsoft and by AI startups worldwide.

The core innovation behind vLLM is PagedAttention—a memory management algorithm inspired by OS virtual memory and paging. Traditional LLM serving frameworks pre-allocate a large, contiguous block of GPU memory for the KV (key-value) cache of each request. This wastes significant VRAM because allocated memory often goes unused. PagedAttention manages KV cache in small, non-contiguous "pages," eliminating internal fragmentation and enabling near-zero memory waste. The result: up to 24x higher throughput compared to standard HuggingFace Transformers serving, with the same hardware.
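To make the paging idea concrete, here is a toy sketch (not vLLM's implementation, and the page size here is arbitrary) contrasting contiguous pre-allocation with on-demand paged allocation:

```python
# Toy illustration of the memory-saving idea behind PagedAttention.
# Not vLLM's implementation -- just paged vs. contiguous allocation.

PAGE_SIZE = 16  # tokens per KV-cache page (vLLM calls these "blocks")

def contiguous_alloc(max_len: int) -> int:
    """Traditional serving: reserve max_len KV slots per request up front."""
    return max_len

def paged_alloc(actual_len: int) -> int:
    """Paged serving: allocate pages on demand as tokens are generated."""
    pages = -(-actual_len // PAGE_SIZE)  # ceiling division
    return pages * PAGE_SIZE

# Three requests that reserve 2048 slots but actually generate far fewer.
actual_lengths = [100, 37, 512]
contiguous = sum(contiguous_alloc(2048) for _ in actual_lengths)
paged = sum(paged_alloc(n) for n in actual_lengths)

print(f"contiguous: {contiguous} slots, paged: {paged} slots")
# contiguous: 6144 slots, paged: 672 slots
```

Paged allocation wastes at most one partially filled page per request, which is why the KV cache can be packed almost perfectly.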

Beyond PagedAttention, vLLM provides an OpenAI-compatible REST API out of the box. Any application built against the OpenAI SDK can switch to a self-hosted vLLM server with a single line change: setting the base_url to your vLLM endpoint. This makes migration from OpenAI or Anthropic APIs to self-hosted open-source models remarkably straightforward.

The project is maintained at github.com/vllm-project/vllm and has a very active community with new releases roughly every two weeks.

Key Features

  • 🤖
    Open-Model Serving — Serves open-weight LLMs such as Llama 3, Mistral, Gemma, and Qwen for text generation and reasoning.
  • ⚡ High-Performance Inference — Optimized model inference with quantization support, continuous batching, and sub-second latency.
  • 🔓
    Open Source — Apache-2.0 licensed—inspect, fork, modify, and self-host with no vendor lock-in.

Pros & Cons

Pros

  • Up to 24x higher throughput than HuggingFace Transformers
  • PagedAttention algorithm maximizes GPU memory utilization
  • OpenAI-compatible REST API – minimal code changes to integrate
  • Supports LLaMA, Mistral, Gemma, Falcon, and 40+ model architectures

Cons

  • Requires an NVIDIA GPU with CUDA for competitive performance; CPU-only inference is not practical
  • Minimum 1 GPU with 16GB+ VRAM for most production models

Use Cases

🏭 Production API Serving

The primary use case: replace an external OpenAI/Anthropic API call with a self-hosted open-source model (Llama 3, Mistral, Gemma, Qwen, etc.) at dramatically lower cost. vLLM handles concurrent requests efficiently using continuous batching—new requests join the batch dynamically without waiting for the current batch to finish, maximizing GPU utilization. Teams serving millions of requests per day commonly report 70–90% cost reductions vs. commercial APIs.
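The throughput gain from continuous batching can be sketched with a toy model (an illustration only, not vLLM's actual scheduler): with static batching the GPU waits for the longest request in each batch, while continuous batching refills a slot the moment a request finishes.

```python
# Toy comparison of static vs. continuous batching. Each request needs
# `length` decode steps; the GPU runs `batch_size` slots per step.

def static_batching_steps(lengths, batch_size):
    """Static: each batch runs until its LONGEST request finishes."""
    steps = 0
    for i in range(0, len(lengths), batch_size):
        steps += max(lengths[i:i + batch_size])
    return steps

def continuous_batching_steps(lengths, batch_size):
    """Continuous: finished requests are replaced immediately, so total
    steps approach total_work / batch_size (perfect slot reuse)."""
    total_work = sum(lengths)
    return -(-total_work // batch_size)  # ceiling division

lengths = [10, 200, 15, 180, 20, 190, 25, 170]  # mixed short/long requests
print(static_batching_steps(lengths, batch_size=4))      # 390
print(continuous_batching_steps(lengths, batch_size=4))  # 203
```

Short requests stuck in a batch with long ones are exactly where static batching wastes capacity; real-world gains depend on how skewed the request-length distribution is.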

⚡ Batch Inference Pipelines

For offline workloads—document processing, data enrichment, classification at scale—vLLM's async Python API and OpenAI-compatible batching endpoints allow you to saturate GPU throughput. Combine with Python asyncio to issue thousands of simultaneous requests to the local server and process results as they arrive.
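The fan-out pattern looks roughly like this (a stdlib-only sketch: `call_model` is a stand-in whose body you would replace with an actual `AsyncOpenAI` `chat.completions.create()` call against `http://localhost:8000/v1`; all names here are illustrative, not vLLM APIs):

```python
import asyncio

async def call_model(prompt: str) -> str:
    """Stand-in for the HTTP round trip to the local vLLM server."""
    await asyncio.sleep(0.01)  # placeholder for network + generation time
    return f"response to: {prompt!r}"

async def run_batch(prompts, max_concurrency: int = 64):
    """Issue all prompts concurrently, but cap in-flight requests so the
    server's queue (and your client's sockets) are not overwhelmed."""
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(p):
        async with sem:
            return await call_model(p)

    return await asyncio.gather(*(bounded(p) for p in prompts))

if __name__ == "__main__":
    prompts = [f"Summarize document {i}" for i in range(1000)]
    results = asyncio.run(run_batch(prompts))
    print(len(results))  # 1000
```

The semaphore cap is worth tuning: vLLM queues excess requests itself, but a client-side bound keeps latency and memory predictable on both sides.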

🔬 Research & Experimentation

Researchers use vLLM to benchmark new models, test quantization methods (GPTQ, AWQ, FP8), and experiment with sampling strategies. vLLM supports speculative decoding for accelerating generation of smaller models and tensor parallelism for distributing large models across multiple GPUs.

🏢 Private Enterprise Deployment

Organizations with strict data residency requirements deploy vLLM on-premises or in a private cloud (AWS, GCP, Azure). Since all inference happens in-house, no data leaves the security perimeter. vLLM's OpenAI-compatible API means existing integrations (LangChain, LlamaIndex, custom apps) work without code changes.

Getting Started with vLLM

vLLM requires an NVIDIA GPU with CUDA 11.8+ and Python 3.9+. For most production models you need at least 16GB VRAM (A10G, A100, H100, or RTX 4090).

Step 1: Install

pip install vllm

Step 2: Launch an OpenAI-Compatible Server

# Serve Llama 3.1 8B (downloads model from HuggingFace automatically)
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --port 8000 \
  --max-model-len 8192 \
  --gpu-memory-utilization 0.9

Step 3: Call It Like OpenAI

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed",  # vLLM doesn't require auth by default
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Explain PagedAttention in 3 sentences."}],
    max_tokens=256,
)
print(response.choices[0].message.content)

Step 4: Offline Batch Inference (Python)

from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
sampling_params = SamplingParams(temperature=0.7, max_tokens=512)

prompts = [
    "Summarize this document: ...",
    "Classify this review as positive or negative: ...",
    "Extract named entities from: ...",
]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
💡 Multi-GPU tip: For models larger than your single GPU's VRAM, add --tensor-parallel-size 4 to split the model across 4 GPUs. vLLM handles the tensor sharding automatically.

Production Deployment Tips

Moving beyond a single-GPU dev setup to a production cluster:

  • Use a reverse proxy (nginx, Caddy) in front of vLLM for TLS termination, rate limiting, and authentication. vLLM itself has no auth layer by default.
  • Tune --gpu-memory-utilization: vLLM claims this fraction of VRAM (default 0.9) for model weights plus KV cache. Lower it if you hit OOM errors or share the GPU; raise it on a dedicated inference node.
  • Enable --quantization awq to load 4-bit AWQ-quantized models, cutting weight memory to roughly a quarter of FP16 with minimal quality loss for most workloads.
  • Monitor with Prometheus: vLLM exposes a /metrics endpoint with throughput, latency histograms, queue depth, and GPU utilization. Plug into Grafana for dashboards.
  • Scale out with replicas: at high traffic volumes, deploy multiple vLLM instances behind a load balancer or orchestrate them with Ray Serve. Note that --pipeline-parallel-size splits one model across GPUs or nodes; it partitions the model rather than adding replicas.
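A minimal reverse-proxy sketch for the first point, assuming nginx in front of a single vLLM instance on port 8000 (hostnames, certificate paths, and the bearer token are placeholders to adapt, not recommended values):

```nginx
# Sketch: TLS termination + a simple bearer-token check in front of vLLM.
server {
    listen 443 ssl;
    server_name llm.example.internal;

    ssl_certificate     /etc/nginx/certs/llm.crt;
    ssl_certificate_key /etc/nginx/certs/llm.key;

    location /v1/ {
        # Reject requests without the expected Authorization header.
        if ($http_authorization != "Bearer change-me") {
            return 401;
        }
        proxy_pass http://127.0.0.1:8000;
        proxy_read_timeout 300s;   # long generations need generous timeouts
        proxy_buffering off;       # keep token streaming (SSE) responsive
    }
}
```

Disabling proxy buffering matters for streaming clients; with buffering on, nginx holds tokens until the buffer fills and streaming appears to stall.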

vLLM vs Ollama: Which Should You Use?

Both vLLM and Ollama serve local LLMs, but they target very different scenarios:

  • vLLM: Designed for production serving with multiple concurrent users. Maximizes throughput and GPU utilization. Requires Linux + NVIDIA GPU. More complex to set up. Best for: API backends, batch jobs, high-traffic inference endpoints.
  • Ollama: Designed for developer laptops and single-user local inference. Simple one-command install on macOS/Windows/Linux. Supports CPU inference (slower). Best for: local development, personal assistants, prototyping, and environments without dedicated GPU servers.

Rule of thumb: Use Ollama to build and prototype, switch to vLLM when you're ready for production scale or need to serve more than 5–10 concurrent users.



Frequently Asked Questions

What is vLLM?
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. It uses PagedAttention to manage KV cache efficiently, achieving up to 24x higher throughput than standard HuggingFace Transformers serving.
When should I use vLLM instead of Ollama?
Use vLLM for production serving with high concurrent request volumes. It excels at maximizing GPU utilization and throughput for batch inference. Use Ollama for local development, prototyping, and single-user scenarios where ease of use matters more than throughput.
How do I start vLLM as an OpenAI-compatible server?
Run: vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000. Then point any OpenAI SDK client to http://localhost:8000/v1. The API supports /v1/chat/completions, /v1/completions, and /v1/models endpoints. Set api_key="not-needed" in the client since vLLM has no auth by default.
What models does vLLM support?
vLLM supports 40+ model architectures including Llama 3/3.1/3.2, Mistral, Mixtral, Gemma 2, Phi-3, Qwen 2, Falcon, Command-R, and DeepSeek. Any model hosted on HuggingFace Hub in a supported architecture can be loaded. Check the supported models page for the full list.
How much GPU memory does vLLM need?
It depends on the model size and quantization. A rough guide: Llama 3.1 8B in FP16 needs ~16GB VRAM; with AWQ 4-bit quantization, ~8GB. Llama 3.1 70B needs ~140GB FP16 (4× A100 80GB), or ~40GB with 4-bit quantization. Use --gpu-memory-utilization 0.9 to maximize KV cache allocation.
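Those figures follow from a simple rule: weight memory ≈ parameter count × bits per parameter / 8, plus headroom for KV cache and runtime overhead. A quick sketch (`weight_gb` is an illustrative helper, not part of vLLM; the headroom explains why, e.g., 4-bit 8B is quoted at ~8GB rather than the raw ~4GB of weights):

```python
# Back-of-envelope weight-memory estimate. Weights only -- KV cache,
# activations, and CUDA overhead come on top.

def weight_gb(num_params: float, bits_per_param: int) -> float:
    """GB of memory needed just to hold the model weights."""
    return num_params * bits_per_param / 8 / 1e9

print(f"Llama 3.1 8B  FP16:  {weight_gb(8e9, 16):.0f} GB")
print(f"Llama 3.1 8B  4-bit: {weight_gb(8e9, 4):.0f} GB")
print(f"Llama 3.1 70B FP16:  {weight_gb(70e9, 16):.0f} GB")
```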
Does vLLM support streaming responses?
Yes. Set stream=True in your OpenAI client call, exactly as you would with the OpenAI API. vLLM streams tokens via Server-Sent Events (SSE) and is fully compatible with the streaming protocol used by OpenAI, enabling drop-in replacement for streaming chatbots and UIs.
Is vLLM free to use commercially?
Yes. vLLM is released under the Apache 2.0 license, which permits commercial use, modification, and distribution with no restrictions. Note that the models you serve (Llama, Mistral, etc.) have their own licenses—check each model's license before commercial deployment.