Reference: https://docs.vllm.ai/en/latest/features/sleep_mode/
In day-to-day operation of AI inference services, have you ever faced this dilemma: expensive GPUs sit idle during off-peak hours, yet their memory cannot be handed to other tasks? Or, when RLHF training and inference alternate, you are forced to restart the service over and over?
Today I want to share a game-changing solution: vLLM's sleep mode. In this post we bring up a 32B model and watch it release over 90% of its GPU memory while asleep, with the service still online!
What Is vLLM Sleep Mode?
vLLM's sleep mode lets you temporarily release the GPU memory a model occupies (both the model weights and the KV cache) without stopping the server or tearing down its Docker container. It is especially useful for:
- RLHF training: switch seamlessly between training and inference
- Cost optimization: hand GPU resources to other tasks during idle periods
- Multi-model scheduling: swap models dynamically without restarting the service
Key advantages:
- ✅ Frees 90%+ of GPU memory: offloads weights to CPU RAM and discards the KV cache
- ✅ Fast resume: wakes in seconds, with no full model reload
- ✅ Fine-grained control: works in distributed deployments, and weights or KV cache can be woken separately
🔧 Deep Dive: The Two Sleep Levels
Level 1: light sleep (weights preserved)
- Behavior: offloads the model weights to CPU RAM and discards the KV cache
- Use case: a short pause before continuing with the same model
- Memory requirement: enough CPU RAM to hold the full weights
- Resume speed: fast (no disk reads needed)
Level 2: deep sleep (fully released)
- Behavior: discards both the model weights and the KV cache entirely (only a small buffer is kept)
- Use case: swapping models, or RLHF weight updates
- Memory requirement: only a small amount of CPU RAM
- Resume speed: slower (weights must be reloaded)
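The level choice above boils down to two questions: will the same model be used again, and is there enough host RAM to back up the weights? A small sketch encoding that heuristic (the function and its inputs are my own, not a vLLM API):

```python
def choose_sleep_level(reuse_same_model: bool, free_cpu_gib: float,
                       weights_gib: float) -> int:
    """Pick a sleep level from the tradeoffs above.

    Level 1 keeps the weights in CPU RAM for a fast resume, so it
    needs enough free host memory; Level 2 discards everything.
    """
    if not reuse_same_model:
        # Swapping or updating the model: keeping the old weights is pointless.
        return 2
    if free_cpu_gib >= weights_gib:
        # Enough host RAM to back up the weights: fast resume later.
        return 1
    # Same model, but not enough CPU RAM for a backup: fall back to Level 2.
    return 2
```

For the Qwen3-32B run below (61 GiB of weights), Level 1 is only viable on hosts with that much free RAM to spare.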
🧪 Hands-On: Shrinking a 32B Model's GPU Footprint
Environment Setup
vllm 0.11.0
```shell
# Start the server (developer mode must be enabled)
VLLM_SERVER_DEV_MODE=1 vllm serve /models/Qwen3/Qwen3-32B/ \
  --enable-sleep-mode --port 8000
```

Baseline: Initial State
- Model: Qwen3-32B
- GPU: H20 (90+ GB of memory)
- Initial GPU memory usage: 90.9 GB / 97.8 GB
- Load time: 301 seconds (about 5 minutes)
```
(EngineCore_DP0 pid=271) INFO 11-28 03:04:25 [default_loader.py:267] Loading weights took 298.92 seconds
(EngineCore_DP0 pid=271) INFO 11-28 03:04:26 [gpu_model_runner.py:2653] Model loading took 61.0347 GiB and 301.136790 seconds
(EngineCore_DP0 pid=271) INFO 11-28 03:05:59 [gpu_worker.py:298] Available KV cache memory: 18.82 GiB
```

Check whether it is sleeping:
```shell
curl -X GET 'http://localhost:8000/is_sleeping'
{"is_sleeping":false}
```

Level 1 Sleep: Light Sleep in Practice
```shell
# Enter sleep mode
curl -X POST 'http://localhost:8000/sleep?level=1'
```

```
(EngineCore_DP0 pid=271) INFO 11-28 03:21:04 [block_pool.py:378] Successfully reset prefix cache
(EngineCore_DP0 pid=271) INFO 11-28 03:21:49 [cumem.py:228] CuMemAllocator: sleep freed 79.91 GiB memory in total, of which 61.04 GiB is backed up in CPU and the rest 18.88 GiB is discarded directly.
(EngineCore_DP0 pid=271) INFO 11-28 03:21:49 [gpu_worker.py:117] Sleep mode freed 85.56 GiB memory, 3.28 GiB memory is still in use.
(EngineCore_DP0 pid=271) INFO 11-28 03:21:49 [executor_base.py:189] It took 44.788408 seconds to fall asleep.
(APIServer pid=9) INFO: 127.0.0.1:29908 - "POST /sleep?level=1 HTTP/1.1" 200 OK
```

Results:
- GPU memory freed: 90.9 GB → 3.3 GB (87.6 GB released, 96.4%)
- CPU memory usage: 71.4 GB (weight backup)
- Time taken: 44.7 seconds
With the 32B model asleep, we can now start a second, 4B model on the same GPU, and it serves requests as well:
```shell
VLLM_SERVER_DEV_MODE=1 vllm serve /models/Qwen3/Qwen3-4B/ \
  --enable-sleep-mode --port 8001 --gpu-memory-utilization 0.2
```

Wake-Up Test:
First put the 4B model on port 8001 to sleep, then wake_up the 32B model on port 8000.
```shell
curl -X POST 'http://localhost:8000/wake_up'
```

- Recovery time: 2.4 seconds
- GPU memory restored: 3.3 GB → 87.2 GB
- Service state: fully back to normal, able to handle requests immediately
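The juggling above (sleep one model, then wake the other) is easy to script. A sketch that builds the ordered HTTP call plan; `swap_plan` is a hypothetical helper of mine, not part of vLLM, though the endpoints follow the curl calls in this post:

```python
def swap_plan(sleep_url: str, wake_url: str) -> list:
    """Return the ordered (method, url) calls to put one model to
    sleep and wake another, freeing GPU memory before the wake-up."""
    return [
        ("POST", f"{sleep_url}/sleep?level=1"),  # free the idle model's memory first
        ("GET", f"{sleep_url}/is_sleeping"),     # confirm it is actually asleep
        ("POST", f"{wake_url}/wake_up"),         # then restore the target model
    ]

# Executing the plan (left commented so the sketch stays offline):
# import requests
# for method, url in swap_plan("http://localhost:8001", "http://localhost:8000"):
#     requests.request(method, url).raise_for_status()
```

The ordering matters: as the pitfalls section below shows, waking a model before its sibling has released memory can crash the service.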
Level 2 Sleep: Deep Sleep and the RLHF Workflow
Full RLHF flow:
```shell
# 1. Deep sleep
curl -X POST 'http://localhost:8000/sleep?level=2'

# 2. Wake only the weights (to avoid OOM)
curl -X POST 'http://localhost:8000/wake_up?tags=weights'

# 3. Reload the weights (simulating an RLHF update)
curl -X POST 'http://localhost:8000/collective_rpc' \
  -H 'Content-Type: application/json' \
  -d '{"method":"reload_weights"}'

# 4. Wake the KV cache
curl -X POST 'http://localhost:8000/wake_up?tags=kv_cache'
```

Key numbers:
- Deep sleep time: only 0.14 seconds (over 300× faster than Level 1!)
- Weight wake-up time: 0.13 seconds
- Weight reload time: 13.89 seconds (17 safetensors shards)
- KV cache wake-up time: 0.01 seconds
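The four calls above map naturally onto one small helper. A sketch that builds the ordered call plan; `rlhf_update_steps` is a name I made up, but each step mirrors a curl command shown above:

```python
def rlhf_update_steps(base_url: str) -> list:
    """Ordered (method, url, json_body) calls for a level-2 RLHF
    weight update, mirroring the four curl commands above."""
    return [
        ("POST", f"{base_url}/sleep?level=2", None),         # 1. deep sleep
        ("POST", f"{base_url}/wake_up?tags=weights", None),  # 2. weights first, to avoid OOM
        ("POST", f"{base_url}/collective_rpc",
         {"method": "reload_weights"}),                      # 3. in-place reload
        ("POST", f"{base_url}/wake_up?tags=kv_cache", None), # 4. then the KV cache
    ]

# To execute against a live server:
# import requests
# for method, url, body in rlhf_update_steps("http://localhost:8000"):
#     requests.request(method, url, json=body).raise_for_status()
```

Keeping the steps as data makes it easy to log, dry-run, or retry the sequence from the middle if one call fails.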
Logs for sleep level=2; top showed the process using roughly 3 GB of host memory:
```
(EngineCore_DP0 pid=280) INFO 11-28 05:52:08 [block_pool.py:378] Successfully reset prefix cache
(EngineCore_DP0 pid=280) INFO 11-28 05:52:08 [cumem.py:228] CuMemAllocator: sleep freed 79.91 GiB memory in total, of which 0.00 GiB is backed up in CPU and the rest 79.91 GiB is discarded directly.
(EngineCore_DP0 pid=280) INFO 11-28 05:52:08 [gpu_worker.py:117] Sleep mode freed 79.91 GiB memory, 3.28 GiB memory is still in use.
(EngineCore_DP0 pid=280) INFO 11-28 05:52:08 [executor_base.py:189] It took 0.145828 seconds to fall asleep.
(APIServer pid=9) INFO: 127.0.0.1:22254 - "POST /sleep?level=2 HTTP/1.1" 200 OK
```

wake_up logs:
```shell
curl -X POST 'http://localhost:8000/wake_up?tags=weights'
```

```
(APIServer pid=1146) INFO 11-28 05:58:51 [api_server.py:1016] wake up the engine with tags: ['weights']
(EngineCore_DP0 pid=1281) INFO 11-28 05:58:51 [executor_base.py:205] It took 0.137584 seconds to wake up tags ['weights'].
(APIServer pid=1146) INFO: 127.0.0.1:34310 - "POST /wake_up?tags=weights HTTP/1.1" 200 OK
```

65718MiB / 97871MiB

```shell
curl -X POST 'http://localhost:8000/collective_rpc' \
  -H 'Content-Type: application/json' \
  -d '{"method":"reload_weights"}'
```

```
(EngineCore_DP0 pid=1281) INFO 11-28 06:01:40 [gpu_model_runner.py:2705] Reloading weights inplace...
Loading safetensors checkpoint shards:   0% Completed | 0/17 [00:00<?, ?it/s]
...
Loading safetensors checkpoint shards: 100% Completed | 17/17 [00:13<00:00, 1.24it/s]
(EngineCore_DP0 pid=1281) INFO 11-28 06:01:54 [default_loader.py:267] Loading weights took 13.89 seconds
(APIServer pid=1146) INFO: 127.0.0.1:48690 - "POST /collective_rpc HTTP/1.1" 200 OK
```

65718MiB / 97871MiB

```shell
curl -X POST 'http://localhost:8000/wake_up?tags=kv_cache'
```

```
(APIServer pid=1683) INFO 11-28 06:08:56 [api_server.py:1016] wake up the engine with tags: ['kv_cache']
(EngineCore_DP0 pid=1818) INFO 11-28 06:08:56 [executor_base.py:205] It took 0.013068 seconds to wake up tags ['kv_cache'].
(APIServer pid=1683) INFO: 127.0.0.1:23936 - "POST /wake_up?tags=kv_cache HTTP/1.1" 200 OK
```

85048MiB / 97871MiB

⚠️ Field Notes: Troubleshooting and Workarounds
Pitfall 1: Out-of-memory crash
Symptom: with the 32B model serving and the 4B model asleep, waking the 4B model crashed the 4B service
```
data: {"error": {"message": "EngineCore encountered an issue...", "code": 400}}
```

Root cause: insufficient GPU memory, and vLLM has no graceful degradation path here
Workarounds:
- Strict resource planning: cap each model's maximum GPU memory with --gpu-memory-utilization
- Sequence the operations: put other models to sleep first, then wake the target model
- Monitor first: check nvidia-smi for enough free memory before waking
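That last check can be scripted; a sketch that parses nvidia-smi's machine-readable output (the helper names are mine, and the 90000 MiB figure in the comment is an assumption based on this post's numbers):

```python
import subprocess

def parse_free_mib(smi_csv: str) -> list:
    """Parse the output of
    `nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits`:
    one integer (MiB) per line, one line per GPU."""
    return [int(line.strip()) for line in smi_csv.splitlines() if line.strip()]

def has_headroom(required_mib: int) -> bool:
    """True if at least one GPU currently has required_mib MiB free."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"]).decode()
    return any(free >= required_mib for free in parse_free_mib(out))

# Example: the woken Qwen3-32B used ~87 GB above, so something like
# has_headroom(90000) is a reasonable gate before POSTing /wake_up.
```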
Even after putting the 32B model to sleep, retrying wake_up on the 4B model still failed; the 4B service was stuck in a "zombie" state and could not recover, so the only option was to kill the process.
Pitfall 2: Requests to a sleeping model crash the service
If a request hits the model before it has been restored from the sleep state, the vLLM process crashes and exits outright. This is an area the vLLM community still needs to harden.
One more observation: with sleep level=1, a second sleep took only about 2 s (it skips the time spent offloading weights to CPU memory), versus 40+ s for the first sleep.
🎯 Practical Guide
1. The Golden Rule of Operation Ordering

At the decision point, choose a path based on what happens next:
- Continuing with the same model: Level 1 Sleep → wake_up → back in normal service
- Swapping or updating the model: Level 2 Sleep → wake_up weights → reload weights → wake_up kv_cache → back in normal service
2. Monitoring and Automation Script
Before sending requests, check whether the model is asleep and whether there is enough free GPU memory to wake it:
```python
#!/usr/bin/env python3
import subprocess
import time

import requests


def check_gpu_free_mib() -> int:
    """Free memory (MiB) on GPU 0, via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"])
    return int(out.decode().splitlines()[0])


def safe_wake_up(model_url: str, required_mib: int) -> bool:
    """Safely wake a model, with a free-memory check and a retry loop."""
    max_retries = 3
    for attempt in range(max_retries):
        try:
            # Check the current sleep state
            status = requests.get(f"{model_url}/is_sleeping").json()
            if not status["is_sleeping"]:
                return True
            # Wait until there is enough free GPU memory
            # (required_mib depends on the model size)
            if check_gpu_free_mib() < required_mib:
                time.sleep(5)
                continue
            # Wake the engine
            response = requests.post(f"{model_url}/wake_up")
            response.raise_for_status()
            return True
        except Exception as e:
            print(f"Attempt {attempt+1} failed: {e}")
            if attempt == max_retries - 1:
                raise
            time.sleep(2)
    return False
```

🔮 Looking Ahead: Where Sleep Mode Is Going
- Hot/cold tiering: automatically migrate inactive model weights to CPU or disk
- Predictive sleeping: predict idle windows from request patterns and sleep automatically, or add simple rules, e.g. auto level-1 sleep after 5 minutes with no requests
- Better error recovery: automatic rollback and state restoration for failed operations
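The rule-based idle sleep suggested above is easy to prototype. A sketch of the trigger logic; the watchdog is my own suggestion, not a vLLM feature:

```python
IDLE_THRESHOLD_S = 300  # 5 minutes without requests

def should_auto_sleep(last_request_ts: float, now: float,
                      is_sleeping: bool,
                      threshold_s: float = IDLE_THRESHOLD_S) -> bool:
    """Decide whether a watchdog should trigger a level-1 sleep:
    only when awake and idle for at least threshold_s seconds."""
    return (not is_sleeping) and (now - last_request_ts >= threshold_s)

# A watchdog loop would track the last request time (e.g. from access
# logs), poll /is_sleeping, and POST /sleep?level=1 when this returns True.
```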
💎 Closing Thoughts: Rethinking GPU Utilization
With vLLM's sleep mode, we took a 32B model's GPU memory footprint from 90 GB down to 3.3 GB, releasing 96% of it. On the same hardware, this means we can:
- Cut cloud costs: automatically release expensive GPUs during idle periods
- Raise utilization: let training and inference workloads share the same GPU pool seamlessly
- Speed up RLHF iteration: minute-scale model updates instead of hour-scale service restarts
This is more than a performance tweak; it is a shift in how we think about AI infrastructure: every GiB of GPU memory should earn its keep.
In the AI era, the most expensive resource is the compute you waste.
Source: https://blog.csdn.net/qq_21201267/article/details/155377092