This post presents an initial test of deploying the Tongyi Wanxiang (Wan) 2.1 video generation model on AWS.
Introduction to the Tongyi Wanxiang AI model
Tongyi Wanxiang comes from the Alibaba Cloud team responsible for large-scale generative models, which recently released Tongyi Wanxiang 2.1 (hereafter Wan 2.1), described as "a comprehensive and open suite of video foundation models that pushes the boundaries of video generation." Alibaba released four Wan 2.1 variants (T2V-1.3B, T2V-14B, I2V-14B-720P and I2V-14B-480P) that can generate images and videos from text or image input. This post focuses on the small, 1.3-billion-parameter text-to-video model, Wan2.1-T2V-1.3B. For details, see the official site and the Wan project's GitHub repository.
Getting started with deployment and testing
If you would like to reproduce our tests, and in particular obtain the Dockerfile we used on ECS, please fork the repository referenced below.
Our goal was to evaluate the Wan model in a fully isolated and portable environment, including on Amazon ECS. The Wan 2.1 project does not yet provide a Dockerfile for building and running it, so we developed one (described below) that can run video generation on a GPU-equipped laptop or in a cloud-based container service.
For this test we chose Amazon Elastic Container Service (ECS), because it gives easy access to a variety of GPUs through specific EC2 instance types.
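As background on how ECS exposes those GPUs to a container, the sketch below registers a task definition that reserves GPUs via the AWS CLI; the family name, image URI, memory size and GPU count are illustrative placeholders rather than our exact configuration (the execution log later in this post shows that all four GPUs of the instance were visible to the container).

# Hypothetical task definition: reserves the instance GPUs for the container and maps Gradio's port 7860
aws ecs register-task-definition \
  --family wan21-t2v \
  --requires-compatibilities EC2 \
  --container-definitions '[{
    "name": "wan21",
    "image": "<account-id>.dkr.ecr.us-west-2.amazonaws.com/wan21-gradio:latest",
    "memory": 61440,
    "portMappings": [{"containerPort": 7860, "hostPort": 7860}],
    "resourceRequirements": [{"type": "GPU", "value": "4"}]
  }]'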
To compare model outputs, we used a prompt from NVIDIA's Cosmos project, released in January 2025, to observe how different models interpret the same text.
The prompt is as follows (see the Cosmos GitHub repository for the full text):
"A sleek humanoid robot stands in a vast warehouse where cardboard boxes are neatly stacked on industrial shelves. The robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. Its chest emits a soft blue glow, adding a touch of high technology. The neatly arranged boxes in the background suggest a highly organized storage system, and the floor is lined with wooden pallets, reinforcing the industrial atmosphere. The camera remains static, capturing the robot's stance in this orderly environment, with a shallow depth of field that keeps the robot in focus while subtly blurring the background for a cinematic effect."
The Wan code project includes a Hugging Face Gradio-based web UI, which makes it convenient to enter a prompt and view the generated video.
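When the container runs on a private ECS EC2 instance, one convenient way to reach the Gradio UI without exposing port 7860 publicly is SSM Session Manager port forwarding; this is an assumption on our side rather than something the Wan project prescribes, the instance ID is a placeholder, and the instance needs the SSM agent plus a suitable IAM role:

# Forward local port 7860 to the Gradio port on the instance, then browse to http://localhost:7860
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["7860"],"localPortNumber":["7860"]}'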
For this test, video generation used 50 diffusion steps and took roughly 44.5 minutes to produce a 5-second video (the exact timing appears in the last line of the AWS ECS cluster execution log at the end of this post). The Wan2.1-T2V-1.3B model was deployed on an EC2 instance in the ECS cluster, of type g6.12xlarge, which provides four NVIDIA L4 Tensor Core GPUs with 24 GB of memory each.
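For reference, this total is consistent with the per-step timing reported in the execution log below: 50 steps at roughly 53.5 s per step is about 2,674 s, i.e. 44 minutes 34 seconds.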
The Dockerfile used for deployment
This section walks through the Dockerfile (the complete file is available on GitHub) so that you can customize and reuse it in your own environment:
- The Dockerfile is based on the image pytorch/pytorch:2.6.0-cuda12.6-cudnn9-devel (taken directly from Docker Hub; it exceeds 13 GB once stored in a Docker registry), which provides PyTorch, NVIDIA CUDA and all related dependencies on top of Python 3.11 and Ubuntu 22.04 (Jammy Jellyfish).
- The Linux environment variable LD_LIBRARY_PATH is extended so that PyTorch can dynamically load the Mesa libraries.
- The Linux environment variable PYTORCH_CUDA_ALLOC_CONF="expandable_segments:True" is set to optimize GPU memory usage and avoid torch.OutOfMemoryError: CUDA out of memory errors.
- The model looks for its weights in the container directory /home/model/Wan-AI/<model name>. The model used in this test, Wan2.1-T2V-1.3B, takes about 14 GB once loaded onto the GPU (visible in the nvidia-smi output), and that footprint constrains which GPUs can be used for video generation: the GPU memory must hold both the weights and the associated compute binaries.
- To keep the image generic and small, the Docker image itself does not include the Wan model. The model files are accessed through a volume mounted when running docker run (see the run sketch after this list). On ECS, we run a command at instance startup that copies the model from S3 to the EC2 instance providing the compute, so it does not have to be pulled from Hugging Face on every start.
- Several environment variables ($MODEL, $MODEL_DIR, $LAUNCHER) provide extra flexibility, so that a single image can be reconfigured dynamically.
- The exposed port is 7860, Gradio's standard port.
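To make the volume-mount and environment-variable points above concrete, here is a minimal, hypothetical local run; the image tag, host paths and S3 bucket name are placeholders, and the huggingface-cli step is just one way to stage the weights:

# Stage the weights once on the host; the layout must match /home/model/Wan-AI/<model name> in the container
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir ./models/Wan-AI/Wan2.1-T2V-1.3B
# On ECS we copy the weights from S3 in the instance user data instead, e.g.:
#   aws s3 sync s3://<your-bucket>/Wan-AI/Wan2.1-T2V-1.3B /opt/models/Wan-AI/Wan2.1-T2V-1.3B
# Run the image with the model directory bind-mounted and Gradio's port published;
# MODEL, LAUNCHER and CKPT_DIR could be overridden here with additional -e options.
docker run --rm --gpus all \
  -p 7860:7860 \
  -v "$(pwd)/models:/home/model" \
  wan21-gradio:latest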
The complete Dockerfile is available on GitHub.
FROM pytorch/pytorch:2.6.0-cuda12.6-cudnn9-devel

# install tools + libglib2.0-0 & libx11-6 because Gradio needs them on Linux and they are missing in base Nvidia image
# hadolint ignore=DL3008
RUN apt-get update && apt-get upgrade -y \
    && apt-get install -y --no-install-recommends curl wget git libglib2.0-0 libx11-6 libxrender1 libxtst6 libxi6 \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# check & upgrade pip
RUN python --version && python -m ensurepip --upgrade && python -m pip install --upgrade pip

# clone project
WORKDIR "/home"
RUN git clone "https://github.com/Wan-Video/Wan2.1.git"
ARG WAN_DIR="/home/Wan2.1"
WORKDIR ${WAN_DIR}

# install project requirements + xfuser for multi-GPU support
RUN python -m pip install --upgrade -r requirements.txt \
    && python -m pip install --upgrade "xfuser==0.4.1"

ARG MODEL_DIR="/home/model"
# model dir must be created at image build time to allow volume bind on container start
WORKDIR ${MODEL_DIR}
# back to Wan dir as initial working dir for execution
WORKDIR ${WAN_DIR}

# communication parameters
ENV HOST="0.0.0.0"
ENV PORT=7860

# MODEL, LAUNCHER & CKPT_DIR can be overridden in docker run with -e / --env option
ENV WAN_DIR=${WAN_DIR}
ENV MODEL="Wan-AI/Wan2.1-T2V-1.3B"
ENV MODEL_DIR=${MODEL_DIR}
ENV LAUNCHER="t2v_14B_singleGPU.py"
ENV CKPT_DIR=${MODEL_DIR}/${MODEL}

EXPOSE ${PORT}

# Wan model needs access to some Mesa libraries, which are not initially included in LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/nsight-compute/2024.3.2/host/linux-desktop-glibc_2_11_3-x64/Mesa

# to try to avoid CUDA out-of-memory errors
ENV PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

CMD ["bash", "-c", "printenv && cd gradio && python ${LAUNCHER} --ckpt_dir ${CKPT_DIR} || sleep infinity"]
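For completeness, a hypothetical build-and-push sequence for this Dockerfile is sketched below; the repository name, account ID and region are placeholders rather than the values used in our setup:

# Build the image locally, then push it to a private Amazon ECR repository
docker build -t wan21-gradio:latest .
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-west-2.amazonaws.com
docker tag wan21-gradio:latest <account-id>.dkr.ecr.us-west-2.amazonaws.com/wan21-gradio:latest
docker push <account-id>.dkr.ecr.us-west-2.amazonaws.com/wan21-gradio:latest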
Deploying and running the Wan model on an Amazon ECS cluster
The log output below comes from the Wan2.1-T2V-1.3B model deployed and started as a task in an ECS service. After the prompt from the earlier section is submitted, the model runs 50 diffusion steps, each taking about 53.5 seconds, for a total compute time of 44 minutes and 34 seconds (see the last line of the log).
==========
== CUDA ==
==========
CUDA Version 12.6.3
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
NV_LIBCUBLAS_VERSION=12.6.4.1-1
NVIDIA_VISIBLE_DEVICES=GPU-3e87a3ac-7976-fd2e-a07d-46800f815303,GPU-5360b4b6-3676-6faf-c8ce-e0e112d637e2,GPU-9c86b415-618d-d34d-3e8a-67bafa3d517c,GPU-95cb2c24-7310-9081-cfdb-c8f9b24dc896
NV_NVML_DEV_VERSION=12.6.77-1
AWS_EXECUTION_ENV=AWS_ECS_EC2
NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.23.4-1+cuda12.6
NV_LIBNCCL_DEV_PACKAGE_VERSION=2.23.4-1
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/fbf4969c-9cbe-470b-9833-0ea1e699a871
HOSTNAME=ip-10-0-1-219.us-west-2.compute.internal
NVIDIA_REQUIRE_CUDA=cuda>=12.6 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551
NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-6=12.6.4.1-1
NV_NVTX_VERSION=12.6.77-1
NV_CUDA_CUDART_DEV_VERSION=12.6.77-1
NV_LIBCUSPARSE_VERSION=12.5.4.2-1
NV_LIBNPP_VERSION=12.3.1.54-1
NCCL_VERSION=2.23.4-1
PWD=/home/Wan2.1
ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/bee302b2-a59b-4ace-b269-058efbfaf7ba
PORT=7860
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-6=12.6.80-1
NV_LIBNPP_PACKAGE=libnpp-12-6=12.3.1.54-1
NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev
NV_LIBCUBLAS_DEV_VERSION=12.6.4.1-1
WAN_DIR=/home/Wan2.1
NVIDIA_PRODUCT_NAME=CUDA
NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-6
NV_CUDA_CUDART_VERSION=12.6.77-1
HOME=/root
MODEL=Wan-AI/Wan2.1-T2V-1.3B
CUDA_VERSION=12.6.3
NV_LIBCUBLAS_PACKAGE=libcublas-12-6=12.6.4.1-1
PYTORCH_VERSION=2.6.0
NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-6=12.6.3-1
CKPT_DIR=/home/model/Wan-AI/Wan2.1-T2V-1.3B
NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-6=12.3.1.54-1
NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-6
NV_LIBNPP_DEV_VERSION=12.3.1.54-1
ECS_AGENT_URI=http://169.254.170.2/api/bee302b2-a59b-4ace-b269-058efbfaf7ba
NV_LIBCUSPARSE_DEV_VERSION=12.5.4.2-1
HOST=0.0.0.0
LIBRARY_PATH=/usr/local/cuda/lib64/stubs
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/bee302b2-a59b-4ace-b269-058efbfaf7ba
SHLVL=1
NV_CUDA_LIB_VERSION=12.6.3-1
NVARCH=x86_64
LAUNCHER=t2v_14B_singleGPU.py
NV_LIBNCCL_PACKAGE=libnccl2=2.23.4-1+cuda12.6
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/opt/nvidia/nsight-compute/2024.3.2/host/linux-desktop-glibc_2_11_3-x64/Mesa
NV_CUDA_NSIGHT_COMPUTE_VERSION=12.6.3-1
NUM_GPUS=1
NV_NVPROF_VERSION=12.6.80-1
PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NV_LIBNCCL_PACKAGE_NAME=libnccl2
NV_LIBNCCL_PACKAGE_VERSION=2.23.4-1
MODEL_DIR=/home/model
_=/usr/bin/printenv
Downloading shards: 100%|██████████| 8/8 [07:25<00:00, 55.72s/it]
Loading checkpoint shards: 100%|██████████| 8/8 [00:01<00:00, 7.11it/s]
Step1: Init prompt_expander...done
Step2: Init 14B t2v model...done
* Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
100%|██████████| 50/50 [44:34<00:00, 53.48s/it]
As expected, since Wan2.1-T2V-1.3B is a single-GPU model, it uses only one of the four GPUs of the g6.12xlarge instance, as the nvidia-smi output below confirms.
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.144.03 Driver Version: 550.144.03 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA L4 On | 00000000:38:00.0 Off | 0 |
| N/A 70C P0 71W / 72W | 14645MiB / 23034MiB | 100% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA L4 On | 00000000:3A:00.0 Off | 0 |
| N/A 26C P8 16W / 72W | 4MiB / 23034MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA L4 On | 00000000:3C:00.0 Off | 0 |
| N/A 29C P8 16W / 72W | 4MiB / 23034MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA L4 On | 00000000:3E:00.0 Off | 0 |
| N/A 28C P8 16W / 72W | 4MiB / 23034MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 7653 C python 14636MiB |
+-----------------------------------------------------------------------------------------+
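To reproduce this check while a generation is running, one simple option (container name and polling interval are illustrative) is to poll nvidia-smi from the EC2 host or from inside the container:

# From the EC2 host: the NVIDIA driver is shared with the container, so its GPU usage is visible directly
nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv -l 10
# Or from inside the running container:
docker exec -it <container-name> nvidia-smi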
Summary
This post walked through deploying and testing Alibaba's Wan 2.1 model on Amazon ECS, covering the model itself, the Docker environment, GPU utilization and execution time. Reusing a prompt from NVIDIA's Cosmos project makes it easier to compare how different models render the same text as video. For developers who want to run Wan 2.1 in the cloud, it provides a complete workflow and a reproducible Docker-based configuration.