diff --git a/training/a4x/wan2-1-14b/nemo-pretraining-slurm/8node-BF16-GBS64/recipe/README.md b/training/a4x/wan2-1-14b/nemo-pretraining-slurm/8node-BF16-GBS64/recipe/README.md
index eafc51b..9e872b1 100644
--- a/training/a4x/wan2-1-14b/nemo-pretraining-slurm/8node-BF16-GBS64/recipe/README.md
+++ b/training/a4x/wan2-1-14b/nemo-pretraining-slurm/8node-BF16-GBS64/recipe/README.md
@@ -3,7 +3,7 @@
 This recipe outlines the steps for running a wan2.1-14b pretraining workload on
 [a4x Slurm](https://docs.cloud.google.com/ai-hypercomputer/docs/create/create-slurm-cluster)
 by using the
-[NVIDIA NeMo framework](https://github.com/NVIDIA/nemo).
+[NVIDIA NeMo framework](https://github.com/NVIDIA-NeMo/Megatron-Bridge).
 
 ## Orchestration and deployment tools
 
@@ -98,4 +98,4 @@ tail -f wan-14b-benchmark_{jobID}.out
 
 ```bash
 scancel -u $USER
-```
\ No newline at end of file
+```