Commit 01c6cba

Add tp-size and pp-size variations and update TensorRT-LLM version for GPT-J
Fixes #671

This PR fixes both issues reported in #671:

1. Missing tp-size/pp-size variations
2. nvidia-ammo installation failure in Docker

## Changes

### Fix 1: Add tp-size and pp-size variations

- Added `tp-size.#` and `pp-size.#` variation definitions
- Set default `tp-size.1` and `pp-size.1` for the `pytorch,nvidia` variation
- Added `MLC_NVIDIA_TP_SIZE` and `MLC_NVIDIA_PP_SIZE` to `new_env_keys`

This resolves the error: "no scripts were found with tags: get,ml-model,gptj,_nvidia,_fp8,_tp-size.2"

### Fix 2: Update TensorRT-LLM to v5.0

- Updated the TensorRT-LLM SHA from 0ab9d17 (Feb 2024) to 2ea17cd (v5.0)
- Added the required submodules list to match the llama2 implementation
- Removed the `_lfs` tag, as it is not needed with the newer version

This resolves the nvidia-ammo "RuntimeError: Bad params" installation failure that occurred with the older TensorRT-LLM version.

## Testing

- Validated YAML syntax
- Verified the changes match the llama2 script patterns
- Confirmed the TensorRT-LLM version is the same as llama2 (v5.0)
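For context, the `#` in a variation name like `tp-size.#` acts as a wildcard: a concrete tag such as `_tp-size.2` selects that variation and substitutes `2` into its env template. A minimal sketch of that substitution in Python (the function name and matching logic here are hypothetical illustrations, not the actual MLC resolver):

```python
def resolve_wildcard_variation(tag, pattern, env_template):
    """Match a concrete variation tag (e.g. 'tp-size.2') against a wildcard
    pattern (e.g. 'tp-size.#') and substitute the captured value for '#'
    in the env template. Returns None if the tag does not match."""
    prefix, _, wildcard = pattern.rpartition('.')
    if wildcard != '#' or not tag.startswith(prefix + '.'):
        return None
    value = tag[len(prefix) + 1:]
    return {key: tpl.replace('#', value) for key, tpl in env_template.items()}

# The tag from the previously failing invocation now resolves to an env setting:
print(resolve_wildcard_variation('tp-size.2', 'tp-size.#',
                                 {'MLC_NVIDIA_TP_SIZE': '#'}))
# → {'MLC_NVIDIA_TP_SIZE': '2'}
```

With the `tp-size.#` and `pp-size.#` groups defined (defaulting to `1` under `pytorch,nvidia`), tags like `_tp-size.2` no longer fall through to "no scripts were found".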
1 parent cd96a8d commit 01c6cba

File tree

1 file changed: +13 −1 lines changed

script/get-ml-model-gptj/meta.yaml

Lines changed: 13 additions & 1 deletion
```diff
@@ -16,6 +16,8 @@ input_mapping:
 new_env_keys:
   - MLC_ML_MODEL_*
   - GPTJ_CHECKPOINT_PATH
+  - MLC_NVIDIA_TP_SIZE
+  - MLC_NVIDIA_PP_SIZE
 prehook_deps:
   - enable_if_env:
       MLC_TMP_REQUIRE_DOWNLOAD:
@@ -152,11 +154,13 @@ variations:
   pytorch,nvidia:
     default_variations:
       precision: fp8
+      tp-size: tp-size.1
+      pp-size: pp-size.1
     deps:
       - env:
           MLC_GIT_CHECKOUT_PATH_ENV_NAME: MLC_TENSORRT_LLM_CHECKOUT_PATH
         extra_cache_tags: tensorrt-llm
-        tags: get,git,repo,_lfs,_repo.https://github.com/NVIDIA/TensorRT-LLM.git,_sha.0ab9d17a59c284d2de36889832fe9fc7c8697604
+        tags: get,git,repo,_repo.https://github.com/NVIDIA/TensorRT-LLM.git,_sha.2ea17cdad28bed0f30e80eea5b1380726a7c6493,_submodules.3rdparty/NVTX;3rdparty/cutlass;3rdparty/cxxopts;3rdparty/json;3rdparty/pybind11;3rdparty/ucxx;3rdparty/xgrammar
       - names:
           - cuda
         tags: get,cuda
@@ -253,6 +257,14 @@ variations:
       MLC_ML_MODEL_PRECISION: uint8
       MLC_ML_MODEL_WEIGHT_DATA_TYPES: uint8
     group: precision
+  tp-size.#:
+    env:
+      MLC_NVIDIA_TP_SIZE: '#'
+    group: tp-size
+  pp-size.#:
+    env:
+      MLC_NVIDIA_PP_SIZE: '#'
+    group: pp-size
   wget:
     add_deps_recursive:
       dae:
```
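The updated `tags` line packs several pieces of information into one comma-separated string: a `_repo.` URL, a `_sha.` pin, and a `_submodules.` entry whose value is a semicolon-separated list of submodule paths (semicolons are used because the paths must not contain the comma that separates tags). A small sketch of how such a tag string can be unpacked (the parsing helper is illustrative, not the actual MLC tag parser):

```python
# The tag string from the updated deps entry, as it appears in the diff.
TAGS = ("get,git,repo,"
        "_repo.https://github.com/NVIDIA/TensorRT-LLM.git,"
        "_sha.2ea17cdad28bed0f30e80eea5b1380726a7c6493,"
        "_submodules.3rdparty/NVTX;3rdparty/cutlass;3rdparty/cxxopts;"
        "3rdparty/json;3rdparty/pybind11;3rdparty/ucxx;3rdparty/xgrammar")

def parse_submodules(tags):
    """Extract the submodule paths from a comma-separated tag string:
    find the '_submodules.' entry and split its value on ';'."""
    for tag in tags.split(','):
        if tag.startswith('_submodules.'):
            return tag[len('_submodules.'):].split(';')
    return []

print(parse_submodules(TAGS))  # seven 3rdparty/... paths
```

Pinning an explicit submodule list (rather than initializing all submodules, or relying on the removed `_lfs` tag) keeps the checkout limited to what the v5.0 build actually needs, matching the llama2 script.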
