Commit 85bc66e

Update Getting started so its up to date (#13)
Signed-off-by: Kelly Brown <kelbrown@redhat.com>
1 parent 6174543 commit 85bc66e

File tree

4 files changed: +214 -248 lines changed


docs/getting-started/initilize_ilab.md

Lines changed: 66 additions & 52 deletions
@@ -6,77 +6,91 @@ logo: images/ilab_dog.png
 
 # 🏗️ Initialize `ilab`
 
-1) Initialize `ilab` by running the following command:
+### 🏗️ Initialize `ilab`
 
-```shell
-ilab config init
-```
+1. Initialize `ilab` by running the following command:
 
-*Example output*
+```shell
+ilab config init
+```
 
-```shell
-Welcome to InstructLab CLI. This guide will help you set up your environment.
-Please provide the following values to initiate the environment [press Enter for defaults]:
-Path to taxonomy repo [taxonomy]: <ENTER>
-```
+2. When prompted, clone the `https://github.com/instructlab/taxonomy.git` repository into the current directory by typing **Enter**.
 
-2) When prompted by the interface, press **Enter** to add a new default `config.yaml` file.
+**Optional**: If you want to point to an existing local clone of the `taxonomy` repository, you can pass the path interactively or alternatively with the `--taxonomy-path` flag.
 
-3) When prompted, clone the `https://github.com/instructlab/taxonomy.git` repository into the current directory by typing **y**.
+`ilab` will use the default configuration file unless otherwise specified. You can override this behavior with the `--config` parameter for any `ilab` command.
 
-**Optional**: If you want to point to an existing local clone of the `taxonomy` repository, you can pass the path interactively or alternatively with the `--taxonomy-path` flag.
+3. When prompted, provide the path to your default model. Otherwise, the default of a quantized [Merlinite](https://huggingface.co/instructlab/merlinite-7b-lab-GGUF) model is used.
 
-*Example output after initializing `ilab`*
+*Example output of steps 1 - 3*
 
 ```shell
-(venv) $ ilab config init
-Welcome to InstructLab CLI. This guide will help you set up your environment.
+----------------------------------------------------
+Welcome to the InstructLab CLI
+This guide will help you to setup your environment
+----------------------------------------------------
+
 Please provide the following values to initiate the environment [press Enter for defaults]:
-Path to taxonomy repo [taxonomy]: <ENTER>
-`taxonomy` seems to not exists or is empty. Should I clone https://github.com/instructlab/taxonomy.git for you? [y/N]: y
-Cloning https://github.com/instructlab/taxonomy.git...
+Path to taxonomy repo [/Users/kellybrown/.local/share/instructlab/taxonomy]:
+Path to your model [/Users/kellybrown/.cache/instructlab/models/merlinite-7b-lab-Q4_K_M.gguf]:
 ```
 
-`ilab` will use the default configuration file unless otherwise specified. You can override this behavior with the `--config` parameter for any `ilab` command.
-
-4) When prompted, provide the path to your default model. Otherwise, the default of a quantized [Merlinite](https://huggingface.co/instructlab/merlinite-7b-lab-GGUF) model will be used - you can download this model with `ilab model download`. The following example output displays the paths of a Mac instance.
+You can download this model with the `ilab model download` command as well.
 
-```shell
-(venv) $ ilab config init
-Welcome to InstructLab CLI. This guide will help you set up your environment.
-Please provide the following values to initiate the environment [press Enter for defaults]:
-Path to taxonomy repo [taxonomy]: <ENTER>
-`taxonomy` seems to not exists or is empty. Should I clone https://github.com/instructlab/taxonomy.git for you? [y/N]: y
-Cloning https://github.com/instructlab/taxonomy.git...
-Path to your model [/Users/USERNAME/Library/Caches/instructlab/models/merlinite-7b-lab-Q4_K_M.gguf]: <ENTER>
-```
+4. The InstructLab CLI auto-detects your hardware and selects the exact system profile that matches your machine. System profiles populate the `config.yaml` file with the proper parameter values based on your detected GPU types and available vRAM.
 
-5) When prompted, please choose a train profile. Train profiles are GPU specific profiles that enable accelerated training behavior. **YOU ARE ON MacOS**, please choose `No Profile (CPU, Apple Metal, AMD ROCm)` by hitting Enter. There are various flags you can utilize with individual `ilab` commands that will allow you to utilize your GPU if applicable. The following example output uses the Linux paths.
+*Example output of profile auto-detection*
 
 ```shell
-Welcome to InstructLab CLI. This guide will help you to setup your environment.
-Please provide the following values to initiate the environment [press Enter for defaults]:
-Path to taxonomy repo [/home/user/.local/share/instructlab/taxonomy]:
-Path to your model [/home/user/.cache/instructlab/models/merlinite-7b-lab-Q4_K_M.gguf]:
-Generating `/home/user/.config/instructlab/config.yaml` and `/home/user/.local/share/instructlab/internal/train_configuration/profiles`...
-Please choose a train profile to use.
-Train profiles assist with the complexity of configuring specific GPU hardware with the InstructLab Training library.
-You can still take advantage of hardware acceleration for training even if your hardware is not listed.
-[0] No profile (CPU, Apple Metal, AMD ROCm)
-[1] Nvidia A100/H100 x2 (A100_H100_x2.yaml)
-[2] Nvidia A100/H100 x4 (A100_H100_x4.yaml)
-[3] Nvidia A100/H100 x8 (A100_H100_x8.yaml)
-[4] Nvidia L40 x4 (L40_x4.yaml)
-[5] Nvidia L40 x8 (L40_x8.yaml)
-[6] Nvidia L4 x8 (L4_x8.yaml)
-Enter the number of your choice [hit enter for no profile] [0]:
-No profile selected - any hardware acceleration for training must be configured manually.
-Initialization completed successfully, you're ready to start using `ilab`. Enjoy!
+Generating config file and profiles:
+/home/user/.config/instructlab/config.yaml
+/home/user/.local/share/instructlab/internal/train_configuration/profiles
+
+We have detected the AMD CPU profile as an exact match for your system.
+
+--------------------------------------------
+Initialization completed successfully!
+You're ready to start using `ilab`. Enjoy!
+--------------------------------------------
+```
+
+5. If there is not an exact match for your system, you can manually select a system profile when prompted. There are various flags you can utilize with individual `ilab` commands that allow you to use your GPU if applicable.
+
+*Example output of selecting a system profile*
+
+```shell
+Please choose a system profile to use.
+System profiles apply to all parts of the config file and set hardware specific defaults for each command.
+First, please select the hardware vendor your system falls into
+[1] APPLE
+[2] INTEL
+[3] AMD
+[4] NVIDIA
+Enter the number of your choice [0]: 1
+You selected: APPLE
+Next, please select the specific hardware configuration that most closely matches your system.
+[0] No system profile
+[1] APPLE M1 ULTRA
+[2] APPLE M1 MAX
+[3] APPLE M2 MAX
+[4] APPLE M2 ULTRA
+[5] APPLE M2 PRO
+[6] APPLE M2
+[7] APPLE M3 MAX
+[8] APPLE M3 PRO
+[9] APPLE M3
+Enter the number of your choice [hit enter for hardware defaults] [0]: 8
+You selected: /Users/kellybrown/.local/share/instructlab/internal/system_profiles/apple/m3/m3_pro.yaml
+
+--------------------------------------------
+Initialization completed successfully!
+You're ready to start using `ilab`. Enjoy!
+--------------------------------------------
 ```
 
-The GPU profiles are listed by GPU type and number. If you happen to have a GPU configuration with a similar amount of VRAM as any of the above profiles, feel free to try them out!
+The GPU profiles are listed by GPU type and number of GPUs present. If you happen to have a GPU configuration with a similar amount of vRAM as any of the above profiles, feel free to try them out!
 
-## `ilab` directory layout after initializing your system
+### `ilab` directory layout after initializing your system
 
 ### Mac directory
 
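The updated steps above name three invocations without showing them on a command line: the `--taxonomy-path` flag, the global `--config` parameter, and `ilab model download`. A minimal sketch of how they could be used follows; the flags and subcommands are the ones named in the documentation above, while the paths are placeholders rather than defaults.

```shell
# Point config init at an existing local taxonomy clone (placeholder path).
ilab config init --taxonomy-path ~/instructlab/taxonomy

# Run any ilab command against a non-default configuration file (placeholder path).
ilab --config ~/custom/config.yaml model serve

# Fetch the default quantized Merlinite model referenced in step 3.
ilab model download
```
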
docs/getting-started/linux_amd.md

Lines changed: 61 additions & 77 deletions
@@ -12,17 +12,17 @@ logo: images/ilab_dog.png
 These steps will pull down a premade `qna.yaml` so you can do a local build. Skip the `wget`, `mv`, and `ilab taxonomy diff` if you don't want to do this.
 
 ```bash
-python3.11 -m venv venv-instructlab-0.18-3.11
-source venv-instructlab-0.18-3.11/bin/activate
+python3.11 -m venv --upgrade-deps venv
+source venv/bin/activate
 pip cache remove llama_cpp_python
-pip install 'instructlab[rocm]' \
---extra-index-url https://download.pytorch.org/whl/rocm6.0 \
--C cmake.args="-DLLAMA_HIPBLAS=on" \
--C cmake.args="-DAMDGPU_TARGETS=all" \
--C cmake.args="-DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang" \
--C cmake.args="-DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++" \
--C cmake.args="-DCMAKE_PREFIX_PATH=/opt/rocm" \
--C cmake.args="-DLLAMA_NATIVE=off"
+CMAKE_ARGS="-DLLAMA_HIPBLAS=on \
+-DAMDGPU_TARGETS=all \
+-DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang \
+-DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ \
+-DCMAKE_PREFIX_PATH=/opt/rocm \
+-DLLAMA_NATIVE=off" \
+pip install 'instructlab[rocm]' \
+--extra-index-url https://download.pytorch.org/whl/rocm6.0
 which ilab
 ilab config init
 cd ~/.local/share/instructlab
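
The quickstart script above also mentions pulling down a premade `qna.yaml` with `wget`, moving it into the taxonomy, and checking it with `ilab taxonomy diff`, but those lines fall outside the hunks shown here. A hypothetical sketch of that step, assuming a placeholder download URL and a placeholder taxonomy subdirectory:

```bash
# Hypothetical illustration only: the real URL and taxonomy destination are not shown in this diff.
wget https://example.com/premade/qna.yaml
mv qna.yaml ~/.local/share/instructlab/taxonomy/knowledge/example_topic/qna.yaml
ilab taxonomy diff
```
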
@@ -35,85 +35,69 @@ ilab model train
 ilab model convert --model-dir checkpoints/instructlab-granite-7b-lab-mlx-q
 ilab model serve --model-path instructlab-granite-7b-lab-trained/instructlab-granite-7b-lab-Q4_K_M.gguf
 ```
+
 ## Installing `ilab`
 
-1) Create a new directory called `instructlab` to store the files the `ilab` CLI needs when running and `cd` into the directory by running the following command:
-
-```shell
-mkdir instructlab
-cd instructlab
-```
+The following steps in this document use [Python venv](https://docs.python.org/3/library/venv.html) for virtual environments. However, if you use another tool such as [pyenv](https://github.com/pyenv/pyenv) or [Conda Miniforge](https://github.com/conda-forge/miniforge) for managing Python environments on your machine, continue to use that tool instead. Otherwise, you may have issues with packages that are installed but not found in `venv`.
 
 !!! note
-The following steps in this document use [Python venv](https://docs.python.org/3/library/venv.html) for virtual environments. However, if you use another tool such as [pyenv](https://github.com/pyenv/pyenv) or [Conda Miniforge](https://github.com/conda-forge/miniforge) for managing Python environments on your machine continue to use that tool instead. Otherwise, you may have issues with packages that are installed but not found in `venv`.
+`pip install` may take some time, depending on your internet connection. In case installation fails with error ``unsupported instruction `vpdpbusd'``, append `-C cmake.args="-DLLAMA_NATIVE=off"` to the `pip install` command.
 
-2) There are a few ways you can locally install the `ilab` CLI. Select your preferred installation method from the following instructions. You can then install `ilab` and activate your `venv` environment.
+1) Install with AMD ROCm
+
+```bash
+python3 -m venv --upgrade-deps venv
+source venv/bin/activate
+pip cache remove llama_cpp_python
+pip install 'instructlab[rocm]' \
+--extra-index-url https://download.pytorch.org/whl/rocm6.0 \
+-C cmake.args="-DLLAMA_HIPBLAS=on" \
+-C cmake.args="-DAMDGPU_TARGETS=all" \
+-C cmake.args="-DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang" \
+-C cmake.args="-DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++" \
+-C cmake.args="-DCMAKE_PREFIX_PATH=/opt/rocm" \
+-C cmake.args="-DLLAMA_NATIVE=off"
+```
 
-!!! note
-`pip install` may take some time, depending on your internet connection. In case installation fails with error ``unsupported instruction `vpdpbusd'``, append `-C cmake.args="-DLLAMA_NATIVE=off"` to `pip install` command.
+On Fedora 40+, use `-DCMAKE_C_COMPILER=clang-17` and `-DCMAKE_CXX_COMPILER=clang++-17`.
 
-3) Install with AMD ROCm
+2) From your `venv` environment, verify `ilab` is installed correctly, by running the `ilab` command.
 
-```bash
-python3 -m venv --upgrade-deps venv
-source venv/bin/activate
-pip cache remove llama_cpp_python
-pip install 'instructlab[rocm]' \
---extra-index-url https://download.pytorch.org/whl/rocm6.0 \
--C cmake.args="-DLLAMA_HIPBLAS=on" \
--C cmake.args="-DAMDGPU_TARGETS=all" \
--C cmake.args="-DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang" \
--C cmake.args="-DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++" \
--C cmake.args="-DCMAKE_PREFIX_PATH=/opt/rocm" \
--C cmake.args="-DLLAMA_NATIVE=off"
-```
+```shell
+ilab
+```
 
-On Fedora 40+, use `-DCMAKE_C_COMPILER=clang-17` and `-DCMAKE_CXX_COMPILER=clang++-17.`
+*Example output of the `ilab` command*
 
-4) From your `venv` environment, verify `ilab` is installed correctly, by running the `ilab` command.
+```shell
+(venv) $ ilab
+Usage: ilab [OPTIONS] COMMAND [ARGS]...
 
-```shell
-ilab
-```
+CLI for interacting with InstructLab.
 
-*Example output of the `ilab` command*
-
-```shell
-(venv) $ ilab
-Usage: ilab [OPTIONS] COMMAND [ARGS]...
-
-CLI for interacting with InstructLab.
-
-If this is your first time running InstructLab, it's best to start with `ilab config init` to create the environment.
-
-Options:
---config PATH Path to a configuration file. [default:
-/home/user/.config/instructlab/config.yaml]
--v, --verbose Enable debug logging (repeat for even more verbosity)
---version Show the version and exit.
---help Show this message and exit.
-
-Commands:
-config Command Group for Interacting with the Config of InstructLab.
-data Command Group for Interacting with the Data generated by...
-model Command Group for Interacting with the Models in InstructLab.
-system Command group for all system-related command calls
-taxonomy Command Group for Interacting with the Taxonomy of InstructLab.
-
-Aliases:
-chat model chat
-convert model convert
-diff taxonomy diff
-download model download
-evaluate model evaluate
-generate data generate
-init config init
-list model model_list
-serve model serve
-sysinfo system info
-test model test
-train model train
-```
+If this is your first time running ilab, it's best to start with `ilab
+config init` to create the environment.
+
+Options:
+--config PATH Path to a configuration file. [default:
+/Users/kellybrown/.config/instructlab/config.yaml]
+-v, --verbose Enable debug logging (repeat for even more verbosity)
+--version Show the version and exit.
+--help Show this message and exit.
+
+Commands:
+config Command Group for Interacting with the Config of InstructLab.
+data Command Group for Interacting with the Data generated by...
+model Command Group for Interacting with the Models in InstructLab.
+system Command group for all system-related command calls
+taxonomy Command Group for Interacting with the Taxonomy of InstructLab.
+
+Aliases:
+chat model chat
+generate data generate
+serve model serve
+train model train
+```
 
 !!! important
 Every `ilab` command needs to be run from within your Python virtual environment. You can enter the Python environment by running the `source venv/bin/activate` command.
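
Two details in the hunk above are easy to miss when read as a diff: the Fedora 40+ compiler override and the requirement that every `ilab` command runs inside the virtual environment. A minimal sketch that combines them, using only the install command and the `clang-17`/`clang++-17` values given in the note (it assumes Fedora 40+ with ROCm installed under `/opt/rocm`):

```bash
# Activate the virtual environment first; every ilab command must run inside it.
source venv/bin/activate

# Same ROCm install as in the diff above, with the Fedora 40+ compiler names substituted.
pip cache remove llama_cpp_python
pip install 'instructlab[rocm]' \
    --extra-index-url https://download.pytorch.org/whl/rocm6.0 \
    -C cmake.args="-DLLAMA_HIPBLAS=on" \
    -C cmake.args="-DAMDGPU_TARGETS=all" \
    -C cmake.args="-DCMAKE_C_COMPILER=clang-17" \
    -C cmake.args="-DCMAKE_CXX_COMPILER=clang++-17" \
    -C cmake.args="-DCMAKE_PREFIX_PATH=/opt/rocm" \
    -C cmake.args="-DLLAMA_NATIVE=off"

# Verify the CLI is reachable from the venv.
which ilab
ilab
```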
