sanghyuk-vessl committed on
Commit 76d9c4f · verified · 1 Parent(s): 4966342

Add vessl-docs

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. examples/models.md +43 -0
  2. examples/models/llama2.md +59 -0
  3. examples/models/mistral.md +50 -0
  4. examples/models/ssd.md +52 -0
  5. examples/models/stable-diffusion.md +54 -0
  6. examples/models/whisper.md +46 -0
  7. examples/uses.md +25 -0
  8. examples/uses/artifact.md +67 -0
  9. examples/uses/job.md +79 -0
  10. examples/uses/notebook.md +81 -0
  11. guides/clusters/access.md +39 -0
  12. guides/clusters/aws.md +12 -0
  13. guides/clusters/laptops.md +101 -0
  14. guides/clusters/managed.md +38 -0
  15. guides/clusters/monitoring.md +105 -0
  16. guides/clusters/onprem.md +151 -0
  17. guides/clusters/overview.md +24 -0
  18. guides/clusters/quotas.md +25 -0
  19. guides/clusters/remove.md +28 -0
  20. guides/clusters/specs.md +58 -0
  21. guides/datasets/create.md +60 -0
  22. guides/datasets/manage.md +25 -0
  23. guides/datasets/overview.md +15 -0
  24. guides/datasets/tips.md +23 -0
  25. guides/experiments/create.md +125 -0
  26. guides/experiments/distributed.md +78 -0
  27. guides/experiments/local.md +17 -0
  28. guides/experiments/manage.md +62 -0
  29. guides/experiments/monitor.md +53 -0
  30. guides/experiments/overview.md +13 -0
  31. guides/get-started/gpu-notebook.md +160 -0
  32. guides/get-started/llama2.md +191 -0
  33. guides/get-started/overview.md +109 -0
  34. guides/get-started/quickstart.md +192 -0
  35. guides/get-started/stable-diffusion.md +180 -0
  36. guides/models/create.md +46 -0
  37. guides/models/deploy.md +114 -0
  38. guides/models/manage.md +24 -0
  39. guides/models/overview.md +14 -0
  40. guides/organization/billing.md +16 -0
  41. guides/organization/create.md +18 -0
  42. guides/organization/integrations.md +40 -0
  43. guides/organization/members.md +12 -0
  44. guides/organization/notification.md +21 -0
  45. guides/organization/overview.md +18 -0
  46. guides/project/create.md +30 -0
  47. guides/project/overview.md +14 -0
  48. guides/project/repo-dataset.md +45 -0
  49. guides/project/summary.md +86 -0
  50. guides/resources/changelog.md +38 -0
examples/models.md ADDED
@@ -0,0 +1,43 @@
1
+ ---
2
+ title: Models
3
+ description: See VESSL Run in action with the latest open-source models
4
+ version: EN
5
+ ---
6
+
7
+ <CardGroup cols={2}>
8
+ <Card title="Llama2-7B Fine-tuning" href="/examples/models/llama2">
9
+ Fine-tune Llama2-7B with a code instructions dataset.
10
+ <br/>
11
+ <img className="rounded-md" src="/images/examples/hub-llama2.png" />
12
+ </Card>
13
+
14
+ <Card title="Mistral-7B Playground" href="/examples/models/mistral">
15
+ Launch a GPU-accelerated Streamlit app of Mistral 7B.
16
+ <br/>
17
+ <img className="rounded-md" src="/images/examples/hub-mistral.png" />
18
+ </Card>
19
+
20
+ <Card title="SSD-1B Playground" href="/examples/models/ssd">
21
+ Interactive playground of a lighter and faster version of Stable Diffusion XL.
22
+ <br/>
23
+ <img className="rounded-md" src="/images/examples/hub-ssd.png" />
24
+ </Card>
25
+
26
+ <Card title="Whisper V3 Playground" href="/examples/models/whisper">
27
+ Translate audio snippets into text on a Streamlit playground.
28
+ <br/>
29
+ <img className="rounded-md" src="/images/examples/hub-whisper.png" />
30
+ </Card>
31
+
32
+ <Card title="Stable Diffusion Playground" href="/examples/models/stable-diffusion">
33
+ Generate images with a prompt on the web, powered by Stable Diffusion.
34
+ <br/>
35
+ <img className="rounded-md" src="/images/examples/hub-sd.png" />
36
+ </Card>
37
+
38
+ <Card title="Launch your next model" href="/examples/usecases/batch-run">
39
+ Start from our template YAML to build and deploy your next model.
40
+ <br/>
41
+ <img className="rounded-md" src="/images/examples/hub-vessl.png" />
42
+ </Card>
43
+ </CardGroup>
examples/models/llama2.md ADDED
@@ -0,0 +1,59 @@
1
+ ---
2
+ title: Llama2-7B Fine-tuning
3
+ description: Fine-tune Llama2-7B with a code instructions dataset
4
+ version: EN
5
+ ---
6
+
7
+ ## Try out this model on [VESSL Hub](https://vessl.ai/hub).
8
+
9
+ This example fine-tunes [Llama 2](https://ai.meta.com/llama/) on a code instruction dataset. The dataset consists of 1.6K samples and follows the format of Stanford's [Alpaca dataset](https://github.com/gururise/AlpacaDataCleaned).
10
+ To fit the training process onto a single GPU with moderate memory, the model uses [8-bit quantization](https://huggingface.co/blog/hf-bitsandbytes-integration) and LoRA (Low-Rank Adaptation).
11
+
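+ The sketch below illustrates that setup with Hugging Face `transformers` and `peft`. It is a minimal, hedged example rather than the exact training script in the repository; the checkpoint path and LoRA hyperparameters are assumptions.
+
+ ```python
+ # Minimal sketch: load Llama-2-7B in 8-bit and wrap it with LoRA adapters.
+ # The path and hyperparameters below are illustrative, not taken from the repository.
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+ from peft import LoraConfig, get_peft_model
+
+ base_model = "/model_/llama_2_7b_hf"  # hypothetical local checkpoint path
+
+ tokenizer = AutoTokenizer.from_pretrained(base_model)
+ model = AutoModelForCausalLM.from_pretrained(
+     base_model,
+     quantization_config=BitsAndBytesConfig(load_in_8bit=True),
+     device_map="auto",
+ )
+
+ lora_config = LoraConfig(
+     r=8, lora_alpha=16, lora_dropout=0.05,
+     target_modules=["q_proj", "v_proj"],
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(model, lora_config)  # only the LoRA adapter weights are trained
+ model.print_trainable_parameters()
+ ```
+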
12
+ In the code we reference under `/code/`, we added our Python SDK for logging key metrics like loss and learning rate. You can check these values in real time under Plots. The run completes by uploading the model checkpoint to the VESSL AI model registry, as defined under `export`.
13
+
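+ As a rough sketch of that logging call (assuming the `vessl` Python SDK's `vessl.log`; the training-loop helpers and metric names here are illustrative):
+
+ ```python
+ # Illustrative only: report metrics to VESSL so they appear under Plots.
+ import vessl
+
+ for step, batch in enumerate(train_dataloader):   # assumes an existing dataloader
+     loss = train_step(batch)                       # assumes an existing training step
+     vessl.log(step=step, payload={"loss": float(loss)})
+ ```
+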
14
+ <img
15
+ className="rounded-md"
16
+ src="/images/llama2-metrics.png"
17
+ />
18
+ <img
19
+ className="rounded-md"
20
+ src="/images/llama2-uploaded-model.png"
21
+ />
22
+
23
+ ## Running the model
24
+
25
+ You can run the model with our quick command.
26
+ ```sh
27
+ vessl run create -f llama2_fine-tuning.yaml
28
+ ```
29
+
30
+ Here's a rundown of the `llama2_fine-tuning.yaml` file.
31
+ ```yaml
32
+ name: llama2-finetuning
33
+ description: finetune llama2 with code instruction alpaca dataset
34
+ resources:
35
+ cluster: vessl-gcp-oregon
36
+ preset: v1.l4-1.mem-27
37
+ image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
38
+ import:
39
+ /model/: vessl-model://vessl-ai/llama2/1
40
+ /code/:
41
+ git:
42
+ url: https://github.com/vessl-ai/hub-model
43
+ ref: main
44
+ /dataset/: vessl-dataset://vessl-ai/code_instructions_small_alpaca
45
+ export:
46
+ /trained_model/: vessl-model://vessl-ai/llama2-finetuned
47
+ /artifacts/: vessl-artifact://
48
+ run:
49
+ - command: |-
50
+ pip install -r requirements.txt
51
+ mkdir /model_
52
+ cd /model
53
+ mv llama_2_7b_hf.zip /model_
54
+ cd /model_
55
+ unzip llama_2_7b_hf.zip
56
+ cd /code/llama2-finetuning
57
+ python finetuning.py
58
+ workdir: /code/llama2-finetuning
59
+ ```
examples/models/mistral.md ADDED
@@ -0,0 +1,50 @@
1
+ ---
2
+ title: Mistral-7B Playground
3
+ description: Launch a text-generation Streamlit app using Mistral-7B
4
+ version: EN
5
+ ---
6
+
7
+ ## Try out this model on [VESSL Hub](https://vessl.ai/hub).
8
+
9
+ This example runs an inference app for Mistral-7B, an open-source LLM developed by [Mistral AI](https://mistral.ai/). The model uses grouped-query attention (GQA) and sliding-window attention (SWA), which enable faster inference and handling of longer sequences at lower cost than other models. As a result, it achieves both efficiency and high performance. Mistral-7B outperforms Llama 2 13B on all benchmarks and Llama 1 34B on reasoning, mathematics, and code generation benchmarks.
10
+
11
+ <img
12
+ className="rounded-md"
13
+ src="/images/mistral-streamlit.png"
14
+ />
15
+
16
+ ## Running the model
17
+
18
+ You can run the model with our quick command.
19
+ ```sh
20
+ vessl run create -f mistral_7b.yaml
21
+ ```
22
+
23
+ Here's a rundown of the `mistral_7b.yaml` file.
24
+ ```yaml
25
+ name: mistral-7b-streamlit
26
+ description: A template Run for inference of Mistral-7B with streamlit app
27
+ resources:
28
+ cluster: vessl-gcp-oregon
29
+ preset: v1.l4-1.mem-42
30
+ image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
31
+ import:
32
+ /model/: hf://huggingface.co/VESSL/Mistral-7B
33
+ /code/:
34
+ git:
35
+ url: https://github.com/vessl-ai/hub-model
36
+ ref: main
37
+ run:
38
+ - command: |-
39
+ pip install -r requirements_streamlit.txt
40
+ streamlit run streamlit_demo.py --server.port 80
41
+ workdir: /code/mistral-7B
42
+ interactive:
43
+ max_runtime: 24h
44
+ jupyter:
45
+ idle_timeout: 120m
46
+ ports:
47
+ - name: streamlit
48
+ type: http
49
+ port: 80
50
+ ```
examples/models/ssd.md ADDED
@@ -0,0 +1,52 @@
1
+ ---
2
+ title: SSD-1B Playground
3
+ description: Interactive playground of a lighter and faster version of Stable Diffusion XL
4
+ version: EN
5
+ ---
6
+
7
+ ## Try out this model on [VESSL Hub](https://vessl.ai/hub).
8
+
9
+ This example runs an inference app for SSD-1B. After launching, you can access a Streamlit web app to generate images with your own prompts. [The Segmind Stable Diffusion Model (SSD-1B)](https://www.segmind.com/) is a distilled, 50% smaller version of Stable Diffusion XL (SDXL), offering a 60% speedup while maintaining high-quality text-to-image generation capabilities.
10
+
11
+ <img
12
+ className="rounded-md"
13
+ src="/images/ssd-streamlit.png"
14
+ />
15
+
16
+ ## Running the model
17
+
18
+ You can run the model with our quick command.
19
+ ```sh
20
+ vessl run create -f ssd-streamlit.yaml
21
+ ```
22
+
23
+ Here's a rundown of the `ssd-streamlit.yaml` file.
24
+ ```yaml
25
+ name: SSD-1B-streamlit
26
+ description: A template Run for inference of SSD-1B with streamlit app
27
+ resources:
28
+ cluster: vessl-gcp-oregon
29
+ preset: v1.l4-1.mem-42
30
+ image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
31
+ import:
32
+ /code/:
33
+ git:
34
+ url: https://github.com/vessl-ai/hub-model
35
+ ref: main
36
+ /model/: hf://huggingface.co/VESSL/SSD-1B
37
+ run:
38
+ - command: |-
39
+ pip install --upgrade pip
40
+ pip install -r requirements.txt
41
+ pip install git+https://github.com/huggingface/diffusers
42
+ streamlit run ssd_1b_streamlit.py --server.port=80
43
+ workdir: /code/SSD-1B
44
+ interactive:
45
+ max_runtime: 24h
46
+ jupyter:
47
+ idle_timeout: 120m
48
+ ports:
49
+ - name: streamlit
50
+ type: http
51
+ port: 80
52
+ ```
examples/models/stable-diffusion.md ADDED
@@ -0,0 +1,54 @@
1
+ ---
2
+ title: Stable Diffusion Playground
3
+ description: Generate images with a prompt on the web, powered by Stable Diffusion
4
+ version: EN
5
+ ---
6
+
7
+ ## Try out this model on [VESSL Hub](https://vessl.ai/hub).
8
+
9
+ This example deploys a simple web app for [Stable Diffusion](https://stability.ai/) inference. Several SD model checkpoints are mounted as VESSL Models so you can try generation instantly.
10
+
11
+ Stable Diffusion is a deep-learning text-to-image model that uses a diffusion technique: it generates an image from noise through a series of gradual denoising steps. Unlike other text-to-image models, Stable Diffusion performs the diffusion process in a lower-dimensional latent space and then reconstructs the result into a full-resolution image. A cross-attention mechanism is also added for multi-modal tasks such as text-to-image and layout-to-image generation.
12
+
13
+ <img
14
+ className="rounded-md"
15
+ src="/images/sd-webui-grumpy-cat.png"
16
+ />
17
+
18
+ ## Running the model
19
+
20
+ You can run the model with our quick command.
21
+ ```sh
22
+ vessl run create -f sd-webui.yaml
23
+ ```
24
+
25
+ Here's a rundown of the `sd-webui.yaml` file.
26
+ ```yaml
27
+ name: stable-diffusion-webui
28
+ description: A template Run for stable diffusion webui app
29
+ resources:
30
+ cluster: google-oregon
31
+ preset: v1.l4-1.mem-42
32
+ image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
33
+ import:
34
+ /code/:
35
+ git:
36
+ url: https://github.com/vessl-ai/hub-model
37
+ ref: main
38
+ /models/protogen-infinity/: hf://huggingface.co/darkstorm2150/Protogen_Infinity_Official_Release
39
+ /models/sd-v1-5/: hf://huggingface.co/VESSL/stable-diffusion-v1-5-checkpoint
40
+ /models/sd-v2-1/: hf://huggingface.co/VESSL/stable-diffusion-v2-1-checkpoint
41
+ run:
42
+ - command: |-
43
+ pip install -r requirements.txt
44
+ python -u launch.py --no-download-sd-model --ckpt-dir /models --no-half --no-gradio-queue --listen
45
+ workdir: /code/stable-diffusion-webui
46
+ interactive:
47
+ max_runtime: 24h
48
+ jupyter:
49
+ idle_timeout: 120m
50
+ ports:
51
+ - name: gradio
52
+ type: http
53
+ port: 7860
54
+ ```
examples/models/whisper.md ADDED
@@ -0,0 +1,46 @@
1
+ ---
2
+ title: Whisper V3 Playground
3
+ description: Translate audio snippets into text on a Streamlit playground.
4
+ version: EN
5
+ ---
6
+
7
+ ## Try out this model on [VESSL Hub](https://vessl.ai/hub).
8
+
9
+ This example runs a general-purpose speech recognition model, [Whisper V3](https://github.com/openai/whisper). It is trained on 680k hours of diverse labelled audio. Whisper is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification. It can generalize to many domains without additional fine-tuning.
10
+
11
+ <img
12
+ className="rounded-md"
13
+ src="/images/whisper-results.png"
14
+ />
15
+
16
+ ## Running the model
17
+
18
+ You can run the model with our quick command.
19
+ ```sh
20
+ vessl run create -f whisper.yaml
21
+ ```
22
+
23
+ If you open the logs page, you can see the inference results for the first five samples of the [LibriSpeech ASR dataset](https://www.openslr.org/12).
24
+
25
+
26
+ Here's a rundown of the `whisper.yaml` file.
27
+ ```yaml
28
+ name: whisper-v3
29
+ description: A template Run for inference of whisper v3 on librispeech_asr test set
30
+ resources:
31
+ cluster: vessl-gcp-oregon
32
+ preset: v1.l4-1.mem-42
33
+ image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
34
+ import:
35
+ /model/: hf://huggingface.co/VESSL/Whisper-large-v3
36
+ /dataset/: hf://huggingface.co/datasets/VESSL/librispeech_asr_clean_test
37
+ /code/:
38
+ git:
39
+ url: https://github.com/vessl-ai/hub-model
40
+ ref: main
41
+ run:
42
+ - command: |-
43
+ pip install -r requirements.txt
44
+ python inference.py
45
+ workdir: /code/whisper-v3
46
+ ```
examples/uses.md ADDED
@@ -0,0 +1,25 @@
1
+ ---
2
+ title: Use cases
3
+ description: See VESSL Run in action with common use cases and workflows
4
+ version: EN
5
+ ---
6
+
7
+ <CardGroup cols={2}>
8
+ <Card title="Launch batch jobs on GPUs" href="/examples/uses/job">
9
+ Leverage the power of GPUs to efficiently train models with batch runs
10
+ <br/>
11
+ {/* <img className="rounded-md" src="/images/hub-llama2.png" /> */}
12
+ </Card>
13
+
14
+ <Card title="Spin-up a notebook server on GPUs" href="/examples/uses/notebook">
15
+ Enable a real-time session of an interactive run on GPUs.
16
+ <br/>
17
+ {/* <img className="rounded-md" src="/images/hub-mistral.png" /> */}
18
+ </Card>
19
+
20
+ <Card title="Backup & Restore data with Artifacts" href="/examples/uses/artifact">
21
+ Run, Backup, Repeat: VESSL Run with VESSL Artifact
22
+ <br/>
23
+ {/* <img className="rounded-md" src="/images/hub-ssd.png" /> */}
24
+ </Card>
25
+ </CardGroup>
examples/uses/artifact.md ADDED
@@ -0,0 +1,67 @@
1
+ ---
2
+ title: Backup & Restore data with Artifacts
3
+ description: VESSL Run with VESSL Artifact
4
+ version: EN
5
+ ---
6
+
7
+ # Backup and restore data with VESSL Artifact
8
+ VESSL Artifact provides a managed storage solution to back up and restore data seamlessly during your runs. By setting up the `export` and `import` options in your YAML configuration or through the Web Console, you can easily save and retrieve volumes. This guide covers how to use these features for data persistence.
9
+
10
+ ## Backing Up Data
11
+ When you initiate a run, you can specify volumes to be exported and saved into VESSL Artifact for later use.
12
+
13
+ ### Using YAML
14
+ In your YAML configuration, add the `export` keyword and specify the target volume and artifact name as shown below:
15
+ ```yaml
16
+ export:
17
+ /target/path/: vessl-artifact://{organizationName}/{projectName}/{artifactName}
18
+ ```
19
+ Here, `/target/path/` is the path of the volume in your run container that you wish to export, and `{artifactName}` is the name you want to give to the saved artifact.
20
+
21
+ ### Using Web Console
22
+ Navigate to the `Project > Runs` section in the VESSL Web Console. Locate the `Volumes` section and click `Export > VESSL Artifact` at the target path.
23
+
24
+ ![export_vessl_artifact_1](/images/run/export_vessl_artifact_1.png)
25
+
26
+ Click `Backup Volume` option and specify the `Artifact Name`.
27
+
28
+ ![export_vessl_artifact_2](/images/run/export_vessl_artifact_2.png)
29
+
30
+ Once the run is executed, the specified volume will be exported and saved into VESSL Artifact under the given artifact name after the run completes.
31
+
32
+ ## Restoring Data
33
+ To use the saved data in another run, you can specify the artifact to be imported.
34
+
35
+ ### Using YAML
36
+ In your YAML configuration for the new run, use the `import` keyword as shown:
37
+
38
+ ```yaml
39
+ import:
40
+ /target/path/: vessl-artifact://{organizationName}/{projectName}/{artifactName}
41
+ ```
42
+ `/target/path/` is the path of the volume in your run container that you wish to import, and `{artifactName}` is the name of the artifact you saved earlier.
43
+
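+ For reference, a minimal run definition that restores a previously saved artifact and backs up new outputs might look like the sketch below. The organization, project, and artifact names and the training script are placeholders:
+
+ ```yaml
+ name: restore-and-backup
+ resources:
+   cluster: vessl-gcp-oregon
+   preset: v1.l4-1.mem-27
+ image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
+ import:
+   /input/: vessl-artifact://my-org/my-project/my-checkpoint       # artifact saved by an earlier run
+ export:
+   /output/: vessl-artifact://my-org/my-project/my-checkpoint-v2   # saved when this run completes
+ run:
+   - command: |-
+       python resume_training.py --checkpoint-dir /input --output-dir /output
+ ```
+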
44
+
45
+ ### Using Web Console
46
+ Navigate to the `Project > Runs` section in the VESSL Web Console. Locate the `Volumes` section when creating or editing a run.
47
+ ![import_vessl_artifact_1](/images/run/import_vessl_artifact_1.png)
48
+ Once the new run is executed, the specified volume will be populated with the data from the imported artifact.
49
+
50
+ ## What's Next
51
+ For more detailed YAML reference and definitions, please visit:
52
+ <CardGroup cols={2}>
53
+ <Card
54
+ title="YAML Reference"
55
+ icon="book"
56
+ href="/yaml-reference/en/yaml"
57
+ >
58
+ A complete description of the YAML reference.
59
+ </Card>
60
+ <Card
61
+ title="YAML Cheat Sheet"
62
+ icon="file-lines"
63
+ href="/yaml-reference/en/yaml_cheat_sheet"
64
+ >
65
+ A complete list of YAML definitions.
66
+ </Card>
67
+ </CardGroup>
examples/uses/job.md ADDED
@@ -0,0 +1,79 @@
1
+ ---
2
+ title: Launch batch jobs on GPUs
3
+ description: Leverage the power of GPUs to efficiently train models with batch runs
4
+ version: EN
5
+ ---
6
+
7
+ ## Batch Run
8
+ Batch runs are designed to execute a series of commands defined in your YAML configuration and then terminate. They are suitable for large-scale, long-running tasks, and GPU acceleration significantly shortens model training times.
9
+
10
+ ### A Simple Batch Run
11
+ Here is an example of a simple batch run YAML configuration. It specifies the Docker image to use, the resources required for the run, and the commands to be executed during the run.
12
+
13
+ ```yaml Simple batch run definition
14
+ name: gpu-batch-run
15
+ description: Run a GPU-backed batch run.
16
+ image: quay.io/vessl-ai/ngc-pytorch-kernel:22.10-py3-202306140422
17
+ resources:
18
+ cluster: vessl-gcp-oregon
19
+ preset: v1.l4-1.mem-42
20
+ run:
21
+ - command: |
22
+ nvidia-smi
23
+ ```
24
+ In this example, `resources.preset: v1.l4-1.mem-42` requests an L4 GPU instance. Next, the `nvidia-smi` command is executed to display the NVIDIA System Management Interface output, and then the run terminates.
26
+
27
+ ### Termination Protection
28
+
29
+ You can also define termination protection in a batch run. Termination protection keeps your run active for a specified duration even after your commands have finished executing. This can be useful for debugging or retrieving intermediate files.
30
+ ```yaml Enable termination protection
31
+ name: gpu-batch-run
32
+ description: Run a GPU-backed batch run.
33
+ image: quay.io/vessl-ai/ngc-pytorch-kernel:22.10-py3-202306140422
34
+ resources:
35
+ cluster: vessl-gcp-oregon
36
+ preset: v1.l4-1.mem-42
37
+ run:
38
+ - command: |
39
+ nvidia-smi
40
+ termination_protect: true
41
+ ```
42
+ In this example, the `termination_protect` option keeps the container alive after the `nvidia-smi` command finishes.
43
+
44
+ ## Train a Thin-Plate Spline Motion Model with GPU resource
45
+ Now let's dive into a more complex batch run configuration. This configuration file describes a batch run for training a Thin-Plate Spline Motion Model on an L4 GPU.
46
+
47
+ ```yaml Batch run YAML for training Thin-Plate Spline Motion Model
48
+ name: Thin-Plate-Spline-Motion-Model
49
+ description: "Animate your own image in the desired way with a batch run on VESSL."
50
+ image: nvcr.io/nvidia/pytorch:21.05-py3
51
+ resources:
52
+ cluster: vessl-gcp-oregon
53
+ preset: v1.l4-1.mem-42
54
+ run:
55
+ - workdir: /root/examples/thin-plate-spline-motion-model
56
+ command: |
57
+ pip install -r requirements.txt
58
+ python run.py --config config/vox-256.yaml --device_ids 0
59
+ import:
60
+ /root/examples: git://github.com/vessl-ai/examples
61
+ /root/examples/vox: s3://vessl-public-apne2/vessl_run_datasets/vox/
62
+ ```
63
+ In this batch run, the Docker image `nvcr.io/nvidia/pytorch:21.05-py3` is used, and an L4 GPU (`resources.preset: v1.l4-1.mem-42`) is allocated for the run. This ensures that the training job runs on the L4 GPU.
64
+
65
+ The model and scripts used in this run are fetched from a GitHub repository (`/root/examples: git://github.com/vessl-ai/examples`).
66
+
67
+ The commands executed in the run first install the requirements and then train the model using the `run.py` script.
68
+
69
+ This example demonstrates how you can set up a GPU-backed batch run for training a machine learning model with a single YAML configuration.
70
+
71
+ ## What's Next
72
+ For more advanced configurations and examples, please visit [VESSL Hub](https://vesslai.notion.site/9e42f785bbdf42379b2112b859d8c873?v=8d1527bc18154381b9baf35d4068b227&pvs=4).
73
+ <Card
74
+ title="VESSL Hub"
75
+ icon="database"
76
+ href="https://vesslai.notion.site/9e42f785bbdf42379b2112b859d8c873?v=8d1527bc18154381b9baf35d4068b227&pvs=4"
77
+ >
78
+ A variety of YAML examples that you can use as references
79
+ </Card>
examples/uses/notebook.md ADDED
@@ -0,0 +1,81 @@
1
+ ---
2
+ title: Spin-up a notebook server on GPUs
3
+ description: Enable a real-time session of an interactive run on GPUs.
4
+ version: EN
5
+ ---
6
+
7
+ ## Interactive run
8
+ Interactive runs are designed for live interaction with your data, code, and GPUs through Jupyter or SSH. They are useful for tasks such as data exploration, model debugging, and algorithm development. They also allow you to expose additional ports on the container and communicate through those ports.
9
+
10
+ ### A Simple Interactive Run
11
+ Here is an example of a simple interactive run. It specifies resources, a container image, and the duration of the interactive runtime. It's important to note that by default, port `22/tcp` is exposed for SSH and `8888/http` is exposed for JupyterLab.
12
+
13
+ ```yaml Simple interactive run definition
14
+ name: gpu-interactive-run
15
+ description: Run an interactive GPU-backed Jupyter and SSH server.
16
+ resources:
17
+ cluster: vessl-gcp-oregon
18
+ preset: v1.l4-1.mem-42
19
+ image: quay.io/vessl-ai/ngc-pytorch-kernel:22.10-py3-202306140422
20
+ interactive:
21
+ max_runtime: 24h
22
+ jupyter:
23
+ idle_timeout: 120m
24
+ ```
25
+
26
+ ### Port
27
+ You can also specify additional ports to be exposed during an interactive run.
28
+
29
+ ```yaml Simple interactive run definition
30
+ name: gpu-interactive-run
31
+ description: Run an interactive GPU-backed Jupyter and SSH server.
32
+ resources:
33
+ cluster: vessl-gcp-oregon
34
+ preset: v1.l4-1.mem-42
35
+ image: quay.io/vessl-ai/ngc-pytorch-kernel:22.10-py3-202306140422
36
+ interactive:
37
+ max_runtime: 24h
38
+ jupyter:
39
+ idle_timeout: 120m
40
+ ports:
41
+ - 8501
42
+ ```
43
+ In this example, in addition to the default ports, 8501 will be exposed. Note that the `ports` field takes a list, so you can specify multiple ports if necessary. Also if you want to specify a TCP port, you can append `/tcp` to the port number; otherwise, `/http` is used implicitly.
44
+
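+ As a quick sketch, a mixed list might look like this (the extra 6006 TCP port is just an illustrative addition):
+
+ ```yaml
+ ports:
+   - 8501        # exposed over HTTP by default
+   - 6006/tcp    # append /tcp to expose a raw TCP port instead
+ ```
+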
45
+ ## Run a stable diffusion demo with GPU resources
46
+
47
+ Now let's move on to running a Stable Diffusion demo. The following configuration outlines an interactive run set up to execute a Stable Diffusion demo on an L4 GPU, exposing the interactive demo on port 8501.
48
+ ```yaml Interactive run YAML for a Stable Diffusion inference demo
49
+ name: Stable Diffusion Web
50
+ description: Run an inference web app of stable diffusion demo.
51
+ image: nvcr.io/nvidia/pytorch:22.10-py3
52
+ resources:
53
+ cluster: vessl-gcp-oregon
54
+ preset: v1.l4-1.mem-42
55
+ run:
56
+ - command: |
57
+ exit
58
+ bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
59
+ bash webui.sh
60
+ interactive:
61
+ max_runtime: 24h
62
+ jupyter:
63
+ idle_timeout: 120m
64
+ ports:
65
+ - 8501
66
+ ```
67
+ In this interactive run, the Docker image `nvcr.io/nvidia/pytorch:22.10-py3` is used, and an L4 GPU (`resources.preset: v1.l4-1.mem-42`) is allocated for the run. The interactive run can last up to 24 hours (`interactive.max_runtime: 24h`), and the demo will be accessible via port 8501.
68
+
69
+ The run commands first execute a bash script from a remote location (`bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)`), followed by the execution of `webui.sh`.
70
+
71
+ This configuration provides an example of setting up an interactive run for executing a GPU-accelerated demo with real-time user interaction facilitated via a specified port.
72
+
73
+ ## What's Next
74
+ For more advanced configurations and examples, please visit [VESSL Hub](https://vesslai.notion.site/9e42f785bbdf42379b2112b859d8c873?v=8d1527bc18154381b9baf35d4068b227&pvs=4).
75
+ <Card
76
+ title="VESSL Hub"
77
+ icon="database"
78
+ href="https://vesslai.notion.site/9e42f785bbdf42379b2112b859d8c873?v=8d1527bc18154381b9baf35d4068b227&pvs=4"
79
+ >
80
+ A variety of YAML examples that you can use as references
81
+ </Card>
guides/clusters/access.md ADDED
@@ -0,0 +1,39 @@
1
+ ---
2
+ title: Access control
3
+ version: EN
4
+ ---
5
+
6
+ <img style={{ borderRadius: '0.5rem' }}
7
+ src="/images/clusters/access/1_access.png"
8
+ />
9
+
10
+ Under **Access Control**, you can grant, manage, and revoke access to shared clusters.
11
+
12
+ #### (1) Grant access
13
+
14
+ Click **Grant access** and define the following parameters.
15
+
16
+ <img style={{ borderRadius: '0.5rem' }}
17
+ src="/images/clusters/access/2_grant.png"
18
+ />
19
+
20
+ * **Organization name** — Name of the organization that you want to share the cluster with.
21
+ * **Kubernetes namespace** — [🔗 Kubernetes namespaces](https://www.aquasec.com/cloud-native-academy/kubernetes-101/kubernetes-namespace/) separate a cluster into logical units. It helps granularly organize, allocate, manage, and secure cluster resources. If you are using a naming convention like `group-subgroup`, use a namespace `subgroup`.&#x20;
22
+ * **Node** — Select the nodes to share by clicking the checkbox.&#x20;
23
+
24
+ #### (2) Edit access
25
+
26
+ Click **Edit** to edit access to certain nodes.&#x20;
27
+
28
+ <img style={{ borderRadius: '0.5rem' }}
29
+ src="/images/clusters/access/3_edit.gif"
30
+ />
31
+
32
+ #### (3) Revoke access
33
+
34
+ Click **Revoke** to remove the organization from accessing the clusters.
35
+
36
+ <img style={{ borderRadius: '0.5rem' }}
37
+ src="/images/clusters/access/4_revoke.gif"
38
+ />
39
+
guides/clusters/aws.md ADDED
@@ -0,0 +1,12 @@
1
+ ---
2
+ title: AWS
3
+ version: EN
4
+ ---
5
+
6
+ <Note>
7
+ Private cloud support for AWS, Google Cloud, Microsoft Azure, and Oracle Cloud is currently in development.&#x20;
8
+
9
+ In the meantime, you can contact our support team to receive AWS integration support backed by our engineering team or request early access.&#x20;
10
+
11
+ Contact us at [[email protected]](https://vessl.ai/talk-to-sales) or through our [community Slack](https://join.slack.com/t/vessl-ai-community/shared\_invite/zt-1a6schu04-NyjRKE0UMli58Z\_lthBICA).&#x20;
12
+ </Note>
guides/clusters/laptops.md ADDED
@@ -0,0 +1,101 @@
1
+ ---
2
+ title: Personal laptops
3
+ version: EN
4
+ ---
5
+
6
+ <img style={{ borderRadius: '0.5rem' }}
7
+ src="/images/clusters/laptops/1_mac.png"
8
+ />
9
+
10
+ Integrating your Mac or Linux machine with VESSL allows you to use your personal machine as a Kubernetes-backed single-node cluster and optimize your laptop for ML.
11
+
12
+ * Launch training jobs in seconds on VESSL's easy-to-use web interface or CLI, without writing YAML or scrappy scripts.
13
+ * Build a baseline model on your laptop and transition seamlessly to VESSL Cloud to scale your model.
14
+ * Keep track of your hardware and runtime environments together with `vessl.log`.
15
+
16
+ VESSL Clusters' single-line CLI command automatically checks, installs, and configures all dependencies such as Kubernetes, and connects your laptop to VESSL.
17
+
18
+ ## Step-by-step Guide
19
+
20
+ <Warning>There is an ongoing [🔗 issue related to Kubernetes hostname](https://github.com/kubernetes/kubernetes/issues/71140#issue-381687745) containing capital letters. Please make sure your machine's hostname is in lowercase.&#x20;</Warning>
21
+
22
+ ### (1) Prerequisites
23
+
24
+ You should first have **Docker**, **Helm**, and **VESSL SDK** installed on your machine.&#x20;
25
+
26
+ #### Install Docker and Helm
27
+
28
+ Install Docker and Helm. If you are on Mac and have Homebrew installed, the easiest way is to `brew install`. Check out the [🔗 Docker](https://docs.docker.com/get-docker/) and [🔗 Helm](https://helm.sh/docs/intro/install/) installation guides for more details.&#x20;
29
+
30
+ ```bash
31
+ brew install docker
32
+ ```
33
+
34
+ ```bash
35
+ brew install helm
36
+ ```
37
+
38
+ <Note>You should have Docker running in the background with your [account](https://hub.docker.com/signup) logged in while using your laptop with VESSL Clusters.&#x20;</Note>
39
+
40
+ #### **Set up the VESSL environment**
41
+
42
+ Set up a VESSL environment on your laptop and grant access. Refer to our docs on VESSL Client for more detailed setup guides.
43
+
44
+ ```bash
45
+ pip install vessl
46
+ ```
47
+
48
+ ```bash
49
+ vessl configure
50
+ ```
51
+ <img style={{ borderRadius: '0.5rem' }}
52
+ src="/images/clusters/laptops/2_token.png"
53
+ />
54
+
55
+ ### (2) VESSL integration
56
+
57
+ The following single-line command connects your Mac.&#x20;
58
+
59
+ ```
60
+ vessl cluster create --name '[CLUSTER_NAME_HERE]' --mode single
61
+ ```
62
+
63
+ The most common flags used with `vessl cluster create` commands are as follows. Check out our docs on VESSL CLI for additional flags.
64
+
65
+ * `--name` — Define your cluster name
66
+ * `--mode single` — Specifies that you are installing a single-node cluster.
67
+ * `--help` — See additional command options.&#x20;
68
+
69
+ The command will automatically check dependencies and ask you to install Kubernetes. This process will take a few minutes. Proceed by entering `y`.&#x20;
70
+
71
+ <img style={{ borderRadius: '0.5rem' }}
72
+ src="/images/clusters/laptops/3_create.png"
73
+ />
74
+
75
+ If you have Kubernetes installed on your machine, the command will then ask you to install VESSL agent on the Kubernetes cluster. Enter `y` and proceed.
76
+
77
+ <img style={{ borderRadius: '0.5rem' }}
78
+ src="/images/clusters/laptops/4_install.png"
79
+ />
80
+
81
+ By this point, you have successfully completed the integration.
82
+
83
+ ### (3) Confirm integration
84
+
85
+ Confirm your integration using the `list` command which returns all the clusters available in your Organization.&#x20;
86
+
87
+ ```bash
88
+ vessl cluster list
89
+ ```
90
+
91
+ <img style={{ borderRadius: '0.5rem' }}
92
+ src="/images/clusters/laptops/5_list.png"
93
+ />
94
+
95
+ Finally, try running a training job on your laptop. Your first run may take a few minutes to get the Docker images installed on your device.
96
+
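+ As a minimal sketch, a run definition targeting your laptop could look like the YAML below; the cluster name and preset are placeholders, so substitute the values defined for your own cluster.
+
+ ```yaml
+ name: hello-laptop
+ resources:
+   cluster: my-macbook     # the name you passed to --name above (hypothetical)
+   preset: cpu-small       # hypothetical preset; pick one defined for your cluster
+ image: quay.io/vessl-ai/ngc-pytorch-kernel:22.10-py3-202306140422
+ run:
+   - command: |
+       python -c "print('hello from my laptop')"
+ ```
+
+ Launch it with `vessl run create -f hello-laptop.yaml`, as in the other examples in these docs.
+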
97
+ <iframe src="https://www.youtube.com/watch?v=HUa-EVd25hA"></iframe>
100
+
101
+ <Note>Windows support for single-node cluster integration is currently in beta. Cluster usage and status monitoring may be limited for Windows machines.&#x20;</Note>
guides/clusters/managed.md ADDED
@@ -0,0 +1,38 @@
1
+ ---
2
+ title: Managed cloud
3
+ version: EN
4
+ ---
5
+
6
+ Out of the box, VESSL comes with fully managed AWS infrastructure and provides the easiest way to run compute-demanding jobs on the cloud. VESSL Cloud solves common engineering overheads of ML in the cloud, such as volume mounts, runtime customization, spot instances, and model checkpoints, at zero additional cost.
7
+
8
+ VESSL Cloud provides support for two different [🔗 AWS Regions](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html#Concepts.RegionsAndAvailabilityZones.Regions) — Asia Pacific (Seoul, ap-northeast-2) and US West (Oregon, us-west-2). In addition to your default region, you can add an additional region on the web.
9
+
10
+ Users on the Pro plan can take advantage of the powerful GPU instances. Below are the GPU specs and equivalent Amazon EC2 instances that VESSL Cloud supports. Check out the links for more details.
11
+
12
+ * T4 — [🔗 EC2 G4 Instances](https://aws.amazon.com/ec2/instance-types/g4/)
13
+ * K80 — [🔗 EC2 P2 Instances](https://aws.amazon.com/ec2/instance-types/p2/)
14
+ * V100 — [🔗 EC2 P3 Instances](https://aws.amazon.com/ec2/instance-types/p3/)
15
+
16
+ ## Step-by-step Guide
17
+
18
+ To add an alternative region, follow these steps.
19
+
20
+ #### (1) Click **New cluster**.
21
+
22
+ <img style={{ borderRadius: '0.5rem' }}
23
+ src="/images/clusters/managed/1_new.png"
24
+ />
25
+
26
+ #### (2) Under VESSL Cloud, select your **Region** and **Cluster**.
27
+
28
+ <img style={{ borderRadius: '0.5rem' }}
29
+ src="/images/clusters/managed/2_managed.png"
30
+ />
31
+
32
+ #### (3) Click **Done** to complete the setup and confirm your integration under **🗂️ Clusters**.
33
+
34
+ <img style={{ borderRadius: '0.5rem' }}
35
+ src="/images/clusters/managed/3_list.png"
36
+ />
37
+
38
+ <Warning>If you are on the Enterprise plan and wish to deactivate VESSL Cloud for preventive measures, contact [[email protected]](mailto:[email protected]).</Warning>
guides/clusters/monitoring.md ADDED
@@ -0,0 +1,105 @@
1
+ ---
2
+ title: Monitor clusters
3
+ version: EN
4
+ ---
5
+
6
+ VESSL Clusters comes with a built-in **cluster dashboard** that provides a visualization of cluster usage and status down to each node and workload. This is enabled by the **VESSL Cluster Agent** which sends real-time information about the clusters and workloads running on the cluster such as node specifications and model metrics.&#x20;
7
+
8
+ Dashboard setup is done automatically as you integrate your cloud or on-prem servers using the `vessl cluster create` command.
9
+
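+ For example, a typical invocation looks like this (the cluster name is illustrative):
+
+ ```bash
+ vessl cluster create --name 'my-on-prem-cluster'
+ ```
+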
10
+ <Note>
11
+ Users on the Enterprise plan can use VESSL's custom cluster agent to route monitoring information to your own tools like Datadog and Grafana.
12
+ Contact us at [[email protected]](https://vessl.ai/talk-to-sales) or through our [community Slack](https://join.slack.com/t/vessl-ai-community/shared\_invite/zt-1a6schu04-NyjRKE0UMli58Z\_lthBICA) for more details.
13
+ </Note>
14
+
15
+ ## Cluster-level Monitoring
16
+
17
+ Multi-cluster monitoring of resource usage and ongoing workloads is available under **🗂️ Clusters**. Here, you can get an overview of the integrated clusters.
18
+
19
+ <img style={{ borderRadius: '0.5rem' }}
20
+ src="/images/clusters/monitoring/1_cluster_.png"
21
+ />
22
+
23
+ * Cluster status — Connection and incident status of a cluster.
24
+ * Available nodes — Available number of worker nodes.
25
+ * Real-time resource usage — Real-time resource usage of CPU cores, RAM, and GPUs.
26
+ * Ongoing workloads by type — The number of running notebook servers (Workspaces) and training jobs (Experiments).
27
+
28
+ Clicking the cluster guides you to the **Summary** tab which holds more detailed information about the cluster.
29
+
30
+ #### (1) Summary
31
+
32
+ The summary section presents the basic information about the cluster including the connection and incident status.
33
+
34
+
35
+ <img style={{ borderRadius: '0.5rem' }}
36
+ src="/images/clusters/monitoring/2_summar.png"
37
+ />
38
+
39
+ #### (2) Quotas & Usage
40
+
41
+ Quotas & Usage shows the organization-wide and personal resource quotas for the cluster, including the number of GPU hours and occupiable GPUs and CPUs. This is set by the organization admin. Refer to the next section of the documentation on VESSL Clusters' cluster administration features.
42
+
43
+ <img style={{ borderRadius: '0.5rem' }}
44
+ src="/images/clusters/monitoring/3_quotas.png"
45
+ />
46
+
47
+ #### (3) Resource Statistics
48
+
49
+ This section shows you how much CPU, GPU, and memory have been requested (and allocated) and are currently being used.&#x20;
50
+
51
+ <Note>Note that when you are using VESSL Workspaces (notebook servers) you may be occupying a node without actively using the resources — you are actively using the resources only when a cell is running.</Note>
52
+
53
+ <img style={{ borderRadius: '0.5rem' }}
54
+ src="/images/clusters/monitoring/4_stats.png"
55
+ />
56
+
57
+ #### (4) Workloads
58
+
59
+ This section shows all ongoing workloads on the cluster with information on the occupying node, resource consumption, creator, and the created date. If you are an organization admin, clicking the workload name guides you to the detailed workload page under **🗂️ Projects** or **🗂️ Workspaces**.
60
+
61
+ <img style={{ borderRadius: '0.5rem' }}
62
+ src="/images/clusters/monitoring/5_workloads.png"
63
+ />
64
+
65
+ ## Node-level Monitoring
66
+
67
+ Under **Nodes**, you can view all the worker nodes tied to the cluster with their real-time CPU, Memory, and GPU usage, ongoing workloads by their type, and incident status. You can select the checkbox to get more in-depth information.
68
+
69
+ <img style={{ borderRadius: '0.5rem' }}
70
+ src="/images/clusters/monitoring/6_node.png"
71
+ />
72
+
73
+ #### (1) Metadata
74
+
75
+ <img style={{ borderRadius: '0.5rem' }}
76
+ src="/images/clusters/monitoring/7_node-meta.png"
77
+ />
78
+
79
+ #### (2) System metrics
80
+
81
+ <img style={{ borderRadius: '0.5rem' }}
82
+ src="/images/clusters/monitoring/8_node-systems.png"
83
+ />
84
+
85
+ #### (3) Workloads
86
+
87
+ <img style={{ borderRadius: '0.5rem' }}
88
+ src="/images/clusters/monitoring/9_node-workloads-.png"
89
+ />
90
+
91
+ #### (4) Issues
92
+
93
+ <img style={{ borderRadius: '0.5rem' }}
94
+ src="/images/clusters/monitoring/10_node-issues.png"
95
+ />
96
+
97
+ ## Workload-level Monitoring
98
+
99
+ Under **Workloads**, you can view the workload log related to the cluster with the current status, occupying node, resource consumption, and a visualization of the usage history.&#x20;
100
+
101
+ <img style={{ borderRadius: '0.5rem' }}
102
+ src="/images/clusters/monitoring/11_workloads.png"
103
+ />
104
+
105
+ <Note>If you are on the Enterprise plan and wish to send the cluster information collected by VESSL Cluster Agent to your central infra monitoring tool such as Datadog and Grafana, contact us at [[email protected]](https://vessl.ai/talk-to-sales).</Note>
guides/clusters/onprem.md ADDED
@@ -0,0 +1,151 @@
1
+ ---
2
+ title: On-premise clusters
3
+ version: EN
4
+ ---
5
+
6
+ In the background, VESSL Clusters leverages GPU-accelerated Docker containers and Kubernetes pods. It abstracts the complex compute backends and system details of Kubernetes-backed GPU infrastructure into an easy-to-use web interface and simple CLI commands. Data Scientists and Machine Learning Researchers without any software or DevOps backgrounds can use VESSL's single-line CURL command to set up and configure on-premise GPU servers for ML.&#x20;
7
+
8
+ VESSL’s cluster integration is composed of four primitives.
9
+
10
+ - **VESSL API Server** — Enables communication between the user and the GPU clusters, through which users can launch containerized ML workloads.
11
+ - **VESSL Cluster Agent** — Sends information about the clusters and workloads running on the cluster such as the node specifications and model metrics.
12
+ - **Control plane node** — Acts as the [🔗 cluster-wide control tower](https://www.containiq.com/post/kubernetes-control-plane) and orchestrates subsidiary worker nodes.&#x20;
13
+ - **Worker nodes** — Run specified ML workloads based on the runtime spec and environment received from the control plane node.
14
+
15
16
+ <img style={{ borderRadius: '0.5rem' }}
17
+ src="/images/clusters/onprem/1_onprem.png"
18
+ />
19
+
20
+ Integrating more powerful, multi-node GPU clusters for your team is as easy as integrating your personal laptop. To make the process easier, we’ve prepared a **single-line curl command** that installs all the binaries and dependencies on your server.&#x20;
21
+
22
+ ## Step-by-step Guide
23
+
24
+ <Warning>There is an ongoing [🔗 issue related to Kubernetes hostname](https://github.com/kubernetes/kubernetes/issues/71140#issue-381687745) containing capital letters. Please make sure your machine's hostname is in lowercase.&#x20;</Warning>
25
+
26
+ ### (1) Prerequisites
27
+
28
+ Make sure that **Ubuntu 18.04**, **CentOS 7.9**, or a later Linux OS is installed on your server.
29
+
30
+ #### Install dependencies
31
+
32
+ You can install all the dependencies required for cluster integration using a single-line `curl` command. The command
33
+
34
+ - Installs [🔗 Docker](https://docs.docker.com/get-docker/) if it’s not already installed.
35
+ - Installs and configures [🔗 NVIDIA container runtime](https://developer.nvidia.com/nvidia-container-runtime).
36
+ - Installs [🔗 k0s](https://k0sproject.io/), a lightweight Kubernetes distribution, and designates and configures a control plane node.
37
+ - Generates a token and a command for connecting worker nodes to the control plane node configured above.
38
+
39
+ If you wish to use the control plane node solely as a control plane — meaning not running any ML workloads on it and only using it for admin and monitoring purposes — add a `--taint-controller` flag at the end of the command.
40
+
41
+ ```bash
42
+ curl -sSLf https://install.vessl.ai/bootstrap-cluster/bootstrap-cluster.sh | sudo bash -s -- --role=controller
43
+ ```
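+
+ For instance, to keep ML workloads off the control plane node, the same command with the flag appended looks like this:
+
+ ```bash
+ curl -sSLf https://install.vessl.ai/bootstrap-cluster/bootstrap-cluster.sh | sudo bash -s -- --role=controller --taint-controller
+ ```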
44
+
45
46
+ <img style={{ borderRadius: '0.5rem' }}
47
+ src="/images/clusters/onprem/2_curl.png"
48
+ />
49
+
50
+ Upon installing all the dependencies, the command returns a follow-up command with a token. You can use this to add worker nodes to the control plane. If you don’t want to add an additional worker node you can skip to the next step.&#x20;
51
+
52
+ ```bash
53
+ curl -sSLf https://install.vessl.ai/bootstrap-cluster/bootstrap-cluster.sh | sudo bash -s -- --role worker --token '[TOKEN_HERE]'
54
+ ```
55
+
56
+ You can confirm that your control plane and worker node have been successfully configured using a `k0s` command.
57
+
58
+ ```bash
59
+ sudo k0s kubectl get nodes
60
+ ```
61
+
62
63
+ <img style={{ borderRadius: '0.5rem' }}
64
+ src="/images/clusters/onprem/3_k0s.png"
65
+ />
66
+
67
+ ### (2) VESSL integration
68
+
69
+ You are now ready to integrate the Kubernetes cluster with VESSL. Make sure you have VESSL Client installed on the server and configured for your organization.
70
+
71
+ ```bash
72
+ pip install vessl --upgrade
73
+ ```
74
+
75
+ ```bash
76
+ vessl configure
77
+ ```
78
+
79
+ The following single-line command connects your Kubernetes-backed GPU cluster to VESSL.
80
+
81
+ ```bash
82
+ vessl cluster create
83
+ ```
84
+
85
+ Follow the prompts to set your configuration options. You can press `Enter` to use the default values.
86
+
87
+ By this point, you have successfully completed the integration.
88
+
89
+ ### (3) Confirm integration
90
+
91
+ You can use the VESSL CLI or visit **🗂️ Clusters** to confirm your integration.
92
+
93
+ ```bash
94
+ vessl cluster list
95
+ ```
96
+
97
98
+ <img style={{ borderRadius: '0.5rem' }}
99
+ src="/images/clusters/onprem/4_clusters.png"
100
+ />
101
+
102
+ ### Common troubleshooting commands
103
+
104
+ Here are common problems that our users face as they integrate on-premise Clusters. You can use the `journalctl` command to get a more detailed log of the issue. Please share this log as you reach out for support.
105
+
106
+ ```
107
+ sudo journalctl -u k0scontroller | tail -n 20
108
+ ```
109
+
110
+ #### VesslApiException: PermissionDenied (403): Permission denied.
111
+
112
+ ```
113
+ kernel_cluster.py:111] VESSL cluster agent installed. Waiting for the agent to be connected with VESSL...
114
+ _base.py:107] VesslApiException: PermissionDenied (403): Permission denied.
115
+ ```
116
+
117
+ It's likely that you don't have full access to install VESSL cluster agent on the server. Contact your organization's cluster and infrastructure administrator to receive help.&#x20;
118
+
119
+ #### VesslApiException: NotFound (404) Requested entity not found.
120
+
121
+ ```
122
+ kernel_cluster.py:289] Existing VESSL cluster installation found! getting cluster information...
123
+ _base.py:107] VesslApiException: NotFound (404) Requested entity not found.
124
+ ```
125
+
126
+ Try again after running the following command:&#x20;
127
+
128
+ ```bash
129
+ sudo helm uninstall vessl -n vessl --kubeconfig="/var/lib/k0s/pki/admin.conf"
130
+ ```
131
+
132
+ **Invalid value: "k0s-ctrl-\[HOSTNAME]"**
133
+
134
+ ```
135
+ leaderelection.go:334] error initially creating leader election record: Lease.coordination.k8s.io "k0s-ctrl-[HOSTNAME]" is invalid: metadata.name: Invalid value: "k0s-ctrl-[HOSTNAME]": a lowercase RFC 1123 subdomain must consist of lowercase alphanumeric characters.
136
+ ```
137
+
138
+ There is an ongoing [🔗 issue related to Kubernetes hostname](https://github.com/kubernetes/kubernetes/issues/71140#issue-381687745) containing capital letters. Your hostname must be in lowercase alphanumeric characters.&#x20;
139
+
140
+ You can solve this issue by contacting your organization's cluster and infrastructure administrator to change your hostname, or by changing your hostname yourself using the following `sudo` command:
141
+
142
+ <Warning>Changing your hostname may have unexpected side effects, and might be prohibited depending on your organization's IT policy.&#x20;</Warning>
143
+
144
+ ```
145
+ sudo hostname [HOSTNAME]
146
+ sudo systemctl restart k0scontroller
147
+ ```
148
+
149
+ ## Troubleshooting
150
+
151
+ If you're experiencing issues with your on-premises cluster, or can't figure out what's causing them, try [VESSL Flare](/docs/troubleshooting/vessl-flare.md).
guides/clusters/overview.md ADDED
@@ -0,0 +1,24 @@
1
+ ---
2
+ title: Overview
3
+ version: EN
4
+ ---
5
+
6
+ <img style={{ borderRadius: '0.5rem' }}
7
+ src="/images/clusters/overview/1_clusters.png"
8
+ />
9
+
10
+ VESSL enables seamless scaling of containerized ML workloads from a personal laptop to cloud instances or Kubernetes-backed on-premise clusters. &#x20;
11
+
12
+ While VESSL comes with out-of-the-box, fully managed AWS infrastructure, you can also integrate **an unlimited number** of (1) personal Linux machines, (2) on-premise GPU servers, and (3) private clouds. You can then use VESSL as a single point of access to multiple clusters.
13
+
14
+ VESSL Clusters simplifies the end-to-end management of large-scale, organization-wide ML infrastructure from integration to monitoring. These features are available under **🗂️ Clusters**.
15
+
16
+ * **Single-command Integration** — Set up a hybrid or multi-cloud infrastructure with a single command.
17
+ * **GPU-accelerated workloads** — Run training, optimization, and inference tasks on GPUs in seconds
18
+ * **Resource optimization** — Match and scale workloads automatically based on the required compute resources
19
+ * **Cluster Dashboard** — Monitor real-time usage and incident & health status of clusters down to each node.
20
+ * **Reproducibility** — Record runtime metadata such as hardware and instance specifications.
21
+
22
+ <img style={{ borderRadius: '0.5rem' }}
23
+ src="/images/clusters/overview/2_figure.png"
24
+ />
guides/clusters/quotas.md ADDED
@@ -0,0 +1,25 @@
1
+ ---
2
+ title: Quotas and limits
3
+ version: EN
4
+ ---
5
+
6
+ <img style={{ borderRadius: '0.5rem' }}
7
+ src="/images/clusters/quotas/1_quotas.png"
8
+ />
9
+
10
+ Under **Cluster Quotas**, you can set quotas for the entire Organization or certain user groups based on the number of GPU hours, occupiable GPUs, run hours, and disk size.&#x20;
11
+
12
+ * Shared Quotas — Set a quota for all organizations or users that have access to the cluster.
13
+ * Individually Defined Quotas — Set individually defined quotas for specific organizations or users.
14
+
15
+ ## Step-by-step Guide
16
+
17
+ Click **Add new quota** and set the following parameters.
18
+
19
+ <img style={{ borderRadius: '0.5rem' }}
20
+ src="/images/clusters/quotas/2_create.png"
21
+ />
22
+
23
+ <img style={{ borderRadius: '0.5rem' }}
24
+ src="/images/clusters/quotas/3_user.png"
25
+ />
guides/clusters/remove.md ADDED
@@ -0,0 +1,28 @@
1
+ ---
2
+ title: Remove a cluster
3
+ version: EN
4
+ ---
5
+
6
+ <img style={{ borderRadius: '0.5rem' }}
7
+ src="/images/clusters/remove/1_delete.png"
8
+ />
9
+
10
+ <Warning>Once you delete a cluster, there is no going back.&#x20;</Warning>
11
+
12
+ #### Remove cluster from VESSL
13
+
14
+ Under **General**, you can remove your clusters from VESSL. Click **Delete cluster** to proceed. Note that you should first terminate all ongoing workloads.
15
+
16
+ <img style={{ borderRadius: '0.5rem' }}
17
+ src="/images/clusters/remove/2_modal.png"
18
+ />
19
+
20
+ #### Remove all dependencies from the cluster
21
+
22
+ To remove all dependencies including the VESSL cluster agent from the cluster, run the following command.&#x20;
23
+
24
+ ```
25
+ docker stop vessl-k0s && docker rm vessl-k0s
26
+ docker run -it --rm --privileged --pid=host alpine nsenter -t 1 -m -- sh -c "rm -rfv /var/lib/k0s"
27
+ ```
28
+
guides/clusters/specs.md ADDED
@@ -0,0 +1,58 @@
1
+ ---
2
+ title: Default resource specs
3
+ version: EN
4
+ ---
5
+
6
+ <img style={{ borderRadius: '0.5rem' }}
7
+ src="/images/clusters/specs/1_specs.png"
8
+ />
9
+
10
+ Under **Resource Specs**, you can define custom resource presets that are the only options users can select when launching ML workloads. You can also specify the **priority** of the defined options. For example, when you set the resource specs as above, users will only be able to select the four options below.
11
+
12
+ <img style={{ borderRadius: '0.5rem' }}
13
+ src="/images/clusters/specs/2_resource.png"
14
+ />
15
+
16
+ These default options help admins optimize resource usage by (1) preventing someone from occupying an excessive number of GPUs and (2) preventing unbalanced resource requests that cause skewed resource usage. Everyday users, in turn, can simply get going without having to think about and configure the exact number of CPU cores and amount of memory they need to request.
17
+
18
+ ## Step-by-step Guide
19
+
20
+ Click **New resource spec** and define the following parameters.
21
+
22
+ <img style={{ borderRadius: '0.5rem' }}
23
+ src="/images/clusters/specs/3_add.png"
24
+ />
25
+
26
+ * **Name** — Set a name for the preset. Use names that well represent the preset like `a100-2.mem-16.cpu-6`.
27
+ * **Processor type** — Define the preset by the processor type, either by CPU or GPU.&#x20;
28
+ * **CPU limit** — Enter the number of CPUs. For `a100-2.mem-16.cpu-6`, enter `6`.
29
+ * **Memory limit** — Enter the amount of memory in GB. For `a100-2.mem-16.cpu-6`, the number would be 16.
30
+ * **GPU type** — Specify which GPU you are using. You can get this information by using the `nvidia-smi` command on your server. In the following example, the value is `a100-sxm-80gb`.
31
+
32
+ ```bash
33
+ nvidia-smi
34
+ ```
35
+
36
+ ```bash
37
+ Thu Jan 19 17:44:05 2023
38
+ +-----------------------------------------------------------------------------+
39
+ | NVIDIA-SMI 510.73.08 Driver Version: 510.73.08 CUDA Version: 11.6 |
40
+ |-------------------------------+----------------------+----------------------+
41
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
42
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
43
+ | | | MIG M. |
44
+ |===============================+======================+======================|
45
+ | 0 NVIDIA A100-SXM... On | 00000000:01:00.0 Off | 0 |
46
+ | N/A 40C P0 64W / 275W | 0MiB / 81920MiB | 0% Default |
47
+ | | | Disabled |
48
+ +-------------------------------+----------------------+----------------------+
49
+ ```
50
+
51
+ * **GPU limit** — Enter the number of GPUs. For `a100-2.mem-16.cpu-6`, enter `2`. You can also use decimal values if you are using Multi-Instance GPUs (MIG).
52
+ * **Priority** — Using different values for priority disables the FIFO scheduler and assigns workloads according to priority, with lower priority values scheduled first. The example preset below always puts workloads running on `gpu-1` ahead of any other workloads.
53
+
54
+ <img style={{ borderRadius: '0.5rem' }}
55
+ src="/images/clusters/specs/4_list.png"
56
+ />
57
+
58
+ * **Available workloads** — Select the type of workloads that can use the preset. With this, you can guide users to use 🗂️ **Experiments** by preventing them from running ️ **Workspaces** with 4 or 8 GPUs. &#x20;
guides/datasets/create.md ADDED
@@ -0,0 +1,60 @@
1
+ ---
2
+ title: Create a new dataset
3
+ version: EN
4
+ ---
5
+
6
+ When you click **NEW DATASET** on the **DATASETS** page, you will be asked to add a new dataset from either a local or an external data source. You have three data provider options: VESSL, Amazon Simple Storage Service, and Google Cloud Storage.
7
+
8
+ <img style={{ borderRadius: '0.5rem' }}
9
+ src="/images/datasets/create/1_new.png"
10
+ />
11
+
12
+ <img style={{ borderRadius: '0.5rem' }}
13
+ src="/images/datasets/create/2_datasets.png"
14
+ />
15
+
16
+ <Tabs>
17
+ <Tab title="Managed">
18
+ When you select a VESSL dataset, you can upload data from the local disk. To create a VESSL dataset:
19
+
20
+ 1. Enter **Dataset Name**
21
+ 2. Click **Upload Files**
22
+ 3. Click **Submit**
23
+ </Tab>
24
+
25
+ <Tab title="S3">
26
+ You can retrieve a dataset from S3 by selecting Amazon Simple Storage Service. To create a dataset from S3:
27
+
28
+ 1. Enter **Dataset Name**
29
+ 2. Enter **ARN**
30
+ 3. Enter **Bucket Path**
31
+ 4. Click **Create**
32
+ </Tab>
33
+
34
+ <Tab title="GCS">
35
+ You also have the option to retrieve a dataset from Google Cloud Storage. To create a dataset from GCS:
36
+
37
+ 1. Enter **Dataset Name**
38
+ 2. Enter **Bucket Path**
39
+ 3. Click **Create** button
40
+ </Tab>
41
+
42
+ <Tab title="Local storage">
43
+ If the dataset exists inside the cluster (NAS, host machine, etc.) and you want to mount it only inside the cluster, you can select the Local Storage option. In this case, VESSL only stores the location of the dataset and mounts the path when an experiment is created.
44
+
45
+ VESSL supports three types of local mounts:
46
+
47
+ * NFS
48
+ * HostPath
49
+ * FlexVolume (e.g. [CIFS mount](tips-and-limitations.md#cifs-mount))
50
+
51
+ <img style={{ borderRadius: '0.5rem' }}
52
+ src="/images/datasets/create/3_nfs.png"
53
+ />
54
+
55
+
56
+ Since **VESSL does not have access to the local dataset**, you cannot browse local dataset files on VESSL.
57
+ </Tab>
58
+ </Tabs>
59
+
60
+ <Note>A detailed integration guide is provided on each **Create Dataset** dialog.</Note>
guides/datasets/manage.md ADDED
@@ -0,0 +1,25 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Manage datasets
3
+ version: EN
4
+ ---
5
+
6
+ Under Datasets, you can view the file tree of your dataset. Here, you can also upload files and create folders.
7
+
8
+ <img style={{ borderRadius: '0.5rem' }}
9
+ src="/images/datasets/manage/files.png"
10
+ />
11
+
12
+ ### Dataset Versioning (Enterprise only)
13
+
14
+ **Dataset Version** is a specific snapshot of a dataset captured at a particular point in time. To enable this feature, check `Enable Versioning` when creating the dataset.
15
+
16
+ A dataset version can be created manually on the `VERSIONS` tab, or automatically when an experiment is created, to provide reproducibility of the experiment. You can also choose the specific dataset version to use when creating an experiment.
17
+
18
+ <Warning>This feature is currently unavailable for AWS S3 or GCS dataset source types.</Warning>
19
+
20
+ #### How it works
21
+
22
+ If you enable versioning when creating a dataset, all dataset files are incrementally saved as blobs. Each version stores the mapping from blobs to actual file paths.
23
+
24
+ Each blob is stored incrementally by hash sum, so the dataset size does not increase even if dataset versions are created frequently, unless the files change.
25
+
guides/datasets/overview.md ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Overview
3
+ version: EN
4
+ ---
5
+
6
+ VESSL **dataset** is a collection of data sourced from local disks or cloud vendors such as AWS S3 and Google Cloud Storage. You can mount the registered dataset to the experiment container in runtime.
7
+
8
+ ### Managing Dataset on Web Console
9
+
10
+ Click **DATASETS** tab to view your organization's datasets.&#x20;
11
+
12
+ <img style={{ borderRadius: '0.5rem' }}
13
+ src="/images/datasets/overview/datasets.png"
14
+ />
15
+
guides/datasets/tips.md ADDED
@@ -0,0 +1,23 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Tips & Limitations
3
+ version: EN
4
+ ---
5
+
6
+ ### CIFS mount
7
+
8
+ VESSL provides the FlexVolume storage type to support CIFS mounts.
9
+
10
+ 1. Install [CIFS FlexVolume plugin](https://github.com/fstab/cifs).&#x20;
11
+ 2. Create a `secret.yml` for the CIFS mount (a sketch is shown below the screenshot).
12
+ 3. Fill in the options in the create dataset dialog.
13
+
14
+ <img style={{ borderRadius: '0.5rem' }}
15
+ src="/images/datasets/tips/cifs.png"
16
+ />
17
+
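+ A minimal sketch of `secret.yml`, based on the fstab/cifs plugin's documentation (the secret name is arbitrary and the credential values are base64-encoded placeholders):
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+   name: cifs-secret
+   namespace: default
+ type: fstab/cifs
+ data:
+   username: '<base64-encoded username>'
+   password: '<base64-encoded password>'
+ ```
+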
18
+ ### Use other mount options not supported by VESSL
19
+
20
+ By using a HostPath mount, you can work around this and use other mount options that VESSL does not support directly.
21
+
22
+ 1. Mount the storage on all host machines at the same path (e.g. `/mnt/s3fs-mnist-data`), as in the sketch below.
23
+ 2. Mount dataset with the HostPath option.
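+
+ For example, a hedged sketch that mounts an S3 bucket with s3fs at the same path on every host machine (the bucket name and credentials are placeholders):
+
+ ```bash
+ # Store AWS credentials for s3fs and restrict permissions
+ echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ~/.passwd-s3fs && chmod 600 ~/.passwd-s3fs
+
+ # Mount the bucket at the same path on every host machine
+ sudo mkdir -p /mnt/s3fs-mnist-data
+ s3fs my-mnist-bucket /mnt/s3fs-mnist-data -o passwd_file=~/.passwd-s3fs
+ ```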
guides/experiments/create.md ADDED
@@ -0,0 +1,125 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Create a new experiment
3
+ ---
4
+
5
+ To create an experiment, first specify a few options such as cluster, resource, image, and start command. Here is an explanation of the config options.
6
+
7
+ <img style={{ borderRadius: '0.5rem' }}
8
+ src="/images/experiment/create/1_experiment.jpeg"
9
+ />
10
+
11
+ ### Cluster & Resource (Required) <a href="#runtime" id="runtime"></a>
12
+
13
+ You can run your experiment on either VESSL's managed cluster or your custom cluster. Start by selecting a cluster.&#x20;
14
+
15
+ <Tabs>
16
+ <Tab title="Managed Cluster">
17
+ Once you select VESSL's managed cluster, you can view a list of available resources under the dropdown menu.
18
+
19
+ <img style={{ borderRadius: '0.5rem' }}
20
+ src="/images/experiment/create/2_cluster-managed.png"
21
+ />
22
+
23
+
24
+ You also have an option to use spot instances.
25
+
26
+ Check out the full list of resource types and corresponding prices:
27
+ </Tab>
28
+
29
+ <Tab title="Custom Cluster">
30
+ Your custom cluster can be either on-premise or on-cloud. For on-premise clusters, you can specify the processor type and resource requirements. The experiment will be assigned automatically to an available node based on the input resource requirements.&#x20;
31
+
32
+ <img style={{ borderRadius: '0.5rem' }}
33
+ src="/images/experiment/create/3_cluster-custom.png"
34
+ />
35
+
36
+ </Tab>
37
+ </Tabs>
38
+
39
+ ### Distribution Mode (Optional) <a href="#image" id="image"></a>
40
+
41
+ You have an option to use multi-node distributed training. The default option is single-node training.&#x20;
42
+
43
+ <img style={{ borderRadius: '0.5rem' }}
44
+ src="/images/experiment/create/4_distributed.png"
45
+ />
46
+
47
+
48
+ ### Image (Required) <a href="#image" id="image"></a>
49
+
50
+ Select the Docker image that the experiment container will use. You can either use a managed image provided by VESSL or your own custom image.&#x20;
51
+
52
+ <Tabs>
53
+ <Tab title="Managed Image">
54
+ Managed images are pre-pulled images provided by VESSL. You can find the available image tags at VESSL's [Amazon ECR Public Gallery](https://gallery.ecr.aws/vessl/kernels).
55
+
56
+ <img style={{ borderRadius: '0.5rem' }}
57
+ src="/images/experiment/create/5_image-managed.png"
58
+ />
59
+ </Tab>
60
+
61
+ <Tab title="Custom Image">
62
+ You can pull your own custom images from either [Docker Hub](https://hub.docker.com) or [Amazon ECR](https://aws.amazon.com/ecr/).&#x20;
63
+
64
+ #### Public Images
65
+
66
+ To pull images from the public Docker registry, simply pass the image URL. The example below demonstrates pulling the official TensorFlow development GPU image from Docker Hub.&#x20;
67
+
68
+ <img style={{ borderRadius: '0.5rem' }}
69
+ src="/images/experiment/create/6_image-custom.png"
70
+ />
71
+
72
+ #### Private Images
73
+
74
+ To pull images from the private Docker registry, you should first integrate your credentials in organization settings.
75
+
76
+ Then, check the private image checkbox, fill in the image URL, and select the credential.
77
+
78
+ <img style={{ borderRadius: '0.5rem' }}
79
+ src="/images/experiment/create/7_image-custom.png"
80
+ />
81
+ </Tab>
82
+ </Tabs>
83
+
84
+ ### Start Command (Required) <a href="#start-command" id="start-command"></a>
85
+
86
+ Specify the start command to run in the experiment container. Write the command with command-line arguments just as if you were using a terminal. You can chain multiple commands with `&&` or separate them with new lines.
87
+
88
+ <img style={{ borderRadius: '0.5rem' }}
89
+ src="/images/experiment/create/8_command.png"
90
+ />
91
+
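+ For example, a start command that chains dependency installation and training might look like the following (the file names and arguments are hypothetical):
+
+ ```bash
+ pip install -r requirements.txt && python main.py --epochs 10
+ ```
+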
92
+ ### Volume (Optional)
93
+
94
+ You can mount the project, dataset, and files to the experiment container.
95
+
96
+ <img style={{ borderRadius: '0.5rem' }}
97
+ src="/images/experiment/create/9_volume.png"
98
+ />
99
+
100
+
101
+ Learn more about volume mount on the following page:
102
+
103
+ ### Hyperparameters
104
+
105
+ You can set hyperparameters as key-value pairs. The given hyperparameters are automatically added to the container as environment variables with the given key and value. A typical experiment will include hyperparameters like `learning_rate` and `optimizer`.&#x20;
106
+
107
+ <img style={{ borderRadius: '0.5rem' }}
108
+ src="/images/experiment/create/10_hyperparam.png"
109
+ />
110
+
111
+ You can also use them at runtime by appending them to the start command as follows.
112
+
113
+ ```bash
114
+ python main.py \
115
+ --learning-rate $learning_rate \
116
+ --optimizer $optimizer
117
+ ```
118
+
119
+ ### Termination Protection&#x20;
120
+
121
+ Checking the termination protection option puts an experiment in the idle state once it completes, so you can still access the container of a finished experiment.
122
+
123
+ <img style={{ borderRadius: '0.5rem' }}
124
+ src="/images/experiment/create/11_termination.png"
125
+ />
guides/experiments/distributed.md ADDED
@@ -0,0 +1,78 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Run distributed training jobs
3
+ ---
4
+
5
+ <Note>Only the PyTorch framework is currently supported for distributed experiments.</Note>
6
+
7
+ ### What is a distributed experiment?
8
+
9
+ A **distributed experiment** is a single machine learning run across multiple nodes or multiple GPUs. Its results consist of logs, metrics, and artifacts for each worker, which you can find under the corresponding tabs.
10
+
11
+ <Warning>Multi-node training is not always an optimal solution. We recommend you try several experiments with a few epochs to see if multi-node training is the correct choice for you.</Warning>
12
+
13
+ #### Environment variables
14
+
15
+ VESSL automatically sets the environment variables below based on the configuration (a usage sketch follows the list).
16
+
17
+ `NUM_NODES`: Number of workers
18
+
19
+ `NUM_TRAINERS`: Number of GPUs per node
20
+
21
+ `RANK`: The global rank of the node
22
+
23
+ `MASTER_ADDR`: The address of the master node service
24
+
25
+ `MASTER_PORT`: The port number on the master address
26
+
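+ A minimal sketch of how these variables can be consumed inside a worker container (assuming PyTorch with the NCCL backend; when you launch through `torch.distributed.launch` as shown later on this page, per-process ranks are derived from these values):
+
+ ```python
+ import os
+ import torch.distributed as dist
+
+ # Values injected by VESSL into each worker container
+ world_size = int(os.environ["NUM_NODES"]) * int(os.environ["NUM_TRAINERS"])
+ print(f"node {os.environ['RANK']} of {os.environ['NUM_NODES']}, "
+       f"master at {os.environ['MASTER_ADDR']}:{os.environ['MASTER_PORT']}")
+
+ # Inside a script started by torch.distributed.launch, the launcher sets the
+ # per-process RANK/WORLD_SIZE, so initialization reduces to:
+ dist.init_process_group(backend="nccl", init_method="env://")
+ ```
+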
27
+ ### Creating a distributed experiment
28
+
29
+ #### Using Web Console
30
+
31
+ Running a distributed experiment on the web console is similar to a single node experiment. To create a distributed experiment, you only need to specify the number of workers. Other options are the same as those of a single node experiment.
32
+
33
+ #### Using CLI
34
+
35
+ To run a distributed experiment using CLI, the number of nodes must be set to an integer greater than one.
36
+
37
+ ```bash
38
+ vessl experiment create --worker-count 2 --framework-type pytorch
39
+ ```
40
+
41
+ ### Examples: Distributed CIFAR
42
+
43
+ You can find the full example codes [here](https://github.com/savvihub/examples/tree/main/distributed\_cifar).
44
+
45
+ #### Step 1: Prepare CIFAR-10 dataset
46
+
47
+ Download the CIFAR dataset with the scripts below. and add a vessl type dataset to your organization.
48
+
49
+ ```bash
50
+ wget -c --quiet https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
51
+ tar -xvzf cifar-10-python.tar.gz
52
+ ```
53
+
54
+ Or, you can simply add an AWS S3 type dataset to your organization with the following public bucket URI.
55
+
56
+ ```
57
+ s3://savvihub-public-apne2/cifar-10
58
+ ```
59
+
60
+ #### Step 2: Create a distributed experiment
61
+
62
+ To run a distributed experiment, we recommend using the [`torch.distributed.launch`](https://pytorch.org/docs/stable/distributed.html) package. An example start command that runs on two nodes with one GPU per node is as follows.
63
+
64
+ ```
65
+ python -m torch.distributed.launch \
66
+ --nnodes=$NUM_NODES \
67
+ --nproc_per_node=$NUM_TRAINERS \
68
+ --node_rank=$RANK \
69
+ --master_addr=$MASTER_ADDR \
70
+ --master_port=$MASTER_PORT \
71
+ examples/distributed_cifar/pytorch/main.py
72
+ ```
73
+
74
+ VESSL automatically sets the environment variables used for `--node_rank`, `--master_addr`, `--master_port`, `--nproc_per_node`, and `--nnodes`.
75
+
76
+ ### Files
77
+
78
+ In a distributed experiment, all workers share the output storage. Be aware that files can be overwritten by other workers when you use the same output path.
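+
+ A simple way to avoid collisions is to include the worker's rank in the output path (this assumes your script accepts an output-directory argument and that the output volume is mounted at `/output`):
+
+ ```bash
+ python main.py --output-dir /output/worker-$RANK
+ ```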
guides/experiments/local.md ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Run local experiments
3
+ ---
4
+
5
+ # Local Experiments
6
+
7
+ On VESSL, you can also monitor experiments that you have run locally. This can easily be done by adding a few lines to your code.
8
+
9
+ Start monitoring your experiment using [`vessl.init`](../../api-reference/python-sdk/utils/vessl.init.md) . This will launch a new experiment under your project. You can view the experiment output under [**LOGS**](experiment-results.md#logs), just like you would in a VESSL-managed experiment. Your local environment's system metrics are also monitored and can be viewed under [**PLOTS**](experiment-results.md#plots).
10
+
11
+ <img style={{ borderRadius: '0.5rem' }}
12
+ src="/images/experiment/local/metrics.png"
13
+ />
14
+
15
+ In a VESSL-managed experiment, files under the output volume are saved by default. In a local experiment, you can use [`vessl.upload`](../../api-reference/python-sdk/utils/vessl.upload.md) to upload any output files. You can view these files under [**FILES**](experiment-results.md#files).
16
+
17
+ By default, VESSL will stop monitoring your local experiment when your program exits. If you wish to stop it manually, you can use [`vessl.finish`](../../api-reference/python-sdk/utils/vessl.finish.md).
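+
+ Putting these together, here is a minimal sketch of a locally monitored run (the training step is a placeholder and SDK call signatures may differ slightly across versions):
+
+ ```python
+ import vessl
+
+ vessl.init()  # start monitoring; creates a new experiment under your project
+
+ for step in range(100):
+     loss = 1.0 / (step + 1)  # placeholder for your real training loop
+     vessl.log(step=step, payload={"train_loss": loss})
+
+ vessl.upload("./outputs")  # upload output files produced locally
+ vessl.finish()             # stop monitoring explicitly (optional)
+ ```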
guides/experiments/manage.md ADDED
@@ -0,0 +1,62 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Manage experiments
3
+ ---
4
+
5
+ Under the experiments page, you can view the details of each experiment such as experiment status and logs. Here, you can also terminate or reproduce experiments.&#x20;
6
+
7
+ <img style={{ borderRadius: '0.5rem' }}
8
+ src="/images/experiment/manage/1_actions.png"
9
+ />
10
+
11
+ ### Experiment Status
12
+
13
+ | Type | Description |
14
+ | ------------- | --------------------------------------------------------------------------------------------------------------------- |
15
+ | **Pending** | An experiment is created with a pending status until the experiment node is ready. (VESSL-managed experiment only) |
16
+ | **Running** | The experiment is running. |
17
+ | **Completed** | The experiment has successfully finished (exited in 0). |
18
+ | **Idle** | The experiment is completed but still approachable due to the termination protection. (VESSL-managed experiment only) |
19
+ | **Failed** | The experiment has unsuccessfully finished. |
20
+
21
+ <Note>VESSL-managed experiments' status depends on its [Kubernetes pod lifecycle](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/).</Note>
22
+
23
+ <Note>To track the progress of your running experiment, use [`vessl.progress`](../../api-reference/python-sdk/utils/vessl.progress.md). VESSL will calculate the remaining running time, which you can view by hovering over the status mark.</Note>
24
+
25
+ ### Experiment Terminal
26
+
27
+ If you activate the **TERMINAL**, you can access the experiment container over SSH through a web terminal. You can attach the terminal directly to the experiment process or open a new shell.
28
+
29
+ #### Attaching to the experiment process
30
+
31
+ By attaching directly to the experiment process, you can view the same logs displayed on the Web Console under the **LOGS** tab. You can also issue commands, such as interrupting the process.
32
+
33
+ #### Creating a new shell
34
+
35
+ Opening a new SSH terminal allows you to navigate the experiment container to see where the datasets or projects are mounted.
36
+
37
+ ### Reproducing Experiments
38
+
39
+ One of the great features of VESSL is that all experiments can be reproduced. VESSL keeps track of all experiment configurations, including the dataset snapshot and source code version, and allows you to reproduce any experiment with a single click. You can reproduce experiments either on the Web Console or via the VESSL CLI.
40
+
41
+ <img style={{ borderRadius: '0.5rem' }}
42
+ src="/images/experiment/manage/2_reproduce.png"
43
+ />
44
+
45
+ ### Terminating Experiments
46
+
47
+ You can stop running the experiment and delete the experiment pod.
48
+
49
+ ### Unpushed Changes
50
+
51
+ A warning titled **UNPUSHED CHANGES** will appear in the experiment details if you run an experiment through the CLI without pushing your local changes to GitHub. To resolve this, download the `.patch` file containing the `git diff` and apply it by running the following commands.
52
+
53
+ ```
54
+ # Change directory to your project
55
+ cd path/to/project
56
+
57
+ # Checkout your recent commit with SHA
58
+ git checkout YOUR_RECENT_COMMIT_SHA
59
+
60
+ # Apply .patch file to the commit
61
+ git apply your_git_diff.patch
62
+ ```
guides/experiments/monitor.md ADDED
@@ -0,0 +1,53 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Monitor experiment results
3
+ ---
4
+
5
+ ### Experiment Summary
6
+
7
+ Under **SUMMARY**, you can view all the experiment configurations, such as environment variables, the CLI command for quick reproduction, the Docker image, and the resource specification.
8
+
9
+ <img style={{ borderRadius: '0.5rem' }}
10
+ src="/images/experiment/monitor/1_summary.png"
11
+ />
12
+
13
+ ### Logs
14
+
15
+ Under **LOGS**, you can monitor the logs from the experiment Docker container including status updates and `print()` statements.
16
+
17
+ <img style={{ borderRadius: '0.5rem' }}
18
+ src="/images/experiment/monitor/2_logs.png"
19
+ />
20
+
21
+ ### Plots
22
+
23
+ <Note>You need to use VESSL's Python SDK to view metrics or multimedia files.&#x20;</Note>
24
+
25
+ Under **PLOTS**, you can view key metrics for your experiments such as accuracy and loss. You can also filter out outliers by checking the **ignore outlier** box and adjusting the **smoothing** of the curves.
26
+
27
+ <img style={{ borderRadius: '0.5rem' }}
28
+ src="/images/experiment/monitor/3_plots.png"
29
+ />
30
+
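+ A minimal logging sketch with the Python SDK (the values are placeholders from a hypothetical training loop):
+
+ ```python
+ import vessl
+
+ for epoch in range(3):
+     vessl.log(step=epoch, payload={"loss": 1.0 / (epoch + 1), "accuracy": 0.7 + 0.1 * epoch})
+ ```
+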
31
+ #### Multimedia
32
+
33
+ You can also view images and configure the number of displayed images using VESSL's Python SDK.
34
+
35
+ <img style={{ borderRadius: '0.5rem' }}
36
+ src="/images/experiment/monitor/4_media.png"
37
+ />
38
+
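+ A hedged sketch of image logging; `vessl.Image` and its arguments are assumptions based on the SDK reference, so adjust them to your SDK version:
+
+ ```python
+ import vessl
+
+ # "sample.png" is a hypothetical local file
+ vessl.log(payload={"predictions": [vessl.Image("sample.png", caption="epoch 1")]})
+ ```
+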
39
+ #### System Metrics
40
+
41
+ You can monitor system metrics such as CPU, GPU, memory, disk, and network usage.
42
+
43
+ <img style={{ borderRadius: '0.5rem' }}
44
+ src="/images/experiment/monitor/5_metrics.png"
45
+ />
46
+
47
+ ### Files
48
+
49
+ Under **FILES**, you can navigate and download the output and input files. You can also do this using the VESSL CLI.
50
+
51
+ <img style={{ borderRadius: '0.5rem' }}
52
+ src="/images/experiment/monitor/6_files.png"
53
+ />
guides/experiments/overview.md ADDED
@@ -0,0 +1,13 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Overview
3
+ ---
4
+
5
+ An **experiment** is a single machine learning run in a project with a specific dataset. The experiment results consist of logs, metrics, and artifacts, which you can find under corresponding tabs.
6
+
7
+ ### Managing Experiments on Web Console
8
+
9
+ You can find a list of experiments under each project page. You can delve into the details of an experiment by clicking its name.
10
+
11
+ <img style={{ borderRadius: '0.5rem' }}
12
+ src="/images/experiment/overview/experiments.png"
13
+ />
guides/get-started/gpu-notebook.md ADDED
@@ -0,0 +1,160 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: GPU-accelerated Notebook
3
+ description: Launch a Jupyter Notebook server with an SSH connection
4
+ icon: "circle-1"
5
+ version: EN
6
+ ---
7
+
8
+ This example deploys a Jupyter Notebook server. You will also learn how you can connect to the server on VS Code or an IDE of your choice.
9
+
10
+ <CardGroup cols={2}>
11
+ <Card icon="sparkles" title="Try it on VESSL Hub" href="https://vessl.ai/hub/jupyter-notebook">
12
+ Try out the Quickstart example with a single click on VESSL Hub.
13
+ </Card>
14
+
15
+ <Card icon="github" title="See the final code" href="https://github.com/vessl-ai/hub-model/">
16
+ See the completed YAML file and final code for this example.
17
+ </Card>
18
+ </CardGroup>
19
+
20
+ ## What you will do
21
+
22
+ <img
23
+ style={{ borderRadius: '0.5rem' }}
24
+ src="/images/get-started/quickstart-title.png"
25
+ />
26
+
27
+ - Launch a GPU-accelerated interactive workload
28
+ - Set up a Jupyter Notebook
29
+ - Use SSH to connect to the workload
30
+
31
+
32
+ ## Writing the YAML
33
+
34
+ Let's fill in the `notebook.yml` file.
35
+
36
+ <Steps titleSize="h3">
37
+ <Step title="Spin up a workload">
38
+ Let's repeat the steps from [Quickstart](quickstart.mdx) and define the compute resource and runtime environment for our workload. Again, we will use the L4 instance from our managed cloud and the latest PyTorch container from NVIDIA NGC.
39
+
40
+ ```yaml
41
+ name: Jupyter notebook
42
+ description: A Jupyter Notebook server with an SSH connection
43
+ resources:
44
+ cluster: vessl-gcp-oregon
45
+ preset: gpu-l4-small
46
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
47
+ ```
48
+ </Step>
49
+
50
+ <Step title="Configure an interactive run">
51
+ By default, workloads launched with VESSL Run are batch jobs like the one we launched in our Quickstart example. On the other hand, interactive runs are essentially virtual machines running on GPUs for live interaction with your models and datasets.
52
+
53
+ You can enable this with the `interactive` key, followed by the `jupyter` key. Interactive runs come with a default idle-culler field, which automatically shuts down notebook servers when they have not been used for a certain period.
54
+
55
+ `max_runtime` works with `idle_timeout` as an additional measure to prevent resource overuse.
56
+
57
+ ```yaml
58
+ name: Jupyter notebook
59
+ description: A Jupyter Notebook server with an SSH connection
60
+ resources:
61
+ cluster: vessl-gcp-oregon
62
+ preset: gpu-l4-small
63
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
64
+ interactive:
65
+ jupyter:
66
+ idle_timeout: 120m
67
+ max_runtime: 24h
68
+ ```
69
+ </Step>
70
+ </Steps>
71
+
72
+ ## Running the workload
73
+
74
+ Now that we have a completed YAML, we can once again run the workload with `vessl run`.
75
+
76
+ ```
77
+ vessl run create -f notebook.yml
78
+ ```
79
+
80
+ <img
81
+ style={{ borderRadius: '0.5rem' }}
82
+ src="/images/get-started/quickstart-notebook.png"
83
+ />
84
+
85
+ Running the command will verify your YAML and show you the current status of the workload. Click the output link in your terminal to see the full details and real-time logs of the Run on the web. Click Jupyter under Connect to launch a notebook.
86
+
87
+ <img
88
+ style={{ borderRadius: '0.5rem' }}
89
+ src="/images/get-started/quickstart-workspace.jpeg"
90
+ />
91
+
92
+ <img
93
+ style={{ borderRadius: '0.5rem' }}
94
+ src="/images/get-started/quickstart-jupyter.jpeg"
95
+ />
96
+
97
+ ## Create an SSH connection
98
+
99
+ An interactive run is essentially a GPU-accelerated workload on a cloud with a port and an endpoint for live interactions. This means you can access the remote workload using SSH.
100
+
101
+ 1. First, get an SSH key pair.
102
+ ```
103
+ ssh-keygen -t ed25519 -C "vesslai"
104
+ ```
105
+ <img
106
+ style={{ borderRadius: '0.5rem' }}
107
+ src="/images/get-started/gpu-notebook_ssh-keygen.png"
108
+ />
109
+
110
+ 2. Add the generated key to your account.
111
+ ```
112
+ vessl ssh-key add
113
+ ```
114
+ <img
115
+ style={{ borderRadius: '0.5rem' }}
116
+ src="/images/get-started/gpu-notebook_ssh-add.png"
117
+ />
118
+
119
+ 3. Connect via SSH.
120
+ Use the workload address from the Run Summary page to connect. You are ready to use [VS Code](https://code.visualstudio.com/docs/remote/ssh) or an IDE of your choice for remote development.
121
+
122
+ <img
123
+ style={{ borderRadius: '0.5rem' }}
124
+ src="/images/get-started/gpu-notebook_ssh-info.jpeg"
125
+ />
126
+
127
+ ```
128
+ ssh -p 22 <user>@<workload-address>
129
+ ```
130
+
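+ To make the connection easier to reuse (for example with VS Code Remote-SSH), you can add a host entry to `~/.ssh/config`. The host name, user, and address below are placeholders; use the values shown on the Run Summary page:
+
+ ```
+ Host vessl-notebook
+     HostName <workload-address>
+     Port 22
+     User <user>
+     IdentityFile ~/.ssh/id_ed25519
+ ```
+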
131
+
132
+ ## Tips & tricks
133
+
134
+ Keep in mind that GPUs are fully dedicated to a notebook server -- and therefore consume VESSL credits -- even when you are not running compute-intensive cells. To optimize GPU usage, use tools like [nbconvert](https://nbconvert.readthedocs.io/en/latest/usage.html#executable-script) to convert the notebook into a Python file, or package it as a Python container and run it as a batch job.
135
+
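+ For example, assuming a notebook named `notebook.ipynb`, you can convert it into a plain Python script with:
+
+ ```bash
+ jupyter nbconvert --to script notebook.ipynb
+ ```
+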
136
+ You can also mount volumes to interactive workloads by defining `import`, and then reference the mounted files or datasets from your notebook, as in the snippet below.
137
+
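+ For example, adding an `import` block like the following to `notebook.yml` mounts a Hugging Face dataset into the notebook container (the dataset shown here is the one used in the Llama 2 example):
+
+ ```yaml
+ import:
+   /dataset/: hf://huggingface.co/datasets/VESSL/code_instructions_small_alpaca
+ ```
+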
138
+ ## Using our web interface
139
+
140
+ You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a New run.
141
+
142
+ <iframe
143
+ src="https://scribehow.com/embed/GPU-accelerated_notebook__KXkgSEfXS_2wPYbjRCDpRw?skipIntro=true&removeLogo=true"
144
+ width="100%" height="640" allowfullscreen frameborder="0"
145
+ style={{ borderRadius: '0.5rem' }} >
146
+ </iframe>
147
+
148
+ ## What's next?
149
+
150
+ Next, let's see how you use our interactive workloads to host a web app on the cloud using tools like Streamlit and Gradio.
151
+
152
+ <CardGroup cols={2}>
153
+ <Card title="Stable Diffusion Playground" href="get-started/stable-diffusion">
154
+ Launch an interactive web application for Stable Diffusion
155
+ </Card>
156
+
157
+ <Card title="Mistral-7B Playground" href="get-started/stable-diffusion">
158
+ Launch a text-generation Streamlit app powered by Mistral-7B
159
+ </Card>
160
+ </CardGroup>
guides/get-started/llama2.md ADDED
@@ -0,0 +1,191 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Llama 2 Fine-tuning
3
+ description: Fine-tune Llama2-7B with instruction datasets
4
+ icon: "circle-3"
5
+ version: EN
6
+ ---
7
+
8
+ This example fine-tunes Llama2-7B with a code instruction dataset, illustrating how VESSL AI offloads the infrastructural challenges of large-scale AI workloads and helps you train multi-billion-parameter models in hours, not weeks.
9
+
10
+ This is the most compute-intensive workload yet but you will see how VESSL AI's efficient training stack enables you to seamlessly scale and execute multi-node training. For a more in-depth guide, refer to our [blog post](https://blog.vessl.ai/ai-infrastructure-llm).
11
+
12
+ <CardGroup cols={2}>
13
+ <Card icon="sparkles" title="Try it on VESSL Hub" href="https://vessl.ai/hub/ssd-1b-inference">
14
+ Try out the Quickstart example with a single click on VESSL Hub.
15
+ </Card>
16
+
17
+ <Card icon="github" title="See the final code" href="https://github.com/vessl-ai/hub-model/tree/main/llama2-finetuning">
18
+ See the completed YAML file and final code for this example.
19
+ </Card>
20
+ </CardGroup>
21
+
22
+ ## What you will do
23
+
24
+ <img
25
+ style={{ borderRadius: '0.5rem' }}
26
+ src="/images/get-started/llama2-title.png"
27
+ />
28
+
29
+ - Fine-tune an LLM with zero-to-minimum setup
30
+ - Mount a custom dataset
31
+ - Store and export model artifacts
32
+
33
+ ## Writing the YAML
34
+
35
+ Let's fill in the `llama2_fine-tuning.yml` file.
36
+
37
+ <Steps titleSize="h3">
38
+ <Step title="Spin up a training job">
39
+ Let's spin up an instance. Nothing new here.
40
+
41
+ ```yaml
42
+ name: Llama2-7B fine-tuning
43
+ description: Fine-tune Llama2-7B with instruction datasets
44
+ resources:
45
+ cluster: vessl-gcp-oregon
46
+ preset: gpu-l4-small
47
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
48
+ ```
49
+ </Step>
50
+
51
+ <Step title="Mount the code, modal, and dataset">
52
+ Here, in addition to our GitHub repo and Hugging Face model, we are also mounting a Hugging Face dataset.
53
+
54
+ As with our HF model, mounting data is as simple as referencing the URL beginning with the `hf://` scheme -- the same goes for other cloud storage, for example `s3://` for Amazon S3.
55
+
56
+ ```yaml
57
+ name: llama2-finetuning
58
+ description: Fine-tune Llama2-7B with instruction datasets
59
+ resources:
60
+ cluster: vessl-gcp-oregon
61
+ preset: gpu-l4-small
62
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
63
+ import:
64
+ /model/: hf://huggingface.co/VESSL/llama2
65
+ /code/:
66
+ git:
67
+ url: https://github.com/vessl-ai/hub-model
68
+ ref: main
69
+ /dataset/: hf://huggingface.co/datasets/VESSL/code_instructions_small_alpaca
70
+ ```
71
+ </Step>
72
+
73
+ <Step title="Write the run commands">
74
+ Now that we have the three pillars of model development mounted on our remote workload, we are ready to define the run command. Let's install additional Python dependencies and run `finetuning.py` -- which picks up our HF model and dataset from the `config.yaml` file.
75
+
76
+ ```yaml
77
+ name: llama2-finetuning
78
+ description: Fine-tune Llama2-7B with instruction datasets
79
+ resources:
80
+ cluster: vessl-gcp-oregon
81
+ preset: gpu-l4-small
82
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
83
+ import:
84
+ /model/: hf://huggingface.co/VESSL/llama2
85
+ /code/:
86
+ git:
87
+ url: https://github.com/vessl-ai/hub-model
88
+ ref: main
89
+ /dataset/: hf://huggingface.co/datasets/VESSL/code_instructions_small_alpaca
90
+ run:
91
+ - command: |-
92
+ pip install -r requirements.txt
93
+ python finetuning.py
94
+ workdir: /code/llama2-finetuning
95
+ ```
96
+ </Step>
97
+
98
+ <Step title="Export a model artifact">
99
+ You can keep track of model checkpoints by dedicating an `export` volume to the workload. After training finishes, trained models are uploaded to the `/artifacts/` folder as model checkpoints.
100
+
101
+ ```yaml
102
+ name: llama2-finetuning
103
+ description: Fine-tune Llama2-7B with instruction datasets
104
+ resources:
105
+ cluster: vessl-gcp-oregon
106
+ preset: gpu-l4-small
107
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
108
+ import:
109
+ /model/: hf://huggingface.co/VESSL/llama2
110
+ /code/:
111
+ git:
112
+ url: https://github.com/vessl-ai/hub-model
113
+ ref: main
114
+ /dataset/: hf://huggingface.co/datasets/VESSL/code_instructions_small_alpaca
115
+ run:
116
+ - command: |-
117
+ pip install -r requirements.txt
118
+ python finetuning.py
119
+ workdir: /code/llama2-finetuning
120
+ export:
121
+ /artifacts/: vessl-artifact://
122
+ ```
123
+ </Step>
124
+ </Steps>
125
+
126
+ ## Running the workload
127
+
128
+ Run the workload; once it is completed, you can follow the link in the terminal to find the output files, including the model checkpoints, under Files.
129
+
130
+ ```
131
+ vessl run create -f llama2_fine-tuning.yml
132
+ ```
133
+
134
+ <img
135
+ style={{ borderRadius: '0.5rem' }}
136
+ src="/images/get-started/llama2-artifacts.jpeg"
137
+ />
138
+
139
+ ## Behind the scenes
140
+
141
+ With VESSL AI, you can launch a full-scale LLM fine-tuning workload on any cloud, at any scale, without worrying about these underlying system backends.
142
+
143
+ * **Model checkpointing** — VESSL AI stores .pt files to mounted volumes or model registry and ensures seamless checkpointing of fine-tuning progress.
144
+ * **GPU failovers** — VESSL AI can autonomously detect GPU failures, recover failed containers, and automatically re-assign workload to other GPUs.
145
+ * **Spot instances** — Spot instance on VESSL AI works with model checkpointing and export volumes, saving and resuming the progress of interrupted workloads safely.
146
+ * **Distributed training** — VESSL AI comes with native support for PyTorch `DistributedDataParallel` and simplifies the process for setting up multi-cluster, multi-node distributed training.
147
+ * **Autoscaling** — As more GPUs are released from other tasks, you can dedicate more GPUs to fine-tuning workloads. On VESSL AI, this takes only a small addition to your existing fine-tuning YAML.
148
+
149
+ ## Tips & tricks
150
+
151
+ In addition to the model checkpoints, you can track key metrics and parameters with `vessl.log` Python SDK. Here's a snippet from [finetuning.py](https://github.com/vessl-ai/hub-model/blob/a74e87564d0775482fe6c56ff811bd8a9821f809/llama2-finetuning/finetuning.py#L97-L109).
152
+
153
+ ```python
154
+ class VesslLogCallback(TrainerCallback):
155
+ def on_log(self, args, state, control, logs=None, **kwargs):
156
+ if "eval_loss" in logs.keys():
157
+ payload = {
158
+ "eval_loss": logs["eval_loss"],
159
+ }
160
+ vessl.log(step=state.global_step, payload=payload)
161
+ elif "loss" in logs.keys():
162
+ payload = {
163
+ "train_loss": logs["loss"],
164
+ "learning_rate": logs["learning_rate"],
165
+ }
166
+ vessl.log(step=state.global_step, payload=payload)
167
+ ```
168
+
169
+ ## Using our web interface
170
+
171
+ You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a New run.
172
+
173
+ <iframe
174
+ src="https://scribehow.com/embed/Llama_2_Fine-tuning_with_VESSL_AI__3UJWTqUgTguq1vYrjNu9MA?skipIntro=true&removeLogo=true"
175
+ width="100%" height="640" allowfullscreen frameborder="0"
176
+ style={{ borderRadius: '0.5rem' }} >
177
+ </iframe>
178
+
179
+ ## What's next?
180
+
181
+ We shared how you can use VESSL AI to go from a simple Python container to a full-scale AI workload. We hope these guides give you a glimpse of what you can achieve with VESSL AI. For more resources, follow along with our example models or use cases.
182
+
183
+ <CardGroup cols={2}>
184
+ <Card icon="wand" title="Explore more models" href="https://vessl.ai/hub">
185
+ See VESSL AI in action with the latest open-source models and our example Runs.
186
+ </Card>
187
+
188
+ <Card icon="rectangles-mixed" title="Explore more use cases" href="use-cases/">
189
+ See the top use cases of VESSL AI, from experiment tracking to cluster management.
190
+ </Card>
191
+ </CardGroup>
guides/get-started/overview.md ADDED
@@ -0,0 +1,109 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Overview
3
+ description: Launch and scale AI workloads without the hassle of managing infrastructure
4
+ icon: "hand-wave"
5
+ version: EN
6
+ ---
7
+
8
+ ## VESSL AI -- Purpose-built cloud for AI
9
+
10
+ VESSL AI provides a unified interface for training and deploying AI models on the cloud. Simply define your GPU resource and point to your code & dataset. VESSL AI does the orchestration & heavy lifting for you:
11
+ 1. Create a GPU-accelerated container with the right Docker Image.
12
+ 2. Mount your code and dataset from GitHub, Hugging Face, Amazon S3, and more.
13
+ 3. Launch the workload on our fully managed GPU cloud.
14
+
15
+ <CardGroup cols={2}>
16
+ <Card title="On any cloud, at any scale" href="#">
17
+ Instantly scale workloads across multiple clouds.
18
+ <br/>
19
+ <img className="rounded-md" src="/images/get-started/overview-cloud.png" />
20
+ </Card>
21
+
22
+ <Card title="Streamlined interface" href="#">
23
+ Launch any AI workloads with a unified YAML definition.
24
+ <br/>
25
+ <img className="rounded-md" src="/images/get-started/overview-yaml.png" />
26
+ </Card>
27
+
28
+ <Card title="End-to-end coverage" href="#">
29
+ A single platform for fine-tuning to deployment.
30
+ <br/>
31
+ <img className="rounded-md" src="/images/get-started/overview-pipeline.png" />
32
+ </Card>
33
+
34
+ <Card title="A centralized compute platform" href="#">
35
+ Optimize GPU usage and save up to 80% in cloud.
36
+ <br/>
37
+ <img className="rounded-md" src="/images/get-started/overview-gpu.png" />
38
+ </Card>
39
+ </CardGroup>
40
+
41
+ ## What can you do?
42
+
43
+ - Run compute-intensive AI workloads remotely within seconds.
44
+ - Fine-tune LLMs with distributed training and auto-failover with zero-to-minimum setup.
45
+ - Scale training and inference workloads horizontally.
46
+ - Deploy an interactive web application for your model.
47
+ - Serve your AI models as web endpoints.
48
+
49
+ ## How to get started
50
+
51
+ Head over to [vessl.ai](https://vessl.ai) and sign up for a free account. No `docker build` or `kubectl get`.
52
+ 1. Create your account at [vessl.ai](https://vessl.ai) and get $30 in free GPU credits.
53
+ 2. Install our Python package — `pip install vessl`.
54
+ 3. Follow our [Quickstart](/get-started/quickstart) guide or try out our example models at [VESSL Hub](https://vessl.ai/hub).
55
+
56
+ ## How does it work?
57
+
58
+ VESSL AI abstracts the obscure infrastructure and complex backends inherent to launching AI workloads into a simple YAML file, so you don't have to mess with AWS, Kubernetes, Docker, or more. Here's an example that launches a web app for Stable Diffusion.
59
+
60
+ ```yaml
61
+ resources:
62
+ cluster: vessl-gcp-oregon
63
+ preset: gpu-l4-small
64
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
65
+ import:
66
+ /code/:
67
+ git:
68
+ url: https://github.com/vessl-ai/hub-model
69
+ ref: main
70
+ /model/: hf://huggingface.co/VESSL/SSD-1B
71
+ run:
72
+ - command: |-
73
+ pip install -r requirements.txt
74
+ streamlit run ssd_1b_streamlit.py --server.port=80
75
+ workdir: /code/SSD-1B
76
+ interactive:
77
+ max_runtime: 24h
78
+ jupyter:
79
+ idle_timeout: 120m
80
+ ports:
81
+ - name: streamlit
82
+ type: http
83
+ port: 80
84
+
85
+ ```
86
+
87
+ With every YAML file, you are creating a VESSL Run. A VESSL Run is the atomic unit of VESSL AI: a single Kubernetes-backed AI workload. You can use our YAML definition as you progress through the AI lifecycle, from checkpointing models for fine-tuning to exposing ports for inference.
88
+
89
+ ## What's next?
90
+
91
+ See VESSL AI in action with our example Runs and pre-configured open-source models.
92
+
93
+ <CardGroup cols={2}>
94
+ <Card title="Quickstart – Hello, world!" href="get-started/quickstart">
95
+ Fine-tune Llama2-7B with a code instructions dataset.
96
+ </Card>
97
+
98
+ <Card title="GPU-accelerated notebook" href="get-started/gpu-notebook">
99
+ Launch a GPU-accelerated Streamlit app of Mistral 7B.
100
+ </Card>
101
+
102
+ <Card title="SSD-1B Playground" href="get-started/stable-diffusion">
103
+ Interactive playground of a lighter and faster version for Stable Diffusion XL.
104
+ </Card>
105
+
106
+ <Card title="Llama2-7B Fine-tuning" href="get-started/llama2">
107
+ Translate audio snippets into text on a Streamlit playground.
108
+ </Card>
109
+ </CardGroup>
guides/get-started/quickstart.md ADDED
@@ -0,0 +1,192 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Quickstart – Hello, world!
3
+ description: Launch a barebone GPU-accelerated workload
4
+ icon: "circle-play"
5
+ version: EN
6
+ ---
7
+
8
+ This quickstart deploys a barebone GPU-accelerated workload, a Python container that prints `Hello, world!`. It illustrates the basic components of a single run and how you can deploy one.
9
+
10
+ <CardGroup cols={2}>
11
+ <Card icon="sparkles" title="Try it on VESSL Hub" href="https://vessl.ai/hub/create-your-own">
12
+ Try out the Quickstart example with a single click on VESSL Hub.
13
+ </Card>
14
+
15
+ <Card icon="github" title="See the final code" href="https://github.com/vessl-ai/hub-model/tree/main/quickstart">
16
+ See the completed YAML file and final code for this example.
17
+ </Card>
18
+ </CardGroup>
19
+
20
+ ## What you will do
21
+
22
+ <img
23
+ style={{ borderRadius: '0.5rem' }}
24
+ src="/images/get-started/quickstart-title.png"
25
+ />
26
+
27
+ - Launch a GPU-accelerated workload
28
+ - Set up a runtime for your model
29
+ - Mount a Git codebase
30
+ - Run a simple command
31
+
32
+ ## Installation & setup
33
+
34
+ After creating a free account at [vessl.ai](https://vessl.ai), install our Python package and authenticate with your API token. Set the primary Organization and Project for your account, and let's get going.
35
+
36
+ ```bash
37
+ pip install --upgrade vessl
38
+ vessl configure
39
+ ```
40
+
41
+ ## Writing the YAML
42
+
43
+ Launching a workload on VESSL AI begins with writing a YAML file. Our quickstart YAML is in four parts:
44
+
45
+ - Compute resource -- typically in terms of GPUs -- this is defined under `resources`
46
+ - Runtime environment that points to a Docker Image -- this is defined under `image`
47
+ - Input & output for code, dataset, and others defined under `import` & `export`
48
+ - Run commands executed inside the workload as defined under `run`
49
+
50
+ Let's start by creating `quickstart.yml` and define the key-value pairs one by one.
51
+
52
+ <Steps titleSize="h3">
53
+ <Step title="Spin up a compute instance">
54
+ `resources` defines the hardware specs you will use for your run. Here's an example that uses our managed cloud to launch an L4 instance.
55
+
56
+ You can see the full list of compute options and their string values for `preset` under [Resources](/resources/price). Later, you will be able to add and launch workloads on your private cloud or on-premise clusters simply by changing the value for `cluster`.
57
+
58
+ ```yaml
59
+ resources:
60
+ cluster: vessl-gcp-oregon
61
+ preset: gpu-l4-small
62
+ ```
63
+ </Step>
64
+
65
+ <Step title="Configure a runtime environment">
66
+ VESSL AI uses Docker images to define a runtime. We provide a set of base images as a starting point but you can also bring your own custom Docker images by referencing hosted images on Docker Hub or Red Hat Quay.
67
+
68
+ You can find the full list of Images and the dependencies for the latest build [here](https://quay.io/repository/vessl-ai/kernels?tab=tags&tag=py39-202310120824). Here, we'll use the latest go-to PyTorch container from NVIDIA NGC.
69
+
70
+ ```yaml
71
+ resources:
72
+ cluster: vessl-gcp-oregon
73
+ preset: gpu-l4-small
74
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
75
+ ```
76
+ </Step>
77
+
78
+ <Step title="Mount a GitHub repository">
79
+ Under `import`, you can mount models, codebases, and datasets from sources like GitHub, Hugging Face, Amazon S3, and more.
80
+
81
+ In this example, we are mounting a GitHub repo to `/code/` folder in our container. You can switch to different versions of code by changing `ref` to specific branches like `dev`.
82
+
83
+ ```yaml
84
+ resources:
85
+ cluster: vessl-gcp-oregon
86
+ preset: gpu-l4-small
87
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
88
+ import:
89
+ /code/:
90
+ git:
91
+ url: https://github.com/vessl-ai/hub-model
92
+ ref: main
93
+ ```
94
+ </Step>
95
+
96
+ <Step title="Write a run command">
97
+ Now that we defined the specifications of the compute resource and the runtime environment for our workload, let's run [`main.py`](https://github.com/vessl-ai/hub-model/blob/main/quickstart/main.py).
98
+
99
+ We can do this by defining `command` under `run` and specifying the working directory `workdir`.
100
+
101
+ ```yaml
102
+ resources:
103
+ cluster: vessl-gcp-oregon
104
+ preset: gpu-l4-small
105
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
106
+ import:
107
+ /code/:
108
+ git:
109
+ url: https://github.com/vessl-ai/hub-model
110
+ ref: main
111
+ run:
112
+ - command: |
113
+ python main.py
114
+ workdir: /code/quickstart
115
+ ```
116
+ </Step>
117
+
118
+ <Step title="Add metadata">
119
+ Finally, let's polish up by giving our Run a name and description. Here's the completed `.yml`:
120
+
121
+ ```yaml
122
+ name: Quickstart
123
+ description: A barebone GPU-accelerated workload
124
+ resources:
125
+ cluster: vessl-gcp-oregon
126
+ preset: gpu-l4-small
127
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
128
+ import:
129
+ /code/:
130
+ git:
131
+ url: https://github.com/vessl-ai/hub-model
132
+ ref: main
133
+ run:
134
+ - command: |
135
+ python main.py
136
+ workdir: /code/quickstart
137
+ ```
138
+ </Step>
139
+ </Steps>
140
+
141
+ ## Running the workload
142
+
143
+ Now that we have a completed YAML, we can run the workload with `vessl run`.
144
+
145
+ ```
146
+ vessl run create -f quickstart.yml
147
+ ```
148
+
149
+ <img
150
+ style={{ borderRadius: '0.5rem' }}
151
+ src="/images/get-started/quickstart-run.png"
152
+ />
153
+
154
+ Running the command will verify your YAML and show you the current status of the workload. Click the output link in your terminal to see the full details and real-time logs of the Run on the web, including the result of the run command.
155
+
156
+ <img
157
+ style={{ borderRadius: '0.5rem' }}
158
+ src="/images/get-started/quickstart-result.jpeg"
159
+ />
160
+
161
+ ## Behind the scenes
162
+
163
+ When you `vessl run`, VESSL AI performs the following as defined in `quickstart.yml`:
164
+
165
+ 1. Launch an empty Python container on the cloud with 1 NVIDIA L4 Tensor Core GPU.
166
+ 2. Configure the runtime with the CUDA-enabled PyTorch 2.1.0 image.
167
+ 3. Mount a GitHub repo and set the working directory.
168
+ 4. Execute `main.py` and print `Hello, world!`.
169
+
170
+ ## Using our web interface
171
+
172
+ You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a New run.
173
+
174
+ <iframe
175
+ src="https://scribehow.com/embed/Quickstart__Hello_world__BzwSynUuQI-0R3mMGwJzHw?skipIntro=true&removeLogo=true"
176
+ width="100%" height="640" allowfullscreen frameborder="0"
177
+ style={{ borderRadius: '0.5rem' }} >
178
+ </iframe>
179
+
180
+ ## What's next?
181
+
182
+ Now that you've run a barebone workload, continue with our guide to launch a Jupyter server and host a web app.
183
+
184
+ <CardGroup cols={2}>
185
+ <Card title="GPU-accelerated notebook" href="get-started/gpu-notebook">
186
+ Launch a Jupyter Notebook server with an SSH connection
187
+ </Card>
188
+
189
+ <Card title="SSD-1B Playground" href="get-started/stable-diffusion">
190
+ Launch an interactive web application for Stable Diffusion
191
+ </Card>
192
+ </CardGroup>
guides/get-started/stable-diffusion.md ADDED
@@ -0,0 +1,180 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Stable Diffusion Playground
3
+ description: Launch an interactive web app for Stable Diffusion
4
+ icon: "circle-2"
5
+ version: EN
6
+ ---
7
+
8
+ This example deploys a simple web app for Stable Diffusion. You will learn how you can set up an interactive workload for inference -- mounting models from Hugging Face and opening up a port for user inputs. For a more in-depth guide, refer to our [blog post](https://blog.vessl.ai/thin-plate-spline-motion-model-for-image-animation).
9
+
10
+ <CardGroup cols={2}>
11
+ <Card icon="sparkles" title="Try it on VESSL Hub" href="https://vessl.ai/hub/ssd-1b-inference">
12
+ Try out the Quickstart example with a single click on VESSL Hub.
13
+ </Card>
14
+
15
+ <Card icon="github" title="See the final code" href="https://github.com/vessl-ai/hub-model/tree/main/SSD-1B">
16
+ See the completed YAML file and final code for this example.
17
+ </Card>
18
+ </CardGroup>
19
+
20
+ ## What you will do
21
+
22
+ <img
23
+ style={{ borderRadius: '0.5rem' }}
24
+ src="/images/get-started/ssd-title.png"
25
+ />
26
+
27
+ - Host a GPU-accelerated web app built with [Streamlit](https://streamlit.io/)
28
+ - Mount model checkpoints from [Hugging Face](https://huggingface.co/)
29
+ - Open up a port to an interactive workload for inference
30
+
31
+ ## Writing the YAML
32
+
33
+ Let's fill in the `stable-diffusion.yml` file.
34
+
35
+ <Steps titleSize="h3">
36
+ <Step title="Spin up an interactive workload">
37
+ We already learned how you can launch an interactive workload in our [previous](/get-started/gpu-notebook) guide. Let's copy & paste the YAML we wrote for `notebook.yml`.
38
+
39
+ ```yaml
40
+ name: Stable Diffusion Playground
41
+ description: An interactive web app for Stable Diffusion
42
+ resources:
43
+ cluster: vessl-gcp-oregon
44
+ preset: gpu-l4-small
45
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
46
+ interactive:
47
+ jupyter:
48
+ idle_timeout: 120m
49
+ max_runtime: 24h
50
+ ```
51
+ </Step>
52
+
53
+ <Step title="Mount the code and model checkpoint">
54
+ Let's mount a [GitHub repo](https://github.com/vessl-ai/hub-model/tree/main/SSD-1B) and import a model checkpoint from Hugging Face. We already learned how you can mount a codebase from our [Quickstart](/get-started/quickstart) guide.
55
+
56
+ VESSL AI comes with a native integration with Hugging Face so you can import models and datasets simply by referencing the link to the Hugging Face repository. Under `import`, let's create a working directory `/model/` and import the [model](https://huggingface.co/VESSL/SSD-1B/tree/main).
57
+
58
+ ```yaml
59
+ name: Stable Diffusion Playground
60
+ description: An interactive web app for Stable Diffusion
61
+ resources:
62
+ cluster: vessl-gcp-oregon
63
+ preset: gpu-l4-small
64
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
65
+ import:
66
+ /code/:
67
+ git:
68
+ url: https://github.com/vessl-ai/hub-model
69
+ ref: main
70
+ /model/: hf://huggingface.co/VESSL/SSD-1B
71
+ interactive:
72
+ jupyter:
73
+ idle_timeout: 120m
74
+ max_runtime: 24h
75
+ ```
76
+ </Step>
77
+
78
+ <Step title="Open up a port for inference">
79
+
80
+ The `ports` key expose the workload ports where VESSL AI listens for HTTP requests. This means you will be able to interact with the remote workload -- sending input query and receiving an generated image through port `80` in this case.
81
+
82
+ ```yaml
83
+ name: Stable Diffusion Playground
84
+ description: An interactive web app for Stable Diffusion
85
+ resources:
86
+ cluster: vessl-gcp-oregon
87
+ preset: gpu-l4-small
88
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
89
+ import:
90
+ /code/:
91
+ git:
92
+ url: https://github.com/vessl-ai/hub-model
93
+ ref: main
94
+ /model/: hf://huggingface.co/VESSL/SSD-1B
95
+ interactive:
96
+ jupyter:
97
+ idle_timeout: 120m
98
+ max_runtime: 24h
99
+ ports:
100
+ - name: streamlit
101
+ type: http
102
+ port: 80
103
+ ```
104
+ </Step>
105
+
106
+ <Step title="Write the run commands">
107
+
108
+ Let's install additional Python dependencies with [`requirements.txt`](https://github.com/vessl-ai/hub-model/blob/main/SSD-1B/requirements.txt) and finally run our app [`ssd_1b_streamlit.py`](https://github.com/vessl-ai/hub-model/blob/main/SSD-1B/ssd_1b_streamlit.py).
109
+
110
+ Here, we see how our Streamlit app is using the port we created previously with the `--server.port=80` flag. Through the port, the app receives a user input and generates an image with the Hugging Face model we mounted on `/model/`.
111
+
112
+ ```yaml
113
+ name: Stable Diffusion Playground
114
+ description: An interactive web app for Stable Diffusion
115
+ resources:
116
+ cluster: vessl-gcp-oregon
117
+ preset: gpu-l4-small
118
+ image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
119
+ import:
120
+ /code/:
121
+ git:
122
+ url: https://github.com/vessl-ai/hub-model
123
+ ref: main
124
+ /model/: hf://huggingface.co/VESSL/SSD-1B
125
+ run:
126
+ - command: |-
127
+ pip install -r requirements.txt
128
+ streamlit run ssd_1b_streamlit.py --server.port=80
129
+ workdir: /code/SSD-1B
130
+ interactive:
131
+ max_runtime: 24h
132
+ jupyter:
133
+ idle_timeout: 120m
134
+ ports:
135
+ - name: streamlit
136
+ type: http
137
+ port: 80
138
+ ```
139
+ </Step>
140
+ </Steps>
141
+
142
+ ## Running the app
143
+
144
+ Once again, running the workload will guide you to the workload Summary page.
145
+
146
+ ```
147
+ vessl run create -f stable-diffusion.yml
148
+ ```
149
+
150
+ Under ENDPOINTS, click the `streamlit` link to launch the app.
151
+
152
+ <img
153
+ style={{ borderRadius: '0.5rem' }}
154
+ src="/images/get-started/ssd-summary.jpeg"
155
+ />
156
+
157
+ <img
158
+ style={{ borderRadius: '0.5rem' }}
159
+ src="/images/get-started/ssd-streamlit.jpeg"
160
+ />
161
+
162
+ ## Using our web interface
163
+
164
+ You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a New run.
165
+
166
+ <iframe
167
+ src="https://scribehow.com/embed/Stable_Diffusion_Playground__D9ujQM9ZQtGz_Aj9oiyXSg?skipIntro=true&removeLogo=true"
168
+ width="100%" height="640" allowfullscreen frameborder="0"
169
+ style={{ borderRadius: '0.5rem' }} >
170
+ </iframe>
171
+
172
+ ## What's next?
173
+
174
+ See how VESSL AI takes care of the infrastructural challenges of fine-tuning a large language model with a custom dataset.
175
+
176
+ <CardGroup cols={2}>
177
+ <Card title="Llama 2 Fine-tuning" href="get-started/llama2">
178
+ Fine-tune Llama2-7B with instruction datasets
179
+ </Card>
180
+ </CardGroup>
guides/models/create.md ADDED
@@ -0,0 +1,46 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Create a new model
3
+ version: EN
4
+ ---
5
+
6
+ You can add a model to a model registry by selecting a previous experiment or by uploading local checkpoint files. To create a model, create a model repository first under **MODELS**. We recommend following our naming conventions to improve project maintainability.
7
+
8
+ <img style={{ borderRadius: '0.5rem' }}
9
+ src="/images/models/create/1_button.png"
10
+ />
11
+
12
+ ### Creating a model from experiment
13
+
14
+ <Note>Only `completed` experiments can be sourced to create models.</Note>
15
+
16
+ There are two entry points to create a model in the repository.
17
+
18
+ #### 1. Create a model from the model repository page
19
+
20
+ You can create models on the model repository page. Click the `New Model` button, set the model description and tag, find the experiment you want, and choose the directories you want to include in the model.
21
+
22
+ <img style={{ borderRadius: '0.5rem' }}
23
+ src="/images/models/create/2_new.png"
24
+ />
25
+
26
+ #### 2. Create a model from the experiment detail page
27
+
28
+ If you want to create a model from the output files of a specific experiment, you can do so by clicking the `Create Model` button under `Actions` on the experiment detail page. Select the model repository and click `SELECT` in the dialog.
29
+
30
+ <img style={{ borderRadius: '0.5rem' }}
31
+ src="/images/models/create/3_create.png"
32
+ />
33
+
34
+ Then, set the model description and tag, and choose the desired directory among the output files of the experiment on the model create page. You can include or exclude specific directories in the output files checkbox section.
35
+
36
+ <img style={{ borderRadius: '0.5rem' }}
37
+ src="/images/models/create/4_select.png"
38
+ />
39
+
40
+ ### Creating a model from local files
41
+
42
+ Uploading checkpoint files from your local machine to VESSL is another way to use the model registry. If you select the `model from local` type under **Source** in `Models > Model Repository > New Model`, you can create a model by uploading local files.
43
+
44
+ <img style={{ borderRadius: '0.5rem' }}
45
+ src="/images/models/create/5_upload.png"
46
+ />
guides/models/deploy.md ADDED
@@ -0,0 +1,114 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Deploy models
3
+ version: EN
4
+ ---
5
+
6
+ You can use VESSL to quickly deploy your models into production for use from external applications via APIs. You can register a model via the SDK and deploy it from the Web UI in one click.
7
+
8
+ ### Register a model using the SDK
9
+
10
+ A model file cannot be deployed on its own - we need to provide instructions on how to set up the server, handle requests, and send responses. This step is called registering a model.
11
+
12
+ There are two ways you can register a model. One is to use an existing model - that is, a VESSL model that already exists with a model file stored in it. The other is to train a model from scratch and register it. The two options are explained below.
13
+
14
+ ### 1. Register an existing model
15
+
16
+ In most cases, you will have already trained a model and have the file ready, either through [VESSL's experiment](../experiment/creating-an-experiment.md#creating-an-experiment) or in your local environment. After [creating a model](creating-a-model.md#creating-a-model), you will need to register it using the SDK. The example below shows how you can do so.
17
+
18
+ ```python
19
+ import torch
20
+ import torch.nn as nn
21
+ from io import BytesIO
22
+
23
+ import vessl
24
+
25
+ class Net(nn.Module):
26
+     # Define model layers here. The linear layer below is only an illustrative
+     # placeholder; use the architecture your checkpoint was trained with so
+     # that load_state_dict() succeeds.
+     def __init__(self):
+         super().__init__()
+         self.fc = nn.Linear(784, 10)
+
+     def forward(self, x):
+         return self.fc(torch.flatten(x, 1))
27
+
28
+ class MyRunner(vessl.RunnerBase):
29
+ @staticmethod
30
+ def load_model(props, artifacts):
31
+ model = Net()
32
+ model.load_state_dict(torch.load("model.pt"))
33
+ model.eval()
34
+ return model
35
+
36
+ @staticmethod
37
+ def preprocess_data(data):
38
+ return torch.load(BytesIO(data))
39
+
40
+ @staticmethod
41
+ def predict(model, data):
42
+ with torch.no_grad():
43
+ return model(data).argmax(dim=1, keepdim=False)
44
+
45
+ @staticmethod
46
+ def postprocess_data(data):
47
+ return {"result": data.item()}
48
+
49
+ vessl.configure()
50
+ vessl.register_model(
51
+ repository_name="my-repository",
52
+ model_number=1,
53
+ runner_cls=MyRunner,
54
+ requirements=["torch"],
55
+ )
56
+ ```
57
+
58
+ First, we redefine the layers of the torch model. (This is assuming we only saved the `state_dict`, or the model's parameters. If you saved the model's layers as well, you do not have to redefine the layers.)
59
+
60
+ Then, we define a `MyRunner` which inherits from `vessl.RunnerBase`, which provides instructions for how to serve our model. You can read more about each method [here](../../api-reference/python-sdk/auto-generated/serving.md#runnerbase).
61
+
62
+ Finally, we register the model using `vessl.register_model`. We specify the repository name and number, pass `MyRunner` as the runner class we will use for serving, and list any requirements to install.
63
+
64
+ After executing the script, you should see that two files have been generated: `vessl.manifest.yaml`, which stores metadata, and `vessl.runner.pkl`, which stores the runner binary. Your model has been registered and is ready for serving.
65
+
66
+ ### 2. Register a model from scratch
67
+
68
+ In some cases, you will want to train the model and register it within one script. You can use `vessl.register_model` to register a new model as well:
69
+
70
+ ```python
71
+ # Your training code
72
+ # model.fit()
73
+
74
+ vessl.configure()
75
+ vessl.register_model(
76
+ repository_name="my-repository",
77
+ model_number=None,
78
+ runner_cls=MyRunner,
79
+ model_instance=model,
80
+ requirements=["tensorflow"],
81
+ )
82
+ ```
83
+
84
+ After executing the script, you should see that three files have been generated: `vessl.manifest.yaml`, which stores metadata, `vessl.runner.pkl`, which stores the runner binary, and `vessl.model.pkl`, which stores the trained model. Your model has been registered and is ready for serving.
85
+
86
+ #### PyTorch models
87
+
88
+ If you are using PyTorch, there is an easier way to register your model. You only need to optionally define `preprocess_data` and `postprocess_data` - the other methods are autogenerated.
89
+
90
+ ```python
91
+ # Your training code
92
+ # for epoch in range(epochs):
93
+ # train(model, epoch)
94
+
95
+ vessl.configure()
96
+ vessl.register_torch_model(
97
+ repository_name="my-model",
98
+ model_number=1,
99
+ model_instance=model,
100
+ requirements=["torch"],
101
+ )
102
+ ```
103
+
104
+ <Note>Check out the documentation for [`vessl.register_model`](../../api-reference/python-sdk/auto-generated/serving.md#registermodel) and [`vessl.register_torch_model`](../../api-reference/python-sdk/auto-generated/serving.md#registertorchmodel).</Note>
105
+
106
+ ### Deploy a registered model
107
+
108
+ You can deploy your model with [VESSL Serve](../../user-guide/serve/README.md).
109
+
110
+ Once you have deployed your model with VESSL Serve, you can make predictions by sending HTTP requests to the service endpoint. As in the example request below, use the POST method and pass your authentication token as a header. Pass your input data in the format you specified in your runner when you registered the model. You should receive a response with the prediction.
111
+
112
+ ```
113
+ curl -X POST -H "X-AUTH-KEY:[YOUR-AUTHENTICATION-TOKEN]" -d [YOUR-DATA] https://service-zl067zvrmf69-service-8000.uw2-dev-cluster.savvihub.com
114
+ ```
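+
+ If you prefer Python, you can send the same request with the `requests` library. This is a minimal sketch: the endpoint URL and token are placeholders, and the input shape assumes the illustrative runner above, whose `preprocess_data` reads the raw request body with `torch.load`.
+
+ ```python
+ from io import BytesIO
+
+ import requests
+ import torch
+
+ # Placeholder values; replace with your own service endpoint and token.
+ ENDPOINT = "https://service-zl067zvrmf69-service-8000.uw2-dev-cluster.savvihub.com"
+ AUTH_TOKEN = "YOUR-AUTHENTICATION-TOKEN"
+
+ # Serialize an input tensor the same way the example runner expects to read it
+ # (torch.load on the raw bytes). The (1, 784) shape matches the placeholder Net.
+ buffer = BytesIO()
+ torch.save(torch.randn(1, 784), buffer)
+
+ response = requests.post(
+     ENDPOINT,
+     headers={"X-AUTH-KEY": AUTH_TOKEN},
+     data=buffer.getvalue(),
+ )
+ print(response.json())  # e.g. {"result": 3}, as produced by postprocess_data
+ ```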
guides/models/manage.md ADDED
@@ -0,0 +1,24 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Manage models
3
+ version: EN
4
+ ---
5
+
6
+ On each model's page, you can see the metadata and files tabs. Under Actions, you can run an experiment with the model, or edit or delete the model.
7
+
8
+ ### Model status
9
+
10
+ | Type | Description |
11
+ | ----------------- | ------------------------------------------------------------------------------------------------------ |
12
+ | **Pending** | A model is created with a pending status until the model volume files are fully created. |
13
+ | **Ready** | A model is ready to use. |
14
+ | **Error** | The model was not created successfully. |
15
+ | **Deploying** | The model is being deployed for model serving. |
16
+ | **In Production** | The model has been deployed for model serving. (Note that the actual server instance may have failed.) |
17
+
18
+ ### Running Experiments
19
+
20
+ On the model page, you can create an experiment with the model volume mounted, for example to evaluate the model. The generated experiments are added to the model's related experiments.
21
+
22
+ <img style={{ borderRadius: '0.5rem' }}
23
+ src="/images/models/manage/run.png"
24
+ />
guides/models/overview.md ADDED
@@ -0,0 +1,14 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Overview
3
+ version: EN
4
+ ---
5
+
6
+ A **model registry** is a centralized space for managing model versions. Keeping models in an independent space, separate from code and datasets, strengthens model governance and security. You can collaborate with organization members by monitoring model status and performance.
7
+
8
+ ### Managing model on Web Console
9
+
10
+ To view and manage model repositories and a list of versioned models, click **MODELS** on the web console.
11
+
12
+ <img style={{ borderRadius: '0.5rem' }}
13
+ src="/images/models/overview/overview.png"
14
+ />
guides/organization/billing.md ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Manage billing
3
+ version: EN
4
+ ---
5
+
6
+ # Billing Information
7
+
8
+ You can check your payment information and credit balance on the billing page.
9
+
10
+ <img style={{ borderRadius: '0.5rem' }}
11
+ src="/images/organization/billing/billing.png"
12
+ />
13
+
14
+ ### How is usage calculated?
15
+
16
+ VESSL charges based on compute, storage, and network usage. Check our pricing table for each resource type.
guides/organization/create.md ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Create a new Organization
3
+ version: EN
4
+ ---
5
+
6
+ Once you have signed up, you can create or add an organization by clicking **ADD ORGANIZATION** on the organizations page. You can always come back to this page by clicking the VESSL logo in the top-left corner.
7
+
8
9
+ <img style={{ borderRadius: '0.5rem' }}
10
+ src="/images/organization/create/add.png"
11
+ />
12
+
13
+ **Name** your organization and choose the default **region** for the organization. For a detailed guide to specifying regions, refer to the [AWS Regions and Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) page.
14
+
15
16
+ <img style={{ borderRadius: '0.5rem' }}
17
+ src="/images/organization/create/create.png"
18
+ />
guides/organization/integrations.md ADDED
@@ -0,0 +1,40 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Add integrations
3
+ version: EN
4
+ ---
5
+
6
+ ### Integrating your services with VESSL
7
+
8
+ You can integrate various services with VESSL, including AWS, Docker, and SSH keys. The integrated AWS and Docker credentials are used to manage private Docker images, whereas SSH keys are used as authorized keys for SSH connections.
9
+
10
+ ### AWS Credentials
11
+
12
+ To integrate your AWS account, you need an AWS access key associated with your [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) user. You can create this key by following this [guide from AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id\_credentials\_access-keys.html). Once you have your key, click **ADD INTEGRATION** and fill in the form.
13
+
14
+ <img style={{ borderRadius: '0.5rem' }}
15
+ src="/images/organization/integrations/1_aws.png"
16
+ />
17
+
18
+ You can integrate multiple AWS credentials with your organization. You can also revoke credentials by simply clicking the trash button on the right.
19
+
20
+ <img style={{ borderRadius: '0.5rem' }}
21
+ src="/images/organization/integrations/2_aws-cred.png"
22
+ />
23
+
24
+ <Note>If you want to pull images from [ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html), make sure the account has been granted the [ECR pull policy](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-policy-examples.html).</Note>
25
+
26
+ ### Docker Credentials
27
+
28
+ To integrate your Docker account, click **ADD INTEGRATION** and fill in your Docker credentials.
29
+
30
+ <img style={{ borderRadius: '0.5rem' }}
31
+ src="/images/organization/integrations/3_docker.png"
32
+ />
33
+
34
+ ### GitHub
35
+
36
+ To integrate your GitHub account, click **ADD INTEGRATION**. Grant repository access to the VESSL App in the repository access section and click Save on GitHub.
37
+
38
+ <img style={{ borderRadius: '0.5rem' }}
39
+ src="/images/organization/integrations/4_github.png"
40
+ />
guides/organization/members.md ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Add members
3
+ version: EN
4
+ ---
5
+
6
+ ### Collaborate with your teammates
7
+
8
+ Invite your teammates to your organization by sending invitation emails. If you add an email address to the **Member** list, VESSL will send an invitation link. Your teammates can sign up by clicking this link.
9
+
10
+ <img style={{ borderRadius: '0.5rem' }}
11
+ src="/images/organization/members/add.png"
12
+ />
guides/organization/notification.md ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Add notifications
3
+ version: EN
4
+ ---
5
+
6
+ You can receive either **Slack** or **email notifications** for your experiments. VESSL will notify your teammates when your experiment starts running, fails to run, or is completed.
7
+
8
+ <img style={{ borderRadius: '0.5rem' }}
9
+ src="/images/organization/notification/add.png"
10
+ />
11
+
12
+ To add Slack notifications:
13
+
14
+ 1. Click **Add to Slack**.
15
+ 2. Specify the Slack workspace and channel.
16
+ 3. Click Allow.
17
+
18
+ To add email notifications:
19
+
20
+ 1. Type your email address.
21
+ 2. Click **ADD**.
guides/organization/overview.md ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Overview
3
+ version: EN
4
+ ---
5
+
6
+ An **Organization** is a shared working environment where you can create projects, datasets, experiments, and services. Once you have signed up, you can create an organization with a specified region and invite teammates to collaborate.
7
+
8
+ <img style={{ borderRadius: '0.5rem' }}
9
+ src="/images/organization/overview/1_view.png"
10
+ />
11
+
12
+ ### Managing Organization on Web Console
13
+
14
+ Once you have entered an organization, you can navigate to other organizations by clicking your profile in the top-right corner. If you wish to create a new organization, go back to the organization page by clicking the VESSL logo.
15
+
16
+ <img style={{ borderRadius: '0.5rem' }}
17
+ src="/images/organization/overview/2_manage.png"
18
+ />
guides/project/create.md ADDED
@@ -0,0 +1,30 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Create a new project
3
+ version: EN
4
+ ---
5
+
6
+ # Creating a Project
7
+
8
+ To create a project, click **NEW PROJECT** on the project page.
9
+
10
+ <img style={{ borderRadius: '0.5rem' }}
11
+ src="/images/projects/create/1_new.png"
12
+ />
13
+
14
+
15
+ ### Basic information
16
+
17
+ On the project create page, you should specify the name and the description of the project. Note that the project name should be unique within the organization.
18
+
19
+ <img style={{ borderRadius: '0.5rem' }}
20
+ src="/images/projects/create/2_info.png"
21
+ />
22
+
23
+ ### Connect Project Repository
24
+
25
+ If you want to connect a GitHub repository to the project, choose the GitHub account and the repository here. If you haven't installed the VESSL GitHub App, integrate your GitHub account in [`Organization Settings > Add Integrations`](../organization/organization-settings/add-integrations.md#github).
26
+
27
+ <img style={{ borderRadius: '0.5rem' }}
28
+ src="/images/projects/create/3_repo.png"
29
+ />
30
+
guides/project/overview.md ADDED
@@ -0,0 +1,14 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Overview
3
+ version: EN
4
+ ---
5
+
6
+ Machine learning projects require both code and datasets. A VESSL **project** is a conceptual space where you can easily manage code and datasets as a basic element within the organization. You can share projects with organization members to collaborate on a machine learning project.
7
+
8
+ ### Managing Projects on Web Console
9
+
10
+ You can find the Project tab under the organization menu. Click the tab to view the full list of ongoing projects.
11
+
12
+ <img style={{ borderRadius: '0.5rem' }}
13
+ src="/images/projects/overview/web.png"
14
+ />
guides/project/repo-dataset.md ADDED
@@ -0,0 +1,45 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Project-level repo & datasets
3
+ version: EN
4
+ ---
5
+
6
+ Projects provide a way to connect code repositories and datasets. VESSL provides the following with project repositories and project datasets:
7
+
8
+ * Download code and datasets when an experiment or a sweep is created
9
+ * Track versions and file diffs between experiments
10
+
11
+ ### Add Project Repository
12
+
13
+ A project repository can be configured when creating a project or in the project settings.
14
+
15
+ <img style={{ borderRadius: '0.5rem' }}
16
+ src="/images/projects/repo-dataset/1_settings.png"
17
+ />
18
+
19
+ Click Add project repository and select GitHub repositories in the dialog. If you have not integrated GitHub with VESSL, integrate GitHub first in the organization settings (a link is provided in the add dialog).
20
+
21
+ <img style={{ borderRadius: '0.5rem' }}
22
+ src="/images/projects/repo-dataset/2_repo.png"
23
+ />
24
+
25
+ ### Add Project Dataset
26
+
27
+ A project dataset can be configured in the same way as a project repository. Unlike a project repository, a project dataset allows you to specify the mount path in the experiment or sweep.
28
+
29
+ <img style={{ borderRadius: '0.5rem' }}
30
+ src="/images/projects/repo-dataset/3_tree.png"
31
+ />
32
+
33
+ Once you have connected repositories and datasets, they are mounted by default when creating an experiment or a sweep.
34
+
35
+ <img style={{ borderRadius: '0.5rem' }}
36
+ src="/images/projects/repo-dataset/4_result.png"
37
+ />
38
+
39
+ #### Connect cluster-scoped local dataset ([docs](../dataset/adding-new-datasets.md))
40
+
41
+ You can connect a cluster-scoped dataset as a project dataset. If you use a different cluster from the one specified in the dataset when creating an experiment, an error may occur. To resolve it, choose the cluster specified in the dataset or continue without that dataset.
42
+
43
+ <img style={{ borderRadius: '0.5rem' }}
44
+ src="/images/projects/repo-dataset/5_dataset.png"
45
+ />
guides/project/summary.md ADDED
@@ -0,0 +1,86 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Project summary
3
+ version: EN
4
+ ---
5
+
6
+ **Project Overview** provides a bird's-eye view of the progress of your machine learning projects. On the project overview dashboard, you can manage and track key information about your project:
7
+
8
+ * **Key Metrics**: Keep track of essential evaluation metrics of your experiments such as accuracy, loss, and MSE.
9
+ * **Sample Media**: Log images, audio, and other media from your experiment and explore your model's prediction results to compare your experiments visually.
10
+ * **Starred Experiments**: Star and keep track of meaningful experiments.
11
+ * **Project Notes**: Make a note of important information about your project and share it with your teammates – similar to the README.md of your Git codebase.
12
+
13
+ <img style={{ borderRadius: '0.5rem' }}
14
+ src="/images/projects/summary/1_summary.png"
15
+ />
16
+
17
+ ## Key Metrics
18
+
19
+ VESSL AI automatically marks metrics of best-performing experiments as key metrics. You can also manually bookmark key metrics and keep track of your model's meaningful evaluation metrics.
20
+
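+ Key metrics are drawn from the metrics your experiments log. As a rough sketch of how that logging looks in training code (following `vessl.log` from our Python SDK; the metric names and values below are arbitrary examples):
+
+ ```python
+ import vessl
+
+ # Hypothetical training loop; the dummy values stand in for your real results.
+ for epoch in range(10):
+     val_loss = 1.0 / (epoch + 1)
+     val_accuracy = 0.5 + 0.04 * epoch
+
+     # Logged keys (e.g. "val_accuracy") are what you can later pick as key metrics.
+     vessl.log(step=epoch, payload={"val_loss": val_loss, "val_accuracy": val_accuracy})
+ ```
+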
21
+ To add or remove Key Metrics
22
+
23
+ 1. Click the settings icon on top of the Key Metrics card.
24
+
25
+ <img style={{ borderRadius: '0.5rem' }}
26
+ src="/images/projects/summary/2_metrics.png"
27
+ />
28
+
29
+ 2\. Select **up to 4 metrics** and choose whether your goal is to minimize or maximize the target value.
30
+
31
+ * If you select **Minimize**, the experiment with the smallest target value will be shown on the key metric chart.
32
+ * If you select **Maximize**, the experiment with the greatest target value will be shown on the key metric chart.
33
+
34
+ <img style={{ borderRadius: '0.5rem' }}
35
+ src="/images/projects/summary/3_select.png"
36
+ />
37
+
38
+ ## Sample Media
39
+
40
+ You can log images or audio clips generated from your experiment to explore your model's prediction results and make visual (or auditory) comparisons.
41
+
42
+ <Note>For more information about logging media during your experiment, refer to [`vessl.log`](../../api-reference/python-sdk/utils/vessl.log/) in our Python SDK.</Note>
43
+
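+ As a rough sketch of logging an image during an experiment (the `vessl.Image` helper, payload format, and file name here are assumptions based on the `vessl.log` reference linked above):
+
+ ```python
+ import vessl
+ from PIL import Image
+
+ # Hypothetical example: log a prediction image with a caption so it appears
+ # in the Sample Media card for the experiment.
+ sample = Image.open("prediction.png")
+ vessl.log(payload={"predictions": [vessl.Image(data=sample, caption="epoch 10 sample")]})
+ ```
+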
44
+ To see the media files, select an experiment and specify the media type using the dropdown menu in the upper-right corner of the Sample Media card.
45
+
46
+ <img style={{ borderRadius: '0.5rem' }}
47
+ src="/images/projects/summary/4_audio.png"
48
+ />
49
+
50
+ ## Starred Experiment
51
+
52
+ You can mark important experiments as **Starred Experiments** to keep track of meaningful achievements in the project. Starred experiments are displayed with their tags and key metrics.
53
+
54
+ <img style={{ borderRadius: '0.5rem' }}
55
+ src="/images/projects/summary/5_starred.png"
56
+ />
57
+
58
+ To star or unstar experiments
59
+
60
+ 1. Go to the experiment tracking dashboard
61
+ 2. Select experiments
62
+ 3. Click 'Star' or 'Unstar' on the dropdown menu.
63
+
64
+ <img style={{ borderRadius: '0.5rem' }}
65
+ src="/images/projects/summary/6_experiments.png"
66
+ />
67
+
68
+ You can also star or unstar experiments on the experiment summary page.
69
+
70
+ <img style={{ borderRadius: '0.5rem' }}
71
+ src="/images/projects/summary/7_star.png"
72
+ />
73
+
74
+ ## Project Notes
75
+
76
+ **Project Notes** is a place for noting and sharing important information about the project together with your team. It works like the README.md of your Git codebase.
77
+
78
+ <img style={{ borderRadius: '0.5rem' }}
79
+ src="/images/projects/summary/8_note.png"
80
+ />
81
+
82
+ To modify the project note, click the settings icon on top of the Project Notes card. You will be given a Markdown editor to update your notes.
83
+
84
+ <img style={{ borderRadius: '0.5rem' }}
85
+ src="/images/projects/summary/9_note-edit.png"
86
+ />
guides/resources/changelog.md ADDED
@@ -0,0 +1,38 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Changelog
3
+ description: See what's new in VESSL AI
4
+ icon: megaphone
5
+ version: EN
6
+ ---
7
+
8
+ ## January 31, 2024
9
+
10
+ <img
11
+ style={{ borderRadius: '0.5rem' }}
12
+ src="/images/changelog/jan.png"
13
+ />
14
+
15
+ ### New Get started guide
16
+
17
+ We've updated our documentation with a new Get Started guide. The new guide covers everything from a product overview to the latest use cases of our product in Gen AI & LLM.
18
+
19
+ Follow along with our new guide [here](https://run-docs.vessl.ai/docs/en/get-started/quickstart).
20
+
21
+ ### New & Improved
22
+
23
+ - Added a new managed cloud option built on Google Cloud
24
+ - Renamed our default managed Docker images to `torch:2.1.0-cuda12.2-r3`
25
+
26
+ ## December 28, 2023
27
+
28
+ ### Announcing VESSL Hub
29
+
30
+ <img
31
+ style={{ borderRadius: '0.5rem' }}
32
+ src="/images/changelog/dec.png"
33
+ />
34
+
35
+ VESSL Hub is a collection of one-click recipes for the latest open-source models like Llama2, Mistral 7B, and Stable Diffusion. Built on our fullstack AI infrastructure, Hub provides the easiest way to explore and deploy models.
36
+
37
+ Fine-tune and deploy the latest models on our production-grade fullstack cloud infrastructure with just a single click.
38
+ Read about the release on our [blog](https://blog.vessl.ai/vessl-hub) or try it out now at [vessl.ai/hub](https://vessl.ai/hub).