---
language:
- en
pipeline_tag: text-generation
tags:
- shining-valiant
- shining-valiant-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- science
- physics
- biology
- chemistry
- compsci
- computer-science
- engineering
- technical
- conversational
- chat
- instruct
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- sequelbox/Celestia
- sequelbox/Supernova
model_type: llama
license: llama3.1
---


![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/EXX7TKbB-R6arxww2mk0R.jpeg)


Shining Valiant 2 is a chat model built on Llama 3.1 8b, finetuned on our data for friendship, insight, knowledge, and enthusiasm.
  - Finetuned on [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) for best available general performance
  - Trained on a variety of high quality data; focused on science, engineering, technical knowledge, and structured reasoning


## Version

This is the **2024-09-16** release of Shining Valiant 2 for Llama 3.1 8b.

We've improved and open-sourced our new baseline [science-instruct dataset](https://huggingface.co/datasets/sequelbox/Celestia). This release features improvements in physics, chemistry, biology, and computer science.

Future upgrades will continue to expand Shining Valiant's technical knowledge base.

Help us and recommend Shining Valiant 2 to your friends!


## Prompting Guide
Shining Valiant 2 uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) prompt format. The example script below can be used as a starting point for general chat:


```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-ShiningValiant2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Describe the role of transformation matrices in 3D graphics."}
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

print(outputs[0]["generated_text"][-1])
```
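When you pass a `messages` list, the pipeline applies the model's chat template for you. For reference, a minimal sketch of what the Llama 3.1 Instruct format looks like as raw text is shown below; the special-token names come from the Llama 3.1 chat template, and `format_llama31_prompt` is an illustrative helper, not part of any library. In real code, prefer `tokenizer.apply_chat_template()`.

```python
# Illustrative sketch of the Llama 3.1 Instruct chat format.
# Each turn is wrapped in header tokens and terminated with <|eot_id|>;
# a trailing assistant header cues the model to generate its reply.
def format_llama31_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama31_prompt([
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Describe the role of transformation matrices in 3D graphics."},
])
print(prompt)
```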


## The Model
Shining Valiant 2 is built on top of Llama 3.1 8b Instruct.

The current version of Shining Valiant 2 is trained on technical knowledge using [sequelbox/Celestia](https://huggingface.co/datasets/sequelbox/Celestia) and general chat capability using [sequelbox/Supernova](https://huggingface.co/datasets/sequelbox/Supernova).

Our private data adds specialist knowledge and Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical. (As a general note: we're hoping to replace and open-source this part of Shining Valiant's dataset with synthetic data soon!)


![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)


Shining Valiant 2 is created by [Valiant Labs](http://valiantlabs.ca/).

[Check out our HuggingFace page for our open-source Build Tools models, including the newest version of code-specialist Enigma!](https://huggingface.co/ValiantLabs)

[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)

We care about open source.
For everyone to use.

We encourage others to finetune further from our models.