xavierwoon committed
Commit
c23ddea
1 Parent(s): 55a2735

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -14,7 +14,7 @@ base_model:
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-
+Cestermistral is a fine-tuned Mistral 7B model that is able to generate Libcester unit test cases in the correct format.
 
 ## Model Details
 
@@ -28,8 +28,8 @@ base_model:
 <!-- - **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed] -->
 - **Model type:** Mistral
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
+<!-- - **Language(s) (NLP):** [More Information Needed]
+- **License:** [More Information Needed] -->
 - **Finetuned from model [optional]:** unsloth/mistral-7b-bnb-4bit
 
 <!-- ### Model Sources [optional]
@@ -44,11 +44,11 @@ Provide the basic links for the model.
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
-### Direct Use
+<!-- ### Direct Use -->
 
 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
-[More Information Needed]
+<!-- [More Information Needed] -->
 
 <!-- ### Downstream Use [optional] -->
 
@@ -113,7 +113,7 @@ text_streamer = TextStreamer(tokenizer)
 _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 2048)
 ```
 
-[More Information Needed]
+<!-- [More Information Needed] -->
 
 ## Training Details
 
@@ -138,7 +138,7 @@ Training Data was created based on Data Structures and Algorithm (DSA) codes cre
 
 <!-- #### Training Hyperparameters -->
 
-<!-- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> -->
+<!-- - **Training regime:** [More Information Needed] fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
 
 <!-- #### Speeds, Sizes, Times [optional] -->
 
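The hunk at README line 113 shows only the tail of the card's inference snippet (a TextStreamer passed into model.generate). For context, here is a minimal sketch of how such a streamed-generation call is typically assembled with Hugging Face transformers; the repo id xavierwoon/cestermistral and the prompt text are illustrative assumptions, not taken from the diff.

```python
# Minimal sketch around the streamed-generation call whose tail appears in the
# hunk at README line 113. Repo id and prompt are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "xavierwoon/cestermistral"  # hypothetical repo id, not from the diff
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt asking for Libcester unit tests for a C function.
prompt = "Write Libcester unit test cases for the following C function:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Stream generated tokens to stdout as they are produced, as in the README snippet.
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=2048)
```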