Deeokay committed
Commit
cd32f9a
1 Parent(s): 3d1dd7a

Update README.md

Files changed (1):
  1. README.md +3 -2
README.md CHANGED
@@ -56,11 +56,12 @@ This GGUF is based on llama3-8B-Instruct, so ollama doesn't need anything else
 After that you should be able to use this model to chat!
 
 
+
 # NOTE: DISCLAIMER
 
 Please note this is not intended for production use, but is the result of fine-tuning through self-learning.
 
-The llama3 Tokens were kept the same; however, the format was slightly customized using the available tokens.
+The llama3 Special Tokens were kept the same; however, the format was slightly customized using the available tokens.
 
 I have foregone the {{.System}} part, as this is updated when converting to llama3.
 
@@ -72,7 +73,7 @@ First pass through my ~30K personalized, customized dataset.
 If you would like to know how I started creating my dataset, you can check this link:
 [Crafting GPT2 for Personalized AI-Preparing Data the Long Way (Part1)](https://medium.com/@deeokay/the-soul-in-the-machine-crafting-gpt2-for-personalized-ai-9d38be3f635f)
 
-As the data was created with custom special tokens, I had to convert it to the llama3 Template.
+As the data was created with custom GPT2 special tokens, I had to convert it to the llama3 Template.
 
 However, I got creative again... the training data has the following Template:
 
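The training-data template itself is not part of this hunk, but the conversion step described above ("convert it to the llama3 Template") could look roughly like the minimal Python sketch below. The custom marker names (`<|BEGIN_QUERY|>`, etc.) are hypothetical placeholders, not the dataset's actual GPT2-era tokens; only the llama3 target tokens are standard:

```python
# Minimal sketch: map hypothetical custom special tokens from a GPT2-style
# dataset onto llama3's chat-template tokens. The CUSTOM_TO_LLAMA3 keys are
# illustrative placeholders, not the tokens actually used in the dataset.
CUSTOM_TO_LLAMA3 = {
    "<|BEGIN_QUERY|>":  "<|start_header_id|>user<|end_header_id|>\n\n",
    "<|END_QUERY|>":    "<|eot_id|>",
    "<|BEGIN_ANSWER|>": "<|start_header_id|>assistant<|end_header_id|>\n\n",
    "<|END_ANSWER|>":   "<|eot_id|>",
}

def to_llama3_template(sample: str) -> str:
    """Rewrite one training sample into the llama3 instruct format."""
    for custom, llama3 in CUSTOM_TO_LLAMA3.items():
        sample = sample.replace(custom, llama3)
    # llama3 sequences start with a begin-of-text token.
    return "<|begin_of_text|>" + sample

print(to_llama3_template(
    "<|BEGIN_QUERY|>Who are you?<|END_QUERY|>"
    "<|BEGIN_ANSWER|>I'm a fine-tuned llama3 model.<|END_ANSWER|>"
))
```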
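Similarly, the chat setup referenced at the top of the first hunk ("use this model to chat") would typically go through an ollama Modelfile. A minimal sketch, assuming a hypothetical GGUF filename and the stock llama3 instruct tokens, with the `{{ .System }}` block foregone as the diff notes:

```
# Hypothetical Modelfile sketch -- the GGUF filename and TEMPLATE below are
# assumptions based on the stock llama3-instruct tokens, not the shipped config.
FROM ./llama3-8b-custom.Q8_0.gguf

# No {{ .System }} block, matching the note above about foregoing it.
TEMPLATE """<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop <|eot_id|>
```

Building and running would then be `ollama create <name> -f Modelfile` followed by `ollama run <name>`.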