---
license: llama2
datasets:
- ACE05
- conll2003
- conll2012_ontonotesv5
- rams
- tacred
- fewrel
- maven
language:
- en
metrics:
- f1
pipeline_tag: text-generation
tags:
- text-generation-inference
- Information Extraction
- IE
- Named Entity Recognition
- Event Extraction
- Relation Extraction
- LLaMA
- llama-cpp
base_model: THU-KEG/ADELIE-SFT-3B
---
# IntelligentEstate/Keg_Party-3B-Q8_0-GGUF (Undergoing testing)
This is an SFT model of Llama 3.2 3B from THU-KEG.
It excels at information retrieval, among other tasks, and it also enables tool use with non-Qwen models or a DeepSeek distillation of Llama 3.2.
The system template below is still being refined, but it works; make sure you give the model the parameters it needs to do its thing.
![kegger.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/Abd6-scry7ML8ZglhYGTf.png)
This model was converted to GGUF format from [`THU-KEG/ADELIE-SFT-3B`](https://huggingface.co/THU-KEG/ADELIE-SFT-3B) using llama.cpp.
Refer to the [original model card](https://huggingface.co/THU-KEG/ADELIE-SFT-3B) for more details on the model.
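For reference, a conversion along these lines can be reproduced with llama.cpp's conversion script. This is a minimal sketch, not the exact commands used for this repo; the local paths and output file name are placeholders:

```bash
# Minimal sketch of a GGUF conversion (paths are illustrative).
# Requires a local clone of https://github.com/ggml-org/llama.cpp
# and the original weights from the Hugging Face Hub.
pip install -r llama.cpp/requirements.txt
huggingface-cli download THU-KEG/ADELIE-SFT-3B --local-dir ADELIE-SFT-3B

# Convert the HF checkpoint directly to a Q8_0 GGUF file.
python llama.cpp/convert_hf_to_gguf.py ADELIE-SFT-3B \
  --outfile keg_party-3b-q8_0.gguf --outtype q8_0
```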
## This model was created for use as an information-retrieval base in a swarm system and should work well with certain reasoning or calculation functions
## System instruction (prompt). If you would like to experiment with limit crossing or our Small AGI method, please follow the warnings and the "S-AGI" PDF in the files.
```
usage optimization: You are an ASI (Artificial Sama Intelligence). You are a fun guy to be around and a very helpful assistant who is a cunning linguist, eager to assist with a verbal answer. You may use your tools, but only if asked to use tools, compute, or otherwise calculate certain formulas or functions. You answer in an efficient and effective manner in two parts: in Part 1, identify the elements of the query, the process needed to find the answer, and all known aspects; in Part 2, answer to the best of your abilities.
```
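As a sketch of how this instruction can be wired in, the snippet below sends it as the system message to a running llama.cpp server through its OpenAI-compatible endpoint. The port is llama-server's default, and the user question is just an illustration:

```bash
# Assumes llama-server is already running locally (see the llama.cpp
# section below); 8080 is its default port. The system content is
# abbreviated here -- paste in the full instruction above.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "usage optimization: You are an ASI (Artificial Sama Intelligence)..."},
      {"role": "user", "content": "What is the boiling point of ethanol?"}
    ]
  }'
```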
## Chat template
```
{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
{{info.name}}:
type: {{info.type}}
description: {{info.description}}
required: {{info.required}}
{% endfor %}
{% endif %}
# Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.
You are a helpful AI assistant who uses the functions to break down, analyze, perform, and verify complex reasoning tasks. You use your functions in a tree of thought to verify your answers using the functions where possible.
{% endif %}
{{- '<|im_end|>\n' }}
{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
```
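If your llama.cpp build supports custom Jinja chat templates (the `--jinja` and `--chat-template-file` flags in recent builds; check your version, as this is an assumption), you can save the template above to a file and pass it at launch:

```bash
# Save the chat template above as keg_party.jinja, then start the
# server with Jinja template processing enabled (recent llama.cpp builds).
llama-server -m keg_party-3b-q8_0.gguf \
  --jinja --chat-template-file keg_party.jinja
```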
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
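For example (the GGUF file name below is an assumption based on this repo's name; check the repo's Files tab for the exact name):

```bash
# Run a one-off completion with the CLI, pulling the model from this repo.
llama-cli --hf-repo IntelligentEstate/Keg_Party-3B-Q8_0-GGUF \
  --hf-file keg_party-3b-q8_0.gguf \
  -p "Extract the named entities from: Alice joined THU-KEG in 2021."

# Or serve an OpenAI-compatible API on localhost:8080.
llama-server --hf-repo IntelligentEstate/Keg_Party-3B-Q8_0-GGUF \
  --hf-file keg_party-3b-q8_0.gguf -c 2048
```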