---
license: cc-by-sa-3.0
datasets:
- VMware/open-instruct
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# open-llama-7b-v2-open-instruct - GGUF
- Model creator: [VMware](https://huggingface.co/VMware)
- Original model: [open-llama-7b-v2-open-instruct](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct)
OpenLLaMA is a free reimplementation of the original LLaMA model and is released under the Apache 2.0 license.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software uses it and can therefore run this model.
The core project built on the ggml library is [llama.cpp](https://github.com/ggerganov/llama.cpp) by Georgi Gerganov.
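As an illustration, one of the GGUF files from this repository can be loaded with the third-party `llama-cpp-python` bindings. This is only a sketch: the filename below is a placeholder for whichever quantized file you download, and `llama-cpp-python` must be installed separately (e.g. `pip install llama-cpp-python`).
```python
from llama_cpp import Llama

# Placeholder filename: substitute the quantized GGUF file you downloaded from this repository
llm = Llama(model_path="open-llama-7b-v2-open-instruct.Q5_K_M.gguf", n_ctx=2048)

# The model expects the Alpaca prompt template (see the original model card below)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is the capital of France?\n\n### Response:"
)

result = llm(prompt, max_tokens=128)
print(result["choices"][0]["text"])
```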
# Quantization variants
A number of quantized files are available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
There is now an option to use K-quants even for previously 'incompatible' models, although this involves a fallback solution that means they are not *real* K-quants. More details can be found in the descriptions of the affected models.
(This mainly refers to Falcon 7B and StarCoder models.)
# K-quants
K-quants are designed around the idea that applying different levels of quantization to specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it hard to discern any quality difference from the original model; ask your model the same question twice and you may see bigger differences between the two answers.
---
# Original Model Card:
# VMware/open-llama-7B-v2-open-instruct
Instruction-tuned version of the fully trained Open LLama 7B v2 model. The model is open for COMMERCIAL USE.
- This model performs better on code compared to v1 due to the improvements made to the base model by the openlm-research team.
- The instruction model is trained on an improved instruction tuning dataset compared to v1.
**NOTE**: The model was trained using the Alpaca prompt template.
**NOTE**: The fast tokenizer results in incorrect encoding; set `use_fast=False` when instantiating the tokenizer.
## License
- CC BY-SA-3.0 **(Commercially Viable!)**
- Base Language Model ([openlm-research/open_llama_v2_7b](https://huggingface.co/openlm-research/open_llama_v2_7b)) is under apache-2.0
- Fine-Tuning Dataset ([VMware/open-instruct](https://huggingface.co/datasets/VMware/open-instruct)) is under cc-by-sa-3.0
## Datasets used for Fine-Tuning
### Open-instruct
**Open-instruct-v1**
- Mosaic/Dolly-HHRLHF + filtered OASST1 - CC BY 3.0
**Subset of COT SUBMIX (FROM FLAN V2) Zeroshot examples**
- ESNLI - MIT
- ECQA - CDLA 1.0 - Sharing
- Strategy - MIT
- CREAK - MIT
- gsm8k - MIT
- aqua - MIT
- qasc - Apache 2.0
## Nomenclature
- Model : Open-llama-v2
- Model Size: 7B parameters
- Dataset: Open-instruct
## Use in Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-7b-v2-open-instruct'

# The fast tokenizer produces incorrect encodings for this model, so use_fast=False is required
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
# device_map requires the accelerate package and a CUDA-capable GPU for float16 weights
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

# Alpaca prompt template used during instruction tuning
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = """What is attention mechanism of a transformer model?
Write a python code to illustrate how attention works within a transformer model using numpy library. Donot use pytorch or tensorflow."""

input_text = prompt_template.format(instruction=prompt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

# Generate, then strip the prompt tokens so that only the model's response is decoded
output_ids = model.generate(input_ids, max_length=512)
output_ids = output_ids[:, input_ids.shape[1]:]
output = tokenizer.decode(output_ids[0])
print(output)
```
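For repeated prompts, the same steps can be wrapped in a small helper. This is only a convenience sketch built from the objects created above; `skip_special_tokens=True` additionally drops special tokens such as the end-of-sequence marker from the decoded text.
```python
def generate_response(instruction, max_length=512):
    # Format the instruction with the Alpaca template the model was trained on
    input_text = prompt_template.format(instruction=instruction)
    input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
    output_ids = model.generate(input_ids, max_length=max_length)
    # Keep only the newly generated tokens and drop special tokens when decoding
    response_ids = output_ids[:, input_ids.shape[1]:]
    return tokenizer.decode(response_ids[0], skip_special_tokens=True)

print(generate_response("Summarize the attention mechanism in one paragraph."))
```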
### Output
Sure, I can help you with that!
Attention mechanisms in transformer models are typically implemented using the attention mechanism in the self-attention layer. Self-attention allows the model to focus on different parts of the input sequence when processing it. This is achieved by computing a set of attention weights, which are used to weigh the contribution of each input element to the output.
Here's an example code using NumPy to illustrate how attention works in a transformer model:
```python
import numpy as np
def attention_weights(query, key, value, mask):
# Query, key, and value are input tensors. Mask is a tensor of zeros and ones that represents the attention mask.
# It is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.
# The attention weights are the element-wise product of the query, key, and mask tensors.
# The result is a tensor of the same shape as the query tensor.
# Compute the dot product between the query tensor and the key tensor
dot = np.matmul(query, key)
# Compute the element-wise softmax of the dot product tensor
exp_dot = np.exp(dot)
# Multiply the dot product and the softmax of the dot product tensors
weights = dot * exp_dot
# Return the attention weights as a NumPy tensor
return weights
# Define the input sequence
query = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
key = np.array([[0.1, 0.2], [0.3, 0.4]])
value = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
mask = np.array([[False, True, True], [False, True, True]])
# Compute the attention weights
weights = attention_weights(query, key, value, mask)
# Print the attention weights
print(weights)
```
In this example, the `attention_weights` function takes as input the query tensor, key tensor, value tensor, and mask tensor. It computes the dot product between the query and key tensors using the `np.matmul` function, and then applies a softmax function using the `np.exp` function to the element-wise dot product tensor. It then multiplies the dot product and softmax tensors using the `np.matmul` function, and returns the result as a NumPy tensor.
The `query`, `key`, and `value` tensors represent the input sequence to the transformer model. The `mask` tensor represents the attention mask, which is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.
The output of the `attention_weights` function is a NumPy tensor that represents the attention weights for the input sequence. These weights are used by the transformer model to weigh the contribution of each input element to the output.
I hope this helps!
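The generated code above is a rough illustration rather than a standard attention computation (it never normalizes the weights with a softmax and does not use the value matrix or the mask). For comparison, here is a minimal scaled dot-product attention sketch in NumPy, following the usual softmax(QKᵀ/√d_k)V formulation; it is a reference example written for this description, not part of the model's output, and the toy numbers are arbitrary.
```python
import numpy as np

def scaled_dot_product_attention(query, key, value, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = query.shape[-1]
    scores = query @ key.T / np.sqrt(d_k)        # similarity of every query to every key
    if mask is not None:
        scores = np.where(mask, scores, -1e9)    # masked positions get a very negative score
    # Numerically stable softmax over the key dimension
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ value, weights

# Two queries attending over three key/value pairs, key dimension 4 (toy numbers)
query = np.array([[0.1, 0.2, 0.3, 0.4],
                  [0.5, 0.6, 0.7, 0.8]])
key   = np.array([[0.1, 0.0, 0.2, 0.1],
                  [0.3, 0.2, 0.1, 0.0],
                  [0.0, 0.1, 0.4, 0.3]])
value = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])
mask  = np.array([[True, True, False],   # the first query cannot attend to the third key
                  [True, True, True]])

context, weights = scaled_dot_product_attention(query, key, value, mask)
print("attention weights:\n", weights)
print("context vectors:\n", context)
```
Each row of `weights` sums to one, and masked positions receive (numerically) zero weight, so the corresponding value vectors do not contribute to the output.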