---
datasets:
- flozi00/conversations
language:
- de
---

## This project is sponsored by [ ![PrimeLine](https://www.primeline-solutions.com/skin/frontend/default/theme566/images/primeline-solutions-logo.png) ](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)

# Model Card

This model is a fine-tuned version for German instructions and conversations in the style of Alpaca, using the prompt markers "### Assistant:" and "### User:".
The dataset used is deduplicated and cleaned, and contains no code. The focus is on instruction following and conversational tasks.
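Based on the markers above, a conversation can be formatted as follows. This is a minimal sketch; the exact whitespace and turn layout used during training are assumptions, not documented here.

```python
def build_prompt(turns):
    """Format (role, text) turns with the "### User:"/"### Assistant:" markers."""
    parts = [f"### {role}: {text}" for role, text in turns]
    # End with the assistant marker so the model generates the next reply.
    parts.append("### Assistant:")
    return "\n".join(parts)

prompt = build_prompt([("User", "Was ist die Hauptstadt von Deutschland?")])
print(prompt)
# ### User: Was ist die Hauptstadt von Deutschland?
# ### Assistant:
```

The resulting string would then be passed to the model as the generation prompt.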

The model architecture is based on Llama 2 with 13B parameters, trained on hardware powered by 100% renewable energy.

This work is the result of private research by [flozi00](https://huggingface.co/flozi00).


Join discussions about German LLM research, and plan larger training runs together: https://join.slack.com/t/slack-dtc7771/shared_invite/zt-219keplqu-hLwjm0xcFAOX7enERfBz0Q