
Model Card for Model ID

This is a test model, built by the following steps (sketched in code below):

  • shrinking llama-3-8b down to only 2 transformer decoder layers
  • adding a customized layer to llama attention
  • shrinking the total parameter count to around 2B... so pathetic 😢😢😢
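
A rough sketch of the shrinking step might look like the code below. It is illustrative only: the base checkpoint name and the way the layers are truncated are assumptions, not the author's actual notebook code (the customized attention layer is sketched further down, in the loading example).

```python
# Illustrative sketch of the shrinking step -- not the author's actual code.
# Assumes access to the meta-llama/Meta-Llama-3-8B checkpoint.
import torch.nn as nn
from transformers import LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Keep only the first 2 transformer decoder layers.
base.model.layers = nn.ModuleList(list(base.model.layers[:2]))
base.config.num_hidden_layers = 2

# Print the resulting parameter count (embeddings + lm_head + 2 decoder layers).
print(sum(p.numel() for p in base.parameters()) / 1e9, "B params")
```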

The purpose of this model is to show how you can download a pre-trained llama, customize it however you want, re-train it with whatever you want, and then upload your model to Hugging Face 🤗

It is not intended to compete with larger models developed by large corporations with a GPU advantage.

However, to use this model properly you cannot simply load it with LlamaForCausalLM; you also need to run code (for example in a notebook) that rebuilds the same customized layer and then merges the Hugging Face safetensors weights into each parameter:
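
The original notebook code is not reproduced on this card, so the snippet below is only a hedged sketch of what that code needs to do: rebuild the 2-layer config, re-create a customized attention layer (the extra o_proj wrapper here is a placeholder, since the real definition is not published), and merge the downloaded safetensors file into the model's parameters. The repo id is also a placeholder.

```python
# Hedged sketch only -- the customized attention layer below is a placeholder,
# NOT the actual layer this model was trained with. Replace it with the real
# definition from the author's notebook before loading the weights.
import torch.nn as nn
from transformers import LlamaConfig, LlamaForCausalLM
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

repo_id = "your-username/llama3-2layer-custom"  # placeholder: use this model's actual repo id

# 1. Rebuild the shrunken architecture: a llama config with only 2 decoder layers.
config = LlamaConfig.from_pretrained(repo_id)
config.num_hidden_layers = 2
model = LlamaForCausalLM(config)

# 2. Re-create the customized attention layer so parameter names and shapes line up.
#    Here we wrap o_proj with an extra linear projection purely as an example.
for layer in model.model.layers:
    layer.self_attn.o_proj = nn.Sequential(
        layer.self_attn.o_proj,
        nn.Linear(config.hidden_size, config.hidden_size, bias=False),
    )

# 3. Merge the safetensors checkpoint into the customized model, parameter by parameter.
weights_path = hf_hub_download(repo_id=repo_id, filename="model.safetensors")
state_dict = load_file(weights_path)
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```

If the re-created layer matches the one used at training time, both the missing and unexpected key lists printed at the end should come back empty.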

Model Details

  • Developed by: Po-Lung Wang
  • Model type: Llama-3
  • License: MIT
  • Model size: 1.94B params (F32, safetensors)