This is a fine-tune of GPT-J-6B using LoRA - https://huggingface.co/EleutherAI/gpt-j-6B

The dataset is the cleaned version of the Alpaca dataset - https://github.com/gururise/AlpacaDataCleaned
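Since the cleaned dataset keeps the original Alpaca instruction/input/output fields, prompts at inference time are usually built with the standard Alpaca template. A minimal sketch (assuming the standard template; this repo may use a variant):

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format an Alpaca-style prompt. The wording below is the standard
    Alpaca template, assumed here rather than taken from this repo."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

The model's completion is then generated after the `### Response:` marker.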

Similar models have been discussed elsewhere.

The performance is good, but not as good as the original Alpaca trained from a LLaMA base model.

This is mostly because the LLaMA 7B model was pretrained on 1T tokens, while GPT-J-6B was trained on roughly 400B tokens of the Pile.

You will need a 3090 or A100 to run it; unfortunately, this current version won't work on a T4.
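A rough back-of-the-envelope check of why the larger cards are needed (an assumption about memory, not a statement about this exact setup): the weights alone for ~6B parameters take about 24 GB in fp32, which already exceeds a T4's 16 GB, while fp16 halves that to ~12 GB, which fits on a 3090 or A100.

```python
def weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
    # Memory for the weights only; activations, optimizer state, and the
    # KV cache add further overhead on top of this estimate.
    return n_params * bytes_per_param / 1e9

fp32_gb = weight_memory_gb(6e9, 4)  # ~24 GB: over a T4's 16 GB
fp16_gb = weight_memory_gb(6e9, 2)  # ~12 GB: fits a 3090 (24 GB) or A100
```

Actual requirements are higher than the weight count alone, which is consistent with the T4 not being enough here.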

Here is a Colab notebook: https://colab.research.google.com/drive/1O1JjyGaC300BgSJoUbru6LuWAzRzEqCz?usp=sharing
