LucasInsight/Meta-Llama-3.1-8B-Instruct Model Card

Model Overview

The LucasInsight/Meta-Llama-3.1-8B-Instruct model is an enhanced version of the Meta-Llama3 project that adds fine-tuning on the alpaca-gpt4-data-zh Chinese dataset. The model was fine-tuned with Unsloth using 4-bit QLoRA, and the resulting weights are exported as GGUF files compatible with the Ollama inference engine.
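
For reference, below is a minimal sketch of how a 4-bit QLoRA fine-tune of this kind can be set up with Unsloth and exported to GGUF. The base checkpoint name, dataset identifier, prompt format, and training hyperparameters are illustrative assumptions, not the exact configuration used for this model.

    from datasets import load_dataset
    from transformers import TrainingArguments
    from trl import SFTTrainer
    from unsloth import FastLanguageModel

    # Load a Llama 3.1 8B Instruct base model in 4-bit (QLoRA-style) via Unsloth.
    # The checkpoint name below is an assumption; any compatible 4-bit
    # Llama 3.1 8B Instruct checkpoint can be used.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # Attach LoRA adapters; rank and alpha are illustrative values.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

    # alpaca-gpt4-data-zh is alpaca-style instruction data; the dataset id and
    # field names are assumptions based on the common Hugging Face mirror.
    dataset = load_dataset("c-s-ale/alpaca-gpt4-data-zh", split="train")

    def to_text(example):
        # Flatten one record into a single training string (prompt format assumed).
        return {
            "text": f"### Instruction:\n{example['instruction']}\n\n"
                    f"### Input:\n{example['input']}\n\n"
                    f"### Response:\n{example['output']}"
        }

    dataset = dataset.map(to_text)

    # Argument layout follows the Unsloth example notebooks; newer trl versions
    # move dataset_text_field and max_seq_length into SFTConfig.
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            num_train_epochs=1,
            learning_rate=2e-4,
            output_dir="outputs",
        ),
    )
    trainer.train()

    # Export the fine-tuned model as a GGUF file that Ollama can load
    # (the quantization method chosen here is an assumption).
    model.save_pretrained_gguf("Meta-Llama-3.1-8B-Instruct-zh", tokenizer,
                               quantization_method="q4_k_m")

The exported .gguf file is what the Ollama inference engine loads; a short inference sketch is given further below.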

👋 Join our WeChat group

License Information

This project is governed by the licenses of the integrated components:

  1. Meta-Llama3 Project

    Citation:

    @article{llama3modelcard,
        title={Llama 3 Model Card},
        author={AI@Meta},
        year={2024},
        url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
    }
    
  2. Unsloth Project

  3. Chinese Dataset Integration

    Usage and License Notices:
    The data is intended and licensed for research use only. The dataset is released under CC BY-NC 4.0, which allows only non-commercial use; models trained on this dataset should not be used outside of research purposes.

    Citation:

    @article{peng2023gpt4llm,
        title={Instruction Tuning with GPT-4},
        author={Baolin Peng and Chunyuan Li and Pengcheng He and Michel Galley and Jianfeng Gao},
        journal={arXiv preprint arXiv:2304.03277},
        year={2023}
    }
    

Model Files

GGUF format, 8.03B parameters, llama architecture. Quantized variants are available in 4-bit, 8-bit, and 16-bit precision.
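
Assuming the downloaded GGUF file has been registered with a local Ollama server (for example with "ollama create" and a Modelfile pointing at the file), it can be queried from Python with the official ollama client. The local model name used below is a placeholder.

    import ollama  # pip install ollama; requires a running Ollama server

    # "llama3.1-8b-zh" is a placeholder: use whatever name you gave the model
    # when registering the downloaded GGUF file with "ollama create".
    response = ollama.chat(
        model="llama3.1-8b-zh",
        messages=[
            {"role": "user", "content": "用中文介绍一下你自己。"},
        ],
    )
    print(response["message"]["content"])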
