---
license: apache-2.0
datasets:
  - jed351/cantonese-wikipedia
  - raptorkwok/cantonese-traditional-chinese-parallel-corpus
language:
  - zh
  - en
pipeline_tag: text-generation
tags:
  - Cantonese
  - Qwen2
  - chat
---

# Qwen2-Cantonese-7B-Instruct

## Model Overview / 模型概述

Qwen2-Cantonese-7B-Instruct is a Cantonese language model based on Qwen2-7B-Instruct, fine-tuned using LoRA. It aims to enhance Cantonese text generation and comprehension capabilities, supporting various tasks such as dialogue generation, text summarization, and question-answering.

Qwen2-Cantonese-7B-Instruct係基於Qwen2-7B-Instruct嘅粵語語言模型,使用LoRA進行微調。佢旨在提升粵語文本嘅生成同理解能力,支持對話生成、文本摘要同問答等多種任務。

## Model Features / 模型特性
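- Base model / 基礎模型: Qwen2-7B-Instruct
- Fine-tuning method / 微調方法: LoRA
- Training datasets / 訓練數據集: jed351/cantonese-wikipedia, raptorkwok/cantonese-traditional-chinese-parallel-corpus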

## Usage / 用法

You can easily load and use this model with Hugging Face's Transformers library. Here is a simple example:

你可以用Hugging Face嘅Transformers庫輕鬆載入同使用呢個模型。下面係一個簡單嘅示例:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("lordjia/Qwen2-Cantonese-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("lordjia/Qwen2-Cantonese-7B-Instruct")

# Tokenize a Cantonese prompt and generate a reply
input_text = "唔該你用廣東話講下你係邊個。"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)  # the default generation length is too short for a full reply
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
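Since this is an instruction-tuned chat model, prompts usually work better when wrapped in the tokenizer's chat template. The sketch below uses the standard Transformers `apply_chat_template` API; the fp16 dtype, `device_map`, and `max_new_tokens` values are illustrative assumptions rather than settings from this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lordjia/Qwen2-Cantonese-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "lordjia/Qwen2-Cantonese-7B-Instruct",
    torch_dtype=torch.float16,  # assumption: an fp16-capable GPU is available
    device_map="auto",
)

# Wrap the user turn in the model's chat template before generating
messages = [{"role": "user", "content": "唔該你用廣東話講下你係邊個。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```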

## Quantized Version / 量化版本

A 4-bit quantized version of this model is also available: `qwen2-cantonese-7b-instruct-q4_0.gguf`.

此外,仲提供此模型嘅4位量化版本:`qwen2-cantonese-7b-instruct-q4_0.gguf`。
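The GGUF file targets llama.cpp-compatible runtimes. Below is a minimal sketch using llama-cpp-python; the package choice, the local file path, and the `n_ctx` value are assumptions for illustration, not instructions from this card:

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and a local copy of the GGUF file
from llama_cpp import Llama

llm = Llama(model_path="qwen2-cantonese-7b-instruct-q4_0.gguf", n_ctx=2048)

# create_chat_completion formats the messages with the chat template stored in the GGUF metadata
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "唔該你用廣東話講下你係邊個。"}]
)
print(response["choices"][0]["message"]["content"])
```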

## Alternative Model Recommendation / 備選模型舉薦

For an alternative, consider Llama-3-Cantonese-8B-Instruct, also fine-tuned by LordJia and based on Meta-Llama-3-8B-Instruct.

對於替代方案,請考慮Llama-3-Cantonese-8B-Instruct,同樣由LordJia微調並基於Meta-Llama-3-8B-Instruct。

## License / 許可證

This model is licensed under the Apache 2.0 license. Please review the terms before use.

此模型採用Apache 2.0許可證。請喺使用前仔細閱讀相關條款。

## Contributors / 貢獻者

- LordJia