This is an experimental model, so it may not perform that well. The dataset used is a modified version of NilanE/ParallelFiction-Ja_En-100k.

The next version should be better: I'll train on a GPU with more memory, since the dataset contains fairly long samples.

Prompt format: Alpaca

```
Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}
```
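
A minimal inference sketch with Hugging Face transformers is shown below. The instruction wording, the example sentence, and the generation settings are illustrative assumptions, not values taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/JP-EN-Translator-1K-steps-7B-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Fill the Alpaca template; the instruction text here is an assumption.
prompt = (
    "Below is a translation task, paired with an input that provides further "
    "context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTranslate this text from Japanese to English.\n\n"
    "### Input:\n吾輩は猫である。名前はまだ無い。\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens (the translation).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```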

Uploaded model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: augmxnt/shisa-base-7b-v1

This Mistral-based model was trained 2x faster with Unsloth and Hugging Face's TRL library.
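
For context, a typical Unsloth + TRL SFT setup looks roughly like the sketch below. The hyperparameters, sequence length, dataset field names ("src"/"trg"), and instruction wording are all placeholder assumptions; the actual training configuration is not documented here.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in 4-bit for memory-efficient LoRA fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="augmxnt/shisa-base-7b-v1",
    max_seq_length=4096,  # placeholder; the long samples may need more
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,  # placeholder LoRA hyperparameters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

ALPACA = (
    "Below is a translation task, paired with an input that provides further "
    "context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
)

def to_alpaca(example):
    # The "src"/"trg" field names are assumptions; adjust to the real schema.
    text = ALPACA.format(
        "Translate this text from Japanese to English.",
        example["src"],
        example["trg"],
    ) + tokenizer.eos_token
    return {"text": text}

dataset = load_dataset("NilanE/ParallelFiction-Ja_En-100k", split="train")
dataset = dataset.map(to_alpaca)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=1000,  # matches the "1K steps" in the model name
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```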

Model size: 7.96B params (Safetensors, BF16)