
Papy_2_Llama-3.1-8B-Instruct_text

This is a finetuned version of Llama-3.1-8B-Instruct specialized in reconstructing spans of 1–20 missing characters in ancient Greek documentary papyri. On spans of 1–10 missing characters it achieved a Character Error Rate of 14.9%, a top-1 accuracy of 73.5%, and a top-20 accuracy of 85.9% on a test set of 7,811 papyrus editions. It supersedes an earlier version of this model. See https://arxiv.org/abs/2409.13870.
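
The reported numbers can be checked with straightforward definitions of the metrics. Below is a minimal sketch of how CER and top-k accuracy might be computed for a single gap; the helper functions are illustrative assumptions, not the paper's evaluation code:

def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance normalized by reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

def top_k_accuracy(reference: str, suggestions: list[str], k: int) -> bool:
    """True if the reference appears among the first k beam suggestions."""
    return reference in suggestions[:k]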

Usage

To run the model on a GPU with large memory capacity, follow these steps:

1. Download and load the model

from transformers import pipeline, AutoTokenizer, LlamaForCausalLM
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
import torch
import warnings

# Silence a harmless warning emitted while weights are copied onto the empty (meta) model.
warnings.filterwarnings("ignore", message=".*copying from a non-meta parameter in the checkpoint.*")

model_id = "Ericu950/Papy_2_Llama-3.1-8B-Instruct_text"

# Instantiate the model skeleton without allocating memory for its weights ...
with init_empty_weights():
    model = LlamaForCausalLM.from_pretrained(model_id)

# ... then load the checkpoint, spreading the weights across the available
# devices and offloading whatever does not fit to disk.
model = load_checkpoint_and_dispatch(
    model,
    model_id,
    device_map="auto",
    offload_folder="offload",
    offload_state_dict=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Wrap the model and tokenizer in a chat-aware text-generation pipeline.
generation_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device_map="auto",
)
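
To check how the weights were distributed across your hardware, you can print the device map that accelerate assigned (a quick sanity check, not part of the original recipe):

# Shows which device (GPU, CPU or disk offload) each module ended up on.
print(model.hf_device_map)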

2. Run inference on a papyrus fragment of your choice

In the edition below, gaps of unknown length are marked with runs of dashes, and the span the model is asked to reconstruct is marked as [7 missing letters].

papyrus_edition = """
ετουσ τεταρτου αυτοκρατοροσ καισαροσ ουεσπασιανου σεβαστου ------------------ 
ομολογει παυσιριων απολλωνιου του παυσιριωνοσ μητροσ ---------------τωι γεγονοτι αυτωι 
εκ τησ γενομενησ και μετηλλαχυιασ αυτου γυναικοσ ------------------------- 
απο τησ αυτησ πολεωσ εν αγυιαι συγχωρειν ειναι ---------------------------------- 
--------------------σ αυτωι εξ ησ συνεστιν ------------------------------------ 
----τησ αυτησ γενεασ την υπαρχουσαν αυτωι οικιαν ------------ 
------------------ ---------καὶ αιθριον και αυλη απερ ο υιοσ διοκοροσ -------------------------- 
--------εγραψεν του δ αυτου διοσκορου ειναι ------------------------------------ 
---------- και προ κατενγεγυηται τα δικαια -------------------------------------- 
νησ κατα τουσ τησ χωρασ νομουσ· εαν δε μη --------------------------------------- 
υπ αυτου τηι του διοσκορου σημαινομενηι -----------------------------------ενοικισμωι του 
ημισουσ μερουσ τησ προκειμενησ οικιασ --------------------------------- διοσκοροσ την τουτων αποχην 
---------------------------------------------μηδ υπεναντιον τουτοισ επιτελειν μηδε 
------------------------------------------------ ανασκευηι κατ αυτησ τιθεσθαι ομολογιαν μηδε 
----------------------------------- επιτελεσαι η χωρισ του κυρια ειναι τα διομολογημενα 
παραβαινειν, εκτεινειν δε τον παραβησομενον τωι υιωι διοσκορωι η τοισ παρ αυτου καθ εκαστην 
εφοδον το τε βλαβοσ και επιτιμον αργυριου δραχμασ 0 και εισ το δημο[7 missing letters] ισασ και μηθεν 
ησσον· δ -----ιων ομολογιαν συνεχωρησεν·
"""
system_prompt = "Fill in the missing letters in this papyrus fragment!"
input_messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": papyrus_edition},
]
# Stop generation at either the EOS token or Llama-3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = generation_pipeline(
    input_messages,
    max_new_tokens=10,
    num_beams=30,  # Set this as high as your memory will allow!
    num_return_sequences=10,
    early_stopping=True,
    eos_token_id=terminators,
)
# Collect the assistant message from each returned beam.
beam_contents = []
for output in outputs:
    generated_text = output.get('generated_text', [])
    for item in generated_text:
        if item.get('role') == 'assistant':
            beam_contents.append(item.get('content'))
real_response = "σιον τασ"
print(f"The masked sequence: {real_response}")
for i, content in enumerate(beam_contents, start=1):
    print(f"Suggestion {i}: {content}")

Expected Output:

The masked sequence: σιον τασ
Suggestion 1: σιον τασ
Suggestion 2: σιν τασ ι
Suggestion 3: σ τασ ισα
Suggestion 4: σιου τασ
Suggestion 5: συ τασ ισ
Suggestion 6: ιον τασ ι
Suggestion 7: ν τασ ισα
Suggestion 8: σ ισασ κα
Suggestion 9: σασ τασ ι
Suggestion 10: σιωι τασ
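
If you also want the log-probability of each suggestion rather than just its rank, you can bypass the pipeline and call generate directly. The snippet below is a sketch of that approach under the same settings; the variable names are illustrative:

# Score the beams explicitly instead of relying on the pipeline's ordering.
inputs = tokenizer.apply_chat_template(
    input_messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
generation = model.generate(
    inputs,
    max_new_tokens=10,
    num_beams=30,
    num_return_sequences=10,
    early_stopping=True,
    eos_token_id=terminators,
    return_dict_in_generate=True,
    output_scores=True,
)
# sequences_scores holds the length-normalized log-probability of each beam.
for seq, score in zip(generation.sequences, generation.sequences_scores):
    completion = tokenizer.decode(seq[inputs.shape[-1]:], skip_special_tokens=True)
    print(f"{score.item():.3f}  {completion}")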

Usage on free tier in Google Colab

If you don’t have access to a larger GPU but want to try the model out, you can run it in quantized form on Google Colab's free tier. Note that at 4-bit the quality of the suggestions deteriorates significantly (see below). Follow these steps:

Step 1: Connect to free GPU

  1. Click the arrow next to the Connect button near the top right of the notebook.
  2. Select Change runtime type.
  3. In the modal window, select T4 GPU as your hardware accelerator.
  4. Click Save.
  5. Click the Connect button to connect to your runtime. After some time, the button will present a green checkmark, along with RAM and disk usage graphs. This indicates that a server has successfully been created with your required hardware.

Step 2: Install Dependencies

!pip install -U bitsandbytes
import os
os._exit(00)  # Hard-restart the Colab runtime so the freshly installed package is picked up.

Step 3: Download and quantize the model

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
import torch

# Quantize the weights to 4 bit on the fly while loading.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Ericu950/Papy_2_Llama-3.1-8B-Instruct_text",
    device_map="auto",
    quantization_config=quant_config,
)
tokenizer = AutoTokenizer.from_pretrained("Ericu950/Papy_2_Llama-3.1-8B-Instruct_text")

# Wrap the model and tokenizer in a chat-aware text-generation pipeline.
generation_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device_map="auto",
)
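
As a rough check that quantization worked, you can ask transformers for the model's memory footprint (a sketch; the exact number will vary):

# ~8B parameters at 4 bit should come to roughly 5–6 GB including overhead.
print(f"{model.get_memory_footprint() / 1e9:.1f} GB")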

Step 4: Run inference on a papyrus fragment of your choice

papyrus_edition = """
ετουσ τεταρτου αυτοκρατοροσ καισαροσ ουεσπασιανου σεβαστου ------------------ 
ομολογει παυσιριων απολλωνιου του παυσιριωνοσ μητροσ ---------------τωι γεγονοτι αυτωι 
εκ τησ γενομενησ και μετηλλαχυιασ αυτου γυναικοσ ------------------------- 
απο τησ αυτησ πολεωσ εν αγυιαι συγχωρειν ειναι ---------------------------------- 
--------------------σ αυτωι εξ ησ συνεστιν ------------------------------------ 
----τησ αυτησ γενεασ την υπαρχουσαν αυτωι οικιαν ------------ 
------------------ ---------καὶ αιθριον και αυλη απερ ο υιοσ διοκοροσ -------------------------- 
--------εγραψεν του δ αυτου διοσκορου ειναι ------------------------------------ 
---------- και προ κατενγεγυηται τα δικαια -------------------------------------- 
νησ κατα τουσ τησ χωρασ νομουσ· εαν δε μη --------------------------------------- 
υπ αυτου τηι του διοσκορου σημαινομενηι -----------------------------------ενοικισμωι του 
ημισουσ μερουσ τησ προκειμενησ οικιασ --------------------------------- διοσκοροσ την τουτων αποχην 
---------------------------------------------μηδ υπεναντιον τουτοισ επιτελειν μηδε 
------------------------------------------------ ανασκευηι κατ αυτησ τιθεσθαι ομολογιαν μηδε 
----------------------------------- επιτελεσαι η χωρισ του κυρια ειναι τα διομολογημενα 
παραβαινειν, εκτεινειν δε τον παραβησομενον τωι υιωι διοσκορωι η τοισ παρ αυτου καθ εκαστην 
εφοδον το τε βλαβοσ και επιτιμον αργυριου δραχμασ 0 και εισ το δημο[7 missing letters] ισασ και μηθεν 
ησσον· δ -----ιων ομολογιαν συνεχωρησεν·
"""
system_prompt = "Fill in the missing letters in this papyrus fragment!"
input_messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": papyrus_edition},
]
# Stop generation at either the EOS token or Llama-3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = generation_pipeline(
    input_messages,
    max_new_tokens=10,
    num_beams=30,  # Set this as high as your memory will allow!
    num_return_sequences=10,
    early_stopping=True,
    eos_token_id=terminators,
)
# Collect the assistant message from each returned beam.
beam_contents = []
for output in outputs:
    generated_text = output.get('generated_text', [])
    for item in generated_text:
        if item.get('role') == 'assistant':
            beam_contents.append(item.get('content'))
real_response = "σιον τασ"
print(f"The masked characters: {real_response}")
for i, content in enumerate(beam_contents, start=1):
    print(f"Suggestion {i}: {content}")

Expected Output:

The masked characters: σιον τασ
Suggestion 1: σιον τα 00·
Suggestion 2: σιον αυτωι·
Suggestion 3: σιον 00 00
Suggestion 4: σιον και 0·
Suggestion 5: σιον τα 00··
Suggestion 6: σιον τασ 0
Suggestion 7: σιον τα 000·
Suggestion 8: σιον τα 0ο
Suggestion 9: σιον τασασ·
Suggestion 10: σιον τα 00

Observe that performance declines! If we change

   load_in_4bit=True,
   bnb_4bit_compute_dtype=torch.bfloat16

in the quantization config of Step 3 to

   load_in_8bit=True,

we get

The masked characters: σιον τασ
Suggestion 1: σιον τασ
Suggestion 2: σιν τασ ι
Suggestion 3: σ τασ ισα
Suggestion 4: σιου τασ
Suggestion 5: σ ισασ κα
Suggestion 6: συ τασ ισ
Suggestion 7: σασ τασ ι
Suggestion 8: ν τασ ισα
Suggestion 9: ιον τασ ι
Suggestion 10: σισ τασ ι
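
For clarity, the full 8-bit configuration would look like this (the loading code in Step 3 stays the same; only the config changes):

# 8-bit quantization nearly matches the unquantized suggestions in this example.
quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
)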

Information about the merge configuration

The finetuned model was merged back with Llama-3.1-8B-Instruct using the TIES merge method. This did not affect CER or top-1 accuracy, but it had a positive effect on top-20 accuracy. The following YAML configuration was used:

models:
  - model: original # Llama 3.1
  - model: DDbDP_reconstructer_5 # A model finetuned on 95% of the DDbDP for 11 epochs
    parameters:
      density: 1.1
      weight: 0.5
merge_method: ties
base_model: original # Llama 3.1
parameters:
  normalize: true
dtype: bfloat16
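
This is a mergekit-style configuration. Assuming it is saved as, say, ties_merge.yml (a placeholder name), the merge can be reproduced with mergekit's command-line tool:

pip install mergekit
mergekit-yaml ties_merge.yml ./merged-model --cuda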
