
The Windows client for local offline CPU inference has been released as an exe and can be downloaded from GitHub. Alternatively, download the zip file from the Files tab, extract it, and run the exe directly (no installation required).

GitHub: https://github.com/RQLuo/MixTeX?tab=readme-ov-file

Usage instructions:

from transformers import AutoTokenizer, VisionEncoderDecoderModel, AutoImageProcessor
from PIL import Image
import requests

# Load the image processor, tokenizer, and encoder-decoder model from the Hub
feature_extractor = AutoImageProcessor.from_pretrained("MixTex/ZhEn-Latex-OCR")
tokenizer = AutoTokenizer.from_pretrained("MixTex/ZhEn-Latex-OCR", max_len=296)
model = VisionEncoderDecoderModel.from_pretrained("MixTex/ZhEn-Latex-OCR")

# English example image (uncomment the second line to try the Chinese example)
imgen = Image.open(requests.get('https://cdn-uploads.huggingface.co/production/uploads/62dbaade36292040577d2d4f/eOAym7FZDsjic_8ptsC-H.png', stream=True).raw)
#imgzh = Image.open(requests.get('https://cdn-uploads.huggingface.co/production/uploads/62dbaade36292040577d2d4f/m-oVg8dsQbQZ1fDWbwKtO.png', stream=True).raw)

# Run inference, then convert \[ ... \] display-math delimiters to align* environments
pixel_values = feature_extractor(imgen, return_tensors="pt").pixel_values
output = tokenizer.decode(model.generate(pixel_values)[0])
print(output.replace('\\[', '\\begin{align*}').replace('\\]', '\\end{align*}'))

Colab: https://colab.research.google.com/drive/1vj3GKTmHcVor7FRKyk254nXEi9Lu_dhL?usp=sharing
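
For local screenshots rather than hosted images, the same pipeline can be wrapped in a small helper. The following is a minimal sketch and not part of the official examples: the function name latex_ocr, the placeholder path "formula.png", and the use of skip_special_tokens=True are my own choices, not from the model card.

from transformers import AutoTokenizer, VisionEncoderDecoderModel, AutoImageProcessor
from PIL import Image

feature_extractor = AutoImageProcessor.from_pretrained("MixTex/ZhEn-Latex-OCR")
tokenizer = AutoTokenizer.from_pretrained("MixTex/ZhEn-Latex-OCR", max_len=296)
model = VisionEncoderDecoderModel.from_pretrained("MixTex/ZhEn-Latex-OCR")

def latex_ocr(path):
    """Run MixTeX OCR on a local screenshot and return LaTeX text."""
    image = Image.open(path).convert("RGB")  # drop any alpha channel
    pixel_values = feature_extractor(image, return_tensors="pt").pixel_values
    ids = model.generate(pixel_values)[0]
    # skip_special_tokens removes the tokenizer's BOS/EOS markers from the output
    text = tokenizer.decode(ids, skip_special_tokens=True)
    return text.replace('\\[', '\\begin{align*}').replace('\\]', '\\end{align*}')

print(latex_ocr("formula.png"))  # "formula.png" is a placeholder path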

Dataset example:

(Example images from the synthetic dataset.)

I will release a small portion of the generated synthetic dataset and the data collection methods.

Supplementary information:

Suggested image dimensions: approximately (H, W) = (400, 500), i.e. about 400 pixels high and 500 pixels wide. Recommended output length: approximately 100-300 tokens; a preprocessing sketch after the package list below illustrates both. LaTeX environment:

\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{stmaryrd}
\usepackage{color}
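
As a rough illustration of the size and length guidance above, the snippet below fits a screenshot onto a canvas matching the suggested (H, W) = (400, 500) and caps generation at 300 tokens. This is a minimal sketch under my own assumptions: the ImageOps.pad padding strategy, the max_new_tokens=300 cap, and the placeholder path "formula.png" are not prescribed by the model card.

from transformers import AutoTokenizer, VisionEncoderDecoderModel, AutoImageProcessor
from PIL import Image, ImageOps

feature_extractor = AutoImageProcessor.from_pretrained("MixTex/ZhEn-Latex-OCR")
tokenizer = AutoTokenizer.from_pretrained("MixTex/ZhEn-Latex-OCR", max_len=296)
model = VisionEncoderDecoderModel.from_pretrained("MixTex/ZhEn-Latex-OCR")

# PIL sizes are (width, height), so (500, 400) corresponds to the suggested (H, W) = (400, 500)
image = Image.open("formula.png").convert("RGB")        # placeholder path
image = ImageOps.pad(image, (500, 400), color="white")  # fit onto a white canvas, keeping aspect ratio

pixel_values = feature_extractor(image, return_tensors="pt").pixel_values
ids = model.generate(pixel_values, max_new_tokens=300)[0]  # stay near the ~300-token recommendation
print(tokenizer.decode(ids, skip_special_tokens=True))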