---
pipeline_tag: text-generation
language:
- multilingual
inference: false
license: cc-by-nc-4.0
library_name: transformers
---
Trained by Jina AI.
[Blog](https://jina.ai/news/readerlm-v2-frontier-small-language-model-for-markdown-and-json) | [Colab](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing)

# ReaderLM-v2

`ReaderLM-v2` is the second generation of Jina ReaderLM, a **1.5B** parameter language model that converts raw HTML into beautifully formatted markdown or JSON with superior accuracy and improved long-context handling. It supports 29 languages and is specialized for tasks involving HTML parsing, transformation, and text extraction.

## Model Overview

- **Model Type**: Autoregressive, decoder-only transformer
- **Parameter Count**: ~1.5B
- **Context Window**: Up to 512K tokens (combined input and output)
- **Supported Languages**: English, Chinese, Japanese, Korean, French, Spanish, Portuguese, German, Italian, Russian, Vietnamese, Thai, Arabic, and more (29 total)

## What's New in `ReaderLM-v2`

`ReaderLM-v2` features several significant improvements over its predecessor:

- **Better Markdown Generation**: Generates cleaner, more readable Markdown output.
- **JSON Output**: Can produce JSON-formatted text, enabling structured extraction for downstream processing.
- **Longer Context Handling**: Handles up to 512K tokens, which is beneficial for large HTML documents or combined transformations.
- **Multilingual Support**: Covers 29 languages for broader application across international web data.

---

# Usage

Below you will find instructions and examples for running `ReaderLM-v2` locally with the Hugging Face Transformers library. For a more hands-on experience in a hosted environment, see the [Google Colab Notebook](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing).

## On Google Colab

The easiest way to experience `ReaderLM-v2` is by running our [Colab notebook](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing). The notebook runs on the free T4 GPU tier and uses vLLM and Triton for faster inference. You can feed any website's HTML directly into the model.

- For simple HTML-to-Markdown tasks, you only need to provide the raw HTML (no special instructions).
- For JSON output and instruction-based extraction, use the prompt formatting guidelines in the notebook.

## Local Usage

To use `ReaderLM-v2` locally:

1. Install the necessary dependencies:

   ```bash
   pip install transformers
   ```

2. Load and run the model:

   ```python
   from transformers import AutoModelForCausalLM, AutoTokenizer
   import re

   device = "cuda"  # or "cpu"
   tokenizer = AutoTokenizer.from_pretrained("jinaai/ReaderLM-v2")
   model = AutoModelForCausalLM.from_pretrained("jinaai/ReaderLM-v2").to(device)
   ```

3. (Optional) Pre-clean your HTML by removing scripts, styles, comments, and other noise; this shortens the input and makes it friendlier to GPU VRAM:

   ```python
   # Patterns for markup that rarely contributes to the extracted content
   SCRIPT_PATTERN = r'<[ ]*script.*?\/[ ]*script[ ]*>'
   STYLE_PATTERN = r'<[ ]*style.*?\/[ ]*style[ ]*>'
   META_PATTERN = r'<[ ]*meta.*?>'
   COMMENT_PATTERN = r'<[ ]*!--.*?--[ ]*>'
   LINK_PATTERN = r'<[ ]*link.*?>'
   # Assumed pattern for <img> tags carrying inline base64 data URIs
   BASE64_IMG_PATTERN = r'<img[^>]+src="data:image/[^;]+;base64,[^"]+"[^>]*>'
   ```
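4. Apply the patterns to your HTML. The helper below is a minimal sketch of how the patterns above might be applied; the function name `clean_html` and its `clean_base64` flag are illustrative assumptions, not part of the card's canonical API:

   ```python
   def clean_html(html: str, clean_base64: bool = False) -> str:
       """Strip noisy markup before feeding HTML to the model (illustrative helper)."""
       flags = re.IGNORECASE | re.MULTILINE | re.DOTALL
       for pattern in (SCRIPT_PATTERN, STYLE_PATTERN, META_PATTERN,
                       COMMENT_PATTERN, LINK_PATTERN):
           html = re.sub(pattern, '', html, flags=flags)
       if clean_base64:
           # Replace heavy inline base64 images with an empty placeholder
           html = re.sub(BASE64_IMG_PATTERN, '<img src="#"/>', html, flags=flags)
       return html
   ```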
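5. Run inference. The snippet below is a hedged sketch that prompts the model through the tokenizer's chat template; the prompt format and generation settings shown here are assumptions, so consult the Colab notebook for the canonical prompt:

   ```python
   html = "<html><body><h1>Hello</h1><p>World!</p></body></html>"

   # For plain HTML-to-Markdown, the raw (cleaned) HTML is the whole message
   messages = [{"role": "user", "content": clean_html(html)}]
   input_ids = tokenizer.apply_chat_template(
       messages, add_generation_prompt=True, return_tensors="pt"
   ).to(device)

   outputs = model.generate(
       input_ids,
       max_new_tokens=1024,      # output budget for the generated markdown
       do_sample=False,          # deterministic decoding
       repetition_penalty=1.08,  # discourages loops on repetitive markup
   )
   # Decode only the newly generated tokens, skipping the echoed prompt
   markdown = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
   print(markdown)
   ```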
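6. For JSON output, the notebook's guidelines describe instruction-based extraction. As one illustrative approach (the instruction wording and schema below are assumptions, not the card's canonical format), you can embed a JSON schema in the request:

   ```python
   import json

   # Hypothetical schema describing the fields you want extracted
   schema = {
       "type": "object",
       "properties": {
           "title": {"type": "string"},
           "author": {"type": "string"},
       },
   }
   instruction = (
       "Extract the main content from the given HTML and convert it to JSON, "
       f"following this schema:\n{json.dumps(schema, indent=2)}"
   )
   messages = [{"role": "user", "content": f"{instruction}\n\n{clean_html(html)}"}]
   input_ids = tokenizer.apply_chat_template(
       messages, add_generation_prompt=True, return_tensors="pt"
   ).to(device)
   outputs = model.generate(input_ids, max_new_tokens=512, do_sample=False)
   print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
   ```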