---
language:
- en
- ja
license: apache-2.0
library_name: transformers
tags:
- mlx
datasets:
- databricks/databricks-dolly-15k
- llm-jp/databricks-dolly-15k-ja
- llm-jp/oasst1-21k-en
- llm-jp/oasst1-21k-ja
- llm-jp/oasst2-33k-en
- llm-jp/oasst2-33k-ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
inference: false
---

# mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-8bit

This model was converted to MLX format from [`llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0`](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) using mlx-lm version **0.12.0**.

Refer to the [original model card](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```