Model Card for ORANSight Mistral-12B (Nemo)

This model belongs to the first release of the ORANSight family of models.

  • Developed by: NextG lab@ NC State
  • License: apache-2.0
  • Context Window: 128K
  • Model Size: 12.2B parameters (BF16, Safetensors)
  • Fine-Tuning Framework: Unsloth (see the loading sketch below)

Generate with Transformers

Below is a quick example of how to use the model with Hugging Face Transformers:

from transformers import pipeline

# Load the model into a chat-capable text-generation pipeline
chatbot = pipeline("text-generation", model="NextGLab/ORANSight_Mistral_Nemo_Instruct")

# Example query; the pipeline applies the model's chat template automatically
messages = [
    {"role": "system", "content": "You are an O-RAN expert assistant."},
    {"role": "user", "content": "Explain the E2 interface."},
]

result = chatbot(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
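
Load with Unsloth

Because the model was fine-tuned with Unsloth, it can also be loaded through Unsloth's FastLanguageModel for further fine-tuning. The snippet below is a minimal sketch: the max_seq_length and load_in_4bit values are illustrative assumptions, not the settings used for the original training run.

from unsloth import FastLanguageModel

# Load the model and tokenizer through Unsloth. 4-bit loading is an
# illustrative choice to reduce memory use; adjust for your hardware.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NextGLab/ORANSight_Mistral_Nemo_Instruct",
    max_seq_length=4096,  # the model supports up to 128K; use what fits in memory
    load_in_4bit=True,
)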

Coming Soon

A detailed paper documenting the experiments and results achieved with this model will be available soon. In the meantime, if you use this model, please cite the paper below to acknowledge the foundational work that enabled this fine-tuning.

@article{gajjar2024oran,
  title={ORAN-Bench-13K: An Open Source Benchmark for Assessing LLMs in Open Radio Access Networks},
  author={Gajjar, Pranshav and Shah, Vijay K},
  journal={arXiv preprint arXiv:2407.06245},
  year={2024}
}
