
Model Card for quantum-research-bot-v1.0

Quantum Research Bot is a chat model fine-tuned on the latest quantum science research data. It includes data from the second half of 2024, making it more accurate and up to date on recent quantum research than general-purpose models.

NOTICE: v0.9 might perform better on certain questions, since it reached a lower overall loss on the evaluation set, but its benchmarking metrics were worse.

Model Details

Model Description

  • Developed by: Nenad Banfic
  • Language(s) (NLP): English
  • License: MIT
  • Finetuned from model: meta-llama/Meta-Llama-3.1-8B-Instruct

Uses

You can use the model to ask questions about the latest developments in quantum science. Below are examples of questions that general-purpose models may answer incorrectly or inadequately, but this model should provide accurate responses.

| Question | Expected answer |
| --- | --- |
| On top of what platform is TensorKrowch built, and where was it created? | TensorKrowch is built on top of the PyTorch framework and was created at the University of Madrid. |
| What algorithms does the quantum FIPS 205 deal with? | FIPS 205 deals with the stateless hash-based digital signature algorithm (SLH-DSA). |
| What is the variance that you can get with polynomial bond dimension in pure quantum states in one-dimensional systems? | The variance that you can get with polynomial bond dimension in pure quantum states in one-dimensional systems is as small as ∝ 1 / log N. |
| As of September 2024, how many qubits has the quantum Krylov algorithm been demonstrated on experimentally? | The quantum Krylov algorithm has been demonstrated on up to 56 qubits experimentally. |
| In the analysis of noise effects in controlled-swap gate circuits, what percentage of errors were eliminated with a dephasing error probability of 10% when using two noisy copies of a quantum state? | 67% of errors were eliminated when using two copies of a quantum state with a dephasing error probability of 10%. |

Out-of-Scope Use

Although this model should generalize well, quantum science terminology and context are very complex, so the model may struggle with simplification and should not be used for that purpose.

Since there is a risk of overfitting in certain cases, the model may answer incorrectly when a question is rephrased with small changes.

Bias, Risks, and Limitations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

The model does hallucinate on certain edge cases (more coming soon).

How to Get Started with the Model

Please refer to the instructions for the Meta Instruct models; the principle is the same.
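
For example, a minimal loading and generation sketch with the transformers library (the generation settings below are illustrative, not the exact settings used during evaluation):

```python
# Minimal usage sketch; generation parameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nenad1002/quantum-research-bot-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user",
     "content": "What algorithms does the quantum FIPS 205 deal with?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```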

Training Details

Training Data

The model was initially trained on a bit less than 3k entries; the dataset was later expanded to 5k high-quality questions and answers to get the most out of supervised fine-tuning. The evaluation set consisted of ~200 entries in the final training round.

The dataset was generated by crawling the https://quantum-journal.org/ site and passing the data to the OpenAI gpt-4-turbo model with various prompts to ensure high-quality data generation.
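
As a rough illustration of the generation step (the prompt and helper below are simplified placeholders, not the actual crawling or prompt-engineering code):

```python
# Illustrative sketch of turning crawled article text into Q&A pairs
# with gpt-4-turbo; the prompt and function are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def generate_qa_pairs(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": "You write precise question/answer pairs from quantum research articles."},
            {"role": "user",
             "content": f"Create several factual Q&A pairs about the following article:\n\n{article_text}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content
```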

Training Procedure

Various training procedures were explored alongside multiple models; however, all of them were parameter-efficient. The general idea was to freeze most of the original model's parameters and allow only a small subset to be trainable.

Over time, several base models and fine-tuning approaches were tested. The best accuracy was achieved with Llama 3.1 70B Instruct and qLoRA, but the training duration was extensive, and optimizing hyperparameters proved to be highly challenging.

Other base models were also tested: Mistral 7B v0.1, Meta-Llama/Llama-2-7b-chat-hf, and the base model of this experiment.

Since Bayesian methods for parameter search are prone to getting stuck in local optima, I performed a semi-grid search with several optimization techniques such as LoRA, DoRA, LoRA+, (LO)ReFT, and qLoRA. With LoRA, LoRA+, and DoRA, I found that a rank of 8 (with the paper-recommended double alpha of 16) achieved the best performance, particularly since my dataset was on the smaller side; a larger rank would have led to overfitting even with additional regularization through gradient clipping. Various LoRA dropout rates between 10% and 20% were tested, but increasing the rate started to lead to underfitting, so I stuck with 10%. After applying the linear scaling rule, I settled on a batch size of 8 and found that a starting learning rate of 1e-4 yielded the best results. There was no significant difference between cosine and linear decay of the learning rate when using the AdamW optimizer.

Regarding the target nodes, training only the attention nodes performed very poorly on both training and evaluation data. The results improved slightly with the addition of the MLP projections, but none of the models or fine-tuning approaches achieved an evaluation cross-entropy below 0.5. However, when the embedding layer was included, despite the significant increase in the number of trainable parameters, the model began to generalize well. I assume this is due to the introduction of new terminology, which requires the model to adjust its embeddings slightly to capture the new semantics. I did not modify the LM head, as no significant performance improvements were observed. DoRA training introduced a trainable magnitude parameter, which can help steer the model in a new direction, but training took up to 4x longer, making it too costly for this purpose while yielding the same accuracy as LoRA+.
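
For reference, a PEFT configuration along these lines might look as follows; the module names assume the Llama architecture, and this is a sketch rather than the exact training script:

```python
# Sketch of a LoRA configuration matching the description above:
# rank 8, alpha 16, dropout 0.1, attention and MLP projections targeted,
# and the embedding layer made fully trainable (the LM head stays frozen).
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    modules_to_save=["embed_tokens"],            # train the embeddings as well
    task_type="CAUSAL_LM",
)

# model = get_peft_model(base_model, lora_config)
# model.print_trainable_parameters()
```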

For (LO)ReFT, the attention nodes in the last 8 layers were unfrozen, allowing the model to retain its general knowledge while incorporating more specific domain knowledge about quantum research. Although the results were close to those obtained with LoRA, they were consistently slightly worse.

After 3 to 4 epochs, the model began to overfit regardless of the strategies employed. Increasing both batch size and the number of epochs resulted in higher final training and evaluation cross-entropy.

Following an extensive grid search with a form of Bayesian optimization to reduce the search area, supervised fine-tuning of Llama 3.1-8B-Instruct with LoRA+ and the parameters listed below yielded the best training and evaluation cross-entropy. I chose a ratio of 8 between the matrices A and B. The matrix A weights were initialized using the He method, while matrix B started at zero. Different Gaussian initializations of the weights were also considered, but led to suboptimal results. Since a custom optimizer was built for this, a simplified sketch of the idea is shared below; the rest of the code, including pre-training, the CustomSFTTrainer, and the scoring scripts, is currently in a private repo and will become public as soon as it's ready.
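
The core of the LoRA+ recipe is to give the B matrices a larger learning rate than the A matrices. A minimal sketch of that parameter grouping is shown below, assuming the ratio of 8 mentioned above is the LoRA+ learning-rate ratio; this is a simplification, not the full custom optimizer:

```python
# Simplified LoRA+-style optimizer: lora_B parameters get a learning
# rate `ratio` times larger than lora_A parameters. This is a sketch
# of the idea, not the exact custom optimizer used for this model.
import torch

def build_lora_plus_optimizer(model, base_lr=1e-4, ratio=8, weight_decay=0.01):
    a_params, b_params, other_params = [], [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if "lora_A" in name:
            a_params.append(param)
        elif "lora_B" in name:
            b_params.append(param)
        else:
            other_params.append(param)  # e.g. the unfrozen embedding layer
    return torch.optim.AdamW(
        [
            {"params": a_params, "lr": base_lr},
            {"params": b_params, "lr": base_lr * ratio},
            {"params": other_params, "lr": base_lr},
        ],
        weight_decay=weight_decay,
    )
```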

Preprocessing [optional]

[Coming soon]

Training Hyperparameters

  • Training regime: bfloat16 precision (nf4 for qLoRA)
  • LoRA rank: 8
  • LoRA alpha: 16
  • LoRA dropout: 0.1
  • Weight decay: 0.01 (provided satisfying regularization)
  • Grad clipping: 0.3 (various values tried; settled on this one)
  • Unfrozen nodes: attention, MLP, and embeddings
  • Optimizer: AdamW
  • LR: 1e-4
  • LR scheduler: cosine
  • NEFT noise enabled: true
  • Batch size: 8
  • Number of epochs: 4
  • Padding: right, with an additional padding token added
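
Put together, a trainer configuration along these lines can be expressed with trl's SFTConfig; argument names may differ slightly between library versions, and the NEFT alpha below is an assumption, since only "enabled" is recorded above:

```python
# Sketch of the training configuration listed above using trl.
# The neftune_noise_alpha value is an assumption; the card only
# records that NEFT noise was enabled.
from trl import SFTConfig, SFTTrainer

training_args = SFTConfig(
    output_dir="quantum-research-bot-v1.0",
    per_device_train_batch_size=8,
    num_train_epochs=4,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    optim="adamw_torch",
    weight_decay=0.01,
    max_grad_norm=0.3,
    bf16=True,
    neftune_noise_alpha=5,
)

# trainer = SFTTrainer(model=model, args=training_args,
#                      train_dataset=train_ds, eval_dataset=eval_ds,
#                      peft_config=lora_config)
# trainer.train()
```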

Speeds, Sizes, Times

This model was trained with ~550 million trainable parameters in a run that lasted a bit more than 30 minutes and went through 4 epochs. GPU utilization stayed above 90% at all times during training.
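
The trainable parameter count can be verified with a quick check like the one below (assuming `model` is the PEFT-wrapped model); PEFT's `print_trainable_parameters()` reports the same information:

```python
# Quick check of how many parameters are actually being trained; with
# the configuration above (LoRA adapters plus the fully trainable
# embedding layer) this lands around ~550M.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / total: {total:,} "
      f"({100 * trainable / total:.2f}%)")
```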

Evaluation

Please see the graph below:

[Evaluation loss graph]

The final evaluation cross-entropy ended around 0.4 for this model.

The table below shows the best evaluation cross-entropy (across all hyperparameters) for each of the techniques applied. Without the embedding nodes included, the results were usually worse by up to 0.1.

| Technique | Loss on Llama 3.1 fine-tuning | Notice |
| --- | --- | --- |
| LoRA | 0.4603 | |
| LoRA+ | 0.4011 | The model uploaded here |
| DoRA | 0.4182 | |
| qLoRA (70B model) | 0.3694 | Best evaluation loss, but too big to optimize further within my budget |
| qLoRA (8B model) | 0.5471 | |
| (LO)ReFT | 0.4824 | |

The loss mask was applied during training, but it wasn't particularly useful since the model doesn't involve function calling or external data fetching.
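
One common way to apply such a mask is trl's completion-only collator, which computes the loss only on the assistant's answer; the response template below assumes the Llama 3.1 chat format and is an illustration rather than the exact setup used:

```python
# Sketch: mask prompt tokens so the loss is computed only on the
# assistant's answer. The response template assumes the Llama 3.1
# chat format and may need adjusting for other templates.
from trl import DataCollatorForCompletionOnlyLM

response_template = "<|start_header_id|>assistant<|end_header_id|>"
collator = DataCollatorForCompletionOnlyLM(response_template, tokenizer=tokenizer)

# trainer = SFTTrainer(..., data_collator=collator)
```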

Metrics

Since the fine-tuned model is designed to explain and, where possible, summarize newly learned data, ROUGE, BERTScore, and BLEU metrics were measured on a sample of 50 manually crafted questions. The reference answers were constructed during the creation of the training and evaluation sets. Given that gpt-4-turbo was already used to generate the reference questions, I did not compare my model against it. Instead, I compared it against the following models:

| Metric (mean/avg) | quantum-research-bot-v1.0 | Meta-Llama-3.1-8B-Instruct | gemini-1.5-pro |
| --- | --- | --- | --- |
| BERTScore F1 | 0.5821 | 0.3305 | 0.4982 |
| ROUGE-1 | 0.6045 | 0.3152 | 0.5029 |
| ROUGE-2 | 0.4098 | 0.1751 | 0.3104 |
| ROUGE-L | 0.5809 | 0.2902 | 0.4856 |
| BLEU | 0.2538 | 0.0736 | 0.1753 |
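
For reference, scores like these can be computed with the Hugging Face evaluate library; `preds` and `refs` below stand in for the model answers and the hand-crafted reference answers, which are not published here:

```python
# Sketch of the scoring setup with the `evaluate` library; the
# predictions and references are placeholders.
import evaluate

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")
bleu = evaluate.load("bleu")

preds = ["..."]  # model answers to the 50 evaluation questions
refs = ["..."]   # manually crafted reference answers

print(rouge.compute(predictions=preds, references=refs))
print(bertscore.compute(predictions=preds, references=refs, lang="en"))
print(bleu.compute(predictions=preds, references=refs))
```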

quantum-research-bot-v1.0 outperformed the other models on all metrics, although Gemini came close in BERTScore precision, with a difference of only 0.001. The Gemini model is better at recognizing subtle differences in the input but lacks the latest knowledge, making it perform worse overall.

Most other metrics, such as TruthfulQA, MMLU, and similar benchmarks, are not applicable here because this model has been fine-tuned for a very specific domain of knowledge.

[More Metrics Coming In Future]

Results

Quantization might also be needed after training to enable the model to run more efficiently on memory-constrained devices. The model was also built modularly and can be extended easily.
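
For instance, a 4-bit (nf4) load with bitsandbytes would look roughly like this; quantized weights are not published separately, so this is an illustration only:

```python
# Illustrative 4-bit (nf4) load for memory-constrained inference.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "nenad1002/quantum-research-bot-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
```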

While the model outperforms baselines and other general-purpose models on most tasks, it still faces challenges with certain edge cases, particularly those involving rare terms, as well as sentences that differ significantly in structure. These results show the potential of fine-tuning large models for specialized tasks and suggest that further exploration of hybrid optimization techniques could yield even better performance. Additionally, greater investment in creating more robust and comprehensive datasets could lead to further improvements in model accuracy and generalization.

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions are estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: RTX A6000
  • Hours used: ~20 h in total, although most training runs took a bit more than 30 minutes, with rare exceptions
  • Cloud Provider: Runpod
  • Compute Region: West US
  • Carbon Emitted: 1.5 kg CO2

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

For most workloads:

1 x RTX A6000, 16 vCPU, 62 GB RAM

However, when fine-tuning meta-llama/Meta-Llama-3-70B-Instruct, I applied quantization and used 4x A100 GPUs. Since this did not yield much improvement and was very costly, I decided to stick with models with fewer parameters.

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]
