---
base_model:
  - codellama/CodeLlama-13b-Instruct-hf
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
datasets:
  - rasa/command-generation-calm-demo-v1
pipeline_tag: text-generation
---

Model Card for Command Generator

This is a Dialogue Understanding (DU) model developed by Rasa. It can be used to power assistants built with the Conversational AI with Language Models (CALM) approach developed by Rasa.

Model Details

Model Description

This model takes as input a transcript of an ongoing conversation between an AI assistant and a user, as well as structured information about the assistant's business logic. As output, it produces a short sequence of commands (typically 1-3) from the following list:

  • StartFlow(flow_name)
  • SetSlot(slot_name, slot_value)
  • CorrectSlot(slot_name, slot_value)
  • Clarify(flow_name_1, flow_name_2, ...)
  • ChitChat
  • KnowledgeAnswer
  • HumanHandoff
  • Error

Note that this model can only produce commands to be interpreted by Rasa. It cannot be used to generate arbitrary text.

The Command Generator translates user messages into this internal grammar, allowing CALM to progress the conversation.

Examples:

  • "I want to transfer money" → StartFlow(transfer_money)
  • "I want to transfer $55 to John" → StartFlow(transfer_money), SetSlot(recipient, John), SetSlot(amount, 55)
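
Because the output follows this small, regular grammar, downstream code can parse it deterministically. The snippet below is an illustrative Python parser for the commands listed above; it is not the parser Rasa uses internally.

```python
import re

# Illustrative only: a minimal parser for the command grammar listed above.
# Rasa's own parsing logic may differ.
COMMAND_PATTERN = re.compile(r"(\w+)(?:\(([^)]*)\))?")

def parse_commands(output: str) -> list[dict]:
    """Turn 'StartFlow(transfer_money), SetSlot(amount, 55)' into structured commands."""
    commands = []
    for name, args in COMMAND_PATTERN.findall(output):
        arg_list = [a.strip() for a in args.split(",")] if args else []
        commands.append({"command": name, "args": arg_list})
    return commands

print(parse_commands("StartFlow(transfer_money), SetSlot(recipient, John), SetSlot(amount, 55)"))
# [{'command': 'StartFlow', 'args': ['transfer_money']},
#  {'command': 'SetSlot', 'args': ['recipient', 'John']},
#  {'command': 'SetSlot', 'args': ['amount', '55']}]
```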

  • Developed by: Rasa Technologies
  • Model type: Text Generation
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model: CodeLlama 13b Instruct (codellama/CodeLlama-13b-Instruct-hf)

Uses

The Command Generator is used as part of an AI assistant developed with Rasa's CALM paradigm. Typical use cases include customer-facing chatbots, voice assistants, IVR systems, and internal chatbots in large organizations.

Direct Use

This model can be directly used as part of the command generator component if the flows of your CALM assistant are similar to flows used in the rasa-calm-demo assistant.
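For reference, the model can be loaded with the transformers library like any other causal LM. The sketch below is illustrative only: in practice the prompt (conversation transcript plus flow definitions) is constructed by Rasa's command generator component, and the placeholder prompt shown here is not the exact template used during fine-tuning.

```python
# Minimal sketch of raw inference with transformers. In practice the model is
# called by Rasa's command generator component, which builds the prompt
# (conversation transcript plus flow definitions); the prompt below is a
# simplified placeholder, not the exact template used during fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rasa/cmd_gen_codellama_13b_calm_demo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "Conversation so far and available flows go here...\n"  # placeholder prompt
    "USER: I want to transfer $55 to John\n"
    "Commands:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
# e.g. StartFlow(transfer_money), SetSlot(recipient, John), SetSlot(amount, 55)
```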

Downstream Use

The model can also be used as a base model for further fine-tuning on your own assistant's data using the fine-tuning recipe feature available in Rasa Pro.

Out-of-Scope Use

Since the model has been explicitly fine-tuned to output the command grammar described above, it should not be used to generate any other free-form content.

Bias, Risks, and Limitations

The Command Generator interprets conversations and translates user messages into commands. These commands are processed by Rasa to advance the conversation. This model does not generate text to be sent to an end user and is incapable of generating problematic or harmful text.

However, as with any pre-trained model, its predictions are susceptible to bias. For example, the accuracy of the model varies with the language used: the authors have evaluated performance in English but have not tested the model in any other language.

Training Details

Training Data

Trained on the train split of rasa/command-generation-calm-demo-v1.

Training Procedure

Trained using the notebook available here, on a single A100 GPU with 80 GB of VRAM.
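
As a rough illustration of what such a setup can look like (this is a sketch, not the linked notebook or the Rasa Pro fine-tuning recipe), a LoRA-based supervised fine-tuning run with TRL on the demo dataset might be configured as follows; the dataset column name is an assumption and should be checked against the dataset card.

```python
# Rough sketch of LoRA-based supervised fine-tuning with TRL + PEFT.
# This is NOT the linked notebook or the Rasa Pro fine-tuning recipe.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("rasa/command-generation-calm-demo-v1", split="train")

trainer = SFTTrainer(
    model="codellama/CodeLlama-13b-Instruct-hf",
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
    args=SFTConfig(
        output_dir="cmd-gen-finetuned",
        dataset_text_field="text",  # assumed column name; check the dataset card
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
    ),
)
trainer.train()
```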

Evaluation

Testing Data, Factors & Metrics

Testing Data

Evaluated on the test split of rasa/command-generation-calm-demo-v1.

Metrics

F1 score per command type (StartFlow, SetSlot, etc.) is the main metric used to evaluate the model on the test split. This helps us understand which commands the model has learnt well and which ones need further training.
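
Concretely, each predicted command list is compared with the annotated command list for the same test example, and true positives, false positives and false negatives are aggregated per command type. The snippet below is an illustrative computation of this metric, not the exact evaluation script used to produce the numbers in the next section.

```python
# Illustrative per-command-type F1 computation; not the exact evaluation script.
from collections import Counter

def per_command_f1(expected: list[list[str]], predicted: list[list[str]]) -> dict[str, float]:
    """expected / predicted: one list of command strings per test example."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for exp, pred in zip(expected, predicted):
        exp_left = list(exp)
        for cmd in pred:
            ctype = cmd.split("(")[0]
            if cmd in exp_left:
                tp[ctype] += 1
                exp_left.remove(cmd)
            else:
                fp[ctype] += 1
        for cmd in exp_left:
            fn[cmd.split("(")[0]] += 1
    scores = {}
    for ctype in sorted(set(tp) | set(fp) | set(fn)):
        precision = tp[ctype] / (tp[ctype] + fp[ctype]) if tp[ctype] + fp[ctype] else 0.0
        recall = tp[ctype] / (tp[ctype] + fn[ctype]) if tp[ctype] + fn[ctype] else 0.0
        scores[ctype] = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return scores

print(per_command_f1(
    expected=[["StartFlow(transfer_money)", "SetSlot(amount, 55)"]],
    predicted=[["StartFlow(transfer_money)", "SetSlot(amount, 50)"]],
))
# {'SetSlot': 0.0, 'StartFlow': 1.0}
```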

Results

The table below shows the F1 score per command for this model.

| Model | StartFlow | SetSlot | Cancel | Clarify | Chitchat | SearchAndReply | SkipQuestion |
|---|---|---|---|---|---|---|---|
| rasa/cmd_gen_codellama_13b_calm_demo | 0.9722 | 0.9239 | 0.6667 | 0.8889 | 1 | 0 | 0.8 |

Model Card Contact

[More Information Needed]