---
license: cc-by-nc-4.0
datasets:
- passing2961/stark-summary
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
tags:
- conversational ai
- conversation summarization
---
# Ultron-Summarizer-1B Model Card

🏠 Homepage | 💻 Github | 📄 Arxiv | 📑 PDF
## List of Provided Model Series

- Ultron-Summarizer-Series: 🤗 Ultron-Summarizer-1B | 🤗 Ultron-Summarizer-3B | 🤗 Ultron-Summarizer-8B
- Ultron 7B: 🤗 Ultron-7B
🚨 Disclaimer: All models and datasets are intended for research purposes only.
## Model Description
- Repository: Code
- Paper: Stark: Social Long-Term Multi-Modal Conversation with Persona Commonsense Knowledge
- Point of Contact: Young-Jun Lee
## Model Details
- Model: Ultron-Summarizer-1B is a fully open-source conversational summarizer that generates summaries for long-term conversations, including those with image-sharing turns.
- Date: Ultron-Summarizer-1B was trained in 2024.
- Training Dataset: Stark-Summary
- Architecture: Ultron-Summarizer-1B was trained on top of Llama-3.2-1B-Instruct.
## How to Use
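A minimal usage sketch with Hugging Face `transformers`. The repository id `passing2961/Ultron-Summarizer-1B`, the system prompt, and the user-prompt wording below are assumptions for illustration, not confirmed by this card; adapt the prompt to the format used in the Stark-Summary training data.

```python
# Hedged sketch of running Ultron-Summarizer-1B for conversation summarization.
# MODEL_ID and the prompt wording are assumptions, not taken from this card.
MODEL_ID = "passing2961/Ultron-Summarizer-1B"  # assumed repo id


def build_messages(dialogue: str) -> list[dict]:
    """Wrap a long-term (possibly image-sharing) dialogue in a chat message list."""
    return [
        {"role": "system",
         "content": "You are a helpful assistant that summarizes long-term conversations."},
        {"role": "user",
         "content": f"Summarize the following conversation:\n{dialogue}"},
    ]


def summarize(dialogue: str, max_new_tokens: int = 256) -> str:
    """Generate a summary; imports are deferred so the helper above stays lightweight."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    input_ids = tokenizer.apply_chat_template(
        build_messages(dialogue), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Greedy decoding (`do_sample=False`) is shown for reproducible summaries; sampling parameters can be adjusted as needed.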
## License and Recommendations

🚨 Ultron-Summarizer-1B is intended to be used for research purposes only.
## Acknowledgement

This work was supported by a grant from the KAIST-KT joint research project through AI Tech Lab, Institute of Convergence Technology, funded by KT [Project No. G01230605, Development of Task-oriented Persona-based Dialogue Generation Combining Multi-modal Interaction and Knowledge Modeling].
## Citation
If you find the resources in this repository useful, please cite our work:
```bibtex
@article{lee2024stark,
  title={Stark: Social Long-Term Multi-Modal Conversation with Persona Commonsense Knowledge},
  author={Lee, Young-Jun and Lee, Dokyong and Youn, Junyoung and Oh, Kyeongjin and Ko, Byungsoo and Hyeon, Jonghwan and Choi, Ho-Jin},
  journal={arXiv preprint arXiv:2407.03958},
  year={2024}
}
```