🚩 Report: Ethical issue(s)
Hello SapienzaNLP team. We have noticed that your model can produce toxic content (e.g., racist, sexist) in Italian, and we have been able to reproduce this behavior. We can provide examples privately, but prefer not to share them here publicly.
For reference, this is our content policy: https://huggingface.co/content-guidelines
Reach out to us if we can assist you. Thanks for your cooperation.
Hi there,
Thank you for bringing this to our attention. We are aware of the potential for generating problematic content: our model is a base version intended primarily for research purposes, and it has not undergone alignment or red-teaming.
We would be grateful for any suggestions on incorporating a clear message or disclaimer into our model card to better inform users of these limitations. For now, we have taken the message from Bloom as a reference:
🚨⚠️🚨 Bias, Risks, and Limitations 🚨⚠️🚨
This section identifies foreseeable harms and misunderstandings.
This is a foundation model that has not undergone alignment. The model may:

- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain personal information
- Generate:
  - Hateful, abusive, or violent language
  - Discriminatory or prejudicial language
  - Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
Furthermore, we have observed similar behavior in other base models, so we do not think this is an issue unique to Minerva. Our training corpus consists solely of CulturaX, which is itself hosted on Hugging Face. We are keen to collaborate with the research community and HF to enhance model safety across the board, and we believe it is important to explore the results of training on such openly available web datasets, especially for non-English data.
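For anyone who wants to look at the data directly, here is a minimal sketch for streaming the Italian subset with the `datasets` library (the `uonlp/CulturaX` repo id and the `"it"` config name come from the dataset's Hub page; access may require accepting the dataset's terms there first):

```python
# Minimal sketch, assuming the `datasets` library is installed and any
# terms-of-use gate on the dataset has already been accepted on the Hub.
from datasets import load_dataset

# Stream the Italian subset to avoid downloading the full corpus.
culturax_it = load_dataset("uonlp/CulturaX", "it", split="train", streaming=True)

# Print the start of a few documents; each record carries a raw web "text" field.
for i, example in enumerate(culturax_it):
    print(example["text"][:200])
    if i >= 2:
        break
```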
Thank you for your cooperation, and we look forward to your guidance.
Best regards,
SapienzaNLP team.
Maybe this shouldn't have been released before solving the bias problem.
Thank you for your swift reply, @PereLluis13.
I opened a PR to suggest some more accurate details to add to the model card. In the meantime, I suggest you disable the Inference API; it's easier to do directly from your side.
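If it helps, here is a minimal sketch of one way to do that, assuming the `huggingface_hub` library and write access to the repository (the repo id below is only a placeholder). Setting `inference: false` in the model card's YAML metadata turns off the hosted widget:

```python
# Minimal sketch: switch off the hosted inference widget/API by setting
# `inference: false` in the model card's YAML metadata.
from huggingface_hub import metadata_update

# Placeholder repo id for illustration; replace with the actual model repo.
metadata_update(
    "sapienzanlp/your-model",
    {"inference": False},
    overwrite=True,  # overwrite the key if it is already set
)
```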
As mentioned elsewhere, if the model is intended for research purposes only for now, this should also be stated clearly at the beginning of the model card.
Thank you for your cooperation, and feel free to reach out if I can help with anything else.