---
title: ChillTranslator
emoji: ❄️
colorFrom: red
colorTo: pink
sdk: docker
pinned: false
---
# ❄️ ChillTranslator 🤬 ➡️ 😎💬
This is an early experimental tool aimed at reducing online toxicity by automatically ➡️ transforming 🌶️ spicy or toxic comments into constructive, ❤️ kinder dialogues using AI and large language models.
ChillTranslator aims to help make online interactions healthier.
Currently, it "translates" a built-in example of a spicy comment. It can also be used from the command line to improve a specific text of your choice, or imported as a module.
Online toxicity can undermine the quality of discourse, causing distress 😞 and driving people away from online communities. Or worse: it can create a viral toxic loop 🌀!
<img src="https://github.com/lukestanley/ChillTranslator/assets/306671/2899f311-24ee-4ce4-ba76-d1de665aab01" width="300">
ChillTranslator hopes to mitigate toxic comments by automatically rephrasing negative comments, while maintaining the original intent and promoting positive communication 🗣️➡️💬. These rephrased texts could be suggested to the original authors as alternatives, or users could enhance their internet experience with "rose-tinted glasses" 🌹😎, automatically translating spicy comments into versions that are easier and more calming to read.
There could be all kinds of failure cases, but it's a start!
Could Reddit, Twitter, Hacker News, or even YouTube comments be more calm and constructive places? I think so!
![ChillTranslator demo](https://github.com/lukestanley/ChillTranslator/assets/306671/128611f4-3e8e-4c52-ba20-2ae61d727d52)
You can try out the ChillTranslator directly in your browser through the HuggingFace Space demo at [https://huggingface.co/spaces/lukestanley/ChillTranslator](https://huggingface.co/spaces/lukestanley/ChillTranslator).
## Approach ✨
- **Converts** text to less toxic variations
- **Preserves original intent**, focusing on constructive dialogue
- **Offline LLM model**: running locally can save costs, avoids signing up to APIs, and avoids the risk of toxic content getting API access revoked. We use llama-cpp-python's server with Mixtral.
- **HuggingFace Space**: A demo is now available at HuggingFace Spaces, allowing users to try out ChillTranslator without any setup.
## Possible future directions 🌟
- **Integration**: examples showing use as a Python module or via an HTTP API, for use from other tools and browser extensions.
- **Speed** improvements.
  - Split text into sentences, e.g. with `pysbd`, for parallel processing of translations.
- Use a hate speech scoring model instead of the current "spicy" score method.
- Use a hate speech dataset to train a translation transformer such as Google's T5, which could run faster than Mixtral.
- Use natural language similarity techniques to compare possible rephrasing fidelity faster.
- Enable easy experimentation with online hosted LLM APIs.
- Code refactoring to improve development speed!
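Two of the ideas above can be sketched in a few lines of Python. The function names here are illustrative, not part of the project: a naive sentence splitter (the `pysbd` library mentioned above handles abbreviations, quotes, and other edge cases far more robustly) and a quick lexical fidelity check using the standard library's `difflib`, as a fast stand-in for heavier natural-language similarity models:

```python
import difflib
import re


def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter: break after ., !, or ? followed by whitespace.
    pysbd would be a more robust choice for real text."""
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]


def rephrasing_fidelity(original: str, rephrased: str) -> float:
    """Rough lexical similarity in [0, 1] between a comment and its
    rephrasing; cheap to compute compared to model-based similarity."""
    return difflib.SequenceMatcher(
        None, original.lower(), rephrased.lower()
    ).ratio()


sentences = split_sentences("This is rubbish. Who wrote this? Do better!")
print(sentences)  # three sentences, each translatable in parallel

score = rephrasing_fidelity(
    "This code is a mess.",
    "This code could be tidied up.",
)
print(f"fidelity: {score:.2f}")
```

Splitting first means each sentence can be sent to the LLM concurrently, and the fidelity score gives a fast pre-filter before any slower semantic comparison.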
## Getting Started 🚀
### Try it Online
You can try out ChillTranslator without any installation by visiting the HuggingFace Space demo:
```
https://huggingface.co/spaces/lukestanley/ChillTranslator
```
### Installation
1. Clone the Project Repository:
```
git clone https://github.com/lukestanley/ChillTranslator.git
cd ChillTranslator
```
2. Download a compatible and capable model like: [Mixtral-8x7B-Instruct-v0.1-GGUF](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf?download=true). E.g:
```
wget "https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf?download=true" -O mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf &
```
3. Install dependencies, including a special fork of `llama-cpp-python`, and Nvidia GPU support if needed:
```
pip install requests pydantic uvicorn starlette fastapi sse_starlette starlette_context pydantic_settings
# If you have an Nvidia GPU, install the special fork of llama-cpp-python with CUBLAS support:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install git+https://github.com/lukestanley/llama-cpp-python.git@expose_json_grammar_convert_function
```
If you don't have an Nvidia GPU, the `CMAKE_ARGS="-DLLAMA_CUBLAS=on"` prefix is not needed before the `pip install` command.
4. Start the LLM server with your chosen configuration. The example below is for Nvidia with `--n_gpu_layers` set to 20; different GPUs fit fewer or more layers. If you have no GPU, omit the `--n_gpu_layers` flag:
```
python3 -m llama_cpp.server --model mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf --port 5834 --n_ctx 4096 --use_mlock false --n_gpu_layers 20 &
```
These config options may need tweaking. Please check out https://llama-cpp-python.readthedocs.io/en/latest/ for more info.
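Once running, the server exposes llama-cpp-python's OpenAI-compatible HTTP API. As a sketch, here is how a request payload could be built in Python (the prompt text and parameters are illustrative; port 5834 matches the command above):

```python
import json

# Completions endpoint served by llama-cpp-python's OpenAI-compatible
# server, started above on port 5834.
url = "http://localhost:5834/v1/completions"

payload = {
    "prompt": "Rewrite this comment more kindly: 'This code is garbage.'",
    "max_tokens": 128,
    "temperature": 0.7,
}

# Serialised body, ready to POST with `requests`, `curl`, etc.
body = json.dumps(payload)
print(url)
print(body)
```

With the server running, the same payload can be sent with e.g. `curl -X POST http://localhost:5834/v1/completions -H "Content-Type: application/json" -d "$body"`.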
### Local Usage
ChillTranslator can be used locally to improve specific texts. This is how to see it in action:
```bash
python3 chill.py
```
For improving a specific text of your choice, use the `-t` flag followed by your text enclosed in quotes:
```bash
python3 chill.py -t "Your text goes here"
```
Alternatively, `chill` can be imported as a module; pass the text to improve to its `improvement_loop` function.
## Contributing 🤝
Contributions are very welcome!
Especially:
- pull requests,
- free GPU credits
- LLM API credits / access.
ChillTranslator is released under the MIT License.
Help make the internet a kinder place, one comment at a time.
Your contribution could make a big difference!