lukestanley committed 9412837 (parent 428143c): Minor changes, typo fixes

README.md
# ChillTranslator ⁉️🌐❄️

This is an early experimental tool aimed at reducing online toxicity by automatically 🔄 transforming 🌶️ spicy or toxic comments into constructive, ❤️ kinder dialogues using AI and large language models. ❄️

ChillTranslator aims to foster healthier online interactions. The potential uses of this translator are vast, and exploring its integration could prove invaluable.
- **Converts** text to less toxic variations
- **Preserves original intent**, focusing on constructive dialogue
- **Offline LLM model**: running it yourself could save costs, avoid the need to sign up for APIs, and avoid the risk of toxic content causing API access to be revoked. We use llama-cpp-python's server with Mixtral.
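The intent-preservation goal implies some way of scoring how close a rewrite stays to the original. A minimal sketch using Python's standard-library `difflib` as a cheap surface-level proxy (this is an illustrative assumption, not the project's actual scoring method; a semantic similarity model would be more faithful):

```python
from difflib import SequenceMatcher

def rephrasing_fidelity(original: str, rewrite: str) -> float:
    """Return a rough 0..1 similarity between a comment and its rewrite.

    Surface-level only: it measures character overlap, not meaning,
    so it is just a quick sanity check on a proposed rewrite.
    """
    return SequenceMatcher(None, original.lower(), rewrite.lower()).ratio()

# Usage: an unchanged text scores 1.0; a good rewrite should keep enough
# overlap to stay recognisable while dropping the toxic wording.
score = rephrasing_fidelity(
    "This idea is utterly idiotic and you should be ashamed.",
    "I don't think this idea works, and here is why.",
)
```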
## Possible future directions 🌟

- **Integration**: offer a Python module and an HTTP API, for use from other tools and browser extensions.
- **HuggingFace / Replicate.com etc.**: running this on a fast system, perhaps on a HuggingFace Space, could be good.
- **Speed** improvements:
  - Split text into sentences, e.g. with "pysbd", for parallel processing of translations.
  - Use a hate speech scoring model instead of the current "spicy" score method.
  - Use a hate speech dataset to train a translation transformer, like Google's T5, to run faster than Mixtral could.
  - Use natural language similarity techniques to compare possible rephrasing fidelity faster.
- Enable easy experimentation with online hosted LLM APIs.
- Code refactoring to improve development speed!
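The sentence-splitting idea above can be sketched with a naive regex splitter standing in for pysbd, plus a thread pool so each sentence's translation could run concurrently. Both the splitter and the `translate` placeholder are illustrative assumptions, not the project's actual code:

```python
import re
from concurrent.futures import ThreadPoolExecutor

def split_sentences(text: str) -> list[str]:
    # Naive stand-in: pysbd's Segmenter handles abbreviations,
    # quotes, and edge cases far more robustly than this regex.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def translate(sentence: str) -> str:
    # Hypothetical placeholder for a per-sentence LLM rewrite call.
    return sentence

def chill_in_parallel(text: str, workers: int = 4) -> str:
    sentences = split_sentences(text)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        calmer = list(pool.map(translate, sentences))  # map preserves order
    return " ".join(calmer)
```

Splitting first means several slow LLM calls can be in flight at once instead of one long sequential request.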
34 |
|
35 |
## Getting Started 🚀
|
36 |
|
|
|
39 |
1. Download a compatible and capable model like: [Mixtral-8x7B-Instruct-v0.1-GGUF](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf?download=true)
|
40 |
2. Make sure it's named as expected by the next command.
|
41 |
3. Install dependencies:
|
42 |
+
```
|
43 |
+
pip install requests pydantic llama-cpp-python llama-cpp-python[server] --upgrade
|
44 |
+
```
|
45 |
4. Start the LLM server:
|
46 |
+
```
|
47 |
+
python3 -m llama_cpp.server --model mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf --port 5834 --n_ctx 4096 --use_mlock false
|
48 |
+
```
|
49 |
+
These config options are not going to be optimal for a lot of setups, as it may not use GPU right away, but this can be configured with a different argument. Please check out https://llama-cpp-python.readthedocs.io/en/latest/ for more info.
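Once running, the llama-cpp-python server exposes an OpenAI-compatible HTTP API, so a rewrite request can be sent to it directly. A minimal stdlib-only sketch; the prompt wording and `temperature` are illustrative assumptions, not the actual prompt used by `chill.py`:

```python
import json
from urllib.request import Request, urlopen

# Port matches the server command above; the path is the
# OpenAI-compatible chat endpoint the server exposes.
URL = "http://localhost:5834/v1/chat/completions"

def build_chill_payload(comment: str) -> dict:
    # Illustrative prompt; the project's real prompt may differ.
    return {
        "messages": [
            {"role": "system",
             "content": "Rewrite the user's comment to be calm and constructive, preserving its intent."},
            {"role": "user", "content": comment},
        ],
        "temperature": 0.3,
    }

def chill(comment: str) -> str:
    """Send the comment to the local server and return the rewritten text."""
    request = Request(
        URL,
        data=json.dumps(build_chill_payload(comment)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(request, timeout=120) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]

# Usage (requires the server from step 4 to be running):
#   print(chill("Your code is a total mess, did you even try?"))
```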
### Usage

ChillTranslator currently has an example spicy comment that it fixes right away.
This is how to see it in action:

```shell
python3 chill.py
```
## Contributing 🤝

Contributions are welcome! Especially:

- pull requests
- free GPU credits
- LLM API credits / access

ChillTranslator is released under the MIT License.