```
## Limitations and bias

The training data used for this model comes from Lithuanian Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their model card:

> "Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes."
## Author