mavonic_private_repos/transformers/README_ko.md
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

<p align="center">
  <br>
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
  <br>
</p>

<p align="center">
  <a href="https://circleci.com/gh/huggingface/transformers"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"></a>
  <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"><img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"></a>
  <a href="https://huggingface.co/docs/transformers/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"></a>
  <a href="https://github.com/huggingface/transformers/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"></a>
  <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a>
  <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>

<h4 align="center">
    <p>
        <a href="https://github.com/huggingface/transformers/">English</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> |
        <b>ํ•œ๊ตญ์–ด</b> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">Portuguรชs</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> |
    </p>
</h4>

<h3 align="center">
    <p>State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow</p>
</h3>

<h3 align="center">
    <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>

๐Ÿค— Transformers provides thousands of pretrained models that can perform tasks such as classification, information extraction, question answering, summarization, translation, and text generation in more than 100 languages.
์šฐ๋ฆฌ์˜ ๋ชฉํ‘œ๋Š” ๋ชจ๋‘๊ฐ€ ์ตœ์ฒจ๋‹จ์˜ NLP ๊ธฐ์ˆ ์„ ์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ์ด๋Ÿฌํ•œ ์‚ฌ์ „ํ•™์Šต ๋ชจ๋ธ์„ ๋น ๋ฅด๊ฒŒ ๋‹ค์šด๋กœ๋“œํ•ด ํŠน์ • ํ…์ŠคํŠธ์— ์‚ฌ์šฉํ•˜๊ณ , ์›ํ•˜๋Š” ๋ฐ์ดํ„ฐ๋กœ fine-tuningํ•ด ์ปค๋ฎค๋‹ˆํ‹ฐ๋‚˜ ์šฐ๋ฆฌ์˜ [๋ชจ๋ธ ํ—ˆ๋ธŒ](https://huggingface.co/models)์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก API๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋ชจ๋ธ ๊ตฌ์กฐ๋ฅผ ์ •์˜ํ•˜๋Š” ๊ฐ ํŒŒ์ด์ฌ ๋ชจ๋“ˆ์€ ์™„์ „ํžˆ ๋…๋ฆฝ์ ์ด์—ฌ์„œ ์—ฐ๊ตฌ ์‹คํ—˜์„ ์œ„ํ•ด ์†์‰ฝ๊ฒŒ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๊ฐ€์žฅ ์œ ๋ช…ํ•œ 3๊ฐœ์˜ ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ด๋“ค์€ ์„œ๋กœ ์™„๋ฒฝํžˆ ์—ฐ๋™๋ฉ๋‹ˆ๋‹ค โ€” [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/). ๊ฐ„๋‹จํ•˜๊ฒŒ ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์ค‘ ํ•˜๋‚˜๋กœ ๋ชจ๋ธ์„ ํ•™์Šตํ•˜๊ณ , ๋˜ ๋‹ค๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ์ถ”๋ก ์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์˜จ๋ผ์ธ ๋ฐ๋ชจ ๋Œ€๋ถ€๋ถ„์˜ ๋ชจ๋ธ์„ [๋ชจ๋ธ ํ—ˆ๋ธŒ](https://huggingface.co/models) ํŽ˜์ด์ง€์—์„œ ๋ฐ”๋กœ ํ…Œ์ŠคํŠธํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ณต๊ฐœ ๋ฐ ๋น„๊ณต๊ฐœ ๋ชจ๋ธ์„ ์œ„ํ•œ [๋น„๊ณต๊ฐœ ๋ชจ๋ธ ํ˜ธ์ŠคํŒ…, ๋ฒ„์ „ ๊ด€๋ฆฌ, ์ถ”๋ก  API](https://huggingface.co/pricing)๋„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์‹œ: - [BERT๋กœ ๋งˆ์Šคํ‚น๋œ ๋‹จ์–ด ์™„์„ฑํ•˜๊ธฐ](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [Electra๋ฅผ ์ด์šฉํ•œ ๊ฐœ์ฒด๋ช… ์ธ์‹](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [GPT-2๋กœ ํ…์ŠคํŠธ ์ƒ์„ฑํ•˜๊ธฐ](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+) - [RoBERTa๋กœ ์ž์—ฐ์–ด ์ถ”๋ก ํ•˜๊ธฐ](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [BART๋ฅผ ์ด์šฉํ•œ ์š”์•ฝ](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [DistilBERT๋ฅผ ์ด์šฉํ•œ ์งˆ๋ฌธ 
- [Question answering with DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Translation with T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)

**[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo's text generation capabilities.

## If you are looking for custom support from the Hugging Face team

<a target="_blank" href="https://huggingface.co/support">
    <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>

## Quick tour

To use a model immediately on a given text, we provide the `pipeline` API. A pipeline groups together a pretrained model with the preprocessing that was used during that model's training. Here is a quick example of using a pipeline to classify positive and negative texts:

```python
>>> from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```

The second line of code downloads and caches the pretrained model used by the pipeline, and the third evaluates it on the given text. Here the model judged the text to be positive with a confidence of 99.97%.

Many NLP tasks can be performed with a `pipeline` out of the box. For example, given a question and some context, we can easily extract the answer:

```python
>>> from transformers import pipeline

# Allocate a pipeline for question-answering
>>> question_answerer = pipeline('question-answering')
>>> question_answerer({
...     'question': 'What is the name of the repository ?',
...     'context': 'Pipeline has been included in the huggingface/transformers repository'
... })
{'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
```
In addition to the answer, the pretrained model used here returns its confidence score along with the start and end positions of the answer in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).

To download and use any of the pretrained models for your given task, all it takes is three lines of code. Here is the PyTorch version:

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")

>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```

And here is the equivalent code for TensorFlow:

```python
>>> from transformers import AutoTokenizer, TFAutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")

>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```

The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called on a single string (as in the examples above) or on a list. It returns a dictionary that you can use in downstream code or pass directly to your model using the ** argument-unpacking operator.

The model itself is a regular [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) that you can use as usual. [This tutorial](https://huggingface.co/transformers/training.html) explains how to use such a model in a standard PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune it on a new dataset.
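As an illustration of that `Trainer` workflow, here is a minimal fine-tuning sketch. It is not taken from this README: it assumes the ๐Ÿค— `datasets` library is installed, and the checkpoint (`distilbert-base-uncased`), the IMDB dataset, the subset sizes, and the hyperparameters are all arbitrary choices made for the example.

```python
# A minimal, illustrative fine-tuning sketch (not part of the official quick tour).
# Assumes the ๐Ÿค— `datasets` library is installed; the checkpoint, dataset, subset
# sizes and hyperparameters below are arbitrary choices made for the example.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "distilbert-base-uncased"  # any sequence-classification-capable checkpoint works
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Tokenize a small text-classification dataset (IMDB, used purely as an example).
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="test-trainer",
    per_device_train_batch_size=16,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # small subset for speed
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
)
trainer.train()
```

Padding every example to a fixed length keeps the sketch free of a data collator; in practice you would usually prefer dynamic padding with `DataCollatorWithPadding` and add an evaluation metric.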
- ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์‹ ๊ฒฝ๋ง ๋ธ”๋ก์„ ๋งŒ๋“ค๊ธฐ ์œ„ํ•œ ๋ชจ๋“ˆ์ด ์•„๋‹™๋‹ˆ๋‹ค. ์—ฐ๊ตฌ์ž๋“ค์ด ์—ฌ๋Ÿฌ ํŒŒ์ผ์„ ์‚ดํŽด๋ณด์ง€ ์•Š๊ณ  ๋ฐ”๋กœ ๊ฐ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก, ๋ชจ๋ธ ํŒŒ์ผ ์ฝ”๋“œ์˜ ์ถ”์ƒํ™” ์ˆ˜์ค€์„ ์ ์ •ํ•˜๊ฒŒ ์œ ์ง€ํ–ˆ์Šต๋‹ˆ๋‹ค. - ํ•™์Šต API๋Š” ๋ชจ๋“  ๋ชจ๋ธ์— ์ ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ๋งŒ๋“ค์–ด์ง€์ง„ ์•Š์•˜์ง€๋งŒ, ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์ œ๊ณตํ•˜๋Š” ๋ชจ๋ธ๋“ค์— ์ ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์ตœ์ ํ™”๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ๋จธ์‹  ๋Ÿฌ๋‹์„ ์œ„ํ•ด์„ , ๋‹ค๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. - ๊ฐ€๋Šฅํ•œ ๋งŽ์€ ์‚ฌ์šฉ ์˜ˆ์‹œ๋ฅผ ๋ณด์—ฌ๋“œ๋ฆฌ๊ณ  ์‹ถ์–ด์„œ, [์˜ˆ์‹œ ํด๋”](https://github.com/huggingface/transformers/tree/main/examples)์˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ค€๋น„ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ์Šคํฌ๋ฆฝํŠธ๋“ค์„ ์ˆ˜์ • ์—†์ด ํŠน์ •ํ•œ ๋ฌธ์ œ์— ๋ฐ”๋กœ ์ ์šฉํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”์— ๋งž๊ฒŒ ์ผ๋ถ€ ์ฝ”๋“œ๋ฅผ ์ˆ˜์ •ํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์„ค์น˜ ### pip๋กœ ์„ค์น˜ํ•˜๊ธฐ ์ด ์ €์žฅ์†Œ๋Š” Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, TensorFlow 2.6+์—์„œ ํ…Œ์ŠคํŠธ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. [๊ฐ€์ƒ ํ™˜๊ฒฝ](https://docs.python.org/3/library/venv.html)์— ๐Ÿค— Transformers๋ฅผ ์„ค์น˜ํ•˜์„ธ์š”. Python ๊ฐ€์ƒ ํ™˜๊ฒฝ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์‚ฌ์šฉ์ž ๊ฐ€์ด๋“œ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ์šฐ์„ , ์‚ฌ์šฉํ•  Python ๋ฒ„์ „์œผ๋กœ ๊ฐ€์ƒ ํ™˜๊ฒฝ์„ ๋งŒ๋“ค๊ณ  ์‹คํ–‰ํ•˜์„ธ์š”. ๊ทธ ๋‹ค์Œ, Flax, PyTorch, TensorFlow ์ค‘ ์ ์–ด๋„ ํ•˜๋‚˜๋Š” ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ”Œ๋žซํผ์— ๋งž๋Š” ์„ค์น˜ ๋ช…๋ น์–ด๋ฅผ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด [TensorFlow ์„ค์น˜ ํŽ˜์ด์ง€](https://www.tensorflow.org/install/), [PyTorch ์„ค์น˜ ํŽ˜์ด์ง€](https://pytorch.org/get-started/locally/#start-locally), [Flax ์„ค์น˜ ํŽ˜์ด์ง€](https://github.com/google/flax#quick-install)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ์ด๋“ค ์ค‘ ์ ์–ด๋„ ํ•˜๋‚˜๊ฐ€ ์„ค์น˜๋˜์—ˆ๋‹ค๋ฉด, ๐Ÿค— Transformers๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด pip์„ ์ด์šฉํ•ด ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pip install transformers ``` ์˜ˆ์‹œ๋“ค์„ ์ฒดํ—˜ํ•ด๋ณด๊ณ  ์‹ถ๊ฑฐ๋‚˜, ์ตœ์ตœ์ตœ์ฒจ๋‹จ ์ฝ”๋“œ๋ฅผ ์›ํ•˜๊ฑฐ๋‚˜, ์ƒˆ๋กœ์šด ๋ฒ„์ „์ด ๋‚˜์˜ฌ ๋•Œ๊นŒ์ง€ ๊ธฐ๋‹ค๋ฆด ์ˆ˜ ์—†๋‹ค๋ฉด [๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์†Œ์Šค์—์„œ ๋ฐ”๋กœ ์„ค์น˜](https://huggingface.co/docs/transformers/installation#installing-from-source)ํ•˜์…”์•ผ ํ•ฉ๋‹ˆ๋‹ค. ### conda๋กœ ์„ค์น˜ํ•˜๊ธฐ ๐Ÿค— Transformers๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด conda๋กœ ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```shell script conda install conda-forge::transformers ``` > **_๋…ธํŠธ:_** `huggingface` ์ฑ„๋„์—์„œ `transformers`๋ฅผ ์„ค์น˜ํ•˜๋Š” ๊ฒƒ์€ ์‚ฌ์šฉ์ด ์ค‘๋‹จ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. Flax, PyTorch, TensorFlow ์„ค์น˜ ํŽ˜์ด์ง€์—์„œ ์ด๋“ค์„ conda๋กœ ์„ค์น˜ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜์„ธ์š”. ## ๋ชจ๋ธ ๊ตฌ์กฐ **๐Ÿค— Transformers๊ฐ€ ์ œ๊ณตํ•˜๋Š” [๋ชจ๋“  ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models)** ๋Š” huggingface.co [๋ชจ๋ธ ํ—ˆ๋ธŒ](https://huggingface.co)์— ์™„๋ฒฝํžˆ ์—ฐ๋™๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. [๊ฐœ์ธ](https://huggingface.co/users)๊ณผ [๊ธฐ๊ด€](https://huggingface.co/organizations)์ด ๋ชจ๋ธ ํ—ˆ๋ธŒ์— ์ง์ ‘ ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ˜„์žฌ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ์˜ ๊ฐœ์ˆ˜: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— Transformers๋Š” ๋‹ค์Œ ๋ชจ๋ธ๋“ค์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค: ๊ฐ ๋ชจ๋ธ์˜ ์š”์•ฝ์€ [์—ฌ๊ธฐ](https://huggingface.co/docs/transformers/model_summary)์„œ ํ™•์ธํ•˜์„ธ์š”. 
๊ฐ ๋ชจ๋ธ์ด Flax, PyTorch, TensorFlow์œผ๋กœ ๊ตฌํ˜„๋˜์—ˆ๋Š”์ง€ ๋˜๋Š” ๐Ÿค— Tokenizers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์ง€์›ํ•˜๋Š” ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋ ค๋ฉด, [์ด ํ‘œ](https://huggingface.co/docs/transformers/index#supported-frameworks)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ์ด ๊ตฌํ˜„์€ ์—ฌ๋Ÿฌ ๋ฐ์ดํ„ฐ๋กœ ๊ฒ€์ฆ๋˜์—ˆ๊ณ  (์˜ˆ์‹œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”) ์˜ค๋ฆฌ์ง€๋„ ๊ตฌํ˜„์˜ ์„ฑ๋Šฅ๊ณผ ๊ฐ™์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. [๋„ํ๋จผํŠธ](https://huggingface.co/docs/transformers/examples)์˜ Examples ์„น์…˜์—์„œ ์„ฑ๋Šฅ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์„ค๋ช…์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๋” ์•Œ์•„๋ณด๊ธฐ | ์„น์…˜ | ์„ค๋ช… | |-|-| | [๋„ํ๋จผํŠธ](https://huggingface.co/transformers/) | ์ „์ฒด API ๋„ํ๋จผํŠธ์™€ ํŠœํ† ๋ฆฌ์–ผ | | [๊ณผ์ œ ์š”์•ฝ](https://huggingface.co/docs/transformers/task_summary) | ๐Ÿค— Transformers๊ฐ€ ์ง€์›ํ•˜๋Š” ๊ณผ์ œ๋“ค | | [์ „์ฒ˜๋ฆฌ ํŠœํ† ๋ฆฌ์–ผ](https://huggingface.co/docs/transformers/preprocessing) | `Tokenizer` ํด๋ž˜์Šค๋ฅผ ์ด์šฉํ•ด ๋ชจ๋ธ์„ ์œ„ํ•œ ๋ฐ์ดํ„ฐ ์ค€๋น„ํ•˜๊ธฐ | | [ํ•™์Šต๊ณผ fine-tuning](https://huggingface.co/docs/transformers/training) | ๐Ÿค— Transformers๊ฐ€ ์ œ๊ณตํ•˜๋Š” ๋ชจ๋ธ PyTorch/TensorFlow ํ•™์Šต ๊ณผ์ •๊ณผ `Trainer` API์—์„œ ์‚ฌ์šฉํ•˜๊ธฐ | | [ํ€ต ํˆฌ์–ด: Fine-tuning/์‚ฌ์šฉ ์Šคํฌ๋ฆฝํŠธ](https://github.com/huggingface/transformers/tree/main/examples) | ๋‹ค์–‘ํ•œ ๊ณผ์ œ์—์„œ ๋ชจ๋ธ fine-tuningํ•˜๋Š” ์˜ˆ์‹œ ์Šคํฌ๋ฆฝํŠธ | | [๋ชจ๋ธ ๊ณต์œ  ๋ฐ ์—…๋กœ๋“œ](https://huggingface.co/docs/transformers/model_sharing) | ์ปค๋ฎค๋‹ˆํ‹ฐ์— fine-tune๋œ ๋ชจ๋ธ์„ ์—…๋กœ๋“œ ๋ฐ ๊ณต์œ ํ•˜๊ธฐ | | [๋งˆ์ด๊ทธ๋ ˆ์ด์…˜](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`๋‚˜ `pytorch-pretrained-bert`์—์„œ ๐Ÿค— Transformers๋กœ ์ด๋™ํ•˜๊ธฐ| ## ์ธ์šฉ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์ธ์šฉํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ์ด [๋…ผ๋ฌธ](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)์„ ์ธ์šฉํ•ด ์ฃผ์„ธ์š”: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
mavonic_private_repos/transformers/CONTRIBUTING.md
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Contribute to ๐Ÿค— Transformers

Everyone is welcome to contribute, and we value everybody's contribution. Code contributions are not the only way to help the community. Answering questions, helping others, and improving the documentation are also immensely valuable.

It also helps us if you spread the word! Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply โญ๏ธ the repository to say thank you.

However you choose to contribute, please be mindful and respect our [code of conduct](https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md).

**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**

## Ways to contribute

There are several ways you can contribute to ๐Ÿค— Transformers:

* Fix outstanding issues with the existing code.
* Submit issues related to bugs or desired new features.
* Implement new models.
* Contribute to the examples or to the documentation.

If you don't know where to start, there is a special [Good First Issue](https://github.com/huggingface/transformers/contribute) listing. It will give you a list of open issues that are beginner-friendly and help you start contributing to open-source. The best way to do that is to open a Pull Request and link it to the issue that you'd like to work on. We try to give priority to opened PRs as we can easily track the progress of the fix, and if the contributor does not have time anymore, someone else can take the PR over.

For something slightly more challenging, you can also take a look at the [Good Second Issue](https://github.com/huggingface/transformers/labels/Good%20Second%20Issue) list. In general though, if you feel like you know what you're doing, go for it and we'll help you get there! ๐Ÿš€

> All contributions are equally valuable to the community. ๐Ÿฅฐ

## Fixing outstanding issues

If you notice an issue with the existing code and have a fix in mind, feel free to [start contributing](#create-a-pull-request) and open a Pull Request!

## Submitting a bug-related issue or feature request

Do your best to follow these guidelines when submitting a bug-related issue or a feature request. It will make it easier for us to come back to you quickly and with good feedback.

### Did you find a bug?

The ๐Ÿค— Transformers library is robust and reliable thanks to users who report the problems they encounter.

Before you report an issue, we would really appreciate it if you could **make sure the bug was not already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. If you're unsure whether the bug is in your code or the library, please ask in the [forum](https://discuss.huggingface.co/) first.
This helps us respond quicker to fixing issues related to the library versus general questions.

Once you've confirmed the bug hasn't already been reported, please include the following information in your issue so we can quickly resolve it:

* Your **OS type and version** and **Python**, **PyTorch** and **TensorFlow** versions when applicable.
* A short, self-contained, code snippet that allows us to reproduce the bug in less than 30s.
* The *full* traceback if an exception is raised.
* Attach any other additional information, like screenshots, you think may help.

To get the OS and software versions automatically, run the following command:

```bash
transformers-cli env
```

You can also run the same command from the root of the repository:

```bash
python src/transformers/commands/transformers_cli.py env
```

### Do you want a new feature?

If there is a new feature you'd like to see in ๐Ÿค— Transformers, please open an issue and describe:

1. What is the *motivation* behind this feature? Is it related to a problem or frustration with the library? Is it a feature related to something you need for a project? Is it something you worked on and think it could benefit the community? Whatever it is, we'd love to hear about it!
2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better we'll be able to help you.
3. Provide a *code snippet* that demonstrates the feature's usage.
4. If the feature is related to a paper, please include a link.

If your issue is well written we're already 80% of the way there by the time you create it.

We have added [templates](https://github.com/huggingface/transformers/tree/main/templates) to help you get started with your issue.

## Do you want to implement a new model?

New models are constantly released and if you want to implement a new model, please provide the following information:

* A short description of the model and a link to the paper.
* Link to the implementation if it is open-sourced.
* Link to the model weights if they are available.

If you are willing to contribute the model yourself, let us know so we can help you add it to ๐Ÿค— Transformers!

We have a technical guide for [how to add a model to ๐Ÿค— Transformers](https://huggingface.co/docs/transformers/add_new_model).

## Do you want to add documentation?

We're always looking for improvements to the documentation that make it more clear and accurate. Please let us know how the documentation can be improved, such as typos and any content that is missing, unclear or inaccurate. We'll be happy to make the changes or help you make a contribution if you're interested!

For more details about how to generate, build, and write the documentation, take a look at the documentation [README](https://github.com/huggingface/transformers/tree/main/docs).

## Create a Pull Request

Before writing any code, we strongly advise you to search through the existing PRs or issues to make sure nobody is already working on the same thing. If you are unsure, it is always a good idea to open an issue to get some feedback.

You will need basic `git` proficiency to contribute to ๐Ÿค— Transformers. While `git` is not the easiest tool to use, it has the greatest manual. Type `git --help` in a shell and enjoy! If you prefer books, [Pro Git](https://git-scm.com/book/en/v2) is a very good reference.

You'll need **[Python 3.8](https://github.com/huggingface/transformers/blob/main/setup.py#L426)** or above to contribute to ๐Ÿค— Transformers. Follow the steps below to start contributing:
1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the **[Fork](https://github.com/huggingface/transformers/fork)** button on the repository's page. This creates a copy of the code under your GitHub user account.

2. Clone your fork to your local disk, and add the base repository as a remote:

   ```bash
   git clone [email protected]:<your Github handle>/transformers.git
   cd transformers
   git remote add upstream https://github.com/huggingface/transformers.git
   ```

3. Create a new branch to hold your development changes:

   ```bash
   git checkout -b a-descriptive-name-for-my-changes
   ```

   ๐Ÿšจ **Do not** work on the `main` branch!

4. Set up a development environment by running the following command in a virtual environment:

   ```bash
   pip install -e ".[dev]"
   ```

   If ๐Ÿค— Transformers was already installed in the virtual environment, remove it with `pip uninstall transformers` before reinstalling it in editable mode with the `-e` flag.

   Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do:

   ```bash
   pip install -e ".[quality]"
   ```

   which should be enough for most use cases.

5. Develop the features in your branch.

   As you work on your code, you should make sure the test suite passes. Run the tests impacted by your changes like this:

   ```bash
   pytest tests/<TEST_TO_RUN>.py
   ```

   For more information about tests, check out the [Testing](https://huggingface.co/docs/transformers/testing) guide.

   ๐Ÿค— Transformers relies on `black` and `ruff` to format its source code consistently. After you make changes, apply automatic style corrections and code verifications that can't be automated in one go with:

   ```bash
   make fixup
   ```

   This target is also optimized to only work with files modified by the PR you're working on.

   If you prefer to run the checks one after the other, the following command applies the style corrections:

   ```bash
   make style
   ```

   ๐Ÿค— Transformers also uses `ruff` and a few custom scripts to check for coding mistakes. Quality controls are run by the CI, but you can run the same checks with:

   ```bash
   make quality
   ```

   Finally, we have a lot of scripts to make sure we don't forget to update some files when adding a new model. You can run these scripts with:

   ```bash
   make repo-consistency
   ```

   To learn more about those checks and how to fix any issues with them, check out the [Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide.

   If you're modifying documents under the `docs/source` directory, make sure the documentation can still be built. This check will also run in the CI when you open a pull request. To run a local check make sure you install the documentation builder:

   ```bash
   pip install ".[docs]"
   ```

   Run the following command from the root of the repository:

   ```bash
   doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build
   ```

   This will build the documentation in the `~/tmp/test-build` folder where you can inspect the generated Markdown files with your favorite editor. You can also preview the docs on GitHub when you open a pull request.
   Once you're happy with your changes, add the changed files with `git add` and record your changes locally with `git commit`:

   ```bash
   git add modified_file.py
   git commit
   ```

   Please remember to write [good commit messages](https://chris.beams.io/posts/git-commit/) to clearly communicate the changes you made!

   To keep your copy of the code up to date with the original repository, rebase your branch on `upstream/branch` *before* you open a pull request or if requested by a maintainer:

   ```bash
   git fetch upstream
   git rebase upstream/main
   ```

   Push your changes to your branch:

   ```bash
   git push -u origin a-descriptive-name-for-my-changes
   ```

   If you've already opened a pull request, you'll need to force push with the `--force` flag. Otherwise, if the pull request hasn't been opened yet, you can just push your changes normally.

6. Now you can go to your fork of the repository on GitHub and click on **Pull Request** to open a pull request. Make sure you tick off all the boxes on our [checklist](#pull-request-checklist) below. When you're ready, you can send your changes to the project maintainers for review.

7. It's ok if maintainers request changes, it happens to our core contributors too! So everyone can see the changes in the pull request, work in your local branch and push the changes to your fork. They will automatically appear in the pull request.

### Pull request checklist

โ˜ The pull request title should summarize your contribution.<br>
โ˜ If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people viewing the issue know you are working on it).<br>
โ˜ To indicate a work in progress please prefix the title with `[WIP]`. These are useful to avoid duplicated work, and to differentiate it from PRs ready to be merged.<br>
โ˜ Make sure existing tests pass.<br>
โ˜ If adding a new feature, also add tests for it.<br>
   - If you are adding a new model, make sure you use `ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...)` to trigger the common tests.
   - If you are adding new `@slow` tests, make sure they pass using `RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py`.
   - If you are adding a new tokenizer, write tests and make sure `RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py` passes.
   - CircleCI does not run the slow tests, but GitHub Actions does every night!<br>

โ˜ All public methods must have informative docstrings (see [`modeling_bert.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py) for an example).<br>
โ˜ Due to the rapidly growing repository, don't add any images, videos and other non-text files that'll significantly weigh down the repository. Instead, use a Hub repository such as [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) to host these files and reference them by URL. We recommend placing documentation related images in the following repository: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images). You can open a PR on this dataset repository and ask a Hugging Face member to merge it.

For more information about the checks run on a pull request, take a look at our [Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide.

### Tests

An extensive test suite is included to test the library behavior and several examples.
Library tests can be found in the [tests](https://github.com/huggingface/transformers/tree/main/tests) folder and examples tests in the [examples](https://github.com/huggingface/transformers/tree/main/examples) folder.

We like `pytest` and `pytest-xdist` because it's faster. From the root of the repository, specify a *path to a subfolder or a test file* to run the test:

```bash
python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
```

Similarly, for the `examples` directory, specify a *path to a subfolder or test file* to run the test. For example, the following command tests the text classification subfolder in the PyTorch `examples` directory:

```bash
pip install -r examples/xxx/requirements.txt  # only needed the first time
python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification
```

In fact, this is actually how our `make test` and `make test-examples` commands are implemented (not including the `pip install`)!

You can also specify a smaller set of tests in order to test only the feature you're working on.

By default, slow tests are skipped but you can set the `RUN_SLOW` environment variable to `yes` to run them. This will download many gigabytes of models so make sure you have enough disk space, a good internet connection, or a lot of patience!

<Tip warning={true}>

Remember to specify a *path to a subfolder or a test file* to run the test. Otherwise, you'll run all the tests in the `tests` or `examples` folder, which will take a very long time!

</Tip>

```bash
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification
```

Like the slow tests, there are other environment variables available which are not enabled by default during testing:

- `RUN_CUSTOM_TOKENIZERS`: Enables tests for custom tokenizers.
- `RUN_PT_FLAX_CROSS_TESTS`: Enables tests for PyTorch + Flax integration.
- `RUN_PT_TF_CROSS_TESTS`: Enables tests for TensorFlow + PyTorch integration.

More environment variables and additional information can be found in [testing_utils.py](src/transformers/testing_utils.py).

๐Ÿค— Transformers uses `pytest` as a test runner only. It doesn't use any `pytest`-specific features in the test suite itself.

This means `unittest` is fully supported. Here's how to run tests with `unittest`:

```bash
python -m unittest discover -s tests -t . -v
python -m unittest discover -s examples -t examples -v
```

### Style guide

For documentation strings, ๐Ÿค— Transformers follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html). Check our [documentation writing guide](https://github.com/huggingface/transformers/tree/main/docs#writing-documentation---specification) for more information.

### Develop on Windows

On Windows (unless you're working in [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/) or WSL), you need to configure git to transform Windows `CRLF` line endings to Linux `LF` line endings:

```bash
git config core.autocrlf input
```

One way to run the `make` command on Windows is with MSYS2:

1. [Download MSYS2](https://www.msys2.org/), and we assume it's installed in `C:\msys64`.
2. Open the command line `C:\msys64\msys2.exe` (it should be available from the **Start** menu).
3. Run in the shell: `pacman -Syu` and install `make` with `pacman -S make`.
4. Add `C:\msys64\usr\bin` to your PATH environment variable.
You can now use `make` from any terminal (PowerShell, cmd.exe, etc.)! ๐ŸŽ‰

### Sync a forked repository with upstream main (the Hugging Face repository)

When updating the main branch of a forked repository, please follow these steps to avoid pinging the upstream repository, which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs.

1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main.
2. If a PR is absolutely necessary, use the following steps after checking out your branch:

```bash
git checkout -b your-branch-for-syncing
git pull --squash --no-commit upstream main
git commit -m '<your message without GitHub references>'
git push --set-upstream origin your-branch-for-syncing
```
mavonic_private_repos/transformers/LICENSE
Copyright 2018- The Hugging Face team. All rights reserved. Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
mavonic_private_repos/transformers/CODE_OF_CONDUCT.md
# Contributor Covenant Code of Conduct ## Our Pledge We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. ## Our Standards Examples of behavior that contributes to a positive environment for our community include: * Demonstrating empathy and kindness toward other people * Being respectful of differing opinions, viewpoints, and experiences * Giving and gracefully accepting constructive feedback * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience * Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: * The use of sexualized language or imagery, and sexual attention or advances of any kind * Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or email address, without their explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Enforcement Responsibilities Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. ## Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [email protected]. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. ## Enforcement Guidelines Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: ### 1. Correction **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. ### 2. Warning **Community Impact**: A violation through a single incident or series of actions. **Consequence**: A warning with consequences for continued behavior. 
No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. ### 3. Temporary Ban **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. ### 4. Permanent Ban **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. **Consequence**: A permanent ban from any sort of public interaction within the community. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1]. Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC]. For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations]. [homepage]: https://www.contributor-covenant.org [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html [Mozilla CoC]: https://github.com/mozilla/diversity [FAQ]: https://www.contributor-covenant.org/faq [translations]: https://www.contributor-covenant.org/translations
mavonic_private_repos/transformers/README_hd.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!--- A useful guide for English-Hindi translation of Hugging Face documentation - Add space around English words and numbers when they appear between Hindi characters. E.g., เค•เฅเคฒ เคฎเคฟเคฒเคพเค•เคฐ 100 เคธเฅ‡ เค…เคงเคฟเค• เคญเคพเคทเคพเคเค; เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เคฒเคพเค‡เคฌเฅเคฐเฅ‡เคฐเฅ€ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคคเคพ เคนเฅˆเฅค - เคตเคฐเฅเค—เคพเค•เคพเคฐ เค‰เคฆเฅเคงเคฐเคฃเฅ‹เค‚ เค•เคพ เคชเฅเคฐเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚, เคœเฅˆเคธเฅ‡, "เค‰เคฆเฅเคงเคฐเคฃ" Dictionary Hugging Face: เค—เคฒเฅ‡ เคฒเค—เคพเค“ เคšเฅ‡เคนเคฐเคพ token: เคถเคฌเฅเคฆ (เค”เคฐ เคฎเฅ‚เคฒ เค…เค‚เค—เฅเคฐเฅ‡เคœเฅ€ เค•เฅ‹ เค•เฅ‹เคทเฅเค เค• เคฎเฅ‡เค‚ เคšเคฟเคนเฅเคจเคฟเคค เค•เคฐเฅ‡เค‚๏ผ‰ tokenize: เคŸเฅ‹เค•เคจเคจเคพเค‡เคœเคผ เค•เคฐเฅ‡เค‚ (เค”เคฐ เคฎเฅ‚เคฒ เค…เค‚เค—เฅเคฐเฅ‡เคœเคผเฅ€ เค•เฅ‹ เคšเคฟเคนเฅเคจเคฟเคค เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เค•เฅ‹เคทเฅเค เค• เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚) tokenizer: Tokenizer (เคฎเฅ‚เคฒ เค…เค‚เค—เฅเคฐเฅ‡เคœเฅ€ เคฎเฅ‡เค‚ เค•เฅ‹เคทเฅเค เค• เค•เฅ‡ เคธเคพเคฅ) transformer: transformer pipeline: เคธเคฎเคจเฅเค•เฅเคฐเคฎ API: API (เค…เคจเฅเคตเคพเคฆ เค•เฅ‡ เคฌเคฟเคจเคพ) inference: เคตเคฟเคšเคพเคฐ Trainer: เคชเฅเคฐเคถเคฟเค•เฅเคทเค•เฅค เค•เค•เฅเคทเคพ เค•เฅ‡ เคจเคพเคฎ เค•เฅ‡ เคฐเฅ‚เคช เคฎเฅ‡เค‚ เคชเฅเคฐเคธเฅเคคเฅเคค เค•เคฟเค เคœเคพเคจเฅ‡ เคชเคฐ เค…เคจเฅเคตเคพเคฆเคฟเคค เคจเคนเฅ€เค‚ เค•เคฟเคฏเคพ เค—เคฏเคพเฅค pretrained/pretrain: เคชเฅ‚เคฐเฅเคต เคชเฅเคฐเคถเคฟเค•เฅเคทเคฃ finetune: เคซเคผเคพเค‡เคจ เคŸเฅเคฏเฅ‚เคจเคฟเค‚เค— community: เคธเคฎเฅเคฆเคพเคฏ example: เคœเคฌ เคตเคฟเคถเคฟเคทเฅเคŸ เค—เฅ‹เคฆเคพเคฎ example เค•เฅˆเคŸเคฒเฅ‰เค— เค•เคฐเคคเฅ‡ เคธเคฎเคฏ "เค•เฅ‡เคธ เค•เฅ‡เคธ" เค•เฅ‡ เคฐเฅ‚เคช เคฎเฅ‡เค‚ เค…เคจเฅเคตเคพเคฆเคฟเคค Python data structures (e.g., list, set, dict): เคฎเฅ‚เคฒ เค…เค‚เค—เฅเคฐเฅ‡เคœเฅ€ เค•เฅ‹ เคšเคฟเคนเฅเคจเคฟเคค เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เคธเฅ‚เคšเคฟเคฏเฅ‹เค‚, เคธเฅ‡เคŸเฅ‹เค‚, เคถเคฌเฅเคฆเค•เฅ‹เคถเฅ‹เค‚ เคฎเฅ‡เค‚ เค…เคจเฅเคตเคพเคฆ เค•เคฐเฅ‡เค‚ เค”เคฐ เค•เฅ‹เคทเฅเค เค• เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚ NLP/Natural Language Processing: เคฆเฅเคตเคพเคฐเคพ NLP เค…เคจเฅเคตเคพเคฆ เค•เฅ‡ เคฌเคฟเคจเคพ เคชเฅเคฐเค•เคŸ เคนเฅ‹เคคเฅ‡ เคนเฅˆเค‚ Natural Language Processing เคชเฅเคฐเคธเฅเคคเฅเคค เค•เคฟเค เคœเคพเคจเฅ‡ เคชเคฐ เคชเฅเคฐเคพเค•เฅƒเคคเคฟเค• เคญเคพเคทเคพ เคธเค‚เคธเคพเคงเคจ เคฎเฅ‡เค‚ เค…เคจเฅเคตเคพเคฆ เค•เคฐเฅ‡เค‚ checkpoint: เคœเคพเคเคš เคฌเคฟเค‚เคฆเฅ --> <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> <br> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img 
alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <b>เคนเคฟเคจเฅเคฆเฅ€</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | </p> </h4> <h3 align="center"> <p>Jax, PyTorch เค”เคฐ TensorFlow เค•เฅ‡ เคฒเคฟเค เค‰เคจเฅเคจเคค เคฎเคถเฅ€เคจ เคฒเคฐเฅเคจเคฟเค‚เค—</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค— Transformers 100 เคธเฅ‡ เค…เคงเคฟเค• เคญเคพเคทเคพเค“เค‚ เคฎเฅ‡เค‚ เคชเคพเค  เคตเคฐเฅเค—เฅ€เค•เคฐเคฃ, เคธเฅ‚เคšเคจเคพ เคจเคฟเคทเฅเค•เคฐเฅเคทเคฃ, เคชเฅเคฐเคถเฅเคจ เค‰เคคเฅเคคเคฐ, เคธเคพเคฐเคพเค‚เคถเฅ€เค•เคฐเคฃ, เค…เคจเฅเคตเคพเคฆ, เคชเคพเค  เคจเคฟเคฐเฅเคฎเคพเคฃ เค•เคพ เคธเคฎเคฐเฅเคฅเคจ เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เคนเคœเคพเคฐเฅ‹เค‚ เคชเฅ‚เคฐเฅเคต-เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เคฎเฅ‰เคกเคฒ เคชเฅเคฐเคฆเคพเคจ เค•เคฐเคคเคพ เคนเฅˆเฅค เค‡เคธเค•เคพ เค‰เคฆเฅเคฆเฅ‡เคถเฅเคฏ เคธเคฌเคธเฅ‡ เค‰เคจเฅเคจเคค เคเคจเคเคฒเคชเฅ€ เคคเค•เคจเฅ€เค• เค•เฅ‹ เคธเคญเฅ€ เค•เฅ‡ เคฒเคฟเค เคธเฅเคฒเคญ เคฌเคจเคพเคจเคพ เคนเฅˆเฅค ๐Ÿค— Transformers เคคเฅเคตเคฐเคฟเคค เคกเคพเค‰เคจเคฒเฅ‹เคก เค”เคฐ เค‰เคชเคฏเฅ‹เค— เค•เฅ‡ เคฒเคฟเค เคเค• เคเคชเฅ€เค†เคˆ เคชเฅเคฐเคฆเคพเคจ เค•เคฐเคคเคพ เคนเฅˆ, เคœเคฟเคธเคธเฅ‡ เค†เคช เค•เคฟเคธเฅ€ เคฆเคฟเค เค—เค เคชเคพเค  เคชเคฐ เคเค• เคชเฅ‚เคฐเฅเคต-เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เคฎเฅ‰เคกเคฒ เคฒเฅ‡ เคธเค•เคคเฅ‡ เคนเฅˆเค‚, เค‡เคธเฅ‡ เค…เคชเคจเฅ‡ เคกเฅ‡เคŸเคพเคธเฅ‡เคŸ เคชเคฐ เค เฅ€เค• เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚ เค”เคฐ เค‡เคธเฅ‡ [เคฎเฅ‰เคกเคฒ เคนเคฌ](https://huggingface.co/models) เค•เฅ‡ เคฎเคพเคงเฅเคฏเคฎ เคธเฅ‡ เคธเคฎเฅเคฆเคพเคฏ เค•เฅ‡ เคธเคพเคฅ เคธเคพเคเคพ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚เฅค เค‡เคธเฅ€ เคธเคฎเคฏ, เคชเฅเคฐเคคเฅเคฏเฅ‡เค• เคชเคฐเคฟเคญเคพเคทเคฟเคค เคชเคพเคฏเคฅเคจ เคฎเฅ‰เคกเฅเคฏเฅ‚เคฒ เคชเฅ‚เคฐเฅ€ เคคเคฐเคน เคธเฅ‡ เคธเฅเคตเคคเค‚เคคเฅเคฐ 
เคนเฅˆ, เคœเฅ‹ เคธเค‚เคถเฅ‹เคงเคจ เค”เคฐ เคคเฅ‡เคœเฅ€ เคธเฅ‡ เค…เคจเฅเคธเค‚เคงเคพเคจ เคชเฅเคฐเคฏเฅ‹เค—เฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค เคธเฅเคตเคฟเคงเคพเคœเคจเค• เคนเฅˆเฅค ๐Ÿค— Transformers เคคเฅ€เคจ เคธเคฌเคธเฅ‡ เคฒเฅ‹เค•เคชเฅเคฐเคฟเคฏ เค—เคนเคจ เคถเคฟเค•เฅเคทเคฃ เคชเฅเคธเฅเคคเค•เคพเคฒเคฏเฅ‹เค‚ เค•เคพ เคธเคฎเคฐเฅเคฅเคจ เค•เคฐเคคเคพ เคนเฅˆ๏ผš [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) โ€” เค”เคฐ เค‡เคธเค•เฅ‡ เคธเคพเคฅ เคจเคฟเคฐเฅเคฌเคพเคง เคฐเฅ‚เคช เคธเฅ‡ เคเค•เฅ€เค•เฅƒเคค เคนเฅ‹เคคเคพ เคนเฅˆเฅค เค†เคช เค…เคชเคจเฅ‡ เคฎเฅ‰เคกเคฒ เค•เฅ‹ เคธเฅ€เคงเฅ‡ เคเค• เคขเคพเค‚เคšเฅ‡ เค•เฅ‡ เคธเคพเคฅ เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚ เค”เคฐ เคฆเฅ‚เคธเคฐเฅ‡ เค•เฅ‡ เคธเคพเคฅ เคฒเฅ‹เคก เค”เคฐ เค…เคจเฅเคฎเคพเคจ เคฒเค—เคพ เคธเค•เคคเฅ‡ เคนเฅˆเค‚เฅค ## เค‘เคจเคฒเคพเค‡เคจ เคกเฅ‡เคฎเฅ‹ เค†เคช เคธเคฌเคธเฅ‡ เคธเฅ€เคงเฅ‡ เคฎเฅ‰เคกเคฒ เคชเฅƒเคทเฅเค  เคชเคฐ เคชเคฐเฅ€เค•เฅเคทเคฃ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚ [model hub](https://huggingface.co/models) เคฎเฅ‰เคกเคฒ เคชเคฐเฅค เคนเคฎ [เคจเคฟเคœเฅ€ เคฎเฅ‰เคกเคฒ เคนเฅ‹เคธเฅเคŸเคฟเค‚เค—, เคฎเฅ‰เคกเคฒ เคธเค‚เคธเฅเค•เคฐเคฃ, เค”เคฐ เค…เคจเฅเคฎเคพเคจ เคเคชเฅ€เค†เคˆ](https://huggingface.co/pricing) เคญเฅ€ เคชเฅเคฐเคฆเคพเคจ เค•เคฐเคคเฅ‡ เคนเฅˆเค‚เฅคใ€‚ เคฏเคนเคพเค เค•เฅเค› เค‰เคฆเคพเคนเคฐเคฃ เคนเฅˆเค‚๏ผš - [เคถเคฌเฅเคฆ เค•เฅ‹ เคญเคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เคฎเคพเคธเฅเค• เค•เฅ‡ เคฐเฅ‚เคช เคฎเฅ‡เค‚ BERT เค•เคพ เคชเฅเคฐเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [เค‡เคฒเฅ‡เค•เฅเคŸเฅเคฐเคพ เค•เฅ‡ เคธเคพเคฅ เคจเคพเคฎเคฟเคค เค‡เค•เคพเคˆ เคชเคนเคšเคพเคจ](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [เคœเฅ€เคชเฅ€เคŸเฅ€-2 เค•เฅ‡ เคธเคพเคฅ เคŸเฅ‡เค•เฅเคธเฅเคŸ เคœเคจเคฐเฅ‡เคถเคจ](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+) - [เคฐเฅ‰เคฌเคฐเฅเคŸเคพ เค•เฅ‡ เคธเคพเคฅ เคชเฅเคฐเคพเค•เฅƒเคคเคฟเค• เคญเคพเคทเคพ เคจเคฟเคทเฅเค•เคฐเฅเคท](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [เคฌเคพเคฐเฅเคŸ เค•เฅ‡ เคธเคพเคฅ เคชเคพเค  เคธเคพเคฐเคพเค‚เคถ](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [เคกเคฟเคธเฅเคŸเคฟเคฒเคฌเคฐเฅเคŸ เค•เฅ‡ เคธเคพเคฅ 
เคชเฅเคฐเคถเฅเคจเฅ‹เคคเฅเคคเคฐ](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [เค…เคจเฅเคตเคพเคฆ เค•เฅ‡ เคฒเคฟเค T5 เค•เคพ เคชเฅเคฐเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) **[Write With Transformer](https://transformer.huggingface.co)**๏ผŒเคนเค—เคฟเค‚เค— เคซเฅ‡เคธ เคŸเฅ€เคฎ เคฆเฅเคตเคพเคฐเคพ เคฌเคจเคพเคฏเคพ เค—เคฏเคพ, เคฏเคน เคเค• เค†เคงเคฟเค•เคพเคฐเคฟเค• เคชเคพเค  เคชเฅ€เคขเคผเฅ€ เคนเฅˆ demoใ€‚ ## เคฏเคฆเคฟ เค†เคช เคนเค—เคฟเค‚เค— เคซเฅ‡เคธ เคŸเฅ€เคฎ เคธเฅ‡ เคฌเฅ€เคธเฅเคชเฅ‹เค• เคธเคฎเคฐเฅเคฅเคจ เค•เฅ€ เคคเคฒเคพเคถ เค•เคฐ เคฐเคนเฅ‡ เคนเฅˆเค‚ <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## เคœเคฒเฅเคฆเฅ€ เคถเฅเคฐเฅ‚ เค•เคฐเฅ‡เค‚ เคนเคฎ เคคเฅเคตเคฐเคฟเคค เค‰เคชเคฏเฅ‹เค— เค•เฅ‡ เคฒเคฟเค เคฎเฅ‰เคกเคฒ เคชเฅเคฐเคฆเคพเคจ เค•เคฐเคคเฅ‡ เคนเฅˆเค‚ `pipeline` (เคชเคพเค‡เคชเคฒเคพเค‡เคจ) เคเคชเฅ€เค†เคˆเฅค เคชเคพเค‡เคชเคฒเคพเค‡เคจ เคชเฅ‚เคฐเฅเคต-เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เคฎเฅ‰เคกเคฒ เค”เคฐ เคธเค‚เคฌเค‚เคงเคฟเคค เคชเคพเค  เคชเฅเคฐเฅ€เคชเฅเคฐเฅ‹เคธเฅ‡เคธเคฟเค‚เค— เค•เฅ‹ เคเค•เคคเฅเคฐเคฟเคค เค•เคฐเคคเฅ€ เคนเฅˆเฅค เคธเค•เคพเคฐเคพเคคเฅเคฎเค• เค”เคฐ เคจเค•เคพเคฐเคพเคคเฅเคฎเค• เคญเคพเคตเคจเคพ เค•เฅ‹ เคจเคฟเคฐเฅเคงเคพเคฐเคฟเคค เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‹เค‚ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเฅ‡ เค•เคพ เคเค• เคคเฅเคตเคฐเคฟเคค เค‰เคฆเคพเคนเคฐเคฃ เคฏเคนเคพเค‚ เคฆเคฟเคฏเคพ เค—เคฏเคพ เคนเฅˆ: ```python >>> from transformers import pipeline # เคญเคพเคตเคจเคพ เคตเคฟเคถเฅเคฒเฅ‡เคทเคฃ เคชเคพเค‡เคชเคฒเคพเค‡เคจ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` เค•เฅ‹เคก เค•เฅ€ เคฆเฅ‚เคธเคฐเฅ€ เคชเค‚เค•เฅเคคเคฟ เคชเคพเค‡เคชเคฒเคพเค‡เคจ เคฆเฅเคตเคพเคฐเคพ เค‰เคชเคฏเฅ‹เค— เค•เคฟเค เค—เค เคชเฅ‚เคฐเฅเคต-เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เคฎเฅ‰เคกเคฒ เค•เฅ‹ เคกเคพเค‰เคจเคฒเฅ‹เคก เค”เคฐ เค•เฅˆเคถ เค•เคฐเคคเฅ€ เคนเฅˆ, 
เคœเคฌเค•เคฟ เค•เฅ‹เคก เค•เฅ€ เคคเฅ€เคธเคฐเฅ€ เคชเค‚เค•เฅเคคเคฟ เคฆเคฟเค เค—เค เคชเคพเค  เคชเคฐ เคฎเฅ‚เคฒเฅเคฏเคพเค‚เค•เคจ เค•เคฐเคคเฅ€ เคนเฅˆเฅค เคฏเคนเคพเค‚ เค‰เคคเฅเคคเคฐ 99.97% เค†เคคเฅเคฎเคตเคฟเคถเฅเคตเคพเคธ เค•เฅ‡ เคธเคพเคฅ "เคธเค•เคพเคฐเคพเคคเฅเคฎเค•" เคนเฅˆเฅค เค•เคˆ เคเคจเคเคฒเคชเฅ€ เค•เคพเคฐเฅเคฏเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค เค†เค‰เคŸ เค‘เฅž เคฆ เคฌเฅ‰เค•เฅเคธ เคชเฅ‚เคฐเฅเคต-เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‡เค‚ เค‰เคชเคฒเคฌเฅเคง เคนเฅˆเค‚เฅค เค‰เคฆเคพเคนเคฐเคฃ เค•เฅ‡ เคฒเคฟเค, เคนเคฎ เค•เคฟเคธเฅ€ เคฆเคฟเค เค—เค เคชเคพเค  เคธเฅ‡ เค•เคฟเคธเฅ€ เคชเฅเคฐเคถเฅเคจ เค•เคพ เค‰เคคเฅเคคเคฐ เค†เคธเคพเคจเฅ€ เคธเฅ‡ เคจเคฟเค•เคพเคฒ เคธเค•เคคเฅ‡ เคนเฅˆเค‚: ```python >>> from transformers import pipeline # เคชเฅเคฐเคถเฅเคจเฅ‹เคคเฅเคคเคฐ เคชเคพเค‡เคชเคฒเคพเค‡เคจ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ >>> question_answerer = pipeline('question-answering') >>> question_answerer({ ... 'question': 'What is the name of the repository ?', ... 'context': 'Pipeline has been included in the huggingface/transformers repository' ... }) {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} ``` เค‰เคคเฅเคคเคฐ เคฆเฅ‡เคจเฅ‡ เค•เฅ‡ เค…เคฒเคพเคตเคพ, เคชเฅ‚เคฐเฅเคต-เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เคฎเฅ‰เคกเคฒ เคธเค‚เค—เคค เค†เคคเฅเคฎเคตเคฟเคถเฅเคตเคพเคธ เคธเฅเค•เฅ‹เคฐ เคญเฅ€ เคฆเฅ‡เคคเคพ เคนเฅˆ, เคธเคพเคฅ เคนเฅ€ เคŸเฅ‹เค•เคจเคฏเฅเค•เฅเคค เคชเคพเค  เคฎเฅ‡เค‚ เค‰เคคเฅเคคเคฐ เค•เฅ€ เคถเฅเคฐเฅเค†เคค เค”เคฐ เคธเคฎเคพเคชเฅเคคเคฟ เค•เฅ€ เคธเฅเคฅเคฟเคคเคฟ เคญเฅ€เฅค เค†เคช [เค‡เคธ เคŸเฅเคฏเฅ‚เคŸเฅ‹เคฐเคฟเคฏเคฒ](https://huggingface.co/docs/transformers/task_summary) เคธเฅ‡ เคชเคพเค‡เคชเคฒเคพเค‡เคจ เคเคชเฅ€เค†เคˆ เคฆเฅเคตเคพเคฐเคพ เคธเคฎเคฐเฅเคฅเคฟเคค เค•เคพเคฐเฅเคฏเฅ‹เค‚ เค•เฅ‡ เคฌเคพเคฐเฅ‡ เคฎเฅ‡เค‚ เค…เคงเคฟเค• เคœเคพเคจ เคธเค•เคคเฅ‡ เคนเฅˆเค‚เฅค เค…เคชเคจเฅ‡ เค•เคพเคฐเฅเคฏ เคชเคฐ เค•เคฟเคธเฅ€ เคญเฅ€ เคชเฅ‚เคฐเฅเคต-เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เคฎเฅ‰เคกเคฒ เค•เฅ‹ เคกเคพเค‰เคจเคฒเฅ‹เคก เค•เคฐเคจเคพ เค”เคฐ เค‰เคธเค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ เคญเฅ€ เค•เฅ‹เคก เค•เฅ€ เคคเฅ€เคจ เคชเค‚เค•เฅเคคเคฟเคฏเฅ‹เค‚ เคœเคฟเคคเคจเคพ เคนเฅ€ เคธเคฐเคฒ เคนเฅˆเฅค เคฏเคนเคพเค PyTorch เคธเค‚เคธเฅเค•เคฐเคฃ เค•เฅ‡ เคฒเคฟเค เคเค• เค‰เคฆเคพเคนเคฐเคฃ เคฆเคฟเคฏเคพ เค—เคฏเคพ เคนเฅˆ: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` เค”เคฐ เคฏเคนเคพเค เค‡เคธเค•เคพ เคธเคฎเค•เค•เฅเคท TensorFlow เค•เฅ‹เคก เคนเฅˆ: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` เคŸเฅ‹เค•เคจเคจเคพเค‡เคœเคผเคฐ เคธเคญเฅ€ เคชเฅ‚เคฐเฅเคต-เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เคฎเฅ‰เคกเคฒเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค เคชเฅเคฐเฅ€เคชเฅเคฐเฅ‹เคธเฅ‡เคธเคฟเค‚เค— เคชเฅเคฐเคฆเคพเคจ เค•เคฐเคคเคพ เคนเฅˆ เค”เคฐ เค‡เคธเฅ‡ เคธเฅ€เคงเฅ‡ เคเค• เคธเฅเคŸเฅเคฐเคฟเค‚เค— (เคœเฅˆเคธเฅ‡ เคŠเคชเคฐ เคฆเคฟเค เค—เค เค‰เคฆเคพเคนเคฐเคฃ) เคฏเคพ เค•เคฟเคธเฅ€ เคธเฅ‚เคšเฅ€ เคชเคฐ เคฌเฅเคฒเคพเคฏเคพ เคœเคพ เคธเค•เคคเคพ เคนเฅˆเฅค เคฏเคน เคเค• เคกเคฟเค•เฅเคถเคจเคฐเฅ€ (`dict`) เค†เค‰เคŸเคชเฅเคŸ เค•เคฐเคคเคพ เคนเฅˆ เคœเคฟเคธเฅ‡ เค†เคช เคกเคพเค‰เคจเคธเฅเคŸเฅเคฐเฅ€เคฎ เค•เฅ‹เคก เคฎเฅ‡เค‚ เค‰เคชเคฏเฅ‹เค— เค•เคฐ เคธเค•เคคเฅ‡ 
เคนเฅˆเค‚ เคฏเคพ `**` เค…เคจเคชเฅˆเค•เคฟเค‚เค— เคเค•เฅเคธเคชเฅเคฐเฅ‡เคถเคจ เค•เฅ‡ เคฎเคพเคงเฅเคฏเคฎ เคธเฅ‡ เคธเฅ€เคงเฅ‡ เคฎเฅ‰เคกเคฒ เค•เฅ‹ เคชเคพเคธ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚เฅค เคฎเฅ‰เคกเคฒ เคธเฅเคตเคฏเค‚ เคเค• เคจเคฟเคฏเคฎเคฟเคค [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) เคฏเคพ [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (เค†เคชเค•เฅ‡ เคฌเฅˆเค•เคเค‚เคก เค•เฅ‡ เค†เคงเคพเคฐ เคชเคฐ), เคœเฅ‹ เคนเฅ‹ เคธเค•เคคเคพ เคนเฅˆ เคธเคพเคฎเคพเคจเฅเคฏ เคคเคฐเฅ€เค•เฅ‡ เคธเฅ‡ เค‰เคชเคฏเฅ‹เค— เค•เคฟเคฏเคพ เคœเคพเคคเคพ เคนเฅˆเฅค [เคฏเคน เคŸเฅเคฏเฅ‚เคŸเฅ‹เคฐเคฟเคฏเคฒ](https://huggingface.co/transformers/training.html) เคฌเคคเคพเคคเคพ เคนเฅˆ เค•เคฟ เค‡เคธ เคคเคฐเคน เค•เฅ‡ เคฎเฅ‰เคกเคฒ เค•เฅ‹ เค•เฅเคฒเคพเคธเคฟเค• PyTorch เคฏเคพ TensorFlow เคชเฅเคฐเคถเคฟเค•เฅเคทเคฃ เคฒเฅ‚เคช เคฎเฅ‡เค‚ เค•เฅˆเคธเฅ‡ เคเค•เฅ€เค•เฅƒเคค เค•เคฟเคฏเคพ เคœเคพเค, เคฏเคพ เคนเคฎเคพเคฐเฅ‡ `เคŸเฅเคฐเฅ‡เคจเคฐ` เคเคชเฅ€เค†เคˆ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เฅˆเคธเฅ‡ เค•เคฐเฅ‡เค‚ เคคเคพเค•เคฟ เค‡เคธเฅ‡ เคœเคฒเฅเคฆเฅ€ เคธเฅ‡ เคซเคผเคพเค‡เคจ เคŸเฅเคฏเฅ‚เคจ เค•เคฟเคฏเคพ เคœเคพ เคธเค•เฅ‡เฅคเคเค• เคจเคฏเคพ เคกเฅ‡เคŸเคพเคธเฅ‡เคŸ เคชเฅ‡เฅค ## เคŸเฅเคฐเคพเค‚เคธเคซเคพเคฐเฅเคฎเคฐ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เฅเคฏเฅ‹เค‚ เค•เคฐเฅ‡เค‚? 1. เค‰เคชเคฏเฅ‹เค— เคฎเฅ‡เค‚ เค†เคธเคพเคจเฅ€ เค•เฅ‡ เคฒเคฟเค เค‰เคจเฅเคจเคค เคฎเฅ‰เคกเคฒ: - เคเคจเคเคฒเคฏเฅ‚ เค”เคฐ เคเคจเคเคฒเคœเฅ€ เคชเคฐ เคฌเฅ‡เคนเคคเคฐ เคชเฅเคฐเคฆเคฐเฅเคถเคจ - เคชเฅเคฐเคตเฅ‡เคถ เค•เฅ‡ เคฒเคฟเค เค•เคฎ เคฌเคพเคงเคพเค“เค‚ เค•เฅ‡ เคธเคพเคฅ เคถเคฟเค•เฅเคทเคฃ เค”เคฐ เค…เคญเฅเคฏเคพเคธ เค•เฅ‡ เค…เคจเฅเค•เฅ‚เคฒ - เค‰เคชเคฏเฅ‹เค—เค•เคฐเฅเคคเคพ-เคธเคพเคฎเคจเคพ เค•เคฐเคจเฅ‡ เคตเคพเคฒเฅ‡ เคธเคพเคฐ เคคเคคเฅเคต, เค•เฅ‡เคตเคฒ เคคเฅ€เคจ เคตเคฐเฅเค—เฅ‹เค‚ เค•เฅ‹ เคœเคพเคจเคจเฅ‡ เค•เฅ€ เคœเคฐเฅ‚เคฐเคค เคนเฅˆ - เคธเคญเฅ€ เคฎเฅ‰เคกเคฒเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค เคเค•เฅ€เค•เฅƒเคค เคเคชเฅ€เค†เคˆ 1. เค•เคฎ เค•เคฎเฅเคชเฅเคฏเฅ‚เคŸเฅ‡เคถเคจเคฒ เค“เคตเคฐเคนเฅ‡เคก เค”เคฐ เค•เคฎ เค•เคพเคฐเฅเคฌเคจ เค‰เคคเฅเคธเคฐเฅเคœเคจ: - เคถเฅ‹เคงเค•เคฐเฅเคคเคพ เคนเคฐ เคฌเคพเคฐ เคจเค เคธเคฟเคฐเฅ‡ เคธเฅ‡ เคชเฅเคฐเคถเคฟเค•เฅเคทเคฃ เคฆเฅ‡เคจเฅ‡ เค•เฅ‡ เคฌเคœเคพเคฏ เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เคฎเฅ‰เคกเคฒ เคธเคพเคเคพ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚ - เค‡เค‚เคœเฅ€เคจเคฟเคฏเคฐ เค—เคฃเคจเคพ เคธเคฎเคฏ เค”เคฐ เค‰เคคเฅเคชเคพเคฆเคจ เค“เคตเคฐเคนเฅ‡เคก เค•เฅ‹ เค•เคฎ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚ - เคฆเคฐเฅเคœเคจเฅ‹เค‚ เคฎเฅ‰เคกเคฒ เค†เคฐเฅเค•เคฟเคŸเฅ‡เค•เฅเคšเคฐ, 2,000 เคธเฅ‡ เค…เคงเคฟเค• เคชเฅ‚เคฐเฅเคต-เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เคฎเฅ‰เคกเคฒ, 100 เคธเฅ‡ เค…เคงเคฟเค• เคญเคพเคทเคพเค“เค‚ เค•เคพ เคธเคฎเคฐเฅเคฅเคจ 1.เคฎเฅ‰เคกเคฒ เคœเฅ€เคตเคจเคšเค•เฅเคฐ เค•เฅ‡ เคนเคฐ เคนเคฟเคธเฅเคธเฅ‡ เค•เฅ‹ เคถเคพเคฎเคฟเคฒ เค•เคฐเคคเคพ เคนเฅˆ: - เค•เฅ‹เคก เค•เฅ€ เค•เฅ‡เคตเคฒ 3 เคชเค‚เค•เฅเคคเคฟเคฏเฅ‹เค‚ เคฎเฅ‡เค‚ เค‰เคจเฅเคจเคค เคฎเฅ‰เคกเคฒเฅ‹เค‚ เค•เฅ‹ เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เค•เคฐเฅ‡เค‚ - เคฎเฅ‰เคกเคฒ เค•เฅ‹ เคฎเคจเคฎเคพเคจเฅ‡ เคขเค‚เค— เคธเฅ‡ เคตเคฟเคญเคฟเคจเฅเคจ เคกเฅ€เคช เคฒเคฐเฅเคจเคฟเค‚เค— เคซเฅเคฐเฅ‡เคฎเคตเคฐเฅเค• เค•เฅ‡ เคฌเฅ€เคš เคธเฅเคฅเคพเคจเคพเค‚เคคเคฐเคฟเคค เค•เคฟเคฏเคพ เคœเคพ เคธเค•เคคเคพ เคนเฅˆ, เคœเฅˆเคธเคพ เค†เคช เคšเคพเคนเคคเฅ‡ เคนเฅˆเค‚ - เคจเคฟเคฐเฅเคฌเคพเคง เคฐเฅ‚เคช เคธเฅ‡ เคชเฅเคฐเคถเคฟเค•เฅเคทเคฃ, เคฎเฅ‚เคฒเฅเคฏเคพเค‚เค•เคจ เค”เคฐ เค‰เคคเฅเคชเคพเคฆเคจ เค•เฅ‡ เคฒเคฟเค เคธเคฌเคธเฅ‡ เค‰เคชเคฏเฅเค•เฅเคค เคขเคพเค‚เคšเคพ เคšเฅเคจเฅ‡เค‚ 1. 
เค†เคธเคพเคจเฅ€ เคธเฅ‡ เค…เคจเคจเฅเคฏ เคฎเฅ‰เคกเคฒ เค•เฅ‹ เค…เคจเฅเค•เฅ‚เคฒเคฟเคค เค•เคฐเฅ‡เค‚ เค”เคฐ เค…เคชเคจเฅ€ เค†เคตเคถเฅเคฏเค•เคคเคพเค“เค‚ เค•เฅ‡ เคฒเคฟเค เคฎเคพเคฎเคฒเฅ‹เค‚ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚: - เคนเคฎ เคฎเฅ‚เคฒ เคชเฅ‡เคชเคฐ เคชเคฐเคฟเคฃเคพเคฎเฅ‹เค‚ เค•เฅ‹ เคชเฅเคจ: เคชเฅ‡เคถ เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เคชเฅเคฐเคคเฅเคฏเฅ‡เค• เคฎเฅ‰เคกเคฒ เค†เคฐเฅเค•เคฟเคŸเฅ‡เค•เฅเคšเคฐ เค•เฅ‡ เคฒเคฟเค เค•เคˆ เค‰เคชเคฏเฅ‹เค— เค•เฅ‡ เคฎเคพเคฎเคฒเฅ‡ เคชเฅเคฐเคฆเคพเคจ เค•เคฐเคคเฅ‡ เคนเฅˆเค‚ - เคฎเฅ‰เคกเคฒ เค•เฅ€ เค†เค‚เคคเคฐเคฟเค• เคธเค‚เคฐเคšเคจเคพ เคชเคพเคฐเคฆเคฐเฅเคถเฅ€ เค”เคฐ เคธเฅเคธเค‚เค—เคค เคฐเคนเคคเฅ€ เคนเฅˆ - เคฎเฅ‰เคกเคฒ เคซเคผเคพเค‡เคฒ เค•เฅ‹ เค…เคฒเค— เคธเฅ‡ เค‡เคธเฅเคคเฅ‡เคฎเคพเคฒ เค•เคฟเคฏเคพ เคœเคพ เคธเค•เคคเคพ เคนเฅˆ, เคœเฅ‹ เคธเค‚เคถเฅ‹เคงเคจ เค”เคฐ เคคเฅเคตเคฐเคฟเคค เคชเฅเคฐเคฏเฅ‹เค— เค•เฅ‡ เคฒเคฟเค เคธเฅเคตเคฟเคงเคพเคœเคจเค• เคนเฅˆ ## เคฎเฅเคเฅ‡ เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฌ เคจเคนเฅ€เค‚ เค•เคฐเคจเคพ เคšเคพเคนเคฟเค? - เคฏเคน เคฒเคพเค‡เคฌเฅเคฐเฅ‡เคฐเฅ€ เคฎเฅ‰เคกเฅเคฏเฅ‚เคฒเคฐ เคจเฅเคฏเฅ‚เคฐเคฒ เคจเฅ‡เคŸเคตเคฐเฅเค• เคŸเฅ‚เคฒเคฌเฅ‰เค•เฅเคธ เคจเคนเฅ€เค‚ เคนเฅˆเฅค เคฎเฅ‰เคกเคฒ เคซเคผเคพเค‡เคฒ เคฎเฅ‡เค‚ เค•เฅ‹เคก เคœเคพเคจเคฌเฅ‚เคเค•เคฐ เค…เคฒเฅเคชเคตเคฟเค•เคธเคฟเคค เคนเฅˆ, เคฌเคฟเคจเคพ เค…เคคเคฟเคฐเคฟเค•เฅเคค เคธเคพเคฐ เค‡เคจเค•เฅˆเคชเฅเคธเฅเคฒเฅ‡เคถเคจ เค•เฅ‡, เคคเคพเค•เคฟ เคถเฅ‹เคงเค•เคฐเฅเคคเคพ เค…เคฎเฅ‚เคฐเฅเคคเคคเคพ เค”เคฐ เคซเคผเคพเค‡เคฒ เคœเค‚เคชเคฟเค‚เค— เคฎเฅ‡เค‚ เคถเคพเคฎเคฟเคฒ เคนเฅเค เคœเคฒเฅเคฆเฅ€ เคธเฅ‡ เคชเฅเคจเคฐเคพเคตเฅƒเคคเคฟ เค•เคฐ เคธเค•เฅ‡เค‚เฅค - `เคŸเฅเคฐเฅ‡เคจเคฐ` เคเคชเฅ€เค†เคˆ เค•เคฟเคธเฅ€ เคญเฅ€ เคฎเฅ‰เคกเคฒ เค•เฅ‡ เคธเคพเคฅ เคธเค‚เค—เคค เคจเคนเฅ€เค‚ เคนเฅˆ, เคฏเคน เค•เฅ‡เคตเคฒ เค‡เคธ เคชเฅเคธเฅเคคเค•เคพเคฒเคฏ เค•เฅ‡ เคฎเฅ‰เคกเคฒ เค•เฅ‡ เคฒเคฟเค เค…เคจเฅเค•เฅ‚เคฒเคฟเคค เคนเฅˆเฅค เคฏเคฆเคฟ เค†เคช เคธเคพเคฎเคพเคจเฅเคฏ เคฎเคถเฅ€เคจ เคฒเคฐเฅเคจเคฟเค‚เค— เค•เฅ‡ เคฒเคฟเค เค‰เคชเคฏเฅเค•เฅเคค เคชเฅเคฐเคถเคฟเค•เฅเคทเคฃ เคฒเฅ‚เคช เค•เคพเคฐเฅเคฏเคพเคจเฅเคตเคฏเคจ เค•เฅ€ เคคเคฒเคพเคถ เคฎเฅ‡เค‚ เคนเฅˆเค‚, เคคเฅ‹ เค•เคนเฅ€เค‚ เค”เคฐ เคฆเฅ‡เค–เฅ‡เค‚เฅค - เคนเคฎเคพเคฐเฅ‡ เคธเคฐเฅเคตเฅ‹เคคเฅเคคเคฎ เคชเฅเคฐเคฏเคพเคธเฅ‹เค‚ เค•เฅ‡ เคฌเคพเคตเคœเฅ‚เคฆ, [เค‰เคฆเคพเคนเคฐเคฃ เคจเคฟเคฐเฅเคฆเฅ‡เคถเคฟเค•เคพ](https://github.com/huggingface/transformers/tree/main/examples) เคฎเฅ‡เค‚ เคธเฅเค•เฅเคฐเคฟเคชเฅเคŸ เค•เฅ‡เคตเคฒ เค‰เคชเคฏเฅ‹เค— เค•เฅ‡ เคฎเคพเคฎเคฒเฅ‡ เคนเฅˆเค‚เฅค เค†เคชเค•เฅ€ เคตเคฟเคถเคฟเคทเฅเคŸ เคธเคฎเคธเฅเคฏเคพ เค•เฅ‡ เคฒเคฟเค, เคตเฅ‡ เคœเคฐเฅ‚เคฐเฅ€ เคจเคนเฅ€เค‚ เค•เคฟ เคฌเฅ‰เค•เฅเคธ เคธเฅ‡ เคฌเคพเคนเคฐ เค•เคพเคฎ เค•เคฐเฅ‡เค‚, เค”เคฐ เค†เคชเค•เฅ‹ เค•เฅ‹เคก เค•เฅ€ เค•เฅเค› เคชเค‚เค•เฅเคคเคฟเคฏเฅ‹เค‚ เค•เฅ‹ เคธเฅ‚เคŸ เค•เคฐเคจเฅ‡ เค•เฅ€ เค†เคตเคถเฅเคฏเค•เคคเคพ เคนเฅ‹ เคธเค•เคคเฅ€ เคนเฅˆเฅค ## เคธเฅเคฅเคพเคชเคฟเคค เค•เคฐเคจเคพ ### เคชเคฟเคช เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ เค‡เคธ เคฐเคฟเคชเฅ‰เคœเคฟเคŸเคฐเฅ€ เค•เคพ เคชเคฐเฅ€เค•เฅเคทเคฃ Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ เค”เคฐ TensorFlow 2.6+ เค•เฅ‡ เคคเคนเคค เค•เคฟเคฏเคพ เค—เคฏเคพ เคนเฅˆเฅค เค†เคช [เคตเคฐเฅเคšเฅเค…เคฒ เคเคจเคตเคพเคฏเคฐเคจเคฎเฅ‡เค‚เคŸ](https://docs.python.org/3/library/venv.html) เคฎเฅ‡เค‚ ๐Ÿค— เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เค‡เค‚เคธเฅเคŸเฅ‰เคฒ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚เฅค เคฏเคฆเคฟ เค†เคช เค…เคญเฅ€ เคคเค• เคชเคพเคฏเคฅเคจ เค•เฅ‡ เคตเคฐเฅเคšเฅเค…เคฒ เคเคจเคตเคพเคฏเคฐเคจเคฎเฅ‡เค‚เคŸ เคธเฅ‡ เคชเคฐเคฟเคšเคฟเคค เคจเคนเฅ€เค‚ เคนเฅˆเค‚, เคคเฅ‹ เค•เฅƒเคชเคฏเคพ เค‡เคธเฅ‡ [เค‰เคชเคฏเฅ‹เค—เค•เคฐเฅเคคเคพ 
เคจเคฟเคฐเฅเคฆเฅ‡เคถ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) เคชเคขเคผเฅ‡เค‚เฅค เคธเคฌเคธเฅ‡ เคชเคนเคฒเฅ‡, เคชเคพเคฏเคฅเคจ เค•เฅ‡ เค‰เคธ เคธเค‚เคธเฅเค•เคฐเคฃ เค•เฅ‡ เคธเคพเคฅ เคเค• เค†เคญเคพเคธเฅ€ เคตเคพเคคเคพเคตเคฐเคฃ เคฌเคจเคพเคเค‚ เคœเคฟเคธเค•เคพ เค†เคช เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเฅ‡ เค”เคฐ เค‰เคธเฅ‡ เคธเค•เฅเคฐเคฟเคฏ เค•เคฐเคจเฅ‡ เค•เฅ€ เคฏเฅ‹เคœเคจเคพ เคฌเคจเคพ เคฐเคนเฅ‡ เคนเฅˆเค‚เฅค เคซเคฟเคฐ, เค†เคชเค•เฅ‹ Flax, PyTorch เคฏเคพ TensorFlow เคฎเฅ‡เค‚ เคธเฅ‡ เค•เคฟเคธเฅ€ เคเค• เค•เฅ‹ เคธเฅเคฅเคพเคชเคฟเคค เค•เคฐเคจเฅ‡ เค•เฅ€ เค†เคตเคถเฅเคฏเค•เคคเคพ เคนเฅˆเฅค เค…เคชเคจเฅ‡ เคชเฅเคฒเฅ‡เคŸเคซเคผเฅ‰เคฐเฅเคฎ เคชเคฐ เค‡เคจ เคซเคผเฅเคฐเฅ‡เคฎเคตเคฐเฅเค• เค•เฅ‹ เคธเฅเคฅเคพเคชเคฟเคค เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค, [TensorFlow เคธเฅเคฅเคพเคชเคจเคพ เคชเฅƒเคทเฅเค ](https://www.tensorflow.org/install/), [PyTorch เคธเฅเคฅเคพเคชเคจเคพ เคชเฅƒเคทเฅเค ](https://pytorch.org/get-started/locally) เคฆเฅ‡เค–เฅ‡เค‚ start-locally เคฏเคพ [Flax เคธเฅเคฅเคพเคชเคจเคพ เคชเฅƒเคทเฅเค ](https://github.com/google/flax#quick-install). เคœเคฌ เค‡เคจเคฎเฅ‡เค‚ เคธเฅ‡ เค•เฅ‹เคˆ เคเค• เคฌเฅˆเค•เคเค‚เคก เคธเคซเคฒเคคเคพเคชเฅ‚เคฐเฅเคตเค• เคธเฅเคฅเคพเคชเคฟเคค เคนเฅ‹ เคœเคพเคคเคพ เคนเฅˆ, เคคเฅ‹ เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เคจเคฟเคฎเฅเคจเคพเคจเฅเคธเคพเคฐ เคธเฅเคฅเคพเคชเคฟเคค เค•เคฟเค เคœเคพ เคธเค•เคคเฅ‡ เคนเฅˆเค‚: ```bash pip install transformers ``` เคฏเคฆเคฟ เค†เคช เค‰เคชเคฏเฅ‹เค— เค•เฅ‡ เคฎเคพเคฎเคฒเฅ‹เค‚ เค•เฅ‹ เค†เคœเคผเคฎเคพเคจเคพ เคšเคพเคนเคคเฅ‡ เคนเฅˆเค‚ เคฏเคพ เค†เคงเคฟเค•เคพเคฐเคฟเค• เคฐเคฟเคฒเฅ€เคœเคผ เคธเฅ‡ เคชเคนเคฒเฅ‡ เคจเคตเฅ€เคจเคคเคฎ เค‡เคจ-เคกเฅ‡เคตเคฒเคชเคฎเฅ‡เค‚เคŸ เค•เฅ‹เคก เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ เคšเคพเคนเคคเฅ‡ เคนเฅˆเค‚, เคคเฅ‹ เค†เคชเค•เฅ‹ [เคธเฅ‹เคฐเฅเคธ เคธเฅ‡ เค‡เค‚เคธเฅเคŸเฅ‰เคฒ เค•เคฐเคจเคพ เคนเฅ‹เค—เคพ](https://huggingface.co/docs/transformers/installation#installing-from-) เคธเฅเคฐเฅ‹เคคเฅค ### เค•เฅ‹เค‚เคกเคพ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เค•เฅ‹เค‚เคกเคพ เค•เฅ‡ เคฎเคพเคงเฅเคฏเคฎ เคธเฅ‡ เคจเคฟเคฎเฅเคจเคพเคจเฅเคธเคพเคฐ เคธเฅเคฅเคพเคชเคฟเคค เค•เคฟเคฏเคพ เคœเคพ เคธเค•เคคเคพ เคนเฅˆ: ```shell script conda install conda-forge::transformers ``` > **_เคจเฅ‹เคŸ:_** `huggingface` เคšเฅˆเคจเคฒ เคธเฅ‡ `transformers` เค‡เค‚เคธเฅเคŸเฅ‰เคฒ เค•เคฐเคจเคพ เคชเฅเคฐเคพเคจเคพ เคชเคกเคผ เคšเฅเค•เคพ เคนเฅˆเฅค เค•เฅ‹เค‚เคกเคพ เค•เฅ‡ เคฎเคพเคงเฅเคฏเคฎ เคธเฅ‡ Flax, PyTorch, เคฏเคพ TensorFlow เคฎเฅ‡เค‚ เคธเฅ‡ เค•เคฟเคธเฅ€ เคเค• เค•เฅ‹ เคธเฅเคฅเคพเคชเคฟเคค เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค, เคจเคฟเคฐเฅเคฆเฅ‡เคถเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค เค‰เคจเค•เฅ‡ เคธเค‚เคฌเค‚เคงเคฟเคค เคธเฅเคฅเคพเคชเคจเคพ เคชเฅƒเคทเฅเค  เคฆเฅ‡เค–เฅ‡เค‚เฅค ## เคฎเฅ‰เคกเคฒ เค†เคฐเฅเค•เคฟเคŸเฅ‡เค•เฅเคšเคฐ [เค‰เคชเคฏเฅ‹เค—เค•เคฐเฅเคคเคพ](https://huggingface.co/users) เค”เคฐ [organization](https://huggingface.co) เคฆเฅเคตเคพเคฐเคพ เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เคธเคฎเคฐเฅเคฅเคฟเคค [**เคธเคญเฅ€ เคฎเฅ‰เคกเคฒ เคšเฅŒเค•เคฟเคฏเฅ‹เค‚**](https://huggingface.co/models/users) เคนเค—เคฟเค‚เค—เคซเฅ‡เคธ.เค•เฅ‹/เค‘เคฐเฅเค—เคจเคพเค‡เคœเฅ‡เคถเคจ), เคธเคญเฅ€ เค•เฅ‹ เคฌเคฟเคจเคพ เค•เคฟเคธเฅ€ เคฌเคพเคงเคพ เค•เฅ‡ เคนเค—เคฟเค‚เค—เคซเฅ‡เคธ.เค•เฅ‹ [เคฎเฅ‰เคกเคฒ เคนเคฌ](https://huggingface.co) เค•เฅ‡ เคธเคพเคฅ เคเค•เฅ€เค•เฅƒเคค เค•เคฟเคฏเคพ เค—เคฏเคพ เคนเฅˆเฅค เคšเฅŒเค•เคฟเคฏเฅ‹เค‚ เค•เฅ€ เคตเคฐเฅเคคเคฎเคพเคจ เคธเค‚เค–เฅเคฏเคพ: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เคตเคฐเฅเคคเคฎเคพเคจ เคฎเฅ‡เค‚ เคจเคฟเคฎเฅเคจเคฒเคฟเค–เคฟเคค 
เค†เคฐเฅเค•เคฟเคŸเฅ‡เค•เฅเคšเคฐ เค•เคพ เคธเคฎเคฐเฅเคฅเคจ เค•เคฐเคคเฅ‡ เคนเฅˆเค‚: เคฎเฅ‰เคกเคฒ เค•เฅ‡ เค…เคตเคฒเฅ‹เค•เคจ เค•เฅ‡ เคฒเคฟเค [เคฏเคนเคพเค‚ เคฆเฅ‡เค–เฅ‡เค‚](https://huggingface.co/docs/transformers/model_summary)๏ผš เคฏเคน เคœเคพเค‚เคšเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เค•เคฟ เค•เฅเคฏเคพ เค•เคฟเคธเฅ€ เคฎเฅ‰เคกเคฒ เคฎเฅ‡เค‚ เคชเคนเคฒเฅ‡ เคธเฅ‡ เคนเฅ€ Flax, PyTorch เคฏเคพ TensorFlow เค•เคพ เค•เคพเคฐเฅเคฏเคพเคจเฅเคตเคฏเคจ เคนเฅˆ, เคฏเคพ เคฏเคฆเคฟ เค‰เคธเค•เฅ‡ เคชเคพเคธ Tokenizers เคฒเคพเค‡เคฌเฅเคฐเฅ‡เคฐเฅ€ เคฎเฅ‡เค‚ เคธเค‚เคฌเค‚เคงเคฟเคค เคŸเฅ‹เค•เคจ เคนเฅˆ, เคคเฅ‹ [เคฏเคน เคคเคพเคฒเคฟเค•เคพ](https://huggingface.co/docs/transformers/index#supported) เคฆเฅ‡เค–เฅ‡เค‚เฅค -เคซเฅเคฐเฅ‡เคฎเคตเคฐเฅเค•)เฅค เค‡เคจ เค•เคพเคฐเฅเคฏเคพเคจเฅเคตเคฏเคจเฅ‹เค‚ เค•เคพ เคชเคฐเฅ€เค•เฅเคทเคฃ เค•เคˆ เคกเฅ‡เคŸเคพเคธเฅ‡เคŸ เคชเคฐ เค•เคฟเคฏเคพ เค—เคฏเคพ เคนเฅˆ (เคฆเฅ‡เค–เฅ‡เค‚ เค•เฅ‡เคธ เคธเฅเค•เฅเคฐเคฟเคชเฅเคŸ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚) เค”เคฐ เคตเฅˆเคจเคฟเคฒเคพ เค•เคพเคฐเฅเคฏเคพเคจเฅเคตเคฏเคจ เค•เฅ‡ เคฒเคฟเค เคคเฅเคฒเคจเคพเคคเฅเคฎเค• เคฐเฅ‚เคช เคธเฅ‡ เคชเฅเคฐเคฆเคฐเฅเคถเคจ เค•เคฐเคจเคพ เคšเคพเคนเคฟเคเฅค เค†เคช เค‰เคชเคฏเฅ‹เค— เค•เฅ‡ เคฎเคพเคฎเคฒเฅ‡ เค•เฅ‡ เคฆเคธเฅเคคเคพเคตเฅ‡เคœเคผ [เค‡เคธ เค…เคจเฅเคญเคพเค—](https://huggingface.co/docs/transformers/examples) เคฎเฅ‡เค‚ เคตเฅเคฏเคตเคนเคพเคฐ เค•เคพ เคตเคฟเคตเคฐเคฃ เคชเคขเคผ เคธเค•เคคเฅ‡ เคนเฅˆเค‚เฅค ## เค…เคงเคฟเค• เคธเคฎเคเฅ‡เค‚ |เค…เคงเฅเคฏเคพเคฏ | เคตเคฟเคตเคฐเคฃ | |-|-| | [เคฆเคธเฅเคคเคพเคตเฅ‡เคœเคผเฅ€เค•เคฐเคฃ](https://huggingface.co/transformers/) | เคชเฅ‚เคฐเคพ เคเคชเฅ€เค†เคˆ เคฆเคธเฅเคคเคพเคตเฅ‡เคœเคผเฅ€เค•เคฐเคฃ เค”เคฐ เคŸเฅเคฏเฅ‚เคŸเฅ‹เคฐเคฟเคฏเคฒ | | [เค•เคพเคฐเฅเคฏ เคธเคพเคฐเคพเค‚เคถ](https://huggingface.co/docs/transformers/task_summary) | เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เคธเคฎเคฐเฅเคฅเคฟเคค เค•เคพเคฐเฅเคฏ | | [เคชเฅเคฐเฅ€เคชเฅเคฐเฅ‹เคธเฅ‡เคธเคฟเค‚เค— เคŸเฅเคฏเฅ‚เคŸเฅ‹เคฐเคฟเคฏเคฒ](https://huggingface.co/docs/transformers/preprocessing) | เคฎเฅ‰เคกเคฒ เค•เฅ‡ เคฒเคฟเค เคกเฅ‡เคŸเคพ เคคเฅˆเคฏเคพเคฐ เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค `เคŸเฅ‹เค•เคจเคพเค‡เคœเคผเคฐ` เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ | | [เคชเฅเคฐเคถเคฟเค•เฅเคทเคฃ เค”เคฐ เคซเคพเค‡เคจ-เคŸเฅเคฏเฅ‚เคจเคฟเค‚เค—](https://huggingface.co/docs/transformers/training) | PyTorch/TensorFlow เค•เฅ‡ เคŸเฅเคฐเฅ‡เคจเคฟเค‚เค— เคฒเฅ‚เคช เคฏเคพ `เคŸเฅเคฐเฅ‡เคจเคฐ` API เคฎเฅ‡เค‚ เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เคฆเฅเคตเคพเคฐเคพ เคฆเคฟเค เค—เค เคฎเฅ‰เคกเคฒ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚ | | [เค•เฅเคตเคฟเค• เคธเฅเคŸเคพเคฐเฅเคŸ: เคŸเฅเคตเฅ€เค•เคฟเค‚เค— เคเค‚เคก เคฏเฅ‚เคœเคผ เค•เฅ‡เคธ เคธเฅเค•เฅเคฐเคฟเคชเฅเคŸเฅเคธ](https://github.com/huggingface/transformers/tree/main/examples) | เคตเคฟเคญเคฟเคจเฅเคจ เค•เคพเคฐเฅเคฏเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค เค•เฅ‡เคธ เคธเฅเค•เฅเคฐเคฟเคชเฅเคŸ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚ | | [เคฎเฅ‰เคกเคฒ เคธเคพเคเคพ เค•เคฐเคจเคพ เค”เคฐ เค…เคชเคฒเฅ‹เคก เค•เคฐเคจเคพ](https://huggingface.co/docs/transformers/model_sharing) | เคธเคฎเฅเคฆเคพเคฏ เค•เฅ‡ เคธเคพเคฅ เค…เคชเคจเฅ‡ เคซเคพเค‡เคจ เคŸเฅ‚เคจเคก เคฎเฅ‰เคกเคฒ เค…เคชเคฒเฅ‹เคก เค”เคฐ เคธเคพเคเคพ เค•เคฐเฅ‡เค‚ | | [เคฎเคพเค‡เค—เฅเคฐเฅ‡เคถเคจ](https://huggingface.co/docs/transformers/migration) | `เคชเคพเค‡เคŸเฅ‹เคฐเคš-เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐเฅเคธ` เคฏเคพ `เคชเคพเค‡เคŸเฅ‹เคฐเคš-เคชเฅเคฐเฅ€เคŸเฅเคฐเฅ‡เคจเคก-เคฌเคฐเฅเคŸ` เคธเฅ‡ เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เคฎเฅ‡เค‚ เคฎเคพเค‡เค—เฅเคฐเฅ‡เคŸ เค•เคฐเคจเคพ | ## เค‰เคฆเฅเคงเคฐเคฃ เคนเคฎเคจเฅ‡ เค†เคงเคฟเค•เคพเคฐเคฟเค• เคคเฅŒเคฐ เคชเคฐ เค‡เคธ เคฒเคพเค‡เคฌเฅเคฐเฅ‡เคฐเฅ€ เค•เคพ 
[เคชเฅ‡เคชเคฐ](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) เคชเฅเคฐเค•เคพเคถเคฟเคค เค•เคฟเคฏเคพ เคนเฅˆ, เค…เค—เคฐ เค†เคช เคŸเฅเคฐเคพเคจเฅเคธเคซเคผเฅ‰เคฐเฅเคฎเคฐเฅเคธ เคฒเคพเค‡เคฌเฅเคฐเฅ‡เคฐเฅ€ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคคเฅ‡ เคนเฅˆเค‚, เคคเฅ‹ เค•เฅƒเคชเคฏเคพ เค‰เคฆเฅเคงเฅƒเคค เค•เคฐเฅ‡เค‚: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/awesome-transformers.md
# Awesome projects built with Transformers This page lists awesome projects built on top of Transformers. Transformers is more than a toolkit to use pretrained models: it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects. In this list, we showcase incredibly impactful and novel projects that have pushed the field forward. We celebrate 100 of these projects as we reach the milestone of 100k stars as a community; but we're very open to pull requests adding other projects to the list. If you believe a project should be here and it's not, then please, open a PR to add it. ## [gpt4all](https://github.com/nomic-ai/gpt4all) [gpt4all](https://github.com/nomic-ai/gpt4all) is an ecosystem of open-source chatbots trained on massive collections of clean assistant data including code, stories and dialogue. It offers open-source, large language models such as LLaMA and GPT-J trained in an assistant style. Keywords: Open-source, LLaMA, GPT-J, instruction, assistant ## [recommenders](https://github.com/microsoft/recommenders) This repository contains examples and best practices for building recommendation systems, provided as Jupyter notebooks. It goes over several aspects required to build efficient recommendation systems: data preparation, modeling, evaluation, model selection & optimization, as well as operationalization. Keywords: Recommender systems, AzureML ## [IOPaint](https://github.com/Sanster/IOPaint) Image inpainting tool powered by Stable Diffusion. Remove any unwanted objects, defects, or people from your pictures, or erase and replace anything on your pictures. Keywords: inpainting, SD, Stable Diffusion ## [flair](https://github.com/flairNLP/flair) FLAIR is a powerful PyTorch NLP framework, covering several important tasks: NER, sentiment analysis, part-of-speech tagging, text and document embeddings, among other things. Keywords: NLP, text embedding, document embedding, biomedical, NER, PoS, sentiment-analysis ## [mindsdb](https://github.com/mindsdb/mindsdb) MindsDB is a low-code ML platform, which automates and integrates several ML frameworks into the data stack as "AI Tables" to streamline the integration of AI into applications, making it accessible to developers of all skill levels. Keywords: Database, low-code, AI table ## [langchain](https://github.com/hwchase17/langchain) [langchain](https://github.com/hwchase17/langchain) is aimed at assisting in the development of apps merging both LLMs and other sources of knowledge. The library allows chaining calls to applications, creating a sequence across many tools. Keywords: LLMs, Large Language Models, Agents, Chains ## [LlamaIndex](https://github.com/jerryjliu/llama_index) [LlamaIndex](https://github.com/jerryjliu/llama_index) is a project that provides a central interface to connect your LLMs with external data. It provides various kinds of indices and retrieval mechanisms to perform different LLM tasks and obtain knowledge-augmented results. Keywords: LLMs, Large Language Models, Data Retrieval, Indices, Knowledge Augmentation ## [ParlAI](https://github.com/facebookresearch/ParlAI) [ParlAI](https://github.com/facebookresearch/ParlAI) is a Python framework for sharing, training and testing dialogue models, from open-domain chitchat, to task-oriented dialogue, to visual question answering. 
It provides more than 100 datasets under the same API, a large zoo of pretrained models, a set of agents, and has several integrations. Keywords: Dialogue, Chatbots, VQA, Datasets, Agents ## [sentence-transformers](https://github.com/UKPLab/sentence-transformers) This framework provides an easy method to compute dense vector representations for sentences, paragraphs, and images. The models are based on transformer networks like BERT / RoBERTa / XLM-RoBERTa etc. and achieve state-of-the-art performance in various task. Text is embedding in vector space such that similar text is close and can efficiently be found using cosine similarity. Keywords: Dense vector representations, Text embeddings, Sentence embeddings ## [ludwig](https://github.com/ludwig-ai/ludwig) Ludwig is a declarative machine learning framework that makes it easy to define machine learning pipelines using a simple and flexible data-driven configuration system. Ludwig is targeted at a wide variety of AI tasks. It provides a data-driven configuration system, training, prediction, and evaluation scripts, as well as a programmatic API. Keywords: Declarative, Data-driven, ML Framework ## [InvokeAI](https://github.com/invoke-ai/InvokeAI) [InvokeAI](https://github.com/invoke-ai/InvokeAI) is an engine for Stable Diffusion models, aimed at professionals, artists, and enthusiasts. It leverages the latest AI-driven technologies through CLI as well as a WebUI. Keywords: Stable-Diffusion, WebUI, CLI ## [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) is an easy-to-use and powerful NLP library particularly targeted at the Chinese languages. It has support for multiple pre-trained model zoos, and supports a wide-range of NLP tasks from research to industrial applications. Keywords: NLP, Chinese, Research, Industry ## [stanza](https://github.com/stanfordnlp/stanza) The Stanford NLP Group's official Python NLP library. It contains support for running various accurate natural language processing tools on 60+ languages and for accessing the Java Stanford CoreNLP software from Python. Keywords: NLP, Multilingual, CoreNLP ## [DeepPavlov](https://github.com/deeppavlov/DeepPavlov) [DeepPavlov](https://github.com/deeppavlov/DeepPavlov) is an open-source conversational AI library. It is designed for the development of production ready chat-bots and complex conversational systems, as well as research in the area of NLP and, particularly, of dialog systems. Keywords: Conversational, Chatbot, Dialog ## [alpaca-lora](https://github.com/tloen/alpaca-lora) Alpaca-lora contains code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA). The repository provides training (fine-tuning) as well as generation scripts. Keywords: LoRA, Parameter-efficient fine-tuning ## [imagen-pytorch](https://github.com/lucidrains/imagen-pytorch) An open-source Implementation of Imagen, Google's closed-source Text-to-Image Neural Network that beats DALL-E2. As of release, it is the new SOTA for text-to-image synthesis. Keywords: Imagen, Text-to-image ## [adapters](https://github.com/adapter-hub/adapters) [adapters](https://github.com/adapter-hub/adapters) is an extension of HuggingFace's Transformers library, integrating adapters into state-of-the-art language models by incorporating AdapterHub, a central repository for pre-trained adapter modules. It is a drop-in replacement for transformers, which is regularly updated to stay up-to-date with the developments of transformers. 
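As a rough illustration of the parameter-efficient idea behind such adapter libraries, here is a minimal PyTorch sketch of a bottleneck adapter (a small down-project/up-project module added residually to a frozen layer's output). The class name and sizes are made up for the example; this is a conceptual sketch, not the adapters package's actual API.

```python
# Conceptual sketch of a bottleneck adapter (not the `adapters` package's API).
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project, added residually."""
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Only these few adapter parameters would be trained; the base model stays frozen.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

hidden = torch.randn(2, 10, 768)            # (batch, seq_len, hidden_size)
adapter = BottleneckAdapter(hidden_size=768)
print(adapter(hidden).shape)                # torch.Size([2, 10, 768])
```
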
Keywords: Adapters, LoRA, Parameter-efficient fine-tuning, Hub ## [NeMo](https://github.com/NVIDIA/NeMo) NVIDIA [NeMo](https://github.com/NVIDIA/NeMo) is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR), text-to-speech synthesis (TTS), large language models (LLMs), and natural language processing (NLP). The primary objective of [NeMo](https://github.com/NVIDIA/NeMo) is to help researchers from industry and academia to reuse prior work (code and pretrained models) and make it easier to create new https://developer.nvidia.com/conversational-ai#started. Keywords: Conversational, ASR, TTS, LLMs, NLP ## [Runhouse](https://github.com/run-house/runhouse) [Runhouse](https://github.com/run-house/runhouse) allows to send code and data to any of your compute or data infra, all in Python, and continue to interact with them normally from your existing code and environment. Runhouse developers mention: > Think of it as an expansion pack to your Python interpreter that lets it take detours to remote machines or manipulate remote data. Keywords: MLOps, Infrastructure, Data storage, Modeling ## [MONAI](https://github.com/Project-MONAI/MONAI) [MONAI](https://github.com/Project-MONAI/MONAI) is a PyTorch-based, open-source framework for deep learning in healthcare imaging, part of PyTorch Ecosystem. Its ambitions are: - developing a community of academic, industrial and clinical researchers collaborating on a common foundation; - creating state-of-the-art, end-to-end training workflows for healthcare imaging; - providing researchers with the optimized and standardized way to create and evaluate deep learning models. Keywords: Healthcare imaging, Training, Evaluation ## [simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers) Simple Transformers lets you quickly train and evaluate Transformer models. Only 3 lines of code are needed to initialize, train, and evaluate a model. It supports a wide variety of NLP tasks. Keywords: Framework, simplicity, NLP ## [JARVIS](https://github.com/microsoft/JARVIS) [JARVIS](https://github.com/microsoft/JARVIS) is a system attempting to merge LLMs such as GPT-4 with the rest of the open-source ML community: leveraging up to 60 downstream models in order to perform tasks identified by the LLM. Keywords: LLM, Agents, HF Hub ## [transformers.js](https://xenova.github.io/transformers.js/) [transformers.js](https://xenova.github.io/transformers.js/) is a JavaScript library targeted at running models from transformers directly within the browser. Keywords: Transformers, JavaScript, browser ## [bumblebee](https://github.com/elixir-nx/bumblebee) Bumblebee provides pre-trained Neural Network models on top of Axon, a neural networks library for the Elixir language. It includes integration with ๐Ÿค— Models, allowing anyone to download and perform Machine Learning tasks with few lines of code. Keywords: Elixir, Axon ## [argilla](https://github.com/argilla-io/argilla) Argilla is an open-source platform providing advanced NLP labeling, monitoring, and workspaces. It is compatible with many open source ecosystems such as Hugging Face, Stanza, FLAIR, and others. Keywords: NLP, Labeling, Monitoring, Workspaces ## [haystack](https://github.com/deepset-ai/haystack) Haystack is an open source NLP framework to interact with your data using Transformer models and LLMs. It offers production-ready tools to quickly build complex decision making, question answering, semantic search, text generation applications, and more. 
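To make the "retrieve then read" pattern that such frameworks orchestrate concrete, here is a small sketch built only from transformers pipelines. The model names, the toy documents, and the cosine-similarity retrieval step are illustrative assumptions; this is not Haystack's actual API.

```python
# Sketch of a retrieve-then-read flow using only transformers pipelines.
from transformers import pipeline
import numpy as np

documents = [
    "Hugging Face is based in New York City and Paris.",
    "The Eiffel Tower is located in Paris.",
    "Transformers provides thousands of pretrained models.",
]

# 1) Retrieve: embed documents and the query, then rank by cosine similarity
embedder = pipeline("feature-extraction", model="sentence-transformers/all-MiniLM-L6-v2")

def embed(text):
    # Mean-pool the token embeddings into a single vector per text
    return np.mean(np.array(embedder(text)[0]), axis=0)

query = "Where is Hugging Face based?"
doc_vecs = [embed(d) for d in documents]
q_vec = embed(query)
scores = [float(np.dot(q_vec, v) / (np.linalg.norm(q_vec) * np.linalg.norm(v))) for v in doc_vecs]
best_doc = documents[int(np.argmax(scores))]

# 2) Read: run an extractive QA model over the best-matching document
reader = pipeline("question-answering")
print(reader(question=query, context=best_doc))
```
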
Keywords: NLP, Framework, LLM ## [spaCy](https://github.com/explosion/spaCy) [spaCy](https://github.com/explosion/spaCy) is a library for advanced Natural Language Processing in Python and Cython. It's built on the very latest research, and was designed from day one to be used in real products. It offers support for transformers models through its third party package, spacy-transformers. Keywords: NLP, Framework ## [speechbrain](https://github.com/speechbrain/speechbrain) SpeechBrain is an open-source and all-in-one conversational AI toolkit based on PyTorch. The goal is to create a single, flexible, and user-friendly toolkit that can be used to easily develop state-of-the-art speech technologies, including systems for speech recognition, speaker recognition, speech enhancement, speech separation, language identification, multi-microphone signal processing, and many others. Keywords: Conversational, Speech ## [skorch](https://github.com/skorch-dev/skorch) Skorch is a scikit-learn compatible neural network library that wraps PyTorch. It has support for models within transformers, and tokenizers from tokenizers. Keywords: Scikit-Learn, PyTorch ## [bertviz](https://github.com/jessevig/bertviz) BertViz is an interactive tool for visualizing attention in Transformer language models such as BERT, GPT2, or T5. It can be run inside a Jupyter or Colab notebook through a simple Python API that supports most Huggingface models. Keywords: Visualization, Transformers ## [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) is a haiku library using the xmap/pjit operators in JAX for model parallelism of transformers. This library is designed for scalability up to approximately 40B parameters on TPUv3s. It was the library used to train the GPT-J model. Keywords: Haiku, Model parallelism, LLM, TPU ## [deepchem](https://github.com/deepchem/deepchem) DeepChem aims to provide a high quality open-source toolchain that democratizes the use of deep-learning in drug discovery, materials science, quantum chemistry, and biology. Keywords: Drug discovery, Materials Science, Quantum Chemistry, Biology ## [OpenNRE](https://github.com/thunlp/OpenNRE) An Open-Source Package for Neural Relation Extraction (NRE). It is targeted at a wide range of users, from newcomers to relation extraction, to developers, researchers, or students. Keywords: Neural Relation Extraction, Framework ## [pycorrector](https://github.com/shibing624/pycorrector) PyCorrector is a Chinese Text Error Correction Tool. It uses a language model to detect errors, pinyin feature and shape feature to correct Chinese text errors. it can be used for Chinese Pinyin and stroke input method. Keywords: Chinese, Error correction tool, Language model, Pinyin ## [nlpaug](https://github.com/makcedward/nlpaug) This python library helps you with augmenting nlp for machine learning projects. It is a lightweight library featuring synthetic data generation for improving model performance, support for audio and text, and compatibility with several ecosystems (scikit-learn, pytorch, tensorflow). Keywords: Data augmentation, Synthetic data generation, Audio, NLP ## [dream-textures](https://github.com/carson-katri/dream-textures) [dream-textures](https://github.com/carson-katri/dream-textures) is a library targeted at bringing stable-diffusion support within Blender. 
It supports several use-cases, such as image generation, texture projection, inpainting/outpainting, ControlNet, and upscaling. Keywords: Stable-Diffusion, Blender ## [seldon-core](https://github.com/SeldonIO/seldon-core) Seldon core converts your ML models (Tensorflow, Pytorch, H2o, etc.) or language wrappers (Python, Java, etc.) into production REST/GRPC microservices. Seldon handles scaling to thousands of production machine learning models and provides advanced machine learning capabilities out of the box including Advanced Metrics, Request Logging, Explainers, Outlier Detectors, A/B Tests, Canaries and more. Keywords: Microservices, Modeling, Language wrappers ## [open_model_zoo](https://github.com/openvinotoolkit/open_model_zoo) This repository includes optimized deep learning models and a set of demos to expedite development of high-performance deep learning inference applications. Use these free pre-trained models instead of training your own models to speed-up the development and production deployment process. Keywords: Optimized models, Demos ## [ml-stable-diffusion](https://github.com/apple/ml-stable-diffusion) ML-Stable-Diffusion is a repository by Apple bringing Stable Diffusion support to Core ML, on Apple Silicon devices. It supports stable diffusion checkpoints hosted on the Hugging Face Hub. Keywords: Stable Diffusion, Apple Silicon, Core ML ## [stable-dreamfusion](https://github.com/ashawkey/stable-dreamfusion) Stable-Dreamfusion is a pytorch implementation of the text-to-3D model Dreamfusion, powered by the Stable Diffusion text-to-2D model. Keywords: Text-to-3D, Stable Diffusion ## [txtai](https://github.com/neuml/txtai) [txtai](https://github.com/neuml/txtai) is an open-source platform for semantic search and workflows powered by language models. txtai builds embeddings databases, which are a union of vector indexes and relational databases enabling similarity search with SQL. Semantic workflows connect language models together into unified applications. Keywords: Semantic search, LLM ## [djl](https://github.com/deepjavalibrary/djl) Deep Java Library (DJL) is an open-source, high-level, engine-agnostic Java framework for deep learning. DJL is designed to be easy to get started with and simple to use for developers. DJL provides a native Java development experience and functions like any other regular Java library. DJL offers [a Java binding](https://github.com/deepjavalibrary/djl/tree/master/extensions/tokenizers) for HuggingFace Tokenizers and easy conversion toolkit for HuggingFace model to deploy in Java. Keywords: Java, Framework ## [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/) This project provides a unified framework to test generative language models on a large number of different evaluation tasks. It has support for more than 200 tasks, and supports different ecosystems: HF Transformers, GPT-NeoX, DeepSpeed, as well as the OpenAI API. Keywords: LLM, Evaluation, Few-shot ## [gpt-neox](https://github.com/EleutherAI/gpt-neox) This repository records EleutherAI's library for training large-scale language models on GPUs. The framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. It is focused on training multi-billion-parameter models. 
Keywords: Training, LLM, Megatron, DeepSpeed ## [muzic](https://github.com/microsoft/muzic) Muzic is a research project on AI music that empowers music understanding and generation with deep learning and artificial intelligence. Muzic was created by researchers from Microsoft Research Asia. Keywords: Music understanding, Music generation ## [dalle-flow](https://github.com/jina-ai/dalle-flow) DALLยทE Flow is an interactive workflow for generating high-definition images from a text prompt. Itt leverages DALLยทE-Mega, GLID-3 XL, and Stable Diffusion to generate image candidates, and then calls CLIP-as-service to rank the candidates w.r.t. the prompt. The preferred candidate is fed to GLID-3 XL for diffusion, which often enriches the texture and background. Finally, the candidate is upscaled to 1024x1024 via SwinIR. Keywords: High-definition image generation, Stable Diffusion, DALL-E Mega, GLID-3 XL, CLIP, SwinIR ## [lightseq](https://github.com/bytedance/lightseq) LightSeq is a high performance training and inference library for sequence processing and generation implemented in CUDA. It enables highly efficient computation of modern NLP and CV models such as BERT, GPT, Transformer, etc. It is therefore best useful for machine translation, text generation, image classification, and other sequence related tasks. Keywords: Training, Inference, Sequence Processing, Sequence Generation ## [LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR) The goal of this project is to create a learning based system that takes an image of a math formula and returns corresponding LaTeX code. Keywords: OCR, LaTeX, Math formula ## [open_clip](https://github.com/mlfoundations/open_clip) OpenCLIP is an open source implementation of OpenAI's CLIP. The goal of this repository is to enable training models with contrastive image-text supervision, and to investigate their properties such as robustness to distribution shift. The starting point is an implementation of CLIP that matches the accuracy of the original CLIP models when trained on the same dataset. Specifically, a ResNet-50 model trained with this codebase on OpenAI's 15 million image subset of YFCC achieves 32.7% top-1 accuracy on ImageNet. Keywords: CLIP, Open-source, Contrastive, Image-text ## [dalle-playground](https://github.com/saharmor/dalle-playground) A playground to generate images from any text prompt using Stable Diffusion and Dall-E mini. Keywords: WebUI, Stable Diffusion, Dall-E mini ## [FedML](https://github.com/FedML-AI/FedML) [FedML](https://github.com/FedML-AI/FedML) is a federated learning and analytics library enabling secure and collaborative machine learning on decentralized data anywhere at any scale. It supports large-scale cross-silo federated learning, and cross-device federated learning on smartphones/IoTs, and research simulation. Keywords: Federated Learning, Analytics, Collaborative ML, Decentralized ## [gpt-code-clippy](https://github.com/CodedotAl/gpt-code-clippy) GPT-Code-Clippy (GPT-CC) is an open source version of GitHub Copilot, a language model -- based on GPT-3, called GPT-Codex -- that is fine-tuned on publicly available code from GitHub. Keywords: LLM, Code ## [TextAttack](https://github.com/QData/TextAttack) [TextAttack](https://github.com/QData/TextAttack) ๐Ÿ™ is a Python framework for adversarial attacks, data augmentation, and model training in NLP. 
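For intuition, here is a plain-Python sketch of two classic token-level augmentations (random swap and random deletion) of the kind such toolkits provide in richer, model-aware form. It is illustrative only and does not use TextAttack's API.

```python
# Plain-Python sketch of simple token-level augmentations (not TextAttack's API).
import random

def random_swap(tokens, n_swaps=1, seed=0):
    """Swap n_swaps random pairs of tokens."""
    rng = random.Random(seed)
    tokens = tokens.copy()
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.2, seed=0):
    """Drop each token with probability p, never returning an empty sentence."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p]
    return kept or tokens

sentence = "transformers makes state of the art nlp easy".split()
print(" ".join(random_swap(sentence, n_swaps=2)))
print(" ".join(random_deletion(sentence, p=0.3)))
```
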
Keywords: Adversarial attacks, Data augmentation, NLP ## [OpenPrompt](https://github.com/thunlp/OpenPrompt) Prompt-learning is a paradigm to adapt pre-trained language models (PLMs) to downstream NLP tasks, which modify the input text with a textual template and directly uses PLMs to conduct pre-trained tasks. This library provides a standard, flexible and extensible framework to deploy the prompt-learning pipeline. [OpenPrompt](https://github.com/thunlp/OpenPrompt) supports loading PLMs directly from https://github.com/huggingface/transformers. ## [text-generation-webui](https://github.com/oobabooga/text-generation-webui/) [text-generation-webui](https://github.com/oobabooga/text-generation-webui/) is a Gradio Web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA. Keywords: LLM, WebUI ## [libra](https://github.com/Palashio/libra) An ergonomic machine learning [libra](https://github.com/Palashio/libra)ry for non-technical users. It focuses on ergonomics and on ensuring that training a model is as simple as it can be. Keywords: Ergonomic, Non-technical ## [alibi](https://github.com/SeldonIO/alibi) Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models. Keywords: Model inspection, Model interpretation, Black-box, White-box ## [tortoise-tts](https://github.com/neonbjb/tortoise-tts) Tortoise is a text-to-speech program built with the following priorities: strong multi-voice capabilities, and highly realistic prosody and intonation. Keywords: Text-to-speech ## [flower](https://github.com/adap/flower) Flower (flwr) is a framework for building federated learning systems. The design of Flower is based on a few guiding principles: customizability, extendability, framework agnosticity, and ease-of-use. Keywords: Federated learning systems, Customizable, Extendable, Framework-agnostic, Simplicity ## [fast-bert](https://github.com/utterworks/fast-bert) Fast-Bert is a deep learning library that allows developers and data scientists to train and deploy BERT and XLNet based models for natural language processing tasks beginning with Text Classification. It is aimed at simplicity. Keywords: Deployment, BERT, XLNet ## [towhee](https://github.com/towhee-io/towhee) Towhee makes it easy to build neural data processing pipelines for AI applications. We provide hundreds of models, algorithms, and transformations that can be used as standard pipeline building blocks. Users can use Towhee's Pythonic API to build a prototype of their pipeline and automatically optimize it for production-ready environments. Keywords: Data processing pipeline, Optimization ## [alibi-detect](https://github.com/SeldonIO/alibi-detect) Alibi Detect is an open source Python library focused on outlier, adversarial and drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both TensorFlow and PyTorch backends are supported for drift detection. Keywords: Adversarial, Outlier, Drift detection ## [FARM](https://github.com/deepset-ai/FARM) [FARM](https://github.com/deepset-ai/FARM) makes Transfer Learning with BERT & Co simple, fast and enterprise-ready. 
It's built upon transformers and provides additional features to simplify the life of developers: Parallelized preprocessing, highly modular design, multi-task learning, experiment tracking, easy debugging and close integration with AWS SageMaker. Keywords: Transfer Learning, Modular design, Multi-task learning, Experiment tracking ## [aitextgen](https://github.com/minimaxir/aitextgen) A robust Python tool for text-based AI training and generation using OpenAI's GPT-2 and EleutherAI's GPT Neo/GPT-3 architecture. [aitextgen](https://github.com/minimaxir/aitextgen) is a Python package that leverages PyTorch, Hugging Face Transformers and pytorch-lightning with specific optimizations for text generation using GPT-2, plus many added features. Keywords: Training, Generation ## [diffgram](https://github.com/diffgram/diffgram) Diffgram aims to integrate human supervision into platforms. We support your team programmatically changing the UI (Schema, layout, etc.) like in Streamlit. This means that you can collect and annotate timely data from users. In other words, we are the platform behind your platform, an integrated part of your application, to ship new & better AI products faster. Keywords: Human supervision, Platform ## [ecco](https://github.com/jalammar/ecco) Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, BERT, RoBERTA, T5, and T0). Keywords: Model explainability ## [s3prl](https://github.com/s3prl/s3prl) [s3prl](https://github.com/s3prl/s3prl) stands for Self-Supervised Speech Pre-training and Representation Learning. Self-supervised speech pre-trained models are called upstream in this toolkit, and are utilized in various downstream tasks. Keywords: Speech, Training ## [ru-dalle](https://github.com/ai-forever/ru-dalle) RuDALL-E aims to be similar to DALL-E, targeted to Russian. Keywords: DALL-E, Russian ## [DeepKE](https://github.com/zjunlp/DeepKE) [DeepKE](https://github.com/zjunlp/DeepKE) is a knowledge extraction toolkit for knowledge graph construction supporting cnSchema๏ผŒlow-resource, document-level and multimodal scenarios for entity, relation and attribute extraction. Keywords: Knowledge Extraction, Knowledge Graphs ## [Nebuly](https://github.com/nebuly-ai/nebuly) Nebuly is the next-generation platform to monitor and optimize your AI costs in one place. The platform connects to all your AI cost sources (compute, API providers, AI software licenses, etc) and centralizes them in one place to give you full visibility on a model basis. The platform also provides optimization recommendations and a co-pilot model that can guide during the optimization process. The platform builds on top of the open-source tools allowing you to optimize the different steps of your AI stack to squeeze out the best possible cost performances. Keywords: Optimization, Performance, Monitoring ## [imaginAIry](https://github.com/brycedrennan/imaginAIry) Offers a CLI and a Python API to generate images with Stable Diffusion. It has support for many tools, like image structure control (controlnet), instruction-based image edits (InstructPix2Pix), prompt-based masking (clipseg), among others. Keywords: Stable Diffusion, CLI, Python API ## [sparseml](https://github.com/neuralmagic/sparseml) SparseML is an open-source model optimization toolkit that enables you to create inference-optimized sparse models using pruning, quantization, and distillation algorithms. 
Models optimized with SparseML can then be exported to the ONNX and deployed with DeepSparse for GPU-class performance on CPU hardware. Keywords: Model optimization, Pruning, Quantization, Distillation ## [opacus](https://github.com/pytorch/opacus) Opacus is a library that enables training PyTorch models with differential privacy. It supports training with minimal code changes required on the client, has little impact on training performance, and allows the client to online track the privacy budget expended at any given moment. Keywords: Differential privacy ## [LAVIS](https://github.com/salesforce/LAVIS) [LAVIS](https://github.com/salesforce/LAVIS) is a Python deep learning library for LAnguage-and-VISion intelligence research and applications. This library aims to provide engineers and researchers with a one-stop solution to rapidly develop models for their specific multimodal scenarios, and benchmark them across standard and customized datasets. It features a unified interface design to access Keywords: Multimodal, NLP, Vision ## [buzz](https://github.com/chidiwilliams/buzz) Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper. Keywords: Audio transcription, Translation ## [rust-bert](https://github.com/guillaume-be/rust-bert) Rust-native state-of-the-art Natural Language Processing models and pipelines. Port of Hugging Face's Transformers library, using the tch-rs crate and pre-processing from rust-tokenizers. Supports multi-threaded tokenization and GPU inference. This repository exposes the model base architecture, task-specific heads and ready-to-use pipelines. Keywords: Rust, BERT, Inference ## [EasyNLP](https://github.com/alibaba/EasyNLP) [EasyNLP](https://github.com/alibaba/EasyNLP) is an easy-to-use NLP development and application toolkit in PyTorch, first released inside Alibaba in 2021. It is built with scalable distributed training strategies and supports a comprehensive suite of NLP algorithms for various NLP applications. [EasyNLP](https://github.com/alibaba/EasyNLP) integrates knowledge distillation and few-shot learning for landing large pre-trained models, together with various popular multi-modality pre-trained models. It provides a unified framework of model training, inference, and deployment for real-world applications. Keywords: NLP, Knowledge distillation, Few-shot learning, Multi-modality, Training, Inference, Deployment ## [TurboTransformers](https://github.com/Tencent/TurboTransformers) A fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc) on CPU and GPU. Keywords: Optimization, Performance ## [hivemind](https://github.com/learning-at-home/hivemind) Hivemind is a PyTorch library for decentralized deep learning across the Internet. Its intended usage is training one large model on hundreds of computers from different universities, companies, and volunteers. Keywords: Decentralized training ## [docquery](https://github.com/impira/docquery) DocQuery is a library and command-line tool that makes it easy to analyze semi-structured and unstructured documents (PDFs, scanned images, etc.) using large language models (LLMs). You simply point DocQuery at one or more documents and specify a question you want to ask. DocQuery is created by the team at Impira. 
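A short, hedged sketch of this document-QA pattern using the transformers pipeline with the impira/layoutlm-document-qa checkpoint published by the DocQuery team; the image path is a placeholder and a local Tesseract/pytesseract install is assumed.

```python
# Sketch of DocQuery-style document question answering via transformers.
# Assumes pytesseract/Tesseract are installed; "invoice.png" is a placeholder path.
from transformers import pipeline

doc_qa = pipeline(
    "document-question-answering",
    model="impira/layoutlm-document-qa",  # checkpoint released by the DocQuery team
)

result = doc_qa(image="invoice.png", question="What is the invoice number?")
print(result)  # e.g. [{'score': ..., 'answer': ..., 'start': ..., 'end': ...}]
```
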
Keywords: Semi-structured documents, Unstructured documents, LLM, Document Question Answering ## [CodeGeeX](https://github.com/THUDM/CodeGeeX) [CodeGeeX](https://github.com/THUDM/CodeGeeX) is a large-scale multilingual code generation model with 13 billion parameters, pre-trained on a large code corpus of more than 20 programming languages. It has several unique features: - Multilingual code generation - Crosslingual code translation - A customizable programming assistant Keywords: Code Generation Model ## [ktrain](https://github.com/amaiya/ktrain) [ktrain](https://github.com/amaiya/ktrain) is a lightweight wrapper for the deep learning library TensorFlow Keras (and other libraries) to help build, train, and deploy neural networks and other machine learning models. Inspired by ML framework extensions like fastai and ludwig, [ktrain](https://github.com/amaiya/ktrain) is designed to make deep learning and AI more accessible and easier to apply for both newcomers and experienced practitioners. Keywords: Keras wrapper, Model building, Training, Deployment ## [FastDeploy](https://github.com/PaddlePaddle/FastDeploy) [FastDeploy](https://github.com/PaddlePaddle/FastDeploy) is an easy-to-use and high-performance AI model deployment toolkit for Cloud, Mobile and Edge, with an out-of-the-box and unified experience and end-to-end optimization for 160+ Text, Vision, Speech and Cross-modal AI models, including image classification, object detection, OCR, face detection, matting, pp-tracking, NLP, stable diffusion, TTS and other tasks, meeting developers' industrial deployment needs for multi-scenario, multi-hardware and multi-platform use. Keywords: Model deployment, Cloud, Mobile, Edge ## [underthesea](https://github.com/undertheseanlp/underthesea) [underthesea](https://github.com/undertheseanlp/underthesea) is a Vietnamese NLP toolkit. Underthesea is a suite of open source Python modules, data sets and tutorials supporting research and development in Vietnamese Natural Language Processing. It provides an extremely easy API to quickly apply pretrained NLP models to your Vietnamese text, such as word segmentation, part-of-speech tagging (PoS), named entity recognition (NER), text classification and dependency parsing. Keywords: Vietnamese, NLP ## [hasktorch](https://github.com/hasktorch/hasktorch) Hasktorch is a library for tensors and neural networks in Haskell. It is an independent open source community project which leverages the core C++ libraries shared by PyTorch. Keywords: Haskell, Neural Networks ## [donut](https://github.com/clovaai/donut) Donut, or Document understanding transformer, is a new method of document understanding that utilizes an OCR-free end-to-end Transformer model. Donut does not require off-the-shelf OCR engines/APIs, yet it shows state-of-the-art performance on various visual document understanding tasks, such as visual document classification or information extraction (a.k.a. document parsing). Keywords: Document Understanding ## [transformers-interpret](https://github.com/cdpierse/transformers-interpret) Transformers Interpret is a model explainability tool designed to work exclusively with the transformers package. In line with the philosophy of the Transformers package, Transformers Interpret allows any transformers model to be explained in just two lines. Explainers are available for both text and computer vision models. 
Visualizations are also available in notebooks and as savable png and html files Keywords: Model interpretation, Visualization ## [mlrun](https://github.com/mlrun/mlrun) MLRun is an open MLOps platform for quickly building and managing continuous ML applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications, significantly reducing engineering efforts, time to production, and computation resources. With MLRun, you can choose any IDE on your local machine or on the cloud. MLRun breaks the silos between data, ML, software, and DevOps/MLOps teams, enabling collaboration and fast continuous improvements. Keywords: MLOps ## [FederatedScope](https://github.com/alibaba/FederatedScope) [FederatedScope](https://github.com/alibaba/FederatedScope) is a comprehensive federated learning platform that provides convenient usage and flexible customization for various federated learning tasks in both academia and industry. Based on an event-driven architecture, [FederatedScope](https://github.com/alibaba/FederatedScope) integrates rich collections of functionalities to satisfy the burgeoning demands from federated learning, and aims to build up an easy-to-use platform for promoting learning safely and effectively. Keywords: Federated learning, Event-driven ## [pythainlp](https://github.com/PyThaiNLP/pythainlp) PyThaiNLP is a Python package for text processing and linguistic analysis, similar to NLTK with focus on Thai language. Keywords: Thai, NLP, NLTK ## [FlagAI](https://github.com/FlagAI-Open/FlagAI) [FlagAI](https://github.com/FlagAI-Open/FlagAI) (Fast LArge-scale General AI models) is a fast, easy-to-use and extensible toolkit for large-scale model. Our goal is to support training, fine-tuning, and deployment of large-scale models on various downstream tasks with multi-modality. Keywords: Large models, Training, Fine-tuning, Deployment, Multi-modal ## [pyserini](https://github.com/castorini/pyserini) [pyserini](https://github.com/castorini/pyserini) is a Python toolkit for reproducible information retrieval research with sparse and dense representations. Retrieval using sparse representations is provided via integration with the group's Anserini IR toolkit. Retrieval using dense representations is provided via integration with Facebook's Faiss library. Keywords: IR, Information Retrieval, Dense, Sparse ## [baal](https://github.com/baal-org/baal) [baal](https://github.com/baal-org/baal) is an active learning library that supports both industrial applications and research usecases. [baal](https://github.com/baal-org/baal) currently supports Monte-Carlo Dropout, MCDropConnect, deep ensembles, and semi-supervised learning. Keywords: Active Learning, Research, Labeling ## [cleanlab](https://github.com/cleanlab/cleanlab) [cleanlab](https://github.com/cleanlab/cleanlab) is the standard data-centric AI package for data quality and machine learning with messy, real-world data and labels. For text, image, tabular, audio (among others) datasets, you can use cleanlab to automatically: detect data issues (outliers, label errors, near duplicates, etc), train robust ML models, infer consensus + annotator-quality for multi-annotator data, suggest data to (re)label next (active learning). 
Keywords: Data-Centric AI, Data Quality, Noisy Labels, Outlier Detection, Active Learning ## [BentoML](https://github.com/bentoml/BentoML) [BentoML](https://github.com/bentoml) is the unified framework for building, shipping, and scaling production-ready AI applications incorporating traditional ML, pre-trained AI models, and Generative and Large Language Models. All Hugging Face models and pipelines can be seamlessly integrated into BentoML applications, enabling the running of models on the most suitable hardware and independent scaling based on usage. Keywords: BentoML, Framework, Deployment, AI Applications ## [LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory) [LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory) offers a user-friendly fine-tuning framework that incorporates PEFT. The repository includes training (fine-tuning) and inference examples for LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, and other LLMs. A ChatGLM version is also available in [ChatGLM-Efficient-Tuning](https://github.com/hiyouga/ChatGLM-Efficient-Tuning). Keywords: PEFT, fine-tuning, LLaMA-2, ChatGLM, Qwen
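Several of the fine-tuning projects above, LLaMA Factory among them, build on Hugging Face's PEFT library. The snippet below is only a hedged sketch of the generic PEFT/LoRA pattern such projects rely on; it is not code taken from any of the listed repositories, and the checkpoint name is a placeholder:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder base checkpoint; any causal language model could be used here.
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach low-rank adapters so that only a small fraction of the parameters
# is updated during fine-tuning.
lora_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```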
0
mavonic_private_repos
mavonic_private_repos/transformers/README_pt-br.md
<!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg"> <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg"> <img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;"> </picture> <br/> <br/> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> | <b>ะ ortuguรชs</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | </p> </h4> <h3 align="center"> <p>Aprendizado de mรกquina de รบltima geraรงรฃo para JAX, PyTorch e TensorFlow</p> </h3> <h3 
align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> A biblioteca ๐Ÿค— Transformers oferece milhares de modelos prรฉ-treinados para executar tarefas em diferentes modalidades, como texto, visรฃo e รกudio. Esses modelos podem ser aplicados a: * ๐Ÿ“ Texto, para tarefas como classificaรงรฃo de texto, extraรงรฃo de informaรงรตes, resposta a perguntas, sumarizaรงรฃo, traduรงรฃo, geraรงรฃo de texto, em mais de 100 idiomas. * ๐Ÿ–ผ๏ธ Imagens, para tarefas como classificaรงรฃo de imagens, detecรงรฃo de objetos e segmentaรงรฃo. * ๐Ÿ—ฃ๏ธ รudio, para tarefas como reconhecimento de fala e classificaรงรฃo de รกudio. Os modelos Transformer tambรฉm podem executar tarefas em diversas modalidades combinadas, como responder a perguntas em tabelas, reconhecimento รณptico de caracteres, extraรงรฃo de informaรงรตes de documentos digitalizados, classificaรงรฃo de vรญdeo e resposta a perguntas visuais. A biblioteca ๐Ÿค— Transformers oferece APIs para baixar e usar rapidamente esses modelos prรฉ-treinados em um texto especรญfico, ajustรก-los em seus prรณprios conjuntos de dados e, em seguida, compartilhรก-los com a comunidade em nosso [model hub](https://huggingface.co/models). Ao mesmo tempo, cada mรณdulo Python que define uma arquitetura รฉ totalmente independente e pode ser modificado para permitir experimentos de pesquisa rรกpidos. A biblioteca ๐Ÿค— Transformers รฉ respaldada pelas trรชs bibliotecas de aprendizado profundo mais populares โ€” [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) e [TensorFlow](https://www.tensorflow.org/) โ€” com uma integraรงรฃo perfeita entre elas. ร‰ simples treinar seus modelos com uma delas antes de carregรก-los para inferรชncia com a outra ## Demonstraรงรฃo Online Vocรช pode testar a maioria de nossos modelos diretamente em suas pรกginas a partir do [model hub](https://huggingface.co/models). Tambรฉm oferecemos [hospedagem de modelos privados, versionamento e uma API de inferรชncia](https://huggingface.co/pricing) para modelos pรบblicos e privados. 
Aqui estรฃo alguns exemplos: Em Processamento de Linguagem Natural: - [Completar palavra mascarada com BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [Reconhecimento de Entidades Nomeadas com Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [Geraรงรฃo de texto com GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C) - [Inferรชncia de Linguagem Natural com RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [Sumarizaรงรฃo com BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [Resposta a perguntas com DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [Traduรงรฃo com T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) Em Visรฃo Computacional: - [Classificaรงรฃo de Imagens com ViT](https://huggingface.co/google/vit-base-patch16-224) - [Detecรงรฃo de Objetos com DETR](https://huggingface.co/facebook/detr-resnet-50) - [Segmentaรงรฃo Semรขntica com SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) - [Segmentaรงรฃo Panรณptica com MaskFormer](https://huggingface.co/facebook/maskformer-swin-small-coco) - [Estimativa de Profundidade com DPT](https://huggingface.co/docs/transformers/model_doc/dpt) - [Classificaรงรฃo de Vรญdeo com 
VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae) - [Segmentaรงรฃo Universal com OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large) Em รudio: - [Reconhecimento Automรกtico de Fala com Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) - [Detecรงรฃo de Palavras-Chave com Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) - [Classificaรงรฃo de รudio com Transformer de Espectrograma de รudio](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) Em Tarefas Multimodais: - [Respostas de Perguntas em Tabelas com TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq) - [Respostas de Perguntas Visuais com ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) - [Classificaรงรฃo de Imagens sem Anotaรงรฃo com CLIP](https://huggingface.co/openai/clip-vit-large-patch14) - [Respostas de Perguntas em Documentos com LayoutLM](https://huggingface.co/impira/layoutlm-document-qa) - [Classificaรงรฃo de Vรญdeo sem Anotaรงรฃo com X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip) ## 100 Projetos Usando Transformers Transformers รฉ mais do que um conjunto de ferramentas para usar modelos prรฉ-treinados: รฉ uma comunidade de projetos construรญdos ao seu redor e o Hugging Face Hub. Queremos que o Transformers permita que desenvolvedores, pesquisadores, estudantes, professores, engenheiros e qualquer outra pessoa construa seus projetos dos sonhos. Para celebrar as 100.000 estrelas do Transformers, decidimos destacar a comunidade e criamos a pรกgina [awesome-transformers](./awesome-transformers.md), que lista 100 projetos incrรญveis construรญdos nas proximidades dos Transformers. Se vocรช possui ou utiliza um projeto que acredita que deveria fazer parte da lista, abra um PR para adicionรก-lo! ## Se vocรช estรก procurando suporte personalizado da equipe Hugging Face <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## Tour Rรกpido Para usar imediatamente um modelo em uma entrada especรญfica (texto, imagem, รกudio, ...), oferecemos a API `pipeline`. Os pipelines agrupam um modelo prรฉ-treinado com o prรฉ-processamento que foi usado durante o treinamento desse modelo. Aqui estรก como usar rapidamente um pipeline para classificar textos como positivos ou negativos: ```python from transformers import pipeline # Carregue o pipeline de classificaรงรฃo de texto >>> classifier = pipeline("sentiment-analysis") # Classifique o texto como positivo ou negativo >>> classifier("Estamos muito felizes em apresentar o pipeline no repositรณrio dos transformers.") [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` A segunda linha de cรณdigo baixa e armazena em cache o modelo prรฉ-treinado usado pelo pipeline, enquanto a terceira linha o avalia no texto fornecido. Neste exemplo, a resposta รฉ "positiva" com uma confianรงa de 99,97%. Muitas tarefas tรชm um `pipeline` prรฉ-treinado pronto para uso, nรฃo apenas em PNL, mas tambรฉm em visรฃo computacional e processamento de รกudio. 
Por exemplo, podemos facilmente extrair objetos detectados em uma imagem: ``` python >>> import requests >>> from PIL import Image >>> from transformers import pipeline # Download an image with cute cats >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" >>> image_data = requests.get(url, stream=True).raw >>> image = Image.open(image_data) # Allocate a pipeline for object detection >>> object_detector = pipeline('object-detection') >>> object_detector(image) [{'score': 0.9982201457023621, 'label': 'remote', 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, {'score': 0.9960021376609802, 'label': 'remote', 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, {'score': 0.9954745173454285, 'label': 'couch', 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, {'score': 0.9988006353378296, 'label': 'cat', 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, {'score': 0.9986783862113953, 'label': 'cat', 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] ``` Aqui obtemos uma lista de objetos detectados na imagem, com uma caixa envolvendo o objeto e uma pontuaรงรฃo de confianรงa. Aqui estรก a imagem original ร  esquerda, com as previsรตes exibidas ร  direita: <h3 align="center"> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> </h3> Vocรช pode aprender mais sobre as tarefas suportadas pela API `pipeline` em [este tutorial](https://huggingface.co/docs/transformers/task_summary). Alรฉm do `pipeline`, para baixar e usar qualquer um dos modelos prรฉ-treinados em sua tarefa especรญfica, tudo o que รฉ necessรกrio sรฃo trรชs linhas de cรณdigo. Aqui estรก a versรฃo em PyTorch: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` E aqui estรก o cรณdigo equivalente para TensorFlow: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` O tokenizador รฉ responsรกvel por todo o prรฉ-processamento que o modelo prรฉ-treinado espera, e pode ser chamado diretamente em uma รบnica string (como nos exemplos acima) ou em uma lista. Ele produzirรก um dicionรกrio que vocรช pode usar no cรณdigo subsequente ou simplesmente passar diretamente para o seu modelo usando o operador de descompactaรงรฃo de argumentos **. O modelo em si รฉ um [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ou um [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(dependendo do seu back-end) que vocรช pode usar como de costume. [Este tutorial](https://huggingface.co/docs/transformers/training) explica como integrar esse modelo em um ciclo de treinamento clรกssico do PyTorch ou TensorFlow, ou como usar nossa API `Trainer` para ajuste fino rรกpido em um novo conjunto de dados. ## Por que devo usar transformers? 1. 
Modelos state-of-the-art fรกceis de usar: - Alto desempenho em compreensรฃo e geraรงรฃo de linguagem natural, visรฃo computacional e tarefas de รกudio. - Barreira de entrada baixa para educadores e profissionais. - Poucas abstraรงรตes visรญveis para o usuรกrio, com apenas trรชs classes para aprender. - Uma API unificada para usar todos os nossos modelos prรฉ-treinados. 1. Menores custos de computaรงรฃo, menor pegada de carbono: - Pesquisadores podem compartilhar modelos treinados em vez de treinar sempre do zero. - Profissionais podem reduzir o tempo de computaรงรฃo e os custos de produรงรฃo. - Dezenas de arquiteturas com mais de 60.000 modelos prรฉ-treinados em todas as modalidades. 1. Escolha o framework certo para cada parte da vida de um modelo: - Treine modelos state-of-the-art em 3 linhas de cรณdigo. - Mova um รบnico modelo entre frameworks TF2.0/PyTorch/JAX ร  vontade. - Escolha o framework certo de forma contรญnua para treinamento, avaliaรงรฃo e produรงรฃo. 1. Personalize facilmente um modelo ou um exemplo para atender ร s suas necessidades: - Fornecemos exemplos para cada arquitetura para reproduzir os resultados publicados pelos autores originais. - Os detalhes internos do modelo sรฃo expostos de maneira consistente. - Os arquivos do modelo podem ser usados de forma independente da biblioteca para experimentos rรกpidos. ## Por que nรฃo devo usar transformers? - Esta biblioteca nรฃo รฉ uma caixa de ferramentas modular para construir redes neurais. O cรณdigo nos arquivos do modelo nรฃo รฉ refatorado com abstraรงรตes adicionais de propรณsito, para que os pesquisadores possam iterar rapidamente em cada um dos modelos sem se aprofundar em abstraรงรตes/arquivos adicionais. - A API de treinamento nรฃo รฉ projetada para funcionar com qualquer modelo, mas รฉ otimizada para funcionar com os modelos fornecidos pela biblioteca. Para loops de aprendizado de mรกquina genรฉricos, vocรช deve usar outra biblioteca (possivelmente, [Accelerate](https://huggingface.co/docs/accelerate)). - Embora nos esforcemos para apresentar o maior nรบmero possรญvel de casos de uso, os scripts em nossa [pasta de exemplos](https://github.com/huggingface/transformers/tree/main/examples) sรฃo apenas isso: exemplos. ร‰ esperado que eles nรฃo funcionem prontos para uso em seu problema especรญfico e que seja necessรกrio modificar algumas linhas de cรณdigo para adaptรก-los ร s suas necessidades. ### Com pip Este repositรณrio รฉ testado no Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ e TensorFlow 2.6+. Vocรช deve instalar o ๐Ÿค— Transformers em um [ambiente virtual](https://docs.python.org/3/library/venv.html). Se vocรช nรฃo estรก familiarizado com ambientes virtuais em Python, confira o [guia do usuรกrio](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Primeiro, crie um ambiente virtual com a versรฃo do Python que vocรช vai usar e ative-o. Em seguida, vocรช precisarรก instalar pelo menos um dos back-ends Flax, PyTorch ou TensorFlow. Consulte a [pรกgina de instalaรงรฃo do TensorFlow](https://www.tensorflow.org/install/), a [pรกgina de instalaรงรฃo do PyTorch](https://pytorch.org/get-started/locally/#start-locally) e/ou [Flax](https://github.com/google/flax#quick-install) e [Jax](https://github.com/google/jax#installation) pรกginas de instalaรงรฃo para obter o comando de instalaรงรฃo especรญfico para a sua plataforma. 
Quando um desses back-ends estiver instalado, o 🤗 Transformers pode ser instalado usando pip da seguinte forma: ```bash pip install transformers ``` Se você deseja experimentar com os exemplos ou precisa da versão mais recente do código e não pode esperar por um novo lançamento, você deve instalar a [biblioteca a partir do código-fonte](https://huggingface.co/docs/transformers/installation#installing-from-source). ### Com conda O 🤗 Transformers pode ser instalado com conda da seguinte forma: ```bash conda install conda-forge::transformers ``` > **_NOTA:_** Instalar `transformers` pelo canal `huggingface` está obsoleto. Siga as páginas de instalação do Flax, PyTorch ou TensorFlow para ver como instalá-los com conda. > **_NOTA:_** No Windows, você pode ser solicitado a ativar o Modo de Desenvolvedor para aproveitar o cache. Se isso não for uma opção para você, por favor nos avise [neste problema](https://github.com/huggingface/huggingface_hub/issues/1062). ## Arquiteturas de Modelos **[Todos os pontos de verificação de modelo](https://huggingface.co/models)** fornecidos pelo 🤗 Transformers são integrados de forma transparente do [model hub](https://huggingface.co/models) do huggingface.co, onde são carregados diretamente por [usuários](https://huggingface.co/users) e [organizações](https://huggingface.co/organizations). Número atual de pontos de verificação: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 🤗 Transformers atualmente fornece as seguintes arquiteturas: veja [aqui](https://huggingface.co/docs/transformers/model_summary) para um resumo de alto nível de cada uma delas. Para verificar se cada modelo tem uma implementação em Flax, PyTorch ou TensorFlow, ou possui um tokenizador associado com a biblioteca 🤗 Tokenizers, consulte [esta tabela](https://huggingface.co/docs/transformers/index#supported-frameworks). Essas implementações foram testadas em vários conjuntos de dados (veja os scripts de exemplo) e devem corresponder ao desempenho das implementações originais. Você pode encontrar mais detalhes sobre o desempenho na seção de Exemplos da [documentação](https://github.com/huggingface/transformers/tree/main/examples). 
## Saiba mais | Seรงรฃo | Descriรงรฃo | |-|-| | [Documentaรงรฃo](https://huggingface.co/docs/transformers/) | Documentaรงรฃo completa da API e tutoriais | | [Resumo de Tarefas](https://huggingface.co/docs/transformers/task_summary) | Tarefas suportadas pelo ๐Ÿค— Transformers | | [Tutorial de Prรฉ-processamento](https://huggingface.co/docs/transformers/preprocessing) | Usando a classe `Tokenizer` para preparar dados para os modelos | | [Treinamento e Ajuste Fino](https://huggingface.co/docs/transformers/training) | Usando os modelos fornecidos pelo ๐Ÿค— Transformers em um loop de treinamento PyTorch/TensorFlow e a API `Trainer` | | [Tour Rรกpido: Scripts de Ajuste Fino/Utilizaรงรฃo](https://github.com/huggingface/transformers/tree/main/examples) | Scripts de exemplo para ajuste fino de modelos em uma ampla gama de tarefas | | [Compartilhamento e Envio de Modelos](https://huggingface.co/docs/transformers/model_sharing) | Envie e compartilhe seus modelos ajustados com a comunidade | ## Citaรงรฃo Agora temos um [artigo](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) que vocรช pode citar para a biblioteca ๐Ÿค— Transformers: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = out, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/SECURITY.md
# Security Policy

## Hugging Face Hub, remote artefacts, and remote code

Transformers is open-source software that is tightly coupled to the Hugging Face Hub. While you have the ability to use it offline with pre-downloaded model weights, it provides a very simple way to download, use, and manage models locally.

When downloading artefacts that have been uploaded by others on any platform, you expose yourself to risks. Please read below for the security recommendations in order to keep your runtime and local environment safe.

### Remote artefacts

Models uploaded on the Hugging Face Hub come in different formats. We heavily recommend uploading and downloading models in the [`safetensors`](https://github.com/huggingface/safetensors) format (which is the default prioritized by the transformers library), as it was developed specifically to prevent arbitrary code execution on your system.

To avoid loading models from unsafe formats (e.g. [pickle](https://docs.python.org/3/library/pickle.html)), you should use the `use_safetensors` parameter. If you do so, transformers will error when loading the model in the event that no `.safetensors` file is present.

### Remote code

#### Modeling

Transformers supports many model architectures, but is also the bridge between your Python runtime and models that are stored in model repositories on the Hugging Face Hub.

These models require the `trust_remote_code=True` parameter to be set when using them; please **always** verify the content of the modeling files when using this argument. We recommend setting a revision in order to ensure you protect yourself from updates on the repository.

#### Tools

Through the `Agent` framework, remote tools can be downloaded to be used by the Agent. You have to specify these tools yourself, but please keep in mind that their code will be run on your machine if the Agent chooses to run them. Please inspect the code of the tools before passing them to the Agent to protect your runtime and local setup.

## Reporting a Vulnerability

🤗 Please feel free to submit vulnerability reports to our private bug bounty program at https://hackerone.com/hugging_face. You'll need to request access to the program by emailing [email protected]. Note that you'll need to be invited to our program, so send us a quick email at [email protected] if you've found a vulnerability.
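The two safeguards described above can be combined when loading a checkpoint. The snippet below is a hedged sketch rather than an official recommendation from this policy, and the custom repository id and revision hash are placeholders:

```python
from transformers import AutoModel

# Refuse anything that is not a .safetensors checkpoint: if the repository only
# ships pickle-based weights, loading fails instead of executing untrusted code.
model = AutoModel.from_pretrained("google-bert/bert-base-uncased", use_safetensors=True)

# For repositories that ship custom modeling code, opt in explicitly and pin a
# reviewed revision so later pushes to the repository cannot change what runs.
custom_model = AutoModel.from_pretrained(
    "some-org/some-custom-model",  # placeholder repository id
    trust_remote_code=True,
    revision="0123456789abcdef0123456789abcdef01234567",  # placeholder commit hash
)
```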
0
mavonic_private_repos
mavonic_private_repos/transformers/conftest.py
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# tests directory-specific settings - this file is run automatically
# by pytest before any tests are run

import doctest
import sys
import warnings
from os.path import abspath, dirname, join

import _pytest
import pytest

from transformers.testing_utils import HfDoctestModule, HfDocTestParser


NOT_DEVICE_TESTS = {
    "test_tokenization",
    "test_processor",
    "test_processing",
    "test_beam_constraints",
    "test_configuration_utils",
    "test_data_collator",
    "test_trainer_callback",
    "test_trainer_utils",
    "test_feature_extraction",
    "test_image_processing",
    "test_image_processor",
    "test_image_transforms",
    "test_optimization",
    "test_retrieval",
    "test_config",
    "test_from_pretrained_no_checkpoint",
    "test_keep_in_fp32_modules",
    "test_gradient_checkpointing_backward_compatibility",
    "test_gradient_checkpointing_enable_disable",
    "test_save_load_fast_init_from_base",
    "test_fast_init_context_manager",
    "test_fast_init_tied_embeddings",
    "test_save_load_fast_init_to_base",
    "test_torch_save_load",
    "test_initialization",
    "test_forward_signature",
    "test_model_common_attributes",
    "test_model_main_input_name",
    "test_correct_missing_keys",
    "test_tie_model_weights",
    "test_can_use_safetensors",
    "test_load_save_without_tied_weights",
    "test_tied_weights_keys",
    "test_model_weights_reload_no_missing_tied_weights",
    "test_pt_tf_model_equivalence",
    "test_mismatched_shapes_have_properly_initialized_weights",
    "test_matched_shapes_have_loaded_weights_when_some_mismatched_shapes_exist",
    "test_model_is_small",
    "test_tf_from_pt_safetensors",
    "test_flax_from_pt_safetensors",
    "ModelTest::test_pipeline_",  # None of the pipeline tests from PipelineTesterMixin (of which XxxModelTest inherits from) are running on device
    "ModelTester::test_pipeline_",
    "/repo_utils/",
    "/utils/",
    "/tools/",
}

# allow having multiple repository checkouts and not needing to remember to rerun
# `pip install -e '.[dev]'` when switching between checkouts and running tests.
git_repo_path = abspath(join(dirname(__file__), "src"))
sys.path.insert(1, git_repo_path)

# silence FutureWarning warnings in tests since often we can't act on them until
# they become normal warnings - i.e. the tests still need to test the current functionality
warnings.simplefilter(action="ignore", category=FutureWarning)


def pytest_configure(config):
    config.addinivalue_line(
        "markers", "is_pt_tf_cross_test: mark test to run only when PT and TF interactions are tested"
    )
    config.addinivalue_line(
        "markers", "is_pt_flax_cross_test: mark test to run only when PT and FLAX interactions are tested"
    )
    config.addinivalue_line("markers", "is_pipeline_test: mark test to run only when pipelines are tested")
    config.addinivalue_line("markers", "is_staging_test: mark test to run only in the staging environment")
    config.addinivalue_line("markers", "accelerate_tests: mark test that require accelerate")
    config.addinivalue_line("markers", "tool_tests: mark the tool tests that are run on their specific schedule")
    config.addinivalue_line("markers", "not_device_test: mark the tests always running on cpu")


def pytest_collection_modifyitems(items):
    for item in items:
        if any(test_name in item.nodeid for test_name in NOT_DEVICE_TESTS):
            item.add_marker(pytest.mark.not_device_test)


def pytest_addoption(parser):
    from transformers.testing_utils import pytest_addoption_shared

    pytest_addoption_shared(parser)


def pytest_terminal_summary(terminalreporter):
    from transformers.testing_utils import pytest_terminal_summary_main

    make_reports = terminalreporter.config.getoption("--make-reports")
    if make_reports:
        pytest_terminal_summary_main(terminalreporter, id=make_reports)


def pytest_sessionfinish(session, exitstatus):
    # If no tests are collected, pytest exits with code 5, which makes the CI fail.
    if exitstatus == 5:
        session.exitstatus = 0


# Doctest custom flag to ignore output.
IGNORE_RESULT = doctest.register_optionflag("IGNORE_RESULT")

OutputChecker = doctest.OutputChecker


class CustomOutputChecker(OutputChecker):
    def check_output(self, want, got, optionflags):
        if IGNORE_RESULT & optionflags:
            return True
        return OutputChecker.check_output(self, want, got, optionflags)


doctest.OutputChecker = CustomOutputChecker
_pytest.doctest.DoctestModule = HfDoctestModule
doctest.DocTestParser = HfDocTestParser
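# ---------------------------------------------------------------------------
# Illustration only (not part of the original conftest.py): a hedged sketch of
# how the custom IGNORE_RESULT doctest flag registered above could be used as a
# doctest directive inside a docstring.
#
#     >>> import random
#     >>> random.random()  # doctest: +IGNORE_RESULT
#     0.123
#
# The expected value (0.123) is still written down for readability, but
# CustomOutputChecker.check_output returns True as soon as the flag is set,
# so the actual output is never compared against it.
# ---------------------------------------------------------------------------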
0
mavonic_private_repos
mavonic_private_repos/transformers/.coveragerc
[run]
source=transformers
omit =
    # skip conversion scripts from testing for now
    */convert_*
    */__main__.py

[report]
exclude_lines =
    pragma: no cover
    raise
    except
    register_parameter
0
mavonic_private_repos
mavonic_private_repos/transformers/README_vi.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg"> <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg"> <img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;"> </picture> <br/> <br/> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <b>Tiแบฟng viแป‡t</b> | </p> </h4> <h3 align="center"> <p>Cรดng nghแป‡ Hแปc mรกy tiรชn tiแบฟn cho JAX, PyTorch vร  TensorFlow</p> </h3> <h3 
align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค— Transformers cung cแบฅp hร ng ngร n mรด hรฌnh ฤ‘ฦฐแปฃc huแบฅn luyแป‡n trฦฐแป›c ฤ‘แปƒ thแปฑc hiแป‡n cรกc nhiแป‡m vแปฅ trรชn cรกc modalities khรกc nhau nhฦฐ vฤƒn bแบฃn, hรฌnh แบฃnh vร  รขm thanh. Cรกc mรด hรฌnh nร y cรณ thแปƒ ฤ‘ฦฐแปฃc รกp dแปฅng vร o: * ๐Ÿ“ Vฤƒn bแบฃn, cho cรกc nhiแป‡m vแปฅ nhฦฐ phรขn loแบกi vฤƒn bแบฃn, trรญch xuแบฅt thรดng tin, trแบฃ lแปi cรขu hแปi, tรณm tแบฏt, dแป‹ch thuแบญt vร  sinh vฤƒn bแบฃn, trong hฦกn 100 ngรดn ngแปฏ. * ๐Ÿ–ผ๏ธ Hรฌnh แบฃnh, cho cรกc nhiแป‡m vแปฅ nhฦฐ phรขn loแบกi hรฌnh แบฃnh, nhแบญn diแป‡n ฤ‘แป‘i tฦฐแปฃng vร  phรขn ฤ‘oแบกn. * ๐Ÿ—ฃ๏ธ ร‚m thanh, cho cรกc nhiแป‡m vแปฅ nhฦฐ nhแบญn dแบกng giแปng nรณi vร  phรขn loแบกi รขm thanh. Cรกc mรด hรฌnh Transformer cลฉng cรณ thแปƒ thแปฑc hiแป‡n cรกc nhiแป‡m vแปฅ trรชn **nhiแปu modalities kแบฟt hแปฃp**, nhฦฐ trแบฃ lแปi cรขu hแปi vแป bแบฃng, nhแบญn dแบกng kรฝ tแปฑ quang hแปc, trรญch xuแบฅt thรดng tin tแปซ tร i liแป‡u quรฉt, phรขn loแบกi video vร  trแบฃ lแปi cรขu hแปi hรฌnh แบฃnh. ๐Ÿค— Transformers cung cแบฅp cรกc API ฤ‘แปƒ tแบฃi xuแป‘ng vร  sแปญ dแปฅng nhanh chรณng cรกc mรด hรฌnh ฤ‘ฦฐแปฃc huแบฅn luyแป‡n trฦฐแป›c ฤ‘รณ trรชn vฤƒn bแบฃn cแปฅ thแปƒ, ฤ‘iแปu chแป‰nh chรบng trรชn tแบญp dแปฏ liแป‡u cแปงa riรชng bแบกn vร  sau ฤ‘รณ chia sแบป chรบng vแป›i cแป™ng ฤ‘แป“ng trรชn [model hub](https://huggingface.co/models) cแปงa chรบng tรดi. ฤแป“ng thแปi, mแป—i module python xรกc ฤ‘แป‹nh mแป™t kiแบฟn trรบc lร  hoร n toร n ฤ‘แป™c lแบญp vร  cรณ thแปƒ ฤ‘ฦฐแปฃc sแปญa ฤ‘แป•i ฤ‘แปƒ cho phรฉp thแปฑc hiแป‡n nhanh cรกc thรญ nghiแป‡m nghiรชn cแปฉu. ๐Ÿค— Transformers ฤ‘ฦฐแปฃc hแป— trแปฃ bแปŸi ba thฦฐ viแป‡n hแปc sรขu phแป• biแบฟn nhแบฅt โ€” [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) vร  [TensorFlow](https://www.tensorflow.org/) โ€” vแป›i tรญch hแปฃp mฦฐแปฃt mร  giแปฏa chรบng. Viแป‡c huแบฅn luyแป‡n mรด hรฌnh cแปงa bแบกn vแป›i mแป™t thฦฐ viแป‡n trฦฐแป›c khi tแบฃi chรบng ฤ‘แปƒ sแปญ dแปฅng trong suy luแบญn vแป›i thฦฐ viแป‡n khรกc lร  rแบฅt dแป… dร ng. ## Cรกc demo trแปฑc tuyแบฟn Bแบกn cรณ thแปƒ kiแปƒm tra hแบงu hแบฟt cรกc mรด hรฌnh cแปงa chรบng tรดi trแปฑc tiแบฟp trรชn trang cแปงa chรบng tแปซ [model hub](https://huggingface.co/models). Chรบng tรดi cลฉng cung cแบฅp [dแป‹ch vแปฅ lฦฐu trแปฏ mรด hรฌnh riรชng tฦฐ, phiรชn bแบฃn vร  API suy luแบญn](https://huggingface.co/pricing) cho cรกc mรด hรฌnh cรดng khai vร  riรชng tฦฐ. 
Dฦฐแป›i ฤ‘รขy lร  mแป™t sแป‘ vรญ dแปฅ: Trong Xแปญ lรฝ Ngรดn ngแปฏ Tแปฑ nhiรชn: - [Hoร n thร nh tแปซ vแปฅng vแป tแปซ vแป›i BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [Nhแบญn dแบกng thแปฑc thแปƒ ฤ‘แบทt tรชn vแป›i Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [Tแบกo vฤƒn bแบฃn tแปฑ nhiรชn vแป›i Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) - [Suy luแบญn Ngรดn ngแปฏ Tแปฑ nhiรชn vแป›i RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [Tรณm tแบฏt vฤƒn bแบฃn vแป›i BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [Trแบฃ lแปi cรขu hแปi vแป›i DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [Dแป‹ch vฤƒn bแบฃn vแป›i T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) Trong Thแป‹ giรกc Mรกy tรญnh: - [Phรขn loแบกi hรฌnh แบฃnh vแป›i ViT](https://huggingface.co/google/vit-base-patch16-224) - [Phรกt hiแป‡n ฤ‘แป‘i tฦฐแปฃng vแป›i DETR](https://huggingface.co/facebook/detr-resnet-50) - [Phรขn ฤ‘oแบกn ngแปฏ nghฤฉa vแป›i SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) - [Phรขn ฤ‘oแบกn toร n diแป‡n vแป›i Mask2Former](https://huggingface.co/facebook/mask2former-swin-large-coco-panoptic) - [ฦฏแป›c lฦฐแปฃng ฤ‘แป™ sรขu vแป›i Depth 
Anything](https://huggingface.co/docs/transformers/main/model_doc/depth_anything) - [Phรขn loแบกi video vแป›i VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae) - [Phรขn ฤ‘oแบกn toร n cแบงu vแป›i OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large) Trong รขm thanh: - [Nhแบญn dแบกng giแปng nรณi tแปฑ ฤ‘แป™ng vแป›i Whisper](https://huggingface.co/openai/whisper-large-v3) - [Phรกt hiแป‡n tแปซ khรณa vแป›i Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) - [Phรขn loแบกi รขm thanh vแป›i Audio Spectrogram Transformer](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) Trong cรกc nhiแป‡m vแปฅ ฤ‘a phฦฐฦกng thแปฉc: - [Trแบฃ lแปi cรขu hแปi vแป bแบฃng vแป›i TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq) - [Trแบฃ lแปi cรขu hแปi hรฌnh แบฃnh vแป›i ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) - [Mรด tแบฃ hรฌnh แบฃnh vแป›i LLaVa](https://huggingface.co/llava-hf/llava-1.5-7b-hf) - [Phรขn loแบกi hรฌnh แบฃnh khรดng cแบงn nhรฃn vแป›i SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) - [Trแบฃ lแปi cรขu hแปi vฤƒn bแบฃn tร i liแป‡u vแป›i LayoutLM](https://huggingface.co/impira/layoutlm-document-qa) - [Phรขn loแบกi video khรดng cแบงn nhรฃn vแป›i X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip) - [Phรกt hiแป‡n ฤ‘แป‘i tฦฐแปฃng khรดng cแบงn nhรฃn vแป›i OWLv2](https://huggingface.co/docs/transformers/en/model_doc/owlv2) - [Phรขn ฤ‘oแบกn hรฌnh แบฃnh khรดng cแบงn nhรฃn vแป›i CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg) - [Tแบกo mแบทt nแบก tแปฑ ฤ‘แป™ng vแป›i SAM](https://huggingface.co/docs/transformers/model_doc/sam) ## 100 dแปฑ รกn sแปญ dแปฅng Transformers Transformers khรดng chแป‰ lร  mแป™t bแป™ cรดng cแปฅ ฤ‘แปƒ sแปญ dแปฅng cรกc mรด hรฌnh ฤ‘ฦฐแปฃc huแบฅn luyแป‡n trฦฐแป›c: ฤ‘รณ lร  mแป™t cแป™ng ฤ‘แป“ng cรกc dแปฑ รกn xรขy dแปฑng xung quanh nรณ vร  Hugging Face Hub. Chรบng tรดi muแป‘n Transformers giรบp cรกc nhร  phรกt triแปƒn, nhร  nghiรชn cแปฉu, sinh viรชn, giรกo sฦฐ, kแปน sฦฐ vร  bแบฅt kแปณ ai khรกc xรขy dแปฑng nhแปฏng dแปฑ รกn mฦก ฦฐแป›c cแปงa hแป. ฤแปƒ kแปท niแป‡m 100.000 sao cแปงa transformers, chรบng tรดi ฤ‘รฃ quyแบฟt ฤ‘แป‹nh tแบญp trung vร o cแป™ng ฤ‘แป“ng vร  tแบกo ra trang [awesome-transformers](./awesome-transformers.md) liแป‡t kรช 100 dแปฑ รกn tuyแป‡t vแปi ฤ‘ฦฐแปฃc xรขy dแปฑng xung quanh transformers. Nแบฟu bแบกn sแปŸ hแปฏu hoแบทc sแปญ dแปฅng mแป™t dแปฑ รกn mร  bแบกn tin rแบฑng nรชn ฤ‘ฦฐแปฃc thรชm vร o danh sรกch, vui lรฒng mแปŸ mแป™t PR ฤ‘แปƒ thรชm nรณ! ## Nแบฟu bแบกn ฤ‘ang tรฌm kiแบฟm hแป— trแปฃ tรนy chแป‰nh tแปซ ฤ‘แป™i ngลฉ Hugging Face <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## Hร nh trรฌnh nhanh ฤแปƒ ngay lแบญp tแปฉc sแปญ dแปฅng mแป™t mรด hรฌnh trรชn mแป™t ฤ‘แบงu vร o cแปฅ thแปƒ (vฤƒn bแบฃn, hรฌnh แบฃnh, รขm thanh, ...), chรบng tรดi cung cแบฅp API `pipeline`. Pipelines nhรณm mแป™t mรด hรฌnh ฤ‘ฦฐแปฃc huแบฅn luyแป‡n trฦฐแป›c vแป›i quรก trรฌnh tiแปn xแปญ lรฝ ฤ‘รฃ ฤ‘ฦฐแปฃc sแปญ dแปฅng trong quรก trรฌnh huแบฅn luyแป‡n cแปงa mรด hรฌnh ฤ‘รณ. 
Dฦฐแป›i ฤ‘รขy lร  cรกch sแปญ dแปฅng nhanh mแป™t pipeline ฤ‘แปƒ phรขn loแบกi vฤƒn bแบฃn tรญch cแปฑc so vแป›i tiรชu cแปฑc: ```python >>> from transformers import pipeline # Cแบฅp phรกt mแป™t pipeline cho phรขn tรญch cแบฃm xรบc >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` Dรฒng code thแปฉ hai tแบฃi xuแป‘ng vร  lฦฐu trแปฏ bแป™ mรด hรฌnh ฤ‘ฦฐแปฃc huแบฅn luyแป‡n ฤ‘ฦฐแปฃc sแปญ dแปฅng bแปŸi pipeline, trong khi dรฒng thแปฉ ba ฤ‘รกnh giรก nรณ trรชn vฤƒn bแบฃn ฤ‘รฃ cho. แปž ฤ‘รขy, cรขu trแบฃ lแปi lร  "tรญch cแปฑc" vแป›i ฤ‘แป™ tin cแบญy lร  99,97%. Nhiแปu nhiแป‡m vแปฅ cรณ sแบตn mแป™t `pipeline` ฤ‘ฦฐแปฃc huแบฅn luyแป‡n trฦฐแป›c, trong NLP nhฦฐng cลฉng trong thแป‹ giรกc mรกy tรญnh vร  giแปng nรณi. Vรญ dแปฅ, chรบng ta cรณ thแปƒ dแป… dร ng trรญch xuแบฅt cรกc ฤ‘แป‘i tฦฐแปฃng ฤ‘ฦฐแปฃc phรกt hiแป‡n trong mแป™t hรฌnh แบฃnh: ``` python >>> import requests >>> from PIL import Image >>> from transformers import pipeline # Tแบฃi xuแป‘ng mแป™t hรฌnh แบฃnh vแป›i nhแปฏng con mรจo dแป… thฦฐฦกng >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" >>> image_data = requests.get(url, stream=True).raw >>> image = Image.open(image_data) # Cแบฅp phรกt mแป™t pipeline cho phรกt hiแป‡n ฤ‘แป‘i tฦฐแปฃng >>> object_detector = pipeline('object-detection') >>> object_detector(image) [{'score': 0.9982201457023621, 'label': 'remote', 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, {'score': 0.9960021376609802, 'label': 'remote', 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, {'score': 0.9954745173454285, 'label': 'couch', 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, {'score': 0.9988006353378296, 'label': 'cat', 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, {'score': 0.9986783862113953, 'label': 'cat', 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] ``` แปž ฤ‘รขy, chรบng ta nhแบญn ฤ‘ฦฐแปฃc mแป™t danh sรกch cรกc ฤ‘แป‘i tฦฐแปฃng ฤ‘ฦฐแปฃc phรกt hiแป‡n trong hรฌnh แบฃnh, vแป›i mแป™t hแป™p bao quanh ฤ‘แป‘i tฦฐแปฃng vร  mแป™t ฤ‘iแปƒm ฤ‘รกnh giรก ฤ‘แป™ tin cแบญy. ฤรขy lร  hรฌnh แบฃnh gแป‘c แปŸ bรชn trรกi, vแป›i cรกc dแปฑ ฤ‘oรกn hiแปƒn thแป‹ แปŸ bรชn phแบฃi: <h3 align="center"> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> </h3> Bแบกn cรณ thแปƒ tรฌm hiแปƒu thรชm vแป cรกc nhiแป‡m vแปฅ ฤ‘ฦฐแปฃc hแป— trแปฃ bแปŸi API `pipeline` trong [hฦฐแป›ng dแบซn nร y](https://huggingface.co/docs/transformers/task_summary). Ngoร i `pipeline`, ฤ‘แปƒ tแบฃi xuแป‘ng vร  sแปญ dแปฅng bแบฅt kแปณ mรด hรฌnh ฤ‘ฦฐแปฃc huแบฅn luyแป‡n trฦฐแป›c nร o cho nhiแป‡m vแปฅ cแปฅ thแปƒ cแปงa bแบกn, chแป‰ cแบงn ba dรฒng code. 
ฤรขy lร  phiรชn bแบฃn PyTorch: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` Vร  ฤ‘รขy lร  mรฃ tฦฐฦกng ฤ‘ฦฐฦกng cho TensorFlow: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` Tokenizer lร  thร nh phแบงn chแป‹u trรกch nhiแป‡m cho viแป‡c tiแปn xแปญ lรฝ mร  mรด hรฌnh ฤ‘ฦฐแปฃc huแบฅn luyแป‡n trฦฐแป›c mong ฤ‘แปฃi vร  cรณ thแปƒ ฤ‘ฦฐแปฃc gแปi trแปฑc tiแบฟp trรชn mแป™t chuแป—i ฤ‘ฦกn (nhฦฐ trong cรกc vรญ dแปฅ trรชn) hoแบทc mแป™t danh sรกch. Nรณ sแบฝ xuแบฅt ra mแป™t tแปซ ฤ‘iแปƒn mร  bแบกn cรณ thแปƒ sแปญ dแปฅng trong mรฃ phแปฅ thuแป™c hoแบทc ฤ‘ฦกn giแบฃn lร  truyแปn trแปฑc tiแบฟp cho mรด hรฌnh cแปงa bแบกn bแบฑng cรกch sแปญ dแปฅng toรกn tแปญ ** ฤ‘แปƒ giแบฃi nรฉn ฤ‘แป‘i sแป‘. Chรญnh mรด hรฌnh lร  mแป™t [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) thรดng thฦฐแปng hoแบทc mแป™t [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (tรนy thuแป™c vร o backend cแปงa bแบกn) mร  bแบกn cรณ thแปƒ sแปญ dแปฅng nhฦฐ bรฌnh thฦฐแปng. [Hฦฐแป›ng dแบซn nร y](https://huggingface.co/docs/transformers/training) giแบฃi thรญch cรกch tรญch hแปฃp mแป™t mรด hรฌnh nhฦฐ vแบญy vร o mแป™t vรฒng lแบทp huแบฅn luyแป‡n cแป• ฤ‘iแปƒn PyTorch hoแบทc TensorFlow, hoแบทc cรกch sแปญ dแปฅng API `Trainer` cแปงa chรบng tรดi ฤ‘แปƒ tinh chแป‰nh nhanh chรณng trรชn mแป™t bแป™ dแปฏ liแป‡u mแป›i. ## Tแบกi sao tรดi nรชn sแปญ dแปฅng transformers? 1. Cรกc mรด hรฌnh tiรชn tiแบฟn dแป… sแปญ dแปฅng: - Hiแป‡u suแบฅt cao trong viแป‡c hiแปƒu vร  tแบกo ra ngรดn ngแปฏ tแปฑ nhiรชn, thแป‹ giรกc mรกy tรญnh vร  รขm thanh. - Ngฦฐแปกng vร o thแบฅp cho giแบฃng viรชn vร  ngฦฐแปi thแปฑc hร nh. - รt trแปซu tฦฐแปฃng dร nh cho ngฦฐแปi dรนng vแป›i chแป‰ ba lแป›p hแปc. - Mแป™t API thแป‘ng nhแบฅt ฤ‘แปƒ sแปญ dแปฅng tแบฅt cแบฃ cรกc mรด hรฌnh ฤ‘ฦฐแปฃc huแบฅn luyแป‡n trฦฐแป›c cแปงa chรบng tรดi. 2. Giแบฃm chi phรญ tรญnh toรกn, lร m giแบฃm lฦฐแปฃng khรญ thแบฃi carbon: - Cรกc nhร  nghiรชn cแปฉu cรณ thแปƒ chia sแบป cรกc mรด hรฌnh ฤ‘รฃ ฤ‘ฦฐแปฃc huแบฅn luyแป‡n thay vรฌ luรดn luรดn huแบฅn luyแป‡n lแบกi. - Ngฦฐแปi thแปฑc hร nh cรณ thแปƒ giแบฃm thแปi gian tรญnh toรกn vร  chi phรญ sแบฃn xuแบฅt. - Hร ng chแปฅc kiแบฟn trรบc vแป›i hฦกn 400.000 mรด hรฌnh ฤ‘ฦฐแปฃc huแบฅn luyแป‡n trฦฐแป›c trรชn tแบฅt cแบฃ cรกc phฦฐฦกng phรกp. 3. Lแปฑa chแปn framework phรน hแปฃp cho mแปi giai ฤ‘oแบกn cแปงa mรด hรฌnh: - Huแบฅn luyแป‡n cรกc mรด hรฌnh tiรชn tiแบฟn chแป‰ trong 3 dรฒng code. - Di chuyแปƒn mแป™t mรด hรฌnh duy nhแบฅt giแปฏa cรกc framework TF2.0/PyTorch/JAX theo รฝ muแป‘n. - Dแป… dร ng chแปn framework phรน hแปฃp cho huแบฅn luyแป‡n, ฤ‘รกnh giรก vร  sแบฃn xuแบฅt. 4. Dแป… dร ng tรนy chแป‰nh mแป™t mรด hรฌnh hoแบทc mแป™t vรญ dแปฅ theo nhu cแบงu cแปงa bแบกn: - Chรบng tรดi cung cแบฅp cรกc vรญ dแปฅ cho mแป—i kiแบฟn trรบc ฤ‘แปƒ tรกi tแบกo kแบฟt quแบฃ ฤ‘ฦฐแปฃc cรดng bแป‘ bแปŸi cรกc tรกc giแบฃ gแป‘c. - Cรกc thร nh phแบงn nแป™i tแบกi cแปงa mรด hรฌnh ฤ‘ฦฐแปฃc tiแบฟt lแป™ mแป™t cรกch nhแบฅt quรกn nhแบฅt cรณ thแปƒ. 
- Cรกc tแป‡p mรด hรฌnh cรณ thแปƒ ฤ‘ฦฐแปฃc sแปญ dแปฅng ฤ‘แป™c lแบญp vแป›i thฦฐ viแป‡n ฤ‘แปƒ thแปฑc hiแป‡n cรกc thแปญ nghiแป‡m nhanh chรณng. ## Tแบกi sao tรดi khรดng nรชn sแปญ dแปฅng transformers? - Thฦฐ viแป‡n nร y khรดng phแบฃi lร  mแป™t bแป™ cรดng cแปฅ modul cho cรกc khแป‘i xรขy dแปฑng mแบกng neural. Mรฃ trong cรกc tแป‡p mรด hรฌnh khรดng ฤ‘ฦฐแปฃc tรกi cแบฅu trรบc vแป›i cรกc trแปซu tฦฐแปฃng bแป• sung mแป™t cรกch cแป‘ รฝ, ฤ‘แปƒ cรกc nhร  nghiรชn cแปฉu cรณ thแปƒ lแบทp nhanh trรชn tแปซng mรด hรฌnh mร  khรดng cแบงn ฤ‘ร o sรขu vร o cรกc trแปซu tฦฐแปฃng/tแป‡p bแป• sung. - API huแบฅn luyแป‡n khรดng ฤ‘ฦฐแปฃc thiแบฟt kแบฟ ฤ‘แปƒ hoแบกt ฤ‘แป™ng trรชn bแบฅt kแปณ mรด hรฌnh nร o, mร  ฤ‘ฦฐแปฃc tแป‘i ฦฐu hรณa ฤ‘แปƒ hoแบกt ฤ‘แป™ng vแป›i cรกc mรด hรฌnh ฤ‘ฦฐแปฃc cung cแบฅp bแปŸi thฦฐ viแป‡n. ฤแป‘i vแป›i vรฒng lแบทp hแปc mรกy chung, bแบกn nรชn sแปญ dแปฅng mแป™t thฦฐ viแป‡n khรกc (cรณ thแปƒ lร  [Accelerate](https://huggingface.co/docs/accelerate)). - Mแบทc dรน chรบng tรดi cแป‘ gแบฏng trรฌnh bร y cร ng nhiแปu trฦฐแปng hแปฃp sแปญ dแปฅng cร ng tแป‘t, nhฦฐng cรกc tแบญp lแป‡nh trong thฦฐ mแปฅc [examples](https://github.com/huggingface/transformers/tree/main/examples) chแป‰ lร  vรญ dแปฅ. Dแปฑ kiแบฟn rแบฑng chรบng sแบฝ khรดng hoแบกt ฤ‘แป™ng ngay tแปฉc khแบฏc trรชn vแบฅn ฤ‘แป cแปฅ thแปƒ cแปงa bแบกn vร  bแบกn sแบฝ phแบฃi thay ฤ‘แป•i mแป™t sแป‘ dรฒng mรฃ ฤ‘แปƒ thรญch nghi vแป›i nhu cแบงu cแปงa bแบกn. ## Cร i ฤ‘แบทt ### Sแปญ dแปฅng pip Thฦฐ viแป‡n nร y ฤ‘ฦฐแปฃc kiแปƒm tra trรชn Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ vร  TensorFlow 2.6+. Bแบกn nรชn cร i ฤ‘แบทt ๐Ÿค— Transformers trong mแป™t [mรดi trฦฐแปng แบฃo Python](https://docs.python.org/3/library/venv.html). Nแบฟu bแบกn chฦฐa quen vแป›i mรดi trฦฐแปng แบฃo Python, hรฃy xem [hฦฐแป›ng dแบซn sแปญ dแปฅng](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Trฦฐแป›c tiรชn, tแบกo mแป™t mรดi trฦฐแปng แบฃo vแป›i phiรชn bแบฃn Python bแบกn sแบฝ sแปญ dแปฅng vร  kรญch hoแบกt nรณ. Sau ฤ‘รณ, bแบกn sแบฝ cแบงn cร i ฤ‘แบทt รญt nhแบฅt mแป™t trong sแป‘ cรกc framework Flax, PyTorch hoแบทc TensorFlow. Vui lรฒng tham khแบฃo [trang cร i ฤ‘แบทt TensorFlow](https://www.tensorflow.org/install/), [trang cร i ฤ‘แบทt PyTorch](https://pytorch.org/get-started/locally/#start-locally) vร /hoแบทc [Flax](https://github.com/google/flax#quick-install) vร  [Jax](https://github.com/google/jax#installation) ฤ‘แปƒ biแบฟt lแป‡nh cร i ฤ‘แบทt cแปฅ thแปƒ cho nแปn tแบฃng cแปงa bแบกn. Khi ฤ‘รฃ cร i ฤ‘แบทt mแป™t trong cรกc backend ฤ‘รณ, ๐Ÿค— Transformers cรณ thแปƒ ฤ‘ฦฐแปฃc cร i ฤ‘แบทt bแบฑng pip nhฦฐ sau: ```bash pip install transformers ``` Nแบฟu bแบกn muแป‘n thแปฑc hiแป‡n cรกc vรญ dแปฅ hoแบทc cแบงn phiรชn bแบฃn mแป›i nhแบฅt cแปงa mรฃ vร  khรดng thแปƒ chแป ฤ‘แปฃi cho mแป™t phiรชn bแบฃn mแป›i, bแบกn phแบฃi [cร i ฤ‘แบทt thฦฐ viแป‡n tแปซ nguแป“n](https://huggingface.co/docs/transformers/installation#installing-from-source). ### Vแป›i conda ๐Ÿค— Transformers cรณ thแปƒ ฤ‘ฦฐแปฃc cร i ฤ‘แบทt bแบฑng conda nhฦฐ sau: ```shell script conda install conda-forge::transformers ``` > **_GHI CHรš:_** Cร i ฤ‘แบทt `transformers` tแปซ kรชnh `huggingface` ฤ‘รฃ bแป‹ lแป—i thแปi. Hรฃy lร m theo trang cร i ฤ‘แบทt cแปงa Flax, PyTorch hoแบทc TensorFlow ฤ‘แปƒ xem cรกch cร i ฤ‘แบทt chรบng bแบฑng conda. > **_GHI CHรš:_** Trรชn Windows, bแบกn cรณ thแปƒ ฤ‘ฦฐแปฃc yรชu cแบงu kรญch hoแบกt Chแบฟ ฤ‘แป™ phรกt triแปƒn ฤ‘แปƒ tแบญn dแปฅng viแป‡c lฦฐu cache. 
Nแบฟu ฤ‘iแปu nร y khรดng phแบฃi lร  mแป™t lแปฑa chแปn cho bแบกn, hรฃy cho chรบng tรดi biแบฟt trong [vแบฅn ฤ‘แป nร y](https://github.com/huggingface/huggingface_hub/issues/1062). ## Kiแบฟn trรบc mรด hรฌnh **[Tแบฅt cแบฃ cรกc ฤ‘iแปƒm kiแปƒm tra mรด hรฌnh](https://huggingface.co/models)** ฤ‘ฦฐแปฃc cung cแบฅp bแปŸi ๐Ÿค— Transformers ฤ‘ฦฐแปฃc tรญch hแปฃp mแป™t cรกch mฦฐแปฃt mร  tแปซ trung tรขm mรด hรฌnh huggingface.co [model hub](https://huggingface.co/models), nฦกi chรบng ฤ‘ฦฐแปฃc tแบฃi lรชn trแปฑc tiแบฟp bแปŸi [ngฦฐแปi dรนng](https://huggingface.co/users) vร  [tแป• chแปฉc](https://huggingface.co/organizations). Sแป‘ lฦฐแปฃng ฤ‘iแปƒm kiแปƒm tra hiแป‡n tแบกi: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— Transformers hiแป‡n ฤ‘ang cung cแบฅp cรกc kiแบฟn trรบc sau ฤ‘รขy: xem [แปŸ ฤ‘รขy](https://huggingface.co/docs/transformers/model_summary) ฤ‘แปƒ cรณ mแป™t tรณm tแบฏt tแป•ng quan vแป mแป—i kiแบฟn trรบc. ฤแปƒ kiแปƒm tra xem mแป—i mรด hรฌnh cรณ mแป™t phiรชn bแบฃn thแปฑc hiแป‡n trong Flax, PyTorch hoแบทc TensorFlow, hoแบทc cรณ mแป™t tokenizer liรชn quan ฤ‘ฦฐแปฃc hแป— trแปฃ bแปŸi thฦฐ viแป‡n ๐Ÿค— Tokenizers, vui lรฒng tham khแบฃo [bแบฃng nร y](https://huggingface.co/docs/transformers/index#supported-frameworks). Nhแปฏng phiรชn bแบฃn nร y ฤ‘รฃ ฤ‘ฦฐแปฃc kiแปƒm tra trรชn mแป™t sแป‘ tแบญp dแปฏ liแป‡u (xem cรกc tแบญp lแป‡nh vรญ dแปฅ) vร  nรชn tฦฐฦกng ฤ‘ฦฐฦกng vแป›i hiแป‡u suแบฅt cแปงa cรกc phiรชn bแบฃn gแป‘c. Bแบกn cรณ thแปƒ tรฌm thแบฅy thรชm thรดng tin vแป hiแป‡u suแบฅt trong phแบงn Vรญ dแปฅ cแปงa [tร i liแป‡u](https://github.com/huggingface/transformers/tree/main/examples). ## Tรฌm hiแปƒu thรชm | Phแบงn | Mรด tแบฃ | |-|-| | [Tร i liแป‡u](https://huggingface.co/docs/transformers/) | Toร n bแป™ tร i liแป‡u API vร  hฦฐแป›ng dแบซn | | [Tรณm tแบฏt nhiแป‡m vแปฅ](https://huggingface.co/docs/transformers/task_summary) | Cรกc nhiแป‡m vแปฅ ฤ‘ฦฐแปฃc hแป— trแปฃ bแปŸi ๐Ÿค— Transformers | | [Hฦฐแป›ng dแบซn tiแปn xแปญ lรฝ](https://huggingface.co/docs/transformers/preprocessing) | Sแปญ dแปฅng lแป›p `Tokenizer` ฤ‘แปƒ chuแบฉn bแป‹ dแปฏ liแป‡u cho cรกc mรด hรฌnh | | [Huแบฅn luyแป‡n vร  ฤ‘iแปu chแป‰nh](https://huggingface.co/docs/transformers/training) | Sแปญ dแปฅng cรกc mรด hรฌnh ฤ‘ฦฐแปฃc cung cแบฅp bแปŸi ๐Ÿค— Transformers trong vรฒng lแบทp huแบฅn luyแป‡n PyTorch/TensorFlow vร  API `Trainer` | | [Hฦฐแป›ng dแบซn nhanh: ฤiแปu chแป‰nh/sแปญ dแปฅng cรกc kแป‹ch bแบฃn](https://github.com/huggingface/transformers/tree/main/examples) | Cรกc kแป‹ch bแบฃn vรญ dแปฅ ฤ‘แปƒ ฤ‘iแปu chแป‰nh mรด hรฌnh trรชn nhiแปu nhiแป‡m vแปฅ khรกc nhau | | [Chia sแบป vร  tแบฃi lรชn mรด hรฌnh](https://huggingface.co/docs/transformers/model_sharing) | Tแบฃi lรชn vร  chia sแบป cรกc mรด hรฌnh ฤ‘รฃ ฤ‘iแปu chแป‰nh cแปงa bแบกn vแป›i cแป™ng ฤ‘แป“ng | ## Trรญch dแบซn Bรขy giแป chรบng ta cรณ mแป™t [bร i bรกo](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) mร  bแบกn cรณ thแปƒ trรญch dแบซn cho thฦฐ viแป‡n ๐Ÿค— Transformers: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. 
Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/README_ja.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!--- A useful guide for English-Traditional Japanese translation of Hugging Face documentation - Use square quotes, e.g.,ใ€Œๅผ•็”จใ€ Dictionary API: API(็ฟป่จณใ—ใชใ„) add: ่ฟฝๅŠ  checkpoint: ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ code: ใ‚ณใƒผใƒ‰ community: ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃ confidence: ไฟก้ ผๅบฆ dataset: ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆ documentation: ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ example: ไพ‹ finetune: ๅพฎ่ชฟๆ•ด Hugging Face: Hugging Face(็ฟป่จณใ—ใชใ„) implementation: ๅฎŸ่ฃ… inference: ๆŽจ่ซ– library: ใƒฉใ‚คใƒ–ใƒฉใƒช module: ใƒขใ‚ธใƒฅใƒผใƒซ NLP/Natural Language Processing: NLPใจ่กจ็คบใ•ใ‚Œใ‚‹ๅ ดๅˆใฏ็ฟป่จณใ•ใ‚Œใšใ€Natural Language Processingใจ่กจ็คบใ•ใ‚Œใ‚‹ๅ ดๅˆใฏ็ฟป่จณใ•ใ‚Œใ‚‹ online demos: ใ‚ชใƒณใƒฉใ‚คใƒณใƒ‡ใƒข pipeline: pipeline(็ฟป่จณใ—ใชใ„) pretrained/pretrain: ๅญฆ็ฟ’ๆธˆใฟ Python data structures (e.g., list, set, dict): ใƒชใ‚นใƒˆใ€ใ‚ปใƒƒใƒˆใ€ใƒ‡ใ‚ฃใ‚ฏใ‚ทใƒงใƒŠใƒชใจ่จณใ•ใ‚Œใ€ๆ‹ฌๅผงๅ†…ใฏๅŽŸๆ–‡่‹ฑ่ชž repository: repository(็ฟป่จณใ—ใชใ„) summary: ๆฆ‚่ฆ token-: token-(็ฟป่จณใ—ใชใ„) Trainer: Trainer(็ฟป่จณใ—ใชใ„) transformer: transformer(็ฟป่จณใ—ใชใ„) tutorial: ใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซ user: ใƒฆใƒผใ‚ถ --> <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> <br> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <b>ๆ—ฅๆœฌ่ชž</b> | <a 
href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | </p> </h4> <h3 align="center"> <p>JAXใ€PyTorchใ€TensorFlowใฎใŸใ‚ใฎๆœ€ๅ…ˆ็ซฏๆฉŸๆขฐๅญฆ็ฟ’</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค—Transformersใฏใ€ใƒ†ใ‚ญใ‚นใƒˆใ€่ฆ–่ฆšใ€้Ÿณๅฃฐใชใฉใฎ็•ฐใชใ‚‹ใƒขใƒ€ใƒชใƒ†ใ‚ฃใซๅฏพใ—ใฆใ‚ฟใ‚นใ‚ฏใ‚’ๅฎŸ่กŒใ™ใ‚‹ใŸใ‚ใซใ€ไบ‹ๅ‰ใซๅญฆ็ฟ’ใ•ใ›ใŸๆ•ฐๅƒใฎใƒขใƒ‡ใƒซใ‚’ๆไพ›ใ—ใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใฏๆฌกใฎใ‚ˆใ†ใชๅ ดๅˆใซ้ฉ็”จใงใใพใ™: * ๐Ÿ“ ใƒ†ใ‚ญใ‚นใƒˆใฏใ€ใƒ†ใ‚ญใ‚นใƒˆใฎๅˆ†้กžใ€ๆƒ…ๅ ฑๆŠฝๅ‡บใ€่ณชๅ•ๅฟœ็ญ”ใ€่ฆ็ด„ใ€็ฟป่จณใ€ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใชใฉใฎใ‚ฟใ‚นใ‚ฏใฎใŸใ‚ใซใ€100ไปฅไธŠใฎ่จ€่ชžใซๅฏพๅฟœใ—ใฆใ„ใพใ™ใ€‚ * ๐Ÿ–ผ๏ธ ็”ปๅƒๅˆ†้กžใ€็‰ฉไฝ“ๆคœๅ‡บใ€ใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใชใฉใฎใ‚ฟใ‚นใ‚ฏใฎใŸใ‚ใฎ็”ปๅƒใ€‚ * ๐Ÿ—ฃ๏ธ ้Ÿณๅฃฐใฏใ€้Ÿณๅฃฐ่ช่ญ˜ใ‚„้Ÿณๅฃฐๅˆ†้กžใชใฉใฎใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใ—ใพใ™ใ€‚ ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒขใƒ‡ใƒซใฏใ€ใƒ†ใƒผใƒ–ใƒซ่ณชๅ•ๅฟœ็ญ”ใ€ๅ…‰ๅญฆๆ–‡ๅญ—่ช่ญ˜ใ€ใ‚นใ‚ญใƒฃใƒณๆ–‡ๆ›ธใ‹ใ‚‰ใฎๆƒ…ๅ ฑๆŠฝๅ‡บใ€ใƒ“ใƒ‡ใ‚ชๅˆ†้กžใ€่ฆ–่ฆš็š„่ณชๅ•ๅฟœ็ญ”ใชใฉใ€**่ค‡ๆ•ฐใฎใƒขใƒ€ใƒชใƒ†ใ‚ฃใ‚’็ต„ใฟๅˆใ‚ใ›ใŸ**ใ‚ฟใ‚นใ‚ฏใ‚‚ๅฎŸ่กŒๅฏ่ƒฝใงใ™ใ€‚ ๐Ÿค—Transformersใฏใ€ไธŽใˆใ‚‰ใ‚ŒใŸใƒ†ใ‚ญใ‚นใƒˆใซๅฏพใ—ใฆใใ‚Œใ‚‰ใฎไบ‹ๅ‰ๅญฆ็ฟ’ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’็ด ๆ—ฉใใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใฆไฝฟ็”จใ—ใ€ใ‚ใชใŸ่‡ช่บซใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใใ‚Œใ‚‰ใ‚’ๅพฎ่ชฟๆ•ดใ—ใ€็งใŸใกใฎ[model hub](https://huggingface.co/models)ใงใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใจๅ…ฑๆœ‰ใ™ใ‚‹ใŸใ‚ใฎAPIใ‚’ๆไพ›ใ—ใพใ™ใ€‚ๅŒๆ™‚ใซใ€ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๅฎš็พฉใ™ใ‚‹ๅ„Pythonใƒขใ‚ธใƒฅใƒผใƒซใฏๅฎŒๅ…จใซใ‚นใ‚ฟใƒณใƒ‰ใ‚ขใƒญใƒณใงใ‚ใ‚Šใ€่ฟ…้€Ÿใช็ ”็ฉถๅฎŸ้จ“ใ‚’ๅฏ่ƒฝใซใ™ใ‚‹ใŸใ‚ใซๅค‰ๆ›ดใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ๐Ÿค—Transformersใฏ[Jax](https://jax.readthedocs.io/en/latest/)ใ€[PyTorch](https://pytorch.org/)ใ€[TensorFlow](https://www.tensorflow.org/)ใจใ„ใ†3ๅคงใƒ‡ใ‚ฃใƒผใƒ—ใƒฉใƒผใƒ‹ใƒณใ‚ฐใƒฉใ‚คใƒ–ใƒฉใƒชใƒผใซๆ”ฏใˆใ‚‰ใ‚Œใ€ใใ‚Œใžใ‚Œใฎใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ‚ทใƒผใƒ ใƒฌใ‚นใซ็ตฑๅˆใ—ใฆใ„ใพใ™ใ€‚็‰‡ๆ–นใงใƒขใƒ‡ใƒซใ‚’ๅญฆ็ฟ’ใ—ใฆใ‹ใ‚‰ใ€ใ‚‚ใ†็‰‡ๆ–นใงๆŽจ่ซ–็”จใซใƒญใƒผใƒ‰ใ™ใ‚‹ใฎใฏ็ฐกๅ˜ใชใ“ใจใงใ™ใ€‚ ## ใ‚ชใƒณใƒฉใ‚คใƒณใƒ‡ใƒข [model hub](https://huggingface.co/models)ใ‹ใ‚‰ใ€ใปใจใ‚“ใฉใฎใƒขใƒ‡ใƒซใฎใƒšใƒผใ‚ธใง็›ดๆŽฅใƒ†ใ‚นใƒˆใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ใพใŸใ€ใƒ‘ใƒ–ใƒชใƒƒใ‚ฏใƒขใƒ‡ใƒซใ€ใƒ—ใƒฉใ‚คใƒ™ใƒผใƒˆใƒขใƒ‡ใƒซใซๅฏพใ—ใฆใ€[ใƒ—ใƒฉใ‚คใƒ™ใƒผใƒˆใƒขใƒ‡ใƒซใฎใƒ›ใ‚นใƒ†ใ‚ฃใƒณใ‚ฐใ€ใƒใƒผใ‚ธใƒงใƒ‹ใƒณใ‚ฐใ€ๆŽจ่ซ–API](https://huggingface.co/pricing)ใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ ไปฅไธ‹ใฏใใฎไธ€ไพ‹ใงใ™: ่‡ช็„ถ่จ€่ชžๅ‡ฆ็†ใซใฆ: - [BERTใซใ‚ˆใ‚‹ใƒžใ‚นใ‚ฏใƒ‰ใƒฏใƒผใƒ‰่ฃœๅฎŒ](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - 
[Electraใซใ‚ˆใ‚‹ๅๅ‰ๅฎŸไฝ“่ช่ญ˜](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [GPT-2ใซใ‚ˆใ‚‹ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆ](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+) - [RoBERTaใซใ‚ˆใ‚‹่‡ช็„ถ่จ€่ชžๆŽจ่ซ–](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [BARTใซใ‚ˆใ‚‹่ฆ็ด„](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [DistilBERTใซใ‚ˆใ‚‹่ณชๅ•ๅฟœ็ญ”](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [T5ใซใ‚ˆใ‚‹็ฟป่จณ](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณใซใฆ: - [ViTใซใ‚ˆใ‚‹็”ปๅƒๅˆ†้กž](https://huggingface.co/google/vit-base-patch16-224) - [DETRใซใ‚ˆใ‚‹็‰ฉไฝ“ๆคœๅ‡บ](https://huggingface.co/facebook/detr-resnet-50) - [SegFormerใซใ‚ˆใ‚‹ใ‚ปใƒžใƒณใƒ†ใ‚ฃใƒƒใ‚ฏใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณ](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) - [DETRใซใ‚ˆใ‚‹ใƒ‘ใƒŽใƒ—ใƒ†ใ‚ฃใƒƒใ‚ฏใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณ](https://huggingface.co/facebook/detr-resnet-50-panoptic) ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใซใฆ: - [Wav2Vec2ใซใ‚ˆใ‚‹่‡ชๅ‹•้Ÿณๅฃฐ่ช่ญ˜](https://huggingface.co/facebook/wav2vec2-base-960h) - [Wav2Vec2ใซใ‚ˆใ‚‹ใ‚ญใƒผใƒฏใƒผใƒ‰ๆคœ็ดข](https://huggingface.co/superb/wav2vec2-base-superb-ks) ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซใชใ‚ฟใ‚นใ‚ฏใซใฆ: - [ViLTใซใ‚ˆใ‚‹่ฆ–่ฆš็š„่ณชๅ•ๅฟœ็ญ”](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) Hugging 
Faceใƒใƒผใƒ ใซใ‚ˆใฃใฆไฝœใ‚‰ใ‚ŒใŸ **[ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใ‚’ไฝฟใฃใŸๆ›ธใ่พผใฟ](https://transformer.huggingface.co)** ใฏใ€ใ“ใฎใƒชใƒใ‚ธใƒˆใƒชใฎใƒ†ใ‚ญใ‚นใƒˆ็”ŸๆˆๆฉŸ่ƒฝใฎๅ…ฌๅผใƒ‡ใƒขใงใ‚ใ‚‹ใ€‚ ## Hugging Faceใƒใƒผใƒ ใซใ‚ˆใ‚‹ใ‚ซใ‚นใ‚ฟใƒ ใƒปใ‚ตใƒใƒผใƒˆใ‚’ใ”ๅธŒๆœ›ใฎๅ ดๅˆ <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## ใ‚ฏใ‚คใƒƒใ‚ฏใƒ„ใ‚ขใƒผ ไธŽใˆใ‚‰ใ‚ŒใŸๅ…ฅๅŠ›๏ผˆใƒ†ใ‚ญใ‚นใƒˆใ€็”ปๅƒใ€้Ÿณๅฃฐใ€...๏ผ‰ใซๅฏพใ—ใฆใ™ใใซใƒขใƒ‡ใƒซใ‚’ไฝฟใ†ใŸใ‚ใซใ€ๆˆ‘ใ€…ใฏ`pipeline`ใจใ„ใ†APIใ‚’ๆไพ›ใ—ใฆใŠใ‚Šใพใ™ใ€‚pipelineใฏใ€ๅญฆ็ฟ’ๆธˆใฟใฎใƒขใƒ‡ใƒซใจใ€ใใฎใƒขใƒ‡ใƒซใฎๅญฆ็ฟ’ๆ™‚ใซไฝฟ็”จใ•ใ‚ŒใŸๅ‰ๅ‡ฆ็†ใ‚’ใ‚ฐใƒซใƒผใƒ—ๅŒ–ใ—ใŸใ‚‚ใฎใงใ™ใ€‚ไปฅไธ‹ใฏใ€่‚ฏๅฎš็š„ใชใƒ†ใ‚ญใ‚นใƒˆใจๅฆๅฎš็š„ใชใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅˆ†้กžใ™ใ‚‹ใŸใ‚ใซpipelineใ‚’ไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใงใ™: ```python >>> from transformers import pipeline # Allocate a pipeline for sentiment-analysis >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` 2่กŒ็›ฎใฎใ‚ณใƒผใƒ‰ใงใฏใ€pipelineใงไฝฟ็”จใ•ใ‚Œใ‚‹ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใฆใ‚ญใƒฃใƒƒใ‚ทใƒฅใ—ใ€3่กŒ็›ฎใงใฏไธŽใˆใ‚‰ใ‚ŒใŸใƒ†ใ‚ญใ‚นใƒˆใซๅฏพใ—ใฆใใฎใƒขใƒ‡ใƒซใ‚’่ฉ•ไพกใ—ใพใ™ใ€‚ใ“ใ“ใงใฏใ€็ญ”ใˆใฏ99.97%ใฎไฟก้ ผๅบฆใงใ€Œใƒใ‚ธใƒ†ใ‚ฃใƒ–ใ€ใงใ™ใ€‚ ่‡ช็„ถ่จ€่ชžๅ‡ฆ็†ใ ใ‘ใงใชใใ€ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณใ‚„้Ÿณๅฃฐๅ‡ฆ็†ใซใŠใ„ใฆใ‚‚ใ€ๅคšใใฎใ‚ฟใ‚นใ‚ฏใซใฏใ‚ใ‚‰ใ‹ใ˜ใ‚่จ“็ทดใ•ใ‚ŒใŸ`pipeline`ใŒ็”จๆ„ใ•ใ‚Œใฆใ„ใ‚‹ใ€‚ไพ‹ใˆใฐใ€็”ปๅƒใ‹ใ‚‰ๆคœๅ‡บใ•ใ‚ŒใŸ็‰ฉไฝ“ใ‚’็ฐกๅ˜ใซๆŠฝๅ‡บใ™ใ‚‹ใ“ใจใŒใงใใ‚‹: ``` python >>> import requests >>> from PIL import Image >>> from transformers import pipeline # Download an image with cute cats >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" >>> image_data = requests.get(url, stream=True).raw >>> image = Image.open(image_data) # Allocate a pipeline for object detection >>> object_detector = pipeline('object-detection') >>> object_detector(image) [{'score': 0.9982201457023621, 'label': 'remote', 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, {'score': 0.9960021376609802, 'label': 'remote', 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, {'score': 0.9954745173454285, 'label': 'couch', 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, {'score': 0.9988006353378296, 'label': 'cat', 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, {'score': 0.9986783862113953, 'label': 'cat', 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] ``` ใ“ใ“ใงใฏใ€็”ปๅƒใ‹ใ‚‰ๆคœๅ‡บใ•ใ‚ŒใŸใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฎใƒชใ‚นใƒˆใŒๅพ—ใ‚‰ใ‚Œใ€ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ๅ›ฒใ‚€ใƒœใƒƒใ‚ฏใ‚นใจไฟก้ ผๅบฆใ‚นใ‚ณใ‚ขใŒ่กจ็คบใ•ใ‚Œใพใ™ใ€‚ๅทฆๅดใŒๅ…ƒ็”ปๅƒใ€ๅณๅดใŒไบˆๆธฌ็ตๆžœใ‚’่กจ็คบใ—ใŸใ‚‚ใฎใงใ™: <h3 align="center"> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> </h3> 
[ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซ](https://huggingface.co/docs/transformers/task_summary)ใงใฏใ€`pipeline`APIใงใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใ‚ฟใ‚นใ‚ฏใซใคใ„ใฆ่ฉณใ—ใ่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ `pipeline`ใซๅŠ ใˆใฆใ€ไธŽใˆใ‚‰ใ‚ŒใŸใ‚ฟใ‚นใ‚ฏใซๅญฆ็ฟ’ๆธˆใฟใฎใƒขใƒ‡ใƒซใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใฆไฝฟ็”จใ™ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชใฎใฏใ€3่กŒใฎใ‚ณใƒผใƒ‰ใ ใ‘ใงใ™ใ€‚ไปฅไธ‹ใฏPyTorchใฎใƒใƒผใ‚ธใƒงใƒณใงใ™: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` ใใ—ใฆใ“ใกใ‚‰ใฏTensorFlowใจๅŒ็ญ‰ใฎใ‚ณใƒผใƒ‰ใจใชใ‚Šใพใ™: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใŒๆœŸๅพ…ใ™ใ‚‹ใ™ในใฆใฎๅ‰ๅ‡ฆ็†ใ‚’ๆ‹…ๅฝ“ใ—ใ€ๅ˜ไธ€ใฎๆ–‡ๅญ—ๅˆ— (ไธŠ่จ˜ใฎไพ‹ใฎใ‚ˆใ†ใซ) ใพใŸใฏใƒชใ‚นใƒˆใซๅฏพใ—ใฆ็›ดๆŽฅๅ‘ผใณๅ‡บใ™ใ“ใจใŒใงใใพใ™ใ€‚ใ“ใ‚Œใฏไธ‹ๆตใฎใ‚ณใƒผใƒ‰ใงไฝฟ็”จใงใใ‚‹่พžๆ›ธใ‚’ๅ‡บๅŠ›ใ—ใพใ™ใ€‚ใพใŸใ€ๅ˜็ด”ใซ ** ๅผ•ๆ•ฐๅฑ•้–‹ๆผ”็ฎ—ๅญใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใซ็›ดๆŽฅๆธกใ™ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ใƒขใƒ‡ใƒซ่‡ชไฝ“ใฏ้€šๅธธใฎ[Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ใพใŸใฏ [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (ใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใซใ‚ˆใฃใฆ็•ฐใชใ‚‹)ใงใ€้€šๅธธ้€šใ‚Šไฝฟ็”จใ™ใ‚‹ใ“ใจใŒๅฏ่ƒฝใงใ™ใ€‚[ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซ](https://huggingface.co/docs/transformers/training)ใงใฏใ€ใ“ใฎใ‚ˆใ†ใชใƒขใƒ‡ใƒซใ‚’ๅพ“ๆฅใฎPyTorchใ‚„TensorFlowใฎๅญฆ็ฟ’ใƒซใƒผใƒ—ใซ็ตฑๅˆใ™ใ‚‹ๆ–นๆณ•ใ‚„ใ€็งใŸใกใฎ`Trainer`APIใ‚’ไฝฟใฃใฆๆ–ฐใ—ใ„ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใง็ด ๆ—ฉใๅพฎ่ชฟๆ•ดใ‚’่กŒใ†ๆ–นๆณ•ใซใคใ„ใฆ่ชฌๆ˜Žใ—ใพใ™ใ€‚ ## ใชใœtransformersใ‚’ไฝฟใ†ๅฟ…่ฆใŒใ‚ใ‚‹ใฎใงใ—ใ‚‡ใ†ใ‹๏ผŸ 1. ไฝฟใ„ใ‚„ใ™ใ„ๆœ€ๆ–ฐใƒขใƒ‡ใƒซ: - ่‡ช็„ถ่จ€่ชž็†่งฃใƒป็”Ÿๆˆใ€ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณใ€ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใฎๅ„ใ‚ฟใ‚นใ‚ฏใง้ซ˜ใ„ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’็™บๆฎใ—ใพใ™ใ€‚ - ๆ•™่‚ฒ่€…ใ€ๅฎŸๅ‹™่€…ใซใจใฃใฆใฎไฝŽใ„ๅ‚ๅ…ฅ้šœๅฃใ€‚ - ๅญฆ็ฟ’ใ™ใ‚‹ใ‚ฏใƒฉใ‚นใฏ3ใคใ ใ‘ใงใ€ใƒฆใƒผใ‚ถใŒ็›ด้ขใ™ใ‚‹ๆŠฝ่ฑกๅŒ–ใฏใปใจใ‚“ใฉใ‚ใ‚Šใพใ›ใ‚“ใ€‚ - ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ๅˆฉ็”จใ™ใ‚‹ใŸใ‚ใฎ็ตฑไธ€ใ•ใ‚ŒใŸAPIใ€‚ 1. ไฝŽใ„่จˆ็ฎ—ใ‚ณใ‚นใƒˆใ€ๅฐ‘ใชใ„ใ‚ซใƒผใƒœใƒณใƒ•ใƒƒใƒˆใƒ—ใƒชใƒณใƒˆ: - ็ ”็ฉถ่€…ใฏใ€ๅธธใซๅ†ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’่กŒใ†ใฎใงใฏใชใใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ - ๅฎŸๅ‹™ๅฎถใฏใ€่จˆ็ฎ—ๆ™‚้–“ใ‚„็”Ÿ็”ฃใ‚ณใ‚นใƒˆใ‚’ๅ‰Šๆธ›ใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ - ใ™ในใฆใฎใƒขใƒ€ใƒชใƒ†ใ‚ฃใซใŠใ„ใฆใ€60,000ไปฅไธŠใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ๆŒใคๆ•ฐๅคšใใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๆไพ›ใ—ใพใ™ใ€‚ 1. ใƒขใƒ‡ใƒซใฎใƒฉใ‚คใƒ•ใ‚ฟใ‚คใƒ ใฎใ‚ใ‚‰ใ‚†ใ‚‹้ƒจๅˆ†ใง้ฉๅˆ‡ใชใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใ‚’้ธๆŠžๅฏ่ƒฝ: - 3่กŒใฎใ‚ณใƒผใƒ‰ใงๆœ€ๅ…ˆ็ซฏใฎใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ€‚ - TF2.0/PyTorch/JAXใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏ้–“ใง1ใคใฎใƒขใƒ‡ใƒซใ‚’่‡ชๅœจใซ็งปๅ‹•ใ•ใ›ใ‚‹ใ€‚ - ๅญฆ็ฟ’ใ€่ฉ•ไพกใ€็”Ÿ็”ฃใซ้ฉใ—ใŸใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใ‚’ใ‚ทใƒผใƒ ใƒฌใ‚นใซ้ธๆŠžใงใใพใ™ใ€‚ 1. 
ใƒขใƒ‡ใƒซใ‚„ใ‚ตใƒณใƒ—ใƒซใ‚’ใƒ‹ใƒผใ‚บใซๅˆใ‚ใ›ใฆ็ฐกๅ˜ใซใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บๅฏ่ƒฝ: - ๅŽŸ่‘—่€…ใŒ็™บ่กจใ—ใŸ็ตๆžœใ‚’ๅ†็พใ™ใ‚‹ใŸใ‚ใซใ€ๅ„ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฎไพ‹ใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ - ใƒขใƒ‡ใƒซๅ†…้ƒจใฏๅฏ่ƒฝใช้™ใ‚Šไธ€่ฒซใ—ใฆๅ…ฌ้–‹ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ - ใƒขใƒ‡ใƒซใƒ•ใ‚กใ‚คใƒซใฏใƒฉใ‚คใƒ–ใƒฉใƒชใจใฏ็‹ฌ็ซ‹ใ—ใฆๅˆฉ็”จใ™ใ‚‹ใ“ใจใŒใงใใ€่ฟ…้€ŸใชๅฎŸ้จ“ใŒๅฏ่ƒฝใงใ™ใ€‚ ## ใชใœtransformersใ‚’ไฝฟใฃใฆใฏใ„ใ‘ใชใ„ใฎใงใ—ใ‚‡ใ†ใ‹๏ผŸ - ใ“ใฎใƒฉใ‚คใƒ–ใƒฉใƒชใฏใ€ใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใฎใŸใ‚ใฎใƒ“ใƒซใƒ‡ใ‚ฃใƒณใ‚ฐใƒ–ใƒญใƒƒใ‚ฏใฎใƒขใ‚ธใƒฅใƒผใƒซๅผใƒ„ใƒผใƒซใƒœใƒƒใ‚ฏใ‚นใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใƒขใƒ‡ใƒซใƒ•ใ‚กใ‚คใƒซใฎใ‚ณใƒผใƒ‰ใฏใ€็ ”็ฉถ่€…ใŒ่ฟฝๅŠ ใฎๆŠฝ่ฑกๅŒ–/ใƒ•ใ‚กใ‚คใƒซใซ้ฃ›ใณ่พผใ‚€ใ“ใจใชใใ€ๅ„ใƒขใƒ‡ใƒซใ‚’็ด ๆ—ฉใๅๅพฉใงใใ‚‹ใ‚ˆใ†ใซใ€ๆ„ๅ›ณ็š„ใซ่ฟฝๅŠ ใฎๆŠฝ่ฑกๅŒ–ใงใƒชใƒ•ใ‚กใ‚ฏใ‚ฟใƒชใƒณใ‚ฐใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚ - ๅญฆ็ฟ’APIใฏใฉใฎใ‚ˆใ†ใชใƒขใƒ‡ใƒซใงใ‚‚ๅ‹•ไฝœใ™ใ‚‹ใ‚ใ‘ใงใฏใชใใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใŒๆไพ›ใ™ใ‚‹ใƒขใƒ‡ใƒซใงๅ‹•ไฝœใ™ใ‚‹ใ‚ˆใ†ใซๆœ€้ฉๅŒ–ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ไธ€่ˆฌ็š„ใชๆฉŸๆขฐๅญฆ็ฟ’ใฎใƒซใƒผใƒ—ใซใฏใ€ๅˆฅใฎใƒฉใ‚คใƒ–ใƒฉใƒช(ใŠใใ‚‰ใ[Accelerate](https://huggingface.co/docs/accelerate))ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - ็งใŸใกใฏใงใใ‚‹ใ ใ‘ๅคšใใฎไฝฟ็”จไพ‹ใ‚’็ดนไป‹ใ™ใ‚‹ใ‚ˆใ†ๅŠชๅŠ›ใ—ใฆใ„ใพใ™ใŒใ€[examples ใƒ•ใ‚ฉใƒซใƒ€](https://github.com/huggingface/transformers/tree/main/examples) ใซใ‚ใ‚‹ใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ‚ใใพใงไพ‹ใงใ™ใ€‚ใ‚ใชใŸใฎ็‰นๅฎšใฎๅ•้กŒใซๅฏพใ—ใฆใ™ใใซๅ‹•ไฝœใ™ใ‚‹ใ‚ใ‘ใงใฏใชใใ€ใ‚ใชใŸใฎใƒ‹ใƒผใ‚บใซๅˆใ‚ใ›ใ‚‹ใŸใ‚ใซๆ•ฐ่กŒใฎใ‚ณใƒผใƒ‰ใ‚’ๅค‰ๆ›ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใŒไบˆๆƒณใ•ใ‚Œใพใ™ใ€‚ ## ใ‚คใƒณใ‚นใƒˆใƒผใƒซ ### pipใซใฆ ใ“ใฎใƒชใƒใ‚ธใƒˆใƒชใฏใ€Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, TensorFlow 2.6+ ใงใƒ†ใ‚นใƒˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ๐Ÿค—Transformersใฏ[ไปฎๆƒณ็’ฐๅขƒ](https://docs.python.org/3/library/venv.html)ใซใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚Pythonใฎไปฎๆƒณ็’ฐๅขƒใซๆ…ฃใ‚Œใฆใ„ใชใ„ๅ ดๅˆใฏใ€[ใƒฆใƒผใ‚ถใƒผใ‚ฌใ‚คใƒ‰](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)ใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ใพใšใ€ไฝฟ็”จใ™ใ‚‹ใƒใƒผใ‚ธใƒงใƒณใฎPythonใงไปฎๆƒณ็’ฐๅขƒใ‚’ไฝœๆˆใ—ใ€ใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใƒˆใ—ใพใ™ใ€‚ ใใฎๅพŒใ€Flax, PyTorch, TensorFlowใฎใ†ใกๅฐ‘ใชใใจใ‚‚1ใคใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ [TensorFlowใ‚คใƒณใ‚นใƒˆใƒผใƒซใƒšใƒผใ‚ธ](https://www.tensorflow.org/install/)ใ€[PyTorchใ‚คใƒณใ‚นใƒˆใƒผใƒซใƒšใƒผใ‚ธ](https://pytorch.org/get-started/locally/#start-locally)ใ€[Flax](https://github.com/google/flax#quick-install)ใ€[Jax](https://github.com/google/jax#installation)ใ‚คใƒณใ‚นใƒˆใƒผใƒซใƒšใƒผใ‚ธใงใ€ใŠไฝฟใ„ใฎใƒ—ใƒฉใƒƒใƒˆใƒ•ใ‚ฉใƒผใƒ ๅˆฅใฎใ‚คใƒณใ‚นใƒˆใƒผใƒซใ‚ณใƒžใƒณใƒ‰ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใ‚Œใ‚‰ใฎใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใฎใ„ใšใ‚Œใ‹ใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€๐Ÿค—Transformersใฏไปฅไธ‹ใฎใ‚ˆใ†ใซpipใ‚’ไฝฟ็”จใ—ใฆใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใ“ใจใŒใงใใพใ™: ```bash pip install transformers ``` ใ‚‚ใ—ใ‚ตใƒณใƒ—ใƒซใ‚’่ฉฆใ—ใŸใ„ใ€ใพใŸใฏใ‚ณใƒผใƒ‰ใฎๆœ€ๅ…ˆ็ซฏใŒๅฟ…่ฆใงใ€ๆ–ฐใ—ใ„ใƒชใƒชใƒผใ‚นใ‚’ๅพ…ใฆใชใ„ๅ ดๅˆใฏใ€[ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ‚ฝใƒผใ‚นใ‹ใ‚‰ใ‚คใƒณใ‚นใƒˆใƒผใƒซ](https://huggingface.co/docs/transformers/installation#installing-from-source)ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ### condaใซใฆ ๐Ÿค—Transformersใฏไปฅไธ‹ใฎใ‚ˆใ†ใซcondaใ‚’ไฝฟใฃใฆ่จญ็ฝฎใ™ใ‚‹ใ“ใจใŒใงใใพใ™: ```shell script conda install conda-forge::transformers ``` > **_ๆณจๆ„:_** `huggingface` ใƒใƒฃใƒณใƒใƒซใ‹ใ‚‰ `transformers` 
ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใ“ใจใฏ้žๆŽจๅฅจใงใ™ใ€‚ Flaxใ€PyTorchใ€TensorFlowใ‚’condaใงใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ๆ–นๆณ•ใฏใ€ใใ‚Œใžใ‚Œใฎใ‚คใƒณใ‚นใƒˆใƒผใƒซใƒšใƒผใ‚ธใซๅพ“ใฃใฆใใ ใ•ใ„ใ€‚ > **_ๆณจๆ„:_** Windowsใงใฏใ€ใ‚ญใƒฃใƒƒใ‚ทใƒฅใฎๆฉๆตใ‚’ๅ—ใ‘ใ‚‹ใŸใ‚ใซใ€ใƒ‡ใƒ™ใƒญใƒƒใƒ‘ใƒผใƒขใƒผใƒ‰ใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใ‚ˆใ†ไฟƒใ•ใ‚Œใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใฎใ‚ˆใ†ใชๅ ดๅˆใฏใ€[ใ“ใฎissue](https://github.com/huggingface/huggingface_hub/issues/1062)ใงใŠ็Ÿฅใ‚‰ใ›ใใ ใ•ใ„ใ€‚ ## ใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃ ๐Ÿค—TransformersใŒๆไพ›ใ™ใ‚‹ **[ๅ…จใƒขใƒ‡ใƒซใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ](https://huggingface.co/models)** ใฏใ€[ใƒฆใƒผใ‚ถใƒผ](https://huggingface.co/users)ใ‚„[็ต„็น”](https://huggingface.co/organizations)ใซใ‚ˆใฃใฆ็›ดๆŽฅใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ•ใ‚Œใ‚‹huggingface.co [model hub](https://huggingface.co)ใ‹ใ‚‰ใ‚ทใƒผใƒ ใƒฌใ‚นใซ็ตฑๅˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ็พๅœจใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆๆ•ฐ: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค—Transformersใฏ็พๅœจใ€ไปฅไธ‹ใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๆไพ›ใ—ใฆใ„ใพใ™: ใใ‚Œใžใ‚Œใฎใƒใ‚คใƒฌใƒ™ใƒซใช่ฆ็ด„ใฏ[ใ“ใกใ‚‰](https://huggingface.co/docs/transformers/model_summary)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„. ๅ„ใƒขใƒ‡ใƒซใŒFlaxใ€PyTorchใ€TensorFlowใงๅฎŸ่ฃ…ใ•ใ‚Œใฆใ„ใ‚‹ใ‹ใ€๐Ÿค—Tokenizersใƒฉใ‚คใƒ–ใƒฉใƒชใซๆ”ฏใˆใ‚‰ใ‚ŒใŸ้–ข้€ฃใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ๆŒใฃใฆใ„ใ‚‹ใ‹ใฏใ€[ใ“ใฎ่กจ](https://huggingface.co/docs/transformers/index#supported-frameworks)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใ‚Œใ‚‰ใฎๅฎŸ่ฃ…ใฏใ„ใใคใ‹ใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใƒ†ใ‚นใƒˆใ•ใ‚ŒใฆใŠใ‚Š(ใ‚ตใƒณใƒ—ใƒซใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ๅ‚็…ง)ใ€ใ‚ชใƒชใ‚ธใƒŠใƒซใฎๅฎŸ่ฃ…ใฎๆ€ง่ƒฝใจไธ€่‡ดใ™ใ‚‹ใฏใšใงใ‚ใ‚‹ใ€‚ๆ€ง่ƒฝใฎ่ฉณ็ดฐใฏ[documentation](https://github.com/huggingface/transformers/tree/main/examples)ใฎExamplesใ‚ปใ‚ฏใ‚ทใƒงใƒณใง่ฆ‹ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ## ใ•ใ‚‰ใซ่ฉณใ—ใ | ใ‚ปใ‚ฏใ‚ทใƒงใƒณ | ๆฆ‚่ฆ | |-|-| | [ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://huggingface.co/docs/transformers/) | ๅฎŒๅ…จใชAPIใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใจใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซ | | [ใ‚ฟใ‚นใ‚ฏๆฆ‚่ฆ](https://huggingface.co/docs/transformers/task_summary) | ๐Ÿค—TransformersใŒใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใ‚ฟใ‚นใ‚ฏ | | [ๅ‰ๅ‡ฆ็†ใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซ](https://huggingface.co/docs/transformers/preprocessing) | ใƒขใƒ‡ใƒซ็”จใฎใƒ‡ใƒผใ‚ฟใ‚’ๆบ–ๅ‚™ใ™ใ‚‹ใŸใ‚ใซ`Tokenizer`ใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จ | | [ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใจๅพฎ่ชฟๆ•ด](https://huggingface.co/docs/transformers/training) | PyTorch/TensorFlowใฎๅญฆ็ฟ’ใƒซใƒผใƒ—ใจ`Trainer`APIใง๐Ÿค—TransformersใŒๆไพ›ใ™ใ‚‹ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จ | | [ใ‚ฏใ‚คใƒƒใ‚ฏใƒ„ใ‚ขใƒผ: ๅพฎ่ชฟๆ•ด/ไฝฟ็”จๆ–นๆณ•ใ‚นใ‚ฏใƒชใƒ—ใƒˆ](https://github.com/huggingface/transformers/tree/main/examples) | ๆง˜ใ€…ใชใ‚ฟใ‚นใ‚ฏใงใƒขใƒ‡ใƒซใฎๅพฎ่ชฟๆ•ดใ‚’่กŒใ†ใŸใ‚ใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆไพ‹ | | [ใƒขใƒ‡ใƒซใฎๅ…ฑๆœ‰ใจใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰](https://huggingface.co/docs/transformers/model_sharing) | ๅพฎ่ชฟๆ•ดใ—ใŸใƒขใƒ‡ใƒซใ‚’ใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ—ใฆใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใงๅ…ฑๆœ‰ใ™ใ‚‹ | | [ใƒžใ‚คใ‚ฐใƒฌใƒผใ‚ทใƒงใƒณ](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`ใพใŸใฏ`pytorch-pretrained-bert`ใ‹ใ‚‰๐Ÿค—Transformers ใซ็งป่กŒใ™ใ‚‹ | ## ๅผ•็”จ ๐Ÿค— ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒฉใ‚คใƒ–ใƒฉใƒชใซๅผ•็”จใงใใ‚‹[่ซ–ๆ–‡](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)ใŒๅ‡บๆฅใพใ—ใŸ: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault 
and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/README_es.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> <br> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <b>Espaรฑol</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | </p> </h4> <h3 align="center"> <p>Lo รบltimo de Machine Learning para JAX, PyTorch y TensorFlow</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค— Transformers aporta miles de modelos preentrenados para realizar tareas en diferentes modalidades como texto, visiรณn, y audio. 
Estos modelos pueden ser aplicados en: * ๐Ÿ“ Texto, para tareas como clasificaciรณn de texto, extracciรณn de informaciรณn, responder preguntas, resumir, traducir, generaciรณn de texto, en mรกs de 100 idiomas. * ๐Ÿ–ผ๏ธ Imรกgenes, para tareas como clasificaciรณn de imรกgenes, detecciรณn de objetos, y segmentaciรณn. * ๐Ÿ—ฃ๏ธ Audio, para tareas como reconocimiento de voz y clasificaciรณn de audio. Los modelos de Transformer tambiรฉn pueden realizar tareas en **muchas modalidades combinadas**, como responder preguntas, reconocimiento รณptico de caracteres, extracciรณn de informaciรณn de documentos escaneados, clasificaciรณn de video, y respuesta de preguntas visuales. ๐Ÿค— Transformers aporta APIs para descargar rรกpidamente y usar estos modelos preentrenados en un texto dado, afinarlos en tus propios sets de datos y compartirlos con la comunidad en nuestro [centro de modelos](https://huggingface.co/models). Al mismo tiempo, cada mรณdulo de Python que define una arquitectura es completamente independiente y se puede modificar para permitir experimentos de investigaciรณn rรกpidos. ๐Ÿค— Transformers estรก respaldado por las tres bibliotecas de deep learning mรกs populares โ€” [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) y [TensorFlow](https://www.tensorflow.org/) โ€” con una perfecta integraciรณn entre ellas. Es sencillo entrenar sus modelos con una antes de cargarlos para la inferencia con la otra. ## Demostraciones en lรญnea Puedes probar la mayorรญa de nuestros modelos directamente en sus pรกginas desde el [centro de modelos](https://huggingface.co/models). Tambiรฉn ofrecemos [alojamiento de modelos privados, control de versiones y una API de inferencia](https://huggingface.co/pricing) para modelos pรบblicos y privados.
Aquรญ hay algunos ejemplos: En procesamiento del lenguaje natural: - [Terminaciรณn de palabras enmascaradas con BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [Reconocimiento del nombre de la entidad con Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [Generaciรณn de texto con GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+) - [Inferencia del lenguaje natural con RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [Resumen con BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [Responder a preguntas con DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [Traducciรณn con T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) En visiรณn de ordenador: - [Clasificaciรณn de imรกgenes con ViT](https://huggingface.co/google/vit-base-patch16-224) - [Detecciรณn de objetos con DETR](https://huggingface.co/facebook/detr-resnet-50) - [Segmentaciรณn semรกntica con SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) - [Segmentaciรณn panรณptica con DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic) - [Segmentaciรณn Universal con OneFormer (Segmentaciรณn Semรกntica, de Instancia y Panรณptica con un solo 
modelo)](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large) En Audio: - [Reconocimiento de voz automรกtico con Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) - [Detecciรณn de palabras clave con Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) En tareas multimodales: - [Respuesta visual a preguntas con ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) **[Escribe con Transformer](https://transformer.huggingface.co)**, construido por el equipo de Hugging Face, es la demostraciรณn oficial de las capacidades de generaciรณn de texto de este repositorio. ## Si estรก buscando soporte personalizado del equipo de Hugging Face <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## Tour rรกpido Para usar inmediatamente un modelo en una entrada determinada (texto, imagen, audio, ...), proporcionamos la API de `pipeline`. Los pipelines agrupan un modelo previamente entrenado con el preprocesamiento que se usรณ durante el entrenamiento de ese modelo. Aquรญ se explica cรณmo usar rรกpidamente un pipeline para clasificar textos positivos frente a negativos: ```python >>> from transformers import pipeline # Allocate a pipeline for sentiment-analysis >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` La segunda lรญnea de cรณdigo descarga y almacena en cachรฉ el modelo previamente entrenado que usa la canalizaciรณn, mientras que la tercera lo evalรบa en el texto dado. Aquรญ la respuesta es "positiva" con una confianza del 99,97%. Muchas tareas tienen un `pipeline` preentrenado listo para funcionar, en NLP pero tambiรฉn en visiรณn por ordenador y habla. Por ejemplo, podemos extraer fรกcilmente los objetos detectados en una imagen: ``` python >>> import requests >>> from PIL import Image >>> from transformers import pipeline # Download an image with cute cats >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" >>> image_data = requests.get(url, stream=True).raw >>> image = Image.open(image_data) # Allocate a pipeline for object detection >>> object_detector = pipeline('object_detection') >>> object_detector(image) [{'score': 0.9982201457023621, 'label': 'remote', 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, {'score': 0.9960021376609802, 'label': 'remote', 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, {'score': 0.9954745173454285, 'label': 'couch', 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, {'score': 0.9988006353378296, 'label': 'cat', 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, {'score': 0.9986783862113953, 'label': 'cat', 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] ``` Aquรญ obtenemos una lista de objetos detectados en la imagen, con un cuadro que rodea el objeto y una puntuaciรณn de confianza. 
Aquรญ estรก la imagen original a la izquierda, con las predicciones mostradas a la derecha: <h3 align="center"> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> </h3> Puedes obtener mรกs informaciรณn sobre las tareas admitidas por la API de `pipeline` en [este tutorial](https://huggingface.co/docs/transformers/task_summary). Ademรกs de `pipeline`, para descargar y usar cualquiera de los modelos previamente entrenados en su tarea dada, todo lo que necesita son tres lรญneas de cรณdigo. Aquรญ estรก la versiรณn de PyTorch: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` Y aquรญ estรก el cรณdigo equivalente para TensorFlow: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` El tokenizador es responsable de todo el preprocesamiento que espera el modelo preentrenado y se puede llamar directamente en una sola cadena (como en los ejemplos anteriores) o en una lista. Este darรก como resultado un diccionario que puedes usar en el cรณdigo descendente o simplemente pasarlo directamente a su modelo usando el operador de desempaquetado de argumento **. El modelo en sรญ es un [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) normal o un [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (dependiendo de tu backend) que puedes usar de forma habitual. [Este tutorial](https://huggingface.co/docs/transformers/training) explica cรณmo integrar un modelo de este tipo en un ciclo de entrenamiento PyTorch o TensorFlow clรกsico, o cรณmo usar nuestra API `Trainer` para ajustar rรกpidamente un modelo en un nuevo conjunto de datos. ## ยฟPor quรฉ debo usar transformers? 1. Modelos de รบltima generaciรณn fรกciles de usar: - Alto rendimiento en comprensiรณn y generaciรณn de lenguaje natural, visiรณn artificial y tareas de audio. - Baja barrera de entrada para educadores y profesionales. - Pocas abstracciones de cara al usuario con solo tres clases para aprender. - Una API unificada para usar todos nuestros modelos preentrenados. 1. Menores costes de cรณmputo, menor huella de carbono: - Los investigadores pueden compartir modelos entrenados en lugar de siempre volver a entrenar. - Los profesionales pueden reducir el tiempo de cรณmputo y los costos de producciรณn. - Docenas de arquitecturas con mรกs de 60 000 modelos preentrenados en todas las modalidades. 1. Elija el marco adecuado para cada parte de la vida รบtil de un modelo: - Entrene modelos de รบltima generaciรณn en 3 lรญneas de cรณdigo. - Mueva un solo modelo entre los marcos TF2.0/PyTorch/JAX a voluntad. - Elija sin problemas el marco adecuado para la formaciรณn, la evaluaciรณn y la producciรณn. 1. Personalice fรกcilmente un modelo o un ejemplo segรบn sus necesidades: - Proporcionamos ejemplos de cada arquitectura para reproducir los resultados publicados por sus autores originales.
- Los internos del modelo estรกn expuestos lo mรกs consistentemente posible. - Los archivos modelo se pueden usar independientemente de la biblioteca para experimentos rรกpidos. ## ยฟPor quรฉ no deberรญa usar transformers? - Esta biblioteca no es una caja de herramientas modular de bloques de construcciรณn para redes neuronales. El cรณdigo en los archivos del modelo no se refactoriza con abstracciones adicionales a propรณsito, de modo que los investigadores puedan iterar rรกpidamente en cada uno de los modelos sin sumergirse en abstracciones/archivos adicionales. - La API de entrenamiento no estรก diseรฑada para funcionar en ningรบn modelo, pero estรก optimizada para funcionar con los modelos proporcionados por la biblioteca. Para bucles genรฉricos de aprendizaje automรกtico, debe usar otra biblioteca (posiblemente, [Accelerate](https://huggingface.co/docs/accelerate)). - Si bien nos esforzamos por presentar tantos casos de uso como sea posible, los scripts en nuestra [carpeta de ejemplos](https://github.com/huggingface/transformers/tree/main/examples) son solo eso: ejemplos. Se espera que no funcionen de forma inmediata en su problema especรญfico y que deba cambiar algunas lรญneas de cรณdigo para adaptarlas a sus necesidades. ## Instalaciรณn ### Con pip Este repositorio estรก probado en Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ y TensorFlow 2.6+. Deberรญas instalar ๐Ÿค— Transformers en un [entorno virtual](https://docs.python.org/3/library/venv.html). Si no estรกs familiarizado con los entornos virtuales de Python, consulta la [guรญa de usuario](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Primero, crea un entorno virtual con la versiรณn de Python que vas a usar y actรญvalo. Luego, deberรกs instalar al menos uno entre Flax, PyTorch o TensorFlow. Por favor, ve a la [pรกgina de instalaciรณn de TensorFlow](https://www.tensorflow.org/install/), [pรกgina de instalaciรณn de PyTorch](https://pytorch.org/get-started/locally/#start-locally) y/o las pรกginas de instalaciรณn de [Flax](https://github.com/google/flax#quick-install) y [Jax](https://github.com/google/jax#installation) con respecto al comando de instalaciรณn especรญfico para tu plataforma. Cuando se ha instalado uno de esos backends, ๐Ÿค— Transformers se puede instalar usando pip de la siguiente manera: ```bash pip install transformers ``` Si deseas jugar con los ejemplos o necesitas la รบltima versiรณn del cรณdigo y no puedes esperar a una nueva versiรณn, tienes que [instalar la librerรญa de la fuente](https://huggingface.co/docs/transformers/installation#installing-from-source). ### Con conda ๐Ÿค— Transformers se puede instalar usando conda de la siguiente manera: ```shell script conda install conda-forge::transformers ``` > **_NOTA:_** Instalar `transformers` desde el canal `huggingface` estรก obsoleto. Sigue las pรกginas de instalaciรณn de Flax, PyTorch o TensorFlow para ver cรณmo instalarlos con conda. > **_NOTA:_** En Windows, es posible que se le pida que active el modo de desarrollador para beneficiarse del almacenamiento en cachรฉ. Si esta no es una opciรณn para usted, hรกganoslo saber en [esta issue](https://github.com/huggingface/huggingface_hub/issues/1062).
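Como comprobaciรณn rรกpida de que la instalaciรณn funciona (un esbozo opcional que no forma parte de la guรญa anterior; la primera ejecuciรณn descarga un modelo por defecto, por lo que requiere conexiรณn a internet), se puede ejecutar:

```python
>>> from transformers import pipeline

# Download the default sentiment-analysis model and run it on a short sentence
>>> print(pipeline('sentiment-analysis')('We are very happy to use transformers.'))
```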
## Arquitecturas modelo **[Todos los puntos de control del modelo](https://huggingface.co/models)** aportados por ๐Ÿค— Transformers estรกn perfectamente integrados desde huggingface.co [Centro de modelos](https://huggingface.co) donde son subidos directamente por los [usuarios](https://huggingface.co/users) y [organizaciones](https://huggingface.co/organizations). Nรบmero actual de puntos de control: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— Transformers actualmente proporciona las siguientes arquitecturas: ver [aquรญ](https://huggingface.co/docs/transformers/model_summary) para un resumen de alto nivel de cada uno de ellas. Para comprobar si cada modelo tiene una implementaciรณn en Flax, PyTorch o TensorFlow, o tiene un tokenizador asociado respaldado por la librerรญa ๐Ÿค— Tokenizers, ve a [esta tabla](https://huggingface.co/docs/transformers/index#supported-frameworks). Estas implementaciones se han probado en varios conjuntos de datos (consulte los scripts de ejemplo) y deberรญan coincidir con el rendimiento de las implementaciones originales. Puede encontrar mรกs detalles sobre el rendimiento en la secciรณn Examples de la [documentaciรณn](https://github.com/huggingface/transformers/tree/main/examples). ## Aprender mรกs | Secciรณn | Descripciรณn | |-|-| | [Documentaciรณn](https://huggingface.co/docs/transformers/) | Toda la documentaciรณn de la API y tutoriales | | [Resumen de tareas](https://huggingface.co/docs/transformers/task_summary) | Tareas soportadas ๐Ÿค— Transformers | | [Tutorial de preprocesamiento](https://huggingface.co/docs/transformers/preprocessing) | Usando la clase `Tokenizer` para preparar datos para los modelos | | [Entrenamiento y puesta a punto](https://huggingface.co/docs/transformers/training) | Usando los modelos aportados por ๐Ÿค— Transformers en un bucle de entreno de PyTorch/TensorFlow y la API de `Trainer` | | [Recorrido rรกpido: secuencias de comandos de ajuste/uso](https://github.com/huggingface/transformers/tree/main/examples) | Scripts de ejemplo para ajustar modelos en una amplia gama de tareas | | [Compartir y subir modelos](https://huggingface.co/docs/transformers/model_sharing) | Carga y comparte tus modelos perfeccionados con la comunidad | | [Migraciรณn](https://huggingface.co/docs/transformers/migration) | Migra a ๐Ÿค— Transformers desde `pytorch-transformers` o `pytorch-pretrained-bert` | ## Citaciรณn Ahora nosotros tenemos un [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) que puedes citar para la librerรญa de ๐Ÿค— Transformers: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/README_te.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg"> <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg"> <img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;"> </picture> <br/> <br/> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <b>เฐคเฑ†เฐฒเฑเฐ—เฑ</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | </p> </h4> <h3 align="center"> <p>JAX, PyTorch เฐฎเฐฐเฐฟเฐฏเฑ TensorFlow เฐ•เฑ‹เฐธเฐ‚ เฐ…เฐคเฑเฐฏเฐพเฐงเฑเฐจเฐฟเฐ• 
เฐฏเฐ‚เฐคเฑเฐฐ เฐ…เฐญเฑเฐฏเฐพเฐธเฐ‚</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑ, เฐตเฐฟเฐœเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐ†เฐกเฐฟเฐฏเฑ‹ เฐตเฐ‚เฐŸเฐฟ เฐตเฐฟเฐญเฐฟเฐจเฑเฐจ เฐชเฐฆเฑเฐงเฐคเฑเฐฒเฐชเฑˆ เฐŸเฐพเฐธเฑเฐ•เฑโ€Œเฐฒเฐจเฑ เฐจเฐฟเฐฐเฑเฐตเฐนเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐตเฑ‡เฐฒเฐพเฐฆเฐฟ เฐฎเฑเฐ‚เฐฆเฑเฐ—เฐพ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐจ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐจเฑ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฐพเฐฏเฐฟ. เฐˆ เฐจเฐฎเฑ‚เฐจเฐพเฐฒเฑ เฐตเฐฐเฑเฐคเฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ: * ๐Ÿ“ เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑ, 100เฐ•เฐฟ เฐชเฑˆเฐ—เฐพ เฐญเฐพเฐทเฐฒเฑเฐฒเฑ‹ เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑ เฐ•เฑเฐฒเฐพเฐธเฐฟเฐซเฐฟเฐ•เฑ‡เฐทเฐจเฑ, เฐ‡เฐจเฑเฐซเฐฐเฑเฐฎเฑ‡เฐทเฐจเฑ เฐŽเฐ•เฑเฐธเฑโ€ŒเฐŸเฑเฐฐเฐพเฐ•เฑเฐทเฐจเฑ, เฐชเฑเฐฐเฐถเฑเฐจเฐฒเฐ•เฑ เฐธเฐฎเฐพเฐงเฐพเฐจเฐพเฐฒเฑ, เฐธเฐพเฐฐเฐพเฐ‚เฐถเฐ‚, เฐ…เฐจเฑเฐตเฐพเฐฆเฐ‚, เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑ เฐœเฐจเฐฐเฑ‡เฐทเฐจเฑ เฐตเฐ‚เฐŸเฐฟ เฐชเฐจเฑเฐฒ เฐ•เฑ‹เฐธเฐ‚. * ๐Ÿ–ผ๏ธ เฐ‡เฐฎเฑ‡เฐœเฑโ€Œเฐฒเฑ, เฐ‡เฐฎเฑ‡เฐœเฑ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ, เฐ†เฐฌเฑเฐœเฑ†เฐ•เฑเฐŸเฑ เฐกเฐฟเฐŸเฑ†เฐ•เฑเฐทเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐธเฑ†เฐ—เฑเฐฎเฑ†เฐ‚เฐŸเฑ‡เฐทเฐจเฑ เฐตเฐ‚เฐŸเฐฟ เฐชเฐจเฑเฐฒ เฐ•เฑ‹เฐธเฐ‚. * ๐Ÿ—ฃ๏ธ เฐ†เฐกเฐฟเฐฏเฑ‹, เฐธเฑเฐชเฑ€เฐšเฑ เฐฐเฐฟเฐ•เฐ—เฑเฐจเฐฟเฐทเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐ†เฐกเฐฟเฐฏเฑ‹ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ เฐตเฐ‚เฐŸเฐฟ เฐชเฐจเฑเฐฒ เฐ•เฑ‹เฐธเฐ‚. เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฑ เฐŸเฑ‡เฐฌเฑเฐฒเฑ เฐ•เฑเฐตเฐถเฑเฐšเฐจเฑ เฐ†เฐจเฑเฐธเฐฐเฑ เฐšเฑ‡เฐฏเฐกเฐ‚, เฐ†เฐชเฑเฐŸเฐฟเฐ•เฐฒเฑ เฐ•เฑเฐฏเฐพเฐฐเฑ†เฐ•เฑเฐŸเฐฐเฑ เฐฐเฐฟเฐ•เฐ—เฑเฐจเฐฟเฐทเฐจเฑ, เฐธเฑเฐ•เฐพเฐจเฑ เฐšเฑ‡เฐธเฐฟเฐจ เฐกเฐพเฐ•เฑเฐฏเฑเฐฎเฑ†เฐ‚เฐŸเฑโ€Œเฐฒ เฐจเฑเฐ‚เฐกเฐฟ เฐ‡เฐจเฑเฐซเฐฐเฑเฐฎเฑ‡เฐทเฐจเฑ เฐŽเฐ•เฑเฐธเฑโ€ŒเฐŸเฑเฐฐเฐพเฐ•เฑเฐทเฐจเฑ, เฐตเฑ€เฐกเฐฟเฐฏเฑ‹ เฐ•เฑเฐฒเฐพเฐธเฐฟเฐซเฐฟเฐ•เฑ‡เฐทเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐฟเฐœเฑเฐตเฐฒเฑ เฐ•เฑเฐตเฐถเฑเฐšเฐจเฑ เฐ†เฐจเฑเฐธเฐฐเฑ เฐšเฑ‡เฐฏเฐกเฐ‚ เฐตเฐ‚เฐŸเฐฟ **เฐ…เฐจเฑ‡เฐ• เฐชเฐฆเฑเฐงเฐคเฑเฐฒเฐคเฑ‹ เฐ•เฐฒเฐฟเฐชเฐฟ** เฐชเฐจเฑเฐฒเฐจเฑ เฐ•เฑ‚เฐกเฐพ เฐšเฑ‡เฐฏเฐ—เฐฒเฐตเฑ. ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐ…เฐ‚เฐฆเฐฟเฐ‚เฐšเฐฟเฐจ เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑโ€Œเฐฒเฑ‹ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐจเฑ เฐคเฑเฐตเฐฐเฐ—เฐพ เฐกเฑŒเฐจเฑโ€Œเฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ, เฐตเฐพเฐŸเฐฟเฐจเฐฟ เฐฎเฑ€ เฐธเฑเฐตเฐ‚เฐค เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐฒเฐฒเฑ‹ เฐซเฑˆเฐจเฑ-เฐŸเฑเฐฏเฑ‚เฐจเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐพเฐŸเฐฟเฐจเฐฟ เฐฎเฐพ [เฐฎเฑ‹เฐกเฐฒเฑ เฐนเฐฌเฑ](https://huggingface.co/models)เฐฒเฑ‹ เฐธเฐ‚เฐ˜เฐ‚เฐคเฑ‹ เฐญเฐพเฐ—เฐธเฑเฐตเฐพเฐฎเฑเฐฏเฐ‚ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ API เฐฒเฐจเฑ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. เฐ…เฐฆเฑ‡ เฐธเฐฎเฐฏเฐ‚เฐฒเฑ‹, เฐ†เฐฐเฑเฐ•เฐฟเฐŸเฑ†เฐ•เฑเฐšเฐฐเฑโ€Œเฐจเฐฟ เฐจเฐฟเฐฐเฑเฐตเฐšเฐฟเฐ‚เฐšเฑ‡ เฐชเฑเฐฐเฐคเฐฟ เฐชเฑˆเฐฅเฐพเฐจเฑ เฐฎเฐพเฐกเฑเฐฏเฑ‚เฐฒเฑ เฐชเฑ‚เฐฐเฑเฐคเฐฟเฐ—เฐพ เฐธเฑเฐตเฐคเฐ‚เฐคเฑเฐฐเฐ‚เฐ—เฐพ เฐ‰เฐ‚เฐŸเฑเฐ‚เฐฆเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐคเฑเฐตเฐฐเฐฟเฐค เฐชเฐฐเฐฟเฐถเฑ‹เฐงเฐจ เฐชเฑเฐฐเฐฏเฑ‹เฐ—เฐพเฐฒเฐจเฑ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐธเฐตเฐฐเฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ. 
๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒเฐ•เฑ เฐฎเฑ‚เฐกเฑ เฐ…เฐคเฑเฐฏเฐ‚เฐค เฐชเฑเฐฐเฐœเฐพเฐฆเฐฐเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐจ เฐกเฑ€เฐชเฑ เฐฒเฑ†เฐฐเฑเฐจเฐฟเฐ‚เฐ—เฑ เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€เฐฒเฑ เฐ‰เฐจเฑเฐจเฐพเฐฏเฐฟ โ€” [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) เฐฎเฐฐเฐฟเฐฏเฑ [TensorFlow](https://www.tensorflow.org/) โ€” เฐตเฐพเฐŸเฐฟ เฐฎเฐงเฑเฐฏ เฐ…เฐคเฑเฐ•เฑเฐฒเฑ เฐฒเฑ‡เฐจเฐฟ เฐเฐ•เฑ€เฐ•เฐฐเฐฃเฐคเฑ‹. เฐฎเฑ€ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐจเฑ เฐ’เฐ•เฐฆเฐพเฐจเฐฟเฐคเฑ‹ เฐฎเฐฐเฑŠเฐ•เฐฆเฐพเฐจเฐฟเฐคเฑ‹ เฐ…เฐจเฑเฐฎเฐฟเฐคเฐฟ เฐ•เฑ‹เฐธเฐ‚ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐธเฑ‡ เฐฎเฑเฐ‚เฐฆเฑ เฐตเฐพเฐŸเฐฟเฐ•เฐฟ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐ‡เฐตเฑเฐตเฐกเฐ‚ เฐšเฐพเฐฒเฐพ เฐธเฑเฐฒเฐญเฐ‚. ## เฐ†เฐจเฑโ€Œเฐฒเฑˆเฐจเฑ เฐกเฑ†เฐฎเฑ‹เฐฒเฑ เฐฎเฑ€เฐฐเฑ [เฐฎเฑ‹เฐกเฐฒเฑ เฐนเฐฌเฑ](https://huggingface.co/models) เฐจเฑเฐ‚เฐกเฐฟ เฐฎเฐพ เฐฎเฑ‹เฐกเฐณเฑเฐฒเฐฒเฑ‹ เฐšเฐพเฐฒเฐพ เฐตเฐฐเฐ•เฑ เฐตเฐพเฐŸเฐฟ เฐชเฑ‡เฐœเฑ€เฐฒเฐฒเฑ‹ เฐจเฑ‡เฐฐเฑเฐ—เฐพ เฐชเฐฐเฑ€เฐ•เฑเฐทเฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ. เฐฎเฑ‡เฐฎเฑ เฐชเฐฌเฑเฐฒเฐฟเฐ•เฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐชเฑเฐฐเฑˆเฐตเฑ‡เฐŸเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒ เฐ•เฑ‹เฐธเฐ‚ [เฐชเฑเฐฐเฑˆเฐตเฑ‡เฐŸเฑ เฐฎเฑ‹เฐกเฐฒเฑ เฐนเฑ‹เฐธเฑเฐŸเฐฟเฐ‚เฐ—เฑ, เฐธเฐ‚เฐธเฑเฐ•เฐฐเฐฃ & เฐ…เฐจเฑเฐฎเฐฟเฐคเฐฟ API](https://huggingface.co/pricing)เฐจเฐฟ เฐ•เฑ‚เฐกเฐพ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฐพเฐฎเฑ. เฐ‡เฐ•เฑเฐ•เฐก เฐ•เฑŠเฐจเฑเฐจเฐฟ เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐฒเฑ เฐ‰เฐจเฑเฐจเฐพเฐฏเฐฟ: เฐธเฐนเฐœ เฐญเฐพเฐทเฐพ เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฟเฐ‚เฐ—เฑโ€Œเฐฒเฑ‹: - [BERT เฐคเฑ‹ เฐฎเฐพเฐธเฑเฐ•เฑโ€Œเฐกเฑ เฐตเฐฐเฑเฐกเฑ เฐ•เฐ‚เฐชเฑเฐฒเฑ€เฐทเฐจเฑ](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [Electra เฐคเฑ‹ เฐชเฑ‡เฐฐเฑ เฐŽเฐ‚เฐŸเฐฟเฐŸเฑ€ เฐ—เฑเฐฐเฑเฐคเฐฟเฐ‚เฐชเฑ](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [GPT-2 เฐคเฑ‹ เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑ เฐœเฐจเฐฐเฑ‡เฐทเฐจเฑ](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+) - [RoBERTa เฐคเฑ‹ เฐธเฐนเฐœ เฐญเฐพเฐทเฐพ เฐ…เฐจเฑเฐฎเฐฟเฐคเฐฟ](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+Lost.+Nobody+lost+any+animal) - [BART เฐคเฑ‹ เฐธเฐพเฐฐเฐพเฐ‚เฐถเฐ‚](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [DistilBERT เฐคเฑ‹ เฐชเฑเฐฐเฐถเฑเฐจ 
เฐธเฐฎเฐพเฐงเฐพเฐจเฐ‚](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [T5 เฐคเฑ‹ เฐ…เฐจเฑเฐตเฐพเฐฆเฐ‚](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) เฐ•เฐ‚เฐชเฑเฐฏเฑ‚เฐŸเฐฐเฑ เฐฆเฑƒเฐทเฑเฐŸเฐฟเฐฒเฑ‹: - [VIT เฐคเฑ‹ เฐšเฐฟเฐคเฑเฐฐ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ](https://huggingface.co/google/vit-base-patch16-224) - [DETR เฐคเฑ‹ เฐ†เฐฌเฑเฐœเฑ†เฐ•เฑเฐŸเฑ เฐกเฐฟเฐŸเฑ†เฐ•เฑเฐทเฐจเฑ](https://huggingface.co/facebook/detr-resnet-50) - [SegFormer เฐคเฑ‹ เฐธเฑ†เฐฎเฐพเฐ‚เฐŸเฐฟเฐ•เฑ เฐธเฑ†เฐ—เฑเฐฎเฑ†เฐ‚เฐŸเฑ‡เฐทเฐจเฑ](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) - [MaskFormer เฐคเฑ‹ เฐชเฐพเฐจเฑ‹เฐชเฑเฐŸเฐฟเฐ•เฑ เฐธเฑ†เฐ—เฑเฐฎเฑ†เฐ‚เฐŸเฑ‡เฐทเฐจเฑ](https://huggingface.co/facebook/maskformer-swin-small-coco) - [DPT เฐคเฑ‹ เฐฒเฑ‹เฐคเฑ เฐ…เฐ‚เฐšเฐจเฐพ](https://huggingface.co/docs/transformers/model_doc/dpt) - [VideoMAE เฐคเฑ‹ เฐตเฑ€เฐกเฐฟเฐฏเฑ‹ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ](https://huggingface.co/docs/transformers/model_doc/videomae) - [OneFormer เฐคเฑ‹ เฐฏเฑ‚เฐจเฐฟเฐตเฐฐเฑเฐธเฐฒเฑ เฐธเฑ†เฐ—เฑเฐฎเฑ†เฐ‚เฐŸเฑ‡เฐทเฐจเฑ](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large) เฐ†เฐกเฐฟเฐฏเฑ‹เฐฒเฑ‹: - [Wav2Vec2 เฐคเฑ‹ เฐ†เฐŸเฑ‹เฐฎเฑ‡เฐŸเฐฟเฐ•เฑ เฐธเฑเฐชเฑ€เฐšเฑ เฐฐเฐฟเฐ•เฐ—เฑเฐจเฐฟเฐทเฐจเฑ](https://huggingface.co/facebook/wav2vec2-base-960h) - [Wav2Vec2 เฐคเฑ‹ เฐ•เฑ€เฐตเฐฐเฑเฐกเฑ เฐธเฑเฐชเฐพเฐŸเฐฟเฐ‚เฐ—เฑ](https://huggingface.co/superb/wav2vec2-base-superb-ks) - [เฐ†เฐกเฐฟเฐฏเฑ‹ เฐธเฑเฐชเฑ†เฐ•เฑเฐŸเฑเฐฐเฑ‹เฐ—เฑเฐฐเฐพเฐฎเฑ เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐคเฑ‹ เฐ†เฐกเฐฟเฐฏเฑ‹ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) เฐฎเฐฒเฑเฐŸเฑ€เฐฎเฑ‹เฐกเฐฒเฑ เฐŸเฐพเฐธเฑเฐ•เฑโ€Œเฐฒเฐฒเฑ‹: - [TAPAS เฐคเฑ‹ เฐŸเฑ‡เฐฌเฑเฐฒเฑ เฐชเฑเฐฐเฐถเฑเฐจ เฐธเฐฎเฐพเฐงเฐพเฐจเฐพเฐฒเฑ](https://huggingface.co/google/tapas-base-finetuned-wtq) - [ViLT เฐคเฑ‹ เฐฆเฑƒเฐถเฑเฐฏเฐฎเฐพเฐจ เฐชเฑเฐฐเฐถเฑเฐจเฐ•เฑ เฐธเฐฎเฐพเฐงเฐพเฐจเฐ‚](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) - [CLIP เฐคเฑ‹ เฐœเฑ€เฐฐเฑ‹-เฐทเฐพเฐŸเฑ เฐ‡เฐฎเฑ‡เฐœเฑ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ](https://huggingface.co/openai/clip-vit-large-patch14) - [LayoutLM เฐคเฑ‹ เฐกเฐพเฐ•เฑเฐฏเฑเฐฎเฑ†เฐ‚เฐŸเฑ เฐชเฑเฐฐเฐถเฑเฐจเฐ•เฑ 
เฐธเฐฎเฐพเฐงเฐพเฐจเฐ‚](https://huggingface.co/impira/layoutlm-document-qa) - [X-CLIP เฐคเฑ‹ เฐœเฑ€เฐฐเฑ‹-เฐทเฐพเฐŸเฑ เฐตเฑ€เฐกเฐฟเฐฏเฑ‹ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ](https://huggingface.co/docs/transformers/model_doc/xclip) ## เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐฟ 100 เฐชเฑเฐฐเฐพเฐœเฑ†เฐ•เฑเฐŸเฑเฐฒเฑ เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐŸเฑ‚เฐฒเฑโ€Œเฐ•เฐฟเฐŸเฑ เฐ•เฐ‚เฐŸเฑ‡ เฐŽเฐ•เฑเฐ•เฑเฐต: เฐ‡เฐฆเฐฟ เฐฆเฐพเฐจเฐฟ เฐšเฑเฐŸเฑเฐŸเฑ‚ เฐจเฐฟเฐฐเฑเฐฎเฐฟเฐ‚เฐšเฐฟเฐจ เฐชเฑเฐฐเฐพเฐœเฑ†เฐ•เฑเฐŸเฑโ€Œเฐฒ เฐธเฐ‚เฐ˜เฐ‚ เฐฎเฐฐเฐฟเฐฏเฑ เฐนเฐ—เฑเฐ—เฐฟเฐ‚เฐ—เฑ เฐซเฑ‡เฐธเฑ เฐนเฐฌเฑ. เฐกเฑ†เฐตเฐฒเฐชเฐฐเฑโ€Œเฐฒเฑ, เฐชเฐฐเฐฟเฐถเฑ‹เฐงเฐ•เฑเฐฒเฑ, เฐตเฐฟเฐฆเฑเฐฏเฐพเฐฐเฑเฐฅเฑเฐฒเฑ, เฐชเฑเฐฐเฑŠเฐซเฑ†เฐธเฐฐเฑโ€Œเฐฒเฑ, เฐ‡เฐ‚เฐœเฐจเฑ€เฐฐเฑเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐŽเฐตเฐฐเฐฟเฐจเฑˆเฐจเฐพ เฐ…เฐจเฑเฐฎเฐคเฐฟเฐ‚เฐšเฑ‡เฐฒเฐพ เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒเฐจเฑ เฐฎเฑ‡เฐฎเฑ เฐ•เฑ‹เฐฐเฑเฐ•เฑเฐ‚เฐŸเฑเฐจเฑเฐจเฐพเฐฎเฑ เฐตเฐพเฐฐเฐฟ เฐ•เฐฒเฐฒ เฐชเฑเฐฐเฐพเฐœเฑ†เฐ•เฑเฐŸเฑเฐฒเฐจเฑ เฐจเฐฟเฐฐเฑเฐฎเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ. เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒ 100,000 เฐจเฐ•เฑเฐทเฐคเฑเฐฐเฐพเฐฒเฐจเฑ เฐœเฐฐเฑเฐชเฑเฐ•เฑ‹เฐตเฐกเฐพเฐจเฐฟเฐ•เฐฟ, เฐฎเฑ‡เฐฎเฑ เฐธเฑเฐชเฐพเฐŸเฑโ€ŒเฐฒเฑˆเฐŸเฑโ€Œเฐจเฐฟ เฐ‰เฐ‚เฐšเฐพเฐฒเฐจเฐฟ เฐจเฐฟเฐฐเฑเฐฃเฐฏเฐฟเฐ‚เฐšเฑเฐ•เฑเฐจเฑเฐจเฐพเฐฎเฑ เฐธเฐ‚เฐ˜เฐ‚, เฐฎเฐฐเฐฟเฐฏเฑ เฐฎเฑ‡เฐฎเฑ 100 เฐœเฐพเฐฌเฐฟเฐคเฐพเฐฒเฐจเฑ เฐ•เฐฒเฐฟเฐ—เฐฟ เฐ‰เฐจเฑเฐจ [awesome-transformers](./awesome-transformers.md) เฐชเฑ‡เฐœเฑ€เฐจเฐฟ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐพเฐฎเฑ. เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒ เฐชเฐฐเฐฟเฐธเฐฐเฐพเฐฒเฑเฐฒเฑ‹ เฐ…เฐฆเฑเฐญเฑเฐคเฐฎเฑˆเฐจ เฐชเฑเฐฐเฐพเฐœเฑ†เฐ•เฑเฐŸเฑเฐฒเฑ เฐจเฐฟเฐฐเฑเฐฎเฐฟเฐ‚เฐšเฐฌเฐกเฑเฐกเฐพเฐฏเฐฟ. เฐœเฐพเฐฌเฐฟเฐคเฐพเฐฒเฑ‹ เฐญเฐพเฐ—เฐฎเฐจเฐฟ เฐฎเฑ€เฐฐเฑ เฐตเฐฟเฐถเฑเฐตเฐธเฐฟเฐ‚เฐšเฑ‡ เฐชเฑเฐฐเฐพเฐœเฑ†เฐ•เฑเฐŸเฑโ€Œเฐจเฑ เฐฎเฑ€เฐฐเฑ เฐ•เฐฒเฐฟเฐ—เฐฟ เฐ‰เฐ‚เฐŸเฑ‡ เฐฒเฑ‡เฐฆเฐพ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐธเฑเฐคเฑเฐ‚เฐŸเฑ‡, เฐฆเฐฏเฐšเฑ‡เฐธเฐฟ เฐฆเฐพเฐจเฐฟเฐจเฐฟ เฐœเฑ‹เฐกเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ PRเฐจเฐฟ เฐคเฑ†เฐฐเฐตเฐ‚เฐกเฐฟ! ## เฐฎเฑ€เฐฐเฑ เฐนเฐ—เฑเฐ—เฐฟเฐ‚เฐ—เฑ เฐซเฑ‡เฐธเฑ เฐŸเฑ€เฐฎเฑ เฐจเฑเฐ‚เฐกเฐฟ เฐ…เฐจเฑเฐ•เฑ‚เฐฒ เฐฎเฐฆเฑเฐฆเฐคเฑ เฐ•เฑ‹เฐธเฐ‚ เฐšเฑ‚เฐธเฑเฐคเฑเฐจเฑเฐจเฐŸเฑเฐฒเฐฏเฐฟเฐคเฑ‡ <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## เฐคเฑเฐตเฐฐเฐฟเฐค เฐชเฐฐเฑเฐฏเฐŸเฐจ เฐ‡เฐšเฑเฐšเฐฟเฐจ เฐ‡เฐจเฑโ€ŒเฐชเฑเฐŸเฑ (เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑ, เฐ‡เฐฎเฑ‡เฐœเฑ, เฐ†เฐกเฐฟเฐฏเฑ‹, ...)เฐชเฑˆ เฐคเฐ•เฑเฐทเฐฃเฐฎเฑ‡ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ, เฐฎเฑ‡เฐฎเฑ `pipeline` API เฐจเฐฟ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฐพเฐฎเฑ. เฐชเฑˆเฐชเฑโ€Œเฐฒเฑˆเฐจเฑโ€Œเฐฒเฑ เฐ† เฐฎเฑ‹เฐกเฐฒเฑ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐธเฐฎเฐฏเฐ‚เฐฒเฑ‹ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐฟเฐจ เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฟเฐ‚เฐ—เฑโ€Œเฐคเฑ‹ เฐ•เฑ‚เฐกเฐฟเฐจ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐธเฐฎเฑ‚เฐนเฐชเฐฐเฑเฐธเฑเฐคเฐพเฐฏเฐฟ. 
เฐธเฐพเฐจเฑเฐ•เฑ‚เฐฒ เฐฎเฐฐเฐฟเฐฏเฑ เฐชเฑเฐฐเฐคเฐฟเฐ•เฑ‚เฐฒ เฐชเฐพเฐ เฐพเฐฒเฐจเฑ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐชเฑˆเฐชเฑโ€Œเฐฒเฑˆเฐจเฑโ€Œเฐจเฑ เฐคเฑเฐตเฐฐเฐ—เฐพ เฐŽเฐฒเฐพ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐพเฐฒเฑ‹ เฐ‡เฐ•เฑเฐ•เฐก เฐ‰เฐ‚เฐฆเฐฟ: ```python >>> from transformers import pipeline # Allocate a pipeline for sentiment-analysis >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` เฐฐเฑ†เฐ‚เฐกเฐต เฐฒเฑˆเฐจเฑ เฐ•เฑ‹เฐกเฑ เฐกเฑŒเฐจเฑโ€Œเฐฒเฑ‹เฐกเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐชเฑˆเฐชเฑโ€Œเฐฒเฑˆเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฑ‡ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐ•เฐพเฐทเฑ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ, เฐฎเฑ‚เฐกเฐตเฐฆเฐฟ เฐ‡เฐšเฑเฐšเฐฟเฐจ เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑโ€Œเฐชเฑˆ เฐฎเฑ‚เฐฒเฑเฐฏเฐพเฐ‚เฐ•เฐจเฐ‚ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. เฐ‡เฐ•เฑเฐ•เฐก เฐธเฐฎเฐพเฐงเฐพเฐจเฐ‚ 99.97% เฐตเฐฟเฐถเฑเฐตเฐพเฐธเฐ‚เฐคเฑ‹ "เฐชเฐพเฐœเฐฟเฐŸเฐฟเฐตเฑ". เฐšเฐพเฐฒเฐพ เฐชเฐจเฑเฐฒเฑ NLPเฐฒเฑ‹ เฐ•เฐพเฐจเฑ€ เฐ•เฐ‚เฐชเฑเฐฏเฑ‚เฐŸเฐฐเฑ เฐตเฐฟเฐœเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐธเฑเฐชเฑ€เฐšเฑโ€Œเฐฒเฑ‹ เฐ•เฑ‚เฐกเฐพ เฐฎเฑเฐ‚เฐฆเฑเฐ—เฐพ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐจ `pipeline` เฐธเฐฟเฐฆเฑเฐงเฐ‚เฐ—เฐพ เฐ‰เฐจเฑเฐจเฐพเฐฏเฐฟ. เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐ•เฑ, เฐฎเฐจเฐ‚ เฐšเฐฟเฐคเฑเฐฐเฐ‚เฐฒเฑ‹ เฐ—เฑเฐฐเฑเฐคเฐฟเฐ‚เฐšเฐฟเฐจ เฐตเฐธเฑเฐคเฑเฐตเฑเฐฒเฐจเฑ เฐธเฑเฐฒเฐญเฐ‚เฐ—เฐพ เฐธเฐ‚เฐ—เฑเฐฐเฐนเฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ: ``` python >>> import requests >>> from PIL import Image >>> from transformers import pipeline # Download an image with cute cats >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" >>> image_data = requests.get(url, stream=True).raw >>> image = Image.open(image_data) # Allocate a pipeline for object detection >>> object_detector = pipeline('object-detection') >>> object_detector(image) [{'score': 0.9982201457023621, 'label': 'remote', 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, {'score': 0.9960021376609802, 'label': 'remote', 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, {'score': 0.9954745173454285, 'label': 'couch', 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, {'score': 0.9988006353378296, 'label': 'cat', 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, {'score': 0.9986783862113953, 'label': 'cat', 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] ``` เฐ‡เฐ•เฑเฐ•เฐก เฐฎเฐจเฐ‚ เฐ†เฐฌเฑเฐœเฑ†เฐ•เฑเฐŸเฑ เฐšเฑเฐŸเฑเฐŸเฑ‚ เฐ‰เฐจเฑเฐจ เฐฌเฐพเฐ•เฑเฐธเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐ•เฐพเฐจเฑเฐซเฐฟเฐกเฑ†เฐจเฑเฐธเฑ เฐธเฑเฐ•เฑ‹เฐฐเฑโ€Œเฐคเฑ‹ เฐšเฐฟเฐคเฑเฐฐเฐ‚เฐฒเฑ‹ เฐ—เฑเฐฐเฑเฐคเฐฟเฐ‚เฐšเฐฌเฐกเฐฟเฐจ เฐตเฐธเฑเฐคเฑเฐตเฑเฐฒ เฐœเฐพเฐฌเฐฟเฐคเฐพเฐจเฑ เฐชเฑŠเฐ‚เฐฆเฑเฐคเฐพเฐฎเฑ. 
เฐ‡เฐ•เฑเฐ•เฐก เฐŽเฐกเฐฎเฐตเฑˆเฐชเฑเฐจ เฐ‰เฐจเฑเฐจ เฐ…เฐธเฐฒเฑ เฐšเฐฟเฐคเฑเฐฐเฐ‚, เฐ•เฑเฐกเฐฟเฐตเฑˆเฐชเฑเฐจ เฐ…เฐ‚เฐšเฐจเฐพเฐฒเฑ เฐชเฑเฐฐเฐฆเฐฐเฑเฐถเฐฟเฐ‚เฐšเฐฌเฐกเฐคเฐพเฐฏเฐฟ: <h3 align="center"> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> </h3> เฐฎเฑ€เฐฐเฑ [เฐˆ เฐŸเฑเฐฏเฑเฐŸเฑ‹เฐฐเฐฟเฐฏเฐฒเฑ](https://huggingface.co/docs/transformers/task_summary)เฐฒเฑ‹ `pipeline` API เฐฆเฑเฐตเฐพเฐฐเฐพ เฐธเฐชเฑ‹เฐฐเฑเฐŸเฑ เฐšเฑ‡เฐธเฑ‡ เฐŸเฐพเฐธเฑเฐ•เฑโ€Œเฐฒ เฐ—เฑเฐฐเฐฟเฐ‚เฐšเฐฟ เฐฎเฐฐเฐฟเฐ‚เฐค เฐคเฑ†เฐฒเฑเฐธเฑเฐ•เฑ‹เฐตเฐšเฑเฐšเฑ. `pipeline`เฐคเฑ‹ เฐชเฐพเฐŸเฑ, เฐฎเฑ€เฐฐเฑ เฐ‡เฐšเฑเฐšเฐฟเฐจ เฐŸเฐพเฐธเฑเฐ•เฑโ€Œเฐฒเฑ‹ เฐเฐฆเฑˆเฐจเฐพ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐจเฑ เฐกเฑŒเฐจเฑโ€Œเฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ, เฐฆเฑ€เฐจเฐฟเฐ•เฐฟ เฐฎเฑ‚เฐกเฑ เฐฒเฑˆเฐจเฑเฐฒ เฐ•เฑ‹เฐกเฑ เฐธเฐฐเฐฟเฐชเฑ‹เฐคเฑเฐ‚เฐฆเฐฟ. เฐ‡เฐ•เฑเฐ•เฐก PyTorch เฐตเฑ†เฐฐเฑเฐทเฐจเฑ เฐ‰เฐ‚เฐฆเฐฟ: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` เฐฎเฐฐเฐฟเฐฏเฑ TensorFlow เฐ•เฐฟ เฐธเฐฎเฐพเฐจเฐฎเฑˆเฐจ เฐ•เฑ‹เฐกเฑ เฐ‡เฐ•เฑเฐ•เฐก เฐ‰เฐ‚เฐฆเฐฟ: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` เฐชเฑเฐฐเฐฟเฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑ เฐ†เฐถเฐฟเฐ‚เฐšเฑ‡ เฐ…เฐจเฑเฐจเฐฟ เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฟเฐ‚เฐ—เฑโ€Œเฐฒเฐ•เฑ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑ เฐฌเฐพเฐงเฑเฐฏเฐค เฐตเฐนเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐจเฑ‡เฐฐเฑเฐ—เฐพ เฐ’เฐ•เฑ‡ เฐธเฑเฐŸเฑเฐฐเฐฟเฐ‚เฐ—เฑ (เฐชเฑˆ เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐฒเฐฒเฑ‹ เฐตเฐฒเฑ†) เฐฒเฑ‡เฐฆเฐพ เฐœเฐพเฐฌเฐฟเฐคเฐพเฐชเฑˆ เฐ•เฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ. เฐ‡เฐฆเฐฟ เฐฎเฑ€เฐฐเฑ เฐกเฑŒเฐจเฑโ€ŒเฐธเฑเฐŸเฑเฐฐเฑ€เฐฎเฑ เฐ•เฑ‹เฐกเฑโ€Œเฐฒเฑ‹ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐ—เฐฒ เฐจเฐฟเฐ˜เฐ‚เฐŸเฑเฐตเฑเฐจเฐฟ เฐ…เฐตเฑเฐŸเฑโ€ŒเฐชเฑเฐŸเฑ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ เฐฒเฑ‡เฐฆเฐพ ** เฐ†เฐฐเฑเฐ—เฑเฐฏเฑเฐฎเฑ†เฐ‚เฐŸเฑ เฐ…เฐจเฑโ€Œเฐชเฑเฐฏเฐพเฐ•เฐฟเฐ‚เฐ—เฑ เฐ†เฐชเฐฐเฑ‡เฐŸเฐฐเฑโ€Œเฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐฟ เฐจเฑ‡เฐฐเฑเฐ—เฐพ เฐฎเฑ€ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐ•เฐฟ เฐชเฐ‚เฐชเฑเฐคเฑเฐ‚เฐฆเฐฟ. เฐฎเฑ‹เฐกเฐฒเฑ เฐ•เฑ‚เฐกเฐพ เฐธเฐพเฐงเฐพเฐฐเฐฃ [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) เฐฒเฑ‡เฐฆเฐพ [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (เฐฎเฑ€ เฐฌเฑเฐฏเฐพเฐ•เฑ†เฐ‚เฐกเฑโ€Œเฐจเฐฟ เฐฌเฐŸเฑเฐŸเฐฟ) เฐฎเฑ€เฐฐเฑ เฐฎเฐพเฐฎเฑ‚เฐฒเฑเฐ—เฐพ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ. 
[เฐˆ เฐŸเฑเฐฏเฑเฐŸเฑ‹เฐฐเฐฟเฐฏเฐฒเฑ](https://huggingface.co/docs/transformers/training) เฐ…เฐŸเฑเฐตเฐ‚เฐŸเฐฟ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฐฟ เฐ•เฑเฐฒเฐพเฐธเฐฟเฐ•เฑ PyTorch เฐฒเฑ‡เฐฆเฐพ TensorFlow เฐŸเฑเฐฐเฑˆเฐจเฐฟเฐ‚เฐ—เฑ เฐฒเฑ‚เฐชเฑโ€Œเฐฒเฑ‹ เฐŽเฐฒเฐพ เฐ‡เฐ‚เฐŸเฐฟเฐ—เฑเฐฐเฑ‡เฐŸเฑ เฐšเฑ‡เฐฏเฐพเฐฒเฑ‹ เฐฒเฑ‡เฐฆเฐพ เฐฎเฐพ `Trainer` API เฐจเฐฟ เฐŽเฐฒเฐพ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐพเฐฒเฑ‹ เฐตเฐฟเฐตเฐฐเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ เฐ•เฑŠเฐคเฑเฐค เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑ. ## เฐจเฑ‡เฐจเฑ เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒเฐจเฑ เฐŽเฐ‚เฐฆเฑเฐ•เฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐพเฐฒเฐฟ? 1. เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐธเฑเฐฒเฐญเฐฎเฑˆเฐจ เฐธเฑเฐŸเฑ‡เฐŸเฑ เฐ†เฐซเฑ เฐฆเฐฟ เฐ†เฐฐเฑเฐŸเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฑ: - เฐธเฐนเฐœ เฐญเฐพเฐทเฐพ เฐ…เฐตเฐ—เฐพเฐนเฐจ & เฐ‰เฐคเฑเฐชเฐคเฑเฐคเฐฟ, เฐ•เฐ‚เฐชเฑเฐฏเฑ‚เฐŸเฐฐเฑ เฐฆเฑƒเฐทเฑเฐŸเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ†เฐกเฐฟเฐฏเฑ‹ เฐชเฐจเฑเฐฒเฐชเฑˆ เฐ…เฐงเฐฟเฐ• เฐชเฐจเฐฟเฐคเฑ€เฐฐเฑ. - เฐตเฐฟเฐฆเฑเฐฏเฐพเฐตเฑ‡เฐคเฑเฐคเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐ…เฐญเฑเฐฏเฐพเฐธเฐ•เฑเฐฒ เฐชเฑเฐฐเฐตเฑ‡เฐถเฐพเฐจเฐฟเฐ•เฐฟ เฐคเฐ•เฑเฐ•เฑเฐต เฐ…เฐตเฐฐเฑ‹เฐงเฐ‚. - เฐคเฑ†เฐฒเฑเฐธเฑเฐ•เฑ‹เฐตเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐ•เฑ‡เฐตเฐฒเฐ‚ เฐฎเฑ‚เฐกเฑ เฐคเฐฐเฐ—เฐคเฑเฐฒเฐคเฑ‹ เฐ•เฑŠเฐจเฑเฐจเฐฟ เฐตเฐฟเฐจเฐฟเฐฏเฑ‹เฐ—เฐฆเฐพเฐฐเฑ-เฐฎเฑเฐ– เฐธเฐ‚เฐ—เฑเฐฐเฐนเฐฃเฐฒเฑ. - เฐฎเฐพ เฐ…เฐจเฑเฐจเฐฟ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐ‚ เฐ•เฑ‹เฐธเฐ‚ เฐเฐ•เฑ€เฐ•เฑƒเฐค API. 2. เฐคเฐ•เฑเฐ•เฑเฐต เฐ—เฐฃเฐจ เฐ–เฐฐเฑเฐšเฑเฐฒเฑ, เฐšเฐฟเฐจเฑเฐจ เฐ•เฐพเฐฐเฑเฐฌเฐจเฑ เฐชเฐพเฐฆเฐฎเฑเฐฆเฑเฐฐ: - เฐชเฐฐเฐฟเฐถเฑ‹เฐงเฐ•เฑเฐฒเฑ เฐŽเฐฒเฑเฐฒเฐชเฑเฐชเฑเฐกเฑ‚ เฐฎเฐณเฑเฐฒเฑ€ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฑ‡ เฐฌเฐฆเฑเฐฒเฑ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐจ เฐจเฐฎเฑ‚เฐจเฐพเฐฒเฐจเฑ เฐชเฐ‚เฐšเฑเฐ•เฑ‹เฐตเฐšเฑเฐšเฑ. - เฐ…เฐญเฑเฐฏเฐพเฐธเฐ•เฑเฐฒเฑ เฐ—เฐฃเฐจ เฐธเฐฎเฐฏเฐพเฐจเฑเฐจเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ‰เฐคเฑเฐชเฐคเฑเฐคเฐฟ เฐ–เฐฐเฑเฐšเฑเฐฒเฐจเฑ เฐคเฐ—เฑเฐ—เฐฟเฐ‚เฐšเฐ—เฐฒเฐฐเฑ. - เฐ…เฐจเฑเฐจเฐฟ เฐชเฐฆเฑเฐงเฐคเฑเฐฒเฑเฐฒเฑ‹ 60,000 เฐ•เฐ‚เฐŸเฑ‡ เฐŽเฐ•เฑเฐ•เฑเฐต เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐคเฑ‹ เฐกเฐœเฐจเฑเฐฒ เฐ•เฑŠเฐฆเฑเฐฆเฑ€ เฐ†เฐฐเฑเฐ•เฐฟเฐŸเฑ†เฐ•เฑเฐšเฐฐเฑโ€Œเฐฒเฑ. 3. เฐฎเฑ‹เฐกเฐฒเฑ เฐœเฑ€เฐตเฐฟเฐคเฐ•เฐพเฐฒเฐ‚เฐฒเฑ‹ เฐชเฑเฐฐเฐคเฐฟ เฐญเฐพเฐ—เฐพเฐจเฐฟเฐ•เฐฟ เฐธเฐฐเฑˆเฐจ เฐซเฑเฐฐเฑ‡เฐฎเฑโ€Œเฐตเฐฐเฑเฐ•เฑโ€Œเฐจเฑ เฐŽเฐ‚เฐšเฑเฐ•เฑ‹เฐ‚เฐกเฐฟ: - 3 เฐฒเฑˆเฐจเฑเฐฒ เฐ•เฑ‹เฐกเฑโ€Œเฐฒเฑ‹ เฐธเฑเฐŸเฑ‡เฐŸเฑ เฐ†เฐซเฑ เฐฆเฐฟ เฐ†เฐฐเฑเฐŸเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐ•เฑ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐ‡เฐตเฑเฐตเฐ‚เฐกเฐฟ. - TF2.0/PyTorch/JAX เฐซเฑเฐฐเฑ‡เฐฎเฑโ€Œเฐตเฐฐเฑเฐ•เฑโ€Œเฐฒ เฐฎเฐงเฑเฐฏ เฐ’เฐ•เฑ‡ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐ‡เฐทเฑเฐŸเฐพเฐจเฑเฐธเฐพเฐฐเฐ‚เฐ—เฐพ เฐคเฐฐเฐฒเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ. - เฐถเฐฟเฐ•เฑเฐทเฐฃ, เฐฎเฑ‚เฐฒเฑเฐฏเฐพเฐ‚เฐ•เฐจเฐ‚ เฐฎเฐฐเฐฟเฐฏเฑ เฐ‰เฐคเฑเฐชเฐคเฑเฐคเฐฟ เฐ•เฑ‹เฐธเฐ‚ เฐธเฐฐเฑˆเฐจ เฐซเฑเฐฐเฑ‡เฐฎเฑโ€Œเฐตเฐฐเฑเฐ•เฑโ€Œเฐจเฑ เฐธเฐœเฐพเฐตเฑเฐ—เฐพ เฐŽเฐ‚เฐšเฑเฐ•เฑ‹เฐ‚เฐกเฐฟ. 4. เฐฎเฑ€ เฐ…เฐตเฐธเฐฐเฐพเฐฒเฐ•เฑ เฐ…เฐจเฑเฐ—เฑเฐฃเฐ‚เฐ—เฐพ เฐฎเฑ‹เฐกเฐฒเฑ เฐฒเฑ‡เฐฆเฐพ เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐจเฑ เฐธเฑเฐฒเฐญเฐ‚เฐ—เฐพ เฐ…เฐจเฑเฐ•เฑ‚เฐฒเฑ€เฐ•เฐฐเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ: - เฐชเฑเฐฐเฐคเฐฟ เฐ†เฐฐเฑเฐ•เฐฟเฐŸเฑ†เฐ•เฑเฐšเฐฐเฑ เฐฆเฐพเฐจเฐฟ เฐ…เฐธเฐฒเฑ เฐฐเฐšเฐฏเฐฟเฐคเฐฒเฑ เฐชเฑเฐฐเฐšเฑเฐฐเฐฟเฐ‚เฐšเฐฟเฐจ เฐซเฐฒเฐฟเฐคเฐพเฐฒเฐจเฑ เฐชเฑเฐจเฐฐเฑเฐคเฑเฐชเฐคเฑเฐคเฐฟ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑ‡เฐฎเฑ เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐฒเฐจเฑ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฐพเฐฎเฑ. 
- เฐฎเฑ‹เฐกเฐฒเฑ เฐ‡เฐ‚เฐŸเฐฐเฑเฐจเฐฒเฑโ€Œเฐฒเฑ เฐตเฑ€เฐฒเฑˆเฐจเฐ‚เฐค เฐธเฑเฐฅเฐฟเฐฐเฐ‚เฐ—เฐพ เฐฌเฐนเฐฟเฐฐเฑเฐ—เฐคเฐฎเฐตเฑเฐคเฐพเฐฏเฐฟ. - เฐถเฑ€เฐ˜เฑเฐฐ เฐชเฑเฐฐเฐฏเฑ‹เฐ—เฐพเฐฒ เฐ•เฑ‹เฐธเฐ‚ เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€ เฐจเฑเฐ‚เฐกเฐฟ เฐธเฑเฐตเฐคเฐ‚เฐคเฑเฐฐเฐ‚เฐ—เฐพ เฐฎเฑ‹เฐกเฐฒเฑ เฐซเฑˆเฐฒเฑโ€Œเฐฒเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ. ## เฐจเฑ‡เฐจเฑ เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒเฐจเฑ เฐŽเฐ‚เฐฆเฑเฐ•เฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐ•เฑ‚เฐกเฐฆเฑ? - เฐˆ เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€ เฐจเฑเฐฏเฑ‚เฐฐเฐฒเฑ เฐจเฑ†เฐŸเฑโ€Œเฐฒ เฐ•เฑ‹เฐธเฐ‚ เฐฌเฐฟเฐฒเฑเฐกเฐฟเฐ‚เฐ—เฑ เฐฌเฑเฐฒเฐพเฐ•เฑโ€Œเฐฒ เฐฎเฐพเฐกเฑเฐฏเฑเฐฒเฐฐเฑ เฐŸเฑ‚เฐฒเฑโ€Œเฐฌเฐพเฐ•เฑเฐธเฑ เฐ•เฐพเฐฆเฑ. เฐฎเฑ‹เฐกเฐฒเฑ เฐซเฑˆเฐฒเฑโ€Œเฐฒเฐฒเฑ‹เฐจเฐฟ เฐ•เฑ‹เฐกเฑ เฐ‰เฐฆเฑเฐฆเฑ‡เฐถเฐชเฑ‚เฐฐเฑเฐตเฐ•เฐ‚เฐ—เฐพ เฐ…เฐฆเฐจเฐชเฑ เฐธเฐ‚เฐ—เฑเฐฐเฐนเฐฃเฐฒเฐคเฑ‹ เฐฐเฑ€เฐซเฑเฐฏเฐพเฐ•เฑเฐŸเฐฐเฐฟเฐ‚เฐ—เฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐฆเฑ, เฐคเฐฆเฑเฐตเฐพเฐฐเฐพ เฐชเฐฐเฐฟเฐถเฑ‹เฐงเฐ•เฑเฐฒเฑ เฐ…เฐฆเฐจเฐชเฑ เฐธเฐ‚เฐ—เฑเฐฐเฐนเฐฃเฐฒเฑ/เฐซเฑˆเฐณเฑเฐฒเฐฒเฑ‹เฐ•เฐฟ เฐชเฑเฐฐเฐตเฑ‡เฐถเฐฟเฐ‚เฐšเฐ•เฑเฐ‚เฐกเฐพ เฐชเฑเฐฐเฐคเฐฟ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐชเฑˆ เฐคเฑเฐตเฐฐเฐ—เฐพ เฐฎเฐณเฑเฐฒเฐฟเฐ‚เฐšเฐ—เฐฒเฐฐเฑ. - เฐถเฐฟเฐ•เฑเฐทเฐฃ API เฐ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฑ‹ เฐชเฐจเฐฟ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐ‰เฐฆเฑเฐฆเฑ‡เฐถเฐฟเฐ‚เฐšเฐฌเฐกเฐฒเฑ‡เฐฆเฑ เฐ•เฐพเฐจเฑ€ เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€ เฐ…เฐ‚เฐฆเฐฟเฐ‚เฐšเฐฟเฐจ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐคเฑ‹ เฐชเฐจเฐฟ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐ†เฐชเฑเฐŸเฐฟเฐฎเฑˆเฐœเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐ‚เฐฆเฐฟ. เฐธเฐพเฐงเฐพเฐฐเฐฃ เฐฎเฑ†เฐทเฐฟเฐจเฑ เฐฒเฑ†เฐฐเฑเฐจเฐฟเฐ‚เฐ—เฑ เฐฒเฑ‚เฐชเฑโ€Œเฐฒ เฐ•เฑ‹เฐธเฐ‚, เฐฎเฑ€เฐฐเฑ เฐฎเฐฐเฑŠเฐ• เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€เฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐพเฐฒเฐฟ (เฐฌเฐนเฑเฐถเฐพ, [Accelerate](https://huggingface.co/docs/accelerate)). - เฐฎเฑ‡เฐฎเฑ เฐตเฑ€เฐฒเฑˆเฐจเฐจเฑเฐจเฐฟ เฐŽเฐ•เฑเฐ•เฑเฐต เฐตเฐฟเฐจเฐฟเฐฏเฑ‹เฐ— เฐธเฐ‚เฐฆเฐฐเฑเฐญเฐพเฐฒเฐจเฑ เฐชเฑเฐฐเฐฆเฐฐเฑเฐถเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐชเฑเฐฐเฐฏเฐคเฑเฐจเฐฟเฐธเฑเฐคเฑเฐจเฑเฐจเฐชเฑเฐชเฑเฐกเฑ, เฐฎเฐพ [เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐฒ เฐซเฑ‹เฐฒเฑเฐกเฐฐเฑ](https://github.com/huggingface/transformers/tree/main/examples)เฐฒเฑ‹เฐจเฐฟ เฐธเฑเฐ•เฑเฐฐเฐฟเฐชเฑเฐŸเฑโ€Œเฐฒเฑ เฐ•เฑ‡เฐตเฐฒเฐ‚: เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐฒเฑ. เฐฎเฑ€ เฐจเฐฟเฐฐเฑเฐฆเฐฟเฐทเฑเฐŸ เฐธเฐฎเฐธเฑเฐฏเฐชเฑˆ เฐ…เฐตเฐฟ เฐชเฐจเฐฟ เฐšเฑ‡เฐฏเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐพเฐŸเฐฟเฐจเฐฟ เฐฎเฑ€ เฐ…เฐตเฐธเฐฐเฐพเฐฒเฐ•เฑ เฐ…เฐจเฑเฐ—เฑเฐฃเฐ‚เฐ—เฐพ เฐฎเฐพเฐฐเฑเฐšเฑเฐ•เฑ‹เฐตเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑ€เฐฐเฑ เฐ•เฑŠเฐจเฑเฐจเฐฟ เฐ•เฑ‹เฐกเฑ เฐฒเฑˆเฐจเฑโ€Œเฐฒเฐจเฑ เฐฎเฐพเฐฐเฑเฐšเฐตเฐฒเฐธเฐฟ เฐ‰เฐ‚เฐŸเฑเฐ‚เฐฆเฐฟ. ## เฐธเฐ‚เฐธเฑเฐฅเฐพเฐชเฐจ ### เฐชเฐฟเฐชเฑ เฐคเฑ‹ เฐˆ เฐฐเฐฟเฐชเฑ‹เฐœเฐฟเฐŸเฐฐเฑ€ เฐชเฑˆเฐฅเฐพเฐจเฑ 3.8+, เฐซเฑเฐฒเฐพเฐ•เฑเฐธเฑ 0.4.1+, PyTorch 1.11+ เฐฎเฐฐเฐฟเฐฏเฑ TensorFlow 2.6+เฐฒเฑ‹ เฐชเฐฐเฑ€เฐ•เฑเฐทเฐฟเฐ‚เฐšเฐฌเฐกเฐฟเฐ‚เฐฆเฐฟ. เฐฎเฑ€เฐฐเฑ [เฐตเฐฐเฑเฐšเฑเฐตเฐฒเฑ เฐตเฐพเฐคเฐพเฐตเฐฐเฐฃเฐ‚](https://docs.python.org/3/library/venv.html)เฐฒเฑ‹ ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒเฐจเฑ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐพเฐฒเฐฟ. เฐฎเฑ€เฐ•เฑ เฐชเฑˆเฐฅเฐพเฐจเฑ เฐตเฐฐเฑเฐšเฑเฐตเฐฒเฑ เฐชเฐฐเฐฟเฐธเฐฐเฐพเฐฒ เฐ—เฑเฐฐเฐฟเฐ‚เฐšเฐฟ เฐคเฑ†เฐฒเฐฟเฐฏเฐ•เฑเฐ‚เฐŸเฑ‡, [เฐฏเฑ‚เฐœเฐฐเฑ เฐ—เฑˆเฐกเฑ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ. 
เฐฎเฑเฐ‚เฐฆเฑเฐ—เฐพ, เฐฎเฑ€เฐฐเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐฌเฑ‹เฐคเฑเฐจเฑเฐจ เฐชเฑˆเฐฅเฐพเฐจเฑ เฐตเฑ†เฐฐเฑเฐทเฐจเฑโ€Œเฐคเฑ‹ เฐตเฐฐเฑเฐšเฑเฐตเฐฒเฑ เฐตเฐพเฐคเฐพเฐตเฐฐเฐฃเฐพเฐจเฑเฐจเฐฟ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟเฐจเฐฟ เฐธเฐ•เฑเฐฐเฐฟเฐฏเฐ‚ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ. เฐ…เฐชเฑเฐชเฑเฐกเฑ, เฐฎเฑ€เฐฐเฑ เฐซเฑเฐฒเฐพเฐ•เฑเฐธเฑ, เฐชเฑˆเฐŸเฐพเฐฐเฑเฐšเฑ เฐฒเฑ‡เฐฆเฐพ เฐŸเฑ†เฐจเฑเฐธเฐฐเฑโ€Œเฐซเฑเฐฒเฑ‹เฐฒเฑ‹ เฐ•เฐจเฑ€เฐธเฐ‚ เฐ’เฐ•เฐฆเฐพเฐจเฐฟเฐจเฐฟ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐพเฐฒเฐฟ. เฐฆเฐฏเฐšเฑ‡เฐธเฐฟ [TensorFlow เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ‡เฐทเฐจเฑ เฐชเฑ‡เฐœเฑ€](https://www.tensorflow.org/install/), [PyTorch เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ‡เฐทเฐจเฑ เฐชเฑ‡เฐœเฑ€](https://pytorch.org/get-started/locally/#start-locally) เฐฎเฐฐเฐฟเฐฏเฑ/เฐจเฐฟ เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ เฐฒเฑ‡เฐฆเฐพ เฐฎเฑ€ เฐชเฑเฐฒเฐพเฐŸเฑโ€Œเฐซเฐพเฐฐเฐฎเฑ เฐ•เฑ‹เฐธเฐ‚ เฐจเฐฟเฐฐเฑเฐฆเฐฟเฐทเฑเฐŸ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ‡เฐทเฐจเฑ เฐ•เฐฎเฐพเฐ‚เฐกเฑโ€Œเฐ•เฑ เฐธเฐ‚เฐฌเฐ‚เฐงเฐฟเฐ‚เฐšเฐฟ [Flax](https://github.com/google/flax#quick-install) เฐฎเฐฐเฐฟเฐฏเฑ [Jax](https://github.com/google/jax#installation) เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ‡เฐทเฐจเฑ เฐชเฑ‡เฐœเฑ€เฐฒเฑ . เฐ† เฐฌเฑเฐฏเฐพเฐ•เฑ†เฐ‚เฐกเฑโ€Œเฐฒเฐฒเฑ‹ เฐ’เฐ•เฐŸเฐฟ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐจเฐชเฑเฐชเฑเฐกเฑ, ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒเฐจเฑ เฐˆ เฐ•เฑเฐฐเฐฟเฐ‚เฐฆเฐฟ เฐตเฐฟเฐงเฐ‚เฐ—เฐพ เฐชเฐฟเฐชเฑโ€Œเฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐฟ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ: ```bash pip install transformers ``` เฐฎเฑ€เฐฐเฑ เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐฒเฐคเฑ‹ เฐชเฑเฐฒเฑ‡ เฐšเฑ‡เฐฏเฐพเฐฒเฐจเฑเฐ•เฑเฐ‚เฐŸเฑ‡ เฐฒเฑ‡เฐฆเฐพ เฐ•เฑ‹เฐกเฑ เฐฏเฑŠเฐ•เฑเฐ• เฐฌเฑเฐฒเฑ€เฐกเฐฟเฐ‚เฐ—เฑ เฐŽเฐกเฑเฐœเฑ เฐ…เฐตเฐธเฐฐเฐ‚ เฐฎเฐฐเฐฟเฐฏเฑ เฐ•เฑŠเฐคเฑเฐค เฐตเฐฟเฐกเฑเฐฆเฐฒ เฐ•เฑ‹เฐธเฐ‚ เฐตเฑ‡เฐšเฐฟ เฐ‰เฐ‚เฐกเฐฒเฑ‡เฐ•เฐชเฑ‹เฐคเฑ‡, เฐฎเฑ€เฐฐเฑ เฐคเฐชเฑเฐชเฐจเฐฟเฐธเฐฐเฐฟเฐ—เฐพ [เฐฎเฑ‚เฐฒเฐ‚ เฐจเฑเฐ‚เฐกเฐฟ เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€เฐจเฐฟ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐพเฐฒเฐฟ](https://huggingface.co/docs/transformers/installation#installing-from-source). ### เฐ•เฑŠเฐ‚เฐกเฐพ เฐคเฑ‹ ๐Ÿค— เฐ•เฐฟเฐ‚เฐฆเฐฟ เฐตเฐฟเฐงเฐ‚เฐ—เฐพ เฐ•เฑŠเฐ‚เฐกเฐพ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐฟ เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒเฐจเฑ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ: ```shell script conda install conda-forge::transformers ``` > **_เฐ—เฐฎเฐจเฐฟเฐ•:_** `huggingface` เฐ›เฐพเฐจเฑ†เฐฒเฑ เฐจเฑเฐ‚เฐกเฐฟ `transformers` เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐกเฐ‚ เฐชเฑเฐฐเฐพเฐคเฐจเฐ‚เฐ—เฐพ เฐ‰เฐ‚เฐฆเฐฟ. Flax, PyTorch เฐฒเฑ‡เฐฆเฐพ TensorFlow เฐฏเฑŠเฐ•เฑเฐ• เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ‡เฐทเฐจเฑ เฐชเฑ‡เฐœเฑ€เฐฒเฐจเฑ เฐ•เฑŠเฐ‚เฐกเฐพเฐคเฑ‹ เฐŽเฐฒเฐพ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐพเฐฒเฑ‹ เฐšเฑ‚เฐกเฐŸเฐพเฐจเฐฟเฐ•เฐฟ เฐตเฐพเฐŸเฐฟเฐจเฐฟ เฐ…เฐจเฑเฐธเฐฐเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ. > **_เฐ—เฐฎเฐจเฐฟเฐ•:_** Windowsเฐฒเฑ‹, เฐ•เฐพเฐทเฐฟเฐ‚เฐ—เฑ เฐจเฑเฐ‚เฐกเฐฟ เฐชเฑเฐฐเฐฏเฑ‹เฐœเฐจเฐ‚ เฐชเฑŠเฐ‚เฐฆเฑ‡เฐ‚เฐฆเฑเฐ•เฑ เฐฎเฑ€เฐฐเฑ เฐกเฑ†เฐตเฐฒเฐชเฐฐเฑ เฐฎเฑ‹เฐกเฑโ€Œเฐจเฐฟ เฐธเฐ•เฑเฐฐเฐฟเฐฏเฐ‚ เฐšเฑ‡เฐฏเฐฎเฐจเฐฟ เฐชเฑเฐฐเฐพเฐ‚เฐชเฑเฐŸเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐตเฐšเฑเฐšเฑ. เฐ‡เฐฆเฐฟ เฐฎเฑ€เฐ•เฑ เฐŽเฐ‚เฐชเฐฟเฐ• เฐ•เฐพเฐ•เฐชเฑ‹เฐคเฑ‡, เฐฆเฐฏเฐšเฑ‡เฐธเฐฟ [เฐˆ เฐธเฐ‚เฐšเฐฟเฐ•](https://github.com/huggingface/huggingface_hub/issues/1062)เฐฒเฑ‹ เฐฎเฐพเฐ•เฑ เฐคเฑ†เฐฒเฐฟเฐฏเฐœเฑ‡เฐฏเฐ‚เฐกเฐฟ. 
## เฐฎเฑ‹เฐกเฐฒเฑ เฐ†เฐฐเฑเฐ•เฐฟเฐŸเฑ†เฐ•เฑเฐšเฐฐเฑเฐฒเฑ **[เฐ…เฐจเฑเฐจเฐฟ เฐฎเฑ‹เฐกเฐฒเฑ เฐšเฑ†เฐ•เฑโ€Œเฐชเฐพเฐฏเฐฟเฐ‚เฐŸเฑโ€Œเฐฒเฑ](https://huggingface.co/models)** ๐Ÿค— เฐ…เฐ‚เฐฆเฐฟเฐ‚เฐšเฐฟเฐจ เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ huggingface.co [model hub](https://huggingface.co/models) เฐจเฑเฐ‚เฐกเฐฟ เฐธเฐœเฐพเฐตเฑเฐ—เฐพ เฐเฐ•เฑ€เฐ•เฑƒเฐคเฐ‚ เฐšเฑ‡เฐฏเฐฌเฐกเฑเฐกเฐพเฐฏเฐฟ [users](https://huggingface.co/users) เฐฎเฐฐเฐฟเฐฏเฑ [organizations](https://huggingface.co/organizations) เฐฆเฑเฐตเฐพเฐฐเฐพ เฐจเฑ‡เฐฐเฑเฐ—เฐพ เฐ…เฐชเฑโ€Œเฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐคเฐพเฐฏเฐฟ. เฐชเฑเฐฐเฐธเฑเฐคเฑเฐค เฐคเฐจเฐฟเฐ–เฑ€ เฐ•เฑ‡เฐ‚เฐฆเฑเฐฐเฐพเฐฒ เฐธเฐ‚เฐ–เฑเฐฏ: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐชเฑเฐฐเฐธเฑเฐคเฑเฐคเฐ‚ เฐ•เฐฟเฐ‚เฐฆเฐฟ เฐ†เฐฐเฑเฐ•เฐฟเฐŸเฑ†เฐ•เฑเฐšเฐฐเฑโ€Œเฐฒเฐจเฑ เฐ…เฐ‚เฐฆเฐœเฑ‡เฐธเฑเฐคเฑเฐจเฑเฐจเฐพเฐฏเฐฟ: เฐตเฐพเฐŸเฐฟเฐฒเฑ‹ เฐชเฑเฐฐเฐคเฐฟ เฐ’เฐ•เฑเฐ•เฐŸเฐฟ เฐ‰เฐจเฑเฐจเฐค เฐธเฑเฐฅเฐพเฐฏเฐฟ เฐธเฐพเฐฐเฐพเฐ‚เฐถเฐ‚ เฐ•เฑ‹เฐธเฐ‚ [เฐ‡เฐ•เฑเฐ•เฐก](https://huggingface.co/docs/transformers/model_summary) เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ. เฐˆ เฐ…เฐฎเฐฒเฑเฐฒเฑ เฐ…เฐจเฑ‡เฐ• เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐฒเฐฒเฑ‹ เฐชเฐฐเฑ€เฐ•เฑเฐทเฐฟเฐ‚เฐšเฐฌเฐกเฑเฐกเฐพเฐฏเฐฟ (เฐ‰เฐฆเฐพเฐนเฐฐเฐฃ เฐธเฑเฐ•เฑเฐฐเฐฟเฐชเฑเฐŸเฑโ€Œเฐฒเฐจเฑ เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ) เฐฎเฐฐเฐฟเฐฏเฑ เฐ…เฐธเฐฒเฑˆเฐจ เฐ…เฐฎเฐฒเฑเฐฒ เฐชเฐจเฐฟเฐคเฑ€เฐฐเฑเฐคเฑ‹ เฐธเฐฐเฐฟเฐชเฑ‹เฐฒเฐพเฐฒเฐฟ. เฐฎเฑ€เฐฐเฑ [เฐกเฐพเฐ•เฑเฐฏเฑเฐฎเฑ†เฐ‚เฐŸเฑ‡เฐทเฐจเฑ](https://github.com/huggingface/transformers/tree/main/examples) เฐฏเฑŠเฐ•เฑเฐ• เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐฒ เฐตเฐฟเฐญเฐพเฐ—เฐ‚เฐฒเฑ‹ เฐชเฐจเฐฟเฐคเฑ€เฐฐเฑเฐชเฑˆ เฐฎเฐฐเฐฟเฐจเฑเฐจเฐฟ เฐตเฐฟเฐตเฐฐเฐพเฐฒเฐจเฑ เฐ•เฐจเฑเฐ—เฑŠเฐจเฐตเฐšเฑเฐšเฑ. 
## เฐ‡เฐ‚เฐ•เฐพ เฐจเฑ‡เฐฐเฑเฐšเฑเฐ•เฑ‹ | เฐตเฐฟเฐญเฐพเฐ—เฐ‚ | เฐตเฐฟเฐตเฐฐเฐฃ | |-|-| | [เฐกเฐพเฐ•เฑเฐฏเฑเฐฎเฑ†เฐ‚เฐŸเฑ‡เฐทเฐจเฑ](https://huggingface.co/docs/transformers/) | เฐชเฑ‚เฐฐเฑเฐคเฐฟ API เฐกเฐพเฐ•เฑเฐฏเฑเฐฎเฑ†เฐ‚เฐŸเฑ‡เฐทเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐŸเฑเฐฏเฑเฐŸเฑ‹เฐฐเฐฟเฐฏเฐฒเฑเฐธเฑ | | [เฐŸเฐพเฐธเฑเฐ•เฑ เฐธเฐพเฐฐเฐพเฐ‚เฐถเฐ‚](https://huggingface.co/docs/transformers/task_summary) | ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒ เฐฆเฑเฐตเฐพเฐฐเฐพ เฐธเฐชเฑ‹เฐฐเฑเฐŸเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐจ เฐตเฐฟเฐงเฑเฐฒเฑ | | [เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฟเฐ‚เฐ—เฑ เฐŸเฑเฐฏเฑเฐŸเฑ‹เฐฐเฐฟเฐฏเฐฒเฑ](https://huggingface.co/docs/transformers/preprocessing) | เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒ เฐ•เฑ‹เฐธเฐ‚ เฐกเฑ‡เฐŸเฐพเฐจเฑ เฐธเฐฟเฐฆเฑเฐงเฐ‚ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ `Tokenizer` เฐ•เฑเฐฒเฐพเฐธเฑโ€Œเฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐ‚ | | [เฐŸเฑเฐฐเฑˆเฐจเฐฟเฐ‚เฐ—เฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐซเฑˆเฐจเฑ-เฐŸเฑเฐฏเฑ‚เฐจเฐฟเฐ‚เฐ—เฑ](https://huggingface.co/docs/transformers/training) | PyTorch/TensorFlow เฐŸเฑเฐฐเฑˆเฐจเฐฟเฐ‚เฐ—เฑ เฐฒเฑ‚เฐชเฑ เฐฎเฐฐเฐฟเฐฏเฑ `Trainer` APIเฐฒเฑ‹ ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐ…เฐ‚เฐฆเฐฟเฐ‚เฐšเฐฟเฐจ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐ‚ | | [เฐคเฑเฐตเฐฐเฐฟเฐค เฐชเฐฐเฑเฐฏเฐŸเฐจ: เฐซเฑˆเฐจเฑ-เฐŸเฑเฐฏเฑ‚เฐจเฐฟเฐ‚เฐ—เฑ/เฐฏเฑ‚เฐธเฑ‡เฐœเฑ เฐธเฑเฐ•เฑเฐฐเฐฟเฐชเฑเฐŸเฑโ€Œเฐฒเฑ](https://github.com/huggingface/transformers/tree/main/examples) | เฐตเฐฟเฐธเฑเฐคเฑƒเฐค เฐถเฑเฐฐเฑ‡เฐฃเฐฟ เฐŸเฐพเฐธเฑเฐ•เฑโ€Œเฐฒเฐชเฑˆ เฐซเฑˆเฐจเฑ-เฐŸเฑเฐฏเฑ‚เฐจเฐฟเฐ‚เฐ—เฑ เฐฎเฑ‹เฐกเฐฒเฑเฐธเฑ เฐ•เฑ‹เฐธเฐ‚ เฐ‰เฐฆเฐพเฐนเฐฐเฐฃ เฐธเฑเฐ•เฑเฐฐเฐฟเฐชเฑเฐŸเฑโ€Œเฐฒเฑ | | [เฐฎเฑ‹เฐกเฐฒเฑ เฐญเฐพเฐ—เฐธเฑเฐตเฐพเฐฎเฑเฐฏเฐ‚ เฐฎเฐฐเฐฟเฐฏเฑ เฐ…เฐชเฑโ€Œเฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐกเฐ‚](https://huggingface.co/docs/transformers/model_sharing) | เฐ•เฐฎเฑเฐฏเฑ‚เฐจเฐฟเฐŸเฑ€เฐคเฑ‹ เฐฎเฑ€ เฐซเฑˆเฐจเฑ-เฐŸเฑเฐฏเฑ‚เฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฐจเฑ เฐ…เฐชเฑโ€Œเฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐญเฐพเฐ—เฐธเฑเฐตเฐพเฐฎเฑเฐฏเฐ‚ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ | ## เฐ…เฐจเฑเฐฒเฑ‡เฐ–เฐจเฐ‚ ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐธเฑ เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€ เฐ•เฑ‹เฐธเฐ‚ เฐฎเฑ€เฐฐเฑ เฐ‰เฐฆเฐนเฐฐเฐฟเฐ‚เฐšเฐ—เฐฒ [เฐชเฑ‡เฐชเฐฐเฑ](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) เฐ‡เฐชเฑเฐชเฑเฐกเฑ เฐฎเฐพ เฐตเฐฆเฑเฐฆ เฐ‰เฐ‚เฐฆเฐฟ: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/CITATION.cff
cff-version: "1.2.0" date-released: 2020-10 message: "If you use this software, please cite it using these metadata." title: "Transformers: State-of-the-Art Natural Language Processing" url: "https://github.com/huggingface/transformers" authors: - family-names: Wolf given-names: Thomas - family-names: Debut given-names: Lysandre - family-names: Sanh given-names: Victor - family-names: Chaumond given-names: Julien - family-names: Delangue given-names: Clement - family-names: Moi given-names: Anthony - family-names: Cistac given-names: Perric - family-names: Ma given-names: Clara - family-names: Jernite given-names: Yacine - family-names: Plu given-names: Julien - family-names: Xu given-names: Canwen - family-names: "Le Scao" given-names: Teven - family-names: Gugger given-names: Sylvain - family-names: Drame given-names: Mariama - family-names: Lhoest given-names: Quentin - family-names: Rush given-names: "Alexander M." preferred-citation: type: conference-paper authors: - family-names: Wolf given-names: Thomas - family-names: Debut given-names: Lysandre - family-names: Sanh given-names: Victor - family-names: Chaumond given-names: Julien - family-names: Delangue given-names: Clement - family-names: Moi given-names: Anthony - family-names: Cistac given-names: Perric - family-names: Ma given-names: Clara - family-names: Jernite given-names: Yacine - family-names: Plu given-names: Julien - family-names: Xu given-names: Canwen - family-names: "Le Scao" given-names: Teven - family-names: Gugger given-names: Sylvain - family-names: Drame given-names: Mariama - family-names: Lhoest given-names: Quentin - family-names: Rush given-names: "Alexander M." booktitle: "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations" month: 10 start: 38 end: 45 title: "Transformers: State-of-the-Art Natural Language Processing" year: 2020 publisher: "Association for Computational Linguistics" url: "https://www.aclweb.org/anthology/2020.emnlp-demos.6" address: "Online"
0
mavonic_private_repos
mavonic_private_repos/transformers/Makefile
.PHONY: deps_table_update modified_only_fixup extra_style_checks quality style fixup fix-copies test test-examples # make sure to test the local checkout in scripts and not the pre-installed one (don't use quotes!) export PYTHONPATH = src check_dirs := examples tests src utils exclude_folders := examples/research_projects modified_only_fixup: $(eval modified_py_files := $(shell python utils/get_modified_files.py $(check_dirs))) @if test -n "$(modified_py_files)"; then \ echo "Checking/fixing $(modified_py_files)"; \ ruff check $(modified_py_files) --fix --exclude $(exclude_folders); \ ruff format $(modified_py_files) --exclude $(exclude_folders);\ else \ echo "No library .py files were modified"; \ fi # Update src/transformers/dependency_versions_table.py deps_table_update: @python setup.py deps_table_update deps_table_check_updated: @md5sum src/transformers/dependency_versions_table.py > md5sum.saved @python setup.py deps_table_update @md5sum -c --quiet md5sum.saved || (printf "\nError: the version dependency table is outdated.\nPlease run 'make fixup' or 'make style' and commit the changes.\n\n" && exit 1) @rm md5sum.saved # autogenerating code autogenerate_code: deps_table_update # Check that the repo is in a good state repo-consistency: python utils/check_copies.py python utils/check_table.py python utils/check_dummies.py python utils/check_repo.py python utils/check_inits.py python utils/check_config_docstrings.py python utils/check_config_attributes.py python utils/check_doctest_list.py python utils/update_metadata.py --check-only python utils/check_docstrings.py python utils/check_support_list.py # this target runs checks on all files quality: @python -c "from transformers import *" || (echo '๐Ÿšจ import failed, this means you introduced unprotected imports! 
๐Ÿšจ'; exit 1) ruff check $(check_dirs) setup.py conftest.py ruff format --check $(check_dirs) setup.py conftest.py python utils/custom_init_isort.py --check_only python utils/sort_auto_mappings.py --check_only python utils/check_doc_toc.py # Format source code automatically and check is there are any problems left that need manual fixing extra_style_checks: python utils/custom_init_isort.py python utils/sort_auto_mappings.py python utils/check_doc_toc.py --fix_and_overwrite # this target runs checks on all files and potentially modifies some of them style: ruff check $(check_dirs) setup.py conftest.py --fix --exclude $(exclude_folders) ruff format $(check_dirs) setup.py conftest.py --exclude $(exclude_folders) ${MAKE} autogenerate_code ${MAKE} extra_style_checks # Super fast fix and check target that only works on relevant modified files since the branch was made fixup: modified_only_fixup extra_style_checks autogenerate_code repo-consistency # Make marked copies of snippets of codes conform to the original fix-copies: python utils/check_copies.py --fix_and_overwrite python utils/check_table.py --fix_and_overwrite python utils/check_dummies.py --fix_and_overwrite python utils/check_doctest_list.py --fix_and_overwrite python utils/check_docstrings.py --fix_and_overwrite # Run tests for the library test: python -m pytest -n auto --dist=loadfile -s -v ./tests/ # Run tests for examples test-examples: python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/ # Run tests for SageMaker DLC release test-sagemaker: # install sagemaker dependencies in advance with pip install .[sagemaker] TEST_SAGEMAKER=True python -m pytest -n auto -s -v ./tests/sagemaker # Release stuff pre-release: python utils/release.py pre-patch: python utils/release.py --patch post-release: python utils/release.py --post_release post-patch: python utils/release.py --post_release --patch build-release: rm -rf dist rm -rf build python setup.py bdist_wheel python setup.py sdist python utils/check_build.py
0
mavonic_private_repos
mavonic_private_repos/transformers/README_ru.md
<!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg"> <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg"> <img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;"> </picture> <br/> <br/> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <b>ะ ัƒััะบะธะน</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | <p> </h4> <h3 align="center"> <p>ะกะพะฒั€ะตะผะตะฝะฝะพะต ะผะฐัˆะธะฝะฝะพะต ะพะฑัƒั‡ะตะฝะธะต ะดะปั JAX, PyTorch ะธ 
TensorFlow</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค— Transformers ะฟั€ะตะดะพัั‚ะฐะฒะปัะตั‚ ั‚ั‹ััั‡ะธ ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝะพ ะพะฑัƒั‡ะตะฝะฝั‹ั… ะผะพะดะตะปะตะน ะดะปั ะฒั‹ะฟะพะปะฝะตะฝะธั ั€ะฐะทะปะธั‡ะฝั‹ั… ะทะฐะดะฐั‡, ั‚ะฐะบะธั… ะบะฐะบ ั‚ะตะบัั‚, ะทั€ะตะฝะธะต ะธ ะฐัƒะดะธะพ. ะญั‚ะธ ะผะพะดะตะปะธ ะผะพะณัƒั‚ ะฑั‹ั‚ัŒ ะฟั€ะธะผะตะฝะตะฝั‹ ะบ: * ๐Ÿ“ ะขะตะบัั‚ัƒ ะดะปั ั‚ะฐะบะธั… ะทะฐะดะฐั‡, ะบะฐะบ ะบะปะฐััะธั„ะธะบะฐั†ะธั ั‚ะตะบัั‚ะพะฒ, ะธะทะฒะปะตั‡ะตะฝะธะต ะธะฝั„ะพั€ะผะฐั†ะธะธ, ะพั‚ะฒะตั‚ั‹ ะฝะฐ ะฒะพะฟั€ะพัั‹, ะพะฑะพะฑั‰ะตะฝะธะต, ะฟะตั€ะตะฒะพะด, ะณะตะฝะตั€ะฐั†ะธั ั‚ะตะบัั‚ะพะฒ ะฝะฐ ะฑะพะปะตะต ั‡ะตะผ 100 ัะทั‹ะบะฐั…. * ๐Ÿ–ผ๏ธ ะ˜ะทะพะฑั€ะฐะถะตะฝะธัะผ ะดะปั ะทะฐะดะฐั‡ ะบะปะฐััะธั„ะธะบะฐั†ะธะธ ะธะทะพะฑั€ะฐะถะตะฝะธะน, ะพะฑะฝะฐั€ัƒะถะตะฝะธั ะพะฑัŠะตะบั‚ะพะฒ ะธ ัะตะณะผะตะฝั‚ะฐั†ะธะธ. * ๐Ÿ—ฃ๏ธ ะัƒะดะธะพ ะดะปั ะทะฐะดะฐั‡ ั€ะฐัะฟะพะทะฝะฐะฒะฐะฝะธั ั€ะตั‡ะธ ะธ ะบะปะฐััะธั„ะธะบะฐั†ะธะธ ะฐัƒะดะธะพ. ะœะพะดะตะปะธ transformers ั‚ะฐะบะถะต ะผะพะณัƒั‚ ะฒั‹ะฟะพะปะฝัั‚ัŒ ะฝะตัะบะพะปัŒะบะพ ะทะฐะดะฐั‡, ั‚ะฐะบะธั… ะบะฐะบ ะพั‚ะฒะตั‚ั‹ ะฝะฐ ั‚ะฐะฑะปะธั‡ะฝั‹ะต ะฒะพะฟั€ะพัั‹, ั€ะฐัะฟะพะทะฝะฐะฒะฐะฝะธะต ะพะฟั‚ะธั‡ะตัะบะธั… ัะธะผะฒะพะปะพะฒ, ะธะทะฒะปะตั‡ะตะฝะธะต ะธะฝั„ะพั€ะผะฐั†ะธะธ ะธะท ะพั‚ัะบะฐะฝะธั€ะพะฒะฐะฝะฝั‹ั… ะดะพะบัƒะผะตะฝั‚ะพะฒ, ะบะปะฐััะธั„ะธะบะฐั†ะธั ะฒะธะดะตะพ ะธ ะพั‚ะฒะตั‚ั‹ ะฝะฐ ะฒะธะทัƒะฐะปัŒะฝั‹ะต ะฒะพะฟั€ะพัั‹. ๐Ÿค— Transformers ะฟั€ะตะดะพัั‚ะฐะฒะปัะตั‚ API ะดะปั ะฑั‹ัั‚ั€ะพะน ะทะฐะณั€ัƒะทะบะธ ะธ ะธัะฟะพะปัŒะทะพะฒะฐะฝะธั ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝะพ ะพะฑัƒั‡ะตะฝะฝั‹ั… ะผะพะดะตะปะตะน, ะธั… ั‚ะพะฝะบะพะน ะฝะฐัั‚ั€ะพะนะบะธ ะฝะฐ ัะพะฑัั‚ะฒะตะฝะฝั‹ั… ะดะฐั‚ะฐัะตั‚ะฐั… ะธ ะฟะพัะปะตะดัƒัŽั‰ะตะณะพ ะพะฑะผะตะฝะฐ ะธะผะธ ั ัะพะพะฑั‰ะตัั‚ะฒะพะผ ะฝะฐ ะฝะฐัˆะตะผ [ัะฐะนั‚ะต](https://huggingface.co/models). ะ’ ั‚ะพ ะถะต ะฒั€ะตะผั ะบะฐะถะดั‹ะน python ะผะพะดัƒะปัŒ, ะพะฟั€ะตะดะตะปััŽั‰ะธะน ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ัƒ, ะฟะพะปะฝะพัั‚ัŒัŽ ะฐะฒั‚ะพะฝะพะผะตะฝ ะธ ะผะพะถะตั‚ ะฑั‹ั‚ัŒ ะผะพะดะธั„ะธั†ะธั€ะพะฒะฐะฝ ะดะปั ะฟั€ะพะฒะตะดะตะฝะธั ะฑั‹ัั‚ั€ั‹ั… ะธััะปะตะดะพะฒะฐั‚ะตะปัŒัะบะธั… ัะบัะฟะตั€ะธะผะตะฝั‚ะพะฒ. ๐Ÿค— Transformers ะพะฟะธั€ะฐะตั‚ัั ะฝะฐ ั‚ั€ะธ ัะฐะผั‹ะต ะฟะพะฟัƒะปัั€ะฝั‹ะต ะฑะธะฑะปะธะพั‚ะตะบะธ ะณะปัƒะฑะพะบะพะณะพ ะพะฑัƒั‡ะตะฝะธั - [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) ะธ [TensorFlow](https://www.tensorflow.org/) - ะธ ะปะตะณะบะพ ะธะฝั‚ะตะณั€ะธั€ัƒะตั‚ัั ะผะตะถะดัƒ ะฝะธะผะธ. ะญั‚ะพ ะฟะพะทะฒะพะปัะตั‚ ะปะตะณะบะพ ะพะฑัƒั‡ะฐั‚ัŒ ะผะพะดะตะปะธ ั ะฟะพะผะพั‰ัŒัŽ ะพะดะฝะพะน ะธะท ะฝะธั…, ะฐ ะทะฐั‚ะตะผ ะทะฐะณั€ัƒะถะฐั‚ัŒ ะธั… ะดะปั ะฒั‹ะฒะพะดะพะฒ ั ะฟะพะผะพั‰ัŒัŽ ะดั€ัƒะณะพะน. ## ะžะฝะปะฐะนะฝ ะดะตะผะพะฝัั‚ั€ะฐั†ะธั ะ‘ะพะปัŒัˆะธะฝัั‚ะฒะพ ะฝะฐัˆะธั… ะผะพะดะตะปะตะน ะผะพะถะฝะพ ะฟั€ะพั‚ะตัั‚ะธั€ะพะฒะฐั‚ัŒ ะฝะตะฟะพัั€ะตะดัั‚ะฒะตะฝะฝะพ ะฝะฐ ะธั… ัั‚ั€ะฐะฝะธั†ะฐั… ั [ัะฐะนั‚ะฐ](https://huggingface.co/models). ะœั‹ ั‚ะฐะบะถะต ะฟั€ะตะดะปะฐะณะฐะตะผ [ะฟั€ะธะฒะฐั‚ะฝั‹ะน ั…ะพัั‚ะธะฝะณ ะผะพะดะตะปะตะน, ะบะพะฝั‚ั€ะพะปัŒ ะฒะตั€ัะธะน ะธ API ะดะปั ะฒั‹ะฒะพะดะพะฒ](https://huggingface.co/pricing) ะดะปั ะฟัƒะฑะปะธั‡ะฝั‹ั… ะธ ั‡ะฐัั‚ะฝั‹ั… ะผะพะดะตะปะตะน.
ะ’ะพั‚ ะฝะตัะบะพะปัŒะบะพ ะฟั€ะธะผะตั€ะพะฒ: ะ’ ะพะฑะปะฐัั‚ะธ NLP ( ะžะฑั€ะฐะฑะพั‚ะบะฐ ั‚ะตะบัั‚ะพะฒ ะฝะฐ ะตัั‚ะตัั‚ะฒะตะฝะฝะพะผ ัะทั‹ะบะต ): - [ะœะฐัะบะธั€ะพะฒะฐะฝะฝะพะต ะทะฐะฟะพะปะฝะตะฝะธะต ัะปะพะฒ ั ะฟะพะผะพั‰ัŒัŽ BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [ะ ะฐัะฟะพะทะฝะฐะฒะฐะฝะธะต ััƒั‰ะฝะพัั‚ะตะน ั ะฟะพะผะพั‰ัŒัŽ Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [ะ“ะตะฝะตั€ะฐั†ะธั ั‚ะตะบัั‚ะฐ ั ะฟะพะผะพั‰ัŒัŽ GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+) - [ะ’ั‹ะฒะพะดั‹ ะฝะฐ ะตัั‚ะตัั‚ะฒะตะฝะฝะพะผ ัะทั‹ะบะต ั ะฟะพะผะพั‰ัŒัŽ RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [ะžะฑะพะฑั‰ะตะฝะธะต ั ะฟะพะผะพั‰ัŒัŽ BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [ะžั‚ะฒะตั‚ั‹ ะฝะฐ ะฒะพะฟั€ะพัั‹ ั ะฟะพะผะพั‰ัŒัŽ DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [ะŸะตั€ะตะฒะพะด ั ะฟะพะผะพั‰ัŒัŽ T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) ะ’ ะพะฑะปะฐัั‚ะธ ะบะพะผะฟัŒัŽั‚ะตั€ะฝะพะณะพ ะทั€ะตะฝะธั: - [ะšะปะฐััะธั„ะธะบะฐั†ะธั ะธะทะพะฑั€ะฐะถะตะฝะธะน ั ะฟะพะผะพั‰ัŒัŽ ViT](https://huggingface.co/google/vit-base-patch16-224) - [ะžะฑะฝะฐั€ัƒะถะตะฝะธะต ะพะฑัŠะตะบั‚ะพะฒ ั ะฟะพะผะพั‰ัŒัŽ DETR](https://huggingface.co/facebook/detr-resnet-50) - 
[ะกะตะผะฐะฝั‚ะธั‡ะตัะบะฐั ัะตะณะผะตะฝั‚ะฐั†ะธั ั ะฟะพะผะพั‰ัŒัŽ SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) - [ะกะตะณะผะตะฝั‚ะฐั†ะธั ะฟะฐะฝะพะฟั‚ะธะบัƒะผะฐ ั ะฟะพะผะพั‰ัŒัŽ MaskFormer](https://huggingface.co/facebook/maskformer-swin-small-coco) - [ะžั†ะตะฝะบะฐ ะณะปัƒะฑะธะฝั‹ ั ะฟะพะผะพั‰ัŒัŽ DPT](https://huggingface.co/docs/transformers/model_doc/dpt) - [ะšะปะฐััะธั„ะธะบะฐั†ะธั ะฒะธะดะตะพ ั ะฟะพะผะพั‰ัŒัŽ VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae) - [ะฃะฝะธะฒะตั€ัะฐะปัŒะฝะฐั ัะตะณะผะตะฝั‚ะฐั†ะธั ั ะฟะพะผะพั‰ัŒัŽ OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large) ะ’ ะพะฑะปะฐัั‚ะธ ะทะฒัƒะบะฐ: - [ะะฒั‚ะพะผะฐั‚ะธั‡ะตัะบะพะต ั€ะฐัะฟะพะทะฝะฐะฒะฐะฝะธะต ั€ะตั‡ะธ ั ะฟะพะผะพั‰ัŒัŽ Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) - [ะŸะพะธัะบ ะบะปัŽั‡ะตะฒั‹ั… ัะปะพะฒ ั ะฟะพะผะพั‰ัŒัŽ Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) - [ะšะปะฐััะธั„ะธะบะฐั†ะธั ะฐัƒะดะธะพะดะฐะฝะฝั‹ั… ั ะฟะพะผะพั‰ัŒัŽ ั‚ั€ะฐัะฝั„ะพั€ะผะตั€ะฐ ะฐัƒะดะธะพัะฟะตะบั‚ั€ะพะณั€ะฐะผะผ](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) ะ’ ะผัƒะปัŒั‚ะธะผะพะดะฐะปัŒะฝั‹ั… ะทะฐะดะฐั‡ะฐั…: - [ะžั‚ะฒะตั‚ั‹ ะฝะฐ ะฒะพะฟั€ะพัั‹ ะฟะพ ั‚ะฐะฑะปะธั†ะต ั ะฟะพะผะพั‰ัŒัŽ TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq) - [ะ’ะธะทัƒะฐะปัŒะฝั‹ะต ะพั‚ะฒะตั‚ั‹ ะฝะฐ ะฒะพะฟั€ะพัั‹ ั ะฟะพะผะพั‰ัŒัŽ ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) - [Zero-shot ะบะปะฐััะธั„ะธะบะฐั†ะธั ะธะทะพะฑั€ะฐะถะตะฝะธะน ั ะฟะพะผะพั‰ัŒัŽ CLIP](https://huggingface.co/openai/clip-vit-large-patch14) - [ะžั‚ะฒะตั‚ั‹ ะฝะฐ ะฒะพะฟั€ะพัั‹ ะฟะพ ะดะพะบัƒะผะตะฝั‚ะฐะผ ั ะฟะพะผะพั‰ัŒัŽ LayoutLM](https://huggingface.co/impira/layoutlm-document-qa) - [Zero-shot ะบะปะฐััะธั„ะธะบะฐั†ะธั ะฒะธะดะตะพ ั ะฟะพะผะพั‰ัŒัŽ X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip) ## 100 ะฟั€ะพะตะบั‚ะพะฒ, ะธัะฟะพะปัŒะทัƒัŽั‰ะธั… Transformers Transformers - ัั‚ะพ ะฝะต ะฟั€ะพัั‚ะพ ะฝะฐะฑะพั€ ะธะฝัั‚ั€ัƒะผะตะฝั‚ะพะฒ ะดะปั ะธัะฟะพะปัŒะทะพะฒะฐะฝะธั ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝะพ ะพะฑัƒั‡ะตะฝะฝั‹ั… ะผะพะดะตะปะตะน: ัั‚ะพ ัะพะพะฑั‰ะตัั‚ะฒะพ ะฟั€ะพะตะบั‚ะพะฒ, ัะพะทะดะฐะฝะฝะพะต ะฝะฐ ะตะณะพ ะพัะฝะพะฒะต, ะธ Hugging Face Hub. ะœั‹ ั…ะพั‚ะธะผ, ั‡ั‚ะพะฑั‹ Transformers ะฟะพะทะฒะพะปะธะป ั€ะฐะทั€ะฐะฑะพั‚ั‡ะธะบะฐะผ, ะธััะปะตะดะพะฒะฐั‚ะตะปัะผ, ัั‚ัƒะดะตะฝั‚ะฐะผ, ะฟั€ะพั„ะตััะพั€ะฐะผ, ะธะฝะถะตะฝะตั€ะฐะผ ะธ ะฒัะตะผ ะถะตะปะฐัŽั‰ะธะผ ัะพะทะดะฐะฒะฐั‚ัŒ ะฟั€ะพะตะบั‚ั‹ ัะฒะพะตะน ะผะตั‡ั‚ั‹. ะงั‚ะพะฑั‹ ะพั‚ะฟั€ะฐะทะดะฝะพะฒะฐั‚ัŒ 100 ั‚ั‹ััั‡ ะทะฒะตะทะด Transformers, ะผั‹ ั€ะตัˆะธะปะธ ัะดะตะปะฐั‚ัŒ ะฐะบั†ะตะฝั‚ ะฝะฐ ัะพะพะฑั‰ะตัั‚ะฒะต, ะธ ัะพะทะดะฐะปะธ ัั‚ั€ะฐะฝะธั†ัƒ [awesome-transformers](./awesome-transformers.md), ะฝะฐ ะบะพั‚ะพั€ะพะน ะฟะตั€ะตั‡ะธัะปะตะฝั‹ 100 ะฝะตะฒะตั€ะพัั‚ะฝั‹ั… ะฟั€ะพะตะบั‚ะพะฒ, ัะพะทะดะฐะฝะฝั‹ั… ั ะฟะพะผะพั‰ัŒัŽ transformers. ะ•ัะปะธ ะฒั‹ ัะฒะปัะตั‚ะตััŒ ะฒะปะฐะดะตะปัŒั†ะตะผ ะธะปะธ ะฟะพะปัŒะทะพะฒะฐั‚ะตะปะตะผ ะฟั€ะพะตะบั‚ะฐ, ะบะพั‚ะพั€ั‹ะน, ะฟะพ ะฒะฐัˆะตะผัƒ ะผะฝะตะฝะธัŽ, ะดะพะปะถะตะฝ ะฑั‹ั‚ัŒ ะฒะบะปัŽั‡ะตะฝ ะฒ ัั‚ะพั‚ ัะฟะธัะพะบ, ะฟะพะถะฐะปัƒะนัั‚ะฐ, ะพั‚ะบั€ะพะนั‚ะต PR ะดะปั ะตะณะพ ะดะพะฑะฐะฒะปะตะฝะธั! 
## ะ•ัะปะธ ะฒั‹ ั…ะพั‚ะธั‚ะต ะฟะพะปัƒั‡ะธั‚ัŒ ะธะฝะดะธะฒะธะดัƒะฐะปัŒะฝัƒัŽ ะฟะพะดะดะตั€ะถะบัƒ ะพั‚ ะบะพะผะฐะฝะดั‹ Hugging Face <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## ะ‘ั‹ัั‚ั€ั‹ะน ะณะฐะนะด ะ”ะปั ะธัะฟะพะปัŒะทะพะฒะฐะฝะธั ะผะพะดะตะปะธ ะฝะฐ ะทะฐะดะฐะฝะฝะพะผ ะฒั…ะพะดะต (ั‚ะตะบัั‚, ะธะทะพะฑั€ะฐะถะตะฝะธะต, ะทะฒัƒะบ, ...) ะผั‹ ะฟั€ะตะดะพัั‚ะฐะฒะปัะตะผ API `pipeline`. ะšะพะฝะฒะตะนะตั€ั‹ ะพะฑัŠะตะดะธะฝััŽั‚ ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝะพ ะพะฑัƒั‡ะตะฝะฝัƒัŽ ะผะพะดะตะปัŒ ั ะฟั€ะตะฟั€ะพั†ะตััะธะฝะณะพะผ, ะบะพั‚ะพั€ั‹ะน ะธัะฟะพะปัŒะทะพะฒะฐะปัั ะฟั€ะธ ะตะต ะพะฑัƒั‡ะตะฝะธะธ. ะ’ะพั‚ ะบะฐะบ ะผะพะถะฝะพ ะฑั‹ัั‚ั€ะพ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ะบะพะฝะฒะตะนะตั€ ะดะปั ะบะปะฐััะธั„ะธะบะฐั†ะธะธ ะฟะพะปะพะถะธั‚ะตะปัŒะฝั‹ั… ะธ ะพั‚ั€ะธั†ะฐั‚ะตะปัŒะฝั‹ั… ั‚ะตะบัั‚ะพะฒ: ```python >>> from transformers import pipeline # ะ’ั‹ะดะตะปะตะฝะธะต ะบะพะฝะฒะตะนะตั€ะฐ ะดะปั ะฐะฝะฐะปะธะทะฐ ะฝะฐัั‚ั€ะพะตะฝะธะน >>> classifier = pipeline('sentiment-analysis') >>> classifier('ะœั‹ ะพั‡ะตะฝัŒ ั€ะฐะดั‹ ะฟั€ะตะดัั‚ะฐะฒะธั‚ัŒ ะบะพะฝะฒะตะนะตั€ ะฒ transformers.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` ะ’ั‚ะพั€ะฐั ัั‚ั€ะพะบะฐ ะบะพะดะฐ ะทะฐะณั€ัƒะถะฐะตั‚ ะธ ะบััˆะธั€ัƒะตั‚ ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝะพ ะพะฑัƒั‡ะตะฝะฝัƒัŽ ะผะพะดะตะปัŒ, ะธัะฟะพะปัŒะทัƒะตะผัƒัŽ ะบะพะฝะฒะตะนะตั€ะพะผ, ะฐ ั‚ั€ะตั‚ัŒั ะพั†ะตะฝะธะฒะฐะตั‚ ะตะต ะฝะฐ ะทะฐะดะฐะฝะฝะพะผ ั‚ะตะบัั‚ะต. ะ—ะดะตััŒ ะพั‚ะฒะตั‚ "POSITIVE" ั ัƒะฒะตั€ะตะฝะฝะพัั‚ัŒัŽ 99,97%. ะ’ะพ ะผะฝะพะณะธั… ะทะฐะดะฐั‡ะฐั…, ะบะฐะบ ะฒ ะะ›ะŸ, ั‚ะฐะบ ะธ ะฒ ะบะพะผะฟัŒัŽั‚ะตั€ะฝะพะผ ะทั€ะตะฝะธะธ ะธ ั€ะตั‡ะธ, ัƒะถะต ะตัั‚ัŒ ะณะพั‚ะพะฒั‹ะน `pipeline`. ะะฐะฟั€ะธะผะตั€, ะผั‹ ะผะพะถะตะผ ะปะตะณะบะพ ะธะทะฒะปะตั‡ัŒ ะพะฑะฝะฐั€ัƒะถะตะฝะฝั‹ะต ะพะฑัŠะตะบั‚ั‹ ะฝะฐ ะธะทะพะฑั€ะฐะถะตะฝะธะธ: ``` python >>> import requests >>> from PIL import Image >>> from transformers import pipeline # ะกะบะฐั‡ะธะฒะฐะตะผ ะธะทะพะฑั€ะฐะถะตะฝะธะต ั ะผะธะปั‹ะผะธ ะบะพั‚ะธะบะฐะผะธ >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" >>> image_data = requests.get(url, stream=True).raw >>> image = Image.open(image_data) # ะ’ั‹ะดะตะปะตะฝะธะต ะบะพะฝะฒะตะนะตั€ะฐ ะดะปั ะพะฑะฝะฐั€ัƒะถะตะฝะธั ะพะฑัŠะตะบั‚ะพะฒ >>> object_detector = pipeline('object-detection') >>> object_detector(image) [{'score': 0.9982201457023621, 'label': 'remote', 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, {'score': 0.9960021376609802, 'label': 'remote', 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, {'score': 0.9954745173454285, 'label': 'couch', 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, {'score': 0.9988006353378296, 'label': 'cat', 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, {'score': 0.9986783862113953, 'label': 'cat', 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] ``` ะ—ะดะตััŒ ะผั‹ ะฟะพะปัƒั‡ะฐะตะผ ัะฟะธัะพะบ ะพะฑัŠะตะบั‚ะพะฒ, ะพะฑะฝะฐั€ัƒะถะตะฝะฝั‹ั… ะฝะฐ ะธะทะพะฑั€ะฐะถะตะฝะธะธ, ั ั€ะฐะผะบะพะน ะฒะพะบั€ัƒะณ ะพะฑัŠะตะบั‚ะฐ ะธ ะพั†ะตะฝะบะพะน ะดะพัั‚ะพะฒะตั€ะฝะพัั‚ะธ. 
ะกะปะตะฒะฐ - ะธัั…ะพะดะฝะพะต ะธะทะพะฑั€ะฐะถะตะฝะธะต, ัะฟั€ะฐะฒะฐ ะฟั€ะพะณะฝะพะทั‹: <h3 align="center"> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> </h3> ะŸะพะดั€ะพะฑะฝะตะต ะพ ะทะฐะดะฐั‡ะฐั…, ะฟะพะดะดะตั€ะถะธะฒะฐะตะผั‹ั… API `pipeline`, ะผะพะถะฝะพ ัƒะทะฝะฐั‚ัŒ ะฒ [ัั‚ะพะผ ัƒั‡ะตะฑะฝะพะผ ะฟะพัะพะฑะธะธ](https://huggingface.co/docs/transformers/task_sum) ะ’ ะดะพะฟะพะปะฝะตะฝะธะต ะบ `pipeline`, ะดะปั ะทะฐะณั€ัƒะทะบะธ ะธ ะธัะฟะพะปัŒะทะพะฒะฐะฝะธั ะปัŽะฑะพะน ะธะท ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝะพ ะพะฑัƒั‡ะตะฝะฝั‹ั… ะผะพะดะตะปะตะน ะฒ ะทะฐะดะฐะฝะฝะพะน ะทะฐะดะฐั‡ะต ะดะพัั‚ะฐั‚ะพั‡ะฝะพ ั‚ั€ะตั… ัั‚ั€ะพะบ ะบะพะดะฐ. ะ’ะพั‚ ะฒะตั€ัะธั ะดะปั PyTorch: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("ะŸั€ะธะฒะตั‚ ะผะธั€!", return_tensors="pt") >>> outputs = model(**inputs) ``` ะ ะฒะพั‚ ัะบะฒะธะฒะฐะปะตะฝั‚ะฝั‹ะน ะบะพะด ะดะปั TensorFlow: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("ะŸั€ะธะฒะตั‚ ะผะธั€!", return_tensors="tf") >>> outputs = model(**inputs) ``` ะขะพะบะตะฝะธะทะฐั‚ะพั€ ะพั‚ะฒะตั‡ะฐะตั‚ ะทะฐ ะฒััŽ ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝัƒัŽ ะพะฑั€ะฐะฑะพั‚ะบัƒ, ะบะพั‚ะพั€ัƒัŽ ะพะถะธะดะฐะตั‚ ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝะพ ะพะฑัƒั‡ะตะฝะฝะฐั ะผะพะดะตะปัŒ, ะธ ะผะพะถะตั‚ ะฑั‹ั‚ัŒ ะฒั‹ะทะฒะฐะฝ ะฝะตะฟะพัั€ะตะดัั‚ะฒะตะฝะฝะพ ั ะฟะพะผะพั‰ัŒัŽ ะพะดะฝะพะน ัั‚ั€ะพะบะธ (ะบะฐะบ ะฒ ะฟั€ะธะฒะตะดะตะฝะฝั‹ั… ะฒั‹ัˆะต ะฟั€ะธะผะตั€ะฐั…) ะธะปะธ ะฝะฐ ัะฟะธัะบะต. ะ’ ั€ะตะทัƒะปัŒั‚ะฐั‚ะต ะฑัƒะดะตั‚ ะฟะพะปัƒั‡ะตะฝ ัะปะพะฒะฐั€ัŒ, ะบะพั‚ะพั€ั‹ะน ะผะพะถะฝะพ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ะฒ ะฟะพัะปะตะดัƒัŽั‰ะตะผ ะบะพะดะต ะธะปะธ ะฟั€ะพัั‚ะพ ะฝะฐะฟั€ัะผัƒัŽ ะฟะตั€ะตะดะฐั‚ัŒ ะฒ ะผะพะดะตะปัŒ ั ะฟะพะผะพั‰ัŒัŽ ะพะฟะตั€ะฐั‚ะพั€ะฐ ั€ะฐัะฟะฐะบะพะฒะบะธ ะฐั€ะณัƒะผะตะฝั‚ะพะฒ **. ะกะฐะผะฐ ะผะพะดะตะปัŒ ะฟั€ะตะดัั‚ะฐะฒะปัะตั‚ ัะพะฑะพะน ะพะฑั‹ั‡ะฝั‹ะน [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ะธะปะธ [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (ะฒ ะทะฐะฒะธัะธะผะพัั‚ะธ ะพั‚ ะธัะฟะพะปัŒะทัƒะตะผะพะณะพ ะฑัะบะตะฝะดะฐ), ะบะพั‚ะพั€ั‹ะน ะผะพะถะฝะพ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ะบะฐะบ ะพะฑั‹ั‡ะฝะพ. [ะ’ ัั‚ะพะผ ั€ัƒะบะพะฒะพะดัั‚ะฒะต](https://huggingface.co/docs/transformers/training) ั€ะฐััะบะฐะทั‹ะฒะฐะตั‚ัั, ะบะฐะบ ะธะฝั‚ะตะณั€ะธั€ะพะฒะฐั‚ัŒ ั‚ะฐะบัƒัŽ ะผะพะดะตะปัŒ ะฒ ะบะปะฐััะธั‡ะตัะบะธะน ั†ะธะบะป ะพะฑัƒั‡ะตะฝะธั PyTorch ะธะปะธ TensorFlow, ะธะปะธ ะบะฐะบ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ะฝะฐัˆ API `Trainer` ะดะปั ะฑั‹ัั‚ั€ะพะน ั‚ะพะฝะบะพะน ะฝะฐัั‚ั€ะพะนะบะธ ะฝะฐ ะฝะพะฒะพะผ ะดะฐั‚ะฐัะตั‚ะต. ## ะŸะพั‡ะตะผัƒ ะฝะตะพะฑั…ะพะดะธะผะพ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ transformers? 1. ะŸั€ะพัั‚ั‹ะต ะฒ ะธัะฟะพะปัŒะทะพะฒะฐะฝะธะธ ัะพะฒั€ะตะผะตะฝะฝั‹ะต ะผะพะดะตะปะธ: - ะ’ั‹ัะพะบะฐั ะฟั€ะพะธะทะฒะพะดะธั‚ะตะปัŒะฝะพัั‚ัŒ ะฒ ะทะฐะดะฐั‡ะฐั… ะฟะพะฝะธะผะฐะฝะธั ะธ ะณะตะฝะตั€ะฐั†ะธะธ ะตัั‚ะตัั‚ะฒะตะฝะฝะพะณะพ ัะทั‹ะบะฐ, ะบะพะผะฟัŒัŽั‚ะตั€ะฝะพะณะพ ะทั€ะตะฝะธั ะธ ะฐัƒะดะธะพ. - ะะธะทะบะธะน ะฒั…ะพะดะฝะพะน ะฑะฐั€ัŒะตั€ ะดะปั ะฟั€ะตะฟะพะดะฐะฒะฐั‚ะตะปะตะน ะธ ะฟั€ะฐะบั‚ะธะบะพะฒ. 
- ะะตะฑะพะปัŒัˆะพะต ะบะพะปะธั‡ะตัั‚ะฒะพ ะฐะฑัั‚ั€ะฐะบั†ะธะน ะดะปั ะฟะพะปัŒะทะพะฒะฐั‚ะตะปั ะธ ะฒัะตะณะพ ั‚ั€ะธ ะบะปะฐััะฐ ะดะปั ะธะทัƒั‡ะตะฝะธั. - ะ•ะดะธะฝั‹ะน API ะดะปั ะธัะฟะพะปัŒะทะพะฒะฐะฝะธั ะฒัะตั… ะฝะฐัˆะธั… ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝะพ ะพะฑัƒั‡ะตะฝะฝั‹ั… ะผะพะดะตะปะตะน. 1. ะ‘ะพะปะตะต ะฝะธะทะบะธะต ะฒั‹ั‡ะธัะปะธั‚ะตะปัŒะฝั‹ะต ะทะฐั‚ั€ะฐั‚ั‹, ะผะตะฝัŒัˆะธะน "ัƒะณะปะตั€ะพะดะฝั‹ะน ัะปะตะด": - ะ˜ััะปะตะดะพะฒะฐั‚ะตะปะธ ะผะพะณัƒั‚ ะพะฑะผะตะฝะธะฒะฐั‚ัŒัั ะพะฑัƒั‡ะตะฝะฝั‹ะผะธ ะผะพะดะตะปัะผะธ ะฒะผะตัั‚ะพ ั‚ะพะณะพ, ั‡ั‚ะพะฑั‹ ะฟะพัั‚ะพัะฝะฝะพ ะธั… ะฟะตั€ะตะพะฑัƒั‡ะฐั‚ัŒ. - ะŸั€ะฐะบั‚ะธะบะธ ะผะพะณัƒั‚ ัะพะบั€ะฐั‚ะธั‚ัŒ ะฒั€ะตะผั ะฒั‹ั‡ะธัะปะตะฝะธะน ะธ ะฟั€ะพะธะทะฒะพะดัั‚ะฒะตะฝะฝั‹ะต ะทะฐั‚ั€ะฐั‚ั‹. - ะ”ะตััั‚ะบะธ ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ ั ะฑะพะปะตะต ั‡ะตะผ 60 000 ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝะพ ะพะฑัƒั‡ะตะฝะฝั‹ั… ะผะพะดะตะปะตะน ะดะปั ะฒัะตั… ะผะพะดะฐะปัŒะฝะพัั‚ะตะน. 1. ะ’ั‹ะฑะพั€ ะฟะพะดั…ะพะดัั‰ะตะณะพ ั„ั€ะตะนะผะฒะพั€ะบะฐ ะดะปั ะบะฐะถะดะพะณะพ ัั‚ะฐะฟะฐ ะถะธะทะฝะธ ะผะพะดะตะปะธ: - ะžะฑัƒั‡ะตะฝะธะต ัะฐะผั‹ั… ัะพะฒั€ะตะผะตะฝะฝั‹ั… ะผะพะดะตะปะตะน ะทะฐ 3 ัั‚ั€ะพะบะธ ะบะพะดะฐ. - ะŸะตั€ะตะผะตั‰ะฐะนั‚ะต ะพะดะฝัƒ ะผะพะดะตะปัŒ ะผะตะถะดัƒ ั„ั€ะตะนะผะฒะพั€ะบะฐะผะธ TF2.0/PyTorch/JAX ะฟะพ ัะฒะพะตะผัƒ ัƒัะผะพั‚ั€ะตะฝะธัŽ. - ะ‘ะตัะฟั€ะตะฟัั‚ัั‚ะฒะตะฝะฝั‹ะน ะฒั‹ะฑะพั€ ะฟะพะดั…ะพะดัั‰ะตะณะพ ั„ั€ะตะนะผะฒะพั€ะบะฐ ะดะปั ะพะฑัƒั‡ะตะฝะธั, ะพั†ะตะฝะบะธ ะธ ะฟั€ะพะธะทะฒะพะดัั‚ะฒะฐ. 1. ะ›ะตะณะบะพ ะฝะฐัั‚ั€ะพะธั‚ัŒ ะผะพะดะตะปัŒ ะธะปะธ ะฟั€ะธะผะตั€ ะฟะพะด ัะฒะพะธ ะฝัƒะถะดั‹: - ะœั‹ ะฟั€ะตะดะพัั‚ะฐะฒะปัะตะผ ะฟั€ะธะผะตั€ั‹ ะดะปั ะบะฐะถะดะพะน ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ั‹, ั‡ั‚ะพะฑั‹ ะฒะพัะฟั€ะพะธะทะฒะตัั‚ะธ ั€ะตะทัƒะปัŒั‚ะฐั‚ั‹, ะพะฟัƒะฑะปะธะบะพะฒะฐะฝะฝั‹ะต ะธั… ะฐะฒั‚ะพั€ะฐะผะธ. - ะ’ะฝัƒั‚ั€ะตะฝะฝะธะต ะบะพะผะฟะพะฝะตะฝั‚ั‹ ะผะพะดะตะปะธ ั€ะฐัะบั€ั‹ะฒะฐัŽั‚ัั ะผะฐะบัะธะผะฐะปัŒะฝะพ ะฟะพัะปะตะดะพะฒะฐั‚ะตะปัŒะฝะพ. - ะคะฐะนะปั‹ ะผะพะดะตะปะตะน ะผะพะถะฝะพ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ะฝะตะทะฐะฒะธัะธะผะพ ะพั‚ ะฑะธะฑะปะธะพั‚ะตะบะธ ะดะปั ะฟั€ะพะฒะตะดะตะฝะธั ะฑั‹ัั‚ั€ั‹ั… ัะบัะฟะตั€ะธะผะตะฝั‚ะพะฒ. ## ะŸะพั‡ะตะผัƒ ั ะฝะต ะดะพะปะถะตะฝ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ transformers? - ะ”ะฐะฝะฝะฐั ะฑะธะฑะปะธะพั‚ะตะบะฐ ะฝะต ัะฒะปัะตั‚ัั ะผะพะดัƒะปัŒะฝั‹ะผ ะฝะฐะฑะพั€ะพะผ ัั‚ั€ะพะธั‚ะตะปัŒะฝั‹ั… ะฑะปะพะบะพะฒ ะดะปั ะฝะตะนั€ะพะฝะฝั‹ั… ัะตั‚ะตะน. ะšะพะด ะฒ ั„ะฐะนะปะฐั… ะผะพะดะตะปะตะน ัะฟะตั†ะธะฐะปัŒะฝะพ ะฝะต ั€ะตั„ะฐะบั‚ะพั€ะธั‚ัั ะดะพะฟะพะปะฝะธั‚ะตะปัŒะฝั‹ะผะธ ะฐะฑัั‚ั€ะฐะบั†ะธัะผะธ, ั‡ั‚ะพะฑั‹ ะธััะปะตะดะพะฒะฐั‚ะตะปะธ ะผะพะณะปะธ ะฑั‹ัั‚ั€ะพ ะธั‚ะตั€ะฐั‚ะธะฒะฝะพ ั€ะฐะฑะพั‚ะฐั‚ัŒ ั ะบะฐะถะดะพะน ะธะท ะผะพะดะตะปะตะน, ะฝะต ะฟะพะณั€ัƒะถะฐัััŒ ะฒ ะดะพะฟะพะปะฝะธั‚ะตะปัŒะฝั‹ะต ะฐะฑัั‚ั€ะฐะบั†ะธะธ/ั„ะฐะนะปั‹. - API ะพะฑัƒั‡ะตะฝะธั ะฝะต ะฟั€ะตะดะฝะฐะทะฝะฐั‡ะตะฝ ะดะปั ั€ะฐะฑะพั‚ั‹ ั ะปัŽะฑะพะน ะผะพะดะตะปัŒัŽ, ะฐ ะพะฟั‚ะธะผะธะทะธั€ะพะฒะฐะฝ ะดะปั ั€ะฐะฑะพั‚ั‹ ั ะผะพะดะตะปัะผะธ, ะฟั€ะตะดะพัั‚ะฐะฒะปัะตะผั‹ะผะธ ะฑะธะฑะปะธะพั‚ะตะบะพะน. ะ”ะปั ั€ะฐะฑะพั‚ั‹ ั ะพะฑั‰ะธะผะธ ั†ะธะบะปะฐะผะธ ะผะฐัˆะธะฝะฝะพะณะพ ะพะฑัƒั‡ะตะฝะธั ัะปะตะดัƒะตั‚ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ะดั€ัƒะณัƒัŽ ะฑะธะฑะปะธะพั‚ะตะบัƒ (ะฒะพะทะผะพะถะฝะพ, [Accelerate](https://huggingface.co/docs/accelerate)). - ะะตัะผะพั‚ั€ั ะฝะฐ ั‚ะพ, ั‡ั‚ะพ ะผั‹ ัั‚ั€ะตะผะธะผัั ะฟั€ะตะดัั‚ะฐะฒะธั‚ัŒ ะบะฐะบ ะผะพะถะฝะพ ะฑะพะปัŒัˆะต ะฟั€ะธะผะตั€ะพะฒ ะธัะฟะพะปัŒะทะพะฒะฐะฝะธั, ัะบั€ะธะฟั‚ั‹ ะฒ ะฝะฐัˆะตะน ะฟะฐะฟะบะต [ะฟั€ะธะผะตั€ะพะฒ](https://github.com/huggingface/transformers/tree/main/examples) ัะฒะปััŽั‚ัั ะธะผะตะฝะฝะพ ะฟั€ะธะผะตั€ะฐะผะธ. 
ะŸั€ะตะดะฟะพะปะฐะณะฐะตั‚ัั, ั‡ั‚ะพ ะพะฝะธ ะฝะต ะฑัƒะดัƒั‚ ั€ะฐะฑะพั‚ะฐั‚ัŒ "ะธะท ะบะพั€ะพะฑะบะธ" ะดะปั ั€ะตัˆะตะฝะธั ะฒะฐัˆะตะน ะบะพะฝะบั€ะตั‚ะฝะพะน ะทะฐะดะฐั‡ะธ, ะธ ะฒะฐะผ ะฟั€ะธะดะตั‚ัั ะธะทะผะตะฝะธั‚ัŒ ะฝะตัะบะพะปัŒะบะพ ัั‚ั€ะพะบ ะบะพะดะฐ, ั‡ั‚ะพะฑั‹ ะฐะดะฐะฟั‚ะธั€ะพะฒะฐั‚ัŒ ะธั… ะฟะพะด ัะฒะพะธ ะฝัƒะถะดั‹. ## ะฃัั‚ะฐะฝะพะฒะบะฐ ### ะก ะฟะพะผะพั‰ัŒัŽ pip ะ”ะฐะฝะฝั‹ะน ั€ะตะฟะพะทะธั‚ะพั€ะธะน ะฟั€ะพั‚ะตัั‚ะธั€ะพะฒะฐะฝ ะฝะฐ Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ ะธ TensorFlow 2.6+. ะฃัั‚ะฐะฝะฐะฒะปะธะฒะฐั‚ัŒ ๐Ÿค— Transformers ัะปะตะดัƒะตั‚ ะฒ [ะฒะธั€ั‚ัƒะฐะปัŒะฝะพะน ัั€ะตะดะต](https://docs.python.org/3/library/venv.html). ะ•ัะปะธ ะฒั‹ ะฝะต ะทะฝะฐะบะพะผั‹ ั ะฒะธั€ั‚ัƒะฐะปัŒะฝั‹ะผะธ ัั€ะตะดะฐะผะธ Python, ะพะทะฝะฐะบะพะผัŒั‚ะตััŒ ั [ั€ัƒะบะพะฒะพะดัั‚ะฒะพะผ ะฟะพะปัŒะทะพะฒะฐั‚ะตะปั](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). ะกะฝะฐั‡ะฐะปะฐ ัะพะทะดะฐะนั‚ะต ะฒะธั€ั‚ัƒะฐะปัŒะฝัƒัŽ ัั€ะตะดัƒ ั ั‚ะพะน ะฒะตั€ัะธะตะน Python, ะบะพั‚ะพั€ัƒัŽ ะฒั‹ ัะพะฑะธั€ะฐะตั‚ะตััŒ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ, ะธ ะฐะบั‚ะธะฒะธั€ัƒะนั‚ะต ะตะต. ะ—ะฐั‚ะตะผ ะฝะตะพะฑั…ะพะดะธะผะพ ัƒัั‚ะฐะฝะพะฒะธั‚ัŒ ั…ะพั‚ั ะฑั‹ ะพะดะธะฝ ะฑะตะบะตะฝะด ะธะท Flax, PyTorch ะธะปะธ TensorFlow. ะŸะพะถะฐะปัƒะนัั‚ะฐ, ะพะฑั€ะฐั‚ะธั‚ะตััŒ ะบ ัั‚ั€ะฐะฝะธั†ะฐะผ [TensorFlow ัƒัั‚ะฐะฝะพะฒะพั‡ะฝะฐั ัั‚ั€ะฐะฝะธั†ะฐ](https://www.tensorflow.org/install/), [PyTorch ัƒัั‚ะฐะฝะพะฒะพั‡ะฝะฐั ัั‚ั€ะฐะฝะธั†ะฐ](https://pytorch.org/get-started/locally/#start-locally) ะธ/ะธะปะธ [Flax](https://github.com/google/flax#quick-install) ะธ [Jax](https://github.com/google/jax#installation), ะณะดะต ะพะฟะธัะฐะฝั‹ ะบะพะผะฐะฝะดั‹ ัƒัั‚ะฐะฝะพะฒะบะธ ะดะปั ะฒะฐัˆะตะน ะฟะปะฐั‚ั„ะพั€ะผั‹. ะŸะพัะปะต ัƒัั‚ะฐะฝะพะฒะบะธ ะพะดะฝะพะณะพ ะธะท ัั‚ะธั… ะฑัะบะตะฝะดะพะฒ ๐Ÿค— Transformers ะผะพะถะตั‚ ะฑั‹ั‚ัŒ ัƒัั‚ะฐะฝะพะฒะปะตะฝ ั ะฟะพะผะพั‰ัŒัŽ pip ัะปะตะดัƒัŽั‰ะธะผ ะพะฑั€ะฐะทะพะผ: ```bash pip install transformers ``` ะ•ัะปะธ ะฒั‹ ั…ะพั‚ะธั‚ะต ะฟะพะธะณั€ะฐั‚ัŒ ั ะฟั€ะธะผะตั€ะฐะผะธ ะธะปะธ ะฒะฐะผ ะฝัƒะถะตะฝ ัะฐะผั‹ะน ัะพะฒั€ะตะผะตะฝะฝั‹ะน ะบะพะด ะธ ะฒั‹ ะฝะต ะผะพะถะตั‚ะต ะถะดะฐั‚ัŒ ะฝะพะฒะพะณะพ ั€ะตะปะธะทะฐ, ะฒั‹ ะดะพะปะถะฝั‹ [ัƒัั‚ะฐะฝะพะฒะธั‚ัŒ ะฑะธะฑะปะธะพั‚ะตะบัƒ ะธะท ะธัั…ะพะดะฝะพะณะพ ะบะพะดะฐ](https://huggingface.co/docs/transformers/installation#installing-from-source). ### ะก ะฟะพะผะพั‰ัŒัŽ conda ะฃัั‚ะฐะฝะพะฒะธั‚ัŒ Transformers ั ะฟะพะผะพั‰ัŒัŽ conda ะผะพะถะฝะพ ัะปะตะดัƒัŽั‰ะธะผ ะพะฑั€ะฐะทะพะผ: ```bash conda install conda-forge::transformers ``` > **_ะ—ะะœะ•ะขะšะ:_** ะฃัั‚ะฐะฝะพะฒะบะฐ `transformers` ั‡ะตั€ะตะท ะบะฐะฝะฐะป `huggingface` ัƒัั‚ะฐั€ะตะปะฐ. ะž ั‚ะพะผ, ะบะฐะบ ัƒัั‚ะฐะฝะพะฒะธั‚ัŒ Flax, PyTorch ะธะปะธ TensorFlow ั ะฟะพะผะพั‰ัŒัŽ conda, ั‡ะธั‚ะฐะนั‚ะต ะฝะฐ ัั‚ั€ะฐะฝะธั†ะฐั…, ะฟะพัะฒัั‰ะตะฝะฝั‹ั… ะธั… ัƒัั‚ะฐะฝะพะฒะบะต. > **_ะ—ะะœะ•ะขะšะ:_** ะ’ ะพะฟะตั€ะฐั†ะธะพะฝะฝะพะน ัะธัั‚ะตะผะต Windows ะฒะฐะผ ะผะพะถะตั‚ ะฑั‹ั‚ัŒ ะฟั€ะตะดะปะพะถะตะฝะพ ะฐะบั‚ะธะฒะธั€ะพะฒะฐั‚ัŒ ั€ะตะถะธะผ ั€ะฐะทั€ะฐะฑะพั‚ั‡ะธะบะฐ, ั‡ั‚ะพะฑั‹ ะฒะพัะฟะพะปัŒะทะพะฒะฐั‚ัŒัั ะฟั€ะตะธะผัƒั‰ะตัั‚ะฒะฐะผะธ ะบััˆะธั€ะพะฒะฐะฝะธั. ะ•ัะปะธ ะดะปั ะฒะฐั ัั‚ะพ ะฝะตะฒะพะทะผะพะถะฝะพ, ัะพะพะฑั‰ะธั‚ะต ะฝะฐะผ ะพะฑ ัั‚ะพะผ [ะทะดะตััŒ](https://github.com/huggingface/huggingface_hub/issues/1062). 
## ะœะพะดะตะปัŒะฝั‹ะต ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ั‹ **[ะ’ัะต ะบะพะฝั‚ั€ะพะปัŒะฝั‹ะต ั‚ะพั‡ะบะธ ะผะพะดะตะปะตะน](https://huggingface.co/models)**, ะฟั€ะตะดะพัั‚ะฐะฒะปัะตะผั‹ะต ๐Ÿค— Transformers, ะฑะตัะฟั€ะตะฟัั‚ัั‚ะฒะตะฝะฝะพ ะธะฝั‚ะตะณั€ะธั€ัƒัŽั‚ัั ั huggingface.co [model hub](https://huggingface.co/models), ะบัƒะดะฐ ะพะฝะธ ะทะฐะณั€ัƒะถะฐัŽั‚ัั ะฝะตะฟะพัั€ะตะดัั‚ะฒะตะฝะฝะพ [ะฟะพะปัŒะทะพะฒะฐั‚ะตะปัะผะธ](https://huggingface.co/users) ะธ [ะพั€ะณะฐะฝะธะทะฐั†ะธัะผะธ](https://huggingface.co/organizations). ะขะตะบัƒั‰ะตะต ะบะพะปะธั‡ะตัั‚ะฒะพ ะบะพะฝั‚ั€ะพะปัŒะฝั‹ั… ั‚ะพั‡ะตะบ: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— ะ’ ะฝะฐัั‚ะพัั‰ะตะต ะฒั€ะตะผั Transformers ะฟั€ะตะดะพัั‚ะฐะฒะปัะตั‚ ัะปะตะดัƒัŽั‰ะธะต ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ั‹: ะฟะพะดั€ะพะฑะฝะพะต ะพะฟะธัะฐะฝะธะต ะบะฐะถะดะพะน ะธะท ะฝะธั… ัะผ. [ะทะดะตััŒ](https://huggingface.co/docs/transformers/model_summary). ะงั‚ะพะฑั‹ ะฟั€ะพะฒะตั€ะธั‚ัŒ, ะตัั‚ัŒ ะปะธ ัƒ ะบะฐะถะดะพะน ะผะพะดะตะปะธ ั€ะตะฐะปะธะทะฐั†ะธั ะฝะฐ Flax, PyTorch ะธะปะธ TensorFlow, ะธะปะธ ัะฒัะทะฐะฝะฝั‹ะน ั ะฝะตะน ั‚ะพะบะตะฝะธะทะฐั‚ะพั€, ะฟะพะดะดะตั€ะถะธะฒะฐะตะผั‹ะน ะฑะธะฑะปะธะพั‚ะตะบะพะน ๐Ÿค— Tokenizers, ะพะฑั€ะฐั‚ะธั‚ะตััŒ ะบ [ัั‚ะพะน ั‚ะฐะฑะปะธั†ะต](https://huggingface.co/docs/transformers/index#supported-frameworks). ะญั‚ะธ ั€ะตะฐะปะธะทะฐั†ะธะธ ะฑั‹ะปะธ ะฟั€ะพั‚ะตัั‚ะธั€ะพะฒะฐะฝั‹ ะฝะฐ ะฝะตัะบะพะปัŒะบะธั… ะฝะฐะฑะพั€ะฐั… ะดะฐะฝะฝั‹ั… (ัะผ. ะฟั€ะธะผะตั€ั‹ ัะบั€ะธะฟั‚ะพะฒ) ะธ ะดะพะปะถะฝั‹ ัะพะพั‚ะฒะตั‚ัั‚ะฒะพะฒะฐั‚ัŒ ะฟั€ะพะธะทะฒะพะดะธั‚ะตะปัŒะฝะพัั‚ะธ ะพั€ะธะณะธะฝะฐะปัŒะฝั‹ั… ั€ะตะฐะปะธะทะฐั†ะธะน. ะ‘ะพะปะตะต ะฟะพะดั€ะพะฑะฝัƒัŽ ะธะฝั„ะพั€ะผะฐั†ะธัŽ ะพ ะฟั€ะพะธะทะฒะพะดะธั‚ะตะปัŒะฝะพัั‚ะธ ะผะพะถะฝะพ ะฝะฐะนั‚ะธ ะฒ ั€ะฐะทะดะตะปะต "ะŸั€ะธะผะตั€ั‹" [ะดะพะบัƒะผะตะฝั‚ะฐั†ะธะธ](https://github.com/huggingface/transformers/tree/main/examples). ## ะฃะทะฝะฐั‚ัŒ ะฑะพะปัŒัˆะต | ะกะตะบั†ะธั | ะžะฟะธัะฐะฝะธะต | |-|-| | [ะ”ะพะบัƒะผะตะฝั‚ะฐั†ะธั](https://huggingface.co/docs/transformers/) | ะŸะพะปะฝะฐั ะดะพะบัƒะผะตะฝั‚ะฐั†ะธั ะฟะพ API ะธ ะณะฐะนะดั‹ | | [ะšั€ะฐั‚ะบะธะต ะพะฟะธัะฐะฝะธั ะทะฐะดะฐั‡](https://huggingface.co/docs/transformers/task_summary) | ะ—ะฐะดะฐั‡ะธ, ะฟะพะดะดะตั€ะถะธะฒะฐะตะผั‹ะต ๐Ÿค— Transformers | | [ะŸะพัะพะฑะธะต ะฟะพ ะฟั€ะตะดะฒะฐั€ะธั‚ะตะปัŒะฝะพะน ะพะฑั€ะฐะฑะพั‚ะบะต](https://huggingface.co/docs/transformers/preprocessing) | ะ˜ัะฟะพะปัŒะทะพะฒะฐะฝะธะต ะบะปะฐััะฐ `Tokenizer` ะดะปั ะฟะพะดะณะพั‚ะพะฒะบะธ ะดะฐะฝะฝั‹ั… ะดะปั ะผะพะดะตะปะตะน | | [ะžะฑัƒั‡ะตะฝะธะต ะธ ะดะพั€ะฐะฑะพั‚ะบะฐ](https://huggingface.co/docs/transformers/training) | ะ˜ัะฟะพะปัŒะทะพะฒะฐะฝะธะต ะผะพะดะตะปะตะน, ะฟั€ะตะดะพัั‚ะฐะฒะปัะตะผั‹ั… ๐Ÿค— Transformers, ะฒ ั†ะธะบะปะต ะพะฑัƒั‡ะตะฝะธั PyTorch/TensorFlow ะธ API `Trainer`.
| | [ะ‘ั‹ัั‚ั€ั‹ะน ั‚ัƒั€: ะขะพะฝะบะฐั ะฝะฐัั‚ั€ะพะนะบะฐ/ัะบั€ะธะฟั‚ั‹ ะธัะฟะพะปัŒะทะพะฒะฐะฝะธั](https://github.com/huggingface/transformers/tree/main/examples) | ะŸั€ะธะผะตั€ั‹ ัะบั€ะธะฟั‚ะพะฒ ะดะปั ั‚ะพะฝะบะพะน ะฝะฐัั‚ั€ะพะนะบะธ ะผะพะดะตะปะตะน ะฝะฐ ัˆะธั€ะพะบะพะผ ัะฟะตะบั‚ั€ะต ะทะฐะดะฐั‡ | | [ะกะพะฒะผะตัั‚ะฝะพะต ะธัะฟะพะปัŒะทะพะฒะฐะฝะธะต ะธ ะทะฐะณั€ัƒะทะบะฐ ะผะพะดะตะปะตะน](https://huggingface.co/docs/transformers/model_sharing) | ะ—ะฐะณั€ัƒะถะฐะนั‚ะต ะธ ะดะตะปะธั‚ะตััŒ ั ัะพะพะฑั‰ะตัั‚ะฒะพะผ ัะฒะพะธะผะธ ะดะพั€ะฐะฑะพั‚ะฐะฝะฝั‹ะผะธ ะผะพะดะตะปัะผะธ | ## ะฆะธั‚ะธั€ะพะฒะฐะฝะธะต ะขะตะฟะตั€ัŒ ัƒ ะฝะฐั ะตัั‚ัŒ [ัั‚ะฐั‚ัŒั](https://www.aclweb.org/anthology/2020.emnlp-demos.6/), ะบะพั‚ะพั€ัƒัŽ ะผะพะถะฝะพ ั†ะธั‚ะธั€ะพะฒะฐั‚ัŒ ะดะปั ะฑะธะฑะปะธะพั‚ะตะบะธ ๐Ÿค— Transformers: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/README_zh-hant.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!--- A useful guide for English-Traditional Chinese translation of Hugging Face documentation - Add space around English words and numbers when they appear between Chinese characters. E.g., ๅ…ฑ 100 ๅคš็จฎ่ชž่จ€; ไฝฟ็”จ transformers ๅ‡ฝๅผๅบซใ€‚ - Use square quotes, e.g.,ใ€Œๅผ•็”จใ€ - Some of terms in the file can be found at National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese. Dictionary API: API (ไธ็ฟป่ญฏ๏ผ‰ add: ๅŠ ๅ…ฅ checkpoint: ๆชขๆŸฅ้ปž code: ็จ‹ๅผ็ขผ community: ็คพ็พค confidence: ไฟก่ณดๅบฆ dataset: ่ณ‡ๆ–™้›† documentation: ๆ–‡ไปถ example: ๅŸบๆœฌ็ฟป่ญฏ็‚บใ€Œ็ฏ„ไพ‹ใ€๏ผŒๆˆ–ไพ่ชžๆ„็ฟป็‚บใ€Œไพ‹ๅญใ€ finetune: ๅพฎ่ชฟ Hugging Face: Hugging Face๏ผˆไธ็ฟป่ญฏ๏ผ‰ implementation: ๅฏฆไฝœ inference: ๆŽจ่ซ– library: ๅ‡ฝๅผๅบซ module: ๆจก็ต„ NLP/Natural Language Processing: ไปฅ NLP ๅ‡บ็พๆ™‚ไธ็ฟป่ญฏ๏ผŒไปฅ Natural Language Processing ๅ‡บ็พๆ™‚็ฟป่ญฏ็‚บ่‡ช็„ถ่ชž่จ€่™•็† online demos: ็ทšไธŠDemo pipeline: pipeline๏ผˆไธ็ฟป่ญฏ๏ผ‰ pretrained/pretrain: ้ ่จ“็ทด Python data structures (e.g., list, set, dict): ็ฟป่ญฏ็‚บไธฒๅˆ—๏ผŒ้›†ๅˆ๏ผŒๅญ—ๅ…ธ๏ผŒไธฆ็”จๆ‹ฌ่™Ÿๆจ™่จปๅŽŸ่‹ฑๆ–‡ repository: repository๏ผˆไธ็ฟป่ญฏ๏ผ‰ summary: ๆฆ‚่ฆฝ token-: token-๏ผˆไธ็ฟป่ญฏ๏ผ‰ Trainer: Trainer๏ผˆไธ็ฟป่ญฏ๏ผ‰ transformer: transformer๏ผˆไธ็ฟป่ญฏ๏ผ‰ tutorial: ๆ•™ๅญธ user: ไฝฟ็”จ่€… --> <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> <br> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <b>็น้ซ”ไธญๆ–‡</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a 
href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | </p> </h4> <h3 align="center"> <p>็‚บ Jaxใ€PyTorch ไปฅๅŠ TensorFlow ๆ‰“้€ ็š„ๅ…ˆ้€ฒ่‡ช็„ถ่ชž่จ€่™•็†ๅ‡ฝๅผๅบซ</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค— Transformers ๆไพ›ไบ†ๆ•ธไปฅๅƒ่จˆ็š„้ ่จ“็ทดๆจกๅž‹๏ผŒๆ”ฏๆด 100 ๅคš็จฎ่ชž่จ€็š„ๆ–‡ๆœฌๅˆ†้กžใ€่ณ‡่จŠๆ“ทๅ–ใ€ๅ•็ญ”ใ€ๆ‘˜่ฆใ€็ฟป่ญฏใ€ๆ–‡ๆœฌ็”Ÿๆˆใ€‚ๅฎƒ็š„ๅฎ—ๆ—จๆ˜ฏ่ฎ“ๆœ€ๅ…ˆ้€ฒ็š„ NLP ๆŠ€่ก“ไบบไบบๆ˜“็”จใ€‚ ๐Ÿค— Transformers ๆไพ›ไบ†ไพฟๆ–ผๅฟซ้€Ÿไธ‹่ผ‰ๅ’Œไฝฟ็”จ็š„API๏ผŒ่ฎ“ไฝ ๅฏไปฅๅฐ‡้ ่จ“็ทดๆจกๅž‹็”จๅœจ็ตฆๅฎšๆ–‡ๆœฌใ€ๅœจไฝ ็š„่ณ‡ๆ–™้›†ไธŠๅพฎ่ชฟ็„ถๅพŒ็ถ“็”ฑ [model hub](https://huggingface.co/models) ่ˆ‡็คพ็พคๅ…ฑไบซใ€‚ๅŒๆ™‚๏ผŒๆฏๅ€‹ๅฎš็พฉ็š„ Python ๆจก็ต„ๆžถๆง‹ๅ‡ๅฎŒๅ…จ็จ็ซ‹๏ผŒๆ–นไพฟไฟฎๆ”นๅ’Œๅฟซ้€Ÿ็ ”็ฉถๅฏฆ้ฉ—ใ€‚ ๐Ÿค— Transformers ๆ”ฏๆดไธ‰ๅ€‹ๆœ€็†ฑ้–€็š„ๆทฑๅบฆๅญธ็ฟ’ๅ‡ฝๅผๅบซ๏ผš [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) ไปฅๅŠ [TensorFlow](https://www.tensorflow.org/) โ€” ไธฆ่ˆ‡ไน‹ๅฎŒ็พŽๆ•ดๅˆใ€‚ไฝ ๅฏไปฅ็›ดๆŽฅไฝฟ็”จๅ…ถไธญไธ€ๅ€‹ๆก†ๆžถ่จ“็ทดไฝ ็š„ๆจกๅž‹๏ผŒ็„ถๅพŒ็”จๅฆไธ€ๅ€‹่ผ‰ๅ…ฅๅ’ŒๆŽจ่ซ–ใ€‚ ## ็ทšไธŠDemo ไฝ ๅฏไปฅ็›ดๆŽฅๅœจ [model hub](https://huggingface.co/models) ไธŠๆธฌ่ฉฆๅคงๅคšๆ•ธ็š„ๆจกๅž‹ใ€‚ๆˆ‘ๅ€‘ไนŸๆไพ›ไบ† [็งๆœ‰ๆจกๅž‹่จ—็ฎกใ€ๆจกๅž‹็‰ˆๆœฌ็ฎก็†ไปฅๅŠๆŽจ่ซ–API](https://huggingface.co/pricing)ใ€‚ ้€™่ฃกๆ˜ฏไธ€ไบ›็ฏ„ไพ‹๏ผš - [็”จ BERT ๅš้ฎ่“‹ๅกซ่ฉž](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [็”จ Electra ๅšๅฐˆๆœ‰ๅ่ฉž่พจ่ญ˜](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [็”จ GPT-2 ๅšๆ–‡ๆœฌ็”Ÿๆˆ](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+) - [็”จ RoBERTa ๅš่‡ช็„ถ่ชž่จ€ๆŽจ่ซ–](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [็”จ BART 
ๅšๆ–‡ๆœฌๆ‘˜่ฆ](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [็”จ DistilBERT ๅšๅ•็ญ”](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [็”จ T5 ๅš็ฟป่ญฏ](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) **[Write With Transformer](https://transformer.huggingface.co)**๏ผŒ็”ฑ Hugging Face ๅœ˜้šŠๆ‰€ๆ‰“้€ ๏ผŒๆ˜ฏไธ€ๅ€‹ๆ–‡ๆœฌ็”Ÿๆˆ็š„ๅฎ˜ๆ–น demoใ€‚ ## ๅฆ‚ๆžœไฝ ๅœจๅฐ‹ๆ‰พ็”ฑ Hugging Face ๅœ˜้šŠๆ‰€ๆไพ›็š„ๅฎข่ฃฝๅŒ–ๆ”ฏๆดๆœๅ‹™ <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## ๅฟซ้€ŸไธŠๆ‰‹ ๆˆ‘ๅ€‘็‚บๅฟซ้€Ÿไฝฟ็”จๆจกๅž‹ๆไพ›ไบ† `pipeline` APIใ€‚ Pipeline ๅŒ…ๅซไบ†้ ่จ“็ทดๆจกๅž‹ๅ’Œๅฐๆ‡‰็š„ๆ–‡ๆœฌ้ ่™•็†ใ€‚ไธ‹้ขๆ˜ฏไธ€ๅ€‹ๅฟซ้€Ÿไฝฟ็”จ pipeline ๅŽปๅˆคๆ–ทๆญฃ่ฒ ้ขๆƒ…็ท’็š„ไพ‹ๅญ๏ผš ```python >>> from transformers import pipeline # ไฝฟ็”จๆƒ…็ท’ๅˆ†ๆž pipeline >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` ็ฌฌไบŒ่กŒ็จ‹ๅผ็ขผไธ‹่ผ‰ไธฆๅฟซๅ– pipeline ไฝฟ็”จ็š„้ ่จ“็ทดๆจกๅž‹๏ผŒ่€Œ็ฌฌไธ‰่กŒ็จ‹ๅผ็ขผๅ‰‡ๅœจ็ตฆๅฎš็š„ๆ–‡ๆœฌไธŠ้€ฒ่กŒไบ†่ฉ•ไผฐใ€‚้€™่ฃก็š„็ญ”ๆกˆโ€œๆญฃ้ขโ€ (positive) ๅ…ทๆœ‰ 99.97% ็š„ไฟก่ณดๅบฆใ€‚ ่จฑๅคš็š„ NLP ไปปๅ‹™้ƒฝๆœ‰้šจ้ธๅณ็”จ็š„้ ่จ“็ทด 
`pipeline`ใ€‚ไพ‹ๅฆ‚๏ผŒๆˆ‘ๅ€‘ๅฏไปฅ่ผ•้ฌ†ๅœฐๅพž็ตฆๅฎšๆ–‡ๆœฌไธญๆ“ทๅ–ๅ•้กŒ็ญ”ๆกˆ๏ผš ``` python >>> from transformers import pipeline # ไฝฟ็”จๅ•็ญ” pipeline >>> question_answerer = pipeline('question-answering') >>> question_answerer({ ... 'question': 'What is the name of the repository ?', ... 'context': 'Pipeline has been included in the huggingface/transformers repository' ... }) {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} ``` ้™คไบ†ๆไพ›ๅ•้กŒ่งฃ็ญ”๏ผŒ้ ่จ“็ทดๆจกๅž‹้‚„ๆไพ›ไบ†ๅฐๆ‡‰็š„ไฟก่ณดๅบฆๅˆ†ๆ•ธไปฅๅŠ่งฃ็ญ”ๅœจ tokenized ๅพŒ็š„ๆ–‡ๆœฌไธญ้–‹ๅง‹ๅ’Œ็ตๆŸ็š„ไฝ็ฝฎใ€‚ไฝ ๅฏไปฅๅพž[้€™ๅ€‹ๆ•™ๅญธ](https://huggingface.co/docs/transformers/task_summary)ไบ†่งฃๆ›ดๅคš `pipeline` APIๆ”ฏๆด็š„ไปปๅ‹™ใ€‚ ่ฆๅœจไฝ ็š„ไปปๅ‹™ไธญไธ‹่ผ‰ๅ’Œไฝฟ็”จไปปไฝ•้ ่จ“็ทดๆจกๅž‹ๅพˆ็ฐกๅ–ฎ๏ผŒๅช้œ€ไธ‰่กŒ็จ‹ๅผ็ขผใ€‚้€™่ฃกๆ˜ฏ PyTorch ็‰ˆ็š„็ฏ„ไพ‹๏ผš ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` ้€™่ฃกๆ˜ฏๅฐๆ‡‰็š„ TensorFlow ็จ‹ๅผ็ขผ๏ผš ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` Tokenizer ็‚บๆ‰€ๆœ‰็š„้ ่จ“็ทดๆจกๅž‹ๆไพ›ไบ†้ ่™•็†๏ผŒไธฆๅฏไปฅ็›ดๆŽฅ่ฝ‰ๆ›ๅ–ฎไธ€ๅญ—ไธฒ๏ผˆๆฏ”ๅฆ‚ไธŠ้ข็š„ไพ‹ๅญ๏ผ‰ๆˆ–ไธฒๅˆ— (list)ใ€‚ๅฎƒๆœƒ่ผธๅ‡บไธ€ๅ€‹็š„ๅญ—ๅ…ธ (dict) ่ฎ“ไฝ ๅฏไปฅๅœจไธ‹ๆธธ็จ‹ๅผ็ขผ่ฃกไฝฟ็”จๆˆ–็›ดๆŽฅ่—‰็”ฑ `**` ้‹็ฎ—ๅผๅ‚ณ็ตฆๆจกๅž‹ใ€‚ ๆจกๅž‹ๆœฌ่บซๆ˜ฏไธ€ๅ€‹ๅธธ่ฆ็š„ [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ๆˆ– [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)๏ผˆๅ–ๆฑบๆ–ผไฝ ็š„ๅพŒ็ซฏ๏ผ‰๏ผŒๅฏไพๅธธ่ฆๆ–นๅผไฝฟ็”จใ€‚ [้€™ๅ€‹ๆ•™ๅญธ](https://huggingface.co/transformers/training.html)่งฃ้‡‹ไบ†ๅฆ‚ไฝ•ๅฐ‡้€™ๆจฃ็š„ๆจกๅž‹ๆ•ดๅˆๅˆฐไธ€่ˆฌ็š„ PyTorch ๆˆ– TensorFlow ่จ“็ทด่ฟดๅœˆไธญ๏ผŒๆˆ–ๆ˜ฏๅฆ‚ไฝ•ไฝฟ็”จๆˆ‘ๅ€‘็š„ `Trainer` API ๅœจไธ€ๅ€‹ๆ–ฐ็š„่ณ‡ๆ–™้›†ไธŠๅฟซ้€Ÿ้€ฒ่กŒๅพฎ่ชฟใ€‚ ## ็‚บไป€้บผ่ฆ็”จ transformers๏ผŸ 1. ไพฟๆ–ผไฝฟ็”จ็š„ๅ…ˆ้€ฒๆจกๅž‹๏ผš - NLU ๅ’Œ NLG ไธŠๆ€ง่ƒฝๅ“่ถŠ - ๅฐๆ•™ๅญธๅ’Œๅฏฆไฝœๅ‹ๅฅฝไธ”ไฝŽ้–€ๆชป - ้ซ˜ๅบฆๆŠฝ่ฑก๏ผŒไฝฟ็”จ่€…ๅช้ ˆๅญธ็ฟ’ 3 ๅ€‹้กžๅˆฅ - ๅฐๆ‰€ๆœ‰ๆจกๅž‹ไฝฟ็”จ็š„ๅˆถๅผๅŒ–API 1. ๆ›ดไฝŽ็š„้‹็ฎ—ๆˆๆœฌ๏ผŒๆ›ดๅฐ‘็š„็ขณๆŽ’ๆ”พ๏ผš - ็ ”็ฉถไบบๅ“กๅฏไปฅๅˆ†ไบซๅทฒ่จ“็ทด็š„ๆจกๅž‹่€Œ้žๆฏๆฌกๅพž้ ญ้–‹ๅง‹่จ“็ทด - ๅทฅ็จ‹ๅธซๅฏไปฅๆธ›ๅฐ‘่จˆ็ฎ—ๆ™‚้–“ไปฅๅŠ็”Ÿ็”ขๆˆๆœฌ - ๆ•ธๅ็จฎๆจกๅž‹ๆžถๆง‹ใ€ๅ…ฉๅƒๅคšๅ€‹้ ่จ“็ทดๆจกๅž‹ใ€100ๅคš็จฎ่ชž่จ€ๆ”ฏๆด 1. ๅฐๆ–ผๆจกๅž‹็”Ÿๅ‘ฝ้€ฑๆœŸ็š„ๆฏไธ€ๅ€‹้ƒจๅˆ†้ƒฝ้ข้ขไฟฑๅˆฐ๏ผš - ่จ“็ทดๅ…ˆ้€ฒ็š„ๆจกๅž‹๏ผŒๅช้œ€ 3 ่กŒ็จ‹ๅผ็ขผ - ๆจกๅž‹ๅฏไปฅๅœจไธๅŒๆทฑๅบฆๅญธ็ฟ’ๆก†ๆžถไน‹้–“ไปปๆ„่ฝ‰ๆ› - ็‚บ่จ“็ทดใ€่ฉ•ไผฐๅ’Œ็”Ÿ็”ข้ธๆ“‡ๆœ€้ฉๅˆ็š„ๆก†ๆžถ๏ผŒไธฆๅฎŒ็พŽ้ŠœๆŽฅ 1. 
็‚บไฝ ็š„้œ€ๆฑ‚่ผ•้ฌ†ๅฎข่ฃฝๅŒ–ๅฐˆๅฑฌๆจกๅž‹ๅ’Œ็ฏ„ไพ‹๏ผš - ๆˆ‘ๅ€‘็‚บๆฏ็จฎๆจกๅž‹ๆžถๆง‹ๆไพ›ไบ†ๅคšๅ€‹็ฏ„ไพ‹ไพ†้‡็พๅŽŸ่ซ–ๆ–‡็ตๆžœ - ไธ€่‡ด็š„ๆจกๅž‹ๅ…ง้ƒจๆžถๆง‹ - ๆจกๅž‹ๆช”ๆกˆๅฏๅ–ฎ็จไฝฟ็”จ๏ผŒไพฟๆ–ผไฟฎๆ”นๅ’Œๅฟซ้€Ÿๅฏฆ้ฉ— ## ไป€้บผๆƒ…ๆณไธ‹ๆˆ‘ไธ่ฉฒ็”จ transformers๏ผŸ - ๆœฌๅ‡ฝๅผๅบซไธฆไธๆ˜ฏๆจก็ต„ๅŒ–็š„็ฅž็ถ“็ถฒ็ตกๅทฅๅ…ท็ฎฑใ€‚ๆจกๅž‹ๆ–‡ไปถไธญ็š„็จ‹ๅผ็ขผไธฆๆœชๅš้กๅค–็š„ๆŠฝ่ฑกๅฐ่ฃ๏ผŒไปฅไพฟ็ ”็ฉถไบบๅ“กๅฟซ้€Ÿๅœฐ็ฟป้–ฑๅŠไฟฎๆ”น็จ‹ๅผ็ขผ๏ผŒ่€Œไธๆœƒๆทฑ้™ท่ค‡้›œ็š„้กžๅˆฅๅŒ…่ฃไน‹ไธญใ€‚ - `Trainer` API ไธฆ้ž็›ธๅฎนไปปไฝ•ๆจกๅž‹๏ผŒๅฎƒๅช็‚บๆœฌๅ‡ฝๅผๅบซไธญ็š„ๆจกๅž‹ๆœ€ไฝณๅŒ–ใ€‚ๅฐๆ–ผไธ€่ˆฌ็š„ๆฉŸๅ™จๅญธ็ฟ’็”จ้€”๏ผŒ่ซ‹ไฝฟ็”จๅ…ถไป–ๅ‡ฝๅผๅบซใ€‚ - ๅ„˜็ฎกๆˆ‘ๅ€‘ๅทฒ็›กๅŠ›่€Œ็‚บ๏ผŒ[examples ็›ฎ้Œ„](https://github.com/huggingface/transformers/tree/main/examples)ไธญ็š„่…ณๆœฌไนŸๅƒ…็‚บ็ฏ„ไพ‹่€Œๅทฒใ€‚ๅฐๆ–ผ็‰นๅฎšๅ•้กŒ๏ผŒๅฎƒๅ€‘ไธฆไธไธ€ๅฎš้šจ้ธๅณ็”จ๏ผŒๅฏ่ƒฝ้œ€่ฆไฟฎๆ”นๅนพ่กŒ็จ‹ๅผ็ขผไปฅ็ฌฆๅˆ้œ€ๆฑ‚ใ€‚ ## ๅฎ‰่ฃ ### ไฝฟ็”จ pip ้€™ๅ€‹ Repository ๅทฒๅœจ Python 3.8+ใ€Flax 0.4.1+ใ€PyTorch 1.11+ ๅ’Œ TensorFlow 2.6+ ไธ‹็ถ“้Žๆธฌ่ฉฆใ€‚ ไฝ ๅฏไปฅๅœจ[่™›ๆ“ฌ็’ฐๅขƒ](https://docs.python.org/3/library/venv.html)ไธญๅฎ‰่ฃ ๐Ÿค— Transformersใ€‚ๅฆ‚ๆžœไฝ ้‚„ไธ็†Ÿๆ‚‰ Python ็š„่™›ๆ“ฌ็’ฐๅขƒ๏ผŒ่ซ‹้–ฑๆญค[ไฝฟ็”จ่€…ๆŒ‡ๅผ•](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)ใ€‚ ้ฆ–ๅ…ˆ๏ผŒ็”จไฝ ๆ‰“็ฎ—ไฝฟ็”จ็š„็‰ˆๆœฌ็š„ Python ๅ‰ตๅปบไธ€ๅ€‹่™›ๆ“ฌ็’ฐๅขƒไธฆ้€ฒๅ…ฅใ€‚ ็„ถๅพŒ๏ผŒไฝ ้œ€่ฆๅฎ‰่ฃ Flaxใ€PyTorch ๆˆ– TensorFlow ๅ…ถไธญไน‹ไธ€ใ€‚ๅฐๆ–ผ่ฉฒๅฆ‚ไฝ•ๅœจไฝ ไฝฟ็”จ็š„ๅนณๅฐไธŠๅฎ‰่ฃ้€™ไบ›ๆก†ๆžถ๏ผŒ่ซ‹ๅƒ้–ฑ [TensorFlow ๅฎ‰่ฃ้ ้ข](https://www.tensorflow.org/install/), [PyTorch ๅฎ‰่ฃ้ ้ข](https://pytorch.org/get-started/locally/#start-locally) ๆˆ– [Flax ๅฎ‰่ฃ้ ้ข](https://github.com/google/flax#quick-install)ใ€‚ ็•ถๅ…ถไธญไธ€ๅ€‹ๅพŒ็ซฏๅฎ‰่ฃๆˆๅŠŸๅพŒ๏ผŒ๐Ÿค— Transformers ๅฏไพๆญคๅฎ‰่ฃ๏ผš ```bash pip install transformers ``` ๅฆ‚ๆžœไฝ ๆƒณ่ฆ่ฉฆ่ฉฆ็ฏ„ไพ‹ๆˆ–่€…ๆƒณๅœจๆญฃๅผ็™ผๅธƒๅ‰ไฝฟ็”จๆœ€ๆ–ฐ้–‹็™ผไธญ็š„็จ‹ๅผ็ขผ๏ผŒไฝ ๅฟ…้ ˆ[ๅพžๅŽŸๅง‹็ขผๅฎ‰่ฃ](https://huggingface.co/docs/transformers/installation#installing-from-source)ใ€‚ ### ไฝฟ็”จ conda ๐Ÿค— Transformers ๅฏไปฅ่—‰็”ฑ conda ไพๆญคๅฎ‰่ฃ๏ผš ```shell script conda install conda-forge::transformers ``` > **_็ญ†่จ˜:_** ๅพž `huggingface` ้ ป้“ๅฎ‰่ฃ `transformers` ๅทฒ่ขซๆท˜ๆฑฐใ€‚ ่ฆ่—‰็”ฑ conda ๅฎ‰่ฃ Flaxใ€PyTorch ๆˆ– TensorFlow ๅ…ถไธญไน‹ไธ€๏ผŒ่ซ‹ๅƒ้–ฑๅฎƒๅ€‘ๅ„่‡ชๅฎ‰่ฃ้ ้ข็š„่ชชๆ˜Žใ€‚ ## ๆจกๅž‹ๆžถๆง‹ **๐Ÿค— Transformers ๆ”ฏๆด็š„[ๆ‰€ๆœ‰็š„ๆจกๅž‹ๆชขๆŸฅ้ปž](https://huggingface.co/models)**๏ผŒ็”ฑ[ไฝฟ็”จ่€…](https://huggingface.co/users)ๅ’Œ[็ต„็น”](https://huggingface.co/organizations)ไธŠๅ‚ณ๏ผŒๅ‡่ˆ‡ huggingface.co [model hub](https://huggingface.co) ๅฎŒ็พŽ็ตๅˆใ€‚ ็›ฎๅ‰็š„ๆชขๆŸฅ้ปžๆ•ธ้‡๏ผš ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— Transformers ็›ฎๅ‰ๆ”ฏๆดไปฅไธ‹็š„ๆžถๆง‹: ๆจกๅž‹ๆฆ‚่ฆฝ่ซ‹ๅƒ้–ฑ[้€™่ฃก](https://huggingface.co/docs/transformers/model_summary). 
่ฆๆชขๆŸฅๆŸๅ€‹ๆจกๅž‹ๆ˜ฏๅฆๅทฒๆœ‰ Flaxใ€PyTorch ๆˆ– TensorFlow ็š„ๅฏฆไฝœ๏ผŒๆˆ–ๅ…ถๆ˜ฏๅฆๅœจ๐Ÿค— Tokenizers ๅ‡ฝๅผๅบซไธญๆœ‰ๅฐๆ‡‰็š„ tokenizer๏ผŒๆ•ฌ่ซ‹ๅƒ้–ฑ[ๆญค่กจ](https://huggingface.co/docs/transformers/index#supported-frameworks)ใ€‚ ้€™ไบ›ๅฏฆไฝœๅ‡ๅทฒๆ–ผๅคšๅ€‹่ณ‡ๆ–™้›†ๆธฌ่ฉฆ๏ผˆ่ซ‹ๅƒ้–ฑ็ฏ„ไพ‹่…ณๆœฌ๏ผ‰ไธฆๆ‡‰่ˆ‡ๅŽŸ็‰ˆๅฏฆไฝœ่กจ็พ็›ธ็•ถใ€‚ไฝ ๅฏไปฅๅœจ็ฏ„ไพ‹ๆ–‡ไปถ็š„[ๆญค็ฏ€](https://huggingface.co/docs/transformers/examples)ไธญไบ†่งฃๅฏฆไฝœ็š„็ดฐ็ฏ€ใ€‚ ## ไบ†่งฃๆ›ดๅคš | ็ซ ็ฏ€ | ๆ่ฟฐ | |-|-| | [ๆ–‡ไปถ](https://huggingface.co/transformers/) | ๅฎŒๆ•ด็š„ API ๆ–‡ไปถๅ’Œๆ•™ๅญธ | | [ไปปๅ‹™ๆฆ‚่ฆฝ](https://huggingface.co/docs/transformers/task_summary) | ๐Ÿค— Transformers ๆ”ฏๆด็š„ไปปๅ‹™ | | [้ ่™•็†ๆ•™ๅญธ](https://huggingface.co/docs/transformers/preprocessing) | ไฝฟ็”จ `Tokenizer` ไพ†็‚บๆจกๅž‹ๆบ–ๅ‚™่ณ‡ๆ–™ | | [่จ“็ทดๅ’Œๅพฎ่ชฟ](https://huggingface.co/docs/transformers/training) | ไฝฟ็”จ PyTorch/TensorFlow ็š„ๅ…งๅปบ็š„่จ“็ทดๆ–นๅผๆˆ–ๆ–ผ `Trainer` API ไธญไฝฟ็”จ ๐Ÿค— Transformers ๆไพ›็š„ๆจกๅž‹ | | [ๅฟซ้€ŸไธŠๆ‰‹๏ผšๅพฎ่ชฟๅ’Œ็ฏ„ไพ‹่…ณๆœฌ](https://github.com/huggingface/transformers/tree/main/examples) | ็‚บๅ„็จฎไปปๅ‹™ๆไพ›็š„็ฏ„ไพ‹่…ณๆœฌ | | [ๆจกๅž‹ๅˆ†ไบซๅ’ŒไธŠๅ‚ณ](https://huggingface.co/docs/transformers/model_sharing) | ไธŠๅ‚ณไธฆ่ˆ‡็คพ็พคๅˆ†ไบซไฝ ๅพฎ่ชฟ็š„ๆจกๅž‹ | | [้ท็งป](https://huggingface.co/docs/transformers/migration) | ๅพž `pytorch-transformers` ๆˆ– `pytorch-pretrained-bert` ้ท็งปๅˆฐ ๐Ÿค— Transformers | ## ๅผ•็”จ ๆˆ‘ๅ€‘ๅทฒๅฐ‡ๆญคๅ‡ฝๅผๅบซ็š„[่ซ–ๆ–‡](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)ๆญฃๅผ็™ผ่กจใ€‚ๅฆ‚ๆžœไฝ ไฝฟ็”จไบ† ๐Ÿค— Transformers ๅ‡ฝๅผๅบซ๏ผŒๅฏไปฅๅผ•็”จ๏ผš ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/ISSUES.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # How To Request Support This is an Open Source Project so please be mindful that like in any other project of this kind there is no obligation to answer all requests for help. However, we want to encourage you to ask for help whenever you think it's needed! We are happy about every question we get because it allows us to better understand your needs, possible misunderstandings, and most importantly a way for you to help us make this library better. That being said, this document's main purpose is to provide guidelines at how you can formulate your requests to increase your chances to be understood and to get support. There are two main venues to receive support: [the forums](https://discuss.huggingface.co/) and [the GitHub issues](https://github.com/huggingface/transformers/issues). ## The Forums [The user forums](https://discuss.huggingface.co/) are supported by the wide community of the library users and backed up by developers when needed. If you have a difficulty with deploying this library or some questions, or you'd like to discuss a new feature, please first consider discussing those things at the forums. Only when you feel your subject matter has been crystalized and you still need support from the library developers do proceed to file an [issue](https://github.com/huggingface/transformers/issues). In particular all "Please explain" questions or objectively very user-specific feature requests belong to the forums. Here are some example of such questions: * "I would like to use a BertModel within a RL-Agent for a customer support service. How can I use a BertForMaskedLM in my ChatBotModel?" * "Could you please explain why T5 has no positional embedding matrix under T5Model?" * "How should I set my generation parameters for translation?" * "How to train T5 on De->En translation?" ## The GitHub Issues Everything which hints at a bug should be opened as an [issue](https://github.com/huggingface/transformers/issues). You are not required to read the following guidelines before opening an issue. However, if you notice that your issue doesn't get any replies, chances are that the developers have one or several difficulties with its quality. In this case, reading the following points and adjusting your issue accordingly could help. 1. Before posting an issue, first search for already posted issues, since chances are someone has already asked a similar question before you. If you use Google your search query should be: ``` "huggingface" "transformers" your query ``` The first two quoted words tell Google to limit the search to the context of the Huggingface Transformers. The remainder is your query - most commonly this would be the error message the software fails with. We will go deeper into details shortly. The results of such a query will typically match GitHub issues, Hugging Face forums, StackExchange, and blogs. 
If you find relevant hints, you may choose to continue the discussion there if you have follow up questions. If what you found is similar but doesn't quite answer your problem, please post a new issue and do include links to similar issues or forum discussions you may have found. Let's look at some examples: The error message, often referred to as an assertion, tells us what went wrong. Here is an example of an assertion: ```python Traceback (most recent call last): File "<string>", line 1, in <module> File "/transformers/src/transformers/__init__.py", line 34, in <module> from . import dependency_versions_check File "/transformers/src/transformers/dependency_versions_check.py", line 34, in <module> from .utils import is_tokenizers_available File "/transformers/src/transformers/utils/import_utils.py", line 40, in <module> from tqdm.auto import tqdm ModuleNotFoundError: No module named 'tqdm.auto' ``` and it typically includes a traceback, so that we can see the full stack of calls the program made before it fails. This gives us the context to know why the program failed. Going back to the above example: if you received this error, look at the very last line of the error, which is: ```python ModuleNotFoundError: No module named 'tqdm.auto' ``` And now we can use it to do the search on your favorite search engine: 1. first for `"huggingface" "transformers" "ModuleNotFoundError: No module named 'tqdm.auto'"` 2. if you don't find relevant results, then search for just `"ModuleNotFoundError: No module named 'tqdm.auto'"` 3. and finally if nothing still comes up, then remove the outside quotes: `ModuleNotFoundError: No module named 'tqdm.auto'` If the error includes any messages with bits unique to your filesystem, always remove those from the search query, since other users will not have the same filesystem as yours. For example: ```bash python -c 'open("/tmp/wrong_path.txt", "r")' Traceback (most recent call last): File "<string>", line 1, in <module> FileNotFoundError: [Errno 2] No such file or directory: '/tmp/wrong_path.txt' ``` Here you'd search for just: `"FileNotFoundError: [Errno 2] No such file or directory"` If the local information that you removed was inside the error message itself, you may also need to remove the double quotes, since your query is no longer exact. So if the error message was something like: ```bash ValueError: '/tmp/wrong_path.txt' cannot be found ``` then you'd search for `"ValueError" "cannot be found"` As you search, you will notice that when you don't use quotes, the search engines will often return a variety of unrelated hits, which may or may not be what you want. Experiment with different ways and find which approach gives the most satisfactory results. 2. Keep the issue short, providing the information that you think will aid the developers in understanding your situation. Put yourself in the shoes of a person who has never seen your code and knows nothing about your custom setup. This mental exercise will help you develop an intuition for what to share and what not to share. 3. If there is a software failure, always provide the full traceback, for example: ```python $ python -c 'import transformers' Traceback (most recent call last): File "<string>", line 1, in <module> File "/transformers/src/transformers/__init__.py", line 34, in <module> from .
import dependency_versions_check File "/transformers/src/transformers/dependency_versions_check.py", line 34, in <module> from .utils import is_tokenizers_available File "/transformers/src/transformers/utils/import_utils.py", line 40, in <module> from tqdm.auto import tqdm ModuleNotFoundError: No module named 'tqdm.auto' ``` As compared to providing just the last line of the error message, e.g.: ```python ModuleNotFoundError: No module named 'tqdm.auto' ``` which is not sufficient. If your application is running on more than one GPU (e.g. under `DistributedDataParallel`) and typically getting every log and traceback printed multiple times, please make sure that you paste only one copy of it. At times the traceback from parallel processes may get interleaved - so either disentangle these or change the loggers to log only for `local_rank==0` so that only one process logs things. 4. When quoting a traceback, command line instructions and any type of code always enclose it in triple backticks inside the editor window, that is: ```` ``` git clone https://github.com/huggingface/transformers cd transformers pip install . ``` ```` If it's a command line with a long argument list, please consider breaking it down using backslashes and new lines. Here is an example of a good command line quote: ```bash cd examples/seq2seq torchrun --nproc_per_node=2 ./finetune_trainer.py \ --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --data_dir wmt_en_ro \ --output_dir output_dir --overwrite_output_dir \ --do_train --n_train 500 --num_train_epochs 1 \ --per_device_train_batch_size 1 --freeze_embeds \ --src_lang en_XX --tgt_lang ro_RO --task translation \ --fp16 ``` If you don't break it up, one has to scroll horizontally which often makes it quite difficult to quickly see what's happening. The backslashes allow us to copy the command directly into the console to run it, without needing to edit it. 5. Include only the important information that you think will help the developer to quickly identify the problem. For example applications often create huge amounts of logs. Ask yourself whether providing all or parts of the log is useful. Pasting a 100-1000 lines of log into the issue is an immediate turn off, since it will take a lot of time to figure out where the pertinent parts of the log are. Attaching a full log can be helpful if it's done as an attachment, if it's enclosed in the following html code in the comment editor window: ``` <details> <summary>Full log</summary> <pre> many lines go here </pre> </details> ``` which would result in the following entry, which can be opened if desired, but otherwise takes little space. <details> <summary>Full log</summary> <pre> many lines go here </pre> </details> You could also provide a link to a pastebin service, but this is less beneficial since those links tend to expire quickly and future readers of your issue might not be able to access that log file anymore and may lack some context. 6. If this is an issue in your code, do try to reduce that code to a minimal example that still demonstrates the problem. Please ask at the forums if you have a hard time figuring how to do that. Please realize that we don't have the luxury of having time to try and understand all of your custom code. If you really tried to make a short reproducible code but couldn't figure it out, it might be that having a traceback will give the developer enough information to know what's going on. But if it is not enough and we can't reproduce the problem, we can't really solve it. 
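Purely for illustration, a minimal reproduction could be as small as the following snippet (hypothetical - the checkpoint name and the failing call are made up; the point is that all project-specific code is stripped away):

```python
# Hypothetical minimal example: only the imports and the single library call
# that triggers the error in my environment are kept.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
print(tokenizer("Hello world!", return_tensors="pt"))  # the error appears on this call
```

A snippet like this can be copied and run by anyone, which makes the problem much faster to confirm or rule out.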
Do not despair if you can't figure it out from the beginning, just share what you can and perhaps someone else will be able to help you at the forums. If your setup involves any custom datasets, the best way to help us reproduce the problem is to create a [Google Colab notebook](https://colab.research.google.com/) that demonstrates the issue and once you verify that the issue still exists, include a link to that notebook in the Issue. Just make sure that you don't copy and paste the location bar url of the open notebook - as this is private and we won't be able to open it. Instead, you need to click on `Share` in the right upper corner of the notebook, select `Get Link` and then copy and paste the public link it will give to you. 7. If you forked off some of this project's code or example applications, please, do not ask us to go into your code repository and figure out what you may have done. The code is already very complex and unless there is an easy way to do a diff and it's a small diff, it won't be possible to find someone with time on their hands to make a lengthy investigation. Albeit, you might find someone at the forums who will be generous to do this for you. 8. Before reporting an issue, first, always try to update your environment to the latest official version of this library. We have no resources to go and debug older revisions, which could easily have bugs that have been fixed in the latest released version. We understand that this is not always possible, especially when APIs change, in which case file an issue against the highest library version your environment can support. Of course, if you upgrade the library, always retest that the problem is still there. 9. Please do not ask us to reproduce an issue with your custom data, since we don't have it. So, either you should use some existing dataset supported by HF datasets or you need to supply a code that generates a small sample on the fly, or some another quick and simple way to get it. Please do not send us any non-public domain data that may require a license or a permission to be used. 10. Do not tag multiple developers on the issue unless you know this is expected, either because you asked them and they gave you an explicit permission to tag them or the issue template instructs you to do so. The "who to tag for what domain" part of the issue template is there to help users direct their questions to the right developers who are designated maintainers of project's specific domains. They can then decide at their own discretion to tag other developers if they feel it'd help move the issue forward. We currently don't have a triage service and we trust your capacity to identify the right domain and thus the persons to tag in your issue. If you are not sure, please use the forums to ask for guidance. When in doubt, err on the side of not tagging a given person. If you tag multiple people out of context or permission don't be surprised if you get no response at all. Please remember that every time you tag someone, they get a notification and you're taking their time without their permission. Please be sensitive to that. If you got helped by one of the developers in the past please don't tag them in future issues, unless they are listed in the issue template for the domain you are asking about or that developer gave you an explicit permission to tag them in future issues. 
If you see a certain developer doing multiple and/or recent commits into a specific area of the project that you feel is relevant to your issue, that is not a good reason to tag them. Various developers may be fixing things that prevent them from moving forward, but often their work is focused on a totally different domain. And while they may or may not know how to help you with the problem at hand, it would benefit the whole community much more if they focus on the domain of their unique expertise. 11. Use the Edit button. Take your time, and re-read and improve the wording and formatting to make your posts and comments as easy to understand as possible. Avoid posting multiple comments in a row, as each comment generates a notification for the developers tagged in that issue. If you happened to post multiple comments in a row, and nobody followed up yet - consider merging those into one or a few comments while editing the combined content to be coherent. If you choose to edit your older comments after others posted follow up comments you need to be aware that your modifications might not be noticed, so if it's not just a typo fix, try to write a new comment flagging that something has been changed in the previous comments. For example, the very first comment is the most important one. If while the thread unfolds you realize that things aren't as they seemed to you originally you may want to edit the first post to reflect the up-to-date understanding of the issue at hand so that it helps those who read your issue in the future quickly understand what's going on and not need to sift through dozens of comments. It also helps to indicate that the post was edited. So, those reading the thread later can understand why there might be certain discontinuity in the information flow. Use bullet points when you have lists of items; this improves overall readability. Use backticks to refer to class and function names, e.g. `BartModel` and `generate`, as these stand out and improve the speed of a reader's comprehension. Try not to use italics and bold text too much, as these often make the text more difficult to read. 12. If you are cross-referencing a specific comment in a given thread or another issue, always link to that specific comment, rather than using the issue link. If you do the latter it can be nearly impossible to find which specific comment you're referring to. To get the link to the specific comment do not copy the url from the location bar of your browser, but instead, click the `...` icon in the upper right corner of the comment and then select "Copy Link". For example the first link is a link to an issue, and the second to a specific comment in the same issue: 1. https://github.com/huggingface/transformers/issues/9257 2. https://github.com/huggingface/transformers/issues/9257#issuecomment-749945162 13. If you are replying to the last comment, it's totally fine to make your reply with just your comment in it. The readers can follow the information flow here. But if you're replying to a comment that happened some comments back, it's always a good practice to quote just the relevant lines you're replying to. The `>` is used for quoting, or you can always use the menu to do so. For example your editor box will look like: ``` > How big is your gpu cluster? Our cluster is made of 256 gpus. ``` If you are addressing multiple comments, quote the relevant parts of each before your answer. Some people use the same comment to do multiple replies, others separate them into separate comments.
Either way works. The latter approach makes it easier to link to a specific comment. In general, the best way to figure out what works best is to learn from issues posted by other people - see which issues get great responses and which get little to no response - and observe what the posters who received great responses did differently from those who did not. Thank you for reading this somewhat lengthy document. We would like to conclude that these are not absolute rules, but friendly advice that will help maximize the chances for us to understand what you are trying to communicate, reproduce the problem and then resolve it to your satisfaction and the benefit of the whole community. If after reading this document there are remaining questions on how and why, or there is a need for further elucidation, please don't hesitate to ask your question in [this thread](https://discuss.huggingface.co/t/how-to-request-support/3128).
0
mavonic_private_repos
mavonic_private_repos/transformers/hubconf.py
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import sys SRC_DIR = os.path.join(os.path.dirname(__file__), "src") sys.path.append(SRC_DIR) from transformers import ( AutoConfig, AutoModel, AutoModelForCausalLM, AutoModelForMaskedLM, AutoModelForQuestionAnswering, AutoModelForSequenceClassification, AutoTokenizer, add_start_docstrings, ) dependencies = ["torch", "numpy", "tokenizers", "filelock", "requests", "tqdm", "regex", "sentencepiece", "sacremoses", "importlib_metadata", "huggingface_hub"] @add_start_docstrings(AutoConfig.__doc__) def config(*args, **kwargs): r""" # Using torch.hub ! import torch config = torch.hub.load('huggingface/transformers', 'config', 'google-bert/bert-base-uncased') # Download configuration from huggingface.co and cache. config = torch.hub.load('huggingface/transformers', 'config', './test/bert_saved_model/') # E.g. config (or model) was saved using `save_pretrained('./test/saved_model/')` config = torch.hub.load('huggingface/transformers', 'config', './test/bert_saved_model/my_configuration.json') config = torch.hub.load('huggingface/transformers', 'config', 'google-bert/bert-base-uncased', output_attentions=True, foo=False) assert config.output_attentions == True config, unused_kwargs = torch.hub.load('huggingface/transformers', 'config', 'google-bert/bert-base-uncased', output_attentions=True, foo=False, return_unused_kwargs=True) assert config.output_attentions == True assert unused_kwargs == {'foo': False} """ return AutoConfig.from_pretrained(*args, **kwargs) @add_start_docstrings(AutoTokenizer.__doc__) def tokenizer(*args, **kwargs): r""" # Using torch.hub ! import torch tokenizer = torch.hub.load('huggingface/transformers', 'tokenizer', 'google-bert/bert-base-uncased') # Download vocabulary from huggingface.co and cache. tokenizer = torch.hub.load('huggingface/transformers', 'tokenizer', './test/bert_saved_model/') # E.g. tokenizer was saved using `save_pretrained('./test/saved_model/')` """ return AutoTokenizer.from_pretrained(*args, **kwargs) @add_start_docstrings(AutoModel.__doc__) def model(*args, **kwargs): r""" # Using torch.hub ! import torch model = torch.hub.load('huggingface/transformers', 'model', 'google-bert/bert-base-uncased') # Download model and configuration from huggingface.co and cache. model = torch.hub.load('huggingface/transformers', 'model', './test/bert_model/') # E.g. 
model was saved using `save_pretrained('./test/saved_model/')` model = torch.hub.load('huggingface/transformers', 'model', 'google-bert/bert-base-uncased', output_attentions=True) # Update configuration during loading assert model.config.output_attentions == True # Loading from a TF checkpoint file instead of a PyTorch model (slower) config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json') model = torch.hub.load('huggingface/transformers', 'model', './tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config) """ return AutoModel.from_pretrained(*args, **kwargs) @add_start_docstrings(AutoModelForCausalLM.__doc__) def modelForCausalLM(*args, **kwargs): r""" # Using torch.hub ! import torch model = torch.hub.load('huggingface/transformers', 'modelForCausalLM', 'openai-community/gpt2') # Download model and configuration from huggingface.co and cache. model = torch.hub.load('huggingface/transformers', 'modelForCausalLM', './test/saved_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')` model = torch.hub.load('huggingface/transformers', 'modelForCausalLM', 'openai-community/gpt2', output_attentions=True) # Update configuration during loading assert model.config.output_attentions == True # Loading from a TF checkpoint file instead of a PyTorch model (slower) config = AutoConfig.from_pretrained('./tf_model/gpt_tf_model_config.json') model = torch.hub.load('huggingface/transformers', 'modelForCausalLM', './tf_model/gpt_tf_checkpoint.ckpt.index', from_tf=True, config=config) """ return AutoModelForCausalLM.from_pretrained(*args, **kwargs) @add_start_docstrings(AutoModelForMaskedLM.__doc__) def modelForMaskedLM(*args, **kwargs): r""" # Using torch.hub ! import torch model = torch.hub.load('huggingface/transformers', 'modelForMaskedLM', 'google-bert/bert-base-uncased') # Download model and configuration from huggingface.co and cache. model = torch.hub.load('huggingface/transformers', 'modelForMaskedLM', './test/bert_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')` model = torch.hub.load('huggingface/transformers', 'modelForMaskedLM', 'google-bert/bert-base-uncased', output_attentions=True) # Update configuration during loading assert model.config.output_attentions == True # Loading from a TF checkpoint file instead of a PyTorch model (slower) config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json') model = torch.hub.load('huggingface/transformers', 'modelForMaskedLM', './tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config) """ return AutoModelForMaskedLM.from_pretrained(*args, **kwargs) @add_start_docstrings(AutoModelForSequenceClassification.__doc__) def modelForSequenceClassification(*args, **kwargs): r""" # Using torch.hub ! import torch model = torch.hub.load('huggingface/transformers', 'modelForSequenceClassification', 'google-bert/bert-base-uncased') # Download model and configuration from huggingface.co and cache. model = torch.hub.load('huggingface/transformers', 'modelForSequenceClassification', './test/bert_model/') # E.g. 
model was saved using `save_pretrained('./test/saved_model/')` model = torch.hub.load('huggingface/transformers', 'modelForSequenceClassification', 'google-bert/bert-base-uncased', output_attentions=True) # Update configuration during loading assert model.config.output_attentions == True # Loading from a TF checkpoint file instead of a PyTorch model (slower) config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json') model = torch.hub.load('huggingface/transformers', 'modelForSequenceClassification', './tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config) """ return AutoModelForSequenceClassification.from_pretrained(*args, **kwargs) @add_start_docstrings(AutoModelForQuestionAnswering.__doc__) def modelForQuestionAnswering(*args, **kwargs): r""" # Using torch.hub ! import torch model = torch.hub.load('huggingface/transformers', 'modelForQuestionAnswering', 'google-bert/bert-base-uncased') # Download model and configuration from huggingface.co and cache. model = torch.hub.load('huggingface/transformers', 'modelForQuestionAnswering', './test/bert_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')` model = torch.hub.load('huggingface/transformers', 'modelForQuestionAnswering', 'google-bert/bert-base-uncased', output_attentions=True) # Update configuration during loading assert model.config.output_attentions == True # Loading from a TF checkpoint file instead of a PyTorch model (slower) config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json') model = torch.hub.load('huggingface/transformers', 'modelForQuestionAnswering', './tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config) """ return AutoModelForQuestionAnswering.from_pretrained(*args, **kwargs)
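# ----------------------------------------------------------------------------------
# Minimal end-to-end sketch of the entry points defined above (illustrative only, not
# part of the original hub configuration). It assumes `torch` plus the packages listed
# in `dependencies` are installed and that huggingface.co is reachable; the checkpoint
# name simply mirrors the one used throughout the docstrings above.
if __name__ == "__main__":
    import torch

    hub_tokenizer = torch.hub.load("huggingface/transformers", "tokenizer", "google-bert/bert-base-uncased")
    hub_model = torch.hub.load("huggingface/transformers", "model", "google-bert/bert-base-uncased")

    encoded = hub_tokenizer("Hello world!", return_tensors="pt")
    with torch.no_grad():
        output = hub_model(**encoded)
    # The bare model returns hidden states of shape (batch_size, sequence_length, hidden_size),
    # e.g. torch.Size([1, 5, 768]) for this checkpoint and input.
    print(output.last_hidden_state.shape)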
0
mavonic_private_repos
mavonic_private_repos/transformers/README_zh-hans.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!--- A useful guide for English-Chinese translation of Hugging Face documentation - Add space around English words and numbers when they appear between Chinese characters. E.g., ๅ…ฑ 100 ๅคš็ง่ฏญ่จ€; ไฝฟ็”จ transformers ๅบ“ใ€‚ - Use square quotes, e.g.,ใ€Œๅผ•็”จใ€ Dictionary Hugging Face: ๆŠฑๆŠฑ่„ธ token: ่ฏ็ฌฆ๏ผˆๅนถ็”จๆ‹ฌๅทๆ ‡ๆณจๅŽŸ่‹ฑๆ–‡๏ผ‰ tokenize: ่ฏ็ฌฆๅŒ–๏ผˆๅนถ็”จๆ‹ฌๅทๆ ‡ๆณจๅŽŸ่‹ฑๆ–‡๏ผ‰ tokenizer: ่ฏ็ฌฆๅŒ–ๅ™จ๏ผˆๅนถ็”จๆ‹ฌๅทๆ ‡ๆณจๅŽŸ่‹ฑๆ–‡๏ผ‰ transformer: transformer๏ผˆไธ็ฟป่ฏ‘๏ผ‰ pipeline: ๆตๆฐด็บฟ API: API (ไธ็ฟป่ฏ‘๏ผ‰ inference: ๆŽจ็† Trainer: ่ฎญ็ปƒๅ™จใ€‚ๅฝ“ไฝœไธบ็ฑปๅๅ‡บ็Žฐๆ—ถไธ็ฟป่ฏ‘ใ€‚ pretrained/pretrain: ้ข„่ฎญ็ปƒ finetune: ๅพฎ่ฐƒ community: ็คพๅŒบ example: ๅฝ“็‰นๆŒ‡ไป“ๅบ“ไธญ example ็›ฎๅฝ•ๆ—ถ็ฟป่ฏ‘ไธบใ€Œ็”จไพ‹ใ€ Python data structures (e.g., list, set, dict): ็ฟป่ฏ‘ไธบๅˆ—่กจ๏ผŒ้›†ๅˆ๏ผŒ่ฏๅ…ธ๏ผŒๅนถ็”จๆ‹ฌๅทๆ ‡ๆณจๅŽŸ่‹ฑๆ–‡ NLP/Natural Language Processing: ไปฅ NLP ๅ‡บ็Žฐๆ—ถไธ็ฟป่ฏ‘๏ผŒไปฅ Natural Language Processing ๅ‡บ็Žฐๆ—ถ็ฟป่ฏ‘ไธบ่‡ช็„ถ่ฏญ่จ€ๅค„็† checkpoint: ๆฃ€ๆŸฅ็‚น --> <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> <br> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <b>็ฎ€ไฝ“ไธญๆ–‡</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> 
| <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | </p> </h4> <h3 align="center"> <p>ไธบ Jaxใ€PyTorch ๅ’Œ TensorFlow ๆ‰“้€ ็š„ๅ…ˆ่ฟ›็š„่‡ช็„ถ่ฏญ่จ€ๅค„็†</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค— Transformers ๆไพ›ไบ†ๆ•ฐไปฅๅƒ่ฎก็š„้ข„่ฎญ็ปƒๆจกๅž‹๏ผŒๆ”ฏๆŒ 100 ๅคš็ง่ฏญ่จ€็š„ๆ–‡ๆœฌๅˆ†็ฑปใ€ไฟกๆฏๆŠฝๅ–ใ€้—ฎ็ญ”ใ€ๆ‘˜่ฆใ€็ฟป่ฏ‘ใ€ๆ–‡ๆœฌ็”Ÿๆˆใ€‚ๅฎƒ็š„ๅฎ—ๆ—จๆ˜ฏ่ฎฉๆœ€ๅ…ˆ่ฟ›็š„ NLP ๆŠ€ๆœฏไบบไบบๆ˜“็”จใ€‚ ๐Ÿค— Transformers ๆไพ›ไบ†ไพฟไบŽๅฟซ้€Ÿไธ‹่ฝฝๅ’Œไฝฟ็”จ็š„API๏ผŒ่ฎฉไฝ ๅฏไปฅๆŠŠ้ข„่ฎญ็ปƒๆจกๅž‹็”จๅœจ็ป™ๅฎšๆ–‡ๆœฌใ€ๅœจไฝ ็š„ๆ•ฐๆฎ้›†ไธŠๅพฎ่ฐƒ็„ถๅŽ้€š่ฟ‡ [model hub](https://huggingface.co/models) ไธŽ็คพๅŒบๅ…ฑไบซใ€‚ๅŒๆ—ถ๏ผŒๆฏไธชๅฎšไน‰็š„ Python ๆจกๅ—ๅ‡ๅฎŒๅ…จ็‹ฌ็ซ‹๏ผŒๆ–นไพฟไฟฎๆ”นๅ’Œๅฟซ้€Ÿ็ ”็ฉถๅฎž้ชŒใ€‚ ๐Ÿค— Transformers ๆ”ฏๆŒไธ‰ไธชๆœ€็ƒญ้—จ็š„ๆทฑๅบฆๅญฆไน ๅบ“๏ผš [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) ไปฅๅŠ [TensorFlow](https://www.tensorflow.org/) โ€” ๅนถไธŽไน‹ๆ— ็ผๆ•ดๅˆใ€‚ไฝ ๅฏไปฅ็›ดๆŽฅไฝฟ็”จไธ€ไธชๆก†ๆžถ่ฎญ็ปƒไฝ ็š„ๆจกๅž‹็„ถๅŽ็”จๅฆไธ€ไธชๅŠ ่ฝฝๅ’ŒๆŽจ็†ใ€‚ ## ๅœจ็บฟๆผ”็คบ ไฝ ๅฏไปฅ็›ดๆŽฅๅœจๆจกๅž‹้กต้ขไธŠๆต‹่ฏ•ๅคงๅคšๆ•ฐ [model hub](https://huggingface.co/models) ไธŠ็š„ๆจกๅž‹ใ€‚ ๆˆ‘ไปฌไนŸๆไพ›ไบ† [็งๆœ‰ๆจกๅž‹ๆ‰˜็ฎกใ€ๆจกๅž‹็‰ˆๆœฌ็ฎก็†ไปฅๅŠๆŽจ็†API](https://huggingface.co/pricing)ใ€‚ ่ฟ™้‡Œๆ˜ฏไธ€ไบ›ไพ‹ๅญ๏ผš - [็”จ BERT ๅšๆŽฉ็ ๅกซ่ฏ](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [็”จ Electra ๅšๅ‘ฝๅๅฎžไฝ“่ฏ†ๅˆซ](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [็”จ GPT-2 ๅšๆ–‡ๆœฌ็”Ÿๆˆ](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+) - [็”จ RoBERTa ๅš่‡ช็„ถ่ฏญ่จ€ๆŽจ็†](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [็”จ BART ๅšๆ–‡ๆœฌๆ‘˜่ฆ](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [็”จ DistilBERT 
ๅš้—ฎ็ญ”](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [็”จ T5 ๅš็ฟป่ฏ‘](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) **[Write With Transformer](https://transformer.huggingface.co)**๏ผŒ็”ฑๆŠฑๆŠฑ่„ธๅ›ข้˜Ÿๆ‰“้€ ๏ผŒๆ˜ฏไธ€ไธชๆ–‡ๆœฌ็”Ÿๆˆ็š„ๅฎ˜ๆ–น demoใ€‚ ## ๅฆ‚ๆžœไฝ ๅœจๅฏปๆ‰พ็”ฑๆŠฑๆŠฑ่„ธๅ›ข้˜Ÿๆไพ›็š„ๅฎšๅˆถๅŒ–ๆ”ฏๆŒๆœๅŠก <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## ๅฟซ้€ŸไธŠๆ‰‹ ๆˆ‘ไปฌไธบๅฟซ้€Ÿไฝฟ็”จๆจกๅž‹ๆไพ›ไบ† `pipeline` ๏ผˆๆตๆฐด็บฟ๏ผ‰APIใ€‚ๆตๆฐด็บฟ่šๅˆไบ†้ข„่ฎญ็ปƒๆจกๅž‹ๅ’Œๅฏนๅบ”็š„ๆ–‡ๆœฌ้ข„ๅค„็†ใ€‚ไธ‹้ขๆ˜ฏไธ€ไธชๅฟซ้€Ÿไฝฟ็”จๆตๆฐด็บฟๅŽปๅˆคๆ–ญๆญฃ่ดŸ้ขๆƒ…็ปช็š„ไพ‹ๅญ๏ผš ```python >>> from transformers import pipeline # ไฝฟ็”จๆƒ…็ปชๅˆ†ๆžๆตๆฐด็บฟ >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` ็ฌฌไบŒ่กŒไปฃ็ ไธ‹่ฝฝๅนถ็ผ“ๅญ˜ไบ†ๆตๆฐด็บฟไฝฟ็”จ็š„้ข„่ฎญ็ปƒๆจกๅž‹๏ผŒ่€Œ็ฌฌไธ‰่กŒไปฃ็ ๅˆ™ๅœจ็ป™ๅฎš็š„ๆ–‡ๆœฌไธŠ่ฟ›่กŒไบ†่ฏ„ไผฐใ€‚่ฟ™้‡Œ็š„็ญ”ๆกˆโ€œๆญฃ้ขโ€ (positive) ๅ…ทๆœ‰ 99 ็š„็ฝฎไฟกๅบฆใ€‚ ่ฎธๅคš็š„ NLP ไปปๅŠก้ƒฝๆœ‰ๅผ€็ฎฑๅณ็”จ็š„้ข„่ฎญ็ปƒๆตๆฐด็บฟใ€‚ๆฏ”ๅฆ‚่ฏด๏ผŒๆˆ‘ไปฌๅฏไปฅ่ฝปๆพ็š„ไปŽ็ป™ๅฎšๆ–‡ๆœฌไธญๆŠฝๅ–้—ฎ้ข˜็ญ”ๆกˆ๏ผš ``` python >>> from transformers import pipeline # ไฝฟ็”จ้—ฎ็ญ”ๆตๆฐด็บฟ >>> question_answerer = pipeline('question-answering') >>> question_answerer({ ... 'question': 'What is the name of the repository ?', ... 'context': 'Pipeline has been included in the huggingface/transformers repository' ... 
}) {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} ``` ้™คไบ†็ป™ๅ‡บ็ญ”ๆกˆ๏ผŒ้ข„่ฎญ็ปƒๆจกๅž‹่ฟ˜็ป™ๅ‡บไบ†ๅฏนๅบ”็š„็ฝฎไฟกๅบฆๅˆ†ๆ•ฐใ€็ญ”ๆกˆๅœจ่ฏ็ฌฆๅŒ– (tokenized) ๅŽ็š„ๆ–‡ๆœฌไธญๅผ€ๅง‹ๅ’Œ็ป“ๆŸ็š„ไฝ็ฝฎใ€‚ไฝ ๅฏไปฅไปŽ[่ฟ™ไธชๆ•™็จ‹](https://huggingface.co/docs/transformers/task_summary)ไบ†่งฃๆ›ดๅคšๆตๆฐด็บฟAPIๆ”ฏๆŒ็š„ไปปๅŠกใ€‚ ่ฆๅœจไฝ ็š„ไปปๅŠกไธŠไธ‹่ฝฝๅ’Œไฝฟ็”จไปปๆ„้ข„่ฎญ็ปƒๆจกๅž‹ไนŸๅพˆ็ฎ€ๅ•๏ผŒๅช้œ€ไธ‰่กŒไปฃ็ ใ€‚่ฟ™้‡Œๆ˜ฏ PyTorch ็‰ˆ็š„็คบไพ‹๏ผš ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` ่ฟ™้‡Œๆ˜ฏ็ญ‰ๆ•ˆ็š„ TensorFlow ไปฃ็ ๏ผš ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` ่ฏ็ฌฆๅŒ–ๅ™จ (tokenizer) ไธบๆ‰€ๆœ‰็š„้ข„่ฎญ็ปƒๆจกๅž‹ๆไพ›ไบ†้ข„ๅค„็†๏ผŒๅนถๅฏไปฅ็›ดๆŽฅๅฏนๅ•ไธชๅญ—็ฌฆไธฒ่ฟ›่กŒ่ฐƒ็”จ๏ผˆๆฏ”ๅฆ‚ไธŠ้ข็š„ไพ‹ๅญ๏ผ‰ๆˆ–ๅฏนๅˆ—่กจ (list) ่ฐƒ็”จใ€‚ๅฎƒไผš่พ“ๅ‡บไธ€ไธชไฝ ๅฏไปฅๅœจไธ‹ๆธธไปฃ็ ้‡Œไฝฟ็”จๆˆ–็›ดๆŽฅ้€š่ฟ‡ `**` ่งฃๅŒ…่กจ่พพๅผไผ ็ป™ๆจกๅž‹็š„่ฏๅ…ธ (dict)ใ€‚ ๆจกๅž‹ๆœฌ่บซๆ˜ฏไธ€ไธชๅธธ่ง„็š„ [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ๆˆ– [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)๏ผˆๅ–ๅ†ณไบŽไฝ ็š„ๅŽ็ซฏ๏ผ‰๏ผŒๅฏไปฅๅธธ่ง„ๆ–นๅผไฝฟ็”จใ€‚ [่ฟ™ไธชๆ•™็จ‹](https://huggingface.co/transformers/training.html)่งฃ้‡Šไบ†ๅฆ‚ไฝ•ๅฐ†่ฟ™ๆ ท็š„ๆจกๅž‹ๆ•ดๅˆๅˆฐ็ปๅ…ธ็š„ PyTorch ๆˆ– TensorFlow ่ฎญ็ปƒๅพช็Žฏไธญ๏ผŒๆˆ–ๆ˜ฏๅฆ‚ไฝ•ไฝฟ็”จๆˆ‘ไปฌ็š„ `Trainer` ่ฎญ็ปƒๅ™จ๏ผ‰API ๆฅๅœจไธ€ไธชๆ–ฐ็š„ๆ•ฐๆฎ้›†ไธŠๅฟซ้€Ÿๅพฎ่ฐƒใ€‚ ## ไธบไป€ไนˆ่ฆ็”จ transformers๏ผŸ 1. ไพฟไบŽไฝฟ็”จ็š„ๅ…ˆ่ฟ›ๆจกๅž‹๏ผš - NLU ๅ’Œ NLG ไธŠ่กจ็Žฐไผ˜่ถŠ - ๅฏนๆ•™ๅญฆๅ’Œๅฎž่ทตๅ‹ๅฅฝไธ”ไฝŽ้—จๆง› - ้ซ˜็บงๆŠฝ่ฑก๏ผŒๅช้œ€ไบ†่งฃไธ‰ไธช็ฑป - ๅฏนๆ‰€ๆœ‰ๆจกๅž‹็ปŸไธ€็š„API 1. ๆ›ดไฝŽ่ฎก็ฎ—ๅผ€้”€๏ผŒๆ›ดๅฐ‘็š„็ขณๆŽ’ๆ”พ๏ผš - ็ ”็ฉถไบบๅ‘˜ๅฏไปฅๅˆ†ไบซๅทฒ่ฎญ็ปƒ็š„ๆจกๅž‹่€Œ้žๆฏๆฌกไปŽๅคดๅผ€ๅง‹่ฎญ็ปƒ - ๅทฅ็จ‹ๅธˆๅฏไปฅๅ‡ๅฐ‘่ฎก็ฎ—็”จๆ—ถๅ’Œ็”Ÿไบง็Žฏๅขƒๅผ€้”€ - ๆ•ฐๅ็งๆจกๅž‹ๆžถๆž„ใ€ไธคๅƒๅคšไธช้ข„่ฎญ็ปƒๆจกๅž‹ใ€100ๅคš็ง่ฏญ่จ€ๆ”ฏๆŒ 1. ๅฏนไบŽๆจกๅž‹็”Ÿๅ‘ฝๅ‘จๆœŸ็š„ๆฏไธ€ไธช้ƒจๅˆ†้ƒฝ้ข้ขไฟฑๅˆฐ๏ผš - ่ฎญ็ปƒๅ…ˆ่ฟ›็š„ๆจกๅž‹๏ผŒๅช้œ€ 3 ่กŒไปฃ็  - ๆจกๅž‹ๅœจไธๅŒๆทฑๅบฆๅญฆไน ๆก†ๆžถ้—ดไปปๆ„่ฝฌ็งป๏ผŒ้šไฝ ๅฟƒๆ„ - ไธบ่ฎญ็ปƒใ€่ฏ„ไผฐๅ’Œ็”Ÿไบง้€‰ๆ‹ฉๆœ€้€‚ๅˆ็š„ๆก†ๆžถ๏ผŒ่ก”ๆŽฅๆ— ็ผ 1. 
ไธบไฝ ็š„้œ€ๆฑ‚่ฝปๆพๅฎšๅˆถไธ“ๅฑžๆจกๅž‹ๅ’Œ็”จไพ‹๏ผš - ๆˆ‘ไปฌไธบๆฏ็งๆจกๅž‹ๆžถๆž„ๆไพ›ไบ†ๅคšไธช็”จไพ‹ๆฅๅค็ŽฐๅŽŸ่ฎบๆ–‡็ป“ๆžœ - ๆจกๅž‹ๅ†…้ƒจ็ป“ๆž„ไฟๆŒ้€ๆ˜Žไธ€่‡ด - ๆจกๅž‹ๆ–‡ไปถๅฏๅ•็‹ฌไฝฟ็”จ๏ผŒๆ–นไพฟ้ญ”ๆ”นๅ’Œๅฟซ้€Ÿๅฎž้ชŒ ## ไป€ไนˆๆƒ…ๅ†ตไธ‹ๆˆ‘ไธ่ฏฅ็”จ transformers๏ผŸ - ๆœฌๅบ“ๅนถไธๆ˜ฏๆจกๅ—ๅŒ–็š„็ฅž็ป็ฝ‘็ปœๅทฅๅ…ท็ฎฑใ€‚ๆจกๅž‹ๆ–‡ไปถไธญ็š„ไปฃ็ ็‰นๆ„ๅ‘ˆ่‹ฅ็’ž็Ž‰๏ผŒๆœช็ป้ขๅค–ๆŠฝ่ฑกๅฐ่ฃ…๏ผŒไปฅไพฟ็ ”็ฉถไบบๅ‘˜ๅฟซ้€Ÿ่ฟญไปฃ้ญ”ๆ”น่€Œไธ่‡ดๆบบไบŽๆŠฝ่ฑกๅ’Œๆ–‡ไปถ่ทณ่ฝฌไน‹ไธญใ€‚ - `Trainer` API ๅนถ้žๅ…ผๅฎนไปปไฝ•ๆจกๅž‹๏ผŒๅชไธบๆœฌๅบ“ไน‹ๆจกๅž‹ไผ˜ๅŒ–ใ€‚่‹ฅๆ˜ฏๅœจๅฏปๆ‰พ้€‚็”จไบŽ้€š็”จๆœบๅ™จๅญฆไน ็š„่ฎญ็ปƒๅพช็Žฏๅฎž็Žฐ๏ผŒ่ฏทๅฆ่ง…ไป–ๅบ“ใ€‚ - ๅฐฝ็ฎกๆˆ‘ไปฌๅทฒๅฐฝๅŠ›่€Œไธบ๏ผŒ[examples ็›ฎๅฝ•](https://github.com/huggingface/transformers/tree/main/examples)ไธญ็š„่„šๆœฌไนŸไป…ไธบ็”จไพ‹่€Œๅทฒใ€‚ๅฏนไบŽไฝ ็š„็‰นๅฎš้—ฎ้ข˜๏ผŒๅฎƒไปฌๅนถไธไธ€ๅฎšๅผ€็ฎฑๅณ็”จ๏ผŒๅฏ่ƒฝ้œ€่ฆๆ”นๅ‡ ่กŒไปฃ็ ไปฅ้€‚ไน‹ใ€‚ ## ๅฎ‰่ฃ… ### ไฝฟ็”จ pip ่ฟ™ไธชไป“ๅบ“ๅทฒๅœจ Python 3.8+ใ€Flax 0.4.1+ใ€PyTorch 1.11+ ๅ’Œ TensorFlow 2.6+ ไธ‹็ป่ฟ‡ๆต‹่ฏ•ใ€‚ ไฝ ๅฏไปฅๅœจ[่™šๆ‹Ÿ็Žฏๅขƒ](https://docs.python.org/3/library/venv.html)ไธญๅฎ‰่ฃ… ๐Ÿค— Transformersใ€‚ๅฆ‚ๆžœไฝ ่ฟ˜ไธ็†Ÿๆ‚‰ Python ็š„่™šๆ‹Ÿ็Žฏๅขƒ๏ผŒ่ฏท้˜…ๆญค[็”จๆˆท่ฏดๆ˜Ž](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)ใ€‚ ้ฆ–ๅ…ˆ๏ผŒ็”จไฝ ๆ‰“็ฎ—ไฝฟ็”จ็š„็‰ˆๆœฌ็š„ Python ๅˆ›ๅปบไธ€ไธช่™šๆ‹Ÿ็Žฏๅขƒๅนถๆฟ€ๆดปใ€‚ ็„ถๅŽ๏ผŒไฝ ้œ€่ฆๅฎ‰่ฃ… Flaxใ€PyTorch ๆˆ– TensorFlow ๅ…ถไธญไน‹ไธ€ใ€‚ๅ…ณไบŽๅœจไฝ ไฝฟ็”จ็š„ๅนณๅฐไธŠๅฎ‰่ฃ…่ฟ™ไบ›ๆก†ๆžถ๏ผŒ่ฏทๅ‚้˜… [TensorFlow ๅฎ‰่ฃ…้กต](https://www.tensorflow.org/install/), [PyTorch ๅฎ‰่ฃ…้กต](https://pytorch.org/get-started/locally/#start-locally) ๆˆ– [Flax ๅฎ‰่ฃ…้กต](https://github.com/google/flax#quick-install)ใ€‚ ๅฝ“่ฟ™ไบ›ๅŽ็ซฏไน‹ไธ€ๅฎ‰่ฃ…ๆˆๅŠŸๅŽ๏ผŒ ๐Ÿค— Transformers ๅฏไพๆญคๅฎ‰่ฃ…๏ผš ```bash pip install transformers ``` ๅฆ‚ๆžœไฝ ๆƒณ่ฆ่ฏ•่ฏ•็”จไพ‹ๆˆ–่€…ๆƒณๅœจๆญฃๅผๅ‘ๅธƒๅ‰ไฝฟ็”จๆœ€ๆ–ฐ็š„ๅผ€ๅ‘ไธญไปฃ็ ๏ผŒไฝ ๅพ—[ไปŽๆบไปฃ็ ๅฎ‰่ฃ…](https://huggingface.co/docs/transformers/installation#installing-from-source)ใ€‚ ### ไฝฟ็”จ conda ๐Ÿค— Transformers ๅฏไปฅ้€š่ฟ‡ conda ไพๆญคๅฎ‰่ฃ…๏ผš ```shell script conda install conda-forge::transformers ``` > **_็ฌ”่ฎฐ:_** ไปŽ `huggingface` ๆธ ้“ๅฎ‰่ฃ… `transformers` ๅทฒ่ขซๅบŸๅผƒใ€‚ ่ฆ้€š่ฟ‡ conda ๅฎ‰่ฃ… Flaxใ€PyTorch ๆˆ– TensorFlow ๅ…ถไธญไน‹ไธ€๏ผŒ่ฏทๅ‚้˜…ๅฎƒไปฌๅ„่‡ชๅฎ‰่ฃ…้กต็š„่ฏดๆ˜Žใ€‚ ## ๆจกๅž‹ๆžถๆž„ ๐Ÿค— Transformers ๆ”ฏๆŒ็š„[**ๆ‰€ๆœ‰็š„ๆจกๅž‹ๆฃ€ๆŸฅ็‚น**](https://huggingface.co/models)็”ฑ[็”จๆˆท](https://huggingface.co/users)ๅ’Œ[็ป„็ป‡](https://huggingface.co/organizations)ไธŠไผ ๏ผŒๅ‡ไธŽ huggingface.co [model hub](https://huggingface.co) ๆ— ็ผๆ•ดๅˆใ€‚ ็›ฎๅ‰็š„ๆฃ€ๆŸฅ็‚นๆ•ฐ้‡๏ผš ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— Transformers ็›ฎๅ‰ๆ”ฏๆŒๅฆ‚ไธ‹็š„ๆžถๆž„: ๆจกๅž‹ๆฆ‚่ฟฐ่ฏท้˜…[่ฟ™้‡Œ](https://huggingface.co/docs/transformers/model_summary). 
่ฆๆฃ€ๆŸฅๆŸไธชๆจกๅž‹ๆ˜ฏๅฆๅทฒๆœ‰ Flaxใ€PyTorch ๆˆ– TensorFlow ็š„ๅฎž็Žฐ๏ผŒๆˆ–ๅ…ถๆ˜ฏๅฆๅœจ ๐Ÿค— Tokenizers ๅบ“ไธญๆœ‰ๅฏนๅบ”่ฏ็ฌฆๅŒ–ๅ™จ๏ผˆtokenizer๏ผ‰๏ผŒๆ•ฌ่ฏทๅ‚้˜…[ๆญค่กจ](https://huggingface.co/docs/transformers/index#supported-frameworks)ใ€‚ ่ฟ™ไบ›ๅฎž็Žฐๅ‡ๅทฒไบŽๅคšไธชๆ•ฐๆฎ้›†ๆต‹่ฏ•๏ผˆ่ฏทๅ‚็œ‹็”จไพ‹่„šๆœฌ๏ผ‰ๅนถๅบ”ไบŽๅŽŸ็‰ˆๅฎž็Žฐ่กจ็Žฐ็›ธๅฝ“ใ€‚ไฝ ๅฏไปฅๅœจ็”จไพ‹ๆ–‡ๆกฃ็š„[ๆญค่Š‚](https://huggingface.co/docs/transformers/examples)ไธญไบ†่งฃ่กจ็Žฐ็š„็ป†่Š‚ใ€‚ ## ไบ†่งฃๆ›ดๅคš | ็ซ ่Š‚ | ๆ่ฟฐ | |-|-| | [ๆ–‡ๆกฃ](https://huggingface.co/docs/transformers/) | ๅฎŒๆ•ด็š„ API ๆ–‡ๆกฃๅ’Œๆ•™็จ‹ | | [ไปปๅŠกๆ€ป็ป“](https://huggingface.co/docs/transformers/task_summary) | ๐Ÿค— Transformers ๆ”ฏๆŒ็š„ไปปๅŠก | | [้ข„ๅค„็†ๆ•™็จ‹](https://huggingface.co/docs/transformers/preprocessing) | ไฝฟ็”จ `Tokenizer` ๆฅไธบๆจกๅž‹ๅ‡†ๅค‡ๆ•ฐๆฎ | | [่ฎญ็ปƒๅ’Œๅพฎ่ฐƒ](https://huggingface.co/docs/transformers/training) | ๅœจ PyTorch/TensorFlow ็š„่ฎญ็ปƒๅพช็Žฏๆˆ– `Trainer` API ไธญไฝฟ็”จ ๐Ÿค— Transformers ๆไพ›็š„ๆจกๅž‹ | | [ๅฟซ้€ŸไธŠๆ‰‹๏ผšๅพฎ่ฐƒๅ’Œ็”จไพ‹่„šๆœฌ](https://github.com/huggingface/transformers/tree/main/examples) | ไธบๅ„็งไปปๅŠกๆไพ›็š„็”จไพ‹่„šๆœฌ | | [ๆจกๅž‹ๅˆ†ไบซๅ’ŒไธŠไผ ](https://huggingface.co/docs/transformers/model_sharing) | ๅ’Œ็คพๅŒบไธŠไผ ๅ’Œๅˆ†ไบซไฝ ๅพฎ่ฐƒ็š„ๆจกๅž‹ | | [่ฟ็งป](https://huggingface.co/docs/transformers/migration) | ไปŽ `pytorch-transformers` ๆˆ– `pytorch-pretrained-bert` ่ฟ็งปๅˆฐ ๐Ÿค— Transformers | ## ๅผ•็”จ ๆˆ‘ไปฌๅทฒๅฐ†ๆญคๅบ“็š„[่ฎบๆ–‡](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)ๆญฃๅผๅ‘่กจ๏ผŒๅฆ‚ๆžœไฝ ไฝฟ็”จไบ† ๐Ÿค— Transformers ๅบ“๏ผŒ่ฏทๅผ•็”จ: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/pyproject.toml
[tool.ruff] line-length = 119 [tool.ruff.lint] # Never enforce `E501` (line length violations). ignore = ["C901", "E501", "E741", "F402", "F823" ] select = ["C", "E", "F", "I", "W"] # Ignore import violations in all `__init__.py` files. [tool.ruff.lint.per-file-ignores] "__init__.py" = ["E402", "F401", "F403", "F811"] "src/transformers/file_utils.py" = ["F401"] "src/transformers/utils/dummy_*.py" = ["F401"] [tool.ruff.lint.isort] lines-after-imports = 2 known-first-party = ["transformers"] [tool.ruff.format] # Like Black, use double quotes for strings. quote-style = "double" # Like Black, indent with spaces, rather than tabs. indent-style = "space" # Like Black, respect magic trailing commas. skip-magic-trailing-comma = false # Like Black, automatically detect the appropriate line ending. line-ending = "auto" [tool.pytest.ini_options] doctest_optionflags="NUMBER NORMALIZE_WHITESPACE ELLIPSIS" doctest_glob="**/*.md" markers = [ "flash_attn_test: marks tests related to flash attention (deselect with '-m \"not flash_attn_test\"')", "bitsandbytes: select (or deselect with `not`) bitsandbytes integration tests", ]
0
mavonic_private_repos
mavonic_private_repos/transformers/README_fr.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg"> <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg"> <img alt="Bibliothรจque Hugging Face Transformers" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;"> </picture> <br/> <br/> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Construction" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="Version GitHub" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Pacte des contributeurs" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <b>Franรงais</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | </p> </h4> <h3 align="center"> <p>Apprentissage automatique de pointe pour JAX, PyTorch et TensorFlow</p> 
</h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค— Transformers fournit des milliers de modรจles prรฉ-entraรฎnรฉs pour effectuer des tรขches sur diffรฉrentes modalitรฉs telles que le texte, la vision et l'audio. Ces modรจles peuvent รชtre appliquรฉs ร  : * ๐Ÿ“ Texte, pour des tรขches telles que la classification de texte, l'extraction d'informations, la rรฉponse aux questions, le rรฉsumรฉ, la traduction et la gรฉnรฉration de texte, dans plus de 100 langues. * ๐Ÿ–ผ๏ธ Images, pour des tรขches telles que la classification d'images, la dรฉtection d'objets et la segmentation. * ๐Ÿ—ฃ๏ธ Audio, pour des tรขches telles que la reconnaissance vocale et la classification audio. Les modรจles de transformer peuvent รฉgalement effectuer des tรขches sur **plusieurs modalitรฉs combinรฉes**, telles que la rรฉponse aux questions sur des tableaux, la reconnaissance optique de caractรจres, l'extraction d'informations ร  partir de documents numรฉrisรฉs, la classification vidรฉo et la rรฉponse aux questions visuelles. ๐Ÿค— Transformers fournit des API pour tรฉlรฉcharger et utiliser rapidement ces modรจles prรฉ-entraรฎnรฉs sur un texte donnรฉ, les affiner sur vos propres ensembles de donnรฉes, puis les partager avec la communautรฉ sur notre [hub de modรจles](https://huggingface.co/models). En mรชme temps, chaque module Python dรฉfinissant une architecture est complรจtement indรฉpendant et peut รชtre modifiรฉ pour permettre des expรฉriences de recherche rapides. ๐Ÿค— Transformers est soutenu par les trois bibliothรจques d'apprentissage profond les plus populaires โ€” [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) et [TensorFlow](https://www.tensorflow.org/) โ€” avec une intรฉgration transparente entre eux. Il est facile de former vos modรจles avec l'un avant de les charger pour l'infรฉrence avec l'autre. ## Dรฉmos en ligne Vous pouvez tester la plupart de nos modรจles directement sur leurs pages du [hub de modรจles](https://huggingface.co/models). Nous proposons รฉgalement [l'hรฉbergement privรฉ de modรจles, le versionning et une API d'infรฉrence](https://huggingface.co/pricing) pour des modรจles publics et privรฉs. 
Voici quelques exemples : En traitement du langage naturel : - [Complรฉtion de mots masquรฉs avec BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [Reconnaissance d'entitรฉs nommรฉes avec Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [Gรฉnรฉration de texte avec GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+) - [Infรฉrence de langage naturel avec RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [Rรฉsumรฉ avec BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [Rรฉponse aux questions avec DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [Traduction avec T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) En vision par ordinateur : - [Classification d'images avec ViT](https://huggingface.co/google/vit-base-patch16-224) - [Dรฉtection d'objets avec DETR](https://huggingface.co/facebook/detr-resnet-50) - [Segmentation sรฉmantique avec SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) - [Segmentation panoptique avec MaskFormer](https://huggingface.co/facebook/maskformer-swin-small-coco) - [Estimation de profondeur avec DPT](https://huggingface.co/docs/transformers/model_doc/dpt) - [Classification vidรฉo avec 
VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae) - [Segmentation universelle avec OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large) En audio : - [Reconnaissance automatique de la parole avec Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) - [Spotting de mots-clรฉs avec Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) - [Classification audio avec Audio Spectrogram Transformer](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) Dans les tรขches multimodales : - [Rรฉponses aux questions sur table avec TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq) - [Rรฉponses aux questions visuelles avec ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) - [Classification d'images sans รฉtiquette avec CLIP](https://huggingface.co/openai/clip-vit-large-patch14) - [Rรฉponses aux questions sur les documents avec LayoutLM](https://huggingface.co/impira/layoutlm-document-qa) - [Classification vidรฉo sans รฉtiquette avec X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip) ## 100 projets utilisant Transformers Transformers est plus qu'une boรฎte ร  outils pour utiliser des modรจles prรฉ-entraรฎnรฉs : c'est une communautรฉ de projets construits autour de lui et du Hub Hugging Face. Nous voulons que Transformers permette aux dรฉveloppeurs, chercheurs, รฉtudiants, professeurs, ingรฉnieurs et ร  quiconque d'imaginer et de rรฉaliser leurs projets de rรชve. Afin de cรฉlรฉbrer les 100 000 รฉtoiles de transformers, nous avons dรฉcidรฉ de mettre en avant la communautรฉ et avons crรฉรฉ la page [awesome-transformers](./awesome-transformers.md) qui rรฉpertorie 100 projets incroyables construits autour de transformers. Si vous possรฉdez ou utilisez un projet que vous pensez devoir figurer dans la liste, veuillez ouvrir une pull request pour l'ajouter ! ## Si vous recherchez un support personnalisรฉ de la part de l'รฉquipe Hugging Face <a target="_blank" href="https://huggingface.co/support"> <img alt="Programme d'accรฉlรฉration des experts HuggingFace" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## Tour rapide Pour utiliser immรฉdiatement un modรจle sur une entrรฉe donnรฉe (texte, image, audio,...), nous fournissons l'API `pipeline`. Les pipelines regroupent un modรจle prรฉ-entraรฎnรฉ avec la prรฉparation des donnรฉes qui a รฉtรฉ utilisรฉe lors de l'entraรฎnement de ce modรจle. Voici comment utiliser rapidement un pipeline pour classer des textes en positif ou nรฉgatif : ```python >>> from transformers import pipeline # Allouer un pipeline pour l'analyse de sentiment >>> classifieur = pipeline('sentiment-analysis') >>> classifieur("Nous sommes trรจs heureux d'introduire le pipeline dans le rรฉfรฉrentiel transformers.") [{'label': 'POSITIF', 'score': 0.9996980428695679}] ``` La deuxiรจme ligne de code tรฉlรฉcharge et met en cache le modรจle prรฉ-entraรฎnรฉ utilisรฉ par le pipeline, tandis que la troisiรจme l'รฉvalue sur le texte donnรฉ. Ici, la rรฉponse est "positive" avec une confiance de 99,97%. De nombreuses tรขches ont une pipeline prรฉ-entraรฎnรฉ prรชt ร  l'emploi, en NLP, mais aussi en vision par ordinateur et en parole. 
Par exemple, nous pouvons facilement extraire les objets dรฉtectรฉs dans une image : ```python >>> import requests >>> from PIL import Image >>> from transformers import pipeline # Tรฉlรฉcharger une image avec de jolis chats >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" >>> donnees_image = requests.get(url, stream=True).raw >>> image = Image.open(donnees_image) # Allouer un pipeline pour la dรฉtection d'objets >>> detecteur_objets = pipeline('object-detection') >>> detecteur_objets(image) [{'score': 0.9982201457023621, 'label': 'tรฉlรฉcommande', 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, {'score': 0.9960021376609802, 'label': 'tรฉlรฉcommande', 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, {'score': 0.9954745173454285, 'label': 'canapรฉ', 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, {'score': 0.9988006353378296, 'label': 'chat', 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, {'score': 0.9986783862113953, 'label': 'chat', 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] ``` Ici, nous obtenons une liste d'objets dรฉtectรฉs dans l'image, avec une boรฎte entourant l'objet et un score de confiance. Voici l'image originale ร  gauche, avec les prรฉdictions affichรฉes ร  droite : <h3 align="center"> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> </h3> Vous pouvez en savoir plus sur les tรขches supportรฉes par l'API pipeline dans [ce tutoriel](https://huggingface.co/docs/transformers/task_summary). En plus de `pipeline`, pour tรฉlรฉcharger et utiliser n'importe lequel des modรจles prรฉ-entraรฎnรฉs sur votre tรขche donnรฉe, il suffit de trois lignes de code. Voici la version PyTorch : ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") inputs = tokenizer("Bonjour le monde !", return_tensors="pt") outputs = model(**inputs) ``` Et voici le code รฉquivalent pour TensorFlow : ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") inputs = tokenizer("Bonjour le monde !", return_tensors="tf") outputs = model(**inputs) ``` Le tokenizer est responsable de toutes les รฉtapes de prรฉtraitement que le modรจle prรฉentraรฎnรฉ attend et peut รชtre appelรฉ directement sur une seule chaรฎne de caractรจres (comme dans les exemples ci-dessus) ou sur une liste. Il produira un dictionnaire que vous pouvez utiliser dans votre code ou simplement passer directement ร  votre modรจle en utilisant l'opรฉrateur de dรฉballage **. Le modรจle lui-mรชme est un module [`nn.Module` PyTorch](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ou un modรจle [`tf.keras.Model` TensorFlow](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (selon votre backend) que vous pouvez utiliser comme d'habitude. [Ce tutoriel](https://huggingface.co/docs/transformers/training) explique comment intรฉgrer un tel modรจle dans une boucle d'entraรฎnement classique PyTorch ou TensorFlow, ou comment utiliser notre API `Trainer` pour affiner rapidement sur un nouvel ensemble de donnรฉes. 
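À titre d'illustration, voici une esquisse minimale d'un ajustement fin avec l'API `Trainer` évoquée ci-dessus. Il ne s'agit pas d'une recette officielle : le point de contrôle, le jeu de données `imdb` (qui suppose la bibliothèque `datasets` installée) et les hyperparamètres sont des choix arbitraires, uniquement pour l'exemple :

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)

# Petit sous-ensemble de données, uniquement pour garder l'exemple rapide
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(lambda exemples: tokenizer(exemples["text"], truncation=True), batched=True)

args = TrainingArguments(output_dir="mon_modele_affine", per_device_train_batch_size=8, num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```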
## Pourquoi devrais-je utiliser transformers ? 1. Des modรจles de pointe faciles ร  utiliser : - Hautes performances en comprรฉhension et gรฉnรฉration de langage naturel, en vision par ordinateur et en tรขches audio. - Faible barriรจre ร  l'entrรฉe pour les รฉducateurs et les praticiens. - Peu d'abstractions visibles pour l'utilisateur avec seulement trois classes ร  apprendre. - Une API unifiรฉe pour utiliser tous nos modรจles prรฉentraรฎnรฉs. 1. Coรปts informatiques rรฉduits, empreinte carbone plus petite : - Les chercheurs peuvent partager des modรจles entraรฎnรฉs au lieu de toujours les rรฉentraรฎner. - Les praticiens peuvent rรฉduire le temps de calcul et les coรปts de production. - Des dizaines d'architectures avec plus de 400 000 modรจles prรฉentraรฎnรฉs dans toutes les modalitรฉs. 1. Choisissez le bon framework pour chaque partie de la vie d'un modรจle : - Entraรฎnez des modรจles de pointe en 3 lignes de code. - Trasnfรฉrer un seul modรจle entre les frameworks TF2.0/PyTorch/JAX ร  volontรฉ. - Choisissez facilement le bon framework pour l'entraรฎnement, l'รฉvaluation et la production. 1. Personnalisez facilement un modรจle ou un exemple selon vos besoins : - Nous fournissons des exemples pour chaque architecture afin de reproduire les rรฉsultats publiรฉs par ses auteurs originaux. - Les dรฉtails internes du modรจle sont exposรฉs de maniรจre aussi cohรฉrente que possible. - Les fichiers de modรจle peuvent รชtre utilisรฉs indรฉpendamment de la bibliothรจque pour des expรฉriences rapides. ## Pourquoi ne devrais-je pas utiliser transformers ? - Cette bibliothรจque n'est pas une boรฎte ร  outils modulaire de blocs de construction pour les rรฉseaux neuronaux. Le code dans les fichiers de modรจle n'est pas refactored avec des abstractions supplรฉmentaires ร  dessein, afin que les chercheurs puissent itรฉrer rapidement sur chacun des modรจles sans plonger dans des abstractions/fichiers supplรฉmentaires. - L'API d'entraรฎnement n'est pas destinรฉe ร  fonctionner avec n'importe quel modรจle, mais elle est optimisรฉe pour fonctionner avec les modรจles fournis par la bibliothรจque. Pour des boucles gรฉnรฉriques d'apprentissage automatique, vous devriez utiliser une autre bibliothรจque (รฉventuellement, [Accelerate](https://huggingface.co/docs/accelerate)). - Bien que nous nous efforcions de prรฉsenter autant de cas d'utilisation que possible, les scripts de notre [dossier d'exemples](https://github.com/huggingface/transformers/tree/main/examples) ne sont que cela : des exemples. Il est prรฉvu qu'ils ne fonctionnent pas immรฉdiatement sur votre problรจme spรฉcifique et que vous devrez probablement modifier quelques lignes de code pour les adapter ร  vos besoins. ## Installation ### Avec pip Ce rรฉfรฉrentiel est testรฉ sur Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ et TensorFlow 2.6+. Vous devriez installer ๐Ÿค— Transformers dans un [environnement virtuel](https://docs.python.org/3/library/venv.html). Si vous n'รชtes pas familier avec les environnements virtuels Python, consultez le [guide utilisateur](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). D'abord, crรฉez un environnement virtuel avec la version de Python que vous allez utiliser et activez-le. Ensuite, vous devrez installer au moins l'un de Flax, PyTorch ou TensorFlow. 
Veuillez vous rรฉfรฉrer ร  la page d'installation de [TensorFlow](https://www.tensorflow.org/install/), de [PyTorch](https://pytorch.org/get-started/locally/#start-locally) et/ou de [Flax](https://github.com/google/flax#quick-install) et [Jax](https://github.com/google/jax#installation) pour connaรฎtre la commande d'installation spรฉcifique ร  votre plateforme. Lorsqu'un de ces backends est installรฉ, ๐Ÿค— Transformers peut รชtre installรฉ avec pip comme suit : ```bash pip install transformers ``` Si vous souhaitez jouer avec les exemples ou avez besoin de la derniรจre version du code et ne pouvez pas attendre une nouvelle version, vous devez [installer la bibliothรจque ร  partir de la source](https://huggingface.co/docs/transformers/installation#installing-from-source). ### Avec conda ๐Ÿค— Transformers peut รชtre installรฉ avec conda comme suit : ```shell conda install conda-forge::transformers ``` > **_NOTE:_** L'installation de `transformers` depuis le canal `huggingface` est obsolรจte. Suivez les pages d'installation de Flax, PyTorch ou TensorFlow pour voir comment les installer avec conda. > **_NOTE:_** Sur Windows, on peut vous demander d'activer le mode dรฉveloppeur pour bรฉnรฉficier de la mise en cache. Si ce n'est pas une option pour vous, veuillez nous le faire savoir dans [cette issue](https://github.com/huggingface/huggingface_hub/issues/1062). ## Architectures de modรจles **[Tous les points de contrรดle](https://huggingface.co/models)** de modรจle fournis par ๐Ÿค— Transformers sont intรฉgrรฉs de maniรจre transparente depuis le [hub de modรจles](https://huggingface.co/models) huggingface.co, oรน ils sont tรฉlรฉchargรฉs directement par les [utilisateurs](https://huggingface.co/users) et les [organisations](https://huggingface.co/organizations). Nombre actuel de points de contrรดle : ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— Transformers fournit actuellement les architectures suivantes: consultez [ici](https://huggingface.co/docs/transformers/model_summary) pour un rรฉsumรฉ global de chacune d'entre elles. Pour vรฉrifier si chaque modรจle a une implรฉmentation en Flax, PyTorch ou TensorFlow, ou s'il a un tokenizer associรฉ pris en charge par la bibliothรจque ๐Ÿค— Tokenizers, consultez [ce tableau](https://huggingface.co/docs/transformers/index#supported-frameworks). Ces implรฉmentations ont รฉtรฉ testรฉes sur plusieurs ensembles de donnรฉes (voir les scripts d'exemple) et devraient correspondre aux performances des implรฉmentations originales. Vous pouvez trouver plus de dรฉtails sur les performances dans la section Exemples de la [documentation](https://github.com/huggingface/transformers/tree/main/examples). 
## En savoir plus | Section | Description | |-|-| | [Documentation](https://huggingface.co/docs/transformers/) | Documentation complรจte de l'API et tutoriels | | [Rรฉsumรฉ des tรขches](https://huggingface.co/docs/transformers/task_summary) | Tรขches prises en charge par les ๐Ÿค— Transformers | | [Tutoriel de prรฉtraitement](https://huggingface.co/docs/transformers/preprocessing) | Utilisation de la classe `Tokenizer` pour prรฉparer les donnรฉes pour les modรจles | | [Entraรฎnement et ajustement fin](https://huggingface.co/docs/transformers/training) | Utilisation des modรจles fournis par les ๐Ÿค— Transformers dans une boucle d'entraรฎnement PyTorch/TensorFlow et de l'API `Trainer` | | [Tour rapide : Scripts d'ajustement fin/d'utilisation](https://github.com/huggingface/transformers/tree/main/examples) | Scripts d'exemple pour ajuster finement les modรจles sur une large gamme de tรขches | | [Partage et tรฉlรฉversement de modรจles](https://huggingface.co/docs/transformers/model_sharing) | Tรฉlรฉchargez et partagez vos modรจles ajustรฉs avec la communautรฉ | ## Citation Nous disposons dรฉsormais d'un [article](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) que vous pouvez citer pour la bibliothรจque ๐Ÿค— Transformers : ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/setup.py
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Simple check list from AllenNLP repo: https://github.com/allenai/allennlp/blob/main/setup.py To create the package for pypi. 1. Create the release branch named: v<RELEASE>-release, for example v4.19-release. For a patch release checkout the current release branch. If releasing on a special branch, copy the updated README.md on the main branch for your the commit you will make for the post-release and run `make fix-copies` on the main branch as well. 2. Run `make pre-release` (or `make pre-patch` for a patch release) and commit these changes with the message: "Release: <VERSION>" and push. 3. Go back to the main branch and run `make post-release` then `make fix-copies`. Commit these changes with the message "v<NEXT_VERSION>.dev.0" and push to main. # If you were just cutting the branch in preparation for a release, you can stop here for now. 4. Wait for the tests on the release branch to be completed and be green (otherwise revert and fix bugs) 5. On the release branch, add a tag in git to mark the release: "git tag v<VERSION> -m 'Adds tag v<VERSION> for pypi' " Push the tag to git: git push --tags origin v<RELEASE>-release 6. Build both the sources and the wheel. Do not change anything in setup.py between creating the wheel and the source distribution (obviously). Run `make build-release`. This will build the release and do some sanity checks for you. If this ends with an error message, you need to fix things before going further. You should now have a /dist directory with both .whl and .tar.gz source versions. 7. Check that everything looks correct by uploading the package to the pypi test server: twine upload dist/* -r testpypi (pypi suggest using twine as other methods upload files via plaintext.) You may have to specify the repository url, use the following command then: twine upload dist/* -r testpypi --repository-url=https://test.pypi.org/legacy/ Check that you can install it in a virtualenv by running: pip install -i https://testpypi.python.org/pypi transformers Check you can run the following commands: python -c "from transformers import pipeline; classifier = pipeline('text-classification'); print(classifier('What a nice release'))" python -c "from transformers import *" python utils/check_build.py --check_lib If making a patch release, double check the bug you are patching is indeed resolved. 8. Upload the final version to actual pypi: twine upload dist/* -r pypi 9. Copy the release notes from RELEASE.md to the tag in github once everything is looking hunky-dory. 
""" import os import re import shutil from pathlib import Path from setuptools import Command, find_packages, setup # Remove stale transformers.egg-info directory to avoid https://github.com/pypa/pip/issues/5466 stale_egg_info = Path(__file__).parent / "transformers.egg-info" if stale_egg_info.exists(): print( ( "Warning: {} exists.\n\n" "If you recently updated transformers to 3.0 or later, this is expected,\n" "but it may prevent transformers from installing in editable mode.\n\n" "This directory is automatically generated by Python's packaging tools.\n" "I will remove it now.\n\n" "See https://github.com/pypa/pip/issues/5466 for details.\n" ).format(stale_egg_info) ) shutil.rmtree(stale_egg_info) # IMPORTANT: # 1. all dependencies should be listed here with their version requirements if any # 2. once modified, run: `make deps_table_update` to update src/transformers/dependency_versions_table.py _deps = [ "Pillow>=10.0.1,<=15.0", "accelerate>=0.21.0", "av==9.2.0", # Latest version of PyAV (10.0.0) has issues with audio stream. "beautifulsoup4", "codecarbon==1.2.0", "cookiecutter==1.7.3", "dataclasses", "datasets!=2.5.0", "decord==0.6.0", "deepspeed>=0.9.3", "diffusers", "dill<0.3.5", "evaluate>=0.2.0", "faiss-cpu", "fastapi", "filelock", "flax>=0.4.1,<=0.7.0", "fsspec<2023.10.0", "ftfy", "fugashi>=1.0", "GitPython<3.1.19", "huggingface-hub>=0.19.3,<1.0", "importlib_metadata", "ipadic>=1.0.0,<2.0", "isort>=5.5.4", "jax>=0.4.1,<=0.4.13", "jaxlib>=0.4.1,<=0.4.13", "jieba", "kenlm", # Keras pin - this is to make sure Keras 3 doesn't destroy us. Remove or change when we have proper support. "keras>2.9,<2.16", "keras-nlp>=0.3.1", "librosa", "nltk", "natten>=0.14.6,<0.15.0", "numpy>=1.17", "onnxconverter-common", "onnxruntime-tools>=1.4.2", "onnxruntime>=1.4.0", "opencv-python", "optuna", "optax>=0.0.8,<=0.1.4", "packaging>=20.0", "parameterized", "phonemizer", "protobuf", "psutil", "pyyaml>=5.1", "pydantic", "pytest>=7.2.0,<8.0.0", "pytest-timeout", "pytest-xdist", "python>=3.8.0", "ray[tune]>=2.7.0", "regex!=2019.12.17", "requests", "rhoknp>=1.1.0,<1.3.1", "rjieba", "rouge-score!=0.0.7,!=0.0.8,!=0.1,!=0.1.1", "ruff==0.1.5", "sacrebleu>=1.4.12,<2.0.0", "sacremoses", "safetensors>=0.4.1", "sagemaker>=2.31.0", "scikit-learn", "scipy<1.13.0", # SciPy >= 1.13.0 is not supported with the current jax pin (`jax>=0.4.1,<=0.4.13`) "sentencepiece>=0.1.91,!=0.1.92", "sigopt", "starlette", "sudachipy>=0.6.6", "sudachidict_core>=20220729", "tensorboard", # TensorFlow pin. When changing this value, update examples/tensorflow/_tests_requirements.txt accordingly "tensorflow-cpu>2.9,<2.16", "tensorflow>2.9,<2.16", "tensorflow-text<2.16", "tensorflow-probability<2.16", "tf2onnx", "timeout-decorator", "timm", "tokenizers>=0.19,<0.20", "torch", "torchaudio", "torchvision", "pyctcdecode>=0.4.0", "tqdm>=4.27", "unidic>=1.0.2", "unidic_lite>=1.0.7", "urllib3<2.0.0", "uvicorn", "pytest-rich", ] # this is a lookup table with items like: # # tokenizers: "tokenizers==0.9.4" # packaging: "packaging" # # some of the values are versioned whereas others aren't. deps = {b: a for a, b in (re.findall(r"^(([^!=<>~ ]+)(?:[!=<>~ ].*)?$)", x)[0] for x in _deps)} # since we save this data in src/transformers/dependency_versions_table.py it can be easily accessed from # anywhere. 
If you need to quickly access the data from this table in a shell, you can do so easily with: # # python -c 'import sys; from transformers.dependency_versions_table import deps; \ # print(" ".join([ deps[x] for x in sys.argv[1:]]))' tokenizers datasets # # Just pass the desired package names to that script as it's shown with 2 packages above. # # If transformers is not yet installed and the work is done from the cloned repo remember to add `PYTHONPATH=src` to the script above # # You can then feed this for example to `pip`: # # pip install -U $(python -c 'import sys; from transformers.dependency_versions_table import deps; \ # print(" ".join([deps[x] for x in sys.argv[1:]]))' tokenizers datasets) # def deps_list(*pkgs): return [deps[pkg] for pkg in pkgs] class DepsTableUpdateCommand(Command): """ A custom distutils command that updates the dependency table. usage: python setup.py deps_table_update """ description = "build runtime dependency table" user_options = [ # format: (long option, short option, description). ("dep-table-update", None, "updates src/transformers/dependency_versions_table.py"), ] def initialize_options(self): pass def finalize_options(self): pass def run(self): entries = "\n".join([f' "{k}": "{v}",' for k, v in deps.items()]) content = [ "# THIS FILE HAS BEEN AUTOGENERATED. To update:", "# 1. modify the `_deps` dict in setup.py", "# 2. run `make deps_table_update``", "deps = {", entries, "}", "", ] target = "src/transformers/dependency_versions_table.py" print(f"updating {target}") with open(target, "w", encoding="utf-8", newline="\n") as f: f.write("\n".join(content)) extras = {} extras["ja"] = deps_list("fugashi", "ipadic", "unidic_lite", "unidic", "sudachipy", "sudachidict_core", "rhoknp") extras["sklearn"] = deps_list("scikit-learn") extras["tf"] = deps_list("tensorflow", "onnxconverter-common", "tf2onnx", "tensorflow-text", "keras-nlp") extras["tf-cpu"] = deps_list("keras", "tensorflow-cpu", "onnxconverter-common", "tf2onnx", "tensorflow-text", "keras-nlp", "tensorflow-probability") extras["torch"] = deps_list("torch", "accelerate") extras["accelerate"] = deps_list("accelerate") if os.name == "nt": # windows extras["retrieval"] = deps_list("datasets") # faiss is not supported on windows extras["flax"] = [] # jax is not supported on windows else: extras["retrieval"] = deps_list("faiss-cpu", "datasets") extras["flax"] = deps_list("jax", "jaxlib", "flax", "optax", "scipy") extras["tokenizers"] = deps_list("tokenizers") extras["ftfy"] = deps_list("ftfy") extras["onnxruntime"] = deps_list("onnxruntime", "onnxruntime-tools") extras["onnx"] = deps_list("onnxconverter-common", "tf2onnx") + extras["onnxruntime"] extras["modelcreation"] = deps_list("cookiecutter") extras["sagemaker"] = deps_list("sagemaker") extras["deepspeed"] = deps_list("deepspeed") + extras["accelerate"] extras["optuna"] = deps_list("optuna") extras["ray"] = deps_list("ray[tune]") extras["sigopt"] = deps_list("sigopt") extras["integrations"] = extras["optuna"] + extras["ray"] + extras["sigopt"] extras["serving"] = deps_list("pydantic", "uvicorn", "fastapi", "starlette") extras["audio"] = deps_list("librosa", "pyctcdecode", "phonemizer", "kenlm") # `pip install ".[speech]"` is deprecated and `pip install ".[torch-speech]"` should be used instead extras["speech"] = deps_list("torchaudio") + extras["audio"] extras["torch-speech"] = deps_list("torchaudio") + extras["audio"] extras["tf-speech"] = extras["audio"] extras["flax-speech"] = extras["audio"] extras["vision"] = deps_list("Pillow") extras["timm"] = 
deps_list("timm") extras["torch-vision"] = deps_list("torchvision") + extras["vision"] extras["natten"] = deps_list("natten") extras["codecarbon"] = deps_list("codecarbon") extras["video"] = deps_list("decord", "av") extras["sentencepiece"] = deps_list("sentencepiece", "protobuf") extras["testing"] = ( deps_list( "pytest", "pytest-rich", "pytest-xdist", "timeout-decorator", "parameterized", "psutil", "datasets", "dill", "evaluate", "pytest-timeout", "ruff", "sacrebleu", "rouge-score", "nltk", "GitPython", "sacremoses", "rjieba", "beautifulsoup4", "tensorboard", "pydantic", "sentencepiece", ) + extras["retrieval"] + extras["modelcreation"] ) extras["deepspeed-testing"] = extras["deepspeed"] + extras["testing"] + extras["optuna"] + extras["sentencepiece"] extras["quality"] = deps_list("datasets", "isort", "ruff", "GitPython", "urllib3") extras["all"] = ( extras["tf"] + extras["torch"] + extras["flax"] + extras["sentencepiece"] + extras["tokenizers"] + extras["torch-speech"] + extras["vision"] + extras["integrations"] + extras["timm"] + extras["torch-vision"] + extras["codecarbon"] + extras["accelerate"] + extras["video"] ) extras["dev-torch"] = ( extras["testing"] + extras["torch"] + extras["sentencepiece"] + extras["tokenizers"] + extras["torch-speech"] + extras["vision"] + extras["integrations"] + extras["timm"] + extras["torch-vision"] + extras["codecarbon"] + extras["quality"] + extras["ja"] + extras["sklearn"] + extras["modelcreation"] + extras["onnxruntime"] ) extras["dev-tensorflow"] = ( extras["testing"] + extras["tf"] + extras["sentencepiece"] + extras["tokenizers"] + extras["vision"] + extras["quality"] + extras["sklearn"] + extras["modelcreation"] + extras["onnx"] + extras["tf-speech"] ) extras["dev"] = ( extras["all"] + extras["testing"] + extras["quality"] + extras["ja"] + extras["sklearn"] + extras["modelcreation"] ) extras["torchhub"] = deps_list( "filelock", "huggingface-hub", "importlib_metadata", "numpy", "packaging", "protobuf", "regex", "requests", "sentencepiece", "torch", "tokenizers", "tqdm", ) extras["agents"] = deps_list( "diffusers", "accelerate", "datasets", "torch", "sentencepiece", "opencv-python", "Pillow" ) # when modifying the following list, make sure to update src/transformers/dependency_versions_check.py install_requires = [ deps["filelock"], # filesystem locks, e.g., to prevent parallel downloads deps["huggingface-hub"], deps["numpy"], deps["packaging"], # utilities from PyPA to e.g., compare versions deps["pyyaml"], # used for the model cards metadata deps["regex"], # for OpenAI GPT deps["requests"], # for downloading models over HTTPS deps["tokenizers"], deps["safetensors"], deps["tqdm"], # progress bars in model download and training scripts ] setup( name="transformers", version="4.41.0.dev0", # expected format is one of x.y.z.dev0, or x.y.z.rc1 or x.y.z (no to dashes, yes to dots) author="The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)", author_email="[email protected]", description="State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow", long_description=open("README.md", "r", encoding="utf-8").read(), long_description_content_type="text/markdown", keywords="NLP vision speech deep learning transformer pytorch tensorflow jax BERT GPT-2 Wav2Vec2 ViT", license="Apache 2.0 License", url="https://github.com/huggingface/transformers", package_dir={"": "src"}, packages=find_packages("src"), include_package_data=True, package_data={"": ["**/*.cu", 
"**/*.cpp", "**/*.cuh", "**/*.h", "**/*.pyx"]}, zip_safe=False, extras_require=extras, entry_points={"console_scripts": ["transformers-cli=transformers.commands.transformers_cli:main"]}, python_requires=">=3.8.0", install_requires=list(install_requires), classifiers=[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "Intended Audience :: Education", "Intended Audience :: Science/Research", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Topic :: Scientific/Engineering :: Artificial Intelligence", ], cmdclass={"deps_table_update": DepsTableUpdateCommand}, ) extras["tests_torch"] = deps_list() extras["tests_tf"] = deps_list() extras["tests_flax"] = deps_list() extras["tests_torch_and_tf"] = deps_list() extras["tests_torch_and_flax"] = deps_list() extras["tests_hub"] = deps_list() extras["tests_pipelines_torch"] = deps_list() extras["tests_pipelines_tf"] = deps_list() extras["tests_onnx"] = deps_list() extras["tests_examples_torch"] = deps_list() extras["tests_examples_tf"] = deps_list() extras["tests_custom_tokenizers"] = deps_list() extras["tests_exotic_models"] = deps_list() extras["consistency"] = deps_list()
0
mavonic_private_repos
mavonic_private_repos/transformers/README.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg"> <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg"> <img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;"> </picture> <br/> <br/> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <b>English</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | </p> </h4> <h3 align="center"> <p>State-of-the-art Machine Learning for JAX, PyTorch and 
TensorFlow</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค— Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio. These models can be applied on: * ๐Ÿ“ Text, for tasks like text classification, information extraction, question answering, summarization, translation, and text generation, in over 100 languages. * ๐Ÿ–ผ๏ธ Images, for tasks like image classification, object detection, and segmentation. * ๐Ÿ—ฃ๏ธ Audio, for tasks like speech recognition and audio classification. Transformer models can also perform tasks on **several modalities combined**, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering. ๐Ÿค— Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments. ๐Ÿค— Transformers is backed by the three most popular deep learning libraries โ€” [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) โ€” with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other. ## Online demos You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models. 
Here are a few examples: In Natural Language Processing: - [Masked word completion with BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [Text generation with Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) - [Natural Language Inference with RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [Question answering with DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [Translation with T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) In Computer Vision: - [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224) - [Object Detection with DETR](https://huggingface.co/facebook/detr-resnet-50) - [Semantic Segmentation with SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) - [Panoptic Segmentation with Mask2Former](https://huggingface.co/facebook/mask2former-swin-large-coco-panoptic) - [Depth Estimation with Depth Anything](https://huggingface.co/docs/transformers/main/model_doc/depth_anything) - [Video Classification with VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae) - 
[Universal Segmentation with OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large) In Audio: - [Automatic Speech Recognition with Whisper](https://huggingface.co/openai/whisper-large-v3) - [Keyword Spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) - [Audio Classification with Audio Spectrogram Transformer](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) In Multimodal tasks: - [Table Question Answering with TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq) - [Visual Question Answering with ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) - [Image captioning with LLaVa](https://huggingface.co/llava-hf/llava-1.5-7b-hf) - [Zero-shot Image Classification with SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) - [Document Question Answering with LayoutLM](https://huggingface.co/impira/layoutlm-document-qa) - [Zero-shot Video Classification with X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip) - [Zero-shot Object Detection with OWLv2](https://huggingface.co/docs/transformers/en/model_doc/owlv2) - [Zero-shot Image Segmentation with CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg) - [Automatic Mask Generation with SAM](https://huggingface.co/docs/transformers/model_doc/sam) ## 100 projects using Transformers Transformers is more than a toolkit to use pretrained models: it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects. In order to celebrate the 100,000 stars of transformers, we have decided to put the spotlight on the community, and we have created the [awesome-transformers](./awesome-transformers.md) page which lists 100 incredible projects built in the vicinity of transformers. If you own or use a project that you believe should be part of the list, please open a PR to add it! ## If you are looking for custom support from the Hugging Face team <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## Quick tour To immediately use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts: ```python >>> from transformers import pipeline # Allocate a pipeline for sentiment-analysis >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here, the answer is "positive" with a confidence of 99.97%. Many tasks have a pre-trained `pipeline` ready to go, in NLP but also in computer vision and speech. 
For example, we can easily extract detected objects in an image: ``` python >>> import requests >>> from PIL import Image >>> from transformers import pipeline # Download an image with cute cats >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" >>> image_data = requests.get(url, stream=True).raw >>> image = Image.open(image_data) # Allocate a pipeline for object detection >>> object_detector = pipeline('object-detection') >>> object_detector(image) [{'score': 0.9982201457023621, 'label': 'remote', 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, {'score': 0.9960021376609802, 'label': 'remote', 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, {'score': 0.9954745173454285, 'label': 'couch', 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, {'score': 0.9988006353378296, 'label': 'cat', 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, {'score': 0.9986783862113953, 'label': 'cat', 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] ``` Here, we get a list of objects detected in the image, with a box surrounding the object and a confidence score. Here is the original image on the left, with the predictions displayed on the right: <h3 align="center"> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> </h3> You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary). In addition to `pipeline`, to download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` And here is the equivalent code for TensorFlow: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` The tokenizer is responsible for all the preprocessing the pretrained model expects and can be called directly on a single string (as in the above examples) or a list. It will output a dictionary that you can use in downstream code or simply directly pass to your model using the ** argument unpacking operator. The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset. ## Why should I use transformers? 1. Easy-to-use state-of-the-art models: - High performance on natural language understanding & generation, computer vision, and audio tasks. - Low barrier to entry for educators and practitioners. 
- Few user-facing abstractions with just three classes to learn. - A unified API for using all our pretrained models. 1. Lower compute costs, smaller carbon footprint: - Researchers can share trained models instead of always retraining. - Practitioners can reduce compute time and production costs. - Dozens of architectures with over 400,000 pretrained models across all modalities. 1. Choose the right framework for every part of a model's lifetime: - Train state-of-the-art models in 3 lines of code. - Move a single model between TF2.0/PyTorch/JAX frameworks at will. - Seamlessly pick the right framework for training, evaluation, and production. 1. Easily customize a model or an example to your needs: - We provide examples for each architecture to reproduce the results published by its original authors. - Model internals are exposed as consistently as possible. - Model files can be used independently of the library for quick experiments. ## Why shouldn't I use transformers? - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files. - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library (possibly, [Accelerate](https://huggingface.co/docs/accelerate)). - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. It is expected that they won't work out-of-the-box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. ## Installation ### With pip This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, and TensorFlow 2.6+. You should install ๐Ÿค— Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). First, create a virtual environment with the version of Python you're going to use and activate it. Then, you will need to install at least one of Flax, PyTorch, or TensorFlow. Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific installation command for your platform. When one of those backends has been installed, ๐Ÿค— Transformers can be installed using pip as follows: ```bash pip install transformers ``` If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source). ### With conda ๐Ÿค— Transformers can be installed using conda as follows: ```shell script conda install conda-forge::transformers ``` > **_NOTE:_** Installing `transformers` from the `huggingface` channel is deprecated. Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda. 
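As a minimal sanity check of the installation (assuming an internet connection, since the default sentiment-analysis model is downloaded on first use), you can run a small pipeline from the command line:

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```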
> **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062). ## Model architectures **[All the model checkpoints](https://huggingface.co/models)** provided by ๐Ÿค— Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co/models), where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations). Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— Transformers currently provides the following architectures: see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each them. To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the ๐Ÿค— Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks). These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://github.com/huggingface/transformers/tree/main/examples). ## Learn more | Section | Description | |-|-| | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials | | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by ๐Ÿค— Transformers | | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models | | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by ๐Ÿค— Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API | | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks | | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community | ## Citation We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the ๐Ÿค— Transformers library: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos
mavonic_private_repos/transformers/README_de.md
<!--- Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg"> <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg"> <img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;"> </picture> <br/> <br/> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎ€ไฝ“ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซ”ไธญๆ–‡</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ•œ๊ตญ์–ด</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆ—ฅๆœฌ่ชž</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ€</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัƒััะบะธะน</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑ†เฐฒเฑเฐ—เฑ</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> | <b>Deutsch</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแป‡t</a> | </p> </h4> <h3 align="center"> <p>Maschinelles Lernen auf dem neuesten Stand der Technik fรผr JAX, PyTorch und 
TensorFlow</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> ๐Ÿค— Transformers bietet Tausende von vortrainierten Modellen, um Aufgaben in verschiedenen Modalitรคten wie Text, Bild und Audio durchzufรผhren. Diese Modelle kรถnnen angewendet werden, auf: * ๐Ÿ“ Text - fรผr Aufgaben wie Textklassifizierung, Informationsextraktion, Question Answering, automatische Textzusammenfassung, maschinelle รœbersetzung und Textgenerierung in รผber 100 Sprachen. * ๐Ÿ–ผ๏ธ Bilder - fรผr Aufgaben wie Bildklassifizierung, Objekterkennung und Segmentierung. * ๐Ÿ—ฃ๏ธ Audio - fรผr Aufgaben wie Spracherkennung und Audioklassifizierung. Transformer-Modelle kรถnnen auch Aufgaben fรผr **mehrere Modalitรคten in Kombination** durchfรผhren, z. B. tabellenbasiertes Question Answering, optische Zeichenerkennung, Informationsextraktion aus gescannten Dokumenten, Videoklassifizierung und visuelles Question Answering. ๐Ÿค— Transformers bietet APIs, um diese vortrainierten Modelle schnell herunterzuladen und fรผr einen gegebenen Text zu verwenden, sie auf Ihren eigenen Datensรคtzen zu feintunen und dann mit der Community in unserem [Model Hub](https://huggingface.co/models) zu teilen. Gleichzeitig ist jedes Python-Modul, das eine Architektur definiert, komplett eigenstรคndig und kann modifiziert werden, um schnelle Forschungsexperimente zu ermรถglichen. ๐Ÿค— Transformers unterstรผtzt die nahtlose Integration von drei der beliebtesten Deep-Learning-Bibliotheken: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) und [TensorFlow](https://www.tensorflow.org/). Trainieren Sie Ihr Modell in einem Framework und laden Sie es zur Inferenz unkompliziert mit einem anderen. ## Online-Demos Sie kรถnnen die meisten unserer Modelle direkt auf ihren Seiten im [Model Hub](https://huggingface.co/models) testen. Wir bieten auch [privates Modell-Hosting, Versionierung, & eine Inferenz-API](https://huggingface.co/pricing) fรผr รถffentliche und private Modelle an. 
Hier sind einige Beispiele: In der Computerlinguistik: - [Maskierte Wortvervollstรคndigung mit BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [Eigennamenerkennung mit Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [Textgenerierung mit GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+) - [Natural Language Inference mit RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [Automatische Textzusammenfassung mit BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [Question Answering mit DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [Maschinelle รœbersetzung mit T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) In der Computer Vision: - [Bildklassifizierung mit ViT](https://huggingface.co/google/vit-base-patch16-224) - [Objekterkennung mit DETR](https://huggingface.co/facebook/detr-resnet-50) - [Semantische Segmentierung mit SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) - [Panoptische Segmentierung mit MaskFormer](https://huggingface.co/facebook/maskformer-swin-small-coco) - [Depth Estimation mit DPT](https://huggingface.co/docs/transformers/model_doc/dpt) - [Videoklassifizierung mit VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae) - 
[Universelle Segmentierung mit OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large) Im Audio-Bereich: - [Automatische Spracherkennung mit Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) - [Keyword Spotting mit Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) - [Audioklassifizierung mit Audio Spectrogram Transformer](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) In multimodalen Aufgaben: - [Tabellenbasiertes Question Answering mit TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq) - [Visuelles Question Answering mit ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) - [Zero-Shot-Bildklassifizierung mit CLIP](https://huggingface.co/openai/clip-vit-large-patch14) - [Dokumentenbasiertes Question Answering mit LayoutLM](https://huggingface.co/impira/layoutlm-document-qa) - [Zero-Shot-Videoklassifizierung mit X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip) ## 100 Projekte, die ๐Ÿค— Transformers verwenden ๐Ÿค— Transformers ist mehr als nur ein Toolkit zur Verwendung von vortrainierten Modellen: Es ist eine Gemeinschaft von Projekten, die darum herum und um den Hugging Face Hub aufgebaut sind. Wir mรถchten, dass ๐Ÿค— Transformers es Entwicklern, Forschern, Studenten, Professoren, Ingenieuren und jedem anderen ermรถglicht, ihre Traumprojekte zu realisieren. Um die 100.000 Sterne von ๐Ÿค— Transformers zu feiern, haben wir beschlossen, die Gemeinschaft in den Mittelpunkt zu stellen und die Seite [awesome-transformers](./awesome-transformers.md) erstellt, die 100 unglaubliche Projekte auflistet, die zusammen mit ๐Ÿค— Transformers realisiert wurden. Wenn Sie ein Projekt besitzen oder nutzen, von dem Sie glauben, dass es Teil der Liste sein sollte, รถffnen Sie bitte einen PR, um es hinzuzufรผgen! ## Wenn Sie individuelle Unterstรผtzung vom Hugging Face-Team mรถchten <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## Schnelleinstieg Um sofort ein Modell mit einer bestimmten Eingabe (Text, Bild, Audio ...) zu verwenden, bieten wir die `pipeline`-API an. Pipelines kombinieren ein vortrainiertes Modell mit der jeweiligen Vorverarbeitung, die wรคhrend dessen Trainings verwendet wurde. Hier sehen Sie, wie man schnell eine Pipeline verwenden kann, um positive und negative Texte zu klassifizieren: ```python >>> from transformers import pipeline # Zuweisung einer Pipeline fรผr die Sentiment-Analyse >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` Die zweite Codezeile lรคdt und cacht das vortrainierte Modell, das von der Pipeline verwendet wird, wรคhrend die dritte es an dem gegebenen Text evaluiert. Hier ist die Antwort "positiv" mit einer Konfidenz von 99,97 %. Viele Aufgaben, sowohl in der Computerlinguistik als auch in der Computer Vision und Sprachverarbeitung, haben eine vortrainierte `pipeline`, die sofort einsatzbereit ist. Z. B. 
kรถnnen wir leicht erkannte Objekte in einem Bild extrahieren: ``` python >>> import requests >>> from PIL import Image >>> from transformers import pipeline # Download eines Bildes mit sรผรŸen Katzen >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" >>> image_data = requests.get(url, stream=True).raw >>> image = Image.open(image_data) # Zuweisung einer Pipeline fรผr die Objekterkennung >>> object_detector = pipeline('object-detection') >>> object_detector(image) [{'score': 0.9982201457023621, 'label': 'remote', 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, {'score': 0.9960021376609802, 'label': 'remote', 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, {'score': 0.9954745173454285, 'label': 'couch', 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, {'score': 0.9988006353378296, 'label': 'cat', 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, {'score': 0.9986783862113953, 'label': 'cat', 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] ``` Hier erhalten wir eine Liste von Objekten, die im Bild erkannt wurden, mit einer Markierung, die das Objekt eingrenzt, und einem zugehรถrigen Konfidenzwert. Folgend ist das Originalbild links und die Vorhersagen rechts dargestellt: <h3 align="center"> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> </h3> Sie kรถnnen mehr รผber die von der `pipeline`-API unterstรผtzten Aufgaben in [diesem Tutorial](https://huggingface.co/docs/transformers/task_summary) erfahren. Zusรคtzlich zur `pipeline` benรถtigt es nur drei Zeilen Code, um eines der vortrainierten Modelle fรผr Ihre Aufgabe herunterzuladen und zu verwenden. Hier ist der Code fรผr die PyTorch-Version: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` Und hier ist der entsprechende Code fรผr TensorFlow: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` Der Tokenizer ist fรผr die gesamte Vorverarbeitung, die das vortrainierte Modell benรถtigt, verantwortlich und kann direkt auf einem einzelnen String (wie in den obigen Beispielen) oder einer Liste ausgefรผhrt werden. Er gibt ein Dictionary aus, das Sie im darauffolgenden Code verwenden oder einfach direkt Ihrem Modell รผbergeben kรถnnen, indem Sie den ** Operator zum Entpacken von Argumenten einsetzen. Das Modell selbst ist ein regulรคres [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) oder ein [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (abhรคngig von Ihrem Backend), das Sie wie gewohnt verwenden kรถnnen. 
[Dieses Tutorial](https://huggingface.co/docs/transformers/training) erklรคrt, wie man ein solches Modell in eine klassische PyTorch- oder TensorFlow-Trainingsschleife integrieren kann oder wie man unsere `Trainer`-API verwendet, um es schnell auf einem neuen Datensatz zu feintunen. ## Warum sollten Sie ๐Ÿค— Transformers verwenden? 1. Benutzerfreundliche Modelle auf dem neuesten Stand der Technik: - Hohe Leistung bei Aufgaben zu Natural Language Understanding & Generation, Computer Vision und Audio. - Niedrige Einstiegshรผrde fรผr Bildungskrรคfte und Praktiker. - Wenige benutzerseitige Abstraktionen mit nur drei zu lernenden Klassen. - Eine einheitliche API fรผr die Verwendung aller unserer vortrainierten Modelle. 1. Geringere Rechenkosten, kleinerer CO<sub>2</sub>-FuรŸabdruck: - Forscher kรถnnen trainierte Modelle teilen, anstatt sie immer wieder neu zu trainieren. - Praktiker kรถnnen die Rechenzeit und Produktionskosten reduzieren. - Dutzende Architekturen mit รผber 400.000 vortrainierten Modellen รผber alle Modalitรคten hinweg. 1. Wรคhlen Sie das richtige Framework fรผr jeden Lebensabschnitt eines Modells: - Trainieren Sie Modelle auf neustem Stand der Technik in nur drei Codezeilen. - Verwenden Sie ein einzelnes Modell nach Belieben mit TF2.0-/PyTorch-/JAX-Frameworks. - Wรคhlen Sie nahtlos das richtige Framework fรผr Training, Evaluation und Produktiveinsatz. 1. Passen Sie ein Modell oder Beispiel leicht an Ihre Bedรผrfnisse an: - Wir bieten Beispiele fรผr jede Architektur an, um die von ihren ursprรผnglichen Autoren verรถffentlichten Ergebnisse zu reproduzieren. - Modellinterna sind so einheitlich wie mรถglich verfรผgbar gemacht. - Modelldateien kรถnnen unabhรคngig von der Bibliothek fรผr schnelle Experimente verwendet werden. ## Warum sollten Sie ๐Ÿค— Transformers nicht verwenden? - Diese Bibliothek ist kein modularer Werkzeugkasten mit Bausteinen fรผr neuronale Netze. Der Code in den Modelldateien ist absichtlich nicht mit zusรคtzlichen Abstraktionen refaktorisiert, sodass Forscher schnell mit jedem der Modelle iterieren kรถnnen, ohne sich in zusรคtzliche Abstraktionen/Dateien vertiefen zu mรผssen. - Die Trainings-API ist nicht dafรผr gedacht, mit beliebigen Modellen zu funktionieren, sondern ist fรผr die Verwendung mit den von der Bibliothek bereitgestellten Modellen optimiert. Fรผr generische Trainingsschleifen von maschinellem Lernen sollten Sie eine andere Bibliothek verwenden (mรถglicherweise [Accelerate](https://huggingface.co/docs/accelerate)). - Auch wenn wir bestrebt sind, so viele Anwendungsfรคlle wie mรถglich zu veranschaulichen, sind die Beispielskripte in unserem [`examples`](./examples) Ordner genau das: Beispiele. Es ist davon auszugehen, dass sie nicht sofort auf Ihr spezielles Problem anwendbar sind und einige Codezeilen geรคndert werden mรผssen, um sie fรผr Ihre Bedรผrfnisse anzupassen. ## Installation ### Mit pip Dieses Repository wurde mit Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ und TensorFlow 2.6+ getestet. Sie sollten ๐Ÿค— Transformers in einer [virtuellen Umgebung](https://docs.python.org/3/library/venv.html) installieren. Wenn Sie mit virtuellen Python-Umgebungen nicht vertraut sind, schauen Sie sich den [Benutzerleitfaden](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) an. Erstellen und aktivieren Sie zuerst eine virtuelle Umgebung mit der Python-Version, die Sie verwenden mรถchten. Dann mรผssen Sie entweder Flax, PyTorch oder TensorFlow installieren. 
Bitte beziehe dich entsprechend auf die jeweiligen Installationsanleitungen fรผr [TensorFlow](https://www.tensorflow.org/install/), [PyTorch](https://pytorch.org/get-started/locally/#start-locally), und/oder [Flax](https://github.com/google/flax#quick-install) und [Jax](https://github.com/google/jax#installation) fรผr den spezifischen Installationsbefehl fรผr Ihre Plattform. Wenn eines dieser Backends installiert ist, kann ๐Ÿค— Transformers wie folgt mit pip installiert werden: ```bash pip install transformers ``` Wenn Sie mit den Beispielen experimentieren mรถchten oder die neueste Version des Codes benรถtigen und nicht auf eine neue Verรถffentlichung warten kรถnnen, mรผssen Sie [die Bibliothek von der Quelle installieren](https://huggingface.co/docs/transformers/installation#installing-from-source). ### Mit conda ๐Ÿค— Transformers kann wie folgt mit conda installiert werden: ```shell script conda install conda-forge::transformers ``` > **_HINWEIS:_** Die Installation von `transformers` aus dem `huggingface`-Kanal ist veraltet. Folgen Sie den Installationsanleitungen von Flax, PyTorch oder TensorFlow, um zu sehen, wie sie mit conda installiert werden kรถnnen. > **_HINWEIS:_** Auf Windows werden Sie mรถglicherweise aufgefordert, den Entwicklermodus zu aktivieren, um von Caching zu profitieren. Wenn das fรผr Sie keine Option ist, lassen Sie es uns bitte in [diesem Issue](https://github.com/huggingface/huggingface_hub/issues/1062) wissen. ## Modellarchitekturen **[Alle Modell-Checkpoints](https://huggingface.co/models)**, die von ๐Ÿค— Transformers bereitgestellt werden, sind nahtlos aus dem huggingface.co [Model Hub](https://huggingface.co/models) integriert, wo sie direkt von [Benutzern](https://huggingface.co/users) und [Organisationen](https://huggingface.co/organizations) hochgeladen werden. Aktuelle Anzahl der Checkpoints: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) ๐Ÿค— Transformers bietet derzeit die folgenden Architekturen an: siehe [hier](https://huggingface.co/docs/transformers/model_summary) fรผr eine jeweilige รœbersicht. Um zu รผberprรผfen, ob jedes Modell eine Implementierung in Flax, PyTorch oder TensorFlow hat oder รผber einen zugehรถrigen Tokenizer verfรผgt, der von der ๐Ÿค— Tokenizers-Bibliothek unterstรผtzt wird, schauen Sie auf [diese Tabelle](https://huggingface.co/docs/transformers/index#supported-frameworks). Diese Implementierungen wurden mit mehreren Datensรคtzen getestet (siehe Beispielskripte) und sollten den Leistungen der ursprรผnglichen Implementierungen entsprechen. Weitere Details zur Leistung finden Sie im Abschnitt der Beispiele in der [Dokumentation](https://github.com/huggingface/transformers/tree/main/examples). 
## Mehr erfahren | Abschnitt | Beschreibung | |-|-| | [Dokumentation](https://huggingface.co/docs/transformers/) | Vollstรคndige API-Dokumentation und Tutorials | | [Zusammenfassung der Aufgaben](https://huggingface.co/docs/transformers/task_summary) | Von ๐Ÿค— Transformers unterstรผtzte Aufgaben | | [Vorverarbeitungs-Tutorial](https://huggingface.co/docs/transformers/preprocessing) | Verwendung der `Tokenizer`-Klasse zur Vorverarbeitung der Daten fรผr die Modelle | | [Training und Feintuning](https://huggingface.co/docs/transformers/training) | Verwendung der von ๐Ÿค— Transformers bereitgestellten Modelle in einer PyTorch-/TensorFlow-Trainingsschleife und der `Trainer`-API | | [Schnelleinstieg: Feintuning/Anwendungsskripte](https://github.com/huggingface/transformers/tree/main/examples) | Beispielskripte fรผr das Feintuning von Modellen fรผr eine breite Palette von Aufgaben | | [Modellfreigabe und -upload](https://huggingface.co/docs/transformers/model_sharing) | Laden Sie Ihre feingetunten Modelle hoch und teilen Sie sie mit der Community | ## Zitation Wir haben jetzt ein [Paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/), das Sie fรผr die ๐Ÿค— Transformers-Bibliothek zitieren kรถnnen: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/split_model_tests.py
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This script is used to get the list of folders under `tests/models` and split the list into `NUM_SLICES` splits. The main use case is a GitHub Actions workflow file calling this script to get the (nested) list of folders allowing it to split the list of jobs to run into multiple slices each containing a smaller number of jobs. This way, we can bypass the maximum of 256 jobs in a matrix. See the `setup` and `run_models_gpu` jobs defined in the workflow file `.github/workflows/self-scheduled.yml` for more details. Usage: This script is required to be run under `tests` folder of `transformers` root directory. Assume we are under `transformers` root directory: ```bash cd tests python ../utils/split_model_tests.py --num_splits 64 ``` """ import argparse import os if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--num_splits", type=int, default=1, help="the number of splits into which the (flat) list of folders will be split.", ) args = parser.parse_args() tests = os.getcwd() model_tests = os.listdir(os.path.join(tests, "models")) d1 = sorted(filter(os.path.isdir, os.listdir(tests))) d2 = sorted(filter(os.path.isdir, [f"models/{x}" for x in model_tests])) d1.remove("models") d = d2 + d1 num_jobs = len(d) num_jobs_per_splits = num_jobs // args.num_splits model_splits = [] end = 0 for idx in range(args.num_splits): start = end end = start + num_jobs_per_splits + (1 if idx < num_jobs % args.num_splits else 0) model_splits.append(d[start:end]) print(model_splits)
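# Illustration (hypothetical numbers, not executed by the workflow): with 10 folders and
# --num_splits 4, the loop above yields slices of sizes [3, 3, 2, 2]; the first
# (num_jobs % num_splits) slices each receive one extra folder. The same distribution as a
# self-contained sketch:
#
#     def split_evenly(items, num_splits):
#         base, rem = divmod(len(items), num_splits)
#         out, end = [], 0
#         for idx in range(num_splits):
#             start, end = end, end + base + (1 if idx < rem else 0)
#             out.append(items[start:end])
#         return out
#
#     split_evenly(list(range(10)), 4)  # -> [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]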
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/check_docstrings.py
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utility that checks that all docstrings of public objects have an argument section matching their signature. Use from the root of the repo with: ```bash python utils/check_docstrings.py ``` for a check that will error in case of inconsistencies (used by `make repo-consistency`). To auto-fix issues, run: ```bash python utils/check_docstrings.py --fix_and_overwrite ``` which is used by `make fix-copies` (note that this fills what it can; you might have to manually fill information like argument descriptions). """ import argparse import ast import enum import inspect import operator as op import re from pathlib import Path from typing import Any, Optional, Tuple, Union from check_repo import ignore_undocumented from transformers.utils import direct_transformers_import PATH_TO_TRANSFORMERS = Path("src").resolve() / "transformers" # This is to make sure the transformers module imported is the one in the repo. transformers = direct_transformers_import(PATH_TO_TRANSFORMERS) OPTIONAL_KEYWORD = "*optional*" # Re pattern that catches args blocks in docstrings (with all variation around the name supported). _re_args = re.compile(r"^\s*(Args?|Arguments?|Attributes?|Params?|Parameters?):\s*$") # Re pattern that parses the start of an arg block: catches <name> (<description>) in those lines. _re_parse_arg = re.compile(r"^(\s*)(\S+)\s+\((.+)\)(?:\:|$)") # Re pattern that parses the end of a description of an arg (catches the default in *optional*, defaults to xxx). _re_parse_description = re.compile(r"\*optional\*, defaults to (.*)$") # This is a temporary list of objects to ignore while we progressively fix them. Do not add anything here, fix the # docstrings instead. If formatting should be ignored for the docstring, you can put a comment # no-format on the line before the docstring.
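# For illustration (a hypothetical docstring line, not used by the checks themselves): a line such as
#     attention_mask (`torch.Tensor`, *optional*, defaults to `None`):
# is matched by `_re_parse_arg` above, which captures the indentation, the argument name
# (`attention_mask`) and the parenthesised description; `_re_parse_description` then pulls the
# default value (`None`) out of the "*optional*, defaults to ..." part of that description.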
OBJECTS_TO_IGNORE = [ # Deprecated "InputExample", "InputFeatures", # Signature is *args/**kwargs # "PretrainedConfig", #ignored but could be fixed # "GenerationConfig", #ignored but could be fixed "TFSequenceSummary", "TFBertTokenizer", "TFGPT2Tokenizer", # Missing arguments in the docstring "ASTFeatureExtractor", "AlbertModel", "AlbertTokenizerFast", "AlignTextModel", "AlignVisionConfig", "AudioClassificationPipeline", "AutoformerConfig", "AutomaticSpeechRecognitionPipeline", "AzureOpenAiAgent", "BarkCoarseConfig", "BarkConfig", "BarkFineConfig", "BarkSemanticConfig", "BartConfig", "BartTokenizerFast", "BarthezTokenizerFast", "BeitModel", "BertConfig", "BertJapaneseTokenizer", "BertModel", "BertTokenizerFast", "BigBirdConfig", "BigBirdForQuestionAnswering", "BigBirdModel", "BigBirdPegasusConfig", "BigBirdTokenizerFast", "BitImageProcessor", "BlenderbotConfig", "BlenderbotSmallConfig", "BlenderbotSmallTokenizerFast", "BlenderbotTokenizerFast", "Blip2QFormerConfig", "Blip2VisionConfig", "BlipTextConfig", "BlipVisionConfig", "BloomConfig", "BloomTokenizerFast", "BridgeTowerTextConfig", "BridgeTowerVisionConfig", "BrosModel", "CamembertConfig", "CamembertModel", "CamembertTokenizerFast", "CanineModel", "CanineTokenizer", "ChineseCLIPTextModel", "ClapTextConfig", "ConditionalDetrConfig", "ConditionalDetrImageProcessor", "ConvBertConfig", "ConvBertTokenizerFast", "ConvNextConfig", "ConvNextV2Config", "ConversationalPipeline", "CpmAntTokenizer", "CvtConfig", "CvtModel", "DeiTImageProcessor", "DPRReaderTokenizer", "DPRReaderTokenizerFast", "DPTModel", "Data2VecAudioConfig", "Data2VecTextConfig", "Data2VecTextModel", "Data2VecVisionModel", "DataCollatorForLanguageModeling", "DebertaConfig", "DebertaV2Config", "DebertaV2Tokenizer", "DebertaV2TokenizerFast", "DecisionTransformerConfig", "DeformableDetrConfig", "DeformableDetrImageProcessor", "DeiTModel", "DepthEstimationPipeline", "DetaConfig", "DetaImageProcessor", "DetrConfig", "DetrImageProcessor", "DinatModel", "DistilBertConfig", "DistilBertTokenizerFast", "DocumentQuestionAnsweringPipeline", "DonutSwinModel", "EarlyStoppingCallback", "EfficientFormerConfig", "EfficientFormerImageProcessor", "EfficientNetConfig", "ElectraConfig", "ElectraTokenizerFast", "EncoderDecoderModel", "EncoderRepetitionPenaltyLogitsProcessor", "ErnieMModel", "ErnieModel", "ErnieMTokenizer", "EsmConfig", "EsmModel", "FlaxAlbertForMaskedLM", "FlaxAlbertForMultipleChoice", "FlaxAlbertForPreTraining", "FlaxAlbertForQuestionAnswering", "FlaxAlbertForSequenceClassification", "FlaxAlbertForTokenClassification", "FlaxAlbertModel", "FlaxBartForCausalLM", "FlaxBartForConditionalGeneration", "FlaxBartForQuestionAnswering", "FlaxBartForSequenceClassification", "FlaxBartModel", "FlaxBeitForImageClassification", "FlaxBeitForMaskedImageModeling", "FlaxBeitModel", "FlaxBertForCausalLM", "FlaxBertForMaskedLM", "FlaxBertForMultipleChoice", "FlaxBertForNextSentencePrediction", "FlaxBertForPreTraining", "FlaxBertForQuestionAnswering", "FlaxBertForSequenceClassification", "FlaxBertForTokenClassification", "FlaxBertModel", "FlaxBigBirdForCausalLM", "FlaxBigBirdForMaskedLM", "FlaxBigBirdForMultipleChoice", "FlaxBigBirdForPreTraining", "FlaxBigBirdForQuestionAnswering", "FlaxBigBirdForSequenceClassification", "FlaxBigBirdForTokenClassification", "FlaxBigBirdModel", "FlaxBlenderbotForConditionalGeneration", "FlaxBlenderbotModel", "FlaxBlenderbotSmallForConditionalGeneration", "FlaxBlenderbotSmallModel", "FlaxBloomForCausalLM", "FlaxBloomModel", "FlaxCLIPModel", "FlaxDistilBertForMaskedLM", 
"FlaxDistilBertForMultipleChoice", "FlaxDistilBertForQuestionAnswering", "FlaxDistilBertForSequenceClassification", "FlaxDistilBertForTokenClassification", "FlaxDistilBertModel", "FlaxElectraForCausalLM", "FlaxElectraForMaskedLM", "FlaxElectraForMultipleChoice", "FlaxElectraForPreTraining", "FlaxElectraForQuestionAnswering", "FlaxElectraForSequenceClassification", "FlaxElectraForTokenClassification", "FlaxElectraModel", "FlaxEncoderDecoderModel", "FlaxGPT2LMHeadModel", "FlaxGPT2Model", "FlaxGPTJForCausalLM", "FlaxGPTJModel", "FlaxGPTNeoForCausalLM", "FlaxGPTNeoModel", "FlaxLlamaForCausalLM", "FlaxLlamaModel", "FlaxGemmaForCausalLM", "FlaxGemmaModel", "FlaxMBartForConditionalGeneration", "FlaxMBartForQuestionAnswering", "FlaxMBartForSequenceClassification", "FlaxMBartModel", "FlaxMarianMTModel", "FlaxMarianModel", "FlaxMistralForCausalLM", "FlaxMistralModel", "FlaxOPTForCausalLM", "FlaxPegasusForConditionalGeneration", "FlaxPegasusModel", "FlaxRegNetForImageClassification", "FlaxRegNetModel", "FlaxResNetForImageClassification", "FlaxResNetModel", "FlaxRoFormerForMaskedLM", "FlaxRoFormerForMultipleChoice", "FlaxRoFormerForQuestionAnswering", "FlaxRoFormerForSequenceClassification", "FlaxRoFormerForTokenClassification", "FlaxRoFormerModel", "FlaxRobertaForCausalLM", "FlaxRobertaForMaskedLM", "FlaxRobertaForMultipleChoice", "FlaxRobertaForQuestionAnswering", "FlaxRobertaForSequenceClassification", "FlaxRobertaForTokenClassification", "FlaxRobertaModel", "FlaxRobertaPreLayerNormForCausalLM", "FlaxRobertaPreLayerNormForMaskedLM", "FlaxRobertaPreLayerNormForMultipleChoice", "FlaxRobertaPreLayerNormForQuestionAnswering", "FlaxRobertaPreLayerNormForSequenceClassification", "FlaxRobertaPreLayerNormForTokenClassification", "FlaxRobertaPreLayerNormModel", "FlaxSpeechEncoderDecoderModel", "FlaxViTForImageClassification", "FlaxViTModel", "FlaxVisionEncoderDecoderModel", "FlaxVisionTextDualEncoderModel", "FlaxWav2Vec2ForCTC", "FlaxWav2Vec2ForPreTraining", "FlaxWav2Vec2Model", "FlaxWhisperForAudioClassification", "FlaxWhisperForConditionalGeneration", "FlaxWhisperModel", "FlaxWhisperTimeStampLogitsProcessor", "FlaxXGLMForCausalLM", "FlaxXGLMModel", "FlaxXLMRobertaForCausalLM", "FlaxXLMRobertaForMaskedLM", "FlaxXLMRobertaForMultipleChoice", "FlaxXLMRobertaForQuestionAnswering", "FlaxXLMRobertaForSequenceClassification", "FlaxXLMRobertaForTokenClassification", "FlaxXLMRobertaModel", "FNetConfig", "FNetModel", "FNetTokenizerFast", "FSMTConfig", "FeatureExtractionPipeline", "FillMaskPipeline", "FlaubertConfig", "FlavaConfig", "FlavaForPreTraining", "FlavaImageModel", "FlavaImageProcessor", "FlavaMultimodalModel", "FlavaTextConfig", "FlavaTextModel", "FocalNetModel", "FunnelTokenizerFast", "GPTBigCodeConfig", "GPTJConfig", "GPTNeoXConfig", "GPTNeoXJapaneseConfig", "GPTNeoXTokenizerFast", "GPTSanJapaneseConfig", "GitConfig", "GitVisionConfig", "GraphormerConfig", "GroupViTTextConfig", "GroupViTVisionConfig", "HerbertTokenizerFast", "HubertConfig", "HubertForCTC", "IBertConfig", "IBertModel", "IdeficsConfig", "IdeficsProcessor", "ImageClassificationPipeline", "ImageFeatureExtractionPipeline", "ImageGPTConfig", "ImageSegmentationPipeline", "ImageToImagePipeline", "ImageToTextPipeline", "InformerConfig", "InstructBlipQFormerConfig", "JukeboxPriorConfig", "JukeboxTokenizer", "LEDConfig", "LEDTokenizerFast", "LayoutLMForQuestionAnswering", "LayoutLMTokenizerFast", "LayoutLMv2Config", "LayoutLMv2ForQuestionAnswering", "LayoutLMv2TokenizerFast", "LayoutLMv3Config", "LayoutLMv3ImageProcessor", 
"LayoutLMv3TokenizerFast", "LayoutXLMTokenizerFast", "LevitConfig", "LiltConfig", "LiltModel", "LongT5Config", "LongformerConfig", "LongformerModel", "LongformerTokenizerFast", "LukeModel", "LukeTokenizer", "LxmertTokenizerFast", "M2M100Config", "M2M100Tokenizer", "MarkupLMProcessor", "MaskGenerationPipeline", "MBart50TokenizerFast", "MBartConfig", "MCTCTFeatureExtractor", "MPNetConfig", "MPNetModel", "MPNetTokenizerFast", "MT5Config", "MT5TokenizerFast", "MarianConfig", "MarianTokenizer", "MarkupLMConfig", "MarkupLMModel", "MarkupLMTokenizer", "MarkupLMTokenizerFast", "Mask2FormerConfig", "MaskFormerConfig", "MaxTimeCriteria", "MegaConfig", "MegaModel", "MegatronBertConfig", "MegatronBertForPreTraining", "MegatronBertModel", "MobileBertConfig", "MobileBertModel", "MobileBertTokenizerFast", "MobileNetV1ImageProcessor", "MobileNetV1Model", "MobileNetV2ImageProcessor", "MobileNetV2Model", "MobileViTModel", "MobileViTV2Model", "MLukeTokenizer", "MraConfig", "MusicgenDecoderConfig", "MusicgenForConditionalGeneration", "MusicgenMelodyForConditionalGeneration", "MvpConfig", "MvpTokenizerFast", "MT5Tokenizer", "NatModel", "NerPipeline", "NezhaConfig", "NezhaModel", "NllbMoeConfig", "NllbTokenizer", "NllbTokenizerFast", "NystromformerConfig", "OPTConfig", "ObjectDetectionPipeline", "OneFormerProcessor", "OpenAIGPTTokenizerFast", "OpenLlamaConfig", "PLBartConfig", "PegasusConfig", "PegasusTokenizer", "PegasusTokenizerFast", "PegasusXConfig", "PerceiverImageProcessor", "PerceiverModel", "PerceiverTokenizer", "PersimmonConfig", "Pipeline", "Pix2StructConfig", "Pix2StructTextConfig", "PLBartTokenizer", "Pop2PianoConfig", "PreTrainedTokenizer", "PreTrainedTokenizerBase", "PreTrainedTokenizerFast", "PrefixConstrainedLogitsProcessor", "ProphetNetConfig", "QDQBertConfig", "QDQBertModel", "QuestionAnsweringPipeline", "RagConfig", "RagModel", "RagRetriever", "RagSequenceForGeneration", "RagTokenForGeneration", "RealmConfig", "RealmForOpenQA", "RealmScorer", "RealmTokenizerFast", "ReformerConfig", "ReformerTokenizerFast", "RegNetConfig", "RemBertConfig", "RemBertModel", "RemBertTokenizer", "RemBertTokenizerFast", "RepetitionPenaltyLogitsProcessor", "RetriBertConfig", "RetriBertTokenizerFast", "RoCBertConfig", "RoCBertModel", "RoCBertTokenizer", "RoFormerConfig", "RobertaConfig", "RobertaModel", "RobertaPreLayerNormConfig", "RobertaPreLayerNormModel", "RobertaTokenizerFast", "SEWConfig", "SEWDConfig", "SEWDForCTC", "SEWForCTC", "SamConfig", "SamPromptEncoderConfig", "SeamlessM4TConfig", # use of unconventional markdown "SeamlessM4Tv2Config", # use of unconventional markdown "Seq2SeqTrainingArguments", "SpecialTokensMixin", "Speech2Text2Config", "Speech2Text2Tokenizer", "Speech2TextTokenizer", "SpeechEncoderDecoderModel", "SpeechT5Config", "SpeechT5Model", "SplinterConfig", "SplinterTokenizerFast", "SqueezeBertTokenizerFast", "SummarizationPipeline", "Swin2SRImageProcessor", "Swinv2Model", "SwitchTransformersConfig", "T5Config", "T5Tokenizer", "T5TokenizerFast", "TableQuestionAnsweringPipeline", "TableTransformerConfig", "TapasConfig", "TapasModel", "TapasTokenizer", "Text2TextGenerationPipeline", "TextClassificationPipeline", "TextGenerationPipeline", "TFAlbertForMaskedLM", "TFAlbertForMultipleChoice", "TFAlbertForPreTraining", "TFAlbertForQuestionAnswering", "TFAlbertForSequenceClassification", "TFAlbertForTokenClassification", "TFAlbertModel", "TFBartForConditionalGeneration", "TFBartForSequenceClassification", "TFBartModel", "TFBertForMaskedLM", "TFBertForMultipleChoice", "TFBertForNextSentencePrediction", 
"TFBertForPreTraining", "TFBertForQuestionAnswering", "TFBertForSequenceClassification", "TFBertForTokenClassification", "TFBertModel", "TFBlenderbotForConditionalGeneration", "TFBlenderbotModel", "TFBlenderbotSmallForConditionalGeneration", "TFBlenderbotSmallModel", "TFBlipForConditionalGeneration", "TFBlipForImageTextRetrieval", "TFBlipForQuestionAnswering", "TFCLIPModel", "TFCTRLForSequenceClassification", "TFCTRLLMHeadModel", "TFCTRLModel", "TFCamembertForCausalLM", "TFCamembertForMaskedLM", "TFCamembertForMultipleChoice", "TFCamembertForQuestionAnswering", "TFCamembertForSequenceClassification", "TFCamembertForTokenClassification", "TFCamembertModel", "TFConvBertForMaskedLM", "TFConvBertForMultipleChoice", "TFConvBertForQuestionAnswering", "TFConvBertForSequenceClassification", "TFConvBertForTokenClassification", "TFConvBertModel", "TFConvNextForImageClassification", "TFConvNextModel", "TFConvNextV2Model", # Parsing issue. Equivalent to PT ConvNextV2Model, see PR #25558 "TFConvNextV2ForImageClassification", "TFCvtForImageClassification", "TFCvtModel", "TFDPRReader", "TFData2VecVisionForImageClassification", "TFData2VecVisionForSemanticSegmentation", "TFData2VecVisionModel", "TFDebertaForMaskedLM", "TFDebertaForQuestionAnswering", "TFDebertaForSequenceClassification", "TFDebertaForTokenClassification", "TFDebertaModel", "TFDebertaV2ForMaskedLM", "TFDebertaV2ForMultipleChoice", "TFDebertaV2ForQuestionAnswering", "TFDebertaV2ForSequenceClassification", "TFDebertaV2ForTokenClassification", "TFDebertaV2Model", "TFDeiTForImageClassification", "TFDeiTForImageClassificationWithTeacher", "TFDeiTForMaskedImageModeling", "TFDeiTModel", "TFDistilBertForMaskedLM", "TFDistilBertForMultipleChoice", "TFDistilBertForQuestionAnswering", "TFDistilBertForSequenceClassification", "TFDistilBertForTokenClassification", "TFDistilBertModel", "TFEfficientFormerForImageClassification", "TFEfficientFormerForImageClassificationWithTeacher", "TFEfficientFormerModel", "TFElectraForMaskedLM", "TFElectraForMultipleChoice", "TFElectraForPreTraining", "TFElectraForQuestionAnswering", "TFElectraForSequenceClassification", "TFElectraForTokenClassification", "TFElectraModel", "TFEncoderDecoderModel", "TFEsmForMaskedLM", "TFEsmForSequenceClassification", "TFEsmForTokenClassification", "TFEsmModel", "TFFlaubertForMultipleChoice", "TFFlaubertForQuestionAnsweringSimple", "TFFlaubertForSequenceClassification", "TFFlaubertForTokenClassification", "TFFlaubertModel", "TFFlaubertWithLMHeadModel", "TFFunnelBaseModel", "TFFunnelForMaskedLM", "TFFunnelForMultipleChoice", "TFFunnelForPreTraining", "TFFunnelForQuestionAnswering", "TFFunnelForSequenceClassification", "TFFunnelForTokenClassification", "TFFunnelModel", "TFGPT2DoubleHeadsModel", "TFGPT2ForSequenceClassification", "TFGPT2LMHeadModel", "TFGPT2Model", "TFGPTJForCausalLM", "TFGPTJForQuestionAnswering", "TFGPTJForSequenceClassification", "TFGPTJModel", "TFGroupViTModel", "TFHubertForCTC", "TFHubertModel", "TFLEDForConditionalGeneration", "TFLEDModel", "TFLayoutLMForMaskedLM", "TFLayoutLMForQuestionAnswering", "TFLayoutLMForSequenceClassification", "TFLayoutLMForTokenClassification", "TFLayoutLMModel", "TFLayoutLMv3ForQuestionAnswering", "TFLayoutLMv3ForSequenceClassification", "TFLayoutLMv3ForTokenClassification", "TFLayoutLMv3Model", "TFLongformerForMaskedLM", "TFLongformerForMultipleChoice", "TFLongformerForQuestionAnswering", "TFLongformerForSequenceClassification", "TFLongformerForTokenClassification", "TFLongformerModel", "TFLxmertForPreTraining", "TFLxmertModel", 
"TFMBartForConditionalGeneration", "TFMBartModel", "TFMPNetForMaskedLM", "TFMPNetForMultipleChoice", "TFMPNetForQuestionAnswering", "TFMPNetForSequenceClassification", "TFMPNetForTokenClassification", "TFMPNetModel", "TFMarianMTModel", "TFMarianModel", "TFMobileBertForMaskedLM", "TFMobileBertForMultipleChoice", "TFMobileBertForNextSentencePrediction", "TFMobileBertForPreTraining", "TFMobileBertForQuestionAnswering", "TFMobileBertForSequenceClassification", "TFMobileBertForTokenClassification", "TFMobileBertModel", "TFMobileViTForImageClassification", "TFMobileViTForSemanticSegmentation", "TFMobileViTModel", "TFOPTForCausalLM", "TFOPTModel", "TFOpenAIGPTDoubleHeadsModel", "TFOpenAIGPTForSequenceClassification", "TFOpenAIGPTLMHeadModel", "TFOpenAIGPTModel", "TFPegasusForConditionalGeneration", "TFPegasusModel", "TFRagModel", "TFRagSequenceForGeneration", "TFRagTokenForGeneration", "TFRegNetForImageClassification", "TFRegNetModel", "TFRemBertForCausalLM", "TFRemBertForMaskedLM", "TFRemBertForMultipleChoice", "TFRemBertForQuestionAnswering", "TFRemBertForSequenceClassification", "TFRemBertForTokenClassification", "TFRemBertModel", "TFRepetitionPenaltyLogitsProcessor", "TFResNetForImageClassification", "TFResNetModel", "TFRoFormerForCausalLM", "TFRoFormerForMaskedLM", "TFRoFormerForMultipleChoice", "TFRoFormerForQuestionAnswering", "TFRoFormerForSequenceClassification", "TFRoFormerForTokenClassification", "TFRoFormerModel", "TFRobertaForMaskedLM", "TFRobertaForMultipleChoice", "TFRobertaForQuestionAnswering", "TFRobertaForSequenceClassification", "TFRobertaForTokenClassification", "TFRobertaModel", "TFRobertaPreLayerNormForMaskedLM", "TFRobertaPreLayerNormForMultipleChoice", "TFRobertaPreLayerNormForQuestionAnswering", "TFRobertaPreLayerNormForSequenceClassification", "TFRobertaPreLayerNormForTokenClassification", "TFRobertaPreLayerNormModel", "TFSamModel", "TFSegformerForImageClassification", "TFSegformerForSemanticSegmentation", "TFSegformerModel", "TFSpeech2TextForConditionalGeneration", "TFSpeech2TextModel", "TFSwiftFormerForImageClassification", "TFSwiftFormerModel", "TFSwinForImageClassification", "TFSwinForMaskedImageModeling", "TFSwinModel", "TFT5EncoderModel", "TFT5ForConditionalGeneration", "TFT5Model", "TFTapasForMaskedLM", "TFTapasForQuestionAnswering", "TFTapasForSequenceClassification", "TFTapasModel", "TFTransfoXLForSequenceClassification", "TFTransfoXLLMHeadModel", "TFTransfoXLModel", "TFViTForImageClassification", "TFViTMAEForPreTraining", "TFViTMAEModel", "TFViTModel", "TFVisionEncoderDecoderModel", "TFVisionTextDualEncoderModel", "TFWav2Vec2ForCTC", "TFWav2Vec2Model", "TFWhisperForConditionalGeneration", "TFWhisperModel", "TFXGLMForCausalLM", "TFXGLMModel", "TFXLMForMultipleChoice", "TFXLMForQuestionAnsweringSimple", "TFXLMForSequenceClassification", "TFXLMForTokenClassification", "TFXLMModel", "TFXLMRobertaForCausalLM", "TFXLMRobertaForMaskedLM", "TFXLMRobertaForMultipleChoice", "TFXLMRobertaForQuestionAnswering", "TFXLMRobertaForSequenceClassification", "TFXLMRobertaForTokenClassification", "TFXLMRobertaModel", "TFXLMWithLMHeadModel", "TFXLNetForMultipleChoice", "TFXLNetForQuestionAnsweringSimple", "TFXLNetForSequenceClassification", "TFXLNetForTokenClassification", "TFXLNetLMHeadModel", "TFXLNetModel", "TimeSeriesTransformerConfig", "TokenClassificationPipeline", "TrOCRConfig", "TrainerState", "TrainingArguments", "TrajectoryTransformerConfig", "TranslationPipeline", "TvltImageProcessor", "UMT5Config", "UperNetConfig", "UperNetForSemanticSegmentation", 
"ViTHybridImageProcessor", "ViTHybridModel", "ViTMSNModel", "ViTModel", "VideoClassificationPipeline", "ViltConfig", "ViltForImagesAndTextClassification", "ViltModel", "VisionEncoderDecoderModel", "VisionTextDualEncoderModel", "VisualBertConfig", "VisualBertModel", "VisualQuestionAnsweringPipeline", "VitMatteForImageMatting", "VitsTokenizer", "VivitModel", "Wav2Vec2BertForCTC", "Wav2Vec2CTCTokenizer", "Wav2Vec2Config", "Wav2Vec2ConformerConfig", "Wav2Vec2ConformerForCTC", "Wav2Vec2FeatureExtractor", "Wav2Vec2PhonemeCTCTokenizer", "WavLMConfig", "WavLMForCTC", "WhisperConfig", "WhisperFeatureExtractor", "WhisperForAudioClassification", "XCLIPTextConfig", "XCLIPVisionConfig", "XGLMConfig", "XGLMModel", "XGLMTokenizerFast", "XLMConfig", "XLMProphetNetConfig", "XLMRobertaConfig", "XLMRobertaModel", "XLMRobertaTokenizerFast", "XLMRobertaXLConfig", "XLMRobertaXLModel", "XLNetConfig", "XLNetTokenizerFast", "XmodConfig", "XmodModel", "YolosImageProcessor", "YolosModel", "YosoConfig", "ZeroShotAudioClassificationPipeline", "ZeroShotClassificationPipeline", "ZeroShotImageClassificationPipeline", "ZeroShotObjectDetectionPipeline", ] # Supported math operations when interpreting the value of defaults. MATH_OPERATORS = { ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv, ast.Pow: op.pow, ast.BitXor: op.xor, ast.USub: op.neg, } def find_indent(line: str) -> int: """ Returns the number of spaces that start a line indent. """ search = re.search(r"^(\s*)(?:\S|$)", line) if search is None: return 0 return len(search.groups()[0]) def stringify_default(default: Any) -> str: """ Returns the string representation of a default value, as used in docstring: numbers are left as is, all other objects are in backtiks. Args: default (`Any`): The default value to process Returns: `str`: The string representation of that default. """ if isinstance(default, bool): # We need to test for bool first as a bool passes isinstance(xxx, (int, float)) return f"`{default}`" elif isinstance(default, enum.Enum): # We need to test for enum first as an enum with int values will pass isinstance(xxx, (int, float)) return f"`{str(default)}`" elif isinstance(default, int): return str(default) elif isinstance(default, float): result = str(default) return str(round(default, 2)) if len(result) > 6 else result elif isinstance(default, str): return str(default) if default.isnumeric() else f'`"{default}"`' elif isinstance(default, type): return f"`{default.__name__}`" else: return f"`{default}`" def eval_math_expression(expression: str) -> Optional[Union[float, int]]: # Mainly taken from the excellent https://stackoverflow.com/a/9558001 """ Evaluate (safely) a mathematial expression and returns its value. Args: expression (`str`): The expression to evaluate. Returns: `Optional[Union[float, int]]`: Returns `None` if the evaluation fails in any way and the value computed otherwise. 
Example: ```py >>> eval_expr('2^6') 4 >>> eval_expr('2**6') 64 >>> eval_expr('1 + 2*3**(4^5) / (6 + -7)') -5.0 ``` """ try: return eval_node(ast.parse(expression, mode="eval").body) except TypeError: return def eval_node(node): if isinstance(node, ast.Num): # <number> return node.n elif isinstance(node, ast.BinOp): # <left> <operator> <right> return MATH_OPERATORS[type(node.op)](eval_node(node.left), eval_node(node.right)) elif isinstance(node, ast.UnaryOp): # <operator> <operand> e.g., -1 return MATH_OPERATORS[type(node.op)](eval_node(node.operand)) else: raise TypeError(node) def replace_default_in_arg_description(description: str, default: Any) -> str: """ Catches the default value in the description of an argument inside a docstring and replaces it by the value passed. Args: description (`str`): The description of an argument in a docstring to process. default (`Any`): The default value that whould be in the docstring of that argument. Returns: `str`: The description updated with the new default value. """ # Lots of docstrings have `optional` or **opational** instead of *optional* so we do this fix here. description = description.replace("`optional`", OPTIONAL_KEYWORD) description = description.replace("**optional**", OPTIONAL_KEYWORD) if default is inspect._empty: # No default, make sure the description doesn't have any either idx = description.find(OPTIONAL_KEYWORD) if idx != -1: description = description[:idx].rstrip() if description.endswith(","): description = description[:-1].rstrip() elif default is None: # Default None are not written, we just set `*optional*`. If there is default that is not None specified in the # description, we do not erase it (as sometimes we set the default to `None` because the default is a mutable # object). idx = description.find(OPTIONAL_KEYWORD) if idx == -1: description = f"{description}, {OPTIONAL_KEYWORD}" elif re.search(r"defaults to `?None`?", description) is not None: len_optional = len(OPTIONAL_KEYWORD) description = description[: idx + len_optional] else: str_default = None # For numbers we may have a default that is given by a math operation (1/255 is really popular). We don't # want to replace those by their actual values. if isinstance(default, (int, float)) and re.search("defaults to `?(.*?)(?:`|$)", description) is not None: # Grab the default and evaluate it. current_default = re.search("defaults to `?(.*?)(?:`|$)", description).groups()[0] if default == eval_math_expression(current_default): try: # If it can be directly converted to the type of the default, it's a simple value str_default = str(type(default)(current_default)) except Exception: # Otherwise there is a math operator so we add a code block. str_default = f"`{current_default}`" elif isinstance(default, enum.Enum) and default.name == current_default.split(".")[-1]: # When the default is an Enum (this is often the case for PIL.Image.Resampling), and the docstring # matches the enum name, keep the existing docstring rather than clobbering it with the enum value. 
str_default = f"`{current_default}`" if str_default is None: str_default = stringify_default(default) # Make sure default match if OPTIONAL_KEYWORD not in description: description = f"{description}, {OPTIONAL_KEYWORD}, defaults to {str_default}" elif _re_parse_description.search(description) is None: idx = description.find(OPTIONAL_KEYWORD) len_optional = len(OPTIONAL_KEYWORD) description = f"{description[:idx + len_optional]}, defaults to {str_default}" else: description = _re_parse_description.sub(rf"*optional*, defaults to {str_default}", description) return description def get_default_description(arg: inspect.Parameter) -> str: """ Builds a default description for a parameter that was not documented. Args: arg (`inspect.Parameter`): The argument in the signature to generate a description for. Returns: `str`: The description. """ if arg.annotation is inspect._empty: arg_type = "<fill_type>" elif hasattr(arg.annotation, "__name__"): arg_type = arg.annotation.__name__ else: arg_type = str(arg.annotation) if arg.default is inspect._empty: return f"`{arg_type}`" elif arg.default is None: return f"`{arg_type}`, {OPTIONAL_KEYWORD}" else: str_default = stringify_default(arg.default) return f"`{arg_type}`, {OPTIONAL_KEYWORD}, defaults to {str_default}" def find_source_file(obj: Any) -> Path: """ Finds the source file of an object. Args: obj (`Any`): The object whose source file we are looking for. Returns: `Path`: The source file. """ module = obj.__module__ obj_file = PATH_TO_TRANSFORMERS for part in module.split(".")[1:]: obj_file = obj_file / part return obj_file.with_suffix(".py") def match_docstring_with_signature(obj: Any) -> Optional[Tuple[str, str]]: """ Matches the docstring of an object with its signature. Args: obj (`Any`): The object to process. Returns: `Optional[Tuple[str, str]]`: Returns `None` if there is no docstring or no parameters documented in the docstring, otherwise returns a tuple of two strings: the current documentation of the arguments in the docstring and the one matched with the signature. """ if len(getattr(obj, "__doc__", "")) == 0: # Nothing to do, there is no docstring. return # Read the docstring in the source code to see if there is a special command to ignore this object. try: source, _ = inspect.getsourcelines(obj) except OSError: source = [] idx = 0 while idx < len(source) and '"""' not in source[idx]: idx += 1 ignore_order = False if idx < len(source): line_before_docstring = source[idx - 1] if re.search(r"^\s*#\s*no-format\s*$", line_before_docstring): # This object is ignored return elif re.search(r"^\s*#\s*ignore-order\s*$", line_before_docstring): ignore_order = True # Read the signature signature = inspect.signature(obj).parameters obj_doc_lines = obj.__doc__.split("\n") # Get to the line where we start documenting arguments idx = 0 while idx < len(obj_doc_lines) and _re_args.search(obj_doc_lines[idx]) is None: idx += 1 if idx == len(obj_doc_lines): # Nothing to do, no parameters are documented. return indent = find_indent(obj_doc_lines[idx]) arguments = {} current_arg = None idx += 1 start_idx = idx # Keep going until the arg section is finished (nonempty line at the same indent level) or the end of the docstring. 
while idx < len(obj_doc_lines) and ( len(obj_doc_lines[idx].strip()) == 0 or find_indent(obj_doc_lines[idx]) > indent ): if find_indent(obj_doc_lines[idx]) == indent + 4: # New argument -> let's generate the proper doc for it re_search_arg = _re_parse_arg.search(obj_doc_lines[idx]) if re_search_arg is not None: _, name, description = re_search_arg.groups() current_arg = name if name in signature: default = signature[name].default if signature[name].kind is inspect._ParameterKind.VAR_KEYWORD: default = None new_description = replace_default_in_arg_description(description, default) else: new_description = description init_doc = _re_parse_arg.sub(rf"\1\2 ({new_description}):", obj_doc_lines[idx]) arguments[current_arg] = [init_doc] elif current_arg is not None: arguments[current_arg].append(obj_doc_lines[idx]) idx += 1 # We went too far by one (perhaps more if there are a lot of new lines) idx -= 1 while len(obj_doc_lines[idx].strip()) == 0: arguments[current_arg] = arguments[current_arg][:-1] idx -= 1 # And we went too far by one again. idx += 1 old_doc_arg = "\n".join(obj_doc_lines[start_idx:idx]) old_arguments = list(arguments.keys()) arguments = {name: "\n".join(doc) for name, doc in arguments.items()} # Add missing arguments with a template for name in set(signature.keys()) - set(arguments.keys()): arg = signature[name] # We ignore private arguments or *args/**kwargs (unless they are documented by the user) if name.startswith("_") or arg.kind in [ inspect._ParameterKind.VAR_KEYWORD, inspect._ParameterKind.VAR_POSITIONAL, ]: arguments[name] = "" else: arg_desc = get_default_description(arg) arguments[name] = " " * (indent + 4) + f"{name} ({arg_desc}): <fill_docstring>" # Arguments are sorted by the order in the signature unless a special comment is put. if ignore_order: new_param_docs = [arguments[name] for name in old_arguments if name in signature] missing = set(signature.keys()) - set(old_arguments) new_param_docs.extend([arguments[name] for name in missing if len(arguments[name]) > 0]) else: new_param_docs = [arguments[name] for name in signature.keys() if len(arguments[name]) > 0] new_doc_arg = "\n".join(new_param_docs) return old_doc_arg, new_doc_arg def fix_docstring(obj: Any, old_doc_args: str, new_doc_args: str): """ Fixes the docstring of an object by replacing its arguments documentaiton by the one matched with the signature. Args: obj (`Any`): The object whose dostring we are fixing. old_doc_args (`str`): The current documentation of the parameters of `obj` in the docstring (as returned by `match_docstring_with_signature`). new_doc_args (`str`): The documentation of the parameters of `obj` matched with its signature (as returned by `match_docstring_with_signature`). 
""" # Read the docstring in the source code and make sure we have the right part of the docstring source, line_number = inspect.getsourcelines(obj) # Get to the line where we start documenting arguments idx = 0 while idx < len(source) and _re_args.search(source[idx]) is None: idx += 1 if idx == len(source): # Args are not defined in the docstring of this object return # Get to the line where we stop documenting arguments indent = find_indent(source[idx]) idx += 1 start_idx = idx while idx < len(source) and (len(source[idx].strip()) == 0 or find_indent(source[idx]) > indent): idx += 1 idx -= 1 while len(source[idx].strip()) == 0: idx -= 1 idx += 1 if "".join(source[start_idx:idx])[:-1] != old_doc_args: # Args are not fully defined in the docstring of this object return obj_file = find_source_file(obj) with open(obj_file, "r", encoding="utf-8") as f: content = f.read() # Replace content lines = content.split("\n") lines = lines[: line_number + start_idx - 1] + [new_doc_args] + lines[line_number + idx - 1 :] print(f"Fixing the docstring of {obj.__name__} in {obj_file}.") with open(obj_file, "w", encoding="utf-8") as f: f.write("\n".join(lines)) def check_docstrings(overwrite: bool = False): """ Check docstrings of all public objects that are callables and are documented. Args: overwrite (`bool`, *optional*, defaults to `False`): Whether to fix inconsistencies or not. """ failures = [] hard_failures = [] to_clean = [] for name in dir(transformers): # Skip objects that are private or not documented. if name.startswith("_") or ignore_undocumented(name) or name in OBJECTS_TO_IGNORE: continue obj = getattr(transformers, name) if not callable(obj) or not isinstance(obj, type) or getattr(obj, "__doc__", None) is None: continue # Check docstring try: result = match_docstring_with_signature(obj) if result is not None: old_doc, new_doc = result else: old_doc, new_doc = None, None except Exception as e: print(e) hard_failures.append(name) continue if old_doc != new_doc: if overwrite: fix_docstring(obj, old_doc, new_doc) else: failures.append(name) elif not overwrite and new_doc is not None and ("<fill_type>" in new_doc or "<fill_docstring>" in new_doc): to_clean.append(name) # Deal with errors error_message = "" if len(hard_failures) > 0: error_message += ( "The argument part of the docstrings of the following objects could not be processed, check they are " "properly formatted." ) error_message += "\n" + "\n".join([f"- {name}" for name in hard_failures]) if len(failures) > 0: error_message += ( "The following objects docstrings do not match their signature. Run `make fix-copies` to fix this. " "In some cases, this error may be raised incorrectly by the docstring checker. If you think this is the " "case, you can manually check the docstrings and then add the object name to `OBJECTS_TO_IGNORE` in " "`utils/check_docstrings.py`." ) error_message += "\n" + "\n".join([f"- {name}" for name in failures]) if len(to_clean) > 0: error_message += ( "The following objects docstrings contain templates you need to fix: search for `<fill_type>` or " "`<fill_docstring>`." 
) error_message += "\n" + "\n".join([f"- {name}" for name in to_clean]) if len(error_message) > 0: error_message = "There was at least one problem when checking docstrings of public objects.\n" + error_message raise ValueError(error_message) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") args = parser.parse_args() check_docstrings(overwrite=args.fix_and_overwrite)
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/extract_warnings.py
import argparse import json import os import time import zipfile from get_ci_error_statistics import download_artifact, get_artifacts_links from transformers import logging logger = logging.get_logger(__name__) def extract_warnings_from_single_artifact(artifact_path, targets): """Extract warnings from a downloaded artifact (in .zip format)""" selected_warnings = set() buffer = [] def parse_line(fp): for line in fp: if isinstance(line, bytes): line = line.decode("UTF-8") if "warnings summary (final)" in line: continue # This means we are outside the body of a warning elif not line.startswith(" "): # process a single warning and move it to `selected_warnings`. if len(buffer) > 0: warning = "\n".join(buffer) # Only keep the warnings specified in `targets` if any(f": {x}: " in warning for x in targets): selected_warnings.add(warning) buffer.clear() continue else: line = line.strip() buffer.append(line) if from_gh: for filename in os.listdir(artifact_path): file_path = os.path.join(artifact_path, filename) if not os.path.isdir(file_path): # read the file if filename != "warnings.txt": continue with open(file_path) as fp: parse_line(fp) else: try: with zipfile.ZipFile(artifact_path) as z: for filename in z.namelist(): if not os.path.isdir(filename): # read the file if filename != "warnings.txt": continue with z.open(filename) as fp: parse_line(fp) except Exception: logger.warning( f"{artifact_path} is either an invalid zip file or something else wrong. This file is skipped." ) return selected_warnings def extract_warnings(artifact_dir, targets): """Extract warnings from all artifact files""" selected_warnings = set() paths = [os.path.join(artifact_dir, p) for p in os.listdir(artifact_dir) if (p.endswith(".zip") or from_gh)] for p in paths: selected_warnings.update(extract_warnings_from_single_artifact(p, targets)) return selected_warnings if __name__ == "__main__": def list_str(values): return values.split(",") parser = argparse.ArgumentParser() # Required parameters parser.add_argument("--workflow_run_id", type=str, required=True, help="A GitHub Actions workflow run id.") parser.add_argument( "--output_dir", type=str, required=True, help="Where to store the downloaded artifacts and other result files.", ) parser.add_argument("--token", default=None, type=str, help="A token that has actions:read permission.") # optional parameters parser.add_argument( "--targets", default="DeprecationWarning,UserWarning,FutureWarning", type=list_str, help="Comma-separated list of target warning(s) which we want to extract.", ) parser.add_argument( "--from_gh", action="store_true", help="If running from a GitHub action workflow and collecting warnings from its artifacts.", ) args = parser.parse_args() from_gh = args.from_gh if from_gh: # The artifacts have to be downloaded using `actions/download-artifact@v4` pass else: os.makedirs(args.output_dir, exist_ok=True) # get download links artifacts = get_artifacts_links(args.workflow_run_id, token=args.token) with open(os.path.join(args.output_dir, "artifacts.json"), "w", encoding="UTF-8") as fp: json.dump(artifacts, fp, ensure_ascii=False, indent=4) # download artifacts for idx, (name, url) in enumerate(artifacts.items()): print(name) print(url) print("=" * 80) download_artifact(name, url, args.output_dir, args.token) # Be gentle to GitHub time.sleep(1) # extract warnings from artifacts selected_warnings = extract_warnings(args.output_dir, args.targets) selected_warnings = sorted(selected_warnings) with open(os.path.join(args.output_dir, "selected_warnings.json"), 
"w", encoding="UTF-8") as fp: json.dump(selected_warnings, fp, ensure_ascii=False, indent=4)
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/past_ci_versions.py
import argparse import os past_versions_testing = { "pytorch": { "1.13": { "torch": "1.13.1", "torchvision": "0.14.1", "torchaudio": "0.13.1", "python": 3.9, "cuda": "cu116", "install": ( "python3 -m pip install --no-cache-dir -U torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1" " --extra-index-url https://download.pytorch.org/whl/cu116" ), "base_image": "nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04", }, "1.12": { "torch": "1.12.1", "torchvision": "0.13.1", "torchaudio": "0.12.1", "python": 3.9, "cuda": "cu113", "install": ( "python3 -m pip install --no-cache-dir -U torch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1" " --extra-index-url https://download.pytorch.org/whl/cu113" ), "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "1.11": { "torch": "1.11.0", "torchvision": "0.12.0", "torchaudio": "0.11.0", "python": 3.9, "cuda": "cu113", "install": ( "python3 -m pip install --no-cache-dir -U torch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0" " --extra-index-url https://download.pytorch.org/whl/cu113" ), "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "1.10": { "torch": "1.10.2", "torchvision": "0.11.3", "torchaudio": "0.10.2", "python": 3.9, "cuda": "cu113", "install": ( "python3 -m pip install --no-cache-dir -U torch==1.10.2 torchvision==0.11.3 torchaudio==0.10.2" " --extra-index-url https://download.pytorch.org/whl/cu113" ), "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, # torchaudio < 0.10 has no CUDA-enabled binary distributions "1.9": { "torch": "1.9.1", "torchvision": "0.10.1", "torchaudio": "0.9.1", "python": 3.9, "cuda": "cu111", "install": ( "python3 -m pip install --no-cache-dir -U torch==1.9.1 torchvision==0.10.1 torchaudio==0.9.1" " --extra-index-url https://download.pytorch.org/whl/cu111" ), "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, }, "tensorflow": { "2.11": { "tensorflow": "2.11.1", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.11.1", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.10": { "tensorflow": "2.10.1", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.10.1", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.9": { "tensorflow": "2.9.3", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.9.3", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.8": { "tensorflow": "2.8.2", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.8.2", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.7": { "tensorflow": "2.7.3", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.7.3", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.6": { "tensorflow": "2.6.5", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.6.5", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.5": { "tensorflow": "2.5.3", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.5.3", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, }, } if __name__ == "__main__": parser = argparse.ArgumentParser("Choose the framework and version to install") parser.add_argument( "--framework", help="The framework to install. 
Should be `pytorch` or `tensorflow`", type=str, required=True ) parser.add_argument("--version", help="The version of the framework to install.", type=str, required=True) args = parser.parse_args() info = past_versions_testing[args.framework][args.version] os.system(f'echo "export INSTALL_CMD=\'{info["install"]}\'" >> ~/.profile') print(f'echo "export INSTALL_CMD=\'{info["install"]}\'" >> ~/.profile') cuda = "" if args.framework == "pytorch": cuda = info["cuda"] os.system(f"echo \"export CUDA='{cuda}'\" >> ~/.profile") print(f"echo \"export CUDA='{cuda}'\" >> ~/.profile")
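# --- Hedged usage sketch (illustrative addition, not part of the original script) ---
# Typical invocation when preparing a past-CI image (the framework must be a key of
# `past_versions_testing`, i.e. "pytorch" or "tensorflow"):
#   python utils/past_ci_versions.py --framework pytorch --version 1.13
# This appends INSTALL_CMD (and CUDA for the pytorch framework) to ~/.profile so that
# later build steps can pick them up, e.g. with `eval "$INSTALL_CMD"`.
def _demo_version_lookup():
    info = past_versions_testing["pytorch"]["1.13"]
    assert info["torch"] == "1.13.1" and info["cuda"] == "cu116"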
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/check_copies.py
# coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utility that checks whether the copies defined in the library match the original or not. This includes: - All code commented with `# Copied from` comments, - The list of models in the main README.md matches the ones in the localized READMEs, - Files that are registered as full copies of one another in the `FULL_COPIES` constant of this script. This also checks the list of models in the README is complete (has all models) and add a line to complete if there is a model missing. Use from the root of the repo with: ```bash python utils/check_copies.py ``` for a check that will error in case of inconsistencies (used by `make repo-consistency`) or ```bash python utils/check_copies.py --fix_and_overwrite ``` for a check that will fix all inconsistencies automatically (used by `make fix-copies`). """ import argparse import glob import os import re import subprocess from collections import OrderedDict from typing import List, Optional, Tuple, Union from transformers.utils import direct_transformers_import # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_copies.py TRANSFORMERS_PATH = "src/transformers" MODEL_TEST_PATH = "tests/models" PATH_TO_DOCS = "docs/source/en" REPO_PATH = "." # Mapping for files that are full copies of others (keys are copies, values the file to keep them up to data with) FULL_COPIES = { "examples/tensorflow/question-answering/utils_qa.py": "examples/pytorch/question-answering/utils_qa.py", "examples/flax/question-answering/utils_qa.py": "examples/pytorch/question-answering/utils_qa.py", } LOCALIZED_READMES = { # If the introduction or the conclusion of the list change, the prompts may need to be updated. "README.md": { "start_prompt": "๐Ÿค— Transformers currently provides the following architectures", "end_prompt": "1. Want to contribute a new model?", "format_model_list": ( "**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by" " {paper_authors}.{supplements}" ), }, "README_zh-hans.md": { "start_prompt": "๐Ÿค— Transformers ็›ฎๅ‰ๆ”ฏๆŒๅฆ‚ไธ‹็š„ๆžถๆž„", "end_prompt": "1. ๆƒณ่ฆ่ดก็Œฎๆ–ฐ็š„ๆจกๅž‹๏ผŸ", "format_model_list": ( "**[{title}]({model_link})** (ๆฅ่‡ช {paper_affiliations}) ไผด้š่ฎบๆ–‡ {paper_title_link} ็”ฑ {paper_authors}" " ๅ‘ๅธƒใ€‚{supplements}" ), }, "README_zh-hant.md": { "start_prompt": "๐Ÿค— Transformers ็›ฎๅ‰ๆ”ฏๆดไปฅไธ‹็š„ๆžถๆง‹", "end_prompt": "1. ๆƒณ่ฆ่ฒข็ปๆ–ฐ็š„ๆจกๅž‹๏ผŸ", "format_model_list": ( "**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by" " {paper_authors}.{supplements}" ), }, "README_ko.md": { "start_prompt": "๐Ÿค— Transformers๋Š” ๋‹ค์Œ ๋ชจ๋ธ๋“ค์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค", "end_prompt": "1. 
์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์˜ฌ๋ฆฌ๊ณ  ์‹ถ๋‚˜์š”?", "format_model_list": ( "**[{title}]({model_link})** ({paper_affiliations} ์—์„œ ์ œ๊ณต)์€ {paper_authors}.{supplements}์˜" " {paper_title_link}๋…ผ๋ฌธ๊ณผ ํ•จ๊ป˜ ๋ฐœํ‘œํ–ˆ์Šต๋‹ˆ๋‹ค." ), }, "README_es.md": { "start_prompt": "๐Ÿค— Transformers actualmente proporciona las siguientes arquitecturas", "end_prompt": "1. ยฟQuieres aportar un nuevo modelo?", "format_model_list": ( "**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by" " {paper_authors}.{supplements}" ), }, "README_ja.md": { "start_prompt": "๐Ÿค—Transformersใฏ็พๅœจใ€ไปฅไธ‹ใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๆไพ›ใ—ใฆใ„ใพใ™", "end_prompt": "1. ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใ‚’ๆŠ•็จฟใ—ใŸใ„ใงใ™ใ‹๏ผŸ", "format_model_list": ( "**[{title}]({model_link})** ({paper_affiliations} ใ‹ใ‚‰) {paper_authors}.{supplements} ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡" " {paper_title_link}" ), }, "README_hd.md": { "start_prompt": "๐Ÿค— เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เคตเคฐเฅเคคเคฎเคพเคจ เคฎเฅ‡เค‚ เคจเคฟเคฎเฅเคจเคฒเคฟเค–เคฟเคค เค†เคฐเฅเค•เคฟเคŸเฅ‡เค•เฅเคšเคฐ เค•เคพ เคธเคฎเคฐเฅเคฅเคจ เค•เคฐเคคเฅ‡ เคนเฅˆเค‚", "end_prompt": "1. เคเค• เคจเค เคฎเฅ‰เคกเคฒ เคฎเฅ‡เค‚ เคฏเฅ‹เค—เคฆเคพเคจ เคฆเฅ‡เคจเคพ เคšเคพเคนเคคเฅ‡ เคนเฅˆเค‚?", "format_model_list": ( "**[{title}]({model_link})** ({paper_affiliations} เคธเฅ‡) {paper_authors}.{supplements} เคฆเฅเคตเคพเคฐเคพ" "เค…เคจเฅเคธเค‚เคงเคพเคจ เคชเคคเฅเคฐ {paper_title_link} เค•เฅ‡ เคธเคพเคฅ เคœเคพเคฐเฅ€ เค•เคฟเคฏเคพ เค—เคฏเคพ" ), }, "README_ru.md": { "start_prompt": "๐Ÿค— ะ’ ะฝะฐัั‚ะพัั‰ะตะต ะฒั€ะตะผั Transformers ะฟั€ะตะดะพัั‚ะฐะฒะปัะตั‚ ัะปะตะดัƒัŽั‰ะธะต ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ั‹", "end_prompt": "1. ะฅะพั‚ะธั‚ะต ะฒะฝะตัั‚ะธ ะฝะพะฒัƒัŽ ะผะพะดะตะปัŒ?", "format_model_list": ( "**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by" " {paper_authors}.{supplements}" ), }, "README_pt-br.md": { "start_prompt": "๐Ÿค— Transformers atualmente fornece as seguintes arquiteturas", "end_prompt": "1. Quer contribuir com um novo modelo?", "format_model_list": ( "**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by" " {paper_authors}.{supplements}" ), }, "README_te.md": { "start_prompt": "๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐชเฑเฐฐเฐธเฑเฐคเฑเฐคเฐ‚ เฐ•เฐฟเฐ‚เฐฆเฐฟ เฐ†เฐฐเฑเฐ•เฐฟเฐŸเฑ†เฐ•เฑเฐšเฐฐเฑโ€Œเฐฒเฐจเฑ เฐ…เฐ‚เฐฆเฐœเฑ‡เฐธเฑเฐคเฑเฐจเฑเฐจเฐพเฐฏเฐฟ", "end_prompt": "1. เฐ•เฑŠเฐคเฑเฐค เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐ…เฐ‚เฐฆเฐฟเฐ‚เฐšเฐพเฐฒเฐจเฑเฐ•เฑเฐ‚เฐŸเฑเฐจเฑเฐจเฐพเฐฐเฐพ?", "format_model_list": ( "**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by" " {paper_authors}.{supplements}" ), }, "README_fr.md": { "start_prompt": "๐Ÿค— Transformers fournit actuellement les architectures suivantes", "end_prompt": "1. Vous souhaitez contribuer avec un nouveau modรจle ?", "format_model_list": ( "**[{title}]({model_link})** (de {paper_affiliations}) publiรฉ dans l'article {paper_title_link} par" "{paper_authors}.{supplements}" ), }, "README_de.md": { "start_prompt": "๐Ÿค— Transformers bietet derzeit die folgenden Architekturen an", "end_prompt": "1. 
Mรถchten Sie ein neues Modell beitragen?", "format_model_list": ( "**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by" " {paper_authors}.{supplements}" ), }, "README_vi.md": { "start_prompt": "๐Ÿค— Transformers hiแป‡n ฤ‘ang cung cแบฅp cรกc kiแบฟn trรบc sau ฤ‘รขy", "end_prompt": "1. Muแป‘n ฤ‘รณng gรณp mแป™t mรด hรฌnh mแป›i?", "format_model_list": ( "**[{title}]({model_link})** (tแปซ {paper_affiliations}) ฤ‘ฦฐแปฃc phรกt hร nh vแป›i bร i bรกo {paper_title_link} by" " {paper_authors}.{supplements}" ), }, } # This is to make sure the transformers module imported is the one in the repo. transformers_module = direct_transformers_import(TRANSFORMERS_PATH) def _is_definition_header_ending_line(line: str) -> bool: # Helper function. Returns `True` if `line` is the end parenthesis of a class/function definition return re.search(r"^\s*\)(\s*->.*:|:)\s*$", line) is not None def _should_continue(line: str, indent: str) -> bool: # Helper function. Returns `True` if `line` is empty, starts with the `indent` or is the end parenthesis of a # class/function definition return line.startswith(indent) or len(line.strip()) == 0 or _is_definition_header_ending_line(line) def _sanity_check_splits(splits_1, splits_2, is_class): """Check the two (inner) block structures of the corresponding code block given by `split_code_into_blocks` match. For the case of `class`, they must be of one of the following 3 cases: - a single block without name: class foo: a = 1 - a consecutive sequence of (1 or more) blocks with name class foo: def f(x): return x - a block without name, followed by a consecutive sequence of (1 or more) blocks with name class foo: a = 1 def f(x): return x def g(x): return None The 2 code snippets that give `splits_1` and `splits_2` have to be in the same case to pass this check, but the number of blocks with name in the consecutive sequence is not taken into account. For the case of `function or method`, we don't require it to be in one of the above 3 cases. However, the structure of`splits_1` and `splits_2` have to match exactly. In particular, the number of blocks with name in a consecutive sequence is taken into account. """ block_names_1 = [] block_names_2 = [] for block in splits_1[1:]: if block[0].startswith("_block_without_name_"): block_names_1.append("block_without_name") elif not block[0].startswith("_empty_block_") and ( not is_class or len(block_names_1) == 0 or block_names_1[-1].startswith("block_without_name") ): block_names_1.append("block_with_name") for block in splits_2[1:]: if block[0].startswith("_block_without_name_"): block_names_2.append("block_without_name") elif not block[0].startswith("_empty_block_") and ( not is_class or len(block_names_2) == 0 or block_names_2[-1].startswith("block_without_name") ): block_names_2.append("block_with_name") if is_class: if block_names_1 not in [ ["block_without_name"], ["block_with_name"], ["block_without_name", "block_with_name"], ]: raise ValueError( "For a class, it must have a specific structure. See the docstring of `_sanity_check_splits` in the file `utils/check_copies.py`" ) if block_names_1 != block_names_2: raise ValueError("The structures in the 2 code blocks differ.") def find_block_end(lines: List[str], start_index: int, indent: int) -> int: """ Find the end of the class/func block starting at `start_index` in a source code (defined by `lines`). Args: lines (`List[str]`): The source code, represented by a list of lines. 
start_index (`int`): The starting index of the target class/func block. indent (`int`): The indent of the class/func body. Returns: `int`: The index of the block's ending line plus by 1 (i.e. exclusive). """ indent = " " * indent # enter the block body line_index = start_index + 1 while line_index < len(lines) and _should_continue(lines[line_index], indent): line_index += 1 # Clean up empty lines at the end (if any). while len(lines[line_index - 1]) <= 1: line_index -= 1 return line_index def split_code_into_blocks( lines: List[str], start_index: int, end_index: int, indent: int, backtrace: bool = False ) -> List[Tuple[str, int, int]]: """ Split the class/func block starting at `start_index` in a source code (defined by `lines`) into *inner blocks*. The block's header is included as the first element. The contiguous regions (without empty lines) that are not inside any inner block are included as blocks. The contiguous regions of empty lines that are not inside any inner block are also included as (dummy) blocks. Args: lines (`List[str]`): The source code, represented by a list of lines. start_index (`int`): The starting index of the target class/func block. end_index (`int`): The ending index of the target class/func block. indent (`int`): The indent of the class/func body. backtrace (`bool`, *optional*, defaults to `False`): Whether or not to include the lines before the inner class/func block's header (e.g. comments, decorators, etc.) until an empty line is encountered. Returns: `List[Tuple[str, int, int]]`: A list of elements with the form `(block_name, start_index, end_index)`. """ splits = [] # `indent - 4` is the indent level of the target class/func header try: target_block_name = re.search( rf"^{' ' * (indent - 4)}((class|def)\s+\S+)(\(|\:)", lines[start_index] ).groups()[0] except Exception: start_context = min(start_index - 10, 0) end_context = min(end_index + 10, len(lines)) raise ValueError( f"Tried to split a class or function. It did not work. Error comes from line {start_index}: \n```\n" + "".join(lines[start_context:end_context]) + "```\n" ) # from now on, the `block` means inner blocks unless explicitly specified indent_str = " " * indent block_without_name_idx = 0 empty_block_idx = 0 # Find the lines for the definition header index = start_index if "(" in lines[start_index] and "):" not in lines[start_index] in lines[start_index]: while index < end_index: if _is_definition_header_ending_line(lines[index]): break index += 1 # the first line outside the definition header index += 1 splits.append((target_block_name, start_index, index)) block_start_index, prev_block_end_index = index, index while index < end_index: # if found, it will be an inner block block_found = re.search(rf"^{indent_str}((class|def)\s+\S+)(\(|\:)", lines[index]) if block_found: name = block_found.groups()[0] block_end_index = find_block_end(lines, index, indent + 4) # backtrace to include the lines before the found block's definition header (e.g. comments, decorators, # etc.) until an empty line is encountered. 
block_start_index = index if index > prev_block_end_index and backtrace: idx = index - 1 for idx in range(index - 1, prev_block_end_index - 2, -1): if not (len(lines[idx].strip()) > 0 and lines[idx].startswith(indent_str)): break idx += 1 if idx < index: block_start_index = idx # between the current found block and the previous found block if block_start_index > prev_block_end_index: # give it a dummy name if len("".join(lines[prev_block_end_index:block_start_index]).strip()) == 0: prev_block_name = f"_empty_block_{empty_block_idx}" empty_block_idx += 1 else: prev_block_name = f"_block_without_name_{block_without_name_idx}" block_without_name_idx += 1 # Add it as a block splits.append((prev_block_name, prev_block_end_index, block_start_index)) # Add the current found block splits.append((name, block_start_index, block_end_index)) prev_block_end_index = block_end_index index = block_end_index - 1 index += 1 if index > prev_block_end_index: if len("".join(lines[prev_block_end_index:index]).strip()) == 0: prev_block_name = f"_empty_block_{empty_block_idx}" else: prev_block_name = f"_block_without_name_{block_without_name_idx}" splits.append((prev_block_name, prev_block_end_index, index)) return splits def find_code_in_transformers( object_name: str, base_path: str = None, return_indices: bool = False ) -> Union[str, Tuple[List[str], int, int]]: """ Find and return the source code of an object. Args: object_name (`str`): The name of the object we want the source code of. base_path (`str`, *optional*): The path to the base folder where files are checked. If not set, it will be set to `TRANSFORMERS_PATH`. return_indices(`bool`, *optional*, defaults to `False`): If `False`, will only return the code (as a string), otherwise it will also return the whole lines of the file where the object specified by `object_name` is defined, together the start/end indices of the block in the file that defines the object. Returns: `Union[str, Tuple[List[str], int, int]]`: If `return_indices=False`, only the source code of the object will be returned. Otherwise, it also returns the whole lines of the file where the object specified by `object_name` is defined, together the start/end indices of the block in the file that defines the object. """ parts = object_name.split(".") i = 0 # We can't set this as the default value in the argument, otherwise `CopyCheckTester` will fail, as it uses a # patched temp directory. if base_path is None: base_path = TRANSFORMERS_PATH # Detail: the `Copied from` statement is originally designed to work with the last part of `TRANSFORMERS_PATH`, # (which is `transformers`). The same should be applied for `MODEL_TEST_PATH`. However, its last part is `models` # (to only check and search in it) which is a bit confusing. So we keep the copied statement staring with # `tests.models.` and change it to `tests` here. if base_path == MODEL_TEST_PATH: base_path = "tests" # First let's find the module where our object lives. module = parts[i] while i < len(parts) and not os.path.isfile(os.path.join(base_path, f"{module}.py")): i += 1 if i < len(parts): module = os.path.join(module, parts[i]) if i >= len(parts): raise ValueError( f"`object_name` should begin with the name of a module of transformers but got {object_name}." ) with open(os.path.join(base_path, f"{module}.py"), "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() # Now let's find the class / func in the code! 
indent = "" line_index = 0 for name in parts[i + 1 :]: while ( line_index < len(lines) and re.search(rf"^{indent}(class|def)\s+{name}(\(|\:)", lines[line_index]) is None ): line_index += 1 # find the target specified in the current level in `parts` -> increase `indent` so we can search the next indent += " " # the index of the first line in the (currently found) block *body* line_index += 1 if line_index >= len(lines): raise ValueError(f" {object_name} does not match any function or class in {module}.") # `indent` is already one level deeper than the (found) class/func block's definition header # We found the beginning of the class / func, now let's find the end (when the indent diminishes). # `start_index` is the index of the class/func block's definition header start_index = line_index - 1 end_index = find_block_end(lines, start_index, len(indent)) code = "".join(lines[start_index:end_index]) return (code, (lines, start_index, end_index)) if return_indices else code def replace_code(code: str, replace_pattern: str) -> str: """Replace `code` by a pattern of the form `with X1->X2,Y1->Y2,Z1->Z2`. Args: code (`str`): The code to be modified. replace_pattern (`str`): The pattern used to modify `code`. Returns: `str`: The modified code. """ if len(replace_pattern) > 0: patterns = replace_pattern.replace("with", "").split(",") patterns = [_re_replace_pattern.search(p) for p in patterns] for pattern in patterns: if pattern is None: continue obj1, obj2, option = pattern.groups() code = re.sub(obj1, obj2, code) if option.strip() == "all-casing": code = re.sub(obj1.lower(), obj2.lower(), code) code = re.sub(obj1.upper(), obj2.upper(), code) return code def find_code_and_splits(object_name: str, base_path: str, buffer: dict = None): """Find the code of an object (specified by `object_name`) and split it into blocks. Args: object_name (`str`): The name of the object, e.g. `transformers.models.bert.modeling_bert.BertAttention` or `tests.models.llama.test_modeling_llama.LlamaModelTest.test_config`. base_path (`str`): The path to the base directory within which the search will be performed. It could be either `TRANSFORMERS_PATH` or `MODEL_TEST_PATH`. buffer (`dict`, *optional*): The buffer used to store the previous results in order to speed up the process. Returns: lines (`List[str]`): The lines of the whole file where the object is defined. code (`str`): The object's code. code_splits (`List[Tuple[str, int, int]]`): `code` splitted into blocks. See `split_code_into_blocks`. """ if buffer is None: buffer = {} if (object_name, base_path) in buffer: lines, code, code_splits = buffer[(object_name, base_path)] else: code, (lines, target_start_index, target_end_index) = find_code_in_transformers( object_name, base_path=base_path, return_indices=True ) indent = get_indent(code) # Split the code into blocks # `indent` is the indent of the class/func definition header, but `code_splits` expects the indent level of the # block body. 
code_splits = split_code_into_blocks( lines, target_start_index, target_end_index, len(indent) + 4, backtrace=True ) buffer[(object_name, base_path)] = lines, code, code_splits return lines, code, code_splits _re_copy_warning = re.compile(r"^(\s*)#\s*Copied from\s+transformers\.(\S+\.\S+)\s*($|\S.*$)") _re_copy_warning_for_test_file = re.compile(r"^(\s*)#\s*Copied from\s+tests\.(\S+\.\S+)\s*($|\S.*$)") _re_replace_pattern = re.compile(r"^\s*(\S+)->(\S+)(\s+.*|$)") _re_fill_pattern = re.compile(r"<FILL\s+[^>]*>") def get_indent(code: str) -> str: """ Find the indent in the first non empty line in a code sample. Args: code (`str`): The code to inspect. Returns: `str`: The indent looked at (as string). """ lines = code.split("\n") idx = 0 while idx < len(lines) and len(lines[idx]) == 0: idx += 1 if idx < len(lines): return re.search(r"^(\s*)\S", lines[idx]).groups()[0] return "" def run_ruff(code): command = ["ruff", "format", "-", "--config", "pyproject.toml", "--silent"] process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) stdout, _ = process.communicate(input=code.encode()) return stdout.decode() def stylify(code: str) -> str: """ Applies the ruff part of our `make style` command to some code. This formats the code using `ruff format`. As `ruff` does not provide a python api this cannot be done on the fly. Args: code (`str`): The code to format. Returns: `str`: The formatted code. """ has_indent = len(get_indent(code)) > 0 if has_indent: code = f"class Bla:\n{code}" formatted_code = run_ruff(code) return formatted_code[len("class Bla:\n") :] if has_indent else formatted_code def check_codes_match(observed_code: str, theoretical_code: str) -> Optional[int]: """ Checks if two version of a code match with the exception of the class/function name. Args: observed_code (`str`): The code found. theoretical_code (`str`): The code to match. Returns: `Optional[int]`: The index of the first line where there is a difference (if any) and `None` if the codes match. """ observed_code_header = observed_code.split("\n")[0] theoretical_code_header = theoretical_code.split("\n")[0] # Catch the function/class name: it is expected that those do not match. _re_class_match = re.compile(r"class\s+([^\(:]+)(?:\(|:)") _re_func_match = re.compile(r"def\s+([^\(]+)\(") for re_pattern in [_re_class_match, _re_func_match]: if re_pattern.match(observed_code_header) is not None: try: observed_obj_name = re_pattern.search(observed_code_header).groups()[0] except Exception: raise ValueError( "Tried to split a class or function. It did not work. Error comes from: \n```\n" + observed_code_header + "\n```\n" ) try: theoretical_name = re_pattern.search(theoretical_code_header).groups()[0] except Exception: raise ValueError( "Tried to split a class or function. It did not work. Error comes from: \n```\n" + theoretical_code_header + "\n```\n" ) theoretical_code_header = theoretical_code_header.replace(theoretical_name, observed_obj_name) # Find the first diff. Line 0 is special since we need to compare with the function/class names ignored. 
diff_index = 0 if theoretical_code_header != observed_code_header: return 0 diff_index = 1 for observed_line, theoretical_line in zip(observed_code.split("\n")[1:], theoretical_code.split("\n")[1:]): if observed_line != theoretical_line: return diff_index diff_index += 1 def is_copy_consistent(filename: str, overwrite: bool = False, buffer: dict = None) -> Optional[List[Tuple[str, int]]]: """ Check if the code commented as a copy in a file matches the original. Args: filename (`str`): The name of the file to check. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the copies when they don't match. buffer (`dict`, *optional*): The buffer used to store the previous results in order to speed up the process. Returns: `Optional[List[Tuple[str, int]]]`: If `overwrite=False`, returns the list of differences as tuples `(str, int)` with the name of the object having a diff and the line number where theere is the first diff. """ base_path = TRANSFORMERS_PATH if not filename.startswith("tests") else MODEL_TEST_PATH with open(filename, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() diffs = [] line_index = 0 # Not a for loop cause `lines` is going to change (if `overwrite=True`). while line_index < len(lines): search_re = _re_copy_warning if filename.startswith("tests"): search_re = _re_copy_warning_for_test_file search = search_re.search(lines[line_index]) if search is None: line_index += 1 continue # There is some copied code here, let's retrieve the original. indent, object_name, replace_pattern = search.groups() # Find the file lines, the object's code, and its blocks target_lines, theoretical_code, theoretical_code_splits = find_code_and_splits( object_name, base_path, buffer=buffer ) # code replaced by the patterns theoretical_code_blocks = OrderedDict() for name, start, end in theoretical_code_splits: name = replace_code(name, replace_pattern) code = "".join(target_lines[start:end]) code = replace_code(code, replace_pattern) theoretical_code_blocks[name] = code theoretical_indent = get_indent(theoretical_code) # `start_index` is the index of the first line (the definition header) after `# Copied from`. # (`indent != theoretical_indent` doesn't seem to occur so far, not sure what this case is for.) start_index = line_index + 1 if indent == theoretical_indent else line_index # enter the block body line_index = start_index + 1 subcode = "\n".join(theoretical_code.split("\n")[1:]) indent = get_indent(subcode) # Loop to check the observed code, stop when indentation diminishes or if we see a End copy comment. # We can't call `find_block_end` directly as there is sth. special `# End copy"` here. should_continue = True while line_index < len(lines) and should_continue: line_index += 1 if line_index >= len(lines): break line = lines[line_index] # There is a special pattern `# End copy` to stop early. It's not documented cause it shouldn't really be # used. should_continue = _should_continue(line, indent) and re.search(f"^{indent}# End copy", line) is None # `line_index` is outside the block # Clean up empty lines at the end (if any). 
while len(lines[line_index - 1]) <= 1: line_index -= 1 # Split the observed code into blocks observed_code_splits = split_code_into_blocks(lines, start_index, line_index, len(indent), backtrace=True) is_class = lines[start_index].startswith(f"{' ' * (len(indent) - 4)}class ") # sanity check _sanity_check_splits(theoretical_code_splits, observed_code_splits, is_class=is_class) # observed code in a structured way (a dict mapping block names to blocks' code) observed_code_blocks = OrderedDict() for name, start, end in observed_code_splits: code = "".join(lines[start:end]) observed_code_blocks[name] = code # Below, we change some names in `theoretical_code_blocks` and `observed_code_blocks`. These mappings map the # original names to the modified names: this is used to restore the original order of the code blocks. name_mappings_1 = {k: k for k in theoretical_code_blocks.keys()} name_mappings_2 = {k: k for k in observed_code_blocks.keys()} # Update code blocks' name and content: # If `"# Ignore copy"` is found in a block of the observed code: # 1. if it's a block only in the observed code --> add it to the theoretical code. # 2. if it's also in the theoretical code () --> put its content (body) to the corresponding block under the # same name in the theoretical code. # In both cases, we change the name to have a prefix `_ignored_` so we know if we can discard them during the # comparison. ignored_existing_block_index = 0 ignored_new_block_index = 0 for name in list(observed_code_blocks.keys()): code = observed_code_blocks[name] if "# Ignore copy" in code: if name in theoretical_code_blocks: # in the target --> just copy the content del theoretical_code_blocks[name] theoretical_code_blocks[f"_ignored_existing_block_{ignored_existing_block_index}"] = code name_mappings_1[name] = f"_ignored_existing_block_{ignored_existing_block_index}" del observed_code_blocks[name] observed_code_blocks[f"_ignored_existing_block_{ignored_existing_block_index}"] = code name_mappings_2[name] = f"_ignored_existing_block_{ignored_existing_block_index}" ignored_existing_block_index += 1 else: # not in the target --> add it theoretical_code_blocks[f"_ignored_new_block_{ignored_new_block_index}"] = code name_mappings_1[ f"_ignored_new_block_{ignored_new_block_index}" ] = f"_ignored_new_block_{ignored_new_block_index}" del observed_code_blocks[name] observed_code_blocks[f"_ignored_new_block_{ignored_new_block_index}"] = code name_mappings_2[name] = f"_ignored_new_block_{ignored_new_block_index}" ignored_new_block_index += 1 # Respect the original block order: # 1. in `theoretical_code_blocks`: the new blocks will follow the existing ones # 2. in `observed_code_blocks`: the original order are kept with names modified potentially. This is necessary # to compute the correct `diff_index` if `overwrite=True` and there is a diff. theoretical_code_blocks = { name_mappings_1[orig_name]: theoretical_code_blocks[name_mappings_1[orig_name]] for orig_name in name_mappings_1 } observed_code_blocks = { name_mappings_2[orig_name]: observed_code_blocks[name_mappings_2[orig_name]] for orig_name in name_mappings_2 } # Ignore the blocks specified to be ignored. 
This is the version used to check if there is a mismatch theoretical_code_blocks_clean = { k: v for k, v in theoretical_code_blocks.items() if not (k.startswith(("_ignored_existing_block_", "_ignored_new_block_"))) } theoretical_code = "".join(list(theoretical_code_blocks_clean.values())) # stylify `theoretical_code` before compare (this is needed only when `replace_pattern` is not empty) if replace_pattern: theoretical_code = stylify(theoretical_code) # Remove `\n\n` in `theoretical_code` before compare (so no empty line) while "\n\n" in theoretical_code: theoretical_code = theoretical_code.replace("\n\n", "\n") # Compute `observed_code` where we don't include any empty line + keep track the line index between the # original/processed `observed_code` so we can have the correct `diff_index`. idx_to_orig_idx_mapping_for_observed_code_lines = {} idx = -1 orig_idx = -1 observed_code = "" for name, code in observed_code_blocks.items(): if code.endswith("\n"): code = code[:-1] for code_line in code.split("\n"): orig_idx += 1 if code_line.strip() and not name.startswith(("_ignored_existing_block_", "_ignored_new_block_")): idx += 1 observed_code += code_line + "\n" idx_to_orig_idx_mapping_for_observed_code_lines[idx] = orig_idx # Test for a diff and act accordingly. diff_index = check_codes_match(observed_code, theoretical_code) if diff_index is not None: # switch to the index in the original `observed_code` (i.e. before removing empty lines) diff_index = idx_to_orig_idx_mapping_for_observed_code_lines[diff_index] diffs.append([object_name, diff_index + start_index + 1]) if overwrite: # `theoretical_code_to_write` is a single string but may have several lines. theoretical_code_to_write = stylify("".join(list(theoretical_code_blocks.values()))) lines = lines[:start_index] + [theoretical_code_to_write] + lines[line_index:] # Here we treat it as a single entry in `lines`. line_index = start_index + 1 if overwrite and len(diffs) > 0: # Warn the user a file has been modified. print(f"Detected changes, rewriting {filename}.") with open(filename, "w", encoding="utf-8", newline="\n") as f: f.writelines(lines) return diffs def check_copies(overwrite: bool = False, file: str = None): """ Check every file is copy-consistent with the original. Also check the model list in the main README and other READMEs are consistent. Args: overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the copies when they don't match. file (`bool`, *optional*): The path to a specific file to check and/or fix. """ buffer = {} if file is None: all_files = glob.glob(os.path.join(TRANSFORMERS_PATH, "**/*.py"), recursive=True) all_test_files = glob.glob(os.path.join(MODEL_TEST_PATH, "**/*.py"), recursive=True) all_files = list(all_files) + list(all_test_files) else: all_files = [file] diffs = [] for filename in all_files: new_diffs = is_copy_consistent(filename, overwrite, buffer) diffs += [f"- {filename}: copy does not match {d[0]} at line {d[1]}" for d in new_diffs] if not overwrite and len(diffs) > 0: diff = "\n".join(diffs) raise Exception( "Found the following copy inconsistencies:\n" + diff + "\nRun `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them." ) def check_full_copies(overwrite: bool = False): """ Check the files that are full copies of others (as indicated in `FULL_COPIES`) are copy-consistent. Args: overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the copies when they don't match. 
""" diffs = [] for target, source in FULL_COPIES.items(): with open(source, "r", encoding="utf-8") as f: source_code = f.read() with open(target, "r", encoding="utf-8") as f: target_code = f.read() if source_code != target_code: if overwrite: with open(target, "w", encoding="utf-8") as f: print(f"Replacing the content of {target} by the one of {source}.") f.write(source_code) else: diffs.append(f"- {target}: copy does not match {source}.") if not overwrite and len(diffs) > 0: diff = "\n".join(diffs) raise Exception( "Found the following copy inconsistencies:\n" + diff + "\nRun `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them." ) def get_model_list(filename: str, start_prompt: str, end_prompt: str) -> str: """ Extracts the model list from a README. Args: filename (`str`): The name of the README file to check. start_prompt (`str`): The string to look for that introduces the model list. end_prompt (`str`): The string to look for that ends the model list. Returns: `str`: The model list. """ with open(os.path.join(REPO_PATH, filename), "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() # Find the start of the list. start_index = 0 while not lines[start_index].startswith(start_prompt): start_index += 1 start_index += 1 result = [] current_line = "" end_index = start_index # Keep going until the end of the list. while not lines[end_index].startswith(end_prompt): if lines[end_index].startswith("1."): if len(current_line) > 1: result.append(current_line) current_line = lines[end_index] elif len(lines[end_index]) > 1: current_line = f"{current_line[:-1]} {lines[end_index].lstrip()}" end_index += 1 if len(current_line) > 1: result.append(current_line) return "".join(result) def convert_to_localized_md(model_list: str, localized_model_list: str, format_str: str) -> Tuple[bool, str]: """ Compare the model list from the main README to the one in a localized README. Args: model_list (`str`): The model list in the main README. localized_model_list (`str`): The model list in one of the localized README. format_str (`str`): The template for a model entry in the localized README (look at the `format_model_list` in the entries of `LOCALIZED_READMES` for examples). Returns: `Tuple[bool, str]`: A tuple where the first value indicates if the READMEs match or not, and the second value is the correct localized README. """ def _rep(match): title, model_link, paper_affiliations, paper_title_link, paper_authors, supplements = match.groups() return format_str.format( title=title, model_link=model_link, paper_affiliations=paper_affiliations, paper_title_link=paper_title_link, paper_authors=paper_authors, supplements=" " + supplements.strip() if len(supplements) != 0 else "", ) # This regex captures metadata from an English model description, including model title, model link, # affiliations of the paper, title of the paper, authors of the paper, and supplemental data (see DistilBERT for # example). _re_capture_meta = re.compile( r"\*\*\[([^\]]*)\]\(([^\)]*)\)\*\* \(from ([^)]*)\)[^\[]*([^\)]*\)).*?by (.*?[A-Za-z\*]{2,}?)\. (.*)$" ) # This regex is used to synchronize title link. _re_capture_title_link = re.compile(r"\*\*\[([^\]]*)\]\(([^\)]*)\)\*\*") # This regex is used to synchronize paper title and link. 
_re_capture_paper_link = re.compile(r" \[([^\]]*)\]\(([^\)]*)\)") if len(localized_model_list) == 0: localized_model_index = {} else: try: localized_model_index = { re.search(r"\*\*\[([^\]]*)", line).groups()[0]: line for line in localized_model_list.strip().split("\n") } except AttributeError: raise AttributeError("A model name in localized READMEs cannot be recognized.") model_keys = [re.search(r"\*\*\[([^\]]*)", line).groups()[0] for line in model_list.strip().split("\n")] # We exclude keys in localized README not in the main one. readmes_match = not any(k not in model_keys for k in localized_model_index) localized_model_index = {k: v for k, v in localized_model_index.items() if k in model_keys} for model in model_list.strip().split("\n"): title, model_link = _re_capture_title_link.search(model).groups() if title not in localized_model_index: readmes_match = False # Add an anchor white space behind a model description string for regex. # If metadata cannot be captured, the English version will be directly copied. localized_model_index[title] = _re_capture_meta.sub(_rep, model + " ") elif _re_fill_pattern.search(localized_model_index[title]) is not None: update = _re_capture_meta.sub(_rep, model + " ") if update != localized_model_index[title]: readmes_match = False localized_model_index[title] = update else: # Synchronize title link converted_model = _re_capture_title_link.sub( f"**[{title}]({model_link})**", localized_model_index[title], count=1 ) # Synchronize paper title and its link (if found) paper_title_link = _re_capture_paper_link.search(model) if paper_title_link is not None: paper_title, paper_link = paper_title_link.groups() converted_model = _re_capture_paper_link.sub( f" [{paper_title}]({paper_link})", converted_model, count=1 ) if converted_model != localized_model_index[title]: readmes_match = False localized_model_index[title] = converted_model sorted_index = sorted(localized_model_index.items(), key=lambda x: x[0].lower()) return readmes_match, "\n".join((x[1] for x in sorted_index)) + "\n" def _find_text_in_file(filename: str, start_prompt: str, end_prompt: str) -> Tuple[str, int, int, List[str]]: """ Find the text in a file between two prompts. Args: filename (`str`): The name of the file to look into. start_prompt (`str`): The string to look for that introduces the content looked for. end_prompt (`str`): The string to look for that ends the content looked for. Returns: Tuple[str, int, int, List[str]]: The content between the two prompts, the index of the start line in the original file, the index of the end line in the original file and the list of lines of that file. """ with open(filename, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() # Find the start prompt. 
start_index = 0 while not lines[start_index].startswith(start_prompt): start_index += 1 start_index += 1 end_index = start_index while not lines[end_index].startswith(end_prompt): end_index += 1 end_index -= 1 while len(lines[start_index]) <= 1: start_index += 1 while len(lines[end_index]) <= 1: end_index -= 1 end_index += 1 return "".join(lines[start_index:end_index]), start_index, end_index, lines # Map a model name with the name it has in the README for the check_readme check SPECIAL_MODEL_NAMES = { "Bert Generation": "BERT For Sequence Generation", "BigBird": "BigBird-RoBERTa", "Data2VecAudio": "Data2Vec", "Data2VecText": "Data2Vec", "Data2VecVision": "Data2Vec", "DonutSwin": "Swin Transformer", "Marian": "MarianMT", "MaskFormerSwin": "Swin Transformer", "OpenAI GPT-2": "GPT-2", "OpenAI GPT": "GPT", "Perceiver": "Perceiver IO", "SAM": "Segment Anything", "ViT": "Vision Transformer (ViT)", } # Update this list with the models that shouldn't be in the README. This only concerns modular models or those who do # not have an associated paper. MODELS_NOT_IN_README = [ "BertJapanese", "Encoder decoder", "FairSeq Machine-Translation", "HerBERT", "RetriBERT", "Speech Encoder decoder", "Speech2Text", "Speech2Text2", "TimmBackbone", "Vision Encoder decoder", "VisionTextDualEncoder", "CLIPVisionModel", "SiglipVisionModel", "ChineseCLIPVisionModel", ] # Template for new entries to add in the main README when we have missing models. README_TEMPLATE = ( "1. **[{model_name}](https://huggingface.co/docs/main/transformers/model_doc/{model_type})** (from " "<FILL INSTITUTION>) released with the paper [<FILL PAPER TITLE>](<FILL ARKIV LINK>) by <FILL AUTHORS>." ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--file", type=str, default=None, help="A specific file to check and/or fix") parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") args = parser.parse_args() check_copies(args.fix_and_overwrite, args.file) check_full_copies(args.fix_and_overwrite)
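

# Illustrative usage sketch: a minimal way `is_copy_consistent` could be driven from another
# script to report `# Copied from` drift without fixing it. The wrapper name and the example
# file path are hypothetical, and the call is assumed to run from the root of a `transformers`
# checkout so that the relative path resolves; only the call pattern and the shape of the
# returned diffs are taken from the code above.
def _example_report_copy_drift(filename: str = "src/transformers/models/bert/modeling_bert.py"):
    buffer = {}  # shared cache so the source of each `# Copied from` target is only parsed once
    diffs = is_copy_consistent(filename, overwrite=False, buffer=buffer)
    for object_name, line_number in diffs:
        print(f"{object_name} no longer matches its source (first diff at line {line_number})")
    return diffs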
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/update_tiny_models.py
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """A script running `create_dummy_models.py` with a pre-defined set of arguments. This file is intended to be used in a CI workflow file without the need of specifying arguments. It creates and uploads tiny models for all model classes (if their tiny versions are not on the Hub yet), as well as produces an updated version of `tests/utils/tiny_model_summary.json`. That updated file should be merged into the `main` branch of `transformers` so the pipeline testing will use the latest created/updated tiny models. """ import argparse import copy import json import multiprocessing import os import time from create_dummy_models import COMPOSITE_MODELS, create_tiny_models from huggingface_hub import ModelFilter, hf_api import transformers from transformers import AutoFeatureExtractor, AutoImageProcessor, AutoTokenizer from transformers.image_processing_utils import BaseImageProcessor def get_all_model_names(): model_names = set() # Each auto modeling files contains multiple mappings. Let's get them in a dynamic way. for module_name in ["modeling_auto", "modeling_tf_auto", "modeling_flax_auto"]: module = getattr(transformers.models.auto, module_name, None) if module is None: continue # all mappings in a single auto modeling file mapping_names = [ x for x in dir(module) if x.endswith("_MAPPING_NAMES") and (x.startswith("MODEL_") or x.startswith("TF_MODEL_") or x.startswith("FLAX_MODEL_")) ] for name in mapping_names: mapping = getattr(module, name) if mapping is not None: for v in mapping.values(): if isinstance(v, (list, tuple)): model_names.update(v) elif isinstance(v, str): model_names.add(v) return sorted(model_names) def get_tiny_model_names_from_repo(): # All model names defined in auto mappings model_names = set(get_all_model_names()) with open("tests/utils/tiny_model_summary.json") as fp: tiny_model_info = json.load(fp) tiny_models_names = set() for model_base_name in tiny_model_info: tiny_models_names.update(tiny_model_info[model_base_name]["model_classes"]) # Remove a tiny model name if one of its framework implementation hasn't yet a tiny version on the Hub. 
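    # (e.g. `BertModel` is dropped when `TFBertModel` appears in the auto mappings but has no tiny version listed, and vice versa)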
not_on_hub = model_names.difference(tiny_models_names) for model_name in copy.copy(tiny_models_names): if not model_name.startswith("TF") and f"TF{model_name}" in not_on_hub: tiny_models_names.remove(model_name) elif model_name.startswith("TF") and model_name[2:] in not_on_hub: tiny_models_names.remove(model_name) return sorted(tiny_models_names) def get_tiny_model_summary_from_hub(output_path): special_models = COMPOSITE_MODELS.values() # All tiny model base names on Hub model_names = get_all_model_names() models = hf_api.list_models( filter=ModelFilter( author="hf-internal-testing", ) ) _models = set() for x in models: model = x.modelId org, model = model.split("/") if not model.startswith("tiny-random-"): continue model = model.replace("tiny-random-", "") if not model[0].isupper(): continue if model not in model_names and model not in special_models: continue _models.add(model) models = sorted(_models) # All tiny model names on Hub summary = {} for model in models: repo_id = f"hf-internal-testing/tiny-random-{model}" model = model.split("-")[0] try: repo_info = hf_api.repo_info(repo_id) content = { "tokenizer_classes": set(), "processor_classes": set(), "model_classes": set(), "sha": repo_info.sha, } except Exception: continue try: time.sleep(1) tokenizer_fast = AutoTokenizer.from_pretrained(repo_id) content["tokenizer_classes"].add(tokenizer_fast.__class__.__name__) except Exception: pass try: time.sleep(1) tokenizer_slow = AutoTokenizer.from_pretrained(repo_id, use_fast=False) content["tokenizer_classes"].add(tokenizer_slow.__class__.__name__) except Exception: pass try: time.sleep(1) img_p = AutoImageProcessor.from_pretrained(repo_id) content["processor_classes"].add(img_p.__class__.__name__) except Exception: pass try: time.sleep(1) feat_p = AutoFeatureExtractor.from_pretrained(repo_id) if not isinstance(feat_p, BaseImageProcessor): content["processor_classes"].add(feat_p.__class__.__name__) except Exception: pass try: time.sleep(1) model_class = getattr(transformers, model) m = model_class.from_pretrained(repo_id) content["model_classes"].add(m.__class__.__name__) except Exception: pass try: time.sleep(1) model_class = getattr(transformers, f"TF{model}") m = model_class.from_pretrained(repo_id) content["model_classes"].add(m.__class__.__name__) except Exception: pass content["tokenizer_classes"] = sorted(content["tokenizer_classes"]) content["processor_classes"] = sorted(content["processor_classes"]) content["model_classes"] = sorted(content["model_classes"]) summary[model] = content with open(os.path.join(output_path, "hub_tiny_model_summary.json"), "w") as fp: json.dump(summary, fp, ensure_ascii=False, indent=4) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--num_workers", default=1, type=int, help="The number of workers to run.") args = parser.parse_args() # This has to be `spawn` to avoid hanging forever! multiprocessing.set_start_method("spawn") output_path = "tiny_models" all = True model_types = None models_to_skip = get_tiny_model_names_from_repo() no_check = True upload = True organization = "hf-internal-testing" create_tiny_models( output_path, all, model_types, models_to_skip, no_check, upload, organization, token=os.environ.get("TOKEN", None), num_workers=args.num_workers, )
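

# Illustrative usage sketch: a hypothetical helper combining the functions above to list the
# model classes from the auto mappings that `get_tiny_model_names_from_repo` does not consider
# covered by a tiny checkpoint. The helper name is made up, and it is assumed to run from the
# root of a `transformers` checkout so that `tests/utils/tiny_model_summary.json` resolves.
def _example_list_uncovered_model_classes():
    covered = set(get_tiny_model_names_from_repo())
    uncovered = sorted(set(get_all_model_names()) - covered)
    for name in uncovered:
        print(f"no tiny checkpoint recorded for {name}")
    return uncovered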
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/notification_service.py
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import ast import collections import functools import json import operator import os import re import sys import time from typing import Dict, List, Optional, Union import requests from get_ci_error_statistics import get_jobs from get_previous_daily_ci import get_last_daily_ci_reports from slack_sdk import WebClient client = WebClient(token=os.environ["CI_SLACK_BOT_TOKEN"]) NON_MODEL_TEST_MODULES = [ "benchmark", "deepspeed", "extended", "fixtures", "generation", "onnx", "optimization", "pipelines", "sagemaker", "trainer", "utils", ] def handle_test_results(test_results): expressions = test_results.split(" ") failed = 0 success = 0 # When the output is short enough, the output is surrounded by = signs: "== OUTPUT ==" # When it is too long, those signs are not present. time_spent = expressions[-2] if "=" in expressions[-1] else expressions[-1] for i, expression in enumerate(expressions): if "failed" in expression: failed += int(expressions[i - 1]) if "passed" in expression: success += int(expressions[i - 1]) return failed, success, time_spent def handle_stacktraces(test_results): # These files should follow the following architecture: # === FAILURES === # <path>:<line>: Error ... # <path>:<line>: Error ... # <empty line> total_stacktraces = test_results.split("\n")[1:-1] stacktraces = [] for stacktrace in total_stacktraces: try: line = stacktrace[: stacktrace.index(" ")].split(":")[-2] error_message = stacktrace[stacktrace.index(" ") :] stacktraces.append(f"(line {line}) {error_message}") except Exception: stacktraces.append("Cannot retrieve error message.") return stacktraces def dicts_to_sum(objects: Union[Dict[str, Dict], List[dict]]): if isinstance(objects, dict): lists = objects.values() else: lists = objects # Convert each dictionary to counter counters = map(collections.Counter, lists) # Sum all the counters return functools.reduce(operator.add, counters) class Message: def __init__( self, title: str, ci_title: str, model_results: Dict, additional_results: Dict, selected_warnings: List = None, prev_ci_artifacts=None, ): self.title = title self.ci_title = ci_title # Failures and success of the modeling tests self.n_model_success = sum(r["success"] for r in model_results.values()) self.n_model_single_gpu_failures = sum(dicts_to_sum(r["failed"])["single"] for r in model_results.values()) self.n_model_multi_gpu_failures = sum(dicts_to_sum(r["failed"])["multi"] for r in model_results.values()) # Some suites do not have a distinction between single and multi GPU. 
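        # Their failures end up under the `unclassified` key, summed just below.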
self.n_model_unknown_failures = sum(dicts_to_sum(r["failed"])["unclassified"] for r in model_results.values()) self.n_model_failures = ( self.n_model_single_gpu_failures + self.n_model_multi_gpu_failures + self.n_model_unknown_failures ) # Failures and success of the additional tests self.n_additional_success = sum(r["success"] for r in additional_results.values()) if len(additional_results) > 0: # `dicts_to_sum` uses `dicts_to_sum` which requires a non empty dictionary. Let's just add an empty entry. all_additional_failures = dicts_to_sum([r["failed"] for r in additional_results.values()]) self.n_additional_single_gpu_failures = all_additional_failures["single"] self.n_additional_multi_gpu_failures = all_additional_failures["multi"] self.n_additional_unknown_gpu_failures = all_additional_failures["unclassified"] else: self.n_additional_single_gpu_failures = 0 self.n_additional_multi_gpu_failures = 0 self.n_additional_unknown_gpu_failures = 0 self.n_additional_failures = ( self.n_additional_single_gpu_failures + self.n_additional_multi_gpu_failures + self.n_additional_unknown_gpu_failures ) # Results self.n_failures = self.n_model_failures + self.n_additional_failures self.n_success = self.n_model_success + self.n_additional_success self.n_tests = self.n_failures + self.n_success self.model_results = model_results self.additional_results = additional_results self.thread_ts = None if selected_warnings is None: selected_warnings = [] self.selected_warnings = selected_warnings self.prev_ci_artifacts = prev_ci_artifacts @property def time(self) -> str: all_results = [*self.model_results.values(), *self.additional_results.values()] time_spent = [r["time_spent"].split(", ")[0] for r in all_results if len(r["time_spent"])] total_secs = 0 for time in time_spent: time_parts = time.split(":") # Time can be formatted as xx:xx:xx, as .xx, or as x.xx if the time spent was less than a minute. if len(time_parts) == 1: time_parts = [0, 0, time_parts[0]] hours, minutes, seconds = int(time_parts[0]), int(time_parts[1]), float(time_parts[2]) total_secs += hours * 3600 + minutes * 60 + seconds hours, minutes, seconds = total_secs // 3600, (total_secs % 3600) // 60, total_secs % 60 return f"{int(hours)}h{int(minutes)}m{int(seconds)}s" @property def header(self) -> Dict: return {"type": "header", "text": {"type": "plain_text", "text": self.title}} @property def ci_title_section(self) -> Dict: return {"type": "section", "text": {"type": "mrkdwn", "text": self.ci_title}} @property def no_failures(self) -> Dict: return { "type": "section", "text": { "type": "plain_text", "text": f"๐ŸŒž There were no failures: all {self.n_tests} tests passed. The suite ran in {self.time}.", "emoji": True, }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": "Check Action results", "emoji": True}, "url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}", }, } @property def failures(self) -> Dict: return { "type": "section", "text": { "type": "plain_text", "text": ( f"There were {self.n_failures} failures, out of {self.n_tests} tests.\n" f"Number of model failures: {self.n_model_failures}.\n" f"The suite ran in {self.time}." ), "emoji": True, }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": "Check Action results", "emoji": True}, "url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}", }, } @property def warnings(self) -> Dict: # If something goes wrong, let's avoid the CI report failing to be sent. 
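        # Default to the workflow run page; it is replaced below by the link to the warnings-extraction job when that job succeeded.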
button_text = "Check warnings (Link not found)" # Use the workflow run link job_link = f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}" for job in github_actions_jobs: if "Extract warnings in CI artifacts" in job["name"] and job["conclusion"] == "success": button_text = "Check warnings" # Use the actual job link job_link = job["html_url"] break huggingface_hub_warnings = [x for x in self.selected_warnings if "huggingface_hub" in x] text = f"There are {len(self.selected_warnings)} warnings being selected." text += f"\n{len(huggingface_hub_warnings)} of them are from `huggingface_hub`." return { "type": "section", "text": { "type": "plain_text", "text": text, "emoji": True, }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": button_text, "emoji": True}, "url": job_link, }, } @staticmethod def get_device_report(report, rjust=6): if "single" in report and "multi" in report: return f"{str(report['single']).rjust(rjust)} | {str(report['multi']).rjust(rjust)} | " elif "single" in report: return f"{str(report['single']).rjust(rjust)} | {'0'.rjust(rjust)} | " elif "multi" in report: return f"{'0'.rjust(rjust)} | {str(report['multi']).rjust(rjust)} | " @property def category_failures(self) -> Dict: model_failures = [v["failed"] for v in self.model_results.values()] category_failures = {} for model_failure in model_failures: for key, value in model_failure.items(): if key not in category_failures: category_failures[key] = dict(value) else: category_failures[key]["unclassified"] += value["unclassified"] category_failures[key]["single"] += value["single"] category_failures[key]["multi"] += value["multi"] individual_reports = [] for key, value in category_failures.items(): device_report = self.get_device_report(value) if sum(value.values()): if device_report: individual_reports.append(f"{device_report}{key}") else: individual_reports.append(key) header = "Single | Multi | Category\n" category_failures_report = prepare_reports( title="The following modeling categories had failures", header=header, reports=individual_reports ) return {"type": "section", "text": {"type": "mrkdwn", "text": category_failures_report}} def compute_diff_for_failure_reports(self, curr_failure_report, prev_failure_report): # noqa # Remove the leading and training parts that don't contain failure count information. 
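        # (keep only the data rows of each failure table: the slicing drops the title and header at the top and the closing code fence at the bottom)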
model_failures = curr_failure_report.split("\n")[3:-2] prev_model_failures = prev_failure_report.split("\n")[3:-2] entries_changed = set(model_failures).difference(prev_model_failures) prev_map = {} for f in prev_model_failures: items = [x.strip() for x in f.split("| ")] prev_map[items[-1]] = [int(x) for x in items[:-1]] curr_map = {} for f in entries_changed: items = [x.strip() for x in f.split("| ")] curr_map[items[-1]] = [int(x) for x in items[:-1]] diff_map = {} for k, v in curr_map.items(): if k not in prev_map: diff_map[k] = v else: diff = [x - y for x, y in zip(v, prev_map[k])] if max(diff) > 0: diff_map[k] = diff entries_changed = [] for model_name, diff_values in diff_map.items(): diff = [str(x) for x in diff_values] diff = [f"+{x}" if (x != "0" and not x.startswith("-")) else x for x in diff] diff = [x.rjust(9) for x in diff] device_report = " | ".join(diff) + " | " report = f"{device_report}{model_name}" entries_changed.append(report) entries_changed = sorted(entries_changed, key=lambda s: s.split("| ")[-1]) return entries_changed @property def model_failures(self) -> List[Dict]: # Obtain per-model failures def per_model_sum(model_category_dict): return dicts_to_sum(model_category_dict["failed"].values()) failures = {} non_model_failures = { k: per_model_sum(v) for k, v in self.model_results.items() if sum(per_model_sum(v).values()) } for k, v in self.model_results.items(): if k in NON_MODEL_TEST_MODULES: pass if sum(per_model_sum(v).values()): dict_failed = dict(v["failed"]) pytorch_specific_failures = dict_failed.pop("PyTorch") tensorflow_specific_failures = dict_failed.pop("TensorFlow") other_failures = dicts_to_sum(dict_failed.values()) failures[k] = { "PyTorch": pytorch_specific_failures, "TensorFlow": tensorflow_specific_failures, "other": other_failures, } model_reports = [] other_module_reports = [] for key, value in non_model_failures.items(): if key in NON_MODEL_TEST_MODULES: device_report = self.get_device_report(value) if sum(value.values()): if device_report: report = f"{device_report}{key}" else: report = key other_module_reports.append(report) for key, value in failures.items(): device_report_values = [ value["PyTorch"]["single"], value["PyTorch"]["multi"], value["TensorFlow"]["single"], value["TensorFlow"]["multi"], sum(value["other"].values()), ] if sum(device_report_values): device_report = " | ".join([str(x).rjust(9) for x in device_report_values]) + " | " report = f"{device_report}{key}" model_reports.append(report) # (Possibly truncated) reports for the current workflow run - to be sent to Slack channels model_header = "Single PT | Multi PT | Single TF | Multi TF | Other | Category\n" sorted_model_reports = sorted(model_reports, key=lambda s: s.split("| ")[-1]) model_failures_report = prepare_reports( title="These following model modules had failures", header=model_header, reports=sorted_model_reports ) module_header = "Single | Multi | Category\n" sorted_module_reports = sorted(other_module_reports, key=lambda s: s.split("| ")[-1]) module_failures_report = prepare_reports( title="The following non-model modules had failures", header=module_header, reports=sorted_module_reports ) # To be sent to Slack channels model_failure_sections = [ {"type": "section", "text": {"type": "mrkdwn", "text": model_failures_report}}, {"type": "section", "text": {"type": "mrkdwn", "text": module_failures_report}}, ] # Save the complete (i.e. 
no truncation) failure tables (of the current workflow run) # (to be uploaded as artifacts) model_failures_report = prepare_reports( title="These following model modules had failures", header=model_header, reports=sorted_model_reports, to_truncate=False, ) file_path = os.path.join(os.getcwd(), "prev_ci_results/model_failures_report.txt") with open(file_path, "w", encoding="UTF-8") as fp: fp.write(model_failures_report) module_failures_report = prepare_reports( title="The following non-model modules had failures", header=module_header, reports=sorted_module_reports, to_truncate=False, ) file_path = os.path.join(os.getcwd(), "prev_ci_results/module_failures_report.txt") with open(file_path, "w", encoding="UTF-8") as fp: fp.write(module_failures_report) if self.prev_ci_artifacts is not None: # if the last run produces artifact named `prev_ci_results` if ( "prev_ci_results" in self.prev_ci_artifacts and "model_failures_report.txt" in self.prev_ci_artifacts["prev_ci_results"] ): # Compute the difference of the previous/current (model failure) table prev_model_failures = self.prev_ci_artifacts["prev_ci_results"]["model_failures_report.txt"] entries_changed = self.compute_diff_for_failure_reports(model_failures_report, prev_model_failures) if len(entries_changed) > 0: # Save the complete difference diff_report = prepare_reports( title="Changed model modules failures", header=model_header, reports=entries_changed, to_truncate=False, ) file_path = os.path.join(os.getcwd(), "prev_ci_results/changed_model_failures_report.txt") with open(file_path, "w", encoding="UTF-8") as fp: fp.write(diff_report) # To be sent to Slack channels diff_report = prepare_reports( title="*Changed model modules failures*", header=model_header, reports=entries_changed, ) model_failure_sections.append( {"type": "section", "text": {"type": "mrkdwn", "text": diff_report}}, ) return model_failure_sections @property def additional_failures(self) -> Dict: failures = {k: v["failed"] for k, v in self.additional_results.items()} errors = {k: v["error"] for k, v in self.additional_results.items()} individual_reports = [] for key, value in failures.items(): device_report = self.get_device_report(value) if sum(value.values()) or errors[key]: report = f"{key}" if errors[key]: report = f"[Errored out] {report}" if device_report: report = f"{device_report}{report}" individual_reports.append(report) header = "Single | Multi | Category\n" failures_report = prepare_reports( title="The following non-modeling tests had failures", header=header, reports=individual_reports ) return {"type": "section", "text": {"type": "mrkdwn", "text": failures_report}} @property def payload(self) -> str: blocks = [self.header] if self.ci_title: blocks.append(self.ci_title_section) if self.n_model_failures > 0 or self.n_additional_failures > 0: blocks.append(self.failures) if self.n_model_failures > 0: blocks.append(self.category_failures) for block in self.model_failures: if block["text"]["text"]: blocks.append(block) if self.n_additional_failures > 0: blocks.append(self.additional_failures) if self.n_model_failures == 0 and self.n_additional_failures == 0: blocks.append(self.no_failures) if len(self.selected_warnings) > 0: blocks.append(self.warnings) new_failure_blocks = self.get_new_model_failure_blocks(with_header=False) if len(new_failure_blocks) > 0: blocks.extend(new_failure_blocks) return json.dumps(blocks) @staticmethod def error_out(title, ci_title="", runner_not_available=False, runner_failed=False, setup_failed=False): blocks = [] title_block = 
{"type": "header", "text": {"type": "plain_text", "text": title}} blocks.append(title_block) if ci_title: ci_title_block = {"type": "section", "text": {"type": "mrkdwn", "text": ci_title}} blocks.append(ci_title_block) offline_runners = [] if runner_not_available: text = "๐Ÿ’” CI runners are not available! Tests are not run. ๐Ÿ˜ญ" result = os.environ.get("OFFLINE_RUNNERS") if result is not None: offline_runners = json.loads(result) elif runner_failed: text = "๐Ÿ’” CI runners have problems! Tests are not run. ๐Ÿ˜ญ" elif setup_failed: text = "๐Ÿ’” Setup job failed. Tests are not run. ๐Ÿ˜ญ" else: text = "๐Ÿ’” There was an issue running the tests. ๐Ÿ˜ญ" error_block_1 = { "type": "header", "text": { "type": "plain_text", "text": text, }, } text = "" if len(offline_runners) > 0: text = "\n โ€ข " + "\n โ€ข ".join(offline_runners) text = f"The following runners are offline:\n{text}\n\n" text += "๐Ÿ™ Let's fix it ASAP! ๐Ÿ™" error_block_2 = { "type": "section", "text": { "type": "plain_text", "text": text, }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": "Check Action results", "emoji": True}, "url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}", }, } blocks.extend([error_block_1, error_block_2]) payload = json.dumps(blocks) print("Sending the following payload") print(json.dumps({"blocks": blocks})) client.chat_postMessage( channel=SLACK_REPORT_CHANNEL_ID, text=text, blocks=payload, ) def post(self): payload = self.payload print("Sending the following payload") print(json.dumps({"blocks": json.loads(payload)})) text = f"{self.n_failures} failures out of {self.n_tests} tests," if self.n_failures else "All tests passed." self.thread_ts = client.chat_postMessage( channel=SLACK_REPORT_CHANNEL_ID, blocks=payload, text=text, ) def get_reply_blocks(self, job_name, job_result, failures, device, text): """ failures: A list with elements of the form {"line": full test name, "trace": error trace} """ # `text` must be less than 3001 characters in Slack SDK # keep some room for adding "[Truncated]" when necessary MAX_ERROR_TEXT = 3000 - len("[Truncated]") failure_text = "" for idx, error in enumerate(failures): new_text = failure_text + f'*{error["line"]}*\n_{error["trace"]}_\n\n' if len(new_text) > MAX_ERROR_TEXT: # `failure_text` here has length <= 3000 failure_text = failure_text + "[Truncated]" break # `failure_text` here has length <= MAX_ERROR_TEXT failure_text = new_text title = job_name if device is not None: title += f" ({device}-gpu)" content = {"type": "section", "text": {"type": "mrkdwn", "text": text}} # TODO: Make sure we always have a valid job link (or at least a way not to break the report sending) # Currently we get the device from a job's artifact name. # If a device is found, the job name should contain the device type, for example, `XXX (single-gpu)`. # This could be done by adding `machine_type` in a job's `strategy`. # (If `job_result["job_link"][device]` is `None`, we get an error: `... 
[ERROR] must provide a string ...`) if job_result["job_link"] is not None and job_result["job_link"][device] is not None: content["accessory"] = { "type": "button", "text": {"type": "plain_text", "text": "GitHub Action job", "emoji": True}, "url": job_result["job_link"][device], } return [ {"type": "header", "text": {"type": "plain_text", "text": title.upper(), "emoji": True}}, content, {"type": "section", "text": {"type": "mrkdwn", "text": failure_text}}, ] def get_new_model_failure_blocks(self, with_header=True): if self.prev_ci_artifacts is None: return {} sorted_dict = sorted(self.model_results.items(), key=lambda t: t[0]) prev_model_results = {} if ( "prev_ci_results" in self.prev_ci_artifacts and "model_results.json" in self.prev_ci_artifacts["prev_ci_results"] ): prev_model_results = json.loads(self.prev_ci_artifacts["prev_ci_results"]["model_results.json"]) all_failure_lines = {} for job, job_result in sorted_dict: if len(job_result["failures"]): devices = sorted(job_result["failures"].keys(), reverse=True) for device in devices: failures = job_result["failures"][device] prev_error_lines = {} if job in prev_model_results and device in prev_model_results[job]["failures"]: prev_error_lines = {error["line"] for error in prev_model_results[job]["failures"][device]} url = None if job_result["job_link"] is not None and job_result["job_link"][device] is not None: url = job_result["job_link"][device] for idx, error in enumerate(failures): if error["line"] in prev_error_lines: continue new_text = f'{error["line"]}\n\n' if new_text not in all_failure_lines: all_failure_lines[new_text] = [] all_failure_lines[new_text].append(f"<{url}|{device}>" if url is not None else device) MAX_ERROR_TEXT = 3000 - len("[Truncated]") - len("```New model failures```\n\n") failure_text = "" for line, devices in all_failure_lines.items(): new_text = failure_text + f"{'|'.join(devices)} gpu\n{line}" if len(new_text) > MAX_ERROR_TEXT: # `failure_text` here has length <= 3000 failure_text = failure_text + "[Truncated]" break # `failure_text` here has length <= MAX_ERROR_TEXT failure_text = new_text blocks = [] if failure_text: if with_header: blocks.append( {"type": "header", "text": {"type": "plain_text", "text": "New model failures", "emoji": True}} ) else: failure_text = f"*New model failures*\n\n{failure_text}" blocks.append({"type": "section", "text": {"type": "mrkdwn", "text": failure_text}}) return blocks def post_reply(self): if self.thread_ts is None: raise ValueError("Can only post reply if a post has been made.") sorted_dict = sorted(self.model_results.items(), key=lambda t: t[0]) for job, job_result in sorted_dict: if len(job_result["failures"]): for device, failures in job_result["failures"].items(): text = "\n".join( sorted([f"*{k}*: {v[device]}" for k, v in job_result["failed"].items() if v[device]]) ) blocks = self.get_reply_blocks(job, job_result, failures, device, text=text) print("Sending the following reply") print(json.dumps({"blocks": blocks})) client.chat_postMessage( channel=SLACK_REPORT_CHANNEL_ID, text=f"Results for {job}", blocks=blocks, thread_ts=self.thread_ts["ts"], ) time.sleep(1) for job, job_result in self.additional_results.items(): if len(job_result["failures"]): for device, failures in job_result["failures"].items(): blocks = self.get_reply_blocks( job, job_result, failures, device, text=f'Number of failures: {job_result["failed"][device]}', ) print("Sending the following reply") print(json.dumps({"blocks": blocks})) client.chat_postMessage( channel=SLACK_REPORT_CHANNEL_ID, 
text=f"Results for {job}", blocks=blocks, thread_ts=self.thread_ts["ts"], ) time.sleep(1) blocks = self.get_new_model_failure_blocks() if blocks: print("Sending the following reply") print(json.dumps({"blocks": blocks})) client.chat_postMessage( channel=SLACK_REPORT_CHANNEL_ID, text="Results for new failures", blocks=blocks, thread_ts=self.thread_ts["ts"], ) time.sleep(1) def retrieve_artifact(artifact_path: str, gpu: Optional[str]): if gpu not in [None, "single", "multi"]: raise ValueError(f"Invalid GPU for artifact. Passed GPU: `{gpu}`.") _artifact = {} if os.path.exists(artifact_path): files = os.listdir(artifact_path) for file in files: try: with open(os.path.join(artifact_path, file)) as f: _artifact[file.split(".")[0]] = f.read() except UnicodeDecodeError as e: raise ValueError(f"Could not open {os.path.join(artifact_path, file)}.") from e return _artifact def retrieve_available_artifacts(): class Artifact: def __init__(self, name: str, single_gpu: bool = False, multi_gpu: bool = False): self.name = name self.single_gpu = single_gpu self.multi_gpu = multi_gpu self.paths = [] def __str__(self): return self.name def add_path(self, path: str, gpu: str = None): self.paths.append({"name": self.name, "path": path, "gpu": gpu}) _available_artifacts: Dict[str, Artifact] = {} directories = filter(os.path.isdir, os.listdir()) for directory in directories: artifact_name = directory name_parts = artifact_name.split("_postfix_") if len(name_parts) > 1: artifact_name = name_parts[0] if artifact_name.startswith("single-gpu"): artifact_name = artifact_name[len("single-gpu") + 1 :] if artifact_name in _available_artifacts: _available_artifacts[artifact_name].single_gpu = True else: _available_artifacts[artifact_name] = Artifact(artifact_name, single_gpu=True) _available_artifacts[artifact_name].add_path(directory, gpu="single") elif artifact_name.startswith("multi-gpu"): artifact_name = artifact_name[len("multi-gpu") + 1 :] if artifact_name in _available_artifacts: _available_artifacts[artifact_name].multi_gpu = True else: _available_artifacts[artifact_name] = Artifact(artifact_name, multi_gpu=True) _available_artifacts[artifact_name].add_path(directory, gpu="multi") else: if artifact_name not in _available_artifacts: _available_artifacts[artifact_name] = Artifact(artifact_name) _available_artifacts[artifact_name].add_path(directory) return _available_artifacts def prepare_reports(title, header, reports, to_truncate=True): report = "" MAX_ERROR_TEXT = 3000 - len("[Truncated]") if not to_truncate: MAX_ERROR_TEXT = float("inf") if len(reports) > 0: # `text` must be less than 3001 characters in Slack SDK # keep some room for adding "[Truncated]" when necessary for idx in range(len(reports)): _report = header + "\n".join(reports[: idx + 1]) new_report = f"{title}:\n```\n{_report}\n```\n" if len(new_report) > MAX_ERROR_TEXT: # `report` here has length <= 3000 report = report + "[Truncated]" break report = new_report return report if __name__ == "__main__": SLACK_REPORT_CHANNEL_ID = os.environ["SLACK_REPORT_CHANNEL"] # runner_status = os.environ.get("RUNNER_STATUS") # runner_env_status = os.environ.get("RUNNER_ENV_STATUS") setup_status = os.environ.get("SETUP_STATUS") # runner_not_available = True if runner_status is not None and runner_status != "success" else False # runner_failed = True if runner_env_status is not None and runner_env_status != "success" else False # Let's keep the lines regardig runners' status (we might be able to use them again in the future) runner_not_available = False runner_failed 
= False # Some jobs don't depend (`needs`) on the job `setup`: in this case, the status of the job `setup` is `skipped`. setup_failed = False if setup_status in ["skipped", "success"] else True org = "huggingface" repo = "transformers" repository_full_name = f"{org}/{repo}" # This env. variable is set in workflow file (under the job `send_results`). ci_event = os.environ["CI_EVENT"] # To find the PR number in a commit title, for example, `Add AwesomeFormer model (#99999)` pr_number_re = re.compile(r"\(#(\d+)\)$") title = f"๐Ÿค— Results of the {ci_event} tests." # Add Commit/PR title with a link for push CI # (check the title in 2 env. variables - depending on the CI is triggered via `push` or `workflow_run` event) ci_title_push = os.environ.get("CI_TITLE_PUSH") ci_title_workflow_run = os.environ.get("CI_TITLE_WORKFLOW_RUN") ci_title = ci_title_push if ci_title_push else ci_title_workflow_run ci_sha = os.environ.get("CI_SHA") ci_url = None if ci_sha: ci_url = f"https://github.com/{repository_full_name}/commit/{ci_sha}" if ci_title is not None: if ci_url is None: raise ValueError( "When a title is found (`ci_title`), it means a `push` event or a `workflow_run` even (triggered by " "another `push` event), and the commit SHA has to be provided in order to create the URL to the " "commit page." ) ci_title = ci_title.strip().split("\n")[0].strip() # Retrieve the PR title and author login to complete the report commit_number = ci_url.split("/")[-1] ci_detail_url = f"https://api.github.com/repos/{repository_full_name}/commits/{commit_number}" ci_details = requests.get(ci_detail_url).json() ci_author = ci_details["author"]["login"] merged_by = None # Find the PR number (if any) and change the url to the actual PR page. numbers = pr_number_re.findall(ci_title) if len(numbers) > 0: pr_number = numbers[0] ci_detail_url = f"https://api.github.com/repos/{repository_full_name}/pulls/{pr_number}" ci_details = requests.get(ci_detail_url).json() ci_author = ci_details["user"]["login"] ci_url = f"https://github.com/{repository_full_name}/pull/{pr_number}" merged_by = ci_details["merged_by"]["login"] if merged_by is None: ci_title = f"<{ci_url}|{ci_title}>\nAuthor: {ci_author}" else: ci_title = f"<{ci_url}|{ci_title}>\nAuthor: {ci_author} | Merged by: {merged_by}" elif ci_sha: ci_title = f"<{ci_url}|commit: {ci_sha}>" else: ci_title = "" if runner_not_available or runner_failed or setup_failed: Message.error_out(title, ci_title, runner_not_available, runner_failed, setup_failed) exit(0) # sys.argv[0] is always `utils/notification_service.py`. arguments = sys.argv[1:] # In our usage in `.github/workflows/slack-report.yml`, we always pass an argument when calling this script. # The argument could be an empty string `""` if a job doesn't depend on the job `setup`. if arguments[0] == "": models = [] else: model_list_as_str = arguments[0] try: folder_slices = ast.literal_eval(model_list_as_str) # Need to change from elements like `models/bert` to `models_bert` (the ones used as artifact names). 
models = [x.replace("models/", "models_") for folders in folder_slices for x in folders] except Exception: Message.error_out(title, ci_title) raise ValueError("Errored out.") github_actions_jobs = get_jobs( workflow_run_id=os.environ["GITHUB_RUN_ID"], token=os.environ["ACCESS_REPO_INFO_TOKEN"] ) github_actions_job_links = {job["name"]: job["html_url"] for job in github_actions_jobs} artifact_name_to_job_map = {} for job in github_actions_jobs: for step in job["steps"]: if step["name"].startswith("Test suite reports artifacts: "): artifact_name = step["name"][len("Test suite reports artifacts: ") :] artifact_name_to_job_map[artifact_name] = job break available_artifacts = retrieve_available_artifacts() modeling_categories = [ "PyTorch", "TensorFlow", "Flax", "Tokenizers", "Pipelines", "Trainer", "ONNX", "Auto", "Unclassified", ] # This dict will contain all the information relative to each model: # - Failures: the total, as well as the number of failures per-category defined above # - Success: total # - Time spent: as a comma-separated list of elapsed time # - Failures: as a line-break separated list of errors model_results = { model: { "failed": {m: {"unclassified": 0, "single": 0, "multi": 0} for m in modeling_categories}, "success": 0, "time_spent": "", "failures": {}, "job_link": {}, } for model in models if f"run_models_gpu_{model}_test_reports" in available_artifacts } unclassified_model_failures = [] for model in model_results.keys(): for artifact_path in available_artifacts[f"run_models_gpu_{model}_test_reports"].paths: artifact = retrieve_artifact(artifact_path["path"], artifact_path["gpu"]) if "stats" in artifact: # Link to the GitHub Action job job = artifact_name_to_job_map[artifact_path["path"]] model_results[model]["job_link"][artifact_path["gpu"]] = job["html_url"] failed, success, time_spent = handle_test_results(artifact["stats"]) model_results[model]["success"] += success model_results[model]["time_spent"] += time_spent[1:-1] + ", " stacktraces = handle_stacktraces(artifact["failures_line"]) for line in artifact["summary_short"].split("\n"): if line.startswith("FAILED "): line = line[len("FAILED ") :] line = line.split()[0].replace("\n", "") if artifact_path["gpu"] not in model_results[model]["failures"]: model_results[model]["failures"][artifact_path["gpu"]] = [] model_results[model]["failures"][artifact_path["gpu"]].append( {"line": line, "trace": stacktraces.pop(0)} ) if re.search("test_modeling_tf_", line): model_results[model]["failed"]["TensorFlow"][artifact_path["gpu"]] += 1 elif re.search("test_modeling_flax_", line): model_results[model]["failed"]["Flax"][artifact_path["gpu"]] += 1 elif re.search("test_modeling", line): model_results[model]["failed"]["PyTorch"][artifact_path["gpu"]] += 1 elif re.search("test_tokenization", line): model_results[model]["failed"]["Tokenizers"][artifact_path["gpu"]] += 1 elif re.search("test_pipelines", line): model_results[model]["failed"]["Pipelines"][artifact_path["gpu"]] += 1 elif re.search("test_trainer", line): model_results[model]["failed"]["Trainer"][artifact_path["gpu"]] += 1 elif re.search("onnx", line): model_results[model]["failed"]["ONNX"][artifact_path["gpu"]] += 1 elif re.search("auto", line): model_results[model]["failed"]["Auto"][artifact_path["gpu"]] += 1 else: model_results[model]["failed"]["Unclassified"][artifact_path["gpu"]] += 1 unclassified_model_failures.append(line) # Additional runs additional_files = { "PyTorch pipelines": "run_pipelines_torch_gpu_test_reports", "TensorFlow pipelines": 
"run_pipelines_tf_gpu_test_reports", "Examples directory": "run_examples_gpu_test_reports", "Torch CUDA extension tests": "run_torch_cuda_extensions_gpu_test_reports", } if ci_event in ["push", "Nightly CI"] or ci_event.startswith("Past CI"): del additional_files["Examples directory"] del additional_files["PyTorch pipelines"] del additional_files["TensorFlow pipelines"] elif ci_event.startswith("Scheduled CI (AMD)"): del additional_files["TensorFlow pipelines"] del additional_files["Torch CUDA extension tests"] elif ci_event.startswith("Push CI (AMD)"): additional_files = {} # A map associating the job names (specified by `inputs.job` in a workflow file) with the keys of # `additional_files`. This is used to remove some entries in `additional_files` that are not concerned by a # specific job. See below. job_to_test_map = { "run_pipelines_torch_gpu": "PyTorch pipelines", "run_pipelines_tf_gpu": "TensorFlow pipelines", "run_examples_gpu": "Examples directory", "run_torch_cuda_extensions_gpu": "Torch CUDA extension tests", } # Remove some entries in `additional_files` if they are not concerned. test_name = None job_name = os.getenv("CI_TEST_JOB") if job_name in job_to_test_map: test_name = job_to_test_map[job_name] additional_files = {k: v for k, v in additional_files.items() if k == test_name} additional_results = { key: { "failed": {"unclassified": 0, "single": 0, "multi": 0}, "success": 0, "time_spent": "", "error": False, "failures": {}, "job_link": {}, } for key in additional_files.keys() } for key in additional_results.keys(): # If a whole suite of test fails, the artifact isn't available. if additional_files[key] not in available_artifacts: additional_results[key]["error"] = True continue for artifact_path in available_artifacts[additional_files[key]].paths: # Link to the GitHub Action job job = artifact_name_to_job_map[artifact_path["path"]] additional_results[key]["job_link"][artifact_path["gpu"]] = job["html_url"] artifact = retrieve_artifact(artifact_path["path"], artifact_path["gpu"]) stacktraces = handle_stacktraces(artifact["failures_line"]) failed, success, time_spent = handle_test_results(artifact["stats"]) additional_results[key]["failed"][artifact_path["gpu"] or "unclassified"] += failed additional_results[key]["success"] += success additional_results[key]["time_spent"] += time_spent[1:-1] + ", " if len(artifact["errors"]): additional_results[key]["error"] = True if failed: for line in artifact["summary_short"].split("\n"): if line.startswith("FAILED "): line = line[len("FAILED ") :] line = line.split()[0].replace("\n", "") if artifact_path["gpu"] not in additional_results[key]["failures"]: additional_results[key]["failures"][artifact_path["gpu"]] = [] additional_results[key]["failures"][artifact_path["gpu"]].append( {"line": line, "trace": stacktraces.pop(0)} ) # Let's only check the warning for the model testing job. Currently, the job `run_extract_warnings` is only run # when `inputs.job` (in the workflow file) is `run_models_gpu`. The reason is: otherwise we need to save several # artifacts with different names which complicates the logic for an insignificant part of the CI workflow reporting. 
selected_warnings = [] if job_name == "run_models_gpu": if "warnings_in_ci" in available_artifacts: directory = available_artifacts["warnings_in_ci"].paths[0]["path"] with open(os.path.join(directory, "selected_warnings.json")) as fp: selected_warnings = json.load(fp) if not os.path.isdir(os.path.join(os.getcwd(), "prev_ci_results")): os.makedirs(os.path.join(os.getcwd(), "prev_ci_results")) # Only the model testing job is concerned: this condition is to avoid other jobs to upload the empty list as # results. if job_name == "run_models_gpu": with open("prev_ci_results/model_results.json", "w", encoding="UTF-8") as fp: json.dump(model_results, fp, indent=4, ensure_ascii=False) prev_ci_artifacts = None target_workflow = "huggingface/transformers/.github/workflows/self-scheduled.yml@refs/heads/main" if os.environ.get("CI_WORKFLOW_REF") == target_workflow: # Get the last previously completed CI's failure tables artifact_names = ["prev_ci_results"] output_dir = os.path.join(os.getcwd(), "previous_reports") os.makedirs(output_dir, exist_ok=True) prev_ci_artifacts = get_last_daily_ci_reports( artifact_names=artifact_names, output_dir=output_dir, token=os.environ["ACCESS_REPO_INFO_TOKEN"] ) message = Message( title, ci_title, model_results, additional_results, selected_warnings=selected_warnings, prev_ci_artifacts=prev_ci_artifacts, ) # send report only if there is any failure (for push CI) if message.n_failures or (ci_event != "push" and not ci_event.startswith("Push CI (AMD)")): message.post() message.post_reply()
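

# Illustrative usage sketch: a tiny, hypothetical example of the pytest "stats" line format that
# `handle_test_results` expects. The sample string is made up; real values come from the summary
# line stored in the CI artifacts.
def _example_parse_stats_line():
    stats = "= 3 failed, 120 passed, 7 skipped, 2 warnings in 214.12s ="
    failed, success, time_spent = handle_test_results(stats)
    # failed == 3, success == 120, time_spent == "214.12s"
    return failed, success, time_spent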
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/notification_service_doc_tests.py
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import os import re import time from typing import Dict, List from get_ci_error_statistics import get_jobs from slack_sdk import WebClient client = WebClient(token=os.environ["CI_SLACK_BOT_TOKEN"]) def handle_test_results(test_results): expressions = test_results.split(" ") failed = 0 success = 0 # When the output is short enough, the output is surrounded by = signs: "== OUTPUT ==" # When it is too long, those signs are not present. time_spent = expressions[-2] if "=" in expressions[-1] else expressions[-1] for i, expression in enumerate(expressions): if "failed" in expression: failed += int(expressions[i - 1]) if "passed" in expression: success += int(expressions[i - 1]) return failed, success, time_spent def extract_first_line_failure(failures_short_lines): failures = {} file = None in_error = False for line in failures_short_lines.split("\n"): if re.search(r"_ \[doctest\]", line): in_error = True file = line.split(" ")[2] elif in_error and not line.split(" ")[0].isdigit(): failures[file] = line in_error = False return failures class Message: def __init__(self, title: str, doc_test_results: Dict): self.title = title self.n_success = sum(job_result["n_success"] for job_result in doc_test_results.values()) self.n_failures = sum(job_result["n_failures"] for job_result in doc_test_results.values()) self.n_tests = self.n_success + self.n_failures # Failures and success of the modeling tests self.doc_test_results = doc_test_results @property def time(self) -> str: all_results = [*self.doc_test_results.values()] time_spent = [r["time_spent"].split(", ")[0] for r in all_results if len(r["time_spent"])] total_secs = 0 for time in time_spent: time_parts = time.split(":") # Time can be formatted as xx:xx:xx, as .xx, or as x.xx if the time spent was less than a minute. if len(time_parts) == 1: time_parts = [0, 0, time_parts[0]] hours, minutes, seconds = int(time_parts[0]), int(time_parts[1]), float(time_parts[2]) total_secs += hours * 3600 + minutes * 60 + seconds hours, minutes, seconds = total_secs // 3600, (total_secs % 3600) // 60, total_secs % 60 return f"{int(hours)}h{int(minutes)}m{int(seconds)}s" @property def header(self) -> Dict: return {"type": "header", "text": {"type": "plain_text", "text": self.title}} @property def no_failures(self) -> Dict: return { "type": "section", "text": { "type": "plain_text", "text": f"๐ŸŒž There were no failures: all {self.n_tests} tests passed. The suite ran in {self.time}.", "emoji": True, }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": "Check Action results", "emoji": True}, "url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}", }, } @property def failures(self) -> Dict: return { "type": "section", "text": { "type": "plain_text", "text": ( f"There were {self.n_failures} failures, out of {self.n_tests} tests.\nThe suite ran in" f" {self.time}." 
), "emoji": True, }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": "Check Action results", "emoji": True}, "url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}", }, } @property def category_failures(self) -> List[Dict]: failure_blocks = [] MAX_ERROR_TEXT = 3000 - len("The following examples had failures:\n\n\n\n") - len("[Truncated]\n") line_length = 40 category_failures = {k: v["failed"] for k, v in doc_test_results.items() if isinstance(v, dict)} def single_category_failures(category, failures): text = "" if len(failures) == 0: return "" text += f"*{category} failures*:".ljust(line_length // 2).rjust(line_length // 2) + "\n" for idx, failure in enumerate(failures): new_text = text + f"`{failure}`\n" if len(new_text) > MAX_ERROR_TEXT: text = text + "[Truncated]\n" break text = new_text return text for category, failures in category_failures.items(): report = single_category_failures(category, failures) if len(report) == 0: continue block = { "type": "section", "text": { "type": "mrkdwn", "text": f"The following examples had failures:\n\n\n{report}\n", }, } failure_blocks.append(block) return failure_blocks @property def payload(self) -> str: blocks = [self.header] if self.n_failures > 0: blocks.append(self.failures) if self.n_failures > 0: blocks.extend(self.category_failures) if self.n_failures == 0: blocks.append(self.no_failures) return json.dumps(blocks) @staticmethod def error_out(): payload = [ { "type": "section", "text": { "type": "plain_text", "text": "There was an issue running the tests.", }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": "Check Action results", "emoji": True}, "url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}", }, } ] print("Sending the following payload") print(json.dumps({"blocks": json.loads(payload)})) client.chat_postMessage( channel=SLACK_REPORT_CHANNEL_ID, text="There was an issue running the tests.", blocks=payload, ) def post(self): print("Sending the following payload") print(json.dumps({"blocks": json.loads(self.payload)})) text = f"{self.n_failures} failures out of {self.n_tests} tests," if self.n_failures else "All tests passed." 
self.thread_ts = client.chat_postMessage( channel=SLACK_REPORT_CHANNEL_ID, blocks=self.payload, text=text, ) def get_reply_blocks(self, job_name, job_link, failures, text): # `text` must be less than 3001 characters in Slack SDK # keep some room for adding "[Truncated]" when necessary MAX_ERROR_TEXT = 3000 - len("[Truncated]") failure_text = "" for key, value in failures.items(): new_text = failure_text + f"*{key}*\n_{value}_\n\n" if len(new_text) > MAX_ERROR_TEXT: # `failure_text` here has length <= 3000 failure_text = failure_text + "[Truncated]" break # `failure_text` here has length <= MAX_ERROR_TEXT failure_text = new_text title = job_name content = {"type": "section", "text": {"type": "mrkdwn", "text": text}} if job_link is not None: content["accessory"] = { "type": "button", "text": {"type": "plain_text", "text": "GitHub Action job", "emoji": True}, "url": job_link, } return [ {"type": "header", "text": {"type": "plain_text", "text": title, "emoji": True}}, content, {"type": "section", "text": {"type": "mrkdwn", "text": failure_text}}, ] def post_reply(self): if self.thread_ts is None: raise ValueError("Can only post reply if a post has been made.") sorted_dict = sorted(self.doc_test_results.items(), key=lambda t: t[0]) for job_name, job_result in sorted_dict: if len(job_result["failures"]) > 0: text = f"*Num failures* :{len(job_result['failed'])} \n" failures = job_result["failures"] blocks = self.get_reply_blocks(job_name, job_result["job_link"], failures, text=text) print("Sending the following reply") print(json.dumps({"blocks": blocks})) client.chat_postMessage( channel=SLACK_REPORT_CHANNEL_ID, text=f"Results for {job_name}", blocks=blocks, thread_ts=self.thread_ts["ts"], ) time.sleep(1) def retrieve_artifact(name: str): _artifact = {} if os.path.exists(name): files = os.listdir(name) for file in files: try: with open(os.path.join(name, file), encoding="utf-8") as f: _artifact[file.split(".")[0]] = f.read() except UnicodeDecodeError as e: raise ValueError(f"Could not open {os.path.join(name, file)}.") from e return _artifact def retrieve_available_artifacts(): class Artifact: def __init__(self, name: str): self.name = name self.paths = [] def __str__(self): return self.name def add_path(self, path: str): self.paths.append({"name": self.name, "path": path}) _available_artifacts: Dict[str, Artifact] = {} directories = filter(os.path.isdir, os.listdir()) for directory in directories: artifact_name = directory if artifact_name not in _available_artifacts: _available_artifacts[artifact_name] = Artifact(artifact_name) _available_artifacts[artifact_name].add_path(directory) return _available_artifacts if __name__ == "__main__": SLACK_REPORT_CHANNEL_ID = os.environ["SLACK_REPORT_CHANNEL"] github_actions_jobs = get_jobs( workflow_run_id=os.environ["GITHUB_RUN_ID"], token=os.environ["ACCESS_REPO_INFO_TOKEN"] ) artifact_name_to_job_map = {} for job in github_actions_jobs: for step in job["steps"]: if step["name"].startswith("Test suite reports artifacts: "): artifact_name = step["name"][len("Test suite reports artifacts: ") :] artifact_name_to_job_map[artifact_name] = job break available_artifacts = retrieve_available_artifacts() doc_test_results = {} # `artifact_key` is the artifact path for artifact_key, artifact_obj in available_artifacts.items(): artifact_path = artifact_obj.paths[0] if not artifact_path["path"].startswith("doc_tests_gpu_test_reports_"): continue # change "_" back to "/" (to show the job name as path) job_name = 
artifact_path["path"].replace("doc_tests_gpu_test_reports_", "").replace("_", "/") # This dict (for each job) will contain all the information relative to each doc test job, in particular: # - failed: list of failed tests # - failures: dict in the format 'test': 'error_message' job_result = {} doc_test_results[job_name] = job_result job = artifact_name_to_job_map[artifact_path["path"]] job_result["job_link"] = job["html_url"] job_result["category"] = "Python Examples" if job_name.startswith("src/") else "MD Examples" artifact = retrieve_artifact(artifact_path["path"]) if "stats" in artifact: failed, success, time_spent = handle_test_results(artifact["stats"]) job_result["n_failures"] = failed job_result["n_success"] = success job_result["time_spent"] = time_spent[1:-1] + ", " job_result["failed"] = [] job_result["failures"] = {} all_failures = extract_first_line_failure(artifact["failures_short"]) for line in artifact["summary_short"].split("\n"): if re.search("FAILED", line): line = line.replace("FAILED ", "") line = line.split()[0].replace("\n", "") if "::" in line: file_path, test = line.split("::") else: file_path, test = line, line job_result["failed"].append(test) failure = all_failures[test] if test in all_failures else "N/A" job_result["failures"][test] = failure # Save and to be uploaded as artifact os.makedirs("doc_test_results", exist_ok=True) with open("doc_test_results/doc_test_results.json", "w", encoding="UTF-8") as fp: json.dump(doc_test_results, fp, ensure_ascii=False, indent=4) message = Message("๐Ÿค— Results of the doc tests.", doc_test_results) message.post() message.post_reply()
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/get_previous_daily_ci.py
import os import zipfile import requests from get_ci_error_statistics import download_artifact, get_artifacts_links def get_daily_ci_runs(token, num_runs=7): """Get the workflow runs of the scheduled (daily) CI. This only selects the runs triggered by the `schedule` event on the `main` branch. """ headers = None if token is not None: headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"} # The id of a workflow (not of a workflow run) workflow_id = "636036" url = f"https://api.github.com/repos/huggingface/transformers/actions/workflows/{workflow_id}/runs" # On `main` branch + event being `schedule` + not returning PRs + only `num_runs` results url += f"?branch=main&event=schedule&exclude_pull_requests=true&per_page={num_runs}" result = requests.get(url, headers=headers).json() return result["workflow_runs"] def get_last_daily_ci_runs(token): """Get the last completed workflow run id of the scheduled (daily) CI.""" workflow_runs = get_daily_ci_runs(token) workflow_run_id = None for workflow_run in workflow_runs: if workflow_run["status"] == "completed": workflow_run_id = workflow_run["id"] break return workflow_run_id def get_last_daily_ci_artifacts(artifact_names, output_dir, token): """Get the artifacts of last completed workflow run id of the scheduled (daily) CI.""" workflow_run_id = get_last_daily_ci_runs(token) if workflow_run_id is not None: artifacts_links = get_artifacts_links(worflow_run_id=workflow_run_id, token=token) for artifact_name in artifact_names: if artifact_name in artifacts_links: artifact_url = artifacts_links[artifact_name] download_artifact( artifact_name=artifact_name, artifact_url=artifact_url, output_dir=output_dir, token=token ) def get_last_daily_ci_reports(artifact_names, output_dir, token): """Get the artifacts' content of the last completed workflow run id of the scheduled (daily) CI.""" get_last_daily_ci_artifacts(artifact_names, output_dir, token) results = {} for artifact_name in artifact_names: artifact_zip_path = os.path.join(output_dir, f"{artifact_name}.zip") if os.path.isfile(artifact_zip_path): results[artifact_name] = {} with zipfile.ZipFile(artifact_zip_path) as z: for filename in z.namelist(): if not os.path.isdir(filename): # read the file with z.open(filename) as f: results[artifact_name][filename] = f.read().decode("UTF-8") return results
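# --- Illustrative sketch (hedged; not part of the file above, file name/content are hypothetical) ---
# A self-contained mock of the zip-reading loop in `get_last_daily_ci_reports`, using an
# in-memory archive instead of a downloaded artifact.
import io
import os
import zipfile

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as z:
    z.writestr("model_results.json", '{"models_bert": {"success": 3}}')

results = {}
with zipfile.ZipFile(buffer) as z:
    for filename in z.namelist():
        if not os.path.isdir(filename):
            with z.open(filename) as f:
                results[filename] = f.read().decode("UTF-8")
print(results)  # {'model_results.json': '{"models_bert": {"success": 3}}'}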
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/check_support_list.py
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utility that checks the supports of 3rd party libraries are listed in the documentation file. Currently, this includes: - flash attention support - SDPA support Use from the root of the repo with (as used in `make repo-consistency`): ```bash python utils/check_support_list.py ``` It has no auto-fix mode. """ import os from glob import glob # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_doctest_list.py REPO_PATH = "." def check_flash_support_list(): with open(os.path.join(REPO_PATH, "docs/source/en/perf_infer_gpu_one.md"), "r") as f: doctext = f.read() doctext = doctext.split("FlashAttention-2 is currently supported for the following architectures:")[1] doctext = doctext.split("You can request to add FlashAttention-2 support")[0] patterns = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_*.py")) patterns_tf = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_tf_*.py")) patterns_flax = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_flax_*.py")) patterns = list(set(patterns) - set(patterns_tf) - set(patterns_flax)) archs_supporting_fa2 = [] for filename in patterns: with open(filename, "r") as f: text = f.read() if "_supports_flash_attn_2 = True" in text: model_name = os.path.basename(filename).replace(".py", "").replace("modeling_", "") archs_supporting_fa2.append(model_name) for arch in archs_supporting_fa2: if arch not in doctext: raise ValueError( f"{arch} should be in listed in the flash attention documentation but is not. Please update the documentation." ) def check_sdpa_support_list(): with open(os.path.join(REPO_PATH, "docs/source/en/perf_infer_gpu_one.md"), "r") as f: doctext = f.read() doctext = doctext.split( "For now, Transformers supports SDPA inference and training for the following architectures:" )[1] doctext = doctext.split("Note that FlashAttention can only be used for models using the")[0] patterns = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_*.py")) patterns_tf = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_tf_*.py")) patterns_flax = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_flax_*.py")) patterns = list(set(patterns) - set(patterns_tf) - set(patterns_flax)) archs_supporting_sdpa = [] for filename in patterns: with open(filename, "r") as f: text = f.read() if "_supports_sdpa = True" in text: model_name = os.path.basename(filename).replace(".py", "").replace("modeling_", "") archs_supporting_sdpa.append(model_name) for arch in archs_supporting_sdpa: if arch not in doctext: raise ValueError( f"{arch} should be in listed in the SDPA documentation but is not. Please update the documentation." ) if __name__ == "__main__": check_flash_support_list() check_sdpa_support_list()
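# --- Illustrative sketch (hedged; not part of the file above) ---
# The slicing + flag-scanning pattern used by both checks above, demonstrated on hypothetical
# stand-ins for docs/source/en/perf_infer_gpu_one.md and a modeling_*.py file.
doctext = (
    "FlashAttention-2 is currently supported for the following architectures: llama, mistral. "
    "You can request to add FlashAttention-2 support for another model."
)
section = doctext.split("FlashAttention-2 is currently supported for the following architectures:")[1]
section = section.split("You can request to add FlashAttention-2 support")[0]

modeling_source = "class MyModel(PreTrainedModel):\n    _supports_flash_attn_2 = True\n"  # hypothetical
arch = "my_model"  # in the real check this is derived from the modeling_*.py file name
if "_supports_flash_attn_2 = True" in modeling_source and arch not in section:
    print(f"{arch} should be listed in the flash attention documentation")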
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/pr_slow_ci_models.py
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This script is used to get the models for which to run slow CI. A new model added in a pull request will be included, as well as models specified in a commit message with a prefix `[run-slow]`, `[run_slow]` or `[run slow]`. For example, the commit message `[run_slow]bert, gpt2` will give `bert` and `gpt2`. Usage: ```bash python utils/pr_slow_ci_models.py.py ``` """ import argparse import re from pathlib import Path from typing import List from git import Repo PATH_TO_REPO = Path(__file__).parent.parent.resolve() def get_new_python_files_between_commits(base_commit: str, commits: List[str]) -> List[str]: """ Get the list of added python files between a base commit and one or several commits. Args: repo (`git.Repo`): A git repository (for instance the Transformers repo). base_commit (`str`): The commit reference of where to compare for the diff. This is the current commit, not the branching point! commits (`List[str]`): The list of commits with which to compare the repo at `base_commit` (so the branching point). Returns: `List[str]`: The list of python files added between a base commit and one or several commits. """ code_diff = [] for commit in commits: for diff_obj in commit.diff(base_commit): # We always add new python files if diff_obj.change_type == "A" and diff_obj.b_path.endswith(".py"): code_diff.append(diff_obj.b_path) return code_diff def get_new_python_files() -> List[str]: """ Return a list of python files that have been added between the current head and the main branch. Returns: `List[str]`: The list of python files added. """ repo = Repo(PATH_TO_REPO) try: # For the cases where the main branch exists locally main = repo.refs.main except AttributeError: # On GitHub Actions runners, it doesn't have local main branch main = repo.remotes.origin.refs.main print(f"main is at {main.commit}") print(f"Current head is at {repo.head.commit}") branching_commits = repo.merge_base(main, repo.head) for commit in branching_commits: print(f"Branching commit: {commit}") return get_new_python_files_between_commits(repo.head.commit, branching_commits) def get_new_model(): new_files = get_new_python_files() reg = re.compile(r"src/transformers/(models/.*)/modeling_.*\.py") new_model = "" for x in new_files: find_new_model = reg.findall(x) if len(find_new_model) > 0: new_model = find_new_model[0] # It's unlikely we have 2 new modeling files in a pull request. break return new_model def parse_commit_message(commit_message: str) -> str: """ Parses the commit message to find the models specified in it to run slow CI. Args: commit_message (`str`): The commit message of the current commit. Returns: `str`: The substring in `commit_message` after `[run-slow]`, [run_slow]` or [run slow]`. If no such prefix is found, the empty string is returned. 
""" if commit_message is None: return "" command_search = re.search(r"\[([^\]]*)\](.*)", commit_message) if command_search is None: return "" command = command_search.groups()[0] command = command.lower().replace("-", " ").replace("_", " ") run_slow = command == "run slow" if run_slow: models = command_search.groups()[1].strip() return models else: return "" def get_models(commit_message: str): models = parse_commit_message(commit_message) return [f"models/{x}" for x in models.replace(",", " ").split()] if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--commit_message", type=str, default="", help="The commit message.") args = parser.parse_args() new_model = get_new_model() specified_models = get_models(args.commit_message) models = ([] if new_model == "" else [new_model]) + specified_models print(sorted(set(models)))
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/check_self_hosted_runner.py
import argparse import json import subprocess def get_runner_status(target_runners, token): offline_runners = [] cmd = ( f'curl -H "Accept: application/vnd.github+json" -H "Authorization: Bearer {token}"' " https://api.github.com/repos/huggingface/transformers/actions/runners" ) output = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE) o = output.stdout.decode("utf-8") status = json.loads(o) runners = status["runners"] for runner in runners: if runner["name"] in target_runners: if runner["status"] == "offline": offline_runners.append(runner) # save the result so we can report them on Slack with open("offline_runners.txt", "w") as fp: fp.write(json.dumps(offline_runners)) if len(offline_runners) > 0: failed = "\n".join([x["name"] for x in offline_runners]) raise ValueError(f"The following runners are offline:\n{failed}") if __name__ == "__main__": def list_str(values): return values.split(",") parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--target_runners", default=None, type=list_str, required=True, help="Comma-separated list of runners to check status.", ) parser.add_argument( "--token", default=None, type=str, required=True, help="A token that has actions:read permission." ) args = parser.parse_args() get_runner_status(args.target_runners, args.token)
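# --- Illustrative sketch (hedged; not part of the file above, runner names/token are hypothetical) ---
# A possible invocation from a workflow step:
#   python utils/check_self_hosted_runner.py --target_runners gpu-runner-1,gpu-runner-2 --token "$GITHUB_PAT"
# The offline-detection logic above, demonstrated on a mocked API response:
import json

status = json.loads(
    '{"runners": [{"name": "gpu-runner-1", "status": "online"}, {"name": "gpu-runner-2", "status": "offline"}]}'
)
target_runners = ["gpu-runner-1", "gpu-runner-2"]
offline_runners = [r for r in status["runners"] if r["name"] in target_runners and r["status"] == "offline"]
print([r["name"] for r in offline_runners])  # ['gpu-runner-2']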
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/important_models.txt
models/llama models/mistral models/mixtral models/gemma
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/print_env.py
#!/usr/bin/env python3 # coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # this script dumps information about the environment import os import sys import transformers os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" print("Python version:", sys.version) print("transformers version:", transformers.__version__) try: import torch print("Torch version:", torch.__version__) print("Cuda available:", torch.cuda.is_available()) print("Cuda version:", torch.version.cuda) print("CuDNN version:", torch.backends.cudnn.version()) print("Number of GPUs available:", torch.cuda.device_count()) print("NCCL version:", torch.cuda.nccl.version()) except ImportError: print("Torch version:", None) try: import deepspeed print("DeepSpeed version:", deepspeed.__version__) except ImportError: print("DeepSpeed version:", None) try: import tensorflow as tf print("TensorFlow version:", tf.__version__) print("TF GPUs available:", bool(tf.config.list_physical_devices("GPU"))) print("Number of TF GPUs available:", len(tf.config.list_physical_devices("GPU"))) except ImportError: print("TensorFlow version:", None)
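# --- Illustrative sketch (hedged; not part of the file above) ---
# The CI simply runs `python utils/print_env.py` to record the environment. A minimal,
# self-contained version of the optional-import pattern used throughout the script:
try:
    import torch

    print("Torch version:", torch.__version__)
except ImportError:
    print("Torch version:", None)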
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/check_doc_toc.py
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This script is responsible for cleaning the model section of the table of content by removing duplicates and sorting the entries in alphabetical order. Usage (from the root of the repo): Check that the table of content is properly sorted (used in `make quality`): ```bash python utils/check_doc_toc.py ``` Auto-sort the table of content if it is not properly sorted (used in `make style`): ```bash python utils/check_doc_toc.py --fix_and_overwrite ``` """ import argparse from collections import defaultdict from typing import List import yaml PATH_TO_TOC = "docs/source/en/_toctree.yml" def clean_model_doc_toc(model_doc: List[dict]) -> List[dict]: """ Cleans a section of the table of content of the model documentation (one specific modality) by removing duplicates and sorting models alphabetically. Args: model_doc (`List[dict]`): The list of dictionaries extracted from the `_toctree.yml` file for this specific modality. Returns: `List[dict]`: List of dictionaries like the input, but cleaned up and sorted. """ counts = defaultdict(int) for doc in model_doc: counts[doc["local"]] += 1 duplicates = [key for key, value in counts.items() if value > 1] new_doc = [] for duplicate_key in duplicates: titles = list({doc["title"] for doc in model_doc if doc["local"] == duplicate_key}) if len(titles) > 1: raise ValueError( f"{duplicate_key} is present several times in the documentation table of content at " "`docs/source/en/_toctree.yml` with different *Title* values. Choose one of those and remove the " "others." ) # Only add this once new_doc.append({"local": duplicate_key, "title": titles[0]}) # Add none duplicate-keys new_doc.extend([doc for doc in model_doc if counts[doc["local"]] == 1]) # Sort return sorted(new_doc, key=lambda s: s["title"].lower()) def check_model_doc(overwrite: bool = False): """ Check that the content of the table of content in `_toctree.yml` is clean (no duplicates and sorted for the model API doc) and potentially auto-cleans it. Args: overwrite (`bool`, *optional*, defaults to `False`): Whether to just check if the TOC is clean or to auto-clean it (when `overwrite=True`). """ with open(PATH_TO_TOC, encoding="utf-8") as f: content = yaml.safe_load(f.read()) # Get to the API doc api_idx = 0 while content[api_idx]["title"] != "API": api_idx += 1 api_doc = content[api_idx]["sections"] # Then to the model doc model_idx = 0 while api_doc[model_idx]["title"] != "Models": model_idx += 1 model_doc = api_doc[model_idx]["sections"] # Extract the modalities and clean them one by one. 
modalities_docs = [(idx, section) for idx, section in enumerate(model_doc) if "sections" in section] diff = False for idx, modality_doc in modalities_docs: old_modality_doc = modality_doc["sections"] new_modality_doc = clean_model_doc_toc(old_modality_doc) if old_modality_doc != new_modality_doc: diff = True if overwrite: model_doc[idx]["sections"] = new_modality_doc if diff: if overwrite: api_doc[model_idx]["sections"] = model_doc content[api_idx]["sections"] = api_doc with open(PATH_TO_TOC, "w", encoding="utf-8") as f: f.write(yaml.dump(content, allow_unicode=True)) else: raise ValueError( "The model doc part of the table of content is not properly sorted, run `make style` to fix this." ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") args = parser.parse_args() check_model_doc(args.fix_and_overwrite)
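# --- Illustrative sketch (hedged; not part of the file above, the entries are hypothetical) ---
# The effect of `clean_model_doc_toc` on a tiny modality section: duplicate entries that share
# the same title are collapsed, then everything is sorted alphabetically by title.
model_doc = [
    {"local": "model_doc/bert", "title": "BERT"},
    {"local": "model_doc/albert", "title": "ALBERT"},
    {"local": "model_doc/bert", "title": "BERT"},  # duplicate entry
]
deduplicated = {}
for doc in model_doc:
    deduplicated.setdefault(doc["local"], doc)
print(sorted(deduplicated.values(), key=lambda s: s["title"].lower()))
# [{'local': 'model_doc/albert', 'title': 'ALBERT'}, {'local': 'model_doc/bert', 'title': 'BERT'}]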
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/get_ci_error_statistics.py
import argparse import json import math import os import time import traceback import zipfile from collections import Counter import requests def get_jobs(workflow_run_id, token=None): """Extract jobs in a GitHub Actions workflow run""" headers = None if token is not None: headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"} url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/jobs?per_page=100" result = requests.get(url, headers=headers).json() jobs = [] try: jobs.extend(result["jobs"]) pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100) for i in range(pages_to_iterate_over): result = requests.get(url + f"&page={i + 2}", headers=headers).json() jobs.extend(result["jobs"]) return jobs except Exception: print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}") return [] def get_job_links(workflow_run_id, token=None): """Extract job names and their job links in a GitHub Actions workflow run""" headers = None if token is not None: headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"} url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/jobs?per_page=100" result = requests.get(url, headers=headers).json() job_links = {} try: job_links.update({job["name"]: job["html_url"] for job in result["jobs"]}) pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100) for i in range(pages_to_iterate_over): result = requests.get(url + f"&page={i + 2}", headers=headers).json() job_links.update({job["name"]: job["html_url"] for job in result["jobs"]}) return job_links except Exception: print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}") return {} def get_artifacts_links(worflow_run_id, token=None): """Get all artifact links from a workflow run""" headers = None if token is not None: headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"} url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{worflow_run_id}/artifacts?per_page=100" result = requests.get(url, headers=headers).json() artifacts = {} try: artifacts.update({artifact["name"]: artifact["archive_download_url"] for artifact in result["artifacts"]}) pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100) for i in range(pages_to_iterate_over): result = requests.get(url + f"&page={i + 2}", headers=headers).json() artifacts.update({artifact["name"]: artifact["archive_download_url"] for artifact in result["artifacts"]}) return artifacts except Exception: print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}") return {} def download_artifact(artifact_name, artifact_url, output_dir, token): """Download a GitHub Action artifact from a URL. The URL is of the form `https://api.github.com/repos/huggingface/transformers/actions/artifacts/{ARTIFACT_ID}/zip`, but it can't be used to download directly. We need to get a redirect URL first. 
See https://docs.github.com/en/rest/actions/artifacts#download-an-artifact """ headers = None if token is not None: headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"} result = requests.get(artifact_url, headers=headers, allow_redirects=False) download_url = result.headers["Location"] response = requests.get(download_url, allow_redirects=True) file_path = os.path.join(output_dir, f"{artifact_name}.zip") with open(file_path, "wb") as fp: fp.write(response.content) def get_errors_from_single_artifact(artifact_zip_path, job_links=None): """Extract errors from a downloaded artifact (in .zip format)""" errors = [] failed_tests = [] job_name = None with zipfile.ZipFile(artifact_zip_path) as z: for filename in z.namelist(): if not os.path.isdir(filename): # read the file if filename in ["failures_line.txt", "summary_short.txt", "job_name.txt"]: with z.open(filename) as f: for line in f: line = line.decode("UTF-8").strip() if filename == "failures_line.txt": try: # `error_line` is the place where `error` occurs error_line = line[: line.index(": ")] error = line[line.index(": ") + len(": ") :] errors.append([error_line, error]) except Exception: # skip un-related lines pass elif filename == "summary_short.txt" and line.startswith("FAILED "): # `test` is the test method that failed test = line[len("FAILED ") :] failed_tests.append(test) elif filename == "job_name.txt": job_name = line if len(errors) != len(failed_tests): raise ValueError( f"`errors` and `failed_tests` should have the same number of elements. Got {len(errors)} for `errors` " f"and {len(failed_tests)} for `failed_tests` instead. The test reports in {artifact_zip_path} have some" " problem." ) job_link = None if job_name and job_links: job_link = job_links.get(job_name, None) # A list with elements of the form (line of error, error, failed test) result = [x + [y] + [job_link] for x, y in zip(errors, failed_tests)] return result def get_all_errors(artifact_dir, job_links=None): """Extract errors from all artifact files""" errors = [] paths = [os.path.join(artifact_dir, p) for p in os.listdir(artifact_dir) if p.endswith(".zip")] for p in paths: errors.extend(get_errors_from_single_artifact(p, job_links=job_links)) return errors def reduce_by_error(logs, error_filter=None): """count each error""" counter = Counter() counter.update([x[1] for x in logs]) counts = counter.most_common() r = {} for error, count in counts: if error_filter is None or error not in error_filter: r[error] = {"count": count, "failed_tests": [(x[2], x[0]) for x in logs if x[1] == error]} r = dict(sorted(r.items(), key=lambda item: item[1]["count"], reverse=True)) return r def get_model(test): """Get the model name from a test method""" test = test.split("::")[0] if test.startswith("tests/models/"): test = test.split("/")[2] else: test = None return test def reduce_by_model(logs, error_filter=None): """count each error per model""" logs = [(x[0], x[1], get_model(x[2])) for x in logs] logs = [x for x in logs if x[2] is not None] tests = {x[2] for x in logs} r = {} for test in tests: counter = Counter() # count by errors in `test` counter.update([x[1] for x in logs if x[2] == test]) counts = counter.most_common() error_counts = {error: count for error, count in counts if (error_filter is None or error not in error_filter)} n_errors = sum(error_counts.values()) if n_errors > 0: r[test] = {"count": n_errors, "errors": error_counts} r = dict(sorted(r.items(), key=lambda item: item[1]["count"], reverse=True)) return r def 
make_github_table(reduced_by_error): header = "| no. | error | status |" sep = "|-:|:-|:-|" lines = [header, sep] for error in reduced_by_error: count = reduced_by_error[error]["count"] line = f"| {count} | {error[:100]} | |" lines.append(line) return "\n".join(lines) def make_github_table_per_model(reduced_by_model): header = "| model | no. of errors | major error | count |" sep = "|-:|-:|-:|-:|" lines = [header, sep] for model in reduced_by_model: count = reduced_by_model[model]["count"] error, _count = list(reduced_by_model[model]["errors"].items())[0] line = f"| {model} | {count} | {error[:60]} | {_count} |" lines.append(line) return "\n".join(lines) if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument("--workflow_run_id", type=str, required=True, help="A GitHub Actions workflow run id.") parser.add_argument( "--output_dir", type=str, required=True, help="Where to store the downloaded artifacts and other result files.", ) parser.add_argument("--token", default=None, type=str, help="A token that has actions:read permission.") args = parser.parse_args() os.makedirs(args.output_dir, exist_ok=True) _job_links = get_job_links(args.workflow_run_id, token=args.token) job_links = {} # To deal with `workflow_call` event, where a job name is the combination of the job names in the caller and callee. # For example, `PyTorch 1.11 / Model tests (models/albert, single-gpu)`. if _job_links: for k, v in _job_links.items(): # This is how GitHub actions combine job names. if " / " in k: index = k.find(" / ") k = k[index + len(" / ") :] job_links[k] = v with open(os.path.join(args.output_dir, "job_links.json"), "w", encoding="UTF-8") as fp: json.dump(job_links, fp, ensure_ascii=False, indent=4) artifacts = get_artifacts_links(args.workflow_run_id, token=args.token) with open(os.path.join(args.output_dir, "artifacts.json"), "w", encoding="UTF-8") as fp: json.dump(artifacts, fp, ensure_ascii=False, indent=4) for idx, (name, url) in enumerate(artifacts.items()): download_artifact(name, url, args.output_dir, args.token) # Be gentle to GitHub time.sleep(1) errors = get_all_errors(args.output_dir, job_links=job_links) # `e[1]` is the error counter = Counter() counter.update([e[1] for e in errors]) # print the top 30 most common test errors most_common = counter.most_common(30) for item in most_common: print(item) with open(os.path.join(args.output_dir, "errors.json"), "w", encoding="UTF-8") as fp: json.dump(errors, fp, ensure_ascii=False, indent=4) reduced_by_error = reduce_by_error(errors) reduced_by_model = reduce_by_model(errors) s1 = make_github_table(reduced_by_error) s2 = make_github_table_per_model(reduced_by_model) with open(os.path.join(args.output_dir, "reduced_by_error.txt"), "w", encoding="UTF-8") as fp: fp.write(s1) with open(os.path.join(args.output_dir, "reduced_by_model.txt"), "w", encoding="UTF-8") as fp: fp.write(s2)
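# --- Illustrative sketch (hedged; not part of the file above, the test id is hypothetical) ---
# How `get_model` above maps a failed test id to the model folder used for per-model error counts.
test = "tests/models/bert/test_modeling_bert.py::BertModelTest::test_forward"
test_file = test.split("::")[0]
model = test_file.split("/")[2] if test_file.startswith("tests/models/") else None
print(model)  # bert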
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/create_dummy_models.py
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import collections.abc import copy import inspect import json import multiprocessing import os import shutil import tempfile import traceback from pathlib import Path from check_config_docstrings import get_checkpoint_from_config_class from datasets import load_dataset from get_test_info import get_model_to_tester_mapping, get_tester_classes_for_model from huggingface_hub import Repository, create_repo, hf_api, upload_folder from transformers import ( CONFIG_MAPPING, FEATURE_EXTRACTOR_MAPPING, IMAGE_PROCESSOR_MAPPING, PROCESSOR_MAPPING, TOKENIZER_MAPPING, AutoTokenizer, LayoutLMv3TokenizerFast, PreTrainedTokenizer, PreTrainedTokenizerFast, logging, ) from transformers.feature_extraction_utils import FeatureExtractionMixin from transformers.file_utils import is_tf_available, is_torch_available from transformers.image_processing_utils import BaseImageProcessor from transformers.models.auto.configuration_auto import AutoConfig, model_type_to_module_name from transformers.models.fsmt import configuration_fsmt from transformers.processing_utils import ProcessorMixin, transformers_module from transformers.tokenization_utils_base import PreTrainedTokenizerBase # make sure tokenizer plays nice with multiprocessing os.environ["TOKENIZERS_PARALLELISM"] = "false" logging.set_verbosity_error() logging.disable_progress_bar() logger = logging.get_logger(__name__) os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" if not is_torch_available(): raise ValueError("Please install PyTorch.") if not is_tf_available(): raise ValueError("Please install TensorFlow.") FRAMEWORKS = ["pytorch", "tensorflow"] INVALID_ARCH = [] TARGET_VOCAB_SIZE = 1024 data = {"training_ds": None, "testing_ds": None} COMPOSITE_MODELS = { "EncoderDecoderModel": "EncoderDecoderModel-bert-bert", "SpeechEncoderDecoderModel": "SpeechEncoderDecoderModel-wav2vec2-bert", "VisionEncoderDecoderModel": "VisionEncoderDecoderModel-vit-gpt2", "VisionTextDualEncoderModel": "VisionTextDualEncoderModel-vit-bert", } # This list contains the model architectures for which a tiny version could not be created. # Avoid to add new architectures here - unless we have verified carefully that it's (almost) impossible to create them. # One such case is: no model tester class is implemented for a model type (like `MT5`) because its architecture is # identical to another one (`MT5` is based on `T5`), but trained on different datasets or with different techniques. 
UNCONVERTIBLE_MODEL_ARCHITECTURES = { "BertGenerationEncoder", "BertGenerationDecoder", "CamembertForSequenceClassification", "CamembertForMultipleChoice", "CamembertForMaskedLM", "CamembertForCausalLM", "CamembertForTokenClassification", "CamembertForQuestionAnswering", "CamembertModel", "TFCamembertForMultipleChoice", "TFCamembertForTokenClassification", "TFCamembertForQuestionAnswering", "TFCamembertForSequenceClassification", "TFCamembertForMaskedLM", "TFCamembertModel", "TFCamembertForCausalLM", "DecisionTransformerModel", "GraphormerModel", "InformerModel", "JukeboxModel", "MarianForCausalLM", "MaskFormerSwinModel", "MaskFormerSwinBackbone", "MT5Model", "MT5ForConditionalGeneration", "UMT5ForConditionalGeneration", "TFMT5ForConditionalGeneration", "TFMT5Model", "QDQBertForSequenceClassification", "QDQBertForMaskedLM", "QDQBertModel", "QDQBertForTokenClassification", "QDQBertLMHeadModel", "QDQBertForMultipleChoice", "QDQBertForQuestionAnswering", "QDQBertForNextSentencePrediction", "ReformerModelWithLMHead", "RetriBertModel", "Speech2Text2ForCausalLM", "TimeSeriesTransformerModel", "TrajectoryTransformerModel", "TrOCRForCausalLM", "XLMProphetNetForConditionalGeneration", "XLMProphetNetForCausalLM", "XLMProphetNetModel", "XLMRobertaModel", "XLMRobertaForTokenClassification", "XLMRobertaForMultipleChoice", "XLMRobertaForMaskedLM", "XLMRobertaForCausalLM", "XLMRobertaForSequenceClassification", "XLMRobertaForQuestionAnswering", "TFXLMRobertaForSequenceClassification", "TFXLMRobertaForMaskedLM", "TFXLMRobertaForCausalLM", "TFXLMRobertaForQuestionAnswering", "TFXLMRobertaModel", "TFXLMRobertaForMultipleChoice", "TFXLMRobertaForTokenClassification", } def get_processor_types_from_config_class(config_class, allowed_mappings=None): """Return a tuple of processors for `config_class`. We use `tuple` here to include (potentially) both slow & fast tokenizers. """ # To make a uniform return type def _to_tuple(x): if not isinstance(x, collections.abc.Sequence): x = (x,) else: x = tuple(x) return x if allowed_mappings is None: allowed_mappings = ["processor", "tokenizer", "image_processor", "feature_extractor"] processor_types = () # Check first if a model has `ProcessorMixin`. Otherwise, check if it has tokenizers, and/or an image processor or # a feature extractor if config_class in PROCESSOR_MAPPING and "processor" in allowed_mappings: processor_types = _to_tuple(PROCESSOR_MAPPING[config_class]) else: if config_class in TOKENIZER_MAPPING and "tokenizer" in allowed_mappings: processor_types = TOKENIZER_MAPPING[config_class] if config_class in IMAGE_PROCESSOR_MAPPING and "image_processor" in allowed_mappings: processor_types += _to_tuple(IMAGE_PROCESSOR_MAPPING[config_class]) elif config_class in FEATURE_EXTRACTOR_MAPPING and "feature_extractor" in allowed_mappings: processor_types += _to_tuple(FEATURE_EXTRACTOR_MAPPING[config_class]) # Remark: some configurations have no processor at all. For example, generic composite models like # `EncoderDecoderModel` is used for any (compatible) text models. Also, `DecisionTransformer` doesn't # require any processor. # We might get `None` for some tokenizers - remove them here. processor_types = tuple(p for p in processor_types if p is not None) return processor_types def get_architectures_from_config_class(config_class, arch_mappings, models_to_skip=None): """Return a tuple of all possible architectures attributed to a configuration class `config_class`. For example, BertConfig -> [BertModel, BertForMaskedLM, ..., BertForQuestionAnswering]. 
""" # A model architecture could appear in several mappings. For example, `BartForConditionalGeneration` is in # - MODEL_FOR_PRETRAINING_MAPPING_NAMES # - MODEL_WITH_LM_HEAD_MAPPING_NAMES # - MODEL_FOR_MASKED_LM_MAPPING_NAMES # - MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES # We avoid the duplication. architectures = set() if models_to_skip is None: models_to_skip = [] models_to_skip = UNCONVERTIBLE_MODEL_ARCHITECTURES.union(models_to_skip) for mapping in arch_mappings: if config_class in mapping: models = mapping[config_class] models = tuple(models) if isinstance(models, collections.abc.Sequence) else (models,) for model in models: if model.__name__ not in models_to_skip: architectures.add(model) architectures = tuple(architectures) return architectures def get_config_class_from_processor_class(processor_class): """Get the config class from a processor class. Some config/model classes use tokenizers/feature_extractors from other models. For example, `GPT-J` uses `GPT2Tokenizer`. If no checkpoint is found for a config class, or a checkpoint is found without necessary file(s) to create the processor for `processor_class`, we get the config class that corresponds to `processor_class` and use it to find a checkpoint in order to create the processor. """ processor_prefix = processor_class.__name__ for postfix in ["TokenizerFast", "Tokenizer", "ImageProcessor", "FeatureExtractor", "Processor"]: processor_prefix = processor_prefix.replace(postfix, "") # `Wav2Vec2CTCTokenizer` -> `Wav2Vec2Config` if processor_prefix == "Wav2Vec2CTC": processor_prefix = "Wav2Vec2" # Find the new configuration class new_config_name = f"{processor_prefix}Config" new_config_class = getattr(transformers_module, new_config_name) return new_config_class def build_processor(config_class, processor_class, allow_no_checkpoint=False): """Create a processor for `processor_class`. If a processor is not able to be built with the original arguments, this method tries to change the arguments and call itself recursively, by inferring a new `config_class` or a new `processor_class` from another one, in order to find a checkpoint containing the necessary files to build a processor. The processor is not saved here. Instead, it will be saved in `convert_processors` after further changes in `convert_processors`. For each model architecture`, a copy will be created and saved along the built model. """ # Currently, this solely uses the docstring in the source file of `config_class` to find a checkpoint. checkpoint = get_checkpoint_from_config_class(config_class) if checkpoint is None: # try to get the checkpoint from the config class for `processor_class`. # This helps cases like `XCLIPConfig` and `VideoMAEFeatureExtractor` to find a checkpoint from `VideoMAEConfig`. config_class_from_processor_class = get_config_class_from_processor_class(processor_class) checkpoint = get_checkpoint_from_config_class(config_class_from_processor_class) processor = None try: processor = processor_class.from_pretrained(checkpoint) except Exception as e: logger.error(f"{e.__class__.__name__}: {e}") # Try to get a new processor class from checkpoint. This is helpful for a checkpoint without necessary file to load # processor while `processor_class` is an Auto class. For example, `sew` has `Wav2Vec2Processor` in # `PROCESSOR_MAPPING_NAMES`, its `tokenizer_class` is `AutoTokenizer`, and the checkpoint # `https://huggingface.co/asapp/sew-tiny-100k` has no tokenizer file, but we can get # `tokenizer_class: Wav2Vec2CTCTokenizer` from the config file. 
(The new processor class won't be able to load from # `checkpoint`, but it helps this recursive method to find a way to build a processor). if ( processor is None and checkpoint is not None and issubclass(processor_class, (PreTrainedTokenizerBase, AutoTokenizer)) ): try: config = AutoConfig.from_pretrained(checkpoint) except Exception as e: logger.error(f"{e.__class__.__name__}: {e}") config = None if config is not None: if not isinstance(config, config_class): raise ValueError( f"`config` (which is of type {config.__class__.__name__}) should be an instance of `config_class`" f" ({config_class.__name__})!" ) tokenizer_class = config.tokenizer_class new_processor_class = None if tokenizer_class is not None: new_processor_class = getattr(transformers_module, tokenizer_class) if new_processor_class != processor_class: processor = build_processor(config_class, new_processor_class) # If `tokenizer_class` is not specified in `config`, let's use `config` to get the process class via auto # mappings, but only allow the tokenizer mapping being used. This is to make `Wav2Vec2Conformer` build if processor is None: new_processor_classes = get_processor_types_from_config_class( config.__class__, allowed_mappings=["tokenizer"] ) # Used to avoid infinite recursion between a pair of fast/slow tokenizer types names = [ x.__name__.replace("Fast", "") for x in [processor_class, new_processor_class] if x is not None ] new_processor_classes = [ x for x in new_processor_classes if x is not None and x.__name__.replace("Fast", "") not in names ] if len(new_processor_classes) > 0: new_processor_class = new_processor_classes[0] # Let's use fast tokenizer if there is any for x in new_processor_classes: if x.__name__.endswith("Fast"): new_processor_class = x break processor = build_processor(config_class, new_processor_class) if processor is None: # Try to build each component (tokenizer & feature extractor) of a `ProcessorMixin`. if issubclass(processor_class, ProcessorMixin): attrs = {} for attr_name in processor_class.attributes: attrs[attr_name] = [] # This could be a tuple (for tokenizers). For example, `CLIPProcessor` has # - feature_extractor_class = "CLIPFeatureExtractor" # - tokenizer_class = ("CLIPTokenizer", "CLIPTokenizerFast") attr_class_names = getattr(processor_class, f"{attr_name}_class") if not isinstance(attr_class_names, tuple): attr_class_names = (attr_class_names,) for name in attr_class_names: attr_class = getattr(transformers_module, name) attr = build_processor(config_class, attr_class) if attr is not None: attrs[attr_name].append(attr) # try to build a `ProcessorMixin`, so we can return a single value if all(len(v) > 0 for v in attrs.values()): try: processor = processor_class(**{k: v[0] for k, v in attrs.items()}) except Exception as e: logger.error(f"{e.__class__.__name__}: {e}") else: # `checkpoint` might lack some file(s) to load a processor. For example, `facebook/hubert-base-ls960` # has no tokenizer file to load `Wav2Vec2CTCTokenizer`. In this case, we try to build a processor # with the configuration class (for example, `Wav2Vec2Config`) corresponding to `processor_class`. 
config_class_from_processor_class = get_config_class_from_processor_class(processor_class) if config_class_from_processor_class != config_class: processor = build_processor(config_class_from_processor_class, processor_class) # Try to create an image processor or a feature extractor without any checkpoint if ( processor is None and allow_no_checkpoint and (issubclass(processor_class, BaseImageProcessor) or issubclass(processor_class, FeatureExtractionMixin)) ): try: processor = processor_class() except Exception as e: logger.error(f"{e.__class__.__name__}: {e}") # validation if processor is not None: if not (isinstance(processor, processor_class) or processor_class.__name__.startswith("Auto")): raise ValueError( f"`processor` (which is of type {processor.__class__.__name__}) should be an instance of" f" {processor_class.__name__} or an Auto class!" ) return processor def get_tiny_config(config_class, model_class=None, **model_tester_kwargs): """Retrieve a tiny configuration from `config_class` using each model's `ModelTester`. Args: config_class: Subclass of `PreTrainedConfig`. Returns: An instance of `config_class` with tiny hyperparameters """ model_type = config_class.model_type # For model type like `data2vec-vision` and `donut-swin`, we can't get the config/model file name directly via # `model_type` as it would be sth. like `configuration_data2vec_vision.py`. # A simple way is to use `inspect.getsourcefile(config_class)`. config_source_file = inspect.getsourcefile(config_class) # The modeling file name without prefix (`modeling_`) and postfix (`.py`) modeling_name = config_source_file.split(os.path.sep)[-1].replace("configuration_", "").replace(".py", "") try: print("Importing", model_type_to_module_name(model_type)) module_name = model_type_to_module_name(model_type) if not modeling_name.startswith(module_name): raise ValueError(f"{modeling_name} doesn't start with {module_name}!") test_file = os.path.join("tests", "models", module_name, f"test_modeling_{modeling_name}.py") models_to_model_testers = get_model_to_tester_mapping(test_file) # Find the model tester class model_tester_class = None tester_classes = [] if model_class is not None: tester_classes = get_tester_classes_for_model(test_file, model_class) else: for _tester_classes in models_to_model_testers.values(): tester_classes.extend(_tester_classes) if len(tester_classes) > 0: # sort with the length of the class names first, then the alphabetical order # This is to avoid `T5EncoderOnlyModelTest` is used instead of `T5ModelTest`, which has # `is_encoder_decoder=False` and causes some pipeline tests failing (also failures in `Optimum` CI). # TODO: More fine grained control of the desired tester class. model_tester_class = sorted(tester_classes, key=lambda x: (len(x.__name__), x.__name__))[0] except ModuleNotFoundError: error = f"Tiny config not created for {model_type} - cannot find the testing module from the model name." raise ValueError(error) if model_tester_class is None: error = f"Tiny config not created for {model_type} - no model tester is found in the testing module." raise ValueError(error) # CLIP-like models have `text_model_tester` and `vision_model_tester`, and we need to pass `vocab_size` to # `text_model_tester` via `text_kwargs`. The same trick is also necessary for `Flava`. 
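    # (For illustration, with hypothetical values: `model_tester_kwargs={"vocab_size": 1024}` is rewritten
    # below to `model_tester_kwargs={"text_kwargs": {"vocab_size": 1024}}` whenever the selected tester's
    # `__init__` accepts a `text_kwargs` argument, since only the text sub-tester owns a vocabulary.)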
if "vocab_size" in model_tester_kwargs: if "text_kwargs" in inspect.signature(model_tester_class.__init__).parameters.keys(): vocab_size = model_tester_kwargs.pop("vocab_size") model_tester_kwargs["text_kwargs"] = {"vocab_size": vocab_size} # `parent` is an instance of `unittest.TestCase`, but we don't need it here. model_tester = model_tester_class(parent=None, **model_tester_kwargs) if hasattr(model_tester, "get_pipeline_config"): config = model_tester.get_pipeline_config() elif hasattr(model_tester, "prepare_config_and_inputs"): # `PoolFormer` has no `get_config` defined. Furthermore, it's better to use `prepare_config_and_inputs` even if # `get_config` is defined, since there might be some extra changes in `prepare_config_and_inputs`. config = model_tester.prepare_config_and_inputs()[0] elif hasattr(model_tester, "get_config"): config = model_tester.get_config() else: error = ( f"Tiny config not created for {model_type} - the model tester {model_tester_class.__name__} lacks" " necessary method to create config." ) raise ValueError(error) # make sure this is long enough (some model tester has `20` for this attr.) to pass `text-generation` # pipeline tests. max_positions = [] for key in ["max_position_embeddings", "max_source_positions", "max_target_positions"]: if getattr(config, key, 0) > 0: max_positions.append(getattr(config, key)) if getattr(config, "text_config", None) is not None: if getattr(config.text_config, key, None) is not None: max_positions.append(getattr(config.text_config, key)) if len(max_positions) > 0: max_position = max(200, min(max_positions)) for key in ["max_position_embeddings", "max_source_positions", "max_target_positions"]: if getattr(config, key, 0) > 0: setattr(config, key, max_position) if getattr(config, "text_config", None) is not None: if getattr(config.text_config, key, None) is not None: setattr(config.text_config, key, max_position) return config def convert_tokenizer(tokenizer_fast: PreTrainedTokenizerFast): new_tokenizer = tokenizer_fast.train_new_from_iterator( data["training_ds"]["text"], TARGET_VOCAB_SIZE, show_progress=False ) # Make sure it at least runs if not isinstance(new_tokenizer, LayoutLMv3TokenizerFast): new_tokenizer(data["testing_ds"]["text"]) return new_tokenizer def convert_feature_extractor(feature_extractor, tiny_config): to_convert = False kwargs = {} if hasattr(tiny_config, "image_size"): kwargs["size"] = tiny_config.image_size kwargs["crop_size"] = tiny_config.image_size to_convert = True elif ( hasattr(tiny_config, "vision_config") and tiny_config.vision_config is not None and hasattr(tiny_config.vision_config, "image_size") ): kwargs["size"] = tiny_config.vision_config.image_size kwargs["crop_size"] = tiny_config.vision_config.image_size to_convert = True # Speech2TextModel specific. if hasattr(tiny_config, "input_feat_per_channel"): kwargs["feature_size"] = tiny_config.input_feat_per_channel kwargs["num_mel_bins"] = tiny_config.input_feat_per_channel to_convert = True if to_convert: feature_extractor = feature_extractor.__class__(**kwargs) return feature_extractor def convert_processors(processors, tiny_config, output_folder, result): """Change a processor to work with smaller inputs. For tokenizers, we try to reduce their vocabulary size. For feature extractor, we use smaller image size or change other attributes using the values from `tiny_config`. See `convert_feature_extractor`. This method should not fail: we catch the errors and put them in `result["warnings"]` with descriptive messages. 
""" def _sanity_check(fast_tokenizer, slow_tokenizer, keep_fast_tokenizer=False): """Set tokenizer(s) to `None` if the fast/slow tokenizers have different values for `vocab_size` or `length`. If `keep_fast_tokenizer=True`, the fast tokenizer will be kept. """ # sanity check 1: fast and slow tokenizers should be compatible (vocab_size) if fast_tokenizer is not None and slow_tokenizer is not None: if fast_tokenizer.vocab_size != slow_tokenizer.vocab_size: warning_messagae = ( "The fast/slow tokenizers " f"({fast_tokenizer.__class__.__name__}/{slow_tokenizer.__class__.__name__}) have different " "vocabulary size: " f"fast_tokenizer.vocab_size = {fast_tokenizer.vocab_size} and " f"slow_tokenizer.vocab_size = {slow_tokenizer.vocab_size}." ) result["warnings"].append(warning_messagae) if not keep_fast_tokenizer: fast_tokenizer = None slow_tokenizer = None # sanity check 2: fast and slow tokenizers should be compatible (length) if fast_tokenizer is not None and slow_tokenizer is not None: if len(fast_tokenizer) != len(slow_tokenizer): warning_messagae = ( f"The fast/slow tokenizers () have different length: " f"len(fast_tokenizer) = {len(fast_tokenizer)} and " f"len(slow_tokenizer) = {len(slow_tokenizer)}." ) result["warnings"].append(warning_messagae) if not keep_fast_tokenizer: fast_tokenizer = None slow_tokenizer = None return fast_tokenizer, slow_tokenizer tokenizers = [] feature_extractors = [] for processor in processors: if isinstance(processor, PreTrainedTokenizerBase): if processor.__class__.__name__ not in {x.__class__.__name__ for x in tokenizers}: tokenizers.append(processor) elif isinstance(processor, BaseImageProcessor): if processor.__class__.__name__ not in {x.__class__.__name__ for x in feature_extractors}: feature_extractors.append(processor) elif isinstance(processor, FeatureExtractionMixin): if processor.__class__.__name__ not in {x.__class__.__name__ for x in feature_extractors}: feature_extractors.append(processor) elif isinstance(processor, ProcessorMixin): if hasattr(processor, "tokenizer"): if processor.tokenizer.__class__.__name__ not in {x.__class__.__name__ for x in tokenizers}: tokenizers.append(processor.tokenizer) # Currently, we only have these 2 possibilities if hasattr(processor, "image_processor"): if processor.image_processor.__class__.__name__ not in { x.__class__.__name__ for x in feature_extractors }: feature_extractors.append(processor.image_processor) elif hasattr(processor, "feature_extractor"): if processor.feature_extractor.__class__.__name__ not in { x.__class__.__name__ for x in feature_extractors }: feature_extractors.append(processor.feature_extractor) # check the built processors have the unique type num_types = len({x.__class__.__name__ for x in feature_extractors}) if num_types >= 2: raise ValueError(f"`feature_extractors` should contain at most 1 type, but it contains {num_types} types!") num_types = len({x.__class__.__name__.replace("Fast", "") for x in tokenizers}) if num_types >= 2: raise ValueError(f"`tokenizers` should contain at most 1 tokenizer type, but it contains {num_types} types!") fast_tokenizer = None slow_tokenizer = None for tokenizer in tokenizers: if isinstance(tokenizer, PreTrainedTokenizerFast): fast_tokenizer = tokenizer else: slow_tokenizer = tokenizer # If the (original) fast/slow tokenizers don't correspond, keep only the fast tokenizer. # This doesn't necessarily imply the fast/slow tokenizers in a single Hub repo. has issues. 
# It's more of an issue in `build_processor` which tries to get a checkpoint with as much effort as possible. # For `YosoModel` (which uses `AlbertTokenizer(Fast)`), its real (Hub) checkpoint doesn't contain valid files to # load the slower tokenizer (`AlbertTokenizer`), and it ends up finding the (canonical) checkpoint of `AlbertModel`, # which has different vocabulary. # TODO: Try to improve `build_processor`'s definition and/or usage to avoid the above situation in the first place. fast_tokenizer, slow_tokenizer = _sanity_check(fast_tokenizer, slow_tokenizer, keep_fast_tokenizer=True) original_fast_tokenizer, original_slow_tokenizer = fast_tokenizer, slow_tokenizer if fast_tokenizer: try: # Wav2Vec2ForCTC , ByT5Tokenizer etc. all are already small enough and have no fast version that can # be retrained if fast_tokenizer.vocab_size > TARGET_VOCAB_SIZE: fast_tokenizer = convert_tokenizer(fast_tokenizer) except Exception: result["warnings"].append( ( f"Failed to convert the fast tokenizer for {fast_tokenizer.__class__.__name__}.", traceback.format_exc(), ) ) # If `fast_tokenizer` exists, `slow_tokenizer` should correspond to it. if fast_tokenizer: # Make sure the fast tokenizer can be saved try: # We don't save it to `output_folder` at this moment - only at the end of this function. with tempfile.TemporaryDirectory() as tmpdir: fast_tokenizer.save_pretrained(tmpdir) try: slow_tokenizer = AutoTokenizer.from_pretrained(tmpdir, use_fast=False) except Exception: result["warnings"].append( ( f"Failed to load the slow tokenizer saved from {fast_tokenizer.__class__.__name__}.", traceback.format_exc(), ) ) # Let's just keep the fast version slow_tokenizer = None except Exception: result["warnings"].append( ( f"Failed to save the fast tokenizer for {fast_tokenizer.__class__.__name__}.", traceback.format_exc(), ) ) fast_tokenizer = None # If the (possibly converted) fast/slow tokenizers don't correspond, set them to `None`, and use the original # tokenizers. fast_tokenizer, slow_tokenizer = _sanity_check(fast_tokenizer, slow_tokenizer, keep_fast_tokenizer=False) # If there is any conversion failed, we keep the original tokenizers. if (original_fast_tokenizer is not None and fast_tokenizer is None) or ( original_slow_tokenizer is not None and slow_tokenizer is None ): warning_messagae = ( "There are some issues when converting the fast/slow tokenizers. The original tokenizers from the Hub " " will be used instead." ) result["warnings"].append(warning_messagae) # Let's use the original version at the end (`original_fast_tokenizer` and `original_slow_tokenizer`) fast_tokenizer = original_fast_tokenizer slow_tokenizer = original_slow_tokenizer # Make sure the fast tokenizer can be saved if fast_tokenizer: # We don't save it to `output_folder` at this moment - only at the end of this function. with tempfile.TemporaryDirectory() as tmpdir: try: fast_tokenizer.save_pretrained(tmpdir) except Exception: result["warnings"].append( ( f"Failed to save the fast tokenizer for {fast_tokenizer.__class__.__name__}.", traceback.format_exc(), ) ) fast_tokenizer = None # Make sure the slow tokenizer can be saved if slow_tokenizer: # We don't save it to `output_folder` at this moment - only at the end of this function. 
with tempfile.TemporaryDirectory() as tmpdir: try: slow_tokenizer.save_pretrained(tmpdir) except Exception: result["warnings"].append( ( f"Failed to save the slow tokenizer for {slow_tokenizer.__class__.__name__}.", traceback.format_exc(), ) ) slow_tokenizer = None # update feature extractors using the tiny config try: feature_extractors = [convert_feature_extractor(p, tiny_config) for p in feature_extractors] except Exception: result["warnings"].append( ( "Failed to convert feature extractors.", traceback.format_exc(), ) ) feature_extractors = [] if hasattr(tiny_config, "max_position_embeddings") and tiny_config.max_position_embeddings > 0: if fast_tokenizer is not None: if fast_tokenizer.__class__.__name__ in [ "RobertaTokenizerFast", "XLMRobertaTokenizerFast", "LongformerTokenizerFast", "MPNetTokenizerFast", ]: fast_tokenizer.model_max_length = tiny_config.max_position_embeddings - 2 else: fast_tokenizer.model_max_length = tiny_config.max_position_embeddings if slow_tokenizer is not None: if slow_tokenizer.__class__.__name__ in [ "RobertaTokenizer", "XLMRobertaTokenizer", "LongformerTokenizer", "MPNetTokenizer", ]: slow_tokenizer.model_max_length = tiny_config.max_position_embeddings - 2 else: slow_tokenizer.model_max_length = tiny_config.max_position_embeddings processors = [fast_tokenizer, slow_tokenizer] + feature_extractors processors = [p for p in processors if p is not None] for p in processors: p.save_pretrained(output_folder) return processors def get_checkpoint_dir(output_dir, model_arch): """Get framework-agnostic architecture name. Used to save all PT/TF/Flax models into the same directory.""" arch_name = model_arch.__name__ if arch_name.startswith("TF"): arch_name = arch_name[2:] elif arch_name.startswith("Flax"): arch_name = arch_name[4:] return os.path.join(output_dir, arch_name) def build_model(model_arch, tiny_config, output_dir): """Create and save a model for `model_arch`. Also copy the set of processors to each model (under the same model type) output folder. """ checkpoint_dir = get_checkpoint_dir(output_dir, model_arch) processor_output_dir = os.path.join(output_dir, "processors") # copy the (same set of) processors (for a model type) to the model arch. 
specific folder if os.path.isdir(processor_output_dir): shutil.copytree(processor_output_dir, checkpoint_dir, dirs_exist_ok=True) tiny_config = copy.deepcopy(tiny_config) if any(model_arch.__name__.endswith(x) for x in ["ForCausalLM", "LMHeadModel"]): tiny_config.is_encoder_decoder = False tiny_config.is_decoder = True model = model_arch(config=tiny_config) model.save_pretrained(checkpoint_dir) model.from_pretrained(checkpoint_dir) return model def fill_result_with_error(result, error, trace, models_to_create): """Fill `result` with errors for all target model arch if we can't build processor""" error = (error, trace) result["error"] = error for framework in FRAMEWORKS: if framework in models_to_create: result[framework] = {} for model_arch in models_to_create[framework]: result[framework][model_arch.__name__] = {"model": None, "checkpoint": None, "error": error} result["processor"] = {p.__class__.__name__: p.__class__.__name__ for p in result["processor"].values()} def upload_model(model_dir, organization, token): """Upload the tiny models""" arch_name = model_dir.split(os.path.sep)[-1] repo_name = f"tiny-random-{arch_name}" repo_id = f"{organization}/{repo_name}" repo_exist = False error = None try: create_repo(repo_id=repo_id, exist_ok=False, repo_type="model", token=token) except Exception as e: error = e if "You already created" in str(e): error = None logger.warning("Remote repository exists and will be cloned.") repo_exist = True try: create_repo(repo_id=repo_id, exist_ok=True, repo_type="model", token=token) except Exception as e: error = e if error is not None: raise error with tempfile.TemporaryDirectory() as tmpdir: repo = Repository(local_dir=tmpdir, clone_from=repo_id, token=token) repo.git_pull() shutil.copytree(model_dir, tmpdir, dirs_exist_ok=True) if repo_exist: # Open a PR on the existing Hub repo. hub_pr_url = upload_folder( folder_path=model_dir, repo_id=repo_id, repo_type="model", commit_message=f"Update tiny models for {arch_name}", commit_description=f"Upload tiny models for {arch_name}", create_pr=True, token=token, ) logger.warning(f"PR open in {hub_pr_url}.") # TODO: We need this information? 
else: # Push to Hub repo directly repo.git_add(auto_lfs_track=True) repo.git_commit(f"Upload tiny models for {arch_name}") repo.git_push(blocking=True) # this prints a progress bar with the upload logger.warning(f"Tiny models {arch_name} pushed to {repo_id}.") def build_composite_models(config_class, output_dir): import tempfile from transformers import ( BertConfig, BertLMHeadModel, BertModel, BertTokenizer, BertTokenizerFast, EncoderDecoderModel, GPT2Config, GPT2LMHeadModel, GPT2Tokenizer, GPT2TokenizerFast, SpeechEncoderDecoderModel, TFEncoderDecoderModel, TFVisionEncoderDecoderModel, TFVisionTextDualEncoderModel, VisionEncoderDecoderModel, VisionTextDualEncoderModel, ViTConfig, ViTFeatureExtractor, ViTModel, Wav2Vec2Config, Wav2Vec2Model, Wav2Vec2Processor, ) # These will be removed at the end if they are empty result = {"error": None, "warnings": []} if config_class.model_type == "encoder-decoder": encoder_config_class = BertConfig decoder_config_class = BertConfig encoder_processor = (BertTokenizerFast, BertTokenizer) decoder_processor = (BertTokenizerFast, BertTokenizer) encoder_class = BertModel decoder_class = BertLMHeadModel model_class = EncoderDecoderModel tf_model_class = TFEncoderDecoderModel elif config_class.model_type == "vision-encoder-decoder": encoder_config_class = ViTConfig decoder_config_class = GPT2Config encoder_processor = (ViTFeatureExtractor,) decoder_processor = (GPT2TokenizerFast, GPT2Tokenizer) encoder_class = ViTModel decoder_class = GPT2LMHeadModel model_class = VisionEncoderDecoderModel tf_model_class = TFVisionEncoderDecoderModel elif config_class.model_type == "speech-encoder-decoder": encoder_config_class = Wav2Vec2Config decoder_config_class = BertConfig encoder_processor = (Wav2Vec2Processor,) decoder_processor = (BertTokenizerFast, BertTokenizer) encoder_class = Wav2Vec2Model decoder_class = BertLMHeadModel model_class = SpeechEncoderDecoderModel tf_model_class = None elif config_class.model_type == "vision-text-dual-encoder": # Not encoder-decoder, but encoder-encoder. We just keep the same name as above to make code easier encoder_config_class = ViTConfig decoder_config_class = BertConfig encoder_processor = (ViTFeatureExtractor,) decoder_processor = (BertTokenizerFast, BertTokenizer) encoder_class = ViTModel decoder_class = BertModel model_class = VisionTextDualEncoderModel tf_model_class = TFVisionTextDualEncoderModel with tempfile.TemporaryDirectory() as tmpdir: try: # build encoder models_to_create = {"processor": encoder_processor, "pytorch": (encoder_class,), "tensorflow": []} encoder_output_dir = os.path.join(tmpdir, "encoder") build(encoder_config_class, models_to_create, encoder_output_dir) # build decoder models_to_create = {"processor": decoder_processor, "pytorch": (decoder_class,), "tensorflow": []} decoder_output_dir = os.path.join(tmpdir, "decoder") build(decoder_config_class, models_to_create, decoder_output_dir) # build encoder-decoder encoder_path = os.path.join(encoder_output_dir, encoder_class.__name__) decoder_path = os.path.join(decoder_output_dir, decoder_class.__name__) if config_class.model_type != "vision-text-dual-encoder": # Specify these explicitly for encoder-decoder like models, but not for `vision-text-dual-encoder` as it # has no decoder. 
decoder_config = decoder_config_class.from_pretrained(decoder_path) decoder_config.is_decoder = True decoder_config.add_cross_attention = True model = model_class.from_encoder_decoder_pretrained( encoder_path, decoder_path, decoder_config=decoder_config, ) elif config_class.model_type == "vision-text-dual-encoder": model = model_class.from_vision_text_pretrained(encoder_path, decoder_path) model_path = os.path.join( output_dir, f"{model_class.__name__}-{encoder_config_class.model_type}-{decoder_config_class.model_type}", ) model.save_pretrained(model_path) if tf_model_class is not None: model = tf_model_class.from_pretrained(model_path) model.save_pretrained(model_path) # copy the processors encoder_processor_path = os.path.join(encoder_output_dir, "processors") decoder_processor_path = os.path.join(decoder_output_dir, "processors") if os.path.isdir(encoder_processor_path): shutil.copytree(encoder_processor_path, model_path, dirs_exist_ok=True) if os.path.isdir(decoder_processor_path): shutil.copytree(decoder_processor_path, model_path, dirs_exist_ok=True) # fill `result` result["processor"] = {x.__name__: x.__name__ for x in encoder_processor + decoder_processor} result["pytorch"] = {model_class.__name__: {"model": model_class.__name__, "checkpoint": model_path}} result["tensorflow"] = {} if tf_model_class is not None: result["tensorflow"] = { tf_model_class.__name__: {"model": tf_model_class.__name__, "checkpoint": model_path} } except Exception: result["error"] = ( f"Failed to build models for {config_class.__name__}.", traceback.format_exc(), ) if not result["error"]: del result["error"] if not result["warnings"]: del result["warnings"] return result def get_token_id_from_tokenizer(token_id_name, tokenizer, original_token_id): """Use `tokenizer` to get the values of `bos_token_id`, `eos_token_ids`, etc. The argument `token_id_name` should be a string ending with `_token_id`, and `original_token_id` should be an integer that will be return if `tokenizer` has no token corresponding to `token_id_name`. """ token_id = original_token_id if not token_id_name.endswith("_token_id"): raise ValueError(f"`token_id_name` is {token_id_name}, which doesn't end with `_token_id`!") token = getattr(tokenizer, token_id_name.replace("_token_id", "_token"), None) if token is not None: if isinstance(tokenizer, PreTrainedTokenizerFast): token_id = tokenizer._convert_token_to_id_with_added_voc(token) else: token_id = tokenizer._convert_token_to_id(token) return token_id def get_config_overrides(config_class, processors): # `Bark` configuration is too special. Let's just not handle this for now. if config_class.__name__ == "BarkConfig": return {} config_overrides = {} # Check if there is any tokenizer (prefer fast version if any) tokenizer = None for processor in processors: if isinstance(processor, PreTrainedTokenizerFast): tokenizer = processor break elif isinstance(processor, PreTrainedTokenizer): tokenizer = processor if tokenizer is None: return config_overrides # Get some properties of the (already converted) tokenizer (smaller vocab size, special token ids, etc.) # We use `len(tokenizer)` instead of `tokenizer.vocab_size` to avoid potential issues for tokenizers with non-empty # `added_tokens_encoder`. One example is the `DebertaV2Tokenizer` where the mask token is the extra token. vocab_size = len(tokenizer) # The original checkpoint has length `35998`, but it doesn't have ids `30400` and `30514` but instead `35998` and # `35999`. 
if config_class.__name__ == "GPTSanJapaneseConfig": vocab_size += 2 config_overrides["vocab_size"] = vocab_size # Used to create a new model tester with `tokenizer.vocab_size` in order to get the (updated) special token ids. model_tester_kwargs = {"vocab_size": vocab_size} # `FSMTModelTester` accepts `src_vocab_size` and `tgt_vocab_size` but not `vocab_size`. if config_class.__name__ == "FSMTConfig": del model_tester_kwargs["vocab_size"] model_tester_kwargs["src_vocab_size"] = tokenizer.src_vocab_size model_tester_kwargs["tgt_vocab_size"] = tokenizer.tgt_vocab_size _tiny_config = get_tiny_config(config_class, **model_tester_kwargs) # handle the possibility of `text_config` inside `_tiny_config` for clip-like models (`owlvit`, `groupvit`, etc.) if hasattr(_tiny_config, "text_config"): _tiny_config = _tiny_config.text_config # Collect values of some special token ids for attr in dir(_tiny_config): if attr.endswith("_token_id"): token_id = getattr(_tiny_config, attr) if token_id is not None: # Using the token id values from `tokenizer` instead of from `_tiny_config`. token_id = get_token_id_from_tokenizer(attr, tokenizer, original_token_id=token_id) config_overrides[attr] = token_id if config_class.__name__ == "FSMTConfig": config_overrides["src_vocab_size"] = tokenizer.src_vocab_size config_overrides["tgt_vocab_size"] = tokenizer.tgt_vocab_size # `FSMTConfig` has `DecoderConfig` as `decoder` attribute. config_overrides["decoder"] = configuration_fsmt.DecoderConfig( vocab_size=tokenizer.tgt_vocab_size, bos_token_id=config_overrides["eos_token_id"] ) return config_overrides def build(config_class, models_to_create, output_dir): """Create all models for a certain model type. Args: config_class (`PretrainedConfig`): A subclass of `PretrainedConfig` that is used to determine `models_to_create`. models_to_create (`dict`): A dictionary containing the processor/model classes that we want to create the instances. These models are of the same model type which is associated to `config_class`. output_dir (`str`): The directory to save all the checkpoints. Each model architecture will be saved in a subdirectory under it. Models in different frameworks with the same architecture will be saved in the same subdirectory. """ if data["training_ds"] is None or data["testing_ds"] is None: ds = load_dataset("wikitext", "wikitext-2-raw-v1") data["training_ds"] = ds["train"] data["testing_ds"] = ds["test"] if config_class.model_type in [ "encoder-decoder", "vision-encoder-decoder", "speech-encoder-decoder", "vision-text-dual-encoder", ]: return build_composite_models(config_class, output_dir) result = {k: {} for k in models_to_create} # These will be removed at the end if they are empty result["error"] = None result["warnings"] = [] # Build processors processor_classes = models_to_create["processor"] if len(processor_classes) == 0: error = f"No processor class could be found in {config_class.__name__}." fill_result_with_error(result, error, None, models_to_create) logger.error(result["error"][0]) return result for processor_class in processor_classes: try: processor = build_processor(config_class, processor_class, allow_no_checkpoint=True) if processor is not None: result["processor"][processor_class] = processor except Exception: error = f"Failed to build processor for {processor_class.__name__}." 
trace = traceback.format_exc() fill_result_with_error(result, error, trace, models_to_create) logger.error(result["error"][0]) return result if len(result["processor"]) == 0: error = f"No processor could be built for {config_class.__name__}." fill_result_with_error(result, error, None, models_to_create) logger.error(result["error"][0]) return result try: tiny_config = get_tiny_config(config_class) except Exception as e: error = f"Failed to get tiny config for {config_class.__name__}: {e}" trace = traceback.format_exc() fill_result_with_error(result, error, trace, models_to_create) logger.error(result["error"][0]) return result # Convert the processors (reduce vocabulary size, smaller image size, etc.) processors = list(result["processor"].values()) processor_output_folder = os.path.join(output_dir, "processors") try: processors = convert_processors(processors, tiny_config, processor_output_folder, result) except Exception: error = "Failed to convert the processors." trace = traceback.format_exc() result["warnings"].append((error, trace)) if len(processors) == 0: error = f"No processor is returned by `convert_processors` for {config_class.__name__}." fill_result_with_error(result, error, None, models_to_create) logger.error(result["error"][0]) return result try: config_overrides = get_config_overrides(config_class, processors) except Exception as e: error = f"Failure occurs while calling `get_config_overrides`: {e}" trace = traceback.format_exc() fill_result_with_error(result, error, trace, models_to_create) logger.error(result["error"][0]) return result # Just for us to see this easily in the report if "vocab_size" in config_overrides: result["vocab_size"] = config_overrides["vocab_size"] # Update attributes that `vocab_size` involves for k, v in config_overrides.items(): if hasattr(tiny_config, k): setattr(tiny_config, k, v) # So far, we only have to deal with `text_config`, as `config_overrides` contains text-related attributes only. # `FuyuConfig` saves data under both FuyuConfig and its `text_config`. This is not good, but let's just update # every involved fields to avoid potential failure. if ( hasattr(tiny_config, "text_config") and tiny_config.text_config is not None and hasattr(tiny_config.text_config, k) ): setattr(tiny_config.text_config, k, v) # If `text_config_dict` exists, we need to update its value here too in order to # make # `save_pretrained -> from_pretrained` work. 
if hasattr(tiny_config, "text_config_dict"): tiny_config.text_config_dict[k] = v if result["warnings"]: logger.warning(result["warnings"][0][0]) # update `result["processor"]` result["processor"] = {type(p).__name__: p.__class__.__name__ for p in processors} for pytorch_arch in models_to_create["pytorch"]: result["pytorch"][pytorch_arch.__name__] = {} error = None try: model = build_model(pytorch_arch, tiny_config, output_dir=output_dir) except Exception as e: model = None error = f"Failed to create the pytorch model for {pytorch_arch}: {e}" trace = traceback.format_exc() result["pytorch"][pytorch_arch.__name__]["model"] = model.__class__.__name__ if model is not None else None result["pytorch"][pytorch_arch.__name__]["checkpoint"] = ( get_checkpoint_dir(output_dir, pytorch_arch) if model is not None else None ) if error is not None: result["pytorch"][pytorch_arch.__name__]["error"] = (error, trace) logger.error(f"{pytorch_arch.__name__}: {error}") for tensorflow_arch in models_to_create["tensorflow"]: # Make PT/TF weights compatible pt_arch_name = tensorflow_arch.__name__[2:] # Remove `TF` pt_arch = getattr(transformers_module, pt_arch_name) result["tensorflow"][tensorflow_arch.__name__] = {} error = None if pt_arch.__name__ in result["pytorch"] and result["pytorch"][pt_arch.__name__]["checkpoint"] is not None: ckpt = get_checkpoint_dir(output_dir, pt_arch) # Use the same weights from PyTorch. try: model = tensorflow_arch.from_pretrained(ckpt) model.save_pretrained(ckpt) except Exception as e: # Conversion may fail. Let's not create a model with different weights to avoid confusion (for now). model = None error = f"Failed to convert the pytorch model to the tensorflow model for {pt_arch}: {e}" trace = traceback.format_exc() else: try: model = build_model(tensorflow_arch, tiny_config, output_dir=output_dir) except Exception as e: model = None error = f"Failed to create the tensorflow model for {tensorflow_arch}: {e}" trace = traceback.format_exc() result["tensorflow"][tensorflow_arch.__name__]["model"] = ( model.__class__.__name__ if model is not None else None ) result["tensorflow"][tensorflow_arch.__name__]["checkpoint"] = ( get_checkpoint_dir(output_dir, tensorflow_arch) if model is not None else None ) if error is not None: result["tensorflow"][tensorflow_arch.__name__]["error"] = (error, trace) logger.error(f"{tensorflow_arch.__name__}: {error}") if not result["error"]: del result["error"] if not result["warnings"]: del result["warnings"] return result def build_tiny_model_summary(results, organization=None, token=None): """Build a summary: a dictionary of the form { model architecture name: { "tokenizer_classes": [...], "processor_classes": [...], "model_classes": [...], } .. 
} """ tiny_model_summary = {} for config_name in results: processors = [key for key, value in results[config_name]["processor"].items()] tokenizer_classes = sorted([x for x in processors if x.endswith("TokenizerFast") or x.endswith("Tokenizer")]) processor_classes = sorted([x for x in processors if x not in tokenizer_classes]) for framework in FRAMEWORKS: if framework not in results[config_name]: continue for arch_name in results[config_name][framework]: model_classes = [arch_name] base_arch_name = arch_name[2:] if arch_name.startswith("TF") else arch_name # tiny model is not created for `arch_name` if results[config_name][framework][arch_name]["model"] is None: model_classes = [] if base_arch_name not in tiny_model_summary: tiny_model_summary[base_arch_name] = {} tiny_model_summary[base_arch_name].update( { "tokenizer_classes": tokenizer_classes, "processor_classes": processor_classes, } ) tiny_model_summary[base_arch_name]["model_classes"] = sorted( tiny_model_summary[base_arch_name].get("model_classes", []) + model_classes ) if organization is not None: repo_name = f"tiny-random-{base_arch_name}" # composite models' checkpoints have more precise repo. names on the Hub. if base_arch_name in COMPOSITE_MODELS: repo_name = f"tiny-random-{COMPOSITE_MODELS[base_arch_name]}" repo_id = f"{organization}/{repo_name}" try: commit_hash = hf_api.repo_info(repo_id, token=token).sha except Exception: # The directory is not created, but processor(s) is/are included in `results`. logger.warning(f"Failed to get information for {repo_id}.\n{traceback.format_exc()}") del tiny_model_summary[base_arch_name] continue tiny_model_summary[base_arch_name]["sha"] = commit_hash return tiny_model_summary def build_failed_report(results, include_warning=True): failed_results = {} for config_name in results: if "error" in results[config_name]: if config_name not in failed_results: failed_results[config_name] = {} failed_results[config_name] = {"error": results[config_name]["error"]} if include_warning and "warnings" in results[config_name]: if config_name not in failed_results: failed_results[config_name] = {} failed_results[config_name]["warnings"] = results[config_name]["warnings"] for framework in FRAMEWORKS: if framework not in results[config_name]: continue for arch_name in results[config_name][framework]: if "error" in results[config_name][framework][arch_name]: if config_name not in failed_results: failed_results[config_name] = {} if framework not in failed_results[config_name]: failed_results[config_name][framework] = {} if arch_name not in failed_results[config_name][framework]: failed_results[config_name][framework][arch_name] = {} error = results[config_name][framework][arch_name]["error"] failed_results[config_name][framework][arch_name]["error"] = error return failed_results def build_simple_report(results): text = "" failed_text = "" for config_name in results: for framework in FRAMEWORKS: if framework not in results[config_name]: continue for arch_name in results[config_name][framework]: if "error" in results[config_name][framework][arch_name]: result = results[config_name][framework][arch_name]["error"] failed_text += f"{arch_name}: {result[0]}\n" else: result = ("OK",) text += f"{arch_name}: {result[0]}\n" return text, failed_text def update_tiny_model_summary_file(report_path): with open(os.path.join(report_path, "tiny_model_summary.json")) as fp: new_data = json.load(fp) with open("tests/utils/tiny_model_summary.json") as fp: data = json.load(fp) for key, value in new_data.items(): if key not in 
data: data[key] = value else: for attr in ["tokenizer_classes", "processor_classes", "model_classes"]: # we might get duplication here. We will remove them below when creating `updated_data`. data[key][attr].extend(value[attr]) new_sha = value.get("sha", None) if new_sha is not None: data[key]["sha"] = new_sha updated_data = {} for key in sorted(data.keys()): updated_data[key] = {} for attr, value in data[key].items(): # deduplication and sort updated_data[key][attr] = sorted(set(value)) if attr != "sha" else value with open(os.path.join(report_path, "updated_tiny_model_summary.json"), "w") as fp: json.dump(updated_data, fp, indent=4, ensure_ascii=False) def create_tiny_models( output_path, all, model_types, models_to_skip, no_check, upload, organization, token, num_workers=1, ): clone_path = os.path.abspath(os.path.dirname(os.path.dirname(__file__))) if os.getcwd() != clone_path: raise ValueError(f"This script should be run from the root of the clone of `transformers` {clone_path}") report_path = os.path.join(output_path, "reports") os.makedirs(report_path) _pytorch_arch_mappings = [ x for x in dir(transformers_module) if x.startswith("MODEL_") and x.endswith("_MAPPING") and x != "MODEL_NAMES_MAPPING" ] _tensorflow_arch_mappings = [ x for x in dir(transformers_module) if x.startswith("TF_MODEL_") and x.endswith("_MAPPING") ] pytorch_arch_mappings = [getattr(transformers_module, x) for x in _pytorch_arch_mappings] tensorflow_arch_mappings = [getattr(transformers_module, x) for x in _tensorflow_arch_mappings] config_classes = CONFIG_MAPPING.values() if not all: config_classes = [CONFIG_MAPPING[model_type] for model_type in model_types] # A map from config classes to tuples of processors (tokenizer, feature extractor, processor) classes processor_type_map = {c: get_processor_types_from_config_class(c) for c in config_classes} to_create = {} for c in config_classes: processors = processor_type_map[c] models = get_architectures_from_config_class(c, pytorch_arch_mappings, models_to_skip) tf_models = get_architectures_from_config_class(c, tensorflow_arch_mappings, models_to_skip) if len(models) + len(tf_models) > 0: to_create[c] = {"processor": processors, "pytorch": models, "tensorflow": tf_models} results = {} if num_workers <= 1: for c, models_to_create in list(to_create.items()): print(f"Create models for {c.__name__} ...") result = build(c, models_to_create, output_dir=os.path.join(output_path, c.model_type)) results[c.__name__] = result print("=" * 40) else: all_build_args = [] for c, models_to_create in list(to_create.items()): all_build_args.append((c, models_to_create, os.path.join(output_path, c.model_type))) with multiprocessing.Pool() as pool: results = pool.starmap(build, all_build_args) results = {buid_args[0].__name__: result for buid_args, result in zip(all_build_args, results)} if upload: if organization is None: raise ValueError("The argument `organization` could not be `None`. No model is uploaded") to_upload = [] for model_type in os.listdir(output_path): # This is the directory containing the reports if model_type == "reports": continue for arch in os.listdir(os.path.join(output_path, model_type)): if arch == "processors": continue to_upload.append(os.path.join(output_path, model_type, arch)) to_upload = sorted(to_upload) upload_results = {} if len(to_upload) > 0: for model_dir in to_upload: try: upload_model(model_dir, organization, token) except Exception as e: error = f"Failed to upload {model_dir}. 
{e.__class__.__name__}: {e}" logger.error(error) upload_results[model_dir] = error with open(os.path.join(report_path, "failed_uploads.json"), "w") as fp: json.dump(upload_results, fp, indent=4) # Build the tiny model summary file. The `tokenizer_classes` and `processor_classes` could be both empty lists. # When using the items in this file to update the file `tests/utils/tiny_model_summary.json`, the model # architectures with `tokenizer_classes` and `processor_classes` being both empty should **NOT** be added to # `tests/utils/tiny_model_summary.json`. tiny_model_summary = build_tiny_model_summary(results, organization=organization, token=token) with open(os.path.join(report_path, "tiny_model_summary.json"), "w") as fp: json.dump(tiny_model_summary, fp, indent=4) with open(os.path.join(report_path, "tiny_model_creation_report.json"), "w") as fp: json.dump(results, fp, indent=4) # Build the warning/failure report (json format): same format as the complete `results` except this contains only # warnings or errors. failed_results = build_failed_report(results) with open(os.path.join(report_path, "failed_report.json"), "w") as fp: json.dump(failed_results, fp, indent=4) simple_report, failed_report = build_simple_report(results) # The simplified report: a .txt file with each line of format: # {model architecture name}: {OK or error message} with open(os.path.join(report_path, "simple_report.txt"), "w") as fp: fp.write(simple_report) # The simplified failure report: same above except this only contains line with errors with open(os.path.join(report_path, "simple_failed_report.txt"), "w") as fp: fp.write(failed_report) update_tiny_model_summary_file(report_path=os.path.join(output_path, "reports")) if __name__ == "__main__": # This has to be `spawn` to avoid hanging forever! multiprocessing.set_start_method("spawn") def list_str(values): return values.split(",") parser = argparse.ArgumentParser() parser.add_argument("--all", action="store_true", help="Will create all tiny models.") parser.add_argument( "--no_check", action="store_true", help="If set, will not check the validity of architectures. Use with caution.", ) parser.add_argument( "-m", "--model_types", type=list_str, help="Comma-separated list of model type(s) from which the tiny models will be created.", ) parser.add_argument( "--models_to_skip", type=list_str, help=( "Comma-separated list of model class names(s) from which the tiny models won't be created.\nThis is usually " "the list of model classes that have their tiny versions already uploaded to the Hub." ), ) parser.add_argument("--upload", action="store_true", help="If to upload the created tiny models to the Hub.") parser.add_argument( "--organization", default=None, type=str, help="The organization on the Hub to which the tiny models will be uploaded.", ) parser.add_argument( "--token", default=None, type=str, help="A valid authentication token for HuggingFace Hub with write access." ) parser.add_argument("output_path", type=Path, help="Path indicating where to store generated model.") parser.add_argument("--num_workers", default=1, type=int, help="The number of workers to run.") args = parser.parse_args() if not args.all and not args.model_types: raise ValueError("Please provide at least one model type or pass `--all` to export all architectures.") create_tiny_models( args.output_path, args.all, args.model_types, args.models_to_skip, args.no_check, args.upload, args.organization, args.token, args.num_workers, )
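For orientation, here is a minimal, hypothetical driver showing how the `create_tiny_models` entry point defined above could be called programmatically instead of through the argument parser. The module name in the import and the chosen model types are assumptions for illustration only; the function itself requires the current working directory to be the root of a `transformers` clone.

```py
# Minimal sketch, not part of the script: build tiny checkpoints for two example model
# types without uploading them. Assumes the repo root is the working directory and that
# this script sits under utils/ and is importable under the (assumed) name below.
import sys

sys.path.append("utils")
from create_dummy_models import create_tiny_models  # module name is an assumption

create_tiny_models(
    output_path="tiny_models",     # checkpoints plus a reports/ folder are written here
    all=False,                     # only build the model types listed below
    model_types=["bert", "gpt2"],  # example model types; any key of CONFIG_MAPPING works
    models_to_skip=[],
    no_check=False,
    upload=False,                  # skip the Hub upload step, so no organization/token needed
    organization=None,
    token=None,
    num_workers=1,                 # sequential build keeps the logs readable
)
```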
mavonic_private_repos/transformers/utils/check_model_tester.py
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import glob
import os

from get_test_info import get_tester_classes


if __name__ == "__main__":
    failures = []

    pattern = os.path.join("tests", "models", "**", "test_modeling_*.py")
    test_files = glob.glob(pattern)
    # TODO: deal with TF/Flax too
    test_files = [
        x
        for x in test_files
        if not os.path.basename(x).startswith(("test_modeling_tf_", "test_modeling_flax_"))
    ]

    for test_file in test_files:
        tester_classes = get_tester_classes(test_file)
        for tester_class in tester_classes:
            # A few tester classes don't have a `parent` parameter in `__init__`.
            # TODO: deal with this better
            try:
                tester = tester_class(parent=None)
            except Exception:
                continue

            if hasattr(tester, "get_config"):
                config = tester.get_config()
                for k, v in config.to_dict().items():
                    if isinstance(v, int):
                        target = None
                        if k in ["vocab_size"]:
                            target = 100
                        elif k in ["max_position_embeddings"]:
                            target = 128
                        elif k in ["hidden_size", "d_model"]:
                            target = 40
                        elif k in ["num_layers", "num_hidden_layers", "num_encoder_layers", "num_decoder_layers"]:
                            target = 5
                        if target is not None and v > target:
                            failures.append(
                                f"{tester_class.__name__} will produce a `config` of type `{config.__class__.__name__}`"
                                f' with config["{k}"] = {v} which is too large for testing! Set its value to be smaller'
                                f" than {target}."
                            )

    if len(failures) > 0:
        raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
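The checker above walks every `ModelTester.get_config()` and flags hyperparameters that exceed hard-coded limits. As a self-contained illustration of that rule (no `transformers` checkout needed), the sketch below applies the same thresholds to a plain dictionary; the example config values are made up.

```py
# Stand-alone sketch of the size limits enforced above; real configs come from each
# ModelTester's get_config(), the dictionary here is invented for illustration.
SIZE_LIMITS = {
    "vocab_size": 100,
    "max_position_embeddings": 128,
    "hidden_size": 40,
    "d_model": 40,
    "num_layers": 5,
    "num_hidden_layers": 5,
    "num_encoder_layers": 5,
    "num_decoder_layers": 5,
}


def oversized_keys(config_dict):
    """Return (key, value, limit) triples whose integer values exceed the tiny-model limits."""
    return [
        (k, v, SIZE_LIMITS[k])
        for k, v in config_dict.items()
        if k in SIZE_LIMITS and isinstance(v, int) and v > SIZE_LIMITS[k]
    ]


print(oversized_keys({"vocab_size": 99, "hidden_size": 768, "num_hidden_layers": 2}))
# [('hidden_size', 768, 40)]
```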
mavonic_private_repos/transformers/utils/check_table.py
# coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utility that checks the big table in the file docs/source/en/index.md and potentially updates it. Use from the root of the repo with: ```bash python utils/check_inits.py ``` for a check that will error in case of inconsistencies (used by `make repo-consistency`). To auto-fix issues run: ```bash python utils/check_inits.py --fix_and_overwrite ``` which is used by `make fix-copies`. """ import argparse import collections import os import re from typing import List from transformers.utils import direct_transformers_import # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_table.py TRANSFORMERS_PATH = "src/transformers" PATH_TO_DOCS = "docs/source/en" REPO_PATH = "." def _find_text_in_file(filename: str, start_prompt: str, end_prompt: str) -> str: """ Find the text in filename between two prompts. Args: filename (`str`): The file to search into. start_prompt (`str`): A string to look for at the start of the content searched. end_prompt (`str`): A string that will mark the end of the content to look for. Returns: `str`: The content between the prompts. """ with open(filename, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() # Find the start prompt. start_index = 0 while not lines[start_index].startswith(start_prompt): start_index += 1 start_index += 1 # Now go until the end prompt. end_index = start_index while not lines[end_index].startswith(end_prompt): end_index += 1 end_index -= 1 while len(lines[start_index]) <= 1: start_index += 1 while len(lines[end_index]) <= 1: end_index -= 1 end_index += 1 return "".join(lines[start_index:end_index]), start_index, end_index, lines # Regexes that match TF/Flax/PT model names. Add here suffixes that are used to identify models, separated by | _re_tf_models = re.compile(r"TF(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") _re_flax_models = re.compile(r"Flax(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") # Will match any TF or Flax model too so need to be in an else branch after the two previous regexes. _re_pt_models = re.compile(r"(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") # This is to make sure the transformers module imported is the one in the repo. transformers_module = direct_transformers_import(TRANSFORMERS_PATH) def camel_case_split(identifier: str) -> List[str]: """ Split a camel-cased name into words. Args: identifier (`str`): The camel-cased name to parse. Returns: `List[str]`: The list of words in the identifier (as seprated by capital letters). 
Example: ```py >>> camel_case_split("CamelCasedClass") ["Camel", "Cased", "Class"] ``` """ # Regex thanks to https://stackoverflow.com/questions/29916065/how-to-do-camelcase-split-in-python matches = re.finditer(".+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)", identifier) return [m.group(0) for m in matches] def _center_text(text: str, width: int) -> str: """ Utility that will add spaces on the left and right of a text to make it centered for a given width. Args: text (`str`): The text to center. width (`int`): The desired length of the result. Returns: `str`: A text of length `width` with the original `text` in the middle. """ text_length = 2 if text == "โœ…" or text == "โŒ" else len(text) left_indent = (width - text_length) // 2 right_indent = width - text_length - left_indent return " " * left_indent + text + " " * right_indent SPECIAL_MODEL_NAME_LINK_MAPPING = { "Data2VecAudio": "[Data2VecAudio](model_doc/data2vec)", "Data2VecText": "[Data2VecText](model_doc/data2vec)", "Data2VecVision": "[Data2VecVision](model_doc/data2vec)", "DonutSwin": "[DonutSwin](model_doc/donut)", } MODEL_NAMES_WITH_SAME_CONFIG = { "BARThez": "BART", "BARTpho": "BART", "BertJapanese": "BERT", "BERTweet": "BERT", "BORT": "BERT", "ByT5": "T5", "CPM": "OpenAI GPT-2", "DePlot": "Pix2Struct", "DialoGPT": "OpenAI GPT-2", "DiT": "BEiT", "FLAN-T5": "T5", "FLAN-UL2": "T5", "HerBERT": "BERT", "LayoutXLM": "LayoutLMv2", "Llama2": "LLaMA", "Llama3": "LLaMA", "MADLAD-400": "T5", "MatCha": "Pix2Struct", "mBART-50": "mBART", "Megatron-GPT2": "OpenAI GPT-2", "mLUKE": "LUKE", "MMS": "Wav2Vec2", "NLLB": "M2M100", "PhoBERT": "BERT", "T5v1.1": "T5", "TAPEX": "BART", "UL2": "T5", "Wav2Vec2Phoneme": "Wav2Vec2", "XLM-V": "XLM-RoBERTa", "XLS-R": "Wav2Vec2", "XLSR-Wav2Vec2": "Wav2Vec2", } MODEL_NAMES_TO_IGNORE = ["CLIPVisionModel", "SiglipVisionModel", "ChineseCLIPVisionModel"] def get_model_table_from_auto_modules() -> str: """ Generates an up-to-date model table from the content of the auto modules. """ # Dictionary model names to config. config_maping_names = transformers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES model_name_to_config = { name: config_maping_names[code] for code, name in transformers_module.MODEL_NAMES_MAPPING.items() if code in config_maping_names } model_name_to_prefix = {name: config.replace("Config", "") for name, config in model_name_to_config.items()} # Dictionaries flagging if each model prefix has a backend in PT/TF/Flax. pt_models = collections.defaultdict(bool) tf_models = collections.defaultdict(bool) flax_models = collections.defaultdict(bool) # Let's lookup through all transformers object (once). for attr_name in dir(transformers_module): lookup_dict = None if _re_tf_models.match(attr_name) is not None: lookup_dict = tf_models attr_name = _re_tf_models.match(attr_name).groups()[0] elif _re_flax_models.match(attr_name) is not None: lookup_dict = flax_models attr_name = _re_flax_models.match(attr_name).groups()[0] elif _re_pt_models.match(attr_name) is not None: lookup_dict = pt_models attr_name = _re_pt_models.match(attr_name).groups()[0] if lookup_dict is not None: while len(attr_name) > 0: if attr_name in model_name_to_prefix.values(): lookup_dict[attr_name] = True break # Try again after removing the last word in the name attr_name = "".join(camel_case_split(attr_name)[:-1]) # Let's build that table! 
model_names = list(model_name_to_config.keys()) + list(MODEL_NAMES_WITH_SAME_CONFIG.keys()) # model name to doc link mapping model_names_mapping = transformers_module.models.auto.configuration_auto.MODEL_NAMES_MAPPING model_name_to_link_mapping = {value: f"[{value}](model_doc/{key})" for key, value in model_names_mapping.items()} # update mapping with special model names model_name_to_link_mapping = { k: SPECIAL_MODEL_NAME_LINK_MAPPING[k] if k in SPECIAL_MODEL_NAME_LINK_MAPPING else v for k, v in model_name_to_link_mapping.items() } # MaskFormerSwin and TimmBackbone are backbones and so not meant to be loaded and used on their own. Instead, they define architectures which can be loaded using the AutoBackbone API. names_to_exclude = ["MaskFormerSwin", "TimmBackbone", "Speech2Text2"] model_names = [name for name in model_names if name not in names_to_exclude] model_names.sort(key=str.lower) columns = ["Model", "PyTorch support", "TensorFlow support", "Flax Support"] # We'll need widths to properly display everything in the center (+2 is to leave one extra space on each side). widths = [len(c) + 2 for c in columns] widths[0] = max([len(doc_link) for doc_link in model_name_to_link_mapping.values()]) + 2 # Build the table per se table = "|" + "|".join([_center_text(c, w) for c, w in zip(columns, widths)]) + "|\n" # Use ":-----:" format to center-aligned table cell texts table += "|" + "|".join([":" + "-" * (w - 2) + ":" for w in widths]) + "|\n" check = {True: "โœ…", False: "โŒ"} for name in model_names: if name in MODEL_NAMES_TO_IGNORE: continue if name in MODEL_NAMES_WITH_SAME_CONFIG.keys(): prefix = model_name_to_prefix[MODEL_NAMES_WITH_SAME_CONFIG[name]] else: prefix = model_name_to_prefix[name] line = [ model_name_to_link_mapping[name], check[pt_models[prefix]], check[tf_models[prefix]], check[flax_models[prefix]], ] table += "|" + "|".join([_center_text(l, w) for l, w in zip(line, widths)]) + "|\n" return table def check_model_table(overwrite=False): """ Check the model table in the index.md is consistent with the state of the lib and potentially fix it. Args: overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the table when it's not up to date. """ current_table, start_index, end_index, lines = _find_text_in_file( filename=os.path.join(PATH_TO_DOCS, "index.md"), start_prompt="<!--This table is updated automatically from the auto modules", end_prompt="<!-- End table-->", ) new_table = get_model_table_from_auto_modules() if current_table != new_table: if overwrite: with open(os.path.join(PATH_TO_DOCS, "index.md"), "w", encoding="utf-8", newline="\n") as f: f.writelines(lines[:start_index] + [new_table] + lines[end_index:]) else: raise ValueError( "The model table in the `index.md` has not been updated. Run `make fix-copies` to fix this." ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") args = parser.parse_args() check_model_table(args.fix_and_overwrite)
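The table generation relies on two small string helpers, `camel_case_split` and `_center_text`. A hedged usage sketch, assuming the working directory is the repository root (the module-level `direct_transformers_import` call needs `src/transformers` to resolve) and that `utils/` is added to the import path:

```py
# Illustrative only: exercise the two string helpers from utils/check_table.py.
import sys

sys.path.append("utils")  # assumption: run from the repo root of a transformers clone
from check_table import _center_text, camel_case_split

print(camel_case_split("TFBertForMaskedLM"))   # ['TF', 'Bert', 'For', 'Masked', 'LM']
print("|" + _center_text("Model", 9) + "|")    # '|  Model  |'
print("|" + _center_text("โœ…", 10) + "|")       # the check mark is counted as 2 characters wide
```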
mavonic_private_repos/transformers/utils/split_doctest_jobs.py
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This script is used to get the files against which we will run doc testing. This uses `tests_fetcher.get_all_doctest_files` then groups the test files by their directory paths. The files in `docs/source/en/model_doc` or `docs/source/en/tasks` are **NOT** grouped together with other files in the same directory: the objective is to run doctest against them in independent GitHub Actions jobs. Assume we are under `transformers` root directory: To get a map (dictionary) between directory (or file) paths and the corresponding files ```bash python utils/split_doctest_jobs.py ``` or to get a list of lists of directory (or file) paths ```bash python utils/split_doctest_jobs.py --only_return_keys --num_splits 4 ``` (this is used to allow GitHub Actions to generate more than 256 jobs using matrix) """ import argparse from collections import defaultdict from pathlib import Path from tests_fetcher import get_all_doctest_files if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--only_return_keys", action="store_true", help="if to only return the keys (which is a list of list of files' directory or file paths).", ) parser.add_argument( "--num_splits", type=int, default=1, help="the number of splits into which the (flat) list of direcotry/file paths will be split. This has effect only if `only_return_keys` is `True`.", ) args = parser.parse_args() all_doctest_files = get_all_doctest_files() raw_test_collection_map = defaultdict(list) for file in all_doctest_files: file_dir = "/".join(Path(file).parents[0].parts) raw_test_collection_map[file_dir].append(file) refined_test_collection_map = {} for file_dir in raw_test_collection_map.keys(): if file_dir in ["docs/source/en/model_doc", "docs/source/en/tasks"]: for file in raw_test_collection_map[file_dir]: refined_test_collection_map[file] = file else: refined_test_collection_map[file_dir] = " ".join(sorted(raw_test_collection_map[file_dir])) sorted_file_dirs = sorted(refined_test_collection_map.keys()) test_collection_map = {} for file_dir in sorted_file_dirs: test_collection_map[file_dir] = refined_test_collection_map[file_dir] num_jobs = len(test_collection_map) num_jobs_per_splits = num_jobs // args.num_splits file_directory_splits = [] end = 0 for idx in range(args.num_splits): start = end end = start + num_jobs_per_splits + (1 if idx < num_jobs % args.num_splits else 0) file_directory_splits.append(sorted_file_dirs[start:end]) if args.only_return_keys: print(file_directory_splits) else: print(dict(test_collection_map))
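The last part of the script distributes the sorted directory keys over `--num_splits` buckets, with the remainder absorbed one item at a time by the earliest buckets. The sketch below reproduces only that arithmetic on toy data; it is not part of the script.

```py
# Sketch of the split arithmetic used above: N sorted keys into num_splits buckets,
# earlier buckets taking one extra item each until the remainder is used up.
def split_keys(sorted_keys, num_splits):
    num_jobs = len(sorted_keys)
    per_split = num_jobs // num_splits
    splits, end = [], 0
    for idx in range(num_splits):
        start = end
        end = start + per_split + (1 if idx < num_jobs % num_splits else 0)
        splits.append(sorted_keys[start:end])
    return splits


print(split_keys([f"dir_{i}" for i in range(10)], 4))
# [['dir_0', 'dir_1', 'dir_2'], ['dir_3', 'dir_4', 'dir_5'], ['dir_6', 'dir_7'], ['dir_8', 'dir_9']]
```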
mavonic_private_repos/transformers/utils/slow_documentation_tests.txt
docs/source/en/generation_strategies.md
docs/source/en/model_doc/code_llama.md
docs/source/en/model_doc/ctrl.md
docs/source/en/model_doc/kosmos-2.md
docs/source/en/model_doc/seamless_m4t.md
docs/source/en/model_doc/seamless_m4t_v2.md
docs/source/en/task_summary.md
docs/source/en/tasks/prompting.md
src/transformers/models/blip_2/modeling_blip_2.py
src/transformers/models/ctrl/modeling_ctrl.py
src/transformers/models/fuyu/modeling_fuyu.py
src/transformers/models/idefics2/modeling_idefics2.py
src/transformers/models/kosmos2/modeling_kosmos2.py
src/transformers/models/musicgen_melody/modeling_musicgen_melody.py
src/transformers/models/musicgen_melody/processing_musicgen_melody.py
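How this list is consumed is not shown in this file; the doctest CI presumably treats the listed documentation files and modules as slow. As a purely hypothetical sketch, a helper could partition candidate doctest files against the list like this (the function name and wiring are assumptions, only the file path is real):

```py
# Hypothetical consumer of the list above; the real CI wiring is not shown in this file.
def partition_doctest_files(candidates, slow_list_path="utils/slow_documentation_tests.txt"):
    with open(slow_list_path) as f:
        slow = {line.strip() for line in f if line.strip()}
    return [c for c in candidates if c in slow], [c for c in candidates if c not in slow]


slow_files, fast_files = partition_doctest_files(
    ["docs/source/en/task_summary.md", "docs/source/en/model_doc/bert.md"]
)
print(slow_files)  # ['docs/source/en/task_summary.md']
```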
mavonic_private_repos/transformers/utils/get_test_info.py
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import importlib import os import sys # This is required to make the module import works (when the python process is running from the root of the repo) sys.path.append(".") r""" The argument `test_file` in this file refers to a model test file. This should be a string of the from `tests/models/*/test_modeling_*.py`. """ def get_module_path(test_file): """Return the module path of a model test file.""" components = test_file.split(os.path.sep) if components[0:2] != ["tests", "models"]: raise ValueError( "`test_file` should start with `tests/models/` (with `/` being the OS specific path separator). Got " f"{test_file} instead." ) test_fn = components[-1] if not test_fn.endswith("py"): raise ValueError(f"`test_file` should be a python file. Got {test_fn} instead.") if not test_fn.startswith("test_modeling_"): raise ValueError( f"`test_file` should point to a file name of the form `test_modeling_*.py`. Got {test_fn} instead." ) components = components[:-1] + [test_fn.replace(".py", "")] test_module_path = ".".join(components) return test_module_path def get_test_module(test_file): """Get the module of a model test file.""" test_module_path = get_module_path(test_file) test_module = importlib.import_module(test_module_path) return test_module def get_tester_classes(test_file): """Get all classes in a model test file whose names ends with `ModelTester`.""" tester_classes = [] test_module = get_test_module(test_file) for attr in dir(test_module): if attr.endswith("ModelTester"): tester_classes.append(getattr(test_module, attr)) # sort with class names return sorted(tester_classes, key=lambda x: x.__name__) def get_test_classes(test_file): """Get all [test] classes in a model test file with attribute `all_model_classes` that are non-empty. These are usually the (model) test classes containing the (non-slow) tests to run and are subclasses of one of the classes `ModelTesterMixin`, `TFModelTesterMixin` or `FlaxModelTesterMixin`, as well as a subclass of `unittest.TestCase`. Exceptions include `RagTestMixin` (and its subclasses). """ test_classes = [] test_module = get_test_module(test_file) for attr in dir(test_module): attr_value = getattr(test_module, attr) # (TF/Flax)ModelTesterMixin is also an attribute in specific model test module. Let's exclude them by checking # `all_model_classes` is not empty (which also excludes other special classes). 
model_classes = getattr(attr_value, "all_model_classes", []) if len(model_classes) > 0: test_classes.append(attr_value) # sort with class names return sorted(test_classes, key=lambda x: x.__name__) def get_model_classes(test_file): """Get all model classes that appear in `all_model_classes` attributes in a model test file.""" test_classes = get_test_classes(test_file) model_classes = set() for test_class in test_classes: model_classes.update(test_class.all_model_classes) # sort with class names return sorted(model_classes, key=lambda x: x.__name__) def get_model_tester_from_test_class(test_class): """Get the model tester class of a model test class.""" test = test_class() if hasattr(test, "setUp"): test.setUp() model_tester = None if hasattr(test, "model_tester"): # `(TF/Flax)ModelTesterMixin` has this attribute default to `None`. Let's skip this case. if test.model_tester is not None: model_tester = test.model_tester.__class__ return model_tester def get_test_classes_for_model(test_file, model_class): """Get all [test] classes in `test_file` that have `model_class` in their `all_model_classes`.""" test_classes = get_test_classes(test_file) target_test_classes = [] for test_class in test_classes: if model_class in test_class.all_model_classes: target_test_classes.append(test_class) # sort with class names return sorted(target_test_classes, key=lambda x: x.__name__) def get_tester_classes_for_model(test_file, model_class): """Get all model tester classes in `test_file` that are associated to `model_class`.""" test_classes = get_test_classes_for_model(test_file, model_class) tester_classes = [] for test_class in test_classes: tester_class = get_model_tester_from_test_class(test_class) if tester_class is not None: tester_classes.append(tester_class) # sort with class names return sorted(tester_classes, key=lambda x: x.__name__) def get_test_to_tester_mapping(test_file): """Get a mapping from [test] classes to model tester classes in `test_file`. This uses `get_test_classes` which may return classes that are NOT subclasses of `unittest.TestCase`. """ test_classes = get_test_classes(test_file) test_tester_mapping = {test_class: get_model_tester_from_test_class(test_class) for test_class in test_classes} return test_tester_mapping def get_model_to_test_mapping(test_file): """Get a mapping from model classes to test classes in `test_file`.""" model_classes = get_model_classes(test_file) model_test_mapping = { model_class: get_test_classes_for_model(test_file, model_class) for model_class in model_classes } return model_test_mapping def get_model_to_tester_mapping(test_file): """Get a mapping from model classes to model tester classes in `test_file`.""" model_classes = get_model_classes(test_file) model_to_tester_mapping = { model_class: get_tester_classes_for_model(test_file, model_class) for model_class in model_classes } return model_to_tester_mapping def to_json(o): """Make the information succinct and easy to read. Avoid the full class representation like `<class 'transformers.models.bert.modeling_bert.BertForMaskedLM'>` when displaying the results. Instead, we use class name (`BertForMaskedLM`) for the readability. """ if isinstance(o, str): return o elif isinstance(o, type): return o.__name__ elif isinstance(o, (list, tuple)): return [to_json(x) for x in o] elif isinstance(o, dict): return {to_json(k): to_json(v) for k, v in o.items()} else: return o
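A hedged usage sketch for the helpers above. The module appends "." to `sys.path` precisely so that it can be run from the repository root; the BERT test file used here is only an example, and the run needs a working `transformers` dev install so the test module imports cleanly.

```py
# Illustrative invocation of utils/get_test_info.py from the repo root.
import sys

sys.path.append("utils")
from get_test_info import get_model_to_tester_mapping, to_json

mapping = get_model_to_tester_mapping("tests/models/bert/test_modeling_bert.py")
print(to_json(mapping))  # e.g. {"BertModel": ["BertModelTester"], ...}
```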
mavonic_private_repos/transformers/utils/get_modified_files.py
# coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # this script reports modified .py files under the desired list of top-level sub-dirs passed as a list of arguments, e.g.: # python ./utils/get_modified_files.py utils src tests examples # # it uses git to find the forking point and which files were modified - i.e. files not under git won't be considered # since the output of this script is fed into Makefile commands it doesn't print a newline after the results import re import subprocess import sys fork_point_sha = subprocess.check_output("git merge-base main HEAD".split()).decode("utf-8") modified_files = ( subprocess.check_output(f"git diff --diff-filter=d --name-only {fork_point_sha}".split()).decode("utf-8").split() ) joined_dirs = "|".join(sys.argv[1:]) regex = re.compile(rf"^({joined_dirs}).*?\.py$") relevant_modified_files = [x for x in modified_files if regex.match(x)] print(" ".join(relevant_modified_files), end="")
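Because the output is consumed by Makefile targets, the script prints the matching paths on a single line with no trailing newline. A standalone sketch of the same git-based check, assuming a git checkout with a `main` branch; the list of directories is illustrative:

```python
# Sketch of the same idea as a standalone snippet: list modified .py files under
# chosen top-level dirs, relative to the fork point from `main`.
# (Assumes a git checkout with a `main` branch; the dirs are illustrative.)
import re
import subprocess

dirs = ["utils", "src", "tests", "examples"]

fork_point = subprocess.check_output(["git", "merge-base", "main", "HEAD"]).decode().strip()
modified = subprocess.check_output(
    ["git", "diff", "--diff-filter=d", "--name-only", fork_point]
).decode().split()

pattern = re.compile(rf"^({'|'.join(dirs)}).*?\.py$")
print(" ".join(f for f in modified if pattern.match(f)), end="")
```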
mavonic_private_repos/transformers/utils/check_dummies.py
# coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This script is responsible for making sure the dummies in utils/dummies_xxx.py are up to date with the main init. Why dummies? This is to make sure that a user can always import all objects from `transformers`, even if they don't have the necessary extra libs installed. Those objects will then raise helpful error message whenever the user tries to access one of their methods. Usage (from the root of the repo): Check that the dummy files are up to date (used in `make repo-consistency`): ```bash python utils/check_dummies.py ``` Update the dummy files if needed (used in `make fix-copies`): ```bash python utils/check_dummies.py --fix_and_overwrite ``` """ import argparse import os import re from typing import Dict, List, Optional # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_dummies.py PATH_TO_TRANSFORMERS = "src/transformers" # Matches is_xxx_available() _re_backend = re.compile(r"is\_([a-z_]*)_available()") # Matches from xxx import bla _re_single_line_import = re.compile(r"\s+from\s+\S*\s+import\s+([^\(\s].*)\n") # Matches if not is_xxx_available() _re_test_backend = re.compile(r"^\s+if\s+not\s+\(?is\_[a-z_]*\_available\(\)") # Template for the dummy objects. DUMMY_CONSTANT = """ {0} = None """ DUMMY_CLASS = """ class {0}(metaclass=DummyObject): _backends = {1} def __init__(self, *args, **kwargs): requires_backends(self, {1}) """ DUMMY_FUNCTION = """ def {0}(*args, **kwargs): requires_backends({0}, {1}) """ def find_backend(line: str) -> Optional[str]: """ Find one (or multiple) backend in a code line of the init. Args: line (`str`): A code line in an init file. Returns: Optional[`str`]: If one (or several) backend is found, returns it. In the case of multiple backends (the line contains `if is_xxx_available() and `is_yyy_available()`) returns all backends joined on `_and_` (so `xxx_and_yyy` for instance). """ if _re_test_backend.search(line) is None: return None backends = [b[0] for b in _re_backend.findall(line)] backends.sort() return "_and_".join(backends) def read_init() -> Dict[str, List[str]]: """ Read the init and extract backend-specific objects. Returns: Dict[str, List[str]]: A dictionary mapping backend name to the list of object names requiring that backend. """ with open(os.path.join(PATH_TO_TRANSFORMERS, "__init__.py"), "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() # Get to the point we do the actual imports for type checking line_index = 0 while not lines[line_index].startswith("if TYPE_CHECKING"): line_index += 1 backend_specific_objects = {} # Go through the end of the file while line_index < len(lines): # If the line is an if is_backend_available, we grab all objects associated. 
backend = find_backend(lines[line_index]) if backend is not None: while not lines[line_index].startswith(" else:"): line_index += 1 line_index += 1 objects = [] # Until we unindent, add backend objects to the list while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 8): line = lines[line_index] single_line_import_search = _re_single_line_import.search(line) if single_line_import_search is not None: # Single-line imports objects.extend(single_line_import_search.groups()[0].split(", ")) elif line.startswith(" " * 12): # Multiple-line imports (with 3 indent level) objects.append(line[12:-2]) line_index += 1 backend_specific_objects[backend] = objects else: line_index += 1 return backend_specific_objects def create_dummy_object(name: str, backend_name: str) -> str: """ Create the code for a dummy object. Args: name (`str`): The name of the object. backend_name (`str`): The name of the backend required for that object. Returns: `str`: The code of the dummy object. """ if name.isupper(): return DUMMY_CONSTANT.format(name) elif name.islower(): return DUMMY_FUNCTION.format(name, backend_name) else: return DUMMY_CLASS.format(name, backend_name) def create_dummy_files(backend_specific_objects: Optional[Dict[str, List[str]]] = None) -> Dict[str, str]: """ Create the content of the dummy files. Args: backend_specific_objects (`Dict[str, List[str]]`, *optional*): The mapping backend name to list of backend-specific objects. If not passed, will be obtained by calling `read_init()`. Returns: `Dict[str, str]`: A dictionary mapping backend name to code of the corresponding backend file. """ if backend_specific_objects is None: backend_specific_objects = read_init() dummy_files = {} for backend, objects in backend_specific_objects.items(): backend_name = "[" + ", ".join(f'"{b}"' for b in backend.split("_and_")) + "]" dummy_file = "# This file is autogenerated by the command `make fix-copies`, do not edit.\n" dummy_file += "from ..utils import DummyObject, requires_backends\n\n" dummy_file += "\n".join([create_dummy_object(o, backend_name) for o in objects]) dummy_files[backend] = dummy_file return dummy_files def check_dummies(overwrite: bool = False): """ Check if the dummy files are up to date and maybe `overwrite` with the right content. Args: overwrite (`bool`, *optional*, default to `False`): Whether or not to overwrite the content of the dummy files. Will raise an error if they are not up to date when `overwrite=False`. """ dummy_files = create_dummy_files() # For special correspondence backend name to shortcut as used in utils/dummy_xxx_objects.py short_names = {"torch": "pt"} # Locate actual dummy modules and read their content. path = os.path.join(PATH_TO_TRANSFORMERS, "utils") dummy_file_paths = { backend: os.path.join(path, f"dummy_{short_names.get(backend, backend)}_objects.py") for backend in dummy_files.keys() } actual_dummies = {} for backend, file_path in dummy_file_paths.items(): if os.path.isfile(file_path): with open(file_path, "r", encoding="utf-8", newline="\n") as f: actual_dummies[backend] = f.read() else: actual_dummies[backend] = "" # Compare actual with what they should be. for backend in dummy_files.keys(): if dummy_files[backend] != actual_dummies[backend]: if overwrite: print( f"Updating transformers.utils.dummy_{short_names.get(backend, backend)}_objects.py as the main " "__init__ has new objects." 
) with open(dummy_file_paths[backend], "w", encoding="utf-8", newline="\n") as f: f.write(dummy_files[backend]) else: raise ValueError( "The main __init__ has objects that are not present in " f"transformers.utils.dummy_{short_names.get(backend, backend)}_objects.py. Run `make fix-copies` " "to fix this." ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") args = parser.parse_args() check_dummies(args.fix_and_overwrite)
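For reference, a small sketch of what the three dummy templates render to, assuming `utils/` is on `sys.path`; the object names are made up and the real dummy files are regenerated by `make fix-copies`:

```python
# Sketch of the three dummy flavours rendered by `create_dummy_object`
# (names are made up; real dummy files are regenerated by `make fix-copies`).
import sys

sys.path.insert(0, "utils")

from check_dummies import create_dummy_object

backend = '["torch"]'
print(create_dummy_object("SOME_CONSTANT", backend))  # constant -> SOME_CONSTANT = None
print(create_dummy_object("some_function", backend))  # function -> calls requires_backends
print(create_dummy_object("SomeModel", backend))      # class    -> metaclass=DummyObject stub
```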
mavonic_private_repos/transformers/utils/check_config_attributes.py
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import os import re from transformers.configuration_utils import PretrainedConfig from transformers.utils import direct_transformers_import # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_config_docstrings.py PATH_TO_TRANSFORMERS = "src/transformers" # This is to make sure the transformers module imported is the one in the repo. transformers = direct_transformers_import(PATH_TO_TRANSFORMERS) CONFIG_MAPPING = transformers.models.auto.configuration_auto.CONFIG_MAPPING SPECIAL_CASES_TO_ALLOW = { # 'max_position_embeddings' is not used in modeling file, but needed for eval frameworks like Huggingface's lighteval (https://github.com/huggingface/lighteval/blob/af24080ea4f16eaf1683e353042a2dfc9099f038/src/lighteval/models/base_model.py#L264). # periods and offsers are not used in modeling file, but used in the configuration file to define `layers_block_type` and `layers_num_experts`. "JambaConfig": [ "max_position_embeddings", "attn_layer_offset", "attn_layer_period", "expert_layer_offset", "expert_layer_period", ], # used to compute the property `self.chunk_length` "EncodecConfig": ["overlap"], # used to compute the property `self.layers_block_type` "RecurrentGemmaConfig": ["block_types"], # used as in the config to define `intermediate_size` "MambaConfig": ["expand"], # used as `self.bert_model = BertModel(config, ...)` "DPRConfig": True, "FuyuConfig": True, # not used in modeling files, but it's an important information "FSMTConfig": ["langs"], # used internally in the configuration class file "GPTNeoConfig": ["attention_types"], # used internally in the configuration class file "EsmConfig": ["is_folding_model"], # used during training (despite we don't have training script for these models yet) "Mask2FormerConfig": ["ignore_value"], # `ignore_value` used during training (despite we don't have training script for these models yet) # `norm` used in conversion script (despite not using in the modeling file) "OneFormerConfig": ["ignore_value", "norm"], # used during preprocessing and collation, see `collating_graphormer.py` "GraphormerConfig": ["spatial_pos_max"], # used internally in the configuration class file "T5Config": ["feed_forward_proj"], # used internally in the configuration class file # `tokenizer_class` get default value `T5Tokenizer` intentionally "MT5Config": ["feed_forward_proj", "tokenizer_class"], "UMT5Config": ["feed_forward_proj", "tokenizer_class"], # used internally in the configuration class file "LongT5Config": ["feed_forward_proj"], # used internally in the configuration class file "Pop2PianoConfig": ["feed_forward_proj"], # used internally in the configuration class file "SwitchTransformersConfig": ["feed_forward_proj"], # having default values other than `1e-5` - we can't fix them without breaking "BioGptConfig": ["layer_norm_eps"], # having default values other than `1e-5` - we can't fix 
them without breaking "GLPNConfig": ["layer_norm_eps"], # having default values other than `1e-5` - we can't fix them without breaking "SegformerConfig": ["layer_norm_eps"], # having default values other than `1e-5` - we can't fix them without breaking "CvtConfig": ["layer_norm_eps"], # having default values other than `1e-5` - we can't fix them without breaking "PerceiverConfig": ["layer_norm_eps"], # used internally to calculate the feature size "InformerConfig": ["num_static_real_features", "num_time_features"], # used internally to calculate the feature size "TimeSeriesTransformerConfig": ["num_static_real_features", "num_time_features"], # used internally to calculate the feature size "AutoformerConfig": ["num_static_real_features", "num_time_features"], # used internally to calculate `mlp_dim` "SamVisionConfig": ["mlp_ratio"], # For (head) training, but so far not implemented "ClapAudioConfig": ["num_classes"], # Not used, but providing useful information to users "SpeechT5HifiGanConfig": ["sampling_rate"], # used internally in the configuration class file "UdopConfig": ["feed_forward_proj"], # Actually used in the config or generation config, in that case necessary for the sub-components generation "SeamlessM4TConfig": [ "max_new_tokens", "t2u_max_new_tokens", "t2u_decoder_attention_heads", "t2u_decoder_ffn_dim", "t2u_decoder_layers", "t2u_encoder_attention_heads", "t2u_encoder_ffn_dim", "t2u_encoder_layers", "t2u_max_position_embeddings", ], # Actually used in the config or generation config, in that case necessary for the sub-components generation "SeamlessM4Tv2Config": [ "max_new_tokens", "t2u_decoder_attention_heads", "t2u_decoder_ffn_dim", "t2u_decoder_layers", "t2u_encoder_attention_heads", "t2u_encoder_ffn_dim", "t2u_encoder_layers", "t2u_max_position_embeddings", "t2u_variance_pred_dropout", "t2u_variance_predictor_embed_dim", "t2u_variance_predictor_hidden_dim", "t2u_variance_predictor_kernel_size", ], } # TODO (ydshieh): Check the failing cases, try to fix them or move some cases to the above block once we are sure SPECIAL_CASES_TO_ALLOW.update( { "CLIPSegConfig": True, "DeformableDetrConfig": True, "DetaConfig": True, "DinatConfig": True, "DonutSwinConfig": True, "EfficientFormerConfig": True, "FastSpeech2ConformerConfig": True, "FSMTConfig": True, "JukeboxConfig": True, "LayoutLMv2Config": True, "MaskFormerSwinConfig": True, "MT5Config": True, # For backward compatibility with trust remote code models "MptConfig": True, "MptAttentionConfig": True, "NatConfig": True, "OneFormerConfig": True, "PerceiverConfig": True, "RagConfig": True, "SpeechT5Config": True, "SwinConfig": True, "Swin2SRConfig": True, "Swinv2Config": True, "SwitchTransformersConfig": True, "TableTransformerConfig": True, "TapasConfig": True, "UniSpeechConfig": True, "UniSpeechSatConfig": True, "WavLMConfig": True, "WhisperConfig": True, # TODO: @Arthur (for `alignment_head` and `alignment_layer`) "JukeboxPriorConfig": True, # TODO: @Younes (for `is_decoder`) "Pix2StructTextConfig": True, "IdeficsConfig": True, "IdeficsVisionConfig": True, "IdeficsPerceiverConfig": True, } ) def check_attribute_being_used(config_class, attributes, default_value, source_strings): """Check if any name in `attributes` is used in one of the strings in `source_strings` Args: config_class (`type`): The configuration class for which the arguments in its `__init__` will be checked. attributes (`List[str]`): The name of an argument (or attribute) and its variant names if any. 
default_value (`Any`): A default value for the attribute in `attributes` assigned in the `__init__` of `config_class`. source_strings (`List[str]`): The python source code strings in the same modeling directory where `config_class` is defined. The file containing the definition of `config_class` should be excluded. """ attribute_used = False for attribute in attributes: for modeling_source in source_strings: # check if we can find `config.xxx`, `getattr(config, "xxx", ...)` or `getattr(self.config, "xxx", ...)` if ( f"config.{attribute}" in modeling_source or f'getattr(config, "{attribute}"' in modeling_source or f'getattr(self.config, "{attribute}"' in modeling_source ): attribute_used = True # Deal with multi-line cases elif ( re.search( rf'getattr[ \t\v\n\r\f]*\([ \t\v\n\r\f]*(self\.)?config,[ \t\v\n\r\f]*"{attribute}"', modeling_source, ) is not None ): attribute_used = True # `SequenceSummary` is called with `SequenceSummary(config)` elif attribute in [ "summary_type", "summary_use_proj", "summary_activation", "summary_last_dropout", "summary_proj_to_labels", "summary_first_dropout", ]: if "SequenceSummary" in modeling_source: attribute_used = True if attribute_used: break if attribute_used: break # common and important attributes, even if they do not always appear in the modeling files attributes_to_allow = [ "bos_index", "eos_index", "pad_index", "unk_index", "mask_index", "image_size", "use_cache", "out_features", "out_indices", "sampling_rate", # backbone related arguments passed to load_backbone "use_pretrained_backbone", "backbone", "backbone_config", "use_timm_backbone", "backbone_kwargs", ] attributes_used_in_generation = ["encoder_no_repeat_ngram_size"] # Special cases to be allowed case_allowed = True if not attribute_used: case_allowed = False for attribute in attributes: # Allow if the default value in the configuration class is different from the one in `PretrainedConfig` if attribute in ["is_encoder_decoder"] and default_value is True: case_allowed = True elif attribute in ["tie_word_embeddings"] and default_value is False: case_allowed = True # Allow cases without checking the default value in the configuration class elif attribute in attributes_to_allow + attributes_used_in_generation: case_allowed = True elif attribute.endswith("_token_id"): case_allowed = True # configuration class specific cases if not case_allowed: allowed_cases = SPECIAL_CASES_TO_ALLOW.get(config_class.__name__, []) case_allowed = allowed_cases is True or attribute in allowed_cases return attribute_used or case_allowed def check_config_attributes_being_used(config_class): """Check the arguments in `__init__` of `config_class` are used in the modeling files in the same directory Args: config_class (`type`): The configuration class for which the arguments in its `__init__` will be checked. 
""" # Get the parameters in `__init__` of the configuration class, and the default values if any signature = dict(inspect.signature(config_class.__init__).parameters) parameter_names = [x for x in list(signature.keys()) if x not in ["self", "kwargs"]] parameter_defaults = [signature[param].default for param in parameter_names] # If `attribute_map` exists, an attribute can have different names to be used in the modeling files, and as long # as one variant is used, the test should pass reversed_attribute_map = {} if len(config_class.attribute_map) > 0: reversed_attribute_map = {v: k for k, v in config_class.attribute_map.items()} # Get the path to modeling source files config_source_file = inspect.getsourcefile(config_class) model_dir = os.path.dirname(config_source_file) # Let's check against all frameworks: as long as one framework uses an attribute, we are good. modeling_paths = [os.path.join(model_dir, fn) for fn in os.listdir(model_dir) if fn.startswith("modeling_")] # Get the source code strings modeling_sources = [] for path in modeling_paths: if os.path.isfile(path): with open(path, encoding="utf8") as fp: modeling_sources.append(fp.read()) unused_attributes = [] for config_param, default_value in zip(parameter_names, parameter_defaults): # `attributes` here is all the variant names for `config_param` attributes = [config_param] # some configuration classes have non-empty `attribute_map`, and both names could be used in the # corresponding modeling files. As long as one of them appears, it is fine. if config_param in reversed_attribute_map: attributes.append(reversed_attribute_map[config_param]) if not check_attribute_being_used(config_class, attributes, default_value, modeling_sources): unused_attributes.append(attributes[0]) return sorted(unused_attributes) def check_config_attributes(): """Check the arguments in `__init__` of all configuration classes are used in python files""" configs_with_unused_attributes = {} for _config_class in list(CONFIG_MAPPING.values()): # Skip deprecated models if "models.deprecated" in _config_class.__module__: continue # Some config classes are not in `CONFIG_MAPPING` (e.g. `CLIPVisionConfig`, `Blip2VisionConfig`, etc.) config_classes_in_module = [ cls for name, cls in inspect.getmembers( inspect.getmodule(_config_class), lambda x: inspect.isclass(x) and issubclass(x, PretrainedConfig) and inspect.getmodule(x) == inspect.getmodule(_config_class), ) ] for config_class in config_classes_in_module: unused_attributes = check_config_attributes_being_used(config_class) if len(unused_attributes) > 0: configs_with_unused_attributes[config_class.__name__] = unused_attributes if len(configs_with_unused_attributes) > 0: error = "The following configuration classes contain unused attributes in the corresponding modeling files:\n" for name, attributes in configs_with_unused_attributes.items(): error += f"{name}: {attributes}\n" raise ValueError(error) if __name__ == "__main__": check_config_attributes()
mavonic_private_repos/transformers/utils/download_glue_data.py
""" Script for downloading all GLUE data. Original source: https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e Note: for legal reasons, we are unable to host MRPC. You can either use the version hosted by the SentEval team, which is already tokenized, or you can download the original data from (https://download.microsoft.com/download/D/4/6/D46FF87A-F6B9-4252-AA8B-3604ED519838/MSRParaphraseCorpus.msi) and extract the data from it manually. For Windows users, you can run the .msi file. For Mac and Linux users, consider an external library such as 'cabextract' (see below for an example). You should then rename and place specific files in a folder (see below for an example). mkdir MRPC cabextract MSRParaphraseCorpus.msi -d MRPC cat MRPC/_2DEC3DBE877E4DB192D17C0256E90F1D | tr -d $'\r' > MRPC/msr_paraphrase_train.txt cat MRPC/_D7B391F9EAFF4B1B8BCE8F21B20B1B61 | tr -d $'\r' > MRPC/msr_paraphrase_test.txt rm MRPC/_* rm MSRParaphraseCorpus.msi 1/30/19: It looks like SentEval is no longer hosting their extracted and tokenized MRPC data, so you'll need to download the data from the original source for now. 2/11/19: It looks like SentEval actually *is* hosting the extracted data. Hooray! """ import argparse import os import sys import urllib.request import zipfile TASKS = ["CoLA", "SST", "MRPC", "QQP", "STS", "MNLI", "SNLI", "QNLI", "RTE", "WNLI", "diagnostic"] TASK2PATH = { "CoLA": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FCoLA.zip?alt=media&token=46d5e637-3411-4188-bc44-5809b5bfb5f4", "SST": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8", "MRPC": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc", "QQP": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FQQP.zip?alt=media&token=700c6acf-160d-4d89-81d1-de4191d02cb5", "STS": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSTS-B.zip?alt=media&token=bddb94a7-8706-4e0d-a694-1109e12273b5", "MNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FMNLI.zip?alt=media&token=50329ea1-e339-40e2-809c-10c40afff3ce", "SNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSNLI.zip?alt=media&token=4afcfbb2-ff0c-4b2d-a09a-dbf07926f4df", "QNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FQNLIv2.zip?alt=media&token=6fdcf570-0fc5-4631-8456-9505272d1601", "RTE": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb", "WNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FWNLI.zip?alt=media&token=068ad0a0-ded7-4bd7-99a5-5e00222e0faf", "diagnostic": 
"https://storage.googleapis.com/mtl-sentence-representations.appspot.com/tsvsWithoutLabels%2FAX.tsv?GoogleAccessId=firebase-adminsdk-0khhl@mtl-sentence-representations.iam.gserviceaccount.com&Expires=2498860800&Signature=DuQ2CSPt2Yfre0C%2BiISrVYrIFaZH1Lc7hBVZDD4ZyR7fZYOMNOUGpi8QxBmTNOrNPjR3z1cggo7WXFfrgECP6FBJSsURv8Ybrue8Ypt%2FTPxbuJ0Xc2FhDi%2BarnecCBFO77RSbfuz%2Bs95hRrYhTnByqu3U%2FYZPaj3tZt5QdfpH2IUROY8LiBXoXS46LE%2FgOQc%2FKN%2BA9SoscRDYsnxHfG0IjXGwHN%2Bf88q6hOmAxeNPx6moDulUF6XMUAaXCSFU%2BnRO2RDL9CapWxj%2BDl7syNyHhB7987hZ80B%2FwFkQ3MEs8auvt5XW1%2Bd4aCU7ytgM69r8JDCwibfhZxpaa4gd50QXQ%3D%3D", } MRPC_TRAIN = "https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt" MRPC_TEST = "https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt" def download_and_extract(task, data_dir): print(f"Downloading and extracting {task}...") data_file = f"{task}.zip" urllib.request.urlretrieve(TASK2PATH[task], data_file) with zipfile.ZipFile(data_file) as zip_ref: zip_ref.extractall(data_dir) os.remove(data_file) print("\tCompleted!") def format_mrpc(data_dir, path_to_data): print("Processing MRPC...") mrpc_dir = os.path.join(data_dir, "MRPC") if not os.path.isdir(mrpc_dir): os.mkdir(mrpc_dir) if path_to_data: mrpc_train_file = os.path.join(path_to_data, "msr_paraphrase_train.txt") mrpc_test_file = os.path.join(path_to_data, "msr_paraphrase_test.txt") else: print("Local MRPC data not specified, downloading data from %s" % MRPC_TRAIN) mrpc_train_file = os.path.join(mrpc_dir, "msr_paraphrase_train.txt") mrpc_test_file = os.path.join(mrpc_dir, "msr_paraphrase_test.txt") urllib.request.urlretrieve(MRPC_TRAIN, mrpc_train_file) urllib.request.urlretrieve(MRPC_TEST, mrpc_test_file) if not os.path.isfile(mrpc_train_file): raise ValueError(f"Train data not found at {mrpc_train_file}") if not os.path.isfile(mrpc_test_file): raise ValueError(f"Test data not found at {mrpc_test_file}") urllib.request.urlretrieve(TASK2PATH["MRPC"], os.path.join(mrpc_dir, "dev_ids.tsv")) dev_ids = [] with open(os.path.join(mrpc_dir, "dev_ids.tsv"), encoding="utf8") as ids_fh: for row in ids_fh: dev_ids.append(row.strip().split("\t")) with open(mrpc_train_file, encoding="utf8") as data_fh, open( os.path.join(mrpc_dir, "train.tsv"), "w", encoding="utf8" ) as train_fh, open(os.path.join(mrpc_dir, "dev.tsv"), "w", encoding="utf8") as dev_fh: header = data_fh.readline() train_fh.write(header) dev_fh.write(header) for row in data_fh: label, id1, id2, s1, s2 = row.strip().split("\t") if [id1, id2] in dev_ids: dev_fh.write("%s\t%s\t%s\t%s\t%s\n" % (label, id1, id2, s1, s2)) else: train_fh.write("%s\t%s\t%s\t%s\t%s\n" % (label, id1, id2, s1, s2)) with open(mrpc_test_file, encoding="utf8") as data_fh, open( os.path.join(mrpc_dir, "test.tsv"), "w", encoding="utf8" ) as test_fh: header = data_fh.readline() test_fh.write("index\t#1 ID\t#2 ID\t#1 String\t#2 String\n") for idx, row in enumerate(data_fh): label, id1, id2, s1, s2 = row.strip().split("\t") test_fh.write("%d\t%s\t%s\t%s\t%s\n" % (idx, id1, id2, s1, s2)) print("\tCompleted!") def download_diagnostic(data_dir): print("Downloading and extracting diagnostic...") if not os.path.isdir(os.path.join(data_dir, "diagnostic")): os.mkdir(os.path.join(data_dir, "diagnostic")) data_file = os.path.join(data_dir, "diagnostic", "diagnostic.tsv") urllib.request.urlretrieve(TASK2PATH["diagnostic"], data_file) print("\tCompleted!") return def get_tasks(task_names): task_names = task_names.split(",") if "all" in task_names: tasks = TASKS else: tasks = [] for task_name in 
task_names: if task_name not in TASKS: raise ValueError(f"Task {task_name} not found!") tasks.append(task_name) return tasks def main(arguments): parser = argparse.ArgumentParser() parser.add_argument("--data_dir", help="directory to save data to", type=str, default="glue_data") parser.add_argument( "--tasks", help="tasks to download data for as a comma separated string", type=str, default="all" ) parser.add_argument( "--path_to_mrpc", help="path to directory containing extracted MRPC data, msr_paraphrase_train.txt and msr_paraphrase_text.txt", type=str, default="", ) args = parser.parse_args(arguments) if not os.path.isdir(args.data_dir): os.mkdir(args.data_dir) tasks = get_tasks(args.tasks) for task in tasks: if task == "MRPC": format_mrpc(args.data_dir, args.path_to_mrpc) elif task == "diagnostic": download_diagnostic(args.data_dir) else: download_and_extract(task, args.data_dir) if __name__ == "__main__": sys.exit(main(sys.argv[1:]))
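The script is a small CLI, but `main` accepts an argument list and can be called programmatically as well. A usage sketch, assuming network access, `utils/` on `sys.path`, and an illustrative pair of tasks (data lands in `glue_data/` by default):

```python
# Usage sketch (assumptions: run from the repo root with utils/ on sys.path,
# network access available; tasks and data_dir are illustrative).
import sys

sys.path.insert(0, "utils")

from download_glue_data import main

# Equivalent to: python utils/download_glue_data.py --data_dir glue_data --tasks CoLA,SST
main(["--data_dir", "glue_data", "--tasks", "CoLA,SST"])
```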
mavonic_private_repos/transformers/utils/sort_auto_mappings.py
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utility that sorts the names in the auto mappings defines in the auto modules in alphabetical order. Use from the root of the repo with: ```bash python utils/sort_auto_mappings.py ``` to auto-fix all the auto mappings (used in `make style`). To only check if the mappings are properly sorted (as used in `make quality`), do: ```bash python utils/sort_auto_mappings.py --check_only ``` """ import argparse import os import re from typing import Optional # Path are set with the intent you should run this script from the root of the repo. PATH_TO_AUTO_MODULE = "src/transformers/models/auto" # re pattern that matches mapping introductions: # SUPER_MODEL_MAPPING_NAMES = OrderedDict or SUPER_MODEL_MAPPING = OrderedDict _re_intro_mapping = re.compile(r"[A-Z_]+_MAPPING(\s+|_[A-Z_]+\s+)=\s+OrderedDict") # re pattern that matches identifiers in mappings _re_identifier = re.compile(r'\s*\(\s*"(\S[^"]+)"') def sort_auto_mapping(fname: str, overwrite: bool = False) -> Optional[bool]: """ Sort all auto mappings in a file. Args: fname (`str`): The name of the file where we want to sort auto-mappings. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to fix and overwrite the file. Returns: `Optional[bool]`: Returns `None` if `overwrite=True`. Otherwise returns `True` if the file has an auto-mapping improperly sorted, `False` if the file is okay. """ with open(fname, "r", encoding="utf-8") as f: content = f.read() lines = content.split("\n") new_lines = [] line_idx = 0 while line_idx < len(lines): if _re_intro_mapping.search(lines[line_idx]) is not None: # Start of a new mapping! indent = len(re.search(r"^(\s*)\S", lines[line_idx]).groups()[0]) + 8 while not lines[line_idx].startswith(" " * indent + "("): new_lines.append(lines[line_idx]) line_idx += 1 blocks = [] while lines[line_idx].strip() != "]": # Blocks either fit in one line or not if lines[line_idx].strip() == "(": start_idx = line_idx while not lines[line_idx].startswith(" " * indent + ")"): line_idx += 1 blocks.append("\n".join(lines[start_idx : line_idx + 1])) else: blocks.append(lines[line_idx]) line_idx += 1 # Sort blocks by their identifiers blocks = sorted(blocks, key=lambda x: _re_identifier.search(x).groups()[0]) new_lines += blocks else: new_lines.append(lines[line_idx]) line_idx += 1 if overwrite: with open(fname, "w", encoding="utf-8") as f: f.write("\n".join(new_lines)) else: return "\n".join(new_lines) != content def sort_all_auto_mappings(overwrite: bool = False): """ Sort all auto mappings in the library. Args: overwrite (`bool`, *optional*, defaults to `False`): Whether or not to fix and overwrite the file. 
""" fnames = [os.path.join(PATH_TO_AUTO_MODULE, f) for f in os.listdir(PATH_TO_AUTO_MODULE) if f.endswith(".py")] diffs = [sort_auto_mapping(fname, overwrite=overwrite) for fname in fnames] if not overwrite and any(diffs): failures = [f for f, d in zip(fnames, diffs) if d] raise ValueError( f"The following files have auto mappings that need sorting: {', '.join(failures)}. Run `make style` to fix" " this." ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--check_only", action="store_true", help="Whether to only check or fix style.") args = parser.parse_args() sort_all_auto_mappings(not args.check_only)
mavonic_private_repos/transformers/utils/check_inits.py
# coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utility that checks the custom inits of Transformers are well-defined: Transformers uses init files that delay the import of an object to when it's actually needed. This is to avoid the main init importing all models, which would make the line `import transformers` very slow when the user has all optional dependencies installed. The inits with delayed imports have two halves: one definining a dictionary `_import_structure` which maps modules to the name of the objects in each module, and one in `TYPE_CHECKING` which looks like a normal init for type-checkers. The goal of this script is to check the objects defined in both halves are the same. This also checks the main init properly references all submodules, even if it doesn't import anything from them: every submodule should be defined as a key of `_import_structure`, with an empty list as value potentially, or the submodule won't be importable. Use from the root of the repo with: ```bash python utils/check_inits.py ``` for a check that will error in case of inconsistencies (used by `make repo-consistency`). There is no auto-fix possible here sadly :-( """ import collections import os import re from pathlib import Path from typing import Dict, List, Optional, Tuple # Path is set with the intent you should run this script from the root of the repo. PATH_TO_TRANSFORMERS = "src/transformers" # Matches is_xxx_available() _re_backend = re.compile(r"is\_([a-z_]*)_available()") # Catches a one-line _import_struct = {xxx} _re_one_line_import_struct = re.compile(r"^_import_structure\s+=\s+\{([^\}]+)\}") # Catches a line with a key-values pattern: "bla": ["foo", "bar"] _re_import_struct_key_value = re.compile(r'\s+"\S*":\s+\[([^\]]*)\]') # Catches a line if not is_foo_available _re_test_backend = re.compile(r"^\s*if\s+not\s+is\_[a-z_]*\_available\(\)") # Catches a line _import_struct["bla"].append("foo") _re_import_struct_add_one = re.compile(r'^\s*_import_structure\["\S*"\]\.append\("(\S*)"\)') # Catches a line _import_struct["bla"].extend(["foo", "bar"]) or _import_struct["bla"] = ["foo", "bar"] _re_import_struct_add_many = re.compile(r"^\s*_import_structure\[\S*\](?:\.extend\(|\s*=\s+)\[([^\]]*)\]") # Catches a line with an object between quotes and a comma: "MyModel", _re_quote_object = re.compile(r'^\s+"([^"]+)",') # Catches a line with objects between brackets only: ["foo", "bar"], _re_between_brackets = re.compile(r"^\s+\[([^\]]+)\]") # Catches a line with from foo import bar, bla, boo _re_import = re.compile(r"\s+from\s+\S*\s+import\s+([^\(\s].*)\n") # Catches a line with try: _re_try = re.compile(r"^\s*try:") # Catches a line with else: _re_else = re.compile(r"^\s*else:") def find_backend(line: str) -> Optional[str]: """ Find one (or multiple) backend in a code line of the init. Args: line (`str`): A code line of the main init. Returns: Optional[`str`]: If one (or several) backend is found, returns it. 
In the case of multiple backends (the line contains `if is_xxx_available() and `is_yyy_available()`) returns all backends joined on `_and_` (so `xxx_and_yyy` for instance). """ if _re_test_backend.search(line) is None: return None backends = [b[0] for b in _re_backend.findall(line)] backends.sort() return "_and_".join(backends) def parse_init(init_file) -> Optional[Tuple[Dict[str, List[str]], Dict[str, List[str]]]]: """ Read an init_file and parse (per backend) the `_import_structure` objects defined and the `TYPE_CHECKING` objects defined. Args: init_file (`str`): Path to the init file to inspect. Returns: `Optional[Tuple[Dict[str, List[str]], Dict[str, List[str]]]]`: A tuple of two dictionaries mapping backends to list of imported objects, one for the `_import_structure` part of the init and one for the `TYPE_CHECKING` part of the init. Returns `None` if the init is not a custom init. """ with open(init_file, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() # Get the to `_import_structure` definition. line_index = 0 while line_index < len(lines) and not lines[line_index].startswith("_import_structure = {"): line_index += 1 # If this is a traditional init, just return. if line_index >= len(lines): return None # First grab the objects without a specific backend in _import_structure objects = [] while not lines[line_index].startswith("if TYPE_CHECKING") and find_backend(lines[line_index]) is None: line = lines[line_index] # If we have everything on a single line, let's deal with it. if _re_one_line_import_struct.search(line): content = _re_one_line_import_struct.search(line).groups()[0] imports = re.findall(r"\[([^\]]+)\]", content) for imp in imports: objects.extend([obj[1:-1] for obj in imp.split(", ")]) line_index += 1 continue single_line_import_search = _re_import_struct_key_value.search(line) if single_line_import_search is not None: imports = [obj[1:-1] for obj in single_line_import_search.groups()[0].split(", ") if len(obj) > 0] objects.extend(imports) elif line.startswith(" " * 8 + '"'): objects.append(line[9:-3]) line_index += 1 # Those are stored with the key "none". import_dict_objects = {"none": objects} # Let's continue with backend-specific objects in _import_structure while not lines[line_index].startswith("if TYPE_CHECKING"): # If the line is an if not is_backend_available, we grab all objects associated. 
backend = find_backend(lines[line_index]) # Check if the backend declaration is inside a try block: if _re_try.search(lines[line_index - 1]) is None: backend = None if backend is not None: line_index += 1 # Scroll until we hit the else block of try-except-else while _re_else.search(lines[line_index]) is None: line_index += 1 line_index += 1 objects = [] # Until we unindent, add backend objects to the list while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 4): line = lines[line_index] if _re_import_struct_add_one.search(line) is not None: objects.append(_re_import_struct_add_one.search(line).groups()[0]) elif _re_import_struct_add_many.search(line) is not None: imports = _re_import_struct_add_many.search(line).groups()[0].split(", ") imports = [obj[1:-1] for obj in imports if len(obj) > 0] objects.extend(imports) elif _re_between_brackets.search(line) is not None: imports = _re_between_brackets.search(line).groups()[0].split(", ") imports = [obj[1:-1] for obj in imports if len(obj) > 0] objects.extend(imports) elif _re_quote_object.search(line) is not None: objects.append(_re_quote_object.search(line).groups()[0]) elif line.startswith(" " * 8 + '"'): objects.append(line[9:-3]) elif line.startswith(" " * 12 + '"'): objects.append(line[13:-3]) line_index += 1 import_dict_objects[backend] = objects else: line_index += 1 # At this stage we are in the TYPE_CHECKING part, first grab the objects without a specific backend objects = [] while ( line_index < len(lines) and find_backend(lines[line_index]) is None and not lines[line_index].startswith("else") ): line = lines[line_index] single_line_import_search = _re_import.search(line) if single_line_import_search is not None: objects.extend(single_line_import_search.groups()[0].split(", ")) elif line.startswith(" " * 8): objects.append(line[8:-2]) line_index += 1 type_hint_objects = {"none": objects} # Let's continue with backend-specific objects while line_index < len(lines): # If the line is an if is_backend_available, we grab all objects associated. backend = find_backend(lines[line_index]) # Check if the backend declaration is inside a try block: if _re_try.search(lines[line_index - 1]) is None: backend = None if backend is not None: line_index += 1 # Scroll until we hit the else block of try-except-else while _re_else.search(lines[line_index]) is None: line_index += 1 line_index += 1 objects = [] # Until we unindent, add backend objects to the list while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 8): line = lines[line_index] single_line_import_search = _re_import.search(line) if single_line_import_search is not None: objects.extend(single_line_import_search.groups()[0].split(", ")) elif line.startswith(" " * 12): objects.append(line[12:-2]) line_index += 1 type_hint_objects[backend] = objects else: line_index += 1 return import_dict_objects, type_hint_objects def analyze_results(import_dict_objects: Dict[str, List[str]], type_hint_objects: Dict[str, List[str]]) -> List[str]: """ Analyze the differences between _import_structure objects and TYPE_CHECKING objects found in an init. Args: import_dict_objects (`Dict[str, List[str]]`): A dictionary mapping backend names (`"none"` for the objects independent of any specific backend) to list of imported objects. type_hint_objects (`Dict[str, List[str]]`): A dictionary mapping backend names (`"none"` for the objects independent of any specific backend) to list of imported objects. Returns: `List[str]`: The list of errors corresponding to mismatches. 
""" def find_duplicates(seq): return [k for k, v in collections.Counter(seq).items() if v > 1] # If one backend is missing from the other part of the init, error early. if list(import_dict_objects.keys()) != list(type_hint_objects.keys()): return ["Both sides of the init do not have the same backends!"] errors = [] # Find all errors. for key in import_dict_objects.keys(): # Duplicate imports in any half. duplicate_imports = find_duplicates(import_dict_objects[key]) if duplicate_imports: errors.append(f"Duplicate _import_structure definitions for: {duplicate_imports}") duplicate_type_hints = find_duplicates(type_hint_objects[key]) if duplicate_type_hints: errors.append(f"Duplicate TYPE_CHECKING objects for: {duplicate_type_hints}") # Missing imports in either part of the init. if sorted(set(import_dict_objects[key])) != sorted(set(type_hint_objects[key])): name = "base imports" if key == "none" else f"{key} backend" errors.append(f"Differences for {name}:") for a in type_hint_objects[key]: if a not in import_dict_objects[key]: errors.append(f" {a} in TYPE_HINT but not in _import_structure.") for a in import_dict_objects[key]: if a not in type_hint_objects[key]: errors.append(f" {a} in _import_structure but not in TYPE_HINT.") return errors def check_all_inits(): """ Check all inits in the transformers repo and raise an error if at least one does not define the same objects in both halves. """ failures = [] for root, _, files in os.walk(PATH_TO_TRANSFORMERS): if "__init__.py" in files: fname = os.path.join(root, "__init__.py") objects = parse_init(fname) if objects is not None: errors = analyze_results(*objects) if len(errors) > 0: errors[0] = f"Problem in {fname}, both halves do not define the same objects.\n{errors[0]}" failures.append("\n".join(errors)) if len(failures) > 0: raise ValueError("\n\n".join(failures)) def get_transformers_submodules() -> List[str]: """ Returns the list of Transformers submodules. """ submodules = [] for path, directories, files in os.walk(PATH_TO_TRANSFORMERS): for folder in directories: # Ignore private modules if folder.startswith("_"): directories.remove(folder) continue # Ignore leftovers from branches (empty folders apart from pycache) if len(list((Path(path) / folder).glob("*.py"))) == 0: continue short_path = str((Path(path) / folder).relative_to(PATH_TO_TRANSFORMERS)) submodule = short_path.replace(os.path.sep, ".") submodules.append(submodule) for fname in files: if fname == "__init__.py": continue short_path = str((Path(path) / fname).relative_to(PATH_TO_TRANSFORMERS)) submodule = short_path.replace(".py", "").replace(os.path.sep, ".") if len(submodule.split(".")) == 1: submodules.append(submodule) return submodules IGNORE_SUBMODULES = [ "convert_pytorch_checkpoint_to_tf2", "modeling_flax_pytorch_utils", "models.esm.openfold_utils", "modeling_attn_mask_utils", "safetensors_conversion", ] def check_submodules(): """ Check all submodules of Transformers are properly registered in the main init. Error otherwise. """ # This is to make sure the transformers module imported is the one in the repo. from transformers.utils import direct_transformers_import transformers = direct_transformers_import(PATH_TO_TRANSFORMERS) import_structure_keys = set(transformers._import_structure.keys()) # This contains all the base keys of the _import_structure object defined in the init, but if the user is missing # some optional dependencies, they may not have all of them. Thus we read the init to read all additions and # (potentiall re-) add them. 
with open(os.path.join(PATH_TO_TRANSFORMERS, "__init__.py"), "r") as f: init_content = f.read() import_structure_keys.update(set(re.findall(r"import_structure\[\"([^\"]*)\"\]", init_content))) module_not_registered = [ module for module in get_transformers_submodules() if module not in IGNORE_SUBMODULES and module not in import_structure_keys ] if len(module_not_registered) > 0: list_of_modules = "\n".join(f"- {module}" for module in module_not_registered) raise ValueError( "The following submodules are not properly registed in the main init of Transformers:\n" f"{list_of_modules}\n" "Make sure they appear somewhere in the keys of `_import_structure` with an empty list as value." ) if __name__ == "__main__": check_all_inits() check_submodules()
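To make the two halves concrete, here is a toy custom init run through `parse_init` and `analyze_results`. The module content is invented and written to a temporary file purely for illustration, assuming `utils/` is on `sys.path`:

```python
# Toy sketch of the two halves that `parse_init` compares (invented module, temp file).
import sys
import tempfile

sys.path.insert(0, "utils")

from check_inits import analyze_results, parse_init

toy_init = (
    "_import_structure = {\n"
    '    "configuration_toy": ["ToyConfig"],\n'
    "}\n"
    "\n"
    "if TYPE_CHECKING:\n"
    "    from .configuration_toy import ToyConfig\n"
)

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False, newline="\n") as f:
    f.write(toy_init)
    path = f.name

import_objects, type_hint_objects = parse_init(path)
print(analyze_results(import_objects, type_hint_objects))  # [] -> both halves agree
```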
mavonic_private_repos/transformers/utils/check_doctest_list.py
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This script is responsible for cleaning the list of doctests by making sure the entries all exist and are in alphabetical order. Usage (from the root of the repo): Check that the doctest list is properly sorted and all files exist (used in `make repo-consistency`): ```bash python utils/check_doctest_list.py ``` Auto-sort the doctest list if it is not properly sorted (used in `make fix-copies`): ```bash python utils/check_doctest_list.py --fix_and_overwrite ``` """ import argparse import os # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_doctest_list.py REPO_PATH = "." DOCTEST_FILE_PATHS = ["not_doctested.txt", "slow_documentation_tests.txt"] def clean_doctest_list(doctest_file: str, overwrite: bool = False): """ Cleans the doctest in a given file. Args: doctest_file (`str`): The path to the doctest file to check or clean. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to fix problems. If `False`, will error when the file is not clean. """ non_existent_paths = [] all_paths = [] with open(doctest_file, "r", encoding="utf-8") as f: for line in f: line = line.strip().split(" ")[0] path = os.path.join(REPO_PATH, line) if not (os.path.isfile(path) or os.path.isdir(path)): non_existent_paths.append(line) all_paths.append(line) if len(non_existent_paths) > 0: non_existent_paths = "\n".join([f"- {f}" for f in non_existent_paths]) raise ValueError(f"`{doctest_file}` contains non-existent paths:\n{non_existent_paths}") sorted_paths = sorted(all_paths) if all_paths != sorted_paths: if not overwrite: raise ValueError( f"Files in `{doctest_file}` are not in alphabetical order, run `make fix-copies` to fix " "this automatically." ) with open(doctest_file, "w", encoding="utf-8") as f: f.write("\n".join(sorted_paths) + "\n") if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") args = parser.parse_args() for doctest_file in DOCTEST_FILE_PATHS: doctest_file = os.path.join(REPO_PATH, "utils", doctest_file) clean_doctest_list(doctest_file, args.fix_and_overwrite)
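A usage sketch on a throwaway list file, assuming the snippet is run from the repo root (so the referenced paths exist) with `utils/` on `sys.path`; the file name is made up:

```python
# Sketch: clean_doctest_list() errors on unsorted or missing entries, and re-sorts
# the file in place when overwrite=True. (Toy list file; its name is made up.)
import sys

sys.path.insert(0, "utils")

from check_doctest_list import clean_doctest_list

with open("toy_doctest_list.txt", "w", encoding="utf-8") as f:
    f.write("utils/check_doctest_list.py\nREADME.md\n")  # deliberately unsorted

clean_doctest_list("toy_doctest_list.txt", overwrite=True)  # rewrites in sorted order
```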
mavonic_private_repos/transformers/utils/check_config_docstrings.py
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import re from transformers.utils import direct_transformers_import # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_config_docstrings.py PATH_TO_TRANSFORMERS = "src/transformers" # This is to make sure the transformers module imported is the one in the repo. transformers = direct_transformers_import(PATH_TO_TRANSFORMERS) CONFIG_MAPPING = transformers.models.auto.configuration_auto.CONFIG_MAPPING # Regex pattern used to find the checkpoint mentioned in the docstring of `config_class`. # For example, `[google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)` _re_checkpoint = re.compile(r"\[(.+?)\]\((https://huggingface\.co/.+?)\)") CONFIG_CLASSES_TO_IGNORE_FOR_DOCSTRING_CHECKPOINT_CHECK = { "DecisionTransformerConfig", "EncoderDecoderConfig", "MusicgenConfig", "RagConfig", "SpeechEncoderDecoderConfig", "TimmBackboneConfig", "VisionEncoderDecoderConfig", "VisionTextDualEncoderConfig", "LlamaConfig", } def get_checkpoint_from_config_class(config_class): checkpoint = None # source code of `config_class` config_source = inspect.getsource(config_class) checkpoints = _re_checkpoint.findall(config_source) # Each `checkpoint` is a tuple of a checkpoint name and a checkpoint link. # For example, `('google-bert/bert-base-uncased', 'https://huggingface.co/google-bert/bert-base-uncased')` for ckpt_name, ckpt_link in checkpoints: # allow the link to end with `/` if ckpt_link.endswith("/"): ckpt_link = ckpt_link[:-1] # verify the checkpoint name corresponds to the checkpoint link ckpt_link_from_name = f"https://huggingface.co/{ckpt_name}" if ckpt_link == ckpt_link_from_name: checkpoint = ckpt_name break return checkpoint def check_config_docstrings_have_checkpoints(): configs_without_checkpoint = [] for config_class in list(CONFIG_MAPPING.values()): # Skip deprecated models if "models.deprecated" in config_class.__module__: continue checkpoint = get_checkpoint_from_config_class(config_class) name = config_class.__name__ if checkpoint is None and name not in CONFIG_CLASSES_TO_IGNORE_FOR_DOCSTRING_CHECKPOINT_CHECK: configs_without_checkpoint.append(name) if len(configs_without_checkpoint) > 0: message = "\n".join(sorted(configs_without_checkpoint)) raise ValueError( f"The following configurations don't contain any valid checkpoint:\n{message}\n\n" "The requirement is to include a link pointing to one of the models of this architecture in the " "docstring of the config classes listed above. The link should have be a markdown format like " "[myorg/mymodel](https://huggingface.co/myorg/mymodel)." ) if __name__ == "__main__": check_config_docstrings_have_checkpoints()
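The check boils down to the `_re_checkpoint` regex plus a consistency test between the markdown link text and its URL. A toy sketch with an illustrative docstring line:

```python
# Toy sketch of the docstring check: a checkpoint only counts when the markdown
# link text matches its huggingface.co URL. (The docstring line is illustrative.)
import re

_re_checkpoint = re.compile(r"\[(.+?)\]\((https://huggingface\.co/.+?)\)")

docstring = (
    "defaults will yield a configuration similar to that of "
    "[google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)."
)

for name, link in _re_checkpoint.findall(docstring):
    if link.rstrip("/") == f"https://huggingface.co/{name}":
        print(f"valid checkpoint reference: {name}")
```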
mavonic_private_repos/transformers/utils/notification_service_quantization.py
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import ast import json import os import sys import time from typing import Dict from get_ci_error_statistics import get_jobs from notification_service import ( Message, handle_stacktraces, handle_test_results, prepare_reports, retrieve_artifact, retrieve_available_artifacts, ) from slack_sdk import WebClient client = WebClient(token=os.environ["CI_SLACK_BOT_TOKEN"]) class QuantizationMessage(Message): def __init__( self, title: str, results: Dict, ): self.title = title # Failures and success of the modeling tests self.n_success = sum(r["success"] for r in results.values()) self.single_gpu_failures = sum(r["failed"]["single"] for r in results.values()) self.multi_gpu_failures = sum(r["failed"]["multi"] for r in results.values()) self.n_failures = self.single_gpu_failures + self.multi_gpu_failures self.n_tests = self.n_failures + self.n_success self.results = results self.thread_ts = None @property def payload(self) -> str: blocks = [self.header] if self.n_failures > 0: blocks.append(self.failures_overwiew) blocks.append(self.failures_detailed) if self.n_failures == 0: blocks.append(self.no_failures) return json.dumps(blocks) @property def time(self) -> str: all_results = self.results.values() time_spent = [] for r in all_results: if len(r["time_spent"]): time_spent.extend([x for x in r["time_spent"].split(", ") if len(x.strip())]) total_secs = 0 for time in time_spent: time_parts = time.split(":") # Time can be formatted as xx:xx:xx, as .xx, or as x.xx if the time spent was less than a minute. if len(time_parts) == 1: time_parts = [0, 0, time_parts[0]] hours, minutes, seconds = int(time_parts[0]), int(time_parts[1]), float(time_parts[2]) total_secs += hours * 3600 + minutes * 60 + seconds hours, minutes, seconds = total_secs // 3600, (total_secs % 3600) // 60, total_secs % 60 return f"{int(hours)}h{int(minutes)}m{int(seconds)}s" @property def failures_overwiew(self) -> Dict: return { "type": "section", "text": { "type": "plain_text", "text": ( f"There were {self.n_failures} failures, out of {self.n_tests} tests.\n" f"The suite ran in {self.time}." 
), "emoji": True, }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": "Check Action results", "emoji": True}, "url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}", }, } @property def failures_detailed(self) -> Dict: failures = {k: v["failed"] for k, v in self.results.items()} individual_reports = [] for key, value in failures.items(): device_report = self.get_device_report(value) if sum(value.values()): report = f"{device_report}{key}" individual_reports.append(report) header = "Single | Multi | Category\n" failures_report = prepare_reports( title="The following quantization tests had failures", header=header, reports=individual_reports ) return {"type": "section", "text": {"type": "mrkdwn", "text": failures_report}} def post(self): payload = self.payload print("Sending the following payload") print(json.dumps({"blocks": json.loads(payload)})) text = f"{self.n_failures} failures out of {self.n_tests} tests," if self.n_failures else "All tests passed." self.thread_ts = client.chat_postMessage( channel=SLACK_REPORT_CHANNEL_ID, blocks=payload, text=text, ) def post_reply(self): if self.thread_ts is None: raise ValueError("Can only post reply if a post has been made.") for job, job_result in self.results.items(): if len(job_result["failures"]): for device, failures in job_result["failures"].items(): blocks = self.get_reply_blocks( job, job_result, failures, device, text=f'Number of failures: {job_result["failed"][device]}', ) print("Sending the following reply") print(json.dumps({"blocks": blocks})) client.chat_postMessage( channel="#transformers-ci-daily-quantization", text=f"Results for {job}", blocks=blocks, thread_ts=self.thread_ts["ts"], ) time.sleep(1) if __name__ == "__main__": setup_status = os.environ.get("SETUP_STATUS") SLACK_REPORT_CHANNEL_ID = os.environ["SLACK_REPORT_CHANNEL"] setup_failed = True if setup_status is not None and setup_status != "success" else False # This env. variable is set in workflow file (under the job `send_results`). ci_event = os.environ["CI_EVENT"] title = f"๐Ÿค— Results of the {ci_event} tests." if setup_failed: Message.error_out( title, ci_title="", runner_not_available=False, runner_failed=False, setup_failed=setup_failed ) exit(0) arguments = sys.argv[1:][0] try: quantization_matrix = ast.literal_eval(arguments) # Need to change from elements like `quantization/bnb` to `quantization_bnb` (the ones used as artifact names). 
quantization_matrix = [x.replace("quantization/", "quantization_") for x in quantization_matrix] except SyntaxError: Message.error_out(title, ci_title="") raise ValueError("Errored out.") available_artifacts = retrieve_available_artifacts() quantization_results = { quant: { "failed": {"single": 0, "multi": 0}, "success": 0, "time_spent": "", "failures": {}, "job_link": {}, } for quant in quantization_matrix if f"run_quantization_torch_gpu_{ quant }_test_reports" in available_artifacts } github_actions_jobs = get_jobs( workflow_run_id=os.environ["GITHUB_RUN_ID"], token=os.environ["ACCESS_REPO_INFO_TOKEN"] ) github_actions_job_links = {job["name"]: job["html_url"] for job in github_actions_jobs} artifact_name_to_job_map = {} for job in github_actions_jobs: for step in job["steps"]: if step["name"].startswith("Test suite reports artifacts: "): artifact_name = step["name"][len("Test suite reports artifacts: ") :] artifact_name_to_job_map[artifact_name] = job break for quant in quantization_results.keys(): for artifact_path in available_artifacts[f"run_quantization_torch_gpu_{ quant }_test_reports"].paths: artifact = retrieve_artifact(artifact_path["path"], artifact_path["gpu"]) if "stats" in artifact: # Link to the GitHub Action job job = artifact_name_to_job_map[artifact_path["path"]] quantization_results[quant]["job_link"][artifact_path["gpu"]] = job["html_url"] failed, success, time_spent = handle_test_results(artifact["stats"]) quantization_results[quant]["failed"][artifact_path["gpu"]] += failed quantization_results[quant]["success"] += success quantization_results[quant]["time_spent"] += time_spent[1:-1] + ", " stacktraces = handle_stacktraces(artifact["failures_line"]) for line in artifact["summary_short"].split("\n"): if line.startswith("FAILED "): line = line[len("FAILED ") :] line = line.split()[0].replace("\n", "") if artifact_path["gpu"] not in quantization_results[quant]["failures"]: quantization_results[quant]["failures"][artifact_path["gpu"]] = [] quantization_results[quant]["failures"][artifact_path["gpu"]].append( {"line": line, "trace": stacktraces.pop(0)} ) message = QuantizationMessage( title, results=quantization_results, ) message.post() message.post_reply()
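# ---------------------------------------------------------------------------
# Illustrative sketch only (an editor's assumption, not part of the CI flow):
# per the code above, the `results` dict consumed by `QuantizationMessage` is
# keyed by quantization backend and carries "failed" (per-device counts),
# "success", "time_spent", "failures" and "job_link" entries. A minimal,
# hypothetical payload could look like this:
#
#     example_results = {
#         "quantization_bnb": {
#             "failed": {"single": 1, "multi": 0},
#             "success": 42,
#             "time_spent": "0:02:31, ",
#             "failures": {
#                 "single": [{"line": "tests/quantization/example/test_example.py::ExampleTest::test_generate", "trace": "..."}]
#             },
#             "job_link": {"single": "https://github.com/huggingface/transformers/actions/runs/<run_id>"},
#         }
#     }
#     message = QuantizationMessage("Results of the scheduled quantization tests.", results=example_results)
#     print(message.payload)  # renders the header, failure overview and detailed failure blocks defined above
#
# Actually running this file requires the CI_SLACK_BOT_TOKEN (read at import
# time) and SLACK_REPORT_CHANNEL environment variables used for posting.
# ---------------------------------------------------------------------------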
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/not_doctested.txt
docs/source/en/_config.py docs/source/en/accelerate.md docs/source/en/add_new_model.md docs/source/en/add_new_pipeline.md docs/source/en/attention.md docs/source/en/benchmarks.md docs/source/en/bertology.md docs/source/en/big_models.md docs/source/en/community.md docs/source/en/contributing.md docs/source/en/create_a_model.md docs/source/en/custom_models.md docs/source/en/custom_tools.md docs/source/en/debugging.md docs/source/en/fast_tokenizers.md docs/source/en/glossary.md docs/source/en/hpo_train.md docs/source/en/index.md docs/source/en/installation.md docs/source/en/internal/audio_utils.md docs/source/en/internal/file_utils.md docs/source/en/internal/image_processing_utils.md docs/source/en/internal/modeling_utils.md docs/source/en/internal/pipelines_utils.md docs/source/en/internal/time_series_utils.md docs/source/en/internal/tokenization_utils.md docs/source/en/internal/trainer_utils.md docs/source/en/llm_tutorial.md docs/source/en/main_classes/agent.md docs/source/en/main_classes/callback.md docs/source/en/main_classes/configuration.md docs/source/en/main_classes/data_collator.md docs/source/en/main_classes/deepspeed.md docs/source/en/main_classes/feature_extractor.md docs/source/en/main_classes/image_processor.md docs/source/en/main_classes/keras_callbacks.md docs/source/en/main_classes/logging.md docs/source/en/main_classes/model.md docs/source/en/main_classes/onnx.md docs/source/en/main_classes/optimizer_schedules.md docs/source/en/main_classes/output.md docs/source/en/main_classes/pipelines.md docs/source/en/main_classes/processors.md docs/source/en/main_classes/quantization.md docs/source/en/main_classes/tokenizer.md docs/source/en/main_classes/trainer.md docs/source/en/model_doc/albert.md docs/source/en/model_doc/align.md docs/source/en/model_doc/altclip.md docs/source/en/model_doc/audio-spectrogram-transformer.md docs/source/en/model_doc/auto.md docs/source/en/model_doc/autoformer.md docs/source/en/model_doc/bark.md docs/source/en/model_doc/bart.md docs/source/en/model_doc/barthez.md docs/source/en/model_doc/bartpho.md docs/source/en/model_doc/beit.md docs/source/en/model_doc/bert-generation.md docs/source/en/model_doc/bert-japanese.md docs/source/en/model_doc/bert.md docs/source/en/model_doc/bertweet.md docs/source/en/model_doc/big_bird.md docs/source/en/model_doc/bigbird_pegasus.md docs/source/en/model_doc/biogpt.md docs/source/en/model_doc/bit.md docs/source/en/model_doc/blenderbot-small.md docs/source/en/model_doc/blenderbot.md docs/source/en/model_doc/blip-2.md docs/source/en/model_doc/blip.md docs/source/en/model_doc/bloom.md docs/source/en/model_doc/bort.md docs/source/en/model_doc/bridgetower.md docs/source/en/model_doc/camembert.md docs/source/en/model_doc/canine.md docs/source/en/model_doc/chinese_clip.md docs/source/en/model_doc/clap.md docs/source/en/model_doc/clip.md docs/source/en/model_doc/clipseg.md docs/source/en/model_doc/codegen.md docs/source/en/model_doc/conditional_detr.md docs/source/en/model_doc/convbert.md docs/source/en/model_doc/convnext.md docs/source/en/model_doc/convnextv2.md docs/source/en/model_doc/cpm.md docs/source/en/model_doc/cpmant.md docs/source/en/model_doc/ctrl.md docs/source/en/model_doc/cvt.md docs/source/en/model_doc/data2vec.md docs/source/en/model_doc/deberta-v2.md docs/source/en/model_doc/deberta.md docs/source/en/model_doc/decision_transformer.md docs/source/en/model_doc/deformable_detr.md docs/source/en/model_doc/deit.md docs/source/en/model_doc/deplot.md docs/source/en/model_doc/deta.md docs/source/en/model_doc/detr.md 
docs/source/en/model_doc/dialogpt.md docs/source/en/model_doc/dinat.md docs/source/en/model_doc/dinov2.md docs/source/en/model_doc/distilbert.md docs/source/en/model_doc/dit.md docs/source/en/model_doc/dpr.md docs/source/en/model_doc/dpt.md docs/source/en/model_doc/efficientformer.md docs/source/en/model_doc/efficientnet.md docs/source/en/model_doc/electra.md docs/source/en/model_doc/encodec.md docs/source/en/model_doc/ernie.md docs/source/en/model_doc/ernie_m.md docs/source/en/model_doc/esm.md docs/source/en/model_doc/flan-t5.md docs/source/en/model_doc/flan-ul2.md docs/source/en/model_doc/flaubert.md docs/source/en/model_doc/flava.md docs/source/en/model_doc/fnet.md docs/source/en/model_doc/focalnet.md docs/source/en/model_doc/fsmt.md docs/source/en/model_doc/funnel.md docs/source/en/model_doc/git.md docs/source/en/model_doc/glpn.md docs/source/en/model_doc/gpt-sw3.md docs/source/en/model_doc/gpt2.md docs/source/en/model_doc/gpt_bigcode.md docs/source/en/model_doc/gpt_neo.md docs/source/en/model_doc/gpt_neox.md docs/source/en/model_doc/gpt_neox_japanese.md docs/source/en/model_doc/gptj.md docs/source/en/model_doc/gptsan-japanese.md docs/source/en/model_doc/graphormer.md docs/source/en/model_doc/groupvit.md docs/source/en/model_doc/herbert.md docs/source/en/model_doc/hubert.md docs/source/en/model_doc/ibert.md docs/source/en/model_doc/idefics.md docs/source/en/model_doc/imagegpt.md docs/source/en/model_doc/informer.md docs/source/en/model_doc/instructblip.md docs/source/en/model_doc/jukebox.md docs/source/en/model_doc/layoutlm.md docs/source/en/model_doc/layoutlmv2.md docs/source/en/model_doc/layoutlmv3.md docs/source/en/model_doc/layoutxlm.md docs/source/en/model_doc/led.md docs/source/en/model_doc/levit.md docs/source/en/model_doc/lilt.md docs/source/en/model_doc/llama.md docs/source/en/model_doc/llama2.md docs/source/en/model_doc/llava.md docs/source/en/model_doc/llava_next.md docs/source/en/model_doc/longformer.md docs/source/en/model_doc/longt5.md docs/source/en/model_doc/luke.md docs/source/en/model_doc/lxmert.md docs/source/en/model_doc/m2m_100.md docs/source/en/model_doc/madlad-400.md docs/source/en/model_doc/marian.md docs/source/en/model_doc/mask2former.md docs/source/en/model_doc/maskformer.md docs/source/en/model_doc/matcha.md docs/source/en/model_doc/mbart.md docs/source/en/model_doc/mctct.md docs/source/en/model_doc/mega.md docs/source/en/model_doc/megatron-bert.md docs/source/en/model_doc/megatron_gpt2.md docs/source/en/model_doc/mgp-str.md docs/source/en/model_doc/mistral.md docs/source/en/model_doc/mixtral.md docs/source/en/model_doc/mluke.md docs/source/en/model_doc/mms.md docs/source/en/model_doc/mobilebert.md docs/source/en/model_doc/mobilenet_v1.md docs/source/en/model_doc/mobilenet_v2.md docs/source/en/model_doc/mobilevit.md docs/source/en/model_doc/mobilevitv2.md docs/source/en/model_doc/mpnet.md docs/source/en/model_doc/mpt.md docs/source/en/model_doc/mra.md docs/source/en/model_doc/mt5.md docs/source/en/model_doc/musicgen.md docs/source/en/model_doc/musicgen_melody.md docs/source/en/model_doc/mvp.md docs/source/en/model_doc/nat.md docs/source/en/model_doc/nezha.md docs/source/en/model_doc/nllb-moe.md docs/source/en/model_doc/nllb.md docs/source/en/model_doc/nystromformer.md docs/source/en/model_doc/oneformer.md docs/source/en/model_doc/open-llama.md docs/source/en/model_doc/openai-gpt.md docs/source/en/model_doc/opt.md docs/source/en/model_doc/owlvit.md docs/source/en/model_doc/pegasus.md docs/source/en/model_doc/pegasus_x.md docs/source/en/model_doc/perceiver.md 
docs/source/en/model_doc/phobert.md docs/source/en/model_doc/pix2struct.md docs/source/en/model_doc/plbart.md docs/source/en/model_doc/poolformer.md docs/source/en/model_doc/pop2piano.md docs/source/en/model_doc/prophetnet.md docs/source/en/model_doc/pvt.md docs/source/en/model_doc/qdqbert.md docs/source/en/model_doc/qwen2.md docs/source/en/model_doc/qwen2_moe.md docs/source/en/model_doc/rag.md docs/source/en/model_doc/realm.md docs/source/en/model_doc/reformer.md docs/source/en/model_doc/regnet.md docs/source/en/model_doc/rembert.md docs/source/en/model_doc/resnet.md docs/source/en/model_doc/retribert.md docs/source/en/model_doc/roberta-prelayernorm.md docs/source/en/model_doc/roberta.md docs/source/en/model_doc/roc_bert.md docs/source/en/model_doc/roformer.md docs/source/en/model_doc/rwkv.md docs/source/en/model_doc/sam.md docs/source/en/model_doc/segformer.md docs/source/en/model_doc/sew-d.md docs/source/en/model_doc/sew.md docs/source/en/model_doc/speech-encoder-decoder.md docs/source/en/model_doc/speech_to_text_2.md docs/source/en/model_doc/speecht5.md docs/source/en/model_doc/splinter.md docs/source/en/model_doc/squeezebert.md docs/source/en/model_doc/swiftformer.md docs/source/en/model_doc/swin.md docs/source/en/model_doc/swin2sr.md docs/source/en/model_doc/swinv2.md docs/source/en/model_doc/table-transformer.md docs/source/en/model_doc/tapas.md docs/source/en/model_doc/time_series_transformer.md docs/source/en/model_doc/timesformer.md docs/source/en/model_doc/trajectory_transformer.md docs/source/en/model_doc/transfo-xl.md docs/source/en/model_doc/trocr.md docs/source/en/model_doc/tvlt.md docs/source/en/model_doc/ul2.md docs/source/en/model_doc/umt5.md docs/source/en/model_doc/unispeech-sat.md docs/source/en/model_doc/unispeech.md docs/source/en/model_doc/upernet.md docs/source/en/model_doc/van.md docs/source/en/model_doc/videomae.md docs/source/en/model_doc/vilt.md docs/source/en/model_doc/vipllava.md docs/source/en/model_doc/vision-encoder-decoder.md docs/source/en/model_doc/vision-text-dual-encoder.md docs/source/en/model_doc/visual_bert.md docs/source/en/model_doc/vit.md docs/source/en/model_doc/vit_hybrid.md docs/source/en/model_doc/vit_mae.md docs/source/en/model_doc/vit_msn.md docs/source/en/model_doc/vivit.md docs/source/en/model_doc/wav2vec2-conformer.md docs/source/en/model_doc/wav2vec2.md docs/source/en/model_doc/wav2vec2_phoneme.md docs/source/en/model_doc/wavlm.md docs/source/en/model_doc/whisper.md docs/source/en/model_doc/xclip.md docs/source/en/model_doc/xglm.md docs/source/en/model_doc/xlm-prophetnet.md docs/source/en/model_doc/xlm-roberta-xl.md docs/source/en/model_doc/xlm-roberta.md docs/source/en/model_doc/xlm-v.md docs/source/en/model_doc/xlm.md docs/source/en/model_doc/xlnet.md docs/source/en/model_doc/xls_r.md docs/source/en/model_doc/xlsr_wav2vec2.md docs/source/en/model_doc/xmod.md docs/source/en/model_doc/yolos.md docs/source/en/model_doc/yoso.md docs/source/en/model_memory_anatomy.md docs/source/en/model_sharing.md docs/source/en/model_summary.md docs/source/en/multilingual.md docs/source/en/notebooks.md docs/source/en/pad_truncation.md docs/source/en/peft.md docs/source/en/perf_hardware.md docs/source/en/perf_infer_cpu.md docs/source/en/perf_infer_gpu_one.md docs/source/en/perf_torch_compile.md docs/source/en/perf_train_cpu.md docs/source/en/perf_train_cpu_many.md docs/source/en/perf_train_gpu_many.md docs/source/en/perf_train_gpu_one.md docs/source/en/perf_train_special.md docs/source/en/perf_train_tpu_tf.md docs/source/en/performance.md 
docs/source/en/perplexity.md docs/source/en/philosophy.md docs/source/en/pipeline_webserver.md docs/source/en/pr_checks.md docs/source/en/preprocessing.md docs/source/en/run_scripts.md docs/source/en/sagemaker.md docs/source/en/serialization.md docs/source/en/tasks/asr.md docs/source/en/tasks/audio_classification.md docs/source/en/tasks/document_question_answering.md docs/source/en/tasks/idefics.md docs/source/en/tasks/image_captioning.md docs/source/en/tasks/image_classification.md docs/source/en/tasks/language_modeling.md docs/source/en/tasks/masked_language_modeling.md docs/source/en/tasks/monocular_depth_estimation.md docs/source/en/tasks/multiple_choice.md docs/source/en/tasks/object_detection.md docs/source/en/tasks/question_answering.md docs/source/en/tasks/semantic_segmentation.md docs/source/en/tasks/sequence_classification.md docs/source/en/tasks/summarization.md docs/source/en/tasks/text-to-speech.md docs/source/en/tasks/token_classification.md docs/source/en/tasks/translation.md docs/source/en/tasks/video_classification.md docs/source/en/tasks/visual_question_answering.md docs/source/en/tasks/zero_shot_image_classification.md docs/source/en/tasks/zero_shot_object_detection.md docs/source/en/tasks_explained.md docs/source/en/tf_xla.md docs/source/en/tflite.md docs/source/en/tokenizer_summary.md docs/source/en/torchscript.md docs/source/en/training.md docs/source/en/transformers_agents.md docs/source/en/troubleshooting.md src/transformers/activations.py src/transformers/activations_tf.py src/transformers/audio_utils.py src/transformers/benchmark/benchmark.py src/transformers/benchmark/benchmark_args.py src/transformers/benchmark/benchmark_args_tf.py src/transformers/benchmark/benchmark_args_utils.py src/transformers/benchmark/benchmark_tf.py src/transformers/benchmark/benchmark_utils.py src/transformers/commands/add_new_model_like.py src/transformers/commands/convert.py src/transformers/commands/download.py src/transformers/commands/env.py src/transformers/commands/lfs.py src/transformers/commands/pt_to_tf.py src/transformers/commands/run.py src/transformers/commands/serving.py src/transformers/commands/train.py src/transformers/commands/transformers_cli.py src/transformers/commands/user.py src/transformers/configuration_utils.py src/transformers/convert_graph_to_onnx.py src/transformers/convert_pytorch_checkpoint_to_tf2.py src/transformers/convert_slow_tokenizer.py src/transformers/convert_slow_tokenizers_checkpoints_to_fast.py src/transformers/convert_tf_hub_seq_to_seq_bert_to_pytorch.py src/transformers/data/data_collator.py src/transformers/data/datasets/glue.py src/transformers/data/datasets/language_modeling.py src/transformers/data/datasets/squad.py src/transformers/data/metrics/squad_metrics.py src/transformers/data/processors/glue.py src/transformers/data/processors/squad.py src/transformers/data/processors/utils.py src/transformers/data/processors/xnli.py src/transformers/debug_utils.py src/transformers/deepspeed.py src/transformers/dependency_versions_check.py src/transformers/dependency_versions_table.py src/transformers/dynamic_module_utils.py src/transformers/feature_extraction_sequence_utils.py src/transformers/feature_extraction_utils.py src/transformers/file_utils.py src/transformers/hf_argparser.py src/transformers/hyperparameter_search.py src/transformers/image_processing_utils.py src/transformers/image_transforms.py src/transformers/image_utils.py src/transformers/integrations/bitsandbytes.py src/transformers/integrations/deepspeed.py 
src/transformers/integrations/integration_utils.py src/transformers/integrations/peft.py src/transformers/keras_callbacks.py src/transformers/modelcard.py src/transformers/modeling_flax_outputs.py src/transformers/modeling_flax_pytorch_utils.py src/transformers/modeling_flax_utils.py src/transformers/modeling_outputs.py src/transformers/modeling_tf_outputs.py src/transformers/modeling_tf_pytorch_utils.py src/transformers/modeling_tf_utils.py src/transformers/modeling_utils.py src/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py src/transformers/models/albert/modeling_flax_albert.py src/transformers/models/align/configuration_align.py src/transformers/models/align/convert_align_tf_to_hf.py src/transformers/models/align/modeling_align.py src/transformers/models/altclip/configuration_altclip.py src/transformers/models/altclip/modeling_altclip.py src/transformers/models/audio_spectrogram_transformer/configuration_audio_spectrogram_transformer.py src/transformers/models/audio_spectrogram_transformer/convert_audio_spectrogram_transformer_original_to_pytorch.py src/transformers/models/auto/auto_factory.py src/transformers/models/auto/configuration_auto.py src/transformers/models/auto/modeling_auto.py src/transformers/models/auto/modeling_flax_auto.py src/transformers/models/auto/modeling_tf_auto.py src/transformers/models/autoformer/configuration_autoformer.py src/transformers/models/autoformer/modeling_autoformer.py src/transformers/models/bark/convert_suno_to_hf.py src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/bart/modeling_flax_bart.py src/transformers/models/bart/modeling_tf_bart.py src/transformers/models/beit/convert_beit_unilm_to_pytorch.py src/transformers/models/beit/modeling_flax_beit.py src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py src/transformers/models/bert/convert_bert_pytorch_checkpoint_to_original_tf.py src/transformers/models/bert/convert_bert_token_dropping_original_tf2_checkpoint_to_pytorch.py src/transformers/models/bert/modeling_flax_bert.py src/transformers/models/bert_generation/modeling_bert_generation.py src/transformers/models/big_bird/convert_bigbird_original_tf_checkpoint_to_pytorch.py src/transformers/models/big_bird/modeling_flax_big_bird.py src/transformers/models/bigbird_pegasus/convert_bigbird_pegasus_tf_to_pytorch.py src/transformers/models/biogpt/configuration_biogpt.py src/transformers/models/biogpt/convert_biogpt_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/biogpt/modeling_biogpt.py src/transformers/models/bit/configuration_bit.py src/transformers/models/bit/convert_bit_to_pytorch.py src/transformers/models/bit/modeling_bit.py src/transformers/models/blenderbot/convert_blenderbot_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/blenderbot/modeling_flax_blenderbot.py src/transformers/models/blenderbot/modeling_tf_blenderbot.py src/transformers/models/blenderbot_small/modeling_flax_blenderbot_small.py src/transformers/models/blenderbot_small/modeling_tf_blenderbot_small.py src/transformers/models/blip/configuration_blip.py src/transformers/models/blip/convert_blip_original_pytorch_to_hf.py src/transformers/models/blip/modeling_blip_text.py src/transformers/models/blip/modeling_tf_blip_text.py src/transformers/models/blip_2/configuration_blip_2.py src/transformers/models/blip_2/convert_blip_2_original_to_pytorch.py 
src/transformers/models/blip_2/modeling_blip_2.py src/transformers/models/bloom/convert_bloom_original_checkpoint_to_pytorch.py src/transformers/models/bloom/modeling_bloom.py src/transformers/models/bloom/modeling_flax_bloom.py src/transformers/models/bridgetower/configuration_bridgetower.py src/transformers/models/bridgetower/modeling_bridgetower.py src/transformers/models/bros/convert_bros_to_pytorch.py src/transformers/models/byt5/convert_byt5_original_tf_checkpoint_to_pytorch.py src/transformers/models/camembert/modeling_camembert.py src/transformers/models/camembert/modeling_tf_camembert.py src/transformers/models/canine/convert_canine_original_tf_checkpoint_to_pytorch.py src/transformers/models/chinese_clip/configuration_chinese_clip.py src/transformers/models/chinese_clip/convert_chinese_clip_original_pytorch_to_hf.py src/transformers/models/chinese_clip/modeling_chinese_clip.py src/transformers/models/clap/convert_clap_original_pytorch_to_hf.py src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py src/transformers/models/clip/modeling_clip.py src/transformers/models/clip/modeling_flax_clip.py src/transformers/models/clip/modeling_tf_clip.py src/transformers/models/clipseg/configuration_clipseg.py src/transformers/models/clipseg/convert_clipseg_original_pytorch_to_hf.py src/transformers/models/codegen/modeling_codegen.py src/transformers/models/conditional_detr/convert_conditional_detr_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/convbert/convert_convbert_original_tf1_checkpoint_to_pytorch_and_tf2.py src/transformers/models/convbert/modeling_convbert.py src/transformers/models/convbert/modeling_tf_convbert.py src/transformers/models/convnext/convert_convnext_to_pytorch.py src/transformers/models/convnext/modeling_tf_convnext.py src/transformers/models/convnextv2/configuration_convnextv2.py src/transformers/models/convnextv2/convert_convnextv2_to_pytorch.py src/transformers/models/convnextv2/modeling_convnextv2.py src/transformers/models/cpmant/configuration_cpmant.py src/transformers/models/cpmant/modeling_cpmant.py src/transformers/models/cpmant/tokenization_cpmant.py src/transformers/models/ctrl/modeling_tf_ctrl.py src/transformers/models/cvt/convert_cvt_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/cvt/modeling_tf_cvt.py src/transformers/models/data2vec/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/data2vec/convert_data2vec_text_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/data2vec/convert_data2vec_vision_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/data2vec/modeling_data2vec_text.py src/transformers/models/data2vec/modeling_tf_data2vec_vision.py src/transformers/models/deberta/modeling_tf_deberta.py src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py src/transformers/models/decision_transformer/modeling_decision_transformer.py src/transformers/models/deformable_detr/convert_deformable_detr_to_pytorch.py src/transformers/models/deformable_detr/load_custom.py src/transformers/models/deit/convert_deit_timm_to_pytorch.py src/transformers/models/deprecated/bort/convert_bort_original_gluonnlp_checkpoint_to_pytorch.py src/transformers/models/deprecated/mctct/configuration_mctct.py src/transformers/models/deprecated/mctct/feature_extraction_mctct.py src/transformers/models/deprecated/mctct/modeling_mctct.py src/transformers/models/deprecated/mctct/processing_mctct.py src/transformers/models/deprecated/mmbt/configuration_mmbt.py 
src/transformers/models/deprecated/mmbt/modeling_mmbt.py src/transformers/models/deprecated/open_llama/configuration_open_llama.py src/transformers/models/deprecated/open_llama/modeling_open_llama.py src/transformers/models/deprecated/retribert/configuration_retribert.py src/transformers/models/deprecated/retribert/modeling_retribert.py src/transformers/models/deprecated/retribert/tokenization_retribert.py src/transformers/models/deprecated/retribert/tokenization_retribert_fast.py src/transformers/models/deprecated/tapex/tokenization_tapex.py src/transformers/models/deprecated/trajectory_transformer/configuration_trajectory_transformer.py src/transformers/models/deprecated/trajectory_transformer/convert_trajectory_transformer_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/deprecated/trajectory_transformer/modeling_trajectory_transformer.py src/transformers/models/deprecated/transfo_xl/convert_transfo_xl_original_tf_checkpoint_to_pytorch.py src/transformers/models/deprecated/transfo_xl/modeling_tf_transfo_xl.py src/transformers/models/deprecated/transfo_xl/modeling_tf_transfo_xl_utilities.py src/transformers/models/deprecated/transfo_xl/modeling_transfo_xl.py src/transformers/models/deprecated/transfo_xl/modeling_transfo_xl_utilities.py src/transformers/models/deprecated/van/configuration_van.py src/transformers/models/deprecated/van/convert_van_to_pytorch.py src/transformers/models/deprecated/van/modeling_van.py src/transformers/models/deta/convert_deta_resnet_to_pytorch.py src/transformers/models/deta/convert_deta_swin_to_pytorch.py src/transformers/models/detr/convert_detr_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/detr/convert_detr_to_pytorch.py src/transformers/models/dialogpt/convert_dialogpt_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/dinov2/configuration_dinov2.py src/transformers/models/dinov2/convert_dinov2_to_hf.py src/transformers/models/dinov2/modeling_dinov2.py src/transformers/models/distilbert/modeling_distilbert.py src/transformers/models/distilbert/modeling_flax_distilbert.py src/transformers/models/distilbert/modeling_tf_distilbert.py src/transformers/models/dit/convert_dit_unilm_to_pytorch.py src/transformers/models/donut/configuration_donut_swin.py src/transformers/models/donut/convert_donut_to_pytorch.py src/transformers/models/donut/modeling_donut_swin.py src/transformers/models/dpr/convert_dpr_original_checkpoint_to_pytorch.py src/transformers/models/dpr/modeling_dpr.py src/transformers/models/dpr/modeling_tf_dpr.py src/transformers/models/dpt/configuration_dpt.py src/transformers/models/dpt/convert_dpt_hybrid_to_pytorch.py src/transformers/models/dpt/convert_dpt_to_pytorch.py src/transformers/models/efficientformer/configuration_efficientformer.py src/transformers/models/efficientformer/convert_efficientformer_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/efficientformer/modeling_efficientformer.py src/transformers/models/efficientnet/configuration_efficientnet.py src/transformers/models/efficientnet/convert_efficientnet_to_pytorch.py src/transformers/models/efficientnet/modeling_efficientnet.py src/transformers/models/electra/convert_electra_original_tf_checkpoint_to_pytorch.py src/transformers/models/electra/modeling_flax_electra.py src/transformers/models/encodec/configuration_encodec.py src/transformers/models/encodec/convert_encodec_checkpoint_to_pytorch.py src/transformers/models/encoder_decoder/modeling_encoder_decoder.py 
src/transformers/models/encoder_decoder/modeling_flax_encoder_decoder.py src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py src/transformers/models/ernie/modeling_ernie.py src/transformers/models/esm/configuration_esm.py src/transformers/models/esm/convert_esm.py src/transformers/models/esm/modeling_esm.py src/transformers/models/esm/modeling_esmfold.py src/transformers/models/esm/modeling_tf_esm.py src/transformers/models/esm/openfold_utils/chunk_utils.py src/transformers/models/esm/openfold_utils/data_transforms.py src/transformers/models/esm/openfold_utils/feats.py src/transformers/models/esm/openfold_utils/loss.py src/transformers/models/esm/openfold_utils/protein.py src/transformers/models/esm/openfold_utils/residue_constants.py src/transformers/models/esm/openfold_utils/rigid_utils.py src/transformers/models/esm/openfold_utils/tensor_utils.py src/transformers/models/falcon/configuration_falcon.py src/transformers/models/falcon/modeling_falcon.py src/transformers/models/flaubert/configuration_flaubert.py src/transformers/models/flaubert/modeling_flaubert.py src/transformers/models/flaubert/modeling_tf_flaubert.py src/transformers/models/flava/convert_dalle_to_flava_codebook.py src/transformers/models/flava/convert_flava_original_pytorch_to_hf.py src/transformers/models/flava/modeling_flava.py src/transformers/models/fnet/convert_fnet_original_flax_checkpoint_to_pytorch.py src/transformers/models/fnet/modeling_fnet.py src/transformers/models/focalnet/configuration_focalnet.py src/transformers/models/focalnet/convert_focalnet_to_hf_format.py src/transformers/models/focalnet/modeling_focalnet.py src/transformers/models/fsmt/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/fsmt/modeling_fsmt.py src/transformers/models/funnel/configuration_funnel.py src/transformers/models/funnel/convert_funnel_original_tf_checkpoint_to_pytorch.py src/transformers/models/funnel/modeling_funnel.py src/transformers/models/funnel/modeling_tf_funnel.py src/transformers/models/fuyu/convert_fuyu_model_weights_to_hf.py src/transformers/models/gemma/configuration_gemma.py src/transformers/models/gemma/convert_gemma_weights_to_hf.py src/transformers/models/gemma/modeling_flax_gemma.py src/transformers/models/gemma/modeling_gemma.py src/transformers/models/git/configuration_git.py src/transformers/models/git/convert_git_to_pytorch.py src/transformers/models/glpn/configuration_glpn.py src/transformers/models/glpn/convert_glpn_to_pytorch.py src/transformers/models/gpt2/CONVERSION.md src/transformers/models/gpt2/convert_gpt2_original_tf_checkpoint_to_pytorch.py src/transformers/models/gpt2/modeling_flax_gpt2.py src/transformers/models/gpt2/modeling_tf_gpt2.py src/transformers/models/gpt_bigcode/configuration_gpt_bigcode.py src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py src/transformers/models/gpt_neo/modeling_flax_gpt_neo.py src/transformers/models/gpt_neo/modeling_gpt_neo.py src/transformers/models/gpt_neox/modeling_gpt_neox.py src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py src/transformers/models/gpt_sw3/convert_megatron_to_pytorch.py src/transformers/models/gptj/configuration_gptj.py src/transformers/models/gptj/modeling_flax_gptj.py src/transformers/models/gptj/modeling_tf_gptj.py src/transformers/models/gptsan_japanese/configuration_gptsan_japanese.py src/transformers/models/gptsan_japanese/convert_gptsan_tf_checkpoint_to_pytorch.py 
src/transformers/models/gptsan_japanese/modeling_gptsan_japanese.py src/transformers/models/graphormer/collating_graphormer.py src/transformers/models/graphormer/configuration_graphormer.py src/transformers/models/graphormer/modeling_graphormer.py src/transformers/models/groupvit/configuration_groupvit.py src/transformers/models/groupvit/convert_groupvit_nvlab_to_hf.py src/transformers/models/hubert/configuration_hubert.py src/transformers/models/hubert/convert_distilhubert_original_s3prl_checkpoint_to_pytorch.py src/transformers/models/hubert/convert_hubert_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/hubert/convert_hubert_original_s3prl_checkpoint_to_pytorch.py src/transformers/models/hubert/modeling_tf_hubert.py src/transformers/models/ibert/configuration_ibert.py src/transformers/models/ibert/modeling_ibert.py src/transformers/models/ibert/quant_modules.py src/transformers/models/idefics/configuration_idefics.py src/transformers/models/idefics/image_processing_idefics.py src/transformers/models/idefics/modeling_idefics.py src/transformers/models/idefics/perceiver.py src/transformers/models/idefics/processing_idefics.py src/transformers/models/idefics/vision.py src/transformers/models/imagegpt/convert_imagegpt_original_tf2_to_pytorch.py src/transformers/models/informer/configuration_informer.py src/transformers/models/informer/modeling_informer.py src/transformers/models/instructblip/configuration_instructblip.py src/transformers/models/instructblip/convert_instructblip_original_to_pytorch.py src/transformers/models/instructblip/modeling_instructblip.py src/transformers/models/instructblip/processing_instructblip.py src/transformers/models/jamba/configuration_jamba.py src/transformers/models/jamba/modeling_jamba.py src/transformers/models/jukebox/configuration_jukebox.py src/transformers/models/jukebox/convert_jukebox.py src/transformers/models/jukebox/modeling_jukebox.py src/transformers/models/kosmos2/convert_kosmos2_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/led/configuration_led.py src/transformers/models/led/modeling_led.py src/transformers/models/led/modeling_tf_led.py src/transformers/models/levit/convert_levit_timm_to_pytorch.py src/transformers/models/levit/modeling_levit.py src/transformers/models/lilt/configuration_lilt.py src/transformers/models/llama/configuration_llama.py src/transformers/models/llama/convert_llama_weights_to_hf.py src/transformers/models/llama/modeling_llama.py src/transformers/models/llava/configuration_llava.py src/transformers/models/llava/modeling_llava.py src/transformers/models/llava_next/configuration_llava_next.py src/transformers/models/llava_next/modeling_llava_next.py src/transformers/models/longformer/configuration_longformer.py src/transformers/models/longformer/convert_longformer_original_pytorch_lightning_to_pytorch.py src/transformers/models/longt5/configuration_longt5.py src/transformers/models/longt5/convert_longt5x_checkpoint_to_flax.py src/transformers/models/longt5/modeling_flax_longt5.py src/transformers/models/luke/configuration_luke.py src/transformers/models/luke/convert_luke_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/luke/modeling_luke.py src/transformers/models/lxmert/configuration_lxmert.py src/transformers/models/lxmert/convert_lxmert_original_tf_checkpoint_to_pytorch.py src/transformers/models/lxmert/modeling_lxmert.py src/transformers/models/lxmert/modeling_tf_lxmert.py src/transformers/models/m2m_100/convert_m2m100_original_checkpoint_to_pytorch.py 
src/transformers/models/m2m_100/modeling_m2m_100.py src/transformers/models/marian/configuration_marian.py src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py src/transformers/models/marian/convert_marian_to_pytorch.py src/transformers/models/marian/modeling_flax_marian.py src/transformers/models/marian/modeling_tf_marian.py src/transformers/models/markuplm/configuration_markuplm.py src/transformers/models/markuplm/feature_extraction_markuplm.py src/transformers/models/mask2former/convert_mask2former_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/maskformer/configuration_maskformer_swin.py src/transformers/models/maskformer/convert_maskformer_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/maskformer/convert_maskformer_resnet_to_pytorch.py src/transformers/models/maskformer/convert_maskformer_swin_to_pytorch.py src/transformers/models/maskformer/modeling_maskformer_swin.py src/transformers/models/mbart/convert_mbart_original_checkpoint_to_pytorch.py src/transformers/models/mbart/modeling_flax_mbart.py src/transformers/models/mega/configuration_mega.py src/transformers/models/mega/convert_mega_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/mega/modeling_mega.py src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py src/transformers/models/megatron_bert/modeling_megatron_bert.py src/transformers/models/megatron_gpt2/checkpoint_reshaping_and_interoperability.py src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py src/transformers/models/mgp_str/configuration_mgp_str.py src/transformers/models/mgp_str/modeling_mgp_str.py src/transformers/models/mistral/configuration_mistral.py src/transformers/models/mistral/modeling_mistral.py src/transformers/models/mixtral/configuration_mixtral.py src/transformers/models/mixtral/modeling_mixtral.py src/transformers/models/mluke/convert_mluke_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/mobilebert/convert_mobilebert_original_tf_checkpoint_to_pytorch.py src/transformers/models/mobilenet_v1/configuration_mobilenet_v1.py src/transformers/models/mobilenet_v1/convert_original_tf_checkpoint_to_pytorch.py src/transformers/models/mobilenet_v2/configuration_mobilenet_v2.py src/transformers/models/mobilenet_v2/convert_original_tf_checkpoint_to_pytorch.py src/transformers/models/mobilevit/configuration_mobilevit.py src/transformers/models/mobilevit/convert_mlcvnets_to_pytorch.py src/transformers/models/mobilevitv2/convert_mlcvnets_to_pytorch.py src/transformers/models/mpnet/configuration_mpnet.py src/transformers/models/mpnet/modeling_mpnet.py src/transformers/models/mpnet/modeling_tf_mpnet.py src/transformers/models/mpt/configuration_mpt.py src/transformers/models/mpt/modeling_mpt.py src/transformers/models/mra/configuration_mra.py src/transformers/models/mra/convert_mra_pytorch_to_pytorch.py src/transformers/models/mra/modeling_mra.py src/transformers/models/mt5/configuration_mt5.py src/transformers/models/mt5/modeling_flax_mt5.py src/transformers/models/mt5/modeling_mt5.py src/transformers/models/mt5/modeling_tf_mt5.py src/transformers/models/musicgen/convert_musicgen_transformers.py src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py src/transformers/models/mvp/modeling_mvp.py src/transformers/models/nezha/modeling_nezha.py src/transformers/models/nllb_moe/configuration_nllb_moe.py src/transformers/models/nllb_moe/convert_nllb_moe_sharded_original_checkpoint_to_pytorch.py 
src/transformers/models/nllb_moe/modeling_nllb_moe.py src/transformers/models/nougat/convert_nougat_to_hf.py src/transformers/models/nystromformer/configuration_nystromformer.py src/transformers/models/nystromformer/convert_nystromformer_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/nystromformer/modeling_nystromformer.py src/transformers/models/oneformer/convert_to_hf_oneformer.py src/transformers/models/openai/convert_openai_original_tf_checkpoint_to_pytorch.py src/transformers/models/openai/modeling_openai.py src/transformers/models/openai/modeling_tf_openai.py src/transformers/models/opt/convert_opt_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/opt/modeling_flax_opt.py src/transformers/models/owlvit/configuration_owlvit.py src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py src/transformers/models/pegasus/convert_pegasus_tf_to_pytorch.py src/transformers/models/pegasus/modeling_flax_pegasus.py src/transformers/models/pegasus/modeling_tf_pegasus.py src/transformers/models/pegasus_x/modeling_pegasus_x.py src/transformers/models/perceiver/configuration_perceiver.py src/transformers/models/perceiver/convert_perceiver_haiku_to_pytorch.py src/transformers/models/persimmon/convert_persimmon_weights_to_hf.py src/transformers/models/persimmon/modeling_persimmon.py src/transformers/models/pix2struct/configuration_pix2struct.py src/transformers/models/pix2struct/convert_pix2struct_original_pytorch_to_hf.py src/transformers/models/pix2struct/image_processing_pix2struct.py src/transformers/models/pix2struct/processing_pix2struct.py src/transformers/models/plbart/convert_plbart_original_checkpoint_to_torch.py src/transformers/models/poolformer/convert_poolformer_original_to_pytorch.py src/transformers/models/pop2piano/convert_pop2piano_weights_to_hf.py src/transformers/models/pop2piano/feature_extraction_pop2piano.py src/transformers/models/pop2piano/processing_pop2piano.py src/transformers/models/pop2piano/tokenization_pop2piano.py src/transformers/models/prophetnet/configuration_prophetnet.py src/transformers/models/prophetnet/convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/prophetnet/modeling_prophetnet.py src/transformers/models/pvt/configuration_pvt.py src/transformers/models/pvt/convert_pvt_to_pytorch.py src/transformers/models/pvt/image_processing_pvt.py src/transformers/models/pvt/modeling_pvt.py src/transformers/models/qdqbert/configuration_qdqbert.py src/transformers/models/qdqbert/modeling_qdqbert.py src/transformers/models/qwen2/configuration_qwen2.py src/transformers/models/qwen2/modeling_qwen2.py src/transformers/models/qwen2/tokenization_qwen2.py src/transformers/models/qwen2/tokenization_qwen2_fast.py src/transformers/models/qwen2_moe/configuration_qwen2_moe.py src/transformers/models/qwen2_moe/modeling_qwen2_moe.py src/transformers/models/rag/configuration_rag.py src/transformers/models/rag/modeling_rag.py src/transformers/models/rag/modeling_tf_rag.py src/transformers/models/rag/retrieval_rag.py src/transformers/models/realm/modeling_realm.py src/transformers/models/realm/retrieval_realm.py src/transformers/models/recurrent_gemma/modeling_recurrent_gemma.py src/transformers/models/reformer/convert_reformer_trax_checkpoint_to_pytorch.py src/transformers/models/regnet/configuration_regnet.py src/transformers/models/regnet/convert_regnet_seer_10b_to_pytorch.py src/transformers/models/regnet/convert_regnet_to_pytorch.py src/transformers/models/regnet/modeling_flax_regnet.py 
src/transformers/models/rembert/configuration_rembert.py src/transformers/models/rembert/convert_rembert_tf_checkpoint_to_pytorch.py src/transformers/models/rembert/modeling_rembert.py src/transformers/models/rembert/modeling_tf_rembert.py src/transformers/models/resnet/convert_resnet_to_pytorch.py src/transformers/models/resnet/modeling_flax_resnet.py src/transformers/models/roberta/convert_roberta_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/roberta/modeling_flax_roberta.py src/transformers/models/roberta_prelayernorm/convert_roberta_prelayernorm_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/roberta_prelayernorm/modeling_flax_roberta_prelayernorm.py src/transformers/models/roc_bert/configuration_roc_bert.py src/transformers/models/roformer/convert_roformer_original_tf_checkpoint_to_pytorch.py src/transformers/models/roformer/modeling_flax_roformer.py src/transformers/models/roformer/modeling_roformer.py src/transformers/models/roformer/modeling_tf_roformer.py src/transformers/models/rwkv/configuration_rwkv.py src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py src/transformers/models/rwkv/modeling_rwkv.py src/transformers/models/sam/configuration_sam.py src/transformers/models/sam/convert_sam_to_hf.py src/transformers/models/sam/image_processing_sam.py src/transformers/models/sam/modeling_sam.py src/transformers/models/sam/modeling_tf_sam.py src/transformers/models/sam/processing_sam.py src/transformers/models/seamless_m4t/convert_fairseq2_to_hf.py src/transformers/models/seamless_m4t_v2/convert_fairseq2_to_hf.py src/transformers/models/segformer/configuration_segformer.py src/transformers/models/segformer/convert_segformer_original_to_pytorch.py src/transformers/models/sew/convert_sew_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/sew_d/convert_sew_d_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py src/transformers/models/speech_encoder_decoder/convert_mbart_wav2vec2_seq2seq_original_to_pytorch.py src/transformers/models/speech_encoder_decoder/convert_speech_to_text_wav2vec2_seq2seq_original_to_pytorch.py src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py src/transformers/models/speech_to_text/convert_s2t_fairseq_to_tfms.py src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py src/transformers/models/speecht5/configuration_speecht5.py src/transformers/models/speecht5/convert_hifigan.py src/transformers/models/speecht5/convert_speecht5_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/speecht5/number_normalizer.py src/transformers/models/splinter/configuration_splinter.py src/transformers/models/splinter/modeling_splinter.py src/transformers/models/squeezebert/modeling_squeezebert.py src/transformers/models/stablelm/modeling_stablelm.py src/transformers/models/starcoder2/modeling_starcoder2.py src/transformers/models/swiftformer/configuration_swiftformer.py src/transformers/models/swiftformer/convert_swiftformer_original_to_hf.py src/transformers/models/swiftformer/modeling_swiftformer.py src/transformers/models/swin/convert_swin_simmim_to_pytorch.py src/transformers/models/swin/convert_swin_timm_to_pytorch.py src/transformers/models/swin/modeling_tf_swin.py src/transformers/models/swin2sr/configuration_swin2sr.py src/transformers/models/swin2sr/convert_swin2sr_original_to_pytorch.py src/transformers/models/swinv2/convert_swinv2_timm_to_pytorch.py 
src/transformers/models/swinv2/modeling_swinv2.py src/transformers/models/switch_transformers/configuration_switch_transformers.py src/transformers/models/switch_transformers/convert_big_switch.py src/transformers/models/switch_transformers/convert_switch_transformers_original_flax_checkpoint_to_pytorch.py src/transformers/models/switch_transformers/modeling_switch_transformers.py src/transformers/models/t5/configuration_t5.py src/transformers/models/t5/convert_t5_original_tf_checkpoint_to_pytorch.py src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py src/transformers/models/t5/modeling_flax_t5.py src/transformers/models/t5/modeling_t5.py src/transformers/models/t5/modeling_tf_t5.py src/transformers/models/table_transformer/configuration_table_transformer.py src/transformers/models/table_transformer/convert_table_transformer_to_hf.py src/transformers/models/table_transformer/convert_table_transformer_to_hf_no_timm.py src/transformers/models/tapas/configuration_tapas.py src/transformers/models/tapas/convert_tapas_original_tf_checkpoint_to_pytorch.py src/transformers/models/tapas/modeling_tapas.py src/transformers/models/tapas/modeling_tf_tapas.py src/transformers/models/timesformer/convert_timesformer_to_pytorch.py src/transformers/models/timm_backbone/configuration_timm_backbone.py src/transformers/models/timm_backbone/modeling_timm_backbone.py src/transformers/models/trocr/convert_trocr_unilm_to_pytorch.py src/transformers/models/tvlt/configuration_tvlt.py src/transformers/models/tvlt/modeling_tvlt.py src/transformers/models/umt5/configuration_umt5.py src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py src/transformers/models/umt5/modeling_umt5.py src/transformers/models/unispeech/convert_unispeech_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/unispeech_sat/configuration_unispeech_sat.py src/transformers/models/unispeech_sat/convert_unispeech_original_s3prl_checkpoint_to_pytorch.py src/transformers/models/unispeech_sat/convert_unispeech_sat_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/upernet/configuration_upernet.py src/transformers/models/upernet/convert_convnext_upernet_to_pytorch.py src/transformers/models/upernet/convert_swin_upernet_to_pytorch.py src/transformers/models/videomae/configuration_videomae.py src/transformers/models/videomae/convert_videomae_to_pytorch.py src/transformers/models/vilt/configuration_vilt.py src/transformers/models/vilt/convert_vilt_original_to_pytorch.py src/transformers/models/vipllava/configuration_vipllava.py src/transformers/models/vipllava/modeling_vipllava.py src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py src/transformers/models/visual_bert/convert_visual_bert_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/visual_bert/modeling_visual_bert.py src/transformers/models/vit/convert_dino_to_pytorch.py src/transformers/models/vit/convert_vit_timm_to_pytorch.py src/transformers/models/vit/modeling_flax_vit.py src/transformers/models/vit_hybrid/configuration_vit_hybrid.py src/transformers/models/vit_hybrid/convert_vit_hybrid_timm_to_pytorch.py src/transformers/models/vit_hybrid/modeling_vit_hybrid.py 
src/transformers/models/vit_mae/convert_vit_mae_to_pytorch.py src/transformers/models/vit_mae/modeling_tf_vit_mae.py src/transformers/models/vit_msn/configuration_vit_msn.py src/transformers/models/vit_msn/convert_msn_to_pytorch.py src/transformers/models/vivit/configuration_vivit.py src/transformers/models/vivit/convert_vivit_flax_to_pytorch.py src/transformers/models/vivit/image_processing_vivit.py src/transformers/models/vivit/modeling_vivit.py src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/wav2vec2/convert_wav2vec2_original_s3prl_checkpoint_to_pytorch.py src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py src/transformers/models/wav2vec2_bert/convert_wav2vec2_seamless_checkpoint.py src/transformers/models/wav2vec2_conformer/convert_wav2vec2_conformer_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/wavlm/convert_wavlm_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/wavlm/convert_wavlm_original_s3prl_checkpoint_to_pytorch.py src/transformers/models/whisper/convert_openai_to_hf.py src/transformers/models/whisper/english_normalizer.py src/transformers/models/whisper/modeling_flax_whisper.py src/transformers/models/x_clip/configuration_x_clip.py src/transformers/models/x_clip/convert_x_clip_original_pytorch_to_hf.py src/transformers/models/xglm/configuration_xglm.py src/transformers/models/xglm/convert_xglm_original_ckpt_to_trfms.py src/transformers/models/xglm/modeling_flax_xglm.py src/transformers/models/xglm/modeling_tf_xglm.py src/transformers/models/xglm/modeling_xglm.py src/transformers/models/xlm/convert_xlm_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/xlm/modeling_tf_xlm.py src/transformers/models/xlm/modeling_xlm.py src/transformers/models/xlm_prophetnet/configuration_xlm_prophetnet.py src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py src/transformers/models/xlm_roberta/modeling_xlm_roberta.py src/transformers/models/xlm_roberta_xl/convert_xlm_roberta_xl_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py src/transformers/models/xlnet/convert_xlnet_original_tf_checkpoint_to_pytorch.py src/transformers/models/xlnet/modeling_tf_xlnet.py src/transformers/models/xlnet/modeling_xlnet.py src/transformers/models/xmod/convert_xmod_original_pytorch_checkpoint_to_pytorch.py src/transformers/models/yolos/convert_yolos_to_pytorch.py src/transformers/models/yoso/convert_yoso_pytorch_to_pytorch.py src/transformers/models/yoso/modeling_yoso.py src/transformers/onnx/__main__.py src/transformers/onnx/config.py src/transformers/onnx/convert.py src/transformers/onnx/features.py src/transformers/onnx/utils.py src/transformers/optimization.py src/transformers/optimization_tf.py src/transformers/pipelines/audio_classification.py src/transformers/pipelines/audio_utils.py src/transformers/pipelines/automatic_speech_recognition.py src/transformers/pipelines/base.py src/transformers/pipelines/conversational.py src/transformers/pipelines/depth_estimation.py src/transformers/pipelines/document_question_answering.py src/transformers/pipelines/feature_extraction.py src/transformers/pipelines/fill_mask.py src/transformers/pipelines/image_classification.py src/transformers/pipelines/image_segmentation.py 
src/transformers/pipelines/image_to_text.py src/transformers/pipelines/mask_generation.py src/transformers/pipelines/object_detection.py src/transformers/pipelines/pt_utils.py src/transformers/pipelines/question_answering.py src/transformers/pipelines/table_question_answering.py src/transformers/pipelines/text_classification.py src/transformers/pipelines/token_classification.py src/transformers/pipelines/video_classification.py src/transformers/pipelines/visual_question_answering.py src/transformers/pipelines/zero_shot_audio_classification.py src/transformers/pipelines/zero_shot_classification.py src/transformers/pipelines/zero_shot_image_classification.py src/transformers/pipelines/zero_shot_object_detection.py src/transformers/processing_utils.py src/transformers/pytorch_utils.py src/transformers/quantizers/auto.py src/transformers/quantizers/base.py src/transformers/quantizers/quantizer_awq.py src/transformers/quantizers/quantizer_bnb_4bit.py src/transformers/quantizers/quantizer_bnb_8bit.py src/transformers/quantizers/quantizer_gptq.py src/transformers/quantizers/quantizers_utils.py src/transformers/sagemaker/trainer_sm.py src/transformers/sagemaker/training_args_sm.py src/transformers/testing_utils.py src/transformers/tf_utils.py src/transformers/time_series_utils.py src/transformers/tokenization_utils.py src/transformers/tokenization_utils_base.py src/transformers/tokenization_utils_fast.py src/transformers/tools/agent_types.py src/transformers/tools/agents.py src/transformers/tools/base.py src/transformers/tools/document_question_answering.py src/transformers/tools/evaluate_agent.py src/transformers/tools/image_captioning.py src/transformers/tools/image_question_answering.py src/transformers/tools/image_segmentation.py src/transformers/tools/prompts.py src/transformers/tools/python_interpreter.py src/transformers/tools/speech_to_text.py src/transformers/tools/text_classification.py src/transformers/tools/text_question_answering.py src/transformers/tools/text_summarization.py src/transformers/tools/text_to_speech.py src/transformers/tools/translation.py src/transformers/trainer.py src/transformers/trainer_callback.py src/transformers/trainer_pt_utils.py src/transformers/trainer_seq2seq.py src/transformers/trainer_utils.py src/transformers/training_args.py src/transformers/training_args_seq2seq.py src/transformers/training_args_tf.py src/transformers/utils/backbone_utils.py src/transformers/utils/bitsandbytes.py src/transformers/utils/constants.py src/transformers/utils/doc.py src/transformers/utils/dummy_detectron2_objects.py src/transformers/utils/dummy_essentia_and_librosa_and_pretty_midi_and_scipy_and_torch_objects.py src/transformers/utils/dummy_flax_objects.py src/transformers/utils/dummy_keras_nlp_objects.py src/transformers/utils/dummy_music_objects.py src/transformers/utils/dummy_pt_objects.py src/transformers/utils/dummy_sentencepiece_and_tokenizers_objects.py src/transformers/utils/dummy_sentencepiece_objects.py src/transformers/utils/dummy_speech_objects.py src/transformers/utils/dummy_tensorflow_text_objects.py src/transformers/utils/dummy_tf_objects.py src/transformers/utils/dummy_tokenizers_objects.py src/transformers/utils/dummy_vision_objects.py src/transformers/utils/fx.py src/transformers/utils/generic.py src/transformers/utils/hp_naming.py src/transformers/utils/hub.py src/transformers/utils/import_utils.py src/transformers/utils/logging.py src/transformers/utils/model_parallel_utils.py src/transformers/utils/notebook.py src/transformers/utils/peft_utils.py 
src/transformers/utils/quantization_config.py src/transformers/utils/sentencepiece_model_pb2.py src/transformers/utils/sentencepiece_model_pb2_new.py src/transformers/utils/versions.py
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/check_build.py
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import importlib from pathlib import Path # Test all the extensions added in the setup FILES_TO_FIND = [ "kernels/rwkv/wkv_cuda.cu", "kernels/rwkv/wkv_op.cpp", "kernels/deformable_detr/ms_deform_attn.h", "kernels/deformable_detr/cuda/ms_deform_im2col_cuda.cuh", "models/graphormer/algos_graphormer.pyx", ] def test_custom_files_are_present(transformers_path): # Test all the extensions added in the setup for file in FILES_TO_FIND: if not (transformers_path / file).exists(): return False return True if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--check_lib", action="store_true", help="Whether to check the build or the actual package.") args = parser.parse_args() if args.check_lib: transformers_module = importlib.import_module("transformers") transformers_path = Path(transformers_module.__file__).parent else: transformers_path = Path.cwd() / "build/lib/transformers" if not test_custom_files_are_present(transformers_path): raise ValueError("The built release does not contain the custom files. Fix this before going further!")
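# ---------------------------------------------------------------------------
# Illustrative usage (an editor's assumption, not part of the original file):
# this check is meant to be run after building a release, either against the
# `build/lib/transformers` directory from the repository root, or against the
# installed/importable package with `--check_lib`:
#
#     python utils/check_build.py              # looks under <cwd>/build/lib/transformers
#     python utils/check_build.py --check_lib  # looks inside the importable `transformers` package
#
# Either invocation raises a ValueError if any file listed in FILES_TO_FIND is missing.
# ---------------------------------------------------------------------------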
0
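As a quick illustration of the check implemented in `utils/check_build.py` above, the following minimal sketch runs the same presence test against a hypothetical build directory; the paths and directory name are assumptions for illustration only, not values taken from the dump.

```py
# Illustrative sketch only: mirrors the existence check from utils/check_build.py.
from pathlib import Path

files_to_find = ["kernels/rwkv/wkv_cuda.cu", "models/graphormer/algos_graphormer.pyx"]
build_dir = Path("build/lib/transformers")  # assumed location, matching the script's default

missing = [f for f in files_to_find if not (build_dir / f).exists()]
if missing:
    print(f"Missing custom files: {missing}")
else:
    print("All custom extension files are present.")
```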
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/release.py
# coding=utf-8 # Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utility that prepares the repository for releases (or patches) by updating all versions in the relevant places. It also performs some post-release cleanup, by updating the links in the main README to respective model doc pages (from main to stable). To prepare for a release, use from the root of the repo on the release branch with: ```bash python release.py ``` or use `make pre-release`. To prepare for a patch release, use from the root of the repo on the release branch with: ```bash python release.py --patch ``` or use `make pre-patch`. To do the post-release cleanup, use from the root of the repo on the main branch with: ```bash python release.py --post_release ``` or use `make post-release`. """ import argparse import os import re import packaging.version # All paths are defined with the intent that this script should be run from the root of the repo. PATH_TO_EXAMPLES = "examples/" # This maps a type of file to the pattern to look for when searching where the version is defined, as well as the # template to follow when replacing it with the new version. REPLACE_PATTERNS = { "examples": (re.compile(r'^check_min_version\("[^"]+"\)\s*$', re.MULTILINE), 'check_min_version("VERSION")\n'), "init": (re.compile(r'^__version__\s+=\s+"([^"]+)"\s*$', re.MULTILINE), '__version__ = "VERSION"\n'), "setup": (re.compile(r'^(\s*)version\s*=\s*"[^"]+",', re.MULTILINE), r'\1version="VERSION",'), } # This maps a type of file to its path in Transformers REPLACE_FILES = { "init": "src/transformers/__init__.py", "setup": "setup.py", } README_FILE = "README.md" def update_version_in_file(fname: str, version: str, file_type: str): """ Update the version of Transformers in one file. Args: fname (`str`): The path to the file where we want to update the version. version (`str`): The new version to set in the file. file_type (`str`): The type of the file (should be a key in `REPLACE_PATTERNS`). """ with open(fname, "r", encoding="utf-8", newline="\n") as f: code = f.read() re_pattern, replace = REPLACE_PATTERNS[file_type] replace = replace.replace("VERSION", version) code = re_pattern.sub(replace, code) with open(fname, "w", encoding="utf-8", newline="\n") as f: f.write(code) def update_version_in_examples(version: str): """ Update the version in all examples files. Args: version (`str`): The new version to set in the examples. """ for folder, directories, fnames in os.walk(PATH_TO_EXAMPLES): # Removing some of the folders with non-actively maintained examples from the walk if "research_projects" in directories: directories.remove("research_projects") if "legacy" in directories: directories.remove("legacy") for fname in fnames: if fname.endswith(".py"): update_version_in_file(os.path.join(folder, fname), version, file_type="examples") def global_version_update(version: str, patch: bool = False): """ Update the version in all needed files. Args: version (`str`): The new version to set everywhere. 
patch (`bool`, *optional*, defaults to `False`): Whether or not this is a patch release. """ for pattern, fname in REPLACE_FILES.items(): update_version_in_file(fname, version, pattern) if not patch: # We don't update the version in the examples for patch releases. update_version_in_examples(version) def clean_main_ref_in_model_list(): """ Replace the links from main doc to stable doc in the model list of the README. """ # If the introduction or the conclusion of the list change, the prompts may need to be updated. _start_prompt = "๐Ÿค— Transformers currently provides the following architectures" _end_prompt = "1. Want to contribute a new model?" with open(README_FILE, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() # Find the start of the list. start_index = 0 while not lines[start_index].startswith(_start_prompt): start_index += 1 start_index += 1 index = start_index # Update the lines in the model list. while not lines[index].startswith(_end_prompt): if lines[index].startswith("1."): lines[index] = lines[index].replace( "https://huggingface.co/docs/transformers/main/model_doc", "https://huggingface.co/docs/transformers/model_doc", ) index += 1 with open(README_FILE, "w", encoding="utf-8", newline="\n") as f: f.writelines(lines) def get_version() -> packaging.version.Version: """ Reads the current version in the main __init__. """ with open(REPLACE_FILES["init"], "r") as f: code = f.read() default_version = REPLACE_PATTERNS["init"][0].search(code).groups()[0] return packaging.version.parse(default_version) def pre_release_work(patch: bool = False): """ Do all the necessary pre-release steps: - figure out the next minor release version and ask confirmation - update the version everywhere - clean up the model list in the main README Args: patch (`bool`, *optional*, defaults to `False`): Whether or not this is a patch release. """ # First let's get the default version: base version if we are in dev, bump minor otherwise. default_version = get_version() if patch and default_version.is_devrelease: raise ValueError("Can't create a patch version from the dev branch, checkout a released version!") if default_version.is_devrelease: default_version = default_version.base_version elif patch: default_version = f"{default_version.major}.{default_version.minor}.{default_version.micro + 1}" else: default_version = f"{default_version.major}.{default_version.minor + 1}.0" # Now let's ask nicely if we have found the right version. version = input(f"Which version are you releasing? [{default_version}]") if len(version) == 0: version = default_version print(f"Updating version to {version}.") global_version_update(version, patch=patch) if not patch: print("Cleaning main README, don't forget to run `make fix-copies`.") clean_main_ref_in_model_list() def post_release_work(): """ Do all the necessary post-release steps: - figure out the next dev version and ask confirmation - update the version everywhere - clean up the model list in the main README """ # First let's get the current version current_version = get_version() dev_version = f"{current_version.major}.{current_version.minor + 1}.0.dev0" current_version = current_version.base_version # Check with the user we got that right. version = input(f"Which version are we developing now? 
[{dev_version}]") if len(version) == 0: version = dev_version print(f"Updating version to {version}.") global_version_update(version) print("Cleaning main README, don't forget to run `make fix-copies`.") clean_main_ref_in_model_list() if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--post_release", action="store_true", help="Whether this is pre or post release.") parser.add_argument("--patch", action="store_true", help="Whether or not this is a patch release.") args = parser.parse_args() if not args.post_release: pre_release_work(patch=args.patch) elif args.patch: print("Nothing to do after a patch :-)") else: post_release_work()
0
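To make the pattern/template mechanism of `utils/release.py` above concrete, here is a small self-contained sketch: it applies the `init` pattern to an in-memory string the same way `update_version_in_file` does on disk. The version numbers are invented for illustration.

```py
# Sketch of the REPLACE_PATTERNS mechanism from utils/release.py (illustrative values).
import re

pattern = re.compile(r'^__version__\s+=\s+"([^"]+)"\s*$', re.MULTILINE)
template = '__version__ = "VERSION"\n'

code = '__version__ = "4.40.0.dev0"\n'
print(pattern.sub(template.replace("VERSION", "4.40.0"), code))
# -> __version__ = "4.40.0"
```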
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/add_pipeline_model_mapping_to_test.py
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """A script to add and/or update the attribute `pipeline_model_mapping` in model test files. This script will be (mostly) used in the following 2 situations: - run within a (scheduled) CI job to: - check if model test files in the library have updated `pipeline_model_mapping`, - and/or update test files and (possibly) open a GitHub pull request automatically - being run by a `transformers` member to quickly check and update some particular test file(s) This script is **NOT** intended to be run (manually) by community contributors. """ import argparse import glob import inspect import os import re import unittest from get_test_info import get_test_classes from tests.test_pipeline_mixin import pipeline_test_mapping PIPELINE_TEST_MAPPING = {} for task, _ in pipeline_test_mapping.items(): PIPELINE_TEST_MAPPING[task] = {"pt": None, "tf": None} # DO **NOT** add item to this set (unless the reason is approved) TEST_FILE_TO_IGNORE = { "tests/models/esm/test_modeling_esmfold.py", # The pipeline test mapping is added to `test_modeling_esm.py` } def get_framework(test_class): """Infer the framework from the test class `test_class`.""" if "ModelTesterMixin" in [x.__name__ for x in test_class.__bases__]: return "pt" elif "TFModelTesterMixin" in [x.__name__ for x in test_class.__bases__]: return "tf" elif "FlaxModelTesterMixin" in [x.__name__ for x in test_class.__bases__]: return "flax" else: return None def get_mapping_for_task(task, framework): """Get mappings defined in `XXXPipelineTests` for the task `task`.""" # Use the cached results if PIPELINE_TEST_MAPPING[task].get(framework, None) is not None: return PIPELINE_TEST_MAPPING[task][framework] pipeline_test_class = pipeline_test_mapping[task]["test"] mapping = None if framework == "pt": mapping = getattr(pipeline_test_class, "model_mapping", None) elif framework == "tf": mapping = getattr(pipeline_test_class, "tf_model_mapping", None) if mapping is not None: mapping = dict(mapping.items()) # cache the results PIPELINE_TEST_MAPPING[task][framework] = mapping return mapping def get_model_for_pipeline_test(test_class, task): """Get the model architecture(s) related to the test class `test_class` for a pipeline `task`.""" framework = get_framework(test_class) if framework is None: return None mapping = get_mapping_for_task(task, framework) if mapping is None: return None config_classes = list({model_class.config_class for model_class in test_class.all_model_classes}) if len(config_classes) != 1: raise ValueError("There should be exactly one configuration class from `test_class.all_model_classes`.") # This could be a list/tuple of model classes, but it's rare. 
model_class = mapping.get(config_classes[0], None) if isinstance(model_class, (tuple, list)): model_class = sorted(model_class, key=lambda x: x.__name__) return model_class def get_pipeline_model_mapping(test_class): """Get `pipeline_model_mapping` for `test_class`.""" mapping = [(task, get_model_for_pipeline_test(test_class, task)) for task in pipeline_test_mapping] mapping = sorted([(task, model) for task, model in mapping if model is not None], key=lambda x: x[0]) return dict(mapping) def get_pipeline_model_mapping_string(test_class): """Get `pipeline_model_mapping` for `test_class` as a string (to be added to the test file). This will be a 1-line string. After this is added to a test file, `make style` will format it beautifully. """ framework = get_framework(test_class) if framework == "pt": framework = "torch" default_value = "{}" mapping = get_pipeline_model_mapping(test_class) if len(mapping) == 0: return "" texts = [] for task, model_classes in mapping.items(): if isinstance(model_classes, (tuple, list)): # A list/tuple of model classes value = "(" + ", ".join([x.__name__ for x in model_classes]) + ")" else: # A single model class value = model_classes.__name__ texts.append(f'"{task}": {value}') text = "{" + ", ".join(texts) + "}" text = f"pipeline_model_mapping = {text} if is_{framework}_available() else {default_value}" return text def is_valid_test_class(test_class): """Restrict to `XXXModelTesterMixin` and should be a subclass of `unittest.TestCase`.""" base_class_names = {"ModelTesterMixin", "TFModelTesterMixin", "FlaxModelTesterMixin"} if not issubclass(test_class, unittest.TestCase): return False return len(base_class_names.intersection([x.__name__ for x in test_class.__bases__])) > 0 def find_test_class(test_file): """Find a test class in `test_file` to which we will add `pipeline_model_mapping`.""" test_classes = [x for x in get_test_classes(test_file) if is_valid_test_class(x)] target_test_class = None for test_class in test_classes: # If a test class has defined `pipeline_model_mapping`, let's take it if getattr(test_class, "pipeline_model_mapping", None) is not None: target_test_class = test_class break # Take the test class with the shortest name (just a heuristic) if target_test_class is None and len(test_classes) > 0: target_test_class = sorted(test_classes, key=lambda x: (len(x.__name__), x.__name__))[0] return target_test_class def find_block_ending(lines, start_idx, indent_level): end_idx = start_idx for idx, line in enumerate(lines[start_idx:]): indent = len(line) - len(line.lstrip()) if idx == 0 or indent > indent_level or (indent == indent_level and line.strip() == ")"): end_idx = start_idx + idx elif idx > 0 and indent <= indent_level: # Outside the definition block of `pipeline_model_mapping` break return end_idx def add_pipeline_model_mapping(test_class, overwrite=False): """Add `pipeline_model_mapping` to `test_class`.""" if getattr(test_class, "pipeline_model_mapping", None) is not None: if not overwrite: return "", -1 line_to_add = get_pipeline_model_mapping_string(test_class) if len(line_to_add) == 0: return "", -1 line_to_add = line_to_add + "\n" # The code defined the class `test_class` class_lines, class_start_line_no = inspect.getsourcelines(test_class) # `inspect` gives the code for an object, including decorator(s) if any. # We (only) need the exact line of the class definition. 
for idx, line in enumerate(class_lines): if line.lstrip().startswith("class "): class_lines = class_lines[idx:] class_start_line_no += idx break class_end_line_no = class_start_line_no + len(class_lines) - 1 # The index in `class_lines` that starts the definition of `all_model_classes`, `all_generative_model_classes` or # `pipeline_model_mapping`. This assumes they are defined in such order, and we take the start index of the last # block that appears in a `test_class`. start_idx = None # The indent level of the line at `class_lines[start_idx]` (if defined) indent_level = 0 # To record if `pipeline_model_mapping` is found in `test_class`. def_line = None for idx, line in enumerate(class_lines): if line.strip().startswith("all_model_classes = "): indent_level = len(line) - len(line.lstrip()) start_idx = idx elif line.strip().startswith("all_generative_model_classes = "): indent_level = len(line) - len(line.lstrip()) start_idx = idx elif line.strip().startswith("pipeline_model_mapping = "): indent_level = len(line) - len(line.lstrip()) start_idx = idx def_line = line break if start_idx is None: return "", -1 # Find the ending index (inclusive) of the above found block. end_idx = find_block_ending(class_lines, start_idx, indent_level) # Extract `is_xxx_available()` from existing blocks: some models require specific libraries like `timm` and use # `is_timm_available()` instead of `is_torch_available()`. # Keep leading and trailing whitespaces r = re.compile(r"\s(is_\S+?_available\(\))\s") for line in class_lines[start_idx : end_idx + 1]: backend_condition = r.search(line) if backend_condition is not None: # replace the leading and trailing whitespaces to the space character " ". target = " " + backend_condition[0][1:-1] + " " line_to_add = r.sub(target, line_to_add) break if def_line is None: # `pipeline_model_mapping` is not defined. The target index is set to the ending index (inclusive) of # `all_model_classes` or `all_generative_model_classes`. target_idx = end_idx else: # `pipeline_model_mapping` is defined. The target index is set to be one **BEFORE** its start index. target_idx = start_idx - 1 # mark the lines of the currently existing `pipeline_model_mapping` to be removed. for idx in range(start_idx, end_idx + 1): # These lines are going to be removed before writing to the test file. class_lines[idx] = None # noqa # Make sure the test class is a subclass of `PipelineTesterMixin`. parent_classes = [x.__name__ for x in test_class.__bases__] if "PipelineTesterMixin" not in parent_classes: # Put `PipelineTesterMixin` just before `unittest.TestCase` _parent_classes = [x for x in parent_classes if x != "TestCase"] + ["PipelineTesterMixin"] if "TestCase" in parent_classes: # Here we **assume** the original string is always with `unittest.TestCase`. _parent_classes.append("unittest.TestCase") parent_classes = ", ".join(_parent_classes) for idx, line in enumerate(class_lines): # Find the ending of the declaration of `test_class` if line.strip().endswith("):"): # mark the lines of the declaration of `test_class` to be removed for _idx in range(idx + 1): class_lines[_idx] = None # noqa break # Add the new, one-line, class declaration for `test_class` class_lines[0] = f"class {test_class.__name__}({parent_classes}):\n" # Add indentation line_to_add = " " * indent_level + line_to_add # Insert `pipeline_model_mapping` to `class_lines`. # (The line at `target_idx` should be kept by definition!) 
class_lines = class_lines[: target_idx + 1] + [line_to_add] + class_lines[target_idx + 1 :] # Remove the lines that are marked to be removed class_lines = [x for x in class_lines if x is not None] # Move from test class to module (in order to write to the test file) module_lines = inspect.getsourcelines(inspect.getmodule(test_class))[0] # Be careful with the off-by-one between line numbers and array indices module_lines = module_lines[: class_start_line_no - 1] + class_lines + module_lines[class_end_line_no:] code = "".join(module_lines) module_file = inspect.getsourcefile(test_class) with open(module_file, "w", encoding="UTF-8", newline="\n") as fp: fp.write(code) return line_to_add def add_pipeline_model_mapping_to_test_file(test_file, overwrite=False): """Add `pipeline_model_mapping` to `test_file`.""" test_class = find_test_class(test_file) if test_class: add_pipeline_model_mapping(test_class, overwrite=overwrite) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--test_file", type=str, help="A path to the test file, starting with the repository's `tests` directory." ) parser.add_argument( "--all", action="store_true", help="Whether to check and modify all test files.", ) parser.add_argument( "--overwrite", action="store_true", help="Whether to overwrite a test class if it has already defined `pipeline_model_mapping`.", ) args = parser.parse_args() if not args.all and not args.test_file: raise ValueError("Please specify either `test_file` or pass `--all` to check/modify all test files.") elif args.all and args.test_file: raise ValueError("Only one of `--test_file` and `--all` can be specified.") test_files = [] if args.test_file: test_files = [args.test_file] else: pattern = os.path.join("tests", "models", "**", "test_modeling_*.py") for test_file in glob.glob(pattern): # `Flax` is not covered at the moment if not os.path.basename(test_file).startswith("test_modeling_flax_"): test_files.append(test_file) for test_file in test_files: if test_file in TEST_FILE_TO_IGNORE: print(f"[SKIPPED] {test_file} is skipped as it is in `TEST_FILE_TO_IGNORE` in the file {__file__}.") continue add_pipeline_model_mapping_to_test_file(test_file, overwrite=args.overwrite)
0
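As an illustration of what `utils/add_pipeline_model_mapping_to_test.py` produces, the one-line attribute it inserts into a test class looks roughly like the sketch below before `make style` reflows it; the class names here are hypothetical examples, not values read from the dump.

```py
# Hypothetical output of get_pipeline_model_mapping_string() for a PyTorch test class.
line_to_add = (
    'pipeline_model_mapping = {"feature-extraction": BertModel, '
    '"text-classification": BertForSequenceClassification} if is_torch_available() else {}'
)
print(line_to_add)
```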
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/update_metadata.py
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utility that updates the metadata of the Transformers library in the repository `huggingface/transformers-metadata`. Usage for an update (as used by the GitHub action `update_metadata`): ```bash python utils/update_metadata.py --token <token> --commit_sha <commit_sha> ``` Usage to check all pipelines are properly defined in the constant `PIPELINE_TAGS_AND_AUTO_MODELS` of this script, so that new pipelines are properly added as metadata (as used in `make repo-consistency`): ```bash python utils/update_metadata.py --check-only ``` """ import argparse import collections import os import re import tempfile from typing import Dict, List, Tuple import pandas as pd from datasets import Dataset from huggingface_hub import hf_hub_download, upload_folder from transformers.utils import direct_transformers_import # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/update_metadata.py TRANSFORMERS_PATH = "src/transformers" # This is to make sure the transformers module imported is the one in the repo. transformers_module = direct_transformers_import(TRANSFORMERS_PATH) # Regexes that match TF/Flax/PT model names. _re_tf_models = re.compile(r"TF(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") _re_flax_models = re.compile(r"Flax(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") # Will match any TF or Flax model too so need to be in an else branch afterthe two previous regexes. 
_re_pt_models = re.compile(r"(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") # Fill this with tuples (pipeline_tag, model_mapping, auto_model) PIPELINE_TAGS_AND_AUTO_MODELS = [ ("pretraining", "MODEL_FOR_PRETRAINING_MAPPING_NAMES", "AutoModelForPreTraining"), ("feature-extraction", "MODEL_MAPPING_NAMES", "AutoModel"), ("image-feature-extraction", "MODEL_FOR_IMAGE_MAPPING_NAMES", "AutoModel"), ("audio-classification", "MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES", "AutoModelForAudioClassification"), ("text-generation", "MODEL_FOR_CAUSAL_LM_MAPPING_NAMES", "AutoModelForCausalLM"), ("automatic-speech-recognition", "MODEL_FOR_CTC_MAPPING_NAMES", "AutoModelForCTC"), ("image-classification", "MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES", "AutoModelForImageClassification"), ("image-segmentation", "MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES", "AutoModelForImageSegmentation"), ("image-to-image", "MODEL_FOR_IMAGE_TO_IMAGE_MAPPING_NAMES", "AutoModelForImageToImage"), ("fill-mask", "MODEL_FOR_MASKED_LM_MAPPING_NAMES", "AutoModelForMaskedLM"), ("object-detection", "MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES", "AutoModelForObjectDetection"), ( "zero-shot-object-detection", "MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING_NAMES", "AutoModelForZeroShotObjectDetection", ), ("question-answering", "MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES", "AutoModelForQuestionAnswering"), ("text2text-generation", "MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES", "AutoModelForSeq2SeqLM"), ("text-classification", "MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES", "AutoModelForSequenceClassification"), ("automatic-speech-recognition", "MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES", "AutoModelForSpeechSeq2Seq"), ( "table-question-answering", "MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING_NAMES", "AutoModelForTableQuestionAnswering", ), ("token-classification", "MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES", "AutoModelForTokenClassification"), ("multiple-choice", "MODEL_FOR_MULTIPLE_CHOICE_MAPPING_NAMES", "AutoModelForMultipleChoice"), ( "next-sentence-prediction", "MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING_NAMES", "AutoModelForNextSentencePrediction", ), ( "audio-frame-classification", "MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING_NAMES", "AutoModelForAudioFrameClassification", ), ("audio-xvector", "MODEL_FOR_AUDIO_XVECTOR_MAPPING_NAMES", "AutoModelForAudioXVector"), ( "document-question-answering", "MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES", "AutoModelForDocumentQuestionAnswering", ), ( "visual-question-answering", "MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING_NAMES", "AutoModelForVisualQuestionAnswering", ), ("image-to-text", "MODEL_FOR_FOR_VISION_2_SEQ_MAPPING_NAMES", "AutoModelForVision2Seq"), ( "zero-shot-image-classification", "MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES", "AutoModelForZeroShotImageClassification", ), ("depth-estimation", "MODEL_FOR_DEPTH_ESTIMATION_MAPPING_NAMES", "AutoModelForDepthEstimation"), ("video-classification", "MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING_NAMES", "AutoModelForVideoClassification"), ("mask-generation", "MODEL_FOR_MASK_GENERATION_MAPPING_NAMES", "AutoModelForMaskGeneration"), ("text-to-audio", "MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING_NAMES", "AutoModelForTextToSpectrogram"), ("text-to-audio", "MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING_NAMES", "AutoModelForTextToWaveform"), ] def camel_case_split(identifier: str) -> List[str]: """ Split a camel-cased name into words. Args: identifier (`str`): The camel-cased name to parse. 
Returns: `List[str]`: The list of words in the identifier (as seprated by capital letters). Example: ```py >>> camel_case_split("CamelCasedClass") ["Camel", "Cased", "Class"] ``` """ # Regex thanks to https://stackoverflow.com/questions/29916065/how-to-do-camelcase-split-in-python matches = re.finditer(".+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)", identifier) return [m.group(0) for m in matches] def get_frameworks_table() -> pd.DataFrame: """ Generates a dataframe containing the supported auto classes for each model type, using the content of the auto modules. """ # Dictionary model names to config. config_maping_names = transformers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES model_prefix_to_model_type = { config.replace("Config", ""): model_type for model_type, config in config_maping_names.items() } # Dictionaries flagging if each model prefix has a backend in PT/TF/Flax. pt_models = collections.defaultdict(bool) tf_models = collections.defaultdict(bool) flax_models = collections.defaultdict(bool) # Let's lookup through all transformers object (once) and find if models are supported by a given backend. for attr_name in dir(transformers_module): lookup_dict = None if _re_tf_models.match(attr_name) is not None: lookup_dict = tf_models attr_name = _re_tf_models.match(attr_name).groups()[0] elif _re_flax_models.match(attr_name) is not None: lookup_dict = flax_models attr_name = _re_flax_models.match(attr_name).groups()[0] elif _re_pt_models.match(attr_name) is not None: lookup_dict = pt_models attr_name = _re_pt_models.match(attr_name).groups()[0] if lookup_dict is not None: while len(attr_name) > 0: if attr_name in model_prefix_to_model_type: lookup_dict[model_prefix_to_model_type[attr_name]] = True break # Try again after removing the last word in the name attr_name = "".join(camel_case_split(attr_name)[:-1]) all_models = set(list(pt_models.keys()) + list(tf_models.keys()) + list(flax_models.keys())) all_models = list(all_models) all_models.sort() data = {"model_type": all_models} data["pytorch"] = [pt_models[t] for t in all_models] data["tensorflow"] = [tf_models[t] for t in all_models] data["flax"] = [flax_models[t] for t in all_models] # Now let's find the right processing class for each model. In order we check if there is a Processor, then a # Tokenizer, then a FeatureExtractor, then an ImageProcessor processors = {} for t in all_models: if t in transformers_module.models.auto.processing_auto.PROCESSOR_MAPPING_NAMES: processors[t] = "AutoProcessor" elif t in transformers_module.models.auto.tokenization_auto.TOKENIZER_MAPPING_NAMES: processors[t] = "AutoTokenizer" elif t in transformers_module.models.auto.image_processing_auto.IMAGE_PROCESSOR_MAPPING_NAMES: processors[t] = "AutoImageProcessor" elif t in transformers_module.models.auto.feature_extraction_auto.FEATURE_EXTRACTOR_MAPPING_NAMES: processors[t] = "AutoFeatureExtractor" else: # Default to AutoTokenizer if a model has nothing, for backward compatibility. processors[t] = "AutoTokenizer" data["processor"] = [processors[t] for t in all_models] return pd.DataFrame(data) def update_pipeline_and_auto_class_table(table: Dict[str, Tuple[str, str]]) -> Dict[str, Tuple[str, str]]: """ Update the table maping models to pipelines and auto classes without removing old keys if they don't exist anymore. Args: table (`Dict[str, Tuple[str, str]]`): The existing table mapping model names to a tuple containing the pipeline tag and the auto-class name with which they should be used. 
Returns: `Dict[str, Tuple[str, str]]`: The updated table in the same format. """ auto_modules = [ transformers_module.models.auto.modeling_auto, transformers_module.models.auto.modeling_tf_auto, transformers_module.models.auto.modeling_flax_auto, ] for pipeline_tag, model_mapping, auto_class in PIPELINE_TAGS_AND_AUTO_MODELS: model_mappings = [model_mapping, f"TF_{model_mapping}", f"FLAX_{model_mapping}"] auto_classes = [auto_class, f"TF_{auto_class}", f"Flax_{auto_class}"] # Loop through all three frameworks for module, cls, mapping in zip(auto_modules, auto_classes, model_mappings): # The type of pipeline may not exist in this framework if not hasattr(module, mapping): continue # First extract all model_names model_names = [] for name in getattr(module, mapping).values(): if isinstance(name, str): model_names.append(name) else: model_names.extend(list(name)) # Add pipeline tag and auto model class for those models table.update({model_name: (pipeline_tag, cls) for model_name in model_names}) return table def update_metadata(token: str, commit_sha: str): """ Update the metadata for the Transformers repo in `huggingface/transformers-metadata`. Args: token (`str`): A valid token giving write access to `huggingface/transformers-metadata`. commit_sha (`str`): The commit SHA on Transformers corresponding to this update. """ frameworks_table = get_frameworks_table() frameworks_dataset = Dataset.from_pandas(frameworks_table) resolved_tags_file = hf_hub_download( "huggingface/transformers-metadata", "pipeline_tags.json", repo_type="dataset", token=token ) tags_dataset = Dataset.from_json(resolved_tags_file) table = { tags_dataset[i]["model_class"]: (tags_dataset[i]["pipeline_tag"], tags_dataset[i]["auto_class"]) for i in range(len(tags_dataset)) } table = update_pipeline_and_auto_class_table(table) # Sort the model classes to avoid some nondeterministic updates to create false update commits. 
model_classes = sorted(table.keys()) tags_table = pd.DataFrame( { "model_class": model_classes, "pipeline_tag": [table[m][0] for m in model_classes], "auto_class": [table[m][1] for m in model_classes], } ) tags_dataset = Dataset.from_pandas(tags_table) hub_frameworks_json = hf_hub_download( repo_id="huggingface/transformers-metadata", filename="frameworks.json", repo_type="dataset", token=token, ) with open(hub_frameworks_json) as f: hub_frameworks_json = f.read() hub_pipeline_tags_json = hf_hub_download( repo_id="huggingface/transformers-metadata", filename="pipeline_tags.json", repo_type="dataset", token=token, ) with open(hub_pipeline_tags_json) as f: hub_pipeline_tags_json = f.read() with tempfile.TemporaryDirectory() as tmp_dir: frameworks_dataset.to_json(os.path.join(tmp_dir, "frameworks.json")) tags_dataset.to_json(os.path.join(tmp_dir, "pipeline_tags.json")) with open(os.path.join(tmp_dir, "frameworks.json")) as f: frameworks_json = f.read() with open(os.path.join(tmp_dir, "pipeline_tags.json")) as f: pipeline_tags_json = f.read() frameworks_equal = hub_frameworks_json == frameworks_json hub_pipeline_tags_equal = hub_pipeline_tags_json == pipeline_tags_json if frameworks_equal and hub_pipeline_tags_equal: print("No updates on the Hub, not pushing the metadata files.") return if commit_sha is not None: commit_message = ( f"Update with commit {commit_sha}\n\nSee: " f"https://github.com/huggingface/transformers/commit/{commit_sha}" ) else: commit_message = "Update" upload_folder( repo_id="huggingface/transformers-metadata", folder_path=tmp_dir, repo_type="dataset", token=token, commit_message=commit_message, ) def check_pipeline_tags(): """ Check all pipeline tags are properly defined in the `PIPELINE_TAGS_AND_AUTO_MODELS` constant of this script. """ in_table = {tag: cls for tag, _, cls in PIPELINE_TAGS_AND_AUTO_MODELS} pipeline_tasks = transformers_module.pipelines.SUPPORTED_TASKS missing = [] for key in pipeline_tasks: if key not in in_table: model = pipeline_tasks[key]["pt"] if isinstance(model, (list, tuple)): model = model[0] model = model.__name__ if model not in in_table.values(): missing.append(key) if len(missing) > 0: msg = ", ".join(missing) raise ValueError( "The following pipeline tags are not present in the `PIPELINE_TAGS_AND_AUTO_MODELS` constant inside " f"`utils/update_metadata.py`: {msg}. Please add them!" ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--token", type=str, help="The token to use to push to the transformers-metadata dataset.") parser.add_argument("--commit_sha", type=str, help="The sha of the commit going with this update.") parser.add_argument("--check-only", action="store_true", help="Activate to just check all pipelines are present.") args = parser.parse_args() if args.check_only: check_pipeline_tags() else: update_metadata(args.token, args.commit_sha)
0
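For readers unfamiliar with the two metadata files `utils/update_metadata.py` maintains, the sketch below shows the shape of one row in each. Field names follow the code above; the concrete values are invented for illustration and are derived from the auto mappings at runtime in the real script.

```py
# Illustrative rows only.
frameworks_row = {
    "model_type": "bert",
    "pytorch": True,
    "tensorflow": True,
    "flax": True,
    "processor": "AutoTokenizer",
}
pipeline_tags_row = {
    "model_class": "BertForSequenceClassification",
    "pipeline_tag": "text-classification",
    "auto_class": "AutoModelForSequenceClassification",
}
print(frameworks_row)
print(pipeline_tags_row)
```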
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/get_github_job_time.py
import argparse import math import traceback import dateutil.parser as date_parser import requests def extract_time_from_single_job(job): """Extract time info from a single job in a GitHub Actions workflow run""" job_info = {} start = job["started_at"] end = job["completed_at"] start_datetime = date_parser.parse(start) end_datetime = date_parser.parse(end) duration_in_min = round((end_datetime - start_datetime).total_seconds() / 60.0) job_info["started_at"] = start job_info["completed_at"] = end job_info["duration"] = duration_in_min return job_info def get_job_time(workflow_run_id, token=None): """Extract time info for all jobs in a GitHub Actions workflow run""" headers = None if token is not None: headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"} url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/jobs?per_page=100" result = requests.get(url, headers=headers).json() job_time = {} try: job_time.update({job["name"]: extract_time_from_single_job(job) for job in result["jobs"]}) pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100) for i in range(pages_to_iterate_over): result = requests.get(url + f"&page={i + 2}", headers=headers).json() job_time.update({job["name"]: extract_time_from_single_job(job) for job in result["jobs"]}) return job_time except Exception: print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}") return {} if __name__ == "__main__": r""" Example: python get_github_job_time.py --workflow_run_id 2945609517 """ parser = argparse.ArgumentParser() # Required parameters parser.add_argument("--workflow_run_id", type=str, required=True, help="A GitHub Actions workflow run id.") args = parser.parse_args() job_time = get_job_time(args.workflow_run_id) job_time = dict(sorted(job_time.items(), key=lambda item: item[1]["duration"], reverse=True)) for k, v in job_time.items(): print(f'{k}: {v["duration"]}')
0
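A minimal sketch of the duration computation used in `extract_time_from_single_job` above, with made-up timestamps:

```py
import dateutil.parser as date_parser

start = date_parser.parse("2024-01-01T10:00:00Z")
end = date_parser.parse("2024-01-01T10:41:00Z")
print(round((end - start).total_seconds() / 60.0))  # -> 41
```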
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/check_tf_ops.py
# coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import json import os from tensorflow.core.protobuf.saved_model_pb2 import SavedModel # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_tf_ops.py REPO_PATH = "." # Internal TensorFlow ops that can be safely ignored (mostly specific to a saved model) INTERNAL_OPS = [ "Assert", "AssignVariableOp", "EmptyTensorList", "MergeV2Checkpoints", "ReadVariableOp", "ResourceGather", "RestoreV2", "SaveV2", "ShardedFilename", "StatefulPartitionedCall", "StaticRegexFullMatch", "VarHandleOp", ] def onnx_compliancy(saved_model_path, strict, opset): saved_model = SavedModel() onnx_ops = [] with open(os.path.join(REPO_PATH, "utils", "tf_ops", "onnx.json")) as f: onnx_opsets = json.load(f)["opsets"] for i in range(1, opset + 1): onnx_ops.extend(onnx_opsets[str(i)]) with open(saved_model_path, "rb") as f: saved_model.ParseFromString(f.read()) model_op_names = set() # Iterate over every metagraph in case there is more than one (a saved model can contain multiple graphs) for meta_graph in saved_model.meta_graphs: # Add operations in the graph definition model_op_names.update(node.op for node in meta_graph.graph_def.node) # Go through the functions in the graph definition for func in meta_graph.graph_def.library.function: # Add operations in each function model_op_names.update(node.op for node in func.node_def) # Convert to list, sorted if you want model_op_names = sorted(model_op_names) incompatible_ops = [] for op in model_op_names: if op not in onnx_ops and op not in INTERNAL_OPS: incompatible_ops.append(op) if strict and len(incompatible_ops) > 0: raise Exception(f"Found the following incompatible ops for the opset {opset}:\n" + "\n".join(incompatible_ops)) elif len(incompatible_ops) > 0: print(f"Found the following incompatible ops for the opset {opset}:") print(*incompatible_ops, sep="\n") else: print(f"The saved model {saved_model_path} can properly be converted to ONNX.") if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--saved_model_path", help="Path of the saved model to check (the .pb file).") parser.add_argument( "--opset", default=12, type=int, help="The ONNX opset against which the model has to be tested." ) parser.add_argument( "--framework", choices=["onnx"], default="onnx", help="Frameworks against which to test the saved model." ) parser.add_argument( "--strict", action="store_true", help="Whether to make the check strict (raise errors) or not (raise warnings)" ) args = parser.parse_args() if args.framework == "onnx": onnx_compliancy(args.saved_model_path, args.strict, args.opset)
0
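The core of the ONNX compatibility check in `utils/check_tf_ops.py` above is a set difference between the ops found in the SavedModel and the ops known to the target opset (plus the ignorable `INTERNAL_OPS`). A toy illustration with invented op names:

```py
# Toy example only: op names are invented; the real script reads them from a SavedModel proto
# and loads the opset table from utils/tf_ops/onnx.json.
onnx_ops = {"MatMul", "Relu", "Softmax"}
internal_ops = {"ReadVariableOp", "VarHandleOp"}  # safe to ignore, as in INTERNAL_OPS
model_op_names = {"MatMul", "Relu", "ReadVariableOp", "SomeCustomOp"}

incompatible_ops = sorted(op for op in model_op_names if op not in onnx_ops and op not in internal_ops)
print(incompatible_ops)  # -> ['SomeCustomOp']
```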
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/check_repo.py
# coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utility that performs several consistency checks on the repo. This includes: - checking all models are properly defined in the __init__ of models/ - checking all models are in the main __init__ - checking all models are properly tested - checking all object in the main __init__ are documented - checking all models are in at least one auto class - checking all the auto mapping are properly defined (no typos, importable) - checking the list of deprecated models is up to date Use from the root of the repo with (as used in `make repo-consistency`): ```bash python utils/check_repo.py ``` It has no auto-fix mode. """ import inspect import os import re import sys import types import warnings from collections import OrderedDict from difflib import get_close_matches from pathlib import Path from typing import List, Tuple from transformers import is_flax_available, is_tf_available, is_torch_available from transformers.models.auto import get_values from transformers.models.auto.configuration_auto import CONFIG_MAPPING_NAMES from transformers.models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING_NAMES from transformers.models.auto.image_processing_auto import IMAGE_PROCESSOR_MAPPING_NAMES from transformers.models.auto.processing_auto import PROCESSOR_MAPPING_NAMES from transformers.models.auto.tokenization_auto import TOKENIZER_MAPPING_NAMES from transformers.utils import ENV_VARS_TRUE_VALUES, direct_transformers_import # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_repo.py PATH_TO_TRANSFORMERS = "src/transformers" PATH_TO_TESTS = "tests" PATH_TO_DOC = "docs/source/en" # Update this list with models that are supposed to be private. PRIVATE_MODELS = [ "AltRobertaModel", "DPRSpanPredictor", "UdopStack", "LongT5Stack", "RealmBertModel", "T5Stack", "MT5Stack", "UMT5Stack", "Pop2PianoStack", "SwitchTransformersStack", "TFDPRSpanPredictor", "MaskFormerSwinModel", "MaskFormerSwinPreTrainedModel", "BridgeTowerTextModel", "BridgeTowerVisionModel", "Kosmos2TextModel", "Kosmos2TextForCausalLM", "Kosmos2VisionModel", "SeamlessM4Tv2TextToUnitModel", "SeamlessM4Tv2CodeHifiGan", "SeamlessM4Tv2TextToUnitForConditionalGeneration", ] # Update this list for models that are not tested with a comment explaining the reason it should not be. # Being in this list is an exception and should **not** be the rule. IGNORE_NON_TESTED = PRIVATE_MODELS.copy() + [ # models to ignore for not tested "RecurrentGemmaModel", # Building part of bigger (tested) model. "FuyuForCausalLM", # Not tested fort now "InstructBlipQFormerModel", # Building part of bigger (tested) model. "UMT5EncoderModel", # Building part of bigger (tested) model. "Blip2QFormerModel", # Building part of bigger (tested) model. 
"ErnieMForInformationExtraction", "FastSpeech2ConformerHifiGan", # Already tested by SpeechT5HifiGan (# Copied from) "FastSpeech2ConformerWithHifiGan", # Built with two smaller (tested) models. "GraphormerDecoderHead", # Building part of bigger (tested) model. "JukeboxVQVAE", # Building part of bigger (tested) model. "JukeboxPrior", # Building part of bigger (tested) model. "DecisionTransformerGPT2Model", # Building part of bigger (tested) model. "SegformerDecodeHead", # Building part of bigger (tested) model. "MgpstrModel", # Building part of bigger (tested) model. "BertLMHeadModel", # Needs to be setup as decoder. "MegatronBertLMHeadModel", # Building part of bigger (tested) model. "RealmBertModel", # Building part of bigger (tested) model. "RealmReader", # Not regular model. "RealmScorer", # Not regular model. "RealmForOpenQA", # Not regular model. "ReformerForMaskedLM", # Needs to be setup as decoder. "TFElectraMainLayer", # Building part of bigger (tested) model (should it be a TFPreTrainedModel ?) "TFRobertaForMultipleChoice", # TODO: fix "TFRobertaPreLayerNormForMultipleChoice", # TODO: fix "SeparableConv1D", # Building part of bigger (tested) model. "FlaxBartForCausalLM", # Building part of bigger (tested) model. "FlaxBertForCausalLM", # Building part of bigger (tested) model. Tested implicitly through FlaxRobertaForCausalLM. "OPTDecoderWrapper", "TFSegformerDecodeHead", # Not a regular model. "AltRobertaModel", # Building part of bigger (tested) model. "BlipTextLMHeadModel", # No need to test it as it is tested by BlipTextVision models "TFBlipTextLMHeadModel", # No need to test it as it is tested by BlipTextVision models "BridgeTowerTextModel", # No need to test it as it is tested by BridgeTowerModel model. "BridgeTowerVisionModel", # No need to test it as it is tested by BridgeTowerModel model. "BarkCausalModel", # Building part of bigger (tested) model. "BarkModel", # Does not have a forward signature - generation tested with integration tests. "SeamlessM4TTextToUnitModel", # Building part of bigger (tested) model. "SeamlessM4TCodeHifiGan", # Building part of bigger (tested) model. "SeamlessM4TTextToUnitForConditionalGeneration", # Building part of bigger (tested) model. ] # Update this list with test files that don't have a tester with a `all_model_classes` variable and which don't # trigger the common tests. TEST_FILES_WITH_NO_COMMON_TESTS = [ "models/decision_transformer/test_modeling_decision_transformer.py", "models/camembert/test_modeling_camembert.py", "models/mt5/test_modeling_flax_mt5.py", "models/mbart/test_modeling_mbart.py", "models/mt5/test_modeling_mt5.py", "models/pegasus/test_modeling_pegasus.py", "models/camembert/test_modeling_tf_camembert.py", "models/mt5/test_modeling_tf_mt5.py", "models/xlm_roberta/test_modeling_tf_xlm_roberta.py", "models/xlm_roberta/test_modeling_flax_xlm_roberta.py", "models/xlm_prophetnet/test_modeling_xlm_prophetnet.py", "models/xlm_roberta/test_modeling_xlm_roberta.py", "models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py", "models/vision_text_dual_encoder/test_modeling_tf_vision_text_dual_encoder.py", "models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py", "models/decision_transformer/test_modeling_decision_transformer.py", "models/bark/test_modeling_bark.py", ] # Update this list for models that are not in any of the auto MODEL_XXX_MAPPING. Being in this list is an exception and # should **not** be the rule. 
IGNORE_NON_AUTO_CONFIGURED = PRIVATE_MODELS.copy() + [ # models to ignore for model xxx mapping "AlignTextModel", "AlignVisionModel", "ClapTextModel", "ClapTextModelWithProjection", "ClapAudioModel", "ClapAudioModelWithProjection", "Blip2ForConditionalGeneration", "Blip2QFormerModel", "Blip2VisionModel", "ErnieMForInformationExtraction", "FastSpeech2ConformerHifiGan", "FastSpeech2ConformerWithHifiGan", "GitVisionModel", "GraphormerModel", "GraphormerForGraphClassification", "BlipForConditionalGeneration", "BlipForImageTextRetrieval", "BlipForQuestionAnswering", "BlipVisionModel", "BlipTextLMHeadModel", "BlipTextModel", "BrosSpadeEEForTokenClassification", "BrosSpadeELForTokenClassification", "TFBlipForConditionalGeneration", "TFBlipForImageTextRetrieval", "TFBlipForQuestionAnswering", "TFBlipVisionModel", "TFBlipTextLMHeadModel", "TFBlipTextModel", "Swin2SRForImageSuperResolution", "BridgeTowerForImageAndTextRetrieval", "BridgeTowerForMaskedLM", "BridgeTowerForContrastiveLearning", "CLIPSegForImageSegmentation", "CLIPSegVisionModel", "CLIPSegTextModel", "EsmForProteinFolding", "GPTSanJapaneseModel", "TimeSeriesTransformerForPrediction", "InformerForPrediction", "AutoformerForPrediction", "PatchTSTForPretraining", "PatchTSTForPrediction", "JukeboxVQVAE", "JukeboxPrior", "SamModel", "DPTForDepthEstimation", "DecisionTransformerGPT2Model", "GLPNForDepthEstimation", "ViltForImagesAndTextClassification", "ViltForImageAndTextRetrieval", "ViltForTokenClassification", "ViltForMaskedLM", "PerceiverForMultimodalAutoencoding", "PerceiverForOpticalFlow", "SegformerDecodeHead", "TFSegformerDecodeHead", "FlaxBeitForMaskedImageModeling", "BeitForMaskedImageModeling", "ChineseCLIPTextModel", "ChineseCLIPVisionModel", "CLIPTextModel", "CLIPTextModelWithProjection", "CLIPVisionModelWithProjection", "ClvpForCausalLM", "ClvpModel", "GroupViTTextModel", "GroupViTVisionModel", "TFCLIPTextModel", "TFCLIPVisionModel", "TFGroupViTTextModel", "TFGroupViTVisionModel", "FlaxCLIPTextModel", "FlaxCLIPTextModelWithProjection", "FlaxCLIPVisionModel", "FlaxWav2Vec2ForCTC", "DetrForSegmentation", "Pix2StructVisionModel", "Pix2StructTextModel", "Pix2StructForConditionalGeneration", "ConditionalDetrForSegmentation", "DPRReader", "FlaubertForQuestionAnswering", "FlavaImageCodebook", "FlavaTextModel", "FlavaImageModel", "FlavaMultimodalModel", "GPT2DoubleHeadsModel", "GPTSw3DoubleHeadsModel", "InstructBlipVisionModel", "InstructBlipQFormerModel", "LayoutLMForQuestionAnswering", "LukeForMaskedLM", "LukeForEntityClassification", "LukeForEntityPairClassification", "LukeForEntitySpanClassification", "MgpstrModel", "OpenAIGPTDoubleHeadsModel", "OwlViTTextModel", "OwlViTVisionModel", "Owlv2TextModel", "Owlv2VisionModel", "OwlViTForObjectDetection", "PatchTSMixerForPrediction", "PatchTSMixerForPretraining", "RagModel", "RagSequenceForGeneration", "RagTokenForGeneration", "RealmEmbedder", "RealmForOpenQA", "RealmScorer", "RealmReader", "TFDPRReader", "TFGPT2DoubleHeadsModel", "TFLayoutLMForQuestionAnswering", "TFOpenAIGPTDoubleHeadsModel", "TFRagModel", "TFRagSequenceForGeneration", "TFRagTokenForGeneration", "Wav2Vec2ForCTC", "HubertForCTC", "SEWForCTC", "SEWDForCTC", "XLMForQuestionAnswering", "XLNetForQuestionAnswering", "SeparableConv1D", "VisualBertForRegionToPhraseAlignment", "VisualBertForVisualReasoning", "VisualBertForQuestionAnswering", "VisualBertForMultipleChoice", "TFWav2Vec2ForCTC", "TFHubertForCTC", "XCLIPVisionModel", "XCLIPTextModel", "AltCLIPTextModel", "AltCLIPVisionModel", "AltRobertaModel", 
"TvltForAudioVisualClassification", "BarkCausalModel", "BarkCoarseModel", "BarkFineModel", "BarkSemanticModel", "MusicgenMelodyModel", "MusicgenModel", "MusicgenForConditionalGeneration", "SpeechT5ForSpeechToSpeech", "SpeechT5ForTextToSpeech", "SpeechT5HifiGan", "VitMatteForImageMatting", "SeamlessM4TTextToUnitModel", "SeamlessM4TTextToUnitForConditionalGeneration", "SeamlessM4TCodeHifiGan", "SeamlessM4TForSpeechToSpeech", # no auto class for speech-to-speech "TvpForVideoGrounding", "UdopForConditionalGeneration", "SeamlessM4Tv2NARTextToUnitModel", "SeamlessM4Tv2NARTextToUnitForConditionalGeneration", "SeamlessM4Tv2CodeHifiGan", "SeamlessM4Tv2ForSpeechToSpeech", # no auto class for speech-to-speech "SegGptForImageSegmentation", "SiglipVisionModel", "SiglipTextModel", ] # DO NOT edit this list! # (The corresponding pytorch objects should never have been in the main `__init__`, but it's too late to remove) OBJECT_TO_SKIP_IN_MAIN_INIT_CHECK = [ "FlaxBertLayer", "FlaxBigBirdLayer", "FlaxRoFormerLayer", "TFBertLayer", "TFLxmertEncoder", "TFLxmertXLayer", "TFMPNetLayer", "TFMobileBertLayer", "TFSegformerLayer", "TFViTMAELayer", ] # Update this list for models that have multiple model types for the same model doc. MODEL_TYPE_TO_DOC_MAPPING = OrderedDict( [ ("data2vec-text", "data2vec"), ("data2vec-audio", "data2vec"), ("data2vec-vision", "data2vec"), ("donut-swin", "donut"), ] ) # This is to make sure the transformers module imported is the one in the repo. transformers = direct_transformers_import(PATH_TO_TRANSFORMERS) def check_missing_backends(): """ Checks if all backends are installed (otherwise the check of this script is incomplete). Will error in the CI if that's not the case but only throw a warning for users running this. """ missing_backends = [] if not is_torch_available(): missing_backends.append("PyTorch") if not is_tf_available(): missing_backends.append("TensorFlow") if not is_flax_available(): missing_backends.append("Flax") if len(missing_backends) > 0: missing = ", ".join(missing_backends) if os.getenv("TRANSFORMERS_IS_CI", "").upper() in ENV_VARS_TRUE_VALUES: raise Exception( "Full repo consistency checks require all backends to be installed (with `pip install -e '.[dev]'` in the " f"Transformers repo, the following are missing: {missing}." ) else: warnings.warn( "Full repo consistency checks require all backends to be installed (with `pip install -e '.[dev]'` in the " f"Transformers repo, the following are missing: {missing}. While it's probably fine as long as you " "didn't make any change in one of those backends modeling files, you should probably execute the " "command above to be on the safe side." ) def check_model_list(): """ Checks the model listed as subfolders of `models` match the models available in `transformers.models`. """ # Get the models from the directory structure of `src/transformers/models/` models_dir = os.path.join(PATH_TO_TRANSFORMERS, "models") _models = [] for model in os.listdir(models_dir): if model == "deprecated": continue model_dir = os.path.join(models_dir, model) if os.path.isdir(model_dir) and "__init__.py" in os.listdir(model_dir): _models.append(model) # Get the models in the submodule `transformers.models` models = [model for model in dir(transformers.models) if not model.startswith("__")] missing_models = sorted(set(_models).difference(models)) if missing_models: raise Exception( f"The following models should be included in {models_dir}/__init__.py: {','.join(missing_models)}." 
) # If some modeling modules should be ignored for all checks, they should be added in the nested list # _ignore_modules of this function. def get_model_modules() -> List[str]: """Get all the model modules inside the transformers library (except deprecated models).""" _ignore_modules = [ "modeling_auto", "modeling_encoder_decoder", "modeling_marian", "modeling_mmbt", "modeling_outputs", "modeling_retribert", "modeling_utils", "modeling_flax_auto", "modeling_flax_encoder_decoder", "modeling_flax_utils", "modeling_speech_encoder_decoder", "modeling_flax_speech_encoder_decoder", "modeling_flax_vision_encoder_decoder", "modeling_timm_backbone", "modeling_tf_auto", "modeling_tf_encoder_decoder", "modeling_tf_outputs", "modeling_tf_pytorch_utils", "modeling_tf_utils", "modeling_tf_vision_encoder_decoder", "modeling_vision_encoder_decoder", ] modules = [] for model in dir(transformers.models): # There are some magic dunder attributes in the dir, we ignore them if model == "deprecated" or model.startswith("__"): continue model_module = getattr(transformers.models, model) for submodule in dir(model_module): if submodule.startswith("modeling") and submodule not in _ignore_modules: modeling_module = getattr(model_module, submodule) if inspect.ismodule(modeling_module): modules.append(modeling_module) return modules def get_models(module: types.ModuleType, include_pretrained: bool = False) -> List[Tuple[str, type]]: """ Get the objects in a module that are models. Args: module (`types.ModuleType`): The module from which we are extracting models. include_pretrained (`bool`, *optional*, defaults to `False`): Whether or not to include the `PreTrainedModel` subclass (like `BertPreTrainedModel`) or not. Returns: List[Tuple[str, type]]: List of models as tuples (class name, actual class). """ models = [] model_classes = (transformers.PreTrainedModel, transformers.TFPreTrainedModel, transformers.FlaxPreTrainedModel) for attr_name in dir(module): if not include_pretrained and ("Pretrained" in attr_name or "PreTrained" in attr_name): continue attr = getattr(module, attr_name) if isinstance(attr, type) and issubclass(attr, model_classes) and attr.__module__ == module.__name__: models.append((attr_name, attr)) return models def is_building_block(model: str) -> bool: """ Returns `True` if a model is a building block part of a bigger model. """ if model.endswith("Wrapper"): return True if model.endswith("Encoder"): return True if model.endswith("Decoder"): return True if model.endswith("Prenet"): return True def is_a_private_model(model: str) -> bool: """Returns `True` if the model should not be in the main init.""" if model in PRIVATE_MODELS: return True return is_building_block(model) def check_models_are_in_init(): """Checks all models defined in the library are in the main init.""" models_not_in_init = [] dir_transformers = dir(transformers) for module in get_model_modules(): models_not_in_init += [ model[0] for model in get_models(module, include_pretrained=True) if model[0] not in dir_transformers ] # Remove private models models_not_in_init = [model for model in models_not_in_init if not is_a_private_model(model)] if len(models_not_in_init) > 0: raise Exception(f"The following models should be in the main init: {','.join(models_not_in_init)}.") # If some test_modeling files should be ignored when checking models are all tested, they should be added in the # nested list _ignore_files of this function. def get_model_test_files() -> List[str]: """ Get the model test files. 
Returns: `List[str]`: The list of test files. The returned files will NOT contain the `tests` (i.e. `PATH_TO_TESTS` defined in this script). They will be considered as paths relative to `tests`. A caller has to use `os.path.join(PATH_TO_TESTS, ...)` to access the files. """ _ignore_files = [ "test_modeling_common", "test_modeling_encoder_decoder", "test_modeling_flax_encoder_decoder", "test_modeling_flax_speech_encoder_decoder", "test_modeling_marian", "test_modeling_tf_common", "test_modeling_tf_encoder_decoder", ] test_files = [] model_test_root = os.path.join(PATH_TO_TESTS, "models") model_test_dirs = [] for x in os.listdir(model_test_root): x = os.path.join(model_test_root, x) if os.path.isdir(x): model_test_dirs.append(x) for target_dir in [PATH_TO_TESTS] + model_test_dirs: for file_or_dir in os.listdir(target_dir): path = os.path.join(target_dir, file_or_dir) if os.path.isfile(path): filename = os.path.split(path)[-1] if "test_modeling" in filename and os.path.splitext(filename)[0] not in _ignore_files: file = os.path.join(*path.split(os.sep)[1:]) test_files.append(file) return test_files # This is a bit hacky but I didn't find a way to import the test_file as a module and read inside the tester class # for the all_model_classes variable. def find_tested_models(test_file: str) -> List[str]: """ Parse the content of test_file to detect what's in `all_model_classes`. This detects the models that inherit from the common test class. Args: test_file (`str`): The path to the test file to check Returns: `List[str]`: The list of models tested in that file. """ with open(os.path.join(PATH_TO_TESTS, test_file), "r", encoding="utf-8", newline="\n") as f: content = f.read() all_models = re.findall(r"all_model_classes\s+=\s+\(\s*\(([^\)]*)\)", content) # Check with one less parenthesis as well all_models += re.findall(r"all_model_classes\s+=\s+\(([^\)]*)\)", content) if len(all_models) > 0: model_tested = [] for entry in all_models: for line in entry.split(","): name = line.strip() if len(name) > 0: model_tested.append(name) return model_tested def should_be_tested(model_name: str) -> bool: """ Whether or not a model should be tested. """ if model_name in IGNORE_NON_TESTED: return False return not is_building_block(model_name) def check_models_are_tested(module: types.ModuleType, test_file: str) -> List[str]: """Check models defined in a module are all tested in a given file. Args: module (`types.ModuleType`): The module in which we get the models. test_file (`str`): The path to the file where the module is tested. Returns: `List[str]`: The list of error messages corresponding to models not tested. """ # XxxPreTrainedModel are not tested defined_models = get_models(module) tested_models = find_tested_models(test_file) if tested_models is None: if test_file.replace(os.path.sep, "/") in TEST_FILES_WITH_NO_COMMON_TESTS: return return [ f"{test_file} should define `all_model_classes` to apply common tests to the models it tests. " + "If this intentional, add the test filename to `TEST_FILES_WITH_NO_COMMON_TESTS` in the file " + "`utils/check_repo.py`." ] failures = [] for model_name, _ in defined_models: if model_name not in tested_models and should_be_tested(model_name): failures.append( f"{model_name} is defined in {module.__name__} but is not tested in " + f"{os.path.join(PATH_TO_TESTS, test_file)}. Add it to the all_model_classes in that file." + "If common tests should not applied to that model, add its name to `IGNORE_NON_TESTED`" + "in the file `utils/check_repo.py`." 
) return failures def check_all_models_are_tested(): """Check all models are properly tested.""" modules = get_model_modules() test_files = get_model_test_files() failures = [] for module in modules: # Matches a module to its test file. test_file = [file for file in test_files if f"test_{module.__name__.split('.')[-1]}.py" in file] if len(test_file) == 0: failures.append(f"{module.__name__} does not have its corresponding test file {test_file}.") elif len(test_file) > 1: failures.append(f"{module.__name__} has several test files: {test_file}.") else: test_file = test_file[0] new_failures = check_models_are_tested(module, test_file) if new_failures is not None: failures += new_failures if len(failures) > 0: raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures)) def get_all_auto_configured_models() -> List[str]: """Return the list of all models in at least one auto class.""" result = set() # To avoid duplicates we concatenate all model classes in a set. if is_torch_available(): for attr_name in dir(transformers.models.auto.modeling_auto): if attr_name.startswith("MODEL_") and attr_name.endswith("MAPPING_NAMES"): result = result | set(get_values(getattr(transformers.models.auto.modeling_auto, attr_name))) if is_tf_available(): for attr_name in dir(transformers.models.auto.modeling_tf_auto): if attr_name.startswith("TF_MODEL_") and attr_name.endswith("MAPPING_NAMES"): result = result | set(get_values(getattr(transformers.models.auto.modeling_tf_auto, attr_name))) if is_flax_available(): for attr_name in dir(transformers.models.auto.modeling_flax_auto): if attr_name.startswith("FLAX_MODEL_") and attr_name.endswith("MAPPING_NAMES"): result = result | set(get_values(getattr(transformers.models.auto.modeling_flax_auto, attr_name))) return list(result) def ignore_unautoclassed(model_name: str) -> bool: """Rules to determine if a model should be in an auto class.""" # Special white list if model_name in IGNORE_NON_AUTO_CONFIGURED: return True # Encoder and Decoder should be ignored if "Encoder" in model_name or "Decoder" in model_name: return True return False def check_models_are_auto_configured(module: types.ModuleType, all_auto_models: List[str]) -> List[str]: """ Check models defined in module are each in an auto class. Args: module (`types.ModuleType`): The module in which we get the models. all_auto_models (`List[str]`): The list of all models in an auto class (as obtained with `get_all_auto_configured_models()`). Returns: `List[str]`: The list of error messages corresponding to models not tested. """ defined_models = get_models(module) failures = [] for model_name, _ in defined_models: if model_name not in all_auto_models and not ignore_unautoclassed(model_name): failures.append( f"{model_name} is defined in {module.__name__} but is not present in any of the auto mapping. " "If that is intended behavior, add its name to `IGNORE_NON_AUTO_CONFIGURED` in the file " "`utils/check_repo.py`." ) return failures def check_all_models_are_auto_configured(): """Check all models are each in an auto class.""" # This is where we need to check we have all backends or the check is incomplete. 
check_missing_backends() modules = get_model_modules() all_auto_models = get_all_auto_configured_models() failures = [] for module in modules: new_failures = check_models_are_auto_configured(module, all_auto_models) if new_failures is not None: failures += new_failures if len(failures) > 0: raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures)) def check_all_auto_object_names_being_defined(): """Check all names defined in auto (name) mappings exist in the library.""" # This is where we need to check we have all backends or the check is incomplete. check_missing_backends() failures = [] mappings_to_check = { "TOKENIZER_MAPPING_NAMES": TOKENIZER_MAPPING_NAMES, "IMAGE_PROCESSOR_MAPPING_NAMES": IMAGE_PROCESSOR_MAPPING_NAMES, "FEATURE_EXTRACTOR_MAPPING_NAMES": FEATURE_EXTRACTOR_MAPPING_NAMES, "PROCESSOR_MAPPING_NAMES": PROCESSOR_MAPPING_NAMES, } # Each auto modeling files contains multiple mappings. Let's get them in a dynamic way. for module_name in ["modeling_auto", "modeling_tf_auto", "modeling_flax_auto"]: module = getattr(transformers.models.auto, module_name, None) if module is None: continue # all mappings in a single auto modeling file mapping_names = [x for x in dir(module) if x.endswith("_MAPPING_NAMES")] mappings_to_check.update({name: getattr(module, name) for name in mapping_names}) for name, mapping in mappings_to_check.items(): for _, class_names in mapping.items(): if not isinstance(class_names, tuple): class_names = (class_names,) for class_name in class_names: if class_name is None: continue # dummy object is accepted if not hasattr(transformers, class_name): # If the class name is in a model name mapping, let's not check if there is a definition in any modeling # module, if it's a private model defined in this file. if name.endswith("MODEL_MAPPING_NAMES") and is_a_private_model(class_name): continue if name.endswith("MODEL_FOR_IMAGE_MAPPING_NAMES") and is_a_private_model(class_name): continue failures.append( f"`{class_name}` appears in the mapping `{name}` but it is not defined in the library." ) if len(failures) > 0: raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures)) def check_all_auto_mapping_names_in_config_mapping_names(): """Check all keys defined in auto mappings (mappings of names) appear in `CONFIG_MAPPING_NAMES`.""" # This is where we need to check we have all backends or the check is incomplete. check_missing_backends() failures = [] # `TOKENIZER_PROCESSOR_MAPPING_NAMES` and `AutoTokenizer` is special, and don't need to follow the rule. mappings_to_check = { "IMAGE_PROCESSOR_MAPPING_NAMES": IMAGE_PROCESSOR_MAPPING_NAMES, "FEATURE_EXTRACTOR_MAPPING_NAMES": FEATURE_EXTRACTOR_MAPPING_NAMES, "PROCESSOR_MAPPING_NAMES": PROCESSOR_MAPPING_NAMES, } # Each auto modeling files contains multiple mappings. Let's get them in a dynamic way. for module_name in ["modeling_auto", "modeling_tf_auto", "modeling_flax_auto"]: module = getattr(transformers.models.auto, module_name, None) if module is None: continue # all mappings in a single auto modeling file mapping_names = [x for x in dir(module) if x.endswith("_MAPPING_NAMES")] mappings_to_check.update({name: getattr(module, name) for name in mapping_names}) for name, mapping in mappings_to_check.items(): for model_type in mapping: if model_type not in CONFIG_MAPPING_NAMES: failures.append( f"`{model_type}` appears in the mapping `{name}` but it is not defined in the keys of " "`CONFIG_MAPPING_NAMES`." 
) if len(failures) > 0: raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures)) def check_all_auto_mappings_importable(): """Check all auto mappings can be imported.""" # This is where we need to check we have all backends or the check is incomplete. check_missing_backends() failures = [] mappings_to_check = {} # Each auto modeling files contains multiple mappings. Let's get them in a dynamic way. for module_name in ["modeling_auto", "modeling_tf_auto", "modeling_flax_auto"]: module = getattr(transformers.models.auto, module_name, None) if module is None: continue # all mappings in a single auto modeling file mapping_names = [x for x in dir(module) if x.endswith("_MAPPING_NAMES")] mappings_to_check.update({name: getattr(module, name) for name in mapping_names}) for name in mappings_to_check: name = name.replace("_MAPPING_NAMES", "_MAPPING") if not hasattr(transformers, name): failures.append(f"`{name}`") if len(failures) > 0: raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures)) def check_objects_being_equally_in_main_init(): """ Check if a (TensorFlow or Flax) object is in the main __init__ iif its counterpart in PyTorch is. """ attrs = dir(transformers) failures = [] for attr in attrs: obj = getattr(transformers, attr) if not hasattr(obj, "__module__") or "models.deprecated" in obj.__module__: continue module_path = obj.__module__ module_name = module_path.split(".")[-1] module_dir = ".".join(module_path.split(".")[:-1]) if ( module_name.startswith("modeling_") and not module_name.startswith("modeling_tf_") and not module_name.startswith("modeling_flax_") ): parent_module = sys.modules[module_dir] frameworks = [] if is_tf_available(): frameworks.append("TF") if is_flax_available(): frameworks.append("Flax") for framework in frameworks: other_module_path = module_path.replace("modeling_", f"modeling_{framework.lower()}_") if os.path.isfile("src/" + other_module_path.replace(".", "/") + ".py"): other_module_name = module_name.replace("modeling_", f"modeling_{framework.lower()}_") other_module = getattr(parent_module, other_module_name) if hasattr(other_module, f"{framework}{attr}"): if not hasattr(transformers, f"{framework}{attr}"): if f"{framework}{attr}" not in OBJECT_TO_SKIP_IN_MAIN_INIT_CHECK: failures.append(f"{framework}{attr}") if hasattr(other_module, f"{framework}_{attr}"): if not hasattr(transformers, f"{framework}_{attr}"): if f"{framework}_{attr}" not in OBJECT_TO_SKIP_IN_MAIN_INIT_CHECK: failures.append(f"{framework}_{attr}") if len(failures) > 0: raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures)) _re_decorator = re.compile(r"^\s*@(\S+)\s+$") def check_decorator_order(filename: str) -> List[int]: """ Check that in a given test file, the slow decorator is always last. Args: filename (`str`): The path to a test file to check. Returns: `List[int]`: The list of failures as a list of indices where there are problems. 
""" with open(filename, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() decorator_before = None errors = [] for i, line in enumerate(lines): search = _re_decorator.search(line) if search is not None: decorator_name = search.groups()[0] if decorator_before is not None and decorator_name.startswith("parameterized"): errors.append(i) decorator_before = decorator_name elif decorator_before is not None: decorator_before = None return errors def check_all_decorator_order(): """Check that in all test files, the slow decorator is always last.""" errors = [] for fname in os.listdir(PATH_TO_TESTS): if fname.endswith(".py"): filename = os.path.join(PATH_TO_TESTS, fname) new_errors = check_decorator_order(filename) errors += [f"- {filename}, line {i}" for i in new_errors] if len(errors) > 0: msg = "\n".join(errors) raise ValueError( "The parameterized decorator (and its variants) should always be first, but this is not the case in the" f" following files:\n{msg}" ) def find_all_documented_objects() -> List[str]: """ Parse the content of all doc files to detect which classes and functions it documents. Returns: `List[str]`: The list of all object names being documented. """ documented_obj = [] for doc_file in Path(PATH_TO_DOC).glob("**/*.rst"): with open(doc_file, "r", encoding="utf-8", newline="\n") as f: content = f.read() raw_doc_objs = re.findall(r"(?:autoclass|autofunction):: transformers.(\S+)\s+", content) documented_obj += [obj.split(".")[-1] for obj in raw_doc_objs] for doc_file in Path(PATH_TO_DOC).glob("**/*.md"): with open(doc_file, "r", encoding="utf-8", newline="\n") as f: content = f.read() raw_doc_objs = re.findall(r"\[\[autodoc\]\]\s+(\S+)\s+", content) documented_obj += [obj.split(".")[-1] for obj in raw_doc_objs] return documented_obj # One good reason for not being documented is to be deprecated. Put in this list deprecated objects. DEPRECATED_OBJECTS = [ "AutoModelWithLMHead", "BartPretrainedModel", "DataCollator", "DataCollatorForSOP", "GlueDataset", "GlueDataTrainingArguments", "LineByLineTextDataset", "LineByLineWithRefDataset", "LineByLineWithSOPTextDataset", "NerPipeline", "PretrainedBartModel", "PretrainedFSMTModel", "SingleSentenceClassificationProcessor", "SquadDataTrainingArguments", "SquadDataset", "SquadExample", "SquadFeatures", "SquadV1Processor", "SquadV2Processor", "TFAutoModelWithLMHead", "TFBartPretrainedModel", "TextDataset", "TextDatasetForNextSentencePrediction", "Wav2Vec2ForMaskedLM", "Wav2Vec2Tokenizer", "glue_compute_metrics", "glue_convert_examples_to_features", "glue_output_modes", "glue_processors", "glue_tasks_num_labels", "squad_convert_examples_to_features", "xnli_compute_metrics", "xnli_output_modes", "xnli_processors", "xnli_tasks_num_labels", "TFTrainingArguments", ] # Exceptionally, some objects should not be documented after all rules passed. # ONLY PUT SOMETHING IN THIS LIST AS A LAST RESORT! UNDOCUMENTED_OBJECTS = [ "AddedToken", # This is a tokenizers class. "BasicTokenizer", # Internal, should never have been in the main init. "CharacterTokenizer", # Internal, should never have been in the main init. "DPRPretrainedReader", # Like an Encoder. "DummyObject", # Just picked by mistake sometimes. "MecabTokenizer", # Internal, should never have been in the main init. "ModelCard", # Internal type. "SqueezeBertModule", # Internal building block (should have been called SqueezeBertLayer) "TFDPRPretrainedReader", # Like an Encoder. "TransfoXLCorpus", # Internal type. 
"WordpieceTokenizer", # Internal, should never have been in the main init. "absl", # External module "add_end_docstrings", # Internal, should never have been in the main init. "add_start_docstrings", # Internal, should never have been in the main init. "convert_tf_weight_name_to_pt_weight_name", # Internal used to convert model weights "logger", # Internal logger "logging", # External module "requires_backends", # Internal function "AltRobertaModel", # Internal module ] # This list should be empty. Objects in it should get their own doc page. SHOULD_HAVE_THEIR_OWN_PAGE = [ # Benchmarks "PyTorchBenchmark", "PyTorchBenchmarkArguments", "TensorFlowBenchmark", "TensorFlowBenchmarkArguments", "AutoBackbone", "BeitBackbone", "BitBackbone", "ConvNextBackbone", "ConvNextV2Backbone", "DinatBackbone", "Dinov2Backbone", "FocalNetBackbone", "MaskFormerSwinBackbone", "MaskFormerSwinConfig", "MaskFormerSwinModel", "NatBackbone", "PvtV2Backbone", "ResNetBackbone", "SwinBackbone", "Swinv2Backbone", "TimmBackbone", "TimmBackboneConfig", "VitDetBackbone", ] def ignore_undocumented(name: str) -> bool: """Rules to determine if `name` should be undocumented (returns `True` if it should not be documented).""" # NOT DOCUMENTED ON PURPOSE. # Constants uppercase are not documented. if name.isupper(): return True # PreTrainedModels / Encoders / Decoders / Layers / Embeddings / Attention are not documented. if ( name.endswith("PreTrainedModel") or name.endswith("Decoder") or name.endswith("Encoder") or name.endswith("Layer") or name.endswith("Embeddings") or name.endswith("Attention") ): return True # Submodules are not documented. if os.path.isdir(os.path.join(PATH_TO_TRANSFORMERS, name)) or os.path.isfile( os.path.join(PATH_TO_TRANSFORMERS, f"{name}.py") ): return True # All load functions are not documented. if name.startswith("load_tf") or name.startswith("load_pytorch"): return True # is_xxx_available functions are not documented. if name.startswith("is_") and name.endswith("_available"): return True # Deprecated objects are not documented. if name in DEPRECATED_OBJECTS or name in UNDOCUMENTED_OBJECTS: return True # MMBT model does not really work. if name.startswith("MMBT"): return True if name in SHOULD_HAVE_THEIR_OWN_PAGE: return True return False def check_all_objects_are_documented(): """Check all models are properly documented.""" documented_objs = find_all_documented_objects() modules = transformers._modules objects = [c for c in dir(transformers) if c not in modules and not c.startswith("_")] undocumented_objs = [c for c in objects if c not in documented_objs and not ignore_undocumented(c)] if len(undocumented_objs) > 0: raise Exception( "The following objects are in the public init so should be documented:\n - " + "\n - ".join(undocumented_objs) ) check_docstrings_are_in_md() check_model_type_doc_match() def check_model_type_doc_match(): """Check all doc pages have a corresponding model type.""" model_doc_folder = Path(PATH_TO_DOC) / "model_doc" model_docs = [m.stem for m in model_doc_folder.glob("*.md")] model_types = list(transformers.models.auto.configuration_auto.MODEL_NAMES_MAPPING.keys()) model_types = [MODEL_TYPE_TO_DOC_MAPPING[m] if m in MODEL_TYPE_TO_DOC_MAPPING else m for m in model_types] errors = [] for m in model_docs: if m not in model_types and m != "auto": close_matches = get_close_matches(m, model_types) error_message = f"{m} is not a proper model identifier." if len(close_matches) > 0: close_matches = "/".join(close_matches) error_message += f" Did you mean {close_matches}?" 
errors.append(error_message) if len(errors) > 0: raise ValueError( "Some model doc pages do not match any existing model type:\n" + "\n".join(errors) + "\nYou can add any missing model type to the `MODEL_NAMES_MAPPING` constant in " "models/auto/configuration_auto.py." ) # Re pattern to catch :obj:`xx`, :class:`xx`, :func:`xx` or :meth:`xx`. _re_rst_special_words = re.compile(r":(?:obj|func|class|meth):`([^`]+)`") # Re pattern to catch things between double backquotes. _re_double_backquotes = re.compile(r"(^|[^`])``([^`]+)``([^`]|$)") # Re pattern to catch example introduction. _re_rst_example = re.compile(r"^\s*Example.*::\s*$", flags=re.MULTILINE) def is_rst_docstring(docstring: str) -> True: """ Returns `True` if `docstring` is written in rst. """ if _re_rst_special_words.search(docstring) is not None: return True if _re_double_backquotes.search(docstring) is not None: return True if _re_rst_example.search(docstring) is not None: return True return False def check_docstrings_are_in_md(): """Check all docstrings are written in md and nor rst.""" files_with_rst = [] for file in Path(PATH_TO_TRANSFORMERS).glob("**/*.py"): with open(file, encoding="utf-8") as f: code = f.read() docstrings = code.split('"""') for idx, docstring in enumerate(docstrings): if idx % 2 == 0 or not is_rst_docstring(docstring): continue files_with_rst.append(file) break if len(files_with_rst) > 0: raise ValueError( "The following files have docstrings written in rst:\n" + "\n".join([f"- {f}" for f in files_with_rst]) + "\nTo fix this run `doc-builder convert path_to_py_file` after installing `doc-builder`\n" "(`pip install git+https://github.com/huggingface/doc-builder`)" ) def check_deprecated_constant_is_up_to_date(): """ Check if the constant `DEPRECATED_MODELS` in `models/auto/configuration_auto.py` is up to date. """ deprecated_folder = os.path.join(PATH_TO_TRANSFORMERS, "models", "deprecated") deprecated_models = [m for m in os.listdir(deprecated_folder) if not m.startswith("_")] constant_to_check = transformers.models.auto.configuration_auto.DEPRECATED_MODELS message = [] missing_models = sorted(set(deprecated_models) - set(constant_to_check)) if len(missing_models) != 0: missing_models = ", ".join(missing_models) message.append( "The following models are in the deprecated folder, make sure to add them to `DEPRECATED_MODELS` in " f"`models/auto/configuration_auto.py`: {missing_models}." ) extra_models = sorted(set(constant_to_check) - set(deprecated_models)) if len(extra_models) != 0: extra_models = ", ".join(extra_models) message.append( "The following models are in the `DEPRECATED_MODELS` constant but not in the deprecated folder. Either " f"remove them from the constant or move to the deprecated folder: {extra_models}." 
) if len(message) > 0: raise Exception("\n".join(message)) def check_repo_quality(): """Check all models are properly tested and documented.""" print("Checking all models are included.") check_model_list() print("Checking all models are public.") check_models_are_in_init() print("Checking all models are properly tested.") check_all_decorator_order() check_all_models_are_tested() print("Checking all objects are properly documented.") check_all_objects_are_documented() print("Checking all models are in at least one auto class.") check_all_models_are_auto_configured() print("Checking all names in auto name mappings are defined.") check_all_auto_object_names_being_defined() print("Checking all keys in auto name mappings are defined in `CONFIG_MAPPING_NAMES`.") check_all_auto_mapping_names_in_config_mapping_names() print("Checking all auto mappings could be imported.") check_all_auto_mappings_importable() print("Checking all objects are equally (across frameworks) in the main __init__.") check_objects_being_equally_in_main_init() print("Checking the DEPRECATED_MODELS constant is up to date.") check_deprecated_constant_is_up_to_date() if __name__ == "__main__": check_repo_quality()
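# Usage note (a sketch, hedged): per the `__main__` block above, this script is meant to be run from the repository
# root with `python utils/check_repo.py`. It is typically also invoked through the repo consistency tooling (e.g. a
# `make repo-consistency` target in the Transformers Makefile, assumed here). For complete results, all three
# backends (PyTorch, TensorFlow and Flax) should be installed, for instance with `pip install -e '.[dev]'`;
# otherwise `check_missing_backends()` only emits a warning outside CI and the checks are partial.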
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/tests_fetcher.py
# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Welcome to tests_fetcher V2.

This util is designed to fetch tests to run on a PR so that only the tests impacted by the modifications are run, and
when too many models are being impacted, only run the tests of a subset of core models. It works like this.

Stage 1: Identify the modified files. For jobs that run on the main branch, it's just the diff with the last commit.
On a PR, this takes all the files from the branching point to the current commit (so all modifications in a PR, not
just the last commit) but excludes modifications that are on docstrings or comments only.

Stage 2: Extract the tests to run. This is done by looking at the imports in each module and test file: if module A
imports module B, then changing module B impacts module A, so the tests using module A should be run. We thus get the
dependencies of each model and then recursively build the 'reverse' map of dependencies to get all modules and tests
impacted by a given file. We then only keep the tests (and only the core model tests if there are too many modules).

Caveats:
  - This module only filters tests by files (not individual tests) so it's better to have tests for different things
    in different files.
  - This module assumes inits are just importing things, not really building objects, so it's better to structure
    them this way and move object building to separate submodules.

Usage:

Base use to fetch the tests in a pull request:

```bash
python utils/tests_fetcher.py
```

Base use to fetch the tests on the main branch (with diff from the last commit):

```bash
python utils/tests_fetcher.py --diff_with_last_commit
```
"""

import argparse
import collections
import importlib.util
import json
import os
import re
import tempfile
from contextlib import contextmanager
from pathlib import Path
from typing import Dict, List, Optional, Tuple, Union

from git import Repo


PATH_TO_REPO = Path(__file__).parent.parent.resolve()
PATH_TO_EXAMPLES = PATH_TO_REPO / "examples"
PATH_TO_TRANFORMERS = PATH_TO_REPO / "src/transformers"
PATH_TO_TESTS = PATH_TO_REPO / "tests"

# The value is just a heuristic to determine if we `guess` all models are impacted.
# This variable has effect only if `filter_models=False`.
NUM_MODELS_TO_TRIGGER_FULL_CI = 30

# List here the models to always test.
IMPORTANT_MODELS = [
    "auto",
    # Most downloaded models
    "bert",
    "clip",
    "t5",
    "xlm-roberta",
    "gpt2",
    "bart",
    "mpnet",
    "gpt-j",
    "wav2vec2",
    "deberta-v2",
    "layoutlm",
    "llama",
    "opt",
    "longformer",
    "vit",
    "whisper",
    # Pipeline-specific model (to be sure each pipeline has one model in this list)
    "tapas",
    "vilt",
    "clap",
    "detr",
    "owlvit",
    "dpt",
    "videomae",
]


@contextmanager
def checkout_commit(repo: Repo, commit_id: str):
    """
    Context manager that checks out a given commit when entered, but gets back to the reference it was at on exit.

    Args:
        repo (`git.Repo`): A git repository (for instance the Transformers repo).
        commit_id (`str`): The commit reference to checkout inside the context manager.
    """
    current_head = repo.head.commit if repo.head.is_detached else repo.head.ref
    try:
        repo.git.checkout(commit_id)
        yield
    finally:
        repo.git.checkout(current_head)


def clean_code(content: str) -> str:
    """
    Remove docstrings, empty lines or comments from some code (used to detect if a diff is real or only concerns
    comments or docstrings).

    Args:
        content (`str`): The code to clean.

    Returns:
        `str`: The cleaned code.
    """
    # We need to deactivate autoformatting here to write escaped triple quotes (we cannot use real triple quotes or
    # this would mess up the result if this function is applied to this particular file).
    # fmt: off
    # Remove docstrings by splitting on triple " then triple ':
    splits = content.split('\"\"\"')
    content = "".join(splits[::2])
    splits = content.split("\'\'\'")
    # fmt: on
    content = "".join(splits[::2])

    # Remove empty lines and comments
    lines_to_keep = []
    for line in content.split("\n"):
        # remove anything that is after a # sign.
        line = re.sub("#.*$", "", line)
        # remove white lines
        if len(line) != 0 and not line.isspace():
            lines_to_keep.append(line)

    return "\n".join(lines_to_keep)


def keep_doc_examples_only(content: str) -> str:
    """
    Remove everything from the code content except the doc examples (used to determine if a diff should trigger doc
    tests or not).

    Args:
        content (`str`): The code to clean.

    Returns:
        `str`: The cleaned code.
    """
    # Keep doc examples only by splitting on triple "`"
    splits = content.split("```")
    # Add leading and trailing "```" so the navigation is easier when compared to the original input `content`
    content = "```" + "```".join(splits[1::2]) + "```"

    # Remove empty lines and comments
    lines_to_keep = []
    for line in content.split("\n"):
        # remove anything that is after a # sign.
        line = re.sub("#.*$", "", line)
        # remove white lines
        if len(line) != 0 and not line.isspace():
            lines_to_keep.append(line)

    return "\n".join(lines_to_keep)


def get_all_tests() -> List[str]:
    """
    Walks the `tests` folder to return a list of files/subfolders. This is used to split the tests to run when using
    parallelism. The split is:

    - folders under `tests` (`tokenization`, `pipelines`, etc.), excluding the subfolder `models`.
    - folders under `tests/models`: `bert`, `gpt2`, etc.
    - test files under `tests`: `test_modeling_common.py`, `test_tokenization_common.py`, etc.
    """
    # test folders/files directly under `tests` folder
    tests = os.listdir(PATH_TO_TESTS)
    tests = [f"tests/{f}" for f in tests if "__pycache__" not in f]
    tests = sorted([f for f in tests if (PATH_TO_REPO / f).is_dir() or f.startswith("tests/test_")])

    # model specific test folders
    model_test_folders = os.listdir(PATH_TO_TESTS / "models")
    model_test_folders = [f"tests/models/{f}" for f in model_test_folders if "__pycache__" not in f]
    model_test_folders = sorted([f for f in model_test_folders if (PATH_TO_REPO / f).is_dir()])

    tests.remove("tests/models")
    # Sagemaker tests are not meant to be run on the CI.
    if "tests/sagemaker" in tests:
        tests.remove("tests/sagemaker")
    tests = model_test_folders + tests
    return tests


def diff_is_docstring_only(repo: Repo, branching_point: str, filename: str) -> bool:
    """
    Check if the diff is only in docstrings (or comments and whitespace) in a filename.

    Args:
        repo (`git.Repo`): A git repository (for instance the Transformers repo).
        branching_point (`str`): The commit reference of where to compare for the diff.
        filename (`str`): The filename where we want to know if the diff is only in docstrings/comments.
Returns: `bool`: Whether the diff is docstring/comments only or not. """ folder = Path(repo.working_dir) with checkout_commit(repo, branching_point): with open(folder / filename, "r", encoding="utf-8") as f: old_content = f.read() with open(folder / filename, "r", encoding="utf-8") as f: new_content = f.read() old_content_clean = clean_code(old_content) new_content_clean = clean_code(new_content) return old_content_clean == new_content_clean def diff_contains_doc_examples(repo: Repo, branching_point: str, filename: str) -> bool: """ Check if the diff is only in code examples of the doc in a filename. Args: repo (`git.Repo`): A git repository (for instance the Transformers repo). branching_point (`str`): The commit reference of where to compare for the diff. filename (`str`): The filename where we want to know if the diff is only in codes examples. Returns: `bool`: Whether the diff is only in code examples of the doc or not. """ folder = Path(repo.working_dir) with checkout_commit(repo, branching_point): with open(folder / filename, "r", encoding="utf-8") as f: old_content = f.read() with open(folder / filename, "r", encoding="utf-8") as f: new_content = f.read() old_content_clean = keep_doc_examples_only(old_content) new_content_clean = keep_doc_examples_only(new_content) return old_content_clean != new_content_clean def get_impacted_files_from_tiny_model_summary(diff_with_last_commit: bool = False) -> List[str]: """ Return a list of python modeling files that are impacted by the changes of `tiny_model_summary.json` in between: - the current head and the main branch if `diff_with_last_commit=False` (default) - the current head and its parent commit otherwise. Returns: `List[str]`: The list of Python modeling files that are impacted by the changes of `tiny_model_summary.json`. """ repo = Repo(PATH_TO_REPO) folder = Path(repo.working_dir) if not diff_with_last_commit: print(f"main is at {repo.refs.main.commit}") print(f"Current head is at {repo.head.commit}") commits = repo.merge_base(repo.refs.main, repo.head) for commit in commits: print(f"Branching commit: {commit}") else: print(f"main is at {repo.head.commit}") commits = repo.head.commit.parents for commit in commits: print(f"Parent commit: {commit}") if not os.path.isfile(folder / "tests/utils/tiny_model_summary.json"): return [] files = set() for commit in commits: with checkout_commit(repo, commit): with open(folder / "tests/utils/tiny_model_summary.json", "r", encoding="utf-8") as f: old_content = f.read() with open(folder / "tests/utils/tiny_model_summary.json", "r", encoding="utf-8") as f: new_content = f.read() # get the content as json object old_content = json.loads(old_content) new_content = json.loads(new_content) old_keys = set(old_content.keys()) new_keys = set(new_content.keys()) # get the difference keys_with_diff = old_keys.symmetric_difference(new_keys) common_keys = old_keys.intersection(new_keys) # if both have the same key, check its content for key in common_keys: if old_content[key] != new_content[key]: keys_with_diff.add(key) # get the model classes impacted_model_classes = [] for key in keys_with_diff: if key in new_keys: impacted_model_classes.extend(new_content[key]["model_classes"]) # get the module where the model classes are defined. We want to use the main `__init__` file, but it requires # all the framework being installed, which is not ideal for a simple script like test fetcher. # So we create a temporary and modified main `__init__` and access its `_import_structure`. 
with open(folder / "src/transformers/__init__.py") as fp: lines = fp.readlines() new_lines = [] # Get all the code related to `_import_structure` for line in lines: if line == "_import_structure = {\n": new_lines.append(line) elif line == "# Direct imports for type-checking\n": break elif len(new_lines) > 0: # bypass the framework check so we can get all the information even if frameworks are not available line = re.sub(r"is_.+_available\(\)", "True", line) line = line.replace("OptionalDependencyNotAvailable", "Exception") line = line.replace("Exception()", "Exception") new_lines.append(line) # create and load the temporary module with tempfile.TemporaryDirectory() as tmpdirname: with open(os.path.join(tmpdirname, "temp_init.py"), "w") as fp: fp.write("".join(new_lines)) spec = importlib.util.spec_from_file_location("temp_init", os.path.join(tmpdirname, "temp_init.py")) module = importlib.util.module_from_spec(spec) spec.loader.exec_module(module) # Finally, get `_import_structure` that we need import_structure = module._import_structure # map model classes to their defined module reversed_structure = {} for key, values in import_structure.items(): for value in values: reversed_structure[value] = key # Get the corresponding modeling file path for model_class in impacted_model_classes: module = reversed_structure[model_class] framework = "" if model_class.startswith("TF"): framework = "tf" elif model_class.startswith("Flax"): framework = "flax" fn = ( f"modeling_{module.split('.')[-1]}.py" if framework == "" else f"modeling_{framework}_{module.split('.')[-1]}.py" ) files.add( f"src.transformers.{module}.{fn}".replace(".", os.path.sep).replace(f"{os.path.sep}py", ".py") ) return sorted(files) def get_diff(repo: Repo, base_commit: str, commits: List[str]) -> List[str]: """ Get the diff between a base commit and one or several commits. Args: repo (`git.Repo`): A git repository (for instance the Transformers repo). base_commit (`str`): The commit reference of where to compare for the diff. This is the current commit, not the branching point! commits (`List[str]`): The list of commits with which to compare the repo at `base_commit` (so the branching point). Returns: `List[str]`: The list of Python files with a diff (files added, renamed or deleted are always returned, files modified are returned if the diff in the file is not only in docstrings or comments, see `diff_is_docstring_only`). """ print("\n### DIFF ###\n") code_diff = [] for commit in commits: for diff_obj in commit.diff(base_commit): # We always add new python files if diff_obj.change_type == "A" and diff_obj.b_path.endswith(".py"): code_diff.append(diff_obj.b_path) # We check that deleted python files won't break corresponding tests. elif diff_obj.change_type == "D" and diff_obj.a_path.endswith(".py"): code_diff.append(diff_obj.a_path) # Now for modified files elif diff_obj.change_type in ["M", "R"] and diff_obj.b_path.endswith(".py"): # In case of renames, we'll look at the tests using both the old and new name. if diff_obj.a_path != diff_obj.b_path: code_diff.extend([diff_obj.a_path, diff_obj.b_path]) else: # Otherwise, we check modifications are in code and not docstrings. 
if diff_is_docstring_only(repo, commit, diff_obj.b_path): print(f"Ignoring diff in {diff_obj.b_path} as it only concerns docstrings or comments.") else: code_diff.append(diff_obj.a_path) return code_diff def get_modified_python_files(diff_with_last_commit: bool = False) -> List[str]: """ Return a list of python files that have been modified between: - the current head and the main branch if `diff_with_last_commit=False` (default) - the current head and its parent commit otherwise. Returns: `List[str]`: The list of Python files with a diff (files added, renamed or deleted are always returned, files modified are returned if the diff in the file is not only in docstrings or comments, see `diff_is_docstring_only`). """ repo = Repo(PATH_TO_REPO) if not diff_with_last_commit: print(f"main is at {repo.refs.main.commit}") print(f"Current head is at {repo.head.commit}") branching_commits = repo.merge_base(repo.refs.main, repo.head) for commit in branching_commits: print(f"Branching commit: {commit}") return get_diff(repo, repo.head.commit, branching_commits) else: print(f"main is at {repo.head.commit}") parent_commits = repo.head.commit.parents for commit in parent_commits: print(f"Parent commit: {commit}") return get_diff(repo, repo.head.commit, parent_commits) def get_diff_for_doctesting(repo: Repo, base_commit: str, commits: List[str]) -> List[str]: """ Get the diff in doc examples between a base commit and one or several commits. Args: repo (`git.Repo`): A git repository (for instance the Transformers repo). base_commit (`str`): The commit reference of where to compare for the diff. This is the current commit, not the branching point! commits (`List[str]`): The list of commits with which to compare the repo at `base_commit` (so the branching point). Returns: `List[str]`: The list of Python and Markdown files with a diff (files added or renamed are always returned, files modified are returned if the diff in the file is only in doctest examples). """ print("\n### DIFF ###\n") code_diff = [] for commit in commits: for diff_obj in commit.diff(base_commit): # We only consider Python files and doc files. if not diff_obj.b_path.endswith(".py") and not diff_obj.b_path.endswith(".md"): continue # We always add new python/md files if diff_obj.change_type in ["A"]: code_diff.append(diff_obj.b_path) # Now for modified files elif diff_obj.change_type in ["M", "R"]: # In case of renames, we'll look at the tests using both the old and new name. if diff_obj.a_path != diff_obj.b_path: code_diff.extend([diff_obj.a_path, diff_obj.b_path]) else: # Otherwise, we check modifications contain some doc example(s). if diff_contains_doc_examples(repo, commit, diff_obj.b_path): code_diff.append(diff_obj.a_path) else: print(f"Ignoring diff in {diff_obj.b_path} as it doesn't contain any doc example.") return code_diff def get_all_doctest_files() -> List[str]: """ Return the complete list of python and Markdown files on which we run doctest. At this moment, we restrict this to only take files from `src/` or `docs/source/en/` that are not in `utils/not_doctested.txt`. Returns: `List[str]`: The complete list of Python and Markdown files on which we run doctest. 
""" py_files = [str(x.relative_to(PATH_TO_REPO)) for x in PATH_TO_REPO.glob("**/*.py")] md_files = [str(x.relative_to(PATH_TO_REPO)) for x in PATH_TO_REPO.glob("**/*.md")] test_files_to_run = py_files + md_files # change to use "/" as path separator test_files_to_run = ["/".join(Path(x).parts) for x in test_files_to_run] # don't run doctest for files in `src/transformers/models/deprecated` test_files_to_run = [x for x in test_files_to_run if "models/deprecated" not in x] # only include files in `src` or `docs/source/en/` test_files_to_run = [x for x in test_files_to_run if x.startswith(("src/", "docs/source/en/"))] # not include init files test_files_to_run = [x for x in test_files_to_run if not x.endswith(("__init__.py",))] # These are files not doctested yet. with open("utils/not_doctested.txt") as fp: not_doctested = {x.split(" ")[0] for x in fp.read().strip().split("\n")} # So far we don't have 100% coverage for doctest. This line will be removed once we achieve 100%. test_files_to_run = [x for x in test_files_to_run if x not in not_doctested] return sorted(test_files_to_run) def get_new_doctest_files(repo, base_commit, branching_commit) -> List[str]: """ Get the list of files that were removed from "utils/not_doctested.txt", between `base_commit` and `branching_commit`. Returns: `List[str]`: List of files that were removed from "utils/not_doctested.txt". """ for diff_obj in branching_commit.diff(base_commit): # Ignores all but the "utils/not_doctested.txt" file. if diff_obj.a_path != "utils/not_doctested.txt": continue # Loads the two versions folder = Path(repo.working_dir) with checkout_commit(repo, branching_commit): with open(folder / "utils/not_doctested.txt", "r", encoding="utf-8") as f: old_content = f.read() with open(folder / "utils/not_doctested.txt", "r", encoding="utf-8") as f: new_content = f.read() # Compute the removed lines and return them removed_content = {x.split(" ")[0] for x in old_content.split("\n")} - { x.split(" ")[0] for x in new_content.split("\n") } return sorted(removed_content) return [] def get_doctest_files(diff_with_last_commit: bool = False) -> List[str]: """ Return a list of python and Markdown files where doc example have been modified between: - the current head and the main branch if `diff_with_last_commit=False` (default) - the current head and its parent commit otherwise. Returns: `List[str]`: The list of Python and Markdown files with a diff (files added or renamed are always returned, files modified are returned if the diff in the file is only in doctest examples). """ repo = Repo(PATH_TO_REPO) test_files_to_run = [] # noqa if not diff_with_last_commit: print(f"main is at {repo.refs.main.commit}") print(f"Current head is at {repo.head.commit}") branching_commits = repo.merge_base(repo.refs.main, repo.head) for commit in branching_commits: print(f"Branching commit: {commit}") test_files_to_run = get_diff_for_doctesting(repo, repo.head.commit, branching_commits) else: print(f"main is at {repo.head.commit}") parent_commits = repo.head.commit.parents for commit in parent_commits: print(f"Parent commit: {commit}") test_files_to_run = get_diff_for_doctesting(repo, repo.head.commit, parent_commits) all_test_files_to_run = get_all_doctest_files() # Add to the test files to run any removed entry from "utils/not_doctested.txt". 
new_test_files = get_new_doctest_files(repo, repo.head.commit, repo.refs.main.commit) test_files_to_run = list(set(test_files_to_run + new_test_files)) # Do not run slow doctest tests on CircleCI with open("utils/slow_documentation_tests.txt") as fp: slow_documentation_tests = set(fp.read().strip().split("\n")) test_files_to_run = [ x for x in test_files_to_run if x in all_test_files_to_run and x not in slow_documentation_tests ] # Make sure we did not end up with a test file that was removed test_files_to_run = [f for f in test_files_to_run if (PATH_TO_REPO / f).exists()] return sorted(test_files_to_run) # (:?^|\n) -> Non-catching group for the beginning of the doc or a new line. # \s*from\s+(\.+\S+)\s+import\s+([^\n]+) -> Line only contains from .xxx import yyy and we catch .xxx and yyy # (?=\n) -> Look-ahead to a new line. We can't just put \n here or using find_all on this re will only catch every # other import. _re_single_line_relative_imports = re.compile(r"(?:^|\n)\s*from\s+(\.+\S+)\s+import\s+([^\n]+)(?=\n)") # (:?^|\n) -> Non-catching group for the beginning of the doc or a new line. # \s*from\s+(\.+\S+)\s+import\s+\(([^\)]+)\) -> Line continues with from .xxx import (yyy) and we catch .xxx and yyy # yyy will take multiple lines otherwise there wouldn't be parenthesis. _re_multi_line_relative_imports = re.compile(r"(?:^|\n)\s*from\s+(\.+\S+)\s+import\s+\(([^\)]+)\)") # (:?^|\n) -> Non-catching group for the beginning of the doc or a new line. # \s*from\s+transformers(\S*)\s+import\s+([^\n]+) -> Line only contains from transformers.xxx import yyy and we catch # .xxx and yyy # (?=\n) -> Look-ahead to a new line. We can't just put \n here or using find_all on this re will only catch every # other import. _re_single_line_direct_imports = re.compile(r"(?:^|\n)\s*from\s+transformers(\S*)\s+import\s+([^\n]+)(?=\n)") # (:?^|\n) -> Non-catching group for the beginning of the doc or a new line. # \s*from\s+transformers(\S*)\s+import\s+\(([^\)]+)\) -> Line continues with from transformers.xxx import (yyy) and we # catch .xxx and yyy. yyy will take multiple lines otherwise there wouldn't be parenthesis. _re_multi_line_direct_imports = re.compile(r"(?:^|\n)\s*from\s+transformers(\S*)\s+import\s+\(([^\)]+)\)") def extract_imports(module_fname: str, cache: Dict[str, List[str]] = None) -> List[str]: """ Get the imports a given module makes. Args: module_fname (`str`): The name of the file of the module where we want to look at the imports (given relative to the root of the repo). cache (Dictionary `str` to `List[str]`, *optional*): To speed up this function if it was previously called on `module_fname`, the cache of all previously computed results. Returns: `List[str]`: The list of module filenames imported in the input `module_fname` (a submodule we import from that is a subfolder will give its init file). """ if cache is not None and module_fname in cache: return cache[module_fname] with open(PATH_TO_REPO / module_fname, "r", encoding="utf-8") as f: content = f.read() # Filter out all docstrings to not get imports in code examples. As before we need to deactivate formatting to # keep this as escaped quotes and avoid this function failing on this file. 
splits = content.split('\"\"\"') # fmt: skip content = "".join(splits[::2]) module_parts = str(module_fname).split(os.path.sep) imported_modules = [] # Let's start with relative imports relative_imports = _re_single_line_relative_imports.findall(content) relative_imports = [ (mod, imp) for mod, imp in relative_imports if "# tests_ignore" not in imp and imp.strip() != "(" ] multiline_relative_imports = _re_multi_line_relative_imports.findall(content) relative_imports += [(mod, imp) for mod, imp in multiline_relative_imports if "# tests_ignore" not in imp] # We need to remove parts of the module name depending on the depth of the relative imports. for module, imports in relative_imports: level = 0 while module.startswith("."): module = module[1:] level += 1 if len(module) > 0: dep_parts = module_parts[: len(module_parts) - level] + module.split(".") else: dep_parts = module_parts[: len(module_parts) - level] imported_module = os.path.sep.join(dep_parts) imported_modules.append((imported_module, [imp.strip() for imp in imports.split(",")])) # Let's continue with direct imports direct_imports = _re_single_line_direct_imports.findall(content) direct_imports = [(mod, imp) for mod, imp in direct_imports if "# tests_ignore" not in imp and imp.strip() != "("] multiline_direct_imports = _re_multi_line_direct_imports.findall(content) direct_imports += [(mod, imp) for mod, imp in multiline_direct_imports if "# tests_ignore" not in imp] # We need to find the relative path of those imports. for module, imports in direct_imports: import_parts = module.split(".")[1:] # ignore the name of the repo since we add it below. dep_parts = ["src", "transformers"] + import_parts imported_module = os.path.sep.join(dep_parts) imported_modules.append((imported_module, [imp.strip() for imp in imports.split(",")])) result = [] # Double check we get proper modules (either a python file or a folder with an init). for module_file, imports in imported_modules: if (PATH_TO_REPO / f"{module_file}.py").is_file(): module_file = f"{module_file}.py" elif (PATH_TO_REPO / module_file).is_dir() and (PATH_TO_REPO / module_file / "__init__.py").is_file(): module_file = os.path.sep.join([module_file, "__init__.py"]) imports = [imp for imp in imports if len(imp) > 0 and re.match("^[A-Za-z0-9_]*$", imp)] if len(imports) > 0: result.append((module_file, imports)) if cache is not None: cache[module_fname] = result return result def get_module_dependencies(module_fname: str, cache: Dict[str, List[str]] = None) -> List[str]: """ Refines the result of `extract_imports` to remove subfolders and get a proper list of module filenames: if a file as an import `from utils import Foo, Bar`, with `utils` being a subfolder containing many files, this will traverse the `utils` init file to check where those dependencies come from: for instance the files utils/foo.py and utils/bar.py. Warning: This presupposes that all intermediate inits are properly built (with imports from the respective submodules) and work better if objects are defined in submodules and not the intermediate init (otherwise the intermediate init is added, and inits usually have a lot of dependencies). Args: module_fname (`str`): The name of the file of the module where we want to look at the imports (given relative to the root of the repo). cache (Dictionary `str` to `List[str]`, *optional*): To speed up this function if it was previously called on `module_fname`, the cache of all previously computed results. 
Returns: `List[str]`: The list of module filenames imported in the input `module_fname` (with submodule imports refined). """ dependencies = [] imported_modules = extract_imports(module_fname, cache=cache) # The while loop is to recursively traverse all inits we may encounter: we will add things as we go. while len(imported_modules) > 0: new_modules = [] for module, imports in imported_modules: # If we end up in an __init__ we are often not actually importing from this init (except in the case where # the object is fully defined in the __init__) if module.endswith("__init__.py"): # So we get the imports from that init then try to find where our objects come from. new_imported_modules = extract_imports(module, cache=cache) for new_module, new_imports in new_imported_modules: if any(i in new_imports for i in imports): if new_module not in dependencies: new_modules.append((new_module, [i for i in new_imports if i in imports])) imports = [i for i in imports if i not in new_imports] if len(imports) > 0: # If there are any objects lefts, they may be a submodule path_to_module = PATH_TO_REPO / module.replace("__init__.py", "") dependencies.extend( [ os.path.join(module.replace("__init__.py", ""), f"{i}.py") for i in imports if (path_to_module / f"{i}.py").is_file() ] ) imports = [i for i in imports if not (path_to_module / f"{i}.py").is_file()] if len(imports) > 0: # Then if there are still objects left, they are fully defined in the init, so we keep it as a # dependency. dependencies.append(module) else: dependencies.append(module) imported_modules = new_modules return dependencies def create_reverse_dependency_tree() -> List[Tuple[str, str]]: """ Create a list of all edges (a, b) which mean that modifying a impacts b with a going over all module and test files. """ cache = {} all_modules = list(PATH_TO_TRANFORMERS.glob("**/*.py")) + list(PATH_TO_TESTS.glob("**/*.py")) all_modules = [str(mod.relative_to(PATH_TO_REPO)) for mod in all_modules] edges = [(dep, mod) for mod in all_modules for dep in get_module_dependencies(mod, cache=cache)] return list(set(edges)) def get_tree_starting_at(module: str, edges: List[Tuple[str, str]]) -> List[Union[str, List[str]]]: """ Returns the tree starting at a given module following all edges. Args: module (`str`): The module that will be the root of the subtree we want. eges (`List[Tuple[str, str]]`): The list of all edges of the tree. Returns: `List[Union[str, List[str]]]`: The tree to print in the following format: [module, [list of edges starting at module], [list of edges starting at the preceding level], ...] """ vertices_seen = [module] new_edges = [edge for edge in edges if edge[0] == module and edge[1] != module and "__init__.py" not in edge[1]] tree = [module] while len(new_edges) > 0: tree.append(new_edges) final_vertices = list({edge[1] for edge in new_edges}) vertices_seen.extend(final_vertices) new_edges = [ edge for edge in edges if edge[0] in final_vertices and edge[1] not in vertices_seen and "__init__.py" not in edge[1] ] return tree def print_tree_deps_of(module, all_edges=None): """ Prints the tree of modules depending on a given module. Args: module (`str`): The module that will be the root of the subtree we want. all_eges (`List[Tuple[str, str]]`, *optional*): The list of all edges of the tree. Will be set to `create_reverse_dependency_tree()` if not passed. 
""" if all_edges is None: all_edges = create_reverse_dependency_tree() tree = get_tree_starting_at(module, all_edges) # The list of lines is a list of tuples (line_to_be_printed, module) # Keeping the modules lets us know where to insert each new lines in the list. lines = [(tree[0], tree[0])] for index in range(1, len(tree)): edges = tree[index] start_edges = {edge[0] for edge in edges} for start in start_edges: end_edges = {edge[1] for edge in edges if edge[0] == start} # We will insert all those edges just after the line showing start. pos = 0 while lines[pos][1] != start: pos += 1 lines = lines[: pos + 1] + [(" " * (2 * index) + end, end) for end in end_edges] + lines[pos + 1 :] for line in lines: # We don't print the refs that where just here to help build lines. print(line[0]) def init_test_examples_dependencies() -> Tuple[Dict[str, List[str]], List[str]]: """ The test examples do not import from the examples (which are just scripts, not modules) so we need som extra care initializing the dependency map, which is the goal of this function. It initializes the dependency map for example files by linking each example to the example test file for the example framework. Returns: `Tuple[Dict[str, List[str]], List[str]]`: A tuple with two elements: the initialized dependency map which is a dict test example file to list of example files potentially tested by that test file, and the list of all example files (to avoid recomputing it later). """ test_example_deps = {} all_examples = [] for framework in ["flax", "pytorch", "tensorflow"]: test_files = list((PATH_TO_EXAMPLES / framework).glob("test_*.py")) all_examples.extend(test_files) # Remove the files at the root of examples/framework since they are not proper examples (they are eith utils # or example test files). examples = [ f for f in (PATH_TO_EXAMPLES / framework).glob("**/*.py") if f.parent != PATH_TO_EXAMPLES / framework ] all_examples.extend(examples) for test_file in test_files: with open(test_file, "r", encoding="utf-8") as f: content = f.read() # Map all examples to the test files found in examples/framework. test_example_deps[str(test_file.relative_to(PATH_TO_REPO))] = [ str(e.relative_to(PATH_TO_REPO)) for e in examples if e.name in content ] # Also map the test files to themselves. test_example_deps[str(test_file.relative_to(PATH_TO_REPO))].append( str(test_file.relative_to(PATH_TO_REPO)) ) return test_example_deps, all_examples def create_reverse_dependency_map() -> Dict[str, List[str]]: """ Create the dependency map from module/test filename to the list of modules/tests that depend on it recursively. Returns: `Dict[str, List[str]]`: The reverse dependency map as a dictionary mapping filenames to all the filenames depending on it recursively. This way the tests impacted by a change in file A are the test files in the list corresponding to key A in this result. """ cache = {} # Start from the example deps init. example_deps, examples = init_test_examples_dependencies() # Add all modules and all tests to all examples all_modules = list(PATH_TO_TRANFORMERS.glob("**/*.py")) + list(PATH_TO_TESTS.glob("**/*.py")) + examples all_modules = [str(mod.relative_to(PATH_TO_REPO)) for mod in all_modules] # Compute the direct dependencies of all modules. 
direct_deps = {m: get_module_dependencies(m, cache=cache) for m in all_modules} direct_deps.update(example_deps) # This recurses the dependencies something_changed = True while something_changed: something_changed = False for m in all_modules: for d in direct_deps[m]: # We stop recursing at an init (cause we always end up in the main init and we don't want to add all # files which the main init imports) if d.endswith("__init__.py"): continue if d not in direct_deps: raise ValueError(f"KeyError:{d}. From {m}") new_deps = set(direct_deps[d]) - set(direct_deps[m]) if len(new_deps) > 0: direct_deps[m].extend(list(new_deps)) something_changed = True # Finally we can build the reverse map. reverse_map = collections.defaultdict(list) for m in all_modules: for d in direct_deps[m]: reverse_map[d].append(m) # For inits, we don't do the reverse deps but the direct deps: if modifying an init, we want to make sure we test # all the modules impacted by that init. for m in [f for f in all_modules if f.endswith("__init__.py")]: direct_deps = get_module_dependencies(m, cache=cache) deps = sum([reverse_map[d] for d in direct_deps if not d.endswith("__init__.py")], direct_deps) reverse_map[m] = list(set(deps) - {m}) return reverse_map def create_module_to_test_map( reverse_map: Dict[str, List[str]] = None, filter_models: bool = False ) -> Dict[str, List[str]]: """ Extract the tests from the reverse_dependency_map and potentially filters the model tests. Args: reverse_map (`Dict[str, List[str]]`, *optional*): The reverse dependency map as created by `create_reverse_dependency_map`. Will default to the result of that function if not provided. filter_models (`bool`, *optional*, defaults to `False`): Whether or not to filter model tests to only include core models if a file impacts a lot of models. Returns: `Dict[str, List[str]]`: A dictionary that maps each file to the tests to execute if that file was modified. """ if reverse_map is None: reverse_map = create_reverse_dependency_map() # Utility that tells us if a given file is a test (taking test examples into account) def is_test(fname): if fname.startswith("tests"): return True if fname.startswith("examples") and fname.split(os.path.sep)[-1].startswith("test"): return True return False # Build the test map test_map = {module: [f for f in deps if is_test(f)] for module, deps in reverse_map.items()} if not filter_models: return test_map # Now we deal with the filtering if `filter_models` is True. num_model_tests = len(list(PATH_TO_TESTS.glob("models/*"))) def has_many_models(tests): # We filter to core models when a given file impacts more than half the model tests. model_tests = {Path(t).parts[2] for t in tests if t.startswith("tests/models/")} return len(model_tests) > num_model_tests // 2 # for each module (if specified in the argument `module`) of the form `models/my_model` (i.e. starting with it), # we always keep the tests (those are already in the argument `tests`) which are in `tests/models/my_model`. # This is to avoid them being excluded when a module has many impacted tests: the directly related test files should # always be included! def filter_tests(tests, module=""): return [ t for t in tests if not t.startswith("tests/models/") or Path(t).parts[2] in IMPORTANT_MODELS # at this point, `t` is of the form `tests/models/my_model`, and we check if `models/my_model` # (i.e. `parts[1:3]`) is in `module`. 
or "/".join(Path(t).parts[1:3]) in module ] return { module: (filter_tests(tests, module=module) if has_many_models(tests) else tests) for module, tests in test_map.items() } def check_imports_all_exist(): """ Isn't used per se by the test fetcher but might be used later as a quality check. Putting this here for now so the code is not lost. This checks all imports in a given file do exist. """ cache = {} all_modules = list(PATH_TO_TRANFORMERS.glob("**/*.py")) + list(PATH_TO_TESTS.glob("**/*.py")) all_modules = [str(mod.relative_to(PATH_TO_REPO)) for mod in all_modules] direct_deps = {m: get_module_dependencies(m, cache=cache) for m in all_modules} for module, deps in direct_deps.items(): for dep in deps: if not (PATH_TO_REPO / dep).is_file(): print(f"{module} has dependency on {dep} which does not exist.") def _print_list(l) -> str: """ Pretty print a list of elements with one line per element and a - starting each line. """ return "\n".join([f"- {f}" for f in l]) def create_json_map(test_files_to_run: List[str], json_output_file: str): """ Creates a map from a list of tests to run to easily split them by category, when running parallelism of slow tests. Args: test_files_to_run (`List[str]`): The list of tests to run. json_output_file (`str`): The path where to store the built json map. """ if json_output_file is None: return test_map = {} for test_file in test_files_to_run: # `test_file` is a path to a test folder/file, starting with `tests/`. For example, # - `tests/models/bert/test_modeling_bert.py` or `tests/models/bert` # - `tests/trainer/test_trainer.py` or `tests/trainer` # - `tests/test_modeling_common.py` names = test_file.split(os.path.sep) if names[1] == "models": # take the part like `models/bert` for modeling tests key = os.path.sep.join(names[1:3]) elif len(names) > 2 or not test_file.endswith(".py"): # test folders under `tests` or python files under them # take the part like tokenization, `pipeline`, etc. for other test categories key = os.path.sep.join(names[1:2]) else: # common test files directly under `tests/` key = "common" if key not in test_map: test_map[key] = [] test_map[key].append(test_file) # sort the keys & values keys = sorted(test_map.keys()) test_map = {k: " ".join(sorted(test_map[k])) for k in keys} with open(json_output_file, "w", encoding="UTF-8") as fp: json.dump(test_map, fp, ensure_ascii=False) def infer_tests_to_run( output_file: str, diff_with_last_commit: bool = False, filter_models: bool = True, json_output_file: Optional[str] = None, ): """ The main function called by the test fetcher. Determines the tests to run from the diff. Args: output_file (`str`): The path where to store the summary of the test fetcher analysis. Other files will be stored in the same folder: - examples_test_list.txt: The list of examples tests to run. - test_repo_utils.txt: Will indicate if the repo utils tests should be run or not. - doctest_list.txt: The list of doctests to run. diff_with_last_commit (`bool`, *optional*, defaults to `False`): Whether to analyze the diff with the last commit (for use on the main branch after a PR is merged) or with the branching point from main (for use on each PR). filter_models (`bool`, *optional*, defaults to `True`): Whether or not to filter the tests to core models only, when a file modified results in a lot of model tests. json_output_file (`str`, *optional*): The path where to store the json file mapping categories of tests to tests to run (used for parallelism or the slow tests). 
""" modified_files = get_modified_python_files(diff_with_last_commit=diff_with_last_commit) print(f"\n### MODIFIED FILES ###\n{_print_list(modified_files)}") # Create the map that will give us all impacted modules. reverse_map = create_reverse_dependency_map() impacted_files = modified_files.copy() for f in modified_files: if f in reverse_map: impacted_files.extend(reverse_map[f]) # Remove duplicates impacted_files = sorted(set(impacted_files)) print(f"\n### IMPACTED FILES ###\n{_print_list(impacted_files)}") model_impacted = {"/".join(x.split("/")[:3]) for x in impacted_files if x.startswith("tests/models/")} # Grab the corresponding test files: if any(x in modified_files for x in ["setup.py", ".circleci/create_circleci_config.py"]): test_files_to_run = ["tests", "examples"] repo_utils_launch = True elif not filter_models and len(model_impacted) >= NUM_MODELS_TO_TRIGGER_FULL_CI: print( f"More than {NUM_MODELS_TO_TRIGGER_FULL_CI - 1} models are impacted and `filter_models=False`. CI is configured to test everything." ) test_files_to_run = ["tests", "examples"] repo_utils_launch = True else: # All modified tests need to be run. test_files_to_run = [ f for f in modified_files if f.startswith("tests") and f.split(os.path.sep)[-1].startswith("test") ] impacted_files = get_impacted_files_from_tiny_model_summary(diff_with_last_commit=diff_with_last_commit) # Then we grab the corresponding test files. test_map = create_module_to_test_map(reverse_map=reverse_map, filter_models=filter_models) for f in modified_files + impacted_files: if f in test_map: test_files_to_run.extend(test_map[f]) test_files_to_run = sorted(set(test_files_to_run)) # Remove repo utils tests test_files_to_run = [f for f in test_files_to_run if not f.split(os.path.sep)[1] == "repo_utils"] # Remove SageMaker tests test_files_to_run = [f for f in test_files_to_run if not f.split(os.path.sep)[1] == "sagemaker"] # Make sure we did not end up with a test file that was removed test_files_to_run = [f for f in test_files_to_run if (PATH_TO_REPO / f).exists()] repo_utils_launch = any(f.split(os.path.sep)[0] == "utils" for f in modified_files) if repo_utils_launch: repo_util_file = Path(output_file).parent / "test_repo_utils.txt" with open(repo_util_file, "w", encoding="utf-8") as f: f.write("tests/repo_utils") examples_tests_to_run = [f for f in test_files_to_run if f.startswith("examples")] test_files_to_run = [f for f in test_files_to_run if not f.startswith("examples")] print(f"\n### TEST TO RUN ###\n{_print_list(test_files_to_run)}") if len(test_files_to_run) > 0: with open(output_file, "w", encoding="utf-8") as f: f.write(" ".join(test_files_to_run)) # Create a map that maps test categories to test files, i.e. `models/bert` -> [...test_modeling_bert.py, ...] # Get all test directories (and some common test files) under `tests` and `tests/models` if `test_files_to_run` # contains `tests` (i.e. when `setup.py` is changed). 
if "tests" in test_files_to_run: test_files_to_run = get_all_tests() create_json_map(test_files_to_run, json_output_file) print(f"\n### EXAMPLES TEST TO RUN ###\n{_print_list(examples_tests_to_run)}") if len(examples_tests_to_run) > 0: # We use `all` in the case `commit_flags["test_all"]` as well as in `create_circleci_config.py` for processing if examples_tests_to_run == ["examples"]: examples_tests_to_run = ["all"] example_file = Path(output_file).parent / "examples_test_list.txt" with open(example_file, "w", encoding="utf-8") as f: f.write(" ".join(examples_tests_to_run)) doctest_list = get_doctest_files() print(f"\n### DOCTEST TO RUN ###\n{_print_list(doctest_list)}") if len(doctest_list) > 0: doctest_file = Path(output_file).parent / "doctest_list.txt" with open(doctest_file, "w", encoding="utf-8") as f: f.write(" ".join(doctest_list)) def filter_tests(output_file: str, filters: List[str]): """ Reads the content of the output file and filters out all the tests in a list of given folders. Args: output_file (`str` or `os.PathLike`): The path to the output file of the tests fetcher. filters (`List[str]`): A list of folders to filter. """ if not os.path.isfile(output_file): print("No test file found.") return with open(output_file, "r", encoding="utf-8") as f: test_files = f.read().split(" ") if len(test_files) == 0 or test_files == [""]: print("No tests to filter.") return if test_files == ["tests"]: test_files = [os.path.join("tests", f) for f in os.listdir("tests") if f not in ["__init__.py"] + filters] else: test_files = [f for f in test_files if f.split(os.path.sep)[1] not in filters] with open(output_file, "w", encoding="utf-8") as f: f.write(" ".join(test_files)) def parse_commit_message(commit_message: str) -> Dict[str, bool]: """ Parses the commit message to detect if a command is there to skip, force all or part of the CI. Args: commit_message (`str`): The commit message of the current commit. Returns: `Dict[str, bool]`: A dictionary of strings to bools with keys the following keys: `"skip"`, `"test_all_models"` and `"test_all"`. 
""" if commit_message is None: return {"skip": False, "no_filter": False, "test_all": False} command_search = re.search(r"\[([^\]]*)\]", commit_message) if command_search is not None: command = command_search.groups()[0] command = command.lower().replace("-", " ").replace("_", " ") skip = command in ["ci skip", "skip ci", "circleci skip", "skip circleci"] no_filter = set(command.split(" ")) == {"no", "filter"} test_all = set(command.split(" ")) == {"test", "all"} return {"skip": skip, "no_filter": no_filter, "test_all": test_all} else: return {"skip": False, "no_filter": False, "test_all": False} if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--output_file", type=str, default="test_list.txt", help="Where to store the list of tests to run" ) parser.add_argument( "--json_output_file", type=str, default="test_map.json", help="Where to store the tests to run in a dictionary format mapping test categories to test files", ) parser.add_argument( "--diff_with_last_commit", action="store_true", help="To fetch the tests between the current commit and the last commit", ) parser.add_argument( "--filter_tests", action="store_true", help="Will filter the pipeline/repo utils tests outside of the generated list of tests.", ) parser.add_argument( "--print_dependencies_of", type=str, help="Will only print the tree of modules depending on the file passed.", default=None, ) parser.add_argument( "--commit_message", type=str, help="The commit message (which could contain a command to force all tests or skip the CI).", default=None, ) args = parser.parse_args() if args.print_dependencies_of is not None: print_tree_deps_of(args.print_dependencies_of) elif args.filter_tests: filter_tests(args.output_file, ["pipelines", "repo_utils"]) else: repo = Repo(PATH_TO_REPO) commit_message = repo.head.commit.message commit_flags = parse_commit_message(commit_message) if commit_flags["skip"]: print("Force-skipping the CI") quit() if commit_flags["no_filter"]: print("Running all tests fetched without filtering.") if commit_flags["test_all"]: print("Force-launching all tests") is_main_branch = not repo.head.is_detached and repo.head.ref == repo.refs.main diff_with_last_commit = args.diff_with_last_commit if not diff_with_last_commit and is_main_branch: print("main branch detected, fetching tests against last commit.") diff_with_last_commit = True if not commit_flags["test_all"]: try: infer_tests_to_run( args.output_file, diff_with_last_commit=diff_with_last_commit, json_output_file=args.json_output_file, filter_models=(not (commit_flags["no_filter"] or is_main_branch)), ) filter_tests(args.output_file, ["repo_utils"]) except Exception as e: print(f"\nError when trying to grab the relevant tests: {e}\n\nRunning all tests.") commit_flags["test_all"] = True if commit_flags["test_all"]: with open(args.output_file, "w", encoding="utf-8") as f: f.write("tests") example_file = Path(args.output_file).parent / "examples_test_list.txt" with open(example_file, "w", encoding="utf-8") as f: f.write("all") test_files_to_run = get_all_tests() create_json_map(test_files_to_run, args.json_output_file)
0
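The commit-message commands recognized by `parse_commit_message` above can be exercised directly. A minimal sketch, assuming the script's dependencies (e.g. GitPython) are installed and that it is run from the repository root with `utils/` added to `sys.path` (that import path is an assumption, not something the file sets up):

```python
import sys

sys.path.insert(0, "utils")  # assumption: executed from the repository root
from tests_fetcher import parse_commit_message

# "[skip ci]" (or "[ci skip]", "[circleci skip]", ...) short-circuits the CI.
assert parse_commit_message("Fix typo [skip ci]")["skip"] is True
# "[no filter]" (also "[no-filter]" / "[no_filter]") disables core-model filtering.
assert parse_commit_message("Refactor attention [no-filter]")["no_filter"] is True
# "[test all]" forces the full test suite.
assert parse_commit_message("Release prep [test_all]")["test_all"] is True
# Without a bracketed command, all flags stay False.
assert parse_commit_message("Plain commit") == {"skip": False, "no_filter": False, "test_all": False}
```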
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/custom_init_isort.py
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utility that sorts the imports in the custom inits of Transformers. Transformers uses init files that delay the import of an object to when it's actually needed. This is to avoid the main init importing all models, which would make the line `import transformers` very slow when the user has all optional dependencies installed. The inits with delayed imports have two halves: one definining a dictionary `_import_structure` which maps modules to the name of the objects in each module, and one in `TYPE_CHECKING` which looks like a normal init for type-checkers. `isort` or `ruff` properly sort the second half which looks like traditionl imports, the goal of this script is to sort the first half. Use from the root of the repo with: ```bash python utils/custom_init_isort.py ``` which will auto-sort the imports (used in `make style`). For a check only (as used in `make quality`) run: ```bash python utils/custom_init_isort.py --check_only ``` """ import argparse import os import re from typing import Any, Callable, List, Optional # Path is defined with the intent you should run this script from the root of the repo. PATH_TO_TRANSFORMERS = "src/transformers" # Pattern that looks at the indentation in a line. _re_indent = re.compile(r"^(\s*)\S") # Pattern that matches `"key":" and puts `key` in group 0. _re_direct_key = re.compile(r'^\s*"([^"]+)":') # Pattern that matches `_import_structure["key"]` and puts `key` in group 0. _re_indirect_key = re.compile(r'^\s*_import_structure\["([^"]+)"\]') # Pattern that matches `"key",` and puts `key` in group 0. _re_strip_line = re.compile(r'^\s*"([^"]+)",\s*$') # Pattern that matches any `[stuff]` and puts `stuff` in group 0. _re_bracket_content = re.compile(r"\[([^\]]+)\]") def get_indent(line: str) -> str: """Returns the indent in given line (as string).""" search = _re_indent.search(line) return "" if search is None else search.groups()[0] def split_code_in_indented_blocks( code: str, indent_level: str = "", start_prompt: Optional[str] = None, end_prompt: Optional[str] = None ) -> List[str]: """ Split some code into its indented blocks, starting at a given level. Args: code (`str`): The code to split. indent_level (`str`): The indent level (as string) to use for identifying the blocks to split. start_prompt (`str`, *optional*): If provided, only starts splitting at the line where this text is. end_prompt (`str`, *optional*): If provided, stops splitting at a line where this text is. Warning: The text before `start_prompt` or after `end_prompt` (if provided) is not ignored, just not split. The input `code` can thus be retrieved by joining the result. Returns: `List[str]`: The list of blocks. """ # Let's split the code into lines and move to start_index. 
index = 0 lines = code.split("\n") if start_prompt is not None: while not lines[index].startswith(start_prompt): index += 1 blocks = ["\n".join(lines[:index])] else: blocks = [] # This variable contains the block treated at a given time. current_block = [lines[index]] index += 1 # We split into blocks until we get to the `end_prompt` (or the end of the file). while index < len(lines) and (end_prompt is None or not lines[index].startswith(end_prompt)): # We have a non-empty line with the proper indent -> start of a new block if len(lines[index]) > 0 and get_indent(lines[index]) == indent_level: # Store the current block in the result and rest. There are two cases: the line is part of the block (like # a closing parenthesis) or not. if len(current_block) > 0 and get_indent(current_block[-1]).startswith(indent_level + " "): # Line is part of the current block current_block.append(lines[index]) blocks.append("\n".join(current_block)) if index < len(lines) - 1: current_block = [lines[index + 1]] index += 1 else: current_block = [] else: # Line is not part of the current block blocks.append("\n".join(current_block)) current_block = [lines[index]] else: # Just add the line to the current block current_block.append(lines[index]) index += 1 # Adds current block if it's nonempty. if len(current_block) > 0: blocks.append("\n".join(current_block)) # Add final block after end_prompt if provided. if end_prompt is not None and index < len(lines): blocks.append("\n".join(lines[index:])) return blocks def ignore_underscore_and_lowercase(key: Callable[[Any], str]) -> Callable[[Any], str]: """ Wraps a key function (as used in a sort) to lowercase and ignore underscores. """ def _inner(x): return key(x).lower().replace("_", "") return _inner def sort_objects(objects: List[Any], key: Optional[Callable[[Any], str]] = None) -> List[Any]: """ Sort a list of objects following the rules of isort (all uppercased first, camel-cased second and lower-cased last). Args: objects (`List[Any]`): The list of objects to sort. key (`Callable[[Any], str]`, *optional*): A function taking an object as input and returning a string, used to sort them by alphabetical order. If not provided, will default to noop (so a `key` must be provided if the `objects` are not of type string). Returns: `List[Any]`: The sorted list with the same elements as in the inputs """ # If no key is provided, we use a noop. def noop(x): return x if key is None: key = noop # Constants are all uppercase, they go first. constants = [obj for obj in objects if key(obj).isupper()] # Classes are not all uppercase but start with a capital, they go second. classes = [obj for obj in objects if key(obj)[0].isupper() and not key(obj).isupper()] # Functions begin with a lowercase, they go last. functions = [obj for obj in objects if not key(obj)[0].isupper()] # Then we sort each group. key1 = ignore_underscore_and_lowercase(key) return sorted(constants, key=key1) + sorted(classes, key=key1) + sorted(functions, key=key1) def sort_objects_in_import(import_statement: str) -> str: """ Sorts the imports in a single import statement. Args: import_statement (`str`): The import statement in which to sort the imports. Returns: `str`: The same as the input, but with objects properly sorted. """ # This inner function sort imports between [ ]. def _replace(match): imports = match.groups()[0] # If there is one import only, nothing to do. 
if "," not in imports: return f"[{imports}]" keys = [part.strip().replace('"', "") for part in imports.split(",")] # We will have a final empty element if the line finished with a comma. if len(keys[-1]) == 0: keys = keys[:-1] return "[" + ", ".join([f'"{k}"' for k in sort_objects(keys)]) + "]" lines = import_statement.split("\n") if len(lines) > 3: # Here we have to sort internal imports that are on several lines (one per name): # key: [ # "object1", # "object2", # ... # ] # We may have to ignore one or two lines on each side. idx = 2 if lines[1].strip() == "[" else 1 keys_to_sort = [(i, _re_strip_line.search(line).groups()[0]) for i, line in enumerate(lines[idx:-idx])] sorted_indices = sort_objects(keys_to_sort, key=lambda x: x[1]) sorted_lines = [lines[x[0] + idx] for x in sorted_indices] return "\n".join(lines[:idx] + sorted_lines + lines[-idx:]) elif len(lines) == 3: # Here we have to sort internal imports that are on one separate line: # key: [ # "object1", "object2", ... # ] if _re_bracket_content.search(lines[1]) is not None: lines[1] = _re_bracket_content.sub(_replace, lines[1]) else: keys = [part.strip().replace('"', "") for part in lines[1].split(",")] # We will have a final empty element if the line finished with a comma. if len(keys[-1]) == 0: keys = keys[:-1] lines[1] = get_indent(lines[1]) + ", ".join([f'"{k}"' for k in sort_objects(keys)]) return "\n".join(lines) else: # Finally we have to deal with imports fitting on one line import_statement = _re_bracket_content.sub(_replace, import_statement) return import_statement def sort_imports(file: str, check_only: bool = True): """ Sort the imports defined in the `_import_structure` of a given init. Args: file (`str`): The path to the init to check/fix. check_only (`bool`, *optional*, defaults to `True`): Whether or not to just check (and not auto-fix) the init. """ with open(file, encoding="utf-8") as f: code = f.read() # If the file is not a custom init, there is nothing to do. if "_import_structure" not in code: return # Blocks of indent level 0 main_blocks = split_code_in_indented_blocks( code, start_prompt="_import_structure = {", end_prompt="if TYPE_CHECKING:" ) # We ignore block 0 (everything untils start_prompt) and the last block (everything after end_prompt). for block_idx in range(1, len(main_blocks) - 1): # Check if the block contains some `_import_structure`s thingy to sort. block = main_blocks[block_idx] block_lines = block.split("\n") # Get to the start of the imports. line_idx = 0 while line_idx < len(block_lines) and "_import_structure" not in block_lines[line_idx]: # Skip dummy import blocks if "import dummy" in block_lines[line_idx]: line_idx = len(block_lines) else: line_idx += 1 if line_idx >= len(block_lines): continue # Ignore beginning and last line: they don't contain anything. internal_block_code = "\n".join(block_lines[line_idx:-1]) indent = get_indent(block_lines[1]) # Slit the internal block into blocks of indent level 1. internal_blocks = split_code_in_indented_blocks(internal_block_code, indent_level=indent) # We have two categories of import key: list or _import_structure[key].append/extend pattern = _re_direct_key if "_import_structure = {" in block_lines[0] else _re_indirect_key # Grab the keys, but there is a trap: some lines are empty or just comments. keys = [(pattern.search(b).groups()[0] if pattern.search(b) is not None else None) for b in internal_blocks] # We only sort the lines with a key. 
keys_to_sort = [(i, key) for i, key in enumerate(keys) if key is not None] sorted_indices = [x[0] for x in sorted(keys_to_sort, key=lambda x: x[1])] # We reorder the blocks by leaving empty lines/comments as they were and reorder the rest. count = 0 reordered_blocks = [] for i in range(len(internal_blocks)): if keys[i] is None: reordered_blocks.append(internal_blocks[i]) else: block = sort_objects_in_import(internal_blocks[sorted_indices[count]]) reordered_blocks.append(block) count += 1 # And we put our main block back together with its first and last line. main_blocks[block_idx] = "\n".join(block_lines[:line_idx] + reordered_blocks + [block_lines[-1]]) if code != "\n".join(main_blocks): if check_only: return True else: print(f"Overwriting {file}.") with open(file, "w", encoding="utf-8") as f: f.write("\n".join(main_blocks)) def sort_imports_in_all_inits(check_only=True): """ Sort the imports defined in the `_import_structure` of all inits in the repo. Args: check_only (`bool`, *optional*, defaults to `True`): Whether or not to just check (and not auto-fix) the init. """ failures = [] for root, _, files in os.walk(PATH_TO_TRANSFORMERS): if "__init__.py" in files: result = sort_imports(os.path.join(root, "__init__.py"), check_only=check_only) if result: failures.append(os.path.join(root, "__init__.py")) if len(failures) > 0: raise ValueError(f"Would overwrite {len(failures)} files, run `make style`.") if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--check_only", action="store_true", help="Whether to only check or fix style.") args = parser.parse_args() sort_imports_in_all_inits(check_only=args.check_only)
0
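To illustrate the ordering rule implemented by `sort_objects` above (uppercase constants first, CamelCase classes second, lowercase functions last, each group sorted while ignoring case and underscores), here is a small hedged sketch; the import path is an assumption:

```python
import sys

sys.path.insert(0, "utils")  # assumption: executed from the repository root
from custom_init_isort import sort_objects

names = ["load_tf_weights", "BertModel", "BERT_PRETRAINED_MODEL_ARCHIVE_LIST", "BertConfig", "logging"]
print(sort_objects(names))
# ['BERT_PRETRAINED_MODEL_ARCHIVE_LIST', 'BertConfig', 'BertModel', 'load_tf_weights', 'logging']
```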
mavonic_private_repos/transformers
mavonic_private_repos/transformers/utils/models_to_deprecate.py
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Script to find a candidate list of models to deprecate based on the number of downloads and the date of the last commit. """ import argparse import glob import json import os from collections import defaultdict from datetime import datetime, timezone from pathlib import Path from git import Repo from huggingface_hub import HfApi api = HfApi() PATH_TO_REPO = Path(__file__).parent.parent.resolve() repo = Repo(PATH_TO_REPO) class HubModelLister: """ Utility for getting models from the hub based on tags. Handles errors without crashing the script. """ def __init__(self, tags): self.tags = tags self.model_list = api.list_models(tags=tags) def __iter__(self): try: yield from self.model_list except Exception as e: print(f"Error: {e}") return def _extract_commit_hash(commits): for commit in commits: if commit.startswith("commit "): return commit.split(" ")[1] return "" def get_list_of_repo_model_paths(models_dir): # Get list of all models in the library models = glob.glob(os.path.join(models_dir, "*/modeling_*.py")) # Remove flax and tf models models = [model for model in models if "_flax_" not in model] models = [model for model in models if "_tf_" not in model] # Get list of all deprecated models in the library deprecated_models = glob.glob(os.path.join(models_dir, "deprecated", "*")) # For each deprecated model, remove the deprecated models from the list of all models as well as the symlink path for deprecated_model in deprecated_models: deprecated_model_name = "/" + deprecated_model.split("/")[-1] + "/" models = [model for model in models if deprecated_model_name not in model] # Remove deprecated models models = [model for model in models if "/deprecated" not in model] # Remove auto models = [model for model in models if "/auto/" not in model] return models def get_list_of_models_to_deprecate( thresh_num_downloads=5_000, thresh_date=None, use_cache=False, save_model_info=False, max_num_models=-1, ): if thresh_date is None: thresh_date = datetime.now(timezone.utc).replace(year=datetime.now(timezone.utc).year - 1) else: thresh_date = datetime.strptime(thresh_date, "%Y-%m-%d").replace(tzinfo=timezone.utc) models_dir = PATH_TO_REPO / "src/transformers/models" model_paths = get_list_of_repo_model_paths(models_dir=models_dir) if use_cache and os.path.exists("models_info.json"): with open("models_info.json", "r") as f: models_info = json.load(f) # Convert datetimes back to datetime objects for model, info in models_info.items(): info["first_commit_datetime"] = datetime.fromisoformat(info["first_commit_datetime"]) else: # Build a dictionary of model info: first commit datetime, commit hash, model path models_info = defaultdict(dict) for model_path in model_paths: model = model_path.split("/")[-2] if model in models_info: continue commits = repo.git.log("--diff-filter=A", "--", model_path).split("\n") commit_hash = _extract_commit_hash(commits) commit_obj = repo.commit(commit_hash) committed_datetime = 
commit_obj.committed_datetime models_info[model]["commit_hash"] = commit_hash models_info[model]["first_commit_datetime"] = committed_datetime models_info[model]["model_path"] = model_path models_info[model]["downloads"] = 0 # Some tags on the hub are formatted differently than in the library tags = [model] if "_" in model: tags.append(model.replace("_", "-")) models_info[model]["tags"] = tags # Filter out models which were added less than a year ago models_info = { model: info for model, info in models_info.items() if info["first_commit_datetime"] < thresh_date } # We make successive calls to the hub, filtering based on the model tags n_seen = 0 for model, model_info in models_info.items(): for model_tag in model_info["tags"]: model_list = HubModelLister(tags=model_tag) for i, hub_model in enumerate(model_list): n_seen += 1 if i % 100 == 0: print(f"Processing model {i} for tag {model_tag}") if max_num_models != -1 and i > n_seen: break if hub_model.private: continue model_info["downloads"] += hub_model.downloads if save_model_info and not (use_cache and os.path.exists("models_info.json")): # Make datetimes serializable for model, info in models_info.items(): info["first_commit_datetime"] = info["first_commit_datetime"].isoformat() with open("models_info.json", "w") as f: json.dump(models_info, f, indent=4) print("\nModels to deprecate:") n_models_to_deprecate = 0 models_to_deprecate = {} for model, info in models_info.items(): n_downloads = info["downloads"] if n_downloads < thresh_num_downloads: n_models_to_deprecate += 1 models_to_deprecate[model] = info print(f"\nModel: {model}") print(f"Downloads: {n_downloads}") print(f"Date: {info['first_commit_datetime']}") print(f"\nNumber of models to deprecate: {n_models_to_deprecate}") print("Before deprecating make sure to verify the models, including if they're used as a module in other models.") if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--save_model_info", action="store_true", help="Save the retrieved model info to a json file.") parser.add_argument( "--use_cache", action="store_true", help="Use the cached model info instead of calling the hub." ) parser.add_argument( "--thresh_num_downloads", type=int, default=5_000, help="Threshold number of downloads below which a model should be deprecated. Default is 5,000.", ) parser.add_argument( "--thresh_date", type=str, default=None, help="Date to consider the first commit from. Format: YYYY-MM-DD. If unset, defaults to one year ago from today.", ) parser.add_argument( "--max_num_models", type=int, default=-1, help="Maximum number of models to consider from the hub. -1 means all models. Useful for testing.", ) args = parser.parse_args() models_to_deprecate = get_list_of_models_to_deprecate( thresh_num_downloads=args.thresh_num_downloads, thresh_date=args.thresh_date, use_cache=args.use_cache, save_model_info=args.save_model_info, max_num_models=args.max_num_models, )
0
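The deprecation scan can also be driven programmatically instead of through the `argparse` entry point. A hedged sketch with illustrative argument values, assuming it is run from a git checkout of the repository (the module instantiates `HfApi` and `Repo` at import time):

```python
import sys

sys.path.insert(0, "utils")  # assumption: executed from the repository root
from models_to_deprecate import get_list_of_models_to_deprecate

# Reuses a previously saved models_info.json if present (use_cache=True);
# the thresholds below are illustrative, not recommendations.
get_list_of_models_to_deprecate(
    thresh_num_downloads=5_000,   # flag models with fewer than 5,000 downloads
    thresh_date="2023-06-01",     # only consider models first committed before this date
    use_cache=True,
    save_model_info=False,
    max_num_models=-1,            # -1 means no cap on the hub models inspected
)
```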
mavonic_private_repos/transformers/utils
mavonic_private_repos/transformers/utils/test_module/custom_pipeline.py
import numpy as np from transformers import Pipeline def softmax(outputs): maxes = np.max(outputs, axis=-1, keepdims=True) shifted_exp = np.exp(outputs - maxes) return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True) class PairClassificationPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "second_text" in kwargs: preprocess_kwargs["second_text"] = kwargs["second_text"] return preprocess_kwargs, {}, {} def preprocess(self, text, second_text=None): return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework) def _forward(self, model_inputs): return self.model(**model_inputs) def postprocess(self, model_outputs): logits = model_outputs.logits[0].numpy() probabilities = softmax(logits) best_class = np.argmax(probabilities) label = self.model.config.id2label[best_class] score = probabilities[best_class].item() logits = logits.tolist() return {"label": label, "score": score, "logits": logits}
0
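To use `PairClassificationPipeline`, it first has to be registered under a task name. A hedged sketch follows; the task name and checkpoint are placeholders chosen for illustration, not anything defined by this file:

```python
import sys

sys.path.insert(0, "utils")  # assumption: executed from the repository root
from transformers import AutoModelForSequenceClassification, pipeline
from transformers.pipelines import PIPELINE_REGISTRY

from test_module.custom_pipeline import PairClassificationPipeline

# Register the custom pipeline under an arbitrary task name.
PIPELINE_REGISTRY.register_pipeline(
    "pair-classification",
    pipeline_class=PairClassificationPipeline,
    pt_model=AutoModelForSequenceClassification,
)

# Any sequence-classification checkpoint works; this one is only a placeholder.
classifier = pipeline("pair-classification", model="bert-base-uncased")
print(classifier("I like pizza", second_text="Pizza is great"))
# -> {'label': ..., 'score': ..., 'logits': [...]}
```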
mavonic_private_repos/transformers/utils
mavonic_private_repos/transformers/utils/test_module/custom_feature_extraction.py
from transformers import Wav2Vec2FeatureExtractor class CustomFeatureExtractor(Wav2Vec2FeatureExtractor): pass
0
mavonic_private_repos/transformers/utils
mavonic_private_repos/transformers/utils/test_module/custom_processing.py
from transformers import ProcessorMixin class CustomProcessor(ProcessorMixin): feature_extractor_class = "AutoFeatureExtractor" tokenizer_class = "AutoTokenizer"
0
mavonic_private_repos/transformers/utils
mavonic_private_repos/transformers/utils/test_module/custom_modeling.py
import torch from transformers import PreTrainedModel from .custom_configuration import CustomConfig, NoSuperInitConfig class CustomModel(PreTrainedModel): config_class = CustomConfig def __init__(self, config): super().__init__(config) self.linear = torch.nn.Linear(config.hidden_size, config.hidden_size) def forward(self, x): return self.linear(x) def _init_weights(self, module): pass class NoSuperInitModel(PreTrainedModel): config_class = NoSuperInitConfig def __init__(self, config): super().__init__(config) self.linear = torch.nn.Linear(config.attribute, config.attribute) def forward(self, x): return self.linear(x) def _init_weights(self, module): pass
0
mavonic_private_repos/transformers/utils
mavonic_private_repos/transformers/utils/test_module/custom_tokenization_fast.py
from transformers import BertTokenizerFast from .custom_tokenization import CustomTokenizer class CustomTokenizerFast(BertTokenizerFast): slow_tokenizer_class = CustomTokenizer
0
mavonic_private_repos/transformers/utils
mavonic_private_repos/transformers/utils/test_module/custom_image_processing.py
from transformers import CLIPImageProcessor class CustomImageProcessor(CLIPImageProcessor): pass
0
mavonic_private_repos/transformers/utils
mavonic_private_repos/transformers/utils/test_module/custom_tokenization.py
from transformers import BertTokenizer class CustomTokenizer(BertTokenizer): pass
0
mavonic_private_repos/transformers/utils
mavonic_private_repos/transformers/utils/test_module/custom_configuration.py
from transformers import PretrainedConfig class CustomConfig(PretrainedConfig): model_type = "custom" def __init__(self, attribute=1, **kwargs): self.attribute = attribute super().__init__(**kwargs) class NoSuperInitConfig(PretrainedConfig): model_type = "custom" def __init__(self, attribute=1, **kwargs): self.attribute = attribute
0
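The `Custom*` classes in `utils/test_module` exist to exercise custom/auto-class registration in the test suite. A minimal sketch wiring the configuration and model together, assuming `utils/` is on `sys.path` so the files import as `test_module.*`:

```python
import sys

sys.path.insert(0, "utils")  # assumption: executed from the repository root
import torch
from transformers import AutoConfig, AutoModel

from test_module.custom_configuration import CustomConfig
from test_module.custom_modeling import CustomModel

# Register the custom config/model pair with the auto classes.
AutoConfig.register("custom", CustomConfig)
AutoModel.register(CustomConfig, CustomModel)

# Extra kwargs such as hidden_size are stored on the config by PretrainedConfig.
config = CustomConfig(attribute=4, hidden_size=32)
model = AutoModel.from_config(config)
print(model(torch.randn(2, 32)).shape)  # torch.Size([2, 32])
```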
mavonic_private_repos/transformers/utils
mavonic_private_repos/transformers/utils/tf_ops/onnx.json
{ "opsets": { "1": [ "Abs", "Add", "AddV2", "ArgMax", "ArgMin", "AvgPool", "AvgPool3D", "BatchMatMul", "BatchMatMulV2", "BatchToSpaceND", "BiasAdd", "BiasAddV1", "Cast", "Ceil", "CheckNumerics", "ComplexAbs", "Concat", "ConcatV2", "Const", "ConstV2", "Conv1D", "Conv2D", "Conv2DBackpropInput", "Conv3D", "Conv3DBackpropInputV2", "DepthToSpace", "DepthwiseConv2d", "DepthwiseConv2dNative", "Div", "Dropout", "Elu", "Equal", "Erf", "Exp", "ExpandDims", "Flatten", "Floor", "Gather", "GatherNd", "GatherV2", "Greater", "Identity", "IdentityN", "If", "LRN", "LSTMBlockCell", "LeakyRelu", "Less", "Log", "LogSoftmax", "LogicalAnd", "LogicalNot", "LogicalOr", "LookupTableSizeV2", "MatMul", "Max", "MaxPool", "MaxPool3D", "MaxPoolV2", "Maximum", "Mean", "Min", "Minimum", "MirrorPad", "Mul", "Neg", "NoOp", "NotEqual", "OneHot", "Pack", "Pad", "PadV2", "Placeholder", "PlaceholderV2", "PlaceholderWithDefault", "Pow", "Prod", "RFFT", "RandomNormal", "RandomNormalLike", "RandomUniform", "RandomUniformLike", "RealDiv", "Reciprocal", "Relu", "Relu6", "Reshape", "Rsqrt", "Selu", "Shape", "Sigmoid", "Sign", "Size", "Slice", "Softmax", "Softplus", "Softsign", "SpaceToBatchND", "SpaceToDepth", "Split", "SplitV", "Sqrt", "Square", "SquaredDifference", "Squeeze", "StatelessIf", "StopGradient", "StridedSlice", "StringJoin", "Sub", "Sum", "Tanh", "Tile", "TopKV2", "Transpose", "TruncateDiv", "Unpack", "ZerosLike" ], "2": [], "3": [], "4": [], "5": [], "6": [ "AddN", "All", "Any", "FloorDiv", "FusedBatchNorm", "FusedBatchNormV2", "FusedBatchNormV3" ], "7": [ "Acos", "Asin", "Atan", "Cos", "Fill", "FloorMod", "GreaterEqual", "LessEqual", "Loop", "MatrixBandPart", "Multinomial", "Range", "ResizeBilinear", "ResizeNearestNeighbor", "Scan", "Select", "SelectV2", "Sin", "SoftmaxCrossEntropyWithLogits", "SparseSoftmaxCrossEntropyWithLogits", "StatelessWhile", "Tan", "TensorListFromTensor", "TensorListGetItem", "TensorListLength", "TensorListReserve", "TensorListResize", "TensorListSetItem", "TensorListStack", "While" ], "8": [ "BroadcastTo", "ClipByValue", "FIFOQueueV2", "HashTableV2", "IteratorGetNext", "IteratorV2", "LookupTableFindV2", "MaxPoolWithArgmax", "QueueDequeueManyV2", "QueueDequeueUpToV2", "QueueDequeueV2", "ReverseSequence" ], "9": [ "SegmentMax", "SegmentMean", "SegmentMin", "SegmentProd", "SegmentSum", "Sinh", "SparseSegmentMean", "SparseSegmentMeanWithNumSegments", "SparseSegmentSqrtN", "SparseSegmentSqrtNWithNumSegments", "SparseSegmentSum", "SparseSegmentSumWithNumSegments", "UnsortedSegmentMax", "UnsortedSegmentMin", "UnsortedSegmentProd", "UnsortedSegmentSum", "Where" ], "10": [ "CropAndResize", "CudnnRNN", "DynamicStitch", "FakeQuantWithMinMaxArgs", "IsFinite", "IsInf", "NonMaxSuppressionV2", "NonMaxSuppressionV3", "NonMaxSuppressionV4", "NonMaxSuppressionV5", "ParallelDynamicStitch", "ReverseV2", "Roll" ], "11": [ "Bincount", "Cumsum", "InvertPermutation", "LeftShift", "MatrixDeterminant", "MatrixDiagPart", "MatrixDiagPartV2", "MatrixDiagPartV3", "RaggedRange", "RightShift", "Round", "ScatterNd", "SparseFillEmptyRows", "SparseReshape", "SparseToDense", "TensorScatterUpdate", "Unique" ], "12": [ "Einsum", "MatrixDiag", "MatrixDiagV2", "MatrixDiagV3", "MatrixSetDiagV3", "SquaredDistance" ], "13": [] } }
0
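The JSON above lists, per ONNX opset, the TensorFlow ops whose export support was introduced at that opset. A hedged sketch of querying it (the cumulative reading of the table is an assumption):

```python
import json

with open("utils/tf_ops/onnx.json") as f:
    opsets = json.load(f)["opsets"]

# Ops first listed at opset 9.
print(opsets["9"][:3])  # ['SegmentMax', 'SegmentMean', 'SegmentMin']

# Presumably an op is available at opset N if it appears at any opset <= N.
supported_at_10 = {op for key, ops in opsets.items() if int(key) <= 10 for op in ops}
print("Sinh" in supported_at_10)  # True (introduced at opset 9)
```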
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/quality.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y time git ENV VIRTUAL_ENV=/usr/local RUN pip install uv && uv venv RUN uv pip install --no-cache-dir -U pip setuptools GitPython transformers "ruff==0.1.5" urllib3 RUN apt-get install -y jq curl && apt-get clean && rm -rf /var/lib/apt/lists/*
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/pipeline-tf.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git cmake g++ ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools RUN pip install --no-cache-dir "transformers[sklearn,tf-cpu,testing,sentencepiece,tf-speech,vision]" RUN uv pip install --no-cache-dir "protobuf==3.20.3" tensorflow_probability RUN apt-get clean && rm -rf /var/lib/apt/lists/*
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/examples-tf.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git RUN apt-get install -y g++ cmake ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv RUN uv pip install --no-cache-dir -U pip setuptools albumentations seqeval RUN pip install --upgrade --no-cache-dir "transformers[tf-cpu,sklearn,testing,sentencepiece,tf-speech,vision]" RUN uv pip install --no-cache-dir "protobuf==3.20.3" RUN pip uninstall -y transformers RUN apt-get clean && rm -rf /var/lib/apt/lists/*
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/exotic-models.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 ARG REF=main USER root RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git libgl1-mesa-glx libgl1 g++ tesseract-ocr ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu RUN uv pip install --no-cache-dir --no-deps timm accelerate RUN pip install -U --upgrade-strategy eager --no-cache-dir pytesseract python-Levenshtein opencv-python nltk # RUN uv pip install --no-cache-dir natten==0.15.1+torch210cpu -f https://shi-labs.com/natten/wheels RUN pip install --no-cache-dir "git+https://github.com/huggingface/transformers.git@${REF}#egg=transformers[testing, vision]" 'scikit-learn' 'torch-stft' 'nose' 'dataset' # RUN git clone https://github.com/facebookresearch/detectron2.git # RUN python3 -m pip install --no-cache-dir -e detectron2 RUN pip install 'git+https://github.com/facebookresearch/detectron2.git@92ae9f0b92aba5867824b4f12aa06a22a60a45d3' RUN pip uninstall -y transformers RUN apt-get clean && rm -rf /var/lib/apt/lists/*
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/torch-light.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git g++ cmake pkg-config openssh-client git git-lfs ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu RUN uv pip install --no-deps timm accelerate --extra-index-url https://download.pytorch.org/whl/cpu RUN uv pip install --no-cache-dir librosa "transformers[sklearn,sentencepiece,vision,testing]" RUN pip uninstall -y transformers
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/consistency.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y time git pkg-config make git-lfs ENV VIRTUAL_ENV=/usr/local RUN pip install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools GitPython RUN uv pip install --no-cache-dir --upgrade 'torch' --index-url https://download.pytorch.org/whl/cpu RUN uv pip install --no-cache-dir tensorflow-cpu tf-keras RUN uv pip install --no-cache-dir "transformers[flax,quality,vision,testing]" RUN git lfs install RUN pip uninstall -y transformers RUN apt-get clean && rm -rf /var/lib/apt/lists/* && apt-get autoremove && apt-get autoclean
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/jax-light.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git g++ cmake ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools RUN pip install --no-cache-dir "scipy<1.13" "transformers[flax,testing,sentencepiece,flax-speech,vision]" RUN pip uninstall -y transformers RUN apt-get clean && rm -rf /var/lib/apt/lists/* && apt-get autoremove && apt-get autoclean
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/pipeline-torch.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git pkg-config openssh-client git ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu RUN uv pip install --no-deps timm accelerate --extra-index-url https://download.pytorch.org/whl/cpu RUN uv pip install --no-cache-dir librosa "transformers[sklearn,sentencepiece,vision,testing]" RUN pip uninstall -y transformers
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/torch-tf-light.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 ARG REF=main RUN echo ${REF} USER root RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git g++ cmake pkg-config openssh-client git git-lfs ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools RUN uv pip install --no-cache-dir --no-deps accelerate --extra-index-url https://download.pytorch.org/whl/cpu RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu RUN git lfs install RUN uv pip install --no-cache-dir pypi-kenlm RUN pip install --no-cache-dir "git+https://github.com/huggingface/transformers.git@${REF}#egg=transformers[tf-cpu,sklearn,sentencepiece,vision,testing]" RUN uv pip install --no-cache-dir "protobuf==3.20.3" librosa RUN pip uninstall -y transformers RUN apt-get clean && rm -rf /var/lib/apt/lists/* && apt-get autoremove && apt-get autoclean
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/torch-jax-light.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git g++ cmake pkg-config openssh-client git ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools RUN uv pip install --no-deps accelerate RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu RUN pip install --no-cache-dir "scipy<1.13" "transformers[flax, audio, sklearn,sentencepiece,vision,testing]" # RUN pip install --no-cache-dir "scipy<1.13" "transformers[flax,testing,sentencepiece,flax-speech,vision]" RUN pip uninstall -y transformers RUN apt-get clean && rm -rf /var/lib/apt/lists/* && apt-get autoremove && apt-get autoclean
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/custom-tokenizers.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git cmake wget xz-utils build-essential g++5 libprotobuf-dev protobuf-compiler ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools RUN wget https://github.com/ku-nlp/jumanpp/releases/download/v2.0.0-rc3/jumanpp-2.0.0-rc3.tar.xz RUN tar xvf jumanpp-2.0.0-rc3.tar.xz RUN mkdir jumanpp-2.0.0-rc3/bld WORKDIR ./jumanpp-2.0.0-rc3/bld RUN wget -LO catch.hpp https://github.com/catchorg/Catch2/releases/download/v2.13.8/catch.hpp RUN mv catch.hpp ../libs/ RUN cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local RUN make install -j 10 RUN uv pip install --no-cache --upgrade 'torch' --index-url https://download.pytorch.org/whl/cpu RUN uv pip install --no-cache-dir --no-deps accelerate --extra-index-url https://download.pytorch.org/whl/cpu RUN uv pip install --no-cache-dir "transformers[ja,testing,sentencepiece,jieba,spacy,ftfy,rjieba]" unidic unidic-lite # spacy is not used, so it is not tested; it causes failures. TODO: fix later RUN python3 -m unidic download RUN pip uninstall -y transformers RUN apt-get clean && rm -rf /var/lib/apt/lists/* RUN apt remove -y g++ cmake xz-utils libprotobuf-dev protobuf-compiler
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/examples-torch.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git g++ cmake pkg-config openssh-client git ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu RUN uv pip install --no-deps timm accelerate --extra-index-url https://download.pytorch.org/whl/cpu RUN uv pip install --no-cache-dir librosa "transformers[sklearn,sentencepiece,vision,testing]" seqeval albumentations jiwer RUN pip uninstall -y transformers RUN apt-get clean && rm -rf /var/lib/apt/lists/*
0
mavonic_private_repos/transformers
mavonic_private_repos/transformers/docker/tf-light.dockerfile
FROM python:3.10-slim ENV PYTHONDONTWRITEBYTECODE=1 USER root RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git g++ pkg-config openssh-client git RUN apt-get install -y cmake ENV VIRTUAL_ENV=/usr/local RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools RUN pip install --upgrade --no-cache-dir "transformers[tf-cpu,sklearn,testing,sentencepiece,tf-speech,vision]" RUN uv pip install --no-cache-dir "protobuf==3.20.3" RUN pip uninstall -y transformers RUN apt-get clean && rm -rf /var/lib/apt/lists/* && apt-get autoremove && apt-get autoclean
0
mavonic_private_repos/transformers/docker
mavonic_private_repos/transformers/docker/transformers-pytorch-tpu/docker-entrypoint.sh
#!/bin/bash source ~/.bashrc echo "running docker-entrypoint.sh" conda activate container echo $KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS echo "printed TPU info" export XRT_TPU_CONFIG="tpu_worker;0;${KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS:7}" exec "$@"
0
mavonic_private_repos/transformers/docker
mavonic_private_repos/transformers/docker/transformers-pytorch-tpu/dataset.yaml
apiVersion: v1 kind: PersistentVolume metadata: name: huggingface-cluster-disk spec: storageClassName: "" capacity: storage: 500Gi accessModes: - ReadOnlyMany claimRef: namespace: default name: huggingface-cluster-disk-claim gcePersistentDisk: pdName: huggingface-cluster-disk fsType: ext4 readOnly: true --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: huggingface-cluster-disk-claim spec: # Specify "" as the storageClassName so it matches the PersistentVolume's StorageClass. # A nil storageClassName value uses the default StorageClass. For details, see # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 storageClassName: "" accessModes: - ReadOnlyMany resources: requests: storage: 1Ki
0
mavonic_private_repos/transformers/docker
mavonic_private_repos/transformers/docker/transformers-pytorch-tpu/bert-base-cased.jsonnet
local base = import 'templates/base.libsonnet'; local tpus = import 'templates/tpus.libsonnet'; local utils = import "templates/utils.libsonnet"; local volumes = import "templates/volumes.libsonnet"; local bertBaseCased = base.BaseTest { frameworkPrefix: "hf", modelName: "bert-base-cased", mode: "example", configMaps: [], timeout: 3600, # 1 hour, in seconds image: std.extVar('image'), imageTag: std.extVar('image-tag'), tpuSettings+: { softwareVersion: "pytorch-nightly", }, accelerator: tpus.v3_8, volumeMap+: { datasets: volumes.PersistentVolumeSpec { name: "huggingface-cluster-disk", mountPath: "/datasets", }, }, command: utils.scriptCommand( ||| python -m pytest -s transformers/examples/pytorch/test_xla_examples.py -v test_exit_code=$? echo "\nFinished running commands.\n" test $test_exit_code -eq 0 ||| ), }; bertBaseCased.oneshotJob
0
mavonic_private_repos/transformers/docker
mavonic_private_repos/transformers/docker/transformers-pytorch-tpu/Dockerfile
FROM google/cloud-sdk:slim # Build args. ARG GITHUB_REF=refs/heads/main # TODO: This Dockerfile installs pytorch/xla 3.6 wheels. There are also 3.7 # wheels available; see below. ENV PYTHON_VERSION=3.6 RUN apt-get update && apt-get install -y --no-install-recommends \ build-essential \ cmake \ git \ curl \ ca-certificates # Install conda and python. # NOTE new Conda does not forward the exit status... https://github.com/conda/conda/issues/8385 RUN curl -o ~/miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-4.7.12-Linux-x86_64.sh && \ chmod +x ~/miniconda.sh && \ ~/miniconda.sh -b && \ rm ~/miniconda.sh ENV PATH=/root/miniconda3/bin:$PATH RUN conda create -y --name container python=$PYTHON_VERSION # Run the rest of commands within the new conda env. # Use absolute path to appease Codefactor. SHELL ["/root/miniconda3/bin/conda", "run", "-n", "container", "/bin/bash", "-c"] RUN conda install -y python=$PYTHON_VERSION mkl RUN pip uninstall -y torch && \ # Python 3.7 wheels are available. Replace cp36-cp36m with cp37-cp37m gsutil cp 'gs://tpu-pytorch/wheels/torch-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl' . && \ gsutil cp 'gs://tpu-pytorch/wheels/torch_xla-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl' . && \ gsutil cp 'gs://tpu-pytorch/wheels/torchvision-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl' . && \ pip install 'torch-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl' && \ pip install 'torch_xla-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl' && \ pip install 'torchvision-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl' && \ rm 'torch-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl' && \ rm 'torch_xla-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl' && \ rm 'torchvision-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl' && \ apt-get install -y libomp5 ENV LD_LIBRARY_PATH=root/miniconda3/envs/container/lib # Install huggingface/transformers at the current PR, plus dependencies. RUN git clone https://github.com/huggingface/transformers.git && \ cd transformers && \ git fetch origin $GITHUB_REF:CI && \ git checkout CI && \ cd .. && \ pip install ./transformers && \ pip install -r ./transformers/examples/pytorch/_test_requirements.txt && \ pip install pytest RUN python -c "import torch_xla; print(torch_xla.__version__)" RUN python -c "import transformers as trf; print(trf.__version__)" RUN conda init bash COPY docker-entrypoint.sh /usr/local/bin/ RUN chmod +x /usr/local/bin/docker-entrypoint.sh ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"] CMD ["bash"]
0
mavonic_private_repos/transformers/docker
mavonic_private_repos/transformers/docker/transformers-tensorflow-gpu/Dockerfile
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04 LABEL maintainer="Hugging Face" ARG DEBIAN_FRONTEND=noninteractive RUN apt update RUN apt install -y git libsndfile1-dev tesseract-ocr espeak-ng python3 python3-pip ffmpeg RUN python3 -m pip install --no-cache-dir --upgrade pip ARG REF=main RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF RUN python3 -m pip install --no-cache-dir -e ./transformers[dev-tensorflow,testing] # If set to nothing, will install the latest version ARG TENSORFLOW='2.13' RUN [ ${#TENSORFLOW} -gt 0 ] && VERSION='tensorflow=='$TENSORFLOW'.*' || VERSION='tensorflow'; python3 -m pip install --no-cache-dir -U $VERSION RUN python3 -m pip uninstall -y torch flax RUN python3 -m pip install -U "itsdangerous<2.1.0" RUN python3 -m pip install --no-cache-dir -U tensorflow_probability # When installing in editable mode, `transformers` is not recognized as a package. # this line must be added in order for python to be aware of transformers. RUN cd transformers && python3 setup.py develop
0
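A build sketch for the TensorFlow GPU image above, assuming it is built from `docker/transformers-tensorflow-gpu/` in a checkout of the repo; the tag is illustrative. Passing an empty `TENSORFLOW` build arg installs the latest release instead of the pinned one.

```bash
# Pin a TensorFlow release explicitly; drop --build-arg TENSORFLOW=... to take the default.
docker build -t transformers-tensorflow-gpu \
  --build-arg REF=main \
  --build-arg TENSORFLOW=2.13 \
  ./docker/transformers-tensorflow-gpu
```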
mavonic_private_repos/transformers/docker
mavonic_private_repos/transformers/docker/transformers-pytorch-gpu/Dockerfile
FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04
LABEL maintainer="Hugging Face"

ARG DEBIAN_FRONTEND=noninteractive

RUN apt update
RUN apt install -y git libsndfile1-dev tesseract-ocr espeak-ng python3 python3-pip ffmpeg
RUN python3 -m pip install --no-cache-dir --upgrade pip

ARG REF=main
RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF

# If set to nothing, will install the latest version
ARG PYTORCH='2.1.1'
ARG TORCH_VISION=''
ARG TORCH_AUDIO=''
# Example: `cu102`, `cu113`, etc.
ARG CUDA='cu121'

RUN [ ${#PYTORCH} -gt 0 ] && VERSION='torch=='$PYTORCH'.*' || VERSION='torch'; python3 -m pip install --no-cache-dir -U $VERSION --extra-index-url https://download.pytorch.org/whl/$CUDA
RUN [ ${#TORCH_VISION} -gt 0 ] && VERSION='torchvision=='$TORCH_VISION'.*' || VERSION='torchvision'; python3 -m pip install --no-cache-dir -U $VERSION --extra-index-url https://download.pytorch.org/whl/$CUDA
RUN [ ${#TORCH_AUDIO} -gt 0 ] && VERSION='torchaudio=='$TORCH_AUDIO'.*' || VERSION='torchaudio'; python3 -m pip install --no-cache-dir -U $VERSION --extra-index-url https://download.pytorch.org/whl/$CUDA

RUN python3 -m pip install --no-cache-dir -e ./transformers[dev-torch,testing,video]

RUN python3 -m pip uninstall -y tensorflow flax

RUN python3 -m pip install --no-cache-dir git+https://github.com/facebookresearch/detectron2.git pytesseract
RUN python3 -m pip install -U "itsdangerous<2.1.0"

# When installing in editable mode, `transformers` is not recognized as a package.
# this line must be added in order for python to be aware of transformers.
RUN cd transformers && python3 setup.py develop
0
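A sketch of how the PyTorch GPU image above might be built and smoke-tested, assuming a checkout of the repo and an NVIDIA runtime on the host; the image tag and the chosen versions are illustrative, not taken from the file.

```bash
# Choose the torch build and the matching CUDA wheel index at build time.
docker build -t transformers-pytorch-gpu \
  --build-arg PYTORCH=2.1.1 --build-arg CUDA=cu121 \
  ./docker/transformers-pytorch-gpu

# Quick check that the container actually sees the GPU.
docker run --rm --gpus all transformers-pytorch-gpu \
  python3 -c "import torch; print(torch.cuda.is_available())"
```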
mavonic_private_repos/transformers/docker
mavonic_private_repos/transformers/docker/transformers-doc-builder/Dockerfile
FROM python:3.10
LABEL maintainer="Hugging Face"

RUN apt update
RUN git clone https://github.com/huggingface/transformers

RUN python3 -m pip install --no-cache-dir --upgrade pip && python3 -m pip install --no-cache-dir git+https://github.com/huggingface/doc-builder ./transformers[dev]
RUN apt-get -y update && apt-get install -y libsndfile1-dev && apt install -y tesseract-ocr

# Torch needs to be installed before deepspeed
RUN python3 -m pip install --no-cache-dir ./transformers[deepspeed]

RUN python3 -m pip install --no-cache-dir torchvision git+https://github.com/facebookresearch/detectron2.git pytesseract
RUN python3 -m pip install -U "itsdangerous<2.1.0"

# Test that the image can successfully build the docs before publishing the image
RUN doc-builder build transformers transformers/docs/source/en --build_dir doc-build-dev --notebook_dir notebooks/transformers_doc --clean
RUN rm -rf doc-build-dev
0
mavonic_private_repos/transformers/docker
mavonic_private_repos/transformers/docker/transformers-pytorch-deepspeed-amd-gpu/Dockerfile
FROM rocm/dev-ubuntu-22.04:5.6
LABEL maintainer="Hugging Face"

ARG DEBIAN_FRONTEND=noninteractive
ARG PYTORCH='2.1.1'
ARG TORCH_VISION='0.16.1'
ARG TORCH_AUDIO='2.1.1'
ARG ROCM='5.6'

RUN apt update && \
    apt install -y --no-install-recommends \
    libaio-dev \
    git \
    # These are required to build deepspeed.
    python3-dev \
    python-is-python3 \
    rocrand-dev \
    rocthrust-dev \
    hipsparse-dev \
    hipblas-dev \
    rocblas-dev && \
    apt clean && \
    rm -rf /var/lib/apt/lists/*

RUN python3 -m pip install --no-cache-dir --upgrade pip ninja "pydantic<2"
RUN python3 -m pip uninstall -y apex torch torchvision torchaudio
RUN python3 -m pip install torch==$PYTORCH torchvision==$TORCH_VISION torchaudio==$TORCH_AUDIO --index-url https://download.pytorch.org/whl/rocm$ROCM --no-cache-dir

# Pre-build DeepSpeed, so it's ready for testing (to avoid timeout)
RUN DS_BUILD_CPU_ADAM=1 DS_BUILD_FUSED_ADAM=1 python3 -m pip install deepspeed --global-option="build_ext" --global-option="-j8" --no-cache-dir -v --disable-pip-version-check 2>&1

ARG REF=main
WORKDIR /

# Invalidate docker cache from here if new commit is available.
ADD https://api.github.com/repos/huggingface/transformers/git/refs/heads/main version.json
RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF

RUN python3 -m pip install --no-cache-dir ./transformers[accelerate,testing,sentencepiece,sklearn]

# When installing in editable mode, `transformers` is not recognized as a package.
# this line must be added in order for python to be aware of transformers.
RUN cd transformers && python3 setup.py develop

RUN python3 -c "from deepspeed.launcher.runner import main"

# Remove nvml as it is not compatible with ROCm
RUN python3 -m pip uninstall py3nvml pynvml -y
0
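A run sketch for the DeepSpeed AMD image above. The device and IPC flags follow the usual guidance for ROCm containers (exposing the kernel driver and render nodes) and are not taken from this repo; the image tag is illustrative.

```bash
# Pass the ROCm devices through and verify that torch and DeepSpeed import on the GPU.
docker run --rm -it \
  --device=/dev/kfd --device=/dev/dri --group-add video \
  --ipc=host --shm-size 8G \
  transformers-pytorch-deepspeed-amd-gpu \
  python3 -c "import torch, deepspeed; print(torch.cuda.is_available(), deepspeed.__version__)"
```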
mavonic_private_repos/transformers/docker
mavonic_private_repos/transformers/docker/transformers-pytorch-amd-gpu/Dockerfile
FROM rocm/dev-ubuntu-20.04:5.6
# rocm/pytorch has no version with 2.1.0
LABEL maintainer="Hugging Face"

ARG DEBIAN_FRONTEND=noninteractive
ARG PYTORCH='2.1.0'
ARG TORCH_VISION='0.16.0'
ARG TORCH_AUDIO='2.1.0'
ARG ROCM='5.6'

RUN apt update && \
    apt install -y --no-install-recommends git libsndfile1-dev tesseract-ocr espeak-ng python3 python3-dev python3-pip ffmpeg && \
    apt clean && \
    rm -rf /var/lib/apt/lists/*

RUN python3 -m pip install --no-cache-dir --upgrade pip

RUN python3 -m pip install torch==$PYTORCH torchvision==$TORCH_VISION torchaudio==$TORCH_AUDIO --index-url https://download.pytorch.org/whl/rocm$ROCM

RUN python3 -m pip install --no-cache-dir --upgrade pip setuptools ninja git+https://github.com/facebookresearch/detectron2.git pytesseract "itsdangerous<2.1.0"

ARG REF=main
WORKDIR /

# Invalidate docker cache from here if new commit is available.
ADD https://api.github.com/repos/huggingface/transformers/git/refs/heads/main version.json
RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF

RUN python3 -m pip install --no-cache-dir -e ./transformers[dev-torch,testing,video]

RUN python3 -m pip uninstall -y tensorflow flax

# When installing in editable mode, `transformers` is not recognized as a package.
# this line must be added in order for python to be aware of transformers.
RUN cd transformers && python3 setup.py develop

# Remove nvml as it is not compatible with ROCm
RUN python3 -m pip uninstall py3nvml pynvml -y
0
mavonic_private_repos/transformers/docker
mavonic_private_repos/transformers/docker/transformers-quantization-latest-gpu/Dockerfile
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04
LABEL maintainer="Hugging Face"

ARG DEBIAN_FRONTEND=noninteractive

# Use login shell to read variables from `~/.profile` (to pass dynamically created variables between RUN commands)
SHELL ["sh", "-lc"]

# The following `ARG` are mainly used to specify the versions explicitly & directly in this docker file, and not meant
# to be used as arguments for docker build (so far).
ARG PYTORCH='2.2.1'
# Example: `cu102`, `cu113`, etc.
ARG CUDA='cu118'

RUN apt update
RUN apt install -y git libsndfile1-dev tesseract-ocr espeak-ng python3 python3-pip ffmpeg
RUN python3 -m pip install --no-cache-dir --upgrade pip

ARG REF=main
RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF

RUN [ ${#PYTORCH} -gt 0 ] && VERSION='torch=='$PYTORCH'.*' || VERSION='torch'; echo "export VERSION='$VERSION'" >> ~/.profile
RUN echo torch=$VERSION
# `torchvision` and `torchaudio` should be installed along with `torch`, especially for nightly build.
# Currently, let's just use their latest releases (when `torch` is installed with a release version)
RUN python3 -m pip install --no-cache-dir -U $VERSION torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/$CUDA

RUN python3 -m pip install --no-cache-dir -e ./transformers[dev-torch]

RUN python3 -m pip install --no-cache-dir git+https://github.com/huggingface/accelerate@main#egg=accelerate

# needed in bnb and awq
RUN python3 -m pip install --no-cache-dir einops

# Add bitsandbytes for mixed int8 testing
RUN python3 -m pip install --no-cache-dir bitsandbytes

# Add auto-gptq for gptq quantization testing
RUN python3 -m pip install --no-cache-dir auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/

# Add optimum for gptq quantization testing
RUN python3 -m pip install --no-cache-dir git+https://github.com/huggingface/optimum@main#egg=optimum

# Add aqlm for quantization testing
RUN python3 -m pip install --no-cache-dir aqlm[gpu]==1.0.2

# Add hqq for quantization testing
RUN python3 -m pip install --no-cache-dir hqq

# Add autoawq for quantization testing
# >=v0.2.3 needed for compatibility with torch 2.2.1
RUN python3 -m pip install --no-cache-dir https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.3/autoawq-0.2.3+cu118-cp38-cp38-linux_x86_64.whl

# Add quanto for quantization testing
RUN python3 -m pip install --no-cache-dir quanto

# Add eetq for quantization testing
RUN python3 -m pip install git+https://github.com/NetEase-FuXi/EETQ.git

# When installing in editable mode, `transformers` is not recognized as a package.
# this line must be added in order for python to be aware of transformers.
RUN cd transformers && python3 setup.py develop
0
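A usage sketch for the quantization image above, assuming it has already been built and an NVIDIA runtime is available on the host; the image tag is illustrative, and the test path is an assumption based on the repo layout rather than something stated in this file.

```bash
# Run the quantization test suite inside the container, against the editable checkout baked into the image.
docker run --rm --gpus all transformers-quantization-latest-gpu \
  bash -c "cd transformers && python3 -m pytest -v tests/quantization"
```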