Model Name
stringlengths
5
122
URL
stringlengths
28
145
Crawled Text
stringlengths
1
199k
text
stringlengths
180
199k
202015004/wav2vec2-base-TLT-Shreya-trial
https://huggingface.co/202015004/wav2vec2-base-TLT-Shreya-trial
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 202015004/wav2vec2-base-TLT-Shreya-trial ### Model URL : https://huggingface.co/202015004/wav2vec2-base-TLT-Shreya-trial ### Model Description : No model card New: Create and edit this model card directly on the website!
202015004/wav2vec2-base-timit-demo-colab
https://huggingface.co/202015004/wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 202015004/wav2vec2-base-timit-demo-colab ### Model URL : https://huggingface.co/202015004/wav2vec2-base-timit-demo-colab ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
202015004/wav2vec2-base-timit-trial_by_SHREYA
https://huggingface.co/202015004/wav2vec2-base-timit-trial_by_SHREYA
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 202015004/wav2vec2-base-timit-trial_by_SHREYA ### Model URL : https://huggingface.co/202015004/wav2vec2-base-timit-trial_by_SHREYA ### Model Description : No model card New: Create and edit this model card directly on the website!
275Gameplay/test
https://huggingface.co/275Gameplay/test
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 275Gameplay/test ### Model URL : https://huggingface.co/275Gameplay/test ### Model Description : No model card New: Create and edit this model card directly on the website!
2early4coffee/DialoGPT-medium-deadpool
https://huggingface.co/2early4coffee/DialoGPT-medium-deadpool
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 2early4coffee/DialoGPT-medium-deadpool ### Model URL : https://huggingface.co/2early4coffee/DialoGPT-medium-deadpool ### Model Description :
2early4coffee/DialoGPT-small-deadpool
https://huggingface.co/2early4coffee/DialoGPT-small-deadpool
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 2early4coffee/DialoGPT-small-deadpool ### Model URL : https://huggingface.co/2early4coffee/DialoGPT-small-deadpool ### Model Description :
2umm3r/bert-base-uncased-finetuned-cls
https://huggingface.co/2umm3r/bert-base-uncased-finetuned-cls
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 2umm3r/bert-base-uncased-finetuned-cls ### Model URL : https://huggingface.co/2umm3r/bert-base-uncased-finetuned-cls ### Model Description : No model card New: Create and edit this model card directly on the website!
2umm3r/distilbert-base-uncased-finetuned-cola
https://huggingface.co/2umm3r/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 2umm3r/distilbert-base-uncased-finetuned-cola ### Model URL : https://huggingface.co/2umm3r/distilbert-base-uncased-finetuned-cola ### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
3koozy/gpt2-HxH
https://huggingface.co/3koozy/gpt2-HxH
this is a fine tuned GPT2 text generation model on a Hunter x Hunter TV anime series dataset.you can find a link to the used dataset here : https://www.kaggle.com/bkoozy/hunter-x-hunter-subtitles you can find a colab notebook for fine-tuning the gpt2 model here : https://github.com/3koozy/fine-tune-gpt2-HxH/
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 3koozy/gpt2-HxH ### Model URL : https://huggingface.co/3koozy/gpt2-HxH ### Model Description : this is a fine tuned GPT2 text generation model on a Hunter x Hunter TV anime series dataset.you can find a link to the used dataset here : https://www.kaggle.com/bkoozy/hunter-x-hunter-subtitles you can find a colab notebook for fine-tuning the gpt2 model here : https://github.com/3koozy/fine-tune-gpt2-HxH/
3zooze/Dd
https://huggingface.co/3zooze/Dd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 3zooze/Dd ### Model URL : https://huggingface.co/3zooze/Dd ### Model Description : No model card New: Create and edit this model card directly on the website!
Akshay-Vs/AI
https://huggingface.co/Akshay-Vs/AI
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Akshay-Vs/AI ### Model URL : https://huggingface.co/Akshay-Vs/AI ### Model Description : No model card New: Create and edit this model card directly on the website!
511663/bert_finetuning_test
https://huggingface.co/511663/bert_finetuning_test
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 511663/bert_finetuning_test ### Model URL : https://huggingface.co/511663/bert_finetuning_test ### Model Description : No model card New: Create and edit this model card directly on the website!
54Tor/test
https://huggingface.co/54Tor/test
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 54Tor/test ### Model URL : https://huggingface.co/54Tor/test ### Model Description : No model card New: Create and edit this model card directly on the website!
5dimension/test
https://huggingface.co/5dimension/test
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 5dimension/test ### Model URL : https://huggingface.co/5dimension/test ### Model Description : No model card New: Create and edit this model card directly on the website!
609ead0502/test
https://huggingface.co/609ead0502/test
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 609ead0502/test ### Model URL : https://huggingface.co/609ead0502/test ### Model Description : No model card New: Create and edit this model card directly on the website!
61birds/distilbert-base-uncased-finetuned-cola
https://huggingface.co/61birds/distilbert-base-uncased-finetuned-cola
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 61birds/distilbert-base-uncased-finetuned-cola ### Model URL : https://huggingface.co/61birds/distilbert-base-uncased-finetuned-cola ### Model Description : No model card New: Create and edit this model card directly on the website!
842458199/model_name
https://huggingface.co/842458199/model_name
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 842458199/model_name ### Model URL : https://huggingface.co/842458199/model_name ### Model Description : No model card New: Create and edit this model card directly on the website!
850886470/xxy_gpt2_chinese
https://huggingface.co/850886470/xxy_gpt2_chinese
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 850886470/xxy_gpt2_chinese ### Model URL : https://huggingface.co/850886470/xxy_gpt2_chinese ### Model Description : No model card New: Create and edit this model card directly on the website!
873101411/distilbert-base-uncased-finetuned-squad
https://huggingface.co/873101411/distilbert-base-uncased-finetuned-squad
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 873101411/distilbert-base-uncased-finetuned-squad ### Model URL : https://huggingface.co/873101411/distilbert-base-uncased-finetuned-squad ### Model Description : No model card New: Create and edit this model card directly on the website!
91Rodman/111
https://huggingface.co/91Rodman/111
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 91Rodman/111 ### Model URL : https://huggingface.co/91Rodman/111 ### Model Description : No model card New: Create and edit this model card directly on the website!
923/distilbert-base-uncased-finetuned-squad
https://huggingface.co/923/distilbert-base-uncased-finetuned-squad
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 923/distilbert-base-uncased-finetuned-squad ### Model URL : https://huggingface.co/923/distilbert-base-uncased-finetuned-squad ### Model Description : No model card New: Create and edit this model card directly on the website!
9pinus/macbert-base-chinese-medical-collation
https://huggingface.co/9pinus/macbert-base-chinese-medical-collation
This model is a fine-tuned version of macbert for the purpose of spell checking in medical application scenarios. We fine-tuned macbert Chinese base version on a 300M dataset including 60K+ authorized medical articles. We proposed to randomly confuse 30% sentences of these articles by adding noise with a either visually or phonologically resembled characters. Consequently, the fine-tuned model can achieve 96% accuracy on our test dataset. You can use this model directly with a pipeline for token classification:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 9pinus/macbert-base-chinese-medical-collation ### Model URL : https://huggingface.co/9pinus/macbert-base-chinese-medical-collation ### Model Description : This model is a fine-tuned version of macbert for the purpose of spell checking in medical application scenarios. We fine-tuned macbert Chinese base version on a 300M dataset including 60K+ authorized medical articles. We proposed to randomly confuse 30% sentences of these articles by adding noise with a either visually or phonologically resembled characters. Consequently, the fine-tuned model can achieve 96% accuracy on our test dataset. You can use this model directly with a pipeline for token classification:
9pinus/macbert-base-chinese-medicine-recognition
https://huggingface.co/9pinus/macbert-base-chinese-medicine-recognition
This model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset including 100K+ authorized medical articles on which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : 9pinus/macbert-base-chinese-medicine-recognition ### Model URL : https://huggingface.co/9pinus/macbert-base-chinese-medicine-recognition ### Model Description : This model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset including 100K+ authorized medical articles on which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset.
A-bhimany-u08/bert-base-cased-qqp
https://huggingface.co/A-bhimany-u08/bert-base-cased-qqp
bert-base-cased model trained on quora question pair dataset. The task requires to predict whether the two given sentences (or questions) are not_duplicate (label 0) or duplicate (label 1). The model achieves 89% evaluation accuracy
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : A-bhimany-u08/bert-base-cased-qqp ### Model URL : https://huggingface.co/A-bhimany-u08/bert-base-cased-qqp ### Model Description : bert-base-cased model trained on quora question pair dataset. The task requires to predict whether the two given sentences (or questions) are not_duplicate (label 0) or duplicate (label 1). The model achieves 89% evaluation accuracy
AAli/bert-base-cased-wikitext2
https://huggingface.co/AAli/bert-base-cased-wikitext2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/bert-base-cased-wikitext2 ### Model URL : https://huggingface.co/AAli/bert-base-cased-wikitext2 ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/bert-base-uncased-finetuned-swag
https://huggingface.co/AAli/bert-base-uncased-finetuned-swag
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/bert-base-uncased-finetuned-swag ### Model URL : https://huggingface.co/AAli/bert-base-uncased-finetuned-swag ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/distilbert-base-uncased-finetuned-cola
https://huggingface.co/AAli/distilbert-base-uncased-finetuned-cola
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/distilbert-base-uncased-finetuned-cola ### Model URL : https://huggingface.co/AAli/distilbert-base-uncased-finetuned-cola ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/distilbert-base-uncased-finetuned-ner
https://huggingface.co/AAli/distilbert-base-uncased-finetuned-ner
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/distilbert-base-uncased-finetuned-ner ### Model URL : https://huggingface.co/AAli/distilbert-base-uncased-finetuned-ner ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/distilbert-base-uncased-finetuned-squad
https://huggingface.co/AAli/distilbert-base-uncased-finetuned-squad
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/distilbert-base-uncased-finetuned-squad ### Model URL : https://huggingface.co/AAli/distilbert-base-uncased-finetuned-squad ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/distilgpt2-finetuned-wikitext2
https://huggingface.co/AAli/distilgpt2-finetuned-wikitext2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/distilgpt2-finetuned-wikitext2 ### Model URL : https://huggingface.co/AAli/distilgpt2-finetuned-wikitext2 ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/gpt2-wikitext2
https://huggingface.co/AAli/gpt2-wikitext2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/gpt2-wikitext2 ### Model URL : https://huggingface.co/AAli/gpt2-wikitext2 ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/my-new-shiny-tokenizer
https://huggingface.co/AAli/my-new-shiny-tokenizer
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/my-new-shiny-tokenizer ### Model URL : https://huggingface.co/AAli/my-new-shiny-tokenizer ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/opus-mt-en-ro-finetuned-en-to-ro
https://huggingface.co/AAli/opus-mt-en-ro-finetuned-en-to-ro
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/opus-mt-en-ro-finetuned-en-to-ro ### Model URL : https://huggingface.co/AAli/opus-mt-en-ro-finetuned-en-to-ro ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/t5-small-finetuned-xsum
https://huggingface.co/AAli/t5-small-finetuned-xsum
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/t5-small-finetuned-xsum ### Model URL : https://huggingface.co/AAli/t5-small-finetuned-xsum ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/wav2vec2-base-demo-colab
https://huggingface.co/AAli/wav2vec2-base-demo-colab
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/wav2vec2-base-demo-colab ### Model URL : https://huggingface.co/AAli/wav2vec2-base-demo-colab ### Model Description : No model card New: Create and edit this model card directly on the website!
AAli/wav2vec2-base-finetuned-ks
https://huggingface.co/AAli/wav2vec2-base-finetuned-ks
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AAli/wav2vec2-base-finetuned-ks ### Model URL : https://huggingface.co/AAli/wav2vec2-base-finetuned-ks ### Model Description : No model card New: Create and edit this model card directly on the website!
ABBHISHEK/DialoGPT-small-harrypotter
https://huggingface.co/ABBHISHEK/DialoGPT-small-harrypotter
@Harry Potter DialoGPT model
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ABBHISHEK/DialoGPT-small-harrypotter ### Model URL : https://huggingface.co/ABBHISHEK/DialoGPT-small-harrypotter ### Model Description : @Harry Potter DialoGPT model
AG/pretraining
https://huggingface.co/AG/pretraining
Pre trained on clus_ chapter only.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AG/pretraining ### Model URL : https://huggingface.co/AG/pretraining ### Model Description : Pre trained on clus_ chapter only.
AHussain0418/distillbert-truth-detector
https://huggingface.co/AHussain0418/distillbert-truth-detector
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AHussain0418/distillbert-truth-detector ### Model URL : https://huggingface.co/AHussain0418/distillbert-truth-detector ### Model Description : No model card New: Create and edit this model card directly on the website!
AI-Ahmed/DisDistilBert-sst-N-Grams-en
https://huggingface.co/AI-Ahmed/DisDistilBert-sst-N-Grams-en
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AI-Ahmed/DisDistilBert-sst-N-Grams-en ### Model URL : https://huggingface.co/AI-Ahmed/DisDistilBert-sst-N-Grams-en ### Model Description :
AI-Growth-Lab/PatentSBERTa
https://huggingface.co/AI-Growth-Lab/PatentSBERTa
https://arxiv.org/abs/2103.11933 https://github.com/AI-Growth-Lab/PatentSBERTa This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 5 with parameters: Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss Parameters of the fit()-Method:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AI-Growth-Lab/PatentSBERTa ### Model URL : https://huggingface.co/AI-Growth-Lab/PatentSBERTa ### Model Description : https://arxiv.org/abs/2103.11933 https://github.com/AI-Growth-Lab/PatentSBERTa This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 5 with parameters: Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss Parameters of the fit()-Method:
AI-Lab-Makerere/en_lg
https://huggingface.co/AI-Lab-Makerere/en_lg
You can use cURL to access this model:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AI-Lab-Makerere/en_lg ### Model URL : https://huggingface.co/AI-Lab-Makerere/en_lg ### Model Description : You can use cURL to access this model:
AI-Lab-Makerere/lg_en
https://huggingface.co/AI-Lab-Makerere/lg_en
You can use cURL to access this model:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AI-Lab-Makerere/lg_en ### Model URL : https://huggingface.co/AI-Lab-Makerere/lg_en ### Model Description : You can use cURL to access this model:
AI-Nordics/bert-large-swedish-cased
https://huggingface.co/AI-Nordics/bert-large-swedish-cased
This model follows the Bert Large model architecture as implemented in Megatron-LM framework. It was trained with a batch size of 512 in 600k steps. The model contains following parameters: The model is pretrained on a Swedish text corpus of around 85 GB from a variety of sources as shown below. The raw model can be used for the usual tasks of masked language modeling or next sentence prediction. It is also often fine-tuned on a downstream task to improve its performance in a specific domain/task.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AI-Nordics/bert-large-swedish-cased ### Model URL : https://huggingface.co/AI-Nordics/bert-large-swedish-cased ### Model Description : This model follows the Bert Large model architecture as implemented in Megatron-LM framework. It was trained with a batch size of 512 in 600k steps. The model contains following parameters: The model is pretrained on a Swedish text corpus of around 85 GB from a variety of sources as shown below. The raw model can be used for the usual tasks of masked language modeling or next sentence prediction. It is also often fine-tuned on a downstream task to improve its performance in a specific domain/task.
IssakaAI/wav2vec2-large-xls-r-300m-turkish-colab
https://huggingface.co/IssakaAI/wav2vec2-large-xls-r-300m-turkish-colab
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : IssakaAI/wav2vec2-large-xls-r-300m-turkish-colab ### Model URL : https://huggingface.co/IssakaAI/wav2vec2-large-xls-r-300m-turkish-colab ### Model Description : No model card New: Create and edit this model card directly on the website!
AI4Sec/cyner-xlm-roberta-base
https://huggingface.co/AI4Sec/cyner-xlm-roberta-base
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AI4Sec/cyner-xlm-roberta-base ### Model URL : https://huggingface.co/AI4Sec/cyner-xlm-roberta-base ### Model Description :
AI4Sec/cyner-xlm-roberta-large
https://huggingface.co/AI4Sec/cyner-xlm-roberta-large
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AI4Sec/cyner-xlm-roberta-large ### Model URL : https://huggingface.co/AI4Sec/cyner-xlm-roberta-large ### Model Description :
AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2
https://huggingface.co/AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2
This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 1438 with parameters: Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss Parameters of the fit()-Method:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2 ### Model URL : https://huggingface.co/AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2 ### Model Description : This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 1438 with parameters: Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss Parameters of the fit()-Method:
AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1
https://huggingface.co/AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 1438 with parameters: Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss Parameters of the fit()-Method:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 ### Model URL : https://huggingface.co/AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 ### Model Description : This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 1438 with parameters: Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss Parameters of the fit()-Method:
AIDA-UPM/MSTSb_stsb-xlm-r-multilingual
https://huggingface.co/AIDA-UPM/MSTSb_stsb-xlm-r-multilingual
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 1438 with parameters: Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss Parameters of the fit()-Method:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AIDA-UPM/MSTSb_stsb-xlm-r-multilingual ### Model URL : https://huggingface.co/AIDA-UPM/MSTSb_stsb-xlm-r-multilingual ### Model Description : This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 1438 with parameters: Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss Parameters of the fit()-Method:
AIDA-UPM/bertweet-base-multi-mami
https://huggingface.co/AIDA-UPM/bertweet-base-multi-mami
This is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them into 5 multi labels.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AIDA-UPM/bertweet-base-multi-mami ### Model URL : https://huggingface.co/AIDA-UPM/bertweet-base-multi-mami ### Model Description : This is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them into 5 multi labels.
AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2
https://huggingface.co/AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2
This is a fine-tuned version of paraphrase-multilingual-mpnet-base-v2 from sentence-transformers model with Semantic Textual Similarity Benchmark extended to 15 languages: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering, semantic search and measuring the similarity between two sentences. This model is fine-tuned version of paraphrase-multilingual-mpnet-base-v2 for semantic textual similarity with multilingual data. The dataset used for this fine-tuning is STSb extended to 15 languages with Google Translator. For mantaining data quality the sentence pairs with a confidence value below 0.7 were dropped. The extended dataset is available at GitHub. The languages included in the extended version are: ar, cs, de, en, es, fr, hi, it, ja, nl, pl, pt, ru, tr, zh-CN, zh-TW. The pooling operation used to condense the word embeddings into a sentence embedding is mean pooling (more info below). Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. Check the test results in the Semantic Textual Similarity Tasks. The 15 languages available at the Multilingual STSB have been combined into monolingual and cross-lingual tasks, giving a total of 31 tasks. Monolingual tasks have both sentences from the same language source (e.g., Ar-Ar, Es-Es), while cross-lingual tasks have two sentences, each in a different language being one of them English (e.g., en-ar, en-es). Here we compare the average multilingual semantic textual similairty capabilities between the paraphrase-multilingual-mpnet-base-v2 based model and the mstsb-paraphrase-multilingual-mpnet-base-v2 fine-tuned model across the 31 tasks. It is worth noting that both models are multilingual, but the second model is adjusted with multilingual data for semantic similarity. The average of correlation coefficients is computed by transforming each correlation coefficient to a Fisher's z value, averaging them, and then back-transforming to a correlation coefficient. The following tables breakdown the performance of mstsb-paraphrase-multilingual-mpnet-base-v2 according to the different tasks. For the sake of readability tasks have been splitted into monolingual and cross-lingual tasks. The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 687 with parameters: Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss Parameters of the fit()-Method:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2 ### Model URL : https://huggingface.co/AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2 ### Model Description : This is a fine-tuned version of paraphrase-multilingual-mpnet-base-v2 from sentence-transformers model with Semantic Textual Similarity Benchmark extended to 15 languages: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering, semantic search and measuring the similarity between two sentences. This model is fine-tuned version of paraphrase-multilingual-mpnet-base-v2 for semantic textual similarity with multilingual data. The dataset used for this fine-tuning is STSb extended to 15 languages with Google Translator. For mantaining data quality the sentence pairs with a confidence value below 0.7 were dropped. The extended dataset is available at GitHub. The languages included in the extended version are: ar, cs, de, en, es, fr, hi, it, ja, nl, pl, pt, ru, tr, zh-CN, zh-TW. The pooling operation used to condense the word embeddings into a sentence embedding is mean pooling (more info below). Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. Check the test results in the Semantic Textual Similarity Tasks. The 15 languages available at the Multilingual STSB have been combined into monolingual and cross-lingual tasks, giving a total of 31 tasks. Monolingual tasks have both sentences from the same language source (e.g., Ar-Ar, Es-Es), while cross-lingual tasks have two sentences, each in a different language being one of them English (e.g., en-ar, en-es). Here we compare the average multilingual semantic textual similairty capabilities between the paraphrase-multilingual-mpnet-base-v2 based model and the mstsb-paraphrase-multilingual-mpnet-base-v2 fine-tuned model across the 31 tasks. It is worth noting that both models are multilingual, but the second model is adjusted with multilingual data for semantic similarity. The average of correlation coefficients is computed by transforming each correlation coefficient to a Fisher's z value, averaging them, and then back-transforming to a correlation coefficient. The following tables breakdown the performance of mstsb-paraphrase-multilingual-mpnet-base-v2 according to the different tasks. For the sake of readability tasks have been splitted into monolingual and cross-lingual tasks. The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 687 with parameters: Loss: sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss Parameters of the fit()-Method:
AIDA-UPM/xlm-roberta-large-snli_mnli_xnli_fever_r1_r2_r3
https://huggingface.co/AIDA-UPM/xlm-roberta-large-snli_mnli_xnli_fever_r1_r2_r3
This is a finetuned XLM-RoBERTA model for natural language inference. It has been trained with a massive ammount of data following the ANLI pipeline training. We include data from: The model is validated on ANLI training sets, including R1, R2 and R3. The following results can be expected on the testing splits.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AIDA-UPM/xlm-roberta-large-snli_mnli_xnli_fever_r1_r2_r3 ### Model URL : https://huggingface.co/AIDA-UPM/xlm-roberta-large-snli_mnli_xnli_fever_r1_r2_r3 ### Model Description : This is a finetuned XLM-RoBERTA model for natural language inference. It has been trained with a massive ammount of data following the ANLI pipeline training. We include data from: The model is validated on ANLI training sets, including R1, R2 and R3. The following results can be expected on the testing splits.
AIDynamics/DialoGPT-medium-MentorDealerGuy
https://huggingface.co/AIDynamics/DialoGPT-medium-MentorDealerGuy
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AIDynamics/DialoGPT-medium-MentorDealerGuy ### Model URL : https://huggingface.co/AIDynamics/DialoGPT-medium-MentorDealerGuy ### Model Description :
AJ/DialoGPT-small-ricksanchez
https://huggingface.co/AJ/DialoGPT-small-ricksanchez
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AJ/DialoGPT-small-ricksanchez ### Model URL : https://huggingface.co/AJ/DialoGPT-small-ricksanchez ### Model Description :
AJ/rick-ai
https://huggingface.co/AJ/rick-ai
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AJ/rick-ai ### Model URL : https://huggingface.co/AJ/rick-ai ### Model Description : No model card New: Create and edit this model card directly on the website!
AJ/rick-bot
https://huggingface.co/AJ/rick-bot
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AJ/rick-bot ### Model URL : https://huggingface.co/AJ/rick-bot ### Model Description : No model card New: Create and edit this model card directly on the website!
AJ/rick-discord-bot
https://huggingface.co/AJ/rick-discord-bot
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AJ/rick-discord-bot ### Model URL : https://huggingface.co/AJ/rick-discord-bot ### Model Description :
AJ/rick-sanchez-bot
https://huggingface.co/AJ/rick-sanchez-bot
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AJ/rick-sanchez-bot ### Model URL : https://huggingface.co/AJ/rick-sanchez-bot ### Model Description :
AJ-Dude/DialoGPT-small-harrypotter
https://huggingface.co/AJ-Dude/DialoGPT-small-harrypotter
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AJ-Dude/DialoGPT-small-harrypotter ### Model URL : https://huggingface.co/AJ-Dude/DialoGPT-small-harrypotter ### Model Description :
AK/ak_nlp
https://huggingface.co/AK/ak_nlp
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AK/ak_nlp ### Model URL : https://huggingface.co/AK/ak_nlp ### Model Description : No model card New: Create and edit this model card directly on the website!
AK270802/DialoGPT-small-harrypotter
https://huggingface.co/AK270802/DialoGPT-small-harrypotter
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AK270802/DialoGPT-small-harrypotter ### Model URL : https://huggingface.co/AK270802/DialoGPT-small-harrypotter ### Model Description :
AKMyscich/VetTrain-v1.2
https://huggingface.co/AKMyscich/VetTrain-v1.2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AKMyscich/VetTrain-v1.2 ### Model URL : https://huggingface.co/AKMyscich/VetTrain-v1.2 ### Model Description : No model card New: Create and edit this model card directly on the website!
AKulk/wav2vec2-base-timit-demo-colab
https://huggingface.co/AKulk/wav2vec2-base-timit-demo-colab
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AKulk/wav2vec2-base-timit-demo-colab ### Model URL : https://huggingface.co/AKulk/wav2vec2-base-timit-demo-colab ### Model Description : No model card New: Create and edit this model card directly on the website!
AKulk/wav2vec2-base-timit-epochs10
https://huggingface.co/AKulk/wav2vec2-base-timit-epochs10
This model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs5 on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AKulk/wav2vec2-base-timit-epochs10 ### Model URL : https://huggingface.co/AKulk/wav2vec2-base-timit-epochs10 ### Model Description : This model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs5 on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
AKulk/wav2vec2-base-timit-epochs15
https://huggingface.co/AKulk/wav2vec2-base-timit-epochs15
This model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AKulk/wav2vec2-base-timit-epochs15 ### Model URL : https://huggingface.co/AKulk/wav2vec2-base-timit-epochs15 ### Model Description : This model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
AKulk/wav2vec2-base-timit-epochs20
https://huggingface.co/AKulk/wav2vec2-base-timit-epochs20
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AKulk/wav2vec2-base-timit-epochs20 ### Model URL : https://huggingface.co/AKulk/wav2vec2-base-timit-epochs20 ### Model Description : No model card New: Create and edit this model card directly on the website!
AKulk/wav2vec2-base-timit-epochs5
https://huggingface.co/AKulk/wav2vec2-base-timit-epochs5
This model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AKulk/wav2vec2-base-timit-epochs5 ### Model URL : https://huggingface.co/AKulk/wav2vec2-base-timit-epochs5 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
ALINEAR/albert-japanese-v2
https://huggingface.co/ALINEAR/albert-japanese-v2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ALINEAR/albert-japanese-v2 ### Model URL : https://huggingface.co/ALINEAR/albert-japanese-v2 ### Model Description : No model card New: Create and edit this model card directly on the website!
ALINEAR/albert-japanese
https://huggingface.co/ALINEAR/albert-japanese
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ALINEAR/albert-japanese ### Model URL : https://huggingface.co/ALINEAR/albert-japanese ### Model Description : No model card New: Create and edit this model card directly on the website!
ALaks96/distilbart-cnn-12-6
https://huggingface.co/ALaks96/distilbart-cnn-12-6
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ALaks96/distilbart-cnn-12-6 ### Model URL : https://huggingface.co/ALaks96/distilbart-cnn-12-6 ### Model Description : No model card New: Create and edit this model card directly on the website!
ARATHI/electra-small-discriminator-fintuned-cola
https://huggingface.co/ARATHI/electra-small-discriminator-fintuned-cola
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ARATHI/electra-small-discriminator-fintuned-cola ### Model URL : https://huggingface.co/ARATHI/electra-small-discriminator-fintuned-cola ### Model Description : No model card New: Create and edit this model card directly on the website!
ARCYVILK/gpt2-bot
https://huggingface.co/ARCYVILK/gpt2-bot
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ARCYVILK/gpt2-bot ### Model URL : https://huggingface.co/ARCYVILK/gpt2-bot ### Model Description : No model card New: Create and edit this model card directly on the website!
ARTeLab/it5-summarization-fanpage
https://huggingface.co/ARTeLab/it5-summarization-fanpage
This model is a fine-tuned version of gsarti/it5-base on Fanpage dataset for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training: More details and results in published work
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ARTeLab/it5-summarization-fanpage ### Model URL : https://huggingface.co/ARTeLab/it5-summarization-fanpage ### Model Description : This model is a fine-tuned version of gsarti/it5-base on Fanpage dataset for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training: More details and results in published work
ARTeLab/it5-summarization-ilpost
https://huggingface.co/ARTeLab/it5-summarization-ilpost
This model is a fine-tuned version of gsarti/it5-base on IlPost dataset for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ARTeLab/it5-summarization-ilpost ### Model URL : https://huggingface.co/ARTeLab/it5-summarization-ilpost ### Model Description : This model is a fine-tuned version of gsarti/it5-base on IlPost dataset for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training:
ARTeLab/it5-summarization-mlsum
https://huggingface.co/ARTeLab/it5-summarization-mlsum
This model is a fine-tuned version of gsarti/it5-base on MLSum-it for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training: More details and results in published work
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ARTeLab/it5-summarization-mlsum ### Model URL : https://huggingface.co/ARTeLab/it5-summarization-mlsum ### Model Description : This model is a fine-tuned version of gsarti/it5-base on MLSum-it for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training: More details and results in published work
ARTeLab/mbart-summarization-fanpage
https://huggingface.co/ARTeLab/mbart-summarization-fanpage
This model is a fine-tuned version of facebook/mbart-large-cc25 on Fanpage dataset for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training: More details and results in published work
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ARTeLab/mbart-summarization-fanpage ### Model URL : https://huggingface.co/ARTeLab/mbart-summarization-fanpage ### Model Description : This model is a fine-tuned version of facebook/mbart-large-cc25 on Fanpage dataset for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training: More details and results in published work
ARTeLab/mbart-summarization-ilpost
https://huggingface.co/ARTeLab/mbart-summarization-ilpost
This model is a fine-tuned version of facebook/mbart-large-cc25 on IlPost dataset for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training: More details and results in published work
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ARTeLab/mbart-summarization-ilpost ### Model URL : https://huggingface.co/ARTeLab/mbart-summarization-ilpost ### Model Description : This model is a fine-tuned version of facebook/mbart-large-cc25 on IlPost dataset for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training: More details and results in published work
ARTeLab/mbart-summarization-mlsum
https://huggingface.co/ARTeLab/mbart-summarization-mlsum
This model is a fine-tuned version of facebook/mbart-large-cc25 on mlsum-it for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training: More details and results in published work
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ARTeLab/mbart-summarization-mlsum ### Model URL : https://huggingface.co/ARTeLab/mbart-summarization-mlsum ### Model Description : This model is a fine-tuned version of facebook/mbart-large-cc25 on mlsum-it for Abstractive Summarization. It achieves the following results: The following hyperparameters were used during training: More details and results in published work
ASCCCCCCCC/PENGMENGJIE-finetuned-emotion
https://huggingface.co/ASCCCCCCCC/PENGMENGJIE-finetuned-emotion
This model is a fine-tuned version of distilbert-base-uncased on an unkown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ASCCCCCCCC/PENGMENGJIE-finetuned-emotion ### Model URL : https://huggingface.co/ASCCCCCCCC/PENGMENGJIE-finetuned-emotion ### Model Description : This model is a fine-tuned version of distilbert-base-uncased on an unkown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
ASCCCCCCCC/PENGMENGJIE
https://huggingface.co/ASCCCCCCCC/PENGMENGJIE
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ASCCCCCCCC/PENGMENGJIE ### Model URL : https://huggingface.co/ASCCCCCCCC/PENGMENGJIE ### Model Description :
ASCCCCCCCC/PMJ
https://huggingface.co/ASCCCCCCCC/PMJ
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ASCCCCCCCC/PMJ ### Model URL : https://huggingface.co/ASCCCCCCCC/PMJ ### Model Description : No model card New: Create and edit this model card directly on the website!
ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh
https://huggingface.co/ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh ### Model URL : https://huggingface.co/ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh ### Model Description : No model card New: Create and edit this model card directly on the website!
ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000
https://huggingface.co/ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000
This model is a fine-tuned version of bert-base-chinese on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000 ### Model URL : https://huggingface.co/ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000 ### Model Description : This model is a fine-tuned version of bert-base-chinese on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000
https://huggingface.co/ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000
This model is a fine-tuned version of bert-base-chinese on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000 ### Model URL : https://huggingface.co/ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000 ### Model Description : This model is a fine-tuned version of bert-base-chinese on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000
https://huggingface.co/ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000
This model is a fine-tuned version of distilbert-base-multilingual-cased on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000 ### Model URL : https://huggingface.co/ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000 ### Model Description : This model is a fine-tuned version of distilbert-base-multilingual-cased on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000
https://huggingface.co/ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000 ### Model URL : https://huggingface.co/ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000 ### Model Description : This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc
https://huggingface.co/ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of distilbert-base-uncased on an unkown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc ### Model URL : https://huggingface.co/ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc ### Model Description : This model is a fine-tuned version of distilbert-base-uncased on an unkown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
AT/bert-base-uncased-finetuned-wikitext2
https://huggingface.co/AT/bert-base-uncased-finetuned-wikitext2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AT/bert-base-uncased-finetuned-wikitext2 ### Model URL : https://huggingface.co/AT/bert-base-uncased-finetuned-wikitext2 ### Model Description : No model card New: Create and edit this model card directly on the website!
AT/distilbert-base-cased-finetuned-wikitext2
https://huggingface.co/AT/distilbert-base-cased-finetuned-wikitext2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AT/distilbert-base-cased-finetuned-wikitext2 ### Model URL : https://huggingface.co/AT/distilbert-base-cased-finetuned-wikitext2 ### Model Description : No model card New: Create and edit this model card directly on the website!
AT/distilgpt2-finetuned-wikitext2
https://huggingface.co/AT/distilgpt2-finetuned-wikitext2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AT/distilgpt2-finetuned-wikitext2 ### Model URL : https://huggingface.co/AT/distilgpt2-finetuned-wikitext2 ### Model Description : No model card New: Create and edit this model card directly on the website!
AT/distilroberta-base-finetuned-wikitext2
https://huggingface.co/AT/distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of distilroberta-base on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AT/distilroberta-base-finetuned-wikitext2 ### Model URL : https://huggingface.co/AT/distilroberta-base-finetuned-wikitext2 ### Model Description : This model is a fine-tuned version of distilroberta-base on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
ATGdev/DialoGPT-small-harrypotter
https://huggingface.co/ATGdev/DialoGPT-small-harrypotter
#Harry Potter DialoGPT Model
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ATGdev/DialoGPT-small-harrypotter ### Model URL : https://huggingface.co/ATGdev/DialoGPT-small-harrypotter ### Model Description : #Harry Potter DialoGPT Model
ATGdev/ai_ironman
https://huggingface.co/ATGdev/ai_ironman
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ATGdev/ai_ironman ### Model URL : https://huggingface.co/ATGdev/ai_ironman ### Model Description : No model card New: Create and edit this model card directly on the website!
AUBMC-AIM/MammoGANesis
https://huggingface.co/AUBMC-AIM/MammoGANesis
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AUBMC-AIM/MammoGANesis ### Model URL : https://huggingface.co/AUBMC-AIM/MammoGANesis ### Model Description :
AUBMC-AIM/OCTaGAN
https://huggingface.co/AUBMC-AIM/OCTaGAN
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AUBMC-AIM/OCTaGAN ### Model URL : https://huggingface.co/AUBMC-AIM/OCTaGAN ### Model Description :
AVAIYA/python-test
https://huggingface.co/AVAIYA/python-test
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AVAIYA/python-test ### Model URL : https://huggingface.co/AVAIYA/python-test ### Model Description : No model card New: Create and edit this model card directly on the website!
AVSilva/bertimbau-large-fine-tuned-md
https://huggingface.co/AVSilva/bertimbau-large-fine-tuned-md
This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AVSilva/bertimbau-large-fine-tuned-md ### Model URL : https://huggingface.co/AVSilva/bertimbau-large-fine-tuned-md ### Model Description : This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
AVSilva/bertimbau-large-fine-tuned-sd
https://huggingface.co/AVSilva/bertimbau-large-fine-tuned-sd
This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AVSilva/bertimbau-large-fine-tuned-sd ### Model URL : https://huggingface.co/AVSilva/bertimbau-large-fine-tuned-sd ### Model Description : This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
AVeryRealHuman/DialoGPT-small-TonyStark
https://huggingface.co/AVeryRealHuman/DialoGPT-small-TonyStark
#Tony Stark DialoGPT model
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AVeryRealHuman/DialoGPT-small-TonyStark ### Model URL : https://huggingface.co/AVeryRealHuman/DialoGPT-small-TonyStark ### Model Description : #Tony Stark DialoGPT model