Collections including paper arxiv:1801.06146

- Universal Language Model Fine-tuning for Text Classification
  Paper • 1801.06146 • Published • 6
- Exploiting Similarities among Languages for Machine Translation
  Paper • 1309.4168 • Published
- Theory, Analysis, and Best Practices for Sigmoid Self-Attention
  Paper • 2409.04431 • Published • 1
- Kolmogorov-Arnold Transformer
  Paper • 2409.10594 • Published • 38

- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 13
- Efficient Tool Use with Chain-of-Abstraction Reasoning
  Paper • 2401.17464 • Published • 16
- MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
  Paper • 2407.21770 • Published • 22

- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- Universal Language Model Fine-tuning for Text Classification
  Paper • 1801.06146 • Published • 6
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11

- Universal Language Model Fine-tuning for Text Classification
  Paper • 1801.06146 • Published • 6
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 11
- SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing
  Paper • 1808.06226 • Published • 1

- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- ImageNet Large Scale Visual Recognition Challenge
  Paper • 1409.0575 • Published • 8
- Sequence to Sequence Learning with Neural Networks
  Paper • 1409.3215 • Published • 3
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11

- Mistral 7B
  Paper • 2310.06825 • Published • 47
- BloombergGPT: A Large Language Model for Finance
  Paper • 2303.17564 • Published • 20
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14
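
Listings like the ones above can also be fetched programmatically. Below is a minimal sketch using the `huggingface_hub` Python client's `list_collections` helper to query collections that include this paper; the `papers/1801.06146` item-identifier format and the printed fields are assumptions to verify against the current Hub API documentation.

```python
# Minimal sketch: list community collections that include a given paper.
# Assumes a recent huggingface_hub release; the "papers/<arxiv_id>" item
# format is an assumption -- check the current Hub API docs.
from huggingface_hub import list_collections

for collection in list_collections(item="papers/1801.06146", limit=10):
    print(f"{collection.title} ({collection.slug})")
    # Each collection preview carries a truncated list of its items.
    for entry in collection.items:
        print(f"  {entry.item_type}: {entry.item_id}")
```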