- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- Playing Atari with Deep Reinforcement Learning
  Paper • 1312.5602 • Published
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11

Collections including paper arxiv:1810.04805

Collection 1:
- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 13
- Efficient Tool Use with Chain-of-Abstraction Reasoning
  Paper • 2401.17464 • Published • 16
- MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
  Paper • 2407.21770 • Published • 22

Collection 2:
- ReAct: Synergizing Reasoning and Acting in Language Models
  Paper • 2210.03629 • Published • 14
- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- Jamba: A Hybrid Transformer-Mamba Language Model
  Paper • 2403.19887 • Published • 104

Collection 3:
- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11

Collection 4:
- Distributed Representations of Sentences and Documents
  Paper • 1405.4053 • Published
- Sequence to Sequence Learning with Neural Networks
  Paper • 1409.3215 • Published • 3
- PaLM: Scaling Language Modeling with Pathways
  Paper • 2204.02311 • Published • 2
- Recent Trends in Deep Learning Based Natural Language Processing
  Paper • 1708.02709 • Published

Collection 5:
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 11
- OPT: Open Pre-trained Transformer Language Models
  Paper • 2205.01068 • Published • 2

Collection 6:
- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- Self-Attention with Relative Position Representations
  Paper • 1803.02155 • Published
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding
  Paper • 2401.12954 • Published • 28

Collection 7:
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 14
- Transformers Can Achieve Length Generalization But Not Robustly
  Paper • 2402.09371 • Published • 12
- A Thorough Examination of Decoding Methods in the Era of LLMs
  Paper • 2402.06925 • Published • 1