- TransformerFAM: Feedback Attention Is Working Memory — paper, arXiv:2404.09173, published Apr 14, 2024
- Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length — paper, arXiv:2404.08801, published Apr 12, 2024
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser — text-generation model, updated Mar 4, 2024