MiniMax-01: Scaling Foundation Models with Lightning Attention Paper • 2501.08313 • Published 4 days ago • 258
Scaling Laws for Linear Complexity Language Models Paper • 2406.16690 • Published Jun 24, 2024 • 23 • 4
CO2: Efficient Distributed Training with Full Communication-Computation Overlap Paper • 2401.16265 • Published Jan 29, 2024 • 1
Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models Paper • 2401.04658 • Published Jan 9, 2024 • 26