- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
  Paper • 2402.19427 • Published • 53
- Simple linear attention language models balance the recall-throughput tradeoff
  Paper • 2402.18668 • Published • 19
- ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition
  Paper • 2402.15220 • Published • 19
- Linear Transformers are Versatile In-Context Learners
  Paper • 2402.14180 • Published • 6
Kiran Kamble (kiranr)
AI & ML interests: nlp, llm
Recent Activity
- Liked a model 7 days ago: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- Liked a model 7 days ago: internlm/internlm3-8b-instruct
- Liked a model 7 days ago: MiniMaxAI/MiniMax-Text-01
Collections: 1
Models: 1
Datasets: none public yet