Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
Paper • 2402.19427 • Published • 53
Simple linear attention language models balance the recall-throughput tradeoff
Paper • 2402.18668 • Published • 19
ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition
Paper • 2402.15220 • Published • 19
Linear Transformers are Versatile In-Context Learners
Paper • 2402.14180 • Published • 6
Kiran Kamble
kiranr
AI & ML interests
NLP, LLMs
Recent Activity
New activity, 1 day ago: Writer/palmyra-large: Adding `safetensors` variant of this model
Authored a paper, 14 days ago: Expect the Unexpected: FailSafe Long Context QA for Finance
Organizations
Collections: 1
Models: 1
Datasets: None public yet