Mastering Long Contexts in LLMs with KVPress Article • By nvidia and 1 other • 17 days ago • 61
ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization Paper • 2406.05981 • Published Jun 10, 2024 • 13
NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations Paper • 2306.06359 • Published Jun 10, 2023 • 1
In-Sensor & Neuromorphic Computing are all you need for Energy Efficient Computer Vision Paper • 2212.10881 • Published Dec 21, 2022
Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT Paper • 2307.11764 • Published Jul 14, 2023
InstaTune: Instantaneous Neural Architecture Search During Fine-Tuning Paper • 2308.15609 • Published Aug 29, 2023
GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM Paper • 2403.05527 • Published Mar 8, 2024 • 1