How (un)ethical are instruction-centric responses of LLMs? Unveiling the vulnerabilities of safety guardrails to harmful queries Paper • 2402.15302 • Published Feb 23, 2024
AI and Safety Collection: work we have published at several top NLP/AI conferences, including ACL, EMNLP, AAAI, and ICWSM • 8 items
Breaking Boundaries: Investigating the Effects of Model Editing on Cross-linguistic Performance Paper • 2406.11139 • Published Jun 17, 2024
SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models Paper • 2406.12274 • Published Jun 18, 2024
Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations Paper • 2406.11801 • Published Jun 17, 2024
Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models Paper • 2401.10647 • Published Jan 19, 2024