Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations Paper • 2403.09704 • Published Mar 8
Using Large Language Models for Natural Language Processing Tasks in Requirements Engineering: A Systematic Guideline Paper • 2402.13823 • Published Feb 21
Customizing Language Model Responses with Contrastive In-Context Learning Paper • 2401.17390 • Published Jan 30
SoFA: Shielded On-the-fly Alignment via Priority Rule Following Paper • 2402.17358 • Published Feb 27