MLGym: A New Framework and Benchmark for Advancing AI Research Agents • Paper 2502.14499
Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark • Paper 2501.05444 • Published Jan 9
Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision Language Models • Paper 2502.14191
CodeCriticBench: A Holistic Code Critique Benchmark for Large Language Models • Paper 2502.16614