SciCode: A Research Coding Benchmark Curated by Scientists
Abstract
Since language models (LMs) now outperform average humans on many challenging tasks, it has become increasingly difficult to develop high-quality, realistic evaluations that remain challenging. We address this issue by examining LMs' ability to generate code for solving real scientific research problems. Incorporating input from scientists and AI researchers in 16 diverse natural science sub-fields, including mathematics, physics, chemistry, biology, and materials science, we created SciCode, a scientist-curated coding benchmark. The problems in SciCode naturally factorize into multiple subproblems, each involving knowledge recall, reasoning, and code synthesis. In total, SciCode contains 338 subproblems decomposed from 80 challenging main problems. It offers optional descriptions specifying useful scientific background information, along with scientist-annotated gold-standard solutions and test cases for evaluation. Claude3.5-Sonnet, the best-performing model among those tested, solves only 4.6% of the problems in the most realistic setting. We believe SciCode both demonstrates contemporary LMs' progress toward becoming helpful scientific assistants and sheds light on the development and evaluation of scientific AI in the future.
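As a rough illustration of the evaluation scheme the abstract describes (generated subproblem code checked against scientist-annotated test cases), the sketch below shows one way such a check could be wired up. It is not SciCode's actual harness; the function name, temporary-file handling, and pass/fail convention are all assumptions.
# Illustrative sketch only: not SciCode's real harness. The helper name,
# file handling, and pass/fail convention below are assumptions.
import subprocess
import sys
import tempfile
import textwrap

def run_subproblem(generated_code: str, test_code: str, timeout: int = 60) -> bool:
    """Run a model-generated subproblem solution against its annotated test cases."""
    program = textwrap.dedent(generated_code) + "\n\n" + textwrap.dedent(test_code)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        # Tests are assumed to be plain asserts; a nonzero exit code means failure.
        result = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False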
Community
Hi @amber1120, congrats on this work!
Are you planning to share the dataset on the hub? Here's a guide: https://huggingface.co/docs/datasets/loading.
The dataset could then be loaded in 2 lines of code, like so:
from datasets import load_dataset
dataset = load_dataset("your-hf-organization/scicode")
It could then also be linked to this paper, as explained here: https://huggingface.co/docs/hub/en/datasets-cards#linking-a-paper.
We could also set up a Space using Gradio for the leaderboard.
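For reference, a minimal Gradio leaderboard Space could look roughly like the sketch below. This is only an illustration, not an official leaderboard: the layout and column names are assumptions, and the single row simply reuses the Claude3.5-Sonnet number from the abstract.
import gradio as gr
import pandas as pd

# Toy data for illustration; a real leaderboard would load submitted evaluation results.
scores = pd.DataFrame(
    {"Model": ["Claude3.5-Sonnet"], "Main problems solved (%)": [4.6]}
)

with gr.Blocks() as demo:
    gr.Markdown("# SciCode Leaderboard (sketch)")
    gr.Dataframe(value=scores, interactive=False)

demo.launch()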
Let me know if you need any help!
Cheers,
Niels
Open-source @ HF
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions (2024)
- PLUM: Preference Learning Plus Test Cases Yields Better Code Language Models (2024)
- CodeRAG-Bench: Can Retrieval Augment Code Generation? (2024)
- AICoderEval: Improving AI Domain Code Generation of Large Language Models (2024)
- A Survey on Large Language Models for Code Generation (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend