Martyna Płomecka
Authored Publications
An AI system to help scientists write expert-level empirical software
Johan Kartiwa
Matthew Abraham
Qian-Ze Zhu
Zahra Shamsi
Shibl Mourad
Julie Wang
Anastasiya Belyaeva
Scott Ellsworth
Yuchen Zhou
Jackson Cui
Grace Joseph
Malcolm Kane
Paul Raccuglia
Ryan Krueger
Jeffrey Cardille
Erica Brand
Renee Johnston
James Thompson
Chris Co
James Manyika
Anna Bulanova
David Smalling
Eser Aygün
Kat Chou
Gheorghe Comanici
arXiv (2025)
The cycle of scientific discovery is frequently bottlenecked by the slow, manual creation of software to support computational experiments. To address this, we present an AI system that creates expert-level scientific software whose goal is to maximize a quality metric. The system uses a Large Language Model (LLM) and Tree Search (TS) to systematically improve the quality metric and intelligently navigate the large space of possible solutions. The system achieves expert-level results when it explores and integrates complex research ideas from external sources. The effectiveness of tree search is demonstrated across a wide range of benchmarks. In bioinformatics, it discovered 40 novel methods for single-cell data analysis that outperformed the top human-developed methods on a public leaderboard. In epidemiology, it generated 14 models that outperformed the CDC ensemble and all other individual models for forecasting COVID-19 hospitalizations. Our method also produced state-of-the-art software for geospatial analysis, neural activity prediction in zebrafish, time series forecasting and numerical solution of integrals. By devising and implementing novel solutions to diverse tasks, the system represents a significant step towards accelerating scientific progress.
Keywords: Tree Search, Generative AI, Scorable Scientific Tasks, Empirical Software
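The abstract describes the method only at a high level; as a loose, self-contained illustration of an LLM-driven tree search that maximizes a scorable quality metric, consider the sketch below. Here propose_revision stands in for the LLM rewrite step and score for the task's quality metric; both are toy stand-ins invented for this example, not the paper's implementation.

import math
import random

def propose_revision(program: str) -> str:
    """Stand-in for an LLM call that rewrites a candidate program.
    Here it just mutates one character of a toy 'program' string."""
    i = random.randrange(len(program))
    return program[:i] + random.choice("0123456789") + program[i + 1:]

def score(program: str) -> float:
    """Stand-in quality metric: fraction of characters equal to '7'."""
    return sum(c == "7" for c in program) / len(program)

class Node:
    def __init__(self, program, parent=None):
        self.program = program
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total = 0.0

def uct(node, c=1.4):
    # Upper-confidence bound balancing exploitation and exploration.
    if node.visits == 0:
        return float("inf")
    return node.total / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def tree_search(seed_program, iterations=200):
    root = Node(seed_program)
    best = (score(seed_program), seed_program)
    for _ in range(iterations):
        # Selection: descend by UCT until a leaf.
        node = root
        while node.children:
            node = max(node.children, key=uct)
        # Expansion: ask the (stubbed) LLM for a revised candidate.
        child = Node(propose_revision(node.program), parent=node)
        node.children.append(child)
        # Evaluation: run the quality metric on the new candidate.
        s = score(child.program)
        best = max(best, (s, child.program))
        # Backpropagation: update statistics up to the root.
        while child:
            child.visits += 1
            child.total += s
            child = child.parent
    return best

print(tree_search("0000000000"))

In the real system the "program" would be actual scientific software and the metric a benchmark score; the structure of the loop (select, expand via LLM, evaluate, backpropagate) is what this sketch tries to convey.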
CURIE: Evaluating LLMs on multitask long context scientific understanding and reasoning
Matthew Abraham
Haining Pan
Zahra Shamsi
Muqthar Mohammad
Chenfei Jiang
Ruth Alcantara
Gowoon Cheon
Xuejian Ma
Michael Statt
Jackson Cui
Nayantara Mudur
Eun-Ah Kim
Paul Raccuglia
Victor V. Albert
Lizzie Dorfman
Brian Rohr
Shutong Li
Maria Tikhanovskaya
Drew Purves
Elise Kleeman
Philippe Faist
Ean Phing VanLee
International Conference on Learning Representations (ICLR) (2025)
The core of the scientific problem-solving process involves synthesizing information while applying expert knowledge. Large Language Models (LLMs) have the potential to accelerate this process due to their extensive knowledge across a variety of domains. Recent advancements have also made it possible for LLMs to handle very long "in-context" content. However, existing evaluations of long-context LLMs have focused on assessing their ability to summarize or retrieve information within the given context, primarily in generalist tasks that do not require deep scientific expertise. To facilitate analogous assessments of domain-specific tasks, we introduce the scientific long-Context Understanding and Reasoning Inference Evaluations (CURIE) benchmark. This benchmark provides a set of eight challenging tasks derived from around 250 scientific research papers; the tasks require domain expertise, comprehension of long in-context information, and multi-step reasoning, and together they test the ability of LLMs to assist scientists in realistic workflows. Tasks in CURIE have been collected from experts in six disciplines - materials science, theoretical condensed matter physics, quantum computing, geospatial analysis, biodiversity, and protein sequencing - covering both experimental and theoretical workflows in science. We evaluate a range of closed and open LLMs on these tasks. Additionally, we propose strategies for task decomposition, which allow for a more nuanced evaluation of the models and facilitate staged multi-step assessments. We hope that insights gained from CURIE can guide the future development of LLMs.
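To make the task-decomposition idea concrete, here is a minimal hypothetical harness showing how a long-context task might be split into staged sub-questions that are scored separately rather than only grading the final output. The task contents, the fake_model stub, and the exact-match scoring are all assumptions for illustration; the actual CURIE tasks use task-specific rubrics defined in the paper.

from typing import Callable

# Hypothetical decomposition of one long-context task into staged steps;
# the real CURIE tasks and answers come from expert-annotated papers.
task = {
    "context": "<full text of a research paper>",
    "steps": [
        {"question": "Which material system is studied?", "answer": "MoS2"},
        {"question": "What lattice constant is reported?", "answer": "3.16 A"},
    ],
}

def fake_model(prompt: str) -> str:
    """Stand-in for an LLM call; always returns the same answer."""
    return "MoS2"

def evaluate_staged(model: Callable[[str], str], task) -> list:
    """Score each decomposed step separately instead of only the final output."""
    scores = []
    for step in task["steps"]:
        prompt = f"{task['context']}\n\nQuestion: {step['question']}"
        pred = model(prompt)
        scores.append(float(pred.strip() == step["answer"]))  # per-step exact match
    return scores

per_step = evaluate_staged(fake_model, task)
print(per_step, "mean:", sum(per_step) / len(per_step))

Per-step scores like these expose where in a multi-step workflow a model fails, which is the "more nuanced evaluation" the abstract refers to.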