🌁#84: Could Program Synthesis Unlock AGI?

Published January 20, 2025

We discuss François Chollet's combinatorial approach to reaching AGI, plus offer you a collection of interesting articles, relevant news, and must-read research papers. Dive in!


🔳 Turing Post is on đŸ€— Hugging Face as a resident -> click to follow!


Now, to the main topic:


Lately, we’ve seen many ideas making promising comebacks to further fuel our already unstoppable race toward superintelligent computers and reasoning robots. Today, let’s talk about program synthesis and why it might be a missing piece in the puzzle toward AGI.

The first time program synthesis captured our attention was in 2019, when François Chollet published his brilliant paper "On the Measure of Intelligence." In it, he introduced the Abstraction and Reasoning Corpus (ARC), a benchmark designed to evaluate human-like general intelligence. There, he emphasized the limitations of deep learning for reasoning and generalization, and argued that program synthesis could serve as a key step toward creating truly intelligent systems. By allowing AI to generate solutions dynamically – writing small programs tailored to specific tasks – program synthesis shifts the focus from static task performance to adaptability and reasoning.

Fast forward to 2025, and ARC-AGI has become one of the primary benchmarks for evaluating models aspiring to AGI. François Chollet is taking his ideas even further by launching Ndea, a lab dedicated to advancing AGI by exploring the fascinating hybrid of deep learning and program synthesis. This combination, he believes, could unlock new efficiencies, enabling AI to reason abstractly, learn from minimal data, and solve a broader range of problems than ever before. Let’s see what program synthesis is, where it comes from, and how it can be combined with deep learning.

History: Of course, we can trace program synthesis way back to our dearest Alan Turing.

  • Early Years: In 1945, Alan Turing envisioned machines capable of generating programs autonomously. But the formal roots emerged in 1957 when Alonzo Church proposed synthesizing circuits from mathematical requirements, an idea now called "Church's Problem."


  • Formal Foundations (1960s - 1980s): The field gained a stronger theoretical footing with contributions like the automata-theoretic approach by BĂŒchi and Landweber (1969) and the work of Manna and Waldinger (c. 1980). This period focused on developing formal methods for program synthesis, often based on logical reasoning and deductive techniques.
  • Pragmatic Evolution (1990s–2010s): Program synthesis evolved to incorporate more practical approaches, including sketching (introduced in 2006 with the SKETCH system by Armando Solar-Lezama), where programmers provide partial programs with holes that are filled in automatically, and programming-by-examples (PBE), popularized in the 2010s by tools like Flash Fill in Excel, developed by Sumit Gulwani, which automates data transformations by learning patterns from user-provided input-output examples (a toy PBE sketch follows this list).
  • Modern Resurgence (2010s-2020s): The 21st century witnessed a renewed interest in program synthesis, particularly within the formal verification community. This led to advancements like Syntax-guided synthesis (SyGuS), which combines logical specifications with grammatical constraints to guide the synthesis process.
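
To make PBE concrete, here is a minimal sketch in Python: given a few input-output string pairs, it enumerates compositions of primitives from a tiny hand-picked DSL until one program reproduces every example. The DSL, the `synthesize` helper, and the brute-force search are illustrative assumptions only; production systems like Flash Fill rely on far richer DSLs and aggressive pruning (e.g., version-space algebras).

```python
# A toy programming-by-examples (PBE) engine: enumerate compositions of
# primitives from a tiny string DSL until one fits all input-output examples.
# The DSL and the brute-force search are illustrative assumptions.
from itertools import product

PRIMITIVES = {
    "upper":      str.upper,
    "lower":      str.lower,
    "strip":      str.strip,
    "first_word": lambda s: s.split()[0] if s.split() else "",
    "reverse":    lambda s: s[::-1],
}

def synthesize(examples, max_depth=2):
    # Try every pipeline of up to max_depth primitives, shortest first.
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(s, names=names):  # default arg freezes this pipeline
                for name in names:
                    s = PRIMITIVES[name](s)
                return s
            if all(program(i) == o for i, o in examples):
                return names, program
    return None, None

# Usage: learn "uppercase, then take the first word" (or an equivalent
# pipeline) from two examples, then apply it to unseen input.
names, prog = synthesize([("hello world", "HELLO"), ("program synthesis", "PROGRAM")])
print(names)                  # a pipeline that fits, e.g. ('upper', 'first_word')
print(prog("deep learning"))  # -> DEEP
```

This captures the core loop of program synthesis: search a space of candidate programs against a specification (here, examples). Everything that follows is about making that search tractable.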

For many years, program synthesis and machine learning followed their own independent trajectories, but now their collaboration is gaining momentum. A few factors have made their integration more feasible and promising:

  • Increased Computational Power: GPUs! Enough (and ever more) computational resources to handle the complexity of both program synthesis and machine learning algorithms allow researchers to explore more sophisticated techniques and tackle larger problems.
  • Availability of Large Datasets: The rise of big data and the proliferation of online code repositories provided the raw material for training machine learning models used in program synthesis. These datasets enabled the development of data-driven approaches to guide the search process, learn from examples, and generalize to new situations.
  • Cross-fertilization of Ideas: Software developers transitioning into the ML world brought their specialized knowledge and passion with them, applying both across new domains.

As an example, in 2023 MIT launched a course, "Introduction to Program Synthesis," describing it as "a new field at the intersection of programming languages, formal methods and AI".

Chollet’s Vision: The Case for Program Synthesis

François Chollet has long argued that program synthesis is a crucial step toward artificial general intelligence (AGI). He critiques the limitations of deep learning – its dependence on massive datasets, its brittleness, and its struggles with reasoning and generalization. Unlike deep learning, which excels at recognizing patterns but often fails to adapt to novel problems, program synthesis allows AI to generate solutions by reasoning abstractly, offering a more adaptable and scalable approach.

In his landmark work "On the Measure of Intelligence," Chollet emphasized separating the process of intelligence (the system that generates solutions) from the output (the specific solutions themselves). He argued that program synthesis – a method where AI creates small, task-specific programs – is an ideal way to evaluate intelligence. This approach shifts focus from static task performance to the ability to adapt dynamically to unseen challenges.

Deep Learning Meets Program Synthesis

Chollet envisions program synthesis as a complementary approach to deep learning, rather than a replacement. While deep learning models can guide program synthesis by narrowing the search space and handling large-scale pattern recognition, program synthesis brings reasoning and abstraction to the table. This hybrid approach could unlock efficiencies and tackle problems that are currently beyond AI’s reach.
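
As a rough illustration of what "deep learning guiding the search" can look like, here is a toy best-first synthesizer over a three-primitive arithmetic DSL. A simple error-based heuristic stands in for the neural scorer; the DSL, the scorer, and the task format are assumptions for illustration, not Chollet's or Ndea's actual method.

```python
# A toy sketch of neural-guided program synthesis: best-first enumeration over
# a tiny DSL, with a scoring function deciding which candidates to expand
# first. The heuristic scorer below stands in for a trained neural network.
import heapq

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    # Apply each primitive in sequence.
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def score(program, examples):
    # Stand-in for a learned guidance model: negative mean absolute error.
    # A real hybrid would train a network to predict promising expansions.
    return -sum(abs(run(program, i) - o) for i, o in examples) / len(examples)

def guided_search(examples, max_len=6, max_steps=10_000):
    # Frontier ordered by the scorer: most promising candidates expand first,
    # narrowing the combinatorial space plain enumeration would wade through.
    frontier = [(0.0, ())]
    seen = set()
    for _ in range(max_steps):
        if not frontier:
            return None
        _, program = heapq.heappop(frontier)
        if program and all(run(program, i) == o for i, o in examples):
            return program
        if len(program) == max_len:
            continue
        for name in PRIMITIVES:
            candidate = program + (name,)
            if candidate not in seen:
                seen.add(candidate)
                heapq.heappush(frontier, (-score(candidate, examples), candidate))
    return None

# Usage: synthesize a program mapping 3 -> 64 and 1 -> 16.
print(guided_search([(3, 64), (1, 16)]))  # e.g. ('inc', 'double', 'square')
```

The point of the sketch: plain enumeration treats all candidates equally, while the scorer reorders the frontier so promising programs surface sooner – exactly the role a trained network would play in a deep-learning-guided synthesizer.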

In pursuit of this vision, François Chollet and Mike Knoop founded Ndea, an AI research lab focused on advancing AGI through program synthesis. Rooted in Chollet's belief that abstraction is key to intelligence, Ndea aims to develop adaptable AI systems that overcome deep learning's limitations by leveraging symbolic manipulation and code generation for flexible reasoning and generalization.

We’ll be following Ndea closely, as AGI isn’t a challenge that can be solved from a single angle. It requires integration and collaboration across various scientific fields. It’s exciting to see new aspects of AGI being tackled with fresh approaches.


Curated Collections

10 Recent Advancements in Math Reasoning



Do you like Turing Post? –> Click 'Follow'! And subscribe to receive it straight into your inbox -> https://www.turingpost.com/subscribe


News from The Usual Suspects ©

  • AI’s New Best Friend: Journalism – Mistral AI partners with AFP, securing access to 40+ years of archives to power its chatbot, Le Chat. OpenAI, not to be outdone, teams up with Axios, supporting its local newsroom expansion in four U.S. cities. Google’s Gemini app also tries to keep up, leveraging AP’s real-time feeds for fresher, trusted content.

From archives to real-time updates, AI’s marriage with journalism is rewriting the rules of news delivery. I feel good about it.

  • Contextual AI: RAG Comes Home – The creators of Retrieval-Augmented Generation (RAG) are back with Contextual AI’s new platform, designed to tackle the most complex, knowledge-intensive tasks. With its unified RAG 2.0 architecture, it outperforms fragmented systems by delivering higher accuracy, fewer errors, and real-world reliability. It’s always fascinating to see the originators of a concept bring it to a truly production-ready stage.
  • Hugging Face Launches Free Course on AI Agents – Hugging Face has introduced a free, certified course designed to demystify AI agents. Participants will learn how to build intelligent agents using frameworks like LangChain and LlamaIndex, explore real-world applications, and earn a certification by completing hands-on tasks. Whether you're a developer or simply curious, the course offers a solid foundation in this fast-evolving field.
  • Microsoft’s AI Machine Rolls On
      • MatterGen: AI in the Lab Coat – Microsoft Research unveils MatterGen, a generative AI tool that designs new materials from scratch. By bypassing traditional screening processes, it’s already creating stable compounds with properties like magnetism and durability. Batteries, solar cells, CO₂ capture – MatterGen could be the key to breakthroughs in sustainable tech. A while ago, we published an interview with one of MatterGen’s coauthors → Read it.
      • AutoGen 0.4: Building Smarter Agents – The latest release of AutoGen refines Microsoft’s agentic framework, advancing the tools for developing proactive, task-driven AI systems.
      • New AI Engineering Division – Microsoft recruits ex-Meta heavyweight Jay Parikh to lead its new AI engineering division. Tasked with scaling supercomputers and platforms, Parikh embodies Nadella’s mission: “Thirty years of change in three.” That’s a very interesting development!
  • Google’s Titans Roar – Google Research unveils Titans, an AI model with dynamic "long-term memory" at test time, claiming linear scaling for long inputs. This could shatter Transformers' quadratic constraints and push AI towards human-like cognition. While skeptics flag computational costs and memory bottlenecks, enthusiasts await benchmarks. Titans may just redefine what "attention" really means in 2025.
  • OpenAI: Serving New President – Days before inauguration, OpenAI unveils its manifesto-like Economic Blueprint, urging investment in chips, energy, and talent to drive AI-powered growth while safeguarding democracy – dancing along the new president’s lines to stay relevant. Meanwhile, ChatGPT gets smarter with “tasks” in beta. Now a proactive assistant, it handles everything from reminders to automating recurring actions – making your to-do list one less thing to think about. Task: “Hey chat, wake me up when everyone is done with hypocrisy.”

We are reading

The freshest research papers, categorized for your convenience

There were quite a few TOP research papers this week; we mark them with 🌟 in each section.

Attention and Transformer Innovations

Reasoning, Thinking and Knowledge Expansion

Scaling Foundation Models

Best Practices for Datasets

Benchmarks and Evaluation

Enhancing Training and Interpretability

Unspecified

That’s all for today. Thank you for reading!


Please share this article with your colleagues if it can help them enhance their understanding of AI and stay ahead of the curve.

