🦸🏻#8: Rewriting the Rules of Knowledge: How Modern Agents Learn to Adapt
Exploring the shift from static rules to dynamic reasoning in modern AI agents
In one of our previous episodes, we covered the critical role of profiling in agentic workflows, exploring how agents build awareness of their identity, behavior, environment, performance, and resources. Profiling emerged as the connective tissue linking knowledge, memory, and action, transforming agents from static systems into dynamic collaborators capable of nuanced decision-making – in short, making them a digital personality.
In this episode, we’ll shift our focus to knowledge – the foundation of this digital personality’s expertise. How does an agent "know" what it knows? What are the mechanisms behind its expertise, and how do they influence its behavior? Let’s see. Prepare yourself for a fascinating deep dive into history!
What's in Today’s Episode?
- Are Agents Still Knowledge-Based?
- From Explicit Knowledge to Learned Representations
- John McCarthy’s “Programs with Common Sense”
- What Does Knowledge Look Like Today?
- Balancing Knowledge-Based and Learned Systems
- The Mechanics of Knowledge
- The Historical Foundation: A Tale of Two Frameworks
- Concluding Thoughts
- Resources (We put all the links in this section)
We apologize for the anthropomorphizing terms scattered throughout this article – let’s agree they all belong in quotation marks.
Are Agents Still Knowledge-Based?
The concept of “knowledge-based agents,” as defined in Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig, marked a turning point in AI. Their vision was clear and logical: agents perceive their environment, make decisions, and act on those decisions in a neat, procedural loop. It’s a beautifully organized system – but one built for a world where things don’t change much.
Today’s world doesn’t play by those rules. Agents aren’t confined to fixed sequences or predictable settings anymore. Instead, they’ve shifted from following procedural knowledge to a more declarative approach: defining outcomes, not steps. Imagine you’re telling an agent, “I need a cake,” and it figures out the rest – whether it’s grabbing ingredients, finding the recipe, or even ordering from a bakery.
This leap is why modern agents thrive in messy, unpredictable environments. They’re no longer following static rules but adapting to the moment, learning on the fly, and collaborating dynamically.
From Explicit Knowledge to Learned Representations
A key distinction lies in how modern agents manage knowledge. Traditional frameworks rely on explicitly programmed rules, while modern agents – particularly those powered by large language models (LLMs) – use learned representations. Modern agents are more like polyglots who’ve grown up surrounded by language. They don’t just know facts – they understand patterns.
Instead of relying on rule-based decision trees, a contemporary agent learns patterns and forecasts emergent behaviors, enabling the system to determine dynamically how to achieve its goals. This divergence allows for task decomposition, iterative refinement, and multi-agent integration – all hallmarks of today’s agentic systems.
Sometimes, your role as an agentic builder is to teach your agent that what it perceives as correct might actually be wrong. For example, in a television schedule, the "first week of November" might actually begin in late October. Your job is to point this out and let the agent figure out how to adjust its understanding accordingly.
This shift – from explicit rules to learned representations – is what allows today’s agents to improvise, adapt, and excel in tasks they weren’t explicitly programmed for.
John McCarthy’s “Programs with Common Sense”
Interestingly, much of what defines modern agents was anticipated in John McCarthy’s seminal 1958 paper, Programs with Common Sense – long before LLMs and the current hype. McCarthy envisioned an "advice taker" system capable of reasoning, learning, and acting based on declarative knowledge – essentially, the early blueprint for today’s agentic workflows.
- Reasoning with Declarative Knowledge: McCarthy’s system encoded facts about the world, enabling it to deduce new insights autonomously.
- Learning and Adaptation: He emphasized the importance of agents acquiring new abstractions and concepts over time, evolving alongside their environments.
- Actionable Knowledge: Knowledge was not merely stored; it drove decisions and real-world actions, creating a feedback loop between reasoning and behavior.
Image credit: McCarthy’s original paper
The paper's example of "going to the airport" demonstrates how an agent breaks down a high-level goal into smaller tasks using logical rules and available knowledge. This process is exactly what current agentic systems aim to achieve: agents decomposing complex problems into manageable subtasks.
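To make the decomposition idea concrete, here is a minimal sketch of breaking a high-level goal into primitive actions. The rule base and goal names are hypothetical placeholders, not McCarthy’s actual formalism – his advice taker used formal logic, while this is a toy recursive expansion in the same spirit.

```python
# Each rule says: to achieve a goal, achieve these subgoals in order.
# The rules and goal names here are invented for illustration.
RULES = {
    "at(airport)": ["at(car)", "drive(home, airport)"],
    "at(car)": ["walk(desk, garage)"],
}

def decompose(goal: str) -> list[str]:
    """Recursively expand a goal into primitive actions (those with no rule)."""
    if goal not in RULES:
        return [goal]  # primitive action: execute directly
    plan = []
    for subgoal in RULES[goal]:
        plan.extend(decompose(subgoal))
    return plan

print(decompose("at(airport)"))  # ['walk(desk, garage)', 'drive(home, airport)']
```

The same pattern – a declarative "what" expanded into a procedural "how" at runtime – is what modern planners and agent frameworks do at much larger scale.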
While technology has advanced far beyond the systems McCarthy could have imagined, his principles remain strikingly relevant. Rereading McCarthy feels like reading a time capsule from the future. It’s a reminder that AI’s roots run deeper than we often realize.
What Does Knowledge Look Like Today?
Modern agents don’t just “know” – they work with knowledge in ways that mimic human intelligence. The goal isn’t to stuff facts into a system but to make that knowledge dynamic, adaptable, and actionable. Let’s explore the important types of knowledge in modern agentic workflows.
Structural Knowledge: Building Connections
This is the scaffolding that holds everything together – how concepts are related and interact. Early systems used rigid semantic networks, while modern agents use neural architectures to learn relationships on the fly. A medical diagnosis agent, for example, doesn’t just know “fever = flu.” It learns probabilistic relationships from clinical data, enabling it to consider rare or complex conditions.
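As a toy illustration of learning probabilistic relationships rather than hard rules, the sketch below estimates conditional probabilities from observed records. The clinical data is entirely hypothetical and far simpler than anything a real diagnostic system would use.

```python
from collections import Counter

# Toy clinical records (hypothetical): each is (symptom set, diagnosis).
records = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"}, "measles"),
    ({"fever", "cough"}, "flu"),
    ({"fever"}, "flu"),
    ({"fever", "rash"}, "measles"),
]

def p_diagnosis_given(symptom: str) -> dict[str, float]:
    """Estimate P(diagnosis | symptom) from observed frequencies."""
    counts = Counter(dx for symptoms, dx in records if symptom in symptoms)
    total = sum(counts.values())
    return {dx: n / total for dx, n in counts.items()}

print(p_diagnosis_given("fever"))  # {'flu': 0.6, 'measles': 0.4}
```

The point is the shape of the knowledge: "fever = flu" becomes a distribution learned from data, which naturally leaves room for rarer conditions.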
Meta-Knowledge: Knowing What You Know
Here’s where things get meta. Modern agents are not just repositories of information – they’re aware of their own knowledge. This awareness allows them to evaluate reasoning processes, identify gaps, and seek missing information. A language model, for instance, might request clarification when it encounters ambiguous inputs, demonstrating a self-awareness of its limitations.
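A minimal sketch of this self-awareness is a confidence gate: the agent answers only when it trusts its own knowledge, and asks for clarification otherwise. The knowledge base, confidence scores, and threshold below are all hypothetical.

```python
# Hypothetical knowledge base: question -> (answer, confidence in that answer).
KNOWLEDGE = {
    "capital of france": ("Paris", 0.99),
    "best programming language": ("it depends", 0.30),
}

def answer(question: str, threshold: float = 0.8) -> str:
    """Answer only when confident; otherwise surface the knowledge gap."""
    fact, confidence = KNOWLEDGE.get(question.lower(), (None, 0.0))
    if confidence < threshold:
        return f"I'm not confident enough to answer '{question}' - could you clarify?"
    return fact

print(answer("capital of France"))          # Paris
print(answer("best programming language"))  # asks for clarification
```

Real systems estimate confidence very differently (e.g. from token probabilities or ensembles), but the control flow – check what you know before acting on it – is the essence of meta-knowledge.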
Heuristic Knowledge: Learning the Rules of the Game
Heuristics – those handy problem-solving shortcuts – used to be handcrafted by humans as static rules of thumb. Today, heuristic knowledge is far more dynamic: systems like AlphaZero develop their own strategies through self-play, bypassing centuries of human-designed heuristics in games like chess. This evolution reflects how modern agents learn to generalize and adapt to new challenges.
The Convergence of Knowledge Forms in Modern Agents
The true power of modern AI systems lies not just in their individual knowledge types, but in how these forms of knowledge interact and reinforce each other. Consider how a modern language model handles a complex task like writing code:
Structural knowledge provides the foundation, representing relationships between programming concepts, syntax patterns, and common architectural designs. Meta-knowledge allows the system to evaluate its understanding of different programming paradigms and libraries, potentially requesting clarification when needed. Heuristic knowledge guides efficient problem decomposition and solution strategies. All of this operates within a declarative framework where the system focuses on the desired outcome rather than predetermined steps.
This integration enables sophisticated behaviors that were impossible in traditional systems:
- Dynamic task decomposition based on context and available resources
- Adaptive problem-solving strategies that combine multiple forms of knowledge
- Seamless collaboration between different specialized components
- Real-time adjustment of approaches based on feedback and intermediate results
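The interaction of the three knowledge forms can be sketched as three cooperating stores consulted by one planner. Everything here – the task breakdown, the confidence scores, the selection rule – is a hypothetical toy, not a real code-generation pipeline.

```python
# Structural knowledge: how a task decomposes into related steps (hypothetical).
structural = {"sort_list": ["choose algorithm", "implement", "test"]}

# Meta-knowledge: the agent's confidence in its own grasp of each option.
meta = {"quicksort": 0.9, "timsort": 0.4}

def heuristic(options: dict[str, float]) -> str:
    """Heuristic shortcut: prefer the option the agent knows best."""
    return max(options, key=options.get)

def plan(task: str) -> list[str]:
    """Declarative planning: start from the outcome, fill in the steps."""
    steps = structural.get(task, [task])
    algorithm = heuristic(meta)  # meta-knowledge resolves the open choice
    return [s.replace("choose algorithm", f"use {algorithm}") for s in steps]

print(plan("sort_list"))  # ['use quicksort', 'implement', 'test']
```

No single store could produce this plan alone – which is the "convergence" point: structure supplies the steps, meta-knowledge supplies self-assessment, and the heuristic turns both into a decision.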
Balancing Knowledge-Based and Learned Systems
While modern AI has moved toward learned representations, pure knowledge-based approaches remain vital in specific domains. However, the distinction is becoming increasingly fluid. Many contemporary systems employ hybrid approaches that combine the best of both worlds:
- Medical diagnosis systems maintain explicit rule bases for critical decisions while using learned patterns to identify subtle symptom relationships
- Industrial control systems blend traditional safety constraints with learned optimization strategies
- Financial compliance systems use explicit rules for regulatory requirements while leveraging pattern recognition for fraud detection
- Legal reasoning systems combine structured argumentation with learned language understanding
This hybrid approach represents a mature understanding that different types of knowledge serve different purposes, and the key is knowing when to use each.
The Mechanics of Knowledge
We have explored the shift from procedural to declarative knowledge and surveyed the other types of knowledge relevant to current agentic systems. But if knowledge is the fuel, these are the engines that drive it: representation, acquisition, and integration.
- Representation – how knowledge is structured. Early systems relied on static tools like semantic networks, while modern agents use dynamic frameworks like knowledge graphs and neural embeddings. Google’s Knowledge Graph, for instance, connects entities and relationships to contextualize search queries, while neural models encode complex patterns for nuanced reasoning.
- Acquisition – how knowledge is learned. Techniques like reinforcement learning allow agents to refine strategies through trial and error, while few-shot learning enables them to adapt with minimal examples. Self-supervised learning trains models by predicting missing data, while interactive learning refines knowledge through real-time feedback, making agents highly adaptable in dynamic environments.
- Integration – how it all comes together. It’s the process of synthesizing diverse knowledge sources – structured, unstructured, and multimodal – into coherent insights that drive decision-making. For example, a climate analysis agent combines satellite imagery, historical weather patterns, and socioeconomic data to predict disaster risks. This synthesis allows agents to navigate complex problem spaces, from autonomous vehicles merging sensor inputs to medical AI diagnosing conditions from multimodal data.
Together, these mechanisms create a loop: representation structures knowledge, acquisition expands it, and integration applies it. This synergy enables agents to dynamically understand, learn, and act, solving complex real-world challenges with intelligence and adaptability.
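The representation–acquisition–integration loop above can be sketched with a tiny knowledge graph. The graph structure and the climate-flavored facts are hypothetical stand-ins for the far richer representations real systems use.

```python
# Representation: a toy knowledge graph mapping each entity to its neighbors.
graph: dict[str, set[str]] = {}

def acquire(a: str, b: str) -> None:
    """Acquisition: learn a new relationship and store it in the representation."""
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def integrate(entity: str) -> set[str]:
    """Integration: combine direct and one-hop relationships into one view."""
    direct = graph.get(entity, set())
    one_hop = set().union(*(graph.get(n, set()) for n in direct)) if direct else set()
    return (direct | one_hop) - {entity}

# The loop in action: acquisition expands the representation...
acquire("storm", "flooding")
acquire("flooding", "evacuation")

# ...and integration applies it, surfacing an indirect connection.
print(integrate("storm"))  # {'flooding', 'evacuation'}
```

Note that "storm → evacuation" was never stated directly; it emerges only when integration traverses what acquisition stored – a miniature version of the synergy the loop describes.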
The Historical Foundation: A Tale of Two Frameworks
We would not be Turing Post if we didn’t slow down and carefully look back, well… once again :) – to where the roots of today’s innovations lie. Many recent articles and papers about agents and agentic workflows merely scratch the surface, often labeling the field as nascent. That’s far from accurate: not only did John McCarthy dedicate his thinking to it – agentic systems have been a subject of active research and development for decades. These early works laid the groundwork for what we see today. As we move deeper into the age of agentic AI, it’s crucial to revisit these foundations and understand the roots that brought us here. While it would be too much to cover all of them, we have chosen a couple that are largely forgotten yet have significantly influenced the field.
In the 1980s, two pivotal frameworks emerged, each addressing distinct but overlapping aspects of agentic behavior: Fagin, Halpern, and Vardi’s knowledge structures and Moore’s theory of knowledge and action. These frameworks represent two sides of the same coin. While Fagin and his colleagues built tools to model and analyze the layered, recursive nature of knowledge, Moore added the critical dimension of action – demonstrating how knowledge evolves as agents interact with the world.
Modeling Knowledge: The Static Framework
Fagin, Halpern, and Vardi’s work introduced the idea of knowledge depth, an elegant way to represent the nested layers of reasoning essential for understanding distributed systems. Imagine you’re a processor in a network trying to reach consensus with others. It’s not enough to know your own state – you must also reason about what others know, what they know about your knowledge, and so on. These infinite regressions are no mere thought experiments; they are critical for solving problems like the Byzantine Agreement, where agents need to coordinate despite faulty communication or even malicious actors.
To address this, Fagin, Halpern, and Vardi introduced knowledge structures, which build knowledge inductively. By constructing knowledge layer by layer, from "raw reality" to increasingly recursive states, their framework offered a systematic way to make sense of multi-agent reasoning. This was an intellectual leap that bridged theoretical computer science with practical applications like distributed computing, cryptography, and database theory.
But there’s a limitation to this static framework: it models what agents know at a given point but says little about how they acquire or apply that knowledge. This is where Moore’s contributions come into play.
The Dynamic View: Knowledge Meets Action
Moore’s formal theory of knowledge and action addressed this gap by linking what agents know to what they can do. In his framework, knowledge is not just a static repository but a dynamic process, continually shaped by actions and outcomes.
For instance, consider a robotic agent tasked with assembling a product. Before it acts, the robot must verify that all the necessary components are present (knowledge preconditions). As it works, it gains new knowledge: whether the components fit as expected, whether the tools function correctly, and so on. Each action generates new information that feeds into the robot’s decision-making loop, creating a continuous feedback cycle between knowledge and action.
This interplay between knowledge and action forms a continuous feedback loop.
This dynamic approach is critical for understanding modern agentic workflows, where agents not only reason about their environment but actively shape it through their actions. In Moore’s logic, actions are operators that transform an agent’s knowledge state. This opens the door to reasoning about multi-step workflows, where an agent’s choices at one stage depend on the outcomes of earlier actions.
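Moore's notion of knowledge preconditions can be sketched as an action gate: an action runs only when the agent knows its preconditions hold, and executing it feeds new facts back into the agent's knowledge state. The assembly facts and action names are hypothetical.

```python
# What the robot currently knows to be true (hypothetical starting state).
knowledge = {"parts_present"}

def act(action: str, preconditions: set[str], effects: set[str]) -> bool:
    """Run the action only if its knowledge preconditions are satisfied;
    on success, its effects become new knowledge (the feedback loop)."""
    if not preconditions <= knowledge:
        return False            # missing knowledge: gather information first
    knowledge.update(effects)   # outcomes feed back into the knowledge state
    return True

assert act("attach_panel", {"parts_present"}, {"panel_attached"})
assert not act("paint", {"surface_clean"}, {"painted"})  # precondition unknown
print(knowledge)  # {'parts_present', 'panel_attached'}
```

In Moore's terms, each successful action is an operator on the knowledge state – which is why multi-step workflows can be reasoned about as chains of such transformations.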
The Unified Perspective
Taken together, these two frameworks – one static, one dynamic – offer a comprehensive view of agentic systems. Fagin et al. provide the scaffolding for reasoning about knowledge states, while Moore equips us to understand how these states evolve in response to action. This interplay is at the heart of modern AI systems, from collaborative robots to autonomous vehicles.
Consider a self-driving car approaching an intersection with another vehicle. Using Fagin's framework, we can formally represent complex knowledge structures like "The autonomous vehicle knows that the other driver knows there's a stop sign" or "It's common knowledge between both vehicles that right-of-way rules apply." Moore's dynamic logic then allows us to reason about how specific actions (like the car signaling its intentions) create new knowledge states: "After signaling, it becomes common knowledge that the autonomous vehicle intends to turn." And all of that from the 1980s.
Concluding Thoughts
There is so much to say about knowledge – the sheer depth of its history and the many ways to think about it are staggering. We haven’t tried to capture every detail or angle here. Instead, our goal was to honor the decades of rigorous research and theoretical work that have brought us to where we stand today and to highlight the paradigm shift from traditional, rule-based agents to adaptive, learning ones.
What’s fascinating is how much of what we’re building now echoes ideas suggested long ago. The shift from procedural systems to declarative, dynamic frameworks hasn’t just advanced agentic workflows; it’s transformed how agents reason, learn, and act with flexibility that once seemed out of reach.
As we look ahead, it’s clear that the real magic lies in the interplay between knowledge, memory, reasoning & planning, reflections, and action. In the next episode, we’ll explore Memory – the mechanism that lets agents connect the dots across time, applying what they know in meaningful, contextual ways. The journey into the heart of agentic intelligence is just getting more interesting.
Resources used to write this article
- Programs with Common Sense (1958) by John McCarthy
- A Formal Theory of Knowledge and Action (1984) by Robert C. Moore
- A Model-Theoretic Analysis of Knowledge (1984) by Ronald Fagin, Joseph Y. Halpern, Moshe Y. Vardi
- Common Knowledge Revisited (1998) by Ronald Fagin, Joseph Y. Halpern, Moshe Y. Vardi, Yoram Moses
- Artificial Intelligence: A Modern Approach, Third Edition (PDF) by Stuart Russell and Peter Norvig (we also suggest buying the latest 4th edition)
Sources from Turing Post
We also thank Will Schenk, who provided valuable feedback and helped put this theory into practical perspective.