The Evolution of Knowledge-Based Systems
Early AI research focused heavily on knowledge bases and agents that could interact with them, leading to expert systems. These systems relied on central knowledge bases to make decisions through “if-then” reasoning patterns. While some consider expert systems the first major AI breakthrough, others debate whether they truly belong in the AI category.
Knowledge-based agents remain relevant in modern AI. If you’ve worked in natural language processing, you’ve likely encountered knowledge bases like WordNet. Wikipedia represents another massive knowledge base, encoding semantic relationships between countless entities.
This abundance of knowledge bases raises important questions: How do we build agents that effectively interface with these repositories? How can they update knowledge bases and make decisions based on stored information?
This article explores these questions through a practical introduction to knowledge-based agents, knowledge representation, and logic.
Anatomy of a Knowledge-Based Agent
An agent is any entity that acts within an environment. In AI, we build rational agents—entities that act sensibly within their environment. In well-understood environments, rational agents choose actions that yield desired outcomes. In uncertain environments, they act to maximize expected positive outcomes.
But what about agents that go beyond simple reactions? Consider agents that maintain internal knowledge, reason over that knowledge, and update their understanding through observations and actions. This is the foundation of knowledge-based agents.
Knowledge Bases: The Foundation
The core component of any knowledge-based agent is its knowledge base (KB)—the repository of what the agent knows about the world.
Every KB consists of sentences—not natural language sentences, but statements written in a knowledge representation language. These specialized languages express assertions about the world in formats that enable systematic reasoning.
Unlike natural language, knowledge representation languages have a precise syntax and unambiguous semantics, which is what makes systematic, automated reasoning over the KB possible. This is why agent knowledge bases use these specialized languages rather than natural language.
Types of Knowledge in a KB
Axioms are foundational sentences assumed to be true without derivation from other KB content. If you’re familiar with mathematics, this concept will feel natural.
Inferred sentences are derived from existing KB content through logical reasoning. These aren’t fabricated—they’re systematically derived using reasoning rules that may be built into the knowledge representation language itself.
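To make the distinction concrete, here is a minimal sketch, assuming a toy string-based representation (the symbols and the "->" rule syntax are purely illustrative):

# Axioms: sentences the KB accepts as true without deriving them.
axioms = ["rainy", "rainy -> wet_ground"]

# Inferred sentences: produced from existing KB content by a reasoning rule.
# Applying modus ponens to the axioms above, from "rainy" and
# "rainy -> wet_ground" the KB can conclude "wet_ground".
inferred = ["wet_ground"]
assert "wet_ground" not in axioms   # it is derived, not assumed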
Core Agent Operations
Knowledge-based agents interact with their KBs through two fundamental operations: Ask and Tell.
Ask: Querying Knowledge
Ask is how agents extract information from their KB. When an agent asks a “question” or queries its KB, it must format the request in the KB’s expected format (typically in the knowledge representation language).
The KB responds with sentences that are either:
- Directly stored in the KB, or
- Inferred from existing KB information
This guarantees that responses never contradict the KB’s knowledge.
Tell: Adding Knowledge
Tell updates the KB with new information. Agents use this when they:
- Observe environmental changes
- Update the KB with planned or completed actions
- Add newly learned facts
Like Ask operations, new information must be properly formatted. The KB stores the new sentence and may trigger reasoning and inference processes to derive additional conclusions.
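To make Ask and Tell concrete, here is a minimal sketch of a KB, assuming the same toy string-based representation as above; the class name, method names, and "->" rule syntax are illustrative assumptions for this sketch, not a standard API:

class KnowledgeBase:
    """Toy KB: sentences are strings; 'A -> B' encodes a simple if-then rule."""
    def __init__(self, axioms=()):
        self.sentences = set(axioms)

    def tell(self, sentence):
        """Store a properly formatted sentence in the KB."""
        self.sentences.add(sentence)

    def ask(self, query):
        """True if the query is stored directly or inferable by forward chaining."""
        known = set(self.sentences)
        changed = True
        while changed:
            changed = False
            for s in list(known):
                if "->" in s:
                    premise, conclusion = [p.strip() for p in s.split("->")]
                    if premise in known and conclusion not in known:
                        known.add(conclusion)   # an inferred sentence
                        changed = True
        return query in known

kb = KnowledgeBase({"rainy", "rainy -> wet_ground"})
kb.tell("wet_ground -> slippery")
print(kb.ask("slippery"))   # True: inferred from stored sentences

In this sketch, Tell simply stores the new sentence, while Ask answers with either stored sentences or ones derived from them, which is why its answers never contradict what the KB already holds.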
A Generic Knowledge-Based Agent Architecture
Here’s a high-level view of how a knowledge-based agent operates:
# KB, Tell, Ask, and the make_* helpers are assumed to be defined elsewhere.
t = 0   # persistent time step, carried across calls

def kb_agent(percept):
    global t
    Tell(KB, make_percept_sentence(percept, t))   # record what was just observed
    action = Ask(KB, make_action_query(t))        # ask the KB which action to take now
    Tell(KB, make_action_sentence(action, t))     # record the action the agent will perform
    t = t + 1
    return action
This function illustrates the agent’s reasoning cycle:
- Perceive: Convert environmental observations into KB-compatible sentences via make_percept_sentence()
- Reason: Query the KB for the appropriate action using make_action_query()
- Act: Record the chosen action in the KB through make_action_sentence()
- Update: Increment the time step and return the action
The helper functions handle the crucial task of translating between the external world and the knowledge representation language. This architecture focuses purely on the reasoning process—the “brains” of the operation—while abstracting away perception and action execution details.
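As a rough illustration of these helpers, here is one way they might encode percepts, queries, and actions as sentences for the toy string-based KB sketched above; the exact sentence formats are assumptions for this sketch, not a fixed convention:

def make_percept_sentence(percept, t):
    """Encode an observation as a sentence, stamped with the time step."""
    return f"percept({percept}, {t})"

def make_action_query(t):
    """Build the query asking which action to take at time t."""
    return f"best_action({t})"

def make_action_sentence(action, t):
    """Record that the agent chose this action at time t."""
    return f"did({action}, {t})"

print(make_percept_sentence("breeze", 3))   # 'percept(breeze, 3)'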
Design Perspectives for Knowledge-Based Agents
Knowledge-based agents can be analyzed and designed from three distinct levels, each addressing different aspects of the system.
Knowledge Level: The Strategic View
The knowledge level represents the highest level of analysis, focusing on:
- Goals: What objectives does the agent pursue?
- Knowledge scope: How much does the agent know about its world initially?
This level helps us understand the agent’s capabilities and limitations from a strategic perspective, independent of implementation details.
Logical Level: The Representation View
The logical level examines how knowledge is represented and reasoned about:
- Knowledge representation language: Which language best suits our domain?
- Logical framework: Are we using propositional logic, first-order logic, or something else?
Each choice carries trade-offs. Some languages excel at expressing certain types of knowledge but struggle with others. The logical framework determines what kinds of reasoning the agent can perform.
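A rough illustration of that trade-off, using the toy syntax from earlier (illustrative only, not formal notation for either logic):

# Propositional logic: every fact is an opaque symbol, so the same rule
# has to be restated for each individual it applies to.
propositional = [
    "rainy_london", "rainy_london -> wet_ground_london",
    "rainy_paris",  "rainy_paris -> wet_ground_paris",
]

# First-order logic: objects, relations, and quantifiers let a single
# sentence cover every city at once, at the cost of more complex inference.
first_order = ["forall c: Rainy(c) -> WetGround(c)"]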
Implementation Level: The Technical View
The implementation level addresses the concrete technical decisions:
- Data structures: How is knowledge stored? (structs, databases, objects, vectors?)
- Algorithms: Which inference procedures are used?
- Performance: How do design choices affect speed and memory usage?
These decisions significantly impact the agent’s practical performance and scalability.
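As one small example of how these choices interact, the sketch below stores the toy if-then rules in a dictionary indexed by premise, so forward chaining follows the index instead of rescanning every sentence; the representation and names are the same illustrative ones used earlier:

from collections import defaultdict

rules_by_premise = defaultdict(list)   # premise -> list of conclusions
facts = set()                          # plain sentences with no "->"

def tell(sentence):
    if "->" in sentence:
        premise, conclusion = [p.strip() for p in sentence.split("->")]
        rules_by_premise[premise].append(conclusion)
    else:
        facts.add(sentence)

def ask(query):
    frontier = list(facts)
    known = set(facts)
    while frontier:                    # forward chaining over the index
        fact = frontier.pop()
        for conclusion in rules_by_premise[fact]:
            if conclusion not in known:
                known.add(conclusion)
                frontier.append(conclusion)
    return query in known

tell("rainy"); tell("rainy -> wet_ground"); tell("wet_ground -> slippery")
print(ask("slippery"))   # True

Swapping the indexed dictionary for a flat list of sentences would not change what the agent can conclude, only how much work each query costs, which is exactly the kind of decision that lives at the implementation level.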
Learning and Knowledge Acquisition
Declarative vs. Procedural Approaches
Knowledge-based agents can be constructed through two primary approaches:
Declarative approach: Initialize the agent with an empty KB, then systematically Tell it all the knowledge it needs. This explicit knowledge encoding offers transparency and modularity.
Procedural approach: Write programs that encode knowledge directly into the agent’s behavior. This approach can be more efficient but less transparent.
Real-world systems typically benefit from combining both approaches, leveraging the strengths of each method.
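A small sketch of the contrast, using the toy representation from earlier (names are illustrative):

# Declarative: the knowledge is data. The agent is Told sentences, which can
# be inspected, added to, or replaced without touching the agent's code.
kb = {"rainy", "rainy -> take_umbrella"}

# Procedural: the same knowledge is baked into control flow. This can be
# more efficient to execute, but it is opaque and harder to update piecemeal.
def decide(percept):
    return "take_umbrella" if percept == "rainy" else "no_op"

print(decide("rainy"))   # 'take_umbrella'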
Incorporating Learning
Learning enhances knowledge-based agents in several ways:
Perceptual learning: Agents can learn to combine observations in novel ways, creating new sentences that improve goal achievement. These learned patterns become part of the KB for future reasoning.
Inference optimization: Learning algorithms can identify efficient reasoning paths within existing KBs, speeding up the inference process.
Knowledge base expansion: Research continues into generating new connections in existing KBs like WordNet. As semantically linked data proliferates online, efficient methods for KB expansion become increasingly valuable.
The integration of learning with knowledge-based reasoning represents an active area of AI research, with promising applications in knowledge graph completion, automated reasoning, and intelligent system adaptation.
Looking Forward
This introduction covered the fundamental concepts needed to understand knowledge-based agents:
- Knowledge bases as collections of sentences in knowledge representation languages
- Core operations (Ask and Tell) that enable agent-KB interaction
- Agent architecture for perception, reasoning, and action
- Design perspectives across knowledge, logical, and implementation levels
- Learning integration through declarative and procedural approaches
These foundations prepare us for deeper exploration of logic, knowledge representation languages, and reasoning algorithms in subsequent posts.
Understanding knowledge-based agents provides valuable insights into AI systems that maintain explicit knowledge models, reason systematically about their environment, and adapt through learning. As AI continues to evolve, these principles remain relevant for building transparent, interpretable, and robust intelligent systems.
Next in This Series
Continue exploring knowledge representation with Fundamentals of Logic, where we dive into the mathematical foundations that make knowledge-based reasoning possible.