An Introduction to Knowledge-Based Agents

Early in the field of AI, there was a focus on knowledge bases and on creating agents to interact with them. This led to what we now call "expert systems": complex systems built around a central knowledge base that could be used in an "if-then" manner to make complex decisions and to reason. Some call these systems the first major breakthrough in AI, while others say that expert systems don't belong in the category of AI at all.

Expert System

Despite all of this, knowledge-based agents and knowledge bases still have relevance in our current AI endeavors. If you've done work in natural language processing, you may have come across knowledge bases like WordNet. Or maybe you're familiar with Wikipedia, a massive knowledge base that describes all sorts of semantic relationships between entities.

Thinking about these and other huge knowledge bases we have access to in the modern age of the internet raises a question: how do we build agents that interface with knowledge bases, updating them and making complex decisions from the knowledge they contain? Is it worth our time? Could it help move AI forward?


Either way, it's useful to reflect on these ideas and the idea of a knowledge-based agent. This article is the first of a multi-part discussion on knowledge-based agents, knowledge representation, and logic.

What's in a Knowledge-Based Agent?

An agent is anything that acts in an environment. In AI, one might say that we are trying to study and create rational agents: agents that act in an environment in a way that makes sense. In environments where we perfectly understand action-reaction pairs, we may say that an agent is rational when it chooses the action that will yield its desired outcome. In an environment with any degree of uncertainty, we may say that an agent is rational if it acts in a way that maximizes its expected desired outcomes.

Agent in action

But what if we wish to create an agent that is not purely reactionary? What if we wish to create an agent that maintains an internal state of knowledge, reasons over that knowledge, and updates its knowledge as it makes observations and takes actions? This is the idea of knowledge-based agents and knowledge-based intelligence.

Knowledge Bases

The core piece of any knowledge-based agent is, as the name suggests, its knowledge base. From now on in this post, we will refer to a knowledge base as a KB.

A KB always consists of a set of sentences. These are not natural language sentences, but sentences written in a different kind of language: a knowledge representation language.

A knowledge representation language is simply a language for representing assertions about the world in a concise manner that can be reasoned about easily. This is why we construct our agent's knowledge base in a knowledge representation language.

Sentences that aren't derived from anything else in the KB and are simply assumed to be true are called axioms (a natural label if you're familiar with the term from mathematics).

Any sentence derived from the KB is said to be inferred. These sentences aren't made up; they are simply inferred from what is already known. This can happen through logical reasoning with a set of rules, rules that may even fall naturally out of the knowledge representation language.
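As a toy illustration of the axiom/inferred distinction, consider a KB whose sentences are plain strings and whose rules are if-then pairs; new sentences are derived by forward chaining. All the names and facts below are made up for this sketch, not part of any standard library:

```python
# Axioms: sentences we simply assume to be true.
axioms = {"rainy", "have_umbrella"}

# If-then rules: (set of premises, conclusion).
rules = [
    ({"rainy"}, "ground_wet"),                 # rainy => ground_wet
    ({"rainy", "have_umbrella"}, "stay_dry"),  # rainy & umbrella => stay_dry
]

def infer(facts, rules):
    """Forward chaining: apply rules until no new sentences appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

kb = infer(axioms, rules)
# "ground_wet" and "stay_dry" are now in kb: inferred, not axioms.
```

Everything in kb that isn't in axioms was inferred; nothing was invented, it all follows from the rules.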


Actions of an Agent

In order to interact with its KB, any knowledge-based agent will need two actions: Ask and Tell.

Ask is how an agent extracts information from a KB. We may say that an agent Asks a "question," or that an agent queries its KB through the Ask action. The important thing is that the agent formats its request in the form the KB expects (perhaps in a knowledge representation language) and that the KB responds with a sentence that was either stored in the KB or inferred from the information in it. This guarantees that the KB's response doesn't contradict any of the knowledge it holds.

When we aren't Asking the KB for information, we might wish to update it with new information. Maybe our agent has observed a change in the environment, or wishes to record that it has made a plan or taken an action. This is what the Tell action is for. As with Ask, we need to format our knowledge correctly. The KB will take our newly formatted sentence and store it, perhaps updating itself through reasoning and inference processes.
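A minimal sketch of this Tell/Ask interface might look like the class below. Every name here is illustrative (the "representation language" is just strings, and inference is simple forward chaining over if-then rules), so treat it as one possible shape, not the implementation:

```python
class KnowledgeBase:
    """Toy KB exposing the Tell and Ask actions."""

    def __init__(self):
        self.sentences = set()  # stored sentences (strings, for simplicity)
        self.rules = []         # (premises, conclusion) pairs for inference

    def tell(self, sentence):
        """Store a new sentence, then re-run inference to update the KB."""
        self.sentences.add(sentence)
        self._infer()

    def ask(self, query):
        """Answer only from stored or inferred sentences."""
        return query in self.sentences

    def _infer(self):
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.sentences and conclusion not in self.sentences:
                    self.sentences.add(conclusion)
                    changed = True

kb = KnowledgeBase()
kb.rules.append(({"rainy"}, "ground_wet"))
kb.tell("rainy")
kb.ask("ground_wet")  # True: never Told directly, but inferred
```

Note that ask only ever answers from what is stored or inferred, so its answers cannot contradict the KB's contents.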

Generic Knowledge-Based Agent

So what does a knowledge-based agent look like at a high-level? Maybe something like this:

def kb_agent(percept):
    # KB and the time step t are maintained outside this function
    Tell(KB, make_percept_sentence(percept, t))   # report what we perceived
    action = Ask(KB, make_action_query(t))        # ask which action to take
    Tell(KB, make_action_sentence(action, t))     # record the chosen action
    t = t + 1
    return action

Let's break down what's going on here.

We can think of this as a function that takes in a percept of the environment, updates the KB through the Tell action, Asks the KB what action to take, and then Tells the KB that it is taking that action.

Inside the two functions make_percept_sentence and make_action_sentence, we would write the logic that formats a percept and an action into the proper knowledge representation language. Likewise, make_action_query takes a time step and creates a query in the knowledge representation language so that our agent can ask the KB which action it should take.

What's missing from this? The actual code for producing the percepts (those are simply passed into the function) and the code for interacting with the external world to carry out the actions. What remains, the logic for reasoning over the KB and deciding which action to take, is the brains of the operation.

Lenses of Knowledge-Based Agents

Now that we have covered the ideas of what a KB is and how a knowledge-based agent interfaces with it, we can look at a knowledge-based agent from several different viewpoints.

Knowledge Level

This is the highest level at which we would analyze a knowledge-based agent. It specifies what the agent's goals are and what knowledge it has. Maybe we ask, "how much about the world does my agent know at the start?" This level helps us frame and answer that question.

Logical Level

This is the level at which we ask how the knowledge is logically represented. This is where our discussion of knowledge representation languages comes in. Every language has its advantages and disadvantages; some things cannot be described as easily in one language as in another. This level is also where we look at how our logic is represented. Are we using propositional logic? First-order logic? These are questions we'll discuss more in a later post.

Implementation Level

Finally, we get to the implementation level. As you may have inferred, this is the level where we ask how we are actually representing this knowledge. Is it wrapped up in a C struct? Do we have a relational database? Are we using objects? Vectors? Just as there are many design choices at the higher levels, there are major design decisions here as well, and they will significantly affect our knowledge-based agent.

Learning in Knowledge-Based Agents

So far we've talked about what a KB is, the basics of knowledge-based agents, and the different levels of analysis we can use to design our agent. Once we have handled our design questions, we can think of creating a knowledge-based agent simply by initializing it with an empty KB and then Telling it all the sentences we want it to start with.

This approach is called declarative, in contrast to the procedural approach to system design, where one would write a program that directly encodes the things we want our agent to know. In the real world, we most likely want a synthesis of both declarative and procedural agent building, but it's important to understand the difference.
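The contrast can be sketched in a few lines. Both snippets below are hypothetical (the sentence strings and the policy rule are invented for illustration): the declarative agent starts empty and is Told its knowledge as data, while the procedural agent has the same knowledge baked into its code:

```python
# Declarative: start with an empty KB and Tell it every starting sentence.
def build_agent_declaratively(axioms):
    kb = []
    for sentence in axioms:
        kb.append(sentence)  # i.e., Tell(kb, sentence)
    return kb

# Procedural: the knowledge "a breeze means danger is near" is
# hard-wired into the control logic instead of stored as a sentence.
def procedural_policy(percept):
    if percept == "breeze":
        return "retreat"
    return "explore"

kb = build_agent_declaratively(["breeze_means_danger_nearby",
                                "danger_is_deadly"])
```

The declarative agent can later be Told new sentences without touching its code; the procedural one must be rewritten to change what it "knows."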

Tying these two approaches together, we can also start to see how one might incorporate learning into knowledge-based agents. Take, for example, an agent that can learn and has a knowledge base. Such an agent can figure out ways to combine the percepts it makes into new sentences for the KB, sentences that help it become more successful and achieve its goals faster. There are also learning approaches that take what's already in a KB and assist the inference process. Research on generating new connections in KBs like WordNet exists, and efficient ways of doing this remain open research questions, ever more interesting as more and more semantic webs are created online.


We covered a lot of fundamentals about knowledge-based agents that will help us engage with more of these ideas in the future. We talked about KBs, actions of knowledge-based agents, and what a generic knowledge-based agent could look like. We also discussed different lenses we can look at our agents through, particularly in the design phase. And we've wrapped up with a discussion about learning with KBs and procedural/declarative system design.

If you have any questions, let me know! I am still learning a lot about the field of AI myself, and discussion helps refine understanding.

