Recap
In my last post, I talked about knowledge-based agents and introduced the concepts behind knowledge bases (KBs) and the set of sentences we use to build them up. In this post, we’ll build on the idea of a sentence as a starting point to dive into logic and logical representation.
This article is the second of a multi-part discussion on knowledge-based agents, knowledge representation, and logic. See the previous posts here:
Knowledge-Based Agents
Early on in the field of AI, there was a focus on knowledge bases and creating agents to interact with them. This resulted in the creation of what we now refer to as “expert systems.” These were complex systems built around a central knowledge base that could then be used in an “if-then” manner to make complex decisions and to reason. Some call these systems the first major breakthrough in AI, while others say that expert systems don’t even belong in the category of AI.
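To give a flavor of that “if-then” style of reasoning, here is a minimal sketch of forward chaining over a toy knowledge base; the facts and rules are made up purely for illustration and aren’t drawn from any real expert system:

```python
# A tiny, hypothetical "if-then" knowledge base; facts and rules are invented
# purely to illustrate forward chaining, not taken from any real system.
facts = {"has_fever", "has_cough"}

# Each rule: if every premise is a known fact, add the conclusion as a new fact.
rules = [
    ({"has_fever", "has_cough"}, "likely_flu"),
    ({"likely_flu"}, "recommend_rest"),
]

# Naive forward chaining: keep applying rules until no new facts appear.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'has_fever', 'has_cough', 'likely_flu', 'recommend_rest'}
```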
Introduction
This week has been a great week for natural language processing and I’m really excited (and you should be too)! This week, I happened to browse over to Arxiv Sanity Preserver as I normally do, only instead of being greeted by a barrage of GAN-this or CNN-that, my eyes fell upon two datasets that made me very happy. They were CoQA and QuAC, and in this blog we are going to talk about why CoQA is so exciting.
Introduction
If you’re in the field of natural language processing and you’re not excited, you’re about to be! This week, I happened to browse over to Arxiv Sanity Preserver as I normally do, only instead of being greeted by a barrage of GAN-this or CNN-that, my eyes fell upon two datasets that made me very happy. They were CoQA and QuAC, and in this blog we are going to talk about why QuAC is so exciting.
A brief overview of the objective functions used in GANs
What’s in a Generative Model?
Before we even start talking about Generative Adversarial Networks (GANs), we should ask: what’s in a generative model? Why do we even want to have such a thing? What is the goal? These questions can help seed our thought process to better engage with GANs.
So why do we want a generative model? Well, it’s in the name! We wish to generate something. But what do we wish to generate?
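As a reference point for the overview that follows, the objective from the original GAN paper (Goodfellow et al., 2014) sets up a minimax game between a generator G and a discriminator D:

\[
\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
\]

Here D tries to tell real samples from generated ones, while G tries to produce samples that D classifies as real.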
Count Vectorization (AKA One-Hot Encoding)
If you haven’t already, check out my previous blog post on word embeddings: Introduction to Word Embeddings.
In that blog post, we talk about a lot of the different ways we can represent words for use in machine learning. It’s a high-level overview that we’ll expand on here by looking at how we can actually use count vectorization on some real text data.
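As a preview, here is a minimal sketch of count vectorization using scikit-learn’s CountVectorizer on a tiny two-sentence corpus (the sentences are made up just for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer

# A tiny, made-up corpus purely for illustration.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

# CountVectorizer learns a vocabulary and counts token occurrences per document.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(corpus)  # sparse document-term matrix

# In older scikit-learn versions this method is get_feature_names().
print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(counts.toarray())                    # one row of counts per sentence
```

Each row of the resulting matrix represents a sentence purely by how many times each vocabulary word appears in it.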
What is a word embedding?
A very basic definition of a word embedding is a real-valued vector representation of a word. Typically, these days, words with similar meaning will have vector representations that are close together in the embedding space (though this hasn’t always been the case).
When constructing a word embedding space, typically the goal is to capture some sort of relationship in that space, be it meaning, morphology, context, or some other kind of relationship.
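To make “close together in the embedding space” concrete, here is a small sketch using hand-made 3-dimensional vectors and cosine similarity; real embeddings are learned from data and typically have hundreds of dimensions:

```python
import numpy as np

# Toy, hand-crafted "embeddings" purely for illustration; real embeddings are
# learned from large corpora and have far more dimensions.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.85, 0.75, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    # 1.0 means the vectors point in the same direction; 0.0 means orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```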