
KBAI: Knowledge-Based AI

Key knowledge-based AI concepts from Georgia Tech OMSCS KBAI (CS 7637): knowledge representation, reasoning strategies, learning, planning, and the ARC-AGI project.

Georgia Tech OMSCS KBAI (CS 7637) is a course about building agents that reason, learn, and remember through structured knowledge rather than statistical pattern matching. This post captures what I learned in Spring 2026 about cognitive architectures, knowledge representations, and the ARC-AGI project that runs through the semester.

Course Focus

KBAI is not a machine learning course. There are no gradients, no training loops, no neural network architectures. The course frames AI as the design of cognitive systems built from three intertwined processes: reasoning, learning, and memory. The recurring lens is the Four Schools of AI, which positions knowledge-based approaches alongside (not against) ML, neural, and statistical methods.

I picked the course for two reasons. It counted toward my graduation requirements, and I wanted hands-on exposure to agent-style AI — the line of work that thinks about cognition as reasoning over structured knowledge. The course delivered on that, with a fair amount of writing along the way: each unit has reflective questions and short essays, and the project comes with its own report.

Knowledge Representation

How an agent stores what it knows is the foundation for everything else. The course covers four representations:

This was the part of the course I came back to the most. As a software engineer working on data-heavy systems, I keep asking how to store information so that it stays usable later. Production systems and frames are clean answers to that for a specific kind of agent. The idea that a frame encodes what kind of information you expect about a concept is something I want to bring back to the way I structure data on the engineering side.
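The idea that a frame declares up front which slots you expect to fill can be sketched in a few lines. This is my own minimal illustration (the `Frame` class, slot names, and the "meal" concept are mine, not course code), just to show slots, defaults, and rejection of unexpected data:

```python
# A minimal sketch of a frame: a named structure whose slots declare
# what information we expect about a concept, with optional defaults.
# (Class and slot names here are illustrative, not from the course.)

class Frame:
    def __init__(self, name, slots):
        self.name = name
        self.slots = dict(slots)  # slot -> default value (or None)

    def fill(self, **values):
        """Return a copy with some slots filled; unknown slots are rejected."""
        unknown = set(values) - set(self.slots)
        if unknown:
            raise KeyError(f"frame '{self.name}' has no slots {unknown}")
        filled = Frame(self.name, self.slots)
        filled.slots.update(values)
        return filled

# Generic "meal" concept: these are the things we expect to know about any meal.
meal = Frame("meal", {"diner": None, "food": None, "location": "restaurant"})

lunch = meal.fill(diner="Ann", food="soup")
print(lunch.slots["location"])  # default survives: "restaurant"
```

The engineering takeaway for me is the `fill` check: the frame does not just hold data, it encodes what data is supposed to exist, which is exactly the property I want in schemas on the data side.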

Reasoning Strategies

Once knowledge is represented, the agent needs strategies for using it:

These strategies are deliberately simple. The course’s point is that with the right knowledge representation, even basic strategies cover a lot of ground.

Learning

Learning in KBAI is not gradient descent. It is symbolic, often one-shot, and tied to specific knowledge structures:

The thread is that learning is cheap when the representation is right. A version space or a single explained example can teach you something a neural model would need thousands of samples to approximate — but only inside a clean, well-bounded knowledge schema.
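A version space makes the "cheap learning" point concrete. The sketch below is a stripped-down candidate elimination over attribute tuples (my own toy encoding, not the course's code): hypotheses are tuples of values or `?` wildcards, and two examples plus one counterexample pin down the concept:

```python
def covers(h, x):
    return all(hv in ("?", xv) for hv, xv in zip(h, x))

def learn(examples):
    """Tiny candidate-elimination sketch over attribute tuples."""
    positives = [x for x, label in examples if label]
    S = list(positives[0])            # most specific boundary
    G = [tuple("?" for _ in S)]       # most general boundary
    for x, label in examples:
        if label:
            # generalize S minimally to cover x; drop G members that miss x
            S = [sv if sv == xv else "?" for sv, xv in zip(S, x)]
            G = [g for g in G if covers(g, x)]
        else:
            # specialize members of G minimally so they exclude x
            new_g = []
            for g in G:
                if not covers(g, x):
                    new_g.append(g)
                    continue
                for i, (gv, sv) in enumerate(zip(g, S)):
                    if gv == "?" and sv != "?" and sv != x[i]:
                        new_g.append(g[:i] + (sv,) + g[i + 1:])
            G = new_g
    return tuple(S), G

examples = [
    (("red", "circle"), True),
    (("blue", "circle"), False),
    (("red", "square"), True),
]
S, G = learn(examples)
print(S, G)  # boundaries converge on "red things": ('red', '?') [('red', '?')]
```

Three labeled examples and the space collapses to a single concept, which is the whole argument: inside a clean attribute schema, learning is nearly free.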

Planning and Problem Solving

Planning is the process of selecting and ordering actions to achieve one or more goals:

Planning is treated as a central cognitive process because action selection itself is central to cognition. Most of what an agent does, at any level, eventually reduces to “which action next.”
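The "selecting and ordering actions" definition can be sketched as search over states. The toy domain below (states as sets of facts, actions as precondition/add/delete triples) is my own illustration of the general pattern, not the course's planner:

```python
from collections import deque

# Planning as breadth-first search: find an ordered action sequence that
# turns the initial state into one satisfying the goal.
# Each action: (name, preconditions, facts added, facts deleted).
ACTIONS = [
    ("pick-up-key", frozenset({"at-door"}), {"has-key"}, set()),
    ("unlock-door", frozenset({"at-door", "has-key"}), {"door-open"}, set()),
    ("walk-through", frozenset({"door-open"}), {"inside"}, {"at-door"}),
]

def plan(initial, goal):
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                   # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"at-door"}, {"inside"}))
# ['pick-up-key', 'unlock-door', 'walk-through']
```

Even this blunt search answers "which action next" at every step, which is why planning earns its central place in the architecture.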

Common Sense Reasoning

This block was where the course’s framing felt most distinctive:

The primitive actions section is the one that stuck with me most, partly because of the explicit definition of ontology and partly because of the design philosophy underneath it. With a small set of well-chosen primitives, plus a frame structure to compose them and handle implied actions and state changes, you can interpret a surprising range of input without having a huge knowledge base. That kind of compact, structured representation is exactly what makes sense in environments where compute and data are constrained.
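A small sketch shows how a handful of primitives plus frames can interpret varied surface input. The primitive names below come from conceptual dependency theory; the verb lexicon and inference rules are my own illustration, not the course's actual set:

```python
# Interpreting sentences through a small primitive-action ontology:
# many surface verbs collapse onto a few primitives, and implied state
# changes attach to the primitive rather than the verb.

PRIMITIVES = {
    "go":   "PTRANS",  # transfer of physical location
    "walk": "PTRANS",
    "give": "ATRANS",  # transfer of possession
    "sell": "ATRANS",
    "tell": "MTRANS",  # transfer of information
}

def interpret(agent, verb, obj):
    """Build an action frame and attach the implied state change."""
    primitive = PRIMITIVES[verb]
    frame = {"primitive": primitive, "agent": agent, "object": obj}
    if primitive == "PTRANS":
        frame["result"] = f"location({agent}) changed"
    elif primitive == "ATRANS":
        frame["result"] = f"possession({obj}) changed"
    return frame

# "walk" and "go" hit the same primitive, so one inference rule covers both.
print(interpret("Ann", "walk", "store")["primitive"])  # PTRANS
print(interpret("Ann", "sell", "book")["result"])      # possession(book) changed
```

The lexicon can grow with new verbs without touching the inference rules, which is the compactness argument in miniature.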

Higher-Level Reasoning

The final block covers two cross-cutting topics:

Meta-reasoning is treated as the layer that sits above the reactive and deliberative parts of the architecture. It is what lets an agent recognize that its current approach isn’t working and try something else.
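That "notice failure, switch approach" loop can be sketched with deliberately trivial strategies. The two stand-in strategies below are my own toy examples; only the meta-level structure, monitoring whether the current strategy produced an answer, reflects the idea from the course:

```python
# Meta-reasoning sketch: a layer above the strategies that detects when
# the current one fails and falls back to the next.

def strategy_lookup(problem, table={"2+2": 4}):
    return table.get(problem)           # fails (None) on anything unseen

def strategy_compute(problem):
    a, b = problem.split("+")
    return int(a) + int(b)

def solve(problem):
    for strategy in (strategy_lookup, strategy_compute):
        answer = strategy(problem)
        if answer is not None:          # meta-level check: did this work?
            return answer, strategy.__name__
    return None, None

print(solve("2+2"))   # (4, 'strategy_lookup')
print(solve("3+5"))   # (8, 'strategy_compute')
```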

The ARC-AGI Project

The semester-long project is built around ARC-AGI (Abstraction and Reasoning Corpus for Artificial General Intelligence), the benchmark proposed by François Chollet. The course recently switched to it from the older Raven's Progressive Matrices project.

The setup is simple to describe. You get a small number of input-output grid pairs, where each grid is a 2D array of integers 0–9 representing colors. Your agent has to infer the transformation rule from the examples and apply it to a new test input.
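The loop above can be sketched directly: try candidate transformations against every training pair, keep one that is consistent with all of them, and apply it to the test grid. The candidate list and grids below are my own toy examples, not course or benchmark data:

```python
# ARC-style setup in miniature: grids are small 2D integer arrays, and the
# agent must find a transformation consistent with every training pair.

def rotate90(g):   # clockwise rotation
    return [list(row) for row in zip(*g[::-1])]

def flip_h(g):     # horizontal mirror
    return [row[::-1] for row in g]

CANDIDATES = [("rotate90", rotate90), ("flip_h", flip_h)]

def infer(train_pairs):
    """Return the first candidate that explains all training pairs."""
    for name, fn in CANDIDATES:
        if all(fn(x) == y for x, y in train_pairs):
            return name, fn
    return None, None

train = [([[1, 0], [0, 0]], [[0, 1], [0, 0]]),
         ([[2, 0], [3, 0]], [[3, 2], [0, 0]])]
name, fn = infer(train)
print(name, fn([[5, 0], [0, 6]]))  # rotate90 [[0, 5], [6, 0]]
```

Note that the first training pair alone is ambiguous (a rotation and a mirror produce the same output); only the second pair rules out the mirror, which is a small taste of why hidden test cases punish under-constrained rules.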

The fact that ARC-AGI is an actively used benchmark in the AI industry made it more engaging than working on a closed academic problem. The milestones ramp up: easier problems first, then medium, then hard, then the full set for the final.

Early milestones were tractable. Most problems could be solved by identifying a primitive operation (rotation, fill, shape extraction, color swap) and composing a few of them. The “find a primitive, then compose” pattern from the course mapped naturally onto the code.
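The "find a primitive, then compose" pattern amounts to a small program search: enumerate short sequences of primitive operations and keep one that explains every training pair. The primitives and the training pair below are my own stand-ins, not my actual project code:

```python
from itertools import product

# Compose-and-test sketch: search sequences of primitive grid operations
# (up to length 2) for one consistent with the training examples.

def rotate90(g):
    return [list(r) for r in zip(*g[::-1])]

def recolor(g, a=1, b=2):  # swap two colors
    return [[b if v == a else a if v == b else v for v in r] for r in g]

PRIMS = {"rotate90": rotate90, "recolor": recolor}

def find_program(train, max_len=2):
    for n in range(1, max_len + 1):
        for names in product(PRIMS, repeat=n):
            def run(g, names=names):
                for name in names:
                    g = PRIMS[name](g)
                return g
            if all(run(x) == y for x, y in train):
                return list(names), run
    return None, None

# target rule: rotate clockwise, then swap colors 1 and 2
train = [([[1, 0], [0, 2]], [[0, 2], [1, 0]])]
names, run = find_program(train)
print(names)  # ['rotate90', 'recolor']
```

With a few hundred primitives this brute-force search blows up, which is roughly where the easy milestones end and representation choices start to matter.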

The final was where I hit my limit. The hidden test problems forced harder generalization. I had assumed that the shapes and transformations my code already handled would cover the hidden cases too, but they didn’t. “I can already handle this kind of shape” turned out to be very different from “my code will generalize to unseen versions of it.”

Practical Constraints

These constraints make the project feel more like real ML evaluation than a typical homework assignment.

Course Takeaways

KBAI gave me a vocabulary and a set of design patterns for thinking about agents, knowledge, and cognition that I expect to keep coming back to as I work on systems where AI has to fit within tight, real-world constraints.