
What if AI doesn’t need more data—it needs better memory?

  • Writer: Manoj Tiwari
  • May 4
  • 3 min read

Updated: May 6

Every company has a John.


The person who doesn’t need to check the manual. Who knows that this sensor fails in humid weather, or that hinge won’t survive a high-traffic corridor. Not because it’s written down—but because they’ve seen it happen.


We’ve spent months trying to answer one question:



How do we build AI systems that don’t just retrieve knowledge—but understand it like John would?


This post is a share-out of what we’ve learned so far designing AI agents for capturing and reasoning over expert knowledge. Our goal: help others tackling similar problems avoid dead ends and find patterns that work.


[Image: evolution of data storage, from floppy disk to CD, SD card, and USB drive]
Just as memory storage evolved from floppy disks to flash drives, AI needs to evolve from stateless Q&A to memory-driven reasoning systems.


Why Capturing Expert Knowledge Is So Hard


Documentation rarely captures what really matters. Experts operate from intuition, not instructions. They know exceptions, patterns, and tradeoffs—none of which live in a PDF.


When those experts leave, their mental model walks out the door.


That’s where AI can help—if it’s structured correctly.



The Architecture That Worked for Us (So Far)


We didn’t build a single GPT-powered monolith. Instead, we found success in a modular, agentic system. Picture a team of AI specialists collaborating like a brain trust:


  • Listener Agent – Gathers raw insights from PDFs, manuals, and past conversations.

  • Memory Agent – Constructs and updates a living knowledge graph, forming semantic connections like a neural web.

  • Reasoning Agent – Synthesizes insights from structured and unstructured data to produce context-aware, accurate answers.


Each one plays a role—but the real power emerges when they work together: learning over time, coordinating with memory, and applying logic to resolve ambiguity. This isn’t just retrieval—it’s reasoning at scale.
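
To make that division of labor concrete, here’s a minimal Python sketch of the pattern. All the names here (Insight, ListenerAgent, MemoryAgent, ReasoningAgent) are illustrative stand-ins rather than our production code, and the keyword matching is a placeholder for real graph reasoning:

```python
from dataclasses import dataclass


@dataclass
class Insight:
    """A single piece of captured expert knowledge."""
    text: str
    source: str  # e.g. a PDF name or conversation ID


class ListenerAgent:
    """Gathers raw insights from documents and past conversations."""

    def ingest(self, document: str, source: str) -> list[Insight]:
        # Real ingestion would chunk, clean, and extract entities;
        # here each non-empty line becomes a candidate insight.
        return [Insight(line.strip(), source)
                for line in document.splitlines() if line.strip()]


class MemoryAgent:
    """Maintains a living knowledge graph of validated insights."""

    def __init__(self) -> None:
        self.graph: dict[str, set[str]] = {}  # insight text -> sources

    def store(self, insight: Insight) -> None:
        self.graph.setdefault(insight.text, set()).add(insight.source)


class ReasoningAgent:
    """Synthesizes answers from whatever memory can supply."""

    def answer(self, question: str, memory: MemoryAgent) -> str:
        # Naive keyword overlap stands in for graph traversal and logic.
        words = {w.strip("?.,").lower() for w in question.split()}
        hits = [text for text in memory.graph
                if words & {w.strip("?.,").lower() for w in text.split()}]
        return hits[0] if hits else "I need more context to answer that."


listener, memory, reasoner = ListenerAgent(), MemoryAgent(), ReasoningAgent()
for insight in listener.ingest("This sensor fails in humid weather.", "field-notes.pdf"):
    memory.store(insight)
print(reasoner.answer("Why does the sensor fail?", memory))  # -> the stored insight
```

The point is the separation: ingestion never reasons, memory never ingests, and the reasoner only consumes what memory has validated.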



Lessons on Designing AI That Remembers and Reasons


Most AI systems forget what happened 5 minutes ago. That’s not how experts work.


So we built a Memory Layer:

  • Short-term memory for conversational flow

  • Long-term memory for storing resolved insights


In our setup, short-term memory is vector-based, optimized for fast retrieval. Long-term memory lives in the knowledge graph—validated, structured, and reusable.

This combo lets our system answer questions more precisely, and over time, more proactively.
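
As a rough sketch of that two-tier design, assuming a random-projection embed function in place of a real embedding model and a plain dict in place of our actual graph store:

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Stand-in embedding, deterministic within one process run.
    In practice this would call a real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)


class ShortTermMemory:
    """Vector-based store for the current conversation; fast, disposable."""

    def __init__(self) -> None:
        self.items: list[tuple[np.ndarray, str]] = []

    def add(self, text: str) -> None:
        self.items.append((embed(text), text))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)

        def score(item: tuple[np.ndarray, str]) -> float:
            v, _ = item
            return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q)))

        return [t for _, t in sorted(self.items, key=score, reverse=True)[:k]]


class LongTermMemory:
    """Graph-shaped store for validated, reusable insights."""

    def __init__(self) -> None:
        self.edges: dict[str, dict[str, str]] = {}  # subject -> {relation: object}

    def promote(self, subject: str, relation: str, obj: str) -> None:
        # Only resolved, validated insights graduate out of short-term memory.
        self.edges.setdefault(subject, {})[relation] = obj

    def lookup(self, subject: str) -> dict[str, str]:
        return self.edges.get(subject, {})
```

Conversation turns land in short-term memory for quick recall; anything that survives validation gets promoted into long-term memory as a (subject, relation, object) edge.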



A Real-World Use Case: Hardware Compatibility

A contractor asks: "Will the Schlage ALX50 work with the LCN 4040XP?"


Here’s how our system handles it:

  • Router Agent parses the question.

  • Catalog Agent looks up lock/closer compatibility.

  • Quality Agent verifies the logic.

  • Memory Agent checks for prior context (e.g. "this contractor usually uses electrified options").


If no clear answer exists, the system doesn’t hallucinate—it asks a clarifying question.
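
In code, that flow could look something like the sketch below. The compatibility table, contractor context, and question parsing are all hard-coded stubs for illustration; the real Router and Catalog agents are model- and data-driven:

```python
from typing import Optional

# Illustrative stubs: real data would come from the product catalog and memory.
COMPATIBILITY = {("Schlage ALX50", "LCN 4040XP"): True}
PRIOR_CONTEXT = {"contractor-42": "usually uses electrified options"}


def router_agent(question: str) -> dict:
    """Parse the question into a structured intent.
    A production router would use an LLM or NER; this stub is hard-coded."""
    return {"intent": "compatibility",
            "products": ("Schlage ALX50", "LCN 4040XP")}


def catalog_agent(products: tuple[str, str]) -> Optional[bool]:
    """Look up lock/closer compatibility; None means the catalog has no answer."""
    return COMPATIBILITY.get(products)


def quality_agent(verdict: Optional[bool]) -> bool:
    """Sanity-check the proposed answer before it reaches the user."""
    return verdict is not None


def memory_agent(contractor_id: str) -> str:
    """Fetch prior context about this contractor, if any."""
    return PRIOR_CONTEXT.get(contractor_id, "")


def answer(question: str, contractor_id: str) -> str:
    intent = router_agent(question)
    verdict = catalog_agent(intent["products"])
    if not quality_agent(verdict):
        # No clear answer: ask a clarifying question rather than hallucinate.
        return "Which arm configuration is the closer using?"
    note = memory_agent(contractor_id)
    base = "Yes, they're compatible." if verdict else "No, they're not compatible."
    return f"{base} (Context: {note})" if note else base


print(answer("Will the Schlage ALX50 work with the LCN 4040XP?", "contractor-42"))
```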


Want to see this in action? Check out our live demo at showroom.conversant.ai and explore how the system reasons through real product queries.

This approach isn’t just for hardware—it could transform how hospitals retain doctor expertise or banks preserve financial know-how.



Where We’re Headed Next

Our focus now is:

  • Agent Feedback Loops — letting agents self-reflect and improve. For instance, we’ve built early tooling to log and compare agent responses over time to fine-tune logic heuristics (a minimal version of that logging is sketched after this list).

  • Proactive Suggestions — helping users before they even ask. Imagine a contractor looking at a closer, and the system auto-suggests the top 3 compatible locks based on prior job history.

  • More Community Input — this is too big to solve alone.
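
Here’s what a minimal version of that response-logging tooling might look like. The JSONL file and field names are assumptions made for the sketch, not our actual pipeline:

```python
import json
import time
from pathlib import Path

LOG_PATH = Path("agent_responses.jsonl")  # hypothetical log location


def log_response(agent: str, question: str, answer: str) -> None:
    """Append one agent response to a JSONL log for later comparison."""
    record = {"ts": time.time(), "agent": agent,
              "question": question, "answer": answer}
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")


def responses_over_time(question: str) -> list[dict]:
    """All logged answers to the same question, oldest first, so drift
    (or improvement) in agent behavior is easy to spot and diff."""
    if not LOG_PATH.exists():
        return []
    records = (json.loads(line) for line in LOG_PATH.read_text().splitlines())
    return sorted((r for r in records if r["question"] == question),
                  key=lambda r: r["ts"])
```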



Want to Build AI That Thinks Like an Expert?

Here’s what we’d suggest based on what’s worked (and not worked) for us:

  • Separate ingestion, memory, and reasoning. Don’t overload a single model.

  • Use a knowledge graph if long-term logic matters (see the small graph sketch after this list).

  • Design your agents like a team: with roles, handoffs, and shared context.
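
On the knowledge-graph point: even a tiny graph pays off once relations matter more than documents. A sketch using networkx (our own store differs, and the facts below are made up for illustration):

```python
import networkx as nx

# Illustrative facts only, not verified product data.
G = nx.Graph()
G.add_edge("Schlage ALX50", "LCN 4040XP", relation="compatible_with")
G.add_edge("Sensor X", "humid weather", relation="fails_in")


def relations_for(entity: str) -> list[tuple[str, str]]:
    """Everything the graph knows about an entity, as (relation, neighbor) pairs."""
    return [(G.edges[entity, nbr]["relation"], nbr) for nbr in G.neighbors(entity)]


print(relations_for("Schlage ALX50"))  # [('compatible_with', 'LCN 4040XP')]
```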

 
 