Every company has a John.
The person who doesn't need to check the manual. Who knows that this sensor fails in humid weather, or that hinge won't survive a high-traffic corridor. Not because it's written down—but because they've seen it happen.
We've spent months trying to answer one question:
How do we build AI systems that don't just retrieve knowledge—but understand it like John would?
This post is a share-out of what we've learned so far designing AI agents that capture and reason over expert knowledge. Our goal: help others tackling similar problems avoid dead ends and find patterns that work.
Documentation rarely captures what really matters. Experts operate from intuition, not instructions. They know exceptions, patterns, and tradeoffs—none of which live in a PDF.
When those experts leave, their mental model walks out the door.
That's where AI can help—if it's structured correctly.
We didn't build a single GPT-powered monolith. Instead, we found success with a modular, agentic system. Picture a team of AI specialists collaborating like a brain trust:
Each one plays a role—but the real power emerges when they work together: learning over time, coordinating with memory, and applying logic to resolve ambiguity. This isn't just retrieval—it's reasoning at scale.
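To make the "team of specialists" idea concrete, here's a minimal routing sketch. The agent names, matching rules, and responses are all illustrative placeholders, not our actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    # Hypothetical specialist: a name, a rule for claiming queries,
    # and a handler that produces an answer.
    name: str
    can_handle: Callable[[str], bool]
    run: Callable[[str], str]

def route(query: str, agents: List[Agent]) -> str:
    """Dispatch the query to the first specialist that claims it."""
    for agent in agents:
        if agent.can_handle(query):
            return agent.run(query)
    # No specialist matched: surface the gap instead of guessing.
    return "escalate: no specialist matched"

# Example specialist with a toy matching rule.
compatibility = Agent(
    name="compatibility",
    can_handle=lambda q: "work with" in q.lower(),
    run=lambda q: f"[compatibility] checking: {q}",
)

print(route("Will the Schlage ALX50 work with the LCN 4040XP?", [compatibility]))
```

In a real system the routing would be model-driven rather than keyword-based, but the shape is the same: many narrow agents, one coordinator, and an explicit escalation path.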
Most AI systems forget what happened five minutes ago. That's not how experts work.
So we built a Memory Layer:
In our setup, short-term memory is vector-based, optimized for fast retrieval. Long-term memory lives in the knowledge graph—validated, structured, and reusable.
This combo lets our system answer questions more precisely, and over time, more proactively.
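A minimal sketch of that two-tier memory, assuming a toy in-process vector store and a dictionary standing in for the knowledge graph (class and method names are hypothetical):

```python
import math
from collections import defaultdict

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class MemoryLayer:
    """Short-term: embeddings for fast fuzzy recall.
    Long-term: a validated knowledge graph of structured facts."""

    def __init__(self):
        self.short_term = []            # list of (embedding, text) pairs
        self.graph = defaultdict(dict)  # subject -> {relation: object}

    def remember(self, embedding, text):
        # Everything lands in short-term memory first.
        self.short_term.append((embedding, text))

    def recall(self, query_embedding, k=3):
        # Top-k nearest short-term memories by cosine similarity.
        scored = sorted(self.short_term,
                        key=lambda item: cosine(query_embedding, item[0]),
                        reverse=True)
        return [text for _, text in scored[:k]]

    def promote(self, subject, relation, obj):
        # Only validated facts graduate into the long-term graph.
        self.graph[subject][relation] = obj
```

In production the vector store and graph would be real databases, but the division of labor is the point: fast, fuzzy retrieval up front; slow, validated structure behind it.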
A contractor asks: "Will the Schlage ALX50 work with the LCN 4040XP?"
Here's how our system handles it:
If no clear answer exists, the system doesn't hallucinate—it asks a clarifying question.
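That answer-or-clarify behavior can be sketched in a few lines. The lookup table below stands in for the validated knowledge graph, and its entries are illustrative, not real compatibility data:

```python
# Toy stand-in for the validated knowledge graph.
# Entries are illustrative, not verified compatibility facts.
COMPATIBILITY = {
    ("Schlage ALX50", "LCN 4040XP"): True,
}

def answer(part_a: str, part_b: str):
    # Check both orderings of the pair.
    known = COMPATIBILITY.get((part_a, part_b))
    if known is None:
        known = COMPATIBILITY.get((part_b, part_a))
    if known is None:
        # No validated fact: ask a clarifying question instead of guessing.
        return ("clarify",
                f"I don't have a verified pairing for {part_a} and {part_b}. "
                "What door type is this for?")
    return ("answer", "Yes, they're compatible." if known else "No, they aren't.")
```

The key design choice: "unknown" is a first-class outcome, distinct from "no," so the system never has to invent an answer to fill the gap.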
Want to see this in action? Check out our live demo at showroom.conversant.ai and explore how the system reasons through real product queries.

This approach isn't just for hardware. It could transform how hospitals retain doctor expertise or how banks preserve financial know-how.
Our focus now is:
Agent Feedback Loops — letting agents self-reflect and improve. For instance, we've built early tooling to log and compare agent responses over time to fine-tune logic heuristics.
Proactive Suggestions — helping users before they even ask. Imagine a contractor looking at a closer, and the system auto-suggests the top three compatible locks based on prior job history.
More Community Input — this is too big to solve alone
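The response-logging tooling mentioned above can be sketched as an append-only log keyed by query, so the same question can be compared across agent versions. The class and field names here are hypothetical:

```python
import time

class ResponseLog:
    """Append-only log of agent responses, so answers to the same
    query can be compared across agent versions over time."""

    def __init__(self):
        self.entries = []

    def record(self, query: str, agent_version: str, response: str):
        # Timestamped entry; nothing is ever overwritten.
        self.entries.append({
            "ts": time.time(),
            "query": query,
            "agent_version": agent_version,
            "response": response,
        })

    def history(self, query: str):
        # All recorded answers for one query, oldest first.
        return [e for e in self.entries if e["query"] == query]

log = ResponseLog()
log.record("Which closer suits a high-traffic corridor?", "v1", "answer A")
log.record("Which closer suits a high-traffic corridor?", "v2", "answer B")
```

Diffing `history()` output for a fixed query set is a cheap way to catch regressions when heuristics change.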
Here's what we'd suggest based on what's worked (and not worked) for us: