Pragmatic Awe
Like many, I was awed by LLMs when I first encountered them. Still am. They have godlike knowledge across almost every human discipline. Used right, as an assistant, they truly amplify intelligence.
However, their limitations are fuzzy: at times they act more like toddlers than intellectual giants. The more I used and studied them, the more these issues came into focus; I've since compiled them into a taxonomy of AI failure modes.
The awe is well-founded. But I came to see pragmatism as an equally fundamental discipline when working with AI, especially as complexity, novelty, and scale increase.
So I approach GenAI with what I call pragmatic awe: not just a perspective but a discipline, the one that produced Maxism. Treat AI with both the respect and caution it warrants.
Track Record
Before Maxism, I built things that lasted.
Measurement systems that worked.
As a part-time developer, I built a survey system from scratch for Oregon State University. It ran in production for 13 years before being replaced by Qualtrics. I later built real-time reporting infrastructure for an insights startup, when real-time was still novel in the industry.
Methodologies that got recognized.
I architected a way to capture and segment mobile moments, grounded in real mobile behaviors and attitudinal data. The work was recognized by the Harvard Business Review and won the EXPLOR Award for marketing research technology innovation. Later, I led a data science team and co-architected an Adoption Curve segmentation tool that placed consumers into adoption groups in a category-agnostic way.
Enterprise experience that grounds the vision.
Two decades in customer experience and marketing research. Business process re-engineering for Fortune 500 digital transformation. Working alongside Accenture in the trenches of enterprise change. I know how large organizations actually work — and why most AI deployments fail inside them.
The Cognitive Lens
For twenty years, I've studied how humans think: how they solve problems, make decisions, form preferences, and process information. Underneath the surveys and interviews, marketing research is applied cognitive science in service of understanding behavior.
When AI emerged, I didn't see a new technology. I saw a new form of cognition, one that superficially resembles human thinking but operates on entirely different principles.
Human cognition:
- Grounded in embodied experience
- Constrained by working memory
- Guided by goals, values, and consequences
- Capable of genuine understanding
LLM cognition:
- Pattern-matching at scale
- No grounding in reality
- No persistent memory across contexts
- Optimized for plausibility, not truth
- Incapable of knowing what it doesn't know
Most AI failures trace back to a single mistake: treating AI like a faster human. We give it human-sized contexts, expect human-like judgment, and hope for human-like reliability.
It doesn't work. The architectures are fundamentally different.
Understanding those differences — from first principles, not from benchmarks — is the foundation of everything Maxism builds. Scalable Thought Architecture, Quest Blueprints, AXIS scoring — none of it would exist without first understanding what AI cognition actually is and how it diverges from the human mind.
The Convergence
AI is a new category — one that requires expertise across disciplines that rarely converge: cognitive science, systems architecture, philosophy, and enterprise operations.
I've spent twenty years working at that convergence. Technical depth. Cognitive expertise. Philosophical rigor. Enterprise experience. First-principles thinking. Architectural vision.
The GenAI space is full of insiders optimizing within the paradigm — bigger models, longer contexts, more parameters. But AI reliability isn't a scaling problem. It's a convergence problem.
AI is so new that best practices don't yet exist. The discipline of managing AI agents hasn't been defined — because the category itself is still emerging. Agent Experience Management is that definition: the frameworks, metrics, and infrastructure that will become standard.
The insiders will keep optimizing models. Maxism is building the infrastructure that makes them actually work.
Connect
I'm selectively engaging with enterprises and partners who want to shape Agent Experience Management as a discipline.