The AI Hangover Is Coming. Your Data Is the Remedy.

The ambition was everywhere. Investment teams compressing week-long credit reviews into 20 minutes. Underwriting models outperforming senior analysts on renewal probability. Operators rebuilding entire workflows around AI from the ground up.

And underneath it, quietly, a problem that no keynote addressed directly. The data those systems are reasoning against does not mean the same thing across the stack. The same asset holds four different identifiers across six platforms. The same metric is calculated differently depending on who pulls the report. Definitions conflict. Lineage disappears. Agents produce confident, fluent, wrong answers.

That is not a tools problem. It is an ontology problem. And RETCON 2026 confirmed it from every direction.

Here is what we heard.


THE PEOPLE PROBLEM

AI does not have a trust problem. It has a data problem that looks like a trust problem.

Every panel that touched on AI deployment arrived at the same friction point. Not whether teams would use the tools. Whether they would trust the outputs enough to act on them.

The answer, almost universally, was not yet.

Operators described workflows where the AI produced a result and the analyst rebuilt it from scratch anyway, just to verify. Investment teams running AI-assisted underwriting who still required a human to sign off on every line before it touched a model. Not because the AI was wrong. Because no one could see where the answer came from.

That is not a change management failure. It is a rational response to opacity. When data flows into a system from a dozen sources, gets transformed three times before it reaches a model, and produces an output with no visible path back to its inputs, skepticism is not a cultural barrier. It is good judgment.

The firms that solved adoption did one thing differently. They made the reasoning visible. They showed their teams what the AI was working with, where each input came from, and how the output would change if the input changed. Adoption followed. Not because the tools improved. Because the data beneath them became traceable.
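
To make "visible reasoning" concrete, here is a minimal sketch of what a traceable output can look like. Everything in it is an assumption for illustration: the NOI example, the field names, and the structure are invented, not a description of any particular platform.

    from dataclasses import dataclass, field

    @dataclass
    class TracedInput:
        value: float
        source: str   # which system the number came from
        as_of: str    # when it was pulled

    @dataclass
    class TracedOutput:
        value: float
        inputs: dict = field(default_factory=dict)  # visible path back to inputs

    def underwrite_noi(revenue: TracedInput, expenses: TracedInput) -> TracedOutput:
        # The calculation carries its lineage with it instead of discarding it.
        return TracedOutput(
            value=revenue.value - expenses.value,
            inputs={"revenue": revenue, "expenses": expenses},
        )

    result = underwrite_noi(
        TracedInput(1_200_000, source="property_mgmt", as_of="2026-01-31"),
        TracedInput(450_000, source="general_ledger", as_of="2026-01-31"),
    )
    for name, inp in result.inputs.items():
        print(f"{name}: {inp.value:,.0f} from {inp.source} as of {inp.as_of}")

An analyst who can run that loop does not have to rebuild the answer from scratch. The path back to the inputs is already on the table.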

Traceability is not a nice-to-have. It is the condition for trust. And trust, it turns out, is the condition for everything else.

THE TEA: “You need that cohesive layer, or you do not have ownership of the experience. I see terrific solutions across the board. What I am not seeing is them starting to communicate.”  — Whitney Kidd, Preiss

THE PROCESS PROBLEM

The firms that crossed from pilot to production built shared meaning before they built anything else.

The most mature AI deployments at RETCON were not the most technically sophisticated. They were the most structurally intentional.

In retail real estate, landlords who once trusted their own intuition on tenant performance now build live dashboards from connected data sources, because tenants arrive at lease negotiations armed with traffic analytics and sales comps. The operators who win know their asset better than their tenants do before anyone sits down at the table.

In investment management, the firms with the clearest ROI story were the ones who spent years cleaning and aggregating data before the models existed to run on top of it. One team described building a market rent prediction model trained on a decade of structured industrial data. It now outperforms their own analysts on certain inputs. That outcome was not accidental. It was the product of a deliberate decision to treat data quality as a long-term competitive asset.

The word for what these firms built, before the AI arrived, is ontology. Shared meaning. Consistent definitions. A single language for what an asset is, what a tenant relationship represents, and what a metric means, one that travels intact from raw input to analyst output.

Without it, every new tool reasons against a different version of the same reality. The problem does not compound slowly. It compounds fast. Surprise, surprise: it always goes back to ontology.
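
At its smallest, an ontology in code is just a shared vocabulary plus an explicit mapping from each source system's fields into that vocabulary. The sketch below is hypothetical; the system names, field names, and values are invented for illustration.

    # Two systems describe the same lease in different words.
    SOURCE_RECORDS = {
        "accounting":  {"net_rent": 118_000, "sqft": 52_000},
        "leasing_crm": {"base_rent_annual": 118_000, "rentable_area": 52_000},
    }

    # The ontology: one shared term, one explicit mapping per source.
    ONTOLOGY = {
        "annual_base_rent": {"accounting": "net_rent", "leasing_crm": "base_rent_annual"},
        "rentable_sqft":    {"accounting": "sqft", "leasing_crm": "rentable_area"},
    }

    def canonical(source: str, record: dict) -> dict:
        # Translate a raw record into the shared vocabulary.
        return {term: record[fields[source]] for term, fields in ONTOLOGY.items()}

    for source, record in SOURCE_RECORDS.items():
        print(source, canonical(source, record))
    # Both systems now say the same thing in the same words:
    #   accounting  {'annual_base_rent': 118000, 'rentable_sqft': 52000}
    #   leasing_crm {'annual_base_rent': 118000, 'rentable_sqft': 52000}

Every mapping is explicit and reviewable, so when definitions conflict, the conflict surfaces in the mapping table rather than in an agent's answer.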

THE TEA: “Data is the biggest moat. We aggregated and cleaned it for a decade. Now our model is more accurate than our investment team on certain inputs.”  — Ohad Porat, Farpoint

THE TECHNOLOGY PROBLEM

The tools are accelerating. The infrastructure beneath them is not keeping pace.

The speed of capability growth is not theoretical anymore. Tasks that took teams weeks eighteen months ago now run in hours. Credit analysis compressed from a week to 20 minutes. Lease abstractions that once required specialist reviewers running automatically at scale. Deal intake pipelines eliminating entire categories of manual handoff.

These are not experiments. They are production systems. And they were built by firms that made a specific choice: structure the data before deploying the model, not after.

The parallel story at RETCON was harder to hear but impossible to miss. Firms pushing AI into stacks where the same asset carries different identifiers across six systems. Governance frameworks written after the agents are already running. A vendor with autonomous access to live data that went outside the agreed scope, and no one caught it until the damage was done.
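
One piece of that fragmentation is directly mechanical: the same asset known by a different identifier in every system. A crosswalk, however it is implemented, is the minimal fix. The sketch below is an assumption-laden illustration, with made-up system names and IDs; the point is that an unmapped identifier fails loudly instead of letting an agent guess.

    # Hypothetical crosswalk: every (system, local ID) pair resolves to one canonical asset.
    CROSSWALK = {
        ("property_mgmt", "P-10023"):     "asset_0001",
        ("accounting",    "CC-7741"):     "asset_0001",
        ("leasing_crm",   "555-main-st"): "asset_0001",
        ("valuation",     "0001-BLDG-A"): "asset_0001",
    }

    def resolve(system: str, local_id: str) -> str:
        canonical_id = CROSSWALK.get((system, local_id))
        if canonical_id is None:
            # Fail loudly: an unmapped identifier is a governance gap, not a guess.
            raise LookupError(f"No canonical asset for {system}:{local_id}")
        return canonical_id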

The CTOs and CIOs who have navigated this said the same thing from every angle. Governance is not a constraint on innovation. It is the condition for it. Data quality is not the prerequisite for starting. It is the prerequisite for scaling. You cannot govern what you cannot trace. And you cannot trust what you cannot govern.

The firms still in pilot purgatory are not stuck because of model quality. They are stuck because the data beneath the models means something different in every system that touches it. Agents do not guess through fragmented context. They hallucinate through it. In real estate, at the scale these firms are operating, that is not a curiosity. It is a liability that grows every quarter it goes unaddressed.

THE TEA: “Data governance is key. Once you have high quality data, you can do all the things with AI. Otherwise you are stuck in a purgatory of pilots and POCs and not meaningful value.”  — Keats Ali, Lincoln Property Company

The hangover is not inevitable. But it is coming for firms that skip the work.

Real estate did not get here by accident. The tools arrived fast, the use cases were compelling, and the pressure to show AI progress to investors and boards was real. Pilots launched. Budgets moved. Announcements were made.

What did not move at the same pace was the data layer beneath all of it. The shared definitions. The governance. The traceable lineage. The single source of truth for what an asset is, who owns it, how it performs, and what that performance means.

That is the hangover. Not a crash. A slow compounding of technical debt, trust deficits, and misaligned outputs that becomes visible at exactly the moment you need your systems to perform.

The firms that came out of RETCON with a clear path forward were not the ones with the most tools. They were the ones who understood that structure is not the enemy of speed. It is the only thing that makes speed hold.


Why Cherre

Cherre was built to solve the meaning problem before the scale problem.

  • Cherre CONNECT unifies internal and external data across every system in your stack. 
  • Cherre CORE establishes shared ontology: one language, one set of definitions, across every department and partner. 
  • Cherre QUALITY makes trust visible, traceable, and auditable before data ever reaches an agent. 
  • Cherre ALPHA delivers intelligence that reasons rather than guesses.

That is the difference between AI that compounds and AI that hallucinates. Between agents that scale and agents that expose you. Between advantage that builds and risk that surfaces only after it has already cost you something.