The Inconvenient Truth About CRE and AI: You’re Probably Not Ready (And That’s Costing You The Race)

Everyone’s telling you to adopt AI immediately or get left behind. We’re going to tell you something different: Most CRE teams – spanning enterprise to mid-market – aren’t ready for AI. And rushing into it without solving the foundation is exactly how you fall behind.

In 2024, McKinsey, Goldman Sachs, and PwC promised trillions in productivity gains and a new industrial revolution powered by generative AI. Enterprise leaders filled slides with hockey-stick forecasts. Analysts compared ChatGPT’s release to the birth of the internet. But on the ground, reality told a messier story (Source: Misha Sulpovar, The AI Executive’s Handbook: Harness the Ungoverned Machine, 2025).

MIT Sloan’s 2025 study found that roughly 95% of enterprise generative AI pilots failed to scale or deliver measurable P&L impact (Source: MIT Sloan / NANDA Report, The GenAI Divide: State of AI in Business 2025).

RAND echoed the pattern: over 80% of AI/ML projects fail, double the failure rate of traditional IT initiatives (Source: RAND Corporation, The Root Causes of Failure for Artificial Intelligence Projects, 2024).

Gartner added its own reality check – fewer than half of AI projects ever reach production, and nearly one-third of GenAI pilots are abandoned after proof of concept (Source: Gartner, Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept by End of 2025).

RSM’s Middle Market AI Survey found that 91% of mid-market companies have adopted generative AI, yet 92% encountered challenges during rollout (Source: RSM US LLP, Middle Market AI Survey 2025: U.S. and Canada, 2025).

The failures across the board weren’t due to faulty models. They were structural.

The problem isn’t AI adoption. It’s that teams – regardless of size – are building on quicksand.

Read on to explore the real underlying issue further.

Why Most CRE AI Initiatives Are Failing

You came here for AI advice. But if your property data lives in 47 spreadsheets, your tenant information exists in three systems that don’t talk to each other, and your lease abstracts are PDFs someone updates quarterly, AI isn’t going to save you. It’s going to amplify your chaos at scale – with confidence.

The gap between your ambitions and your ability to execute isn’t closing on its own. Teams are being asked to deliver enterprise-level outcomes under real operational constraints – whether that’s lean staffing, fragmented ownership, or systems that weren’t designed to work together. Meanwhile, competitors are pulling ahead. But here’s what rarely gets said: they’re not winning because they adopted AI faster. They’re winning because they fixed their data foundation first.

The Real Reason You Can’t Wait (It’s Not What You Think)

Here’s what’s happening right now: Everyone’s implementing some degree of “AI”. Your competitors included. Most of them are hitting the same walls you are – or would hit if you rushed in. But here’s the critical difference that’s emerging: A small group of teams are quietly solving the foundation problem first. They’re not louder or more visible than everyone else, but they’re building capabilities that will compound.

While most firms are stuck debugging why their AI recommended the same property twice or hallucinated tenant income, the teams with clean foundations are starting to analyze deal pipelines in minutes instead of days. While the majority wrestle with data reconciliation, a few are beginning to automate lease abstraction that actually works.

The urgency isn’t that your competitors have already won. It’s that the window to build the right foundation is closing. Every quarter you wait, you’re behind. But here’s the twist: Every quarter you spend implementing AI on broken data, you’re further behind. You’re not just standing still – you’re running in the wrong direction while burning resources.

This isn’t about efficiency anymore; it’s about survival. The market is splitting into two camps:

  • CAMP 1: The small group solving foundations now, who will scale AI capabilities exponentially.
  • CAMP 2: Everyone else, who will spend years debugging, rebuilding, and patching problems that should have been solved at the start.

The Three Barriers Nobody’s Solving (And Why That Matters)

1. The Data Foundation Problem Is Uniquely Hard in CRE

Gartner predicts that through 2026, organizations will abandon 60% of AI projects unsupported by AI-ready data (Source: Gartner, Lack of AI-Ready Data Puts AI Projects at Risk, 2025). But in commercial real estate, “data quality” is a massive understatement.

Your data isn’t just messy – it’s scattered across systems that don’t talk to one another. Property data lives in building management systems, CRMs, maintenance logs, and leases. Tenant relationships exist in one system but not another. Legal entities show up twelve different ways across your portfolio. Lease details are trapped in PDFs. Historical context disappears every time you migrate systems.

AI doesn’t fix this. It amplifies it. Your recommendation engine will confidently suggest investments because your system thinks “125 Main Street” and “125 Main St.” are two different properties. Your predictive model will forecast cash flows using outdated tenant rosters because nobody standardized how move-outs get recorded across acquisitions.

Before you can use AI to predict NOI trends or optimize portfolio performance, you need:

  • Properties that resolve to single entities across every system, every acquisition, every historical record
  • Tenant relationships that connect to actual spaces, actual companies, actual cash flows – and track how those relationships changed over time
  • Legal entities that resolve correctly despite variations in naming, structure, and ownership
  • Standardized data that speaks one language while preserving the context of what it meant in each source system

This isn’t work you can do manually. The scale makes that impossible. A firm managing 50 properties may be dealing with 5,000 tenants across 15 years of leases, tied to thousands of legal entities and guarantors. An enterprise portfolio multiplies that complexity across hundreds or thousands of assets, decades of history, and constant organizational change. In both cases, you’re talking about millions of data points that must resolve correctly for AI to work.
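To make the scale problem concrete, here is a toy sketch of the kind of address normalization an entity resolution step performs. The abbreviation map and unit-word list are illustrative assumptions; production resolvers rely on far larger dictionaries plus fuzzy matching and geocoding.

```python
import re

# Illustrative maps; real resolvers use much larger dictionaries,
# fuzzy matching, and geocoding (these entries are assumptions).
SUFFIXES = {"street": "st", "avenue": "ave", "road": "rd"}
UNIT_WORDS = {"suite", "ste", "unit", "apt", "floor", "fl"}

def normalize_address(raw: str) -> str:
    """Collapse common variants so '125 Main Street', '125 Main St.',
    and '125 Main Street, Suite 100' all yield the same key."""
    tokens = re.sub(r"[^\w\s]", " ", raw.lower()).split()
    out = []
    for tok in tokens:
        if tok in UNIT_WORDS:   # drop the unit designator and everything after it
            break
        out.append(SUFFIXES.get(tok, tok))
    return " ".join(out)

def resolve(addresses):
    """Group raw address strings by normalized key, so variants of the
    same property land in one bucket."""
    groups = {}
    for raw in addresses:
        groups.setdefault(normalize_address(raw), []).append(raw)
    return groups
```

Even this toy version shows why the work has to be programmatic: the same few rules apply uniformly across millions of records, where manual reconciliation would never keep pace.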

2. Governance That Actually Scales

In a McKinsey global survey, 65% of respondents say their organizations regularly use generative AI (Source: McKinsey, The State of AI in Early 2024, 2024), and external studies show that many employees adopt AI tools on their own initiative, often without formal approval. But governance isn’t just about privacy – it’s about trust. Your stakeholders need confidence that your AI outputs are accurate, private, and won’t break as you grow.

Three principles build that trust:
  1. Accuracy: Establish human-in-the-loop processes to review AI outputs before they inform decisions. Your AI can generate insights at scale, but human judgment validates them.
  2. Privacy: Set clear guardrails around data usage. Your teams need to know what data AI can access, how it’s used, and what stays protected.
  3. Scalability: Choose solutions that grow with you. The tools you implement today need to integrate seamlessly with your tech stack and remain robust as your portfolio expands – without creating security vulnerabilities.
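The accuracy principle above can be sketched as a simple review gate: AI outputs wait in a queue until a human approves them, and only approved items flow into downstream decisions. The class and field names here are hypothetical, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Minimal human-in-the-loop gate (illustrative): AI outputs sit in
    `pending` until a reviewer approves them; only `approved` items
    should ever inform decisions."""
    pending: list = field(default_factory=list)
    approved: list = field(default_factory=list)

    def submit(self, ai_output: dict) -> None:
        """An AI system drops its output here instead of acting directly."""
        self.pending.append(ai_output)

    def review(self, index: int, ok: bool, reviewer: str) -> None:
        """A human accepts or rejects one pending item, leaving an audit trail."""
        item = self.pending.pop(index)
        if ok:
            item["reviewed_by"] = reviewer
            self.approved.append(item)

# Hypothetical usage: an AI-generated insight is held, then approved.
queue = ReviewQueue()
queue.submit({"insight": "Tenant X renewal risk: high"})
queue.review(0, ok=True, reviewer="asset_manager")
```

The point of the sketch is the shape, not the code: a hard boundary between generation and decision, with a named human on the record for every item that crosses it.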

3. The AI Champion (Not the AI Specialist)

Many teams assume their biggest blocker to AI adoption is a lack of in-house expertise. The instinctive response is to wait for the perfect hire – an AI specialist, a data scientist, or a dedicated innovation team – before getting started.

That’s a mistake.

You don’t need to pause progress until the org chart is perfect. What you need is an AI champion – someone from your existing team who can bridge silos, connect AI efforts to real business problems, and move work forward. This person doesn’t need to be a data scientist. They need to understand how your organization actually operates, where the friction lives, and which problems are worth solving first.

Start small. Choose one high-volume, time-consuming task – standardizing property data, automating lease abstract updates, or generating market comps. Build momentum with quick wins. Progress in AI doesn’t come from headcount alone. It comes from focus, ownership, and disciplined execution.

What Solving the Foundation Actually Looks Like

This is where theory meets reality. You can’t manually clean millions of data points. You can’t build entity resolution engines from scratch. You can’t create semantic models that understand what “property” means across every system you’ve ever used. And you definitely can’t do it while also running your business.

This is the problem Cherre was built to solve.

The Industry’s Largest Knowledge Graph for Commercial Real Estate

Cherre connects over 4 billion legal entities, 2 billion addresses, 160 million parcels, and 110 million buildings. This isn’t just scale – it’s connectivity and insights at a level that would otherwise be impossible for real estate teams to achieve.

Our universal data and semantic models understand commercial real estate – not just as data fields, but as relationships across assets, entities, and time. When you bring your data to Cherre, our award-winning physical and legal entity resolution engines do the hard work automatically:

  • Cleaning and standardization: Your messy data gets automatically cleaned, standardized, and mapped to our models
  • Entity resolution: “125 Main Street,” “125 Main St.,” and “125 Main Street, Suite 100” resolve to the same property. John Smith the guarantor connects to J. Smith on the lease and John T. Smith in your CRM.
  • Historical context: We preserve how data changed over time – tenant move-outs, ownership transfers, lease modifications – so your AI understands what happened, not just what exists now.
  • Flexibility: Extend or modify our models for your unique use cases without breaking the foundation. 
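Cherre’s resolution engines are proprietary, so the guarantor example above can only be illustrated with a toy heuristic. The logic below (surnames must agree, given names must be compatible with each other’s initials) is a simplified assumption, not Cherre’s actual method.

```python
def name_tokens(name: str):
    """Lowercase, strip periods, split into tokens."""
    return name.lower().replace(".", "").split()

def could_match(a: str, b: str) -> bool:
    """Toy heuristic: two person names may refer to the same individual
    if surnames agree and every given name or initial in one name is
    compatible, in order, with the other's."""
    ta, tb = name_tokens(a), name_tokens(b)
    if ta[-1] != tb[-1]:                      # surnames must agree
        return False
    shorter, longer = sorted((ta[:-1], tb[:-1]), key=len)
    it = iter(longer)
    for tok in shorter:
        for cand in it:
            # a token matches an equal token, or an initial of it
            if tok == cand or tok == cand[0] or cand == tok[0]:
                break
        else:
            return False
    return True
```

Under this heuristic, “John Smith”, “J. Smith”, and “John T. Smith” are all mutually compatible, while “Jane Smith” is not. A production engine layers in far more signal – addresses, roles, corporate relationships – but the core idea of reconciling variants into one entity is the same.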

This is the unsexy, foundational, absolutely critical work that makes AI possible. Because here’s the thing: Whoever wins data, wins AI.

The Bottom Line

88% of teams using generative AI report it has impacted their organization more positively than expected (Source: Financial Content, The AI Paradox, 2025). The difference between those teams and the ones struggling? Foundation.

Teams with strong data foundations are using AI to automatically collect and standardize data, automate high-volume workflows, and surface insights that would otherwise require significant manual effort. The result: organizations operating with far greater leverage than their headcount would suggest.

You came here expecting us to sell you AI. We’re selling you data infrastructure instead. Because after building the industry’s largest knowledge graph for commercial real estate, we’ve seen what happens when teams skip this step: they waste months and budget on AI that doesn’t work, then come back to solve the foundation anyway.

Are you ready to do this right?

Sources

MIT Sloan / NANDA Report. The GenAI Divide: State of AI in Business 2025, 2025.

RAND Corporation. The Root Causes of Failure for Artificial Intelligence Projects, 2024.

Gartner. Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept by End of 2025.

RSM US LLP. Middle Market AI Survey 2025: U.S. and Canada, 2025.

Gartner. Lack of AI-Ready Data Puts AI Projects at Risk, 2025.

Financial Content. The AI Paradox: Commercial Real Estate Grapples with High Adoption, Low Achievement, 2025.

McKinsey. The State of AI in Early 2024: Gen AI Adoption Spikes and Starts to Generate Value, 2024.

Misha Sulpovar. The AI Executive’s Handbook: Harness the Ungoverned Machine, 2025.