In 2025, 'AI in real estate portals' stopped being a press-release concept and became a product reality. But a second reality arrived at the same time: the market can see the **capability** faster than it can see the **controls**.
GPPI 2025 tracks public AI feature disclosures as signals. The goal is not to count every experiment. The goal is to understand the shape of adoption — and the governance gap that comes with it.
- In the GPPI 2025 AI announcements sample (n=24 disclosures), activity is back-loaded: 9 disclosures in Q4 (37.5%). This reflects disclosure momentum, not necessarily the full extent of internal deployment.
What the 2025 disclosure data actually says
- Captured AI disclosures in 2025: **n=24**.
- Disclosures are back-loaded: **37.5%** (9 of 24) occur in Q4.
- Governance visibility is near-zero in disclosure text: **0** mention a maturity stage, **0** mention safeguards or auditability, and **0** name a model partner or provider.
- Only **1** captured disclosure is explicitly framed as trust and safety (e.g., fraud or duplicate detection).
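The headline shares above follow directly from the counts. As an illustrative sanity check (field names are made up for this sketch, not the dataset's actual schema):

```python
# Headline counts from the GPPI 2025 AI disclosure sample (n=24).
# Field names are illustrative, not the dataset's real schema.
sample = {
    "total_disclosures": 24,
    "q4_disclosures": 9,
    "governance_mentions": 0,   # maturity stage, safeguards, model partner
    "trust_safety_framed": 1,
}

q4_share = sample["q4_disclosures"] / sample["total_disclosures"]
print(f"Q4 share: {q4_share:.1%}")  # → Q4 share: 37.5%
ts_share = sample["trust_safety_framed"] / sample["total_disclosures"]
print(f"Trust/safety share: {ts_share:.1%}")  # → Trust/safety share: 4.2%
```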
- When AI influences search, ranking, or summarization, it changes who gets seen and why. That makes AI governance visibility economically relevant: it reduces disputes, supports regulator confidence, and protects pricing power when partners ask 'how does this work?'
Use-case mix: what portals are aiming AI at
- In the 2025 AI disclosures sample (n=24), announcements skew toward discovery/conversion and content/media use cases. Only one captured disclosure is explicitly framed as trust and safety.
Most disclosures cluster in consumer-facing discovery and media experiences (assistants, conversational search, AI-generated descriptions and highlights). Operational AI exists too, but the disclosure emphasis is on what customers can see.
The governance visibility checklist
'Governance visibility' does not mean publishing a 40-page policy. It means being able to answer three questions quickly and credibly:
1. **Where is AI in the loop?** (search, ranking, content creation, support, safety)
2. **What are the constraints?** (guardrails, human review, provenance labels, thresholds, escalation paths)
3. **What can you evidence?** (testing, auditability, correction loops, incident response)
- For each AI feature, publish a one-page note that includes the maturity stage (beta/GA), what the model can and cannot do, what is reviewed by humans, how users can report errors, and who owns accountability.
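The one-page note described above can be sketched as a structured record, which also makes completeness checkable before publication. All field names here are hypothetical; any real portal would define its own schema:

```python
from dataclasses import dataclass, field

# Minimal sketch of a per-feature "governance visibility" note.
# Field names are hypothetical, not a published GPPI schema.
@dataclass
class AIFeatureDisclosure:
    feature: str                      # e.g. "conversational search"
    maturity: str                     # "beta" or "GA"
    can_do: list = field(default_factory=list)
    cannot_do: list = field(default_factory=list)
    human_review: str = ""            # what humans review, and when
    error_reporting: str = ""         # how users flag wrong outputs
    accountable_owner: str = ""       # named role that owns the feature

    def is_complete(self) -> bool:
        """Publishable only when every question has an answer."""
        return all([self.maturity, self.can_do, self.cannot_do,
                    self.human_review, self.error_reporting,
                    self.accountable_owner])

note = AIFeatureDisclosure(
    feature="AI listing descriptions",
    maturity="beta",
    can_do=["draft descriptions from structured listing data"],
    cannot_do=["verify legal or pricing claims"],
    human_review="agent approves every draft before publication",
    error_reporting="'Report this description' link on each listing",
    accountable_owner="Head of Content Integrity",
)
print(note.is_complete())  # → True
```

The point of the `is_complete` gate is that a disclosure with any unanswered question is not a disclosure; it is a press release.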
What leaders should do next
- Adopt a minimum governance disclosure format for every AI surface that affects visibility or trust.
- Instrument 'representation incidents' as a new operational category (wrong summaries, wrong routing, hallucinated claims).
- Prioritize trust and safety AI (fraud, duplicate detection, anomaly detection) with the same urgency as consumer features.
- Prepare explainability for paid vs. organic vs. personalized outputs, especially when AI is involved.
- This signals dataset contains 24 captured public disclosures in 2025. Coverage is partial by design. GPPI reports what can be evidenced in disclosure text and does not infer hidden controls.
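The 'representation incidents' recommendation above can be sketched as a small operational log. The categories and log format here are assumptions, not an existing standard:

```python
from collections import Counter
from datetime import datetime, timezone

# Sketch of "representation incidents" as an operational category.
# Categories and the record format are assumptions for illustration.
INCIDENT_CATEGORIES = {"wrong_summary", "wrong_routing", "hallucinated_claim"}

incident_log = []

def record_incident(category: str, surface: str, detail: str) -> None:
    """Append one representation incident to the operational log."""
    if category not in INCIDENT_CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    incident_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "category": category,
        "surface": surface,   # e.g. "search", "listing_summary"
        "detail": detail,
    })

record_incident("hallucinated_claim", "listing_summary",
                "summary claimed a garage the listing does not mention")
record_incident("wrong_routing", "support_assistant",
                "fraud report routed to the general inbox")

by_category = Counter(e["category"] for e in incident_log)
print(by_category.most_common())
```

Treating these as counted, categorized incidents (rather than ad hoc support tickets) is what turns 'AI made a mistake' into an auditable trend a regulator or partner can be shown.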