GPPI 2025 Signals

    AI and the Trust Gap

    Author: Fouad Bekkar
    Founder of Coraly (GPPI)
    Market commentary - Not investment advice
    January 20, 2026 · 6 min read

    Key Takeaways

    1. AI adoption and trust erosion are not separate trends — they are coupled. Every AI-influenced surface that users cannot explain becomes a potential trust liability.
    2. GPPI data shows governance visibility in AI disclosures is near-zero: maturity, safeguards, and accountability are almost never mentioned.
    3. The portals most exposed to trust risk are those scaling AI-driven discovery and ranking without parallel investment in transparency and escalation.
    4. Addressing the AI-trust gap is not a compliance exercise — it is a pricing power and retention strategy.

    Two GPPI signal threads have been running in parallel through 2025: AI adoption is accelerating in property portals, and trust risk is quietly accumulating. These two threads are not independent.

    When AI influences search ranking, listing summarization, or lead routing — and those decisions are invisible to users and partners — the trust gap grows. Not because AI is inherently dangerous, but because **unexplained influence erodes confidence in fairness**.

    How AI expands the trust gap

    The trust gap GPPI measures is the distance between how a portal appears in aggregate satisfaction metrics and what users experience when systems fail. AI introduces three new fault lines into that gap:

    • **Ranking opacity.** When AI determines which listings appear first, agents and buyers have no clear model for why. Suspicion of pay-to-win dynamics rises even when they do not exist.
    • **Content at scale.** AI-generated descriptions and media lower the cost of both high-quality content and convincing fake content. Portals that use AI for content creation without visible provenance controls inherit the same credibility risk as platforms that allow it freely.
    • **Personalization without accountability.** When search results are personalized, different users see different markets. If that personalization serves commercial rather than relevance goals — and users sense it — the trust loop tightens quickly.

    GPPI Data Note: Governance visibility in AI disclosures
    • In GPPI's 2025 AI disclosures sample (n=24 announcements across portals in 15 countries), zero disclosures mentioned maturity stage (beta/GA), zero named a model partner or provider, and zero described safeguards or auditability mechanisms.
    • Only one disclosure was framed as trust and safety (fraud or duplicate detection). The rest clustered in consumer-facing discovery and content experiences.
    • This pattern confirms what GPPI signals on the trust side also show: AI capability is outpacing governance disclosure by a wide margin.

    Why the gap is a pricing power problem, not just a PR problem

    The trust-to-pricing-power loop GPPI tracks works as follows: when trust falls, scrutiny of monetization rises. When scrutiny rises, disputes with agency partners rise. When disputes rise, pricing power weakens.

    AI adds a new entry point to that loop. A portal that introduces AI-driven ranking without explaining how it interacts with paid placements hands partners a ready-made objection every time they receive a price increase proposal. The question is no longer just 'is the platform worth it?' — it becomes 'can I trust what the platform is doing with my data and my listings?'

    From the trust gap analysis
    • In the 2025 MEI consumer cohort (n=20 portals), UX gaps appear in 65% of portals, scam themes in 45%, and stale inventory themes in 40% (13, 9, and 8 of the 20 portals, respectively). These are reputational load signals: they do not count incidents; they measure how frequently themes surface across consumer feedback channels.
    • AI does not create these themes. But AI that is poorly governed can amplify all three: personalization that surfaces stale listings, content tools that lower barriers to fake listings, and opacity that reads as UX failure when things go wrong.

    What portals should do differently

    The practical agenda is not complex — but it requires treating governance as a product decision, not a legal formality.

    1. **Disclose the AI surface.** For each feature where AI influences what users see, publish a brief note: what it does, what it cannot do, and what human review exists. This does not require publishing model weights — it requires honesty about influence.
    2. **Separate paid, organic, and personalized.** As AI-driven personalization and paid placements coexist in the same results page, the distinction must be visible. Mixed signals create the worst kind of trust debt.
    3. **Instrument trust metrics alongside AI metrics.** Every time a team tracks AI-driven engagement uplift, it should also track complaint theme emergence, escalation volume, and partner dispute rates tied to the same surfaces (see the sketch after this list).
    4. **Prioritize trust and safety AI.** Fraud detection, duplicate suppression, and content provenance should receive the same product investment as consumer-facing AI. They are less visible but more protective.
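
    A minimal sketch of points 1 and 3 together, in Python: each AI-influenced surface gets a short disclosure record and a paired set of engagement and trust metrics that are reviewed side by side. The class names, fields, and thresholds below (AISurfaceDisclosure, SurfaceMetrics, trust_flags) are illustrative assumptions, not GPPI or portal-specific definitions.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AISurfaceDisclosure:
    """Short published note for one AI-influenced surface (point 1). Illustrative only."""
    surface: str            # e.g. "search_ranking", "listing_summary"
    what_it_does: str
    what_it_cannot_do: str
    human_review: str
    maturity: str           # "beta" or "ga"


@dataclass
class SurfaceMetrics:
    """Engagement and trust signals tracked for the same surface and period (point 3)."""
    surface: str
    period: date
    engagement_uplift_pct: float   # the AI metric the team already tracks
    new_complaint_themes: int      # complaint themes newly tied to this surface
    escalations: int               # support escalations tied to this surface
    partner_disputes: int          # agency disputes referencing this surface


def trust_flags(m: SurfaceMetrics) -> list[str]:
    """Flag surfaces where engagement gains coincide with rising trust load.
    Thresholds are placeholders; a real portal would calibrate its own."""
    flags = []
    if m.engagement_uplift_pct > 5 and m.escalations > 50:
        flags.append("engagement gain shadowed by escalation growth")
    if m.partner_disputes > 10:
        flags.append("partner dispute volume above tolerance")
    if m.new_complaint_themes > 3:
        flags.append("new complaint themes emerging on this surface")
    return flags


if __name__ == "__main__":
    note = AISurfaceDisclosure(
        surface="search_ranking",
        what_it_does="Re-orders organic results using predicted relevance.",
        what_it_cannot_do="Does not alter or reprice paid placements.",
        human_review="Ranking changes reviewed weekly by a search quality owner.",
        maturity="beta",
    )
    metrics = SurfaceMetrics(
        surface="search_ranking",
        period=date(2025, 12, 1),
        engagement_uplift_pct=7.2,
        new_complaint_themes=4,
        escalations=63,
        partner_disputes=12,
    )
    print(f"Disclosure ({note.maturity}): {note.what_it_does}")
    for flag in trust_flags(metrics):
        print(f"[{metrics.surface}] {flag}")
```

    The design choice that matters is the pairing itself: if a surface is instrumented for engagement uplift, it is also instrumented for complaint themes, escalations, and partner disputes, so a gain on one side cannot be reported without the load it creates on the other.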

    The competitive framing

    As AI becomes table stakes across portal products, the differentiation will not come from who shipped AI first. It will come from who built trust infrastructure that made their AI adoption credible — to users, to partners, and to regulators.

    Portals that treat governance visibility as overhead will find that every AI feature adds a small increment to the trust gap. Portals that treat it as infrastructure will find that it converts AI investment into durable pricing power.

    Data note
    • This analysis draws on GPPI's 2025 AI disclosures dataset (n=24) and MEI consumer trust signals (n=20 portals). Both datasets cover public and accessible evidence only. GPPI does not infer internal AI practices from disclosure absences.

    Related Resources

    • GPPI 2025 Report: The full annual benchmark report
    • Methodology: How GPPI measures portal performance

    How to cite GPPI

    GPPI Research. "AI and the Trust Gap." Coraly GPPI Signals, January 20, 2026. https://coraly.ai/signals/ai/ai-and-the-trust-gap