BOREALIS ACADEMY


The State of AI Trust: 2026 Industry Report

Executive Summary

The autonomous AI agent economy has entered a critical inflection point. Autonomous agents now manage customer interactions, execute trades, diagnose medical conditions, and conduct research with minimal human intervention. Industry estimates suggest that enterprise AI agent deployment has grown 340% year-over-year through 2025-2026, accelerating across financial services, healthcare, customer support, and scientific research.

Yet a fundamental asymmetry persists: agents can do far more than we can verify they do reliably. The capability-trust gap has widened precisely as the stakes have risen. Model updates, system integrations, and dynamic execution environments create emergent behaviors that escape traditional evaluation frameworks. When trust failures occur — and they do, with increasing frequency — the consequences are material: financial losses, safety incidents, reputational damage, and regulatory exposure.

This report examines the current state of AI agent trust infrastructure, identifies critical gaps in existing approaches, and proposes a five-dimensional framework for continuous, independently verifiable trust certification. The path forward requires immediate action from developers, enterprises, regulators, and the industry at large. Organizations that invest in trust infrastructure now will dominate trust-gated markets. Those that delay will face compounding regulatory and competitive pressure.

1. The Rise of Autonomous AI Agents

The Shift from Tools to Agents

The distinction between tools and agents is fundamental. A tool responds to explicit user input: you ask a chatbot a question, it answers. An agent acts independently within constraints: it decides when to act, how to act, and in what sequence. This autonomy is the defining feature of the current wave of AI systems.

Early AI systems were passive. They required humans in the loop at each decision point. Current systems operate with sparse human oversight, making dozens or hundreds of decisions per transaction. The shift from supervised inference to autonomous operation represents an order-of-magnitude increase in both capability and risk.


Enterprise Adoption Trajectory

Current trajectory indicates that enterprise AI agent adoption will exceed 65% of Fortune 500 companies by end of 2026. Early adoption is concentrated in financial services (trading systems, loan origination, fraud detection), healthcare (diagnostic support, treatment planning), and customer support (ticket routing, issue resolution, escalation management).

Smaller enterprises and mid-market companies are following a similar adoption curve with a 12-18 month lag. The reasons are economic: agents reduce operational costs by 40-60% in domains where they function reliably. Cost pressure is driving deployment faster than trust frameworks can mature.

Agent Categories and Deployment Patterns

The Capability-Trust Gap

Autonomous agents can perform tasks that no human evaluator could fully assess before deployment. A medical diagnostic agent might analyze 1,000 cases per day; pre-deployment testing evaluates perhaps 500 cases. A financial trading agent might execute 10,000 trades per day; testing covers discrete scenarios, not continuous market dynamics.

This gap is not about raw capability — it is about verification. We can measure whether an agent completes a task. We cannot easily measure whether it will continue to behave correctly across all contexts over time. This is the core trust problem.

2. The Trust Crisis

The Sufficiency Problem

Traditional trust verification relies on point-in-time testing: before deployment, evaluate the system against a test set, verify performance meets thresholds, deploy. This approach worked for static systems. It is inadequate for autonomous agents.

The reasons are structural. First, an agent's behavior depends on its training data and model weights, which change with each update. A model that passed testing in January may behave differently after a February update. Second, agents operate in environments — they interact with dynamic inputs, external systems, and emerging conditions that testing cannot fully anticipate. Third, agents exhibit emergent behaviors when integrated with other systems; testing in isolation does not predict behavior in production.

Point-in-time testing is not obsolete; it is necessary but insufficient. Continuous monitoring is the missing layer.

Model Updates and Performance Drift

Organizations update models constantly. Improvements in accuracy sometimes introduce regressions in safety dimensions that were not explicitly tested. A trading agent might improve profit by 2% while increasing max single-trade loss by 15% — a trade-off that testing did not capture.

Industry analysis suggests that 30-40% of model updates in production systems introduce unexpected behavioral changes in at least one dimension. These are often caught by manual monitoring, but detection lag (the time between update deployment and change identification) averages 2-4 weeks.
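As a rough illustration of how automated post-update monitoring could shrink that detection lag, the sketch below flags drift by comparing a recent metric window against a pre-update baseline. The metric, window sizes, and z-score threshold here are illustrative assumptions, not a prescribed method:

```python
import statistics

def drift_detected(baseline: list[float], recent: list[float],
                   z_threshold: float = 3.0) -> bool:
    """Flag drift when the recent window's mean deviates from the
    baseline mean by more than z_threshold baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return statistics.mean(recent) != mu
    z = abs(statistics.mean(recent) - mu) / sigma
    return z > z_threshold

# Hypothetical example: daily max single-trade loss (%) before and
# after a model update. The post-update window sits well above baseline.
baseline = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.9]
recent = [5.6, 5.9, 6.1, 5.8, 6.0]
print(drift_detected(baseline, recent))  # flags the shift
```

Run continuously against each monitored dimension, a check like this surfaces the trade-off drift described above within days rather than weeks.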


Integration Complexity and Emergent Behavior

Agents do not operate in isolation. A customer service agent integrates with knowledge bases, CRM systems, billing systems, and escalation workflows. A financial agent integrates with market data feeds, risk models, and execution infrastructure. This integration complexity creates emergent behaviors that component testing cannot predict.

In one documented case, a customer service agent repeatedly escalated complex billing questions to human specialists, as designed. After integration with a new knowledge base, the agent began providing incorrect refund information, drawing on incompletely validated knowledge-base content. The problematic behavior emerged only after production deployment, despite comprehensive component testing.

The Persistent Black Box Problem

Explainability research has advanced significantly. Model cards document capabilities and limitations. SHAP values and attention mechanisms provide some visibility into model decisions. Yet fundamental opacity remains: we can describe how a neural network produced an output, but explaining why it should behave reliably across all contexts remains unsolved.

This is not a deficiency in current explainability tools. It is a property of high-dimensional function approximation. Increasing model scale and capability tends to increase opacity. The trade-off between capability and interpretability is not absolute, but it is substantial.

Real Consequences of Trust Failures

Trust failures in production systems have documented consequences. In financial services, incorrect agent decisions have resulted in erroneous trades (losses: $2M-50M+ per incident). In healthcare, diagnostic agents with undetected bias have led to delayed diagnoses and treatment complications. In customer service, agents operating beyond their training distribution have generated customer churn and regulatory complaints.

These incidents are not outliers; they are the predictable result of deploying complex systems without adequate continuous verification. Organizations incur costs not only from the failures themselves but from loss of customer confidence, regulatory fines, and remediation efforts.

3. The Regulatory Landscape

EU AI Act Framework

The European Union AI Act enters full enforcement in August 2026. It establishes a risk-based classification system: prohibited AI (e.g., real-time biometric surveillance), high-risk AI (with mandatory conformity assessments), and general-purpose AI (with transparency requirements).

Autonomous agents deployed in financial services, hiring, criminal justice, and medical diagnosis fall into the high-risk category. Mandatory requirements include:

  • Technical documentation and risk assessments
  • Human oversight procedures
  • Monitoring and logging of agent decisions
  • Evaluation of fairness and performance drift
  • Post-market surveillance
The regulation is prescriptive about what must be done (documentation, monitoring, human oversight) but not about how to do it. This leaves operational details to implementers, creating a compliance gap: organizations understand the regulatory requirements but lack standardized methods to demonstrate compliance.

    NIST AI Risk Management Framework

    Published in January 2023, the NIST AI Risk Management Framework (AI RMF) provides a foundation for AI risk assessment across four functions: Map, Measure, Manage, and Monitor. The framework does not mandate specific compliance mechanisms but outlines dimensions of risk that organizations should address:

  • Performance risks (accuracy, robustness, fairness)
  • Security risks (adversarial attacks, model poisoning)
  • Privacy risks (training data leakage, membership inference)
  • Accountability and transparency risks
  • Systemic risks (cascading failures, market concentration)
    The AI RMF is widely referenced by regulators globally, including in Canada (AIDA), the UK (pro-innovation approach), and emerging frameworks in Asia-Pacific regions. It provides a common language but does not establish enforcement mechanisms or certification standards.

    Emerging National Approaches

    The Compliance Gap

    Industry estimates suggest that fewer than 20% of organizations deploying high-risk AI systems have formal compliance programs aligned with EU AI Act or NIST AI RMF requirements. The gap exists for structural reasons: compliance requires continuous monitoring, detailed documentation, and independent assessment — capabilities most organizations do not yet possess.

    Organizations understand the regulatory direction but lack tools to implement it cost-effectively at scale.

    Why Regulation Alone Is Insufficient

    Regulation establishes floors, not ceilings. It defines minimum requirements for documented oversight and human involvement but does not solve the core technical problem: how to verify that autonomous agents behave reliably over time.

    Regulations also lag behind technology. The EU AI Act's provisions reflect technology capabilities circa 2023-2024. By the time regulatory provisions enter enforcement, the technology landscape has evolved. The market needs trust infrastructure that can adapt as quickly as the technology does.

    4. Current Trust Approaches and Their Limitations

    Model Cards and Static Documentation

    Model cards, introduced by Google in 2018, document intended use, performance benchmarks, fairness metrics, and known limitations. They are a substantial improvement over undocumented systems.

    Yet they are inherently static. A model card documents the agent's properties at a point in time. Once deployed and operating in production, the model card's claims may diverge from actual behavior. Updates require manual review and documentation, which is labor-intensive and subject to documentation lag.

    For agents operating with continuous updates and in complex integration environments, model cards are a necessary but insufficient trust signal.


    AI Auditing Firms

    Third-party auditing has emerged as a trust mechanism. Specialized firms (EY, Deloitte, and domain-specific consultancies) conduct pre-deployment assessments of AI systems, evaluating performance, fairness, security, and compliance readiness.

    Audits are valuable. They are independent, which mitigates conflicts of interest inherent in self-assessment. They provide credibility with regulators and customers.

    However, audits are expensive (typically $50K-500K+ per assessment) and point-in-time (evaluation of the system as it exists on the audit date). They are not continuous. A system audited in Q1 2026 may diverge substantially by Q3 2026, and no evaluation detects this drift until the next audit (often a year later, if conducted).

    Auditing firms are also bottlenecked. Market analysis suggests that current auditing capacity serves fewer than 5% of organizations deploying high-risk AI systems. The bottleneck will persist as long as auditing is manual and episodic.

    Internal Testing and Monitoring

    Most organizations conducting risk-aware AI deployments implement internal testing programs: pre-deployment evaluation, ongoing performance monitoring, and incident response procedures. These are essential.

    Internal testing is not sufficient for several reasons. First, it is not independent; it is subject to internal organizational incentives and blind spots. Second, it is reactive rather than proactive; issues are typically identified through incident reports rather than through systematic anomaly detection. Third, it is not verifiable externally; customers and regulators have no reliable way to assess the quality of internal testing.

    Internal testing is a minimum requirement, not a solution.

    Self-Certification and Conflicts of Interest

    Some organizations make public statements about their AI systems' safety, fairness, and compliance. These statements are self-certification without independent verification.

    Self-certification faces an inherent credibility problem: organizations have financial incentives to minimize reported issues and maximize claimed performance. The history of financial services (where firms self-reported on risk exposure before the 2008 crisis) demonstrates the limitations of self-certification as a trust mechanism.

    Self-certification is more credible when paired with independent verification, but independent verification is currently sparse and inconsistent.

    The Missing Layer

    The trust gap is not in testing or documentation, which organizations conduct in some form. The gap is in continuous, independent, verifiable trust scoring that allows the market to differentiate between agents based on demonstrated, monitored behavior over time.

    Organizations need a mechanism to demonstrate that their agents behave as claimed, continuously, and that claims can be verified by external parties (customers, regulators, business partners). This mechanism does not yet exist at scale.

    5. A Framework for Continuous AI Trust

    The Five-Dimension Model

    Borealis Protocol proposes a framework for measuring and certifying AI agent trust across five dimensions. These dimensions were selected to capture the critical aspects of agent behavior that matter to developers, enterprises, and regulators:

    #### 1. Constraint Adherence

    Does the agent follow its rules consistently? Agents operate within defined constraints: maximum trade sizes, escalation thresholds, action categories it is authorized to take. Constraint adherence measures how often an agent respects these boundaries across all execution contexts.

    This dimension captures the foundational property of safe autonomous operation: agents should not exceed their authorized scope. Violations indicate either insufficient constraint implementation or drift in the agent's learned behavior.
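As an illustration only (the report does not prescribe a measurement formula), constraint adherence can be operationalized as the fraction of decisions that stayed inside a declared bound. The trade-size constraint and figures below are hypothetical:

```python
def adherence_rate(trade_sizes_usd: list[float],
                   max_trade_usd: float = 100_000) -> float:
    """Fraction of proposed trades that respected the declared
    maximum-trade-size constraint (a hypothetical bound)."""
    compliant = sum(1 for t in trade_sizes_usd if t <= max_trade_usd)
    return compliant / len(trade_sizes_usd)

# One trade exceeds the authorized bound: 3 of 4 compliant.
trades = [12_000, 85_000, 101_500, 40_000]
print(adherence_rate(trades))  # 0.75
```

A real scoring pipeline would track many constraint types per agent; the principle is the same: violations are counted against the total decision volume.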

    #### 2. Decision Transparency

    Can stakeholders understand the agent's reasoning? This dimension measures whether the agent's decisions are documentable, explainable, and auditable. For high-stakes agents, transparency is not optional; it is a prerequisite for human oversight and regulatory compliance.

    Decision transparency includes decision logging (every decision is recorded), decision justification (the agent documents why it made the decision), and decision auditability (external parties can review the reasoning).
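A minimal sketch of what such decision logging might look like, assuming a simple JSON-lines audit sink; the record fields and agent names are illustrative, not a Borealis specification:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    agent_id: str
    context: str     # inputs the agent acted on
    action: str      # what the agent did
    rationale: str   # why, as documented by the agent
    timestamp: float

def log_decision(record: DecisionRecord, sink: list) -> None:
    """Serialize the record as one JSON line and append it to the sink,
    so external parties can later review the reasoning."""
    sink.append(json.dumps(asdict(record)))

audit_log: list[str] = []
log_decision(DecisionRecord("support-agent-7", "billing question #4411",
                            "escalate_to_human", "outside refund policy scope",
                            time.time()), audit_log)
print(len(audit_log))  # one auditable record
```

In production the sink would be durable, append-only storage rather than an in-memory list, but the three components (logging, justification, auditability) map directly onto the record fields.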

    #### 3. Behavioral Consistency

    Is the agent predictable across contexts? An agent that behaves one way in test scenarios but differently in production, or one way on Monday but differently on Friday, or one way with customer A but differently with customer B, is exhibiting inconsistency that indicates underlying problems.

    Behavioral consistency measures whether an agent's decisions across equivalent contexts remain stable. This is not about achieving perfect accuracy (which may not be achievable), but about ensuring that performance across contexts does not diverge unexpectedly.
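One simple way to operationalize this, purely as a sketch: present the agent with pairs of equivalent contexts and measure the agreement rate. The pairing scheme and decision labels below are hypothetical:

```python
def consistency_score(paired_decisions: list[tuple[str, str]]) -> float:
    """Fraction of equivalent-context pairs on which the agent's
    decisions agree. Each pair is (decision in context A, decision
    in the equivalent context B)."""
    agree = sum(1 for a, b in paired_decisions if a == b)
    return agree / len(paired_decisions)

# Hypothetical loan-decision pairs; one pair diverges.
pairs = [("approve", "approve"), ("deny", "deny"),
         ("approve", "deny"), ("escalate", "escalate")]
print(consistency_score(pairs))  # 3 of 4 pairs agree -> 0.75
```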

    #### 4. Anomaly Detection

    Are unexpected behaviors caught and flagged? Even well-designed agents can exhibit anomalous behavior due to adversarial inputs, distribution shifts, or model drift. The ability to detect and flag anomalies as they emerge (rather than weeks later in a retrospective review) is critical.

    This dimension measures how effectively an agent's operational environment detects and surfaces deviations from expected behavior patterns.

    #### 5. Audit Completeness

    Is every decision documented and verifiable? For agents operating in regulated domains, comprehensive auditability is not optional. Every decision must be logged with sufficient context that regulators, auditors, or courts can reconstruct the decision rationale.

    Audit completeness measures whether the agent's operational footprint is sufficient for external reconstruction and verification of decision history.

    Why These Five Dimensions

    These dimensions were chosen because, together, they capture the full scope of what "trust" means for an autonomous agent: operating within authorized scope, being transparent, behaving consistently, detecting anomalies, and maintaining comprehensive auditability.

    The Overlapping Weight System

    The Borealis Protocol trust score uses an overlapping weight system where the five dimensions do not sum to 1.0 but rather overlap with a total weight of 1.16. This design choice is intentional.

    In systems where weights sum to exactly 1.0, optimization pressure naturally leads toward maximizing a single dimension at the expense of others (a phenomenon known as Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure"). An organization could achieve a high trust score by excelling in transparency while neglecting constraint adherence, for instance.

    The overlapping weight system (1.16 total) creates redundancy. Organizations cannot maximize their trust score by excelling in a single dimension. They must maintain baseline performance across all five dimensions and then invest in depth across multiple dimensions.

    The specific allocation is:

  • Constraint Adherence: 0.25
  • Decision Transparency: 0.24
  • Behavioral Consistency: 0.23
  • Anomaly Detection: 0.22
  • Audit Completeness: 0.22
    The allocation reflects the relative importance of constraint adherence (the foundational safety property) while maintaining meaningful weight on all five dimensions. The overlapping structure incentivizes excellence across all dimensions rather than gaming a single metric.
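The report does not specify how the five weighted dimensions combine into a single score, so the sketch below makes one plausible assumption: a weighted sum of per-dimension scores in [0, 1], normalized by the 1.16 overlapping total so the result also lands in [0, 1]:

```python
WEIGHTS = {
    "constraint_adherence":   0.25,
    "decision_transparency":  0.24,
    "behavioral_consistency": 0.23,
    "anomaly_detection":      0.22,
    "audit_completeness":     0.22,
}  # overlapping total: 1.16

def trust_score(dimension_scores: dict[str, float]) -> float:
    """Weighted sum of per-dimension scores (each in [0, 1]),
    normalized by the overlapping weight total. Normalization is an
    assumption of this sketch, not a published Borealis formula."""
    total = sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)
    return total / sum(WEIGHTS.values())

# Hypothetical per-dimension scores for one agent.
scores = {"constraint_adherence": 0.95, "decision_transparency": 0.80,
          "behavioral_consistency": 0.85, "anomaly_detection": 0.70,
          "audit_completeness": 0.90}
print(round(trust_score(scores), 3))
```

Note how the structure resists gaming: because every dimension carries at least 0.22 of 1.16, even a perfect score on one dimension leaves the overall score dominated by the remaining four.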

    6. The Role of Blockchain in Trust Verification

    Why Immutability Matters for Trust Records

    Trust scores and audit records must be tamper-proof. If an organization could retroactively modify its historical trust scores or audit records, the scores lose credibility entirely. An agent could appear to have excellent historical performance when in fact its performance was poor.

    Immutable record-keeping ensures that once a trust score or audit finding is recorded, it cannot be altered. This prevents retroactive score manipulation and ensures that all parties (customers, regulators, the agent operator) have a consistent view of historical performance.

    Immutability does not prevent false initial claims, but it prevents the worse problem: rewriting history to hide past failures.

    Hedera Hashgraph as Infrastructure

    Hedera Hashgraph is an enterprise-grade distributed ledger optimized for high-volume, low-latency transaction recording. Unlike public blockchains (Bitcoin, Ethereum), Hedera is purpose-built for institutional use, supporting:

  • High throughput (thousands of transactions per second)
  • Low latency (1-5 second finality)
  • Formal security guarantees (Byzantine fault tolerance)
  • Compliance with governance structures (Hedera Governing Council)
  • Cost-efficient operation
    Hedera does not require public transparency; organizations can operate private subnets while maintaining cryptographic verification. This makes it suitable for enterprise trust scoring: agents submit audit records to Hedera, which timestamps and anchors them, making them immutable and verifiable by any interested party.

    The Verification Chain

    The verification chain works as follows:

  1. Audit — The agent operates continuously, decisions are logged and evaluated against the five-dimension framework, and audit data is compiled.
  2. Hash — The audit data is cryptographically hashed, creating a unique digital fingerprint that is sensitive to any change in the underlying data.
  3. On-Chain Record — The hash is submitted to Hedera, which records it with a timestamp and cryptographic proof of inclusion.
  4. Public Verification — Any party can verify that a given audit record matches the on-chain hash, confirming that the record has not been altered since it was submitted.

    This chain prevents retroactive score manipulation. Once an audit is submitted to Hedera, the only way to claim a different historical score is to submit a different audit — which creates a visible conflict that any observer can identify.
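The hash-and-verify portion of the chain above can be sketched with an ordinary cryptographic hash. This is illustrative only: it uses SHA-256 over canonical JSON and simulates the on-chain anchor as a stored digest; a real deployment would submit the digest via the Hedera SDK, which is omitted here:

```python
import hashlib
import json

def fingerprint(audit_record: dict) -> str:
    """SHA-256 fingerprint of an audit record. Canonical JSON
    (sorted keys) so identical data always hashes identically."""
    canonical = json.dumps(audit_record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def verify(audit_record: dict, anchored_hash: str) -> bool:
    """Any party recomputes the hash and compares it to the anchored
    value; any alteration of the record changes the fingerprint."""
    return fingerprint(audit_record) == anchored_hash

# Hypothetical audit record; the anchor step is simulated locally.
record = {"agent_id": "trader-3", "period": "2026-02",
          "constraint_violations": 0, "decisions_logged": 10_412}
anchored = fingerprint(record)

print(verify(record, anchored))                    # True: record intact
tampered = {**record, "constraint_violations": 2}
print(verify(tampered, anchored))                  # False: mismatch exposed
```

The property that matters is in the last two lines: the record holder can publish the data freely, but cannot change a single field without the mismatch being detectable by anyone holding the anchored hash.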

    Eliminating Retroactive Score Manipulation

    Without immutable record-keeping, an organization could improve its trust score by rewriting its historical audit records. For example, an organization could remove records of constraint violations from its audit history, improving its historical score.

    Hedera-anchored records prevent this. The organization can submit new audits or corrections going forward, but it cannot retroactively change submitted records. The correction (if justified) appears as a new record, with a timestamp indicating when it was made, not backdated to the original period.


    7. Trust-Gated Economies

    The Marketplace Problem

    Imagine a marketplace where customers can purchase autonomous agents. How would a customer evaluate which agent to buy? They would review past performance, customer testimonials, and claimed capabilities. But claimed capabilities are self-reported; past performance data might be incomplete or cherry-picked.

    This is the fundamental marketplace problem: asymmetric information. The seller (the agent operator) has much better information about the agent's actual behavior and limitations than the buyer does. This asymmetry leads to adverse selection: the worst agents are the ones most aggressively marketed, because they have the least to lose from intensive external scrutiny.

    Trust certification addresses this by providing independent, verifiable signals of agent quality. Instead of trusting the seller's claims, buyers can rely on third-party verification of the agent's demonstrated performance.

    Trust as a Market Mechanism

    Trust certification enables tier-based filtering. Marketplaces can establish minimum trust score requirements: to sell a high-risk agent (e.g., medical diagnostic agent), the agent must achieve a trust score above a certain threshold. Lower-risk agents might have lower thresholds.

    This creates a signaling mechanism. An agent with a high trust score attracts customers willing to pay premium prices. An agent with a low score or no certification might only find customers willing to accept higher risk. Over time, agents with demonstrated trustworthiness outcompete those without it.

    Trust-based competition incentivizes organizations to invest in the behaviors that trust scoring rewards: constraint adherence, transparency, consistency, anomaly detection, and auditability.

    The Certification Flywheel

    Trust certification creates a reinforcing cycle:

  • More Verified Agents — As more organizations pursue trust certification, the marketplace has more agents with verified performance data.
  • More Buyer Confidence — Customers feel more confident purchasing agents when they can rely on independent verification rather than self-reported claims.
  • More Commerce — Increased buyer confidence drives increased agent commerce, expanding the market.
  • More Verification — A larger market with more transactions creates demand for more certification capacity and enables scaling of verification infrastructure.
    This is a virtuous cycle, but it requires a starting point: enough agents pursuing certification to signal market demand, and sufficient infrastructure to support verification at scale.

    Parallels to Established Certification Systems

    Trust certification for AI agents parallels established certification systems that function at massive scale.

    These systems work because they:

  • Are independently verified (not self-reported)
  • Are maintained as public records (searchable and verifiable)
  • Carry market consequences (certified vs. uncertified)
  • Have sufficient scale to be economically meaningful
  • Are enforced by legal and regulatory mechanisms
    A trust certification system for AI agents must follow this same pattern.

    8. Recommendations

    For Developers

    Build with constraints from day one. Design agents with explicit, auditable constraints. Document what the agent is authorized to do and where it is authorized to operate. As the agent evolves, preserve constraint structure; do not allow constraints to degrade in pursuit of performance improvements.

    Instrument for auditability. Implement comprehensive logging of every decision, including the decision context, decision rationale (if explicable), and decision outcome. Do not log only happy paths; log failures and anomalies with equal rigor.

    Pursue independent certification before you need it. Organizations that wait until regulatory pressure emerges before pursuing trust certification will face time and cost pressure. Early movers in trust certification will gain market differentiation and customer preference. Plan for certification in your product roadmap.

    Invest in continuous monitoring. Build infrastructure to detect performance drift, behavioral anomalies, and constraint violations in real time, not weeks later in a retrospective analysis.

    For Enterprises

    Require third-party trust verification in procurement. When evaluating AI agents for deployment, include independent trust certification as a mandatory criterion, not optional.

    Specify tier requirements in vendor contracts. Define the minimum trust score, verification currency (how recent must the verification be), and monitoring requirements as contractual obligations.

    Demand continuous monitoring, not just pre-deployment testing. Require that agents operate under continuous performance monitoring with defined incident response procedures. Point-in-time audits are not sufficient.

    Build internal verification capacity. Combine third-party certification with internal monitoring. Third-party verification is independent but episodic; internal monitoring is continuous but subject to bias.

    For Regulators

    Recognize independent trust certification as evidence of compliance effort. Organizations that pursue and maintain trust certification demonstrate good-faith compliance with principles in frameworks like NIST AI RMF and the EU AI Act. Regulators should recognize certification as credible evidence supporting compliance claims.

    Support interoperable trust standards. Encourage industry development of standardized trust assessment frameworks, logging requirements, and certification mechanisms. Interoperability enables economies of scale; fragmented standards increase costs.

    Encourage continuous monitoring frameworks over point-in-time audits. Regulatory requirements should emphasize continuous oversight and monitoring, not just pre-deployment assessment. Organizations should be required to maintain monitoring infrastructure and demonstrate continuous compliance, not just pre-deployment compliance.

    Maintain proportionality in regulation. Risk-based requirements should be genuinely proportional: lower-risk agents should face lower compliance burdens. Excessively stringent requirements for low-risk agents will stifle beneficial innovation.

    For the Industry

    Build trust infrastructure before the crisis. History suggests that industries do not proactively build trust infrastructure. They do so after failures generate regulatory and public pressure. The AI industry should be the exception. Invest in standards, infrastructure, and capacity now.

    Invest in standards development. Industry organizations and consortia should coordinate on standard definitions of trust dimensions, logging formats, verification procedures, and certification tiers. This standardization reduces fragmentation and enables scaling.

    Make trust measurable, not aspirational. "Our AI is trustworthy" is an aspiration. Trust scores, verification histories, and performance metrics are measurements. The industry should move toward measurement-based claims supported by verifiable evidence.

    Address the capacity bottleneck. Current auditing and verification capacity is bottlenecked. Industry investment in scaling verification infrastructure (automation, tooling, talent) is necessary to support widespread certification.

    9. Conclusion

    The autonomous AI agent economy is not speculative. It is here. Thousands of organizations are deploying agents in financial services, healthcare, customer support, and research. Growth is accelerating. The capability curve is steep.

    Trust infrastructure has not kept pace. Organizations are deploying agents that can make high-consequence decisions with insufficient visibility into their behavior, insufficient monitoring of their performance, and insufficient mechanisms for external verification.

    This creates risk: regulatory risk (non-compliance with frameworks like the EU AI Act and NIST AI RMF), financial risk (losses from agent failures), safety risk (undetected errors with real-world consequences), and reputational risk (erosion of customer confidence following high-visibility failures).

    It also creates opportunity. The organizations that invest in trust infrastructure now — developing the tools, standards, processes, and certifications necessary for verifiable trust — will lead the market. They will have agents that customers prefer, regulators trust, and business partners willing to integrate with. Those who delay will face reckoning when failures occur and regulation tightens.

    The tools exist. The frameworks are emerging. The regulatory pressure is mounting. The question is not whether trust infrastructure will be built. It is whether it will be built proactively, by the industry, or reactively, imposed by regulators after crises occur.

    The path forward requires coordinated action:

  • Developers must build with constraints, transparency, and auditability as first-class requirements.
  • Enterprises must demand verification and monitoring before deployment.
  • Regulators must establish standards that encourage continuous trust verification.
  • The industry must invest in scaling trust infrastructure before capacity becomes a bottleneck.
    The AI agent economy will be built on either verified trust or blind faith. The cost of choosing blind faith is measured in financial losses, safety incidents, and regulatory reckoning. The cost of choosing verified trust is investment in infrastructure and processes that scale with the market.

    Organizations that make this choice now will lead. Those that don't will follow, managing crises and catching up to regulation.

    About Borealis Protocol

    Borealis Protocol is the AI trust certification ecosystem founded in 2025 to address the trust gap in autonomous AI agent deployment. The ecosystem comprises three core components: BorealisMark, an identity and continuous trust scoring system; Borealis Terminal, a trust-gated marketplace where agents are bought and sold based on verified trust scores; and Borealis Academy, a research and education division conducting ongoing analysis of trust frameworks and best practices. Borealis Protocol is committed to making trust verifiable, measurable, and central to the AI agent economy.

    Website: borealisprotocol.com

    Disclaimer

    This report represents Borealis Protocol's analysis of the AI trust landscape as of March 2026. Market estimates and projections are based on publicly available information, regulatory filings, and industry analysis. This document does not constitute financial, legal, or regulatory advice. Organizations should consult with qualified legal and regulatory advisors regarding compliance obligations in their jurisdictions. Regulatory frameworks are evolving; organizations should maintain awareness of regulatory changes that may affect deployment requirements or certification standards.

    Report prepared by Borealis Protocol Research March 2026