KYC/KYB Took Banks Years to Streamline. Now "KYA" Is Here: Recognize AI Agents, Update T&Cs, or Lose the Transaction

by Yannis Larios


AI-driven agentic commerce—where autonomous agents shop on behalf of customers—is fast breaking the assumptions behind banks’ KYC (Know Your Customer) and KYB (Know Your Business) controls. Payment systems built for human-initiated purchases are suddenly blind to who (or rather, “what”) is actually pressing “BUY.” The new strategic mandate for Banks and Payment Service Providers is KYA: “Know Your Agent”. If an issuer or acquirer can’t recognize an AI agent initiating a transaction and verify its delegation and limits, they risk false declines, fraud gaps, and lost revenue. Early forecasts suggest AI agents will drive over $1.7 trillion in consumer payments by 2030. This “Next Agenda” issue explores why KYA is now mission-critical, and what Bank boards must do today to secure their share of the agent-led commerce future.

The Problem Now

Traditional KYC/KYB frameworks assume the party initiating a payment is a human customer or a known business, directly interacting with a merchant. That assumption is collapsing. Autonomous AI agents can now search, decide, and transact for users—yet today’s systems can’t tell a trusted agent from a fraud bot. On one side, issuing banks see an unusual card-not-present request come in at 3 AM with zero context that it was triggered by the customer’s authorized AI assistant. On the other side, acquiring banks and merchants have spent years programming their e-commerce sites to shut out bots—associating non-human traffic with fraud and credential stuffing. In agentic commerce, this defensive reflex backfires. Legitimate AI shopping agents get mistaken for attackers, triggering 3-D Secure or CAPTCHA challenges, or outright blocks, at checkout. The result? Broken customer experiences and lost sales.

Meanwhile, issuers face their own blind spot. A card purchase initiated by an AI doesn’t fit normal fraud scoring patterns. With no “device” or human biometrics to authenticate, issuers either decline the transaction as suspicious or approve it without truly knowing who initiated it. Both outcomes carry risk: false declines mean lost transaction revenue, while unwitting approvals invite fraud. As Mastercard’s Chief Digital Officer summed up in October 2025, merchants are asking “How can we distinguish legitimate AI agents from malicious bots? How do we know the consumer authorized the agent? And did the agent carry out the consumer’s instructions correctly?” Today’s KYC/KYB regime offers no answers to these questions – it was never designed for a world where the “customer” at the point of sale might be, well, a bot.

The scale of the challenge is growing exponentially. By mid-2025, generative AI-based shopping traffic to U.S. retail sites was up 4,700% year-over-year. One might assume this is just bots scraping prices; increasingly, these are agents actually completing purchases on behalf of users. Yet many checkout and fraud control systems still reflexively treat any automated interaction as hostile. In short, the current state is a lose-lose: issuers lack the context to approve genuine agent-initiated transactions confidently, acquirers and merchants treat agent traffic as a threat, and genuine purchases fail while fraud may still slip through. Without a new approach, both trust and revenue suffer.

The Strategic Shift

To turn this lose-lose into a win-win, the financial industry is moving from KYC to KYA – Know Your Agent. Instead of assuming every “buyer” is human, banks and payment providers must establish a trust framework for delegated purchases. This is a profound strategic shift: transactions will need to carry proof of the agent’s identity and its mandate from the customer. In other words, an issuer must verify not just the cardholder, but also the algorithm acting on the cardholder’s behalf, including what it is authorized to do. Risk management evolves from analyzing behavioral heuristics to enforcing protocol-level trust in each transaction.

Industry leaders are already charting this new course. In September 2025, Google unveiled the Agent Payments Protocol (AP2) – an open standard to securely authenticate and validate AI-driven transactions. AP2 introduces the concept of cryptographically signed Mandates that form an auditable chain from the user’s intent, to the exact cart contents, to the final payment. For a human-in-the-loop purchase, an Intent Mandate records the user’s request and an ensuing Cart Mandate confirms “what you see is what you pay” before payment. For fully autonomous scenarios, the user pre-signs a detailed Intent Mandate (e.g. “buy up to €500 of fuel whenever price falls below €1.50/L”) and the agent later composes a Cart Mandate within those limits. This end-to-end evidence chain – intent, cart, payment – creates a non-repudiable audit trail that tackles the core issues of authorization and authenticity in agent-led commerce.
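The intent-to-cart-to-payment chain can be illustrated in a few lines. This is a minimal Python sketch, not AP2’s actual wire format: real mandates are verifiable credentials with asymmetric signatures, whereas here an HMAC and made-up field names stand in for them.

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload: dict) -> str:
    """Illustrative signature: AP2 uses verifiable credentials, not raw HMAC."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

user_key = b"user-device-key"  # hypothetical key held on the user's device

# 1. Intent Mandate: the user's standing instruction, signed up front.
intent = {"type": "intent", "user": "alice",
          "rule": "fuel <= EUR 500 if price < 1.50/L"}
intent_sig = sign(user_key, intent)

# 2. Cart Mandate: composed later by the agent, chained to the intent.
cart = {"type": "cart", "intent_sig": intent_sig,
        "items": [{"sku": "fuel", "litres": 300, "total_eur": 435.0}]}
cart_sig = sign(user_key, cart)

# 3. At payment time, any party can re-verify the whole chain.
assert sign(user_key, intent) == intent_sig
assert sign(user_key, cart) == cart_sig
print("mandate chain verified: intent -> cart -> payment")
```

The point of the chain is that each link commits to the previous one: the cart carries the intent’s signature, so a disputed purchase can be traced back to the instruction the user actually signed.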

Crucially, these mandates are backed by verifiable credentials. In practice, this means a trusted party (such as a bank or identity provider) issues digital credentials attesting an agent’s identity and powers. Banks may need to act as Credential Providers, supplying customers with the secure digital identities and tokens that their chosen AI agents will use when transacting. We see early signs: Visa began piloting “AI-ready cards” in 2025, replacing static card numbers with dynamic tokens that let merchants verify a given agent is authorized for that account. Similarly, industry groups are working to extend the familiar KYC/KYB standards to cover agent identity, effectively formalizing KYA in compliance regimes.

Regulators are aligning with this shift. The EU’s draft AI Act will require transparency when AI agents interact or make decisions, reinforcing the need to clearly identify AI agents in commerce. Meanwhile, Europe’s forthcoming PSD3 and related open banking frameworks are expected to address third-party data access and delegation, providing a legal basis for agents operating under a user’s consent. Even the existing Strong Customer Authentication (SCA) rules under PSD2, which assume a human is present to enter a code or biometrics, are being revisited. We may soon see that a cryptographically signed agent mandate (with the user’s prior consent) counts as SCA for a delegated transaction – essentially treating the user’s ex-ante authorization of an agent as equivalent to an OTP in real time. In short, the strategic direction is clear: Delegated agent commerce is here to stay, and “knowing your agent” is mission-critical to enabling it at scale.

Case Evidence

  • Google’s AP2 Open Standard (Sep 2025): Google Cloud launched the Agent Payments Protocol with 60+ partners (Mastercard, PayPal, Shopify, etc.) to enable secure, agent-initiated purchases across cards, bank transfers and crypto. AP2 focuses on authorization, authentication and accountability for AI-led payments, using signed digital mandates as proof of the user’s intent and approval (for more on Google AP2, please refer to issue 4 of the “Next Agenda”).

  • Mastercard Agent Pay Framework (Oct 2025): Mastercard introduced its Agent Pay merchant acceptance framework to register and authenticate AI agents before transactions occur. Like Visa’s parallel Trusted Agent Protocol, it’s a no-code solution for merchants to recognize “good” agents vs. bad bots. Mastercard’s system verifies an agent’s identity and authority upfront, so that merchants and issuers can confidently process delegated purchases at scale.

  • Walmart & OpenAI Partnership (Oct 2025): Walmart announced a partnership with OpenAI to let customers shop via ChatGPT, marking one of the first major retailers to embrace agentic commerce. CEO Doug McMillon called it an “important step” toward a more convenient future where shoppers can delegate tasks to AI. Early use cases target complex purchases like travel bookings and simple ones like grocery replenishment.

Execution Playbook for Banks, Payment Service Providers and Merchants

Issuers: Start by binding agents to customers at the account level. Banks should extend their digital banking portals to let customers register approved AI agents (for example, linking a specific Google AI to their card or account). Each agent should be issued a token or credential that tags any transaction it initiates. This allows the issuer’s authorization system to instantly recognize “Transaction X is from Customer Y’s certified agent Z.” Issuers must update fraud models to incorporate these signals – treating known-agent transactions not as random anomalies but as policy-bound events. For instance, if Alice’s AI agent attempts a €200 purchase at 02:00, the bank back-end can verify that agent is one of Alice’s trusted delegates and that the amount is within her preset limit, instead of auto-declining a late-night, customer-not-present purchase. Issuers will also need to adjust their Strong Customer Authentication flows: rather than challenge the absent user, the issuer should rely on the pre-approved mandate (and possibly require periodic re-authentication of the agent’s mandate on the customer’s device for safety).
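The issuer-side check described above – “is this one of the customer’s certified agents, and is the amount within its mandate?” – amounts to a policy lookup before the usual fraud scoring. A minimal sketch, assuming a hypothetical in-memory registry of delegated agents and limits; a real issuer would back this with credential verification and its core authorization stack:

```python
from dataclasses import dataclass

@dataclass
class AgentGrant:
    """One agent a customer has delegated, with its spending limit."""
    agent_id: str
    max_amount_eur: float

# Hypothetical registry populated when customers register agents in the portal.
REGISTRY = {"alice": [AgentGrant("agent-shopper-7", 500.0)]}

def authorize(customer: str, agent_id: str, amount_eur: float) -> str:
    grants = REGISTRY.get(customer, [])
    grant = next((g for g in grants if g.agent_id == agent_id), None)
    if grant is None:
        return "DECLINE: unknown agent"        # not one of the customer's delegates
    if amount_eur > grant.max_amount_eur:
        return "DECLINE: over mandate limit"   # outside the pre-set policy
    return "APPROVE: certified agent within limits"

# Alice's agent at 02:00 for EUR 200: approved as a policy-bound event,
# not auto-declined as a late-night anomaly.
print(authorize("alice", "agent-shopper-7", 200.0))
print(authorize("alice", "unknown-bot", 200.0))
```

The design choice worth noting: the agent signal turns a statistical judgment (“is 02:00 unusual for Alice?”) into a deterministic policy check, which is why false declines on agent traffic can fall sharply.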

Networks (Card & Payment Schemes): Card networks and payment schemes are now incorporating agent identity into their transaction messaging standards. This is already underway – Visa’s new Trusted Agent Protocol adds data fields to signal an AI agent’s involvement and its verified consumer behind the scenes. Networks will need to maintain agent registries or certification programs (much like they do for payment facilitators or 3-D Secure providers today). A network-level directory of “trusted agents” can help route transactions appropriately or apply liability shifts when an agent is involved. Schemes should also update their operating rules: for example, mandating that acquirers pass along agent credentials, and clarifying dispute resolution when an agent was in the mix. In essence, Networks must provide the interoperability layer for KYA – ensuring an agent recognized by Mastercard or Visa at one end of a transaction is honored by the issuer on the other end. This likely means collaborating on universal agent ID standards (akin to a “digital passport” for AI) and supporting open frameworks like AP2 rather than proprietary ones.

Acquirers & Payment Service Providers: Payment processors and gateways sit at the merchant end and are crucial to making KYA work in practice. Acquirers should enable merchants to flag agent-based transactions upstream. This could mean integrating solutions like Cloudflare’s bot detection (which now has an “agentic commerce” mode) to differentiate good agents from malicious traffic. When an agent transaction comes through, the PSP should attach an “agent metadata” blob to the auth request—e.g. an indicator in the ISO 8583 message or OAuth payload that includes the agent’s ID, its credential or cryptographic proof, and possibly the user’s original intent reference.

The acquirer’s risk engines must also adapt: rather than blocking non-browser user agents outright, they should check for a valid agent certificate. Passing the agent signal end-to-end is key. For example, if a transaction originates from XYZ Shop’s AI concierge using a delegated card on file, the PSP should communicate to the issuer: “this is XYZ Shop’s AI agent buying on behalf of Customer X, here’s the proof.” In parallel, acquirers need to adjust bot management rules for their merchants – whitelisting known commerce agents to avoid false declines while continuing to block truly unknown scrapers. Many PSPs will also find new business here: offering “KYA-as-a-service” add-ons that handle agent verification, credential issuance (through partnerships with identity providers), and audit logging for merchant clients.
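The “agent metadata” blob a PSP attaches to the authorization request might look like the following. Field names here are illustrative assumptions, not an actual scheme specification – real networks (e.g. Visa TAP) define their own data elements:

```python
import json

def build_auth_request(base_auth: dict, agent_id: str,
                       credential: str, intent_ref: str) -> dict:
    """Attach an illustrative 'agent metadata' block to an auth request.

    All field names are hypothetical stand-ins for whatever data
    elements the relevant scheme actually mandates.
    """
    auth = dict(base_auth)  # leave the original message untouched
    auth["agent_metadata"] = {
        "agent_id": agent_id,      # who the agent is
        "credential": credential,  # proof it was vetted (token / cert reference)
        "intent_ref": intent_ref,  # pointer to the user's original mandate
    }
    return auth

req = build_auth_request(
    {"pan_token": "tok_4242", "amount_eur": 89.90, "merchant": "XYZ Shop"},
    agent_id="xyz-shop-ai-concierge",
    credential="cred_abc123",
    intent_ref="intent_sig_9f2e",
)
print(json.dumps(req, indent=2))
```

Whatever the exact field names, the principle is the same: the issuer receives identity, proof, and mandate reference in-band, so it can apply the policy check instead of guessing from behavioral signals.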

Merchants: Every merchant must redesign their customer journey to accommodate agentic buyers. Practically, this means building agent-friendly checkout APIs and adapting terms of service. Merchants should implement the emerging protocols (like AP2, Visa TAP, Mastercard Agent Pay) in their e-commerce platforms so that an agent can transact without hacking the website. For instance, instead of a headless bot pretending to click UI buttons (prone to breakage and likely to be blocked), an agent could invoke a secure checkout API endpoint, submit a signed Cart Mandate, and receive confirmation – all invisibly to the human.

Merchants should update their fraud and order management rules as well: transactions flagged as “Agent-initiated, verified” might bypass certain manual reviews or 3-D Secure prompts, improving conversion for customers who use AI assistants. On the other hand, if an order comes from an unverified agent, the merchant might decide to decline it or route it for additional verification (just as they would a high-risk order). Internally, new training is needed for ops and support teams: imagine a customer saying “my AI assistant bought the wrong item.” Front-line staff need clear policies for handling such cases (returns, credits) within the bounds of the delegated authority the customer gave.

Finally, merchants should prepare their analytics and marketing for the agent era. Product discovery and price comparison may happen AI-to-AI; merchants might publish product data feeds or negotiation interfaces for agents. Ensuring your products are “agent-discoverable” (the way SEO ensured they were human-discoverable) will become a new competence in the KYA world.
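An agent-facing checkout endpoint of the kind described above can be sketched minimally. Everything here is hypothetical: a real implementation would verify the Cart Mandate’s cryptographic signature against a network registry rather than check a static allow-list.

```python
# Hypothetical allow-list standing in for a network-level agent registry.
TRUSTED_CREDENTIALS = {"cred_abc123"}

def agent_checkout(cart_mandate: dict) -> dict:
    """Merchant-side checkout for agents: verified mandates are confirmed
    directly; unverified ones are routed for additional review, mirroring
    how a high-risk human order would be handled."""
    if cart_mandate.get("credential") not in TRUSTED_CREDENTIALS:
        return {"status": "review", "reason": "unverified agent"}
    order_id = f"ord-{abs(hash(str(cart_mandate))) % 10_000:04d}"
    return {"status": "confirmed", "order_id": order_id,
            "flag": "agent-initiated, verified"}

ok = agent_checkout({"credential": "cred_abc123", "items": ["sku-1"]})
bad = agent_checkout({"credential": "cred_zzz", "items": ["sku-1"]})
print(ok["status"], bad["status"])
```

The “flag” field is what lets downstream fraud and order-management rules treat verified agent orders differently – for example, skipping a manual review step that would otherwise kill the conversion.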

The Spillover - Changes to Merchant Terms & Conditions!

Existing merchant terms and conditions (T&Cs), as registered by acquirers and PSPs, rarely contemplate an AI acting as the buyer. That is now changing. To safely welcome agent-driven sales, acquiring banks and merchants should introduce several new clauses and structures:

  • Delegated Purchase Authority: Contracts should explicitly permit transactions initiated by an authorized agent on behalf of the customer. For example, when a customer connects their AI agent to a merchant’s service, the T&Cs should stipulate that the agent’s actions (e.g. placing an order or accepting an offer) are binding as if done by the customer themselves. This clause legitimizes the agent’s role in the eyes of both parties and pre-empts “I didn’t click that” disputes. It should also define the process by which a customer grants and revokes an agent’s authority on their account.

  • Agent Identity & Credentials: Merchant agreements (and technical onboarding docs) should require that any AI agent interacting with the merchant’s systems presents a verifiable credential. In practice, this means the agent (or its platform provider) must identify itself via an agreed standard – such as a digital certificate or token issued under a scheme like Mastercard’s Agent ID or an AP2 credential. The T&Cs can reference industry KYA standards, e.g. “Agent must be registered and authenticated per [Network] guidelines before transacting.” This ensures the merchant only honors agents that have been vetted by a trusted authority. It also gives the merchant cover: if an agent doesn’t provide proper credentials, the merchant can refuse the transaction without breach of contract.

  • Mandate Records & Audit: Because agents operate based on user instructions, the new terms should spell out how those instructions are recorded and used in case of disputes. Merchants may stipulate that the agent (or its platform) must maintain an auditable log of user mandates and share a non-sensitive proof with each transaction. For instance, an order could include an identifier for the user’s Intent Mandate. If later the user claims “I didn’t want this,” the merchant can produce the signed mandate as evidence to resolve the chargeback.

  • Liability for Agent Actions: One of the thorniest issues is who bears the loss if an agent buys something it shouldn’t have. T&Cs should clarify scenarios like: the agent deviated from the user’s instructions, the agent was manipulated by a third party, or the agent’s purchase was legitimate but later unwanted. A balanced approach is to align liability with who introduced the risk. If the merchant fulfilled a proper, credentialed agent order exactly as instructed, then the liability for any unauthorized purchase should lean toward the customer or issuer, not the merchant. (In card terms, this could be akin to a liability shift if the merchant can prove KYA compliance.) Conversely, if the merchant failed to verify the agent or ignored obvious red flags, they would retain liability. We’re likely to see industry standards (potentially via networks or regulation) that clarify these liability splits. In the meantime, forward-looking merchants can proactively include language that purchases by verified agents are considered authorized by the account owner, and that normal refund/return policies apply unless negligence by the merchant is proven.

  • Dispute Resolution & Evidence: Build new procedures into your terms for handling agent-related disputes. For example, a merchant’s policy could state that if a customer repudiates a purchase made by their agent, the merchant will cooperate with the payment provider’s investigation by supplying the agent’s mandate and any communication logs. Also, the customer must first pursue any claims against the agent service (especially if it’s an outside AI like a shopping app), with the merchant stepping in only if the agent clearly failed. This kind of multi-layer dispute clause ensures that disagreements are resolved at the correct layer. Importantly, the existence of a tamper-proof mandate may actually reduce classic “friendly fraud” – it’s harder for a cardholder to claim they didn’t authorize a transaction when there’s a cryptographic contract showing they did.

  • Transparency & Compliance: Finally, merchants should include wording that any AI agent interacting with their platform must identify itself as such (per emerging AI transparency laws) and comply with relevant regulations (e.g. data protection, AI Act obligations). For instance, if an agent is going to scrape pricing data or negotiate on a user’s behalf, it should disclose it’s not human. This protects the merchant as well – ensuring agents using the service have agreed to abide by privacy rules and not abuse the system.

In summary, Banks and Payment Service Providers acting on the acquiring side need a 360° update of their Terms & Conditions to cover who can transact (humans, or agents with proof), how consent is captured and evidenced, and who is on the hook if something goes wrong. Getting these legal guardrails in place now will smooth the path for mainstream agentic commerce, while protecting merchants from unnecessary liability.

Board-Ready Actions for Banks and Payment Service Providers

Boards and CEOs should treat KYA enablement as a near-term strategic initiative. The following actions can position your institution for the agentic commerce era and mitigate the risks of inaction:

  1. Appoint a “KYA” Owner and Task Force. Assign an executive to own Know-Your-Agent strategy (e.g. the Chief Risk Officer) and establish a cross-functional task force. Their mandate: develop the bank’s or PSP’s agent-commerce framework in the next 90 days. Owner: CRO. Success metric: Board review of a KYA policy and roadmap by end of next quarter.

  2. Upgrade Infrastructure for Agent Signals. Invest in the necessary tech changes to extend your payment gateway and core systems to handle agent data. This includes supporting new message fields or API calls for agent identity, storing agent credentials in customer profiles, and modifying fraud engines to accept “trusted agent” inputs. Plan a pilot integration with an open standard like AP2 or Visa’s TAP in a sandbox environment. Owner: CTO. Metric: Prototype system handling agent-authenticated transactions (with <5% false declines on agent traffic) by Q2.

  3. Revise Merchant & Customer Agreements. Fast-track updates to your standard contracts (merchant acquiring agreements, online terms of service) to incorporate agentic commerce clauses. Legal should draft KYA provisions as outlined (delegated authority, agent credential requirements, liability rules) and align them with emerging regulatory guidance (PSD3, AI Act). Educate major merchant clients on these changes ahead of time. Owner: Legal Unit. Metric: New KYA clauses approved and rolled out in all key contracts by year-end.

  4. Launch an Agent-Commerce Pilot. Don’t wait for perfect standards—identify a controlled use case to trial agent-based transactions. For example, partner with a fintech or retail client to allow a known AI assistant to initiate low-risk purchases (gift cards, minor consumables) using a small subset of users. Monitor conversion lift, fraud outcomes, and customer satisfaction. Use the pilot to refine your KYA processes (credential issuance, customer consent flows, dispute handling) in real conditions. Owner: Head of Innovation / Partnerships. Metric: Pilot completed with measurable improvements (e.g. +X% sales from agent channel, no increase in chargebacks) and lessons reported back to the board.

  5. Engage Regulators & Industry Peers. Proactively open dialogue with regulators (central bank, data protection authorities) to shape KYA-friendly rules.

By acting on these fronts, boards can ensure their organizations don’t fall behind in the next wave of digital commerce. The risk of doing nothing is that you wake up to find transactions siphoned away to competitors or alternative networks that embraced agentic commerce first. Conversely, seizing the KYA agenda now positions you to capture new volume (imagine capturing those midnight auto-orders that would otherwise fail) and to do so safely and transparently.

As one payments executive noted, “merchants and consumers want to engage with AI agents confidently and securely”; the firms that enable that confidence will win the transaction.


Figure 1: Key Shifts in a KYA World (Today vs Future)

If this resonates, please consider subscribing to “The Next Agenda”. For briefings or board-level discussions, feel free to reach out; I welcome Independent Non-Executive Director dialogues where my expertise adds value.
