August 14, 2025

Why QED invested in Lorikeet
“Alexa, wire $50,000.”
Imagine telling your smart speaker to transfer $50,000. It sounds absurd, even dangerous. But that’s the bar for AI in financial services: not just summarizing information, but taking secure, auditable action.
In industries like eCommerce, hospitality and even healthcare, voice agents are already handling routine interactions – checking order status, booking reservations and triaging support. But in financial services, the stakes are higher. It’s not enough for agents to be fluent; they need to be trusted – by systems, by compliance teams and by regulators.
Financial institutions simply can’t afford an AI “beta mode,” especially in voice, where conversations move faster and leave more room for misunderstanding. How often have we threatened to close an account in frustration while talking to a bank’s Interactive Voice Response (IVR) system – in short, a phone tree – without actually meaning to do it? A slight misinterpretation by a voice agent could have serious consequences.
Here’s what’s wild: we’ve crossed the uncanny valley with voice agents without even realizing it – they now sound close enough to real human voices that their remaining imperfections no longer register as eerie or off-putting. Across our conversations, almost no one reports end customers objecting to AI voice reps – in fact, customer satisfaction is often higher! (One study found CSAT scores jumped 30 percent after implementing voice AI.)
But this just raises the bar. One of our portfolio companies reported that its voice AI agent has led it to extend human support hours: the agent needs a human backup, and customers prefer voice AI to text or chat support. The current state of the art is voice agents that can talk fluently but cannot take action on a customer’s behalf. It’s clear the time to act is now, but deploying voice agents that go beyond talk means solving three core challenges: access, integration and trust.
At QED, we’ve been searching for solutions that meet this high bar. That search led us to Lorikeet, a company building AI-powered “customer concierges” for support that don’t just chat, but actually get things done. We recently partnered with Steve Hind and Jamie Hall and the Lorikeet team in leading its $35 million Series A to help accelerate their vision, because we believe they’ve cracked the code on bringing trusted voice (and chat) AI to financial services.
Managing data access is critical
Most AI vendors underappreciate the importance of access: who has it, when and to which systems. Banks operate within deeply layered permission structures and legacy tech stacks, where managing access isn’t a feature, it’s foundational. Building a smart model isn’t enough; a robust solution must handle data access securely and dynamically at its core. The best vertical AI agents don’t merely retrieve information; they understand user entitlements, adjust behavior based on role and log every interaction for compliance. Without this level of access-aware design, AI agents in finance risk falling into one of two extremes: either glorified FAQ bots that can’t take action, or dangerous liabilities that shouldn’t be trusted with authentication and high-value transactions.
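To make that concrete, here is a minimal sketch of access-aware execution – every name here is hypothetical and illustrative, not Lorikeet’s API. The idea is simply that each action attempt is checked against the session’s entitlements and logged, whether or not it is allowed:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentContext:
    user_id: str
    role: str                # e.g. "customer" or "support_tier_1"
    entitlements: set[str]   # actions this session may perform

@dataclass
class AuditLog:
    entries: list[dict] = field(default_factory=list)

    def record(self, ctx: AgentContext, action: str, allowed: bool) -> None:
        # Every attempt is logged, allowed or not, for compliance review.
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": ctx.user_id,
            "role": ctx.role,
            "action": action,
            "allowed": allowed,
        })

def gated_execute(ctx: AgentContext, action: str, log: AuditLog, fn):
    """Run `fn` only if the session is entitled to perform `action`."""
    allowed = action in ctx.entitlements
    log.record(ctx, action, allowed)
    if not allowed:
        raise PermissionError(f"role {ctx.role!r} may not perform {action!r}")
    return fn()

# A session entitled to freeze a card, but not to wire funds:
ctx = AgentContext("cust_123", "customer", {"freeze_card"})
log = AuditLog()
gated_execute(ctx, "freeze_card", log, lambda: "card frozen")  # succeeds
# gated_execute(ctx, "wire_funds", log, ...) would raise PermissionError
```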
To balance performance with control, we’ve seen two primary deployment approaches emerge in enterprise AI.
- Virtual Private Cloud (VPC) deployment (vendor-hosted on the vendor’s cloud): The bank accesses the AI model via API, with the vendor hosting it in its cloud. This approach offers quick onboarding and minimal operational lift for the bank – the vendor handles infrastructure, updates and scaling. However, the tradeoff is data sovereignty. Sensitive customer data leaves the bank’s direct control, which often fails internal compliance standards (especially for PII or transaction data). Many financial institutions simply won’t approve a solution that isn’t contained within their oversight.
- BYOC (“bring your own cloud”) deployment (customer-hosted on the customer’s cloud): Alternatively, the bank brings its own cloud, deploying the vendor’s model within its own environment. BYOC keeps all data within the bank’s boundaries and ensures compliance, auditability and tighter integration with existing systems. It’s especially valuable for institutions that require explainability, rigorous model monitoring or on-prem compatibility. The tradeoff here is operational complexity. Although many vendors now offer managed BYOC support to streamline updates and troubleshooting, it’s still a heavier lift than a pure SaaS API.
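One way to see the difference: the integration code can be identical in both models; what changes is where the model runs and which network boundary customer data crosses. A minimal sketch, with made-up endpoints purely for illustration:

```python
import os

# Hypothetical endpoints – illustrative only, not any vendor's real API.
VENDOR_HOSTED = "https://api.vendor.example/v1"        # VPC model: data leaves the bank
BYOC_INTERNAL = "https://ai.internal.bank.example/v1"  # BYOC model: data stays in-boundary

# The application code is the same; the deployment choice is configuration.
BASE_URL = os.environ.get("AI_BASE_URL", BYOC_INTERNAL)

def transcript_endpoint() -> str:
    return f"{BASE_URL}/transcripts"
```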
Getting this right is non-negotiable for financial services. We were impressed that Lorikeet understood this from day one. Its platform was designed for fine-grained permissions and flexible deployment to meet customers’ needs. Rather than giving an AI agent free rein, Lorikeet uses granular permissioning and dynamic gating to enforce safe, auditable execution of any action. In practice, that means every step an AI takes on a customer’s account can be constrained by policy: who/what is authorized, what checks must pass and all of it recorded for compliance. This kind of infrastructure is invisible when it’s working, but it’s the difference between a neat demo and a production-ready solution that a CIO and chief risk officer will greenlight.
Voice as a “new” modality: The future is speech-to-speech
We believe AI customer support needs to be multi-modal – chat, SMS, email and voice, meeting customers on whichever channel they prefer. If we look back at the waves of customer engagement (from call centers, to email, to live chat, to SMS), each wave produced massive outcomes. One could argue voice is the modality where AI can shine the most. It’s among the most natural ways for customers to communicate, yet pre-AI it was the hardest channel to automate and scale. Perhaps because of that, voice is often the first thing buyers ask about when considering AI support, and it’s where the customer experience delta feels largest. Expectations are sky-high for voice agents because we all know how painful legacy phone support can be.
Historically, voice AI operated in discrete stages: first convert speech to text, then process it, then generate a text response, then convert that back to speech. Each step added latency and potential error. With emerging speech-to-speech models, that interaction loop is compressing. Newer voice agents can respond faster and more naturally, potentially even mirroring tone and emotional nuance. The result is a more human-like conversation without the awkward pauses of first-gen voice bots.
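A back-of-the-envelope comparison shows why collapsing the pipeline matters. The per-stage numbers below are illustrative assumptions, not benchmarks of any particular model:

```python
# Cascaded pipeline: each stage runs in sequence, so latencies add up.
CASCADED_STAGES_MS = {
    "speech_to_text": 300,
    "llm_response": 800,
    "text_to_speech": 250,
}

# Speech-to-speech: one model handles audio in, audio out.
SPEECH_TO_SPEECH_MS = 700

cascaded_total = sum(CASCADED_STAGES_MS.values())
print(f"cascaded:         {cascaded_total} ms")    # 1350 ms
print(f"speech-to-speech: {SPEECH_TO_SPEECH_MS} ms")
```

Under these assumed numbers, the cascaded loop takes roughly twice as long per turn – and that gap compounds over a multi-turn conversation, which is where the awkward pauses come from.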
Crucially, we don’t believe voice models need to be fully verticalized for financial services. Trying to build a finance-specific speech recognition or text-to-speech engine would likely create a rigid, quickly outdated system. The best platforms remain modular and flexible, plugging in best-in-class voice models as they evolve, rather than reinventing the wheel. (In fact, Lorikeet’s approach is to leverage world-class AI models and focus on the workflow and integration engine on top, rather than spending years building a new LLM or voice codec from scratch.) This flexibility means that as voice AI tech improves, financial institutions can immediately benefit, without ripping out their whole system. It also ensures the focus stays on the unique bits that matter most for finance: security, integration and compliance, as opposed to re-solving generic AI problems.
Knowledge management isn’t enough
Most AI support agents today excel at surfacing knowledge. They can summarize policy documents, explain the steps to reset a password or help route a ticket to the right department. That’s useful, but the leap from knowledge to execution is nontrivial. A bank will not – and should not – authorize an AI to take a meaningful action (initiating a wire transfer, freezing a card or approving a loan or claim, for example) without rigorous integration into core systems, proper entitlement controls and full audit trails. In other words, a chatbot that only answers questions is fine for FAQs. But to actually resolve issues, an AI agent needs to plug into the institution’s workflow and operate with the same guardrails and checks as a human.
This is where most vendors fall short. Many have built elegant conversational interfaces with no underlying workflow automation. They hand off to a human the moment real account actions are required. For AI agents to transition from “assistants” to true operators, they must be able to do things: update an address, escalate a case or refund a charge, all in a secure, compliant way. Until then, they remain fancy knowledge managers, not problem solvers.
Lorikeet recognized this gap early on. Rather than stopping at Q&A, it built its AI concierge to drive workflows end-to-end. A great example: one of Lorikeet’s first deployments was handling lost or stolen debit cards from start to finish. The voice agent verifies the caller’s identity and account status, determines eligibility for a replacement card, updates the customer’s address on file and dispatches a new card, all without human intervention. If you’ve ever waited on hold to report a lost card, you can appreciate how much faster and smoother that experience is when an AI can just resolve it. And importantly, every step in that process is auditable and permissioned (tying back to the access point above – the AI could only do those tasks because Lorikeet’s system was designed with the right hooks and safeguards).
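As an illustration of what “end-to-end” means here, a workflow like that might be sketched as follows – every name is hypothetical (Lorikeet’s actual engine isn’t public), but the shape is the point: explicit steps, eligibility gates, an audit trail and a human-handoff fallback:

```python
from dataclasses import dataclass, field

class HandoffToHuman(Exception):
    """Raised when a step can't be completed safely by the agent."""

@dataclass
class CardSession:
    # Hypothetical session state – illustrative, not Lorikeet's API.
    identity_verified: bool
    eligible_for_replacement: bool
    audit_trail: list[str] = field(default_factory=list)

    def step(self, event: str) -> None:
        # Each completed step is recorded for compliance review.
        self.audit_trail.append(event)

def replace_lost_card(s: CardSession) -> str:
    if not s.identity_verified:           # 1. authenticate the caller
        raise HandoffToHuman("identity check failed")
    if not s.eligible_for_replacement:    # 2. check replacement eligibility
        raise HandoffToHuman("not eligible for automatic replacement")
    s.step("card_frozen")                 # 3. stop further use of the card
    s.step("address_confirmed")           # 4. confirm the address on file
    s.step("new_card_dispatched")         # 5. order the replacement
    return "resolved"
```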
The future isn’t who talks best, it’s who gets trusted
Everyone’s racing to sound more human, to pass the Turing test of conversation. But in financial services, the winners will be the ones who build trust – trust with core systems, with compliance officers and with regulators. A smooth voice and clever AI dialog mean little if the agent can’t be trusted to securely do things. The future of fintech customer service isn’t about who has the most human-like bot; it’s about who has earned the right (from a security and governance standpoint) to let that bot actually help you.