Why Humans Fail at Underwriting and What That Means for AI Bank Verification

Key Takeaways

  • Upstart's CEO publicly stated that humans have never been good at underwriting, raising uncomfortable questions about what AI can realistically fix.
  • AI underwriting for merchant cash advance works best when it augments human judgment rather than replacing it entirely, especially during bank verification.
  • Automated bank statement analysis catches patterns humans miss, but visual verification of live banking sessions remains the most fraud-resistant layer.
  • MCA lenders who rely solely on document-based AI are exposed to increasingly sophisticated generative AI forgeries.
  • The most effective verification stack in 2026 combines AI-powered document analysis with async screen recording of live bank portals.

TL;DR: Upstart's CEO admitted humans have never been precise at underwriting, but that doesn't mean AI alone is the answer. For MCA lenders, AI underwriting for merchant cash advance works best as a layer on top of verified, visual proof of bank activity. Exact Balance combines AI-guided screen recording with async workflows so lenders get fraud-resistant verification without the scheduling overhead of live calls.

Upstart's Admission and the Underwriting Problem Nobody Wants to Talk About

During Upstart's Q4 earnings call, CEO Paul Gu made a statement that should give every lender pause: "Unfortunately, humans have never really been very good at precisely underwriting loans and figuring out the cash flows they're going to produce for the next 5 years." The comment, reported by deBanked, was refreshingly honest. But it also raises an uncomfortable follow-up question: if humans have always been bad at this, what makes us think AI will automatically be better?

For MCA lenders, the stakes are more immediate than they are for consumer lending platforms. Merchant cash advances involve short time horizons, volatile cash flows, and a borrower pool that traditional credit models were never designed to evaluate. AI underwriting for merchant cash advance is gaining traction fast, but Gu's candor highlights a real tension in the industry. The same biases, blind spots, and incomplete data that undermine human underwriters can also be baked into machine learning models if lenders aren't careful about where they place their trust.

This article unpacks what Upstart's admission actually means for MCA funders, where AI genuinely improves bank verification and underwriting, and where it falls short without a verification layer that captures ground truth.

Why AI Underwriting Can Repeat Human Mistakes at Scale

The Garbage-In Problem with Automated Document Analysis

Most AI underwriting systems in the MCA space rely on ingesting bank statements, categorizing transactions, and computing metrics like average daily balance, deposit regularity, and NSF frequency. These systems are fast. They eliminate hours of manual spreadsheet work. And they are getting remarkably accurate at data extraction, with platforms like LendPathway recently announcing 99.7% reconciliation accuracy on financial documents.
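To make the metrics concrete, here is a minimal sketch of how the three named above (average daily balance, deposit regularity, NSF frequency) might be computed from raw transactions. The transaction shape, sample data, and the use of a running-balance average as a stand-in for a true day-weighted average daily balance are all illustrative assumptions, not any vendor's actual implementation:

```python
from datetime import date
from statistics import pstdev

# Hypothetical transaction shape: (date, amount, description).
# Positive amounts are deposits; negative are withdrawals and fees.
transactions = [
    (date(2026, 1, 2), 4200.00, "POS DEPOSIT"),
    (date(2026, 1, 5), -35.00, "NSF FEE"),
    (date(2026, 1, 9), 3900.00, "POS DEPOSIT"),
    (date(2026, 1, 16), 4100.00, "POS DEPOSIT"),
]

def underwriting_metrics(txns, opening_balance=0.0):
    """Compute a few of the metrics named above from raw transactions."""
    deposits = [t for t in txns if t[1] > 0]
    nsf_count = sum(1 for t in txns if "NSF" in t[2].upper())

    # Deposit regularity: a low standard deviation of the gaps between
    # deposit dates indicates a steadier revenue stream.
    gaps = [(b[0] - a[0]).days for a, b in zip(deposits, deposits[1:])]
    regularity = pstdev(gaps) if len(gaps) > 1 else 0.0

    # Running balance after each transaction, averaged as a rough
    # stand-in for average daily balance (a real system would weight
    # each balance by the number of days it was held).
    balance = opening_balance
    balances = []
    for _, amount, _ in txns:
        balance += amount
        balances.append(balance)
    avg_balance = sum(balances) / len(balances)

    return {
        "avg_balance": round(avg_balance, 2),
        "deposit_gap_stddev_days": regularity,
        "nsf_count": nsf_count,
    }
```

The point of the sketch is how mechanical this layer is: it is fast and consistent, but it trusts its input completely, which is exactly the limitation the next section examines.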

But accuracy of extraction is not the same as accuracy of verification. A perfectly extracted PDF can still be a perfectly forged PDF. Generative AI tools available today can produce bank statements that pass basic visual inspection, match expected formatting for major Canadian and US banks, and contain internally consistent numbers. The AI that reads the document cannot, on its own, distinguish between a real statement and a convincing fake.

This is the garbage-in problem at scale. When human underwriters reviewed statements manually, they occasionally caught visual anomalies: misaligned fonts, slightly off logos, transaction descriptions that didn't match the institution's standard format. Those catches were inconsistent and unreliable, which is exactly Gu's point about human limitations. But replacing that inconsistent human layer with an AI layer that doesn't even attempt visual authenticity checks creates a different kind of vulnerability.

Bias Amplification in MCA Credit Models

Gu's broader point about human imprecision also applies to how training data shapes AI models. Most machine learning credit risk models for MCA are trained on historical funding and repayment data. If past human underwriters consistently underfunded businesses in certain industries, geographies, or revenue brackets, the model learns those patterns as risk signals rather than recognizing them as bias artifacts.

For MCA specifically, this creates a feedback loop. Businesses that were historically declined never generated repayment data, so the model has no positive signal for similar profiles. The AI becomes confident in its decisions, but that confidence is built on an incomplete picture. This doesn't mean AI underwriting is useless. It means that lenders who treat AI output as ground truth, without a verification step that captures what's actually happening in the merchant's bank account, are building on a shaky foundation.

Where AI Genuinely Improves MCA Underwriting

None of this is an argument against AI in lending. The technology delivers real value in several specific applications that matter to MCA funders. Transaction categorization is one: machine learning models can tag thousands of line items across months of bank history in seconds, identifying revenue patterns, recurring expenses, and anomalies like sudden spikes in loan repayments that suggest stacking. Pattern detection at this speed is something no human team can match.
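As a simplified illustration of the categorization step, the sketch below tags line items by keyword. Production systems use trained models rather than keyword tables, and every category name and keyword here is a made-up example, but the shape of the task is the same:

```python
# Illustrative keyword-based tagger. Real categorization uses trained
# models; these categories and keywords are hypothetical examples.
CATEGORY_KEYWORDS = {
    "revenue": ["POS DEPOSIT", "STRIPE", "SQUARE"],
    "loan_repayment": ["ACH DEBIT MCA", "DAILY REMIT"],
    "fees": ["NSF", "OVERDRAFT"],
}

def categorize(description):
    """Tag a single transaction description with its first matching category."""
    desc = description.upper()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in desc for k in keywords):
            return category
    return "uncategorized"
```

A sudden cluster of "loan_repayment" tags across consecutive days is the kind of stacking signal the paragraph above describes: trivial for a model scanning every line item, easy for a human skimming a PDF to miss.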

Fraud signal aggregation is another genuine strength. AI systems can cross-reference behavioral signals (deposit timing consistency, round-number deposit patterns, gaps in transaction history) to flag applications that warrant deeper review. As we explored in our analysis of how MCA lenders detect synthetic identity fraud in bank verification, these pattern-matching capabilities are essential for catching manufactured identities that look clean on paper.
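Two of those behavioral signals, round-number deposits and gaps in transaction history, can be sketched in a few lines. The thresholds below are illustrative placeholders, not tuned values from any production fraud model:

```python
from datetime import date

# Illustrative thresholds, not tuned production values.
ROUND_NUMBER_SHARE = 0.5   # flag if half the deposits are suspiciously round
MAX_SILENT_DAYS = 14       # flag if the account goes quiet this long

def fraud_signals(deposits):
    """Aggregate simple behavioral red flags from (date, amount) deposits."""
    flags = []

    # Organic revenue rarely arrives in clean hundreds; fabricated
    # statements often do.
    round_count = sum(1 for _, amt in deposits if amt % 100 == 0)
    if deposits and round_count / len(deposits) >= ROUND_NUMBER_SHARE:
        flags.append("round_number_deposits")

    # A long silent stretch can mean trimmed or fabricated history.
    dates = sorted(d for d, _ in deposits)
    longest_gap = max(
        ((b - a).days for a, b in zip(dates, dates[1:])),
        default=0,
    )
    if longest_gap > MAX_SILENT_DAYS:
        flags.append("transaction_history_gap")

    return flags

deposits = [
    (date(2026, 1, 3), 5000.00),
    (date(2026, 1, 10), 3000.00),
    (date(2026, 2, 2), 4137.55),
]
```

Note that these flags route an application to deeper review rather than decline it outright, which is the decision-support posture the next paragraph argues for.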

The key distinction is between AI as a decision-maker and AI as a decision-support tool. The former is where Gu's warning hits hardest. The latter is where the technology genuinely shines, especially when paired with a verification method that captures authenticated, visual proof of bank activity.

How Visual Verification Closes the Gap AI Leaves Open

The core problem with document-only AI underwriting is that it operates one layer removed from reality. The AI reads a representation of bank activity, whether that's a PDF statement, a CSV export, or data pulled through an API. Each of these can be intercepted, modified, or fabricated before it reaches the lender's system.

Visual verification of a live banking session is fundamentally different. When an applicant logs into their actual bank portal and records their screen while navigating account summaries, transaction histories, and balance details, the lender gets something that no document can provide: a timestamped record of what the bank's own servers displayed at a specific moment in time.

This is the principle behind Exact Balance's async verification workflow. Instead of scheduling live calls where an underwriter walks the applicant through their banking portal in real time, the applicant receives a secure link, records their session at their convenience using browser-based screen capture, and submits it for review. An AI-guided floating coach walks them through each step, verifying completion in real time so the recording captures exactly what the underwriter needs to see.

The fraud resistance here comes from the combination of liveness and visual context. A forged PDF doesn't show the browser's address bar confirming the bank's domain. It doesn't capture the natural loading behavior of a live web application. It doesn't include the subtle UI details, like session timeout warnings or personalized greetings, that authenticate a real banking session. These are signals that current generative AI tools struggle to replicate convincingly, and they provide a verification layer that sits above both human judgment and document-based AI analysis.

For funders processing hundreds of applications per month, the async model also solves the operational bottleneck that plagues live verification. As we detailed in our breakdown of why screen recording beats live verification calls for MCA lenders, scheduling overhead alone can delay funding decisions by days. Async recording eliminates that delay entirely while preserving the evidentiary value of seeing the bank portal firsthand.

Building a Verification Stack That Doesn't Repeat Human Failures

Gu's admission should push MCA lenders toward a specific conclusion: neither humans nor AI should be trusted as a single point of verification. The most resilient underwriting workflows in 2026 use multiple layers, each designed to catch what the others miss.

A practical verification stack has three layers:

  • Automated bank statement analysis handles data extraction and transaction categorization. This is the speed layer: it eliminates manual data entry and produces the quantitative metrics that inform credit decisions.
  • AI-powered fraud detection scans the extracted data for anomalies: inconsistent formatting, round-number patterns, unusual transaction gaps, and signals that the document may have been tampered with.
  • Visual verification of a live banking session, critically, provides authenticated proof that the data in those documents matches what the bank's own system displays. This is the trust layer.
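The three-layer flow can be sketched as a simple pipeline. Everything here is a hypothetical illustration: the function names, the hard_fail convention, and the stub layers all stand in for real integrations, and in practice the visual step would queue an async review rather than return instantly:

```python
def run_verification_stack(application, layers):
    """Run cheap speed layers first, the trust layer last; stop on a hard fail."""
    report = {}
    for name, check in layers:
        result = check(application)
        report[name] = result
        if result.get("hard_fail"):
            break  # no point running costlier checks on a failed application
    return report

# Stub layers standing in for the three stages described above.
def extract_statements(app):
    return {"metrics_computed": True}

def detect_anomalies(app):
    return {"flags": [], "hard_fail": False}

def review_screen_recording(app):
    # In a real workflow this would enqueue an async visual review
    # of the applicant's recorded banking session.
    return {"visual_match": True}

layers = [
    ("statement_analysis", extract_statements),
    ("fraud_detection", detect_anomalies),
    ("visual_verification", review_screen_recording),
]
```

Ordering the layers this way keeps the expensive, human-reviewed trust layer at the end, so it only runs on applications that already cleared the automated checks.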

Each layer serves a distinct purpose. The first two are fast and scalable but operate on data that can be fabricated. The third is harder to fake and provides the audit trail that regulators and compliance teams increasingly require. Together, they address the exact problem Gu described: humans alone are imprecise, but AI alone is only as reliable as the inputs it receives.

The lenders gaining a competitive edge are the ones who recognize that speed and trust are not opposing forces. Async workflows make it possible to add a visual verification step without adding scheduling friction. When the applicant records on their time and the underwriter reviews on theirs, the verification layer costs minutes, not days. That's the operational reality that makes a layered stack practical, not just theoretically sound.

Frequently Asked Questions

Can AI fully replace human underwriters in MCA lending?

Not reliably, at least not yet. AI excels at processing large volumes of bank data, categorizing transactions, and flagging anomalies, but it struggles with contextual judgment calls that experienced underwriters handle intuitively. More importantly, AI models trained on historical data can inherit the same biases that made human underwriting imprecise in the first place. The most effective approach treats AI as a decision-support layer rather than the sole decision-maker, pairing automated analysis with human review and visual verification of banking activity.

How do MCA lenders catch forged bank statements that pass AI analysis?

Forged bank statements have become sophisticated enough to pass automated extraction and reconciliation checks. The most reliable way to catch them is visual verification of a live banking session, where the applicant records their actual bank portal in real time. Browser-based screen recordings capture URL bar details, natural page loading behavior, and bank-specific UI elements that are extremely difficult to fake. Platforms like Exact Balance provide this capability through an async workflow, so lenders get visual proof without scheduling live calls.

What is async bank verification and how does it work for MCA lenders?

Async bank verification replaces scheduled live verification calls with on-demand screen recordings. The lender sends the applicant a secure link with custom instructions specifying what to record: account summaries, transaction histories, or specific date ranges. The applicant records their banking portal session directly in their browser with no software installation required. The lender's team reviews the recording whenever it fits their workflow. This eliminates time zone coordination, reduces scheduling overhead to zero, and preserves a full audit trail for compliance purposes.

Is AI underwriting safe to rely on for merchant cash advance decisions?

AI underwriting is safe as one component of a broader verification workflow, but risky as the sole basis for funding decisions. Machine learning models can process bank data faster and more consistently than humans, but they cannot independently confirm that the data they're analyzing is authentic. In 2026, the standard for fraud-resistant MCA underwriting combines automated document analysis, AI-driven anomaly detection, and visual verification of live bank portal sessions. Lenders using all three layers significantly reduce their exposure to fabricated documents and synthetic identities.

Conclusion

Upstart's CEO said the quiet part out loud: humans have never been great at underwriting. But the solution isn't to hand the entire process to AI and hope for the best. For MCA lenders, the path forward is a layered approach where AI handles what it does best (speed, pattern detection, and scale) while visual verification provides the authenticated ground truth that no algorithm can fabricate.

Exact Balance exists at that trust layer. Our async screen recording workflow lets applicants record their live banking sessions on their own schedule, guided by AI coaching, while your team reviews recordings and verifies transaction authenticity on demand. No scheduling calls. No forged documents slipping through. Visit exactbalance.ca to see how async verification fits into your underwriting stack.

Ready to modernize your verification process?

Replace live calls with async screen recordings. Faster decisions, stronger audit trails.

Get Started Free