There is a pattern that comes up again and again in automation projects across FinTech and the broader financial services industry. A team picks a tool, assigns someone to write tests, and within a few months they have a suite that looks impressive on a dashboard. Hundreds of test cases. Green builds most of the time. A coverage percentage that satisfies the quarterly review.
And then something breaks in production that the automation should have caught.
The honest reason this happens is that most automation frameworks are built to demonstrate progress rather than to prevent defects. When a test suite is designed to impress stakeholders rather than to solve a real testing problem, the cracks show up later, when the damage is harder to contain.
The three signs your framework is a liability
The first sign is a high flakiness rate. If your tests fail intermittently and your team has started treating certain failures as expected, you do not have an automation problem. You have a trust problem. Flaky tests stop being useful almost immediately, because nobody knows whether a failure is real or just noise. The longer you leave flaky tests in the suite, the more your team learns to ignore automation outputs entirely.
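One way to make that concrete is to measure flakiness directly from CI history rather than arguing about it. The sketch below flags tests that both pass and fail across recent builds, which is the signature of flakiness; consistently failing tests are a different problem. The function name, data shape, and 5% threshold are illustrative assumptions, not a standard.

```python
# A minimal sketch of measuring flakiness from recorded CI outcomes.
# Assumes you can export (test_name, passed) pairs from recent builds.
from collections import defaultdict

def flaky_tests(runs, threshold=0.05):
    """Return tests that fail intermittently above `threshold`.

    A flaky test is one that sometimes passes AND sometimes fails;
    a test that always fails is broken, not flaky.
    """
    results = defaultdict(list)
    for name, passed in runs:
        results[name].append(passed)

    flaky = {}
    for name, outcomes in results.items():
        failure_rate = outcomes.count(False) / len(outcomes)
        if 0 < failure_rate < 1 and failure_rate >= threshold:
            flaky[name] = failure_rate
    return flaky

history = [
    ("test_login", True), ("test_login", False), ("test_login", True),
    ("test_checkout", True), ("test_checkout", True), ("test_checkout", True),
    ("test_export", False), ("test_export", False), ("test_export", False),
]
print(flaky_tests(history))  # only test_login is intermittent
```

A report like this turns "we think that test is flaky" into a number the team can act on, and it gives you a concrete exit criterion: the flaky set should shrink build over build, not grow.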
The second sign is that nobody owns the maintenance. This one is common in teams where automation was a project rather than a practice. Someone built the framework, the project finished, and now it lives in a corner of the repository that nobody touches unless a test breaks so badly it cannot be ignored. Frameworks without owners decay. The longer they run without investment, the more brittle they become, and the harder they are to fix when something genuinely goes wrong.
The third sign is that your coverage numbers look healthy but production bugs keep happening. Coverage metrics are useful, but they measure what you have tested, not what matters. A framework that runs 800 tests across 60% of your codebase might still be missing the three integration points where your most critical user journeys live. Coverage without risk analysis is a false comfort.
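The gap between coverage and risk can be surfaced mechanically: list the journeys that must not break, tag each test with the journeys it exercises, and report the difference. The journey names and tagging scheme below are illustrative assumptions; the point is the question being asked, not the specific labels.

```python
# A minimal sketch of risk-based gap analysis: instead of asking
# "what percentage of code is covered?", ask "which critical user
# journeys have no test at all?"

def uncovered_journeys(critical_journeys, test_tags):
    """Return critical journeys that no test claims to cover.

    `test_tags` maps test name -> set of journey tags it exercises.
    """
    covered = set().union(*test_tags.values()) if test_tags else set()
    return sorted(set(critical_journeys) - covered)

critical = ["payment-settlement", "account-opening", "fraud-alert"]
tags = {
    "test_open_account_happy_path": {"account-opening"},
    "test_settlement_retry": {"payment-settlement"},
    # Hundreds of other tests could exist and still miss fraud-alert.
}
print(uncovered_journeys(critical, tags))  # ['fraud-alert']
```

An empty result here is worth more than any coverage percentage, because it is a claim about the things that actually matter.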
The mindset shift that actually helps
The teams that get automation right think about it differently. They are not trying to write tests. They are trying to design a test architecture.
That distinction matters more than it might sound. Writing tests is a task. Designing an architecture is a discipline. It means deciding what not to automate as deliberately as you decide what to automate. It means building with maintainability in mind from the first line of code. It means treating the framework as a product that your team owns and evolves over time.
Before writing a single test, spend time on structure. Define your layers, decide your naming conventions, agree on how failures will be investigated. Make the hard decisions early, when changing your mind is cheap. By the time you have 500 tests in a framework built without those foundations, changing your mind costs weeks.
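The layering decision is the one that is most expensive to change later, so it is worth sketching what it looks like in practice. In the sketch below, tests speak business language, an action layer translates that into steps, and only a driver facade knows about the tool underneath. All class and method names here are illustrative conventions, not a required standard, and the driver is stubbed because no real browser tool is involved.

```python
# A minimal sketch of a three-layer test architecture decided before
# any test is written: test -> actions -> driver.

class BrowserDriver:
    """Driver layer: the only place tool-specific code (Playwright,
    Selenium, Cypress bindings, ...) would live. Stubbed here."""
    def __init__(self):
        self.log = []

    def fill(self, selector, value):
        self.log.append(f"fill {selector}={value}")

    def click(self, selector):
        self.log.append(f"click {selector}")

class LoginActions:
    """Action layer: business vocabulary; selectors never leak upward."""
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.fill("#user", user)
        self.driver.fill("#password", password)
        self.driver.click("#submit")

def test_user_can_log_in():
    """Test layer: reads as a requirement, not as tool commands."""
    driver = BrowserDriver()
    LoginActions(driver).log_in("alice", "s3cret")
    assert driver.log[-1] == "click #submit"

test_user_can_log_in()
```

With this shape in place, swapping the tool touches only the driver layer, and a renamed selector touches only the action layer; the 500th test costs roughly the same to maintain as the first.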
If your framework is already in trouble, the fix is not a rewrite. It is a reset of expectations, a clear owner, and a short period of investment focused on reliability over coverage. Start with the tests that run on every build. Make those stable first. Then expand from a foundation that actually holds.
The tool you use matters far less than you think. Playwright, Selenium, Cypress — all of them work when the underlying architecture is sound. None of them can save a framework that was designed to look good rather than to work well.