Re-Architecting Banking Systems: Lessons in Microservices, Risk, and Resilience from a FinTech Expert

Having led architectural transformations and managed complex financial systems across multiple major banks, Manoj Tyagi brings a wealth of experience in both technical leadership and financial domain expertise. His journey from senior software engineer to VP of Architecture showcases a deep understanding of the technical and business aspects of financial technology. Below, he answers ten questions exploring his perspectives on technology, leadership, and the future of banking systems.

What was the most unexpected human or organizational challenge you faced when migrating from monolithic applications to microservices, and how did it shape your architectural decisions?

One of the most unexpected challenges in migrating from monolithic applications to microservices in financial firms was the resistance to change – especially when there was no immediate business requirement driving it. No one wanted to modify a working application out of concern that the new system might not be as robust or accurate as the existing one.

Additionally, a big-bang migration – replacing the entire application at once – was not an option. Instead, the only feasible approach was a phased migration. This required decomposing the monolith into smaller parts and identifying independent components that could be tested and deployed separately. We followed the Strangler Pattern, gradually replacing parts of the monolith with microservices while keeping the legacy system operational. This approach reassured stakeholders that the legacy application was still available and could be rolled back if any issues arose.
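
To make the pattern concrete, here is a minimal sketch of a strangler facade – illustrative Python only, with hypothetical feature names, not code from the actual migration:

```python
# Minimal strangler-facade sketch: requests for features that have been
# migrated are routed to the new microservice; everything else still goes
# to the legacy monolith. Feature names and handlers are illustrative.

MIGRATED = {"statements", "fx_rates"}  # grows as components are carved out

def call_monolith(feature: str, payload: dict) -> dict:
    # Placeholder for the legacy call (e.g. an in-process handler or RPC).
    return {"source": "monolith", "feature": feature, "data": payload}

def call_microservice(feature: str, payload: dict) -> dict:
    # Placeholder for an HTTP call to the new service.
    return {"source": "microservice", "feature": feature, "data": payload}

def handle(feature: str, payload: dict) -> dict:
    """Facade entry point: route by migration status. Removing a feature
    from MIGRATED sends its traffic straight back to the monolith."""
    if feature in MIGRATED:
        return call_microservice(feature, payload)
    return call_monolith(feature, payload)

print(handle("statements", {"account": "123"}))  # served by the microservice
print(handle("payments", {"amount": 100}))       # still served by the monolith
```

Centralizing the routing decision in one facade is what makes the rollback promise cheap: sending a feature back to the monolith is a one-line change.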

As more components transitioned to microservices, stakeholder confidence increased, allowing us to accelerate the migration. Over time, we reached a point where the entire monolith could be fully replaced with microservices.

Another unexpected challenge was data dependency. In many cases, the way data was structured and shared within the organisation influenced decommissioning decisions. Since setting up a separate database for microservices wasn’t immediately feasible, we had to temporarily share data with legacy services. This reinforced the importance of planning for data separation early in the migration process.

Ultimately, these challenges shaped my architectural decisions by emphasizing incremental migration, stakeholder buy-in, rollback strategies, and early data management planning – all critical to ensuring a smooth transition to microservices.

How do you reconcile the upfront investment in TDD with the pressure to deliver features quickly in a fast-paced banking environment?

TDD has become an integral part of development in banking, and there is no compromise on that front. However, in the early days, advocating for TDD was challenging. In one of my previous organizations, I had to create separate technical user stories and allocate distinct estimates to ensure TDD was followed. Initially, this approach helped justify the additional effort, and over time, teams recognized its value and accepted the slightly higher development effort compared to just writing code.

That said, TDD ultimately saves significant testing effort, making it more of an investment than an overhead. The key is shifting the perception and effectively communicating its long-term benefits to stakeholders, particularly business teams, who often prioritize speed. By framing TDD as a way to enhance code quality, reduce defects, and accelerate future delivery, it becomes easier to align with the pressure to deliver quickly in a fast-paced banking environment.
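
As a generic illustration of that rhythm – not code from any bank – a TDD cycle starts with a failing test that pins down the expected behavior before the implementation exists:

```python
import unittest

def accrued_interest(principal: float, annual_rate: float, days: int) -> float:
    """Simple ACT/365 interest accrual - written only after the tests
    below were in place and failing."""
    if principal < 0 or annual_rate < 0 or days < 0:
        raise ValueError("inputs must be non-negative")
    return principal * annual_rate * days / 365

class AccruedInterestTest(unittest.TestCase):
    def test_thirty_days_at_five_percent(self):
        # 10,000 at 5% for 30 days on ACT/365 is about 41.0959
        self.assertAlmostEqual(accrued_interest(10_000, 0.05, 30), 41.0959, places=4)

    def test_rejects_negative_principal(self):
        with self.assertRaises(ValueError):
            accrued_interest(-1, 0.05, 30)

if __name__ == "__main__":
    unittest.main()
```

The tests double as executable documentation of the business rule, which is exactly the long-term saving that justifies the upfront effort.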

How do you balance the need for stability and security in risk management systems with the drive for technological innovation?

This is a great question. The technology landscape is evolving rapidly, and adapting to these changes is crucial for every organization. The financial industry has witnessed dramatic shifts – just a few years ago, people in the U.S. were reluctant to move away from checks, people in India hesitated to use ATMs, and people in Singapore were wary of mobile-based payments. Fast forward to today, and Indian vegetable vendors use QR codes, while Singapore has embraced Scan & Pay and PayNow.

To stay competitive in this environment, banks must adopt a startup mindset. History has shown what happens to companies like Nokia, Yahoo, and Kodak—they failed to adapt to changing realities and were left behind.

For banks, balancing technological innovation with stability and security is particularly challenging, as they must also adhere to strict regulatory requirements. The key lies in adopting proven, secure open-source technologies while ensuring a rigorous change management process. A well-structured framework prevents hasty, unmonitored changes that could introduce vulnerabilities.

Additionally, it’s essential to track production incidents caused by technological upgrades and adjust the pace of innovation accordingly. Leveraging AI for data-driven decision-making can help identify risks early. Moreover, investing in a robust testing infrastructure is critical—the stronger the testing and validation processes, the faster and safer innovation can occur.

In what ways did volunteering with Youth for Seva and teaching mathematics inform your approach to mentoring technical teams?

As the eldest child in my family, guiding and teaching became an undeclared responsibility for my siblings. Later, I taught primary and secondary students in a government school in Hyderabad. Having studied in a government school myself, I understood the common weak areas and structured my teaching accordingly – starting from the basics, breaking down concepts to their core, and then building up from there.

I carried this same approach into mentoring technical teams in my professional life. Since mentoring comes naturally to me, my role as a manager is more about being a mentor and guide than about simply ensuring tasks get done. I emphasize to my teams that beyond earning a salary, the skills they develop on the job will stay with them and grow exponentially over time – compounding faster than money itself and, in turn, accelerating their financial growth more than salary alone ever could.

This passion for mentoring led my organizations to entrust me with onboarding and training freshers and new joiners, a responsibility I have successfully handled since the beginning of my career.

Which assumptions about banking technology do you believe will be most radically challenged or overturned in the next decade?

We have already seen a decline in physical bank branches, and I believe the trend toward digital-only banks will accelerate. Many people haven’t visited a bank branch in years, and cash usage is shrinking – I rarely use paper money beyond small miscellaneous transactions. This shift highlights a growing trust in virtual money. While banks have traditionally relied on trust, this evolving mindset could pose new challenges as fintech companies gain credibility and attract customers away from traditional banking institutions.

As non-banking financial companies (NBFCs) and fintech startups continue building trust, traditional banks may face an existential crisis. Banks are already losing control over financial services as decentralized finance (DeFi) platforms gain traction, offering peer-to-peer lending, cross-border payments without central authorities, and smart contracts.

Additionally, core functions of banks – FX transfers, credit origination, trading, brokerage, and insurance – are being disrupted by specialized fintech startups. But the biggest challenge I foresee is that banks will struggle to attract deposits, which were once easily available and are critical for funding credit requirements.

The main reason? People today are more financially literate and understand better ways to grow their money beyond just bank deposits with low interest rates. To stay relevant, banks must embrace technology, drive innovation, and develop new financial products that cater to evolving customer needs. They need to operate more like startups – treating individual departments as independent business units that continuously leverage technology to adapt and compete in a rapidly changing financial landscape.

How has working in both India and Singapore influenced your approach to technical leadership, particularly regarding cultural differences in hierarchy and innovation?

It took me a few months to fully adjust to Singapore’s working style. Having worked at a U.S. bank in India, transitioning to an Asian bank was a completely different experience. In Singapore, working hours are longer, and the power distance – the gap between senior management and employees – is noticeably greater than in India. Additionally, colleagues would occasionally switch to Mandarin during meetings, which was initially a challenge. However, I navigated these differences with a structured approach.

To build strong relationships with senior management, I leveraged my manager’s support. I asked him to join my meetings with senior stakeholders and provide air cover when needed. Once I established credibility and rapport, it became easier to present my ideas effectively.

My experience in India also helped me handle language barriers. In India, regional languages are sometimes used in discussions. I learned that rather than getting frustrated, it’s best to stay patient and ask for a summary in English once the conversation concludes.

Regarding innovation, while an open culture naturally fosters creativity, hierarchical structures don’t necessarily hinder it. Even in rigid environments, innovation is possible—as long as ideas are backed by a concrete proof of concept (PoC) and a clear funding plan. The key is structuring proposals effectively and securing buy-in from leadership.

Beyond cost savings, what hidden complexities do people often overlook when decommissioning an established financial system?

Decommissioning a financial system involves far more than capturing cost savings. Unlike other systems, financial systems are subject to strict regulatory oversight, requiring careful handling of security, data retention, interdependencies, and user adoption challenges.

One major complexity is hidden system interdependencies. Teams working with a system for years often underestimate how closely coupled its components are. Functions that seem simple in daily operations may have critical dependencies, making migration or decommissioning more complicated than expected. Comprehensive documentation of components and dependencies is essential to avoid disruptions.

Another challenge is ensuring accuracy and compliance. Legacy and new systems often need to run in parallel until they produce matching results, all while maintaining historical records for regulatory audits. Additionally, the new system must meet or exceed the security and robustness of the legacy system, which has been refined over years of real-world testing.
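
In practice, the parallel run is backed by a reconciliation harness that compares the two systems’ outputs on the same inputs. A hypothetical sketch, with an illustrative record shape and tolerance:

```python
# Parallel-run reconciliation sketch: both systems process the same inputs,
# and any divergence is logged for investigation before the legacy system
# can be retired. Record shape and tolerance are illustrative.

TOLERANCE = 0.01  # e.g. one cent on monetary amounts

def reconcile(legacy_results: dict, new_results: dict) -> list:
    """Return a list of (key, legacy_value, new_value) mismatches."""
    breaks = []
    for key in legacy_results.keys() | new_results.keys():
        old, new = legacy_results.get(key), new_results.get(key)
        if old is None or new is None or abs(old - new) > TOLERANCE:
            breaks.append((key, old, new))
    return breaks

legacy = {"ACC-1": 1041.10, "ACC-2": 250.00}
modern = {"ACC-1": 1041.10, "ACC-2": 250.02}   # divergent balance
print(reconcile(legacy, modern))                # [('ACC-2', 250.0, 250.02)]
```

Only when the break report stays empty across full business cycles – month-ends, year-ends, audits – does decommissioning the legacy side become defensible.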

Edge cases add further complexity. Legacy systems have undergone years of refinements, addressing unexpected scenarios that may not be immediately obvious during migration. Rebuilding or accounting for these scenarios in a new system is challenging.

Beyond technical challenges, psychological resistance from users accustomed to the legacy system is a significant hurdle. Transitioning to a new system can lead to hesitation, productivity dips, and frustration. Additionally, users often expect the new system to include missing ‘good-to-have’ features that were absent in the old system, further increasing complexity.

Ultimately, successful decommissioning requires rigorous documentation, parallel testing, strong user engagement, and regulatory alignment to ensure a smooth transition.

What has been your most instructive failure in moving from hands-on development to architecture leadership, and how did it reshape your approach?

The transition to architecture leadership was relatively smooth for me, as I had been designing systems for quite some time. Early in my career, I worked closely with an architect, assisting in drawing component, box, flow, and context diagrams, which gave me a strong foundation.

That said, one of my most instructive failures was struggling with context switching between hands-on development and high-level architectural decision-making. I initially approached architecture with a developer’s mindset, often diving into micro-level implementation rather than focusing on strategic system design. Instead of maintaining a broad, system-wide perspective, I sometimes got caught up in code-level details, which conflicted with the responsibilities of an architect.

Additionally, as a developer, I was used to working in silos, optimizing for my own tasks or immediate project needs. However, as an architect, I had to prioritize end-to-end simplicity, ensuring that the entire system was intuitive and adaptable for third-party integrations. This shift in mindset – from local optimization to holistic system design – was challenging at first.

Recognizing this mistake, I consciously adjusted my approach, ensuring that my decisions balanced technical feasibility, scalability, and long-term maintainability rather than just immediate implementation concerns. This experience reshaped my leadership style, helping me mentor developers more effectively and design systems with a broader, long-term vision.

How do you ensure that automating processes like credit approval still preserves the necessary human judgment in financial decisions?

Since banks and financial institutions operate in a regulated environment, automation must be implemented with caution to ensure compliance and sound decision-making.

First, not all credit applications are auto-approved – approval depends on business policies, customer segments, creditworthiness, and other risk factors. To maintain human oversight, we implement cross-checks, both manual and automated, to validate auto-approved applications and identify any inconsistencies or errors.
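
One such automated cross-check is to independently re-score auto-approved applications against the stated policy and escalate any disagreement to a human credit officer. The sketch below is purely illustrative – the thresholds and fields are made up, not an actual bank policy:

```python
from dataclasses import dataclass

@dataclass
class Application:
    app_id: str
    credit_score: int
    debt_to_income: float
    auto_approved: bool

# Illustrative policy thresholds - real values come from business policy.
MIN_SCORE, MAX_DTI = 700, 0.40

def policy_allows(app: Application) -> bool:
    return app.credit_score >= MIN_SCORE and app.debt_to_income <= MAX_DTI

def flag_for_review(apps: list[Application]) -> list[str]:
    """Independent re-check: any auto-approval the policy would not
    allow is escalated to a human credit officer."""
    return [a.app_id for a in apps if a.auto_approved and not policy_allows(a)]

batch = [
    Application("A-1", 760, 0.25, auto_approved=True),
    Application("A-2", 640, 0.45, auto_approved=True),  # should not have passed
]
print(flag_for_review(batch))  # ['A-2']
```

Keeping the cross-check logically independent of the approval engine is the point: a shared bug would otherwise approve and validate the same bad decision.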

With advancements in AI, automation can now incorporate intelligent decision-making, ensuring that regulatory requirements and risk assessments are factored in while maintaining efficiency and accuracy.

When faced with legacy and modern systems, how do you decide whether to refactor existing code or do a complete rewrite, especially for critical financial applications?

When dealing with legacy and modern systems, the decision to refactor or rewrite depends on multiple factors, especially for critical financial applications.

Let’s start with the easier case—modern systems. Refactoring is generally the preferred approach since these systems typically have unit tests and automation in place, ensuring that changes do not break functionality. Additionally, modern applications are recently developed, often with adequate documentation or self-explanatory code, making refactoring a safer and more manageable choice.

However, legacy systems are a different beast. While a complete rewrite sounds appealing—and is favored by solution engineers, developers, testers, and business users alike—it’s rarely feasible. Cost, time, and current business utility often make rewriting impractical. Without unit or automation testing, making any changes to a legacy system is a nightmare.

Legacy systems are usually monoliths with all the associated challenges:

  • Little or no documentation
  • Key developers who have long since left the organization
  • Severe interdependencies, where fixing one issue might break another

In such cases, a hybrid approach works best. Instead of rewriting the entire system, identify isolated components that can be gradually modernized without disrupting the whole application.

In a nutshell, the choice between refactoring and rewriting depends on system complexity, availability of testing infrastructure, and cost considerations. For modern systems, refactoring is usually the way to go. For legacy applications, rewriting may be ideal but isn’t always feasible – so a targeted, incremental rewrite is often the most practical solution.
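
One practical note on the “no tests” nightmare mentioned above: a common first safety net before any incremental rewrite is a characterization (golden-master) test, which records what the current code actually does and then guards the rewrite against that record. A generic sketch, with a stand-in for an undocumented legacy routine:

```python
import json
from pathlib import Path

def legacy_fee(amount: float, customer_tier: str) -> float:
    # Stand-in for an undocumented legacy routine whose exact behaviour
    # must be preserved, whatever it turns out to be.
    rate = 0.02 if customer_tier == "retail" else 0.01
    return round(max(amount * rate, 5.0), 2)

GOLDEN = Path("fee_golden.json")
CASES = [(50.0, "retail"), (1_000.0, "retail"), (1_000.0, "corporate")]

def test_matches_golden_master():
    actual = {f"{amt}/{tier}": legacy_fee(amt, tier) for amt, tier in CASES}
    if not GOLDEN.exists():
        GOLDEN.write_text(json.dumps(actual))  # first run: record current behaviour
    assert actual == json.loads(GOLDEN.read_text())

if __name__ == "__main__":
    test_matches_golden_master()
    print("behaviour unchanged")
```

The test makes no claim that the recorded behaviour is correct – only that the rewritten component reproduces it, which is exactly the guarantee stakeholders ask for.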
