With more than a decade protecting Medicaid and state health exchanges, a QA and cybersecurity expert shows why uniting testing and security is crucial against today’s wave of cyberattacks
In 2025, healthcare is still one of the most vulnerable sectors when it comes to cyber risk. According to the U.S. Department of Health and Human Services, there have already been more than 300 reported breaches in the first half of the year—putting the industry on track to surpass last year’s total of 385 cases. The fallout from these attacks is not just about stolen data. The Change Healthcare hack, for example, disrupted claims processing nationwide, delayed payments to providers, and even interrupted access to prescriptions for patients. It was a stark reminder that in healthcare, cybersecurity failures quickly become public health risks.
To learn how organizations can better defend themselves, we turn to Tamerlan Mammadzada, a Senior Software Quality Assurance and Security Testing Engineer who has spent more than a decade building safeguards for critical healthcare IT systems, including Medicaid (the program that gives millions of Americans who could not otherwise afford healthcare access to doctors, hospitals, prescriptions, and long-term care) and state health benefit exchanges. He is a Senior Member of IEEE and a Distinguished Fellow of the Soft Computing Research Society, a title awarded for contributions to advancing the field. He has authored the book Securing Healthcare Software: A Practical Guide to Functional Testing, Penetration Testing, and Compliance, and he developed a Secure and Compliant Healthcare QA and Penetration Testing Framework for State Health Insurance Marketplaces. This framework unites quality assurance, automation, and penetration testing into one streamlined process, an approach that has already kept serious vulnerabilities from reaching millions of users.
In this conversation, we explore how healthcare’s cybersecurity crisis highlights the need for integrated defenses, why QA and security must work hand in hand, how automation strengthens resilience, and what lessons other sectors can take from healthcare’s experience.
Tamerlan, with healthcare breaches on the rise, as someone who has spent years safeguarding Medicaid and state health exchanges, what do you think would help organizations best protect their critical systems?
In my view, the most effective step is integrating QA, automation, and penetration testing into a single strategy. Too often, organizations treat these as separate functions, which creates blind spots.
When you build security checks into everyday QA cycles, use automation to catch routine issues early, and then add focused penetration testing to uncover high-risk flaws, you get a much stronger defense. In practice, this approach has meant far fewer defects slipping through, a noticeable drop in repetitive manual work, and, most importantly, keeping serious vulnerabilities out of production.
The bottom line is that integration turns testing from a reactive process into a proactive shield, and that's what critical systems need today.
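To make the idea concrete, here is a minimal sketch (an editorial illustration, not code from Mammadzada's framework) of what folding a security assertion into an ordinary functional test might look like. The record store, field names, and owner-based access rule are all hypothetical:

```python
# Hypothetical in-memory record store; illustrative data only.
RECORDS = {
    "rec-1": {"owner": "alice", "plan": "Silver"},
    "rec-2": {"owner": "bob", "plan": "Gold"},
}

def get_record(record_id, requesting_user):
    """Return (status, record); deny records the requester does not own."""
    record = RECORDS.get(record_id)
    if record is None:
        return 404, None
    if record["owner"] != requesting_user:
        return 403, None  # cross-user access is refused
    return 200, record

# Functional check: the owner can read their own record.
status, body = get_record("rec-1", "alice")
assert status == 200 and body["plan"] == "Silver"

# Security check in the same suite: another user must be denied --
# exactly the kind of authorization gap functionality-only QA misses.
status, body = get_record("rec-2", "alice")
assert status == 403 and body is None
```

Running both assertions in one suite means a broken access rule fails the build the same way a broken feature would.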
Your integrated framework for Medicaid and Health Insurance Marketplaces reduced defect leakage into production by more than 40% and cut manual testing by 30%. What do you see as the main reason this approach works so well compared to traditional QA or security testing done separately?
The real strength comes from eliminating silos. Traditional QA teams often focus only on functionality, while security teams come in later, almost like auditors. By the time they find something, it’s already expensive or politically difficult to fix.
In my framework, QA and security testing are part of the same process from the start. Automation helps us spot small defects before they pile up, and penetration testing ensures that even if functionality looks fine, the system won’t collapse under attack. This balance of speed and depth is what makes it so effective.
During your work on Medicaid systems, you led the validation and remediation of a major data incident that affected over 9,000 subscriber records. What lessons did that experience teach you about balancing speed and accuracy under public and regulatory pressure?
That case showed me how high the stakes really are. Regulators wanted answers fast, but rushing without precision would have created even bigger problems. We built a validation framework that allowed us to quickly cross-check the data while maintaining full accuracy.
The key lesson is that speed and accuracy are not opposites; you achieve both when your processes are mature. Automation helped us validate thousands of records in hours, and strong collaboration with developers ensured fixes didn’t break compliance requirements. It reinforced my belief that preparation is everything.
It reminds me of an air traffic controller’s job. You can’t just rush planes through to move faster, because the smallest mistake could have huge consequences. The only way to balance speed and accuracy is to rely on systems that are prepared and tested in advance.
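The validation he describes, cross-checking thousands of records quickly without sacrificing accuracy, can be sketched roughly like this (an editorial illustration with made-up field names, not the actual remediation tooling):

```python
# Hypothetical sketch: cross-check an exported subscriber list against a
# source-of-truth snapshot and flag discrepancies in a single pass, so
# thousands of rows can be validated in minutes rather than days.

def cross_check(source, export):
    """Return (record_id, issue) pairs for missing or mismatched records."""
    issues = []
    for rec_id, truth in source.items():
        found = export.get(rec_id)
        if found is None:
            issues.append((rec_id, "missing"))
        elif found != truth:
            issues.append((rec_id, "mismatch"))
    return issues

source = {"s-1001": {"status": "active"}, "s-1002": {"status": "termed"}}
export = {"s-1001": {"status": "active"}, "s-1002": {"status": "active"}}

print(cross_check(source, export))  # -> [('s-1002', 'mismatch')]
```

Every flagged pair gives developers an exact record to fix, which is how automation supports both speed and accuracy at once.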
In your penetration testing work on state health insurance enrollment systems, you uncovered a number of serious vulnerabilities. What did those findings show you about the kinds of risks organizations often underestimate?
One key lesson was how often organizations overlook logic-based flaws. In my tests, I found issues like Insecure Direct Object Reference (IDOR), which lets attackers access records they shouldn't; Server-Side Request Forgery (SSRF), where crafted requests manipulate internal services; and Server-Side Template Injection (SSTI), which can lead to remote code execution.
Think of it like a hotel: if the front desk gives you the key to someone else’s room, that’s IDOR; if a guest tricks the staff into making calls on their behalf, that’s SSRF. These may not be the headline vulnerabilities on every checklist, but in large systems, they’re just as dangerous. They often slip past automated scans, which is why teams need to think creatively about misuse. Penetration testing, combined with QA and automation, is essential for catching what standard checks miss.
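For readers curious what a defense against the SSRF case looks like in code, here is a simplified sketch (an editorial illustration, not a complete or production-grade guard) that rejects user-supplied URLs pointing at private or loopback address space before the server fetches them:

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_target(url):
    """Allow only http(s) URLs whose host is not a private/loopback IP.

    Simplified: a real guard would also resolve hostnames to IPs and
    re-check after redirects; this only screens literal IP addresses.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(parsed.hostname)
    except ValueError:
        return True  # a hostname, not an IP literal; resolution omitted here
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

assert is_safe_target("https://example.com/report") is True
assert is_safe_target("http://169.254.169.254/latest/meta-data/") is False
assert is_safe_target("http://127.0.0.1:8080/admin") is False
```

The second rejected URL is the classic cloud metadata endpoint, a favorite SSRF target, which is precisely the "guest tricking the staff into making calls" scenario from the hotel analogy.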
The vulnerabilities you mention, like IDOR and SSRF, sound very technical. How do these risks translate into other industries outside healthcare?
They translate more directly than people think. Take finance: an overlooked authorization flaw can let someone view another customer’s account. Or in government services, a simple misconfigured request could expose citizen data. These aren’t “healthcare-only” risks—they’re logic gaps that can appear in any large, complex system. What healthcare teaches us is the importance of catching them early with penetration testing and creative thinking, before they spiral into systemic failures.
In your book, you’ve also outlined a framework that combines functional testing, penetration testing, and compliance standards into a single guide for healthcare IT teams. What motivated you to formalize this approach, and what gap were you aiming to close for engineers building critical systems?
The book came out of real project experience. I noticed that many teams in healthcare IT were struggling to align functional testing with security and compliance. They often had good QA processes, but those processes weren’t built to catch vulnerabilities or regulatory gaps.
I wanted to create a guide that engineers could use immediately. It combines functional testing methods with penetration testing practices, and it ties both back to compliance standards like HIPAA, which is about protecting patient health information, and NIST, which sets out the government's rules for keeping systems secure. The idea was to make secure development less abstract and more actionable.
You’ve also published research on platforms like Academia.edu, where you shared your integrated QA and penetration testing methodology. Why did you choose to make your work accessible in this way, and how do you see it influencing the industry?
I chose to publish on open platforms because access matters. Not every engineer, especially students or those in smaller organizations, can afford subscriptions to journals or expensive training. By putting my research on Academia.edu, I can reach a wider audience.
The response has been encouraging. I’ve had feedback from engineers who adapted parts of the methodology in their own projects. That kind of knowledge transfer is important. It raises the baseline of security practices across the field, which ultimately benefits everyone.
As a Senior Member of IEEE and a Distinguished Fellow of the Soft Computing Research Society, you’ve been involved in reviewing technical contributions and mentoring younger engineers. From your perspective, what skills or mindsets should the next generation of QA and cybersecurity professionals focus on most?
Curiosity is the number one skill. The best testers are the ones who keep asking "what if?" What if the system is misused? What if someone bypasses the normal flow? That mindset uncovers risks that scripts or tools might miss.
I also stress communication. In healthcare, especially, you’re often explaining risks to non-technical stakeholders like policy teams, compliance officers, and even government officials. If you can’t translate technical issues into business impact, your work won’t have the influence it should.
Looking ahead, as you continue to expand your leadership in secure healthcare software, what do you think will be the biggest change in how industries—healthcare, finance, or government—approach cybersecurity in the next five years?
I believe we’re moving toward preventive security becoming the standard. Right now, too many organizations focus on patching after something goes wrong. But with threats evolving so quickly, that approach won’t hold. Integrated testing, continuous validation, and automation will increasingly be built into systems from day one.
Another big change will be cross-industry convergence. Healthcare, finance, and government may seem very different, but they share the same challenges: protecting sensitive data, staying compliant, and maintaining public trust. The frameworks we’ve built in healthcare can serve as a blueprint for others, and I expect to see wider adoption across sectors.
It’s a bit like building earthquake resistance into a house. You don’t wait for the first tremor and then reinforce the walls—you design strength into the foundation from the start. Cybersecurity is heading the same way: resilience by design, not repair after disaster.
