Test automation expert Vladyslav Korol on how AI, regulations, and scale are reshaping QA
In 2025, software testing is no longer just about finding defects. Enterprises are now racing to automate their entire testing lifecycle, from unit to regression, as they prioritize scalability, efficiency, user experience (UX), and security. According to XRAY, the preferred test management solution for Fortune 500 leaders such as BMW, Samsung, and Airbus, the software testing world is transforming, with automation taking over. But how does it work in practice?
To understand how enterprises adapt testing at scale, we spoke with Vladyslav Korol, a Software Development Engineer in Test with over seven years of experience in building and scaling automated testing solutions. His career includes contributions to major international companies such as Penn Entertainment, CVS, Disney, Amazon, and Mindbody. Vladyslav has developed and maintained automation frameworks for web, mobile, and embedded systems within industries like security software, health and beauty, payments, and entertainment. In this interview, Vladyslav discusses scaling QA, evolving standards, and the role of AI.
“I use AI to solve problems in areas I haven’t encountered before”
Vladyslav, you have been working in the industry for more than seven years and have established yourself as a seasoned professional in test automation. How did you transition from working at a Ukrainian construction company to leading US tech companies?
I began my IT career at a Ukrainian construction company, later working with hotel and cruise businesses and in risk assessment before moving to top US tech companies. I graduated from a California testing school that focused on local market requirements, job search strategies, and professional development.
My work on Risk Assessment for a background-check app was especially valuable. It was my first experience in a large international organization, contributing to a major project alongside numerous specialists. Each step expanded my skills and perspective—whether working in the sports, health, and beauty industries or on projects securing private properties and businesses. The foundation I built at LexisNexis and Mindbody enabled me to join Amazon, where we developed next-generation security systems and advanced camera technology. Meanwhile, at Disney, our project laid the groundwork for other companies to build their own mobile apps and authentication systems, serving millions of park guests.
Your diverse background offers valuable insights into testing. Having built automation frameworks and led testing efforts within the world’s largest tech companies, what key differences in testing approaches and main challenges did you encounter across these industry giants?
At Disney, the testing process was the least formalized because our team was developing a platform for other internal teams rather than a direct end-user product. As a result, there weren’t dedicated testers. I personally managed the test automation effort.
At Amazon, the approach was unconventional since we worked with physical devices, which introduced unique challenges around hardware integration, environment variability, and test reliability. We had to design custom strategies for device provisioning and validation that went beyond typical software-only testing.
At Penn Entertainment, I gained my first experience automating two distinct applications across three platforms—web, iOS, and Android—within a single project. Coordinating consistent coverage, maintaining test stability, and managing platform-specific nuances at that scale was both complex and rewarding.
At CVS, I led the development of a new mobile app automation framework using WebdriverIO, which at the time had limited community resources and documentation. That forced us to innovate, write custom utilities, and solve problems independently.
Overall, the main challenges across these companies were balancing flexibility and standardization, adapting to the unique constraints of each environment, and maintaining test reliability at scale.
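For readers less familiar with the tooling Vladyslav mentions, the snippet below is a minimal sketch of what a WebdriverIO configuration for mobile app automation might look like. The device name, app path, and suite layout are illustrative assumptions, not the actual CVS setup.

```typescript
// wdio.conf.ts -- a hypothetical WebdriverIO + Appium setup for a mobile
// test suite (all values are placeholders for illustration only).
export const config: WebdriverIO.Config = {
  runner: 'local',
  specs: ['./test/specs/**/*.ts'],
  maxInstances: 1,

  // Appium capabilities for an Android build; an iOS entry would use
  // platformName: 'iOS' with the XCUITest automation driver instead.
  capabilities: [{
    platformName: 'Android',
    'appium:automationName': 'UiAutomator2',
    'appium:deviceName': 'Pixel_7_Emulator',   // assumed emulator name
    'appium:app': './builds/app-debug.apk',    // assumed app path
    'appium:newCommandTimeout': 240,
  }],

  // Start an Appium server alongside the test run.
  services: ['appium'],
  framework: 'mocha',
  mochaOpts: { timeout: 120000 },
};
```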
In 2025, companies are actively investing in QA automation due to increasing product complexity. How are companies implementing AI in testing, and are they using LLMs like ChatGPT to generate test cases?
Scaling automation primarily requires choosing frameworks capable of supporting multiple platforms simultaneously, expanding the team to cover both the backlog and new feature development, and continuously maintaining tests, as applications evolve rapidly and even minor updates can cause significant disruptions. More companies are leveraging tools like Cursor, enhanced with custom rules, to streamline processes ranging from ticket status updates to development workflows and code review automation.
Personally, I use AI to check my code for errors, optimize complex functions, or solve problems in areas I haven’t encountered before.
“Automation works even when people are asleep”
You have led the development of end-to-end automation solutions in enterprise environments like CVS, where your framework remains in use today. Based on your hands-on experience, which tools—Selenium, Appium, or Cypress with AI integration—prove most effective for large-scale implementations?
At CVS, we used a combination of WebdriverIO, Azure DevOps, and Cursor for auxiliary tasks. In my opinion, Appium remains the best solution for larger test frameworks with many engineers and thousands of tests; newer frameworks tend to be less flexible at that scale.
Another area where automation truly shines is risk-based testing. How does automation help prevent real-time incidents, and how do you handle situations when automation fails?
Automation works even when people are asleep. The frequency of automated testing increases the chances of catching bugs that might slip through manual testing. Of course, some issues still need manual verification, such as when tests fail due to external factors like unstable testing environments.
At Penn Entertainment, one of the largest entertainment companies in the US and Canada, I’m responsible for Risk Reports for mobile apps. Our operations span multiple states and provinces, each with its own regulations. Manually testing all combinations would be extremely labor-intensive and error-prone. Instead, our automated test jobs run daily across all regions, with simulated geolocations triggering different application states.
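As a rough illustration of how region-specific checks like these can be driven from test code, here is a hedged sketch using WebdriverIO's Appium bindings. The coordinates, app identifier, and selectors are invented for this example and do not reflect Penn Entertainment's actual framework.

```typescript
// Hypothetical sketch: iterate over jurisdictions, spoof the device
// location via Appium, and check that the app shows the matching state.
const regions = [
  { name: 'Pennsylvania', latitude: 40.2732, longitude: -76.8867 },
  { name: 'Ontario',      latitude: 43.6532, longitude: -79.3832 },
];

describe('region-specific app states', () => {
  for (const region of regions) {
    it(`shows the correct experience for ${region.name}`, async () => {
      // Simulate the device's geolocation (WebdriverIO/Appium command).
      await driver.setGeoLocation({
        latitude: region.latitude,
        longitude: region.longitude,
        altitude: 0,
      });

      // Restart the app so it re-evaluates the jurisdiction (assumed behavior).
      await driver.terminateApp('com.example.app'); // hypothetical app id
      await driver.activateApp('com.example.app');

      // Assumed accessibility id exposing the detected region.
      const bannerText = await $('~region-banner').getText();
      expect(bannerText).toContain(region.name);
    });
  }
});
```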
You actively participate in code reviews and technical interviews at Penn Entertainment, assessing the overall quality of solutions and evaluating candidates for automation roles. What technical and soft skills do you prioritize when building high-performing QA teams?
I assess candidates’ technical expertise, adaptability, and experience working on large-scale test projects, where approximately 30 engineers make daily updates. I test basic programming-language understanding, debugging skills, the ability to handle multiple tasks under stress, CI/CD experience, resolving merge conflicts, and the candidate’s approach to writing test locators. New hires receive framework guides and start with simple test tasks; I then review their code collaboratively to identify errors and ensure consistency with our coding standards.
Programming knowledge is increasingly required even for manual testing roles. Experience with cloud services such as Amazon Web Services (AWS), and proficiency with Kafka and Docker, have become essential.
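As a small illustration of the test-locator point above, the sketch below shows the kind of contrast a reviewer typically looks for; the element names and selectors are hypothetical.

```typescript
// Hypothetical example: positional XPath vs. stable, agreed-upon identifiers.
it('prefers stable locators over positional XPath', async () => {
  // Brittle: positional XPath breaks whenever the layout shifts.
  const fragile = await $('//android.widget.Button[2]');

  // Resilient: keyed to stable accessibility ids / test attributes
  // agreed with developers (ids here are invented for illustration).
  const stableMobile = await $('~login-button');            // Appium accessibility id
  const stableWeb    = await $('[data-testid="submit"]');   // web data attribute

  await expect(stableMobile).toBeDisplayed();
});
```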
With your successful track record leading cross-cultural teams in both Ukrainian and US companies, how do you approach knowledge transfer and maintain code quality standards across distributed teams?
It’s not too difficult when you have the right processes in place. You need established Core Hours to facilitate communication across time zones and thorough documentation so that work can easily be picked up by others. Clear coding standards and regular code reviews ensure consistency across all team members, regardless of location.
“The best approach is to create your own test project”
Which emerging technologies show the most promise in test automation today?
Current popular test frameworks include Playwright, Cypress, and WebdriverIO. Many companies are also adopting AI tools and IDEs such as Copilot and Cursor.
I don’t believe manual testing will vanish; instead, it will continue to evolve, driven by emerging trends, especially advancements in AI. We may see fewer pure automation engineers and more developers focused on building internal testing tools to assist other engineers or manual testers.
As a member of the International Association of IT Professionals, you have valuable experience and insights from being part of a well-regarded professional community. What advice would you give QA engineers wanting to transition to automation and work at large international companies?
The best approach is to create your own test project, even one based on an existing online application. Define test scenarios, ensure they execute properly, learn the framework’s functionality, and automate test execution in Jenkins, Azure DevOps, or similar tools (a minimal example follows below). Deep programming knowledge isn’t initially required, but you need language fundamentals and should build expertise over time.
It’s also important to find and learn AI tools that can complement or improve existing frameworks while maintaining strong problem-solving skills, since AI isn’t yet 100% reliable. Adaptability to emerging technologies and evolving processes is also crucial, as is refining your job-search strategy.
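To make the “own test project” advice concrete, here is a minimal sketch of what a first automated scenario might look like, using Playwright against a public placeholder site. The URL, test name, and assertion are assumptions chosen purely for illustration.

```typescript
// example.spec.ts -- a hypothetical first test for a personal practice project.
// Run with: npx playwright test
import { test, expect } from '@playwright/test';

test('a visitor can load the home page', async ({ page }) => {
  // Any public demo application works here; this URL is a placeholder.
  await page.goto('https://example.com/');

  // Assert on something stable so the test fails loudly if the flow breaks.
  await expect(page).toHaveTitle(/Example/);

  // From here, grow the project: add page objects, tag smoke vs. regression
  // suites, and wire the run into Jenkins or Azure Pipelines on a schedule.
});
```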
What are your own career plans and how do you see the future development of your professional path in the QA automation field?
I plan to establish myself as an individual contributor first, with the possibility of transitioning into an executive-level position, such as Vice President of Quality Engineering at a major U.S. technology corporation, where I can continue contributing technically while optimizing team workflows and patterns.
