In the rapidly evolving digital landscape, performance engineering serves as a bedrock for the success of enterprise software. As systems are expected to be more robust and scalable while delivering a better user experience, the engineers behind the scenes are becoming more visible and more strategic to product delivery. One such professional is Alex Kuriakose, a software architect and performance engineer with more than 20 years of experience tuning large-scale platforms. Alex is a Senior Software Engineer at Workday and has worked on highly performant systems at scale, both in the enterprise and at Silicon Valley start-ups.
In this interview, Alex shares insights from his career, along with his observations on how performance engineering is transforming and why proactive development practices matter in contemporary software delivery.
Q1: Alex, performance engineering has become integral to enterprise development. What initially drew you to this area?
Early in my career, I had the opportunity to work on a multi-tenant eCommerce platform for one of the world’s largest retailers. It was during this project that I truly recognized the critical importance of scalability for applications serving massive user bases. I realized that performance and scalability are not just technical concerns—they directly impact usability, customer trust, conversion rates, and ultimately, business profitability. Experiencing this firsthand at such an early stage in my career had a profound influence on my professional journey.
What drew me further to performance engineering is its unique position at the intersection of software development, infrastructure management, and user experience. It’s a discipline that demands both deep technical knowledge and a holistic understanding of system behavior, making it both a challenging and highly rewarding area to specialize in.
Q2: At Workday, you’ve contributed to building automation-driven performance testing platforms. Can you elaborate on how these are used?
Well, these platforms run performance checks at very early stages of product development. They embody the shift-left approach, giving developers, QA engineers, and SRE teams the opportunity to find performance issues before they ever reach production. The tools reproduce realistic operational scenarios and monitor system metrics, delivering insights across teams so that everyone shares responsibility for performance.
Q3: How has this shift-left approach impacted development and release cycles?
It has improved both our agility and the time needed to resolve issues. To complement the self-service toolset I mentioned, we have built automated performance testing directly into our CI/CD pipelines, enabling teams to catch performance bottlenecks much earlier in the development lifecycle. Early detection drastically reduces the need for reactive fixes after deployment, which traditionally consume far more time and resources. It also improves communication between development and operations teams, because they share a common language about system performance and where things break down.
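To give a concrete flavor, a minimal sketch of this kind of CI performance gate, written in Gatling's Scala DSL, might look like the following. The endpoint, load profile, and thresholds here are hypothetical examples rather than an actual Workday configuration; the point is that a failed assertion makes Gatling exit with a non-zero status, which the pipeline can use to fail the build.

```scala
// Minimal sketch of a CI performance gate (hypothetical service and thresholds).
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class CheckoutGateSimulation extends Simulation {

  // Hypothetical staging endpoint exercised by the pipeline.
  val httpProtocol = http.baseUrl("https://staging.example.com")

  val checkoutFlow = scenario("Checkout flow")
    .exec(http("home").get("/").check(status.is(200)))
    .exec(http("checkout").post("/checkout").check(status.is(200)))

  setUp(
    // Ramp up 100 virtual users over two minutes.
    checkoutFlow.inject(rampUsers(100).during(2.minutes))
  ).protocols(httpProtocol)
    .assertions(
      // If either threshold is breached, Gatling exits non-zero and the CI stage fails.
      global.responseTime.percentile3.lt(500),   // 95th-percentile response time under 500 ms (default percentile config)
      global.successfulRequests.percent.gt(99)   // fewer than 1% of requests may fail
    )
}
```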
Q4: You’ve also been involved in several strategic aspects at Workday. What areas have you focused on beyond performance testing?
My team has focused on expanding the AI Assistant's capacity to serve a growing user base with demanding latency requirements. The AI Assistant capability also extends to the integrated Google Workspace platform, which is particularly popular among Workday users. Performance evaluation has been a fundamental element of every area I have worked on, covering system throughput, response time, and resource utilization.
Q5: You previously worked at Walmart and Litmus7 Consulting. How did those experiences shape your current approach?
Working on the platform readiness team at Walmart gave me invaluable insight into operating large-scale systems under peak website traffic. During major holiday sales, the company's biggest revenue-generating event, the responsibility was immense and the stakes were high. To prepare for those extreme traffic demands, our team developed a custom performance analysis tool tailored to simulating and analyzing such peak-load conditions. At Litmus7, I had the opportunity to work closely with retail clients, helping them tackle complex architectural challenges, including guiding microservices adoption and enhancing backend systems with advanced security features such as device fingerprinting. The range of projects I handled across both companies gave me a well-rounded understanding of how deeply performance, architecture, and security are intertwined in building resilient systems.
Q6: What technologies or tools do you find most effective in performance engineering today?
Over the years, I’ve worked with a wide range of performance engineering tools, including Silk Performer, JMeter, and Blazemeter. Lately, I’ve been particularly impressed with Gatling for its modular design and code-driven architecture, which really streamlines scripting and maintenance. When it comes to system profiling and deeper analysis, I’ve relied on tools like YourKit, JProfiler, and specialized ones like fastthread. I believe the real art of performance engineering lies in choosing the right tool for the job—whether it’s for load generation, pinpointing system bottlenecks around memory, CPU, and I/O, or digging deeper into thread dumps and GC log analysis to uncover hidden performance issues.
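To illustrate the code-driven, modular style mentioned above, here is a simplified Gatling sketch showing how request chains can be defined once and reused across scenarios. The shop endpoints and user flows are invented purely for the example.

```scala
// Sketch of modular, reusable request chains in Gatling (hypothetical shop endpoints).
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Request chains live in plain Scala objects, so they can be shared across simulations.
object Search {
  val search = exec(
    http("search").get("/search").queryParam("q", "laptop").check(status.is(200))
  )
}

object Checkout {
  val addToCart = exec(http("add_to_cart").post("/cart").check(status.is(200)))
  val pay       = exec(http("pay").post("/payment").check(status.is(200)))
}

class BrowseAndBuySimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://shop.example.com") // hypothetical base URL

  // Scenarios are composed from the shared chains like ordinary code,
  // which keeps scripts small and easy to maintain.
  val browsers = scenario("Browsers").exec(Search.search)
  val buyers   = scenario("Buyers")
    .exec(Search.search)
    .exec(Checkout.addToCart)
    .exec(Checkout.pay)

  setUp(
    browsers.inject(rampUsers(200).during(2.minutes)),
    buyers.inject(rampUsers(50).during(2.minutes))
  ).protocols(httpProtocol)
}
```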
Q7: You’ve received recognition for your work, including a CEO’s Award at Walmart. What do these acknowledgments mean to you?
I view them as team accomplishments. You don’t work in a vacuum, and performance engineering requires coordination from development, operations, and leadership. Awards are a nice acknowledgement of the impact, but they also demonstrate the importance of collaborative engineering.
Q8: Looking ahead, where do you see performance engineering heading in the next few years?
Well, systems continue to become more distributed while AI and real-time applications take center stage, and that shifts the performance challenges. Predictive analytics and self-healing systems will become more prominent as observability becomes more tightly integrated with CI/CD across the IT landscape. The discipline is evolving from reactive troubleshooting toward proactive, data-driven performance optimization.
Conclusion
As digital platforms continue to grow, performance engineers will play an essential role in their future. Alex Kuriakose's work demonstrates how enterprise technology can be built for sustained long-term operation and scalable growth, while his projects quietly support numerous critical applications.
