How implementing a custom user information system and microservices architecture helped Exadel strengthen its position in the industry
Hyperautomation is no longer an abstract trend – it’s become a prerequisite for business survival and growth. Companies are moving from scattered automated solutions to integrated ecosystems where infrastructure, security, and product releases can function without constant human intervention. This is exactly the model Dzmitry Budnikau, a Java software developer at Exadel, is building in practice. Exadel is an outsourcing company with over 2,000 employees across 15 countries, serving major international clients. For three and a half years, Dzmitry has been actively modernizing one of the company’s key internal projects. The containerization, microservices platform, and custom fields system he initiated and played a crucial role in implementing are already delivering results – release cycles are getting shorter, application performance is improving, and the distributed development team kept working smoothly even during the pandemic. We spoke with Dzmitry Budnikau about how modern tech businesses built on automation are evolving today.
There’s a lot of talk about hyperautomation right now. Gartner named it one of the year’s top trends. Many of your solutions at Exadel, like implementing Docker and Kubernetes, align with this global trend. Generally speaking, what signals indicate that existing infrastructure is no longer cutting it, and where should companies start their automation journey?
Typically, the first red flag is when key processes are manual. When releasing updates depends on specific individuals, isn’t documented, and can’t be reproduced, any hiccup becomes a business risk. When I joined the company in late 2017, that’s exactly what I encountered: they were using outdated Hudson, which had long been discontinued, builds were manual, and there was practically zero visibility into them. The project simply lacked the hands, the initiative, the skills – or all of the above – to update the stack. Together with the team, we began modernizing the infrastructure. The first step was migrating to Jenkins in 2018-2019 – an intermediate phase that let us safely prepare for the next transition. In 2020, we moved to GitLab CI, giving us a unified CI/CD platform. At the same time, we implemented containerization with Docker, which standardized our dev and production environments and took the guesswork out of builds. This is a universal step that many companies can start with, regardless of scale.

Companies worry that automation and containerization are complicated and expensive. How do you think these changes pay off?
Automation pays for itself by reducing operational risks and time losses. Here’s a concrete example: previously, our code freeze before a release took about a week out of a four-week release cycle. That meant the team couldn’t develop new features for a quarter of the time. With the shift to automated pipelines through GitLab CI and the implementation of automated testing, analysis, and build systems, we got a unified assembly line where the system itself prepares the product for release. This frees up a significant portion of the release cycle for actual development. We’re already seeing results from this transformation: in 2018, our team released roughly once a month. Now we’re moving toward one release every three weeks, which will speed up delivery of new features to clients by about 25%. Ultimately, the business wins not just in time and process control, but also in development costs.
Moving to microservices architecture is considered a sign of growth. At the same time, this step can create new technical debt. I know you led the migration of the company’s largest internal system to microservices and managed to avoid this trap. How did you pull that off?
Microservices really aren’t a universal solution – they make sense when a monolithic approach starts holding back business development. In our case, when the company decided to migrate its largest internal system to microservices, I was trusted to kick off the process. I was given a development team of seven people working within a larger project team of about 20 specialists, including testers, analysts, and managers. We started with an intermediate step – implementing the Backend for Frontend pattern. This was safer and less costly than going straight to full microservices, and it let us gradually prepare the architecture for the transition ahead and allow the new and old systems to coexist. From there, the system is gradually being split into independent services that communicate through APIs. This allows us to delegate development of individual modules to different teams and accelerate feature releases. Teams stop blocking each other – each can work on its piece in parallel. Adding Docker and Kubernetes on top makes it easy to scale the application based on load and dramatically increase fault tolerance, which is extremely difficult to achieve with a monolithic approach. We built in observability, test automation, and deployment automation from the start – without these, microservices turn into chaos. The trickiest part was untangling the pipeline configurations and old scripts inherited from the legacy CI – some of them didn’t work as expected. You have to study legacy code and build new infrastructure at the same time, without stopping the team’s work.
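To make the Backend for Frontend idea more concrete, here is a minimal sketch in plain Java of how such a layer can sit between the UI and the services being carved out of a monolith: the frontend calls one endpoint, and the BFF fans out to the backend services in parallel and merges the answers. The service URLs, endpoint names, and response shape are hypothetical illustrations, not the project’s actual implementation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

/**
 * Minimal Backend-for-Frontend sketch: the UI asks this layer for a
 * "project card", and the BFF calls two independent backend services
 * in parallel and merges their answers into one response.
 * The URLs below are hypothetical placeholders.
 */
public class ProjectCardBff {

    private final HttpClient http = HttpClient.newHttpClient();

    /** Fetches a resource from one backend service asynchronously. */
    private CompletableFuture<String> fetch(String url) {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return http.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                   .thenApply(HttpResponse::body);
    }

    /** Aggregates data from two services into a single payload for the UI. */
    public String projectCard(String projectId) {
        CompletableFuture<String> details =
                fetch("http://project-service/projects/" + projectId);
        CompletableFuture<String> members =
                fetch("http://staffing-service/projects/" + projectId + "/members");

        // Both calls run in parallel; the UI receives one combined document.
        return details.thenCombine(members,
                (d, m) -> "{\"details\":" + d + ",\"members\":" + m + "}")
                .join();
    }

    public static void main(String[] args) {
        // Usage sketch (assumes the two backend services are reachable):
        System.out.println(new ProjectCardBff().projectCard("42"));
    }
}
```

The point of the pattern is that the UI only ever talks to this one layer, so services can be split off the monolith one by one without the frontend noticing.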
In practice, there’s often a situation where business and IT speak different languages. How does automation help close this gap and simplify workflows?
One working approach is to give the business more autonomy within safe technical boundaries. As part of the migration, we’re implementing a system of custom fields that business users can create on their own, without involving the dev team every time. Imagine that previously, the data structure was like a rigidly welded metal frame – to add a new field or change something in the structure, you needed a developer and considerable time. I was one of the initiators and developers of a more flexible system that lets business users create the fields they need themselves and configure them for their purposes. Now it’s like Lego – the business can independently assemble the data models they need. We expect this solution will be one of the factors reducing requests from the business for standard feature development by about 25%.
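As a rough sketch of that “Lego” idea, here is what a custom-field mechanism can look like in plain Java: a business user defines a field once, and the application validates values against that definition instead of baking every field into the schema. All names and validation rules here are invented for illustration; a real system would persist the definitions and drive the UI from them.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal sketch of user-defined ("custom") fields; names are illustrative. */
public class CustomFieldsDemo {

    /** The kinds of values a business user can choose when defining a field. */
    enum FieldType { TEXT, NUMBER, DATE, BOOLEAN }

    /** A field definition created by a business user, not by a developer. */
    record FieldDefinition(String name, FieldType type, boolean required) {

        /** Checks a raw value against this definition. */
        boolean accepts(String rawValue) {
            if (rawValue == null || rawValue.isBlank()) {
                return !required;
            }
            return switch (type) {
                case TEXT -> true;
                case NUMBER -> rawValue.matches("-?\\d+(\\.\\d+)?");
                case DATE -> rawValue.matches("\\d{4}-\\d{2}-\\d{2}");
                case BOOLEAN -> rawValue.equals("true") || rawValue.equals("false");
            };
        }
    }

    /** An entity stores custom values keyed by field name. */
    static class Entity {
        private final Map<String, String> customValues = new LinkedHashMap<>();

        void set(FieldDefinition field, String value) {
            if (!field.accepts(value)) {
                throw new IllegalArgumentException(
                        "Value '" + value + "' is not valid for field '" + field.name() + "'");
            }
            customValues.put(field.name(), value);
        }

        Map<String, String> values() {
            return customValues;
        }
    }

    public static void main(String[] args) {
        // A business user defines two fields without touching the code base...
        FieldDefinition budget = new FieldDefinition("Project budget", FieldType.NUMBER, true);
        FieldDefinition deadline = new FieldDefinition("Hard deadline", FieldType.DATE, false);

        // ...and the application simply stores and validates values against them.
        Entity project = new Entity();
        project.set(budget, "150000");
        project.set(deadline, "2021-12-31");
        System.out.println(project.values());
    }
}
```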
Judging by recent news about breaches, cybersecurity is more critical than ever. You managed to implement modern security protocols before major data leaks became commonplace. How do you build protection that’s both robust and doesn’t slow down users?
User and corporate data breaches have indeed become one of the industry’s most serious problems. From the start of our project, we used our own authentication and authorization system, which served us well for a long time. But as the product evolved and integrations multiplied, problems emerged: the system became difficult to maintain, didn’t allow for single sign-on, and couldn’t scale without increasing risks. The company currently has about 10 internal systems, and employees have to keep track of numerous different passwords, logging into each system separately. For the IT department, this creates security and support challenges. We analyzed alternatives and began transitioning to the open-source Keycloak platform and modern security protocols. The migration is happening in stages and will centralize access management for all our users, applications, and services. Meanwhile, the authorization process for end users will remain fast and straightforward, and once single sign-on is in place, it’ll actually become more convenient. An added bonus is simplified security audits, because all access is now consolidated in one place.
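Keycloak implements standard protocols – OpenID Connect, OAuth 2.0, and SAML – so individual applications don’t need custom integration logic; they simply validate the tokens the identity provider issues. As a hedged sketch of what that typically looks like in a Java service, here is a generic Spring Security resource-server configuration; the issuer URL is a placeholder, and this is not our project’s actual setup.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

/**
 * Generic sketch (Spring Security 6 lambda DSL assumed): the service accepts
 * JWT access tokens issued by an OpenID Connect provider such as Keycloak
 * and rejects everything else. The issuer would be set in application
 * properties, e.g.
 *   spring.security.oauth2.resourceserver.jwt.issuer-uri =
 *       https://sso.example.com/realms/internal   (placeholder URL)
 */
@Configuration
public class ResourceServerConfig {

    @Bean
    SecurityFilterChain apiSecurity(HttpSecurity http) throws Exception {
        http
            // Every request must carry a valid access token.
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            // Validate JWTs against the issuer's published signing keys.
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
        return http.build();
    }
}
```

With this kind of setup, single sign-on lives entirely in the identity provider: the application never sees passwords, only short-lived signed tokens.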
What mistakes do you think companies most often make when they copy other companies’ experience of moving to microservices and DevOps practices?
The most common mistake is trying to copy tools without understanding the processes. Kubernetes, microservices, or SSO don’t solve business problems on their own. It’s important to understand exactly why a company needs a particular technology and what organizational changes should accompany it. Without that, even the most modern solutions just complicate work instead of becoming a source of resilience. I myself underestimated the complexity of integrating existing processes with new infrastructure at the start. I thought migration would take less time. Today we’re still in the process – a project of this scale requires a phased approach and careful work with legacy systems. This situation taught us to assess timelines more realistically and allocate enough time for complex integrations – a lesson I now apply to every new project.

Many analysts today talk about growing technical debt amid accelerated digitalization. Based on your own and your company’s experience, how do you avoid it?
The main principle is not to postpone architectural decisions. Rapid growth without automation, monitoring, and clear contracts between services almost inevitably leads to accumulating tech debt. If I were starting from scratch today, I’d containerize with Docker from day one and build a modular, microservices-based structure instead of a monolith. I’d do frequent small releases instead of one big one and invest in automated and integration tests to reduce the scope of manual regression testing. Companies that invest in CI/CD, logging, metrics, and documentation from the start win in the long run, even if it seems excessive at first. This is no longer theory but a practical conclusion many large teams have reached.
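To give a flavour of the kind of automated check that gradually replaces manual regression passes, here is a tiny JUnit 5 example; the helper under test and its rules are invented purely for illustration, and a real suite would also include integration tests against the running services.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

/** Illustrative automated regression test; the logic under test is invented. */
class ReleaseVersionTest {

    /** Tiny helper under test: bumps the minor part of a "major.minor" version. */
    static String bumpMinor(String version) {
        String[] parts = version.split("\\.");
        if (parts.length != 2) {
            throw new IllegalArgumentException("Expected major.minor, got: " + version);
        }
        int minor = Integer.parseInt(parts[1]);
        return parts[0] + "." + (minor + 1);
    }

    @Test
    void bumpsMinorVersion() {
        assertEquals("3.8", bumpMinor("3.7"));
    }

    @Test
    void rejectsMalformedVersion() {
        assertThrows(IllegalArgumentException.class, () -> bumpMinor("3"));
    }
}
```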
Equally important are investments in specialists who serve as the link between technology and business. I know that as a Java software developer, you work extensively with teams and young specialists. What should an engineer at a major tech company be like today?
Technical skills remain the foundation, but without understanding the business context, they stop being a competitive advantage. In three and a half years at the company, I went from junior developer to leading a technical development team. Just 7-8 months after joining, I started actively participating in infrastructure modernization, and now I lead the team transforming the company’s key project. And I can say that what mattered most wasn’t so much technical knowledge as communication skills, task prioritization, risk assessment, and the ability to work in a constantly changing environment. So I always advise young colleagues aiming for long-term careers to develop these soft skills: the ability to ask questions and to understand how the product makes or saves money. It’s already clear that the system we’re working on handles its challenges successfully – the decisions the team made in 2018-2019 about automation and containerization turned out to be critical for the business in 2020. This confirms that the right architectural decisions pay off in the long run. Ultimately, it’s the combination of technology and competent specialists capable of using these technologies for business development that ensures companies’ resilience amid rapid growth and change.