As artificial intelligence tools gain traction across industries, they are becoming standard equipment for engineering professionals. Large Language Models (LLMs) in particular have become everyday companions for developers, assisting with code reviews, documentation generation, and routine automation. Engineers who build software for high-risk financial environments, however, must handle AI tools with extreme caution: flawed training data or subtle coding errors can snowball into large, irreversible losses. This is where seasoned technology experts can provide meaningful direction and help organizations harness innovation without compromising integrity. Aleksei Martoias, a senior software engineer in the fintech domain, highlights the risks of using artificial intelligence tools in high-stakes environments and shares his experience working with large language models using the CLEAR framework.
Who is Aleksei Martoias?
Aleksei Martoias is a senior software engineer at a globally recognized fintech organization, known for his work developing secure, large-scale financial software. Over the years, he has led mission-critical engineering projects that have changed how digital financial platforms operate globally, from modernizing investment infrastructure to implementing intelligent automation that accelerated product delivery and improved the user experience. He is known for transforming complex systems into secure, high-performance solutions, and he has guided teams through projects that demand exceptional precision, speed, and rigorous compliance. His approach combines engineering rigor with imaginative thinking, the kind that converts technological ambition into measurable business impact.
Invited to speak about the responsible adoption of AI, Aleksei shares how engineers can use structured approaches to improve reliability and accuracy when working with intelligent systems. He introduces the CLEAR framework (Concise, Logical, Explicit, Adaptive, Reflective), a practical approach that changes the way developers interact with AI coding assistants. Rather than treating AI as a substitute for human judgment, Aleksei treats it as a collaborator that must be guided with precision, context, and critical thinking.
Drawing on hands-on, real-world experience in regulated domains, Aleksei emphasizes combining human judgment with structured prompting techniques to reduce risk and improve software quality. His perspective offers valuable guidance to professionals balancing automation, safety, and human reasoning in modern engineering.
Q: What was your first experience with AI tools during your engineering work?
Aleksei Martoias: I started experimenting with large language models in a personal project, out of curiosity. I wanted to explore whether machines could truly assist with structured reasoning and problem-solving, not just mimic human logic. As I grew more confident, I found secure ways to apply these tools at work. My first win was using an LLM to create a script that updated over 200 files containing records of network requests and responses; automation reduced hours of manual effort to a fraction of the time. This small but safe experiment made me realize AI’s potential to augment real productivity while maintaining engineering values.
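The interview doesn't include the script itself, but a batch update of recorded request/response files typically looks something like the following minimal sketch. The file layout and the `timeout_ms` field added here are hypothetical, chosen only to illustrate the shape of such an LLM-drafted migration:

```python
import json
from pathlib import Path

def add_timeout_field(record: dict, timeout_ms: int = 5000) -> dict:
    """Add a hypothetical 'timeout_ms' field to a recorded request,
    leaving any existing value untouched."""
    record.setdefault("request", {}).setdefault("timeout_ms", timeout_ms)
    return record

def update_records(directory: str) -> int:
    """Apply the transformation to every .json record file in the
    directory and return the number of files updated."""
    count = 0
    for path in Path(directory).glob("*.json"):
        record = json.loads(path.read_text())
        path.write_text(json.dumps(add_timeout_field(record), indent=2))
        count += 1
    return count
```

The point of keeping the transformation in a small pure function (`add_timeout_field`) is that it can be reviewed and unit-tested before the loop ever touches the 200 real files, which is the human-verification step Aleksei describes.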
Q: Given your extensive experience working with AI tools, which tasks do you think are well-suited for AI assistance, and which still require human intuition?
Aleksei Martoias: In general, AI is suitable for brainstorming ideas, code review and quality adherence testing, concept explanation, document summaries, and even assisting with terminal commands. The CLEAR framework for constructing prompts is particularly useful when working with challenging code; even complex debugging becomes more manageable with it. But no matter what, designing system architecture or creating production-ready implementations still requires human judgment, because even the best LLMs can occasionally produce flawed patterns due to imperfections in their training data. These systems aren’t intelligent; they are advanced text predictors, trained on very large data sets that can include poor or even biased examples. They can appear to reason or understand, yet they possess no real cognition or awareness of the real world. That’s why human intuition, expertise, and sophisticated thinking remain imperative, especially in decisions affecting compliance, financials, and user trust.
Q: What is the CLEAR framework, and why is it important in the way you use AI for engineering tasks?
Aleksei Martoias: My approach to working with large language models is based on the CLEAR framework (Concise, Logical, Explicit, Adaptive, and Reflective). It is a disciplined method of crafting prompts so that each interaction yields consistent, verifiable results. In fintech, where precision and accountability are critical, even minor errors can have major implications. CLEAR helps develop instructions that are well-defined while remaining flexible enough to evolve through iteration. Applying it turns AI from a guessing tool into a reasoning tool, which is exactly what engineers need in high-risk environments.
Q: How do you handle AI tools in regulated domains like fintech?
Aleksei Martoias: My first guideline is that today’s AI should never have the final say on the development (or deployment) of systems used in the real world. In fintech, even minor shortcomings can lead to financial losses, loss of customer trust, or regulatory violations. When I work with “help” from AI, verification comes first. AI allows engineers to explore more quickly, but validation remains entirely human.
Q: Could you explain how you use the CLEAR framework in practice?
Aleksei Martoias: I use it both for software projects and everyday AI tasks. For instance, when I need to implement a script, I provide explicit prompts with detailed task descriptions, data examples, corner cases, and expected output formats, sometimes using “GIVEN… WHEN… THEN” notation. Conciseness matters, but sometimes broader context helps improve quality. Adaptive prompting means iterating on the AI output until it meets quality expectations. Reflective prompting is about reviewing results critically, learning from mistakes, and refining future queries. Treating AI like a knowledgeable but sometimes unreliable colleague helps me maintain perspective.
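As an illustration of the structure described above (not Aleksei’s actual tooling), a CLEAR-style prompt with GIVEN/WHEN/THEN scenarios might be assembled like this; the function name and example content are hypothetical:

```python
def build_clear_prompt(task: str, scenarios: list, corner_cases: list,
                       output_format: str) -> str:
    """Assemble the Concise/Logical/Explicit parts of a CLEAR prompt.
    The Adaptive and Reflective parts happen in the iteration loop
    with the model, not in the prompt text itself."""
    lines = [f"Task: {task}", "", "Scenarios:"]
    # Each scenario is a (given, when, then) triple, rendered in
    # GIVEN... WHEN... THEN... notation.
    for given, when, then in scenarios:
        lines.append(f"  GIVEN {given} WHEN {when} THEN {then}")
    lines.append("Corner cases to handle:")
    lines += [f"  - {case}" for case in corner_cases]
    lines.append(f"Expected output format: {output_format}")
    return "\n".join(lines)
```

Keeping the prompt builder as plain code makes each iteration reviewable and diffable: when the model’s output misses a corner case, the case is added to the list and the refined prompt is rerun, which is the Adaptive step in practice.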
Q: How has your experience leading cross-platform teams influenced your approach to AI-assisted engineering?
Aleksei Martoias: My teams handle complex projects that require careful planning and coordination. My experience says that AI thrives in a strong collaboration culture. I encourage engineers to ask AI for alternative solutions when stuck, then critically review and discuss the results. AI also helps me discover edge cases faster, allowing us to identify potential pitfalls before they become real issues. Communication with product and design hasn’t changed much, but AI helps me write concise, clear messages while letting me focus on delivering features without slowing the pace of work. Over time, this builds a more confident engineering organization capable of handling rapid market and technology shifts.
Q: How do you see AI transforming software engineering over the next few years?
Aleksei Martoias: Given how AI is progressing, it will certainly automate more routine work, freeing engineers to focus on high-level design and problem-solving. But first, we must address issues such as poor output quality and AI model vulnerabilities. In fintech, maintaining correctness, security, and maintainability is critical. I believe the real challenge isn’t the power of artificial intelligence but its reliability. Machines don’t possess intent or empathy; they only simulate understanding. What we can rely on is human judgment and the shared responsibility to use these tools for outcomes that truly benefit people.
Q: What advice would you give to engineers starting with AI tools or frameworks like CLEAR?
Aleksei Martoias: My first piece of advice is not to start with high-stakes tasks. Observe how someone else applies the framework, or ask AI to generate a step-by-step guide. Solve small tasks initially, then gradually move to real projects. Apply CLEAR principles to add consistency and structure to your prompts, and always review AI outputs critically.
This is a continuous learning process. Over time, this approach strengthens prompt engineering skills and the ability to integrate AI responsibly into production workflows.
Q: Finally, what personal insights do you emphasize when integrating AI into engineering?
Aleksei Martoias: I have always believed that AI is a tool that enhances, not replaces, human potential. Use it to speed up work, clarify thinking, and broaden options, but never sidestep the responsibility of review. Engineers who think this way will lead the next generation of responsible AI, creating systems that are not just intelligent but also transparent, secure, and robust.