
Akul Dewan, Senior Product Architect at Akamai Technologies Designing Critical AI/ML Security Applications, to Judge Data Science and Deep Learning Boot Camps at The Erdős Institute in Spring 2025


Akul Dewan is a Senior Product Architect at Akamai Technologies, where he develops architecture designs for the App and API Protections team, which builds cybersecurity products with leading-edge applications for national defense, among other sectors. For over a decade, he has built on his software engineering experience to pursue advanced subject-matter expertise in Artificial Intelligence (AI) and Generative AI (gen-AI) technologies. Akul is recognized for his technical innovations and original contributions to the engineering field. He has been awarded two US patents for groundbreaking product developments and currently has three US patents pending. Akul will serve as an invited judge for the Spring 2025 Data Science and Deep Learning Boot Camps, supporting PhD career development at the prestigious Erdős Institute.

After receiving his bachelor’s degree in Information Technology in his native India, Akul earned a Master of Science in Artificial Intelligence from the University of Georgia, Athens (US). Leveraging his advanced academic and work experience on large-scale Software as a Service (SaaS) platforms, Akul has built and led de novo AI teams, and overseen the development of first-ever AI/ML systems, for various technology consulting firms, clients, and use cases. In architecting high-throughput, fault-tolerant, and industry-standard-compliant AI/ML platforms, he is particularly experienced in systems that host ML processes serving Extract, Transform, Load (ETL), near-real-time, and real-time inference needs.

In his current role, Akul leads cross-functional teams in sophisticated AI/ML project research, design and development, and oversees governmental regulatory and compliance initiatives.

We spoke with Akul about designing innovative AI tools, how he develops projects with ML capabilities for cyber protection applications, and how he uses AI/ML to solve complex challenges. 

Q: Akul, when you began your Master’s degree program in Artificial Intelligence in 2012, AI was in a relatively primitive state in the industry, as compared to how it has evolved in recent years. For example, transformer architecture did not debut until 2017, so you were about five years ahead of the curve. What led you to move to the US and enroll in a graduate level academic program in AI at that time? Why did you want to specialize in AI?

A: In 2008, I accidentally enrolled myself in a robotics course. They taught us to build robots that did basic tasks like line-following, object tracking, and naïve swarm intelligence. This led me to the world of competitive robotics, and over the next few years, I competed at several robotics competitions at regional and national levels. 

I was fortunate to meet and connect with people who were pursuing academic paths in AI and they explained the potential that AI, as a field, had in the upcoming future. This inspired me to start researching graduate academic programs where I could learn not just robotics, but other fields of AI, as well. I found that the Institute for AI at the University of Georgia offered a master’s program. The faculty then included Dr. Don Potter, Dr. Michael A. Covington, and Dr. Khalid Rasheed, among others. I found their research specialization to be diverse and exciting, and I applied to the program. Fortunately, I was accepted, and this launched my professional journey in AI.

Q: After earning your bachelor’s degree in Information Technology, you worked in a tech role in India in the Quality Assurance domain. You then worked as a software engineer at an Atlanta, GA-based tech leader in contact center automation, where you rose from engineer to software architect. You designed and helped develop their AI/ML platform from scratch, and developed several innovative integrations with contact center technologies. Tell us about this transition from QA to software architect.

A: The transition has been a long, but fulfilling journey fueled by my insatiable curiosity to learn new skills and seize opportunities as they arise. From every role, every project, and every individual – colleague or leader – I have sought to expand my software skills, technologies, trends, and industry best practices. 

I am grateful that leadership recognized the depth of my contribution to the projects and to the organization, and promoted me to critical roles, which gave me new opportunities to lead impactful initiatives. 

Q: In that role you also earned your first US patent for a breakthrough tool that tracked back-office agent productivity in real time. What made your technology groundbreaking? 

A: Front-office productivity of a contact center agent can be easily tracked. It is transactional in nature – the transaction starts when the phone rings and completes when the call ends. It is difficult to track productivity in the back office, where agents work with emails, online chats, texts, or other long-duration communication mediums. The patented technology solved this productivity-tracking problem. Using several signals from a back-office agent’s desktop, we can now identify whether the agent is engaged in productive or non-productive activity, calculate the duration of productive and non-productive activities throughout the day, and measure the time taken for each productive activity. Additionally, and more interestingly, techniques like a “mouse clicker” – which is usually used to trick productivity-tracking software – can be detected and reported by the technology.
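To give a simplified flavor of how an automated clicker can betray itself – this is an illustrative sketch, not the patented detection logic, and the threshold value is an assumption – scripted clicks tend to arrive at suspiciously regular intervals, while human clicks do not:

```python
import statistics

def looks_like_auto_clicker(click_times: list[float], cv_threshold: float = 0.05) -> bool:
    """Flag suspiciously regular clicking.

    Human clicks have irregular gaps; scripted clickers produce near-constant
    intervals. A low coefficient of variation (stdev / mean) in the gaps is
    therefore a red flag. The 0.05 threshold is illustrative only.
    """
    if len(click_times) < 3:
        return False  # too few events to judge
    gaps = [b - a for a, b in zip(click_times, click_times[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return True  # simultaneous clicks: clearly synthetic
    cv = statistics.stdev(gaps) / mean
    return cv < cv_threshold

robotic = [i * 1.0 for i in range(20)]              # perfectly even 1 s gaps
human = [0.0, 1.3, 2.1, 4.0, 4.6, 7.2, 8.1, 10.5]   # irregular gaps
```

A production system would of course combine many desktop signals, not just click timing.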

With this technology, contact center managers can empirically gauge productivity for each back-office agent, just as they do for the front office. Managers can also identify opportunities to streamline operations by reviewing statistics on the activities where back-office agents spend the most time. This patent helps contact centers reduce operational costs by automating productivity tracking and uncovering factors that impact productivity.

Q: During that same period, you earned a second patent for developing an AI solution that can predict contact center agent burnout and attrition with a high level of accuracy. Tell us about the SDLC on this product. What inspired the R&D, and how did you develop the technology? What sectors are the primary users? 

A: Attrition in contact centers is a big challenge. An agent is trained for months before taking on the first assignment. Attrition of an agent, especially during peak seasons, increases pressure on peers, reduces productivity, and negatively impacts quality.  There are several reasons for attrition; some can be measured and some cannot. Measurable reasons can range from high Service Level Agreement (SLA) pressure, skills deficit, suboptimal operational processes, or sometimes low morale. 

The R&D for this project started with empirically identifying the causes of attrition. We then used ML to identify several common patterns in the work productivity of agents who were about to quit, and we were able to correlate those causes with the patterns. The patented technology uses measurable factors to evaluate the probability of agent attrition several days before the agent quits. This is a game-changer, because contact center management can now identify the root cause of attrition, take corrective actions, and, in the worst case, proactively search for agent replacements.
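As a minimal sketch of the general idea – combining measurable factors into an attrition probability – a logistic score works well for illustration. All feature names, weights, and the bias here are hypothetical, not the patented feature set, and a real system would learn the weights from historical data:

```python
import math

# Hypothetical measurable factors (illustrative names only): SLA breach rate,
# idle-time ratio, handle-time trend, and drop in schedule adherence.
WEIGHTS = {
    "sla_breach_rate": 2.0,
    "idle_time_ratio": 1.5,
    "handle_time_trend": 1.0,
    "adherence_drop": 2.5,
}
BIAS = -3.0  # assumed baseline: low attrition risk when all factors are zero

def attrition_probability(features: dict) -> float:
    """Logistic combination of productivity signals into a risk probability."""
    z = BIAS + sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# An agent under mounting SLA pressure with falling adherence scores high risk.
at_risk = attrition_probability({
    "sla_breach_rate": 0.8, "idle_time_ratio": 0.6,
    "handle_time_trend": 0.5, "adherence_drop": 0.9,
})
stable = attrition_probability({"sla_breach_rate": 0.1, "idle_time_ratio": 0.1})
```

Scores like these, computed daily, are what would let management intervene several days before a likely departure.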

Q: In your current role you are focused on developing solutions that strengthen cybersecurity, and a lot of these use AI/ML techniques. The US Department of Defense is one of the many organizations deploying your solutions, and you have a few patents pending. What can you tell us about your work? What kinds of new challenges are users facing in security as a result of sophisticated and rapidly escalating technologies? How are you using advanced technologies to secure AI applications?

A: Since 2015, software organizations have adopted Application Programming Interface (API)-first development. APIs enable and accelerate new service development, seamless integrations, and standardization of interfaces. As noted in Deloitte’s report on “API-enabled digital ecosystems” (April 2021), public APIs generate substantial revenue for industries like cloud-based software companies, online travel aggregators, and eCommerce platforms.

Along with providing numerous benefits, public APIs also present an opportunity for malicious actors. The cyber-attack landscape is rapidly evolving, and both the variety and sophistication of these attacks are increasing every day. Public APIs need to be secured against several types of attacks; the OWASP® Top 10, for example, defines well-known attack types. Today, depending on how an API is used, additional security may be required: APIs interacting with Large Language Models (LLMs), for instance, face an additional OWASP Top 10 list of attack types.

I work as a Senior Product Architect in App and API Protection (AAP) at Akamai, leading the team responsible for developing and managing the Web Application Firewall (WAF) and its associated web security tools. Akamai’s WAF was listed as one of the best firewalls by Forrester this year. I work with an experienced and renowned team of Threat Intelligence members to design innovative software products that help Akamai’s Application Security stay steps ahead of attackers and simplify protecting Akamai customers’ apps and APIs from new types of attacks. My specific focus is the research, design, and productization of solutions that require AI/ML.

Among several innovations, a recent project we developed uses advanced machine learning techniques to generate configuration optimization recommendations for the WAF. The feature suggests configuration changes to Akamai customers, strengthening their security posture by optimizing customer-configured WAF exceptions.
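The simplest form of such a recommendation – shown here as a hedged sketch, not Akamai’s actual ML feature, with all rule names and the idle-window value invented for illustration – is flagging exception rules that no longer match any traffic, since an unused exception only widens the attack surface:

```python
from datetime import datetime, timedelta

def stale_exception_recommendations(exceptions, now, max_idle_days=90):
    """Recommend revisiting WAF exceptions that have not matched traffic recently.

    `exceptions` is a list of (rule_id, last_match_time) pairs, where
    last_match_time is None if the exception has never matched. The 90-day
    window is an assumed policy, not a product default.
    """
    cutoff = now - timedelta(days=max_idle_days)
    return [rule_id for rule_id, last_match in exceptions
            if last_match is None or last_match < cutoff]

now = datetime(2025, 1, 1)
recs = stale_exception_recommendations(
    [("allow-legacy-endpoint", datetime(2024, 3, 1)),   # long idle: flag it
     ("allow-partner-ip", datetime(2024, 12, 20)),      # recently used: keep
     ("allow-debug-header", None)],                     # never matched: flag it
    now,
)
```

A real ML-driven version would reason over traffic patterns rather than a single timestamp, but the output shape – a ranked list of exceptions worth tightening – is the same.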

Q: Your work with large-scale SaaS platforms has often focused on developing AI/ML platforms that provide inference capabilities in near-real or real time. One of your recent innovations supports 10+ machine learning use cases. Can you give us some general and/or sector-specific examples of applications?

A: The AI/ML platform I designed serves multiple use cases. Depending on its needs, a use case may require batched inference over anywhere from several GB of customer configuration to several TB of web API traffic from across the globe; depending on product expectations, ML inference may be required in real time or near-real time. The platform caters to these processing needs through massive horizontal scaling, using a best-of-breed technology stack, tools, and processes that promote MLOps best practices.

Additionally, the platform enables researchers to conduct rapid tech previews without disrupting production systems or accessing sensitive data, drastically reducing time-to-market.

Q: You’ve also developed specialized expertise in firewall security. How are you ensuring AI/ML guardrails and governance in your design architecture and processes?

A: Akamai systems impact a large part of the internet. Disruption of Akamai’s services could cause a ripple of failures across industries around the globe. Rightly so, Akamai follows strict procedures defined in carefully curated Change Safety Strategies, and the AI/ML platform embraces the same strategies. These include, but are not limited to, multistage validation of changes, multiple system checkpoints and benchmarking, and consistent health-KPI monitoring.

The data needed for training ML products is evaluated by an Information Security board. Customer identifiers, sensitive data, and data from non-consenting customers are scrubbed. Even after approval, systems constantly monitor the data stores for traces of unauthorized data or unauthorized data access.

Lastly, across Akamai, production data is scrubbed and access to it is privileged. The AI/ML platform adopts the same approach: access to production data is made available only under compliance approvals, and only to individuals with the authority to access it.

Q: You’ve recently written about the “cost creep” that often occurs when enterprises try to adopt AI into their legacy systems. One solution strategy that you advised was a separation of responsibility in the ML processing pipeline, but you mentioned that getting C-suite buy-in can be challenging. What is your value proposition? How do you communicate this value to organizational executives? 

A: As identified by SoftwareOne Holding AG, the availability of AI skills relative to demand is 62%. Given this shortage, the expectation that an AI researcher should also be an adept software engineer, or vice versa, may seldom be fulfilled. I recommend that separation of responsibility, as practiced in other SDLC processes like development engineering versus quality assurance engineering, should also be followed in AI/ML pipeline development.

Data pre-processing and post-processing steps that transform raw or incoming data – feature engineering, noise reduction, data extrapolation, standardization, normalization, aggregation, etc. – are defined by AI researchers, and these are often resource-intensive processes applied to large datasets. The separation-of-responsibility model is challenged here by ownership of that process: since these steps are defined by AI researchers, responsibility for the code’s optimization, scalability, supportability, and standards of configurability would also fall to AI researchers. This, by extension, requires AI researchers to also be expert production-grade software engineers.

My recommendation is to have the pre-processing and post-processing code re-developed by software engineering in collaboration with AI researchers. Being owned and operated by software engineering, the code can be held to the expected quality, scalability, and flexibility standards. In this setup, the ML model artifact, which is produced by training ML models, remains the responsibility of AI researchers. These models can also be evolved and deployed independently of the processing pipeline by using a model repository like MLflow, as long as new models are compatible with the pipeline code; that compatibility validation can also be automated.
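The automated compatibility check can be as simple as validating a contract between the engineering-owned pipeline and the researcher-owned artifact. This is a minimal sketch under assumed names (`FEATURE_COLUMNS`, `CandidateModel`, `is_compatible` are all hypothetical); a real setup with MLflow would compare the logged model signature instead:

```python
# The pipeline code owns the contract: which input columns it will supply
# and that the artifact must expose a callable predict().
FEATURE_COLUMNS = ("f1", "f2", "f3")

class CandidateModel:
    """Stand-in for an artifact pulled from a model repository such as MLflow."""
    input_columns = ("f1", "f2", "f3")

    def predict(self, rows):
        # Placeholder inference; a real artifact wraps a trained model.
        return [sum(row) for row in rows]

def is_compatible(model) -> bool:
    """Gate deployment: new artifacts must satisfy the pipeline's contract."""
    return (
        tuple(getattr(model, "input_columns", ())) == FEATURE_COLUMNS
        and callable(getattr(model, "predict", None))
    )
```

Running this gate in CI is what lets researchers ship new model versions without touching, or breaking, the engineering-owned pipeline.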

Certain solutions, like Kubeflow, are helping “blend” the experimentation platform with scalable deployments on resource orchestration platforms like Kubernetes. Nonetheless, the challenge of code optimization and supportability remains. Another approach, beyond relying on software tools, is to empower AI researchers and software engineers to develop cross-functional skill sets. The Erdős Institute helps people in academia transition to enterprise careers by teaching them software engineering skills; I have supported and participated in this initiative since 2022 as a mentor-volunteer.

Q: Throughout your career, you have been responsible for helping build new AI/ML teams and departments, including hiring and supervising engineers, researchers, product managers, and quality assurance professionals. You have also been involved in peer code reviews and training. How did you develop the leadership skills required to create new teams from the ground up and foster collaboration among cross-functional teams? What guidance do you offer new and rising software engineers?

A: A leader’s scope goes well beyond getting work done by a set of individuals. A leader is responsible, accountable, and must be committed to the successes and failures of projects and, more importantly, to the career path of every team member. A leader is also responsible for fostering trust, ownership, and respect among team members – the very pillars of team spirit.

Throughout my career, I have been fortunate to have had the opportunity to work with and under several strong leaders, and I have observed and learned a lot from them.

Working with de novo teams and ground-up projects requires a leader with clear vision and a precise execution plan. It is the leader’s responsibility to define, socialize, and sometimes negotiate the vision of the project with stakeholders. Defining what needs to be done is often neglected, increasing the chances of misinterpretation and misunderstanding down the line. I am a strong proponent of iterative deliverables: an execution plan with frequent milestones and clearly defined expectations for each milestone has a greater chance of success.

Q: You have also been very involved in mentoring PhDs and PhD candidates through The Erdős Institute’s career development program, which helps grow a talented workforce by teaching students the skills they need to transition from academia to impactful roles in industry. In inviting you to serve as a judge for two of their intensive boot camp programs focused on Deep Learning and Data Science in Spring 2025, the Institute noted that they consistently receive accolades from program participants for your profound expertise, and cited your career leadership in groundbreaking AI initiatives. Why did you choose to volunteer your time to the Institute? And how meaningful is it for you to serve as a judge in these programs?

A: AI tools, theories, and algorithms are being used in every domain, and hiring companies highly value AI/ML knowledge and experience. The Erdős Institute helps PhD candidates acquire the skills that the job market demands. These candidates are usually graduates, or soon-to-be graduates, who have focused solely on research in their field of expertise for several years. To be market-ready, they need skills like data science, deep learning, project management, and so on. Mentors like me help the candidates learn by guiding them through real-world course projects.

The Institute also serves as a job placement platform that helps companies find vetted PhD candidates. As a judge, I will be tasked with evaluating the quality of research and the presented results of projects completed by candidates. Judging results are shared with the companies, helping individuals who meet or exceed expectations outshine their peers during the interview selection process.

With my experience working in academia for 2.5 years during my master’s program at the University of Georgia, and working with several PhDs over the years, I have realized that the culture and expectations of academia and the corporate world are different. Volunteering with the Institute, as a mentor or a judge, is an opportunity for me to share my deep industry know-how and help talented people land meaningful jobs in the industry. This benefits the individuals as well as the industry overall, since it is this next generation that will drive technology innovation forward and keep companies competitive in the global marketplace.

Q: Last question: any future predictions? Where do you see AI/ML going, and how will you be a part of emerging technologies?

A: Regarding the field of AI/ML engineering, industry adoption of AI/ML is going to increase rapidly in the coming years. I predict that basic knowledge of statistics, data science, and machine learning, along with some experience using ML tools, will become a necessity for all roles in the software industry. As previously mentioned, I lead several initiatives within and outside my organization to bridge the skill-set gap between software engineers and academic researchers. My aim is to make them ready for a future in which I foresee job responsibilities blurring between the roles.

In terms of trends in AI/ML systems and software, I think we are also going to see a steep increase in government-mandated regulations and recommendations on AI/ML usage and practices. I see this as a positive sign of maturity within the industry. I hope that decision makers, especially Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) leaders, adopt these within their organizational systems.
