Latest News

Data Engineering Affects Americans’ Plates: How Special Frameworks Simplify Food Ordering and Turn Chaos into Architecture


Data Engineer Anand Abhishek explains how standardized metrics and AI assistants are helping millions of customers in America get their food faster amid sweeping changes in the country's foodtech industry.

Foodtech has long since evolved beyond food delivery; it is now an industry where technology and analytics play a crucial role. Wonder Group acquired Grubhub for $650 million, and the merger was completed quite recently. The deal signals that the industry needs to reach a new level and to change and expand its systems: it opens a new chapter in the history of food delivery, as a food “super-app” takes shape that combines delivery, in-house kitchens, meal kits, and a content component. In this context, data engineers have become key players, providing a unified system of metrics, data continuity, personalization, and scalability.

Anand Abhishek, a Data Engineer II at Grubhub, developed the Metrics Layer Framework and an AI-powered Data Lineage Assistant. He has also implemented his own frameworks, ACCUT, RIPC, and REQUEST, to make development easier and faster. Building the AI lineage tooling in-house saves the company at least $300,000 per year on licensing a comparable product and helps transform foodtech analytics into a sustainable, efficient system. Grubhub is a major American online takeout and delivery platform used by millions of Americans. In this interview, we learn from Anand how new frameworks and AI assistants have improved the delivery service in America.

Abhishek, Grubhub and Wonder Group have recently merged in one of the most prominent deals in the food technology industry. As a data engineer, you had to restructure your work and make significant changes to the application and to team workflows. How do you think this event will impact the technological landscape of the industry from a data engineering perspective?

– This merger is more than just the joining of two companies. It is a step towards a new level of data integration and business process collaboration. Our team now has access to more sources of information, ranging from logistics and kitchen operations to customer interactions. This has significantly increased the workload on our analytical infrastructure. My goal is to ensure that all this information is consistent and easily accessible for the marketing, product development, and finance teams. Without this, we cannot make informed decisions quickly and scale our service effectively.

One of your most interesting projects is the Metrics Layer Framework. This tool processes data, works with metrics, and cuts down on routine tasks. Why was there a need to rethink the approach to working with metrics?

– There were multiple instances where the same metrics were calculated by different departments or teams with different logic, which led to discrepancies in the data. A unified, centralized metrics system would therefore help the company track metrics more accurately and consistently across the organization. The merger and the changes around it led to a massive increase in the volume of data and in the number of stakeholders involved, from product teams to logistics and marketing. Under these circumstances, any error in a metric or delay in accessing data could slow down decision-making and, consequently, the launch of new initiatives. To address this, we implemented the Metrics Layer Framework, which allowed us to quickly align the approach to metric calculation across all departments, eliminate duplicate work, and automate routine processes. As a result, we not only reduced development time by 40-50% but also established a unified “data language” during a challenging period of system and process integration. It is essential for us to focus on both customers and internal architecture to ensure smooth and efficient departmental operations.
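The core idea of a metrics layer — each metric defined once, in one place, and every team rendering its queries from that single definition — can be sketched as follows. This is a minimal illustration; the metric name, fields, and table are hypothetical and not taken from Grubhub's actual framework.

```python
# Minimal sketch of a centralized metrics layer: each metric is defined once,
# and every consumer resolves it from the same registry instead of
# re-implementing the calculation in its own SQL. All names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str
    sql_expression: str  # the single, agreed-upon calculation
    owner: str           # team responsible for the definition

REGISTRY = {
    "avg_delivery_time": MetricDefinition(
        name="avg_delivery_time",
        sql_expression="AVG(delivered_at - ordered_at)",
        owner="logistics",
    ),
}

def render_query(metric: str, table: str) -> str:
    """Every team gets exactly the same SQL for the same metric."""
    m = REGISTRY[metric]
    return f"SELECT {m.sql_expression} AS {m.name} FROM {table}"

print(render_query("avg_delivery_time", "orders"))
```

Because marketing, logistics, and finance all call `render_query` against one registry, the "same metric, different logic" discrepancies described above cannot arise.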

This development is based on your own unique frameworks. Your name is associated with ACCUT, RIPC, and REQUEST, which are already being called new methodologies in data engineering. The uniqueness of the approach is that it does not work as an isolated service, but as an architectural layer. What distinguishes it from other solutions on the market?

– It was important for us not only to implement another tool, but to create a complete architecture that would scale as the company grew. The uniqueness of the development, as you have noticed, lies in the fact that it works not as an isolated service but as an architectural layer that integrates easily into the existing infrastructure. Thanks to this, the company was able to implement initiatives faster, maintain high data quality, and create long-term standards applicable to new projects. The system is not rigid; it can easily be extended to accommodate new business processes, whereas most comparable solutions require deep rework as a company grows.

It is really innovative. How exactly do your frameworks work?

– ACCUT defines a five-level data quality control system – accuracy, correctness, completeness, uniqueness, and timeliness. This helps identify errors long before they reach production. RIPC standardizes the optimization process: first we reduce the amount of data, then we index it, partition it by keys, and cache it only at the end. This ordering saves hours of compute and tens of thousands of dollars on infrastructure. REQUEST, in turn, turns vague business objectives into clear technical requirements – what needs to be measured, which sources to take data from, and which quality and latency tolerances are acceptable. Crucially, all three frameworks are technology-agnostic: they can be used with Spark, BigQuery, Postgres, or Airflow. Their strength lies in their versatility: they turn best practices into an intuitive way of thinking.
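The five ACCUT checks described above could be expressed, in a simplified form, as one report over a batch of records. The thresholds, record layout, and field names below are hypothetical illustrations, not the framework's actual implementation.

```python
# Illustrative sketch of ACCUT-style checks on a batch of records:
# accuracy, correctness, completeness, uniqueness, timeliness.
# The record layout and thresholds are hypothetical.
from datetime import datetime, timedelta, timezone

def accut_report(rows, required_fields, max_age_hours=24):
    now = datetime.now(timezone.utc)
    ids = [r.get("id") for r in rows]
    return {
        # accuracy: values have the expected type
        "accuracy": all(isinstance(r.get("amount"), (int, float)) for r in rows),
        # correctness: values fall in a valid domain (no negative amounts)
        "correctness": all(r.get("amount", 0) >= 0 for r in rows),
        # completeness: no required field is missing
        "completeness": all(f in r for r in rows for f in required_fields),
        # uniqueness: no duplicate primary keys
        "uniqueness": len(ids) == len(set(ids)),
        # timeliness: data is fresher than the allowed age
        "timeliness": all(
            now - r["loaded_at"] <= timedelta(hours=max_age_hours) for r in rows
        ),
    }

rows = [
    {"id": 1, "amount": 12.5, "loaded_at": datetime.now(timezone.utc)},
    {"id": 2, "amount": 7.0, "loaded_at": datetime.now(timezone.utc)},
]
report = accut_report(rows, required_fields=["id", "amount", "loaded_at"])
```

Running such a report as a gate in the pipeline is what lets errors surface "long before they get into production": a batch that fails any of the five checks never reaches downstream tables.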

Why is your development of architectural patterns and frameworks for a standardized and modular metrics system significant for the industry?

– Because it solves one of the most common problems – the fragmentation and inconsistency of metrics. With a unified architecture, we have reduced duplication of work, accelerated decision-making, and increased trust in analytics. This system has become a foundation that different teams can rely on, which is why its impact extends beyond a single project or company.

Can we say that these frameworks have become the basis for further developments, not only for the Metrics Layer Framework, but also, for example, for the AI Data Lineage Assistant?

– Yes, that’s right. REQUEST became the architectural foundation of the Metrics Layer Framework – it defines how to translate business metrics into modular components. ACCUT is used for automatic data quality checks, and RIPC principles help scale performance. When we created the AI-Powered Data Lineage Assistant at Grubhub, these ideas developed into a self-learning system. The assistant uses an LLM to read code, interpret metadata, and visualize the data path from source to dashboard. Previously, engineers spent days hunting for errors in pipelines; now it is enough to enter a query and get a response in seconds. It is not just automation; it is structured intelligence built on years of working with these frameworks.

Companies waste up to a third of analysts’ time manually searching for data sources and fixing errors. In foodtech, this can mean delivery delays and incorrect recommendations to customers. How does the AI-Powered Data Lineage Assistant manage to reduce this process to minutes, when an entire engineering department previously spent hours on it?

– In foodtech, data flows through dozens of systems. When an error occurs, say in calculating a delivery metric or in customer segmentation, the source may be hidden anywhere along the chain. In the past, engineers would spend hours manually analyzing SQL queries, ETL pipeline code, and documentation to determine where the data originated and what might have gone wrong. The AI-powered Data Lineage Assistant automates this process. Using a graph database, it stores the relationships between tables, scripts, and metrics. On top of this sits a Large Language Model that can answer natural-language questions about where the data comes from or what will happen if we change a specific field. This gives engineers and analysts a complete picture in minutes instead of hours. As the number of data sources grows and their interrelationships become more complex, the tool becomes even more valuable: it allows the team to react quickly to failures, reduces their workload, and keeps analytics stable in an environment where the accuracy and speed of decisions directly affect user experience and business efficiency. The work of engineers is still appreciated; it has just gotten a little easier.
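The graph half of the idea — "which table or script feeds which, and what does a given dashboard ultimately depend on" — can be sketched with a plain adjacency map and a traversal. In the real assistant, a graph database and an LLM sit on top; here only the traversal is shown, and every node name is made up for illustration.

```python
# Sketch of lineage as a graph: edges point from a consumer to the
# tables/scripts that feed it. Traversing upstream answers "where does
# this dashboard's data come from?". All node names are hypothetical.
EDGES = {
    "dash.delivery_kpis": ["agg.daily_delivery"],
    "agg.daily_delivery": ["raw.orders", "raw.courier_pings"],
    "raw.orders": [],
    "raw.courier_pings": [],
}

def upstream(node, graph=EDGES):
    """Return every source that feeds `node`, directly or transitively."""
    seen, stack = set(), list(graph.get(node, []))
    while stack:
        cur = stack.pop()
        if cur not in seen:
            seen.add(cur)
            stack.extend(graph.get(cur, []))
    return seen

print(sorted(upstream("dash.delivery_kpis")))
```

The impact question ("if `raw.orders` breaks, what is affected?") is the same walk over reversed edges; the LLM layer's job is to translate a natural-language question into these graph queries and summarize the answer.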

You have worked on large-scale projects not only in foodtech but also in the media industry – for instance, at Reliance Industries Limited, India’s largest private conglomerate, where you developed the recommendation system for Jio Cinema, used by over 100 million people. How has your experience building high-load, personalized systems helped you develop solutions for foodtech projects?

– The experience with Jio Cinema was crucial for me: I worked with millions of users and had to deliver personalized content in real time, without delays. Our team there developed a recommendation system that takes into account user behavior and business priorities, which resulted in a significant audience increase of 22% daily and 17% monthly. The challenges now are similar in nature but more complex in terms of data structure: we have not only user behavior within the application, but also logistics parameters, menu availability, promotions, and seasonal changes in demand. That experience helped me design systems that are scalable and flexible, allowing for rapid changes. For instance, when developing the Metrics Layer Framework, I took into account that it would be used by teams with different goals, from personalizing marketing offers to optimizing delivery times. This ability to integrate business priorities with technical architecture was a key lesson from my transition from media to food technology.

You have already mentioned that you consider different goals when developing systems. One of your goals in life is to pass on your experience: you have won Cases&Faces, you mentor engineers, and you judge hackathons. Can you tell us why sharing your knowledge is so important to you?

– For me, creating systems is always also about developing the people who work with them. That is why I devote time to internal training, to ensure that solutions are understandable and accessible to everyone. I constantly improve my skills by participating in competitions and, as you correctly noted, I won the Cases&Faces award in October in the category Achievement in Technology Innovation – Data Analytics & Big Data. Attending events such as the awards and the Code Resurrection Hackathon is an opportunity to see some of the best engineering solutions and approaches that are already changing the industry. Participating in AI hackathons also lets me evaluate how new technologies can be applied in various scenarios. My technical education, supported by certifications such as Azure Fundamentals (AZ-900) and Google Cloud Platform Data Engineering, helps me not only evaluate projects from the outside but also implement similar approaches in live systems while maintaining the highest quality standards.

Now I am channeling my accumulated experience into training. On the Prepfully and MentorCruise platforms, I prepare engineers for interviews at Meta, Amazon, and other companies using the same frameworks. If someone struggles with metric design, we work through REQUEST; if with optimization, we use RIPC. These models make the thought process structured and understandable, helping an engineer think in systems rather than in code fragments.

Your book “SELECT*FROM fact_DE” has become a logical continuation of your frameworks. Is it not just a technical guide, but another attempt to transfer knowledge?

– Exactly. The book was conceived as a way to make knowledge reproducible. In it, I collected not only the frameworks themselves but also the principles by which they can be applied in any environment, from a startup to a corporation. I wanted the engineer who opens this book to understand not just how ACCUT or REQUEST work, but why they work – how the logic of quality control, metric structuring, and optimization can become a team’s shared culture. In fact, this is not just a textbook but a tool that helps build a bridge between generations of engineers, so that knowledge does not disappear when people leave but turns into a sustainable system.

You share very valuable experience. How do you assess the impact of mentoring and your approaches on the industry as a whole?

– The philosophy of frameworks changed my approach to engineering. Now I try not to build separate tools, but to create systems that scale together with the company. I believe that the introduction of systems thinking and effective development tools, along with team training, is gradually raising the standards of work in the data engineering and food technology industries in general. Our projects demonstrate how modern technologies, including AI and cloud platforms, can improve processes and accelerate innovation.

 

 
