Serhii Romanov is an expert in video streaming testing and currently serves as an SDET Manager at Brightgrove, where he leads multiple quality assurance teams across Europe, North America, and South America. Serhii specializes in testing video streaming technologies and protocols like HLS and DASH. He is also the creator of HLS Analyzer, an open-source tool that uses AI to analyze media for stream integrity. We recently sat down with Serhii to discuss how AI is transforming video streaming quality assurance, the role of automation in testing, and what the future of video testing might look like. Serhii also shared his experience serving as an in-person judge at two major AI hackathons in Florida earlier this year.
Let’s talk about AI’s impact. How is AI reshaping video streaming and testing in your field, Serhii?
AI has become an indispensable partner in modern video engineering. Video streaming services generate massive amounts of data – from manifests to media files – and AI can sift through all this data to spot issues far faster than people can. Machine learning models can analyze streaming logs to detect anomalies like repeated buffering or unexpected errors and even predict potential streaming issues under certain network conditions.
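To make that concrete, here is a minimal sketch of the kind of log-based anomaly check Serhii describes, in Python; the rebuffer counts and the z-score threshold are illustrative assumptions, not part of any specific product.

```python
from statistics import mean, stdev

# Hypothetical playback log: rebuffer events per one-minute window of a stream.
rebuffer_counts = [0, 1, 0, 0, 2, 0, 1, 0, 9, 11, 0, 1]

def flag_anomalies(samples, z_threshold=2.0):
    """Flag windows whose rebuffer count deviates sharply from the baseline."""
    mu, sigma = mean(samples), stdev(samples)
    return [
        (i, count)
        for i, count in enumerate(samples)
        if sigma > 0 and (count - mu) / sigma > z_threshold
    ]

for window, count in flag_anomalies(rebuffer_counts):
    print(f"window {window}: {count} rebuffer events look anomalous")
```

A production system would use far richer features (startup time, bitrate ladder position, error codes), but the principle is the same: learn a baseline and surface the deviations.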
Generative AI tools are even more impressive, as they let us articulate an issue in natural language and get suggested test scenarios or checklists, accelerating the creation of comprehensive test suites. Once tests are up and running, AI-powered analytics monitor the results. Vision-based AI can inspect video frames to detect glitches or artifacts that automated scripts might miss. AI also automates routine QA tasks like checking stream URLs, verifying codecs, and ensuring compliance with protocol specs. AI automation can run continuously, verifying live streams or large VOD catalogs for issues. The main goal is to integrate AI into the CI/CD pipeline so that every time a new video is published, the system automatically checks its quality and compatibility. So, altogether, AI is remaking everything we do.
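As an illustration of the routine checks Serhii mentions, here is a minimal sketch of a manifest health check that could run in a CI/CD pipeline whenever a new video is published; the URL is a placeholder and the checks are deliberately simplified.

```python
import urllib.request

# Placeholder URL; in a real pipeline this would come from the publishing event.
MANIFEST_URL = "https://example.com/stream/master.m3u8"

def check_master_playlist(url: str) -> list[str]:
    """Fetch an HLS master playlist and run basic sanity checks on it."""
    problems = []
    with urllib.request.urlopen(url, timeout=10) as resp:
        if resp.status != 200:
            problems.append(f"unexpected HTTP status {resp.status}")
        body = resp.read().decode("utf-8", errors="replace")
    if not body.startswith("#EXTM3U"):
        problems.append("missing #EXTM3U header")
    if "#EXT-X-STREAM-INF" not in body:
        problems.append("no variant streams listed")
    return problems

if __name__ == "__main__":
    for problem in check_master_playlist(MANIFEST_URL):
        print("FAIL:", problem)
```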
Tell us about HLS Analyzer. How does it use AI to evaluate video streams?
HLS Analyzer is my answer to the age-old problem of stream validation. It is an open-source tool that leverages AI to analyze streaming content for quality and integrity. In other words, it’s like having an AI-driven expert that reviews streaming metadata.
Streaming manifests have strict rules, and even a small error can break playback on some devices. To manage this, I designed HLS Analyzer to use AI to understand the manifest structure and flag issues such as missing metadata, version mismatches, or inconsistencies between the listed content and the actual content.
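HLS Analyzer’s internals aren’t shown here, but to give a sense of the kind of rule-based manifest check that such AI review complements, here is a minimal sketch validating one real HLS rule: every segment’s EXTINF duration, rounded to the nearest integer, must not exceed EXT-X-TARGETDURATION.

```python
import re

def check_media_playlist(text: str) -> list[str]:
    """Basic structural checks on an HLS media playlist."""
    issues = []
    if "#EXT-X-VERSION" not in text:
        issues.append("missing #EXT-X-VERSION tag")
    target = re.search(r"#EXT-X-TARGETDURATION:(\d+)", text)
    if not target:
        issues.append("missing #EXT-X-TARGETDURATION")
        return issues
    target_duration = int(target.group(1))
    for dur in re.findall(r"#EXTINF:([\d.]+)", text):
        if round(float(dur)) > target_duration:
            issues.append(
                f"segment duration {dur}s exceeds target {target_duration}s"
            )
    return issues
```

An AI layer can then reason about what static checks like these can’t express, such as whether the listed renditions actually match the downloaded segments.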
By combining AI with traditional tools, HLS Analyzer can offer a powerful double-check. If new metadata tags are introduced or we want the tool to verify cross-provider consistency, we can update our AI prompts. It’s very flexible and learns from examples.
How is automation used to ensure video streaming quality, and what role does AI play?
Automation is the foundation of how our teams work. We run hundreds of automated tests every night, including functional, integration, and performance tests. Historically, these have been built with scripts and existing test automation frameworks, but AI will take on more and more of this work.
How so?
AI will be used to generate and optimize test cases: for example, analyzing past results and logs to identify which scenarios caused failures, then suggesting new test variations. After each release, an AI-based engine will review the test matrix and propose additional edge-case tests for the suite. AI can also help maintain the test suite itself. Modern AI tools detect code or interface changes and update the test steps automatically, which minimizes human effort and cuts down on spurious test failures. This means we will spend less time rewriting old tests and more time on higher-value work, like exploratory testing.
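As a hedged sketch of what such an engine might look like, the snippet below feeds failure logs to a language model and asks for new edge-case scenarios; it uses the OpenAI Python client, and the model name, prompt, and log excerpt are all illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical excerpt from last night's failed-test logs.
failure_log = """
test_bitrate_switch_4g_to_3g: FAILED - player stalled 8s after rendition drop
test_live_dvr_seek: FAILED - manifest refresh returned a stale segment list
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a QA assistant for video streaming test suites."},
        {"role": "user",
         "content": "Given these failures, propose three new edge-case test "
                    "scenarios as a bulleted list:\n" + failure_log},
    ],
)
print(response.choices[0].message.content)
```

In practice, an engineer would review the suggestions before anything lands in the suite, which is exactly the supervision Serhii describes next.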
The goal isn’t to replace QA engineers but to augment them. With AI, our roles shift from running every single check to supervising an intelligent system. We guide the AI, set expectations, and interpret the results. This is what AI was always supposed to deliver: taking over the boring routine work so that we can focus our energy on what really matters.
What are the unique challenges of testing video streaming and how can AI help address them?
Usually, video streaming involves multiple bitrates and codecs, variable network conditions, device fragmentation, DRM, and synchronization across audio/video streams. Each factor multiplies the number of test scenarios. A single live stream might have 5 renditions to support different bandwidth conditions, and testing bitrate switching at scale means dozens of combinations. It is very complicated!
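To see how quickly those combinations multiply, here is a minimal sketch; the parameter values are illustrative, not a real test matrix.

```python
from itertools import product

# Illustrative test dimensions; real matrices are usually larger.
renditions = ["1080p", "720p", "480p", "360p", "240p"]  # the 5 renditions
networks = ["wifi", "4g", "3g", "lossy"]
devices = ["smart_tv", "ios", "android", "web"]

combinations = list(product(renditions, networks, devices))
print(f"{len(combinations)} bitrate-switching scenarios")  # 5 * 4 * 4 = 80
```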
Reproducibility is one of the toughest issues, I think. A bug might only appear under very specific timing or network conditions. By analyzing playback logs from real users, AI can spot patterns that humans miss and identify the rare conditions that caused failures.
Video streams generate enormous amounts of media output, and validating every frame manually is simply impossible. But AI-based analytics can help scan these streams. For example, AI might look at streaming metrics over time and identify anomalies, or extract data from media files using tools like FFProbe and perform deep analysis to detect inconsistencies. Encryption and DRM make video streaming testing even more challenging. While AI can highlight potential compliance issues by analyzing logs and metadata, active security testing still requires human expertise and specialized tools.
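Picking up the FFProbe example, here is a minimal sketch of how a test harness might extract per-stream metadata for consistency checks; the input file name is a placeholder, and FFProbe must be installed on the machine.

```python
import json
import subprocess

def probe_streams(path: str) -> list[dict]:
    """Return per-stream codec metadata extracted with FFProbe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["streams"]

for stream in probe_streams("segment_001.ts"):  # placeholder file name
    print(stream["codec_type"], stream.get("codec_name"))
```

A check like “does every rendition carry the codec the manifest advertises” can then be automated across an entire catalog.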
So, as you can see, AI genuinely helps with video streaming testing by covering the major scenarios and combinations. It speeds up the testing process and gives quality assurance engineers more time to focus on the most critical aspects. However, there are still areas where human analysis and expertise are required to interpret results and draw the right conclusions.
You were a judge at Horizon AI Global Hackathon and HackUSF 2025 recently. What did you learn from these AI-focused events and why are such hackathons important today?
First, both Horizon AI and HackUSF 2025 were explicitly focused on AI and innovation. Horizon AI at the University of Miami was part of the AI Summit of the Americas, drew over 200 participants, and offered a substantial cash prize pool of $47,000. HackUSF 2025 at the University of South Florida was similarly dedicated to artificial intelligence and innovation, with Google and Microsoft as key sponsors who were crucial to the hackathon’s success.
Hackathons have evolved a lot since I got started in this field. Almost every hackathon now has an AI theme, I would say. Participants are using tools like TensorFlow, PyTorch, OpenAI APIs and computer vision libraries out of the box. It’s amazing. I saw projects integrating streaming, like AI-driven content recommendations and neural networks for optimizing video compression.
Hackathons accelerate learning and innovation. Teams tackle challenges end-to-end in 24-48 hours.
As a judge, I evaluated projects on key criteria: innovation and creativity, feasibility, potential impact, and the quality of the demo or proof of concept. I also paid attention to how well each project addressed problems specific to Florida, so I recognized projects that were both creative and practical for the local community.
What is the future of video streaming testing?
AI will be increasingly embedded at every layer, I think. More AI-driven tools like HLS Analyzer will emerge, covering every component of the streaming stack. AI will review live encoding pipelines in real time, providing fast feedback on performance and quality. And generative AI will aid test writing: QA teams will use it to draft test scripts or entire test suites from high-level requirements.
But, as I said, human oversight will remain critical. Even if an AI is extremely reliable, it can still fail with new features or in rare edge cases. The future is a collaboration between AI and human expertise. Experienced QA engineers will guide AI tools, interpret results, and make strategic decisions.
