In late 2025, labor-market analysts reported a structural shift that goes far beyond economic cycles. According to a November 2025 study by the Massachusetts Institute of Technology (MIT), AI is now capable of replacing tasks accounting for 11.7% of the U.S. workforce, equivalent to $1.2 trillion in wages, with the most significant impact observed in finance, healthcare, and professional services — sectors once considered relatively stable. This trend brings not only productivity gains but also sharp legal and ethical challenges: unclear responsibility for automated decisions, algorithmic bias in workplace tools, and the absence of transparent rules for companies integrating AI into daily operations.
Recognizing these gaps, legal expert and AI researcher Filip Ferents has become a vocal advocate for establishing fair-automation standards — a framework that defines how employers can adopt AI without harming workers’ rights, data privacy, or contractual obligations.
A Legal Background Shaped by Real Workplace Conflicts
Before turning to LegalTech, Ferents gained extensive experience resolving labor disputes, negotiating employment agreements, and defending workers in cases involving wrongful termination, unpaid wages, and unsafe working conditions.
That background now shapes his perspective: technology can support employees, but only when the rules of its use are transparent and enforceable.
Through Consumer Protect AI and several research initiatives, he has evaluated hundreds of workplace policies and algorithmic management practices. His expertise is recognized internationally:
- American National Quality Mark Award 2024 for excellence in contract and employment-related legal protection.
- Cases & Faces Award for socially impactful innovation.
- Membership in The Ventures Club and advisory roles in tech-ethics initiatives, where he studies fairness, explainability, and digital-rights compliance.
These roles give him a front-row view of how employers implement automation — and where it often goes wrong.
Q&A: What Exactly Needs Regulation — and Why Now?
Q: What is the biggest legal risk in workplace automation today?
A: “Delegation without responsibility. Companies rely on automated scoring, scheduling, or performance evaluation, but there’s no clear accountability if the system makes a harmful decision.”
Q: Should automation be slowed down?
A: “No. It should be structured. We need mandatory transparency: what the algorithm does, what data it uses, and what appeal options employees have.”
Q: How does bias appear in workplace AI?
A: “Through historical datasets, poorly defined criteria, or feedback loops. If an algorithm penalizes someone based on flawed data, the worker must have a mechanism to challenge and correct the outcome.”
Q: What elements should fair-automation standards include?
A: “Human-override rights, independent audits, explainability requirements, and contract clauses that properly define responsibility for automated actions.”
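The four elements Ferents names could be pictured as a simple compliance checklist. The sketch below is purely illustrative — the class and field names are assumptions for this article, not part of any published standard or of Ferents’s own tooling.

```python
from dataclasses import dataclass

@dataclass
class AutomationPolicy:
    """Illustrative checklist for the fair-automation elements above."""
    human_override: bool = False     # can a human reverse the automated decision?
    independent_audit: bool = False  # is the system reviewed by a third party?
    explainable: bool = False        # can decisions be explained to the worker?
    liability_clause: bool = False   # does the contract assign responsibility?

    def gaps(self) -> list[str]:
        """Names of requirements the employer has not yet met."""
        return [name for name, met in vars(self).items() if not met]

# An employer with override rights and explainability, but no audit or clause:
policy = AutomationPolicy(human_override=True, explainable=True)
print(policy.gaps())  # ['independent_audit', 'liability_clause']
```

A checklist like this makes the gaps auditable: a policy is only “fair-automation ready” when `gaps()` comes back empty.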
Building Tools for Ethical Adoption of AI
Ferents is currently working on model contract provisions and compliance templates that help companies integrate automation while staying within ethical and legal boundaries.
These tools emphasize clear risk allocation, consent-based data use, and real-time monitoring of algorithmic decisions. His long-term vision is a unified framework that can be applied globally, much like GDPR reshaped privacy norms.
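One way to picture the “real-time monitoring of algorithmic decisions” such templates call for is an append-only decision log that flags uses of non-consented data. Everything below — names, fields, functions — is a hypothetical sketch for illustration, not a description of Ferents’s actual compliance tools.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One logged automated decision, retained for audit and worker appeal."""
    worker_id: str
    system: str           # which algorithmic tool produced the decision
    outcome: str
    data_consented: bool  # was the input data covered by the worker's consent?
    timestamp: str

log: list[DecisionRecord] = []

def record_decision(worker_id: str, system: str,
                    outcome: str, consented: bool) -> DecisionRecord:
    """Append a decision to the audit log as it happens."""
    rec = DecisionRecord(worker_id, system, outcome, consented,
                         datetime.now(timezone.utc).isoformat())
    log.append(rec)
    return rec

def flag_unconsented() -> list[DecisionRecord]:
    """Decisions built on data the worker never consented to — audit targets."""
    return [r for r in log if not r.data_consented]

record_decision("w-101", "shift-scheduler", "reduced-hours", consented=False)
record_decision("w-102", "shift-scheduler", "unchanged", consented=True)
print(len(flag_unconsented()))  # 1
```

The design choice worth noting is the frozen record: once logged, a decision cannot be silently rewritten, which is what makes the log usable as evidence in an appeal.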
He argues that without such standards, workplace automation will continue to outpace regulation, leaving both employees and small businesses exposed to unpredictable risks. The goal, he stresses, is not to restrain innovation but to make it equitable and legally sustainable.
The Future of Work Needs Legal Engineering
As automation becomes embedded in every operational layer, classic labor law can no longer provide sufficient protection. Ferents believes the next decade will require “legal engineering” — a blend of law, AI ethics, and system design.
For him, the challenge is clear: building tools and rules that allow automation to scale without eroding human rights. If the modern workplace is to remain fair, the legal architecture must evolve as rapidly as the technology that drives it.