Belitsoft’s Report on The Current State of Shadow AI

Belitsoft is a custom software development company that implements security testing best practices when developing or testing AI software products.

Shadow AI Challenges

The Policy Gap

Employee use of generative AI tools is already common, but most organizations have not yet built comparably strong oversight. 

Fewer than one-third of companies maintain a detailed, end-to-end policy that covers procurement, testing, security, monitoring, and staff training for AI. 

Despite this limited policy coverage, surveys place generative AI uptake among employees between 72 percent and 83 percent, meaning the majority of day-to-day AI activity occurs without clear rules on acceptable data inputs, output validation, or liability.

Shadow AI Dominance

Roughly 80 percent of the AI applications in use fall into the category of “shadow AI”: software or browser extensions adopted without the knowledge or approval of IT. 

Because security teams do not review these tools, the associated logs and data flows never reach official monitoring systems, leaving no audit trail or enforcement path. 

More than half of employees—particularly from Gen Z and Millennial cohorts—state that they will continue using such unapproved tools even if management issues a ban, which limits the effectiveness of policy updates alone.

Explosive Growth Outpacing Security

Enterprise generative AI traffic contains every prompt, plug-in call, and API request that employees send to text or image generation models while working. This traffic grew by about 890 percent in 2024, increasing nearly tenfold in a single year.

Adoption is therefore expanding more quickly than security teams can put new controls in place. Defensive measures—monitoring, access rules, and data loss prevention tools—now lag behind usage.

Control System Failures

Current technical controls detect less than 20 percent of the actual generative AI activity on corporate networks. Endpoint agents and browser-level blocks miss much of the usage, because employees can move to personal devices, call alternative API endpoints, or rely on AI functions embedded in familiar software. 

To address these gaps, specialists recommend shifting governance to a network or SaaS platform. At that deeper control point, every prompt sent to an AI model can be intercepted, inspected for sensitive content or policy violations, and, when necessary, allowed, redacted, or redirected to an approved internal model, all within a response time short enough to maintain a seamless user experience.
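
As a rough illustration of that flow, the sketch below (in Python) intercepts a single prompt, scans it for sensitive patterns, and decides whether to allow, redact, or redirect it. The regex patterns, the policy flag, and the Decision structure are assumptions made for illustration, not features of any vendor's product; a real control point would rely on a managed DLP engine.

    import re
    from dataclasses import dataclass

    # Illustrative patterns only; a production control point would rely on a
    # managed DLP engine rather than hand-written regular expressions.
    SENSITIVE_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
    }

    @dataclass
    class Decision:
        action: str   # "allow", "redact", or "redirect"
        prompt: str   # the prompt to forward, possibly with spans masked

    def inspect_prompt(prompt: str, policy: str = "redact") -> Decision:
        """Decide how the control point should handle one outbound prompt."""
        hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
        if not hits:
            return Decision("allow", prompt)
        if policy == "redirect":
            # Keep the prompt intact but send it to an approved internal model.
            return Decision("redirect", prompt)
        redacted = prompt
        for name in hits:
            redacted = SENSITIVE_PATTERNS[name].sub(f"[{name.upper()} REDACTED]", redacted)
        return Decision("redact", redacted)

    # Example: inspect_prompt("Summarize card 4111 1111 1111 1111") returns a
    # "redact" decision with the card number masked.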

Security Risks and Vulnerabilities

Generative AI use is accelerating faster than protective controls. Each new integration point introduces another path for data loss, fraud, or tampering. Matching the pace of security investment and process change—including detailed access rules, comprehensive logs, resilient development pipelines, and deepfake detection—with the pace of AI adoption is necessary to reduce the likelihood of breaches and targeted attacks.

Each prompt, plug-in call, or API request can carry business data, so every transaction is a possible route for data exposure or manipulation. 

Surveys indicate that roughly two-thirds of professionals expect attackers to adopt the same AI tools, using code generators to write exploits or voice cloning to impersonate employees. 

They also expect synthetic audio and video to become far more realistic within twelve months. Only 18 to 21 percent of organizations budget for deepfake detection or equivalent safeguards, underscoring a gap between recognized risk and actual investment.

Several specific weaknesses contribute to the current security gap. 

  1. Hard-coded API keys place reusable secret tokens directly inside source code, so anyone with repository access can reuse the key (a short sketch of the safer pattern follows this list). 
  2. Cross-tenant context leaks in Copilot let snippets from one customer’s documents appear in another customer’s results. 
  3. Unsecured AI databases expose model stores to the internet without authentication. 
  4. Prompt injection attacks alter or override a model’s instructions during runtime. 
  5. Model poisoning attacks add malicious data to training sets and bias future outputs. 
  6. Undocumented open source dependencies introduce libraries without formal records, making later vulnerability checks difficult.
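
To make the first weakness concrete, here is a hedged before-and-after sketch in Python. The commented-out key string is a placeholder and the environment variable name is an assumption; the point is only that secrets should be injected at runtime rather than committed to the repository.

    import os

    # Anti-pattern: the token sits in source control, so anyone with access to
    # the repository (or its history) can reuse it.
    # API_KEY = "sk-placeholder-do-not-hard-code"

    # Safer pattern: resolve the secret at runtime from the environment or a
    # secrets manager, so the repository never contains the token itself.
    def load_api_key() -> str:
        key = os.environ.get("LLM_API_KEY")
        if key is None:
            raise RuntimeError("LLM_API_KEY is not set; retrieve it from your secrets manager")
        return key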

Large language model agents that search repositories such as SharePoint, Google Drive, or Amazon S3 read corporate files and answer user questions. Without file-level permissions, audit logs, and policy guardrails, these agents can reveal confidential documents—such as payroll records or unreleased designs—to unauthorized users. Addressing this risk requires granular permissions, continuous monitoring, secure model pipelines, and tools that detect fake media.
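
The sketch below shows the shape of the file-level permission check such an agent needs before a retrieved document enters the model's context: results are filtered against the asking user's groups, and every decision is logged. The Document type, the directory lookup, and the group names are hypothetical stand-ins for a real identity and repository API.

    from dataclasses import dataclass
    from typing import Iterable, List, Dict

    @dataclass
    class Document:
        doc_id: str
        text: str
        allowed_groups: frozenset   # groups permitted to read this file

    def user_groups(user_id: str) -> frozenset:
        """Hypothetical stand-in for a directory/IAM lookup."""
        directory = {"alice": frozenset({"finance", "all-staff"}),
                     "bob": frozenset({"all-staff"})}
        return directory.get(user_id, frozenset())

    def authorized_context(user_id: str, retrieved: Iterable[Document],
                           audit_log: List[Dict]) -> List[Document]:
        """Keep only documents the asking user may read; log every decision."""
        groups = user_groups(user_id)
        context = []
        for doc in retrieved:
            granted = bool(groups & doc.allowed_groups)
            audit_log.append({"user": user_id, "doc": doc.doc_id, "granted": granted})
            if granted:
                context.append(doc)
        return context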

Regulatory and Legal Enforcement

The European Union’s AI Act classifies systems by risk and requires documentation, transparency, and human oversight; high-risk systems must pass conformity assessments and keep audit trails. Several EU and US jurisdictions have also banned or restricted foreign language models such as DeepSeek when the providers cannot meet local data protection or security standards. These measures compel organizations that rely on ad hoc rules to establish formal, auditable control frameworks before the enforcement deadlines take effect.

Regulators have begun applying penalties. The US Federal Trade Commission fined a startup that advertised an automated form generator as a “robot lawyer”, and the Securities and Exchange Commission sanctioned a company whose fraud detection algorithm worked no better than a coin toss. The Department of Justice has made AI-enabled healthcare fraud its top white-collar priority, signaling closer scrutiny across sectors.

Civil litigation is rising in tandem. Plaintiffs claim that Workday’s résumé-screening tool discriminates by age, that several insurers’ predictive models wrongly deny valid claims, that Anthropic trained its model on Reddit content without permission, and that chatbot output on Google’s Character.AI contributed to a teenager’s suicide. Courts have allowed some of these cases to proceed, which suggests that AI output may fall under existing product liability rules.

Technical practice presents an additional risk. Sending personal or proprietary data to public language models with servers outside domestic jurisdictions can breach statutes such as the GDPR and HIPAA, because the cross-border processing itself counts as an unauthorized transfer even if no public leak occurs.

Regulators, prosecutors, and courts are moving from guidance to enforcement. Organizations that fail to adopt structured, auditable AI governance now face fines, civil damages, and potential criminal liability.

Workforce Impact: Skills Gaps and Training Needs

The workforce is adopting AI faster than companies are supplying training and managerial guidance. Unless organizations close this skills gap and give concrete instructions on how to apply the time that AI frees up, the potential efficiency gains will remain fragmented and difficult to scale.

Forty-two percent of knowledge workers say they must acquire additional AI skills within the next six months—an eight percentage point rise from last year—and 89 percent expect to need those skills within two years. “New AI skills” covers practical abilities such as writing precise prompts, interpreting model output, integrating AI workflows into existing tools, and recognizing data privacy limits. Despite this urgency, 32 percent of employers provide no formal AI training, and only 22 percent extend training to every employee. Because training is so uneven, only 36 percent of workers feel they know enough to use AI effectively and safely; the remainder consider themselves underprepared, meaning they lack confidence in choosing the right tools, safeguarding data, and validating results.

Among frontline staff—the employees who deliver day-to-day services or production work—structured guidance is weak. Just one in four receives active coaching from a direct manager on how to use AI. 

In the United Kingdom, half of current users report that generative tools save them more than an hour a day, chiefly by drafting emails, summarizing documents, or automating routine queries. Yet 60 percent of those same users say nobody has clarified how they should deploy the recovered time—for example, whether to focus on additional client work, process improvements, or new projects. 

Without clear direction, each team develops its own ad hoc approach, so time savings do not automatically translate into consistent processes or measurable business value.

Demonstrated Benefits and Efficiency Gains

Generative AI applications are already saving labor and time, and additional benefits are expected. Realizing those benefits consistently depends on selective controls—such as private models for sensitive data—instead of blanket bans.

Most organizations still operate with limited oversight of generative AI tools: access controls, audit trails, and formal approval processes remain incomplete. Even under those conditions, 56 percent of surveyed employees say that AI lets them complete the same workload in less time, and 71 percent report specific instances—such as drafting messages or summarizing documents—where tasks finish faster. Sixty-two percent anticipate further gains during the next year, naming targets like lower operating costs, quicker decisions, and higher output volume.

Industry examples match these perceptions. In hospitality, customer service teams run chatbots to answer routine guest questions, shift planners use AI assistants that fill rosters automatically, and marketing staff rely on language models to draft advertisements; each tool cuts manual effort and shortens response times. 

Engineering and retail firms apply a dual-channel approach: prompts containing source code, product drawings, or demand forecasts go to an internal model hosted on company servers, while non-sensitive requests continue through public services. This separation accelerates coding, design adjustments, and inventory projections without placing confidential material in external environments.
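
Expressed as code, the routing rule might look like the following sketch. The endpoint URLs and the keyword-based sensitivity check are placeholders for illustration; real deployments usually classify prompts with data labels or a trained classifier rather than keywords.

    # Dual-channel routing: sensitive prompts stay on infrastructure the company
    # controls, everything else may use the cheaper public service.
    INTERNAL_ENDPOINT = "https://llm.internal.example.com/v1/chat"   # placeholder URL
    PUBLIC_ENDPOINT = "https://api.public-llm.example.com/v1/chat"   # placeholder URL

    SENSITIVE_MARKERS = ("source code", "product drawing", "demand forecast",
                         "customer record", "confidential")

    def choose_endpoint(prompt: str) -> str:
        lowered = prompt.lower()
        if any(marker in lowered for marker in SENSITIVE_MARKERS):
            return INTERNAL_ENDPOINT   # never leaves company-hosted infrastructure
        return PUBLIC_ENDPOINT         # routine requests can use the public model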

Organizations that prohibit generative AI tools forgo these efficiencies, whereas those that allow use under defined safeguards record measurable improvements in throughput and planning accuracy.

Workplace Surveillance and Human Impact Concerns

AI systems can centralize power in code and distance managers from accountability unless safeguards—transparency, human oversight, and participatory design—are present at every stage of deployment.

AI-enabled surveillance refers to the routine, automated monitoring of employees’ actions, movements, keystrokes, output, or communications through cameras, sensors, logs, and analytics that run on machine learning models. When these systems deliver opaque decisions, the people affected cannot see which data points, weightings, or rules shaped the outcome, so they are unable to verify its accuracy or fairness. AI can also supply covert behavioral nudges—subtle prompts, rankings, or default settings that steer how staff work without openly stating that influence is being applied.

Applied together, these mechanisms can erode autonomy, trust, and well-being by gradually reducing employees’ sense of control, lowering confidence in management, and adding stress. To counter those effects, organizations need to build transparency and accountability into deployments. That involves publishing clear explanations of system purpose, data sources, and decision criteria, maintaining audit trails, opening appeal channels, and performing independent reviews before launch.

Without such measures, AI programs risk dehumanizing staff by treating them as interchangeable data points and by shifting blame—“the algorithm decided”—when outcomes are unpopular or harmful. They can also manipulate behavior through algorithmic guidance that aligns with management goals but may not be visible to employees.

Responsible leadership addresses these concerns by anticipating dark patterns, by tying AI use to explicit human oversight that can review and override automated steps, and by encouraging employee engagement through early explanation, feedback collection, and shared guardrails.

Structured Governance Framework

Placing a single control layer between users and every generative AI endpoint provides consistent security, cost control, and operational metrics while allowing employees to work in their usual applications.

A structured generative AI program starts with visibility. The team keeps a single, maintained inventory that lists every model, plug-in, and standard prompt in use, so anyone responsible for risk can see the full footprint at any time.

Governance

Access is set to the least privilege each employee needs; data loss prevention (DLP) rules block sensitive information from leaving the firm; tamper-proof logs capture every prompt and response; and scheduled red-team tests check whether the models can be tricked, biased, or misled.

Enablement

All staff receive training that matches their roles, workflows are adjusted so AI output fits naturally into daily tasks, and each request is routed to the lowest-cost or in-house model that meets quality requirements.

Implementation

Before any new tool goes live, security, privacy, and compliance teams vet it. Once approved, the tool is added to the shared inventory. Role-based permissions limit what each user can do, and audit-quality logs store the full prompt, response, time stamp, user ID, and model version.
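
A minimal sketch of what one audit-quality log entry could contain, covering the fields listed above; the JSON-lines format and the append-only store are assumptions about the surrounding infrastructure.

    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class PromptAuditRecord:
        user_id: str
        model_version: str
        prompt: str
        response: str
        timestamp: str

    def audit_line(user_id: str, model_version: str, prompt: str, response: str) -> str:
        """Serialize one prompt/response exchange as a JSON line for an append-only store."""
        record = PromptAuditRecord(
            user_id=user_id,
            model_version=model_version,
            prompt=prompt,
            response=response,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        # In practice the line would be appended to a write-once or hash-chained
        # log so that entries cannot be silently altered.
        return json.dumps(asdict(record))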

Control measures

Red-team simulations run on a calendar to test for prompt injection, hallucination, and bias, and guardrails block or require human sign-off for actions such as changing source data, sending external emails, or committing code.
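
One way to encode that sign-off rule is sketched below, assuming illustrative action names and a simple approval argument; a production system would route pending requests into a review queue rather than returning a status.

    from typing import Optional

    # Actions that must never run without explicit human approval (illustrative names).
    REQUIRES_SIGN_OFF = {"modify_source_data", "send_external_email", "commit_code"}

    def run_agent_action(action: str, payload: dict, approved_by: Optional[str] = None) -> dict:
        """Execute an agent action only if policy allows it or a named human approved it."""
        if action in REQUIRES_SIGN_OFF and approved_by is None:
            # Park the request for human review instead of executing it.
            return {"status": "pending_approval", "action": action}
        # ... the guarded action would run here ...
        return {"status": "executed", "action": action, "approved_by": approved_by}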

Traffic management

An AI control plane intercepts every prompt, inspects it for restricted data, blocks uploads that breach policy, and routes non-sensitive requests to cheaper public or private models. The same layer records key performance indicators—how many prompts were redirected, how many were blocked by DLP rules, user satisfaction ratings, and cost per token—so policies and budgets can be updated with evidence.
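
The sketch below suggests how the same layer might accumulate those indicators; the counter names mirror the metrics named above, while the rating scale and the per-token cost model are assumptions.

    from collections import Counter
    from typing import Optional

    class ControlPlaneMetrics:
        """Tracks prompts redirected, blocked, and allowed, plus satisfaction and token cost."""

        def __init__(self) -> None:
            self.counts = Counter()
            self.ratings = []            # e.g. 1-5 user satisfaction scores
            self.token_cost_usd = 0.0

        def record(self, decision: str, tokens: int, usd_per_token: float,
                   rating: Optional[int] = None) -> None:
            self.counts[f"prompts_{decision}"] += 1   # decision: "allowed", "blocked", "redirected"
            self.token_cost_usd += tokens * usd_per_token
            if rating is not None:
                self.ratings.append(rating)

        def summary(self) -> dict:
            avg = sum(self.ratings) / len(self.ratings) if self.ratings else None
            return {**self.counts, "avg_satisfaction": avg,
                    "token_cost_usd": round(self.token_cost_usd, 2)}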

Foundation Requirements

An AI initiative succeeds only when legal, data, and infrastructure foundations are in place first. 

  • Clear contractual terms and trademark filings protect intellectual property and privacy.
  • Clean, standardized data and modern systems allow accurate model output. 
  • Automated monitoring tools let small teams maintain security and compliance at scale. 

Neglecting any of these foundations raises legal, operational, or security risks that the AI itself cannot solve.

When a company uploads its own data or customer information to a public-facing AI service, it is placing that content on servers the business does not control. Doing so can transfer some intellectual property rights to the platform under its terms of service, and it can expose personal data in ways that violate the General Data Protection Regulation (GDPR). To prevent this, business owners should review and update supplier contracts so that any AI provider is bound by strict confidentiality and data protection clauses. They should add internal policy language that tells staff which data may or may not be shared with an external model. Registering trademarks before any public AI use protects the company’s original data from being reused without permission.

Future Outlook: AI Agents and Strategic Implications

AI agents are autonomous software components that apply machine learning models to carry out business tasks and make decisions without ongoing human direction. Most professionals consider these agents essential for future operations, yet few organizations have moved beyond limited pilots; very few run agents in production environments that handle day-to-day work.

This gap between planned use and actual deployment reflects a broader risk-action gap. Many firms acknowledge legal, security, and ethical risks but have not put concrete safeguards in place. Closing this gap requires institutionalizing robust governance: assigning clear roles, enforcing least-privilege access controls, logging every agent action, monitoring behavior continuously, and defining escalation paths for exceptions. With those elements in place, AI shifts from a potential liability—prone to compliance failures or data breaches—to a reliable tool that lowers costs, speeds decisions, and creates new revenue opportunities.

Organizations that establish these controls early will gain a competitive advantage through safe, large-scale AI deployment. Laggards that delay investments face escalating exposure to data leaks, regulatory penalties, and reputational damage, all of which become harder and costlier to address once agents are embedded in critical workflows.

The statistics used are extracted from these reports: ISACA – AI Pulse Poll 2025, BCG – AI at Work 2025, Microsoft & LinkedIn – Work Trend Index 2024 “From Hype to Habit”, World Economic Forum – Global Cybersecurity Outlook 2025, Zluri – State of AI in the Workplace 2025, and Palo Alto Networks – State of Generative AI 2025 report.

About the Author

Dmitry Baraishuk is a partner and Chief Innovation Officer at the software development company Belitsoft (a Noventiq company). Dmitry has been leading a department specializing in custom software development for 20 years. His department has delivered hundreds of projects in AI software development, healthcare and finance IT consulting, application modernization, cloud migration, data analytics implementation, and more for startups and enterprises in the US, UK, and Canada.
