Making AI Useful at Work: Angelina Samoilova on Turning Experiments into Everyday Workflows

Angelina Samoilova has built her career at the intersection of hypergrowth and digital transformation. Having lived in nine countries across five continents, she now works with start-ups and scale-ups across EMEA on digital workplace strategy and AI adoption. Previously at Remote, she helped scale the company from zero to 2,000 employees, leading APAC expansion and closing over $1 million in ARR in year one. Her experience navigating operational chaos and regional complexities shapes her pragmatic approach: turning AI experiments into scalable, everyday workflows that drive real organisational value.

Please tell us more about yourself.

I’m Angelina Samoilova. I’ve lived in nine countries across five continents and am now based in Dublin. I work with tech start-ups and scale-ups across EMEA on their digital workplace strategy and AI adoption. Before this, I spent several years at Remote, a fast-growing company that scaled from zero to 2,000 people, so I’ve experienced the operational chaos of hypergrowth firsthand. At Remote, I worked on global expansion initiatives, particularly building out the APAC market. I was the first sales representative for APAC and closed over a million dollars in ARR in the first year, laying the foundations for the APAC go-to-market team to grow to over 50 people. That combination, seeing what breaks at scale and helping companies navigate regional differences, shapes how I think about practical AI implementation today.

Many teams experiment with AI in pilots but struggle to make it part of everyday processes. From your work with scale-ups, what distinguishes experiments that fizzle out from those that successfully become daily workflows?

The biggest difference is starting with a concrete problem, not a technology. The experiments that stick begin with someone saying, “I’m spending five hours a week hunting for information across different systems” or “We’re manually processing RFPs and it’s killing us.” When you can directly tie AI to solving that specific pain, adoption happens naturally.

The ones that fizzle? They usually start with “We should do AI” without a clear business case. Or they try to implement everything at once instead of targeting high-impact, low-complexity wins first.

The other critical factor is integration into existing workflows. If people have to remember to open a separate app or fundamentally change how they work, it won’t stick. AI needs to meet people where they already are: in their communication tools, their documentation systems, their daily processes. When it becomes invisible infrastructure rather than another tool to manage, that’s when it becomes part of the daily workflow.

You have advised teams on governed AI and practical use cases. Where do you see AI agents making the most tangible difference today, and which tasks still benefit most from human oversight?

The tangible wins are in knowledge retrieval and workflow automation. AI that can search across scattered systems (communications, documentation, code repositories) and surface contextual answers is delivering immediate value. The same goes for automating repetitive workflows: taking meeting outcomes and turning them into structured tasks, generating first drafts of routine documents, organizing information without manual effort.
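
To make the workflow side concrete, here’s a minimal sketch of turning meeting notes into structured tasks with a single LLM call. It assumes the openai Python package and an API key are available; the model name, prompt, and task schema are illustrative assumptions, not any particular product’s implementation.

```python
# Minimal sketch: extract structured action items from meeting notes.
# Assumptions: the openai package is installed, OPENAI_API_KEY is set,
# and the model name and task schema below are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_tasks(meeting_notes: str) -> list[dict]:
    """Return action items as [{"owner": ..., "task": ..., "due": ...}]."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever you run
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": ('Extract action items from the meeting notes. '
                         'Reply as JSON: {"tasks": [{"owner": str, '
                         '"task": str, "due": str or null}]}')},
            {"role": "user", "content": meeting_notes},
        ],
    )
    return json.loads(response.choices[0].message.content)["tasks"]

# Each task dict can then be pushed into the task system the team already
# uses (Jira, Linear, Asana), which is what makes the automation feel like
# invisible infrastructure rather than another tool to manage.
```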

The sweet spot is AI handling the research, organization, and first pass while humans make the final decisions. For compliance work, customer communications, or strategic decisions, you need human judgment. AI is brilliant at “given this context, execute this task,” but it can’t read organizational dynamics, understand political nuances, or make judgment calls about what matters most to your business right now.

Human oversight is also critical for quality control. AI can draft an RFP response, but someone needs to verify accuracy and tone. It can suggest next steps, but someone needs to prioritize based on business context. The teams winning with AI understand this balance: they’re not trying to eliminate human involvement, they’re trying to eliminate human drudgery.

Fast-growing companies often end up with too many overlapping tools. How do you approach the challenge of consolidating a stack without slowing teams down or creating resistance?

Don’t rip everything away at once. Start by mapping where you have the most duplication or the biggest pain points: five different note-taking tools, three project management platforms. Those are your opportunities.

The key is demonstrating value before you take things away. Show teams how consolidation actually simplifies their workflow rather than complicating it. If you can prove that reducing from eight apps to three means less context-switching and faster access to information, resistance drops significantly.

You also need to meet teams where they are. If engineering has a tool they love, don’t force them to abandon it immediately. Look for integration points instead. Can non-technical teams get the visibility they need without requiring everyone to adopt the same tools? Often, you can consolidate viewing access while maintaining specialized tools for power users.

Resistance comes when people feel like you’re making their work harder. If you can show them you’re actually reducing cognitive load and administrative overhead, adoption follows. The mistake is treating tool consolidation as a cost-cutting exercise rather than a productivity initiative.

The phrase “governed AI” can sound restrictive. What simple rules or frameworks have you seen work well to ensure responsible use of AI while keeping innovation moving?

Good governance is invisible: it works in the background without slowing people down. The frameworks that work are practical, not bureaucratic.

First, respect existing permission structures. AI should inherit your organization’s access model. If someone can’t see a document today, they shouldn’t get AI-generated insights from it tomorrow. This isn’t creating new restrictions; it’s ensuring AI respects boundaries you’ve already established.
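
To sketch what inheriting the access model can look like in practice: the retrieval layer below filters search results against the user’s existing permissions before anything reaches the model. The search index and ACL field are hypothetical stand-ins for whatever your stack already provides.

```python
# Sketch: permission-aware retrieval. The model never sees a document the
# requesting user could not already open. `search_index` and the `acl`
# field are hypothetical stand-ins for existing search/permission layers.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    acl: set[str]  # user or group ids already allowed to read this document

def user_can_read(user_id: str, doc: Document) -> bool:
    # Reuse the permission model you already have; don't invent a new one.
    return user_id in doc.acl

def retrieve_for_user(user_id: str, query: str, search_index,
                      top_k: int = 5) -> list[Document]:
    # Over-fetch, then filter: the permission check runs BEFORE anything
    # reaches the LLM, so answers inherit existing boundaries by design.
    candidates = search_index.search(query, limit=top_k * 4)
    return [d for d in candidates if user_can_read(user_id, d)][:top_k]
```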

Second, establish clear data governance early. Know where your data lives, who owns it, and what compliance requirements apply. I’ve seen too many teams get excited about capabilities and then hit a wall when legal asks basic questions about data residency or retention. Have those conversations upfront.

Third, build in visibility. You need to understand what’s being accessed, what questions are being asked, and how AI is being used. This isn’t surveillance; it’s having the insights to spot issues early and understand where AI delivers value versus where it adds noise.
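
A minimal version of that visibility layer can be as simple as an append-only log, one structured record per AI query. The field names here are illustrative assumptions, not a prescribed schema.

```python
# Sketch: a lightweight audit trail for AI usage. One structured record per
# query keeps usage reviewable without turning governance into surveillance.
import json
import time

def log_ai_query(user_id: str, question: str, source_doc_ids: list[str],
                 logfile: str = "ai_audit.jsonl") -> None:
    record = {
        "ts": time.time(),          # when the question was asked
        "user": user_id,            # who asked it
        "question": question,       # what was asked
        "sources": source_doc_ids,  # which documents informed the answer
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
```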

Fourth, designate ownership that spans technical and ethical dimensions. Someone needs to be asking: What data is AI accessing? Should it be? How do we know the data is good? Is bias being introduced? These need ongoing attention, not one-time answers.

The key insight: governance shouldn’t brake innovation; it should enable it. When teams trust that AI works with good data and respects company values, adoption accelerates. The restrictions people worry about are usually poorly designed afterthoughts. Good governance is the foundation that lets you move fast with confidence.

As someone focused on digital workplace strategy, how can AI actually improve knowledge discipline, rather than just adding another layer of noise?

AI forces organizational discipline because it only works when knowledge is properly organized. If your information is scattered across twelve systems, buried in random documents, lost in chat threads, AI can’t help you. But when you centralize and structure knowledge, AI becomes a force multiplier.

I see this pattern constantly: teams want AI to answer questions about their processes or policies. Where are those documented? “Uh, some here, some there, some in email…” That’s the problem. AI actually incentivizes cleanup because the value is so clear when it works versus so frustrating when it doesn’t.

The other benefit is that AI surfaces what’s outdated. If it’s pulling from content that hasn’t been updated in two years, that becomes visible. It forces teams to think about currency, ownership, and what’s actually still relevant versus digital clutter.

So AI doesn’t create discipline, it requires it and rewards teams who have it. The companies getting the most value are the ones treating AI implementation as an opportunity to fix their knowledge management, not just as another technology layer.

Rolling out new tools or AI processes requires behaviour change. What role do champions, training, and everyday prompts play in getting people to adopt new ways of working?

Champions are critical. You need evangelists in each department who understand the value and can demonstrate it to peers. Without that internal push, adoption stalls no matter how good the solution is.

Training has to be ongoing and contextual, not a one-time event. Show marketing teams content calendar examples. Show engineering teams how it integrates with their code workflows. Show finance teams reporting use cases. Generic demos get generic adoption. Specific, relevant examples drive behavior change.

Everyday prompts matter enormously. Automated notifications, integrated workflows, reminders that pop up in tools people already use, these keep new behaviors top of mind. The goal is making it easier to use the new approach than to fall back on old habits.

The mistake teams make is thinking technology alone drives adoption. It doesn’t. You need champions who build enthusiasm, training that’s relevant to actual work, and system design that makes new behaviors the path of least resistance. That’s when change sticks.

You have worked on both global expansion and regional builds. How do cultural or regional differences shape the way organisations adopt AI and digital workplace tools?

Regional differences are significant. European companies prioritize GDPR compliance and data residency; it’s often non-negotiable. APAC markets show varying levels of AI optimism, with some moving very fast on experimentation and others taking a more measured approach focused on proven ROI.

Time zones shape requirements too. Distributed teams need strong asynchronous capabilities, comprehensive documentation, meeting transcripts, clear audit trails. Synchronous-heavy cultures have different priorities around real-time collaboration.

Attitudes toward change vary. Some regions have a “move fast and test” mentality. Others want extensive business cases before committing. You can’t assume one approach works everywhere. You need local champions who understand regional context and can translate value propositions in culturally relevant ways.

The mistake is treating global rollouts as one-size-fits-all. Successful adoption requires adapting your approach: quick wins in one market, a comprehensive ROI case in another, with governance frameworks that respect local requirements.

You have been appointed as a judge across independent programmes and hackathons. How do you evaluate what is truly innovative versus what is more of a repackaging in the AI and workplace tech space?

I’ve judged hackathons focused on “Tech for Good” and business innovation awards, and the evaluation framework is consistent: look beyond the surface to understand the actual value being created.

For hackathons, I evaluate across innovation, originality, impact, use of technology, and user experience. True innovation shows up when someone identifies a gap that others missed or approaches a known problem from an unexpected angle. In tech-for-good contexts, I look for solutions that demonstrate genuine understanding of the communities they’re trying to help, not just technology looking for a problem.

In business awards, the bar is higher: you need sustained impact, not just a proof of concept. Did this fundamentally change how the organization operates? Can you quantify the business outcomes? Is this defensible and scalable?

Whether I’m judging a weekend hackathon or a major business achievement, the question is the same: does this solve a problem that matters in a way that couldn’t be done before? The best innovations make that case clearly and demonstrate tangible results.

Looking ahead, what practical AI use cases do you think will become standard in 100–300-person companies within the next two years, and which trends are overhyped?

What will be standard:

Meeting intelligence: automatic transcription, summarization, and action-item extraction that flows directly into task systems. Manual note-taking will feel archaic.

Cross-platform knowledge search: AI that searches across all your systems and gives contextual answers with proper citations. The “I know we discussed this somewhere” problem will be solved.

Routine content generation: RFP responses, marketing drafts, project briefs. AI won’t write the final version, but it’ll get you 70% there, cutting hours of work to minutes.

Workflow automation through natural language: telling systems what you need instead of manually building it. This dramatically lowers the technical barrier for process optimization.

What’s overhyped:

“AI will replace entire teams”: no. The companies winning are augmenting teams, not replacing them. The human-AI partnership is where value lives.

“General-purpose AI assistants that do everything”: specificity wins. AI deeply integrated into your context will always beat generic chatbots trying to do everything.

“Fully autonomous agents making strategic decisions”: we’re not close. AI excels at execution, but strategy requires understanding organizational dynamics, priorities, and context that AI doesn’t have. That’s not changing in two years.

The real opportunity is making AI useful for everyday work, not chasing sci-fi scenarios. That’s where the value is, and that’s what will actually transform how 100–300-person companies operate.
