Designing AI That Sees Context, Not Bias in Recruiting

Roman Ishchenko explained how an AI-driven recruiting system can understand context, use complex data models, and still preserve a human-centered, fair approach to evaluating candidates.

As artificial intelligence becomes deeply embedded in everyday work processes, one question grows more urgent: how can we preserve the human side of decision-making? Nowhere is this tension sharper than in hiring, where an algorithm’s mistake can directly affect someone’s career. Automation can analyze vast amounts of information, uncover hidden patterns, and streamline workflows, but its purpose is not to replace human judgment. Its purpose is to strengthen that judgment, making decisions more informed, fair, and context-aware.

This is the challenge Raised AI is built to solve. The company develops an intelligent hiring engine that helps organizations identify truly relevant, high-performing candidates by modeling roles, enriching fragmented data, and reducing biases that often distort traditional recruitment.

We spoke with Roman Ishchenko, the founder of Raised AI and a technical and mathematical expert who designed the core architecture of the platform — from the structure of the matching process to the data models and the interaction between AI components. In this interview, Roman explains how to build AI that doesn’t lose sight of what matters most — the person — and why this balance between technology and values is shaping the future of hiring.

Roman, why did you choose recruiting as the industry to apply advanced AI? There are so many fields where AI could be used; why this one?

It actually happened quite organically. I started with a simple observation: hiring is one of the most information-heavy processes inside any company, and yet it has almost no tooling capable of understanding the complexity of that information.

A recruiter sees a résumé and a job description. But beneath that, there’s a whole world: the skill stack the candidate likely used, the industry they worked in, how teams at their previous company are structured, whether that company recently changed direction, whether certain roles tend to succeed in certain environments, how career trajectories typically evolve — and all of this is evolving constantly.

It’s almost impossible for a human to mentally hold and process all these signals. But for AI, especially modern models, this is exactly the type of problem they’re good at: messy, unstructured, high-context data with lots of missing pieces. Once I realized that, recruiting became a very natural domain to focus on.

Your system goes beyond reading resumes — what other information do you gather? What kind of data needs to be included for proper matching?

A lot of what we do is contextual understanding. We don’t rely only on what a candidate writes. We combine that with information about the companies they worked for, the technologies those companies use, their products, their industry dynamics, and recent events that might influence candidate behavior.

For example, if we know a company is building a mobile app using a specific stack, and a candidate was part of the mobile team there, the model can reasonably infer what technologies they likely worked with, even if it wasn’t spelled out. Similarly, if there are reliable reports that a company went through a reorganization or layoffs, the system can treat that as a signal of potential job-seeking activity.

A recruiter might be aware of a few such things. AI can process thousands of these signals simultaneously. That’s where the real value is: enriching the incomplete picture that candidates and companies naturally provide.
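The kind of contextual enrichment Roman describes can be sketched in a few lines. Everything here is a hypothetical illustration — the field names, the `COMPANY_FACTS` data, and the 0.6 confidence weight are assumptions, not Raised AI’s actual schema:

```python
# Hypothetical sketch of context-based enrichment. All names and
# numbers are illustrative assumptions, not Raised AI's real schema.

COMPANY_FACTS = {
    "acme": {
        "teams": {"mobile": ["Kotlin", "Swift"]},  # team -> known stack
        "recent_events": ["layoffs_2024"],
    },
}

def enrich(candidate):
    """Infer likely skills and job-seeking signals from company context."""
    facts = COMPANY_FACTS.get(candidate["company"], {})
    inferred = facts.get("teams", {}).get(candidate["team"], [])
    # Skills the candidate listed stay authoritative; inferred ones are
    # added with a lower confidence weight rather than overwriting anything.
    likely_skills = {s: 1.0 for s in candidate["skills"]}
    for s in inferred:
        likely_skills.setdefault(s, 0.6)  # assumed confidence for inferences
    searching = any(e.startswith("layoffs") for e in facts.get("recent_events", []))
    return {"likely_skills": likely_skills, "possibly_searching": searching}

profile = {"company": "acme", "team": "mobile", "skills": ["Kotlin"]}
print(enrich(profile))
```

A mobile engineer at this fictional company who only listed Kotlin would also be credited with likely Swift experience, at lower confidence, and the layoff report would surface as a job-seeking signal.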

How does this actually work at the technical level? What’s happening inside?

We use a layered approach. There’s a data foundation that keeps expanding — résumés, job descriptions, client feedback, recruiter notes, previous hiring outcomes, and a lot of company-level intelligence we continuously enrich.

On top of that, we run models that break down each job into fundamental criteria like responsibilities, scope, seniority, technical skills, domain context, and so on. For each of those criteria, we fine-tune AI components that score candidates separately. These scores then feed into a ranking pipeline that produces an overall match score with explanations.

Over time, the system keeps improving: as recruiters accept or reject profiles, the model adjusts. After several iterations, it starts to understand the nuances of a particular role or a recruiter’s preferences.

So it’s not one model — it’s an ecosystem of models, each doing a specific part of the reasoning.
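The pipeline Roman outlines — per-criterion scorers feeding a weighted ranking with explanations — can be sketched roughly as below. The criteria, weights, and rule-based scorers are simplified stand-ins; in the described system each scorer would be a fine-tuned AI component:

```python
# Illustrative per-criterion scoring pipeline. Criteria names, weights,
# and the toy scorers are assumptions for the sketch.

CRITERIA_WEIGHTS = {
    "technical_skills": 0.4,
    "seniority": 0.3,
    "domain_context": 0.3,
}

def score_skills(cand, job):
    required = set(job["skills"])
    return len(required & set(cand["skills"])) / len(required) if required else 1.0

def score_seniority(cand, job):
    return min(1.0, cand["years"] / job["min_years"]) if job["min_years"] else 1.0

def score_domain(cand, job):
    return 1.0 if job["domain"] in cand["domains"] else 0.0

SCORERS = {
    "technical_skills": score_skills,
    "seniority": score_seniority,
    "domain_context": score_domain,
}

def match(cand, job):
    """Overall match score plus per-criterion explanations."""
    parts = {name: fn(cand, job) for name, fn in SCORERS.items()}
    overall = sum(CRITERIA_WEIGHTS[n] * s for n, s in parts.items())
    return {"overall": round(overall, 3), "explanations": parts}
```

The feedback loop Roman mentions would correspond to adjusting the weights (and the scorers themselves) as recruiters accept or reject ranked profiles.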

You have a PhD in a very technical field, and your academic research focused on graph theory and complex systems. How did that background help you build an AI-driven recruiting system?

It helped more than I expected — graph theory turns out to be surprisingly relevant to hiring. A hiring ecosystem can naturally be viewed as a graph: a candidate connected to skills, skills connected to technologies, technologies connected to companies, and companies connected to industries. When you miss one part, the structure around it often tells you what’s likely true.

So in a funny way, the foundations I worked on during my PhD ended up mapping very naturally onto how we think about the hiring ecosystem today. It gave me a way of seeing hiring not as a static résumé-plus-job-description process, but as a dynamic network of signals you can analyze, reconstruct, and understand at scale.
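The “hiring ecosystem as a graph” idea can be made concrete with a minimal sketch: nodes for candidates, companies, technologies, and skills, edges for known relationships, and inference as reachability over a few hops. The node names and the simple hop-based rule are illustrative assumptions:

```python
# Minimal sketch of hiring as a graph. Node names and the hop-based
# inference rule are illustrative, not a real production model.

from collections import defaultdict

edges = [
    ("candidate:ana", "company:acme"),
    ("company:acme", "tech:react"),
    ("tech:react", "skill:javascript"),
]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)  # undirected: store both directions
    graph[b].add(a)

def likely_attributes(node, hops=2):
    """Nodes reachable within `hops` steps: even when an attribute isn't
    stated on the resume, the surrounding structure suggests it."""
    frontier, seen = {node}, {node}
    for _ in range(hops):
        frontier = {n for f in frontier for n in graph[f]} - seen
        seen |= frontier
    return seen - {node}
```

Here, even though the candidate never listed React, two hops through her company connect her to it; a third hop would suggest JavaScript. Real systems would weight such inferences by edge confidence rather than treating all hops equally.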

When we talk about evaluating people with AI, fairness becomes a very important concern. How do you ensure the system doesn’t introduce bias?

We take a very strict approach. The models never see information that could trigger bias — no names, no photos, no gender markers, no age hints, no addresses, no dates that could imply age. All of that is removed before the data reaches any scoring model.

The system looks only at professional criteria: skills, responsibilities, level of ownership, technologies, scope of past roles. This already removes much of the bias that exists in traditional hiring. Humans, even unintentionally, can be influenced by irrelevant signals. AI can be structurally prevented from seeing them.

We also run fairness evaluations regularly. If we ever see differences in outcomes between groups with equivalent professional profiles, we retrain with constraints. Fairness isn’t a marketing line for us — it’s an operational requirement.
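The two safeguards described here — stripping bias-triggering fields before scoring, and monitoring outcome gaps between groups with equivalent profiles — can be sketched as follows. The blocked field names and the gap metric are assumptions for illustration, not Raised AI’s actual implementation:

```python
# Hedged sketch of the safeguards described above. Field names and the
# gap metric are assumptions, not the production system.

BLOCKED_FIELDS = {"name", "photo", "gender", "age", "address", "birth_date"}

def redact(profile: dict) -> dict:
    """Drop every field that could trigger bias before any scoring model
    sees the data; only professional criteria survive."""
    return {k: v for k, v in profile.items() if k not in BLOCKED_FIELDS}

def outcome_gap(scores_by_group: dict) -> float:
    """Largest difference in mean match score between any two groups with
    equivalent professional profiles. A persistent gap would trigger
    retraining with fairness constraints."""
    means = [sum(s) / len(s) for s in scores_by_group.values()]
    return max(means) - min(means)

clean = redact({"name": "Ana", "skills": ["python"], "age": 34})
# `clean` now contains only the skills field.
```

Structurally preventing a model from seeing a field is a stronger guarantee than asking it to ignore one, which is why the redaction happens before the data reaches any scoring component.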

What makes Raised AI’s data foundation unique?

A lot of AI tools rely solely on public information. That’s useful, but shallow. We combine public data with a very large proprietary layer: historical placements, recruiter decisions, client feedback, interaction patterns, anonymized communication data, interview summaries, and the outcomes of past hiring cycles.

This gives the system a much deeper understanding of what “success” looks like in different contexts. Over time, the model learns not just to identify skilled candidates — but to identify candidates who thrive in certain environments. That’s something you only get when you close the loop between matching, outcomes, and learning.

And beyond matching, does Raised AI also automate operational tasks?

Yes, we built a communication layer that adapts messaging to candidates, drafts outreach, creates follow-ups, summarizes meetings, and even schedules interviews automatically when availability is confirmed. The goal is to let recruiters focus on judgment and relationships, not repetitive tasks.

You can think of it as: the AI handles the execution; the human handles the decisions.

What’s your broader vision for all this? What direction do you see your company going?

The long-term goal isn’t only to automate tasks or make hiring faster. It’s to fundamentally raise the level of intelligence in the process: to make it more informed, more fair, more context-aware, and far more precise than it has ever been.

Hiring today is still dominated by intuition. And intuition is valuable — but it shouldn’t carry the entire weight of the decision. My aim isn’t to replace that human judgment, but to support it with an AI layer that understands real-world context, connects all the hidden dots, and brings structure to something that has historically been very unstructured.

Ultimately, I see this platform becoming a leading hiring engine for companies — a core layer in how organizations understand talent, make decisions, and build teams. We’re trying to define what recruiting should look like in the AI era: efficient, fair, deeply informed, and built around humans doing the things only humans can do. If we can pave that path, I think we can help reshape how hiring works globally.
