Is the Algorithm Fairer Than the Manager? The Equity Challenge

December 22, 2025

Equity and diversity are stated goals for almost all large organizations. Yet, on the ground, progress is slow. Internal promotions and project assignments often continue to favor the same profiles.

Why this gap between intention and reality? The answer lies less in ill will than in the cognitive shortcuts of the human brain.

Paradoxically, technology could be the best way to reintroduce equity. But let's be clear: the algorithm does not make the moral decision. It makes the pre-selection process more consistent and traceable. By delegating sorting to a rules engine, the organization protects itself against its own unconscious reflexes.

The Problem: Affinity Bias

The main enemy of diversity is affinity bias. Faced with a choice, a manager naturally tends to favor a person they get along well with or who shares the same social codes.

This reflex minimizes perceived risk but has two consequences:

  1. Homogeneity: Recruiting clones impoverishes group creativity.

  2. Invisibility: Competent but atypical profiles fly under the radar.

The Machine's Asset: Filtering by Criteria

A matching algorithm is a strict rules engine. It possesses a quality that humans do not: it does not get distracted by noise.

If the algorithm is configured to match based on three criteria (technical skill, availability, risk appetite), it will consider only these three criteria. It can be configured to ignore gender, age, or charisma.

A pedagogical example:

Out of 40 eligible profiles for a project, a manager might shortlist 5 by reflex (people they know). The engine proposes 10 based on the grid, including 4 "invisible" profiles that are perfectly aligned but outside the manager's network.
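
To make the mechanics concrete, here is a minimal sketch of such a rules engine in Python. The field names (skill_score, available_days, risk_appetite) and the weights are illustrative assumptions, not a description of any particular product:

```python
# Minimal sketch of a criteria-only matching engine (hypothetical fields and weights).
# Each profile is scored on the three declared criteria and on nothing else:
# anything the grid does not declare never enters the score.

from typing import TypedDict


class Profile(TypedDict):
    name: str             # kept for display only, never used for scoring
    skill_score: float    # 0.0-1.0, technical fit for the project
    available_days: int   # days available during the project window
    risk_appetite: float  # 0.0-1.0, declared or assessed appetite for risk


def match_score(profile: Profile, required_days: int) -> float:
    """Score a profile using only the allowed criteria (illustrative weights)."""
    availability = min(profile["available_days"] / required_days, 1.0)
    return round(
        0.5 * profile["skill_score"]
        + 0.3 * availability
        + 0.2 * profile["risk_appetite"],
        3,
    )


def shortlist(profiles: list[Profile], required_days: int, top_n: int = 10) -> list[Profile]:
    """Rank all eligible profiles on the criteria grid and keep the top_n."""
    ranked = sorted(profiles, key=lambda p: match_score(p, required_days), reverse=True)
    return ranked[:top_n]
```

The ranking cannot favor a familiar face for the simple reason that the scoring function never reads anything beyond the three declared criteria.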

The Critical Nuance: Beware of Proxies

However, saying the algorithm is "blind" is not enough. We must be wary of proxy variables—data points that indirectly betray a protected characteristic (e.g., a zip code indicating social origin, or a resume gap indicating parental leave).

Equity does not come from the machine itself, but from the vigilance in defining criteria. The tool acts as a mirror: it forces decision-makers to be explicit about what they are looking for.
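
One way to put that vigilance into practice is a rough pre-flight check, sketched below under the assumption that a reference population with both candidate data and a protected attribute is available (zip_code and group are hypothetical field names): any allowed variable whose values are dominated by a single group deserves scrutiny.

```python
# Rough proxy check (illustrative only): flag values of an "allowed" variable
# whose population is heavily dominated by a single protected group.
from collections import Counter, defaultdict


def proxy_skew(records: list[dict], feature: str, protected: str,
               threshold: float = 0.8) -> dict[str, float]:
    """Return {feature_value: dominant_group_share} for values at or above threshold."""
    groups_by_value: dict[str, Counter] = defaultdict(Counter)
    for record in records:
        groups_by_value[record[feature]][record[protected]] += 1

    flagged = {}
    for value, counts in groups_by_value.items():
        dominant_share = max(counts.values()) / sum(counts.values())
        if dominant_share >= threshold:
            flagged[value] = round(dominant_share, 2)
    return flagged


# If most candidates sharing a zip code belong to one group, letting the
# engine filter on zip code quietly reintroduces that protected trait.
population = [
    {"zip_code": "75019", "group": "A"},
    {"zip_code": "75019", "group": "A"},
    {"zip_code": "75016", "group": "B"},
    {"zip_code": "75016", "group": "A"},
]
print(proxy_skew(population, feature="zip_code", protected="group"))
# -> {'75019': 1.0}
```

Flagged values are natural candidates for the denylist described in the safeguards below.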

4 Concrete Safeguards

To ensure the algorithm remains a tool for equity, four operational safeguards must be in place:

  1. Explicit Criteria: Define an allowlist (skills, availability) and a denylist (protected traits and known proxies).

  2. Regular Audits: Check whether the output systematically excludes a specific population (e.g., by analyzing selection rates at equal skill levels); see the sketch after this list.

  3. Documented Override: When a human rejects an algorithmic recommendation, they must note the reason. This creates accountability and makes bias detectable.

  4. Historical Testing: Replay the rules on past data to see whether they reproduce history or correct it; the audit sketched below can be run on that output.
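
For safeguards 2 and 4, here is a minimal audit sketch. The record structure (a group label, a skill band, and a selected flag) is a simplifying assumption; the idea is to compare selection rates per group at equal skill level, and to replay the same check on past data.

```python
# Minimal selection-rate audit (safeguards 2 and 4). Assumed record structure:
# each record carries a protected "group", a "skill_band" used to compare like
# with like, and a boolean "selected".
from collections import defaultdict


def selection_rates(records: list[dict]) -> dict[tuple[str, str], float]:
    """Selection rate per (skill_band, group), so comparisons stay at equal skill."""
    selected: dict[tuple[str, str], int] = defaultdict(int)
    total: dict[tuple[str, str], int] = defaultdict(int)
    for record in records:
        key = (record["skill_band"], record["group"])
        total[key] += 1
        selected[key] += int(record["selected"])
    return {key: round(selected[key] / total[key], 2) for key in total}


def flag_gaps(rates: dict[tuple[str, str], float], min_ratio: float = 0.8) -> list[str]:
    """Within each skill band, flag groups selected well below the best-served group."""
    by_band: dict[str, dict[str, float]] = defaultdict(dict)
    for (band, group), rate in rates.items():
        by_band[band][group] = rate

    warnings = []
    for band, group_rates in by_band.items():
        best = max(group_rates.values())
        for group, rate in group_rates.items():
            if best > 0 and rate / best < min_ratio:
                warnings.append(f"band {band}: group {group} selected at {rate:.0%} vs {best:.0%}")
    return warnings


# Safeguard 4: run the same audit twice -- once on past assignments, once on the
# engine's simulated output for the same pool -- and compare the warnings.
```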

Conclusion

The algorithm is not meant to replace the manager in the final decision. The human choice remains indispensable to validate nuance and context.

However, using a matching engine for the pre-selection phase strips the most common cognitive biases out of the process. The tool acts as a neutral filter: it guarantees that everyone has been evaluated on the same grid, giving atypical profiles an equal chance to rise to the surface.

Don't let chance form your groups.

Training, recruitment, or project management: Harmate transforms individual responses into high-performing collectives to reveal true synergies.

Discover Harmate