
Fairness by design – before AI sees the job posting


The lesson from monitoring

Rima's team now has five indicators on the dashboard and resolves drift incidents quickly. Yet she notices that each outlier can often be traced back to a source deeper in the chain: the text of the job posting, the selection questions, or the interview script. Bias creeps in long before a model starts calculating. If you want real stability in your metrics, you need to prevent errors before technology amplifies them. That's called fairness by design.

<Image src="/blog/images/posts/fairness-by-design-before-ai-sees-the-job-posting/foto1.png" alt="Fairness by design prevents bias before AI is used in recruitment processes" width={800} height={450} quality={80} priority={true} sizes="(max-width: 768px) 100vw, 800px" />

The daily toolkit, but through the lens of the AI Act

Consider the tools that an average HR or recruitment team can't do without:

| AI tool | Functionality | Risk under AI Act |
| --- | --- | --- |
| CV parser | Labels and ranks incoming resumes in the ATS | High |
| Chatbot | Checks basic requirements, rejects or schedules interviews | High |
| Video analysis platform | Analyzes language, facial expressions and voice during job interviews | High |
| Assessment tool | Builds personality profiles through game-based tests | High |
| Internal mobility module | Predicts which employees are ready for promotion | High |
| Social media insights tool | Identifies when potential candidates are approachable | High |
| Skill cloud | Advises career paths based on skills in the HR system | High |
| Reference software | Automatically checks references and generates scoring reports | High |

All these tools decide, directly or indirectly, on access to employment. That places them in the "high-risk" category under the AI Act. Anyone who only starts fixing issues after these tools have rendered their judgment is constantly playing catch-up.

The job description as first line of defense

Rima starts with something seemingly simple: the words in the job posting. Research shows that terms like "rockstar" or "tiger" attract more male applicants, while "heavy lifting" might discourage female candidates in logistics. Rima now sends every text through a language module that analyzes only the tone. No demographic prediction, just a notification for stereotypical language. The text becomes more neutral, the influx automatically more diverse, even before the CV parser gets involved.
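
A minimal sketch of such a tone check, assuming a hand-curated term list; the terms, categories, and function name below are illustrative, not the actual lexicon of Rima's language module:

```python
import re

# Illustrative lexicon: stereotypically coded terms and why they are flagged.
# A production module would use a far larger, maintained word list.
FLAGGED_TERMS = {
    "rockstar": "masculine-coded, competitive framing",
    "tiger": "masculine-coded, competitive framing",
    "heavy lifting": "may discourage candidates who would use technical aids",
}

def scan_posting(text: str) -> list[tuple[str, str]]:
    """Return (term, reason) pairs for every flagged term found in the posting."""
    lowered = text.lower()
    return [
        (term, reason)
        for term, reason in FLAGGED_TERMS.items()
        if re.search(rf"\b{re.escape(term)}\b", lowered)
    ]

posting = "We are looking for a rockstar who is not afraid of heavy lifting."
for term, reason in scan_posting(posting):
    print(f"Flagged '{term}': {reason}")
```

Note that the check reads only the text itself: no demographic prediction, exactly as Rima's setup requires.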

<Image src="/blog/images/posts/fairness-by-design-before-ai-sees-the-job-posting/foto2.png" alt="Neutral job descriptions ensure diverse candidate flow" width={800} height={450} quality={75} loading="lazy" sizes="(max-width: 768px) 100vw, 800px" />

Screening questions that don't discriminate

The knockout question is next on the agenda. The chatbot asks whether a candidate has a work permit. Previously, a "no" meant immediate rejection. Now a second question follows: "Can you obtain a permit within six months?", with additional explanation where needed. The tool remains automated, but a conscious choice prevents qualified candidates from disappearing too early. It simultaneously meets the AI Act requirements for human oversight and transparency.
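
A minimal sketch of that two-step flow, assuming a plain function interface; the actual chatbot integration is not described here, and the return values are illustrative states for the ATS to act on:

```python
def screen_work_permit(
    has_permit: bool,
    can_obtain_within_six_months: bool | None = None,
) -> str:
    """Two-step knockout: a "no" triggers a follow-up instead of rejection."""
    if has_permit:
        return "proceed"
    if can_obtain_within_six_months is None:
        # Instead of rejecting, the chatbot asks the follow-up question.
        return "ask_follow_up"
    if can_obtain_within_six_months:
        return "proceed_with_note"  # flag for the recruiter, don't drop
    return "reject_with_explanation"

print(screen_work_permit(has_permit=False))  # -> ask_follow_up
print(screen_work_permit(False, True))       # -> proceed_with_note
```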

Video analysis on a data diet

The video analysis platform delivers a monthly test report. Rima now asks for one extra column: feature importance. She wants to know exactly what weight facial expressions, voice, and word choice receive. If voice intonation suddenly weighs thirty percent, the update goes back to the sandbox until it's clear whether that change is truly relevant. This way, new bias doesn't quietly sneak in.
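
The check itself is small. A sketch, assuming the vendor's report can be read as a feature-to-weight mapping; the ten-percentage-point tolerance and the weights are illustrative choices, not prescribed thresholds:

```python
TOLERANCE = 0.10  # maximum allowed shift in weight per feature, illustrative

def review_update(previous: dict[str, float], reported: dict[str, float]) -> list[str]:
    """Return the features whose weight shifted more than TOLERANCE."""
    return [
        feature
        for feature, weight in reported.items()
        if abs(weight - previous.get(feature, 0.0)) > TOLERANCE
    ]

# Weights from the last approved release versus the vendor's new report.
previous = {"facial_expression": 0.40, "voice": 0.15, "word_choice": 0.45}
reported = {"facial_expression": 0.35, "voice": 0.30, "word_choice": 0.35}

flagged = review_update(previous, reported)
if flagged:
    print(f"Update goes back to the sandbox; review needed for: {flagged}")
```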

<Image src="/blog/images/posts/fairness-by-design-before-ai-sees-the-job-posting/foto3.png" alt="Feature importance analysis of video interviews prevents unnoticed bias" width={800} height={450} quality={75} loading="lazy" sizes="(max-width: 768px) 100vw, 800px" />

Color codes instead of an automatic "no"

Rima's internal mobility module ranks colleagues for promotion. Automatic "not suitable" labels are a thing of the past. Instead, candidates receive a traffic-light color: green can proceed; orange or red requires a recruiter's review and a short motivation line. That single line lands in the logbook and later serves as training data. This way, human judgment is explicitly recorded.
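
A sketch of how that logbook rule could be enforced, with an in-memory list standing in for the real log store; names and fields are illustrative:

```python
from datetime import datetime, timezone

logbook: list[dict] = []  # stand-in for a persistent audit log

def record_decision(candidate: str, color: str, motivation: str | None = None) -> None:
    """Log a traffic-light decision; orange and red require a motivation line."""
    if color in ("orange", "red") and not motivation:
        raise ValueError(f"A {color!r} marking requires a recruiter motivation line")
    logbook.append({
        "candidate": candidate,
        "color": color,
        "motivation": motivation,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_decision("candidate-117", "green")
record_decision("candidate-118", "orange", "Relevant experience, lacks certification")
record_decision("candidate-119", "red")  # raises: no motivation line given
```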

A quick fairness scan for new tools

When IT offers a new ATS package with smart plug-ins, Rima has three questions ready:

| Fairness question | Purpose | Result |
| --- | --- | --- |
| Does this system decide on access to work? | Identifying high-risk AI systems according to the AI Act | Apply appropriate compliance requirements |
| Which data fields does the model use? | Detecting indirect proxies for protected characteristics | Postal codes and hobbies are potential red flags |
| Can the vendor demonstrate how bias is detected? | Validating quality assurance at the vendor | Assess the tool's reliability |

Without satisfactory answers, the package doesn't make it through the gate.
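
Made explicit in code, the gate is nothing more than an all-or-nothing check over the three questions. A sketch, assuming each question maps to whether a documented, satisfactory answer is on file; the structure and names are illustrative:

```python
FAIRNESS_QUESTIONS = (
    "Does this system decide on access to work?",
    "Which data fields does the model use?",
    "Can the vendor demonstrate how bias is detected?",
)

def passes_gate(documented_answers: dict[str, bool]) -> bool:
    """A new tool passes only if every question has a satisfactory answer."""
    return all(documented_answers.get(q, False) for q in FAIRNESS_QUESTIONS)

answers = {
    "Does this system decide on access to work?": True,  # yes, documented as high-risk
    "Which data fields does the model use?": True,        # field list reviewed, no proxies
    # Third question unanswered: the package stays outside the gate.
}
print(passes_gate(answers))  # False
```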

Rima's fairness diary

Friday afternoon, Rima opens her laptop and pulls up the Notion file where she keeps her weekly findings. In her dashboard, she immediately sees the impact that four targeted adjustments have made.

"Rewrote job description for the warehouse," she reads aloud while reviewing her notes. The figures alongside speak for themselves: eight percent more female candidates responded to the modified text. By replacing sentences like "able to lift 25 kilos" with "uses technical aids to move goods," not only did the tone change, but so did the applicant flow. The recruiters had barely noticed the change, but the system did.

Her second note concerns the chatbot adjustment. Seventeen additional candidates made it to the longlist. Candidates who were previously automatically rejected because they didn't yet have a work permit were now offered a follow-up route: "Can you obtain a permit within six months?" This small change resulted in several valuable IT profiles that would otherwise never have been considered.

For the video analysis platform, the warning signals Rima built in proved their worth. The vendor reported last week that the weight of voice intonation in the algorithm had been increased to almost thirty percent. Rima reduced this back to fifteen percent, which noticeably narrowed the difference in video scores between male and female candidates. "Interesting," she notes, "how small model adjustments have such a big influence on who proceeds."

She's most proud of the forty-two motivation lines that recruiters added this week. Instead of the internal mobility system automatically rejecting candidates, recruiters now need to provide a brief explanation when someone receives an 'orange' or 'red' marking. This human context proves invaluable: "Sarah has relevant experience but lacks certification" or "Jayden's profile better suits the Finance team." These notes not only feed the system with training data but also make decisions more transparent.

The figures tie back to the indicators from part 3. The overrule percentage has dropped from 35% to 22%: recruiters trust the system more because it is better calibrated. The candidate NPS has risen to 8.4, partly because rejected candidates now have a clearer picture of why they didn't proceed. Fairness by design proves tangible and measurable.

<Image src="/blog/images/posts/fairness-by-design-before-ai-sees-the-job-posting/foto6.png" alt="Measurable results of fairness by design in the recruitment process" width={800} height={450} quality={75} loading="lazy" sizes="(max-width: 768px) 100vw, 800px" />

Less work than it seems

"We don't have a data team" was also Rima's first thought. She now reserves one afternoon per week for fairness, gives each recruiter two extra minutes for the motivation line, and discusses findings in regular meetings. The benefits – fewer fires to put out, fewer angry candidates, faster audits – far outweigh the scheduled hours.

Looking ahead

Preventing bias is cheaper than fixing bias. Next week in part 5: how do you convince board members and budget holders that investing in AI literacy, monitoring, and fairness design is not only ethically smart but delivers solid returns?


Embed AI scans your tool stack for AI risks, rewrites job descriptions with neutral language, and translates vendor reports into clear actions. Want to know more? Send a message to info@embed.ai