Designing Job Search Tools That Work for Neurodivergent Candidates (and Everyone Else)

AI is transforming the job search, but without intentional design, it can just as easily raise barriers as remove them. If you want to attract – and fairly evaluate – neurodivergent talent, the mandate is simple: teach people how to use AI well, make the journey accessible, and build trust at every step.

At the HR Technology Conference & Exposition I attended a session run by my friend Crystal Lay, MBA MScIOP – an award-winning Global Employer Brand & I/O psychology executive – in which she talked through her research and shared what AI reveals about hiring, bias and belonging. She highlighted a study with over 450 participants. Key findings included men’s higher confidence in technology, women’s underestimation of their own skills, and the importance of familiarity (rather than gender) in AI adoption. Neurodivergent individuals, particularly women, showed higher AI usage and more developed AI skills.

Here are my main takeaways:

Start with skills, not slogans

Many candidates already use AI multiple times a day. Help them to use it well. Publish a plain-language “AI starter kit” on your careers site with:

  • Prompt guides for common tasks (CV tailoring, cover letters, portfolio curation, transcript summaries)
  • When/why to use AI for each role type and task
  • Advice and guidance on how to verify facts and use personal evidence
  • Personalisation tips (always begin with your own information, achievements, and voice)

When we show candidates how to work with AI – not like Google, but like a conversational partner – we lift the quality for everyone and reduce anxiety for people who benefit from structure and scaffolding.

Designing for neurodivergent accessibility

Language and layout matter. Use conversational copy, clear headings, white space, and short blocks that are easy to scan. Offer a choice of modalities for high-stress steps:

  • If AI video interviews feel impersonal or confusing, provide equivalent alternatives: typed responses, audio-only, or off-camera options.
  • For screening tasks, let candidates choose between written or recorded submissions.

Accessibility isn’t about removing standards; it’s about providing multiple, comparable paths to demonstrate the same capability.

Put psychological safety on the record

Trust is earned, so state – explicitly – where ethical AI assistance is allowed (and where it isn’t), and describe your own use: how your teams rely on AI, how you review outputs, where humans stay accountable. Then maintain regular transparency updates: what bias tests you ran, what you found, and what you changed. When candidates see you’re on top of risk, they’re more willing to engage honestly.

Use AI where it helps – not everywhere

Not every step needs a bot. Prioritise bias-tested tools that add value at the right moment (e.g. prompt helpers embedded in the application form). Be cautious with practices candidates commonly flag as alienating – like automated video interviews – and make sure there’s a true opt-in alternative.

Fix the plumbing or don’t ship the chatbot

Your chatbot is only as good as the content you feed it. If your careers site or your knowledge base is thin, the bot will guess – and candidates will lose trust. Invest in a robust content layer (policies, FAQs, job frameworks) before you turn on AI. Screen vendor tools against your content footprint and accessibility requirements.

Co-design, don’t guess

Build with neurodivergent job seekers and other marginalised groups. Run iterative tests with mixed methods: qualitative sessions to hear what works and why, plus quantitative surveys to see patterns at scale. Test over time – accessibility is about repeatable ease, not a one-off demo.

Handle AI-written CVs thoughtfully

Yes, AI is in many applications. Treat detection signals as conversation starters, not auto-rejects – especially when writing isn’t the job’s core competency. For roles where original writing matters, be clear in the posting and request a supervised work sample. Blanket bans will disproportionately harm neurodivergent candidates for whom AI is a vital organisational support.

Keep supporting after day one

Onboarding is where equity becomes habit. Provide short trainings on ethical AI use, team norms, and verification practices. Create clear routes for reporting data or bias issues – frontline employees will spot problems faster than pre-scheduled audits – and close the loop with updates.

The AI Effect on Entry-Level Jobs and Career Progression

“Using ChatGPT might make you stupid.” That bold statement – based on a study – appeared on a number of news sites and in business journals recently. The coverage was accompanied by brain scan images suggesting that AI erodes critical thinking.

It’s the kind of story guaranteed to spark outrage – particularly among older generations who see technology as a shortcut rather than a skill. Needless to say, it was a topic ripe for discussion between me and Danielle Farage on our #FromXtoZ podcast!

And – also needless to say – the truth is far more complex, raising bigger questions about how AI is reshaping not just how we work, but how we learn and progress in our careers.

The Disappearing Entry-Level Job

For decades, entry-level jobs were designed around repetitive, and often quite menial, tasks. Interns summarised files, created reports, and performed groundwork that provided valuable context and an understanding of how things fit together. While boring at times, those tasks were the building blocks for developing judgment and critical thinking. They helped you learn how to spot patterns, understand stakeholders, and prepare for more senior responsibilities.

Today, those very tasks are being done by AI in seconds. Need a summary? ChatGPT delivers one instantly. Need a cover letter? AI can generate multiple versions faster than you can type your name. For employers, this is a productivity boost. For graduates, juniors and interns, it means fewer “easy” tasks to start with – and potentially fewer opportunities to learn by doing.

Learning Gaps and Lost Context

One of the risks we talked about is that when AI handles entry-level tasks, people may lose valuable context. The act of digging through files, for example, could teach you how information is structured, help you learn what’s important, and show you why things are done a certain way.

Without these experiences, new hires may have less foundational knowledge – and therefore slower long-term development – which echoes a common complaint among Gen Z workers: either they have little to do, or they are immediately thrown into complex tasks without the grounding that entry-level work used to provide.

That jump can accelerate learning for some, but for others, it can create stress and lead to potential skill gaps.

Shifting Skill Priorities

If AI can handle repetitive tasks, what skills will matter more?

Soft skills are rapidly rising to the top of the list – communication, collaboration, creativity, and emotional intelligence. Critical thinking is still essential, but it may shift away from basic data gathering and toward making strategic connections and asking better questions.

For example, instead of summarising a document, a junior analyst might now be expected to analyse AI’s summary and identify what’s missing or misleading. Instead of drafting a cover letter from scratch, they might focus on personalising and contextualising AI’s output in a way that resonates with the employer.

Changing Brains, Changing Learning

Our conversation also touched on how our brains – and our learning habits – are changing. Gen Z (and Gen Alpha) have been exposed to technology and gamified learning from childhood, so they have different cognitive expectations. Tasks requiring deep focus and delayed gratification (like writing reports or doing long-form research) can feel more challenging when our brains are wired for quick dopamine hits from apps, games, and social media.

This is more than just a workplace issue; it’s a societal one. As technology accelerates, how we teach, train, and even design work needs to adapt to different cognitive baselines. Should we be worried about critical thinking decline? Or should we embrace the fact that tools like ChatGPT free up mental energy for deeper and more analytical thinking? The answer likely depends on how organisations and educators adapt.

Rethinking Entry-Level Work

The old career ladder was built on predictable steps: you start with basic tasks, learn the ropes, then climb upward as you gain experience. AI is dismantling some of those steps. That’s not necessarily bad – many interns now handle complex projects far earlier in their careers than previous generations ever did – but it requires intentional design. Employers need to:

  • Redefine entry-level roles to focus on applied problem-solving, creativity, and human interaction.
  • Provide context in new ways – mentorship, job shadowing, and structured learning can fill gaps left by disappearing grunt work.
  • Invest in soft skill development as AI takes over technical routine tasks.

A Transitional Phase

Ultimately, we’re currently in a transitional phase. Entry-level jobs are not disappearing, but they are transforming. The work experience of someone starting out today looks nothing like it did even five years ago. That can feel unsettling, but it’s also an opportunity – to design jobs, education, and career pathways that prepare people not just to survive in an AI-driven workplace but to thrive.

The big question is not whether AI is making us “stupid” – it’s how we will redefine learning, working, and progression in a world where machines handle the basics and humans focus on what truly requires a human touch.

You can check out the full podcast conversation here: https://www.youtube.com/watch?v=cu6W-UqLj2Q


And let us know what you think in the comments…

A Potential Framework for Mitigating AI Bias in Talent Acquisition

In a recent newsletter I wrote about some of the takeaways from my interview with Heidi Barnett, President at isolved Talent Acquisition (formerly ApplicantPro), at the Unleash conference about the evolution of Talent Acquisition. The integration of AI and advanced analytics into candidate profiling presents both a tremendous opportunity and a significant risk. While these technologies can enhance efficiency and improve matching accuracy, they also have the potential to perpetuate or amplify existing biases in hiring practices.

In this – the second part of my interview with Heidi – I’m specifically looking at some of the ways in which TA professionals can proactively address these challenges.

Understanding the Sources of AI Bias

AI bias in TA typically stems from three primary sources: historical data, algorithmic design, and implementation choices. Historical hiring data often reflects discriminatory practices, unconscious biases, or systemic inequalities embedded in past recruitment decisions. When AI systems learn from this data, they can inadvertently replicate these patterns.

Algorithmic design bias can occur when the parameters and weightings built into AI systems favour certain demographic groups or characteristics. For example, if an algorithm heavily weights specific educational institutions or previous company experiences, it may systematically exclude qualified candidates from underrepresented backgrounds.

Implementation bias happens when organisations fail to properly configure, monitor, or maintain their AI systems. This can include using inappropriate data sets, failing to regularly oversee and audit decision outcomes, or not accounting for changing market conditions and organisational needs.

Establishing Frameworks for Bias Detection

TA professionals must take a more systematic approach to identifying bias before it impacts hiring decisions. Start by conducting regular audits of your AI system’s outputs and analysing hiring patterns across different demographic groups. This should help identify any statistical disparities in screening rates, interview invitations, and final hiring decisions.

Another way is to create baseline metrics that track diversity at each stage of the recruitment funnel, and then compare these metrics before and after AI implementation to help identify any trends that may give cause for concern. Pay particular attention to how multiple identity factors might compound bias effects.
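To make this concrete, here’s a minimal sketch of what such a funnel audit could look like in Python. Everything in it is an illustrative assumption – the record fields, the demographic grouping, and the four-fifths (80%) ratio, which is one common heuristic for flagging adverse impact – rather than a prescribed method:

```python
# Illustrative sketch: per-group selection rates at each funnel stage,
# flagged against the "four-fifths" heuristic. Record fields, groupings
# and the 0.8 threshold are assumptions, not a prescribed method.
from collections import defaultdict

FUNNEL_STAGES = ["screened_in", "interviewed", "offered"]

def selection_rates(candidates, stage):
    """Per-group selection rate for one funnel stage."""
    totals, passed = defaultdict(int), defaultdict(int)
    for candidate in candidates:
        group = candidate["demographic_group"]
        totals[group] += 1
        passed[group] += 1 if candidate[stage] else 0
    return {g: passed[g] / totals[g] for g in totals}

def flag_disparities(candidates, threshold=0.8):
    """Flag any stage where a group's rate falls below `threshold` times
    the highest group's rate (the four-fifths rule of thumb)."""
    flags = []
    for stage in FUNNEL_STAGES:
        rates = selection_rates(candidates, stage)
        best = max(rates.values())
        for group, rate in rates.items():
            if best > 0 and rate / best < threshold:
                flags.append((stage, group, round(rate / best, 2)))
    return flags

# Example records: each marks whether the candidate passed each stage.
candidates = [
    {"demographic_group": "A", "screened_in": True,  "interviewed": True,  "offered": True},
    {"demographic_group": "A", "screened_in": True,  "interviewed": False, "offered": False},
    {"demographic_group": "B", "screened_in": True,  "interviewed": False, "offered": False},
    {"demographic_group": "B", "screened_in": False, "interviewed": False, "offered": False},
]
print(flag_disparities(candidates))
# -> [('screened_in', 'B', 0.5), ('interviewed', 'B', 0.0), ('offered', 'B', 0.0)]
```

Even a rough tripwire like this turns “audit regularly” from an intention into a scheduled, repeatable check.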

It’s also key to establish feedback loops with hiring managers, candidates, and internal diversity teams to gather qualitative insights about any potential biases. Sometimes bias manifests in subtle ways that statistical analysis might miss, such as the language used in AI-generated communications or the types of questions prioritised in screening processes.

Implementing Technical Safeguards

It’s key to work with your technology vendors to understand how their algorithms function and what safeguards they’ve built in. Demand transparency about training data sources, algorithmic decision-making processes, and bias testing procedures. Reputable vendors should be able to provide detailed documentation about their bias mitigation efforts.

It’s also important to implement human oversight checkpoints at critical decision stages. While AI can handle initial screening efficiently, human reviewers should still be involved in final candidate selections. Train these reviewers to recognise potential bias indicators and provide them with diverse candidate profiles for consideration.

You can also consider using multiple AI tools or approaches for candidate evaluation, comparing results to identify potential bias blind spots. If different systems consistently exclude similar demographic groups, this may indicate systemic bias that requires investigation.
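As a rough illustration of that comparison, the sketch below checks whether two hypothetical screening tools both exclude the same groups at high rates – the decision format and the 50% cutoff are assumptions for the example, not any vendor’s real API:

```python
# Illustrative sketch: decisions are (demographic_group, passed) pairs;
# the interface and cutoff are assumptions, not any vendor's real API.
from collections import defaultdict

def exclusion_rates(decisions):
    """Per-group exclusion rate from a list of (group, passed) pairs."""
    totals, excluded = defaultdict(int), defaultdict(int)
    for group, passed in decisions:
        totals[group] += 1
        excluded[group] += 0 if passed else 1
    return {g: excluded[g] / totals[g] for g in totals}

def shared_blind_spots(tool_a_decisions, tool_b_decisions, cutoff=0.5):
    """Groups that BOTH tools exclude at or above `cutoff` -- agreement
    between independent systems here is a red flag, not reassurance."""
    rates_a = exclusion_rates(tool_a_decisions)
    rates_b = exclusion_rates(tool_b_decisions)
    return sorted(g for g in rates_a
                  if rates_a[g] >= cutoff and rates_b.get(g, 0) >= cutoff)

# Example: both tools screen out most of group "B" -- worth investigating.
tool_a = [("A", True), ("A", True), ("B", False), ("B", False)]
tool_b = [("A", True), ("A", False), ("B", False), ("B", True)]
print(shared_blind_spots(tool_a, tool_b))  # -> ['B']
```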

Building Inclusive Data Practices

Audit your historical hiring data before using it to train AI systems. Remove or adjust data points that reflect past discriminatory practices. This might include eliminating certain educational requirements that weren’t truly necessary for job success or adjusting for historical underrepresentation in specific roles.
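By way of illustration only, a simple proxy-feature check like the one below can help surface those data points for human review – the field names and the prevalence-gap threshold are assumptions for the sketch:

```python
# Illustrative sketch: flag binary features whose prevalence differs
# sharply between demographic groups, as possible proxies for protected
# characteristics. Field names and the `gap` threshold are assumptions.
from collections import defaultdict

def flag_proxy_features(records, features, group_key="demographic_group", gap=0.3):
    """Return features whose per-group prevalence differs by more than `gap`."""
    flagged = []
    for feature in features:
        totals, hits = defaultdict(int), defaultdict(int)
        for record in records:
            totals[record[group_key]] += 1
            hits[record[group_key]] += 1 if record[feature] else 0
        rates = [hits[g] / totals[g] for g in totals]
        if max(rates) - min(rates) > gap:
            flagged.append(feature)
    return flagged

# Example: "elite_university" tracks group membership almost perfectly,
# so it gets flagged for review before any model training.
records = [
    {"demographic_group": "A", "elite_university": True,  "certified": True},
    {"demographic_group": "A", "elite_university": True,  "certified": False},
    {"demographic_group": "B", "elite_university": False, "certified": True},
    {"demographic_group": "B", "elite_university": False, "certified": False},
]
print(flag_proxy_features(records, ["elite_university", "certified"]))
# -> ['elite_university']
```

A flag here isn’t proof of bias – it’s a prompt for a human to ask whether the feature was ever truly necessary for job success.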

Expand your data sources to include more diverse talent pools. If your historical data primarily reflects candidates from certain networks or sources, actively seek data from underrepresented communities, alternative education pathways, and non-traditional career backgrounds.

Regularly refresh your training data to reflect current market conditions and organisational values. AI systems trained on outdated data may not align with current diversity and inclusion goals or may miss emerging talent sources.

Creating Accountability Structures

Establish clear governance structures for AI bias monitoring and mitigation. Assign specific team members responsibility for conducting regular bias audits, and create procedures for addressing findings that give cause for concern. This accountability should extend to senior leadership, ensuring that bias mitigation receives appropriate organisational priority.

Document your bias mitigation efforts thoroughly. This documentation can serve multiple purposes: it demonstrates due diligence in legal contexts, provides learning opportunities for continuous improvement, and creates institutional knowledge that survives personnel changes.

Set specific, measurable goals for bias reduction and diversity improvement. Regularly track progress against these goals and adjust your approaches based on results. Consider tying these metrics to team performance evaluations and organisational success measures.

Continuous Learning and Adaptation

The landscape of AI bias is constantly evolving as technology advances and our understanding deepens. Stay current with research, best practices, and regulatory developments in AI ethics and employment law. Participate in industry forums and professional development opportunities focused on responsible AI implementation.

Regularly reassess bias mitigation strategies as your organisation grows and changes. What works for a small company may not scale effectively, and what’s appropriate for one industry may not apply to another. Be prepared to adapt your approaches based on new insights and changing circumstances.

Foster a culture of continuous improvement around bias mitigation. Encourage team members to raise concerns about potential bias and create safe spaces for discussing these sensitive topics. The most effective bias mitigation happens when entire teams are engaged and committed to the effort.

Moving Forward Responsibly

Addressing AI bias in talent acquisition isn’t a one-time project – it’s an ongoing commitment that requires vigilance, resources, and organisational support. The goal isn’t to eliminate all AI tools due to bias concerns, but rather to implement them responsibly with appropriate safeguards and oversight.

By taking proactive steps to understand, detect, and mitigate bias, TA professionals can harness the power of AI while maintaining fair and inclusive hiring practices. This balanced approach will ultimately lead to better hiring outcomes, stronger organisational diversity, and reduced legal and reputational risks.

The future of Talent Acquisition depends on our ability to leverage technology while preserving human values of fairness and inclusion.

Check out my full interview conversation with Heidi here: