Designing Job Search Tools That Work for Neurodivergent Candidates (and Everyone Else)

AI is transforming the job search, but without intentional design, it can just as easily raise barriers as remove them. If you want to attract – and fairly evaluate – neurodivergent talent, the mandate is simple: teach people how to use AI well, make the journey accessible, and build trust at every step.

At the HR Technology Conference & Exposition I attended a session run by my friend Crystal Lay, MBA MScIOP – an award-winning Global Employer Brand & I/O psychology executive – in which she talked through her research and shared what AI reveals about hiring, bias and belonging. She highlighted a study with over 450 participants. Key findings included men’s higher confidence in technology, women’s tendency to underestimate their own skills, and familiarity mattering more than gender in AI adoption. Neurodivergent individuals, particularly women, showed higher AI usage and more developed AI skills.

Here are my main takeaways:

Start with skills, not slogans

Many candidates already use AI multiple times a day. Help them to use it well. Publish a plain-language “AI starter kit” on your careers site with:

  • Prompt guides for common tasks (CV tailoring, cover letters, portfolio curation, transcript summaries)
  • When/why to use AI for each role type and task
  • Advice and guidance on how to verify facts and use personal evidence
  • Personalisation tips (always begin with your own information, achievements, and voice)

When we show candidates how to work with AI – not like Google, but like a conversational partner – we lift the quality for everyone and reduce anxiety for people who benefit from structure and scaffolding.

Design for neurodivergent accessibility

Language and layout matter. Use conversational copy, clear headings, white space, and short blocks that are easy to scan. Offer a choice of modes for high-stress steps:

  • If AI video interviews feel impersonal or confusing, provide equivalent alternatives: typed responses, audio-only, or off-camera options.
  • For screening tasks, let candidates choose between written or recorded submissions.

Accessibility isn’t about removing standards; it’s about providing multiple, comparable paths to demonstrate the same capability.

Put psychological safety on the record

Trust is earned, so state – explicitly – where ethical AI assistance is allowed (and where it isn’t), and describe your own use: how your teams rely on AI, how you review outputs, where humans stay accountable. Then maintain regular transparency updates: what bias tests you ran, what you found, and what you changed. When candidates see you’re on top of risk, they’re more willing to engage honestly.

Use AI where it helps – not everywhere

Not every step needs a bot. Prioritise bias-tested tools that add value at the right moment (e.g. prompt helpers embedded in the application form). Be cautious with practices candidates commonly flag as alienating – like automated video interviews – and make sure there’s a true opt-in alternative.

Fix the plumbing or don’t ship the chatbot

Your chatbot is only as good as the content you feed it. If your careers site or your knowledge base is thin, the bot will guess – and candidates will lose trust. Invest in a robust content layer (policies, FAQs, job frameworks) before you turn on AI. Screen vendor tools against your content footprint and accessibility requirements.

Co-design, don’t guess

Build with neurodivergent job seekers and other marginalised groups. Run iterative tests with mixed methods: qualitative sessions to hear what works and why, plus quantitative surveys to see patterns at scale. Test over time – accessibility is about repeatable ease, not a one-off demo.

Handle AI-written CVs thoughtfully

Yes, AI is in many applications. Treat detection signals as conversation starters, not auto-rejects – especially when writing isn’t the job’s core competency. For roles where original writing matters, say so clearly in the posting and request a supervised work sample. Blanket bans will disproportionately harm neurodivergent candidates for whom AI is a vital organisational support.

Keep supporting after day one

Onboarding is where equity becomes habit. Provide short trainings on ethical AI use, team norms, and verification practices. Create clear routes for reporting data or bias issues – frontline employees will spot problems faster than pre-scheduled audits – and close the loop with visible updates.