Labor law: employment discrimination and artificial intelligence, beware of employers

KAREN MICHAEL, Special Correspondent

Employers continue to find new ways to automate business practices, including through programs that enable artificial intelligence (AI) in various employment methods, including recruitment.

These new technologies, while promising in many ways, have caught the attention of the Equal Employment Opportunity Commission, which launched its Artificial Intelligence and Algorithmic Fairness Initiative last year. The EEOC announced the initiative’s intended mission to “ensure that the use of software, including artificial intelligence (AI), machine learning, and other new technologies used in hiring and other employment decisions, complies with the federal civil rights laws that EEOC enforces.”

The EEOC announced: “Through this initiative, the EEOC will take a closer look at how existing and emerging technologies are fundamentally changing the way employment decisions are made. The goal of the initiative is to guide employers, employees, job seekers and vendors to ensure that these technologies are used fairly and in accordance with federal equal employment opportunity laws.”


In a clear sign of AI’s potential dangers, earlier this month the EEOC filed an age discrimination lawsuit against iTutorGroup after determining that the company had programmed its online software to automatically reject more than 200 older applicants who met the job requirements.

The three iTutorGroup companies provide English-language tutoring services to students in China. According to the lawsuit, in 2020 the companies programmed their tutor application software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older.

If proven, that practice would violate the Age Discrimination in Employment Act, which protects job seekers and employees aged 40 and over.
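To see how simply such discrimination can be encoded, here is a hypothetical sketch of the kind of automated rule the lawsuit alleges. The function name, signature and reference year are illustrative assumptions, not iTutorGroup’s actual code:

```python
# Hypothetical illustration of an automated rejection rule like the one
# the EEOC alleges: applicants over a gender-specific age threshold are
# screened out before any human reviews their qualifications. Under the
# ADEA, putting such a rule in software does not shield the employer.
def auto_reject(birth_year: int, gender: str, application_year: int = 2020) -> bool:
    age = application_year - birth_year
    if gender == "female" and age >= 55:
        return True
    if gender == "male" and age >= 60:
        return True
    return False
```

A rule this short, buried in an application pipeline, can silently reject hundreds of qualified candidates, which is exactly the pattern the EEOC says it found.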

EEOC Chair Charlotte A. Burrows said: “Age discrimination is unfair and illegal. Even when technology automates discrimination, the responsibility still lies with the employer.” She added: “This case is an example of why the EEOC recently launched the Artificial Intelligence and Algorithmic Fairness Initiative. Workers facing discrimination due to their employer’s use of technology can rely on the EEOC to seek remedies.”

The EEOC has also just released its first promised technical assistance document, which explains how the Americans with Disabilities Act applies to the use of software, algorithms and artificial intelligence to evaluate applicants and employees. This guidance will be discussed in more detail in my “AI Part 2” column next week.

According to the new EEOC guidance, AI can be used in a variety of applications, including automated resume screening, hiring software, chatbot software for hiring and workflow, video interviewing, analytics, employee monitoring, and worker management.

These programs are often combined with algorithms, which the EEOC defines as “a set of instructions that a computer can follow to achieve some goal.” These employment algorithms provide tools for algorithmic decision making that can be used at all stages of the employment life cycle from hiring to firing.

AI adds another layer of sophistication: it can be used to develop algorithms that help employers make decisions more efficiently. The EEOC cites the congressional definition of AI as “a machine-based system that can, for a given set of human-defined goals, make predictions, recommendations, or decisions that affect a real or virtual environment.”

Examples cited by the EEOC include:

  • Resume scanners that prioritize applications using specific keywords.
  • Employee monitoring software that rates employees based on their keystrokes or other factors.
  • “Virtual assistants” or “chatbots” that ask applicants about their qualifications and reject those who don’t meet predetermined requirements.
  • Video interview software that evaluates candidates based on their facial expressions and speech patterns.
  • Testing software that provides “job fit” scores to candidates or employees regarding their personality, abilities, cognitive skills, or perceived “cultural fit” based on their performance in a game or more traditional test.
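A keyword-based resume scanner of the kind listed above can be sketched in a few lines. The keywords, scoring rule and threshold below are illustrative assumptions, not any vendor’s actual product:

```python
# Hypothetical sketch of a keyword-prioritizing resume scanner: each
# resume is scored by counting matches against a predetermined keyword
# list, and only resumes at or above a threshold advance. The filtering
# step is where unexamined disparate impact can creep in.
KEYWORDS = {"python", "sql", "project management"}

def score_resume(text: str) -> int:
    lowered = text.lower()
    return sum(1 for keyword in KEYWORDS if keyword in lowered)

def prioritize(resumes: list[str], threshold: int = 2) -> list[str]:
    return [r for r in resumes if score_resume(r) >= threshold]
```

Nothing in the scoring rule mentions a protected trait, yet the choice of keywords and threshold can still screen out protected groups disproportionately, which is why the EEOC urges employers to audit these tools.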

Employers need to learn what types of AI software and programs exist in their current HR functions and use their own data analytics to verify that these technologies operate fairly and without discrimination.

Employers should also be careful not to blindly follow the promises of new technology’s effectiveness.

Next week, I’ll talk about how AI can create disability discrimination.

Additional information can be found on the EEOC’s website.

Karen Michael is an attorney and president of Richmond-based Karen Michael PLC and author of Stay at Work. She can be contacted at
