Getting a Job: Working for AI
I have been fortunate to have had my job for more than twenty years. I have never looked for a job in the twenty-first century. If I did, the process would be a lot different than it was in the 1990s. Monster.com, the first online resume database, only launched in 1999. And while the internet might have had job listings, old-fashioned snail mail was still the main way to apply for a job for many years after that.
Back in the twentieth century, writing a good resume was key. It still is today, but an algorithm is likely to be the first to “see” your resume. In theory, this streamlines the hiring process and perhaps even surfaces better candidates. Even a first interview might be submitted as a video, screened by a bot that analyzes a candidate’s facial expressions and the keywords they use.
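To make the idea of an algorithmic first “reader” concrete, here is a minimal sketch of how a keyword-based resume screen might work. This is a hypothetical illustration, not any vendor’s actual system; the keywords and cutoff are invented for the example.

```python
# Hypothetical keyword-based resume screener (illustrative only).
# Each required keyword found in the resume adds one point; resumes
# scoring below the cutoff never reach a human reviewer.

REQUIRED_KEYWORDS = {"python", "sql", "project management"}  # assumed job requirements
CUTOFF = 2  # assumed minimum number of keyword matches


def screen_resume(text: str) -> bool:
    """Return True if the resume passes the automated keyword screen."""
    text = text.lower()
    matches = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return matches >= CUTOFF


resume = "Experienced analyst skilled in Python and SQL reporting."
print(screen_resume(resume))  # matches two keywords, so it passes
```

The point of the sketch is how blunt this kind of filter is: a strong candidate who describes the same skills in different words scores zero and is silently rejected.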
Some candidates can be immediately disqualified for lacking minimum qualifications. When I have reviewed applications, occasionally people apply to be a sociology professor without a degree, not even a bachelor’s degree. But for the most part, we read through packets of material for each qualified candidate, including letters of recommendation and publications.
While this work is time consuming, we have not considered turning it over to an algorithm, even to get the process started. According to a Harvard Business Review report, hiring algorithms can be deeply problematic:
To attract applicants, many employers use algorithmic ad platforms and job boards to reach the most “relevant” job seekers. These systems, which promise employers more efficient use of recruitment budgets, are often making highly superficial predictions: they predict not who will be successful in the role, but who is most likely to click on that job ad.
These predictions can lead jobs ads to be delivered in a way that reinforces gender and racial stereotypes, even when employers have no such intent. In a recent study we conducted together with colleagues from Northeastern University and USC, we found, among other things, that broadly targeted ads on Facebook for supermarket cashier positions were shown to an audience of 85% women, while jobs with taxi companies went to an audience that was approximately 75% black. This is a quintessential case of an algorithm reproducing bias from the real world, without human intervention.
World Economic Forum noted similar problems:
It has been shown that in the US labor market, African-American names are systematically discriminated against, while white names receive more callbacks for interviews. However, we observe bias not only because of human error, but also because the algorithms increasingly used by recruiters are not neutral; rather, they reproduce the same human errors they are supposed to eliminate. For example, the algorithm that Amazon employed between 2014 and 2017 to screen job applicants reportedly penalized words such as ‘women’ or the names of women’s colleges on applicants’ CVs.
These forms of artificial intelligence, or AI, reflect and even amplify existing biases embedded in the workforce. And the problems don’t end with the hiring process. As a Los Angeles Times column recently discussed, drivers for Uber and Lyft have found themselves “deactivated”—bot-speak for terminated—presumably by an algorithm:
A new survey of 810 Uber and Lyft drivers in California shows that two-thirds have been deactivated at least once. Of those, 40% of Uber drivers and 24% of Lyft drivers were terminated permanently. A third never got an explanation from the gig app companies.
Drivers of color saw a higher rate of deactivation than white drivers — 69% to 57%, respectively. A vast majority of the drivers (86%) faced economic hardship after getting fired by the app, and 12% lost their homes.
Deactivation hit even the most experienced drivers: The report, conducted by Rideshare Drivers United and the Asian Law Caucus, found that drivers who were deactivated had worked, on average, 4 1/2 years for Uber and four years for Lyft.
The World Economic Forum article concludes with suggestions for workers on how to craft a resume that AI will read favorably, but the implication is that it is up to workers to somehow outsmart the algorithm. The article’s link title, “AI Assisted Recruitment is Biased: Here’s How to Beat it,” implies that it can be beaten. That seems unlikely, given that the inner workings of any such algorithm are largely proprietary; in other words, applicants typically don’t know exactly what it is looking for.
Of course, AI isn’t going away, and it can potentially enhance our lives. New research into using AI for medical diagnoses might potentially save lives…or reflect existing inequalities in healthcare. Rather than blaming AI, the solution is human-based, and requires that we examine systemic inequalities and consider how they might be removed from algorithms. Maybe someone will develop an algorithm for that.