> The Court has provisionally certified an ADEA collective, which includes: “All individuals aged 40 and over who, from September 24, 2020, through the present, applied for job opportunities using Workday, Inc.’s job application platform and were denied employment recommendations.” In this context, being “denied” an “employment recommendation” means that (i) the individual’s application was scored, sorted, ranked, or screened by Workday’s AI; (ii) the result of the AI scoring, sorting, ranking, or screening was not a recommendation to hire; and (iii) that result was communicated to the prospective employer, or the result was an automatic rejection by Workday.
This is the best light you can shine on the discrimination. Most often it really is managers taking their “seniority” literally: they don't want to risk that their reports are smarter, more experienced, or capable of replacing them, so they discriminate on the basis of age. It's counterintuitive, but that's what rings truest from my own observation over the years.
They said ethics demand that any AI that is going to pass judgment on humans must be able to explain its reasoning. An if-then rule ("this rule fired"), or even a statistical correlation ("A is associated with B"), would be fine. Fundamental fairness requires that if an automated system denies you a loan, a house, or a job, it must be able to give you an explanation you can challenge, fix, or at least understand.
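To make that concrete, here's a toy sketch of what I mean by an if-then explanation. The rules and field names are made up for illustration (obviously not anything Workday actually runs); the point is that the explanation is the decision path itself, so there's nothing to hallucinate.

```python
# Toy rule-based screener: every decision carries the exact rule that produced it,
# so a rejected applicant has something concrete to challenge or correct.
from dataclasses import dataclass

@dataclass
class Decision:
    recommend: bool
    reason: str  # the rule that fired, stated in plain terms

def screen(applicant: dict) -> Decision:
    # Hypothetical rules purely for illustration.
    if applicant["years_experience"] < 3:
        return Decision(False, "Fewer than 3 years of relevant experience")
    if not applicant["has_required_certification"]:
        return Decision(False, "Missing the required certification")
    return Decision(True, "Met all screening criteria")

d = screen({"years_experience": 2, "has_required_certification": True})
print(d.recommend, "-", d.reason)  # False - Fewer than 3 years of relevant experience
```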
LLMs may be able to provide that, but it would have to be carefully built into the system.
That's a great point: funny, sad, and true.
My AI class predated LLMs. The implicit assumption was that the explanation had to be correct and verifiable, which may not be achievable with LLMs.
I believe the point is that it's much easier to create a plausible justification than an accurate justification. So simply requiring that the system produce some kind of explanation doesn't help, unless there are rigorous controls to make sure it's accurate.
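Roughly what such a control could look like, as a toy counterfactual check (hypothetical helper, not any real product's API): if the system says "rejected because X", then re-scoring with X corrected should no longer produce a rejection.

```python
# Sketch of one possible "rigorous control": treat the stated reason as a testable
# claim and verify it counterfactually -- fix the cited factor and confirm the
# rejection actually flips. All names here are made up for illustration.

def explanation_is_faithful(decide, applicant: dict, cited_field: str, corrected_value) -> bool:
    """decide(applicant) -> (recommend: bool, reason: str).
    Returns True only if the cited factor really drives the negative outcome."""
    recommend, _reason = decide(applicant)
    if recommend:
        return True  # nothing to verify for a positive recommendation
    counterfactual = {**applicant, cited_field: corrected_value}
    flipped, _ = decide(counterfactual)
    return flipped  # if fixing the cited cause doesn't change the outcome, the explanation wasn't accurate

# Example with a trivial one-rule screener:
def decide(applicant):
    ok = applicant["years_experience"] >= 3
    return ok, ("OK" if ok else "Fewer than 3 years of relevant experience")

print(explanation_is_faithful(decide, {"years_experience": 2}, "years_experience", 5))  # True
```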
That could get interesting, as most companies will not provide feedback if you are denied employment.
I never liked these "trust me bro we're court authorized, give us all your PII to join the class action" setups on random domains. Makes phishing seem inevitable. Why can't we have a .gov that hosts all these as subdomains?
I'm interested to see Workday's defense in this case. Will it be "we can't be held liable for our AI", and will that work against a law as "strong" as the ADEA?
https://www.bbc.com/travel/article/20240222-air-canada-chatb...