Back to the Future: How Well Equipped is Irish Employment Equality Law to Adapt to Artificial Intelligence

Naomi Foale
LL.B. Law; LLM candidate in Innovation, Technology and the Law at the University of Edinburgh
© 2020 Naomi Foale and Dublin University Law Society
The novelty of twenty-first century artificial intelligence (AI) is its ability
to make decisions in areas as varied as criminal sanctions,
credit scores
and access to employment.
Through the delegation of recruitment
decision-making to AI, the technology promises that it ‘slashes turnover,
reduces bias and dramatically improves [the] quality of hire’
and delivers
‘better hiring with AI-driven predictions’.
Attracted by the efficiency and
apparent objectivity of data-driven decisions, employers are using AI to
screen out up to 72% of CVs before recruiters actually set eyes upon them.
However, amongst these utopian claims
there is a major reason
for concern: AI is highly susceptible to inheriting the biases that it
* LL.B. Law; LLM candidate in Innovation, Technology and the Law at the University of
Edinburgh. The author would like to thank Professor Mark Bell for his support in writing
the dissertation that this article is based on. The author would also like to thank Niamh
Flannery for opening a door into the technology industry and providing invaluable
mentorship, sparking an interest in technology and its relationship with the law. Finally,
the author is eternally grateful to Eolann Davis for his patience, diligence and commitment
throughout the editing process.
‘State v Loomis’ (2017) 130 Harvard Law Review 1530.
Danielle Keats Citron and Frank Pasquale, ‘The Scored Society: Due Process for
Automated Predictions’ (2014) 89 Washington Law Review 1.
Claire Cain Miller, ‘Can an Algorithm Hire Better Than a Human?’ The New York Times
(25 June 2015) <
than-a-human.html> accessed 6 March 2019.
Ideal, ‘Artificial Intelligence for Recruiting’ accessed 11
February 2020.
HireVue, ‘Better Hiring with AI-Driven Predictions’ accessed
11 February 2020.
Gideon Mann and Cathy O’Neil, ‘Hiring Algorithms Are Not Neutral’ (Harvard Business
Review, 9 December 2016) 2016/12/hiring-algorithms-are-not-neutral>
accessed 6 March 2019.
Neil M Richards and Jonathan H King, ‘Three Paradoxes of Big Data’ (2013) 66 Stanford
Law Review Online 41, 45.
promises to eliminate.
Amazon, one of the most technologically
sophisticated companies in the world,
abandoned their CV-screening AI
tool because, after three years of development, it persistently
discriminated against women.
Other companies, such as Unilever and
Kraft Heinz, plough ahead with using these technologies.
Discrimination at the recruitment stage is very serious, because it restricts a person’s
ability to access the workplace, and thus has negative knock-on financial
and psychological effects.
AI-driven recruitment decision-making thus
demands a close scrutiny of the adequacy of employment equality law, in
light of this technological change.
AI’s ability to discriminate in this way can be described as
‘classification bias’, meaning an employer’s use of ‘classification schemes,
such as data algorithms, to sort or score workers in ways that worsen
inequality or disadvantage along the lines of race, sex or other
discriminatory grounds that are prohibited by Irish employment equality
law’. Although classification bias conceptually corresponds to our
understanding of indirect discrimination according to the Employment
Equality Acts 1998-2015 (the Employment Equality Acts), its unique
characteristics present a complainant with significant practical obstacles
that effectively eliminate any opportunity for a legal remedy. It is highly
unlikely that a complainant can successfully make out a prima facie case
of discrimination because they lack access to the algorithm and data in
question. In the unlikely event that a complainant acquires access to the
relevant data, the current test for ‘particular disadvantage’ is unsuited to
the peculiar statistical nature of AI. Even if a complainant surpasses these
obstacles, an employer can likely justify even a discriminatory algorithm.
Mann and O’Neil (n 6).
Shannon Bond, ‘Amazon’s ever-increasing power unnerves vendors’ Financial Times (20
September 2018) bc8a-11e8-94b2-17176fbf93f5>
accessed 11 February 2020.
Jeffrey Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’
Reuters (10 October 2018) <
women-idUSKCN1MK08G> accessed 11 February 2020.
HireVue, ‘Unilever finds top talent faster with HireVue assessments’
accessed 11 February 2020; Maria Aspan, ‘A.I. is transforming the job interview – and
everything after’ FORTUNE (20 January 2020)
technology-ai-hiring-recruitment/> accessed 11 February 2020.
Marguerite Bolger, Claire Bruton and Clíona Kimber, Employment Equality Law (1st edn,
Round Hall 2012) [10-14].
Pauline T Kim, ‘Data-Driven Discrimination at Work’ (2017) 58 William and Mary Law
Review 857, 866.
Trinity College Law Review [Vol 23
Therefore, it is submitted that a marriage of employment equality law and
the recently introduced General Data Protection Regulation (GDPR) may
alleviate some of the barriers facing a complainant, by mandating ex ante
action and providing the requisite information for complainants to make
out a stronger prima facie case.
After analysing AI and its tendency towards technological bias in
Part I, this article shall go on to examine how classification bias
conceptually conforms to our understanding of indirect discrimination
according to the Employment Equality Acts. Part II shows how, for an
indirect discrimination claim, the burden of proof and particular
disadvantage requirements of the Employment Equality Acts, coupled
with the employer’s ability to objectively justify AI-driven recruitment,
make it almost impossible for a complainant to succeed in a classification
bias case. In Part III, this article will then outline how an innovative
application of the GDPR would address some of these practical barriers
and offer ex ante regulatory action to prevent classification bias from
arising in the first place.
It should be noted that employers generally engage a third-party
software vendor to deliver AI-driven recruitment services, such as Ideal or
HireVue. It is beyond the scope of this paper to consider whether such
vendors entail legal liability for the discriminatory effects of their services.
Instead, the focus will be on the application of the Employment Equality
Acts to employers in the context of classification bias.
I. Artificial Intelligence and Employment
A. What is Artificial Intelligence?
Artificial intelligence is not new. The first academic workshop that
focused on AI was proposed in August 1955.
It is, however, novel that
AI is becoming increasingly central to decision-making processes in fields
with high-impact outcomes, such as law, medicine and employment.
There is no universally accepted definition of AI, though the UK
HireVue (n 5); Ideal (n 4).
John McCarthy and others, ‘A Proposal for the Dartmouth Summer Research Project on
Artificial Intelligence’ (31 August 1955) 1
> accessed 11 February 2020.
Will Knight, ‘The Dark Secret at the Heart of AI’ (2017) MIT Technology Review
accessed 11 February 2020; ‘State v Loomis’ (n 1); Cain Miller (n 3).
Government Industrial Strategy outlines a useful one: ‘Technologies with
the ability to perform tasks that would otherwise require human
intelligence, such as visual perception, speech recognition, and language
translation’. A distinguishing characteristic of modern AI is its ability to
learn and adapt over time.
This is distinct from conventional computer
coding in that, ‘[i]nstead of a programmer writing the commands to solve
a problem, the program generates its own algorithm based on example
data and a desired output’
and thus, ‘the machine essentially programs
itself’. In a famous example, the American supermarket chain Target
used an algorithm which learned to detect customers’ pregnancies based
on data from their shopping habits.
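The distinction between conventional coding and machine learning can be sketched in a few lines (a minimal illustration: the data, the score values and the deliberately simplistic ‘learning’ rule are all invented, and commercial recruitment models are vastly more complex):

```python
# Minimal illustration: instead of a human hand-coding a screening rule,
# the "model" derives its own rule from labelled example data.
# All data below is invented for illustration only.

def train_threshold(examples):
    """Derive a score cut-off separating past 'hired' from 'rejected'
    examples -- a toy stand-in for model training."""
    hired = [score for score, label in examples if label == "hired"]
    rejected = [score for score, label in examples if label == "rejected"]
    # Place the cut-off midway between the two groups' average scores.
    return (sum(hired) / len(hired) + sum(rejected) / len(rejected)) / 2

past_decisions = [(82, "hired"), (75, "hired"), (40, "rejected"), (55, "rejected")]
threshold = train_threshold(past_decisions)  # the program generates its own rule

def screen(score):
    # The resulting screening rule was never written by a programmer.
    return "interview" if score >= threshold else "reject"
```

No human wrote the resulting rule; the cut-off emerged from the example data, which is why biased historical examples produce a biased rule.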
This paper will focus on AI systems that screen applications for
interview. At this stage, these systems can act as decision-makers by
eliminating candidates such that a recruiter never sees the application. As
mentioned in the introduction, AI can screen out up to 72% of CVs without any
human review.
This paper is thus concerned with classification bias that
occurs as a result of AI-driven recruitment screening processes.
B. The Employment Context
As with all technological change, AI presents both opportunities and risks.
For recruitment processes, in many ways, AI can make better decisions
than humans who are limited by bias, cognitive limitations and undue
over-focus on certain factors.
In particular, human subconscious bias can
inform decision-making without the decision-maker even being aware of
it. By contrast, AI informs its decisions based on thousands of data points.
Department for Business, Energy and Industrial Strategy, ‘Industrial Strategy: Building a
Britain fit for the future’ (HM Government 2017) 37
for-the-future> accessed 11 February 2020.
Select Committee on Artificial Intelligence, ‘AI in the UK: ready, willing and able?’
(House of Lords 2018) 14
accessed 11
February 2020.
Will Knight (n 16).
Charles Duhigg, ‘How Companies Learn Your Secrets’ The New York Times Magazine (16
February 2012) <>
accessed 11 February 2020.
Mann and O’Neil (n 6).
Kim, ‘Data-Driven Discrimination’ (n 13) 871.
Charles R Lawrence III, ‘The Id, the Ego and Equal Protection: Reckoning with
Unconscious Racism’ (1987) 39 Stanford Law Review 317, 322.
For example, an AI programme developed by Cherry Tree Analytics has
successfully allowed individuals with criminal records to return to work
in call centres. It predicts how likely such individuals are to reoffend and
compares it to how likely other prospective candidates, without criminal
records, are to commit a crime.
Candidates with criminal records who
are equally or less likely to commit a crime than someone with no criminal
record can then be considered for a vacant position.
During a human-led
process, candidates like this are generally disregarded at the outset despite
the fact that, on average, individuals with criminal records stay in call
centre jobs for twenty-one months longer than their peers without
criminal records.
Additionally, AI can scan thousands of applications
cheaply and quickly, which can widen and thus diversify the pool of
applicants. However, the reliance on AI systems to remove bias from the
traditional recruitment process is misplaced. As Gandy Jr highlights,
whilst AI-driven decision-making ‘may reduce the impact of biased
individuals...[it] may also normalize the far more massive impacts of
system-level biases and blind spots.’
This structural disadvantage is
reflected in the data that informs AI-driven decisions. As the European
Economic and Social Committee point out, such data is far from objective
and is instead ‘easy to manipulate, may be biased, may reflect cultural,
gender and other prejudices and preferences and may contain errors’.
In the previously mentioned case of Amazon, the algorithm persistently
discriminated against women because the model learned from previous
applications to the company, which were predominantly submitted by
men. Similarly, AI-driven sentencing recommendations that assess the
risk of recidivism across America have underestimated the risk of white
Zev Eigen, Artificial Intelligence and Employment Law Conference, NYU School of Law
available at accessed 11 February 2020. See also
Lydia Belanger, ‘With This Company’s New Tool, You Can Run a Free Background Check
on Yourself’ Entrepreneur (6 September 2018)
accessed 11 February 2020.
Eigen (n 25).
Dylan Minor, Nicola Persico and Deborah M Weiss, ‘Criminal background and job
performance’ (2018) 7 IZA Journal of Labour Policy 8; Eigen (n 25).
Oscar H Gandy Jr, ‘Engaging Rational Discrimination: Exploring Reasons for Placing
Regulatory Constraints on Decision Support Systems’ 12 Ethics and Information
Technology 29, 33.
European Economic and Social Committee, ‘Artificial intelligence – The consequences of
artificial intelligence on the (digital) single market, production, consumption, employment
and society’ [2017] OJ C288/43, 6.
Dastin (n 10).
defendants and overestimated the risk of black defendants.
This is
explained by the structural bias of the American criminal justice system,
where 37.5% of prison inmates are black
even though black people only
make up 13.4% of the overall U.S. population.
Therefore, even though AI
can remove individual subconscious bias, structurally biased data means
that AI can produce patterns of discrimination that resemble those that
motivated the development of employment equality law.
Compounding this issue of structurally biased data is AI’s
correlative rather than causative nature. AI learns by processing huge
volumes of data and uncovering patterns.
For example, AI has uncovered
that ‘liking’ curly fries on Facebook suggests greater intellectual ability,
and visiting a particular Japanese manga site suggests greater coding
ability for software engineers.
Clearly, an affinity for curly fries does not
lead to higher intelligence and visiting a specific manga site does not lead
to superior coding ability. As King and Mrkonich explain, ‘two traits, an
appreciation of manga and coding aptitude, are correlated, but neither
causes the other’.
In this way, AI can draw inferences based on
correlations with no clear causal link to job performance and in the process
deny employment opportunities. Notwithstanding these concerns,
correlations can act as proxies for the discriminatory grounds outlined in
the Employment Equality Acts, such as gender or race, and work to
exclude disadvantaged groups.
For example, if an employer accepts that
visiting a specific manga site indicates superior coding ability, then
Select Committee (n 18) 42; Julia Angwin and others, ‘Machine Bias’ ProPublica (23 May
2016) bias-risk-assessments-in-criminal-
sentencing> accessed 16 February 2020; ‘State v Loomis’ (n 1).
Federal Bureau of Prisons, ‘Inmate Race’
accessed 21 February
United States Census Bureau, ‘QuickFacts’
acts/fact/table/US/PST045218> accessed 21 February 2020.
Kim, ‘Data-Driven Discrimination’ (n 13) 861.
Select Committee (n 18) 41.
Michal Kosinski, David Stillwell and Thore Graepel, ‘Private Traits and Attributes Are
Predictable from Digital Records of Human Behavior’ (2013) 110 Proceedings of the
National Academy of Sciences of the United States of America 5802, 5804.
Don Peck, ‘They’re Watching You at Work’ The Atlantic (December 2013)
work/354681/> accessed 11 February 2020.
Allan G King and Marko Mrkonich, ‘“Big Data” and the Risk of Employment
Discrimination’ (2016) 68(3) Oklahoma Law Review 555, 560.
Solon Barocas and Andrew D Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California
Law Review 671, 691.
because the majority of visitors to manga sites are men, this inference
works to exclude women from that employment opportunity.
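This proxy effect can be made concrete with a small numeric sketch (all data invented; the feature here simply stands in for any ‘neutral’ variable that happens to correlate with a protected characteristic):

```python
# Hypothetical sketch: a facially neutral feature ("visits manga site")
# acts as a proxy for gender because, in this invented pool, most of the
# site's visitors are men.
candidates = [
    {"gender": "M", "visits_manga_site": True},
    {"gender": "M", "visits_manga_site": True},
    {"gender": "M", "visits_manga_site": False},
    {"gender": "F", "visits_manga_site": False},
    {"gender": "F", "visits_manga_site": False},
    {"gender": "F", "visits_manga_site": True},
]

def shortlisted(c):
    # The model never consults gender -- only the proxy feature.
    return c["visits_manga_site"]

def selection_rate(group):
    members = [c for c in candidates if c["gender"] == group]
    return sum(shortlisted(c) for c in members) / len(members)
```

Although the rule never consults gender, the groups’ selection rates diverge (two-thirds of the men but only one-third of the women are shortlisted), which is precisely the pattern with which indirect discrimination law is concerned.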
Of course, ranking candidates for screening purposes is not new.
Traditional skills assessments have also been legally challenged for
discrimination. In Essop and others v Home Office (UK Border Agency), the
appellants successfully challenged a test which was used to assess
candidates for promotion in the civil service.
Black and minority ethnic
and older candidates experienced lower pass rates compared to their white
and younger peers, though there was no explanation as to why this was
the case.
Such tests, however, seek certain characteristics (e.g. strong
problem solving) and assess for demonstrations of that ability. The unique
risks of AI lie in its correlative nature, which seeks patterns first, rather
than demonstrations of ability. The outcome can be discrimination on the
basis of the resulting proxies, which may be entirely unrelated to the job.
A further significant cause for concern is the opacity of AI. The
‘black box phenomenon’ means that the reasoning and logic of an AI-
driven decision are generally very difficult to ascertain on a technical level.
For example, a car which has ‘learned’ how to drive itself might do
something unexpected, such as remain stationary at a green light, and even
the engineers who designed the system ‘may struggle to isolate the reason
for any single action.’
As one commentator put it:
‘[y]ou can’t just look inside a deep neural network to see how it
works. A network’s reasoning is embedded in the behavior of
thousands of simulated neurons, arranged into dozens or even
hundreds of intricately interconnected layers.’
This transparency problem is exacerbated by the fact that algorithms are
generally trade secrets so their operation is hidden,
as well as a lack of
Caroline Criado Perez, Invisible Women: Exposing Data Bias in a World Designed for Men
(Random House 2019) 106-108; Sarah Gordon, ‘It’s a man’s world – how data are rife with
insidious sexism’ Financial Times (1 March 2019)
28a0-11e9-a5ab-ff8ef2b976c7> accessed 11 February 2020.
Kim, ‘Data-Driven Discrimination’ (n 13) 874.
ibid [7-11].
Manuel Carabantes, ‘Black-box artificial intelligence: an epistemological and critical
analysis’ (2019) AI & Society 1, 3.
Knight (n 16).
Under EU law, algorithms can be protected as trade secrets and therefore the companies
behind recruitment algorithms do not have to disclose how the algorithms operate. See
public understanding about AI.
As AI is increasingly used to make
important employment decisions, it is deeply troubling that those
decisions cannot be scrutinised in the same way that one can scrutinise a
human decision-maker.
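A toy example may help to illustrate why inspecting the system itself yields so little: even with complete access to a small network’s parameters (the numbers below are arbitrary inventions), the weights carry no human-readable reasoning about why a given candidate was scored as they were:

```python
# Toy illustration of the 'black box' point. The parameters are invented;
# nothing about them explains *why* one input scores higher than another.
import math

weights_hidden = [[0.81, -1.92, 0.33], [-0.47, 1.05, -2.10]]
weights_out = [1.31, -0.66]

def predict(features):
    # Each hidden "neuron" is a weighted sum passed through a sigmoid.
    hidden = [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, features))))
              for row in weights_hidden]
    # The final score is another weighted sum over the hidden values.
    return sum(w * h for w, h in zip(weights_out, hidden))
```

Real systems compound the problem: the ‘thousands of simulated neurons’ in ‘hundreds of intricately interconnected layers’ quoted above mean millions of such numbers, none individually meaningful.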
Unlike other applications of AI, such as fraud detection, the feedback
effect in the employment sphere means that an algorithm cannot ‘unlearn’
its bias. AI can learn from a feedback loop, such that if a credit card fraud
detection algorithm incorrectly flags a transaction as fraudulent, the
customer reports the error and the system learns from its mistake.
The same goes if it fails to flag a fraudulent transaction: the customer will alert
the card company and the algorithm learns to better detect fraud.
However, in the employment context, this feedback loop is only half-
complete. Whilst an algorithm can learn from incorrectly selecting a
candidate who becomes a problematic employee, it cannot learn from
errors by omission. If an excellent candidate was rejected, the algorithm
will never self-correct for the future because that mistake will not come to
light. Thus, any discrimination becomes a self-fulfilling prophecy, where
the algorithm continues to systematically exclude disadvantaged groups.
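The one-sided feedback loop can be sketched as follows (a simplified simulation with invented names, scores and outcomes):

```python
# Sketch of the half-complete feedback loop described above (invented
# data). The system only ever observes outcomes for candidates it
# selects; a wrongly rejected candidate never produces a corrective label.

feedback = []  # (name, observed_outcome) pairs the model could learn from

def model_selects(candidate):
    # Stand-in for the trained screening model's decision.
    return candidate["score"] >= 60

def process(candidate):
    if model_selects(candidate):
        # Selected: job performance is eventually observed, so feedback exists.
        feedback.append((candidate["name"], candidate["true_quality"]))
    # Rejected: no outcome is ever observed, so an excellent candidate
    # who was screened out generates no signal to correct the model.

applicants = [
    {"name": "A", "score": 80, "true_quality": "average"},
    {"name": "B", "score": 40, "true_quality": "excellent"},  # never seen
]
for a in applicants:
    process(a)
```

Candidate B, excellent but screened out, never generates a training signal, so the model’s error of omission persists indefinitely.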
There are three main ways that AI-driven recruitment decisions can
discriminate. An employer can explicitly rely on a protected
characteristic to instruct the algorithm. For example, companies including
Goldman Sachs and Target have used Facebook to explicitly target job
advertisements on the basis of age.
An employer can also discriminate
(intentionally or unintentionally) by relying on an apparently neutral
Council Directive (EU) 2016/943 on the protection of undisclosed know-how and business
information (trade secrets) against their unlawful acquisition, use and disclosure OJ L157/1;
Rembert Niebel, Lorenzo de Martinis and Birgit Clark, ‘The EU Trade Secrets Directive: all
change for trade secret protection in Europe?’ (2018) 13(6) JIPLP 445, 447; Nicholas
Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’
(December 2013) Tow Center for Digital Journalism 11
<> accessed 11 February
Select Committee (n 18) 22.
Barocas and Selbst (n 39) 679.
Philipp Hacker, ‘Teaching Fairness to artificial intelligence: Existing and novel strategies
against algorithmic discrimination under EU Law’ (2018) 55(4) Common Market Law
Review 1143, 1150.
See Pauline Kim and Sharion Scott, ‘Discrimination in Online Employment Recruiting’
(2019) 63 St. Louis University Law Journal 23.
Julia Angwin, Noam Scheiber and Ariana Tobin, ‘Dozens of Companies Are Using
Facebook to Exclude Older Workers From Job Ads’ ProPublica (20 December 2017)
ads-age-discrimination-targeting> accessed
11 February 2020.
variable that is closely correlated to a protected characteristic, such as the
use of a manga site visited predominantly by men, to indicate coding
ability. Finally, the model itself could be biased because of discriminatory
data, such as Amazon’s CV screening algorithm that discriminated against
women. Thereaux neatly summarises the risks presented by AI: ‘We take
bias, which in certain forms is what we call ‘culture’, put it in a black box
and crystallise it for ever… We have even more of a problem when we
think that that black box has the truth and we follow it blindly’.
Therefore, despite the characterisation of AI as objective and accurate,
the reality is that AI is highly susceptible to inheriting structural biases
and the result is classification bias. Given that this arises as a result of the
correlative nature of AI and structurally biased data, merely removing the
labels of protected characteristics from the algorithm does not prevent it
from discriminating.
Additionally, retaining these labels facilitates monitoring of the
algorithm’s impact upon protected groups.
II. Access to Employment and Classification Bias
There is a widespread and compelling narrative, advanced by the
technology community, that the application of legislation and regulation
to technology threatens innovation and thus is economically damaging.
However, as De Stefano states, ‘[t]hese assumptions must be questioned’.
Indeed, good technology regulation can have a positive net economic
impact, as the European Commission has reported,
and in the face of
public scrutiny, many of the technology industry’s largest companies are
Select Committee (n 18) 42.
Kate Crawford and Jason Schultz, ‘Big Data and Due Process: Toward a Framework to
Redress Predictive Privacy Harms’ (2014) 55 Boston College Law Review 93, 127.
Miriam Kullmann ‘Platform Work, Algorithmic Decision-Making and EU Gender
Equality Law’ (2018) 34 International Journal of Comparative Labour Law and Industrial
Relations 1, 14; Cynthia Dwork and Deirdre K Mulligan, ‘It’s Not Privacy, and It’s Not Fair’
(2013) 66 Stanford Law Review Online 35, 37.
Kim, ‘Data-Driven Discrimination’ (n 13) 899.
Select Committee (n 18) 112-116.
Valerio De Stefano, ‘”Negotiating the Algorithm” Automation, artificial intelligence and
labour protection’ (2018) International Labour Office Working Paper 246, 2.
European Commission, ‘Assessing the Impacts of EU Regulatory Barriers on Innovation:
Final Report’ (2017) 13
21 February 2020.
now calling for more, not less, regulation.
Additionally, as long as AI
produces outcomes that are good enough, hiring strong candidates even
if the model also produces discriminatory effects, the employer has no
market incentive to invest in reducing that bias (for example, by
purchasing unbiased data).
Furthermore, classification bias can create
self-fulfilling prophecies, as described above, which can exclude
disadvantaged groups from employment in the long-term.
Therefore, ensuring that the development of AI-driven recruitment is fair depends on
a legal environment that incentivises employers to
reduce the discriminatory effects of AI-driven recruitment processes. With
this in mind, Part II examines the extent to which employment equality
law can capture classification bias and offer complainants meaningful redress.
A. Employment Equality Law
Fortunately, classification bias is, conceptually at least, well
accommodated within the existing employment equality regime. AI-
driven screening processes comfortably fall within Section 8(1) of the
Employment Equality Acts, which prohibits discrimination in relation to,
inter alia, ‘access to employment’. Section 8(5) elaborates that this includes
discrimination in ‘any arrangements the employer makes for the purpose
of deciding to whom employment should be offered’. These ‘very broad’
provisions relating to access to employment include within their remit
shortlisting for interviews.
The discriminatory grounds to which this relates
are outlined in Section 6(2): gender, marital status, family
status, sexual orientation, religion, age, disability, race or membership of
the Traveller community. The law is thus very clear: discrimination during
the selection process for employment is prohibited. It therefore follows
that where an employer makes arrangements for AI to screen candidates
for interview, which results in classification bias disadvantaging these
Mark Zuckerberg, ‘Mark Zuckerberg: Big Tech needs more regulation’ Financial Times
(16 February 2020) 4f18-11ea-95a0-43d18ec715f5>
accessed 21 February 2020; Chris Nuttall, ‘Google chief calls for AI regulation’ Financial
Times (20 January 2020) 4f18-11ea-95a0-
43d18ec715f5> accessed 21 February 2020.
For a detailed technical analysis about the costs associated with fixing discriminatory AI,
see Hacker (n 50) 1146-1150; Kim, ‘Data-Driven Discrimination’ (n 13) 865.
Keats Citron and Pasquale (n 2) 18; Crawford and Schultz (n 54) 103.
Clíona Kimber and Claire Bruton, ‘Chapter 17: Employment Equality’ in Ailbhe Murphy
and Maeve Regan (eds), Employment Law (2nd edn, Bloomsbury Professional 2017) [17.07].
protected groups, then such discrimination is prohibited.
Moreover, the
legislation predates advancements in AI and so its provisions should be
interpreted generously.
The legislation outlines two types of discrimination, direct and
indirect. Direct discrimination encompasses formal equality
and refers to
a situation where ‘a person is treated less favourably than another person
is, has been or would be treated in a comparable situation’ on any of the
discriminatory grounds.
By contrast, indirect discrimination occurs
where ‘an apparently neutral provision puts people of a particular
characteristic at a particular disadvantage’.
Classification bias occurs
where an apparently neutral provision,
such as AI-driven application
screening, disproportionately disadvantages protected groups.
Conceptually, therefore, classification bias is a natural fit for indirect
discrimination in most cases. Just as indirect discrimination is
unconcerned with an employer’s motive or intention,
so too does
classification bias generally arise unintentionally.
The correlative nature
of AI means that proxies (such as an affinity for manga) can easily give
rise to indirect discrimination (e.g. towards women). Indirect
discrimination is also ‘designed to tackle the more subtle and often
institutionalised ways in which employees may be treated less
favourably’, akin to the way classification bias reflects hidden and
structural disadvantage through biased data or proxies for protected
characteristics.
B. Indirect Discrimination
Indirect discrimination is thus the most appropriate legal framework to
capture classification bias. However, AI differs from traditional
recruitment methods that might result in indirect discrimination, such as
See also Hacker (n 50) 1154.
See Bolger, Bruton and Kimber (n 12) [1-30].
Employment Equality Acts s 6(1).
Kimber and Bruton (n 63) [17.16]; see also Employment Equality Acts s 19(4).
The CJEU has expressed a fairly expansive understanding of the concept of ‘provision’.
See Enderby v Frenchay Health Authority [1993] ECR I-5535; Bolger, Bruton and Kimber (n
12) [2-204–2-205].
Bolger, Bruton and Kimber (n 12) [2-75].
Kullman (n 55) 15.
Kimber and Bruton (n 63) [17.15].
competence assessments, because AI ‘involve[s] opaque decision
processes, rest[s] on unexplained correlations, and lack[s] clearly
articulated employer justifications’.
The legal framework that sustains
indirect discrimination was not designed for these characteristics. The
current requirements for making a successful claim, namely a successful
prima facie case and an employer’s inability to rebut the claim or lack of
objective justification, present almost insurmountable practical obstacles
that result in a negligible chance of success for a complainant.
It should be noted at the outset that the pool of potential litigants is
small. Before even making a claim, a complainant must possess sufficient
knowledge of the recruitment process. This includes firstly, knowing that
AI was used to screen their application and secondly, knowing that it was
biased. As explored below, the transparency obligations imposed upon
employers are rather bare and it is therefore unlikely that candidates
disadvantaged by classification bias will possess this requisite knowledge.
Additionally, for discrimination in access to employment, the monetary cap
is a rather low and ‘questionable’ €13,000, which offers little incentive for
individual litigants to proceed with a claim.
i. Burden of Proof
In recognising that ‘generally claimants have little, if any, direct evidence
of discrimination’,
the Employment Equality Acts partly shift the burden
of proof to the employer.
Section 85A provides that ‘[w]here…facts are
established by…a complainant from which it may be presumed that there
has been discrimination in relation to him or her, it is for the respondent
to prove the contrary.’
Despite this partial shift in the burden, the
threshold for a complainant remains high, in light of their lack of access
to the algorithm.
The leading decision for shifting the burden of proof is the Labour
Court decision in Southern Health Board v Mitchell,
requiring a
complainant to prove their facts on the balance of probabilities and, once
Kim, ‘Data-Driven Discrimination’ (n 13) 905.
Employment Equality Acts s 84(2)(b); Bolger, Bruton and Kimber (n 12) [10-14].
Bolger, Bruton and Kimber (n 12) [2-124].
Kimber and Bruton (n 63) [17.25].
See also the Council Directive 2006/54 on the implementation of the principle of equal
opportunities and equal treatment of men and women in matters of employment and
occupation [2006] OJ L 204/23 art 19(1) (recast).
[2001] ELR 201. Though this decision predates the new Directives and s 85A it remains
the leading case. See Kimber and Bruton (n 63) [17.25].
proven, these facts must be sufficiently significant so as to suggest that
discrimination can be inferred.
In that case, the complainant failed to
demonstrate that she possessed superior qualifications compared to the
successful candidate and therefore failed to discharge the burden. To prove
facts that are of sufficient significance to raise an inference of
discrimination, a complainant must have insight into how the AI works
and aggregate data showing its impact for protected groups. Not only is
analysing such data resource-intensive, requiring expert witnesses, but
because algorithms are trade secrets, a complainant will lack the requisite
access to prove facts from which discrimination can be inferred.
In Cork City Council v McCarthy,
the Labour Court elaborated upon
this test and explained that to make out a prima facie case the complainant
does not need to show that ‘discrimination is the only, or indeed the most
likely, explanation which can be drawn from the proved facts’, but that
discrimination can instead be ‘within the range of inferences which can
reasonably be drawn from those facts’.
Thus, a complainant does not
have to prove that classification bias is the only explanation for their
unsuccessful application. However, this standard still requires access to
the data outlining how the algorithm impacted protected groups, even to
infer discrimination as one of many possible explanations.
At first glance, the decision in Inoue v NBK Designs Ltd appears
helpful for a classification bias complainant.
The Labour Court held that
‘It would be alien to the ethos of this court to oblige parties to undertake
the inconvenience and expense involved in producing elaborate statistical
evidence to prove matters which are obvious to the members of the court
by drawing on their own knowledge and expertise’.
The clear problem
for classification bias is that its complexity and novelty mean that the
court’s own knowledge and expertise will be insufficient, thus extensive
statistical analysis will be demanded of complainants to discharge the
burden of proof. Given the obstacles associated with obtaining this, the
significant hurdle for potential complainants remains.
The Workplace Relations Commission considers it a fact of sufficient
significance to infer discrimination where a complainant has greater
experience for the role than the successful candidate, as outlined in
Meehan v Leitrim County Council.
However, this avenue for
[2001] ELR 201. See Kimber and Bruton (n 63) [17.25].
EDA0821 (16 December 2008).
[2003] ELR 98.
ibid 104.
Meehan v Leitrim County Council DEC-E2006-014.
2020] Back to the Future
demonstrating discrimination is significantly curtailed by the Court of
Justice (CJEU) decision in Meister,
which suggests that employers can
maintain a high level of opacity and do not need to share the qualifications
of successful candidates. The CJEU held that there was no specific
entitlement under EU law for a complainant to access information
regarding the successful candidate in order to make out a claim of
discrimination. Although there was no such right, the CJEU held that an
employer’s refusal to disclose such information could be a factor in favour
of presuming that the candidate has suffered discrimination.
It was
particularly relevant that the employer did not call Ms Meister for
interview even though they did not object that she met the requirements
for the role.
It has been suggested that, for Irish law, this means a
complainant or the equality officer ‘could seek from the respondent
statistical data which would assist in establishing discrimination’ and that
failure to provide such data can be taken ‘into account in deciding whether
disproportionate impact has been established’.
Unfortunately, it is doubtful whether an employer’s refusal to
disclose such data would strengthen the case of a classification bias
complainant. Not only is the guidance in Meister ‘extremely vague’, but
the associated costs of conducting an audit of the AI and the fact that
algorithms are trade secrets can only mean that refusal to disclose data
that may indicate discrimination would lead to a weak prima facie case.
There is no explicit obligation on employers to disclose information about
their recruitment processes until after a complainant has successfully
made out their prima facie case. Given that a court is unlikely to take this
as a strong indicator of discrimination, an employer can likely remain
immune from any legal consequences for maintaining the secrecy of an
algorithm’s discriminatory impact.
Case C-415/10 Galina Meister v Speech Design Carrier Systems GmbH [2012] 2 CMLR 39.
For a good commentary of the case, see Ciaran O’Mara, ‘Can an Unsuccessful Job Applicant
Demand that the Prospective Employer Provide him or Her with Documents about the
Successful Applicant?’ (2012) IELJ 9(3) 102.
ibid [44].
ibid [45].
Mel Cousins, ‘Education and Equal Status Acts: Stokes v Christian Brothers High School
Clonmel’ (2015) 38(1) DULJ 157, 169.
Hacker (n 50) 1169.
ibid 1170.
Moreover, mere speculation or assertions of discrimination are insufficient.
Equally, a mere difference in characteristic (e.g. gender or
race) between the successful and unsuccessful candidate is not enough to
raise an inference of discrimination.
The bottom line is that to
successfully establish facts of sufficient significance to infer
discrimination, a complainant’s greatest impediment is a lack of access to
the data and the algorithm.
ii. Causal Connection and Particular Disadvantage
It is highly improbable that a complainant will obtain access to the relevant
data. However, in the event that they overcome this hurdle, it is worth
considering what they must show in order to prove that, on the balance of
probabilities, they have suffered a particular disadvantage as a result of
their particular characteristics.
The case law suggests that complainants do not have to demonstrate
why an algorithm discriminates, as per Nathan v Bailey Gibson and the
academic understanding of the Employment Equality
Acts. Ms Nathan claimed that a pre-entry closed shop agreement requiring
membership of the Irish Print Union for an employment position was
discriminatory because its membership was predominantly male. The
Supreme Court held that the complainant was ‘not required to prove a
causal connection between the practice complained of and the sex of the
complainant’. It is sufficient for a prima facie case ‘to show that the
practice complained of bears significantly more heavily on members of the
complainant's sex than on members of the other sex’.
Thus, it should be
sufficient for a complainant to show that an algorithm has a
disproportionate impact against their protected group, without having to
demonstrate why that is the case. This would greatly alleviate the burden
imposed by the black box, which makes it technologically almost
impossible to uncover an algorithm’s decision-making reasoning and the
causes of its bias.
Valpeters v Melbury Developments Ltd [2010] ELR 64; Sheehy Skeffington v National
University of Ireland, Galway DEC-E2014-078 [4.5].
Meehan (n 82).
[1996] ELR 114.
Cathy Maguire, ‘Nathan v. Bailey Gibson: Curing Past Injustices?’ (1996) 14 Irish Law
Times 232, 234. See also Essop (n 42) [33].
Nathan (n 94) 128.
However, the threshold to prove the existence of the particular
disadvantage remains high, even if the complainant does not have to show
why it exists. It is insufficient to show that a provision disadvantages a
protected group, a complainant must show that the disadvantage suffered
is a ‘particular’ disadvantage.
This requirement received limited
attention initially, until the decision in Stokes v Christian Brothers High
School Clonmel. Boys whose fathers had attended the school were offered places
without having to enter the lottery for admission. The complainant was a
Traveller child whose father had not attended the school and was
unsuccessful in his application to attend the school. The complainant used
statistical evidence to show the very low proportion of Travellers from his
father’s generation who had attended secondary school, which
demonstrated ‘extreme educational deprivation’
amongst the Traveller
community. The complainant argued that the parental policy was
therefore indirectly discriminatory. Clarke J in the Supreme Court
interpreted ‘particular disadvantage’ to mean ‘significant or appreciable’
disadvantage. However, the threshold for ‘significant or appreciable’ is
still very onerous, as the Supreme Court ultimately dismissed the
complainant’s appeal on the grounds that there was insufficient statistical
evidence to show that Traveller children suffered a particular
disadvantage. Even though fewer than 100 Travellers of his father’s
generation attended post-primary school and the parental rule reduced the
complainant’s chance of admission to the school from 70 percent to 55
percent, the Court concluded that this was not ‘significant or
appreciable’. The decision therefore demonstrated that the Court
demands a high statistical disparity to show particular disadvantage. By
‘focusing on statistics and ignoring the discrimination’, the Supreme Court
‘erected considerable barriers to successful indirect discrimination’
claims. Thus the decision ‘will undoubtedly lead to an increased burden
on complainants to demonstrate indirect discrimination’ if it is applied in
the employment sphere.
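The disparity at issue in Stokes can be set out as simple arithmetic. A minimal sketch, using only the 70 percent and 55 percent chances of admission cited above, quantifies the gap that the Court nonetheless found insufficient:

```python
# Disparity arithmetic from the figures cited in Stokes: the parental rule
# reduced the complainant's chance of admission from 70% to 55%.
p_without_rule = 0.70  # chance of admission absent the parental rule
p_with_rule = 0.55     # chance of admission under the parental rule

absolute_drop = p_without_rule - p_with_rule          # 15 percentage points
relative_reduction = absolute_drop / p_without_rule   # roughly one fifth
ratio_of_chances = p_with_rule / p_without_rule

print(f"absolute drop: {absolute_drop:.2f}")
print(f"relative reduction: {relative_reduction:.1%}")
print(f"ratio of chances: {ratio_of_chances:.2f}")
```

On these figures the rule cut the complainant’s chance of admission by roughly a fifth in relative terms, yet the Court did not regard the disparity as ‘significant or appreciable’.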
Although the abundance of data in AI-driven decision-making may
appear to be an advantage for a complainant seeking to demonstrate
Bolger, Bruton and Kimber (n 12) [2-206].
[2015] IESC 13. See also Enderby (n 69) [19].
Cousins (n 88) 158.
Stokes (n 99) [9.2].
ibid [3.7]; Cousins (n 88) 159.
Stokes (n 99) [9.3].
Cousins (n 88) 157.
Kimber and Bruton (n 63) [17.17].
‘significant or appreciable’ disadvantage arising from classification bias, it
is not. Precisely because so much data informs the AI, it is ‘highly likely
that any difference between demographic groups, no matter how slight,
will be statistically significant’.
Therefore, because the Court appears to
demand high statistical disparities to demonstrate particular disadvantage,
a discriminatory algorithm is unlikely to satisfy that threshold even where
it exhibits bias. The consequence is that, as King
and Mrkonich state, ‘statistical criteria risk trivializing the important
question of what constitutes discrimination’.
Statistical evidence is
obviously very important for understanding the discriminatory impact of
a neutral provision, but seeking large statistical differences and ignoring
the context of a case can result in an unjust outcome, as in Stokes.
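King and Mrkonich’s point can be illustrated with a standard two-proportion z-test. The selection rates and sample sizes below are invented for illustration, but they show how the same half-point gap between groups, statistically insignificant in a small applicant pool, becomes highly significant at the scale of AI-driven screening:

```python
import math

def two_prop_z(p1, p2, n1, n2):
    """Two-proportion z-test on observed selection rates; returns (z, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the normal tail
    return z, p_value

# A half-point gap in selection rates between two groups (10.0% vs 9.5%),
# tested first with a small applicant pool, then at AI scale.
for n in (200, 200_000):
    z, p = two_prop_z(0.100, 0.095, n, n)
    print(f"n={n:>7} per group: z={z:.2f}, p={p:.4f}")
```

At 200 applicants per group the gap is statistically indistinguishable from chance; at 200,000 per group the identical gap is significant at any conventional level, even though the practical disparity has not changed.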
For AI, that means a court must adjust its understanding of statistical
disparity: disparities may appear numerically smaller, yet be highly
significant because of the vast volumes of data that inform AI-driven
decisions. Thus, if Stokes is not applied to the
employment sphere or if the court interprets ‘significant or appreciable’
differently in the case of data-heavy AI, then this impediment can be
overcome for complainants who have access to data that demonstrates the
disproportionate impact of AI-driven recruitment screening.
Therefore, although the partial shift of the burden of proof mitigates
the difficulties a complainant experiences, in the case of
classification bias the obstacles currently remain challenging because a
classification bias complainant cannot access the data necessary to prove
their prima facie case. Additionally, the current statistical threshold
expected by the court, as outlined in Stokes, is inappropriate in the context
of AI. However, if this were reframed to account for the data-driven nature
of AI, this obstacle can be overcome.
iii. The Employer’s Burden and Objective Justification
In the unlikely event that a complainant successfully makes out a prima
facie case of indirect discrimination arising from classification bias, the
burden of proof shifts to the employer.
They can either rebut the claim
or objectively justify the provision. It is submitted that by taking the latter
path, an employer can likely avoid liability for even a discriminatory algorithm.
King and Mrkonich (n 38) 569.
ibid 568.
Kimber and Bruton (n 63) [17.27].
To rebut a prima facie case, the Labour Court in Portroe Stevedores
held that because an employer possesses the necessary facts to
provide an explanation for the prima facie discrimination, they must
deliver ‘cogent evidence’
to discharge the burden. It is certainly the case
that an employer possesses superior insight into the data model in
question. However, an employer is more likely to prefer objectively
justifying the model because, in the face of AI’s opaque black box,
demonstrating cogent evidence of a transparent and well-documented
selection procedure is very challenging.
Thus, objective justification represents a more ‘straightforward’ route
compared to rebutting the prima facie case. The definition of
objective justification provided by the Employment Equality Acts is not
particularly specific.
The Labour Court articulated a more detailed
interpretation of objective justification in Department of Justice, Equality
and Law Reform v The Civil Public and Services Union.
Here it was held
that objective justification requires objective reasons unrelated to the
discriminatory ground in question. These reasons must further be
necessary, appropriate and proportional to achieve the objective pursued,
correspond to a real need on the part of the undertaking, and be utilised as
a justification throughout the period during which the discriminatory
treatment existed. For the first aspects of the test, that there are objective
reasons for the difference unrelated to the protected characteristic in
question, an employer has a strong hand. As Hacker explains, statistics are
the basis of machine learning and engineers can closely measure its
statistical predictive accuracy, i.e. how well it predicts future job
performance, giving the appearance that an algorithm is strongly
objective. However, biased outcomes and predictive accuracy are not
mutually exclusive. Where an algorithm is tested based on biased data, its
predictive accuracy can still appear to be sound and an algorithm to appear
objective, even if it is discriminatory.
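This dynamic can be sketched in a toy simulation; the groups, rates and labels below are invented for illustration, not drawn from any real system. A model that simply reproduces biased historical hiring decisions scores perfectly when evaluated against those same decisions, while selecting the disadvantaged group at a markedly lower rate:

```python
import random

random.seed(0)

# Assumed scenario: candidates from two groups are equally suitable, but the
# historical hiring labels used to evaluate the model under-rate group B.
def historical_label(suitable, group):
    # Past decisions: suitable group-A candidates were always hired; suitable
    # group-B candidates were hired only 60% of the time (structural bias).
    if not suitable:
        return 0
    return 1 if group == "A" or random.random() < 0.6 else 0

candidates = [(random.random() < 0.5, random.choice("AB")) for _ in range(10_000)]
labels = [historical_label(s, g) for s, g in candidates]

# A model that has learned to replicate the biased past decisions...
predictions = list(labels)

# ...appears perfectly "accurate" when tested against those same labels,
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# yet its selection rates differ sharply between the two groups.
def selection_rate(group):
    picked = [p for p, (_, g) in zip(predictions, candidates) if g == group]
    return sum(picked) / len(picked)

print(f"accuracy vs biased labels: {accuracy:.0%}")
print(f"selection rate A: {selection_rate('A'):.1%}, B: {selection_rate('B'):.1%}")
```

The measured ‘predictive accuracy’ is flawless precisely because the benchmark is the biased data itself, which is the sense in which accuracy and discrimination are not mutually exclusive.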
The decision in Stokes
demonstrated that the courts value statistics in forming their
understanding of discrimination and AI relies heavily on statistical
[2005] ELR 282.
Kullman (n 55) 15.
Hacker (n 50) 1160.
Kimber and Bruton (n 63) [17.18].
Hacker (n 50) 1161-1162.
Barocas and Selbst (n 39) 707; Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep
Learning (MIT Press 2016) 113-115.
assessments of job relatedness, thus appearing to be objective. Therefore,
a mechanical application of parts (1) and (2) provides a very favourable
outlook for an employer. Thus, even though AI-driven recruitment may
increase instances of indirect discrimination, ‘claims of predictive
accuracy of algorithms often furnish an easy business justification to
entities using machine-learned models’ to justify these effects.
In terms of the test requiring that the difference corresponds to a
real need, and is appropriate and necessary to the objective pursued, the
Internet has facilitated an exponential increase in the number of job
applications that roles receive.
Indeed, HireVue’s website states that
Unilever received 250,000 applications for 800 vacancies.
Herein lies an
‘easy business justification’,
because employers can demonstrate a real
need for AI to deal with this additional influx of time-consuming
applications. The objective for a recruiting employer is to ‘[find] the right
person for the job’
and, at least in terms of time, AI makes that process
far more efficient.
The CJEU outlined a ‘rigorous test of justifiability’ in Bilka-Kaufhaus
v Weber von Hartz.
Namely, ‘the means chosen for achieving... [an]
objective [must] correspond to a real need on the part of the undertaking,
[be] appropriate with a view to achieving the objective in question and are
necessary to that end.’
However, the CJEU has regularly been ‘too
willing’ to accept justifications for indirect discrimination, particularly
‘where market forces are most clearly at stake’.
Thus, a court may be
entirely willing to accept the economic ground of improved efficiency in
light of increases in the number of applications that employers have to
process. Given that humans are limited by their own bias and AI
purportedly sidesteps this issue, a court is unlikely to find that an
algorithm is an inappropriate measure for achieving the objective pursued
unless a stricter test for justification is adopted.
Hacker (n 50) 1145.
Stephen Buranyi, ‘‘Dehumanising, impenetrable, frustrating’: the grim reality of job
hunting in the age of AI’ The Guardian (4 March 2018)
frustrating-the-grim-reality-of-job-hunting-in-the-age-of-ai> accessed 11 February 2020.
HireVue, ‘Unilever finds top talent faster with HireVue assessments’ (n 11).
Hacker (n 50) 1145.
Alicia Compton ‘Overview of recruitment and selection’ (2006) Irish Employment Law
Journal 3(4) 124, 124.
(Case C-170/84) [1986] ECR 1607; Bolger, Bruton and Kimber (n 12) [1-33].
(Case C-170/84) 1631.
Bolger, Bruton and Kimber (n 12) [1-33].
The final significant hurdle to successfully demonstrating an
objective justification for classification bias is proportionality. According
to O’Leary, the proportionality limb has offered Member States significant
latitude. It is therefore difficult to judge how a court would manage
these opposing interests, but employers can suggest that using AI allows
them to widen the net of applicants such that the pool of candidates
supports increased diversity. This might involve a consideration of the
limitations associated with human bias, compared to the structural bias
that influences AI. This is challenging because, as explained above, AI can
reduce the impact of individual bias between individuals, whilst
simultaneously entrenching structural biases. A further difficulty
associated with assessing proportionality is that it is possible for an
algorithm to be biased while maintaining strong predictive accuracy in
terms of selecting candidates who go on to become successful employees.
However, in light of the challenge faced by employers to fairly assess job
candidates when there may be thousands of applications for a small
number of jobs, alongside the touted credentials of AI to objectively
screen candidates whilst reducing human bias, it seems likely that a court
will find such measures to be proportional.
In summary, for a complainant to succeed in a classification bias
action, they must make out a prima facie case which is ‘an almost
impossible task without access to the data and algorithms’.
They must
also fulfil an outdated test for statistically ‘significant or appreciable’
particular disadvantage, that is not appropriate for heavily data-driven
algorithms. Having surpassed these hurdles, an employer then has a strong
case to objectively justify the practice, unless the court enforces a stricter
standard. Thus, an alternative model is required to offer such complainants
meaningful accountability and redress.
If AI is to deliver on its promises of an objective and bias-free recruitment
process, it must be transparent and fair. Given the obstacles for
complainants outlined above, this requires looking beyond the traditional
Síofra O’Leary, Employment Law at the European Court of Justice: Judicial Structures,
Policies and Processes (1st edn, Hart Publishing 2002) 149-150.
Hacker (n 50) 1164.
ibid 1169.
boundaries of employment equality law. The GDPR
offers ex ante protection against classification bias, whilst the introduction of
transparency obligations better equips those individual litigants who, as a
last resort, go to court to make out a prima facie case. It is not proposed to
enter into a full analysis of the GDPR due to its breadth; however, a marriage
of certain provisions of the GDPR and employment law offers a more
effective enforcement mechanism than individual litigation alone.
The ex ante provisions of the GDPR are a welcome change, because
this contributes to preventing bias happening in the first place and reduces
the dependence of employment equality law on individual litigation as an
enforcement mechanism.
Not only do costs act as a significant deterrent
for complainants,
but because the harms associated with classification
bias are more ‘diffuse’
given its structural nature, the regulatory nature
of the GDPR is also more appropriate. Additionally, the GDPR’s
principle-based approach, including fairness and transparency, ensures that the
regulation will not be outpaced by technological advances in the near
future. Moreover, the regulator is better equipped than courts to grasp the
complexity of algorithmic decision-making.
Finally, the ‘headline-
grabbing fines’
of up to 20 million or 4 percent of worldwide turnover
put the GDPR ‘on the agenda’
for employers in a way that individual
litigation does not.
Furthermore, the Data Protection Act provides that a data subject
can bring a data protection action ‘where he or she considers that his or
her rights under a relevant enactment have been infringed as a result of
the processing of his or her personal data in a manner that fails to comply
with a relevant enactment’.
The Court has the power to provide ‘relief
by way of injunction or declaration’ or compensation for damage suffered
In Ireland, the GDPR is given practical effect by The Data Protection Act 2018.
Hacker (n 50) 1145-1146. Evelyn Ellis and Philippa Watson, EU Anti-Discrimination Law
(2nd edn, OUP 2012) 506.
Noted in the context of credit score algorithmic decision-making, see Keats Citron and
Pasquale (n 2) 16.
Kim, ‘Data-Driven Discrimination’ (n 13) 934.
Ann-Marie Hardiman, ‘Protecting our privacy’ (2018) 23(6) The Bar Review 156, 157.
For example, State v Loomis (n 1).
Eoin Cannon, ‘Data Protection Act 2018’ (2018) 23(3) The Bar Review 79, 81. See Council
Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing
of personal data and on the free movement of such data, and repealing Directive 95/46/EC
(General Data Protection Regulation) [2016] OJ L 119/1 art 83.
Michelle Ryan, ‘The General Data Protection Regulation and Obligations for Irish
Employers’ (2018) 15(2) IELJ 48, 49.
S 117(1) Data Protection Act, implementing Article 82(1) GDPR.
by the data subject.
This provides individuals who have experienced
classification bias with a potential alternative avenue to employment
equality law, through which they can receive compensation from an
employer who has used a discriminatory AI-driven recruitment screening
process. The GDPR's capacity for compensating individuals for breaches
of their rights has been discussed extensively elsewhere by O'Dell, who
looks to provisions within the Data Protection Bill 2017 that, though
under-utilised thus far, are intended to empower individuals in taking
claims against data controllers. Unfortunately, discussion of the
framework for individual compensation within Ireland is outside the scope
of this article, but such a provision could be a welcome avenue for litigants
to pursue their claims as an alternative to employment equality law.
Even though the GDPR can mitigate against the impact of
classification bias, its provisions were not necessarily written with this
issue specifically in mind. This results in the need to combine a patchwork
of provisions, as outlined below. Wachter and Mittelstadt propose a 'right
to reasonable inferences', which would lead to a more comprehensive and
coherent mechanism to tackle classification bias.
However, as the GDPR
already represented a sweeping change to data protection law, it is
unlikely that we will see any significant changes in the immediate future.
The provisions outlined below will at least contribute to preventing
discrimination happening in the first place and to creating greater transparency.
A. Data Protection Impact Assessments (DPIA)
Given that AI tools will always reflect structural bias to some degree, the
situation demands ‘scrutiny of how these systems operate in practice’.
The GDPR appears to offer an appropriate mechanism to require
employers to assess the risks of AI-driven recruitment screening processes
and classification bias. Data controllers must conduct Data Protection
Impact Assessments (DPIA) prior to data processing where it ‘is likely to
result in a high risk to the rights and freedoms of natural persons’.
Anyone who decides the purpose and means of processing personal data
S 117(4) Data Protection Act.
Sandra Wachter and Brent Mittelstadt, ‘A Right to Reasonable Inferences: Re-Thinking
Data Protection Law in the Age of Big Data and AI’ (2019) 2 Columbia Business Law
Review 1.
Pauline T Kim, ‘Auditing Algorithms for Discrimination’ (2017) 166 University of
Pennsylvania Law Review Online 189, 196.
General Data Protection Regulation (n 134) art 35(1).
is considered to be a data controller under the GDPR. Employers who decide
to process personal data for the purposes of AI-driven recruitment screening
would thus be considered data controllers. The DPIA means that data
controllers have to ‘identify and
mitigate against any data protection related risks arising from’ the
processing. This includes assessing ‘the necessity and proportionality of
the processing’, assessing ‘the risks to the rights and freedoms of data
subjects’ and ‘the measures envisaged to address the risks’.
Failure to
conduct a DPIA, when the data processing requires it, can result in an
administrative fine issued by the Irish Data Protection Commission.
Given that equal treatment is a key tenet of European Union law,
the risk of classification bias for access to employment should prompt
employers to carry out a DPIA.
Article 35(3) cites, amongst other
examples, that a DPIA is required when there is ‘extensive evaluation of
personal aspects…which is based on automated processing…and on which
decisions are based that produce legal effects…or similarly significantly
affect the natural person’. Just as classification bias excludes individuals
from employment opportunities on the basis of prohibited discriminatory
grounds, the Article 29 Working Party has said that data processing which
leads to ‘the exclusion or discrimination against individuals’ can constitute
legal or similar significant effect.
The Article 29 Working Party has also
made clear that the reference to ‘rights and freedoms’ extends to include
the prohibition of discrimination and DPIAs are ‘particularly relevant
when a new data processing technology is being introduced’.
Thus, DPIAs are necessary in the context of AI-driven recruitment screening
processes and require employers to identify and mitigate the associated
risks to job candidates. Measures to mitigate against the risks associated
ibid art 4(7).
Data Protection Commission, ‘Data Protection Impact Assessments’
protection-impact-assessment> accessed 11 February 2020.
General Data Protection Regulation (n 134) article 35(6) and recitals 84 and 90.
ibid art 35; Article 29 Working Party ‘Guidelines on Data Protection Impact Assessment
(DPIA) and determining whether processing is ‘likely to result in a high risk’ for the
purposes of Regulation 2016/679’ WP 248 rev.01 (4 April 2017) 4.
A long-established principle with origins in the Treaty of Rome. See Bolger, Bruton and
Kimber (n 12) [2-20].
American scholars have noted the importance of technological due process for AI
decision-making. See Danielle Keats Citron, ‘Technological Due Process’ (2008) 85(6)
Washington University Law Review 1249.
Article 29 Working Party ‘Guidelines on Data Protection Impact Assessment (DPIA)’ (n
144) 9.
ibid 6, 8.
with these systems could include auditing algorithms for bias or investing
in unbiased data.
Consequently, DPIAs encourage employers to be more
proactive about addressing the risks of classification bias, which reduces
the dependence on the mechanism of individual litigation, through
employment equality law, to address these harms.
If, having conducted a DPIA, the risks to data subjects remain high
and cannot be adequately addressed, then employers would be required to
consult the Data Protection Commission (DPC).
The Data Protection Act
2018 (the Data Protection Act) requires the DPC to respond within six
weeks of being consulted, though the DPC has not been entirely clear about
what may be included in its response.
However, the UK Information
Commissioner has stated that when they are provided with a DPIA, they
will provide a written response.
They will advise as to whether the risks
are acceptable and if not, outline further action required or advise not to
carry out the processing.
If the Irish DPC took a similar approach, this
would prevent the use of recruitment algorithms that perpetuate
classification bias, where action cannot be taken to mitigate against that risk.
B. GDPR Principles: Accuracy, Fairness and Transparency
There is also great promise in the twin provisions of Article 5, which
includes the accuracy and fairness principles,
and Recital 71, which
states that data controllers ‘should use appropriate mathematical or
statistical procedures…[to] prevent, inter alia, discriminatory effects’. One
way that employers could fulfil this obligation is by developing a set of
organisational ‘risk management guidelines’ to improve transparency and
interpretability of the algorithm’s mechanics, reduce bias in the data,
outline review procedures of the algorithm prior to implementation and
detail steps to address any bias that is detected.
Employers’ failure to be
See Kim, ‘Auditing Algorithms’ (n 139)
Article 29 Working Party ‘Guidelines on Data Protection Impact Assessment (DPIA)’ (n
144) 19.
Data Protection Act 2018 s 84.
Information Commissioner’s Office, ‘Data protection impact assessments’
assessments/> accessed 13 February 2020.
See also Data Protection Act 2018 s 71(1).
Thomas H. Davenport and Vivek Katyal, ‘Every Leader’s Guide to the Ethics of AI’ (6
December 2018) <
proactive in this way should entail liability under the GDPR, though it is
ultimately up to the DPC as to how this would materialise in practice.
The Article 29 Working Party has also stated that the transparency
principle requires data controllers to ‘explain clearly and simply to
individuals how the profiling or automated decision-making process
works’ and that data subjects have a ‘right to be informed by the controller
about automated decision-making’.
This can be read as requiring
employers to inform prospective job candidates that AI assists their
recruitment process and how it does so. This way, potential classification
bias complainants will at least know how AI is used throughout the
recruitment process, which would better equip them to pursue an
employer under employment equality law.
C. Automated Decision-Making
The GDPR also offers a suite of individual rights that act as a ‘sword as
well as a shield’.
Included is a right to access any personal data processed
and, in the case of automated decision-making, ‘meaningful information
about the logic involved’.
Although this does not allow individuals to
access the personal data of others, this provision can be read as requiring
employers to furnish individuals with aggregate data concerning the
existence of algorithmic bias ‘if the bias can be understood as part of the
consequences of the processing for the data subject’.
Although it will
take time to see if Article 15(1) stretches this far, such information could
greatly alleviate the burden for complainants seeking to make out a prima
facie case of classification bias.
However, a limitation on the effectiveness of this provision may not
be the letter of the law, but the technical feasibility of ‘opening up the
black box’ to provide meaningful information about an algorithm’s logic.
AI’s black box is inherently very challenging to scrutinise in terms of
ai/?utm_source=twitter&utm_medium=social&utm_campaign=sm-direct> accessed 11
February 2020.
Article 29 Working Party, ‘Guidelines on Automated individual decision-making and
Profiling for the purposes of Regulation 2016/679’ WP251rev.01 (6 February 2018) 16; See
General Data Protection Regulation (n 134) arts 5(1)(a), 13 and 14.
Ciarán O’Mara, ‘The EU General Data Protection Regulation and its Impact on
Employment Law’ (2016) 13(4) IELJ 114, 114.
General Data Protection Regulation (n 134) arts 15(1) and 15(1)(h).
Hacker (n 50) 1174.
understanding what drives a particular algorithm’s decision. Thus,
the GDPR’s information requirements are not a panacea for addressing
classification bias. It appears that the technology industry has made efforts
towards creating ‘explainable AI’,
but further developments will be necessary, something that regulation
should seek to incentivise.
Despite this, access to aggregate data would at least allow for the
identification of classification bias, which may be sufficient to make out a
prima facie case of indirect discrimination, even if candidates cannot
access the logic involved in the decision.
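If Article 15(1) were read to require disclosure of aggregate figures, even a very simple computation over those figures could surface the kind of disparity needed for a prima facie case. The counts below are hypothetical, invented purely to illustrate the comparison of group pass rates:

```python
# Hypothetical aggregate figures of the kind Article 15(1) access might yield:
# applicants screened in by the algorithm, broken down by a protected
# characteristic. All numbers are invented for illustration.
aggregate = {
    "group_a": {"applied": 1200, "passed_screen": 420},
    "group_b": {"applied": 1100, "passed_screen": 231},
}

# Pass rate for each group, and the ratio of the lower rate to the higher.
rates = {g: d["passed_screen"] / d["applied"] for g, d in aggregate.items()}
ratio = min(rates.values()) / max(rates.values())

for g, r in rates.items():
    print(f"{g}: pass rate {r:.1%}")
print(f"ratio of pass rates: {ratio:.2f}")
```

A ratio well below parity on figures like these would be exactly the sort of fact from which an inference of indirect discrimination might be drawn, without any need to open the black box of the algorithm’s logic.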
D. Does the GDPR Prohibit Automated Recruitment Decisions?
Article 22(1) of the GDPR provides that ‘[t]he data subject shall have the
right not to be subject to a decision based solely on automated processing,
including profiling, which produces legal effects concerning him or her or
similarly significantly affects him or her.’
This would appear to prevent
automated decision-making in the employment sphere, because
discrimination in access to employment produces effects that are similarly
significant to legal effects.
However, Article 22(1) only applies in the case of ‘solely’
automated decision-making, which means it is only relevant to decisions
taken without human involvement.
This provision could therefore apply
when an algorithm screens CVs without any human review, but not when
an algorithm reviews and scores video interviews with the decision still
ultimately taken by a person. Consequently, employers can still use AI to
inform their recruitment decisions as long as there is a person involved at
some stage in the process. Classification bias can still occur under these
circumstances, because there is evidence to suggest that ‘[i]ndividuals tend
to weigh purportedly expert empirical assessments more heavily than
Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations
Without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31(2) Harvard
Journal of Law and Technology 841, 842.
Leo Kelion, ‘Google tackles the black box problem with Explainable AI’ BBC News (24
November 2019) accessed 13 February 2020.
This is enshrined in the Data Protection Act 2018, s 57.
Article 29 Working Party, ‘Guidelines on Automated individual decision-making and
Profiling’ (n 156) 22.
ibid 8.
Trinity College Law Review [Vol 23
nonempirical evidence’
and tend not to contradict algorithmic assessments.
Moreover, the impact of Article 22(1) is further limited by s 57(b) of
the Data Protection Act 2018.
This section allows for solely automated
decision-making where ‘adequate steps have been taken by the controller
to safeguard the legitimate interests of the data subject’, including that the
data subject should be able to ‘request human intervention in the decision-
making process’ and to ‘appeal the decision’. Thus, as long as an employer
provides these safeguards then they are entitled to use AI to screen
candidates using solely automated means. Although prospective
candidates may request human intervention, they may naturally fear
that such a request would negatively impact the success of their
application.
meaning of ‘human intervention’. It therefore appears that AI can still
inform the recruitment process even where candidates request human
intervention, as long as a human is still involved in the process. This could
include merely reviewing AI-driven scoring of candidates, and classification
bias can thus still occur.
Overall, it is possible to tentatively say that the ex ante and
transparency measures of the GDPR, as well as its obligations for data
controllers, will operate to achieve the parallel aims of preventing
classification bias in the first place and furnishing individuals with the
requisite data to discharge the burden of proof for a prima facie case of
indirect discrimination. However, the effectiveness of the GDPR is
constrained to some extent by the opacity of AI’s black box. Regulators
will need to encourage the technology industry to make further
developments in creating AI algorithms that are more interpretable in
terms of the logic involved in their decisions.
State v Loomis (n 1); See Stephen A Fennell and William N Hall, ‘Due Process at
Sentencing: An Empirical and Legal Analysis of the Disclosure of Presentence Reports in
Federal Courts’ (1980) 93 Harvard Law Review 1613, 1668–1670.
Angèle Christin, Alex Rosenblat and Danah Boyd, ‘Courts and Predictive Algorithms’
(Data & Civil Rights: A New Era of Policing and Justice 2015) 8
accessed 15 February 2020.
See also Data Protection Act 2018 s 89.
AI promises to revolutionise our future but, simultaneously, it
threatens to rekindle problems associated with historical discrimination
rooted in our past. This article has shown how, despite extensive evidence
of AI inheriting existing structural biases, employers are using the
technology to screen applicants as part of their recruitment processes.
Although this can lead to classification bias which corresponds to our
understanding of indirect discrimination, potential complainants must
overcome a number of practical barriers to successfully make out an
indirect discrimination claim. It is unlikely that a complainant will have
access to the algorithm to successfully establish facts of sufficient
significance to infer discrimination and thus to meet the burden of proof.
Additionally, the court’s understanding of statistical disparity is not
appropriate for data-driven AI decisions, which inhibits a complainant’s
ability to prove a causal connection and particular disadvantage. Finally,
even if a complainant can surpass these hurdles, the employer is likely to
be able to objectively justify the practice of using AI to screen applications.
Therefore, employment equality law is not an effective mechanism to
address the harms entailed by classification bias at the access to
employment stage.
Consequently, since ‘employment [has] become intrinsically linked
to data processing’,
the GDPR offers a novel and effective solution to
prevent discrimination happening in the first place through DPIAs, which
can reduce dependence on individual litigation as an enforcement
mechanism. The Regulation can also lift the veil on the trade secrecy that
shrouds the data necessary for a complainant to make out a prima facie
case, should these preventative measures fail. However, the technology
industry must prioritise developing explainable AI, which regulators may
need to take action to incentivise.
Thus, although it is a common complaint that technology appears to
consistently outpace the law, there is reason to be optimistic that the law
can adapt quickly enough to the issue of classification bias. Throughout
the development of discrimination law, legislators and judges have moved
at ‘remarkable speed’.
It is sincerely hoped that regulators and judges
take up the mantle to adapt the statistical understanding of particular
Hacker (n 50) 1171.
Tarunabh Khaitan, A Theory of Discrimination Law (Oxford University Press 2015) 3.
disadvantage outlined in Stokes, adopt a strict stance for preventing
employers objectively justifying classification bias, and rigorously enforce
the GDPR, in order to avoid freezing historical disadvantage through AI-
driven decision-making.