1 Introduction
As algorithms transform the online and offline worlds, with consequences for equality, this chapter explores the role of businesses in ensuring a human rights compliant approach to regulating algorithms.1 States and businesses around the world increasingly invest in Artificial Intelligence (“ai”) and adopt and rely on ai systems2 to make decisions.3 Biases and discrimination can occur when ai and algorithms are used to complement or substitute human decision-making. For example, biases can occur in credit lending4 or in algorithmic recruitment tools, which use datasets to train, validate, test, and run the algorithm. Those datasets rely on historic data, which might contain stereotypes and biases from the real world and therefore lead to biased or discriminatory decisions.5 Biases can
The Toronto Declaration, which aims to protect the right to equality and non-discrimination in machine learning systems, clearly sets out to hold private sector actors to account by specifically stating “[i]nternational law clearly sets out the duty of states to protect human rights; this includes ensuring the right to non-discrimination by private sector actors.”
States should put in place regulation compliant with human rights law for oversight of the use of machine learning by the private sector in contexts that present risk of discriminatory or other rights-harming outcomes, recognizing technical standards may be complementary to regulation. In addition, non-discrimination, data protection, privacy and other areas of law at national and regional levels may expand upon and reinforce international human rights obligations applicable to machine learning.14
Relying on UN reports and policy proposals on race, disability, and gender as well as existing and proposed human rights frameworks on ai, this chapter discusses biased decision-making and discriminatory outcomes of ai. Rather
First, assuming regulating ai is a conditio sine qua non, the author sketches out the role of business in supporting and ensuring a human rights compliant approach to avoid discrimination (Section 3). Self-binding ai principles and standards of companies are mushrooming worldwide (Section 2). In parallel, international organizations and governments are proposing regulation on ai to protect human rights (Section 4). The analysis will be based on a review and discussion of selected relevant self-binding ai standards of business as well as soft law and legal proposals that foresee a specific role of businesses to achieve human rights in the algorithmic context.
In conclusion, the chapter argues for recognizing a shared responsibility of businesses and states for human rights in the algorithmic age. It identifies and sketches out their respective roles and regulatory approaches to achieve less bias and discrimination (see Section 6 on elements and recommendations for a potential “shared responsibility” framework). Involving businesses as partners and addressees of obligations imposed by legal frameworks is more important than ever in a world where ai technologies, such as llms,17 are developed and deployed more quickly than regulators are able to adopt or adapt new binding rules.18 In any case, while waiting for new general or specific ai rules to be adopted, regulators can rely on general principles of law or on specific laws dealing with the technological issues to investigate and address algorithmic discrimination to the extent possible.
2 Self-binding ai Principles and Ethical Standards (Soft Law)
Without much exaggeration, one can speak of the mushrooming of ai principles, best practices, and ethical guidelines on ai.
2.1 The Main Principles in Non-binding Guidelines
Companies follow ai principles that typically address some of the main topics of equality and non-discrimination using wording derived from the diversity, inclusion, equality, and non-discrimination context.25 As such, most guidelines issued by the major ai companies (gafam)
We are also prioritizing ai Diversity & Inclusion education efforts for our ai team when hiring and training employees, and setting clear d&i expectations for our ai managers. We aim to better ensure that the people making our ai products are from as diverse a range of backgrounds and perspectives as the people using them, and that we are inclusive of a broad range of voices in our decision-making.29
It is good that companies address these issues in their ai principles, but in practice they also need to follow up on those promises to ensure that the diverse mindsets of developers lead to less biased algorithm designs. Such general statements in ai company policies resemble marketing statements and should similarly be complemented by measurable targets, such as achieving 40% representation of underrepresented groups. Statistics on the representation of women in ai companies could inform customers of these ai projects whether the stated priority goals are actually pursued in hiring and training. In addition, the decision-making process should be made more transparent in order to verify whether the ai lifecycle indeed includes the broad range of backgrounds claimed in the company policy.
On biases, Google outlines its position in its ai Principles under the heading “2. Avoid creating or reinforcing unfair bias.” It states that
ai algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
While efforts to address biases and unjust impacts on humans are welcome, merely repeating legal requirements is not sufficient in the algorithmic age. Discrimination based on protected characteristics is prohibited in most jurisdictions around the world and should therefore clearly guide the company’s actions. Amazon dedicates one paragraph to biases in its guide on Responsible Machine Learning30 and utilizes a tool called Amazon
On fairness, Google acknowledges that
First, ml models learn from existing data collected from the real world, and so an accurate model may learn or even amplify problematic pre-existing biases in the data based on race, gender, religion or other characteristics. For example, a job-matching system might learn to favor male candidates for ceo interviews, or assume female pronouns when translating words like “nurse” or “babysitter” into Spanish, because that matches historical data.
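The mechanism described in this example can be made concrete with a minimal, hypothetical sketch: a classifier trained on historical hiring records in which one group was systematically disadvantaged will tend to reproduce that disparity in its own predictions. The data, group labels, and numbers below are invented for illustration and do not describe any actual company system.

```python
# Synthetic illustration: a model trained on historically biased hiring
# outcomes reproduces the bias in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (protected attribute)
skill = rng.normal(0.0, 1.0, n)    # equally distributed across both groups

# Historical labels: equally skilled applicants from group B were hired
# less often - the "real-world" bias baked into the training data.
p_hire = 1.0 / (1.0 + np.exp(-(skill - 0.8 * group)))
hired = rng.random(n) < p_hire

# The model is fit on the historical outcomes (the protected attribute is
# included explicitly only to keep the example short; proxy features can
# produce the same effect).
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g, name in ((0, "group A"), (1, "group B")):
    print(f"predicted hiring rate, {name}: {pred[group == g].mean():.2f}")
# The predicted hiring rate for group B is markedly lower, mirroring the
# disparity present in the historical data.
```

The point is not the specific numbers but that, absent deliberate countermeasures, “accuracy” with respect to biased historical data simply means faithfully reproducing the bias.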
Discussing concrete examples of biases and potential discrimination is useful, as it draws attention to the risks and shows the company’s awareness of the problems. Microsoft, for example, set a goal to minimize stereotyping, stating that “Microsoft ai systems that describe, depict, or otherwise represent people, cultures, or society are designed to minimize the potential for stereotyping (…) identified demographic groups, including marginalized groups.” Facebook also states: “To help us consider these issues from a broad range of perspectives, Facebook’s Responsible Innovation team and Diversity, Equity & Inclusion team both facilitate input from a wide range of external experts and voices from underrepresented communities.”33 Here again, broad
While the ai principles are, in their own way, a marketing product – the wording is appealing and addresses a popular issue – there are still many limits and shortcomings when companies aim to address and effectively diminish algorithmic discrimination. Notably, many company guidelines remain vague and, while they set out general principles or ideas, they do not concretely show how the company attempts to avoid discriminatory outcomes (see Table 5.1).
Table 5.1: Human rights values contained in tech companies’ ai policies (human rights values modelled and based largely on the oecd 2020 Recommendation on ai; values attributed on the basis of the analysis of the selected tech companies’ ai guidelines or principles)a
| Human rights values | Amazon | | |
|---|---|---|---|
| Non-discrimination | (-) | (-) | (-) |
| Equality | (+)* | (+)* | (+)* |
| Diversity | (+)* | (+)* | (+)* |
| Fairness | (+)** | (+)** | (+)*** |
| Internationally recognized labour rights | (-) | (-) | (-) |
| Biases | (+)** | (+)** | (+)** |
a (+) indicates that the company has a policy on the relevant human rights value and (-) indicates that no dedicated policy was formulated in the ai guidelines. In addition, each company policy is rated according to its suitability to effectively address the relevant human rights value within the framework of non-binding guidelines (*** = adequate, ** = limited, * = not sufficient).
2.2 Limits and Shortcomings
In contrast to a national or regional legal standard, and although most guidelines share many similarities, no two guidelines address exactly the same issues. This is true not only for the general issues of transparency, accountability, and responsibility, but also with regard to issues of equality and non-discrimination. This creates
2.3 Concrete Guidance Addressed at ai Developers
Some non-binding documents, like the Fairness, Accountability, and Transparency in Machine Learning (fat/ml) principles, can be of added value if companies use them while developing algorithms. For example, the Principles for Accountable Algorithms try to “Ensure that algorithmic decisions do not create discriminatory or unjust impacts when comparing across different demographics (e.g. race, sex, etc).” And the Social Impact Statement for Algorithms35 asks
that algorithm creators develop a Social Impact Statement using the above principles as a guiding structure. This statement should be revisited and reassessed (at least) three times during the design and development process: design stage, pre-launch, and post-launch. When the system is launched, the statement should be made public as a form of transparency so that the public has expectations for social impact of the system.
Such non-binding principles can help incorporate non-discrimination into algorithmic design. Using specific design-question templates is another effective approach to improve companies’ impact statements. The method can be used by designers, developers, and tech companies during the model and
Specific guidance could take the form of a simple din-A4 page handed out to developers: it would recall some of the main risks of biases, stereotypes, and discrimination in relation to the design of models, algorithms, and datasets. Even if the knowledge is already present in the developers’ mindset or has been disseminated through training and seminars, it can make a difference to directly address the person in charge of developing the algorithm. Similar to the airline industry, where checklists are used even though pilots are thoroughly trained, such a page would help ensure that fundamental principles are applied and that key procedures are properly implemented. Such concrete guidance, specifically designed for those who create algorithms, can be made a requirement in legislative frameworks, which could support the reduction of biases and discriminatory potential at the design stage.
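As a purely illustrative sketch, such a one-page checklist could even be embedded directly in the development workflow, for example as a small script run during design reviews. The items below are drawn from the themes of this chapter and are not taken from any existing guidance document or legal framework.

```python
# Minimal sketch of a design-stage checklist for algorithm developers.
# The items are illustrative, not drawn from any legal instrument.
DESIGN_CHECKLIST = [
    "Is the training data representative of the people the system will affect?",
    "Have protected characteristics and likely proxies for them been identified?",
    "Have outcomes been compared across demographic groups before release?",
    "Is the design documented well enough for an external audit?",
    "Is there a process to monitor and correct biased outcomes after deployment?",
]

def open_items(answers: dict[str, bool]) -> list[str]:
    """Return checklist items that were answered 'no' or not answered at all."""
    return [item for item in DESIGN_CHECKLIST if not answers.get(item, False)]

if __name__ == "__main__":
    # Example review session in which two items remain open.
    answers = {item: True for item in DESIGN_CHECKLIST[:3]}
    for item in open_items(answers):
        print("open:", item)
```

Whether such a check lives on paper, in a review meeting, or in a build pipeline matters less than the fact that it confronts the developer with the non-discrimination questions at the moment the design choices are made.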
3 The Role of Business to Preserve Human Rights
Multi-stakeholder approaches to human rights are good practice.38 One could argue that there is a special role and responsibility for businesses to preserve human rights39 and avoid discriminatory outcomes in their algorithms. After
It is argued here that throughout the development of ai systems – from the idea, the conception of the model, and the design of the algorithm to the deployment, monitoring, revision, and correction of the algorithm – businesses should have obligations to ensure that they do not cause harm of a discriminatory nature to humans (3.1). Several proposed tools aimed at achieving less bias and discrimination will then be analyzed through the lens of the role and obligations of businesses (3.2).
3.1 Business Responsibilities to Avoid Human Rights Harm and Discrimination throughout the Lifecycle of ai Systems
Many guidelines, business consultancies, and legal standards contain references and links in their ai principles concerning the role of companies in the protection of human rights.45 To avoid human rights violations, many companies actively provide for processes at company level.
The oecd Guidelines and Recommendations call on enterprises to respect the principle of non-discrimination and to promote equality between women and men. The oecd Guidelines for Multinational Enterprises, for example, explicitly refer to non-discrimination in hiring as well as promotion practices (para. 5), non-discrimination in employment and occupation (para. 51), and the principle of non-discrimination more generally, referring in this regard to obligations contained in the International Labor Organization (“ilo”) Conventions (para. 54).
The Recommendation of the Council on Artificial Intelligence (oecd/legal/0449), which is understood to complement existing oecd instruments, includes recommendations both for governments and for ai actors, including businesses.46 The oecd “calls on all ai actors to promote and implement, according to their respective roles, the following Principles for responsible stewardship of trustworthy ai.”47
Within the principle of human-centered values and fairness, the oecd states that “ai actors should respect the rule of law, human rights and democratic values, throughout the ai system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labour rights.”48 On transparency and explainability, the Recommendation states that “ai Actors should commit to transparency and responsible disclosure regarding ai systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art.”
Under the principle of robustness, security and safety, it is recommended that “ai actors should ensure traceability, including in relation to datasets, processes and decisions made during the ai system lifecycle, to enable analysis of
Finally, on accountability, it is recommended that “ai actors should be accountable for the proper functioning of ai systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.”
The Toronto Declaration identifies three core elements or steps of corporate human rights due diligence for machine learning systems: (a) identification of potential discriminatory outcomes; (b) prevention and mitigation of discrimination and tracking of responses; and (c) transparency regarding efforts to identify, prevent and mitigate discrimination.49
Similarly, the Council of Europe originally assigned some obligations to states and others to businesses in the proposed Draft Framework Convention on ai. While Chapter ii and specifically Articles 5–7 addressed public authorities, Article 8 for example specifically addressed private actors.
While the European Union’s ai Act, adopted by the co-legislators in 2024, has the legal form of an EU Regulation, it assigns obligations to companies, states, and supervisory authorities, and in particular to the developers and users of ai systems that want to place systems on the EU market or whose ai systems have impacts on the EU market. In that sense, the EU ai Act can be seen as a sort of product regulation that produces compliance requirements and costs for companies.
Major ai companies use tools throughout the ai lifecycle to ensure fairness and/or detect biases in their algorithms or machine learning tools.51 Other resources used by ai companies include specific guides for ai developers and programmers as well as education and training52 to ensure fair and non-biased algorithms. It would be wise for global ai regulators to take note of the existing assessment infrastructure and capabilities among ai companies when they design new rules. They should also work with independent researchers to assess the concrete needs and feasibility of regulatory obligations, for example to detect, diminish, and eliminate biases from the design, training and other datasets in order to reduce discriminatory outcomes.
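By way of illustration, the dataset-level end of such lifecycle tooling can be reduced to a few lines: before training, the composition of the training data and the rate of positive labels per demographic group are inspected, so that obvious imbalances are flagged for investigation. The data, group names, and the warning threshold below are invented for this sketch and do not reproduce any specific vendor tool.

```python
# Illustrative pre-training dataset check: group representation and the
# rate of positive labels per group. Numbers and threshold are invented.
from collections import Counter, defaultdict

# (group, label) pairs from a hypothetical training set for a
# hiring-recommendation model (label True = "suitable").
training_rows = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_a", True), ("group_b", False), ("group_b", True),
    ("group_b", False), ("group_a", True), ("group_a", False),
]

total = len(training_rows)
group_counts = Counter(g for g, _ in training_rows)
positives = defaultdict(int)
for g, label in training_rows:
    positives[g] += int(label)

for g in sorted(group_counts):
    share = group_counts[g] / total              # representation in the data
    pos_rate = positives[g] / group_counts[g]    # positive-label rate
    print(f"{g}: {share:.0%} of rows, positive-label rate {pos_rate:.2f}")

# Simple flag: warn when a group's positive-label rate deviates strongly
# from the overall rate (an invented 20-percentage-point threshold).
overall = sum(positives.values()) / total
for g in sorted(group_counts):
    if abs(positives[g] / group_counts[g] - overall) > 0.20:
        print(f"warning: {g} deviates from overall positive rate {overall:.2f}")
```

Real company tools compute a broader battery of pre-training and post-training metrics, but the regulatory point remains the same: such checks already exist inside companies and can be referenced, standardized, or mandated by legislators.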
3.2 The Tools of the Soft Law Instruments and Proposed Regulatory Frameworks to Achieve Non-discriminatory ai Systems
Among the soft law instruments, the tools suggested can be grouped into three categories based on the degree of involvement required of businesses and the potential compliance effort: information requirements, active and passive obligations, and far-reaching involvement. Information requirements include providing information to the users of ai systems and publishing relevant information, such as reports on bias audits, on websites. Active obligations include conducting ex ante assessments, such as bias audits, to check algorithms for potentially biased datasets, biased design, or discriminatory impacts. Passive requirements include all regulatory actions by public authorities that check or verify compliance with
Table 5.2: Tools to achieve non-discriminatory ai systemsa
| Tools | Soft law | Hard law | Proposals |
|---|---|---|---|
| Risk management system or Bias audits to address risk of biases | oecd (1.4.c) | nyc (§ 20–870, 20–871) | EU (Art. 9), CoE (Articles 16, 12, 24) |
| Human Rights Impact Assessments (hria) or Algorithmic Impact Assessments (aia) | (-) | (-) | EU (Art. 27)b, CoE (Art. 16) |
| Transparency and Documentation tools | oecd (1.3) | nyc (§ 20–871) | EU (Art. 10–13), CoE (Art. 8) |
| Remedies/Complaint mechanism | oecd (1.5) | nyc (§ 20–872) | EU (Art. 85, 86), CoE (Art. 14, 15) |
| Fines/Penalties | oecd (1.5) | nyc (§ 20–872) | EU (Art. 97) |
a This table analyses the tools that are mentioned in the existing soft (oecd, CoE) and hard law frameworks (A Local Law to amend the administrative code of the city of New York, in relation to automated employment decision tools, Law 2021/144, December 11, 2021), text available online at
b Spain adopted a Digital Rights Charter in 2021, which includes specific principles on equality and non-discrimination; see notably La Moncloa, “Carta Derechos Digitales,” supra note 14, at Article viii.
Among the proposed hard law instruments, several tools are usually proposed to ensure that regulatory goals like bias mitigation and reducing discriminatory impacts of algorithms are met. First, several proposed international frameworks53 suggest bias audits, a tool that can be used either before market entrance or for monitoring the ai system while in use. Typically, the aim of a bias audit is to detect any biases or discriminatory outcomes and correct them before releasing an ai system on the market. Second, known from environmental impact assessments and human rights impact assessments, algorithmic impact assessments are considered a comprehensive tool to check for compliance with bias and discrimination requirements. Companies need to provide detailed information to show that they made all required efforts to ensure no or less bias and discrimination in their algorithms.54 Third, transparency and documentation tools enable potential victims of discrimination and authorities to verify any violations. Fourth, prohibition is a strong tool used to ban the use of an ai system in very specific circumstances in an effort to avoid human rights harms. Fifth, an essential tool to ensure compliance is the possibility to impose fines on companies that develop or use ai systems.
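To make the first of these tools more concrete, the following sketch shows the kind of calculation a bias audit can involve: comparing selection rates across demographic groups and reporting an impact ratio, the type of metric computed, for instance, in audits under the New York City law on automated employment decision tools listed in the table above. The data, group names, and the four-fifths-style threshold are illustrative assumptions, not requirements drawn from any of the instruments discussed.

```python
# Illustrative bias-audit calculation on the outputs of an automated
# employment decision tool: selection rate per group and impact ratio
# relative to the most-selected group. Data and threshold are invented.
from collections import defaultdict

# (group, selected) pairs, e.g. exported from the tool's decision log.
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_b", False), ("group_a", True), ("group_b", True),
]

counts = defaultdict(lambda: [0, 0])   # group -> [selected, total]
for group, selected in decisions:
    counts[group][0] += int(selected)
    counts[group][1] += 1

selection_rates = {g: sel / total for g, (sel, total) in counts.items()}
best_rate = max(selection_rates.values())

THRESHOLD = 0.8   # illustrative "four-fifths"-style benchmark
for group, rate in sorted(selection_rates.items()):
    impact_ratio = rate / best_rate
    flag = "review" if impact_ratio < THRESHOLD else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

In a real audit such figures would be computed on historical or test data for every category required by the applicable rules, documented, and, where required, published.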
4 Legislative Human Rights Frameworks for ai (Hard Law)
Legislative human rights frameworks
| | United Nations | Council of Europe | European Union |
|---|---|---|---|
| General ai framework | (-) Currently, only reports and political calls | (+) | (+) but among the general principles, equality and non-discrimination |
| Discrimination-specific instrument | (+) cedaw as general instrument; currently a General Recommendation (No. 40) is being drafted which includes considerations on ai and algorithmic discrimination | (-) but planned Legal Instrument by 2025 | (-) partially; some of the issues of biases and non-discrimination could be addressed with the regulation of ai systems in the area of education and the labor market (e.g. ai recruitment systems) |
| Equality and Discrimination principles | (+) ungps | (+) | (+) |
| Specific requirements or recommendations for business | (+) | (+) Article 8 | (+) The requirements are mostly addressed to businesses |
4.1 UN Level
The Report of the Special Representative of the Secretary-General John Ruggie on the Issue of Human Rights and Transnational Corporations and Other Business Enterprises, or the “Guiding Principles on Business and Human Rights” (“ungps”), is a major reference document for business and human rights.56 The ungps, adopted by the UN Human Rights Council by Resolution 17/4 on 16 June 2011, are a set of principles directed at governments and businesses that clarify their duties and responsibilities in the context of business operations. For instance, Pillar 2 spells out foundational and operational principles that specify businesses’ responsibility to avoid adverse human rights impacts wherever they operate and whatever their size or industry, and to address any impact that does occur. The ungps rest on three pillars: businesses’ obligation to respect, states’ obligation to protect, and the possibility of accountability and remedies.57
The responsibility to respect human rights requires that business enterprises: (a) Avoid causing or contributing to adverse human rights impacts through their own activities, and address such impacts when they occur; (b) Seek to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts.
The Report of the Special Rapporteur on the rights of persons with disabilities65 is one of the rare UN reports that addressed issues of ai and is therefore of relevance in this context. Three aspects are highlighted in the Report’s conclusions that are relevant to the current analysis. First, the “unprecedented power of artificial intelligence [which] can be a force for good for persons with disabilities,” and that the “Profound advances for humankind must be properly harnessed to make sure that the farthest left behind can at last benefit fully from science and its advancements.” Second, it acknowledges that “the well documented negative impacts of artificial intelligence on persons with disabilities need to be openly acknowledged and rectified by states, business, national human rights institutions, civil society and organizations of persons with disabilities working together.” In accordance with the present analysis, it pointed out that
At the development level, those negative impacts arise from poor or unrepresentative data sets that are almost bound to lead to discrimination, a lack of transparency in the technology (making it nearly impossible to reveal a discriminatory impact), a short-circuiting of the obligation of reasonable accommodation, which further disadvantages the disabled person, and a lack of effective remedies. While some solutions will be easy and others less straightforward, a common commitment is needed to work in partnership to get the best from the new technology and avoid the worst.
Third, and finally, the document calls for “a fundamental reset of the debate (…) based on more evidence and greater consideration of the rights and obligations contained in the Convention on the Rights of Persons with Disabilities and other human rights instruments.” The recommendations specifically address businesses and the private sector,66 notably in relation to transparency and information obligations,67 disability-inclusive human rights impact assessments for ai,68 human rights due diligence,69 accessible and effective non-judicial remedies and redress for ai caused human rights harms,70 and realistic and representative datasets.71
The recommendations notably state that
States must ensure that human rights ethical frameworks for corporations involved in emerging digital technologies are linked with and informed by binding international human rights law obligations, including on equality and non-discrimination. There is a genuine risk that corporations will reference human rights liberally for the public relations benefits of being seen to be ethical, even in the absence of meaningful interventions to operationalize human rights principles. Although references to human rights, and even to equality and non-discrimination, proliferate in corporate governance documents, these references alone do not ensure accountability. Similarly, implementation of the framework of Guiding Principles on Business and Human Rights, including through initiatives such as the B-Tech Project, must incorporate legally binding obligations to prohibit – and provide effective remedies for – racial discrimination.81
An inherent problem with the ethics-based approaches that are promulgated by technology companies is that ethical commitments have little measurable effect on software development practices if they are not directly tied to structures of accountability in the workplace. From a human rights perspective, relying on companies to regulate themselves is a mistake, and an abdication of State responsibility. The incentives for corporations to meaningfully protect human rights (especially for marginalized groups, which are not commercially dominant) can stand in direct opposition to profit motives. When the stakes are high, fiduciary obligations to shareholders will tend to matter more than considerations
concerning the dignity and human rights of groups that have no means of holding these corporations to account. Furthermore, even well-intentioned corporations are at risk of developing and applying ethical guidelines using a largely technological lens, as opposed to the broader society-wide, dignity-based lens of the human rights framework.82
Finally, the report suggests that corporate human rights due diligence needs to be implemented by states based on human rights law prohibitions on racial discrimination and refers to the European Commission’s proposal for mandatory due diligence for companies.83
- (a) Make all efforts to meet their responsibility to respect all human rights, including through the full operationalization of the Guiding Principles on Business and Human Rights;
- (b) Enhance their efforts to combat discrimination linked to their development, sale or operation of ai systems, including by conducting systematic assessments and monitoring of the outputs of ai systems and of the impacts of their deployment;
- (c) Take decisive steps in order to ensure the diversity of the workforce responsible for the development of ai;
- (d) Provide for or cooperate in remediation through legitimate processes where they have caused or contributed to adverse human rights impacts, including through effective operational-level grievance mechanisms.86
The two most recent work-streams at the UN level concern the Commission on the Status of Women (“csw”) and the cedaw Committee. Each year, the csw adopts agreed conclusions; its 67th session, in 2023, was dedicated mainly to digital and ai topics in the context of women’s rights.87 In addition, the current work of the Committee on the Elimination of Discrimination against Women (“cedaw Committee”) on the elaboration of General Recommendation 4088 on equal representation in decision-making systems recognizes ai systems as a game-changing technology.
4.2 Council of Europe Level
The Council of Europe (“CoE”) is a major force that sees itself as the guardian of human rights and democracy. Equality and non-discrimination have been on its agenda and in its core instruments since the beginning. The CoE adopted a Framework Convention on ai and Human Rights, Democracy and the Rule of Law89 in May 2024 and is currently working towards a framework for ai regulation that specifically addresses equality, including gender equality, and non-discrimination, which is planned for publication by 2025.90 The clear advantages of CoE legal frameworks on ai are that they are deeply rooted in human rights, enforceable by the European Court of Human Rights in Strasbourg, and have a wide potential reach within the CoE’s 46 Member States. In addition, other states are following the CoE regulatory process on ai carefully as “representatives
First, the Framework Convention on ai includes rules on the principle of non-discrimination (Art. 10). Previously, Article 12 specifically stated that “Each Party shall (…) ensure that the design, development and application of artificial intelligence systems respect the principle of equality, including gender equality and rights related to discriminated groups and individuals in vulnerable situations.” Now only the Preamble makes reference to “the risks of discrimination in digital contexts, particularly those involving artificial intelligence systems, and their potential effect of creating or aggravating inequalities, including those experienced by women and individuals in vulnerable situations.”
Second, the equality and non-discrimination legal instrument is too far in the future to allow specific remarks, but it will surely be based on and incorporate the work on the general Framework Convention and take into account the EU ai Act.
4.3 European Union Level
The EU is currently the most advanced jurisdiction in the world in terms of addressing adverse effects of ai and algorithms on fundamental rights. Following the EU General Data Protection Regulation (“gdpr”), the EU adopted the Digital Services Act (“dsa”) and the Digital Markets Act (“dma”) and set up the European Centre for Algorithmic Transparency (“ecat”), tasked with helping the European Commission enforce the dsa with relevant expert knowledge.92 Considering that, in principle, all EU legislation is based on a human and fundamental rights approach, EU law is an interesting reference point: it is legally binding and has a model character that is often imitated by other jurisdictions.93
The most important EU legislation in the area of ai, the ai Act, was adopted by both co-legislators (European Parliament in March 2024 and Council of the European Union in May 2024). The ai Act is a legally binding instrument for
Specifically, on algorithmic discrimination, the ai Act could make a fundamental contribution to controlling and diminishing discrimination caused by ai systems. While not obvious at first sight, the horizontal ai regulation could address the issue of discrimination through the regulatory requirements for High-risk ai systems. The scope of High-risk ai systems would include areas such as education or labor market use cases, for example ai recruitment systems.96 If an ai recruitment system is classified as High-risk, it would need to comply with the requirements, like mandating a certain level of transparency and documentation. As such, if ai applications fall under the scope of the Regulation, they would need to fulfill the detailed requirements for High-risk ai systems. This would enable control of the design and datasets of those ai systems, which in turn could help identify and diminish the risk of discriminatory outcomes. The European Parliament wanted to include the principle of non-discrimination in the operative part of the legislative text.97
5 Avoiding a Shift from Classical Public Lawmaking towards Private Rule-Setting: Advantages and Limits of Including Businesses in Private Regulatory Tasks to Avoid Discrimination
In the age of algorithms, the fast pace at which ai companies develop new algorithms and ai tools coincides with, or triggers, attempts to shape the global ai landscape, which is currently mostly unregulated.106 This facilitates the risk of de facto private rules that govern ai systems in the absence of hard laws. ai companies develop their products in an ai race, with regulators running behind in a regulatory race in order to include the latest ai inventions in their regulatory design and to ensure that the ai definition used and the regulatory tools included in the proposed legislative frameworks still adequately
5.1 Advantages of Involving Businesses: Expertise
There are several clear advantages to involving businesses in the fight against discrimination from the start, including positive changes from the conception of models and the design of algorithms and ai systems to their deployment and post-market monitoring. The added value provided by ai businesses and their researchers can take the form of pure knowledge (sharing) or amount to a gradual involvement in development and regulation.
First, and foremost, the developers of ai systems that program the underlying algorithms have the knowledge of what views and concepts shaped their design of the algorithm and how it functions. Sharing this knowledge with regulators enables the latter to make informed decisions more easily, considering that some relevant information is only held by the companies. Often, regulators depend on the flow of information from businesses to administrations to enforce the rules. While the state can impose access to information under specific circumstances, voluntary collaboration is preferred and less costly.
Second, including ai businesses from the outset – when algorithms are being designed – could help to incorporate the principle of non-discrimination by design.108
Third, soft law and hard law proposals often prescribe specific obligations and responsibilities for companies, ranging from transparency, information, and documentation requirements to the obligation to conduct ai impact assessments,109 audits, or monitoring. In each of these scenarios, a close
Fourth, regulation can envisage a specific role for business in the development and implementation of regulatory content. Business representatives might be present in oversight bodies or be entrusted with specific roles or responsibilities to help shape the legal rules.110 An example can be found in the proposed EU ai Act, which foresees the specification of some of the content of the future Regulation within the framework of a standardization request entrusted to the European standard-setting organizations, the European Committee for Standardization (“cen”) and the European Committee for Electrotechnical Standardization (“cenelec”).111 Nevertheless, considering the risk of regulatory capture, the state needs to be aware of potential imbalances of knowledge and technical expertise, which could lead to a situation where states have to blindly follow the analysis or assessment of companies due to a lack of expertise on the administration’s side.112
5.2 Limits of Involving Businesses in the Regulation of ai: Lack of Legitimacy and Private Interests
First, business decisions are usually steered by private and economic considerations rather than considerations of the common good. Businesses’ interests might conflict with citizens’ interests or the state’s interests. Businesses are not democratically legitimized and accountable in the same way as public bodies.
Second, a lack of resources on the side of government institutions might create a temptation to over-rely on business inputs and expertise without being able to understand or duly verify their impacts.
Third, independence can be a problem, even if much industry research is of high quality and meets high standards.113 In addition, high salaries mean that talent is attracted by business rather than by resource-drained public administrations or academia, which further concentrates knowledge of and power over ai systems within the industry.
6 Elements and Recommendations for a Potential “Shared Responsibility” Framework between Business and States
This section briefly highlights some elements and principles that should form part of any potential regulatory framework based on “shared responsibility.” While it has been shown that some of the broad principles contained in ai principles or ethical guidelines are reflected in business or soft law frameworks, broad principles are not by themselves ill-suited to achieve the objectives of reducing bias and discrimination. It is rather the non-binding nature coupled with broad principles and a company’s freedom to interpret them that makes them less effective. However, broad principles can play a role within a legislative framework when more detailed rules exist that ensure legal certainty and adequate enforcement of the legislative framework. Detailed rules also create an advantageous opportunity for companies to easily ensure compliance, despite having to shoulder a potential increase in regulatory costs due to additional requirements. By way of illustration, rather than including the principle of transparency, detailed rules on achieving transparency, such as requirements on documentation, explanations, access to datasets, or the source code of the algorithm, are better suited to support the objectives. The last part of this section outlines some elements and recommendations that should be included in a “shared responsibility” framework.
First, technical dialogue between industry representatives and state representatives can prepare the floor for designing a regulatory framework. This will ensure that the specific technologies are understood, and that companies have the possibility to explain their current technical capabilities and the tools they use that address the regulatory goals.
Second, the principles and guidelines developed by businesses need to be assessed and used when designing regulatory frameworks. Elements that are identified as being useful and supporting goals defined in the regulatory framework can be made compulsory in a legal framework.
Third, it is important to move away from abstract concepts and vague narratives, such as transparency, fairness or ethical ai, and towards concrete and definable concepts, such as human rights violations, non-discrimination, and gender-based discrimination, which can be more easily measured and incorporated into a legislative framework.
Fifth, a company’s involvement in the regulatory process can be counterbalanced by the inclusion of independent researchers who mediate between the state and companies through independent assessments of ai risks and compliance and through expert advice. Such a role could be supported by open access processes in which algorithmic code or datasets are either publicly available or specifically made available to researchers to test, verify, or audit ai systems in line with regulatory requirements.
Sixth, the basis for enforcing discrimination claims in the algorithmic age is a right to know about an ai generated decision.116 Without such knowledge, further enquiries in terms of evidence gathering, access to algorithmic design or datasets, and evidentiary thresholds cannot even begin. In addition to a right to know about an ai decision, clearly defined principles should be established in the legislative framework to allow victims of discrimination and/or administrations, courts, or independent researchers acting as experts to have access to algorithms and the underlying (training) datasets. To complement access to evidence, rules on the burden of proof should be designed in a way that takes account of the imbalance of power between tech companies and potential victims of discrimination, both in terms of the access and the expertise necessary to evaluate a claim of discrimination. An automatic reversal of the burden of proof will most likely be an adequate tool to facilitate the preparation of a claim for algorithmic discrimination.
Seventh, a definition of algorithmic discrimination would need to be incorporated to supplement existing non-discrimination law frameworks with regard to the specificities and peculiarities of ai systems. In this context, the risks of algorithmic discrimination should be reflected in the regulatory toolbox.
Ninth, concrete guidance requirements for ai developers should be incorporated as a binding requirement for companies. Specific guidance to developers should include the main elements of a non-discrimination and diversity perspective, like the issues of bias and discrimination at the design stage of building algorithms.
Tenth, in light of the discussed imbalance in the distribution of knowledge and understanding of ai systems, it is advisable to envisage the creation of dedicated knowledge centers on the regulators’ side, or to equip regulators with sufficient staff and resources to be able to adequately assess and regulate ai systems.117 Only such centers can enable regulatory bodies to fulfill their tasks effectively when confronted with industry knowledge and thereby diminish the risk of regulatory capture.
Key elements and recommendations of a “shared responsibility” framework on algorithmic discrimination
| | Principles and recommendations of a “shared responsibility” framework |
|---|---|
| 1 | Dialogue at technical level between state and business |
| 2 | Incorporation of ideas of ai principles of business into regulation |
| 3 | Using concrete and definable concepts instead of abstract concepts |
| 4 | Defining the role of business throughout the regulatory lifecycle of ai |
| 5 | Role of independent ai experts for regulation and implementation |
| 6 | Right to know about an ai decision |
| 7 | Definition of algorithmic discrimination |
| 8 | Accompanying non-legislative measures on Gender ai Gap and Diversity |
| 9 | Concrete guidance document for ai developers |
| 10 | ai knowledge centers for regulators |
7 Summary and Concluding Remarks
This chapter aimed to show the role of businesses and states in regulating algorithms and preventing algorithmic discrimination. It was argued that while businesses should be involved in the regulatory process, self-regulation and non-binding ai principles are not the preferred option, and should be disregarded in favor of binding legal rules that can be enforced to ensure the protection of human rights and non-discrimination.
It contributed to the debate on regulating algorithms by focusing specifically on the role of businesses, not only with regard to their future obligations imposed by forthcoming legislative frameworks, but also with regard to their role in preventing discrimination from occurring throughout the lifecycle of the ai products that they design. Shedding light on, and providing a more nuanced and balanced view of, what businesses can and should do in order to safeguard the human rights of those affected by the use of algorithms was one goal of this chapter. In this spirit, it argued that regulators around the world should try to involve businesses in the development of rules while remaining aware of the limits and the private interests of those who dispose of valuable knowledge for
The essence of the challenge of how best to address algorithmic discrimination when confronted with the choice between non-binding guidelines and legally binding norms has been highlighted by the UN Special Rapporteur on racial discrimination: “Ethical approaches to governing emerging digital technologies must be pursued in line with international human rights law, and states must ensure that these ethical approaches do not function as a substitute for development and enforcement of existing legally binding obligations.”118 Ethical guidelines and approaches by businesses should likewise be no substitute for future specific regulatory frameworks, as has been argued throughout this contribution.
The author would like to thank the organizers and participants of the conference Business and Human Rights in Lausanne (June 30 to July 1, 2023) and in particular Dr. Joseph Wilson for his very useful oral and written comments on the draft chapter. For the sake of simplicity, algorithms and ai systems are used interchangeably. ai systems are understood here as a “machine-based system that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments” (Art. 2 Council of Europe, [cets 225] – Artificial Intelligence; Art. 3(1) ai Act).
See “The oecd Framework for the Classification of ai systems,” Organisation for Economic Co-operation and Development (“oecd”), accessed May 25, 2024,
See “Artificial Intelligence: ChatGPT Inc,” The Economist, Business, July 1, 2023, 48, which is developing ai adoption index by business.
See Web Arnold, “analysis: What Lenders Should Know About ai and Algorithmic Bias,” Bloomberg Law, April 25, 2023,
See Matthew Burgess, Artificial Intelligence (wired guides): How Machine Learning Will Shape the Next Decade (Random House Business, 2021), 77–80; probably the most cited example is the algorithm developed by Amazon. See Jeffrey Dastin, “Insight – Amazon scraps secret ai recruiting tool that showed bias against women,” Reuters, October 11, 2018,
Tolga Bolukbasi et al., “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings,” arXiv, July 21, 2016,
See for example the generation of images of ceo with the prompt “ceo” which creates images of mostly men with light skin color; see Leonardo Nicoletti and Dina Bass, “Humans are Biased. Generative ai is Even Worse,” Bloomberg, June 9, 2023,
See for example the “oecd Guidelines for Multinational Enterprises on Responsible Business Conduct,” oecd, June 8, 2023, paras. 5, 51, and 54,
“Ministerial Declaration: The G7 Digital and Tech Ministers’ Meeting – 30 April 2023,” G7/G20, April 30, 2023, paras. 39–48,
“The Toronto Declaration: Protecting the right to equality in machine learning,” Toronto Declaration, May 16, 2018, para. 38,
Jules Thomas et al., “De ChatGPT à Midjourney, les intelligences artificielles génératives s’installent dans les entreprises,” Le Monde, April 26, 2023
See for example, Arthur Grimonpont, Algocratie: Vivre libre à l’heure des algorithmes (Actes Sud, 2022); Hugues Bersini and Gilles Badinet, Algocratie: Allons-nous donner le pouvoir aux algorithmes? (De Boeck Supérieur, 2023).
“General Comment No. 20, Non-discrimination in economic, social and cultural rights (arts 2, para. 2, of the International Covenant on Economic, Social and Cultural Rights),” UN Committee on Economic, Social and Cultural Rights (“cescr”), July 2, 2009, para. 11,
Toronto Declaration, supra note 10, at para. 40. See in this context also Art. viii of the Spanish Carta Derechos Digitales: “Right to equality and to non-discrimination in the digital environment: 1. The right to and principle of equality inherent to persons shall apply in digital environments, including non-discrimination and non-exclusion. In particular, the effective equality of women and men in digital environments shall be promoted. Digital transformation processes shall be encouraged to apply a gender perspective, adopting, where appropriate, specific measures to guarantee the absence of gender bias in the data and algorithms used.” “Carta Derechos Digitales,” La Moncloa, July 24, 2021, Article viii,
See also the proposed definition of algorithmic discrimination in the US Blueprint for an ai Bill of Rights: “‘Algorithmic discrimination’ occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex, religion, age, national origin, disability, (..) or any other classification protected by law.” “Blueprint for an ai Bill of Rights,” The White House, last modified November 22, 2023,
See for example “Recommendation cm/Rec(2020)1 of the Committee of Ministers to member States on the human rights impacts of algorithmic systems,” Council of Europe (“CoE”), April 8, 2020,
“An llm is a computerized language model, embodied by an artificial neural network using an enormous amount of ‘parameters’ that are (pre-)trained on many gpus in relatively short time due to massive parallel processing of vast amounts of unlabeled texts (..);” see “Large language model,” Wikipedia, accessed May 25, 2024,
Therefore, regulators need to adopt sufficiently broad definitions and principles that capture potential future ai developments and that at the same time rely on specific and detailed requirements for regulating ai systems. The EU, for example, relies on a regulatory technique that enables the European Commission to change the Annex to the Regulation and thereby ensures a dynamic regulatory system that is fit for purpose to adapt flexibly to new arising ai systems.
See for example the policy recommendation in the report “Ghost in the Machine,” Norwegian Consumer Council, June 2023, 60,
See “New and emerging technologies need urgent oversight and robust transparency: UN experts,” Office of the United Nations High Commissioner for Human Rights (“ohchr”), June 2, 2023,
See for example Richard A. Posner, “The Concept of Regulatory Capture: A Short, Inglorious History,” in Preventing Regulatory Capture: Special Interest Influence and How to Limit it, eds. Daniel Carpenter and David A. Moss (Cambridge University Press, 2013), 49–56.
A recent example of potential regulatory capture is the US Federal Aviation Administration’s reliance on Boeing’s engineers in certifying Boeing 737 max planes, where the faulty Maneuvering Characteristics Augmentation System (mcas), intended to provide consistent airplane handling characteristics in a very specific set of unusual flight conditions, resulted in two crashes. See for example the report of the Ethiopian Ministry investigating the causes of the plane crash. “Investigation Report on Accident to the B737-max8 Reg. et-avj Operated by Ethiopian Airlines, 10 March, 2019,” The Federal Democratic Republic of Ethiopia Ministry of Transport and Logistics Aircraft Accident Investigation Bureau, December 23, 2022,
Anna Jobin, Marcello Ienca, and Effy Vayena, “The global landscape of ai ethics guidelines,” Nature Machine Intelligence 1, no. 9 (September 2019): 389–99. See also Helga Nowotny, In ai We Trust: Power, Illusion and Control of Predictive Algorithms (Polity Press, 2021), 123–25.
For example, Nicholas Diakopoulos et al., “Principles for Accountable Algorithms and a Social Impact Statement for Algorithms,” fat/ml, accessed May 25, 2024,
See for example Annie Batlle, Aude Bernheim, and Flora Vincent, L’intelligence artificielle, pas sans elles! (Belin, 2019).
“Our Principles,” Google, accessed May 25, 2024,
Some even issue general human rights reports; see, for example, Miranda Sissons, “A Closer Look: Meta’s First Annual Human Rights Report,” Meta, July 14, 2022,
Kimberly A. Houser, “Can ai solve the Diversity Problem in the Tech Industry: Mitigating Noise and Bias in Employment Decision-Making,” Stanford Technology Law Review 22 (February 2019): 290–354; Susan Leavy, “Gender Bias in Artificial Intelligence: The Need for Diversity and Gender Theory in Machine Learning,” paper presented at the Proceedings of the 1st international workshop on gender equality in software engineering, May 27 – June 3, 2018,
“Responsible Machine Learning,” Amazon Web Services (“aws”), accessed May 25, 2024, 2,
“Amazon SageMaker Clarify,” aws, accessed May 25, 2024,
Meta has even published an academic paper regarding the issue of fairness. See Chloé Bakalar et al., “Fairness On The Ground: Applying Algorithmic Fairness Approaches to Production Systems,” arXiv, March 24, 2021,
For example, see Meta, which set up the Facebook Oversight Boards which it wants to operate in a similar way as real courts of law where complaints can be launched, and decisions are published on the website. According to the company’s explanation, “[t]he Oversight Board reviews content decisions made by Meta to see if the company acted in line with its policies, values, and human rights commitments. The Board can choose to overturn or uphold Meta’s decision.” See “Improving how Meta treats people and communities around the world,” Oversight Board, accessed May 25, 2024,
See, for example, Nicol Turner Lee, Paul Resnick, and Genie Barton, “Algorithmic bias detection and mitigation: Best practice and policies to reduce consumer harms,” Brookings, May 22, 2019,
See supra note 24, from which questions have been summarised.
See “Safeguarding freedom of expression and access to information: guidelines for a multistakeholder approach in the context of regulating digital platforms,” United Nations Educational, Scientific and Cultural Organization (“unesco”), April 27, 2023,
See, for example, the United Nations Forum on Business and Human Rights, where the 12th Session in November 2023 is addressed “[t]owards effective change in implementing obligations, responsibilities and remedies,” which also mentions “gender, business and human rights,” as a standing issue of the Forum. “12th United Nations Forum on Business and Human Rights,” ohchr, November 27–29, 2023,
See Nicolas Sabouret and Laurent Bibard, L’intelligence artificielle n’est pas une question technologique (De l’aube, 2023), 39.
This logic also seems to underlie the European Union’s regulatory framework, such as the Digital Services Act. Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/ec (Digital Services Act) (Text with eea relevance), oj l 277, October 27, 2022.
“gpt-4 Technical Report,” OpenAI, arXiv, March 4, 2024,
Italy, for example, prohibited the use of OpenAI’s ChatGPT due to gdpr violations. The current Council of Europe legal proposal suggests the option for Member States to ban or temporarily limit the use of certain ai systems.
“Building capacity for the implementation of the Guiding Principles on Business and Human Rights: Report of the Working Group on the issue of human rights and transnational corporations and other business enterprises*,” hrc, May 18, 2023, para. 84,
See for example Maria Luciana Axente and Ilana Golbin, “9 ethical ai principles for organizations to follow,” World Economic Forum, June 23, 2021,
The G7 recently reaffirmed the oecd “Recommendation,” supra note 8; see G7/G20, “Ministerial Declaration,” supra note 9, at notably paras. 39–42.
Id. at Point iv, 1.2 Human-centered values and fairness.
Toronto Declaration, supra note 10, at paras. 44–51, as summarized in hrc, “Racial discrimination,” supra note 16, at para. 60.
Article 8(b) of the Draft Convention: “effective guidance is provided to relevant public and private actors on how to prevent and mitigate any adverse impacts of the application of an artificial intelligence system on the enjoyment of human rights and fundamental freedoms, the functioning of democracy and the observance of the rule of law in their operations.” “Revised Zero Draft [Framework] Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law,” Committee on Artificial Intelligence (“cai”), January 6, 2023, Article 8(b),
“How we’re using Fairness Flow to help build ai that works better for everyone,” Meta, March 31, 2021,
Amazon for example considers that “Continuous education on the latest developments in ml is an important part of responsible use. aws offers the latest in ml education across your learning journey through programs like the aws Machine Learning University (Bias and Fairness Course).” These videos are available online at Machine Learning University, “Responsible ai,” YouTube, November 22, 2022,
See for example, the nyc Law, infra TABLE 5.2, The White House, “Blueprint,” supra note 15, the EU ai Act, infra TABLE 5.2, but also mentioned as options in the oecd, “Recommendation,” supra note 8, and the CoE Framework Convention, infra TABLE 5.2.
See for example the “Algorithmic Impact Assessment tool,” Government of Canada, accessed May 25, 2024,
See for example Matilda Arvidsson and Gregor Noll, “Artificial Intelligence, Decision Making and International Law,” Nordic Journal of International Law 92, no. 1 (April 2023): 1–8.
For a specific application of the ungps, which have been designed in a world before algorithms and ai, see the specific context of ai in the “B-Tech Project: Multi-Stakeholder Consultation on Gender, Tech, and the Role of Business,” ohchr, June 15, 2023,
ungps, supra note 16, at Principle 16.
Id. at Principle 17.
Id. at Principle 18.
Id. at Principle 19.
Id. at Principle 20.
Id. at Principle 21.
See for example the Proposal for a Directive of the European Parliament and of the Council on Corporate Sustainability Due Diligence and amending Directive (EU) 2019/1937, com/2022/71 final, 2022/0051(cod), February 23, 2022.
Id. at para. 78.
Id. at para. 78(a): “[o]perate with transparency and provide information about how artificial intelligence systems work. That should include alignment with open-source and open data standards and publication of accessible information about how artificial intelligence systems operate.”
Id. at para. 78(b): “[i]mplement disability-inclusive human rights impact assessments of artificial intelligence to identify and rectify its negative impacts on the rights of persons with disabilities. All new artificial intelligence tools should undergo such assessments from a disability rights perspective. Artificial intelligence businesses should conduct their impact assessments in close consultation with organizations representing persons with disabilities and users with disabilities.”
Id. at para. 78(c): “[u]se corporate human rights due diligence to explicitly take account of disability and artificial intelligence. Private sector actors that develop and implement machine-learning technologies must undertake corporate human rights due diligence to proactively identify and manage potential and actual human rights impacts on persons with disabilities, to prevent and mitigate known risk in any future development.”
Id. at para. 78(d): “[e]nsure accessible and effective non-judicial remedies and redress for human rights harms arising from the adverse impacts of artificial intelligence systems on persons with disabilities. This should complement existing legal remedies and align with the International Principles and Guidelines on Access to Justice for Persons with Disabilities.”
Id. at para. 78(e): “[e]nsure that data sets become much more realistic and representative of the diversity of disability and actively consult persons with disabilities and their representative organizations when building technical solutions from the earliest moments in the business cycle. This includes proactively hiring developers of artificial intelligence who have lived experience of disability, or consulting with organizations of persons with disabilities to gain the necessary perspective.”
Id. at para. 15.
Id. at para. 16.
Id. at para. 45.
Id.
Id. at para. 55.
Id. at para. 56.
Id. at para. 59.
Id. at para. 60; see also “Business and Human Rights in Technology Project (“B-Tech Project”): Applying the UN Guiding Principles on Business and Human Rights to Digital Technologies,” ohchr, accessed May 25, 2024,
Id. at para. 62.
Id. at para. 63; “European Commission Promises Mandatory Due Diligence Legislation in 2021,” Responsible Business Conduct (“rbc”), April 30, 2020,
Id. at paras. 51–54.
Id. at para. 61.
See “csw67 (2023),” United Nations Women, March 6–17, 2023,
See Fabian Lütz, “Written submission (focusing on ai, automated decision-making systems and gender equality) for the half-day General Discussion on the equal and inclusive representation of women in decision-making systems, 84th session of cedaw,” ohchr, February 22, 2023,
cai, “Revised Zero Draft,” supra note 50.
Id.; “Council of Europe’s Work in progress,” Council of Europe, last updated January 2024,
See the members and observers of the cai, listed at “Committee on Artificial Intelligence (cai),” accessed May 25, 2024,
“European Centre for Algorithmic Transparency,” European Commission, accessed May 25, 2024,
See notably, Anu Bradford, “The Brussels Effect,” Northwestern University Law Review 107, no. 1 (December 2012): 1–68; Anu Bradford, “Chapter 9: The Future of the Brussels Effect,” in Anu Bradford, The Brussels Effect: How the European Union Rules the World (Oxford University Press, 2020).
This chapter does not discuss the substantive content of the ai Act in detail. For a summary and more information see notably, Fabian Lütz, “Gender equality and artificial intelligence in Europe. Addressing direct and indirect impacts of algorithms on gender-based discrimination,” era Forum 23 (April 2022): 33–52,
See “Draft standardisation request to the European Standardisation Organisations in support of safe and trustworthy artificial intelligence,” European Commission, December 5, 2022,
See EU ai Act, supra TABLE 5.2, at Annex iii. In the US, specific guidance was recently issued by the U.S. Equal Employment Opportunity Commission (“eeoc”) in relation to algorithmic recruitment and disability; see The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, eeoc-nvta-2022-2, May 12, 2022 (text available online at
The European Parliament proposed to include an Article 4a (“General principles applicable to all ai systems”), point (e) of which reads: “‘diversity, non-discrimination and fairness’ means that ai systems shall be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.” See the adopted text of the European Parliament in its first reading: “P9_ta(2023)0236: Artificial Intelligence Act – Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (com(2021)0206 – C9-0146/2021 – 2021/0106(cod)) (Ordinary legislative procedure: first reading),” European Parliament, June 14, 2023,
See EU ai Act, supra TABLE 5.2, at recital 36.
Id. at Article 68a and recital 84a.
Id. at Article 56.
Id. at Article 66a.
Id. at Article 64.
For some proposals and recommendations specifically on EU law, see Fabian Lütz, “Algorithmische Entscheidungsfindung aus der Gleichstellungsperspektive – ein Balanceakt zwischen Gender Data Gap, Gender Bias, Machine Bias und Regulierung,” gender – Zeitschrift für Geschlecht, Kultur und Gesellschaft 15, no. 1 (2023): 26–41,
Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (ai Liability Directive), com(2022) 496 final, 2022/0303(cod), September 28, 2022.
The 2024 elections of the European Parliament and the newly composed College of Commissioners of the European Commission might be an opportunity to place a reform of the gender equality and non-discrimination Directives on the agenda and in the Commission Work Programme, so that the EU acquis fully takes into account the effects and impacts of algorithmic discrimination and complements the legal framework on ai, most likely composed of the soon-to-be adopted EU ai Act. Notably, Directive 2006/54/ec of the European Parliament and of the Council of 5 July 2006 on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation (recast), oj l 204, July 26, 2006, and Council Directive 2004/113/ec of 13 December 2004 implementing the principle of equal treatment between men and women in the access to and supply of goods and services, oj l 373, December 21, 2004, seem good candidates for review in light of algorithmic discrimination.
See in general, Paul Nemitz, “Constitutional democracy and technology in the age of artificial intelligence,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2133 (October 2018); Paul Nemitz and Eike Gräf, “Artificial Intelligence Must Be Used According to the Law, or Not at All,” Verfassungsblog: On Matters Constitutional, March 28, 2022,
See the example of the EU ai Act, supra TABLE 5.2: the 2021 proposal did not fully consider ai systems such as Large Language Models (llms), and Member States and the European Parliament have since called for llms to be included in the regulatory efforts.
Tilburg University, “Non-discrimination by design,” Tilburg University, 2019,
Some companies provide guides (“Microsoft Responsible ai Impact Assessment Guide,” Microsoft, June 2022,
Pieter Van Cleynenbreugel, “EU By-Design Regulation in the Algorithmic Society: A Promising Way Forward or Constitutional Nightmare in the Making?,” in Constitutional Challenges in the Algorithmic Society, eds. Hans-W. Micklitz et al. (Cambridge University Press, 2021).
“A Notification under Article 12 of Regulation (EU) No 1025/2012: Draft standardisation request to the European Standardisation Organisations in support of safe and trustworthy artificial intelligence,” European Commission, December 5, 2022,
See in this regard recent efforts in the U.S. to gain further understanding of ai systems from the ai industry, “fact sheet: Biden-Harris Administration Takes New Steps to Advance Responsible Artificial Intelligence Research, Development, and Deployment,” The White House, May 23, 2023,
Roman Jurowetzki et al., “The Privatization of ai Research(-ers): Causes and Potential Consequences – From university-industry interaction to public research brain-drain?,” arXiv, last revised February 15, 2021,
As a matter of illustration, one co-author (Chris Russell) of a famous and influential paper by Oxford academics recently moved to Amazon as a senior applied scientist; see Stephen Zorio, “Machine Learning: How a paper by three Oxford academics influenced aws bias and explainability software,” Amazon, April 1, 2021,
Zorio, “Machine Learning,” supra note 114.
See for example Norwegian Consumer Council, “Ghost in the machine,” supra note 19, at 59, which specifies that “Consumers must have the right to object and to an explanation whenever a generative ai model is used to make decisions that have a significant effect on the consumer.” But these and similar calls for such principles, fundamental as they are, tend to overlook that the right to know that an ai system has been used is the first step before objecting to an ai decision, launching a complaint, or seeking legal remedies.
The Norwegian Consumer Council suggests in its ai report that “[t]ransnational and national technological expert groups should be established to support enforcement agencies in enforcement endeavors,” and that “[e]nforcement agencies must have all necessary resources to enforce infringements.” Id. at 60.