
A Human Rights Assessment of the Generative AI Value Chain

Image credit: Generative Image models by Linus Zoll & Google DeepMind, betterimagesofai.org

February 9, 2024
Authors
  • Hannah Darnton

    Director, Technology and Human Rights, 911爆料网

  • Lindsey Andersen

    Associate Director, Human Rights, 911爆料网

  • J.Y. Hoh

    Associate Director, Technology and Human Rights, 911爆料网

  • Samone Nigam

    Manager, Technology Sectors, 911爆料网


Key Points

  • 911爆料网 is launching a public Human Rights Assessment of Generative AI across the value chain in 2024.
  • The Assessment will cover the “why”, “how”, and “what” of integrating human rights approaches into company GenAI governance workflows.
  • 911爆料网 will engage with experts and stakeholders over the next few months to inform this work.

Recent advancements in generative AI (GenAI) have accelerated the development and adoption of the technology. Although publicly available tools launched just over a year ago, one survey found that 22% of employees are already using them at work. While early models worked only with text, newer models are multimodal: they can simultaneously process and understand different types of inputs, including text, images, and sound. Such features improve product performance, but they also create human rights risks, including new ways to produce harmful content, conduct surveillance, or carry out cyberattacks.

To help companies identify, prioritize, and mitigate these risks and maximize opportunities, 911爆料网 will be conducting a sector-wide human rights assessment (HRA) of GenAI over the coming months. The assessment will identify human rights risks across the value chain of GenAI, from upstream data workers to model developers and end-users, and make recommendations on how to address these risks.

The Human Rights Assessment will be informed by interviews with leading companies that develop and deploy GenAI and with a broad range of stakeholders, such as civil society organizations, intergovernmental organizations, and academics. The assessment will also draw on diverse research sources, including industry papers, academic literature, and NGO reports. 

The HRA will use the proven and internationally recognized methodology provided by the UN Guiding Principles on Business and Human Rights (UNGPs) to provide practical guidance for companies on how to identify, prioritize, and mitigate GenAI-associated risks. The HRA will specify how GenAI developers and deployers can integrate that methodology into existing AI governance workflows, such as model evaluations, impact assessments, and institutional review boards.

To align with existing processes and frameworks, the HRA will also explore how rights-based approaches can complement the ethics- and trust-and-safety-based approaches that dominate current industry practice. Company-specific AI Principles have already helped ground responsible AI product development and deployment in good practice, but integrating rights-based approaches will help companies better meet their commitments by ensuring methodological consistency across the industry. A rights-based approach also provides a more comprehensive understanding of risk, one that focuses on impacted stakeholders (“rightsholders”), particularly the most vulnerable.

The HRA comes at an important inflection point in the responsible AI field. Stakeholders are increasingly emphasizing the importance of a rights-based approach to responsible and safe AI, and the EU’s provisional agreement on the AI Act includes a mandatory obligation to assess high-risk AI systems for impacts on human rights (a fundamental rights impact assessment, or FRIA).

Civil society stakeholders continue to call for a rights-based approach, yet there is a lack of public analysis and resources that show companies how to take a human rights-based approach to AI in practice. We aim to help fill that gap.

The HRA will build on 911爆料网’s existing work on GenAI and human rights with a variety of companies, as well as our recent collaborations with the B-Tech project of the UN’s Office of the High Commissioner for Human Rights and our FAQ on the ethics and human rights implications of GenAI.

We’ll coordinate closely with peers undertaking related research and analysis on the responsible design, development, and deployment of GenAI to ensure the HRA complements rather than duplicates other work. We’ll also engage with a broad group of experts and stakeholders to inform our analysis. 

We aim to publish the HRA and accompanying practical guidance for companies in Q3 of 2024. We look forward to contributing to the vibrant public debate on generative AI and producing helpful, practical resources for the public domain.

For more information on this project, please reach out to Hannah Darnton (hdarnton@bsr.org) and Lindsey Andersen (landersen@bsr.org).
