CS:0001
CASE STUDY

D//F: The Open Source Challenge

The Digital Infrastructure Insights Fund (D//F) is a multi-funder initiative by the Ford Foundation, the Alfred P. Sloan Foundation, Omidyar Network, Schmidt Futures, and Open Collective that sustains a platform for researchers and practitioners to better understand how open digital infrastructure is built and deployed.

Openness has become a key trend in Generative AI development, whereby developers make their models, technical components, and associated resources freely accessible to the public. Openness brings benefits, but it also carries risks: malicious actors can misuse the models in question and remove the safeguards developers put in place. Openness also remains a complex and often misunderstood concept, allowing some companies to misrepresent the true extent of their models' openness – a practice increasingly known as "open washing". Together, these three developments (widespread openness, its potential for misuse, and open washing) constitute the openness challenge.

In this paper, commissioned by the Digital Infrastructure Insights Fund, AWO propose an approach to openness that supports the democratic governance of Generative AI while mitigating its associated risks.

Our research highlights the value of expanding access for external researchers to enhance model safety, coupled with measures to limit access for potential malicious actors. It argues that any AI policy framework seeking to balance the risks and benefits of openness in Generative AI should enhance external researchers' role in model development and maintenance.

In the future, policymakers around the world should ensure that AI legislation tackles the openness challenge. In particular, the EU should seize the opportunity presented by the AI Act's Code of Practice for General-Purpose AI, which could address how and when external researchers should be involved in risk evaluation and mitigation.

Key findings

  • Lack of clarity regarding what constitutes "open source" in Generative AI has resulted in open washing: companies brand their models as "open-source" or "open" as a form of misleading virtue signalling. Open washing undermines the public's understanding of AI, creates diversions from the risks associated with Generative AI, and fosters a culture of openness that falls short of true transparency.
  • Open washing disproportionately promotes the benefits of openness without fully addressing its risks: This practice complicates efforts to regulate Generative AI effectively, as companies are keen to take advantage of AI regulations that place a lighter burden on broadly defined open-source models. When misused, certain forms of openness can enable and exacerbate online safety risks, because openness (1) allows malicious actors to evade developer oversight; (2) allows those actors to remove any safeguards built into models; and (3) allows models to be customised for harmful purposes.
  • Opening up access to external parties can improve risk mitigation measures: Openness has a double-edged impact on safety. While it increases the ability of malicious actors to misuse models, it also enhances the ability of external researchers to scrutinise those models, identify risks, and improve mitigations.
  • An open science approach to releasing models can lead to increased safety: Fostering openness towards external researchers is essential for capturing the full benefits of transparency in AI development. Engaging a diverse range of experts throughout the development and post-release phases allows the AI community to better manage the complex interplay of risks and benefits these models present.
  • Barriers limit the potential of openness to external researchers: Developers can pick and choose which areas their contracted external researchers focus on, which may leave certain risk areas unexamined. Developers may also limit the types of information and the model elements they make available to external researchers, further constraining the scope and robustness of research.
  • Current policy approaches do not adequately tackle the openness challenge: None of the EU, UK, or US frameworks adequately tackles the openness challenge posed by Generative AI models. Yet, taken together, the EU, UK, and US initiatives provide an outline of what a framework that adequately tackles the openness challenge could look like.

This paper recommends a set of non-mutually exclusive policy options that could be taken on board by countries drafting or reviewing their own AI legislation.

Download the full paper.