Meta’s Head of AI Research Advocates for Alterations to Open Source Licensing

Meta’s AI research division aims to continue releasing models without charge, even in the face of criticism asserting that Llama 2 lacks sufficient openness.


In July, Meta’s Fundamental AI Research (FAIR) center released its large language model Llama 2 relatively openly and free of charge, a clear departure from its major competitors. But within the open-source software community, some still view the company’s commitment with skepticism.


Although Meta’s license grants many users free access to Llama 2, it retains restrictions that keep it from meeting all the criteria defined by the Open Source Initiative (OSI).

According to the OSI’s Open Source Definition, a genuinely open source release involves more than sharing code or research: it must permit free redistribution, provide access to the source code, allow modifications, and not tie the license to a specific product.

Meta’s restrictions include requiring a license from developers serving more than 700 million daily users and prohibiting the use of Llama to train other models. Researchers from Radboud University in the Netherlands called Meta’s characterization of Llama 2 as open source “misleading,” and posts on social media questioned the company’s open-source claims as well.

Joelle Pineau, Meta’s vice president for AI research and the lead of FAIR, acknowledges the limits of Meta’s approach to openness. Nevertheless, she argues that it strikes a necessary balance between the benefits of information sharing and the potential costs to Meta’s business. In an interview with The Verge, Pineau says that even Meta’s limited openness has changed how its researchers approach their work.

“Embracing openness has transformed our internal research methodology and motivates us to avoid releasing anything that isn’t thoroughly secure and responsible,” Pineau explains.

Meta’s AI Division Has Previously Engaged in More Open Initiatives

PyTorch, a machine learning framework used to build generative AI models, stands out as one of Meta’s most significant open-source efforts. The company released PyTorch to the open-source community in 2016, and external developers have contributed to its development ever since. Pineau hopes to generate a similar level of enthusiasm around Meta’s generative AI models, pointing to how substantially PyTorch has improved since it was open-sourced.

She notes that how much Meta releases depends on several factors, including how safe the code will be in the hands of external developers.

Pineau explains, “Our decision on how to release our research or code is contingent on the stage of development. When we are uncertain about potential harm or safety, we exercise caution by sharing the research with a more limited audience.”

FAIR emphasizes the importance of making its research accessible to “a wide range of researchers” in order to gather valuable feedback. The same principle guided Meta in releasing Llama 2, reflecting the belief that collaborative innovation is essential to progress in generative AI.
