Instagram Is Set To Develop Labels for AI-Generated Content

Instagram is currently developing labels for AI-generated content. The in-development feature will highlight when a piece of content has been created or edited with AI.

Instagram Labels AI-Generated Content

Instagram appears to be working on new notices that would identify when AI has played any role in creating content on its platform.

App researcher Alessandro Paluzzi, who often uncovers new Instagram features before they are officially announced or released, has posted a screenshot of a page in the Instagram app that reportedly reads, “the creator or Meta said that this content was created or edited with AI.” The notice states that, in this case, the image was “generated by Meta AI,” before giving a brief description of what generative AI is and how to identify posts that use it.

Big Tech’s Responsibilities in the Development of AI

The discovery comes shortly after Meta, alongside other major AI players such as Google, Microsoft, and OpenAI, made commitments to the White House regarding the responsible development of AI. Alongside investments in cybersecurity and discrimination research, those commitments include developing a watermarking system to inform users when content is AI-generated.

How Automated Instagram’s Labeling System Will Be

It is still unclear exactly how automated Instagram’s labeling system will be, and to what extent it will rely on users to disclose when AI has been used to create or edit an image. However, the fact that the notice contains the words “Meta said” suggests that, in at least some cases, the company will apply the notice proactively rather than relying solely on users’ honesty. A Meta spokesperson declined to comment to Engadget on the yet-to-be-announced notice, and the company did not immediately respond to a request for comment from The Verge.

The Effects of AI Misinformation

Although AI-generated misinformation is still in its infancy, we already got a taste of what it could look like in the real world when a picture of the pope in a swagged-out puffy jacket went viral across social media platforms earlier this year. That relatively harmless image was eventually debunked, but it served as a warning that simple tools now exist that could spread dangerous misinformation if applied to satellite imagery or political photography.

Meta’s Open-Source Large Language Model LLaMA 2

Meta recently open-sourced its large language model LLaMA 2, but it has yet to widely release consumer-facing generative AI features and tools for its own products such as Instagram. We have, however, seen a few hints of the kinds of features it is developing.

In an all-hands meeting back in June, CEO Mark Zuckerberg said Meta was developing features such as using text prompts to modify photos for Instagram Stories, Axios reported, and Paluzzi has also spotted signs of an “AI brush” feature for Instagram that could “add or replace specific parts” of images. The Financial Times recently reported that Meta could integrate an AI chatbot “personas” feature into its products as soon as next month.

Other Tech Companies Have Announced AI Tools Recently

Beyond Meta, Google has already announced a tool that should make it easier for users to determine whether an image is AI-generated. Its “About this image” feature, launching this summer, is designed to highlight the first place an image was indexed by the search giant, providing vital clues to its origins.
