Meta recently launched Imagine with Meta AI – the company's text-to-image generative AI feature, which lets users describe an image and have it generated instantly across a variety of Meta's platforms.
The image generation functionality is built on Meta AI's Emu Edit, an instruction-based image editing tool, and Emu Video, a tool that generates short video clips from prompts.
These two applications were released in November to go head-to-head with OpenAI’s DALL-E, Visme, and Midjourney.
But unlike those other models, Meta's was trained on over a billion public photos and videos from Facebook and Instagram.
The company is careful to point out that it is using only a small fraction of the public images on Facebook and Instagram, and that users who do not share their pictures publicly will not have their images used to train the model.
The recent announcement takes the Emu functionality and embeds it into popular apps like Facebook Messenger.
Users chatting in Messenger can create images right within the conversation.
The recipient can click on an image and "riff" on it, making changes and sending it back.
Instagram and Facebook users can also use the tools to help them generate images and videos within the Reels feature.
This use of AI for social interaction and fun fits with Meta's general approach to the technology.
Meta AI is the company's personal AI assistant, which lets users pick one of a dozen personas to interact with.
A new feature is also being tested that will give Meta AI persistent memory, allowing the chatbot to remember conversations from session to session.