Meta showcases new lifelike avatars that can speak and sound like you
Meta hasn't shared details on when the feature will be open to the public, but reports suggest it may happen soon
Meta researchers first revealed their Codec Avatars in 2019. These avatars were futuristic, photorealistic and three-dimensional. In 2022, researchers improved the technology and demonstrated Meta MUGSY, which allowed users to create these lifelike avatars simply by scanning their faces with an iPhone; earlier, creating such an avatar required over 100 cameras. Now, in 2023, the researchers at Meta have taken Codec Avatars another step forward. The avatars will reportedly soon be able to learn to speak like you and even sound like you.
As per a tweet shared by AI Breakfast, the new tech will employ a large language model (LLM) that can be trained to speak like you. It will also come with text-to-voice support that will be able to mimic how you sound in real life.
Meta hasn’t revealed a timeline for when the new tech will be available for public use, but reports suggest that it will happen soon.
When available, users will be able to create these avatars using just a smartphone. While the process is described as instant, it would still take about four minutes, because developing a “lifelike” virtual model requires capturing a series of 65 facial expressions. After that’s done, the application would need a few hours to process your final avatar.