Meta’s Motivo AI Model Could Deliver More Lifelike Digital Avatars: Here’s How It Works

Meta is researching and developing new AI models that could have potential uses in Web3 applications. The Facebook parent firm has released an AI model called Meta Motivo, which can control the bodily movements of digital avatars. The newly unveiled model is expected to improve the overall metaverse experience by offering optimised body motion and interaction for avatars in metaverse ecosystems.

The company claims that Motivo is the ‘first-of-its-kind behavioural foundation model’. The AI model can enable virtual human avatars to complete a variety of complex whole-body tasks, while making virtual physics more seamless in the metaverse.

Through unsupervised reinforcement learning, Meta has enabled Motivo to perform an array of tasks in complex environments. The company said in a blog post that a novel algorithm was used to train the model on an unlabelled dataset of motions, helping it pick up human-like behaviours while retaining zero-shot inference capabilities.

Announcing the launch of Motivo on X, Meta shared a short video demo showing what the integration of this model with virtual avatars would entail. The clip showed a humanoid avatar performing dance moves and kicks as whole-body tasks. Meta said it is using unsupervised reinforcement learning to trigger these human-like behaviours in virtual avatars, as part of its attempts to make them look more realistic.

The company says that Motivo can solve a range of whole-body control tasks. This includes motion tracking, goal pose reaching, and reward optimisation without any additional training.
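In concept, a behavioural foundation model of this kind keeps a single frozen policy and steers it to new tasks through a latent task embedding, which is how goal-pose reaching and reward optimisation can be solved without additional training. The sketch below is a toy illustration of that idea in Python; the class, method names, and the linear "networks" are hypothetical stand-ins for illustration, not Meta Motivo's actual API.

```python
import numpy as np

class BehaviouralFoundationModel:
    """Toy sketch: one frozen policy, conditioned on a task embedding z
    that is inferred zero-shot from a task specification."""

    def __init__(self, obs_dim, act_dim, z_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        # Weights are fixed after "pretraining"; no task ever updates them.
        self.W_pi = rng.standard_normal((act_dim, obs_dim + z_dim)) * 0.1
        self.W_goal = rng.standard_normal((z_dim, obs_dim)) * 0.1

    def embed_goal_pose(self, pose):
        # Zero-shot goal reaching: map a target pose straight to a latent z.
        return self.W_goal @ pose

    def embed_reward(self, reward_fn, sample_states):
        # Zero-shot reward optimisation: score candidate states with the
        # prompted reward and form a reward-weighted average embedding.
        weights = np.array([reward_fn(s) for s in sample_states])
        weights = np.exp(weights - weights.max())
        weights /= weights.sum()
        return sum(w * self.embed_goal_pose(s)
                   for w, s in zip(weights, sample_states))

    def act(self, obs, z):
        # The same frozen policy serves every task; only z changes.
        return np.tanh(self.W_pi @ np.concatenate([obs, z]))

model = BehaviouralFoundationModel(obs_dim=4, act_dim=2)
obs = np.zeros(4)

# Task 1: reach a goal pose — no retraining.
z_pose = model.embed_goal_pose(np.array([1.0, 0.0, 0.5, -0.5]))
a1 = model.act(obs, z_pose)

# Task 2: optimise an arbitrary reward prompt — same weights, new z.
z_reward = model.embed_reward(
    lambda s: -np.abs(s).sum(),
    [np.array([0.1, 0.2, 0.3, 0.4]) * k for k in range(1, 4)])
a2 = model.act(obs, z_reward)
```

The design point the sketch tries to capture is that `act` never changes between tasks; only the inferred embedding `z` does, which is what makes zero-shot behaviour possible.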

Reality Labs is Meta’s internal unit working on its metaverse-related initiatives. Since being launched in 2022, Reality Labs has recorded consecutive losses. Despite that pattern, Zuckerberg has doubled down on the metaverse, testing newer technologies to fine-tune the overall experience.

Earlier this year, Meta showcased a demo of Hyperscape, which turns a smartphone camera into a gateway to photorealistic metaverse environments. The tool enables smartphones to scan physical spaces and transform them into hyperrealistic metaverse backgrounds.

In June, Meta split its Reality Labs team into two divisions: one tasked with working on the metaverse-focussed Quest headsets, and the other responsible for the hardware wearables that Meta may launch in the future. The move was aimed at focusing the Reality Labs team’s time on developing newer AI and Web3 technologies.


