Tencent on Tuesday released a new artificial intelligence (AI) model that can animate still pictures. Dubbed HunyuanPortrait, the model is diffusion-based and can produce videos with realistic animation from a reference image and a driving video. The researchers behind the project highlighted that the model can capture both facial data and head movements and accurately transfer them onto the reference image. Tencent has now made the HunyuanPortrait AI model open source, and it can be downloaded and run locally from popular repositories.
Tencent's HunyuanPortrait Can Animate Still Portraits
In a post on X (formerly known as Twitter), the official handle of Tencent Hunyuan announced that the HunyuanPortrait model is now available to the open community. The AI model can be downloaded from Tencent's GitHub and Hugging Face listings. Additionally, a pre-print paper detailing the model is hosted on arXiv. Notably, the AI model is available for academic and research-based use cases, but not for commercial use.
HunyuanPortrait can produce lifelike videos using a reference image and a driving video. It captures facial data and head movements from the driving video and transfers them onto the still portrait image. The company claims that the motion synchronisation is accurate, and that even subtle facial expressions are carried over.
HunyuanPortrait architecture
Photo Credit: Tencent
On its model page, Tencent researchers detailed the architecture of HunyuanPortrait. The model is built on the Stable Diffusion architecture along with condition control encoders. These pre-trained encoders capture motion information and identity from the videos. The data is captured as control signals, which are then injected into a still portrait via a denoising UNet. The company claims this delivers both fine-grained local accuracy as well as temporal consistency.
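The conditioning scheme described above can be sketched in miniature: pre-trained encoders summarise the reference portrait (identity) and each driving-video frame (motion), and a denoising step consumes both summaries as control signals. This is a toy illustration only; all function names, shapes, and the update rule are assumptions for exposition, not Tencent's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def identity_encoder(portrait):
    # Stand-in for a pre-trained identity encoder: image -> feature vector.
    return portrait.mean(axis=(0, 1))          # (channels,) summary

def motion_encoder(frame):
    # Stand-in for a pre-trained motion encoder: frame -> feature vector.
    return frame.std(axis=(0, 1))              # (channels,) summary

def denoise_step(latent, identity, motion, strength=0.1):
    # Toy denoising update: identity and motion are injected as a
    # combined control signal that steers the latent each step.
    control = identity + motion
    return latent + strength * (control - latent)

# Reference portrait and a short driving video as (H, W, C) arrays.
portrait = rng.random((64, 64, 3))
driving_video = rng.random((8, 64, 64, 3))     # 8 driving frames

identity = identity_encoder(portrait)
latent = rng.random(3)                         # toy per-frame latent
frames = []
for frame in driving_video:
    motion = motion_encoder(frame)
    latent = denoise_step(latent, identity, motion)
    frames.append(latent.copy())

print(len(frames))                             # one output latent per driving frame
```

Reusing the same identity signal across every frame while the motion signal changes per frame is what, in the real model, keeps the subject's appearance stable while the expressions and head pose follow the driving video.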
Tencent claims that the AI model improves upon current open-source alternatives in temporal consistency and controllability, but these claims have not been independently verified.
Such models can be useful in the filmmaking and animation industries. Traditionally, animators either keyframe facial expressions by hand or use expensive motion-capture systems to animate characters realistically. Models like HunyuanPortrait would let them simply feed in a character design along with the target movements and facial expressions and generate the output. Such models also have the potential to make high-quality animation accessible to small studios and independent creators.


