On Tuesday, Meta rolled out its new voice dubbing feature for Reels globally. The feature uses generative AI to translate your voice, with optional lip syncing. Mark Zuckerberg first previewed the feature at Meta Connect 2024.
At launch, translations are only available from English to Spanish (and vice versa). The company says more languages will come later. For now, the feature is limited to Facebook creators with 1,000 or more followers, though anyone with a public Instagram account can use it.
The tool trains on your original voice and creates a translated audio track that matches your tone. Lip syncing is then applied so your mouth movements match the translated audio. The demo clip the company showed off last year was impressive.
You can choose whether to include lip syncing and preview the translation before posting.
(Meta)
To use the feature, select the "Translate your voice with Meta AI" option before publishing a reel. That's where you can choose to add lip syncing. There's also an option to review the AI-translated version before publishing. Viewers will see a notice that they're watching an AI translation.
Meta says the feature works best with face-to-camera videos. The company recommends avoiding covering your mouth and keeping background music to a minimum. It works with up to two speakers, but it's best to avoid overlapping speech.
The company pitches the feature as a way for creators to expand their audiences beyond their native language. To that end, it includes a per-language performance tracker, so you can see how well a reel is doing in each language.
YouTube launched a similar feature last year. Apple has also gotten in on the action: the Messages, Phone and FaceTime apps include live translation tools in iOS 26.