On Wednesday afternoon, I sat on a video call listening to Ricky Gervais tell me a joke about voice cloning. Then Audrey Hepburn chimed in with her thoughts on artificial intelligence.
Unsurprisingly, neither of these people was really on the call. Instead, it was Dr. Alan Cowen, CEO and chief scientist of Hume, demonstrating the latest update to his company's AI voice creation service, EVI 3.
From just 30 seconds of audio, the tool can make a near-perfect copy of someone's voice. It isn't just their pitch and tone: this new feature also captures their personality and recreates it.
Case in point: Ricky Gervais told me about the features of voice cloning with the same dry wit and sarcastic delivery he's known for, while Audrey Hepburn, speaking in her soft British accent, was warm and engaging.
But it isn't just celebrities. This tool can imitate any voice in the world, all from a short audio clip. Obviously, a tool like this has the potential to change the world, for both better and worse.
Cowen sat down with Tom's Guide to explain this new tool, his background, and why his team wants to revolutionize the world of AI voice cloning.
Hume and the world of AI voice generation
Hume works in a corner of AI that doesn't come up all that often: voice generation software, which the company claims is the "most realistic voice AI in the world."
I think this is the fastest developing part of the AI space. OpenAI and Google are rivals, but what we have done with EVI 3 takes the technology to the next stage.

Dr. Alan Cowen, Hume CEO
The company has come a long way over the years. It can now design voices in detail, offering a range of accents alongside a range of speech styles. And with this latest update, it can clone any and all voices.
"I think this is the fastest developing part of the AI space," Cowen explained on the call. "OpenAI and Google are rivals, but what we have done with EVI 3 takes this technology to the next stage."
(Image credit: Hume AI)
"Previous models have relied on imitating specific people. For that, you need data on each individual person's intonation and delivery."
EVI 3 instead learns from a large backlog of voice data, so it doesn't need to be trained to imitate specific people. Give the model a 30-second clip, and it can recreate the voice from scratch. This lets the model learn your specific inflections, tone and personality, while leaning on the much larger backlog of voice data it trained on to fill in the gaps.
Of course, a model like this works best when the voice is well represented. A muffled, mumbled clip won't produce a clone that closely matches your personality. For now, it only works in English and Spanish, with more languages to come.
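Hume hasn't published how EVI 3 works internally, but the pattern Cowen describes, extracting a compact representation of voice identity from a short reference clip and using it to condition a model trained on a much larger corpus, is common to few-shot voice cloning systems. Here is a deliberately toy Python sketch of that idea; the "embedding" and "synthesizer" are stand-ins for learned neural components, and every function name is hypothetical:

```python
import math

def speaker_embedding(clip, win=160):
    """Toy 'embedding': mean amplitude and amplitude spread per window,
    averaged over the clip. Real systems use learned neural encoders."""
    means, stds = [], []
    for i in range(0, len(clip) - win + 1, win):
        w = clip[i:i + win]
        m = sum(w) / win
        means.append(m)
        stds.append(math.sqrt(sum((x - m) ** 2 for x in w) / win))
    n = len(means)
    return (sum(means) / n, sum(stds) / n)

def synthesize(text, embedding, rate=16000):
    """Toy 'TTS': emit 10 ms of tone per character, scaled and offset by the
    speaker embedding. Stands in for conditioning a real generative model."""
    offset, scale = embedding
    samples = []
    for ch in text:
        freq = 100 + (ord(ch) % 32) * 10  # arbitrary per-character pitch
        for t in range(rate // 100):
            samples.append(offset + scale * math.sin(2 * math.pi * freq * t / rate))
    return samples

# A short synthetic clip standing in for the 30-second reference recording.
reference = [0.3 * math.sin(2 * math.pi * 220 * t / 16000) for t in range(16000)]
emb = speaker_embedding(reference)
audio = synthesize("hello", emb)
```

The point of the sketch is the division of labor: the reference clip only supplies a small identity vector, while everything else the synthesizer "knows" about speech comes from its prior training, which is why 30 seconds can be enough.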
The ethics of recreating real voices
(Image credit: Shutterstock)
If, like me, your first reaction on hearing all this is concern, you may be surprised to learn you have something in common with Cowen.
"I think it can be misused. Initially, we were so worried about these dangers that we decided not to pursue voice cloning. But we changed our view because so many people came to us with legitimate uses for voice cloning."
"Legitimate use cases like live translation, dubbing, making content more accessible, being able to have your voice read a script, or even new ways to reach fans."
For every legitimate use, though, there is an equally negative one. OpenAI CEO Sam Altman recently warned of the risks of AI voice cloning and its potential use in scams and in defeating voice authentication at banks.
Combined with video and image generation, technology like this could make deepfakes a serious problem before long. Cowen explained that he was aware of these concerns and claimed Hume was approaching them as carefully as it could.
"We are releasing a number of safety measures with this technology," Cowen said. "We analyze every conversation, and we are still improving in this regard. But we can score how likely it is that something is being misused along a number of dimensions, including whether or not there is consent."
"When people aren't using it properly, we can simply cut off access. Under our terms, you have to comply with a set of ethical guidelines that we introduced alongside the Hume Initiative. These concerns have been on our minds since we started, and as we've built these technologies, we've gotten better at it."
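Cowen doesn't detail how that scoring works, but the policy he describes, score every conversation along several risk dimensions, then gate or revoke access, can be sketched as a simple moderation loop. All dimension names, keywords, thresholds and strike counts below are hypothetical illustrations, not Hume's actual system:

```python
# Toy misuse-scoring gate: each request is scored on several risk dimensions,
# consent lowers the scores, and repeat offenders lose access entirely.

RISK_KEYWORDS = {
    "impersonation": ["pretend to be", "pose as"],
    "fraud": ["bank", "wire transfer", "password"],
}
BLOCK_THRESHOLD = 0.5
STRIKES_TO_REVOKE = 3

def risk_scores(transcript, has_consent):
    """Score a transcript on each dimension; consent discounts every score."""
    text = transcript.lower()
    scores = {}
    for dim, words in RISK_KEYWORDS.items():
        hits = sum(w in text for w in words)
        score = min(1.0, hits / 2)
        scores[dim] = score * (0.3 if has_consent else 1.0)
    return scores

class AccessGate:
    def __init__(self):
        self.strikes = {}  # user id -> violation count

    def check(self, user, transcript, has_consent):
        if self.strikes.get(user, 0) >= STRIKES_TO_REVOKE:
            return "revoked"
        if any(s >= BLOCK_THRESHOLD
               for s in risk_scores(transcript, has_consent).values()):
            self.strikes[user] = self.strikes.get(user, 0) + 1
            return "blocked"
        return "allowed"

gate = AccessGate()
gate.check("u1", "read this bedtime story", True)   # benign, with consent
gate.check("u1", "pretend to be the bank and ask for a password", False)
```

A real system would replace the keyword lookup with learned classifiers, but the control flow, per-dimension scores, a consent signal, a block threshold and an escalating revocation policy, matches the behavior Cowen describes.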
Creating guidelines in the world of AI
(Image credit: Shutterstock)
The Hume Initiative is a project established by the company. Its ethos is that modern technology should, above all, serve our emotional well-being. That's somewhat vague, but the initiative has published a list of six principles for empathic technology.
- Empathic technology should only be deployed when its benefits substantially outweigh its costs, for individuals and for society at large
- Empathic technology should be built to serve our emotional well-being, and should avoid treating human emotions as a means to an end
- Claims about the capabilities, costs and benefits of empathic technology should be supported by rigorous, comprehensive and collaborative science
- Members of diverse demographic and cultural groups deserve access to the benefits of empathic technology without discriminatory costs
- People affected by empathic technology should have access to the information needed to make informed decisions about its use
- Empathic technology should only be deployed with the consent of those it affects
(Image credit: Hume)
Of course, guidelines like these are only worth anything if they are actually followed. Cowen assured me that these are beliefs the company stands by, and that when it comes to voice cloning, the team is well aware of the dangers.
Initially, at Hume, we were so worried about the dangers that we decided not to pursue voice cloning. But we changed our view because so many people with legitimate use cases for cloning have come to us.

Dr. Alan Cowen, Hume CEO
"We are at the forefront of this technology and we try to stay ahead of it," Cowen explained. "I think there will be people who don't respect the guidelines of a tool like this."
"People should be worried about deepfakes over the phone, they should be careful of scams like this, and this is something I think we need a cross-industry effort to deal with."
Despite being well aware of the dangers, Cowen explained that he believed this was a technology he had to build.
"The AI space moves so fast that I have no doubt that in six months, a bad actor will have access to something like this technology," Cowen said.
Overall thoughts
Cowen spent much of our chat addressing the legitimate concerns about technology like this. His background is in psychology, and he is firmly convinced that this kind of technology will have a more positive than negative impact on people's well-being.
Asked what people misunderstand about technology like this, Cowen said: "People are really enjoying cloning their voices with our demo. We have already had thousands of conversations."
He is firmly convinced that it can be used for entertainment, to help boost people's confidence, and even for training purposes or for films and dubbing.
Of course, just as in many other areas of AI, the positive benefits compete with the negative. Being able to have a natural-sounding voice read any script is useful, but it carries real risks.
Being able to recreate any voice in the world comes with a long list of concerns. For now, Cowen and his team are pushing ahead with the project, and they appear engaged with the ethical side of the debate. Either way, we will be living with technology like this very soon.