Last year, a series of videos surfaced of a simulated Tom Cruise that took social media by storm. They were deepfakes – a form of digital fabrication powered by artificial intelligence, underpinned by 'deep learning' algorithms that learn the movements or sounds in two different recordings and combine them to produce realistic-looking fake media.

There are two kinds of deepfakes: video deepfakes, which reproduce the look and voice of an actual person, and audio deepfakes, which imitate only a person's voice. While deepfake detection software has received a lot of attention, most efforts have focused on analysing image files. Now, researchers have developed a detection method designed to spot increasingly realistic audio deepfakes.

To do so, Joel Frank and Lea Schönherr, from the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum, amassed around 118,000 samples of synthesised voice recordings, amounting to almost 196 hours of fake audio in English and Japanese. Based on those findings, which were presented at last month's Conference on Neural Information Processing Systems, Frank and Schönherr have developed a set of algorithms that distinguish between a real human voice and an imitation. Their software is only a beginning: as they state, "these algorithms are designed as a starting point for other researchers to develop novel detection methods."

With the process becoming cheaper and more widely accessible, the amount of deepfake content has been growing at an alarming rate. While many have sought to commercialise the technology by licensing it to gaming and social media firms, its potential misuses remain worrying. Although audio deepfakes are still largely experimental, opportunistic criminals have already begun to deploy them in online and telephone scams.
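The article does not detail how the algorithms work, but detection methods of this kind typically examine frequency-domain statistics, since synthesised speech often leaves subtle artefacts in the spectrum. The sketch below is a loose illustration of that general idea, not the researchers' actual method: it compares average magnitude spectra of two signals, using a low-pass-filtered copy of a noisy tone as a stand-in for a synthesised voice (function names and the toy signals are invented for this example).

```python
import numpy as np

def average_spectrum(signal, frame_len=256):
    """Average magnitude spectrum over fixed-length frames."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

def spectral_distance(a, b):
    """L2 distance between two average spectra - a crude mismatch score."""
    return float(np.linalg.norm(average_spectrum(a) - average_spectrum(b)))

# Toy illustration: a noisy harmonic tone as the 'real' reference, and a
# low-pass-filtered copy standing in for a synthesised voice (vocoders
# often attenuate or distort high frequencies).
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
real = np.sin(2 * np.pi * 220 * t) + 0.1 * rng.standard_normal(t.size)
fake = np.convolve(real, np.ones(8) / 8, mode="same")  # crude low-pass

d_self = spectral_distance(real, real)  # zero: identical spectra
d_fake = spectral_distance(real, fake)  # larger: high frequencies removed
```

A real detector would of course learn such decision boundaries from thousands of labelled recordings rather than comparing against a single reference signal.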