By Alton Parrish (Reporter)

Disney A.I. Knows Cows Go Moo and Pigs Go Oink, System Can Associate Pictures with the Right Sounds

Wednesday, November 16, 2016 0:09

(Before It's News)

The cow goes “moo.” The pig goes “oink.” A child can learn from a picture book to associate images with sounds, but building a computer vision system that can train itself isn’t as simple. Using artificial intelligence techniques, however, researchers at Disney Research and ETH Zurich have designed a system that can automatically learn the association between images and the sounds they could plausibly make.

Credit: Public Domain Pictures
Given a picture of a car, for instance, their system can automatically return the sound of a car engine.

A system that knows the sound of a car, a splintering dish, or a slamming door might be used in a number of applications, such as adding sound effects to films, or giving audio feedback to people with visual disabilities, noted Jean-Charles Bazin, associate research scientist at Disney Research.
Credit: Disney Research

To solve this challenging task, the research team leveraged data from collections of videos.

“Videos with audio tracks provide us with a natural way to learn correlations between sounds and images,” Bazin said. “Video cameras equipped with microphones capture synchronized audio and visual information. In principle, every video frame is a possible training example.”
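The idea that "every video frame is a possible training example" can be made concrete with a small sketch: each frame is paired with its synchronized slice of the audio track. This is an illustrative toy, not the paper's code; the function name and parameters are hypothetical.

```python
def frame_audio_pairs(num_frames, fps, audio_samples, sample_rate):
    """Pair each video frame with its synchronized slice of audio.

    audio_samples: a flat sequence of audio samples for the whole video.
    Returns a list of (frame_index, audio_clip) training pairs.
    """
    samples_per_frame = int(sample_rate / fps)
    pairs = []
    for f in range(num_frames):
        start = f * samples_per_frame
        clip = audio_samples[start:start + samples_per_frame]
        if len(clip) == samples_per_frame:  # drop a ragged final slice
            pairs.append((f, clip))
    return pairs
```

In practice, the raw frame and clip would be replaced by learned visual and audio feature vectors, but the one-to-one pairing is what makes videos a natural source of supervision.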

Given a still image, humans can easily imagine an associated sound: a picture of a car, for instance, evokes the sound of a car engine. The researchers aim to retrieve sounds corresponding to a query image. To solve this challenging task, their approach exploits the correlation between the audio and visual modalities in video collections. A major difficulty is the large amount of uncorrelated audio in the videos, i.e., audio that does not correspond to the main image content, such as voice-over, background music, added sound effects, or sounds originating off-screen.
The Disney researchers present an unsupervised, clustering-based solution that automatically separates correlated sounds from uncorrelated ones. The core algorithm operates in a joint audio-visual feature space, in which they perform iterated mutual kNN clustering to effectively filter out uncorrelated sounds. They also introduce a new dataset of correlated audio-visual data, on which they evaluate their approach and compare it to alternative solutions.
Experiments show that the approach successfully deals with a high amount of uncorrelated audio.
Credit: Disney Research

One of the key challenges is that videos often contain many sounds that have nothing to do with the visual content. These uncorrelated sounds can include background music, voice-over narration, off-screen noises, and added sound effects, all of which can confound the learning scheme.

“Sounds associated with a video image can be highly ambiguous,” explained Markus Gross, vice president for Disney Research. “By figuring out a way to filter out these extraneous sounds, our research team has taken a big step toward an array of new applications for computer vision.”

“If we have a video collection of cars, the videos that contain actual car engine sounds will have audio features that recur across multiple videos,” Bazin said. “On the other hand, the uncorrelated sounds that some videos might contain generally won’t share any redundant features with other videos, and thus can be filtered out.”
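The filtering Bazin describes can be illustrated with a toy version of mutual kNN clustering: a sample is kept only if it appears among the nearest neighbors of one of its own nearest neighbors, so redundant, recurring features survive while isolated (uncorrelated) ones drop out. This is a minimal sketch under placeholder assumptions (2-D feature vectors, small k, fixed iteration count), not the paper's implementation.

```python
import numpy as np

def knn_indices(X, k):
    # pairwise squared distances between all feature vectors
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]

def mutual_knn_filter(X, k=3, iterations=2):
    """Keep only samples that have at least one *mutual* k-nearest
    neighbor, repeating the filter a few times. Returns the indices
    of the surviving rows of X."""
    keep = np.arange(len(X))
    for _ in range(iterations):
        idx = knn_indices(X[keep], k)
        n = len(keep)
        mutual = np.zeros(n, dtype=bool)
        for i in range(n):
            for j in idx[i]:
                if i in idx[j]:  # i and j are each other's kNN
                    mutual[i] = True
                    break
        keep = keep[mutual]
        if mutual.all():  # nothing removed; filter has converged
            break
    return keep
```

On a toy set with a tight cluster of correlated features plus one far-away outlier (say, a voice-over frame), the outlier is never a mutual neighbor of any cluster point and is discarded.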

Once the video frames with uncorrelated sounds are filtered out, a computer algorithm can learn which sounds are associated with an image. Subsequent testing showed that, when presented with an image, the proposed system often was able to suggest a suitable sound. A user study found that the system consistently returned better results than one trained with the unfiltered original video collection.
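Once the filtered training set is in place, suggesting a sound for a new image reduces to a lookup in the learned feature space. A minimal nearest-neighbor sketch, with hypothetical names and hand-made feature vectors standing in for whatever representation the real system learns:

```python
import numpy as np

def suggest_sound(query_image_feature, frame_image_features, frame_sounds):
    """Return the sound attached to the training frame whose image
    feature lies closest (squared Euclidean) to the query's feature."""
    diffs = frame_image_features - query_image_feature
    dists = (diffs ** 2).sum(axis=1)
    return frame_sounds[int(np.argmin(dists))]
```

A query image whose feature lands near the "car" frames would thus be answered with an engine sound, mirroring the car example from the press release.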

Combining creativity and innovation, this research continues Disney’s rich legacy of inventing new ways to tell great stories and leveraging technology required to build the future of entertainment.

These results were recently presented at a European Conference on Computer Vision (ECCV) workshop in Amsterdam. In addition to Jean-Charles Bazin, the research team included Matthias Solèr and Andreas Krause of ETH Zurich’s Computer Science Department, and Oliver Wang and Alexander Sorkine-Hornung of Disney Research. For more information, visit the project page on the Disney Research website.

Contacts and sources:
Jennifer Liu
Disney Research
