Photo by Neil Conway

One of the huge revelations of Macho Zapp's latest multimedia feature Synapse is that science can now read sounds directly from our minds.

Professor Michael Casey of Dartmouth College, a US Ivy League university, has found a way of mapping the brain's activity as it listens to certain types of music or sounds. The British-born scientist used functional magnetic resonance imaging (fMRI), which takes advantage of the link between brain activity and blood flow to different parts of the brain.

Recording this data allowed him to build up a pattern across various people, which he could use to identify what people were thinking about - and then turn that back into sound to (approximately) recreate the sounds from people's thoughts (Casey et al., 2012; Hanke et al., 2015).
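To make the decoding idea concrete, here is a minimal sketch in Python of the general approach such studies take: learn a regularised linear map from fMRI voxel patterns to audio features, then use it to predict the sound a new brain scan corresponds to. Everything here (the synthetic data, the shapes, the model choice) is illustrative rather than Professor Casey's actual pipeline.

```python
# A minimal sketch of an fMRI-to-sound decoding pipeline, NOT the
# study's actual code. We assume voxel activity (X) recorded while
# subjects heard sounds, paired with audio features (Y) such as
# spectrogram frames, and learn a linear map from brain to sound.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical shapes: 400 scans x 2000 voxels, each scan paired
# with a 64-bin audio feature vector (all numbers illustrative).
n_scans, n_voxels, n_audio = 400, 2000, 64
true_map = rng.normal(size=(n_voxels, n_audio))
X = rng.normal(size=(n_scans, n_voxels))          # stand-in voxel patterns
Y = X @ true_map + 0.1 * rng.normal(size=(n_scans, n_audio))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# Ridge regression is a common choice in fMRI decoding because voxels
# vastly outnumber scans, so heavy regularisation is essential.
decoder = Ridge(alpha=10.0)
decoder.fit(X_train, Y_train)

# Correlation between predicted and actual audio features gives a
# rough measure of how well "sound" is recovered from brain activity.
Y_pred = decoder.predict(X_test)
corr = np.corrcoef(Y_pred.ravel(), Y_test.ravel())[0, 1]
print(f"decoded-vs-actual feature correlation: {corr:.2f}")

# In a real pipeline, the predicted spectrogram-like features would
# then be inverted back to a waveform (e.g. with Griffin-Lim) to give
# the approximate sound reconstruction described above.
```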

Professor Michael Casey (Photo by Joseph Mehling ’69)

Science fiction, as it regularly seems to nowadays, has again become science fact. Prof Casey agreed that this discovery opens the door, further down the line, to uploading and downloading whole intellects. And if human thoughts can be turned into digital data then, theoretically, it must be possible for a computer to contain human thoughts and therefore emulate a human.

“The very idea that a computer could write a story that would have meaning to a human I think would be big news if it could happen,” said Professor Casey over a Skype call, “and I think we need to question the role of technology in culture. It’s already there, but it’s coming in insidiously.

“A lot of what you’re being recommended by sites such as Netflix is being mediated by algorithms. It already is a kind of artificial intelligence that’s going on behind these things, not to mention the actual production of the media itself.

“The way film scripts are vetted and the way the films are chosen to be produced goes through a process that also involves data analysis, statistical analysis, a kind of artificial intelligence to figure out which storyline should go, which actor should be in it, which particular movie, so that they can guarantee that they’re gonna make some money.”

Casey's research follows on from work by computer scientists and neuroscientists (Nishimoto et al., 2011; Haxby et al., 2014) which has shown that, just like sound, visual images and movies can also be mapped using brain imaging, suggesting that we are truly through the looking glass.

Originally from Derby in the UK, Casey received his Ph.D. from the MIT Media Laboratory's Machine Listening group in 1998. He then worked as a Research Scientist at Mitsubishi Electric Research Laboratories (MERL) and as Professor of Computer Science at Goldsmiths, University of London, before joining Dartmouth in 2008.

His research is funded by the National Science Foundation (NSF), the Mellon Foundation, and the Neukom Institute for Computational Science, with prior research awards from Google Inc., the Engineering and Physical Sciences Research Council (EPSRC, UK), and the National Endowment for the Humanities (NEH). 

For more, experience our unique multimedia feature on music psychology here.