Humans are far better at identifying changes in data patterns audibly than they are graphically in two dimensions, say researchers exploring a radical concept. They think servers full of big data would be far more understandable if the numbers were moved off computer screens and hardcopies and sonified, or converted into sound.

That's because when you listen to music, nuances can jump out at you: a bad note, for example. Researchers at Virginia Tech say the same may apply to number crunching: spotting anomalies in data sets, and comprehension overall, could be enhanced.

The team behind a project to prove this is testing the theory with a recently built 129-loudspeaker array installed in a giant immersive cube at the Moss Arts Center, Virginia Tech's combined performance space and science lab.

How researchers are testing their big data theory

Data sets from the earth's upper atmosphere are the test subjects, with each piece of atmospheric data converted into a unique sound. The sounds are differentiated by changes in amplitude, pitch, and volume.

The school's immersive Cube contains one of the largest multichannel audio systems in the world, the university claims, and the sounds are produced in a special 360-degree 3D format.

"Users experience spatial sound, which means they can hear everything around them," the school says in a news article. "Sounds [are] actually placed in specific spots in the room."

Each section of the globe's atmosphere is assigned to one of the Cube's 129 speakers, which are arranged to project audio in a half-dome pattern, thus replicating a hemisphere.
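The kind of parameter mapping described above can be sketched in a few lines. The sketch below is purely illustrative and is not the SADIE project's actual code: the function names, value ranges, and linear pitch/amplitude scales are all assumptions.

```python
import math

def sonify_value(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Map a scalar data reading to a tone frequency (Hz) and amplitude (0-1).

    A hypothetical linear mapping: low readings become low, quiet tones;
    high readings become high, loud tones.
    """
    if v_max == v_min:
        t = 0.0
    else:
        t = (value - v_min) / (v_max - v_min)
    freq = f_min + t * (f_max - f_min)
    amp = 0.2 + 0.8 * t  # keep even the smallest values faintly audible
    return freq, amp

def tone_samples(freq, amp, duration=0.5, rate=44100):
    """Render a sine tone as a list of float samples in [-amp, amp]."""
    n = int(duration * rate)
    return [amp * math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

# Example: sonify one atmospheric reading within an observed range
freq, amp = sonify_value(5.0, 0.0, 10.0)
samples = tone_samples(freq, amp, duration=0.1)
```

In a spatial setup like the Cube's, each such tone would additionally be routed to the loudspeaker assigned to the data's region of the hemisphere, so position in the room carries information as well.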
Participants wander the Cube while operating an interface that lets them rewind the 3D sounds, zoom in, slow down the audio, and so on. The gesture-based interface they carry also captures the study's user data (which, amusingly, in turn needs to be analyzed).

"It makes sense that we would want to go beyond two-dimensional graphical models of information and make new discoveries using senses other than our eyes," says Ivica Ico Bukvic in another article on the university's website. Bukvic, an associate professor of music composition and multimedia in Virginia Tech's College of Liberal Arts and Human Sciences, is one of the collaborators. He is working with Greg Earle, an electrical and computer engineering professor.

Spatial, immersive representation of big data through sound "is a relatively unexplored area of research, yet provides a unique perspective," Virginia Tech said of the Spatial Audio Data Immersive Experience (SADIE) project.

Previous research in using sound to explore data

Others have explored the subject. Diaz Merced, in a doctoral thesis at the University of Glasgow, proposed using sound to explore space physics data.

John Beckman, founder of Narro, a text-to-audio converter website, alluded to the idea in a personal blog post. "It's hard to miss a discordant note or change in volume, even when attention is elsewhere," Beckman wrote in 2015.

Unrelated to SADIE and Merced, Beckman was asking why more data analysis isn't performed through sound. He points out that sound and visuals are the two main ways people interact with electronics, yet visuals are currently the only way people analyze large data sets.

"It seems like our hearing is primed to pick up minute changes, just as much as our sight," Beckman said at the time.