IT professionals have too many data sources to count, and they spend a huge amount of time wrestling that data into usable condition, a survey from Ivanti finds.

A new survey has found that a growing number of IT professionals have too many data sources to even count, and they are spending more and more time just wrestling that data into usable condition.

Ivanti, an IT asset management firm, surveyed 400 IT professionals on their data situation and found that IT faces numerous challenges when it comes to silos, data, and implementation. The key takeaway: data overload is starting to overwhelm IT managers, and data lakes are turning into data oceans.

Among the findings from Ivanti's survey:

- Fifteen percent of IT professionals say they have too many data sources to count, and 37% said they have about 11 to 25 different sources for data.
- More than half of IT professionals (51%) report they have to work with their data for days, weeks, or more before it's actionable. Only 10% of respondents said the data they receive is actionable within minutes.
- One in three respondents said they have the resources to act on their data, but more than half (52%) said they only sometimes have the resources.

"It's clear from the results of this survey that IT professionals are in need of a more unified approach when working across organizational departments and resulting silos," said Duane Newman, vice president of product management at Ivanti, in a statement.

The problem with siloed data

The survey found that siloed data presents a number of problems and challenges. Three key priorities suffer the most: automation (46%), user productivity and troubleshooting (42%), and customer experience (41%). The survey also found that onboarding/offboarding suffers the least (20%) from silos, so apparently HR and IT are getting things right.

In terms of what they want from real-time insight, about 70% of IT professionals said security status was their top priority over other issues. Respondents were least interested in real-time insights around warranty data.

Data lake method a recipe for disaster

I've been immersed in this subject for other publications for some time now. Too many companies are hoovering up data for the sake of collecting it, with little clue as to what they will do with it later.

One thing you have to say about data warehouses: schema on write at least forces you to think about what you are collecting and how you might use it, because you have to store the data in a usable form. The newer data lake method is schema on read, meaning you filter and clean the data only when you read it into an application, and that's a recipe for disaster (the sketch below illustrates the difference). If you are looking at data collected a month or a year ago, do you even know what it all is? Now you have to apply a schema to data you may not even remember collecting.

Too many people think more data is good when it isn't. You just drown in it. When you reach the point of having too many data sources to count, you've gone too far, and you are not going to get insight. You're going to get overwhelmed. Collect data you know you can use. Otherwise you are wasting petabytes of disk space.
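To make the schema-on-write versus schema-on-read contrast concrete, here is a minimal Python sketch. It is illustrative only: the DeviceRecord fields and function names are hypothetical, not drawn from Ivanti or from any particular warehouse or lake product.

```python
import json
from dataclasses import dataclass

# Schema on write (warehouse style): structure is enforced at ingest,
# so anything that lands in storage is already validated and queryable.
@dataclass
class DeviceRecord:          # hypothetical schema, for illustration only
    hostname: str
    os_version: str
    last_seen: str           # ISO 8601 timestamp

def ingest_schema_on_write(raw: str, store: list) -> None:
    fields = json.loads(raw)
    # Fails loudly *now* if a field is missing or misnamed, forcing you
    # to decide up front what you are collecting and how you'll use it.
    store.append(DeviceRecord(
        hostname=fields["hostname"],
        os_version=fields["os_version"],
        last_seen=fields["last_seen"],
    ))

# Schema on read (lake style): raw blobs go in untouched; interpretation
# is deferred until query time, possibly months or years later.
def ingest_schema_on_read(raw: str, lake: list) -> None:
    lake.append(raw)  # no validation, no structure, no questions asked

def query_schema_on_read(lake: list) -> list:
    results = []
    for raw in lake:
        record = json.loads(raw)
        # Only at query time do you discover that some records say
        # "host" instead of "hostname" -- and nobody remembers why.
        if "hostname" in record:
            results.append(record)
    return results
```

The warehouse version rejects a malformed record the moment it arrives; the lake version happily stores it and leaves the cleanup, and the guesswork, to whoever queries it later.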