One year ago, Charlie Giancarlo took the helm of Pure Storage, which in fiscal year 2018 reported its first billion-dollar year. Giancarlo was a managing director and senior advisor at Silver Lake Partners before joining Pure Storage. Prior to that, he held multiple executive positions at Cisco, where he helped steer the company into markets such as Ethernet switching, VoIP, Wi-Fi and telepresence.

Giancarlo talked with Network World's Ann Bednarz about what Pure is doing to keep the storage industry moving forward, and how the experience he gained during Cisco's growth spurt is helping. He described Pure's vision for a data-centric architecture – an approach that combines the simplicity of direct-attached storage with the scalability and reliability of network storage – and how it will lead to the eventual collapse of storage tiers. Giancarlo also talked about the fate of magnetic disk drives (only for cold storage); why NVMe is important (it enables even greater efficiency in flash); and what's distinctive about the company's pay-per-use Evergreen storage services (no rip-and-replace upgrades).

Here is an edited transcript of that conversation.

Enterprise storage has been stuck with the perception that it's boring. Is that changing? Is the storage industry becoming more innovative?

There's always a bottleneck to progress in computation, and I think the bottleneck for the last decade, with the growth of data, has been how to handle all that data. Frankly, I think the technology has been behind. Now we're finally starting to see some real advances in storage. That's what makes it exciting. When something becomes a bottleneck, that also means there's a lot of opportunity.

What stands out to you after your first year at Pure Storage? How did the company perform?

I think our performance speaks for itself. If you look over the last year, we've grown an average of 40% year over year.
We've come out with some great new products that are growing very well. And we continue to lead the market in advancing new technologies. It speaks to the quality of the company overall. Of course, a lot of that was in place before I came on board. It's a bit too early for me to talk about any real accomplishments. But I do think that what I saw here was a company that had great potential, that was transitioning from being a midsize company to a large company, and that needed to change some of the ways in which it did business. I think I've been able to help them start to advance to the next stage. That has to do with the way we work with our partners in the field, the way we scale our sales force and our development organization, and the way we look at new opportunities for the business.

In May, Pure Storage outlined its vision for a data-centric architecture that delivers on the need for agility and performance in enterprise settings. Can you explain the data-centric architecture? What does it involve from a technical standpoint?

I'll go back a little bit, in terms of the way that customers design their environments, and I'll talk about why we now have an opportunity to modify that, and why we should.

If you think about what an ideal situation would be – if you could snap your fingers and make magic happen – you'd have one super powerful processor that could address storage located right next to it, at the speed of light. That would be the ultimate, easy, very straightforward architecture. Now, going back 10 or 15 years, the fastest connection people had was 1 Gigabit Ethernet. They had disks that were maybe 1 terabyte at most, and we had distributed processors.

In order to handle the world's largest computation problems, we still need lots of processors – that hasn't changed.
But other things have changed.

For one, networking speeds are at 100 Gigabit Ethernet and even moving to 400 Gigabit Ethernet. Data has continued to grow explosively, to the point where you can't fit it on just one disk or SSD, so we need to scale that out. But with very high networking speeds and the density we're now able to get with solid-state storage, we're able to make it look as if all the data you want is right next to the processors.

Another thing that has changed is that many years ago, the application stack was very heavy. It was difficult to construct. It was customized to the specific application environment. You had an operating system, you had security software associated with that operating system, you had remote management software associated with that operating system, and then you had an application that was tuned to it. And it was stuck there. And now you had to get access to lots of data. The only way to do that was to spread the data out – what was known as a scale-out architecture for the data.

But today, applications are very lightweight. They're virtualized and increasingly containerized. They can be placed anywhere. But the data itself is heavy. When you have petabytes of data – even in an array such as ours, which can fit a petabyte in about 5 inches of rack space – moving all that data would take a long time. It's far better to move the application to the data than to move the data to the application. Now, with 100 Gigabit Ethernet interfaces, we can do that.

So that's what we mean by data-centric architecture. It's designing your data-processing architecture around the data, rather than designing the data around the application.

The other thing that used to happen years ago, and even happens today, is that data was constantly replicated because every application wanted its own copy of the data. Part of the reason for that was performance.
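Giancarlo's point that data is "heavy" while applications are light is easy to quantify. A minimal back-of-the-envelope sketch (the sizes and link speed below are illustrative assumptions, not figures from the interview) compares moving a petabyte of data with moving a containerized application over the same 100 Gigabit Ethernet link:

```python
# Back-of-the-envelope comparison: moving data vs. moving an application.
# All figures are illustrative assumptions, not numbers from the interview.

def transfer_seconds(size_bytes: float, link_bits_per_sec: float) -> float:
    """Ideal time to push size_bytes over a link running at full line rate."""
    return size_bytes * 8 / link_bits_per_sec

LINK_100GBE = 100e9   # 100 Gigabit Ethernet, in bits per second
PETABYTE = 1e15       # a petabyte of data, in bytes
APP_IMAGE = 1e9       # an assumed ~1 GB container image, in bytes

data_hours = transfer_seconds(PETABYTE, LINK_100GBE) / 3600
app_secs = transfer_seconds(APP_IMAGE, LINK_100GBE)

print(f"Moving 1 PB of data:     ~{data_hours:.1f} hours")    # ~22.2 hours
print(f"Moving a 1 GB app image: ~{app_secs:.2f} seconds")    # ~0.08 seconds
```

Even at an idealized full line rate, the petabyte takes the better part of a day while the application moves in a fraction of a second, which is the arithmetic behind relocating the application to the data rather than the reverse.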
Applications didn't want to share what was limited performance with other applications. Today, with solid state, we can have multiple applications access the data with the full performance they need, with quality-of-service guarantees, so that one application's access doesn't affect the others.

That's another thing we mean by data-centric architecture. It's reducing the number of copies of your data and making it easier to access it from all the applications that need it – which reduces costs and increases performance. It also increases security and compliance, because now you've reduced the number of copies of your data across the enterprise.

A differentiator for Pure is its storage-as-a-service pricing model. Can you talk about the Evergreen Storage Service (ES2)?

Our competitors would view it as just pricing. But it's a lot more than that. We promise our customers that if they're on our Evergreen model, which is a subscription model, we will keep their storage system constantly updated to the latest hardware and software – meaning that they never have to migrate their data off the system. Our competitors can't do that because they can't do what is called a nondisruptive upgrade. They can't replace the hardware and the software without downtime. When a competitor goes to a new product model and obsoletes the old one, they force the customer to migrate the data. They can't upgrade the old model.

So this gives assurance that a customer buying now won't need to swap out an array in a few years?

Exactly. We do it all in place. If they're paying the subscription, they don't pay any more money; we upgrade the system for them for as long as the subscription lasts. Another benefit is that we don't charge them again for the same storage. Let me give you an example. Let's say they buy a system with 50 terabytes of storage in it.
A few years later, if they want to upgrade that to 250 terabytes, they only pay for 200 terabytes. They don't need to pay for the first 50 terabytes all over again.

Is that a common scenario? Are customers typically making the shift to flash storage in increments? What's a typical adoption path?

We do see that. With our top 25 customers, we see between 10x and 12x growth over the four years from their first purchase. Across all of our customers, we see on the order of 4x to 5x over the first two or three years, on average.

Are we going to see disk drives disappear?

We'll still see disk drives, but they'll start to migrate to cold storage.

We believe that tier one and tier two will collapse. The reason goes back to what I mentioned before: We can have multiple applications accessing the same storage at the same time. So if you already have so-called tier one storage, but now you can allow the apps that typically go to tier two to access the data on the tier one storage without affecting the tier one applications, you can collapse the tiers.

We believe that tier one and tier two will both go to flash as prices drop. Cold storage, we believe, will go to the cloud, and even in the cloud it will be magnetic – otherwise known as cheap and deep.

What's the big deal about NVMe?

Let's shift to NVMe (non-volatile memory express), which is shaking up the enterprise storage industry. Why should we care about NVMe? What are the benefits?

We care because, believe it or not, we're still dealing with old protocols that were designed for magnetic storage. Before NVMe, the protocols to access storage, even solid-state storage, were designed for magnetic storage. Whether it was SCSI or SATA or SAS or iSCSI – those were all fundamentally serial interfaces designed for relatively slow storage.

NVM stands for non-volatile memory – meaning, basically, solid-state memory.
NVMe is a more modern protocol that recognizes both the speed of the networks now available to us and the fundamentally parallel nature of solid state.

NVMe is a more parallel way to access solid-state storage. That's very meaningful, especially to Pure, frankly, even more than to our competitors, for the following reason: Only Pure uses raw flash memory. The majority of our products use what we call DirectFlash. We speak directly to the flash, across our entire product line. All of our competitors use so-called SSDs, or solid-state disks. Now, 'solid-state disk' is a bit of an anachronistic title, because there's no disk. They're solid state. But an SSD makes flash memory appear to be a magnetic disk. That's how they're designed. Competitors can claim to be all-flash, but all they really did was remove a magnetic disk and put in an SSD. So it suffers from all the limitations that the magnetic protocols imposed. They're relatively slow, they don't optimize their use of flash, and they're serial in nature. They don't provide a parallel interface to the flash.

We put NVMe into place early last year, so we're well over a year now using NVMe for access to our flash. It's very meaningful for us. It means we can be even more efficient with flash, and even faster in terms of both write speeds and read speeds for our customers. And it allows us to use any type of flash memory that is economical for us to use, consumer and/or enterprise grade.

What about NVMe over Fabrics (NVMe-oF)?

NVMe over Fabrics is a very high-speed way of getting access to your storage over traditional interfaces, like Ethernet or Fibre Channel.
That will be important for us, because it enables our shared accelerated storage model.

[Consider the three primary ways to access storage: direct-attached storage (DAS), SAN and NAS.] With NVMe, we can now make all three of those look the same. We call it shared accelerated storage. We can remove DAS, so instead of servers having to have their own disks on board, they can have an NVMe interface to an array and get the same if not better performance than they got before.

Shared accelerated storage can replace a SAN with NVMe and get better performance. And with network-attached storage, it's the same thing: NVMe will make it faster than traditional protocols.

Lastly, NVMe over Fabrics does create that parallel interface, even though it's over Ethernet. It allows multiple disk accesses, for read and write, to occur at the same time. That's critical for things like AI, analytics and other large, complex, multi-threaded workloads.

Are enterprises coming to Pure to solve specific workload challenges, or are they looking for broader, more strategic storage overhauls?

There's a spectrum of managers. There are some that are very much caught up in their existing environments and frameworks, and for them it's really just about improving the way they do things today. There are others, though, that are struggling with the demands placed on them – moving to new workloads, for example. They're struggling with scaling application environments with DAS or with SAN, or migrating to things like Amazon S3 environments. When we came out with the data-centric architecture and the details behind it – including this idea of removing DAS altogether and migrating to a more centralized, data-centric approach – they said that's exactly what they had been looking for, and they didn't realize they could do it.

Are there particular workloads that are driving adoption?
The areas of AI and analytics are big new areas for us. Customers have realized that the way they've managed storage before won't work for them. It's frankly too slow, and often just too big. It takes so many racks of equipment to deliver the performance, or hold the amount of data, that it's unwieldy and impractical. One of the reasons Nvidia partnered with us is that, based on their experience in the field, there was no other data storage system that could deliver data fast enough to keep their graphics processors, their GPUs, fully occupied.

Is it an advantage, being more of a startup and not having a legacy platform to work around?

It's that simple. We started decades after the other guys. We started Day 1 focused on flash. We only do flash. We don't have any core elements in our software that are unoptimized for flash. This is a very good point that's not mentioned often enough: Our software is fundamentally parallel. We use parallel streams in the software to access multiple parts of storage at the same time. Nearly all of our competitors were designed around serial software streams.

The future of storage

What are you most excited about as you enter your second year at the company?

A lot of things. I'm very excited about the company itself – how focused the company is on delivering great products and supporting customers. It's a very can-do attitude. Very enthusiastic, excited about growth. You can't manufacture great culture. I feel very fortunate to come into a company that has a strong customer-centric, technology-centric culture.

I'm also really focused on delivering against the data-centric architecture that we identified at our Accelerate event and continuing to build out the future vision of the company.
I think we have a lot of opportunity as we fill out our menu and continue to allow our customers to do more with their data.

In the big picture, what can we expect in terms of product development? Bigger, faster?

Certainly faster, cheaper, greater density. That's the norm. But increasingly we are also embracing the cloud and allowing our customers to migrate more of their data into the cloud, as well as allowing our customers to collapse the tiers. That's a very big change for our customers.

What experience that you gained at Cisco will influence your time at Pure?

It's new but it's also old. When Cisco bought Kalpana, which is how I ended up there, Cisco had just done its first $250 million quarter. And when I came to Pure, it was just crossing its first quarter-billion-dollar quarter.

When I left Cisco, it was at $45 billion in revenue. As part of the executive team, I went through a lot of growth at the company and a lot of changes during that period of time. I've seen what it takes to get through that growth. It's not as if you want to create the infrastructure for a $45 billion company when you're at $1 billion; you need to know all of the transitions you'll go through. Having been through it once before, I'm familiar with some of the pitfalls. We made a lot of rookie mistakes at Cisco going through it. I hope I can reduce the number of rookie mistakes that we make here. At the same time, we also learned what it takes to develop an entire industry – starting with just routers and then going from routers to switches to IP telephony and Wi-Fi and all of that, and scaling out the company in terms of capabilities. As I look at Pure, the opportunities for us to go into new areas closely related, if not directly related, to what we do are very exciting for me and for the company.