Intel, Micron boast memory 1,000 times faster than flash

Flash memory breakthrough promises significantly faster data analysis.

[Image: Dense cross-point architecture in the new memory. Credit: Intel]

Chip-makers Intel and Micron say they've achieved a breakthrough in non-volatile memory design. They reckon their new class of memory is 1,000 times faster than today's NAND flash.

Speedier memory should reduce delays in data reading, which in turn will speed data analysis.

Faster is better

We've been getting conditioned to whooping at news of faster Internet through better pipes, as well as faster phones, PCs, and tablets through faster processors.

We like it. Ever-cheaper memory over the years means we can get more of it, and smaller chips mean we can fit more of them in our devices.

And we have indeed gotten used to these regular speed and capacity upgrades for our digital existence.

But, interestingly, non-volatile memory design hasn't changed much in comparison. NAND flash, introduced in 1989, is still the dominant technology today. Non-volatile memory is memory that isn't erased when the device is turned off.

But that's about to change with Intel and Micron's 3D XPoint memory tech, the companies say. The new memory promises thousand-fold speed gains.

Memory

Why do we care about this speed gain, you might ask? Surely we've been doing just fine with our ever-faster Internet, better chips, and smaller devices?

Well, the answer is that we've become data hogs. We need access to stored data faster so we can analyze the ever-increasing amounts of it in a reasonable amount of time.

Retailers, for example, could use faster memory "to more quickly identify fraud detection patterns in financial transactions; and healthcare researchers could process and analyze larger data sets in real time," Intel said in its announcement of the new tech.

In the healthcare case, it would accelerate complex tasks such as genetic analysis and disease tracking.

PCs could also benefit, Intel says. Immersive gaming experiences, for example, could be enhanced with faster access to data.

Big Data

Essentially, we've been creating more and more data, and the more we create, the more useless it all becomes because of the time it takes to interrogate it. That's where faster memory becomes important.

Reducing the lag time "between the processor and data" will allow for "much faster analysis," Rob Crooke of Intel's Non-Volatile Memory Solutions Group said on its website.

Faster analysis could turn large amounts of data into valuable information in a split second. Machine learning, for example, stands to benefit.

And it's all because the digital world is becoming ever-larger, Intel points out. Citing IDC research, the company says we're heading towards 44 zettabytes of data by 2020, up from 4.4 zettabytes in 2013. A zettabyte is a trillion gigabytes.

Design

Intel and its partner Micron are achieving the gains through denser memory.

They say that they've "invented unique material compounds and an architecture that's 10-times denser" than conventional DRAM.

The memory is in production now, with wafers already coming off the line, Intel says.

How are they doing it? It's a three-dimensional checkerboard design where "memory cells sit at the intersection of word lines and bit lines, allowing the cells to be addressed individually."

"As a result, data can be written and read in small sizes, leading to faster and more efficient read/write processes," the company says.

In other words, faster access to larger data sets.
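To make the contrast concrete, here is a toy Python sketch of why individually addressable cells matter. All names and sizes here are invented for illustration, not Intel's design: a NAND-style device must erase and reprogram a whole block to change one byte, while a cross-point-style array can write a single cell at a word-line/bit-line intersection.

```python
# Toy model contrasting block-managed flash with a cross-point array.
# Purely illustrative: class names and sizes are hypothetical.

class NandFlash:
    """NAND-style storage: rewriting one byte forces a block erase + rewrite."""
    def __init__(self, blocks=4, block_size=8):
        self.blocks = [[0] * block_size for _ in range(blocks)]
        self.ops = 0  # count of low-level cell operations performed

    def write_byte(self, block, offset, value):
        # Read-modify-write: copy the block, erase it, reprogram every cell.
        new_block = list(self.blocks[block])
        new_block[offset] = value
        self.ops += 2 * len(new_block)  # erase + reprogram all cells in block
        self.blocks[block] = new_block

class CrossPointArray:
    """Cross-point-style storage: each cell sits at a word-line/bit-line
    intersection and can be addressed individually."""
    def __init__(self, word_lines=4, bit_lines=8):
        self.cells = [[0] * bit_lines for _ in range(word_lines)]
        self.ops = 0

    def write_cell(self, word_line, bit_line, value):
        self.cells[word_line][bit_line] = value
        self.ops += 1  # exactly one cell touched, no block erase

nand = NandFlash()
xpoint = CrossPointArray()
nand.write_byte(0, 3, 0xFF)    # touches all 8 cells in the block
xpoint.write_cell(0, 3, 0xFF)  # touches exactly 1 cell
print(nand.ops, xpoint.ops)    # prints: 16 1
```

The numbers are made up, but the shape of the difference is the point: per-cell addressing removes the erase-before-write overhead that block-managed flash carries on small updates.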

This article is published as part of the IDG Contributor Network.
