Last week I met with several executives of Intel (INTC) at the company’s headquarters in Santa Clara, Calif.
Among them was Rob Crooke, who runs the memory-chip technology business for Intel, which includes NAND flash memory chips developed in partnership with Micron Technology (MU). The business also includes a newer, more novel type of device, “3-D Xpoint,” first introduced in July of 2015 via the Micron partnership, and now sold under the “Optane” brand by Intel.
“We’re in a fantastic market, I can’t complain!” said Crooke with a smile, as I observed that the latest results, in Intel’s Q1 report on April 26, showed the memory business was on fire, with sales up 20%, to $1 billion.
Intel’s business of selling solid-state drives, or “SSDs,” which are displacing traditional magnetic spinning disk drives, has grown rapidly from $12 million in 2008, when Intel first introduced what it called a “reinvention” of SSDs, to $3.5 billion last year.
Revenue from Optane “is not material yet,” says Crooke. But he’s confident the same things that made SSD sales soar will boost Optane in the years to come.
"What happened to make [SSDs] take off was virtualization. When people started putting a bunch VMs [virtual machines, software images of a computer that can be clustered on a single physical server computer] on a server, it set off the I/O blender of random data.”
The I/O, or input-output, “blender,” as he calls it, refers to the “non-locality of data,” which just means that with virtual servers, the data the processor needs are rarely close at hand. You can’t lay out data on a hard drive in anticipation of the next piece of data the processor will ask for, the practice known as “striping” of hard drives, because the data are often somewhere else, in an unpredictable way.
That makes it hard to keep the server microprocessor busy: storage becomes the bottleneck.
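To put rough numbers on that effect, here is a back-of-the-envelope sketch; the seek time and transfer rate are illustrative assumptions about a typical spinning drive, not figures from Intel.

```python
# Back-of-the-envelope sketch of the "I/O blender": how random 4 KB reads collapse
# the effective throughput of a spinning hard drive. The seek time and transfer
# rate are illustrative assumptions, not Intel figures.

SEEK_TIME_S = 0.008          # ~8 ms average seek plus rotational delay (assumed)
SEQ_THROUGHPUT_BPS = 150e6   # ~150 MB/s sustained sequential transfer (assumed)
READ_SIZE_BYTES = 4096       # a typical small random read issued by a VM

# Sequential workload: the head barely moves, so throughput stays near the media rate.
sequential_mb_per_s = SEQ_THROUGHPUT_BPS / 1e6

# Random workload: every small read pays a full seek before transferring 4 KB.
time_per_random_read_s = SEEK_TIME_S + READ_SIZE_BYTES / SEQ_THROUGHPUT_BPS
random_mb_per_s = (READ_SIZE_BYTES / time_per_random_read_s) / 1e6

print(f"Sequential reads:  ~{sequential_mb_per_s:.0f} MB/s")
print(f"Random 4 KB reads: ~{random_mb_per_s:.1f} MB/s")  # on the order of 0.5 MB/s
```

Once many VMs interleave their requests, nearly every read looks random to the disk.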
To deal with that non-locality, cloud-computing giants such as Google (GOOGL) responded by buying more and more SSDs.
Now, “Optane has a chance to take off the way SSDs did over 10 years,” says Crooke.
"Data is exploding [in the data center], it's things like VSAN [virtualized storage-area network, collections of pools of data] that are getting bigger and bigger."
"And we are in a unique position to drive this because we control both ends of the wire,” the microprocessor and Optane.
Optane is still a mystery as far as how exactly it is built: unlike NAND and other memory devices, it doesn’t store bits in transistors. Crooke declined to offer me the secret formula. The basics of the thing are that it is 1,000 times faster than NAND, more reliable, and able to store much more data in the same amount of space.
That means a terabyte of Optane chips can serve as a decent “cache,” a store of the most frequently used data, sitting alongside, say, 30 terabytes of NAND flash in a server computer.
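As a rough illustration of that caching arithmetic (the latencies and hit rates below are assumptions for the sake of the example, not figures Crooke cited):

```python
# Rough illustration of a 1 TB Optane cache sitting in front of 30 TB of NAND flash.
# The latencies and hit rates are assumptions for illustration, not Intel figures.

OPTANE_READ_LATENCY_US = 10.0   # assumed read latency for the Optane tier
NAND_READ_LATENCY_US = 100.0    # assumed read latency for the NAND SSD tier

def average_read_latency_us(hit_rate: float) -> float:
    """Average latency when a fraction `hit_rate` of reads hit the Optane cache."""
    return hit_rate * OPTANE_READ_LATENCY_US + (1.0 - hit_rate) * NAND_READ_LATENCY_US

# If the most frequently used data fits in the cache, most reads never touch NAND.
for hit_rate in (0.5, 0.9, 0.99):
    print(f"cache hit rate {hit_rate:.0%}: ~{average_read_latency_us(hit_rate):.0f} µs per read")
```

The better the cache captures the hot data, the closer the whole 30-terabyte pool behaves to the faster, smaller Optane tier.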
Just as important, says Crooke, the increase in the density of DRAM memory chips, which are faster than either Optane or NAND, is slowing down with each new generation. That positions Optane in a sweet spot, if you will, between DRAM and NAND: the right mix of fast and dense.
“Optane is living between DRAM and NAND; the economics are compelling,” says Crooke.
But that has meant Optane is both a dessert topping and a floor wax. It’s not always clear what its ultimate mission should be.
The future, a thrilling one to imagine, is Optane replacing SSDs as the main storage for PCs, suggests Crooke. You might start with a computer that has an SSD for most of its storage and a small amount of Optane, say, 32 gigabytes, as a cache. But NAND, over time, is “too slow for things like real-time analytics,” which means the value of SSDs starts to break down under the speed demands of the most advanced applications.
And so, suggests Crooke, the Optane parts could start to take over from NAND-based SSDs as the main storage in a computer.
“So maybe all of your NAND goes to the cloud,” as bulk storage, suggests Crooke, while Optane becomes mass storage for client devices. DRAM would still exist in that scenario, but “it becomes less important,” he says, because you no longer drop dramatically in speed when a “cache miss” in DRAM forces you down to storage; Optane is fast enough to absorb it.
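A rough sketch of that last point, using assumed latencies rather than anything Intel has published: the penalty for falling out of DRAM shrinks by roughly an order of magnitude when Optane, rather than a NAND SSD, is the tier behind it.

```python
# Sketch of the "cache miss" point: if the storage tier behind DRAM is Optane rather
# than a NAND SSD, the penalty for missing in DRAM shrinks dramatically.
# All latencies are illustrative assumptions, not Intel specifications.

DRAM_LATENCY_US = 0.1      # ~100 nanoseconds, assumed
OPTANE_LATENCY_US = 10.0   # assumed
NAND_LATENCY_US = 100.0    # assumed

def effective_latency_us(dram_hit_rate: float, storage_latency_us: float) -> float:
    """Average access latency when misses in DRAM fall through to a storage tier."""
    return dram_hit_rate * DRAM_LATENCY_US + (1.0 - dram_hit_rate) * storage_latency_us

HIT_RATE = 0.95  # assume 95% of accesses are served from DRAM
print(f"DRAM backed by NAND:   ~{effective_latency_us(HIT_RATE, NAND_LATENCY_US):.2f} µs")
print(f"DRAM backed by Optane: ~{effective_latency_us(HIT_RATE, OPTANE_LATENCY_US):.2f} µs")
```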
The interface between the processor and storage is a factor that can propel this shift. What’s known as “NVMe,” the protocol that connects the processor to storage over the PCIe bus, is replacing “Serial ATA” as the standard way of moving bits back and forth.
NVMe is "very similar to 5G [wireless] in terms of bandwidth and latency,” observes Crooke. And that means can pair well with Optane, a kind of grand alignment in where wired and wireless networking is going.
Intel shares today are up 49 cents, or 0.9%, at $54.41.