HN

What makes Intel Optane stand out (2023) (zuthof.nl)
4h ago by walterbell 116 points 89 comments
hbogert 3h ago
It stands out because it didn't sell. Which is weird, because there were some pretty big pros to using them. The latency for updating 1 byte was crazy good. Some databases, or journals for something like ZFS, really benefited from this.
amluto 2h ago
Intel did a spectacularly poor job with the ecosystem around the memory cells. They made two plays, and both were flops.

1. “Optane” in DIMM form factor. This targeted (I think) two markets. First, use as slower but cheaper and higher-density volatile RAM. There was actual demand — various caching workloads, for example, wanted hundreds of GB or even multiple TB in one server, and Optane was a route to get there. But the machines and DIMMs never really became available. Then there was the idea of using Optane DIMMs as persistent storage. This was always tricky because the DDR interface wasn’t meant for it, Intel also seemed to have a lot of legacy tech in the way (their caching system and memory controller), and, for whatever reason, they seemed barely capable of improving their own technology. They had multiple serious false starts in the space (a power-supply-early-warning scheme using NMI or MCE to idle the system, a horrible platform-specific register to poke to ask the memory controller to kindly flush itself, and the stillborn PCOMMIT instruction).

2. Very nice NVMe devices. I think this was more of a failure of marketing. If they had marketed a line of SSDs that, coupled with an appropriate filesystem, could give 99th-percentile fsync latency of 5 microseconds, I bet people would have paid. But they did nothing of the sort — instead they just threw around the term “Optane” inconsistently.
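Tail fsync latency like that is straightforward to measure. A minimal sketch of such a micro-benchmark (names here are illustrative; absolute numbers depend entirely on the drive, the filesystem, and Python's own overhead):

```python
import os
import tempfile
import time

def fsync_latency_us(n_ops=200, size=4096):
    """Time n_ops small overwrite+fsync pairs; return sorted latencies in microseconds."""
    fd, path = tempfile.mkstemp()
    buf = os.urandom(size)
    samples = []
    try:
        for _ in range(n_ops):
            t0 = time.perf_counter()
            os.pwrite(fd, buf, 0)
            os.fsync(fd)  # the durability barrier whose tail latency matters to databases
            samples.append((time.perf_counter() - t0) * 1e6)
    finally:
        os.close(fd)
        os.unlink(path)
    return sorted(samples)

lat = fsync_latency_us()
p99_us = lat[int(len(lat) * 0.99) - 1]  # 99th-percentile fsync latency
```

On Optane NVMe this sort of loop was famously flat; on consumer flash the tail can be orders of magnitude above the median.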

These days one could build a PCM-backed CXL-connected memory mapped drive, and the performance might be awesome. Heck, I bet it wouldn’t be too hard to get a GPU to stream weights directly off such a device at NVLink-like speeds. Maybe Intel should try it.

orion138 2h ago
One of the many problems was trying to limit the use of Optane to Intel devices. They should have manufactured and sold Optane memory and let other players build on top of it at a low level.
amluto 2h ago
> Optane memory

Which “Optane memory”? The NVMe product always worked on non-Intel. The NVDIMM products that I played with only ever worked on a very small set of rather specialized Intel platforms. I bet AMD could have supported them about as easily as Intel, and Intel barely ever managed to support them.

wtallis 2h ago
The consumer "Optane memory" products were a combination of NVMe and Intel's proprietary caching software, the latter of which was locked to Intel's platforms. They also did two generations of hybrid Optane+QLC drives that only worked on certain Intel platforms, because they ran a PCIe x2+x2 pair of links over a slot normally used for a single x2 or x4 link.

Yes, the pure-Optane consumer "Optane memory" products were, at a hardware level, just small, fast NVMe drives that could be used anywhere, but they were never marketed that way.

myself248 1h ago
Exactly. I happen to have all AMD sitting around here, and buying my first Optane devices was a gamble, because I had no idea if they'd work. The only reason I ever did is that they got cheap at one point and I could afford the gamble.

That uncertainty couldn't have done the market any favors.

amluto 1h ago
I feel like this is proving my point. You can’t read “Optane” and have any real idea of what you’re buying.

Also… were those weird hybrid SSDs even implemented in actual hardware, or were they part of the giant series of massive kludges in the “Rapid Storage” family, where some secret sauce in the PCIe host lied to the OS about what was actually connected so that an Intel driver could replace the OS’s native storage driver (NVMe, AHCI, or perhaps something worse, depending on the generation) and implement all the actual logic in software?

It didn’t help Intel that some major storage companies started selling very, very nice flash SSDs in the meantime.

wtallis 1h ago
> were those weird hybrid SSDs even implemented by actual hardware, or were they part of the giant series of massive kludges

They were definitely part of the series of massive kludges. But aside from the Intel platforms they were marketed for, I never found a PCIe host that could see both of the NVMe devices on the drive. Some hosts would bring up the x2 link to the Optane half of the drive, some hosts would bring up the x2 link to the QLC half of the drive, but I couldn't find any way to get both links active even when the drive was connected downstream of a PCIe switch that definitely had hardware support for bifurcation down to x2 links. I suspect that with appropriate firmware hacking on the host side, it may have been possible to get those drives fully operational on a non-Intel host.

ksec 3h ago
>Which is weird....

It isn't weird at all. I would be surprised if it had ever succeeded in the first place.

Cost was way too high. Intel didn't share the tech with anyone other than Micron. Micron wasn't committed to it either, and since unused capacity at the fab was paid for by Intel regardless, they didn't care. There was no long-term solution or strategy to bring cost down; neither Intel nor Micron had a vision for this. No one wanted another Intel-only tech lock-in. And despite the high price, it barely made any profit per unit compared to NAND and DRAM, which at the time were making historically high profits. Once the NAND and DRAM cycle turned down again, Optane's cost/performance wasn't as attractive. Samsung even made a form of SLC NAND (Z-NAND) that performed similarly to Optane but cheaper, and even they ended up stopping its development due to lack of interest.

deepsquirrelnet 2h ago
I worked at Micron in the SSD division when Optane (originally called 3D XPoint) was being made. In my mind, there was never a real serious push to productize it. But it's not clear to me whether that was due to unattractive terms of the joint venture or a lack of clear product fit.

There was certainly a time when it seemed they were shopping for engineers' opinions on what to do with it, but I think they quickly determined it would be a much smaller market than SSDs anyway and didn't end up pushing on it too hard. I could be wrong, though; it's a big company, and my corner was manufacturing, not product development.

rjsw 1m ago
A friend was working at Micron on a rackmount network server with a lot of flash memory, I didn't ask at the time what kind of flash it used. The project was cancelled when nearly finished.
chrneu 2h ago
I worked at Intel for a while and might be able to explain this.

There were/are often projects that come down from management that nobody thinks are worth pursuing. When I say nobody, it might not be just the engineers but even, say, 1 or 2 people in management who just do a shit rollout. There are a lot of layers at Intel, and if even one layer in the Intel sandwich drags its feet, it can kill an entire project. I saw it happen a few times in my time there. That one specific node that Intel dropped the ball on, as an example, kind of came back to 2-3 people in one specific department.

Optane was a minute before I got there, but having been excited about it at the time and somewhat following it, that's the vibe I get from Optane. It had a lot of potential but someone screwed it up and it killed the momentum.

osnium123 1h ago
Are you referring to the Intel 10nm struggles in your reference to 2-3 people?
empiricus 1h ago
This is actually insane. Do you mean 2-4 people in one department basically killed Intel? Roll to disbelief.
LASR 3m ago
Yes, this is pretty common in large, enterprise-ey tech companies that are successful. There is usually a small group of vocal members who have a strong conviction and the drive to make a vision a reality. This is contrary to the popular belief that large companies design by committee.

Of course, it works exceptionally well when the instinct turns out to be right. But it can end companies when it isn't.

wtallis 1h ago
It's somewhat plausible that a small group of people in one department were responsible for the bad bets that made their 10nm process a failure. But it was very much a group effort for Intel to escalate that problem into the prolonged disaster it became. Management should have stopped believing the undeliverable promises coming out of their fab side after a year or two, and should have started much sooner to design chips targeting fab processes that actually worked.
jauntywundrkind 3h ago
Cost was fantastically cheap, if you take into account that Optane is going to live >>10x longer than an SSD.

For a lot of bulk storage, yes, you don't have frequently changing data. But for databases or caches under heavy load, Optane was not only far faster but, looking at life-cycle costs, way, way cheaper.

wtallis 2h ago
Optane was in the market during a time when the mainstream trend in the SSD industry was all about sacrificing endurance to get higher capacity. It's been several years, and I'm not seeing a lot of regrets from folks who moved to TLC and QLC NAND, and those products are more popular than ever.

The niche that could actually make use of Optane's endurance was small and shrinking, and Intel had no roadmap to significantly improve Optane's $/GB which was unquestionably the technology's biggest weakness.

mort96 21m ago
I never understood what they were meant to do. Intel seemed to picture some future where RAM is persistent, but Optane was never close to fast enough to replace RAM, and the option to reboot to fix whatever weird state your system has gotten itself into is a feature of computers, not a problem to work around.
bombcar 3h ago
It feels like everyone figured out what to do with them and how just about when they stopped making them.
timschmidt 3h ago
Same for the Larrabee / Knights architecture. It would sure be fun to play around with a 500-core Knights CPU with a couple TB of Optane for LLM inference.

Intel's got an amazing record of axing projects as soon as they've done the hard work of building an ecosystem.

zozbot234 3h ago
> 500 core

The newest fully E-core based Xeon CPUs have reached that figure by now, at least in dual-socket configs.

timschmidt 3h ago
Yup. And high-end GPU compute now has on-package HBM like Knights had a decade ago, and those new Intel CPUs are finally shipping with AVX reliably again. We lost a decade for workloads that would benefit from both.
thesz 1h ago
In "databases and journals" you rarely update just one byte; you do a transaction that updates data, several indexes, and metadata. All of that needs to be atomic.

A power failure can happen in between any of those "1-byte updates with crazy latencies." However small the latency is, a power failure is still faster. Usually there is a write-ahead or some other log that alleviates the problem; this log is usually written in streaming fashion.

What is good, though, is that the "blast radius" [1] of a failure is smaller than usual: a failed one-byte write rarely corrupts more than one byte or cache line. SQLite has to deal with possible corruptions 512 bytes (or even more) long on most disks; with Optane that is not necessarily so. So, less data to copy, scan, etc.

[1] https://sqlite.org/psow.html
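The write-ahead-log pattern described above can be sketched in a few lines. This is a toy framing (length + CRC per record, with fsync as the commit point), not any particular database's actual format:

```python
import os
import struct
import zlib

def wal_append(fd, payload: bytes):
    """Append one length+CRC framed record; the fsync is the commit point."""
    rec = struct.pack("<II", len(payload), zlib.crc32(payload)) + payload
    os.write(fd, rec)
    os.fsync(fd)  # after this returns, the record is expected to survive power loss

def wal_replay(path):
    """Read records back, stopping at the first torn/corrupt tail record."""
    with open(path, "rb") as f:
        data = f.read()
    out, off = [], 0
    while off + 8 <= len(data):
        length, crc = struct.unpack_from("<II", data, off)
        body = data[off + 8 : off + 8 + length]
        if len(body) != length or zlib.crc32(body) != crc:
            break  # torn write: small "blast radius", recovery just drops the tail
        out.append(body)
        off += 8 + length
    return out

fd = os.open("wal.log", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
wal_append(fd, b"update row 1")
wal_append(fd, b"update index")
os.close(fd)
records = wal_replay("wal.log")
```

The CRC framing is exactly the machinery that shrinks with a smaller blast radius: the less a torn write can corrupt, the less recovery has to discard.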

zozbot234 3h ago
Optane didn't sell because they focused on their weird persistent DIMM sticks, which are a nightmare for the enterprise, where for many ordinary purposes you want ephemeral data that disappears as soon as you cut power. They should have focused on making ordinary storage and solving the interconnect bandwidth and latency problems differently, such as with more up-to-date PCIe standards.
hrmtst93837 38m ago
PCIe was a bottleneck in consumer boxes, but that wasn't the whole problem. Optane's low latency and write endurance looked great on paper, yet once you put it behind SSD controllers and file systems built around NAND assumptions, a lot of the upside got shaved off before users ever saw it.

"Just make it a faster SSD" was never a business. The DIMMs were weird, sure, but the bigger issue was that Optane made the most sense when software treated storage and memory as one tier, and almost nobody was going to rewrite kernels, DBs, and apps for a product that cost more than flash and solved pain most buyers barely felt.

jauntywundrkind 3h ago
I don't think that would be my main complaint. Sticking Optane in a DIMM was just awkward as hell. You now have different bits of memory with very different characteristics, and you lose a ton of bandwidth.

If CXL was around at the time it would have been such a nice fit, allowing for much lower latency access.

It also seems like, in spite of the bad fit, there were enough regular Optane drives, and they were indeed pretty incredible. Good endurance, reasonable price (and cheap as dirt if you consider the endurance/life-cycle cost!), some just fantastic performance figures. My conclusion is that, alas, there just aren't many people in the world who are serious about storage performance.

tayo42 2h ago
Can Linux tell that different DIMMs have different characteristics? Or does it see it all as one big memory space still?
cogman10 2h ago
IMO, the reason they didn't sell is that the ideal usage for them was pairing them with slow spinning disks. The issue Optane had is that SSD capacity grew dramatically while the price plummeted. The difference between Optane and SSDs was too small, especially once the M.2 standard proliferated and SSDs took advantage of PCIe performance.

I believe Optane retained a performance advantage (and I think even today it's still faster than the best SSDs) but SSDs remain good enough and fast enough while being a lot cheaper.

The ideal usage of Optane was as a ZIL in ZFS.
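For reference, attaching a small Optane device as a ZFS separate log device (the SLOG, which hosts the ZIL) is a one-liner. Pool and device names here are hypothetical placeholders:

```
# Add a dedicated log device so synchronous writes land on Optane before
# being flushed to the main pool. "tank" and the device path are placeholders.
zpool add tank log /dev/nvme0n1

# Mirroring the SLOG protects in-flight sync writes against device loss:
# zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
```

Only synchronous write traffic benefits, which is exactly the database/NFS-style workload people bought Optane for.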

zozbot234 2h ago
That may have been the ideal usage back in the day, but the ideal usage now is just setting up swap. Write-heavy workloads are where Optane is king, and thrashing to swap is the prototypical example of something so write-heavy that it's a terrible fit for NAND. Optane might not have been "as fast as DRAM," but it was plenty close enough to be fit for purpose.
mort96 17m ago
That would be fine if I could put it in an M.2 slot. But all my computers already have RAM in their RAM slots, and even if I had a spare RAM slot, I don't know that I'd trust the software stack to treat one RAM slot as a drive...

And their whole deal was making RAM persistent anyway, which isn't exactly what I want.

zozbot234 13m ago
Optane M.2-format hardware exists.
exmadscientist 2h ago
> The ideal usage of optane was as a ZIL in ZFS.

It was also the best boot drive money could buy. Still is, I think, though other comments in the thread ask how it compares against today's best, which I'd also love to see.

gozzoo 2h ago
This concept was very popular back in the day when computers used to boot from HDDs, but now it doesn't make much sense. I wouldn't notice if my laptop boots in 5 seconds instead of 10.
exmadscientist 1h ago
At the time of their introduction Optane drives were noticeably faster to boot your machine than even the fastest available Flash SSD. So in a workstation with multiple hard drives installed anyway, buying one to boot off of made decent sense.

If they had been cheaper, I think they'd have been really, really popular.

bushbaba 2h ago
Not just capacity: SSD speeds also improved to the point where they were good enough for many high-memory workloads.
epistasis 3h ago
When most people are running databases on AWS RDS, or on ridiculous EBS drives with insanely low throughput and latency, it makes sense to me.

There are very few applications that benefit from such low latency, and if getting it means going off the standard path (easy, but slow, expensive, and automatically backed up), people will pick the ease.

Having the best technology performance is not enough for product-market fit. The execution required from the executives at Intel was far, far beyond their capability. They developed a platform and wanted others to do the work of building all the applications. Without that starting killer app, there's not enough adoption to build an ecosystem.

amluto 1h ago
> There are very few applications that benefit from such low latency

Basically any RDBMS? MySQL and Postgres both benefit from high performance storage, but too many customers have moved into the cloud where you can’t get NVMe-like performance for durable storage for anything remotely close to a worthwhile price.

epistasis 1h ago
I'm saying that there are very few downstream applications built on databases that benefit from reducing latency below the slow performance of the cloud. Running your database on VMs or bare metal gives better performance, but almost no applications built on databases bother to do it.
p-e-w 3h ago
Optane was a victim of its own hype, such as “entirely new physics”, or “as fast as RAM, but persistent”. The reality felt like a failure afterwards even though it was still revolutionary, objectively speaking.
amelius 3h ago
For a good technical explanation at the physical level of a memory cell:

https://pcper.com/2017/06/how-3d-xpoint-phase-change-memory-...

walterbell 3h ago
Related: "High-bandwidth flash progress and future" (15 comments), https://news.ycombinator.com/item?id=46700384

In an era of RAM shortages and quarterly price increases, Optane remains viable for swap and CPU/GPU cache.

Weryj 1h ago
I’ve been considering buying 8x 64GB models and setting them up as equal-priority swap disks (to mitigate the low throughput) for this exact reason.
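For what it's worth, Linux interleaves pages across swap areas that share the same priority, so that setup is just a matter of giving every device the same pri= value. An /etc/fstab sketch with hypothetical device paths:

```
# Equal pri= values make the kernel round-robin swap pages across all
# devices, aggregating their throughput. Device paths are placeholders.
/dev/disk/by-id/nvme-optane0  none  swap  sw,pri=10  0  0
/dev/disk/by-id/nvme-optane1  none  swap  sw,pri=10  0  0
```

The same effect is available at runtime via `swapon -p 10 <device>`.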
MrDrMcCoy 19m ago
Can confirm: doing so is awesome. Get some slightly bigger ones and partition them for additional use as a ZIL. They're extremely satisfying to use, and it's depressing to remember that we'll never see their like again.
trollbridge 3h ago
Yeah, I've wondered if we might see a revival of this kind of technology.
newsclues 3h ago
In an era of shortages, if there were an Optane factory today, it'd be ready to print money...
walterbell 3h ago
Secondary market surplus pricing (~$1/GB): the value accrues to the buyer.
zozbot234 2h ago
> (~$1/GB)

Isn't that actually crazy good, even insane, value for the performance and DWPD you get with Optane, especially with DRAM being ~$15/GB or so? I don't think ~$1/GB NAND is anywhere near that good on durability, even if the raw performance is quite possibly higher.

readitalready 3h ago
These are absolute beasts for database servers, and they definitely need to make a comeback.

They suck for large sequential file access, but they're incredible for small random access: databases.

dangoodmanUT 3h ago
Optane was crazy good tech; it was just too expensive at the time for mass adoption, but the benefits were so good.

Looking at those charts, besides the DWPD it feels like normal NVMe has mostly caught up. I occasionally wonder where a gen 7/8(?) Optane would be today if it had caught on; it'd probably be nuts.

exmadscientist 2h ago
The actual strength of Optane was on mixed workloads. It's hard to write a flash cell (read-erase-write cycle, higher program voltage, settling time, et cetera). Optane didn't have any of that baggage.

This showed up as amazing numbers on a 50%-read, 50%-write mix. Which, guess what, a lot of real workloads have, but benchmarks don't often cover well. This is why it's a great OS boot drive: there's so much cruddy logging going on (writes) at the same time as reads to actually load the OS. So Optane was king there.

lvl155 1h ago
It’s the best OS drive especially p5800x.
zozbot234 3h ago
> besides the DWPD it feels like normal NVMe has mostly caught up.

So what you mean is that on the most important metric of all for many workloads, flash-based NVMe has not caught up at all. When you run a write-heavy workload on storage with limited DWPD (including heavy swapping from RAM), higher performance actually burns through your durability budget faster.
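The endurance gap is easy to put in numbers. A back-of-envelope sketch; the DWPD figures are assumptions recalled from public spec sheets (roughly 100 DWPD for the Optane P5800X versus something like 0.3 DWPD for a typical QLC drive), so treat them as illustrative:

```python
def lifetime_writes_tb(capacity_tb, dwpd, warranty_years=5):
    """Total terabytes writable over the warranty window (the TBW rating)."""
    return capacity_tb * dwpd * 365 * warranty_years

# Same 1.6 TB capacity, wildly different endurance budgets:
optane_tbw = lifetime_writes_tb(1.6, 100)  # ~292,000 TB (~292 PB)
qlc_tbw = lifetime_writes_tb(1.6, 0.3)     # ~876 TB
```

Two-plus orders of magnitude of write budget is why swap and ZIL duty, which would chew through QLC, barely registers on Optane.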

ashvardanian 3h ago
I don't have the inside scoop on Intel's current mess, but they definitely have a habit of killing off their coolest projects.
brcmthrowaway 11m ago
Realsense too
rkagerer 2h ago
My understanding is that Optane is still unbeaten when it comes to latency. Has anyone examined its use as an OS volume, compared to today's leading SSDs? I know the throughput won't be as high, but in my experience that's not as important as latency to how responsive your machine feels.
hamdingers 2h ago
> Has anyone examined its use as an OS volume, compared to today's leading SSD's?

Late last year I switched from a 1.5TB Optane 905P to a 4TB WD Blue SN5000 NVMe drive in a gaming machine and saw improved load times, which makes sense given the read and write speeds are ~double. No observable difference otherwise.

I'm sure that's not the use case you were looking for. I could probably tease out the difference in latency with benchmarks but that's not how I use the computer.

The 905P is now in service as an SSD cache for a large media server and that came with a big performance boost but the baseline I'm comparing to is just spinning drives.

exmadscientist 1h ago
Unfortunately, a gaming machine workload is so read-heavy that I wouldn't expect Optane to square up well. Gaming is all about read speed and overall capacity. You need that heavy I/O mix, especially with low-latency deadlines, to see gains from Optane. That limited target use case, coupled with ignorant benchmarking, always limited them.
rkagerer 2h ago
Thanks, that's helpful real-world feedback (not that I wouldn't also be interested in some synthetic benchmark comparisons from someone else).
aggieNick02 1h ago
We benchmarked three of the popular Optane NVMe SSDs about three years ago. There was a short window when they were on clearance and a popular choice as a cache SSD in TrueNAS.

https://pcpartpicker.com/forums/topic/425127-benchmarking-op...

You can compare their benchmarks with the other almost 400 SSDs we've benchmarked. Most impressive is that three years later they are still the top random read QD1 performers, with no traditional flash SSD coming anywhere close:

https://pcpartpicker.com/products/internal-hard-drive/benchm...

They are amazing for how consistent and boring their performance is. Bit-level access means no need for TRIM or garbage collection; performance doesn't degrade over time, latency is great, and random I/O is not problematic.
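QD1 random reads are also straightforward to reproduce at home. A rough sketch of such a loop (no O_DIRECT, for portability, so a real benchmark run would need to bypass or drop the page cache first; the demo file below stands in for the raw device):

```python
import os
import random
import tempfile
import time

def qd1_random_read_us(path, block=4096, n_ops=256):
    """Issue one 4 KiB pread at a time (queue depth 1) at random aligned offsets."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    samples = []
    try:
        for _ in range(n_ops):
            off = random.randrange(0, size // block) * block  # block-aligned offset
            t0 = time.perf_counter()
            os.pread(fd, block, off)
            samples.append((time.perf_counter() - t0) * 1e6)
    finally:
        os.close(fd)
    return sorted(samples)

# Demo against a 4 MiB scratch file; a real test would target the drive itself.
scratch = tempfile.NamedTemporaryFile(delete=False)
scratch.write(os.urandom(4 * 1024 * 1024))
scratch.close()
lat = qd1_random_read_us(scratch.name)
median_us = lat[len(lat) // 2]
```

At queue depth 1 there is no parallelism to hide behind, which is exactly why Optane's media latency still wins this benchmark.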

aaronmdjones 2h ago
I have a 16 GiB Optane NVMe M.2 drive in my router as a boot drive, running OpenWRT.

It's so incredibly fast and responsive that the LuCI interface completely loads the moment I hit enter on the login form.

speedgoose 2h ago
I configured a Hetzner AX101 bare-metal server with a 480GB 3D XPoint SSD some years ago. It's used as the boot volume and it seems fast despite the server being heavily overprovisioned, but I can't really compare because I don't have a baseline without it.
rkagerer 2h ago
Before people claim it doesn't matter due to OS write buffering, I should point out that (a) today's bloated software, and the many-layered, abstracted I/O stack it's built on, tends to issue lots of unnecessary flushes, and (b) read latency is just as important as write latency (if not more so) to how responsive your OS feels, particularly if the whole thing doesn't fit in (or preload into) memory.
twotwotwo 2h ago
One potential application I briefly had hope for was really good power loss protection in front of a conventional Flash SSD. You only need a little compared to the overall SSD capacity to be able to correctly report the write was persisted, and it's always running, so there's less of a 'will PLP work when we really need it?' question. (Maybe there's some use as a read cache too? Host RAM's probably better for that, though.) It's going to be rewritten lots of times, but it's supposed to be ready for that.

It seems like there's a very small window, commercially, for new persistent memories. Flash throughput scales really cost-efficiently, and a lot is already built around dealing with the tens-of-microseconds latencies (or worse--networked block storage!). Read latencies you can cache your way out of, and writers can either accept commit latency or play it a little fast and loose (count a replicated write as safe enough or...just not be safe). You have to improve on Flash by enough to make it worth the leap while remaining cheaper than other approaches to the same problem, and you have to be confident enough in pulling it off to invest a ton up front. Not easy!

hedora 2h ago
Any decent SSD has capacitor-backed (enterprise) or battery-backed (phones) DRAM. Therefore, a sync write is just “copy the data to an I/O buffer over PCIe”.

For databases, where you do lots of small scattered writes, and lots of small overwrites to the tail of the log, modern SSDs coalesce writes in that buffer, greatly reducing write wear, and allowing the effective write bandwidth to exceed the media write bandwidth.

These schemes are much less expensive than optane.

wtallis 1h ago
> One potential application I briefly had hope for was really good power loss protection in front of a conventional Flash SSD.

That was never going to work out. Adding an entirely new kind of memory to your storage stack was never going to be easier or cheaper than adding a few large capacitors to the drive so it could save the contents of the DRAM that the SSD still needed whether or not there was Optane in the picture.

zozbot234 2h ago
> It seems like there's a very small window, commercially, for new persistent memories. Flash throughput scales really cost-efficiently

Flash is no bueno for write-heavy workloads, and the random-access R/W performance is meh compared to Optane. MLC and SLC have better durability and performance, but still very mid.

exmadscientist 2h ago
Around the time of Optane's discontinuation, the rumor mill was saying that the real reason it got the axe was that it couldn't be shrunk any, so its costs would never go down. Does anyone know if that's true? I never heard anything solid, but it made a lot of sense given what we know about Optane's fab process.

And if no shrink was possible, is that because (a) it was possible but too hard, (b) there were known blockers to a die shrink, or (c) execs didn't want to pay to find out?

hedora 1h ago
I think it was killed primarily because the DIMM version had a terrible programming API. There was no way to pin a cache line, update it, and flush it, so no existing database buffer-pool algorithms were compatible with it. Some academic work tried to address this, but I don’t know of any products.
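The missing primitive described here (update in place, then explicitly make the data durable) can be illustrated with a portable stand-in. This sketch uses a file mmap with msync; on a real DAX-mapped Optane DIMM the flush step would be CLWB plus SFENCE on the touched cache lines (the pattern PMDK's libpmem implements) rather than a syscall:

```python
import mmap
import os
import struct

PATH = "pmem.img"  # stand-in for a DAX-mapped persistent-memory region

with open(PATH, "wb") as f:
    f.truncate(mmap.PAGESIZE)

fd = os.open(PATH, os.O_RDWR)
m = mmap.mmap(fd, mmap.PAGESIZE)

# Update an 8-byte record in place...
struct.pack_into("<Q", m, 0, 0x1234)
# ...then explicitly force it to durable media before anything can point at it.
# This flush is the step a pmem-aware buffer pool would need fine control over.
m.flush(0, mmap.PAGESIZE)

persisted = struct.unpack("<Q", open(PATH, "rb").read(8))[0]
m.close()
os.close(fd)
```

The complaint in the comment is that on the DIMMs there was no clean way to hold an update back from this flush point, which is what buffer-pool algorithms assume.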

The SSD form factor wasn’t any faster at writes than NAND with capacitor-backed power-loss protection. The read path was faster, but only in time to first byte; NAND had comparable or better throughput. I forget where the cutoff was, but I think it was less than 4-16KB, which are typical database read sizes.

So, the DIMMs were unprogrammable, and the SSDs had a “sometimes faster, but it depends” performance story.

exmadscientist 1h ago
The DIMMs were their own shitshow and I don't know how they even made it as far as they did.

The SSDs were never going to be dominant at straight read or write workloads, but they were absolutely king of the hill at mixed workloads because, as you note, time to first byte was so low that they switched between read and write faster than anything short of DRAM. This was really, really useful for a lot of workloads, but benchmarkers rarely bothered to look at this corner... despite it being, say, the exact workload of an OS boot drive.

For years there was nothing that could touch them in that corner (OS drive, swap drive, etc.), and to this day it's unclear whether the best modern drives can compete.

myself248 1h ago
It sounds like they didn't do a good job of putting the DIMM version in the hands of folks who'd write the drivers just for fun.

The read path is sort of a wash, but writes are still unequalled. NAND writes feel like you're mailing a letter to the floating gate...

zozbot234 1h ago
Isn't this addressed by newer PCIe standards? Of course, even the "new" Optane media reviewed in OP is stuck on PCIe 4.0...
zozbot234 2h ago
That's at least physically half-plausible, but it would be a terrible reason if true. 3.5 in. format hard drives can't be shrunk any, and their costs are correspondingly high, but they still sell - newer versions of NVMe even provide support for them. Same for LTO tape cartridges. Perhaps they expected other persistent-memory technologies to ultimately do better, but we haven't really seen this.

Worth noting though that Optane is also power-hungry for writes compared to NAND. Even when it was current, people noticed this. It's a blocker for many otherwise-plausible use cases, especially re: modern large-scale AI where power is a key consideration.

wtallis 2h ago
> 3.5 in. format hard drives can't be shrunk any,

You're looking at the entirely wrong kind of shrinking. Hard drives are still (gradually) improving storage density: the physical size of a byte on a platter does go down over time.

Optane's memory cells had little or no room for shrinking, and Optane lacked 3D NAND's ability to add more layers with only a small cost increase.

georgeburdell 1h ago
Flash had the same shrink problem, and the solution for Optane was the same: go 3D.
exmadscientist 1h ago
I don't think the shrink problem is at all the same for the two technologies. There are some really weird materials and production steps in Optane that are simply not present when making Flash cells.
ritcgab 51m ago
All those nice numbers are just beaten by the unit cost. And the ecosystem is a mess.
myself248 1h ago
My kingdom for a MicroSD card with Optane inside. My dashcam wants it soooo badly.
rkagerer 2h ago
Did anyone ever see retention issues like this guy reported on one of his older models?

https://goughlui.com/2024/07/28/tech-flashback-intel-optane-...

zozbot234 2h ago
That's data retention issues on the very first read-through of the media after sitting in cold storage for many years, with subsequent performance returning to normal. It's definitely something to be aware of (and kudos to the blog poster for running that experiment) but worn-out NAND will behave a lot worse than that.
pgwalsh 2h ago
Sure, they were expensive, but they have great endurance and sustained read and write speeds. I use one in my car for camera recordings. I had gone through several other drives, but this one has been going for 3 or 4 years now without issue. I have a couple more in use too. It's a shame this tech is going away, because it's excellent.
gozzoo 3h ago
Maybe we can also mention the HP Memristor here.
jamiek88 1h ago
Oh I was so excited for that. I devoured any news or blogs or rumours about that immediately!
gigatexal 3h ago
I’m still sad they discontinued them. What’s the alternative now does anything come close?
walterbell 1h ago
Small sizes are on secondary market for ~$1/GB.
zozbot234 1h ago
Which is a bargain compared to what DRAM costs today. If you just include the bare minimum of DRAM for a successful boot and immediately set up the entire "small" Optane drive as swap, that's a viable workstation-class system for comparative peanuts. You can't do this with NAND because the write workload of swap kills the media (I suppose it becomes viable if you monitor SMART wear-out indicators and heavily overprovision the storage to leverage the drive's pSLC mode, but you're still treating ~$0.10/GB hardware as a consumable, and that will cost you), and of course you can't do it with spinning rust because the media is too slow.
FpUser 3h ago
I feel sorry about the situation. From my perspective Optane was a godsend for databases. I was contemplating building a system. Could've been a pinnacle of vertical scalability for cheap.
ece 3h ago
Fabs are expensive and all, but maybe a right-sized fab could still have run profitably making Optane for the low-latency work it was so good at. Even more so with RAM prices as they are.
bluedino 1h ago
Now do Intel's HBM/CPU Max
jccx70 19m ago
[dead]
temptemptemp111 3h ago
[dead]