HN

What we learned from a 22-day storage bug (and how we fixed it) (mux.com)
5d ago by mmcclure 40 points 10 comments
altairprime 1d ago
> During this incident, we discovered we had crossed a scale threshold where our log ingestion pipeline was being rate-limited and quietly discarding logs. Ironically, we ended up with less information as a result, which made it significantly harder to reconstruct what was actually happening.

Last year they posted about using New Relic, Datadog, and Grafana. Would this ‘silent deletion of log data due to quota’ problem be characteristic of any one of them in particular, or is it something we have to watch out for with all of them?

mmcclure 1d ago
We don't use New Relic or Datadog (and never have, afaik), so I'm not sure what post you could be referring to for those two? We have talked publicly about our Grafana use, though, and going from an in-house stack to their cloud product. Actual OP can probably hop in later with a better answer, but it was hitting rate limits on the logging agent, not the logging system.
altairprime 1d ago
Ah! Thank you, that makes a lot more sense. I misunderstood https://data.mux.com/blog/off-with-our-head-how-we-re-making... as suggesting that Mux was making Mux core infrastructure ‘play nice with’ the various providers.
drodman 1d ago
In general you do need to be aware of any agent-level rate limits as well as any ingestion limits from the provider. We do some pretty careful sampling and aggregation for most metrics, logs, and traces we store, and as mmcclure said, in this case it was the rules on the node agents themselves throwing the errors. The logging volume on some of the critical paths of the service got high enough that logs were dropped due to our configured rate limits.
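
To make the failure mode concrete, here's a minimal sketch (not our actual agent config, just the general shape of a per-node token bucket): once the bucket runs dry, lines are dropped rather than queued, and the only trace left behind is a counter on the agent.

    import time

    class TokenBucket:
        """Per-node rate limit of the kind a logging agent applies: lines over
        the configured rate are dropped, not queued or retried."""

        def __init__(self, lines_per_sec: float, burst: int):
            self.rate = lines_per_sec
            self.capacity = burst
            self.tokens = float(burst)
            self.last = time.monotonic()
            self.dropped = 0

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            self.dropped += 1  # the only trace of the loss is a counter on the agent
            return False

    bucket = TokenBucket(lines_per_sec=1000, burst=2000)

    def emit(line: str) -> None:
        if bucket.allow():
            print(line)  # stand-in for forwarding to the ingestion pipeline
        # else: the line is gone; the backend never sees that it existed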
pooplord69 1d ago
“We didn’t handle errors, didn’t have logs, and now we do cuz next time” saved you a few mins
mannyv 1d ago
Why bother transcoding on the fly? Storage is cheaper than CPU and the work it takes to determine what needs encoding is excessive.

It implies that you guys are generating the playlists on the fly, tracking the client requests, then feeding that over to your transcoder - which then needs to get the original, seek, and transcode. Why bother?

jon_dahl 1d ago
Mux founder here :wave:

Two answers.

First, it does save money. A meaningful percentage of videos on the internet are never watched in the first place, and an even larger percentage are watched soon after upload and never watched again. We're able to prune unwatched renditions, and if they happen to be requested years later, they're still playable. Transcoding on the fly lets us save both CPU and storage.
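
Roughly, the lifecycle looks like this (the names and in-memory storage below are stand-ins for illustration, not our actual implementation): serve a rendition if it exists, rebuild it from the mezzanine on the fly if it was pruned, and let a background job prune anything nobody has watched in a while.

    import time

    renditions = {}  # (video_id, profile) -> {"data": bytes, "last_watched": float}
    PRUNE_AFTER = 30 * 24 * 3600  # illustrative: drop renditions unwatched for 30 days

    def transcode(video_id: str, profile: str) -> bytes:
        """Stand-in for pulling the mezzanine and encoding the requested profile."""
        return f"{video_id}@{profile}".encode()

    def serve(video_id: str, profile: str) -> bytes:
        key = (video_id, profile)
        entry = renditions.get(key)
        if entry is None:
            # Pruned or never encoded: rebuild on the fly from the mezzanine,
            # so the video is still playable years later.
            entry = {"data": transcode(video_id, profile), "last_watched": 0.0}
            renditions[key] = entry
        entry["last_watched"] = time.time()
        return entry["data"]

    def prune() -> None:
        cutoff = time.time() - PRUNE_AFTER
        for key in [k for k, v in renditions.items() if v["last_watched"] < cutoff]:
            del renditions[key]  # reclaim storage; serve() can always rebuild it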

Second, it is ridiculously fast. Our median time-to-publish for a 5-20 minute video is 9 seconds. We had a customer (God bless them) complaining a few months ago that it took us something like 40 seconds to transcode a 40-minute video, which was actually slower than normal for us. If you do an async transcode up front, you're looking at 20 minutes, not <1 minute.

Blog post on this: https://www.mux.com/blog/how-to-transcode-video-100x-faster-...
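
The speed mostly comes from parallelism. A toy sketch of the general idea, assuming ffmpeg is on the PATH and using an arbitrary chunk length (an illustration, not our actual pipeline): split the source into short chunks and encode them concurrently, so wall-clock time is roughly one chunk's encode time rather than the whole video's runtime.

    import math
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    CHUNK_SECONDS = 30  # arbitrary for the sketch; shorter chunks = more parallelism

    def encode_chunk(src: str, index: int) -> str:
        out = f"chunk_{index:05d}.mp4"
        subprocess.run(
            ["ffmpeg", "-y",
             "-ss", str(index * CHUNK_SECONDS),  # seek into the mezzanine
             "-i", src,
             "-t", str(CHUNK_SECONDS),           # encode just this slice
             "-c:v", "libx264", "-c:a", "aac",
             out],
            check=True, capture_output=True)
        return out

    def encode_parallel(src: str, duration_seconds: float, workers: int = 32):
        n_chunks = math.ceil(duration_seconds / CHUNK_SECONDS)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # Wall-clock time is roughly one chunk's encode time (plus stitching),
            # not the runtime of the whole video.
            return list(pool.map(lambda i: encode_chunk(src, i), range(n_chunks)))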

steve_adams_86 23h ago
> A meaningful percentage of videos on the internet are never watched in the first place [...] We're able to prune unwatched renditions, and if they happen to be requested years later, they're still playable.

I worked on something similar a while back, and the data that helped me decide whether to transcode on the fly or store renditions was analytics on how often the files were actually accessed.

I figured out that a large file being transcoded and stored would use more compute resources in ~15 minutes than it was likely to use over the span of _several years_ if it was transcoded on the fly. In a situation where you don't know if the company will exist in several years... you opt for the choice that lets you stack on the storage later if it's necessary.
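
The back-of-envelope version of that trade-off, with entirely made-up numbers just to show the shape of the calculation:

    # Illustrative numbers only -- plug in your own measurements.
    upfront_cpu_minutes = 15.0    # transcode and store every rendition at upload time
    cpu_minutes_per_view = 0.05   # on-the-fly cost for just the segments actually watched
    views_per_year = 20           # observed from access analytics

    break_even_years = upfront_cpu_minutes / (cpu_minutes_per_view * views_per_year)
    print(f"Up-front transcoding pays for itself after ~{break_even_years:.0f} years of views")
    # With these numbers: 15 / (0.05 * 20) = 15 years -- far past the horizon where you
    # even know whether the company (or the file) will still be around.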

That's probably one of the very few times I've ever applied YAGNI properly. That was ripe for over-engineering.

mannyv 19h ago
But can you still seek on a video that's being instantly transcoded? To be honest I don't know if anyone does that except YouTube, and it jumps to the time, so theoretically you have about a second or two when the request comes in to pull the file and start encoding. It sounds like the mezzanine file is chunked, so the time to pull it down is pretty fast.

Since it's your own player you can hint to the backend.

Do you dynamically generate the manifests too? Or do things get transcoded on request?
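
Here's roughly what I imagine that looks like with fixed-length segments (the URL scheme and segment length are invented): the manifest can be generated from nothing but the duration, a seek maps straight to a segment index, and only the segments the player actually requests ever get transcoded.

    import math

    SEGMENT_SECONDS = 6.0  # fixed segment length assumed for the sketch

    def media_playlist(video_id: str, duration: float) -> str:
        """Build an HLS media playlist without touching any encoded segments:
        each URI points at an endpoint that transcodes that segment on request."""
        n = math.ceil(duration / SEGMENT_SECONDS)
        lines = ["#EXTM3U",
                 "#EXT-X-VERSION:3",
                 f"#EXT-X-TARGETDURATION:{int(SEGMENT_SECONDS)}",
                 "#EXT-X-MEDIA-SEQUENCE:0"]
        for i in range(n):
            seg_len = min(SEGMENT_SECONDS, duration - i * SEGMENT_SECONDS)
            lines.append(f"#EXTINF:{seg_len:.3f},")
            lines.append(f"/v/{video_id}/720p/{i}.ts")  # hypothetical just-in-time endpoint
        lines.append("#EXT-X-ENDLIST")
        return "\n".join(lines)

    def segment_for_seek(seek_seconds: float) -> int:
        # A seek to time t only triggers a transcode of this segment (plus whatever
        # the player requests after it), never the whole file.
        return int(seek_seconds // SEGMENT_SECONDS)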

mannyv 19h ago
Oh, never mind - yeah, by using the access logs of segments you can effectively anticipate and pre-encode when you need to. And once the HLS or CMAF stream stabilizes you can just encode one resolution. And the player will tell you it wants to move up or down, so you can trigger the encode it wants.

It's interesting your customers want the video immediately; ours don't care about that. But you guys can really build your manifest files and encode immediately, since you're making those mezzanine files.

Then the encoded files are basically a cache that you can evict whenever.
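
Something like this, roughly, for the evictable-cache part (the read-ahead window and LRU policy are arbitrary choices on my end):

    from collections import OrderedDict

    READ_AHEAD = 3         # pre-encode the next few segments the player will likely want
    CACHE_LIMIT = 10_000   # arbitrary cap for the sketch
    cache = OrderedDict()  # (video_id, profile, idx) -> encoded segment bytes

    def encode_segment(video_id: str, profile: str, idx: int) -> bytes:
        return f"{video_id}/{profile}/{idx}".encode()  # stand-in for the real encode

    def on_segment_request(video_id: str, profile: str, idx: int) -> bytes:
        # The request itself is the access-log signal: whatever segment the player asks
        # for (including after a bitrate switch up or down), warm the ones right behind it.
        for ahead in range(idx, idx + 1 + READ_AHEAD):
            key = (video_id, profile, ahead)
            if key not in cache:
                cache[key] = encode_segment(video_id, profile, ahead)
            cache.move_to_end(key)        # mark as recently used
        while len(cache) > CACHE_LIMIT:
            cache.popitem(last=False)     # evict least-recently-used segments first
        return cache[(video_id, profile, idx)]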

How long did it take you guys to prove that design out?
