Apply standardised formatter to changelog--friends-114.md
This commit was automatically generated by the formatter GitHub Action, which ran the src/format.js script.
Files changed:
friends/changelog--friends-114.md
Lines changed: 13 additions & 13 deletions
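The only change in this diff is to the speaker timestamps: the formatter shortens the long `\[HH:MM:SS.ff\]` markers to `\[MM:SS\]`, dropping the zero hour and the frame fraction, while leaving inline markers such as `\[unintelligible 00:36:04.24\]` untouched. As a rough sketch of that rewrite — an illustration only, not the actual contents of src/format.js; the function name, regex, and handling of hour-long timestamps are assumptions — it could look something like this:

```js
// Illustrative sketch only -- not the real src/format.js. It reproduces the
// timestamp rewrite visible in this diff: \[00:08:09.21\] becomes \[08:09\].
// How timestamps with a non-zero hour are handled is an assumption.
const LONG_TIMESTAMP = /\\\[(\d{2}):(\d{2}):(\d{2})\.\d{2}\\\]/g;

function shortenTimestamps(markdown) {
  return markdown.replace(LONG_TIMESTAMP, (_match, hh, mm, ss) => {
    // Drop the frame fraction; keep the hour only when it is non-zero.
    const parts = hh === '00' ? [mm, ss] : [hh, mm, ss];
    return `\\[${parts.join(':')}\\]`;
  });
}

// Example:
//   shortenTimestamps('**Jerod Santo:**\\[00:11:57.08\\] Uhm, hiking?')
//   // => '**Jerod Santo:**\\[11:57\\] Uhm, hiking?'
// Markers where the digits are not preceded directly by \[, such as
// \[unintelligible 00:36:04.24\], do not match and are left unchanged.
```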
@@ -72,7 +72,7 @@ What's most interesting is the size. So if you have 64 gigabytes to migrate, how
**Jerod Santo:** You'll like this one, because it's kind of up your wheelhouse. Although his home lab is better than yours... I mean, that's the sad part. I mean, happy for him, but sad for you.
-**Gerhard Lazu:** \[00:08:09.21\] So I think this is going to be almost like a challenge. Like, how can we improve the home labs, too? Linux, Ubuntu, Arch, a couple of things are going to come up... Networks, because they're really interesting... GPUs - how do you run all this stuff in a way that does not break the bank? Because that's the other consideration. I'm not a data center... I wish I was, but I'm not. I'm very sensitive to noise, so I can't have like fans blaring, and 1U, 2U servers, the really shrill ones... So I'm just the Noctua person, that goes for whisper-quiet everything; even fanless, if it's possible... I tried fanless for a hundred gigabits, and it runs really hot. That's the one thing which I was not expecting, just how hot these things run. 400 gigabit is even more crazy, and 800... Wow. So that's really like the next frontier. That's what I'm looking at: 400, 800 and beyond... And all this to service some workloads that have very sensitive latencies in terms of throughput latency... I mean, how do you run a remote GPU? I mean, that's crazy. And what I mean by that - the GPU is in a rack, you're on your laptop, and you're running against the GPU. So you have an NVIDIA GPU on your laptop. How does that even work?
+**Gerhard Lazu:** \[08:09\] So I think this is going to be almost like a challenge. Like, how can we improve the home labs, too? Linux, Ubuntu, Arch, a couple of things are going to come up... Networks, because they're really interesting... GPUs - how do you run all this stuff in a way that does not break the bank? Because that's the other consideration. I'm not a data center... I wish I was, but I'm not. I'm very sensitive to noise, so I can't have like fans blaring, and 1U, 2U servers, the really shrill ones... So I'm just the Noctua person, that goes for whisper-quiet everything; even fanless, if it's possible... I tried fanless for a hundred gigabits, and it runs really hot. That's the one thing which I was not expecting, just how hot these things run. 400 gigabit is even more crazy, and 800... Wow. So that's really like the next frontier. That's what I'm looking at: 400, 800 and beyond... And all this to service some workloads that have very sensitive latencies in terms of throughput latency... I mean, how do you run a remote GPU? I mean, that's crazy. And what I mean by that - the GPU is in a rack, you're on your laptop, and you're running against the GPU. So you have an NVIDIA GPU on your laptop. How does that even work?
**Jerod Santo:** So the software on your laptop believes there's a GPU available to it, but it's over the network.
@@ -128,7 +128,7 @@ What's most interesting is the size. So if you have 64 gigabytes to migrate, how
**Gerhard Lazu:** It was just so amazing. After 18 months of building Pipely in the open, with Friends, we shipped it on stage in Denver, and it was so awesome. Seriously, such a great feeling. The audience was clapping, Jerod and Adam were smiling... I was so proud of what we have achieved. Really, really proud. But do you remember what happened a few hours right after we did the stage bit?
-**Jerod Santo:**\[00:11:57.08\] Uhm, hiking?
+**Jerod Santo:**\[11:57\] Uhm, hiking?
**Gerhard Lazu:** Yes... And before that there was something else...
@@ -236,7 +236,7 @@ What's most interesting is the size. So if you have 64 gigabytes to migrate, how
**Jerod Santo:** Okay...
-**Gerhard Lazu:**\[00:15:55.21\] So our CDN was running on these tiny instances, and it just didn't get very far. So we're looking at an impossibly tiny bike that someone is actually riding. That's what we're seeing right now.
+**Gerhard Lazu:**\[15:55\] So our CDN was running on these tiny instances, and it just didn't get very far. So we're looking at an impossibly tiny bike that someone is actually riding. That's what we're seeing right now.
**Adam Stacoviak:** I can't even believe they can ride that bike, by the way. We're watching a video. I don't know, is this on -- okay, it doesn't matter. There's a video with a bike, and a dude on a very, very small bike, and... It's impossible to ride that thing.
@@ -270,7 +270,7 @@ So in this case, because all of the previous infrastructure was in place, updati
**Gerhard Lazu:** Exactly, exactly. So we went back to about 33%.
-**Jerod Santo:**\[00:19:59.26\] Okay. So they couldn't handle 100% with the underprovisioning that you had done with our Fly VMs.
+**Jerod Santo:**\[19:59\] Okay. So they couldn't handle 100% with the underprovisioning that you had done with our Fly VMs.
**Gerhard Lazu:** That's correct, yes. They were too small, they didn't have enough memory, not enough CPU, and there were too few of them. There were certain hotspots that needed more than one instance, and that's what we did.
@@ -310,7 +310,7 @@ So in this case, because all of the previous infrastructure was in place, updati
**Gerhard Lazu:** Well, I did receive some emails that instances were running out of memory and crashing, but it was happening after a while, so that was maybe the equivalent of that.
-**Jerod Santo:**\[00:24:03.09\] Right.
+**Jerod Santo:**\[24:03\] Right.
**Gerhard Lazu:** But in this case, because we were so engrossed in the conversations, we never heard the rattles.
@@ -384,7 +384,7 @@ So in this case, because all of the previous infrastructure was in place, updati
**Adam Stacoviak:** A hundred percent.
-**Gerhard Lazu:**\[00:28:04.05\] And I'll give you a couple of moments to think about that... \[laughter\]
+**Gerhard Lazu:**\[28:04\] And I'll give you a couple of moments to think about that... \[laughter\]
**Adam Stacoviak:** I was like "A hundred percent."
@@ -432,7 +432,7 @@ The more interesting question would be who would like to join? ...to see where d
**Gerhard Lazu:** I think if you ask Adam, he'll say "ChangelogCon." The first Changelog conference ever. Go, go, go.
-**Adam Stacoviak:**\[00:32:00.23\] Maybe... I kind of liked it just as it was though, honestly. I wouldn't mind having some trusted -- not like demos, but some show and tell. I think there's a lot of pontification from the stage... I'd love to have some show and tell type stuff, if that was a thing. And maybe that's demos. I'm thinking Oxide with their racks, and stuff like that. That's kind of show and tell. But I don't know.
+**Adam Stacoviak:**\[32:00\] Maybe... I kind of liked it just as it was though, honestly. I wouldn't mind having some trusted -- not like demos, but some show and tell. I think there's a lot of pontification from the stage... I'd love to have some show and tell type stuff, if that was a thing. And maybe that's demos. I'm thinking Oxide with their racks, and stuff like that. That's kind of show and tell. But I don't know.
**Gerhard Lazu:** That would be really cool.
@@ -456,7 +456,7 @@ The more interesting question would be who would like to join? ...to see where d
**Gerhard Lazu:** Yeah, I think that's a good time.
-**Break**: \[00:34:33.00\]
+**Break**: \[34:33\]
**Gerhard Lazu:** We're looking at all the different steps that we had to take between being on stage at Denver... Do you know which \[unintelligible 00:36:04.24\] were there? Just looking at this list... It's a list on the Pipely repo -- by the way, we're looking at the readme... All the various release candidates of 1.0, before going to 1.0. I thought it would happen on stage, it didn't. Or soon after. It didn't. But it did happen now. So we are beyond, and we are running on 1.0. If you look at 1.0 RC4, limit Varnish memory to 66%. And that's the one commit which I pushed that was on stage. There was the next one, RC5, handle varnish JSON response, failing on startup, and bump the instance size to performance. That was the scale-up that needed to happen.
@@ -506,7 +506,7 @@ Now, the one thing which was failing, and this was discovered after, I think, we
**Jerod Santo:** Plus the hacker spirit. Plus our cache hit ratio was out of our own hands... We wanted it in our own hands.
-**Gerhard Lazu:**\[00:40:03.29\] Yeah, yeah. It was like the previous screenshot. So this is the moment I turned off all traffic from like forever, in this case, from Fastly. It was only a few days, but you can see that in those few days we had 155,000 cache hits. Sorry, cache misses. 155,000 cache misses. And we had 370,000 cache hits. So the ratio does not look right. That green line, the cache hits, there were days when there were more; or like periods, not days. There were periods, up to maybe half an hour, an hour, when there were more misses than hits. And you do not expect a CDN to behave that way. And by the way, this is across both changelog.com and cdnchangelog.com. So it includes both the static assets, everything. Just a small window, but it just shows the problem.
+**Gerhard Lazu:**\[40:03\] Yeah, yeah. It was like the previous screenshot. So this is the moment I turned off all traffic from like forever, in this case, from Fastly. It was only a few days, but you can see that in those few days we had 155,000 cache hits. Sorry, cache misses. 155,000 cache misses. And we had 370,000 cache hits. So the ratio does not look right. That green line, the cache hits, there were days when there were more; or like periods, not days. There were periods, up to maybe half an hour, an hour, when there were more misses than hits. And you do not expect a CDN to behave that way. And by the way, this is across both changelog.com and cdnchangelog.com. So it includes both the static assets, everything. Just a small window, but it just shows the problem.
Now, as a percentage, that translates to 70.5%. So 70.5% cache hits, and that is really not great. Okay, I know you've been expecting this... So let's see. What do you think is our current cache hit versus miss ratio? This is across all requests. So now that we switched across, we had 10 days to measure this... On the new system, what do you think is the cache hit versus miss ratio?
@@ -566,7 +566,7 @@ Now, as a percentage, that translates to 70.5%. So 70.5% cache hits, and that is
**Jerod Santo:** These are XML files that represent the current state of our podcast syndication, our episodes that we're shipping and have shipped. And so they're hit often by robots who are scraping feeds in order to update their podcast indexes, and let people know which episodes are available. And they should be at 99.5%, because they only change when we publish a new episode, which is at this point in our lives three times a week. On a Monday, on a Wednesday, and on a Friday. And every other request, every other day and time is the same exact content.
-**Gerhard Lazu:**\[00:43:53.20\] That's it. So I would say that this is possibly the most important thing to serve. Because if we don't serve feeds correctly, how do you know what content Changelog has? How do you know when content updates? And this is like worldwide. So I think this is pretty good. And improving on 99.5%, I don't think we should do it.
+**Gerhard Lazu:**\[43:53\] That's it. So I would say that this is possibly the most important thing to serve. Because if we don't serve feeds correctly, how do you know what content Changelog has? How do you know when content updates? And this is like worldwide. So I think this is pretty good. And improving on 99.5%, I don't think we should do it.
**Jerod Santo:** No.
@@ -606,7 +606,7 @@ News... I know this is something that's very important to Jerod. It was 52.6% ca
**Adam Stacoviak:** So I'd say news could be similar to feed, pushing that to the boundary, because it doesn't change much. I'd love to explore that when you do the mp3 exploration of large objects getting pushed out... I'd love to just sit on your shoulder, I suppose, or as a fly on the wall kind of thing, just to explore that with you... Because I'm super-curious about what makes that cache get purged out of the memory, myself.
-**Gerhard Lazu:**\[00:47:54.28\] Yeah. Well, pairing up is something that I'm getting better and better every day. Recorded and published pairing sessions. Jerod has the experience, not Adam...
+**Gerhard Lazu:**\[47:54\] Yeah. Well, pairing up is something that I'm getting better and better every day. Recorded and published pairing sessions. Jerod has the experience, not Adam...
**Jerod Santo:** That's right.
@@ -704,7 +704,7 @@ News... I know this is something that's very important to Jerod. It was 52.6% ca
**Gerhard Lazu:** You're welcome, humans. Now, what does that look like? I think 863 times is really difficult to imagine, so I'm going to play something for you to see what it means. So what we have here is one second at the top; that's how long it takes. No, hang on. I'm not playing it. I should be playing it. There you go. Now I'm playing it. Okay. While 833 seconds at the bottom is still loading. And it will continue loading for so long that we're not going to wait 15 minutes for this thing to load, okay? We're not going to wait that. So that's the difference between how fast the homepage loads now, versus how it used to load before. This is for the majority of the users.
-\[00:52:21.11\] So the cache hit ratio, the connection there was that everything was slow, and there's nothing we could do about it. And I think slow is relative, because when you're talking about milliseconds, I think there's about 50 or maybe 100 milliseconds when things were nearly instant... But in our case, the homepage was taking 150 milliseconds to get served. And the tail latency is really crazy. Like, the tail latency was over a second for the homepage to serve. That was a long time. By the way, this thing is still going, and it's not even like 10% there.
+\[52:21\] So the cache hit ratio, the connection there was that everything was slow, and there's nothing we could do about it. And I think slow is relative, because when you're talking about milliseconds, I think there's about 50 or maybe 100 milliseconds when things were nearly instant... But in our case, the homepage was taking 150 milliseconds to get served. And the tail latency is really crazy. Like, the tail latency was over a second for the homepage to serve. That was a long time. By the way, this thing is still going, and it's not even like 10% there.
**Adam Stacoviak:** What's the rationale behind this video? Explain to me how this is supposed to explain things...
@@ -762,7 +762,7 @@ News... I know this is something that's very important to Jerod. It was 52.6% ca
**Jerod Santo:** Not to change the subject on you, but --
-**Gerhard Lazu:**\[00:56:10.13\] Yeah, Vinyl is coming up in January.
+**Gerhard Lazu:**\[56:10\] Yeah, Vinyl is coming up in January.