Commit 7ad62ff

Add Performance-Fresser Series articles: Kubernetes, NGINX, Microservices, Cloud Hyperscalers, etcd, Service Mesh, and Docker

1 parent 3e6eb32 · 7 files changed, +757 −0
---
author: Zaoui Amine
date: 2026-02-18T16:00:00Z
title: 'Performance-Fresser Series: Today: Cloud Hyperscalers'
featured: true
draft: false
tags:
- cloud
- aws
- azure
- gcp
- infrastructure
- performance
- cost-optimization
categories: [Technology]
---
Cloud-first is the default. Every startup uses AWS. Every enterprise migrates to Azure. Every consultant recommends GCP. But here's the thing: 37signals went from $3.2M per year to $1.3M per year after leaving the cloud. Over $10M saved in five years.

GEICO spent a decade migrating to the cloud. Result: 2.5x higher costs. They're not alone.

The cloud isn't always cheaper. It's often more expensive. Especially when you factor in hidden costs: egress fees, managed services, vendor lock-in.
## The Egress Fee Trap

Cloud providers charge for data leaving their network. AWS charges $0.09 per GB for the first 10 TB. Azure charges $0.05-0.08 per GB. GCP charges $0.12 per GB.

That doesn't sound like much. But it adds up. High-traffic applications transfer terabytes per month. Video streaming. Data processing. API responses. Each byte costs money.

37signals was paying $50,000 per month in egress fees alone. That's $600,000 per year. Just for data leaving AWS. Their own hardware costs less than that.

Egress fees are a tax on success. The more traffic you serve, the more you pay. The more successful you are, the more expensive the cloud becomes.

Self-hosted infrastructure has no egress fees. You pay for bandwidth once. Then you use it. No per-GB charges. No surprise bills.
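A back-of-the-envelope sketch shows how fast egress compounds. This uses the per-GB list prices quoted above, simplified to a single flat tier (real provider pricing is tiered and drops at higher volumes):

```python
# Back-of-the-envelope egress cost per month, using the per-GB list
# prices quoted above (simplified to a single flat tier).
RATES_PER_GB = {"aws": 0.09, "azure": 0.065, "gcp": 0.12}

def monthly_egress_cost(tb_per_month: float, provider: str) -> float:
    """Flat-rate estimate: TB transferred out * 1024 GB/TB * price per GB."""
    return tb_per_month * 1024 * RATES_PER_GB[provider]

# At these rates, ~540 TB/month of AWS egress lands near the
# $50,000/month figure cited above.
for tb in (10, 100, 542):
    print(f"{tb:>4} TB/month on AWS: ${monthly_egress_cost(tb, 'aws'):,.0f}")
```

The point of the sketch: egress cost is linear in traffic, so a growing product pays a growing tax with no economy of scale.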
## The Managed Service Tax

Cloud providers sell managed services. RDS instead of PostgreSQL. ElastiCache instead of Redis. SQS instead of RabbitMQ. Each one costs more than self-hosting.

RDS costs 2-3x more than running PostgreSQL on EC2. ElastiCache costs 2-4x more than running Redis yourself. SQS charges per message. RabbitMQ is free.

Managed services are convenient. They handle backups. They handle scaling. They handle maintenance. But you pay for that convenience. And you pay a lot.

Teams choose managed services "because it's easier." But easier doesn't mean cheaper. And for many companies, cheaper matters more than easier.
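The markup compounds per service. A rough model of the yearly premium, using the 2-3x and 2-4x multipliers above as midpoints; the base self-hosted costs are hypothetical placeholders, not real quotes:

```python
# Rough managed-service markup model. Multipliers are midpoints of the
# 2-3x (RDS) and 2-4x (ElastiCache) ranges cited above; the base
# self-hosted monthly costs are hypothetical placeholders.
SELF_HOSTED_MONTHLY = {"postgres": 300.0, "redis": 150.0}  # assumed EC2 cost
MANAGED_MARKUP = {"postgres": 2.5, "redis": 3.0}

def yearly_premium(service: str) -> float:
    """Extra dollars per year paid for the managed flavor of a service."""
    base = SELF_HOSTED_MONTHLY[service]
    return base * (MANAGED_MARKUP[service] - 1) * 12

for svc in SELF_HOSTED_MONTHLY:
    print(f"{svc}: ~${yearly_premium(svc):,.0f}/year premium for managed")
```

Plug in your own base costs; the shape of the result is the same. The premium scales with the base, so the bigger your database, the bigger the tax.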
## The Lock-In Problem

Once you're on a cloud provider, leaving is hard. Your infrastructure is tied to their services. Your data is in their format. Your configs use their APIs.

Migrating means rewriting everything. Rebuilding infrastructure. Retraining teams. Months of work. Millions of dollars.

Vendor lock-in is real. And it's expensive. Cloud providers know this. That's why they make it easy to get in and hard to get out.

37signals left AWS after 15 years. It took months. It cost money. But they saved $1.9M per year. The migration cost was worth it.

Most companies never leave. They're locked in. They accept rising costs. They accept vendor limitations. They accept that leaving is too expensive.
## The Hidden Costs

Cloud bills are complicated. EC2 instances. RDS databases. S3 storage. CloudFront CDN. Route 53 DNS. CloudWatch monitoring. Data transfer. API calls. Reserved instances. Spot instances. The list goes on.

Teams struggle to understand their cloud bills. They add services. They forget to remove them. They over-provision. They under-optimize. Costs creep up.

A $10,000 per month bill becomes $20,000. Then $30,000. Then $50,000. Teams don't notice until it's too late. By then, they're locked in.

Self-hosted infrastructure has predictable costs. Hardware. Bandwidth. Power. That's it. No surprise charges. No per-API-call fees. No per-GB transfer costs.
## The "Cloud-First" Cargo Cult

Every startup uses AWS. Every enterprise migrates to Azure. Every consultant recommends GCP. But why?

Teams choose cloud "because everyone does it." They don't analyze costs. They don't consider alternatives. They just follow the trend.

Cloud makes sense for some companies. Startups that need to scale quickly. Companies that need global infrastructure. Teams that don't want to manage hardware.

But cloud doesn't make sense for everyone. Companies with predictable workloads. Companies with data sovereignty requirements. Companies that can self-host cheaper.

The "cloud-first" mentality ignores alternatives. It assumes cloud is always better. It's not. Sometimes self-hosting is cheaper. Sometimes self-hosting is simpler. Sometimes self-hosting is better.
## Who Actually Needs It

Cloud makes sense if you have:

- Unpredictable workloads that need to scale quickly
- Global infrastructure requirements
- Teams that don't want to manage hardware
- Compliance requirements that cloud providers meet
- Budget for cloud premium pricing

Most companies don't have these requirements. Most companies have predictable workloads. Most companies can self-host cheaper. Most companies just follow the trend.

If you're a startup with unpredictable growth, cloud makes sense. If you're an enterprise with predictable workloads, self-hosting might be cheaper. If you're somewhere in between, analyze costs. Don't assume cloud is always better.
---
author: Zaoui Amine
date: 2026-02-21T16:00:00Z
title: 'Performance-Fresser Series: Today: Docker'
featured: true
draft: false
tags:
- docker
- containers
- infrastructure
- performance
- cost-optimization
categories: [Technology]
---
Docker is everywhere. Every application runs in containers. Every deployment uses Docker. Every team containerizes everything. But here's the thing: Docker adds a runtime layer between your application and the OS. That layer has overhead. That overhead costs money.

Containers aren't free. They consume CPU. They consume memory. They consume disk space. They add complexity. They add operational burden.

Most applications don't need containers. Most applications can run directly on the OS. Most applications don't need the isolation. Most applications don't need the portability.
## The Runtime Tax

Docker adds a runtime layer. The container runtime. The container daemon. The network bridge. The storage driver. All of it adds overhead.

A basic Docker setup uses 100-200 MB RAM. Plus CPU for container management. Plus disk I/O for image layers. Plus network overhead for container networking.

Production setups run multiple containers. Each one adds overhead. Each one consumes resources. Each one is another thing to manage.

Native applications don't have this overhead. They run directly on the OS. They use fewer resources. They're simpler to manage.
## The Image Bloat Problem

Docker images are bloated. Base images include entire operating systems. Alpine Linux: 5 MB. Ubuntu: 70 MB. Debian: 120 MB. But that's just the base.

Applications add layers. Dependencies. Libraries. Tools. A typical application image: 200-500 MB. Some images: 1-2 GB. That's a lot of bloat.

Teams build images with everything. Development tools. Debugging tools. Documentation. All of this gets shipped to production. All of this increases image size.

Multi-stage builds help. But they're complex. Teams skip them. They ship bloated images. They pay for storage. They pay for transfer. They pay for startup time.

Native applications don't have this bloat. They're just binaries. They're small. They're fast. They don't need image layers.
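For teams that do stay on Docker, the multi-stage fix is smaller than its reputation suggests. A minimal sketch for a Go service (the module path and binary name are hypothetical placeholders): build in a full toolchain image, then copy only the static binary into an empty final image.

```dockerfile
# Stage 1: build environment (large, never shipped)
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 produces a statically linked binary; ./cmd/server is a
# hypothetical package path
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Stage 2: the shipped image carries only the binary, no OS, no toolchain
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image is roughly the size of the binary itself; the gigabyte-scale toolchain stays in the build stage and never reaches production.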
## The Orchestration Dependency

Docker requires orchestration. Single containers are easy. But production needs orchestration. Kubernetes. Docker Swarm. Nomad. All of this adds complexity.

Teams containerize applications. Then they need orchestration. Then they need service discovery. Then they need load balancing. Then they need monitoring. The complexity explodes.

Native applications don't need orchestration. They can run as systemd services. They can run under a process manager. They're simpler. They're easier to manage.

But teams containerize everything. They add orchestration. They add complexity. They add overhead. All because "containers are the standard."
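The systemd route mentioned above needs no extra tooling. A hypothetical unit file (binary path, user, and limits are placeholders) that gives restarts and resource limits without a container runtime:

```ini
# /etc/systemd/system/myapp.service -- hypothetical unit for a native
# binary; path, user, and limits are placeholders.
[Unit]
Description=My application (runs directly on the host, no container)
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
User=myapp
# Cgroup-based limits, same kernel mechanism Docker uses:
MemoryMax=512M
CPUQuota=50%

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myapp`. The `MemoryMax` and `CPUQuota` directives use the same cgroups that container runtimes rely on, so you get resource limits and restart-on-failure without the runtime layer.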
## The "Containerize Everything" Trend

Every application gets containerized. Web apps. APIs. Background jobs. Cron jobs. Everything. But why?

Teams containerize "because everyone does it." They don't analyze whether containers add value. They don't consider alternatives. They just follow the trend.

Containers make sense for some applications. Applications that need isolation. Applications that need portability. Applications that need consistent environments.

But containers don't make sense for everything. Simple applications don't need containers. Native applications don't need containers. System services don't need containers.
## The Portability Illusion

Docker promises portability. "Build once, run anywhere." But that's not always true. Different hosts have different kernels. Different storage drivers. Different network configurations.

Containers work best when hosts are similar. But hosts aren't always similar. Production differs from development. Cloud differs from on-premises. Portability breaks.

Native applications can be portable too. A statically linked binary runs on any host with a compatible OS and CPU architecture. No container runtime needed. No image layers. Just the application.
## The Isolation Overhead

Docker provides isolation. Namespaces. Cgroups. But that isolation has overhead. CPU overhead. Memory overhead. I/O overhead.

Most applications don't need this isolation. They can run directly on the OS. They can share resources. They can be simpler.

Virtual machines provide stronger isolation. But they're heavier. Containers provide lighter isolation. But the overhead is still there.

If you don't need isolation, skip containers. Run applications directly. Use fewer resources. Simplify operations.
## Who Actually Needs It

Docker makes sense if you have:

- Applications that need isolation from each other
- Applications that need consistent environments across hosts
- Applications that need easy deployment and scaling
- Teams that want to standardize on containers

Most applications don't have these requirements. Most applications can run directly on the OS. Most applications don't need isolation. Most applications don't need portability.

If you're running a simple web app, consider running it directly. Use systemd. Use a process manager. Skip containers. It's simpler. It's cheaper.

If you're running multiple applications that need isolation, containers might make sense. But don't containerize everything. Containerize what needs containers. Skip containers for everything else.
---
author: Zaoui Amine
date: 2026-02-19T16:00:00Z
title: 'Performance-Fresser Series: Today: etcd'
featured: true
draft: false
tags:
- etcd
- kubernetes
- distributed-systems
- consensus
- infrastructure
- performance
categories: [Technology]
---
etcd sits at the heart of Kubernetes. Before your applications run, etcd is storing cluster state, coordinating elections, and replicating data. It consumes 2-8 GB RAM per node. It requires 3-5 nodes for high availability. That's 6-40 GB RAM just for cluster coordination.

Most teams don't need distributed consensus. Most teams don't need high availability at the cluster level. Most teams are running small clusters that would work fine with a single node and backups.

But Kubernetes requires etcd. So teams run etcd. They pay the consensus tax. They accept the complexity. They accept the overhead.
## The Consensus Tax

Distributed consensus is expensive. etcd uses the Raft algorithm. It requires 3-5 nodes. Each node stores a full copy of the cluster state. Each node participates in consensus. Each node consumes memory and CPU.

A 3-node etcd cluster needs 6-24 GB RAM total. Plus CPU for consensus operations. Plus network bandwidth for replication. Plus disk I/O for persistence.

All of this before your applications run. All of this just for cluster coordination. All of this overhead.

Single-node systems don't need consensus. They're simpler. They're faster. They use fewer resources. But they're not "highly available." So teams choose etcd instead.
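The 3-5 node requirement falls directly out of Raft's quorum arithmetic, which fits in a few lines:

```python
# Raft quorum arithmetic: a cluster of n voting members needs a
# majority (floor(n/2) + 1) to commit writes, so it tolerates
# n - quorum simultaneous member failures.
def quorum(n: int) -> int:
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    return n - quorum(n)

for n in (1, 3, 4, 5):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

Note the even case: 4 nodes tolerate the same single failure as 3, while adding a fourth full replica of the state. That is why etcd clusters come in odd sizes, and why the cost floor for any fault tolerance at all is three nodes.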
## The HA Illusion

High availability sounds good. But what are you making highly available? The cluster coordination layer? Is that what's important?

Most applications don't need cluster-level HA. Application-level HA is different. Load balancers. Multiple application instances. Database replication. These provide HA where it matters.

Cluster coordination HA protects against etcd failures. But etcd failures are rare. And when etcd is down, the control plane stops anyway: existing pods keep running, but nothing can be scheduled or changed. So what are you protecting against?

Teams choose etcd "because we need HA." But they don't need HA at the cluster coordination level. They need HA at the application level. Those are different things.
## The Small Cluster Problem

etcd makes sense for large clusters. Hundreds of nodes. Thousands of pods. Complex coordination requirements. But most clusters are small.

A typical Kubernetes cluster has 3-10 nodes. Maybe 20-50 pods. Simple coordination requirements. etcd is overkill.

Small clusters don't need distributed consensus. They don't need 3-node etcd clusters. They don't need the overhead. They need simple coordination. Or no coordination at all.

Teams run etcd because Kubernetes requires it. But they don't need what etcd provides. They're paying for consensus they don't use.
## The Memory Overhead

etcd stores cluster state. Every pod. Every service. Every config map. Every secret. Everything gets stored in etcd. Memory usage grows with cluster size.

A small cluster might use 2-4 GB RAM for etcd. A medium cluster: 4-8 GB. A large cluster: 8-16 GB. Per node. Multiply by 3-5 nodes. That's 6-80 GB RAM just for etcd.

Most applications don't need this much coordination. Most applications can coordinate through databases. Through message queues. Through simpler mechanisms.

But Kubernetes requires etcd. So teams run etcd. They accept the memory overhead. They accept the cost.
## The Complexity Explosion

etcd adds complexity. Cluster formation. Node failures. Split-brain scenarios. Backup and restore. Upgrades. All of this is complex.

Teams need to understand Raft. They need to monitor etcd health. They need to handle etcd failures. They need to back up etcd data. They need to restore etcd from backups.

All of this complexity. All of this operational burden. All for cluster coordination that most teams don't need.

Single-node systems are simpler. No cluster formation. No split-brain scenarios. No consensus overhead. Just backups. That's it.
## Who Actually Needs It

etcd makes sense if you have:

- Large clusters with hundreds of nodes
- Complex coordination requirements
- Need for cluster-level high availability
- Teams that can operate distributed systems

Most teams don't have these requirements. Most teams have small clusters. Most teams don't need cluster-level HA. Most teams can't operate distributed systems effectively.

If you're running a small cluster, consider alternatives. Single-node Kubernetes. Docker Swarm. Nomad. Simpler orchestration. Less overhead. Less complexity.

If you're running Kubernetes, you're stuck with etcd. But understand what you're paying for. Understand the overhead. Understand that most teams don't need what etcd provides.
## The Kubernetes Dependency

Kubernetes requires etcd. In a standard cluster, there's no way around it. If you want Kubernetes, you get etcd. You pay the consensus tax. You accept the overhead.

But do you need Kubernetes? Most teams don't. Most teams can use simpler orchestration. Most teams can avoid etcd entirely.

If you need Kubernetes, you need etcd. But if you don't need Kubernetes, you don't need etcd. The question is: do you need Kubernetes?

Most teams don't. They choose Kubernetes because it's popular. They get etcd as a side effect. They pay the consensus tax without needing consensus.
## The Honest Answer

If you're running fewer than 20 nodes, etcd is probably overkill. If you don't need cluster-level HA, etcd is probably overkill. If you can't operate distributed systems, etcd is probably overkill.

But if you're running Kubernetes, you're stuck with etcd. Standard Kubernetes requires it. There's no supported alternative.

The real question is: do you need Kubernetes? If you don't, you don't need etcd. If you do, you need etcd. But understand what you're paying for. Understand the overhead. Understand that most teams don't need what etcd provides.

The goal is to orchestrate applications, not to operate the most sophisticated consensus system. etcd is a tool. It's required for Kubernetes. But Kubernetes isn't required for most teams.

Most teams can avoid etcd by avoiding Kubernetes. Most teams can use simpler orchestration. Most teams can avoid the consensus tax entirely.

But if you're running Kubernetes, you're running etcd. You're paying the tax. You're accepting the overhead. Just understand what you're getting. And what you're paying for.
