Commit 16ed1eb

Author: Elena Crenguta Lindqvist
Commit message: blä
1 parent 24df49d commit 16ed1eb

File tree

1 file changed: +2 -4 lines changed


itnot/index.html

Lines changed: 2 additions & 4 deletions
@@ -106,7 +106,7 @@
 <br>It means you can run the telecom applications (the SW) on any HW (like Dell, HP, Quanta, SuperMicro, Fujitsu servers, whatnot), in VMs or containers.
 <br>Basically, you can run it on Intel HW, 'cause that's so much better ... meltdown ... spectre ... zombieload
 <br>
-<br>Traces of this decoupling of the network functions from proprietary hardware appliances have been there for many years now.
+<br>Traces of this decoupling of the network functions from proprietary hardware have been there for many years now.
 <br>Around 2003, I worked at an ISP. We used Cisco routers to do BGP with customers and the upstream provider. I was in awe when GNU Zebra came out and I could run BGP on a Linux box.
 <br>Fast forward to today: as part of SDN, we use OpenDaylight with the Quagga soft router for BGP (Quagga is what followed Zebra; the quagga is actually an extinct subspecies of the African zebra).
 </aside>
@@ -185,7 +185,7 @@
 <br>You could use one smartNIC for data per compute; we're talking 2x100 Gbps here, that's *a lot* of bandwidth.
 In this case, high availability happens at the compute level, not at the network card level.
 <br><br>
-What if you have a dual-socket system? If you plug a smartNIC into a PCIe socket, applications might not like crossing that QPI link (called UPI from 2017 on) between the CPUs.
+What if you have a dual-socket system? If you plug a smartNIC into a PCIe slot, applications might not like crossing that QPI link (called UPI from 2017 on) between the CPUs.
 In this case you can look into using a bifurcated smartNIC; that's basically splitting the card into two physical pieces that you can insert into two PCIe slots, one per NUMA node.
 <br><br>
 If you use two smartNICs, do you want two separate OVS controllers in your compute? Will OpenStack even support that?
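On the NUMA point above: a Linux compute node reports which NUMA node a PCIe NIC is attached to through sysfs, so you can check locality before deciding where to pin the workload. Below is a minimal Python sketch, assuming a Linux host and a hypothetical interface name like eth0; it only reads /sys and says nothing about actual QPI/UPI traffic.

# Minimal sketch: report which NUMA node a NIC's PCIe device is attached to.
# Assumes Linux; /sys/class/net/<iface>/device/numa_node exists for PCI(e) NICs.
from pathlib import Path
import sys

def nic_numa_node(ifname: str) -> int:
    """Return the NUMA node of the NIC, or -1 if the platform reports none."""
    return int(Path(f"/sys/class/net/{ifname}/device/numa_node").read_text().strip())

if __name__ == "__main__":
    ifname = sys.argv[1] if len(sys.argv) > 1 else "eth0"  # hypothetical interface name
    print(f"{ifname} sits on NUMA node {nic_numa_node(ifname)}")
    # Pinning packet-processing threads (and their memory) to this node avoids
    # crossing the QPI/UPI link between the two CPU sockets.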
@@ -335,8 +335,6 @@
 We basically need to overcome the limitations in the Linux kernel, which is not ideal for *lots* of packet processing.
 <br><br>
 Why is the Linux kernel a problem when we talk about latency and performance (by performance, I mean higher throughput at a lower CPU cost)?
-<br>Well, the Linux kernel is monolithic, it's millions of lines of code.
-<br>It contains lots of things like drivers (which make the Linux kernel work with any HW, not just your specific HW/smartNIC), and it allows running many applications at the same time by using a time-sharing layer. Resources like CPU and memory exposed by the kernel are shared between all the running processes.
 <br><br>The networking stack inside the Linux kernel limits how many packets per second it can process; it was conceived to be slow, a decision taken 20-25 years ago, that packets should be delivered into sockets. If you want to be fast you don't do this upfront.
 <br><br>Too many packets per second means CPUs get busy just receiving packets; then either the packets are dropped or we CPU-starve the applications.
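To put rough numbers on "too many packets per second": a back-of-envelope sketch in Python. The 100 Gbps line rate, minimum-size 64-byte frames and the 3 GHz core are illustrative assumptions, not figures from this commit.

# Back-of-envelope: packet rate at line rate and the per-packet CPU budget.
# Assumptions (illustrative): 100 Gbps link, 64-byte frames, one 3 GHz core.
LINE_RATE_BPS = 100e9      # 100 Gbps
FRAME_BYTES = 64           # minimum Ethernet frame
OVERHEAD_BYTES = 20        # preamble + start-of-frame delimiter + inter-frame gap
CPU_HZ = 3e9               # one 3 GHz core

pps = LINE_RATE_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)   # ~148.8 Mpps
cycles_per_packet = CPU_HZ / pps                             # ~20 cycles

print(f"worst-case packet rate: {pps / 1e6:.1f} Mpps")
print(f"budget on one core: {cycles_per_packet:.0f} CPU cycles per packet")
# A single syscall or cache miss already blows that budget, which is why
# delivering every packet into a socket through the kernel path doesn't scale.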

0 commit comments