|
106 | 106 | <br>It means you can run the telecom applications (the SW) on any HW (Dell, HP, Quanta, SuperMicro, Fujitsu servers, whatnot), in VMs or containers.
|
107 | 107 | <br> Basically, you can run it on Intel HW, 'cause that's so much better ... meltdown ... spectre ... zombieload
|
108 | 108 | <br>
|
109 |
| -<br>Traces of this decoupling of the network functions from proprietary hardware appliances have been there for many years now. |
| 109 | +<br>Traces of this decoupling of the network functions from proprietary hardware have been there for many years now. |
110 | 110 | <br>Around 2003 I worked at an ISP. We used Cisco routers to do BGP with customers and the upstream provider. I was in awe when GNU Zebra came out and I could run BGP on a Linux box.
|
111 | 111 | <br>Fast forward to today: as part of SDN, we use OpenDaylight with the Quagga soft router for BGP (Quagga is what followed Zebra; the quagga is actually an extinct subspecies of the African zebra).
|
112 | 112 | </aside>
|
|
185 | 185 | <br>You could use one smartNIC for data per compute node; we're talking 2x100 Gbps here, and that's *a lot* of bandwidth.
|
186 | 186 | In this case, high availability happens at the compute level, not at the network-card level.
|
187 | 187 | <br><br>
|
188 |
| -What if you have a dual socket system. If you plug a smartNIC in a PCIe socket, applications might not like crossing that QPI link (from 2017 called UPI) between the CPUs. |
| 188 | +What if you have a dual-socket system? If you plug a smartNIC into a PCIe slot, applications might not like crossing that QPI link (called UPI since 2017) between the CPUs. |
189 | 189 | In this case you can look into using a bifurcated smartNIC: the card is basically split into two physical pieces that you can insert into two PCIe slots, one per NUMA node. A quick way to check which NUMA node a NIC is attached to is sketched below.
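A minimal sketch, assuming a Linux host with sysfs mounted (the interface name eth0 is just a placeholder), of how you could read a NIC's NUMA node before deciding where to pin your vswitch or VNF cores:

```python
#!/usr/bin/env python3
"""Sketch: find the NUMA node a NIC is attached to, so packet-processing
cores can be pinned to the same node and avoid crossing the QPI/UPI link.
Assumes a Linux host with sysfs; the interface name is a placeholder."""

from pathlib import Path
import sys


def nic_numa_node(ifname: str) -> int:
    """Return the NUMA node of a network interface, or -1 if unknown."""
    # sysfs exposes the PCIe device behind each physical interface,
    # including the NUMA node its slot hangs off.
    node_file = Path(f"/sys/class/net/{ifname}/device/numa_node")
    try:
        return int(node_file.read_text().strip())
    except (FileNotFoundError, ValueError):
        return -1  # virtual interface, or the kernel has no NUMA info


if __name__ == "__main__":
    ifname = sys.argv[1] if len(sys.argv) > 1 else "eth0"  # placeholder
    node = nic_numa_node(ifname)
    if node < 0:
        print(f"{ifname}: NUMA node unknown (virtual NIC or single-node system)")
    else:
        print(f"{ifname} is attached to NUMA node {node}; pin your cores there")
```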
|
190 | 190 | <br><br>
|
191 | 191 | If you use two smartNICs, do you want two separate OVS controllers in your compute node? Will OpenStack even support that?
|
|
335 | 335 | We basically need to overcome the limitations of the Linux kernel, which is not ideal for processing *lots* of packets.
|
336 | 336 | <br><br>
|
337 | 337 | Why is the Linux kernel a problem when we talk about latency and performance (by performance I mean higher throughput at a lower CPU cost)?
|
338 |
| -<br>Well, the Linux kernel is monolithic, it's millions of lines of code. |
339 |
| -<br>It contains lots of things like drivers (which makes the Linux kernel work with any hw, not just your specific hw/smartNIC), it allows running many applications at the same time by using a time sharing layer. Resources like CPU, mem exposed by the kernel are shared between all the processes running. |
340 | 338 | <br><br>The networking stack inside the Linux kernel limits how many packets per second it can process. It was conceived to be slow: a decision taken 20-25 years ago that packets should be delivered into sockets. If you want to be fast, you don't do this upfront.
|
341 | 339 | <br><br>Too many packets per second means the CPUs get busy just receiving packets; then either the packets are dropped or we CPU-starve the applications.
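To make the socket-path cost concrete, here is a toy sketch (not a benchmark; it assumes Linux, root privileges, and a placeholder interface name) of the traditional receive path the slide describes: the kernel does all the per-packet work, and we pay a syscall and a copy for every frame we read.

```python
#!/usr/bin/env python3
"""Toy sketch of the 'packets delivered into sockets' path: every frame
goes through the kernel stack, is copied into a socket buffer, and costs
a syscall to read. Assumes Linux and root; the interface name is a
placeholder."""

import socket
import time

ETH_P_ALL = 0x0003  # receive frames of every protocol

# An AF_PACKET raw socket: the kernel still does the per-packet work
# (IRQ, softirq, skb allocation, copy to the socket) before we see the frame.
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
sock.bind(("eth0", 0))  # placeholder interface name

packets = 0
start = time.monotonic()
try:
    while True:
        frame = sock.recv(65535)  # one syscall + one copy per packet
        packets += 1
        if packets % 100_000 == 0:
            elapsed = time.monotonic() - start
            print(f"{packets / elapsed:,.0f} packets/s through the socket path")
except KeyboardInterrupt:
    pass
```

User-space packet-processing frameworks get their speed precisely by skipping this per-packet syscall and copy and polling the NIC queues directly.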
|
342 | 340 |
|
|