IaaS Network Engineer Notes
I have been working in Huawei's private cloud IaaS department since 2021. These are my notes.
---------------------
Key Terms:
A cloud contains networks. Virtual machines (VMs) bind their ports to those networks. VMs in the same network have a layer 2 connection (switching), while VMs in different networks but the same VPC have a layer 3 connection (routing).
The management plane manages devices, such as virtual routers. The control plane decides how packets are forwarded, for example by programming flows. The data plane does the actual packet forwarding.
---------------------
How do virtual machine packets get transferred?
1. Every virtual machine has its own routes, MAC addresses, and IP addresses. For every virtual port there is a "tap" device on the physical host server, and packets leave the VM through that tap device.
2. The host server uses flows (e.g. OpenFlow rules) and bridges (e.g. Open vSwitch bridges) to forward packets. The bridges are split by responsibility, such as access control, layer 2 switching, layer 3 routing, and VXLAN encapsulation/decapsulation.
3. Packets are forwarded according to the flow rules (a flow matches on fields such as source MAC, source IP, in port, VLAN, and custom registers) and the way the bridges are connected. When a packet enters a bridge, it starts matching at the first table (table 0), hits the matching flow with the highest priority, and is then resubmitted to another table as that flow dictates. Finally the packet leaves the bridges and is sent out of the host server (see the flow sketch after this list).
4. There is typically VXLAN encapsulation before a packet is sent out of the host server. VXLAN keeps the overlay traffic simple and isolated; the VXLAN Network Identifier (VNI) can be mapped to the virtual networks. When the target host server receives the packets it decapsulates them, and the virtual machine itself never knows what happened outside (see the VXLAN sketch after this list).
5. There are usually switches connected to the host server. They can carry routes, NQA/BFD, etc. NQA and BFD can be set up to detect connectivity and withdraw a route when the far end is unreachable. We can balance the traffic load by configuring two routes with an identical destination but different nexthops, so both can be selected (see the ECMP sketch after this list).
6. A typical use of NQA/BFD is when there are many identical virtual routers and packets can be forwarded through any one of them. The virtual routers themselves can also contain bridges and flows.
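
For item 3, a minimal sketch of what such a flow can look like with Open vSwitch's ovs-ofctl. The bridge name (br-int), port number, MAC address, register value, and target table are made-up placeholders, not the actual production layout.

  # Dump the flow tables of a bridge; table 0 is matched first
  ovs-ofctl dump-flows br-int

  # Hypothetical flow: match the in_port and source MAC of a VM's tap device,
  # stash a tag in register 0, then resubmit the packet to table 10
  ovs-ofctl add-flow br-int \
    "table=0,priority=100,in_port=5,dl_src=fa:16:3e:aa:bb:cc,actions=load:0x1->NXM_NX_REG0[],resubmit(,10)"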
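
For item 4, a rough sketch of a VXLAN tunnel port and a VLAN-to-VNI mapping in Open vSwitch. The bridge and port names, remote IP, VLAN ID, and VNI are illustrative assumptions.

  # Hypothetical VXLAN tunnel port whose VNI is chosen per flow
  ovs-vsctl add-port br-tun vxlan0 -- set interface vxlan0 type=vxlan \
    options:remote_ip=192.0.2.20 options:key=flow

  # Hypothetical flow: map local VLAN 100 to VNI 5000, then send into the tunnel
  ovs-ofctl add-flow br-tun \
    "table=0,priority=10,dl_vlan=100,actions=strip_vlan,set_tunnel:5000,output:vxlan0"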
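
For item 5, the switch-side syntax is vendor specific, so here is only a Linux analogy of two equal-cost routes with the same destination and different nexthops; the prefix, nexthops, and device names are placeholders.

  # Hypothetical ECMP route: one destination, two nexthops, traffic is balanced across both
  ip route add 10.10.0.0/16 \
    nexthop via 192.0.2.1 dev eth0 weight 1 \
    nexthop via 192.0.2.2 dev eth1 weight 1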
---------------------
How, why, and best practices:
1. Namespaces isolate network resources. Two namespaces cannot reach each other by default; we can connect them, for example, with a veth pair (see the namespace sketch at the end of this list).
2. Restarting the management plane or the control plane (e.g. after an upgrade) should not affect network traffic.
3. Virtual routers can be VMs. We can upgrade them one by one, and the traffic that was on the router being upgraded moves to the others.
4. There should be a way to quickly prevent packets from entering a specific virtual router. For example, the virtual router can drop all BFD requests, so the switch no longer considers it reachable and stops sending traffic to it (see the drain sketch at the end of this list).
5. To let the virtual routers scale up or down elastically, they can be VMs. The traffic entering the host server can be VLAN tagged, with VXLAN handled inside the virtual routers themselves.
6. VXLAN vs VLAN. Both are logical networks. A VXLAN network is an overlay (virtual) while a VLAN network is an underlay (physical); VXLAN does not care about the underlay as long as it provides reachability. VXLAN carries traffic in UDP packets sent over tunnels. VXLAN has a 24-bit identifier (VNI), supporting about 16 million segments, while VLAN has a 12-bit identifier, supporting 4096 networks. With VXLAN it is easy to migrate VMs, because they can always keep the same IPs (see the VXLAN device sketch at the end of this list).
7. Network resources of different tenants and virtual private clouds should not interfere with each other. This can be achieved with namespaces, VLAN, VXLAN, virtual bridges, and so on.
8. Inside Linux, packets are typically processed in kernel space, which can be slow. DPDK provides a fast path that bypasses the kernel, eliminating context switches into kernel space. DPDK uses poll mode drivers instead of CPU interrupts, uses huge pages, and can be NUMA aware to avoid expensive cross-NUMA memory access (see the OVS-DPDK sketch at the end of this list).
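
For item 1 (namespaces), a minimal namespace-plus-veth sketch on a Linux host; all names and addresses are made-up placeholders.

  # Two isolated namespaces, connected with a veth pair
  ip netns add ns1
  ip netns add ns2
  ip link add veth1 type veth peer name veth2
  ip link set veth1 netns ns1
  ip link set veth2 netns ns2
  ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth1
  ip netns exec ns2 ip addr add 10.0.0.2/24 dev veth2
  ip netns exec ns1 ip link set veth1 up
  ip netns exec ns2 ip link set veth2 up
  ip netns exec ns1 ping -c 1 10.0.0.2   # reachable only through the veth pair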
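
For item 4 (draining a virtual router), one way to stop answering BFD is a high-priority drop flow; single-hop BFD control packets use UDP destination port 3784. The bridge name is an assumption.

  # Hypothetical drain flow: drop BFD so the switch withdraws the route to this router
  ovs-ofctl add-flow br-ext "table=0,priority=200,udp,tp_dst=3784,actions=drop"
  # Undo the drain by deleting the flow
  ovs-ofctl del-flows br-ext "udp,tp_dst=3784"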
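
For item 6 (VXLAN vs VLAN), a plain Linux VXLAN device makes the 24-bit VNI and the UDP encapsulation visible; the VNI, addresses, and device names are placeholders.

  # Hypothetical VXLAN interface: VNI 5000, standard UDP port 4789, underlay via eth0
  ip link add vxlan5000 type vxlan id 5000 dstport 4789 local 192.0.2.10 dev eth0
  ip link set vxlan5000 up
  # Flood unknown traffic to one remote VTEP (all-zero MAC is the default/flood entry)
  bridge fdb append 00:00:00:00:00:00 dev vxlan5000 dst 192.0.2.20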
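
For item 8 (DPDK), the userspace fast path can be illustrated with OVS-DPDK settings; the memory sizes, CPU mask, and PCI address below are placeholders, not a recommended configuration.

  # Enable DPDK in Open vSwitch and reserve hugepage memory per NUMA node
  ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
  ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
  # Pin poll-mode-driver threads to specific cores (polling instead of interrupts)
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
  # Userspace bridge plus a physical NIC attached via its PCI address
  ovs-vsctl add-br br-phy -- set bridge br-phy datapath_type=netdev
  ovs-vsctl add-port br-phy dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0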