Verify Network
----------------

This section goes into depth on how SD-Core (which runs *inside* the
Kubernetes cluster) connects to either physical gNBs or an emulated
RAN (both running *outside* the Kubernetes cluster). For the purpose
of this section, we assume you already have a scalable cluster running
(as outlined in the previous section), SD-Core has been installed on
that cluster, and you have a terminal window open on the Master node
in that cluster.

:numref:`Figure %s <fig-macvlan>` shows a high-level schematic of
Aether's end-to-end User Plane connectivity, where we start by
focusing on the basics: a single Aether node, a single physical gNB,
and just the UPF container running inside SD-Core. The identifiers
shown in gray in the figure (``10.76.28.187``, ``10.76.28.113``,
``ens18``) are taken from our running example of an actual
deployment (meaning your details will be different). All the other
names and addresses are part of a standard Aether configuration.

.. _fig-macvlan:
.. figure:: figures/Slide24.png
   :width: 700px
   :align: center

   The UPF pod running inside the server hosting Aether, with
   ``core`` and ``access`` bridging the two. Identifiers
   ``10.76.28.187``, ``10.76.28.113``, ``ens18`` are specific to
   a particular deployment site.

As shown in the figure, there are two Macvlan bridges that connect the
physical interface (``ens18`` in our example) with the UPF
container. The ``access`` bridge connects the UPF downstream to the
RAN (this corresponds to 3GPP's N3 interface) and is assigned IP subnet
``192.168.252.0/24``. The ``core`` bridge connects the UPF upstream
to the Internet (this corresponds to 3GPP's N6 interface) and is assigned
IP subnet ``192.168.250.0/24``. This means, for example, that the
``access`` interface *inside* the UPF (which is assigned address
``192.168.252.3``) is the destination IP address of GTP-encapsulated
user plane packets from the gNB.
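The subnet assignments just described can be sanity-checked with a few
lines of Python's standard ``ipaddress`` module. The snippet below is
purely illustrative (it is not part of Aether); the addresses are the
ones from the standard configuration described above:

```python
import ipaddress

# Bridge subnets from the standard Aether configuration (see above).
ACCESS_SUBNET = ipaddress.ip_network("192.168.252.0/24")  # N3, toward the RAN
CORE_SUBNET = ipaddress.ip_network("192.168.250.0/24")    # N6, toward the Internet

# Interface addresses inside the UPF container.
upf_access = ipaddress.ip_address("192.168.252.3")
upf_core = ipaddress.ip_address("192.168.250.3")

# GTP-encapsulated packets from the gNB target the UPF's access
# address, which must fall inside the access bridge's subnet.
assert upf_access in ACCESS_SUBNET
assert upf_core in CORE_SUBNET
print("addressing consistent")
```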
Following this basic schematic, it is possible to verify that the UPF
is connected to the network by checking that the ``core`` and
``access`` interfaces are properly configured. This can be done using
``ip``, and you should see results similar to the following:

.. code-block::

   $ ip addr show core
   15: core@ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
       link/ether 06:f7:7c:65:31:fc brd ff:ff:ff:ff:ff:ff
       inet 192.168.250.1/24 brd 192.168.250.255 scope global core
          valid_lft forever preferred_lft forever
       inet6 fe80::4f7:7cff:fe65:31fc/64 scope link
          valid_lft forever preferred_lft forever

   $ ip addr show access
   14: access@ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
       link/ether 82:ef:d3:bb:d3:74 brd ff:ff:ff:ff:ff:ff
       inet 192.168.252.1/24 brd 192.168.252.255 scope global access
          valid_lft forever preferred_lft forever
       inet6 fe80::80ef:d3ff:febb:d374/64 scope link
          valid_lft forever preferred_lft forever

The above output from ``ip`` shows the two interfaces visible to the
server, but running *outside* the container. ``kubectl`` can be used
to see what's running *inside* the UPF, where ``bessd`` is the name of
the container that implements the UPF, and ``access`` and
``core`` are the last two interfaces shown below:

.. code-block::

   $ kubectl -n omec exec -ti upf-0 -c bessd -- ip addr
   1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
       link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
       inet 127.0.0.1/8 scope host lo
          valid_lft forever preferred_lft forever
       inet6 ::1/128 scope host
          valid_lft forever preferred_lft forever
   3: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
       link/ether 8a:e2:64:10:4e:be brd ff:ff:ff:ff:ff:ff link-netnsid 0
       inet 192.168.84.19/32 scope global eth0
          valid_lft forever preferred_lft forever
       inet6 fe80::88e2:64ff:fe10:4ebe/64 scope link
          valid_lft forever preferred_lft forever
   4: access@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
       link/ether 82:b4:ea:00:50:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0
       inet 192.168.252.3/24 brd 192.168.252.255 scope global access
          valid_lft forever preferred_lft forever
       inet6 fe80::80b4:eaff:fe00:503e/64 scope link
          valid_lft forever preferred_lft forever
   5: core@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
       link/ether 4e:ac:69:31:a3:88 brd ff:ff:ff:ff:ff:ff link-netnsid 0
       inet 192.168.250.3/24 brd 192.168.250.255 scope global core
          valid_lft forever preferred_lft forever
       inet6 fe80::4cac:69ff:fe31:a388/64 scope link
          valid_lft forever preferred_lft forever

When packets flowing upstream from the gNB arrive on the server's
physical interface, they need to be forwarded over the ``access``
interface. This is done by having the following kernel route
installed, which should be the case if your Aether installation was
successful:

.. code-block::

   $ route -n | grep "Iface\|access"
   Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
   192.168.252.0   0.0.0.0         255.255.255.0   U     0      0        0 access
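To see why that route steers GTP traffic onto the ``access``
interface, here is a minimal sketch of the kernel's
longest-prefix-match route selection. The routing table and helper
function are illustrative only (the default route's interface is a
made-up example):

```python
import ipaddress

# Illustrative routing table: (prefix, egress interface), mirroring
# the access route shown above plus a hypothetical default route.
ROUTES = [
    ("0.0.0.0/0", "ens18"),
    ("192.168.252.0/24", "access"),
]

def egress_iface(dst: str) -> str:
    """Pick the most-specific (longest-prefix) matching route,
    as the kernel does."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(pfx), ifc)
               for pfx, ifc in ROUTES if addr in ipaddress.ip_network(pfx)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# GTP packets from the gNB are addressed to the UPF's access interface.
print(egress_iface("192.168.252.3"))  # -> access
print(egress_iface("8.8.8.8"))        # -> ens18
```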
Within the UPF, the correct behavior is to forward packets between the
``access`` and ``core`` interfaces. Upstream packets arriving on the
``access`` interface have their GTP headers removed and the raw IP
packets are forwarded to the ``core`` interface. The routes inside
the UPF's ``bessd`` container will look something like this:

.. code-block::

   $ kubectl -n omec exec -ti upf-0 -c bessd -- ip route
   default via 169.254.1.1 dev eth0
   default via 192.168.250.1 dev core metric 110
   10.76.28.0/24 via 192.168.252.1 dev access
   10.76.28.113 via 169.254.1.1 dev eth0
   169.254.1.1 dev eth0 scope link
   192.168.250.0/24 dev core proto kernel scope link src 192.168.250.3
   192.168.252.0/24 dev access proto kernel scope link src 192.168.252.3

The default route via ``192.168.250.1`` directs upstream packets to
the Internet via the ``core`` interface, with a next hop of the
``core`` interface outside the UPF. These packets then undergo source
NAT in the kernel and are sent to the IP destination in the packet.
This means that the ``172.250.0.0/16`` addresses assigned to UEs are
not visible beyond the Aether server. The return (downstream) packets
undergo reverse NAT and now have a destination IP address of the UE.
They are forwarded by the kernel to the ``core`` interface by these
rules on the server:

.. code-block::

   $ route -n | grep "Iface\|core"
   Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
   172.250.0.0     192.168.250.3   255.255.0.0     UG    0      0        0 core
   192.168.250.0   0.0.0.0         255.255.255.0   U     0      0        0 core

The first rule above matches packets to the UEs on the
``172.250.0.0/16`` subnet. The next hop for these packets is the
``core`` IP address inside the UPF. The second rule says that the next
hop address is reachable on the ``core`` interface outside the UPF.
As a result, the downstream packets arrive in the UPF, where they are
GTP-encapsulated with the IP address of the gNB.
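That encapsulation step can be illustrated with a short sketch. The
8-byte GTP-U header below (version 1, G-PDU message type) follows the
standard 3GPP layout, but the function is a simplified stand-in for
what the UPF's BESS pipeline actually does, and the TEID and payload
bytes are made up:

```python
import struct

def gtpu_encap(teid: int, inner_ip: bytes) -> bytes:
    """Prepend a minimal 8-byte GTP-U header to a user IP packet.
    Real headers may also carry sequence numbers and extensions."""
    flags = 0x30     # version=1, protocol type=GTP, no optional fields
    msg_type = 0xFF  # G-PDU: the payload is a T-PDU (the user packet)
    # Length counts the payload, excluding the mandatory 8-byte header.
    return struct.pack("!BBHI", flags, msg_type, len(inner_ip), teid) + inner_ip

pkt = gtpu_encap(0x1234, b"\x45\x00\x00\x14")  # hypothetical IP header bytes
assert len(pkt) == 8 + 4
```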
Note that if you are not finding ``access`` and ``core`` interfaces
outside the UPF, the following commands can be used to create these
two interfaces manually, along with their addresses (again using our
running example for the physical ethernet interface):

.. code-block::

   $ ip link add core link ens18 type macvlan mode bridge
   $ ip addr add 192.168.250.1/24 dev core
   $ ip link add access link ens18 type macvlan mode bridge
   $ ip addr add 192.168.252.1/24 dev access

Beyond this basic understanding, there are three other details of
note. First, we have been focusing on the User Plane because Control
Plane connectivity is much simpler: RAN elements (whether they are
physical gNBs or gNBsim) reach the AMF using the server's actual IP
address (``10.76.28.113`` in our running example). Kubernetes is
configured to forward SCTP packets arriving on port ``38412`` to the
AMF container.

Second, the basic end-to-end schematic shown in :numref:`Figure %s
<fig-macvlan>` assumes each gNB is assigned an address on the same L2
network as the Aether cluster (e.g., ``10.76.28.0/24`` in our example
scenario). This works when the gNB is physical or when we want to run
a single gNBsim traffic source, but once we scale up gNBsim by
co-locating multiple containers on a single server, we need to
introduce another network so each container has a unique IP address
(even though they are all hosted on the same server). This more
complex configuration is depicted in :numref:`Figure %s <fig-gnbsim>`,
where ``172.20.0.0/16`` is the IP subnet for the virtual network (also
implemented by a Macvlan bridge, and named ``gnbaccess``).
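The point of the extra network is easy to see with the ``ipaddress``
module: the ``172.20.0.0/16`` subnet provides a pool of distinct
addresses for the co-located containers. The allocation below is only
illustrative; in Aether the assignment is handled by the Macvlan
plumbing, not by this code:

```python
import ipaddress

# The virtual RAN network from the text.
ran_subnet = ipaddress.ip_network("172.20.0.0/16")

# Hand out one distinct address per co-located gNBsim container.
hosts = ran_subnet.hosts()
gnbsim_addrs = [str(next(hosts)) for _ in range(4)]
print(gnbsim_addrs)  # ['172.20.0.1', '172.20.0.2', '172.20.0.3', '172.20.0.4']

# Every container gets a unique address on the shared server.
assert len(set(gnbsim_addrs)) == 4
```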
.. _fig-gnbsim:
.. figure:: figures/Slide25.png
   :width: 600px
   :align: center

   A server running multiple instances of gNBsim, connected to
   Aether.

For completeness, :numref:`Figure %s <fig-start>` shows the Macvlan
setup for the Quick Start configuration, where both the ``gnbaccess``
bridge and gNBsim container run in the same server as the Core (but
with the container managed by Docker, independent of Kubernetes).

.. _fig-start:
.. figure:: figures/Slide27.png
   :width: 275px
   :align: center

   The Quick Start configuration with all components running in a
   single server.

Finally, all of the configurable parameters used throughout this
section are defined in the ``core`` and ``gnbsim`` sections of the
``vars/main.yml`` file. Note that an empty value for
``core.ran_subnet`` implies the physical L2 network is used to connect
RAN elements to the core, as is typically the case when connecting
physical gNBs.

.. code-block::

   core:
     standalone: "true"
     data_iface: ens18
     values_file: "config/sdcore-5g-values.yaml"
     ran_subnet: "172.20.0.0/16"
     helm:
       chart_ref: aether/sd-core
       chart_version: 0.12.6
     upf:
       ip_prefix: "192.168.252.0/24"
     amf:
       ip: "10.76.28.113"

   gnbsim:
     ...
     router:
       data_iface: ens18
       macvlan:
         iface: gnbaccess
         subnet_prefix: "172.20"
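To tie these values together: ``gnbsim.router.macvlan.subnet_prefix``
expands to the same ``/16`` given as ``core.ran_subnet``, and an empty
``ran_subnet`` selects the physical L2 network instead. The helper
functions below are hypothetical, written only to illustrate that
relationship:

```python
def expand_prefix(subnet_prefix: str) -> str:
    # "172.20" -> "172.20.0.0/16", matching core.ran_subnet above.
    return f"{subnet_prefix}.0.0/16"

def ran_attachment(ran_subnet: str) -> str:
    # An empty ran_subnet means RAN elements attach over the physical
    # L2 network (typical for physical gNBs); otherwise the virtual
    # gnbaccess network is used.
    return "physical L2" if not ran_subnet else f"virtual network {ran_subnet}"

assert expand_prefix("172.20") == "172.20.0.0/16"
print(ran_attachment(""))               # physical gNBs
print(ran_attachment("172.20.0.0/16"))  # co-located gNBsim containers
```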