Other Blueprints
----------------

The previous sections describe how to deploy four Aether blueprints,
corresponding to four variants of ``vars/main.yml``. This section
documents additional blueprints, each defined by a combination of
Ansible components:

* A ``vars/main-blueprint.yml`` file, checked into the
  ``aether-onramp`` repo, is the "root" of the blueprint
  specification.

* A ``hosts.ini`` file, documented by example, specifies the target
  servers required by the blueprint.

* A set of Make targets, defined in a submodule and imported into
  OnRamp's global Makefile, provides a means to install (``make
  blueprint-install``) and uninstall (``make blueprint-uninstall``)
  the blueprint.

* (Optional) A new ``aether-blueprint`` repo defines the Ansible Roles
  and Playbooks required to deploy a new component.

* (Optional) New Roles, Playbooks, and Templates, checked into
  existing repos/submodules, customize existing components for
  integration with the new blueprint. To support blueprint
  independence, these elements are intentionally kept "narrow",
  rather than glommed onto an existing element.

* A Jenkins job, added to the set of OnRamp integration tests,
  verifies that the blueprint does not regress.
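
As a sketch of how the Make targets in the third bullet might be
wired up, a blueprint submodule could contribute a Makefile fragment
along the following lines (hypothetical; the playbook paths and
target names are placeholders, not actual OnRamp code):

.. code-block:: makefile

   # Hypothetical fragment, included by OnRamp's global Makefile
   blueprint-install:
   	ansible-playbook -i hosts.ini deps/blueprint/install.yml

   blueprint-uninstall:
   	ansible-playbook -i hosts.ini deps/blueprint/uninstall.yml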

By standardizing the process for adding new blueprints to OnRamp, the
goal is to encourage the community to contribute (and maintain) new
Aether configurations and deployment scenarios. The rest of this
section documents the community-contributed blueprints to date.

Multiple UPFs
~~~~~~~~~~~~~

The base version of SD-Core includes a single UPF, running in the same
Kubernetes namespace as the Core's control plane. This blueprint adds
the ability to bring up multiple UPFs (each in a different namespace),
and uses ROC to establish the *UPF-to-Slice-to-Device* bindings
required to activate end-to-end traffic through each UPF. The
resulting deployment is then verified using gNBsim.

The Multi-UPF blueprint includes the following:

* Global vars file ``vars/main-upf.yml`` gives the overall
  blueprint specification.

* Inventory file ``hosts.ini`` is identical to that used in the
  Emulated RAN section. Minimally, SD-Core runs on one server and
  gNBsim runs on a second server.

* New Make targets, ``5gc-upf-install`` and ``5gc-upf-uninstall``, to
  be executed after the standard SD-Core installation. The blueprint
  also reuses the ``roc-load`` target to activate new slices in ROC.

* New Ansible role (``upf``) added to the ``5gc`` submodule, including
  a new UPF-specific template (``upf-5g-values.yaml``).

* New models file (``roc-5g-models-upf2.json``) added to the
  ``roc-load`` role in the ``amp`` submodule. This models file is
  applied as a patch *on top of* the base set of ROC models. (Since
  this blueprint is demonstrated using gNBsim, the assumed base models
  are given by ``roc-5g-models.json``.)
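
For reference, the ``hosts.ini`` described in the second bullet might
look like the following sketch (the node names, addresses, usernames,
and passwords are placeholders to be replaced with your local values):

.. code-block::

   [all]
   node1 ansible_host=10.76.28.113 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether
   node2 ansible_host=10.76.28.115 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether

   [master_nodes]
   node1

   [gnbsim_nodes]
   node2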

To use Multi-UPF, first copy the vars file to ``main.yml``:

.. code-block::

   $ cd vars
   $ cp main-upf.yml main.yml

Then edit ``hosts.ini`` and ``vars/main.yml`` to match your local
target servers, and deploy the base system (as in previous sections):

.. code-block::

   $ make k8s-install
   $ make roc-install
   $ make roc-load
   $ make 5gc-core-install
   $ make gnbsim-install

You can also optionally install the monitoring subsystem. Note that
because ``main.yml`` sets ``core.standalone: "false"``, any models
loaded into ROC are automatically applied to SD-Core.
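
The relevant setting is the ``standalone`` flag in the ``core``
section of ``vars/main.yml``, shown here as a fragment with the rest
of the section elided:

.. code-block::

   core:
      standalone: "false"   # models loaded into ROC drive SD-Core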

At this point you are ready to bring up additional UPFs and bind them
to specific slices and devices. This involves first editing the
``upf`` block in the ``core`` section of ``vars/main.yml``:

.. code-block::

   upf:
      ip_prefix: "192.168.252.0/24"
      iface: "access"
      helm:
         chart_ref: aether/bess-upf
         values_file: "deps/5gc/roles/upf/templates/upf-5g-values.yaml"
      additional_upfs:
         "1":
            ip:
               access: "192.168.252.6/24"
               core: "192.168.250.6/24"
            ue_ip_pool: "172.248.0.0/16"
         # "2":
         #    ip:
         #       access: "192.168.252.7/24"
         #       core: "192.168.250.7/24"
         #    ue_ip_pool: "172.247.0.0/16"
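
Because misconfigured subnets are a common source of trouble, it can
be worth sanity-checking the address plan before deploying. The
following standalone Python sketch (illustrative, not part of OnRamp)
uses the standard ``ipaddress`` module to confirm that each UPF's
``access`` address falls inside ``ip_prefix`` and that the UE address
pools do not overlap:

.. code-block:: python

   import ipaddress

   # Values copied from the example ``upf`` block above
   access_prefix = ipaddress.ip_network("192.168.252.0/24")
   upfs = {
       "1": {"access": "192.168.252.6/24", "ue_ip_pool": "172.248.0.0/16"},
       "2": {"access": "192.168.252.7/24", "ue_ip_pool": "172.247.0.0/16"},
   }

   for name, cfg in upfs.items():
       # Each UPF's access address must land in the shared access prefix
       addr = ipaddress.ip_interface(cfg["access"]).ip
       assert addr in access_prefix, f"upf-{name} access address out of range"

   # UE pools must be disjoint so each UE maps to exactly one UPF
   pools = [ipaddress.ip_network(cfg["ue_ip_pool"]) for cfg in upfs.values()]
   assert not pools[0].overlaps(pools[1]), "UE pools overlap"
   print("address plan OK")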

As shown above, one additional UPF is enabled (beyond the one that
already came up as part of SD-Core, denoted ``upf-0``), with the spec
for yet another UPF commented out. In this example configuration,
each UPF is assigned a subnet on the ``access`` and ``core`` bridges,
along with an IP address pool for UEs to be served by that UPF. Once
done with the edits, launch the new UPF(s) by typing:

.. code-block::

   $ make 5gc-upf-install

At this point the new UPF(s) will be running (you can verify this
using ``kubectl``), but no traffic will be directed to them until UEs
are assigned to their IP address pool. Doing so requires loading the
appropriate bindings into ROC, which you can do by editing the
``roc_models`` line in the ``amp`` section of ``vars/main.yml``.
Comment out the original models file already loaded into ROC, and
uncomment the new patch that is to be applied:

.. code-block::

   amp:
      # roc_models: "deps/amp/roles/roc-load/templates/roc-5g-models.json"
      roc_models: "deps/amp/roles/roc-load/templates/roc-5g-models-upf2.json"

Then run the following to load the patch:

.. code-block::

   $ make roc-load

At this point you can bring up the Aether GUI and see that a second
slice and a second device group have been mapped onto the second UPF.

Now you are ready to run traffic through both UPFs. Because the
configuration files identified in the ``servers`` block of the
``gnbsim`` section of ``vars/main.yml`` align with the IMSIs bound to
each Device Group (each of which is bound to a slice, which is in
turn bound to a UPF), the emulator sends data through both UPFs. To
run the emulation, type:

.. code-block::

   $ make gnbsim-simulator-run