blueprints

Change-Id: I4b72bd612471e23f7c19505ccaeb7e8494e7b425
diff --git a/dict.txt b/dict.txt
index dbbf436..9b9c78d 100644
--- a/dict.txt
+++ b/dict.txt
@@ -152,6 +152,7 @@
 gRPC
 gandalfg
 gerrit
+glommed
 gnbsim
 gui
 gw
@@ -277,6 +278,7 @@
 stateful
 subcomponent
 subdirectories
+submodule
 submodules
 subnet
 subnets
@@ -298,6 +300,7 @@
 udp
 udr
 ul
+uncomment
 unsuffixed
 untagged
 upf
diff --git a/index.rst b/index.rst
index cef3124..a420481 100644
--- a/index.rst
+++ b/index.rst
@@ -22,6 +22,7 @@
    onramp/gnbsim
    onramp/gnb
    onramp/roc
+   onramp/blueprints
 
 .. toctree::
    :maxdepth: 3
diff --git a/onramp/blueprints.rst b/onramp/blueprints.rst
new file mode 100644
index 0000000..02a7c36
--- /dev/null
+++ b/onramp/blueprints.rst
@@ -0,0 +1,164 @@
+Other Blueprints
+-----------------------
+
+The previous sections describe how to deploy four Aether blueprints,
+corresponding to four variants of ``vars/main.yml``. This section
+documents additional blueprints, each defined by a combination of
+Ansible components:
+
+* A ``vars/main-blueprint.yml`` file, checked into the
+  ``aether-onramp`` repo, is the "root" of the blueprint
+  specification.
+
+* A ``hosts.ini`` file, documented by example, specifies the target
+  servers required by the blueprint.
+
+* A set of Make targets, defined in a submodule and imported into
+  OnRamp's global Makefile, provides a means to install (``make
+  blueprint-install``) and uninstall (``make blueprint-uninstall``)
+  the blueprint.
+
+* (Optional) A new ``aether-blueprint`` repo defines the Ansible Roles
+  and Playbooks required to deploy a new component.
+
+* (Optional) New Roles, Playbooks, and Templates, checked into existing
+  repos/submodules, customize existing components so they integrate with
+  the new blueprint. To support blueprint independence, these elements
+  are intentionally kept "narrow", rather than glommed onto an
+  existing element.
+
+* A Jenkins job, added to the set of OnRamp integration tests, runs
+  daily to verify that the blueprint does not regress.
+
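+For instance, a blueprint's Make targets might be collected in a file
+that the global Makefile includes. The following is a hypothetical
+sketch: the file path and playbook names are assumptions, with only
+the ``blueprint-install`` / ``blueprint-uninstall`` naming convention
+coming from the list above:
+
+.. code-block::
+
+   # deps/blueprint/Makefile.mk (hypothetical path), imported
+   # into OnRamp's global Makefile via an include directive.
+   blueprint-install:
+	ansible-playbook -i hosts.ini deps/blueprint/install.yml
+
+   blueprint-uninstall:
+	ansible-playbook -i hosts.ini deps/blueprint/uninstall.yml
+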
+By standardizing the process for adding new blueprints, OnRamp aims
+to encourage the community to contribute (and maintain) new Aether
+configurations and deployment scenarios. The rest of this section
+documents the community-contributed blueprints to date.
+
+Multiple UPFs
+~~~~~~~~~~~~~~~~~~~~~~
+
+The base version of SD-Core includes a single UPF, running in the same
+Kubernetes namespace as the Core's control plane. This blueprint adds
+the ability to bring up multiple UPFs (each in a different namespace),
+and uses ROC to establish the *UPF-to-Slice-to-Device* bindings
+required to activate end-to-end traffic through each UPF. The
+resulting deployment is then verified using gNBsim.
+
+The Multi-UPF blueprint includes the following:
+
+* Global vars file ``vars/main-upf.yml`` gives the overall
+  blueprint specification.
+
+* Inventory file ``hosts.ini`` is identical to that used in the
+  Emulated RAN section. Minimally, SD-Core runs on one server and
+  gNBsim runs on a second server.
+
+* New make targets, ``5gc-upf-install`` and ``5gc-upf-uninstall``, to
+  be executed after the standard SD-Core installation. The blueprint
+  also reuses the ``roc-load`` target to activate new slices in ROC.
+
+* New Ansible role (``upf``) added to the ``5gc`` submodule, including
+  a new UPF-specific template (``upf-5g-values.yaml``).
+
+* New models file (``roc-5g-models-upf2.json``) added to the
+  ``roc-load`` role in the ``amp`` submodule. This models file is
+  applied as a patch *on top of* the base set of ROC models. (Since
+  this blueprint is demonstrated using gNBsim, the assumed base models
+  are given by ``roc-5g-models.json``.)
+
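+For reference, a two-server ``hosts.ini`` for this blueprint might
+look like the following (hostnames, IP addresses, and credentials are
+hypothetical placeholders; substitute your own):
+
+.. code-block::
+
+   [all]
+   node1 ansible_host=10.76.28.113 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether
+   node2 ansible_host=10.76.28.115 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether
+
+   [master_nodes]
+   node1
+
+   [gnbsim_nodes]
+   node2
+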
+To use Multi-UPF, first copy the vars file to ``main.yml``:
+
+.. code-block::
+
+   $ cd vars
+   $ cp main-upf.yml main.yml
+
+Then edit ``hosts.ini`` and ``vars/main.yml`` to match your local
+target servers, and deploy the base system (as in previous sections):
+
+.. code-block::
+
+   $ make k8s-install
+   $ make 5gc-core-install
+   $ make roc-install
+   $ make roc-load
+   $ make gnbsim-install
+
+You can also optionally install the monitoring subsystem. Note that
+we're installing ROC after the core (and ``main.yml`` has
+``core.standalone: "true"``), but this only affects how SD-Core is
+configured when it is first deployed. Once both are running, any
+additional updates loaded into ROC are automatically applied to
+SD-Core.
+
+At this point you are ready to bring up additional UPFs and bind them
+to specific slices and devices. This involves first editing the
+``upf`` block in the ``core`` section of ``vars/main.yml``:
+
+.. code-block::
+
+   upf:
+      ip_prefix: "192.168.252.0/24"
+      iface: "access"
+      helm:
+         chart_ref: aether/bess-upf
+      values_file: "deps/5gc/roles/upf/templates/upf-5g-values.yaml"
+      additional_upfs:
+         "1":
+            ip:
+               access: "192.168.252.6/24"
+               core:   "192.168.250.6/24"
+            ue_ip_pool: "172.248.0.0/16"
+         # "2":
+         #   ip:
+         #      access: "192.168.252.7/24"
+         #      core:   "192.168.250.7/24"
+         #   ue_ip_pool: "172.247.0.0/16"
+
+As shown above, one additional UPF is enabled (beyond the one that
+already came up as part of SD-Core, denoted ``upf-0``), with the spec
+for yet another UPF commented out.  In this example configuration,
+each UPF is assigned a subnet on the ``access`` and ``core`` bridges,
+along with an IP address pool for UEs to be served by that UPF.  Once
+done with the edits, launch the new UPF(s) by typing:
+
+.. code-block::
+
+   $ make 5gc-upf-install
+
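+As a quick sanity check once the install completes, you can look for
+the new UPF pod(s) with ``kubectl``. The following sketch assumes each
+additional UPF lands in its own namespace, per the
+one-namespace-per-UPF design described above; the exact namespace and
+pod names may differ in your deployment:
+
+.. code-block::
+
+   $ kubectl get pods --all-namespaces | grep upf
+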
+At this point the new UPF(s) will be running (you can verify this
+using ``kubectl``), but no traffic will be directed to them until UEs
+are assigned to their IP address pool. Doing so requires loading the
+appropriate bindings into ROC, which you can do by editing the
+``roc_models`` line in the ``amp`` section of ``vars/main.yml``. Comment
+out the original models file already loaded into ROC, and uncomment
+the new patch that is to be applied:
+
+.. code-block::
+
+   amp:
+      # roc_models: "deps/amp/roles/roc-load/templates/roc-5g-models.json"
+      roc_models: "deps/amp/roles/roc-load/templates/roc-5g-models-upf2.json"
+
+Then run the following to load the patch:
+
+.. code-block::
+
+   $ make roc-load
+
+At this point you can bring up the Aether GUI and see that a second
+slice and a second device group have been mapped onto the second UPF.
+
+Now you are ready to run traffic through both UPFs. Because the
+configuration files identified in the ``servers`` block of the
+``gnbsim`` section of ``vars/main.yml`` align with the IMSIs bound to
+each Device Group (each Device Group is bound to a slice, and each
+slice in turn to a UPF), the emulator sends data through both UPFs.
+To run the emulation, type:
+
+.. code-block::
+
+   $ make gnbsim-simulator-run
+
+
diff --git a/onramp/gnb.rst b/onramp/gnb.rst
index db914ae..e468e46 100644
--- a/onramp/gnb.rst
+++ b/onramp/gnb.rst
@@ -28,8 +28,8 @@
   bookmark for that channel includes summaries of different
   combinations people have tried.
 
-The following assumes you start with a variant of ``vars/main.yml``
-customized for running physical 5G radios, which is easy to do:
+This blueprint assumes you start with a variant of ``vars/main.yml``
+customized for running physical 5G radios. This is easy to do:
 
 .. code-block::
 
@@ -396,8 +396,8 @@
 
 Aether OnRamp is geared towards 5G, but it does support physical eNBs,
 including 4G-based versions of both SD-Core and AMP. It does not
-support an emulated 4G RAN. The 4G scenario uses all the same Ansible
-machinery outlined in earlier sections, but uses a variant of
+support an emulated 4G RAN. The 4G blueprint uses all the same Ansible
+machinery outlined in earlier sections, but starts with a variant of
 ``vars/main.yml`` customized for running physical 4G radios:
 
 .. code-block::
diff --git a/onramp/gnbsim.rst b/onramp/gnbsim.rst
index 3f237d7..f1c1514 100644
--- a/onramp/gnbsim.rst
+++ b/onramp/gnbsim.rst
@@ -6,9 +6,9 @@
 configure gNBsim, so as to both customize and scale the workload it
 generates. We assume gNBsim runs in one or more servers, independent
 of the server(s) that host SD-Core. These servers are specified in the
-``hosts.ini`` file, as described in the section on Scaling Aether. We
-also assume you start with a variant of ``vars/main.yml`` customized
-for running gNBsim, which is easy to do:
+``hosts.ini`` file, as described in the section on Scaling Aether. This
+blueprint assumes you start with a variant of ``vars/main.yml``
+customized for running gNBsim. This is easy to do:
 
 .. code-block::
 
diff --git a/onramp/overview.rst b/onramp/overview.rst
index ef8e3d0..2188824 100644
--- a/onramp/overview.rst
+++ b/onramp/overview.rst
@@ -14,13 +14,14 @@
 Aether OnRamp begins with a *Quick Start* deployment similar to
 `Aether-in-a-Box (AiaB)
 <https://docs.aetherproject.org/master/developer/aiab.html>`__, but
-then goes on to prescribe a sequence of steps a user can follow to
-deploy increasingly complex configurations. These include both
-emulated and physical RANs, culminating in an operational Aether
-cluster capable of running 24/7 and supporting live 5G workloads.
-(OnRamp also supports a 4G version of Aether connected to one or more
-physical eNBs, but we postpone a discussion of that capability until a
-later section. Everything else in this guide assumes 5G.)
+then goes on to prescribe a sequence of steps users can follow to
+deploy increasingly complex configurations. OnRamp refers to each such
+configuration as a *blueprint*, and the set supports both emulated and
+physical RANs, along with the runtime machinery needed to operate an
+Aether cluster supporting live 5G workloads.  (OnRamp also defines a
+4G blueprint that can be used to connect one or more physical eNBs,
+but we postpone a discussion of that capability until a later
+section. Everything else in this guide assumes 5G.)
 
 .. include:: directory.rst
 
diff --git a/onramp/start.rst b/onramp/start.rst
index c909470..3642df3 100644
--- a/onramp/start.rst
+++ b/onramp/start.rst
@@ -122,11 +122,12 @@
    files you find in the corresponding ``deps`` directory. This
    top-level variable file overrides the per-component var files, so
    you will not need to modify the latter. Note that the ``vars``
-   directory contains several variants of ``main.yml``, each tailored
-   for a different deployment scenario. The default ``main.yml``
-   (which is the same as ``main-quickstart.yml``) supports the Quick
-   Start deployment described in this section; we'll substitute the
-   other variants in later sections.
+   directory contains several variants of ``main.yml``, where we think
+   of each as specifying a *blueprint* for a different configuration
+   of Aether. The default ``main.yml`` (which is the same as
+   ``main-quickstart.yml``) gives the blueprint for the Quick Start
+   deployment described in this section; we'll substitute the other
+   blueprints in later sections.
 
 4. File ``hosts.ini`` (host inventory) is Ansible's way of specifying
    the set of servers (physical or virtual) that Ansible targets with