add OnRamp guide

Change-Id: If9741da99140d37cab090e654dbe4bbaff0788cb
diff --git a/conf.py b/conf.py
index b4e403c..691d8fa 100644
--- a/conf.py
+++ b/conf.py
@@ -115,6 +115,13 @@
         'venv-docs',
 ]
 
+# Enable numbered figures
+numfig = True
+numfig_format = {
+    'figure': 'Figure %s.',
+    'table':  'Table %s.'
+    }
+
 # The name of the Pygments (syntax highlighting) style to use.
 pygments_style = None
 
@@ -259,6 +266,7 @@
     r'https://www.fs.com/.*',
     r'https://velero.io/.*',
     r'https://cloud.google.com/.*',
+    r'https://ark.intel.com/.*',
 ]
 
 linkcheck_timeout = 3
diff --git a/dict.txt b/dict.txt
index 6303189..32f9673 100644
--- a/dict.txt
+++ b/dict.txt
@@ -10,6 +10,7 @@
 Cloudlab
 Cratchit
 DNS
+Davie
 Deutsche
 Dex
 Dockerfile
@@ -27,6 +28,7 @@
 IPv
 IaC
 IaaC
+IoT
 Istio
 Jenkins
 Jira
@@ -36,8 +38,10 @@
 Kubernetes
 Kubespray
 LTE
+Makefiles
 ManagementServer
 Mbps
+Microservices
 Mininet
 Multipass
 Netbox
@@ -48,10 +52,13 @@
 PoC
 PoE
 QoS
+RIC
 Radisys
 Raspbian
+RiaB
 Sercomm
 Speedtest
+Sunay
 Supermicro
 SupportedTAs
 TFTP
@@ -61,6 +68,7 @@
 Terraform
 TestVectors
 Tofino
+Toolset
 Tx
 UE
 VOLTHA
@@ -129,11 +137,13 @@
 etcd
 ethernet
 externalIP
+femto
 fredf
 func
 gNB
 gNBSim
 gNBs
+gNBsim
 gNMI
 gNodeB
 gRPC
@@ -146,6 +156,7 @@
 hostname
 hss
 hssdb
+https
 iOS
 iPXE
 iPhones
@@ -153,6 +164,7 @@
 imei
 incrementing
 inotify
+instantiation
 ip
 isn't
 jitter
@@ -178,10 +190,12 @@
 menlo
 mgmt
 microservice
+microservices
 misconfiguration
 mixedGroup
 mme
 mongodb
+nRT
 nVME
 nameserver
 nameservers
@@ -204,6 +218,7 @@
 onosproject
 opencord
 orchestrator
+parameterizing
 patchset
 patchsets
 pcap
@@ -236,6 +251,7 @@
 repo
 repos
 repository
+roadmap
 roc
 routable
 rscript
@@ -257,6 +273,8 @@
 starbucks
 stateful
 subcomponent
+subdirectories
+submodules
 subnet
 subnets
 sudo
@@ -268,6 +286,8 @@
 test
 testOpt
 tfvars
+toolchain
+toolset
 topo
 topologies
 tost
@@ -289,6 +309,7 @@
 webui
 workspace
 workspaces
+xApps
 yaml
 ztp
 µONOS
diff --git a/index.rst b/index.rst
index af7b227..64a2f36 100644
--- a/index.rst
+++ b/index.rst
@@ -11,18 +11,16 @@
 
 Here are some useful places to start with Aether:
 
+* Deploy and operate Aether on your own cluster with :doc:`Aether OnRamp </onramp/overview>`.
+
 * Setup an Aether software development environment with :doc:`Aether-in-a-Box </developer/aiab>`.
 
-* For a PoC deployment, bring up an :doc:`Aether-in-a-Box on 4G Real Radios <developer/aiabhw>`.
-
-* For a PoC deployment, bring up an :doc:`Aether-in-a-Box on 5G Real Radios <developer/aiabhw5g>`.
-
 * Learn about how to :doc:`configure Aether using the ROC </operations/gui>`.
 
 * Learn the requirements of hosting an :doc:`Aether Connected Edge
   </edge_deployment/overview>`.
 
-* Read the most recent :doc:`Release Notes </release/2.0>`.
+* Read the most recent :doc:`Release Notes </release/2.1>`.
 
 Aether Architecture and Components
 ----------------------------------
@@ -45,8 +43,8 @@
   * `SD-RAN Website <https://opennetworking.org/open-ran/>`_
   * :doc:`SD-RAN Documentation <sdran:index>`
 
-More information about mobile networks and 5G can be found in the :doc:`5G
-Mobile Networks: A Systems Approach <sysapproach5g:intro>` book.
+More information about 5G and Aether's architecture can be found in
+the :doc:`Private 5G: A Systems Approach <sysapproach5g:index>` book.
 
 Community
 ---------
@@ -57,7 +55,24 @@
 
 .. toctree::
    :maxdepth: 3
-   :caption: Aether Quick Start
+   :caption: Aether OnRamp
+   :hidden:
+   :glob:
+
+   onramp/overview
+   onramp/directory
+   onramp/start
+   onramp/inspect
+   onramp/scale
+   onramp/network
+   onramp/gnbsim
+   onramp/gnb
+   onramp/roc
+   onramp/enb
+
+.. toctree::
+   :maxdepth: 3
+   :caption: Aether-in-a-Box
    :hidden:
    :glob:
 
diff --git a/onramp/directory.rst b/onramp/directory.rst
new file mode 100644
index 0000000..682012f
--- /dev/null
+++ b/onramp/directory.rst
@@ -0,0 +1,153 @@
+Repositories
+---------------
+
+Aether is assembled from multiple components spanning several Git
+repositories. These include repos for different subsystems (e.g.,
+AMP, SD-Core, SD-RAN), but also for different stages of the development
+pipeline (e.g., source code, deployment artifacts).
+
+This section identifies all the Aether-related repositories, with the
+OnRamp repos listed at the end serving as the starting point for
+anyone who wants to get up to speed on the rest of the system.
+
+Source Repos
+~~~~~~~~~~~~~~~~
+
+Source code for Aether and all of its subsystems can be found in
+the following repositories:
+
+* Gerrit repository for the CORD Project
+  (https://gerrit.opencord.org): Microservices for AMP, plus source
+  for the jobs that implement the CI/CD pipeline.
+
+* GitHub repository for the OMEC Project
+  (https://github.com/omec-project): Microservices for SD-Core, plus
+  the emulator (gNBsim) that subjects SD-Core to RAN workloads.
+
+* GitHub repository for the ONOS Project
+  (https://github.com/onosproject): Microservices for SD-Fabric and
+  SD-RAN, plus the YANG models used to generate the Aether API.
+
+* GitHub repository for the Stratum Project
+  (https://github.com/stratum): On-switch components of SD-Fabric.
+
+For Gerrit, you can either browse the repos (select the ``master``
+branch) or clone the corresponding *<repo-name>* by typing:
+
+.. code-block::
+
+  $ git clone ssh://gerrit.opencord.org:29418/<repo-name>
+
+If port 29418 is blocked by your network administrator, you can try cloning
+using https instead of ssh:
+
+.. code-block::
+
+  $ git clone https://gerrit.opencord.org/<repo-name>
+
+Anyone wanting to participate in Aether's ongoing development will
+want to learn how to contribute new features to these source repos.
+
+Artifact Repos
+~~~~~~~~~~~~~~~~
+
+Aether includes a *Continuous Integration (CI)* pipeline that builds
+deployment artifacts (e.g., Helm Charts, Docker Images) from the
+source code. These artifacts are stored in the following repositories:
+
+Helm Charts
+
+ | https://charts.aetherproject.org
+ | https://charts.onosproject.org
+ | https://charts.opencord.org
+ | https://charts.atomix.io
+ | https://sdrancharts.onosproject.org
+ | https://charts.rancher.io/
+
+Docker Images
+
+ | https://registry.aetherproject.org
+
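+As one example of using these artifact repos, the Helm charts can be
+added to a local Helm installation with standard Helm commands, where
+the repo name ``aether`` is our choice (it matches the ``chart_ref``
+values used later in this guide):
+
+.. code-block::
+
+   $ helm repo add aether https://charts.aetherproject.org
+   $ helm repo update
+   $ helm search repo aether
+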
+Note that as of version 1.20.8, Kubernetes uses the `Containerd
+<https://containerd.io/>`__ runtime system instead of Docker. This is
+transparent to anyone using Aether, which manages containers
+indirectly through Kubernetes (e.g., using ``kubectl``), but does
+impact anyone that directly depends on the Docker toolchain. Also note
+that while Aether documentation often refers its use of "Docker
+containers," it is now more accurate to say that Aether uses
+`OCI-Compliant containers <https://opencontainers.org/>`__.
+
+The Aether CI pipeline keeps these artifact repos in sync with the
+source repos listed earlier. Among those source repos are the source
+files for all the Helm Charts:
+
+ | ROC: https://gerrit.opencord.org/plugins/gitiles/roc-helm-charts
+ | SD-RAN: https://github.com/onosproject/sdran-helm-charts
+ | SD-Core: https://gerrit.opencord.org/plugins/gitiles/sdcore-helm-charts
+ | SD-Fabric (Servers): https://github.com/onosproject/onos-helm-charts
+ | SD-Fabric (Switches): https://github.com/stratum/stratum-helm-charts
+
+The QA tests run against code checked into these source repos can be
+found here:
+
+ | https://gerrit.opencord.org/plugins/gitiles/aether-system-tests
+
+For more information about Aether's CI pipeline, including its QA and
+version control strategy, we recommend the Lifecycle Management
+chapter of our companion Edge Cloud Operations book.
+
+.. _reading_cicd:
+.. admonition:: Further Reading
+
+    L. Peterson, A. Bavier, S. Baker, Z. Williams, and B. Davie. `Edge
+    Cloud Operations: A Systems Approach
+    <https://ops.systemsapproach.org/lifecycle.html>`__. June 2022.
+
+OnRamp Repos
+~~~~~~~~~~~~~~~~~~~
+
+The deployment artifacts listed above are, of course, meant to be
+deployed as an operational cloud service. This process, sometimes
+referred to as GitOps, manages the *Continuous Deployment (CD)* half
+of the CI/CD pipeline. OnRamp's approach to GitOps uses a different
+mechanism than the one the ONF ops team originally used to manage its
+multi-site deployment of Aether.  The latter approach has a large
+startup cost, which has proven difficult for others to replicate. (It
+also locks you into a deployment toolchain that may or may not be
+appropriate for your situation.)
+
+In its place, OnRamp adopts minimal Ansible tooling. This makes it
+easier to take ownership of the configuration parameters that define
+your specific deployment scenario.  The rest of this guide walks you
+through a step-by-step process of deploying and operating Aether on
+your own hardware.  For now, we simply point you at the collection of
+OnRamp repos:
+
+ | Deploy Aether: https://github.com/opennetworkinglab/aether-onramp
+ | Deploy 5G Core: https://github.com/opennetworkinglab/aether-5gc
+ | Deploy 4G Core: https://github.com/opennetworkinglab/aether-4gc
+ | Deploy Management Plane: https://github.com/opennetworkinglab/aether-amp
+ | Deploy 5G RAN Simulator: https://github.com/opennetworkinglab/aether-gnbsim
+ | Deploy Kubernetes: https://github.com/opennetworkinglab/aether-k8s
+
+The first repo defines how to integrate all of the Aether artifacts
+into an operational system; it includes the other repos as
+submodules. Each submodule is self-contained, should you be
+interested in deploying just that subsystem, but this guide
+approaches the deployment challenge from an integrated, end-to-end
+perspective.
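+
+For example, a minimal sketch of pulling down the integrated repo,
+together with all of its submodules, uses standard Git options:
+
+.. code-block::
+
+   $ git clone --recursive https://github.com/opennetworkinglab/aether-onramp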
+
+Because OnRamp uses Ansible as its primary deployment tool, a general
+understanding of Ansible is helpful (see the suggested reference).
+However, this guide walks you, step-by-step, through the process of
+deploying and operating Aether, so previous experience with Ansible is
+not a requirement. Note that Ansible has evolved to be both a
+"Community Toolset" anyone can use to manage a software deployment,
+and an "Automation Platform" offered as a service by RedHat. OnRamp
+uses the toolset, but not the platform/service.
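+
+As a quick sanity check that the toolset is installed and can reach
+the machines named in your inventory, you can run a standard Ansible
+ad hoc command (this assumes a populated ``hosts.ini``, introduced
+later in this guide):
+
+.. code-block::
+
+   $ ansible -i hosts.ini all -m ping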
+
+.. _reading_ansible:
+.. admonition:: Further Reading
+
+   `Overview: How Ansible Works <https://www.ansible.com/overview/how-ansible-works>`__.
+
diff --git a/onramp/enb.rst b/onramp/enb.rst
new file mode 100644
index 0000000..105b7d8
--- /dev/null
+++ b/onramp/enb.rst
@@ -0,0 +1,51 @@
+Physical RAN (4G)
+----------------------
+
+Aether OnRamp is geared towards 5G, but it also supports physical eNBs,
+along with 4G-based versions of both SD-Core and AMP. It does not,
+however, support an emulated 4G RAN. The 4G scenario uses all the same
+Ansible machinery outlined in earlier sections, but uses a variant of
+``vars/main.yml`` customized for running physical 4G radios:
+
+.. code-block::
+
+   $ cd vars
+   $ cp main-eNB.yml main.yml
+
+Assuming that starting point, the following outlines the key
+differences from the 5G case:
+
+1. There is a 4G-specific repo, which you can find in ``deps/4gc``.
+
+2. The ``core`` section of ``vars/main.yml`` specifies a 4G-specific values file:
+
+   ``values_file: "deps/4gc/roles/core/templates/radio-4g-values.yaml"``
+
+3. The ``amp`` section of ``vars/main.yml`` specifies that 4G-specific
+   models and dashboards get loaded into the ROC and Monitoring
+   services, respectively:
+
+   ``roc_models: "deps/amp/roles/roc-load/templates/roc-4g-models.json"``
+
+   ``monitor_dashboard:  "deps/amp/roles/monitor-load/templates/4g-monitor"``
+
+4. You need to edit two files with details for the 4G SIM cards you
+   use. One is the 4G-specific values file used to configure SD-Core:
+
+   ``deps/4gc/roles/core/templates/radio-4g-values.yaml``
+
+   The other is the 4G-specific Models file used to bootstrap ROC:
+
+   ``deps/amp/roles/roc-load/templates/radio-4g-models.json``
+
+5. There are 4G-specific Make targets for SD-Core, including ``make
+   aether-4gc-install`` and ``make aether-4gc-uninstall``. Note that
+   the generic Make targets for AMP (e.g., ``make
+   aether-amp-install`` and ``make aether-amp-uninstall``) work
+   unchanged. (A sketch of the full bring-up sequence follows this
+   list.)
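+
+Assuming the edits described above, a plausible 4G bring-up sequence
+looks like the following (a sketch only; adjust to your deployment):
+
+.. code-block::
+
+   $ make aether-k8s-install
+   $ make aether-4gc-install
+   $ make aether-amp-install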
+
+The Quick Start deployment is for 5G only, but revisiting the Scaling,
+Networking, and Physical Radio sections—substituting the above for
+their 5G counterparts—serves as a guide for bringing up a 4G version
+of Aether.
+
diff --git a/onramp/figures/Sercomm.png b/onramp/figures/Sercomm.png
new file mode 100644
index 0000000..d72bcf0
--- /dev/null
+++ b/onramp/figures/Sercomm.png
Binary files differ
diff --git a/onramp/figures/Slide24.png b/onramp/figures/Slide24.png
new file mode 100644
index 0000000..8f5d55a
--- /dev/null
+++ b/onramp/figures/Slide24.png
Binary files differ
diff --git a/onramp/figures/Slide25.png b/onramp/figures/Slide25.png
new file mode 100644
index 0000000..37c7778
--- /dev/null
+++ b/onramp/figures/Slide25.png
Binary files differ
diff --git a/onramp/gnb.rst b/onramp/gnb.rst
new file mode 100644
index 0000000..dad690e
--- /dev/null
+++ b/onramp/gnb.rst
@@ -0,0 +1,302 @@
+Physical RAN (5G)
+-----------------------
+
+We are now ready to replace the emulated RAN with physical gNBs and
+real UEs. You will need to edit ``hosts.ini`` to reflect the Aether
+cluster you want to support, where just a single server is sufficient
+and there is no reason to include nodes in the ``[gnbsim_nodes]`` set.
+We also assume you start with a variant of ``vars/main.yml``
+customized for running physical 5G radios, which is easy to do:
+
+.. code-block::
+
+   $ cd vars
+   $ cp main-gNB.yml main.yml
+
+The following focuses on a single gNB, which we assume is connected to
+the same L2 network as the Aether cluster. In our running example,
+this implies both are on subnet ``10.76.28.0/24``.
+
+Modify Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Modify the ``core`` section of ``vars/main.yml`` to match the
+following, substituting your local details for ``ens18`` and
+``10.76.28.113``. Of particular note, setting ``ran_subnet`` to the
+empty string indicates that the gNB is connected to the same physical
+L2 network as your Aether cluster, and the new ``values_file`` is
+tailored for a physical RAN rather than the emulated RAN we've been
+using.
+
+.. code-block::
+
+   core:
+       standalone: "true"
+       data_iface: ens18
+       values_file: "deps/5gc/roles/core/templates/radio-5g-values.yaml"
+       ran_subnet: ""
+       helm:
+           chart_ref: aether/sd-core
+           chart_version: 0.12.6
+       upf:
+           ip_prefix: "192.168.252.0/24"
+       amf:
+           ip: "10.76.28.113"
+
+
+Prepare UEs
+~~~~~~~~~~~~
+
+5G-connected devices must have a SIM card, which you are responsible
+for creating and inserting.  You will need a SIM card writer (these
+are readily available for purchase on Amazon) and a PLMN identifier
+constructed from a valid MCC/MNC pair. For our purposes, we use two
+different PLMN ids: ``315010`` constructed from MCC=315 (US) and
+MNC=010 (CBRS), and ``00101`` constructed from MCC=001 (TEST) and
+MNC=01 (TEST). You should use whatever values are appropriate for your
+local environment.  You then assign an IMSI and two secret keys to
+each SIM card. Throughout this section, we use the following values as
+exemplars:
+
+* IMSI: each one is unique, matching pattern ``315010*********`` (up to 15 digits)
+* OPc: ``69d5c2eb2e2e624750541d3bbc692ba5``
+* Key: ``000102030405060708090a0b0c0d0e0f``
+
+Insert the SIM cards into whatever devices you plan to connect to
+Aether.  Be aware that not all phones support the CBRS frequency bands
+that Aether uses. Aether is known to work with recent iPhones (11 and
+greater), Google Pixel phones (4 and greater) and OnePlus phones.  CBRS
+may also be supported by recent phones from Samsung, LG Electronics and
+Motorola Mobility, but these have not been tested. Note that on each phone
+you will need to configure ``internet`` as the *Access Point Name (APN)*.
+Another good option is to use a 5G dongle connected to a Raspberry Pi
+as a demonstration UE. This makes it easier to run diagnostic tests
+from the UE. For example, we have used `APAL's 5G dongle
+<https://www.apaltec.com/dongle/>`__ with Aether.
+
+Finally, modify the ``subscribers`` block of the
+``omec-sub-provision`` section in file
+``deps/5gc/roles/core/templates/radio-5g-values.yaml`` to record the IMSI,
+OPc, and Key values configured onto your SIM cards. The block also
+defines a sequence number that is intended to thwart replay
+attacks. For example, the following code block adds IMSIs between
+``315010999912301`` and ``315010999912310``:
+
+.. code-block::
+
+   subscribers:
+   - ueId-start: "315010999912301"
+     ueId-end: "315010999912310"
+     plmnId: "315010"
+     opc: "69d5c2eb2e2e624750541d3bbc692ba5"
+     key: "000102030405060708090a0b0c0d0e0f"
+     sequenceNumber: 135
+
+Further down in the same ``omec-sub-provision`` section you will find
+two other blocks that also need to be edited. The first,
+``device-groups``, assigns IMSIs to *Device Groups*. You will need to
+reenter the individual IMSIs from the ``subscribers`` block that will
+be part of the device-group:
+
+.. code-block::
+
+   device-groups:
+   - name: "5g-user-group1"
+     imsis:
+         - "315010999912301"
+         - "315010999912302"
+         - "315010999912303"
+
+The second block, ``network-slices``, sets various parameters
+associated with the *Slices* that connect device groups to
+applications.  Here, you will need to reenter the PLMN information,
+with the other slice parameters remaining unchanged (for now):
+
+.. code-block::
+
+   plmn:
+       mcc: "315"
+       mnc: "010"
+
+Aether supports multiple *Device Groups* and *Slices*, but the data
+entered here is purposely minimal; it's just enough to bring up and
+debug the installation. Over the lifetime of a running system,
+information about *Device Groups* and *Slices* (and the other
+abstractions they build upon) should be entered via the ROC, as
+described in the section on Runtime Control. When you get to that point,
+Ansible variable ``standalone`` in ``vars/main.yml`` (which
+corresponds to the override value assigned to
+``provision-network-slice`` in ``radio-5g-values.yaml``) should be set
+to ``false``. Doing so causes the ``device-groups`` and
+``network-slices`` blocks of ``radio-5g-values.yaml`` to be
+ignored. The ``subscribers`` block is always required to configure
+SD-Core.
+
+
+Bring Up Aether
+~~~~~~~~~~~~~~~~~~~~~
+
+You are now ready to bring Aether on-line. We assume a fresh install
+by typing the following in the Ansible container:
+
+.. code-block::
+
+   $ make aether-k8s-install
+   $ make aether-5gc-install
+
+You can verify the installation by running ``kubectl`` just as you did
+in earlier stages. Note that we postpone bringing up the AMP until
+later so as to have fewer moving parts to debug.
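+
+For example, listing the SD-Core pods should show everything in the
+``Running`` state (a sketch; pod names and counts will differ):
+
+.. code-block::
+
+   $ kubectl get pods -n omec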
+
+
+gNodeB Setup
+~~~~~~~~~~~~~~~~~~~~
+
+Once the SD-Core is up and running, we are ready to bring up the
+physical gNB. The details of how to do this depend on the specific
+device you are using, but we identify the main issues you need to
+address using SERCOMM's 5G femto cell as an example. That particular
+device uses the n78 band and is on the ONF MarketPlace, where you can
+also find a User's Guide.
+
+.. _reading_sercomm:
+.. admonition:: Further Reading
+
+   `SERCOMM – SCE5164-B78 INDOOR SMALL CELL
+   <https://opennetworking.org/products/sercomm-sce5164-b78/>`__.
+
+For the purposes of the following description, we assume the gNB is
+assigned IP address ``10.76.28.187``, which per our running example,
+is on the same L2 network as our Aether server (``10.76.28.113``).
+:numref:`Figure %s <fig-sercomm>` shows a screenshot of the SERCOMM
+gNB management dashboard, which we reference in the instructions that
+follow:
+
+.. _fig-sercomm:
+.. figure:: figures/Sercomm.png
+    :width: 500px
+    :align: center
+
+    Management dashboard on the Sercomm gNB, showing the dropdown
+    ``Settings`` menu overlayed on the ``NR Cell Configuration`` page
+    (which shows default radio settings).
+
+
+1. **Connect to Management Interface.** Start by connecting a laptop
+   directly to the LAN port on the small cell, pointing your laptop's
+   web browser at the device's management page
+   (``https://10.10.10.189``).  You will need to assign your laptop an
+   IP address on the same subnet (e.g., ``10.10.10.100``).  Once
+   connected, log in with the credentials provided by the vendor.
+
+2. **Configure WAN.** Visit the ``Settings > WAN`` page to configure
+   how the small cell connects to the Internet via its WAN port,
+   either dynamically using DHCP or statically by setting the device's
+   IP address (``10.76.28.187``) and default gateway (``10.76.28.1``).
+
+3. **Access Remote Management.** Once on the Internet, it should be
+   possible to reach the management dashboard without being directly
+   connected to the LAN port (``https://10.76.28.187``).
+
+4. **Connect GPS.** Connect the small cell's GPS antenna to the GPS
+   port, and place the antenna so it has line-of-sight to the sky
+   (e.g., in a window). The ``Status`` page of the management
+   dashboard should report its latitude, longitude, and fix time.
+
+5. **Spectrum Access System.** One reason the radio needs GPS is so it
+   can report its location to a Spectrum Access System (SAS), a
+   requirement in the US to coordinate access to the CBRS Spectrum in
+   the 3.5 GHz band. For example, the production deployment of Aether
+   uses the `Google SAS portal
+   <https://cloud.google.com/spectrum-access-system/docs/overview>`__,
+   which the small cell can be configured to query periodically. To do
+   so, visit the ``Settings > SAS`` page.  Acquiring the credentials
+   needed to access the SAS requires you go through a certification
+   process, but as a practical matter, it may be possible to test an
+   isolated/low-power femto cell indoors before completing that
+   process. Consult with your local network administrator.
+
+6. **Configure Radio Parameters.** Visit the ``Settings > NR Cell
+   Configuration`` page (shown in the figure) to set parameters that
+   control the radio. It should be sufficient to use the default
+   settings when getting started.
+
+7. **Configure the PLMN.** Visit the ``Settings > 5GC`` page to set
+   the PLMN identifier on the small cell (``00101``) to match the
+   MCC/MNC values (``001`` / ``01`` ) specified in the Core.
+
+8. **Connect to Aether Control Plane.** Also on the ``Settings > 5GC``
+   page, define the AMF Address to be the IP address of your Aether
+   server (e.g., ``10.76.28.113``). Aether's SD-Core is configured to
+   expose the corresponding AMF via a well-known port, so the server's
+   IP address is sufficient to establish connectivity. The ``Status``
+   page of the management dashboard should confirm that control
+   interface is established.
+
+9. **Connect to Aether User Plane.** As described in an earlier
+   section, the Aether User Plane (UPF) is running at IP address
+   ``192.168.252.3``. Connecting to that address requires installing a
+   route to subnet ``192.168.252.0/24``. How you install this route is
+   device and site-dependent. If the small cell provides a means to
+   install static routes, then a route to destination
+   ``192.168.252.0/24`` via gateway ``10.76.28.113`` (the server
+   hosting Aether) will work. If the small cell does not allow static
+   routes (as is the case for the SERCOMM gNB), then ``10.76.28.113``
+   can be installed as the default gateway, but doing so requires that
+   your server also be configured to forward IP packets on to the
+   Internet.
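+
+For reference, on a Linux-based gNB or intermediate router, the
+equivalent static route could be installed with a standard ``ip``
+command (a sketch using our example addresses):
+
+.. code-block::
+
+   $ sudo ip route add 192.168.252.0/24 via 10.76.28.113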
+
+Run Diagnostics
+~~~~~~~~~~~~~~~~~
+
+Successfully connecting a UE to the Internet is not a straightforward
+exercise. It involves configuring the UE, gNB, and SD-Core software in
+a consistent way; establishing SCTP-based control plane (N2) and
+GTP-based user plane (N3) connections between the base station and
+Mobile Core; and traversing multiple IP subnets along the end-to-end
+path.
+
+The UE and gNB provide limited diagnostic tools. For example, it's
+possible to run ``ping`` and ``traceroute`` from both. You can also
+run the ``ksniff`` tool described in the Networking section, but the
+most helpful packet traces you can capture are shown in the following
+commands. You can run these on the Aether server, where we use our
+example ``ens18`` interface for illustrative purposes:
+
+.. code-block::
+
+   $ sudo tcpdump -i any sctp -w sctp-test.pcap
+   $ sudo tcpdump -i ens18 port 2152 -w gtp-outside.pcap
+   $ sudo tcpdump -i access port 2152 -w gtp-inside.pcap
+   $ sudo tcpdump -i core net 172.250.0.0/16 -w n6-inside.pcap
+   $ sudo tcpdump -i ens18 net 172.250.0.0/16 -w n6-outside.pcap
+
+The first trace, saved in file ``sctp-test.pcap``, captures SCTP packets
+sent to establish the control path between the base station and the
+Mobile Core (i.e., N2 messages). Toggling "Mobile Data" on the UE,
+for example by turning Airplane Mode off and on, will generate the
+relevant control plane traffic.
+
+The second and third traces, saved in files ``gtp-outside.pcap`` and
+``gtp-inside.pcap``, respectively, capture GTP packets (tunneled
+through port ``2152``) on the RAN side of the UPF. Setting the
+interface to ``ens18`` corresponds to "outside" the UPF and setting
+the interface to ``access`` corresponds to "inside" the UPF.  Running
+``ping`` from the UE will generate the relevant user plane (N3) traffic.
+
+Similarly, the fourth and fifth traces, saved in files
+``n6-inside.pcap`` and ``n6-outside.pcap``, respectively, capture IP
+packets on the Internet side of the UPF (which is known as the **N6**
+interface in 3GPP). In these two tests, ``net 172.250.0.0/16``
+corresponds to the IP addresses assigned to UEs by the SMF. Running
+``ping`` from the UE will generate the relevant user plane traffic.
+
+If ``gtp-outside.pcap`` contains packets but ``gtp-inside.pcap``
+is empty (no packets captured), you may run the following commands
+to make sure packets are forwarded from the ``ens18`` interface
+to the ``access`` interface and vice versa:
+
+.. code-block::
+
+   $ sudo iptables -A FORWARD -i ens18 -o access -j ACCEPT
+   $ sudo iptables -A FORWARD -i access -o ens18 -j ACCEPT
diff --git a/onramp/gnbsim.rst b/onramp/gnbsim.rst
new file mode 100644
index 0000000..d71cf8c
--- /dev/null
+++ b/onramp/gnbsim.rst
@@ -0,0 +1,145 @@
+Emulated RAN
+----------------
+
+gNBsim emulates a 5G RAN, generating (mostly) Control Plane traffic
+that can be directed at SD-Core. This section describes how to
+configure gNBsim, so as to both customize and scale the workload it
+generates. We assume gNBsim runs in one or more servers, independent
+of the server(s) that host SD-Core. These servers are specified in the
+``hosts.ini`` file, as described in the section on Scaling Aether. We
+also assume you start with a variant of ``vars/main.yml`` customized
+for running gNBsim, which is easy to do:
+
+.. code-block::
+
+   $ cd vars
+   $ cp main-gnbsim.yml main.yml
+
+Configure gNBsim
+~~~~~~~~~~~~~~~~~~
+
+Two sets of parameters control gNBsim. The first set, found in the
+``gnbsim`` section of ``vars/main.yml``, controls how gNBsim is
+deployed: (1) the number of servers it runs on; (2) the number of
+Docker containers running within each server; (3) what configuration
+to run in each of those containers; and
+(4) how those containers connect to SD-Core. For example, consider the
+following variable definitions:
+
+.. code-block::
+
+   gnbsim:
+       docker:
+           container:
+               image: omecproject/5gc-gnbsim:main-PR_88-cc0d21b
+               prefix: gnbsim
+               count: 2
+           network:
+               macvlan:
+                   name: gnbnet
+
+       router:
+           data_iface: ens18
+           macvlan:
+               iface: gnbaccess
+               subnet_prefix: "172.20"
+
+       servers:
+           0:
+               - "config/gnbsim-s1-p1.yaml"
+               - "config/gnbsim-s1-p2.yaml"
+           1:
+               - "config/gnbsim-s2-p1.yaml"
+               - "config/gnbsim-s2-p2.yaml"
+
+The ``container.count`` variable in the ``docker`` block specifies how
+many containers run in each server (``2`` in this example). The
+``router`` block then gives the network specification needed for these
+containers to connect to the SD-Core; all of these variables are
+described in the previous section on Networking. Finally, the
+``servers`` block names the configuration files that parameterize each
+container. In this example, there are two servers with two containers
+running in each, with ``config/gnbsim-s2-p1.yaml`` parameterizing the
+first container on the second server.
+
+These config files then specify the second set of gNBsim parameters.
+A detailed description of these parameters is outside the scope of
+this guide (see https://github.com/omec-project/gnbsim for details),
+but at a high-level, gNBsim defines a set of *profiles*, each of which
+exercises a common usage scenario that the Core has to deal with. Each
+such scenario is represented by a ``profileType`` in the config
+file. gNBsim supports seven profiles, which we list here:
+
+.. code-block::
+
+   - profileType: register		# UE Registration
+   - profileType: pdusessest		# UE Initiated Session
+   - profileType: anrelease		# Access Network (AN) Release
+   - profileType: uetriggservicereq	# UE Initiated Service Request
+   - profileType: deregister		# UE Initiated De-registration
+   - profileType: nwtriggeruedereg	# Network Initiated De-registration
+   - profileType: uereqpdusessrelease	# UE Initiated Session Release
+
+The second profile (``pdusessest``) is selected by default. It causes
+the specified number of UEs to register with the Core, initiate a user
+plane session, and then send a minimal data packet over that session.
+Note that the rest of the per-profile parameters are highly redundant.
+For example, they specify the IMSI- and PLMN-related information UEs
+need to connect to the Core.
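+
+For illustration, a representative profile entry looks something like
+the following, where the field names are based on the example configs
+in the upstream gNBsim repo (consult that repo for the authoritative
+schema):
+
+.. code-block::
+
+   profiles:
+   - profileType: pdusessest
+     profileName: profile2
+     enable: true
+     gnbName: gnb1
+     startImsi: 315010999912301
+     ueCount: 5
+     plmnId:
+         mcc: 315
+         mnc: 010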
+
+Finally, it is necessary to edit the ``core`` section of
+``vars/main.yml`` to indicate the address at which gNBsim can find the
+AMF. For our running example, this would look like the following:
+
+.. code-block::
+
+   core:
+       amf: "10.76.28.113"
+
+
+Install/Uninstall gNBsim
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once you have edited the parameters (and assuming you already have
+SD-Core running), you are ready to install gNBsim. This includes starting
+up all the containers and configuring the network so they can reach
+the Core. This is done from the main OnRamp server you've been using,
+where you type:
+
+.. code-block::
+
+   $ make gnbsim-docker-install
+   $ make aether-gnbsim-install
+
+Note that the first step may not be necessary, depending on whether
+Docker is already installed on the server(s) you've designated to host
+gNBsim.
+
+When you are finished, the following uninstalls everything:
+
+.. code-block::
+
+   $ make aether-gnbsim-uninstall
+
+Run gNBsim
+~~~~~~~~~~~~~~~~~~
+
+Once gNBsim is installed and the Docker containers instantiated, you
+can run the simulation by typing:
+
+.. code-block::
+
+   $ make aether-gnbsim-run
+
+This can be done multiple times without reinstalling. For each run,
+you can use Docker to view the results, which have been saved in each
+of the containers. To do so, ssh into one of the servers designated to
+run gNBsim, and then type:
+
+.. code-block::
+
+   $ docker exec -it gnbsim-1 cat summary.log
+
+Note that container name ``gnbsim-1`` is constructed from the
+``prefix`` variable defined in the ``docker`` section of
+``vars/main.yml``, with ``-1`` indicating the first container.
diff --git a/onramp/inspect.rst b/onramp/inspect.rst
new file mode 100644
index 0000000..082994e
--- /dev/null
+++ b/onramp/inspect.rst
@@ -0,0 +1,112 @@
+Closer Look
+---------------
+
+Before tearing down your Quick Start deployment, there are two
+additional steps you can take to watch Aether in action. The first is
+to bring up the Aether Management Platform (AMP), which includes
+Dashboards showing different aspects of Aether's runtime behavior. The
+second is to enable packet capture, and then run an analysis tool to
+trace the flow of packets into and out of SD-Core.
+
+
+Install AMP
+~~~~~~~~~~~~~~~
+
+The Aether Management Platform (AMP) is implemented by two Kubernetes
+applications: *Runtime Operational Control (ROC)* and a *Monitoring
+Service*.\ [#]_ AMP can be deployed on the same cluster as SD-Core by
+executing the following Make target:
+
+.. code-block::
+
+   $ make aether-amp-install
+
+Once complete, ``kubectl`` will show the ``aether-roc`` and
+``cattle-monitoring-system`` namespaces running in support of these
+two services, respectively, plus new ``atomix`` pods in the
+``kube-system`` namespace.  Atomix is the scalable key-value store
+that keeps the ROC data model persistent.
+
+.. [#] Note that what the implementation calls ROC, :doc:`Chapter 6 <sysapproach5g:cloud>` refers
+       to generically as *Service Orchestration*.
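+
+As a quick check that both subsystems came up, you can list the pods
+in the corresponding namespaces (standard ``kubectl`` queries):
+
+.. code-block::
+
+   $ kubectl get pods -n aether-roc
+   $ kubectl get pods -n cattle-monitoring-system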
+
+You can access the dashboards for the two subsystems,
+respectively, at
+
+.. code-block::
+
+   http://<server_ip>:31194
+   http://<server_ip>:30950
+
+The programmatic API underlying the Control Dashboard, which was
+introduced in :doc:`Section 6.4 <sysapproach5g:cloud>`, can be accessed at
+``http://10.76.28.113:31194/aether-roc-api/`` in our example
+deployment (where Aether runs on host ``10.76.28.113``).
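+
+If you want to poke at the underlying API directly, a simple sketch
+using ``curl`` against our example deployment is:
+
+.. code-block::
+
+   $ curl http://10.76.28.113:31194/aether-roc-api/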
+
+There is much more to say about the ROC and the Aether API, which we
+return to in the section on Runtime Control. For now, we suggest you
+simply peruse the Control Dashboard by starting with the dropdown menu
+in the upper right corner. For example, selecting `Devices` will show
+the set of UEs registered with Aether, and selecting `Device-Groups`
+will show how those UEs are aggregated. In an operational setting,
+these values would be entered into the ROC through either the GUI or
+the underlying API. For the Quick Start scenario we're limiting
+ourselves to in this section, these values are loaded from
+``deps/amp/roles/roc-load/templates/radio-5g-models.json``.
+
+Turning to the Monitoring Dashboard, you will initially see
+Kubernetes-related performance stats. Select the *5G Dashboard* option
+to display information reported by SD-Core. That page should show an
+active (green) UPF, but there will be no base stations or attached
+devices until you rerun the RAN simulator (gNBsim) introduced in the
+previous section. Doing so will also result in the UPF throughput
+panel reporting just a small trace activity. This is because gNBsim
+generates very little User Plane traffic (a few ICMP packets); it is
+primarily designed to stress test SD-Core's Control Plane.
+
+When you are done experimenting with AMP, type the following
+to tear it down:
+
+.. code-block::
+
+   $ make aether-amp-uninstall
+
+Run Ksniff and Wireshark
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In addition to the trace output generated by the simulator, a good way
+to understand the inner working of Aether is to use `Ksniff
+<https://github.com/eldadru/ksniff>`__ (a Kubernetes plugin) to
+capture packets and display their headers as they flow into and out of
+the microservices that implement Aether. Output from Ksniff can then
+be fed into `Wireshark <https://www.wireshark.org/>`__.
+
+To install the Ksniff plugin on the server running Aether, you need to
+first install ``krew``, the Kubernetes plugin manager. Instructions on
+doing that can be found `online
+<https://krew.sigs.k8s.io/docs/user-guide/setup/install/>`__. Once
+that's done, you can install Ksniff by typing:
+
+.. code-block::
+
+   $ kubectl krew install sniff
+
+You can then run Ksniff in the context of a specific Kubernetes pod by
+specifying its namespace and instance name, and then redirecting
+the output to Wireshark. If you don't have a desktop environment on
+your Aether server, you can either view the output using a simpler
+packet analyzer, such as `tshark
+<https://www.wireshark.org/docs/man-pages/tshark.html>`__, or
+redirect the PCAP output to a file and transfer it to a desktop
+machine for viewing in Wireshark.
+
+For example, the following captures and displays traffic into and out
+of the AMF, where you need to substitute the name of the AMF pod
+you learned from ``kubectl`` in place of ``amf-5887bbf6c5-pc9g2``:
+
+.. code-block::
+
+   $ kubectl sniff -n omec amf-5887bbf6c5-pc9g2 -o - | tshark -r -
+
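+Alternatively, you can save the capture to a file and transfer it to
+a machine running Wireshark. This sketch assumes Ksniff's ``-o``
+option is available in the version you installed:
+
+.. code-block::
+
+   $ kubectl sniff -n omec amf-5887bbf6c5-pc9g2 -o amf.pcap
+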
+Of course, you'll also need to restart the RAN emulator to generate
+workload for this tool to capture.
diff --git a/onramp/network.rst b/onramp/network.rst
new file mode 100644
index 0000000..2e3a430
--- /dev/null
+++ b/onramp/network.rst
@@ -0,0 +1,218 @@
+Verify Network
+----------------
+
+This section goes into depth on how SD-Core (which runs *inside* the
+Kubernetes cluster) connects to either physical gNBs or an emulated
+RAN (both running *outside* the Kubernetes cluster). For the purpose
+of this section, we assume you already have a scalable cluster running
+(as outlined in the previous section), SD-Core has been installed on
+that cluster, and you have a terminal window open on the Master node
+in that cluster.
+
+:numref:`Figure %s <fig-macvlan>` shows a high-level schematic of
+Aether's end-to-end User Plane connectivity, where we start by
+focusing on the basics: a single Aether node, a single physical gNB,
+and just the UPF container running inside SD-Core. The identifiers
+shown in gray in the figure (``10.76.28.187``, ``10.76.28.113``,
+``ens18``) are taken from our running example of an actual
+deployment (meaning your details will be different). All the other
+names and addresses are part of a standard Aether configuration.
+
+.. _fig-macvlan:
+.. figure:: figures/Slide24.png
+    :width: 700px
+    :align: center
+
+    The UPF pod running inside the server hosting Aether, with
+    ``core`` and ``access`` bridging the two. Identifiers
+    ``10.76.28.187``, ``10.76.28.113``, ``ens18`` are specific to
+    a particular deployment site.
+
+As shown in the figure, there are two Macvlan bridges that connect the
+physical interface (``ens18`` in our example) with the UPF
+container. The ``access`` bridge connects the UPF downstream to the
+RAN (this corresponds to 3GPP's N3 interface) and is assigned IP subnet
+``192.168.252.0/24``.  The ``core`` bridge connects the UPF upstream
+to the Internet (this corresponds to 3GPP's N6 interface) and is assigned
+IP subnet ``192.168.250.0/24``.  This means, for example, that the
+``access`` interface *inside* the UPF (which is assigned address
+``192.168.252.3``) is the destination IP address of GTP-encapsulated
+user plane packets from the gNB.
+
+Following this basic schematic, it is possible to verify that the UPF
+is connected to the network by checking that the ``core`` and
+``access`` interfaces are properly configured. This can be done using
+``ip``, and
+you should see results similar to the following:
+
+.. code-block::
+
+   $ ip addr show core
+   15: core@ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
+       link/ether 06:f7:7c:65:31:fc brd ff:ff:ff:ff:ff:ff
+       inet 192.168.250.1/24 brd 192.168.250.255 scope global core
+          valid_lft forever preferred_lft forever
+       inet6 fe80::4f7:7cff:fe65:31fc/64 scope link
+          valid_lft forever preferred_lft forever
+
+   $ ip addr show access
+   14: access@ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
+       link/ether 82:ef:d3:bb:d3:74 brd ff:ff:ff:ff:ff:ff
+       inet 192.168.252.1/24 brd 192.168.252.255 scope global access
+          valid_lft forever preferred_lft forever
+       inet6 fe80::80ef:d3ff:febb:d374/64 scope link
+          valid_lft forever preferred_lft forever
+
+The above output from ``ip`` shows the two interfaces visible to the
+server, but running *outside* the container. ``kubectl`` can be used
+to see what's running *inside* the UPF, where ``bessd`` is the name of
+the container that implements the UPF, and ``access`` and
+``core`` are the last two interfaces shown below:
+
+.. code-block::
+
+   $ kubectl -n omec exec -ti upf-0 -c bessd -- ip addr
+   1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+       link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+       inet 127.0.0.1/8 scope host lo
+       valid_lft forever preferred_lft forever
+       inet6 ::1/128 scope host
+       valid_lft forever preferred_lft forever
+   3: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
+       link/ether 8a:e2:64:10:4e:be brd ff:ff:ff:ff:ff:ff link-netnsid 0
+       inet 192.168.84.19/32 scope global eth0
+       valid_lft forever preferred_lft forever
+       inet6 fe80::88e2:64ff:fe10:4ebe/64 scope link
+       valid_lft forever preferred_lft forever
+   4: access@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
+       link/ether 82:b4:ea:00:50:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0
+       inet 192.168.252.3/24 brd 192.168.252.255 scope global access
+       valid_lft forever preferred_lft forever
+       inet6 fe80::80b4:eaff:fe00:503e/64 scope link
+       valid_lft forever preferred_lft forever
+   5: core@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
+       link/ether 4e:ac:69:31:a3:88 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+       inet 192.168.250.3/24 brd 192.168.250.255 scope global core
+       valid_lft forever preferred_lft forever
+       inet6 fe80::4cac:69ff:fe31:a388/64 scope link
+       valid_lft forever preferred_lft forever
+
+When packets flowing upstream from the gNB arrive on the server's
+physical interface, they need to be forwarded over the ``access``
+interface.  This is done by having the following kernel route
+installed, which should be the case if your Aether installation was
+successful:
+
+.. code-block::
+
+   $ route -n | grep "Iface\|access"
+   Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
+   192.168.252.0   0.0.0.0         255.255.255.0   U     0      0        0 access
+
+Within the UPF, the correct behavior is to forward packets between the
+``access`` and ``core`` interfaces.  Upstream packets arriving on the
+``access`` interface have their GTP headers removed and the raw IP
+packets are forwarded to the ``core`` interface.  The routes inside
+the UPF's ``bessd`` container will look something like this:
+
+.. code-block::
+
+   $ kubectl -n omec exec -ti upf-0 -c bessd -- ip route
+   default via 169.254.1.1 dev eth0
+   default via 192.168.250.1 dev core metric 110
+   10.76.28.0/24 via 192.168.252.1 dev access
+   10.76.28.113 via 169.254.1.1 dev eth0
+   169.254.1.1 dev eth0 scope link
+   192.168.250.0/24 dev core proto kernel scope link src 192.168.250.3
+   192.168.252.0/24 dev access proto kernel scope link src 192.168.252.3
+
+The default route via ``192.168.250.1`` directs upstream packets to
+the Internet via the ``core`` interface, with a next hop of the
+``core`` interface outside the UPF.  These packets then undergo source
+NAT in the kernel and are sent to the IP destination in the packet.
+This means that the ``172.250.0.0/16`` addresses assigned to UEs are
+not visible beyond the Aether server. The return (downstream) packets
+undergo reverse NAT and now have a destination IP address of the UE.
+They are forwarded by the kernel to the ``core`` interface by these
+rules on the server:
+
+.. code-block::
+
+   $ route -n | grep "Iface\|core"
+   Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
+   172.250.0.0     192.168.250.3   255.255.0.0     UG    0      0        0 core
+   192.168.250.0   0.0.0.0         255.255.255.0   U     0      0        0 core
+
+The first rule above matches packets to the UEs on the
+``172.250.0.0/16`` subnet.  The next hop for these packets is the
+``core`` IP address inside the UPF.  The second rule says that next
+hop address is reachable on the ``core`` interface outside the UPF.
+As a result, the downstream packets arrive in the UPF where they are
+GTP-encapsulated with the IP address of the gNB.
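+
+If you suspect this NAT step is not happening, you can inspect the
+kernel's NAT table directly (a standard ``iptables`` query; the exact
+rules you see will depend on your configuration):
+
+.. code-block::
+
+   $ sudo iptables -t nat -L POSTROUTING -n -v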
+
+Note that if you are not finding ``access`` and ``core`` interfaces
+outside the UPF, the following commands can be used to create these
+two interfaces manually (again using our running example for the
+physical ethernet interface):
+
+.. code-block::
+
+    $ sudo ip link add core link ens18 type macvlan mode bridge
+    $ sudo ip addr add 192.168.250.1/24 dev core
+    $ sudo ip link add access link ens18 type macvlan mode bridge
+    $ sudo ip addr add 192.168.252.1/24 dev access
+
+Beyond this basic understanding, there are three other details of
+note. First, we have been focusing on the User Plane because Control
+Plane connectivity is much simpler: RAN elements (whether they are
+physical gNBs or gNBsim) reach the AMF using the server's actual IP
+address (``10.76.28.113`` in our running example). Kubernetes is
+configured to forward SCTP packets arriving on port ``38412`` to the
+AMF container.
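+
+You can confirm that this port is exposed by querying the Kubernetes
+services in the ``omec`` namespace (a standard ``kubectl`` query;
+service names vary with the chart version):
+
+.. code-block::
+
+   $ kubectl get svc -n omec | grep 38412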
+
+Second, the basic end-to-end schematic shown in :numref:`Figure %s
+<fig-macvlan>` assumes each gNB is assigned an address on the same L2
+network as the Aether cluster (e.g., ``10.76.28.0/24`` in our example
+scenario). This works when the gNB is physical or when we want to run
+a single gNBsim traffic source, but once we scale up the gNBsim by
+co-locating multiple containers on a single server, we need to
+introduce another network so each container has a unique IP address
+(even though they are all hosted on the same server). This more
+complex configuration is depicted in :numref:`Figure %s <fig-gnbsim>`,
+where ``172.20.0.0/16`` is the IP subnet for the virtual network (also
+implemented by a Macvlan bridge, and named ``gnbaccess``).
+
+.. _fig-gnbsim:
+.. figure:: figures/Slide25.png
+    :width: 600px
+    :align: center
+
+    A server running multiple instances of gNBsim, connected to Aether.
+
+Finally, all of the configurable parameters used throughout this
+section are defined in the ``core`` and ``gnbsim`` sections of the
+``vars/main.yml`` file. Note that an empty value for
+``core.ran_subnet`` implies the physical L2 network is used to connect
+RAN elements to the core, as is typically the case when connecting
+physical gNBs.
+
+
+.. code-block::
+
+    core:
+        standalone: "true"
+        data_iface: ens18
+        values_file: "config/hpa-5g-values.yaml"
+        ran_subnet: "172.20.0.0/16"
+        helm:
+           chart_ref: aether/sd-core
+           chart_version: 0.12.6
+        upf:
+           ip_prefix: "192.168.252.0/24"
+        amf:
+           ip: "172.16.41.103"
+
+    gnbsim:
+        ...
+        router:
+            data_iface: ens18
+            macvlan:
+                iface: gnbaccess
+                subnet_prefix: "172.20"
diff --git a/onramp/overview.rst b/onramp/overview.rst
new file mode 100644
index 0000000..75f1e86
--- /dev/null
+++ b/onramp/overview.rst
@@ -0,0 +1,91 @@
+Overview
+----------------
+
+Aether is an open source 5G edge cloud connectivity service that
+supports enterprise deployments of Private 5G. Aether's architecture
+is described in a companion book; this guide references sections of
+that book to fill in details about Aether's design.
+
+.. _reading_private5g:
+.. admonition:: Further Reading
+
+   L. Peterson, O. Sunay, and B. Davie. `Private 5G: A Systems
+   Approach <https://5g.systemsapproach.org>`__. 2023.
+
+Source code for all the individual components that comprise Aether
+(e.g., AMP, SD-Core, SD-RAN, SD-Fabric) can be downloaded, and
+deployment artifacts built from that source code (e.g., Docker Images,
+Helm Charts, Fleet Bundles, Terraform Templates, Ansible Playbooks)
+can be used to bring up a running instance of Aether on local
+hardware. (See the *Source Directory* section of this guide for
+information about where to find the relevant repositories.)
+
+A multi-site deployment of Aether has been running since 2020 in
+support of the *Pronto Project*, but that deployment depends on an ops
+team with significant insider knowledge about Aether's engineering
+details. It is difficult for others to reproduce that know-how and
+bring up their own Aether clusters.  Aether is also available as two
+self-contained software packages that were originally designed to
+support developers working on individual components.  These packages
+are straightforward to install and run, even in a VM on your laptop,
+so they also provide an easy way to get started:
+
+* `Aether-in-a-Box (AiaB)
+  <https://docs.aetherproject.org/master/developer/aiab.html>`__:
+  Includes SD-Core and the online aspects of AMP (Service
+  Orchestrator and the Monitoring Subsystem). AiaB can be configured
+  to work with either an emulated RAN or physical small cell radios
+  (both 4G and 5G).
+
+* `SDRAN-in-a-Box (RiaB)
+  <https://docs.sd-ran.org/master/sdran-in-a-box/README.html>`__:
+  Includes the ONOS-based nRT-RIC, the O-RAN defined E2SM-KPI and
+  E2SM-RC Service Models, and example xApps. RiaB can be configured to
+  work with either an emulated RAN (5G) or with OAI's open source RAN stack
+  running on USRP devices (4G).
+
+Note that these two packages do not include SD-Fabric, which depends
+on programmable switching hardware. Readers interested in learning
+more about that capability (including a P4-based UPF) should see the
+Hands-on Programming appendix of our companion SDN book.
+
+.. _reading_pronto:
+.. admonition:: Further Reading
+
+   `Pronto Project: Building Secure Networks Through Verifiable
+   Closed-Loop Control <https://prontoproject.org/>`__.
+
+   `Hands-on Programming (Appendix). Software-Defined Networks: A
+   Systems Approach
+   <https://sdn.systemsapproach.org/exercises.html>`__. November 2021.
+
+As tools targeted at developers, AiaB and RiaB support a streamlined
+modify-build-test loop, but a significant gap remains between these
+self-contained versions of Aether and an operational 5G-enabled edge
+cloud deployed into a particular target environment. `Aether OnRamp
+<https://github.com/opennetworkinglab/aether-onramp>`__ is a
+re-packaging of Aether to address that gap. It provides an incremental
+path for users to:
+
+* Learn about all the moving parts in Aether.
+* Customize Aether for different target environments.
+* Deploy and operate Aether with live traffic.
+
+Aether OnRamp begins with a *Quick Start* deployment similar to AiaB,
+but then goes on to prescribe a sequence of steps a user can follow to
+deploy increasingly complex configurations. This culminates in an
+operational Aether cluster capable of running 24/7 and supporting live
+5G workloads.
+
+Note that OnRamp includes support for bringing up a 4G version of
+Aether connected to one or more physical eNBs, but we postpone a
+discussion of that capability until the final section. Everything up
+to that point assumes 5G.
+
+Aether OnRamp is still a work in progress, but anyone
+interested in participating in that effort is encouraged to join the
+discussion on Slack in the `ONF Community Workspace
+<https://onf-community.slack.com/>`__. A roadmap for the work that
+needs to be done can be found in the `Aether OnRamp Wiki
+<https://github.com/opennetworkinglab/aether-onramp/wiki>`__.
+
diff --git a/onramp/roc.rst b/onramp/roc.rst
new file mode 100644
index 0000000..3d19599
--- /dev/null
+++ b/onramp/roc.rst
@@ -0,0 +1,109 @@
+Runtime Control
+-----------------------------------
+
+Aether defines an API (and associated GUI) for managing connectivity
+at runtime. This stage brings up that API/GUI, as implemented by the
+*Runtime Operational Control (ROC)* subsystem, building on the
+physical gNB we connected to Aether in the previous section.
+
+This stage focuses on the abstractions that the ROC layers on top of
+the SD-Core. These abstractions are described in :doc:`Section 6.4
+<sysapproach5g:cloud>` and include *Device Groups* and
+*Slices*. (The full set of model definitions can be found in `GitHub
+<https://github.com/onosproject/aether-models>`__.)  Initial settings
+of these ROC-managed parameters are recorded in
+``deps/amp/roles/roc-load/templates/radio-5g-models.json``. We use
+these values to load the ROC database, saving us from a laborious GUI
+session.
+
+Somewhat confusingly, the *Device-Group* and *Slice* information is
+duplicated between ``deps/5gc/roles/core/templates/radio-5g-values.yaml``
+and this ``radio-5g-models.json`` file. This makes it possible to bring
+up the SD-Core without the ROC, which simplifies the process of
+debugging an initial installation, but having two sources for this
+information leads to problems keeping them in sync, and should be
+avoided.
+
+To this end, Aether treats the ROC as the "single source of truth" for
+*Slices*, *Device Groups*, and all the other abstract objects it
+defines, so we recommend using the GUI or API to make changes over
+time, and avoiding the override values in ``radio-5g-values.yaml``
+once you've established basic connectivity. And if you want to save
+this bootstrap state in a text file for a possible restart, we
+recommend doing so in ``radio-5g-models.json`` (although this is not a
+substitute for the operational practice of backing up the ROC
+database).
+
+To make ROC the authoritative source of runtime state, first edit the
+``standalone`` variable in the ``core`` section of ``vars/main.yml``,
+setting it to ``false``. This variable indicates whether we want
+SD-Core to run in *Stand Alone* mode, which has been the default
+setting up to this point. Disabling ``standalone`` causes the SD-Core
+to ignore the ``device-groups`` and ``network-slices`` blocks of the
+``omec-sub-provision`` section in ``radio-5g-values.yaml``, and to instead
+retrieve this information from the ROC.
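+
+After this edit, the start of the ``core`` section of
+``vars/main.yml`` looks like this:
+
+.. code-block::
+
+   core:
+       standalone: "false"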
+
+The next step is to edit ``radio-5g-models.json`` to record the same
+IMSI information you added to ``radio-5g-values.yaml`` in the
+previous section.  This includes modifying, adding and removing
+``sim-card`` entries as necessary. Note that only the IMSIs need to
+match the earlier data; the ``sim-id`` and ``display-name`` values are
+arbitrary and need only be consistent *within* ``radio-5g-models.json``.
+
+.. code-block::
+
+   "imsi-definition": {
+          "mcc": "315",
+          "mnc": "010",
+          "enterprise": 1,
+          "format": "CCCNNNEESSSSSSS"
+   },
+   ...
+
+   "sim-card": [
+          {
+              "sim-id": "sim-1",
+              "display-name": "SIM 1",
+              "imsi": "315010999912301"
+          },
+   ...
+
+Once you are done with these edits, uninstall the SD-Core you had
+running in the previous stage, and then bring up the ROC followed by a
+new instantiation of the SD-Core:
+
+.. code-block::
+
+   $ make aether-5gc-uninstall
+   $ make aether-amp-install
+   $ make aether-5gc-install
+
+The order is important, since the Core depends on configuration
+parameters provided by the ROC. Also note that you may need to reboot
+the gNB, although it typically does so automatically when it detects
+that the Core has restarted.
+
+To see these initial configuration values using the GUI, open the
+dashboard available at ``http://<server-ip>:31194``. If you select
+``Configuration > Site`` from the drop-down menu at top right, and
+click the ``Edit`` icon associated with the ``Aether Site`` you can
+see (and potentially change) the following values:
+
+* MCC: 315
+* MNC: 010
+
+Although we have no need to do so now, you can make changes to these
+values, and then click ``Update`` to save them to the "commit basket".
+Similarly, if you select ``Sim Cards`` from the drop-down menu at top
+right, the ``Edit`` icon associated with each SIM card allows you to
+see (and potentially change) the IMSI values associated with each device.
+You can also disable individual IMSIs. Again, click ``Update`` if you
+make any changes.
+
+The set of registered IMSIs can be aggregated into *Device-Groups* by
+selecting ``Device-Groups`` from the drop-down menu at the top right,
+and adding a new device group.
+
+Finally, if you do make a set of updates, select the ``Basket`` icon
+at top right when you are done, and click the ``Commit`` button. This
+causes the set of changes to be committed as a single transaction.
diff --git a/onramp/scale.rst b/onramp/scale.rst
new file mode 100644
index 0000000..b5cbd50
--- /dev/null
+++ b/onramp/scale.rst
@@ -0,0 +1,121 @@
+Scale Cluster
+-----------------
+
+Everything up to this point has been done as part of the Quick Start
+configuration, with all the components running in a single server (VM
+or physical machine). We now describe how to scale Aether to run on
+multiple servers, where we assume this cluster-based configuration
+throughout the rest of this guide. Before continuing, though, you need
+to remove the Quick Start configuration by typing:
+
+.. code-block::
+
+   $ make aether-uninstall
+
+There are two aspects of our deployment that scale independently. One
+is Aether proper: a Kubernetes cluster running the set of
+microservices that implement SD-Core and AMP (and optionally, other
+edge apps). The second is gNBsim: the emulated RAN that generates
+traffic directed at the Aether cluster. Minimally, two servers are
+required—one for the Aether cluster and one for gNBsim—with each able
+to scale independently. For example, having four servers would support
+a 3-node Aether cluster and a 1-node workload generator. This example
+configuration corresponds to the following ``hosts.ini`` file:
+
+.. code-block::
+
+   [all]
+   node1 ansible_host=172.16.144.50 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether
+   node2 ansible_host=172.16.144.71 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether
+   node3 ansible_host=172.16.144.18 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether
+   node4 ansible_host=172.16.144.93 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether
+
+   [master_nodes]
+   node1
+
+   [worker_nodes]
+   node2
+   node3
+   node4
+
+   [gnbsim_nodes]
+   node4
+
+The first block identifies all the nodes; the second block designates
+which node runs the Ansible client and the Kubernetes control plane
+(this is the node you ssh into and invoke Make targets and ``kubectl``
+commands); the third block designates the worker nodes being managed
+by the Ansible client; and the last block indicates which nodes run the
+gNBsim workload generator (gNBsim scales across multiple Docker
+containers, but these containers are **not** managed by Kubernetes).
+Note that having ``master_nodes`` and ``gnbsim_nodes`` both name
+exactly one (and the same) server is what triggers Ansible to
+instantiate the Quick Start configuration.
+
+You need to modify ``hosts.ini`` to match your target deployment.
+Once you've done that (and assuming you deleted your earlier Quick
+Start configuration), you can re-execute the same set of targets you
+ran before:
+
+.. code-block::
+
+   $ make aether-k8s-install
+   $ make aether-5gc-install
+   $ make aether-amp-install
+   $ make aether-gnbsim-install
+   $ make aether-gnbsim-run
+
+This will run the same gNBsim test case as before, but originating in
+a separate VM. We will return to options for scaling up the gNBsim
+workload in a later section, along with describing how to run physical
+gNBs in place of gNBsim. Note that if you are primarily interested in
+the latter, you can still run Aether on a single server, and then
+connect that node to one or more physical gNBs.
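+
+Because the gNBsim containers are managed by Docker rather than
+Kubernetes, one way to verify they are up is to log into the gNBsim
+node and list the running containers (the ``gnbsim`` name filter is a
+convenience, matching the container names used later in this guide):
+
+.. code-block::
+
+   $ docker ps --filter name=gnbsim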
+
+Finally, apart from being able to run SD-Core and gNBsim on
+separate nodes—thereby cleanly decoupling the Core from the RAN—one
+question we have not yet answered is why you might want to scale the
+Aether cluster to multiple nodes. One answer is that you are concerned
+about availability, so want to introduce redundancy.
+
+A second answer is that you want to run some other edge application,
+such as an IoT or AI/ML platform, on the Aether cluster.  Such
+applications can be co-located with SD-Core, with the latter providing
+local breakout. For example, OpenVINO is a framework for deploying
+inference models to process local video streams, such as detecting
+and counting people who enter the field of view of 5G-connected
+cameras. Just like SD-Core, OpenVINO is deployed as a set
+of Kubernetes pods.
+
+.. _reading_openvino:
+.. admonition:: Further Reading
+
+   `OpenVINO Toolkit <https://docs.openvino.ai>`__.
+
+A third possible answer is that you want to scale SD-Core itself, in
+support of a scalable number of UEs. For example, providing
+predictable, low-latency support for hundreds or thousands of IoT
+devices requires horizontally scaling the AMF. OnRamp provides a way
+to experiment with exactly that possibility. If you edit the ``core``
+section of ``vars/main.yml`` to use an alternative values file (in
+place of ``sdcore-5g-values.yaml``):
+
+.. code-block::
+
+   values_file: "deps/5gc/roles/core/templates/hpa-5g-values.yaml"
+
+you can deploy SD-Core with *Horizontal Pod Autoscaling (HPA)*
+enabled. Note that HPA is an experimental feature of SD-Core; it has
+not yet been officially released and is not yet supported.
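+
+Once redeployed with that values file, one quick sanity check (a
+sketch, assuming the autoscaled pods land in the ``omec`` namespace)
+is to ask Kubernetes for its HPA resources:
+
+.. code-block::
+
+   $ kubectl get hpa -n omec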
+
+.. _reading_hpa:
+.. admonition:: Further Reading
+
+   `Horizontal Pod Autoscaling
+   <https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/>`__.
+
diff --git a/onramp/start.rst b/onramp/start.rst
new file mode 100644
index 0000000..db05317
--- /dev/null
+++ b/onramp/start.rst
@@ -0,0 +1,375 @@
+Quick Start
+-----------------------
+
+Aether OnRamp provides a low-overhead way to get started. It brings up
+a one-node Kubernetes cluster, deploys a 5G version of SD-Core on that
+cluster, and runs an emulated 5G workload against the 5G Core. It
+assumes a low-end server that meets the following requirements:
+
+* Haswell CPU (or newer), with at least 4 CPUs and 12GB RAM.
+* Clean install of Ubuntu 18.04, 20.04, or 22.04, with 4.15 (or later) kernel.
+
+For example, something like an Intel NUC is more than enough to get
+started.
+
+While this guide focuses on deploying Aether OnRamp on a physical
+machine (in anticipation of later stages), this stage can also run in
+a VM.  Options include an AWS VM (Ubuntu 20.04 image on `t2.xlarge`
+instance); a VirtualBox VM running `bento/ubuntu-20.04` `Vagrant
+<https://www.vagrantup.com>`_ box on Intel Mac; a VM created using
+`Multipass <https://multipass.run>`_ on Linux, Mac, or Windows; or
+`VMware Fusion <https://www.vmware.com/products/fusion.html>`__ to run
+a VM on a Mac.
+
+For example, if you have Multipass installed on your laptop, you can
+launch a suitable VM instance by typing:
+
+.. code-block::
+
+   $ multipass launch 20.04 --cpus 4 --disk 50G --memory 16G --name onramp
+
+Prep Environment
+~~~~~~~~~~~~~~~~~~~~~
+
+To install Aether OnRamp, you must be able to run `sudo` without
+a password, and there should be no firewall running on the server,
+which you can verify as follows:
+
+.. code-block::
+
+   $ sudo ufw status
+   $ sudo iptables -L
+   $ sudo nft list ruleset
+
+The first command should report that the firewall is inactive, and
+the other two commands should show no rules configured.
+
+Because the install process fetches artifacts from the Internet, if you
+are behind a proxy you will need to set the standard Linux environment
+variables: `http_proxy`, `https_proxy`, `no_proxy`, `HTTP_PROXY`,
+`HTTPS_PROXY` and `NO_PROXY` with the appropriate values. You also
+need to export `PROXY_ENABLED=true` by typing the following:
+
+.. code-block::
+
+   $ export PROXY_ENABLED=true
+
+This variable can also be set in your ``~/.bashrc`` file to make it
+permanent.
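+
+As a sketch (the proxy host, port, and exclusion list shown here are
+placeholders for your site's values):
+
+.. code-block::
+
+   $ export http_proxy=http://proxy.example.com:3128
+   $ export https_proxy=http://proxy.example.com:3128
+   $ export no_proxy=localhost,127.0.0.1
+   $ export HTTP_PROXY=$http_proxy HTTPS_PROXY=$https_proxy NO_PROXY=$no_proxy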
+
+Finally, OnRamp depends on Ansible, which you can install on your
+server as follows:
+
+.. code-block::
+
+   $ sudo apt install pipx
+   $ sudo apt install python3.8-venv
+   $ pipx install --include-deps ansible
+   $ pipx ensurepath
+   $ sudo apt-get install sshpass
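+
+After opening a new shell (so ``pipx ensurepath`` takes effect), you
+can confirm the toolchain is in place:
+
+.. code-block::
+
+   $ ansible --version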
+
+
+Download Aether OnRamp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once ready, clone the Aether OnRamp repo on this target deployment
+server:
+
+.. code-block::
+
+   $ git clone --recursive https://github.com/opennetworkinglab/aether-onramp.git
+   $ cd aether-onramp
+
+Taking a quick look at your ``aether-onramp`` directory, there are
+four things to note:
+
+1. The ``deps`` directory contains the Ansible deployment
+   specifications for all the Aether subsystems. Each of these
+   subdirectories (e.g., ``deps/5gc``) is self-contained, meaning you
+   can execute the Make targets in each individual directory. Doing so
+   causes Ansible to run the corresponding playbook. For example, the
+   installation playbook for the 5G Core can be found in
+   ``deps/5gc/roles/core/tasks/install.yml``.
+
+2. The Makefile in the main OnRamp directory imports (``#include``)
+   the per-subsystem Makefiles, meaning all the individual steps
+   required to install Aether can be managed from this main directory.
+   The Makefile includes comments listing the key Make targets defined
+   by the included Makefiles. *Importantly, the rest of this guide
+   assumes you are working in the main OnRamp directory, and not in
+   the individual subsystems.*
+
+3. File ``vars/main.yml`` defines all the Ansible variables you will
+   potentially need to modify to specify your deployment scenario.
+   This file is the union of all the per-component ``vars/main.yml``
+   files you find in the corresponding ``deps`` directory. This
+   top-level variable file overrides the per-component var files, so
+   you will not need to modify the latter. Note that the ``vars``
+   directory contains several variants of ``main.yml``, each tailored
+   for a different deployment scenario. The default ``main.yml``
+   (which is the same as ``main-quickstart.yml``) supports the Quick
+   Start deployment described in this section; we'll substitute the
+   other variants in later sections.
+
+4. File ``hosts.ini`` (host inventory) is Ansible's way of specifying
+   the set of servers (physical or virtual) that Ansible targets with
+   various installation playbooks. The default version of ``hosts.ini``
+   included with OnRamp is simplified to run everything on a single
+   server (the one you've cloned the repo onto), with additional lines
+   you may eventually need for a multi-node cluster commented out.
+
+
+Set Target Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Quick Start deployment described in this section requires that you
+modify two parameters to reflect the specifics of your target
+deployment.
+
+The first is in file ``hosts.ini``, where you will need to give the IP
+address and login credentials for the server you are working on. At
+this stage, we assume the server you downloaded OnRamp onto is the
+same server you will be installing Aether on.
+
+.. code-block::
+
+   node1  ansible_host=172.16.41.103 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether
+
+In this example, address ``172.16.41.103`` and the three occurrences
+of the string ``aether`` need to be replaced with the appropriate
+values.  Note that if you set up your server to use SSH keys instead
+of passwords, then ``ansible_password=aether`` needs to be replaced
+with ``ansible_ssh_private_key_file=~/.ssh/id_rsa`` (or wherever
+your private key can be found).
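+
+For example, a key-based variant of that line might look like the
+following (the key path is just a placeholder):
+
+.. code-block::
+
+   node1  ansible_host=172.16.41.103 ansible_user=aether ansible_ssh_private_key_file=~/.ssh/id_rsa ansible_sudo_pass=aether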
+
+The second parameter is in ``vars/main.yml``, where the **two** lines
+currently reading
+
+.. code-block::
+
+   data_iface: ens18
+
+need to be edited to replace ``ens18`` with the device interface for
+your server. You can learn the interface using the Linux ``ip``
+command:
+
+.. code-block::
+
+   $ ip a
+   1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+       link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+       inet 127.0.0.1/8 scope host lo
+          valid_lft forever preferred_lft forever
+       inet6 ::1/128 scope host
+          valid_lft forever preferred_lft forever
+   2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+       link/ether 2c:f0:5d:f2:d8:21 brd ff:ff:ff:ff:ff:ff
+       inet 10.76.28.113/24 metric 100 brd 10.76.28.255 scope global ens18
+          valid_lft forever preferred_lft forever
+       inet6 fe80::2ef0:5dff:fef2:d821/64 scope link
+          valid_lft forever preferred_lft forever
+
+In this example, the reported interface is ``ens18`` and the IP
+address is ``10.76.28.113`` on subnet ``10.76.28.0/24``.  We will use
+these three values as a running example throughout the guide, as a
+placeholder for your local details.
+
+Note that ``vars/main.yml`` and ``hosts.ini`` are the only two files
+you need to modify for now, but there are additional config files that
+you may want to modify as we move beyond the Quick Start deployment.
+We'll identify those files throughout this section, for informational
+purposes, and revisit them in later sections.
+
+Many of the tasks specified in the various Ansible playbooks result in
+calls to Kubernetes, either directly via ``kubectl``, or indirectly
+via ``helm``. This means that after executing the sequence of
+Makefile targets described in the rest of this guide, you'll want to
+run some combination of the following commands to verify that the
+right things happened:
+
+.. code-block::
+
+   $ kubectl get pods --all-namespaces
+   $ helm repo list
+   $ helm list --namespace kube-system
+
+The first reports the set of pods currently running (across all namespaces);
+the second shows the known set of repos you are pulling charts from;
+and the third shows the version numbers of the charts currently
+deployed in the ``kube-system`` namespace.
+
+If you are not familiar with ``kubectl`` (the CLI for Kubernetes), we
+recommend that you start with `Kubernetes Tutorial
+<https://kubernetes.io/docs/tutorials/kubernetes-basics/>`__.  And
+although not required, you may also want to install
+`k9s <https://k9scli.io/>`__\ , a terminal-based UI that provides a
+convenient alternative to ``kubectl`` for interacting with Kubernetes.
+
+Note that we have not yet installed Kubernetes or Helm, so these
+commands are not yet available. At this point, the only verification
+step you can take is to type the following:
+
+.. code-block::
+
+   $ make aether-pingall
+
+The output should show that Ansible is able to securely connect to all
+the nodes in your deployment, which is currently just the one that
+Ansible knows as ``node1``.
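+
+Assuming the target invokes Ansible's standard ``ping`` module, the
+output for each node should include a response along these lines:
+
+.. code-block::
+
+   node1 | SUCCESS => {
+       "changed": false,
+       "ping": "pong"
+   }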
+
+Install Kubernetes
+~~~~~~~~~~~~~~~~~~~
+
+The next step is to bring up an RKE2 Kubernetes cluster on your
+target server. Do this by typing:
+
+.. code-block::
+
+   $ make aether-k8s-install
+
+Once the playbook completes, ``kubectl`` will show the pods of the
+``kube-system`` namespace running, with output looking something like
+the following:
+
+.. code-block::
+
+   $ kubectl get pods --all-namespaces
+   NAMESPACE     NAME                                                    READY   STATUS      RESTARTS   AGE
+   kube-system   cloud-controller-manager-node1                          1/1     Running     0          2m4s
+   kube-system   etcd-node1                                              1/1     Running     0          104s
+   kube-system   helm-install-rke2-canal-8s67r                           0/1     Completed   0          113s
+   kube-system   helm-install-rke2-coredns-bk5rh                         0/1     Completed   0          113s
+   kube-system   helm-install-rke2-ingress-nginx-lsjz2                   0/1     Completed   0          113s
+   kube-system   helm-install-rke2-metrics-server-t8kxf                  0/1     Completed   0          113s
+   kube-system   helm-install-rke2-multus-tbbhc                          0/1     Completed   0          113s
+   kube-system   kube-apiserver-node1                                    1/1     Running     0          97s
+   kube-system   kube-controller-manager-node1                           1/1     Running     0          2m7s
+   kube-system   kube-multus-ds-96cnl                                    1/1     Running     0          95s
+   kube-system   kube-proxy-node1                                        1/1     Running     0          2m1s
+   kube-system   kube-scheduler-node1                                    1/1     Running     0          2m7s
+   kube-system   rke2-canal-h79qq                                        2/2     Running     0          95s
+   kube-system   rke2-coredns-rke2-coredns-869b5d56d4-tffjh              1/1     Running     0          95s
+   kube-system   rke2-coredns-rke2-coredns-autoscaler-5b947fbb77-pj5vk   1/1     Running     0          95s
+   kube-system   rke2-ingress-nginx-controller-s68rx                     1/1     Running     0          48s
+   kube-system   rke2-metrics-server-6564db4569-snnv4                    1/1     Running     0          56s
+
+If you are interested in seeing the details about how Kubernetes is
+customized for Aether, look at
+``deps/k8s/roles/rke2/templates/master-config.yaml``.  Of particular
+note, we have instructed Kubernetes to allow services to use ports
+ranging from ``2000`` to ``36767``, and we are using the ``multus`` and
+``canal`` CNI plugins.
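+
+As a rough sketch of the kind of settings you will find there (the
+exact keys in the rendered RKE2 config may differ):
+
+.. code-block::
+
+   cni:
+     - multus
+     - canal
+   kube-apiserver-arg:
+     - "service-node-port-range=2000-36767"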
+
+Install SD-Core
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We are now ready to bring up the 5G version of the SD-Core. To do
+that, type:
+
+.. code-block::
+
+   $ make aether-5gc-install
+
+``kubectl`` will now show the ``omec`` namespace running (in addition
+to ``kube-system``), with output similar to the following:
+
+.. code-block::
+
+   $ kubectl get pods -n omec
+   NAME                         READY   STATUS             RESTARTS      AGE
+   amf-5887bbf6c5-pc9g2         1/1     Running            0             6m13s
+   ausf-6dbb7655c7-42z7m        1/1     Running            0             6m13s
+   kafka-0                      1/1     Running            0             6m13s
+   metricfunc-b9f8c667b-r2x9g   1/1     Running            0             6m13s
+   mongodb-0                    1/1     Running            0             6m13s
+   mongodb-1                    1/1     Running            0             4m12s
+   mongodb-arbiter-0            1/1     Running            0             6m13s
+   nrf-54bf88c78c-kcm7t         1/1     Running            0             6m13s
+   nssf-5b85b8978d-d29jm        1/1     Running            0             6m13s
+   pcf-758d7cfb48-dwz9x         1/1     Running            0             6m13s
+   sd-core-zookeeper-0          1/1     Running            0             6m13s
+   simapp-6cccd6f787-jnxc7      1/1     Running            0             6m13s
+   smf-7f89c6d849-wzqvx         1/1     Running            0             6m13s
+   udm-768b9987b4-9qz4p         1/1     Running            0             6m13s
+   udr-8566897d45-kv6zd         1/1     Running            0             6m13s
+   upf-0                        5/5     Running            0             6m13s
+   webui-5894ffd49d-gg2jh       1/1     Running            0             6m13s
+
+You will recognize Kubernetes pods that correspond to many of the
+microservices discussed in :doc:`Chapter 5 <sysapproach5g:core>`. For example,
+``amf-5887bbf6c5-pc9g2`` implements the AMF. Note that for historical
+reasons, the Aether Core is called ``omec`` instead of ``sd-core``.
+
+If you are interested in seeing the details about how SD-Core is
+configured, look at
+``deps/5gc/roles/core/templates/radio-5g-values.yaml``.  This is an
+example of a *values override* file that Helm passes along to
+Kubernetes when launching the service. Most of the default settings
+will remain unchanged, with the main exception being the
+``subscribers`` block of the ``omec-sub-provision`` section. This
+block will eventually need to be edited to reflect the SIM cards you
+actually deploy. We return to this topic in the section describing how
+to bring up a physical gNB.
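+
+As a hedged illustration of what that block looks like (the field
+names and values here are representative placeholders; consult the
+actual values file), each entry maps a range of IMSIs to the
+credentials programmed into the corresponding SIM cards:
+
+.. code-block::
+
+   subscribers:
+     - ueId-start: "315010999912301"
+       ueId-end: "315010999912305"
+       opc: "<placeholder-opc>"
+       key: "<placeholder-key>"
+       sequenceNumber: "<placeholder-sqn>"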
+
+
+Run Emulated RAN Test
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We can now test SD-Core with emulated traffic by typing:
+
+.. code-block::
+
+   $ make aether-gnbsim-install
+   $ make aether-gnbsim-run
+
+Note that you can re-execute the ``aether-gnbsim-run`` target multiple
+times, where the results of each run are saved in a file within the
+Docker container running the test. You can access that file by typing:
+
+.. code-block::
+
+   $ docker exec -it gnbsim-1 cat summary.log
+
+If successful, the last lines of the output should look like the
+following:
+
+.. code-block::
+
+   ...
+   2023-04-20T20:21:36Z [INFO][GNBSIM][Profile][profile2] ExecuteProfile ended
+   2023-04-20T20:21:36Z [INFO][GNBSIM][Summary] Profile Name: profile2 , Profile Type: pdusessest
+   2023-04-20T20:21:36Z [INFO][GNBSIM][Summary] UEs Passed: 5 , UEs Failed: 0
+   2023-04-20T20:21:36Z [INFO][GNBSIM][Summary] Profile Status: PASS
+
+This particular test, which runs the cryptically named ``pdusessest``
+profile, emulates five UEs, each of which: (1) registers with the
+Core, (2) initiates a user plane session, and (3) sends a minimal data
+packet over that session. If you are interested in the config file
+that controls the test, including the option of enabling other
+profiles, take a look at
+``deps/gnbsim/config/gnbsim-default.yaml``. We return to the issue of
+customizing gNBsim in a later section.
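+
+As an illustrative sketch of that file's structure (exact field names
+may vary across gNBsim versions), each profile is an entry that can
+be toggled on or off:
+
+.. code-block::
+
+   profiles:
+     - profileType: pdusessest
+       profileName: profile2
+       enable: true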
+
+
+Clean Up
+~~~~~~~~~~~~~~~~~
+
+We recommend continuing on to the next section before wrapping up, but
+when you are ready to tear down your Quick Start deployment of Aether,
+simply execute the following commands:
+
+.. code-block::
+
+   $ make aether-gnbsim-uninstall
+   $ make aether-5gc-uninstall
+   $ make aether-k8s-uninstall
+
+Note that while we stepped through the system one component at a time,
+OnRamp includes compound Make targets. For example, you can uninstall
+everything covered in this section by typing:
+
+.. code-block::
+
+   $ make aether-uninstall
+
+Look at the ``Makefile`` to see the available set of Make targets.
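+
+One quick (if crude) way to list those targets from the shell is to
+grep for rule definitions:
+
+.. code-block::
+
+   $ grep -E '^[A-Za-z0-9_.-]+:' Makefile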