Merge "Updated doc for Aether ROC tests"
diff --git a/.gitignore b/.gitignore
index 27d92e8..68c3e29 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,4 @@
 venv-docs
 _build
 .vscode
+.env
diff --git a/developer/images/aether-config-log.png b/developer/images/aether-config-log.png
new file mode 100644
index 0000000..2b349ef
--- /dev/null
+++ b/developer/images/aether-config-log.png
Binary files differ
diff --git a/developer/images/aether-roc-gui-console-loggedin.png b/developer/images/aether-roc-gui-console-loggedin.png
new file mode 100644
index 0000000..0914fd2
--- /dev/null
+++ b/developer/images/aether-roc-gui-console-loggedin.png
Binary files differ
diff --git a/developer/images/aether-roc-gui-copy-api-key.png b/developer/images/aether-roc-gui-copy-api-key.png
new file mode 100644
index 0000000..90fa304
--- /dev/null
+++ b/developer/images/aether-roc-gui-copy-api-key.png
Binary files differ
diff --git a/developer/images/aether-roc-gui-user-details.png b/developer/images/aether-roc-gui-user-details.png
new file mode 100644
index 0000000..49a0b18
--- /dev/null
+++ b/developer/images/aether-roc-gui-user-details.png
Binary files differ
diff --git a/developer/images/dex-ldap-login-page.png b/developer/images/dex-ldap-login-page.png
new file mode 100644
index 0000000..746928d
--- /dev/null
+++ b/developer/images/dex-ldap-login-page.png
Binary files differ
diff --git a/developer/images/dex-ldap-umbrella-well-known.png b/developer/images/dex-ldap-umbrella-well-known.png
new file mode 100644
index 0000000..ba8c357
--- /dev/null
+++ b/developer/images/dex-ldap-umbrella-well-known.png
Binary files differ
diff --git a/developer/images/postman-auth-token.png b/developer/images/postman-auth-token.png
new file mode 100644
index 0000000..be6f88e
--- /dev/null
+++ b/developer/images/postman-auth-token.png
Binary files differ
diff --git a/developer/roc.rst b/developer/roc.rst
index a3b2f9b..46f05b1 100644
--- a/developer/roc.rst
+++ b/developer/roc.rst
@@ -13,6 +13,10 @@
 As an alternative to the developer’s local machine, a remote environment can be set up, for example on
 cloud infrastructure such as cloudlab.
 
+.. note:: When ROC is deployed it is unsecured by default, with no Authentication or Authorization.
+    To secure ROC so that Authentication and Authorization can be tested, follow the
+    :ref:`securing_roc` guide below.
+
 Installing Prerequisites
 ------------------------
 
@@ -45,25 +49,16 @@
 
    cat > values-override.yaml <<EOF
    import:
-   onos-gui:
-      enabled: true
+     onos-gui:
+       enabled: true
 
    onos-gui:
-   ingress:
-      enabled: false
-
-   sdcore-adapter-v3:
-   prometheusEnabled: false
-
-   sdcore-exporter:
-   prometheusEnabled: false
-
-   onos-exporter:
-   prometheusEnabled: false
+     ingress:
+       enabled: false
 
    aether-roc-gui-v3:
-   ingress:
-      enabled: false
+     ingress:
+       enabled: false
    EOF
 
 Installing the Aether-Roc-Umbrella Helm chart
@@ -83,6 +78,8 @@
    kubectl wait pod -n micro-onos --for=condition=Ready -l type=config --timeout=300s
 
 
+.. _posting-the-mega-patch:
+
 Posting the mega-patch
 ----------------------
 
@@ -207,6 +204,156 @@
 
    kubectl -n micro-onos logs sdcore-adapter-v3-7468cc58dc-ktctz sdcore-adapter-v3
 
+.. _securing_roc:
+
+Securing ROC
+------------
+
+When deploying ROC with the **aether-roc-umbrella** chart, secure mode can be enabled by
+specifying an OpenID Connect (OIDC) issuer like::
+
+    helm -n micro-onos install aether-roc-umbrella sdran/aether-roc-umbrella \
+        --set onos-config.openidc.issuer=http://dex-ldap-umbrella:5556 \
+        --set aether-roc-gui-v3.openidc.issuer=http://dex-ldap-umbrella:5556
+
+The choice of OIDC issuer in this case is **dex-ldap-umbrella**.
+
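+If needed, the values supplied to the chart (including the issuer settings above) can be
+checked after installation with::
+
+    helm -n micro-onos get values aether-roc-umbrella
+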
+dex-ldap-umbrella
+~~~~~~~~~~~~~~~~~
+
+Dex is a cloud native OIDC Issuer that can act as a front end to several authentication systems,
+e.g. LDAP, Crowd, Google and GitHub.
+
+Dex-LDAP-Umbrella is a Helm chart that combines a Dex server with an LDAP installation, and an
+LDAP administration tool. It can be deployed into the same cluster namespace as **aether-roc-umbrella**.
+
+Its LDAP server is populated with 7 different users in the 2 example enterprises - *starbucks* and *acme*.
+
+When running, it should be available at *http://dex-ldap-umbrella:5556/.well-known/openid-configuration*.
+
+See `dex-ldap-umbrella <https://github.com/onosproject/onos-helm-charts/tree/master/dex-ldap-umbrella#readme>`_
+for more details.
+
+As an alternative, there is a public Dex server connected to the ONF Crowd server that allows
+ONF staff to log in with their own credentials.
+See `public dex <https://dex.aetherproject.org/dex/.well-known/openid-configuration>`_ for more details.
+
+.. note:: Your RBAC access to ROC will be limited by the groups you belong to in Crowd.
+
+Role Based Access Control
+~~~~~~~~~~~~~~~~~~~~~~~~~
+When secured, access to the configuration in ROC is limited by the **groups** that a user belongs to.
+
+* **AetherROCAdmin** - users in this group have full read **and** write access to all configuration.
+* *<enterprise>* - users in a group named after the lowercase name of an enterprise have **read** access to that enterprise.
+* **EnterpriseAdmin** - users in this group have read **and** write access to the enterprise they belong to.
+
+    For example, in *dex-ldap-umbrella* the user *Daisy Duke* belongs to *starbucks* **and**
+    *EnterpriseAdmin* and so has read **and** write access to items linked with the *starbucks* enterprise.
+
+    By comparison, the user *Elmer Fudd* belongs only to the *starbucks* group and so has only **read**
+    access to items linked with the *starbucks* enterprise.
+
+Requests to a Secure System
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+When configuration is retrieved or updated through *aether-config*, a Bearer Token in the
+form of a JSON Web Token (JWT) issued by the selected OIDC Issuer server must accompany
+the request as an Authorization Header.
+
+This applies to both the REST interface of *aether-roc-api* **and** the *gnmi* interface of
+*aether-config*.
+
+In the Aether ROC, a Bearer Token can be generated by logging in and selecting API Key from the
+menu. This pops up a window with a copy button, where the key can be copied.
+
+The key will expire after 24 hours.
+
+.. image:: images/aether-roc-gui-copy-api-key.png
+    :width: 580
+    :alt: Aether ROC GUI allows copying of API Key to clipboard
+
+When accessing the REST interface from a tool like Postman, include this Auth token.
+
+.. image:: images/postman-auth-token.png
+    :width: 930
+    :alt: Postman showing Authentication Token pasted in
+
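+The same token can also be supplied from the command line, for example with *curl* (a sketch only -
+the host, port and resource path below are placeholders and depend on how *aether-roc-api* is
+exposed in your deployment)::
+
+    # Token copied from the ROC GUI "API Key" dialog
+    export ROC_API_TOKEN="<paste API key here>"
+
+    # Every REST request must carry the key as a Bearer token in the Authorization header
+    curl -s -H "Authorization: Bearer ${ROC_API_TOKEN}" \
+        http://<aether-roc-api-host>:<port>/<resource-path>
+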
+Logging
+~~~~~~~
+The logs of *aether-config* will contain the **username** and **timestamp** of
+any **gnmi** call when security is enabled.
+
+.. image:: images/aether-config-log.png
+    :width: 887
+    :alt: aether-config log message showing username and timestamp
+
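+These log messages can be retrieved in the usual way, for example (the pod name is a placeholder -
+use ``kubectl -n micro-onos get pods`` to find the *aether-config* pod in your deployment)::
+
+    kubectl -n micro-onos logs <aether-config-pod>
+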
+Accessing GUI from an external system
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+To access the ROC GUI from a computer outside the cluster machine using *port-forwarding*,
+it is necessary to:
+
+* Ensure that all *port-forward* commands include **--address=0.0.0.0** (see the example after this list)
+* Add the IP address of the cluster machine to the **/etc/hosts** of the outside computer as::
+
+    <ip address of cluster> dex-ldap-umbrella aether-roc-gui
+
+* Verify that you can access the Dex server by its name at *http://dex-ldap-umbrella:5556/.well-known/openid-configuration*
+* Access the GUI through the hostname (rather than IP address) at *http://aether-roc-gui:8183*
+
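+A minimal sketch of the *port-forward* commands is shown below - the service names and target
+ports are illustrative and should be adjusted to match the services in your cluster::
+
+    # Forward the Dex OIDC issuer (reachable as http://dex-ldap-umbrella:5556)
+    kubectl -n micro-onos port-forward --address=0.0.0.0 service/dex-ldap-umbrella 5556:5556 &
+
+    # Forward the Aether ROC GUI (reachable as http://aether-roc-gui:8183)
+    kubectl -n micro-onos port-forward --address=0.0.0.0 service/aether-roc-gui-v3 8183:80 &
+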
+Troubleshooting Secure Access
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+While every effort has been made to ensure that securing Aether is simple and effective,
+some difficulties may arise.
+
+One of the most important steps is to validate that the OIDC Issuer (Dex server) can be reached
+from the browser. The **well-known** URL should be available and show that the important endpoints are correct.
+
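+This can be checked from the external computer with a simple request, e.g. (assuming the */etc/hosts*
+entry described above is in place)::
+
+    curl http://dex-ldap-umbrella:5556/.well-known/openid-configuration
+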
+.. image:: images/dex-ldap-umbrella-well-known.png
+    :width: 580
+    :alt: Dex Well Known page
+
+If you are logged out when accessing the Aether ROC GUI, any page of the application should
+redirect to the Dex login page.
+
+.. image:: images/dex-ldap-login-page.png
+    :width: 493
+    :alt: Dex Login page
+
+When logged in, the user details can be seen by clicking the user's name in the drop-down menu.
+This shows the **groups** that the user belongs to, and can be used to debug RBAC issues.
+
+.. image:: images/aether-roc-gui-user-details.png
+    :width: 700
+    :alt: User Details page
+
+If you are not redirected to the Dex Login Page when you sign out of the ROC GUI,
+check the Developer Console of the browser. The console should show the correct
+OIDC issuer (Dex server), and that Auth is enabled.
+
+.. image:: images/aether-roc-gui-console-loggedin.png
+    :width: 418
+    :alt: Browser Console showing correct configuration
+
+ROC Data Model Conventions and Requirements
+-------------------------------------------
+
+The MEGA-Patch described above will bring up a fully compliant sample data model.
+However, it may be useful to bring up your own data model, customized to a different
+site or sites. This subsection documents conventions and requirements for the Aether
+modeling within the ROC.
+
+The ROC models must be configured with the following:
+
+* A default enterprise with the id `defaultent`.
+* A default ip-domain with the id `defaultent-defaultip`.
+* A default site with the id `defaultent-defaultsite`.
+  This site should be linked to the `defaultent` enterprise.
+* A default device group with the id `defaultent-defaultsite-default`.
+  This device group should be linked to the `defaultent-defaultip` ip-domain
+  and the `defaultent-defaultsite` site.
+
+Each Enterprise Site must be configured with a default device group, and that default
+device group's name must end in the suffix `-default`. For example, `acme-chicago-default`.
+
 Some exercises to get familiar
 ------------------------------
 
@@ -229,3 +376,5 @@
 received)
 
 .. |ROCGUI| image:: images/rocgui.png
+    :width: 945
+    :alt: ROC GUI showing list of VCS
diff --git a/dict.txt b/dict.txt
index 4d21ab3..5026af7 100644
--- a/dict.txt
+++ b/dict.txt
@@ -91,3 +91,4 @@
 webpage
 webserver
 yaml
+Downlink
\ No newline at end of file
diff --git a/edge_deployment/bess_upf_deployment.rst b/edge_deployment/bess_upf_deployment.rst
index fb8e101..b4de3b7 100644
--- a/edge_deployment/bess_upf_deployment.rst
+++ b/edge_deployment/bess_upf_deployment.rst
@@ -60,7 +60,7 @@
 
 .. code-block:: shell
 
-   $ kubectl get nodes -o json | jq '.items[].status.available'
+   $ kubectl get nodes -o json | jq '.items[].status.allocatable'
    {
      "cpu": "95",
      "ephemeral-storage": "1770223432846",
@@ -104,12 +104,15 @@
          ip: "192.168.4.1/24"
          gateway: "192.168.4.254"
          vlan: 4
-     # Below is required only when connecting to 5G core
-     cfgFiles:
-       upf.json:
-         cpiface:
-           dnn: "8internet"
-           hostname: "upf"
+       # Override SRIOV resource name when using a NIC other than Intel
+       #sriov:
+       #  resourceName: "mellanox.com/mellanox_sriov_vfio"
+     # Add below when connecting to 5G core
+     #cfgFiles:
+     #  upf.json:
+     #    cpiface:
+     #      dnn: "8internet"
+     #      hostname: "upf"
 
 
 Update ``fleet.yaml`` in the same directory to let Fleet use the custom configuration when deploying
diff --git a/edge_deployment/tost_deployment.rst b/edge_deployment/sdfabric_deployment.rst
similarity index 87%
rename from edge_deployment/tost_deployment.rst
rename to edge_deployment/sdfabric_deployment.rst
index 7943b0e..29b8153 100644
--- a/edge_deployment/tost_deployment.rst
+++ b/edge_deployment/sdfabric_deployment.rst
@@ -2,8 +2,8 @@
    SPDX-FileCopyrightText: © 2020 Open Networking Foundation <support@opennetworking.org>
    SPDX-License-Identifier: Apache-2.0
 
-TOST Deployment
-===============
+SDFabric Deployment
+===================
 
 Update aether-pod-config
 ------------------------
@@ -30,9 +30,8 @@
    ├── onos
    │   ├── app_map.tfvars
    │   ├── backend.tf
+   │   ├── kubeconfig -> ../../../../common/tost/apps/onos/kubeconfig/
    │   ├── main.tf -> ../../../../common/tost/apps/onos/main.tf
-   │   ├── onos-netcfg.json
-   │   ├── onos-netcfg.json.license
    │   ├── onos.yaml
    │   └── variables.tf -> ../../../../common/tost/apps/onos/variables.tf
    ├── stratum
@@ -72,7 +71,6 @@
    project_name     = "tost"
    namespace_name   = "tost"
 
-   app_map = {}
 
 ONOS folder
 """""""""""
@@ -94,7 +92,7 @@
          target_namespace = "onos-tost"
          catalog_name     = "onos"
          template_name    = "onos-tost"
-         template_version = "0.1.18"
+         template_version = "0.1.40"
          values_yaml      = ["onos.yaml"]
       }
    }
@@ -144,17 +142,23 @@
             log4j2.logger.segmentrouting.level = DEBUG
 
       config:
-         server: gerrit.opencord.org
-         repo: aether-pod-configs
-         folder: staging/ace-menlo/tost/onos
-         file: onos-netcfg.json
-         netcfgUrl: http://onos-tost-onos-classic-hs.tost.svc:8181/onos/v1/network/configuration
-         clusterUrl: http://onos-tost-onos-classic-hs.tost.svc:8181/onos/v1/cluster
+        netcfg: >
+          {
+            "devices": {
+              "device:leaf1": {
+                "segmentrouting": {
+                  "ipv4NodeSid": 201,
+                  "ipv4Loopback": "10.128.100.38",
+                  "routerMac": "00:00:0A:80:64:26",
+                  "isEdgeRouter": true,
+                  "adjacencySids": []
+                }
+              }
+            }
+          }
 
-Once the **onos-tost** containers are deployed into Kubernetes,
-it will read **onos-netcfg.json** file from the **aether-pod-config** and please change the folder name to different location if necessary.
 
-**onos-netcfg.json** is environment dependent and please change it to fit your environment.
+**config.netcfg** is environment dependent; please change it to fit your environment.
 
 ..
    TODO: Add an example based on the recommended topology
@@ -175,7 +179,7 @@
          target_namespace = "stratum"
          catalog_name     = "stratum"
          template_name    = "stratum"
-         template_version = "0.1.9"
+         template_version = "0.1.13"
          values_yaml      = ["stratum.yaml"]
       }
    }
@@ -233,20 +237,19 @@
 .. code-block::
 
    apps=["telegraf"]
-
    app_map = {
-      telegraf= {
-         app_name         = "telegraf"
-         project_name     = "tost"
-         target_namespace = "telegraf"
-         catalog_name     = "influxdata"
-         template_name    = "telegraf"
-         template_version = "1.7.23"
-         values_yaml      = ["telegraf.yaml"]
-      }
+     telegraf = {
+       app_name         = "telegraf"
+       project_name     = "tost"
+       target_namespace = "tost"
+       catalog_name     = "aether"
+       template_name    = "tost-telegraf"
+       template_version = "0.1.1"
+       values_yaml      = ["telegraf.yaml"]
+     }
    }
 
-The **telegraf.yaml** used to override the Telegraf Helm Chart and its environment-dependent.
+The **telegraf.yaml** is used to override the ONOS-Telegraf Helm Chart and is environment-dependent.
 Please pay attention to the **inputs.addresses** section.
 Telegraf will read data from stratum so we need to specify all Tofino switch’s IP addresses here.
 Taking Menlo staging pod as example, there are four switches so we fill out 4 IP addresses.
@@ -284,23 +287,19 @@
 
 Assumed we would like to set up the **ace-example** pod in the production environment.
 
-1. open the **tools/ace_env**
+1. open the **tools/ace_config.yaml** (you should already have this file after finishing the VPN bootstrap stage)
 2. fill out all required variables
-3. import the environment variables from **tools/ace_env**
-4. perform the makefile command to generate configuration and directory for TOST
-5. update **onos-netcfg.json** for ONOS
-6. update **${hostname}-chassis-config.pb.txt** for Stratum
-7. update all switch IPs in **telegraf.yaml**
-8. commit your change and open the Gerrit patch
+3. perform the makefile command to generate configuration and directory for TOST
+4. update **onos.yaml** for ONOS
+5. update **${hostname}-chassis-config.pb.txt** for Stratum
+6. commit your change and open the Gerrit patch
 
 .. code-block:: console
 
-  vim tools/ace_env
-  source tools/ace_env
+  vim tools/ace_config.yaml
   make -C tools/  tost
-  vim production/ace-example/tost/onos/onos-netcfg.json
+  vim production/ace-example/tost/onos/onos.yaml
   vim production/ace-example/tost/stratum/*${hostname}-chassis-config.pb.txt**
-  vim production/ace-example/tost/telegraf/telegraf.yam
   git add commit
   git review
 
@@ -311,7 +310,7 @@
 To recap, most of the files in **tost** folder can be copied from existing examples.
 However, there are a few files we need to pay extra attentions to.
 
-- **onos-netcfg.json** in **onos** folder
+- **onos.yaml** in **onos** folder
 - Chassis config in **stratum** folder
   There should be one chassis config for each switch. The file name needs to be
   **${hostname}-chassis-config.pb.txt**
@@ -574,36 +573,20 @@
 
 .. code-block:: yaml
 
+
    - project:
-         name: deploy-menlo-tost-dev
-         rancher_cluster: "menlo-tost-dev"
-         terraform_dir: "testing/menlo-tost"
-         rancher_api: "{rancher_testing_access}"
-         jobs:
-            - "deploy"
-            - "deploy-onos"
-            - "deploy-stratum"
-            - "deploy-telegraf"
-   - project:
-         name: deploy-menlo-tost-staging
-         rancher_cluster: "ace-menlo"
-         terraform_dir: "staging/ace-menlo"
-         rancher_api: "{rancher_staging_access}"
-         jobs:
-            - "deploy"
-            - "deploy-onos"
-            - "deploy-stratum"
-            - "deploy-telegraf"
-   - project:
-         name: deploy-menlo-production
-         rancher_cluster: "ace-menlo"
-         terraform_dir: "production/ace-menlo"
-         rancher_api: "{rancher_production_access}"
-         jobs:
-            - "deploy"
-            - "deploy-onos"
-            - "deploy-stratum"
-            - "deploy-telegraf"
+       name: deploy-tucson-pairedleaves-dev
+       rancher_cluster: "dev-pairedleaves-tucson"
+       terraform_dir: "staging/dev-pairedleaves-tucson"
+       rancher_api: "{rancher_staging_access}"
+       properties:
+         - onf-infra-onfstaff-private
+       jobs:
+         - "deploy"
+         - "deploy-onos"
+         - "deploy-stratum"
+         - "deploy-telegraf"
+         - "debug-tost"
 
 
 Create Your Own Jenkins Job
@@ -617,17 +600,20 @@
 
 .. code-block:: yaml
 
-   - project:
-         name: deploy-tost-example-production
-         rancher_cluster: "ace-test-example"
-         terraform_dir: "production/tost-example"
-         rancher_api: "{rancher_production_access}"
-         jobs:
-            - "deploy"
-            - "deploy-onos"
-            - "deploy-stratum"
-            - "deploy-telegraf"
 
+   - project:
+       name: deploy-tost-example-production
+       rancher_cluster: "ace-test-example"
+       terraform_dir: "production/tost-example"
+       rancher_api: "{rancher_production_access}"
+       properties:
+         - onf-infra-onfstaff-private
+       jobs:
+         - "deploy"
+         - "deploy-onos"
+         - "deploy-stratum"
+         - "deploy-telegraf"
+         - "debug-tost"
 
 .. note::
 
diff --git a/index.rst b/index.rst
index 75e09b6..e390ac0 100644
--- a/index.rst
+++ b/index.rst
@@ -14,8 +14,8 @@
    :glob:
 
    operations/procedures
-   operations/sop
    operations/subscriber
+   operations/vcs
 
 .. toctree::
    :maxdepth: 3
@@ -31,7 +31,7 @@
    edge_deployment/vpn_bootstrap
    edge_deployment/runtime_deployment
    edge_deployment/bess_upf_deployment
-   edge_deployment/tost_deployment
+   edge_deployment/sdfabric_deployment
    edge_deployment/connectivity_service_update
    edge_deployment/enb_installation
    edge_deployment/troubleshooting
@@ -52,11 +52,11 @@
    :glob:
 
    testing/about_system_tests
-   testing/pdp_testing
-   testing/fabric_testing
    testing/sdcore_testing
    testing/aether-roc-tests
    testing/acceptance_specification
+   testing/fabric_testing
+   testing/pdp_testing
 
 .. toctree::
    :maxdepth: 3
diff --git a/operations/images/aether-roc-gui-add-vcs.png b/operations/images/aether-roc-gui-add-vcs.png
new file mode 100644
index 0000000..6c01949
--- /dev/null
+++ b/operations/images/aether-roc-gui-add-vcs.png
Binary files differ
diff --git a/operations/images/aether-roc-gui-basket-view-new-range.png b/operations/images/aether-roc-gui-basket-view-new-range.png
new file mode 100644
index 0000000..98884f7
--- /dev/null
+++ b/operations/images/aether-roc-gui-basket-view-new-range.png
Binary files differ
diff --git a/operations/images/aether-roc-gui-devicegroup-add.png b/operations/images/aether-roc-gui-devicegroup-add.png
new file mode 100644
index 0000000..2ac66be
--- /dev/null
+++ b/operations/images/aether-roc-gui-devicegroup-add.png
Binary files differ
diff --git a/operations/images/aether-roc-gui-devicegroup-edit.png b/operations/images/aether-roc-gui-devicegroup-edit.png
new file mode 100644
index 0000000..f9ca841
--- /dev/null
+++ b/operations/images/aether-roc-gui-devicegroup-edit.png
Binary files differ
diff --git a/operations/images/aether-roc-gui-devicegroup-monitor.png b/operations/images/aether-roc-gui-devicegroup-monitor.png
new file mode 100644
index 0000000..b83be30
--- /dev/null
+++ b/operations/images/aether-roc-gui-devicegroup-monitor.png
Binary files differ
diff --git a/operations/images/aether-roc-gui-devicegroups-list.png b/operations/images/aether-roc-gui-devicegroups-list.png
new file mode 100644
index 0000000..1abf3a0
--- /dev/null
+++ b/operations/images/aether-roc-gui-devicegroups-list.png
Binary files differ
diff --git a/operations/images/aether-roc-gui-sites-list.png b/operations/images/aether-roc-gui-sites-list.png
new file mode 100644
index 0000000..a51cf2f
--- /dev/null
+++ b/operations/images/aether-roc-gui-sites-list.png
Binary files differ
diff --git a/operations/images/aether-roc-gui-ue-monitor.png b/operations/images/aether-roc-gui-ue-monitor.png
new file mode 100644
index 0000000..208b4da
--- /dev/null
+++ b/operations/images/aether-roc-gui-ue-monitor.png
Binary files differ
diff --git a/operations/images/aether-roc-gui-vcs-edit-showing-app-dg.png b/operations/images/aether-roc-gui-vcs-edit-showing-app-dg.png
new file mode 100644
index 0000000..45cada2
--- /dev/null
+++ b/operations/images/aether-roc-gui-vcs-edit-showing-app-dg.png
Binary files differ
diff --git a/operations/images/aether-roc-gui-vcs-edit.png b/operations/images/aether-roc-gui-vcs-edit.png
new file mode 100644
index 0000000..dda9994
--- /dev/null
+++ b/operations/images/aether-roc-gui-vcs-edit.png
Binary files differ
diff --git a/operations/images/aether-roc-gui-vcs-list.png b/operations/images/aether-roc-gui-vcs-list.png
new file mode 100644
index 0000000..db1361f
--- /dev/null
+++ b/operations/images/aether-roc-gui-vcs-list.png
Binary files differ
diff --git a/operations/images/aether-roc-vcs-monitor.png b/operations/images/aether-roc-vcs-monitor.png
new file mode 100644
index 0000000..2c75164
--- /dev/null
+++ b/operations/images/aether-roc-vcs-monitor.png
Binary files differ
diff --git a/operations/images/monitor-icon.png b/operations/images/monitor-icon.png
new file mode 100644
index 0000000..046b231
--- /dev/null
+++ b/operations/images/monitor-icon.png
Binary files differ
diff --git a/operations/sop.rst b/operations/sop.rst
deleted file mode 100644
index 94fd81a..0000000
--- a/operations/sop.rst
+++ /dev/null
@@ -1,8 +0,0 @@
-..
-   SPDX-FileCopyrightText: © 2020 Open Networking Foundation <support@opennetworking.org>
-   SPDX-License-Identifier: Apache-2.0
-
-Standard Operating Procedures
-=============================
-
-
diff --git a/operations/subscriber.rst b/operations/subscriber.rst
index edd05f9..87ea6f5 100644
--- a/operations/subscriber.rst
+++ b/operations/subscriber.rst
@@ -8,6 +8,11 @@
 Subscriber management includes workflows associated with provisioning new subscribers, removing
 existing subscribers, and associating subscribers with virtual connectivity services.
 
+.. note::
+    This section refers to a fully installed ROC GUI, properly secured and with Enterprises, Connectivity Services
+    and Sites already configured by a ROC Administrator. The examples shown below are taken from an example
+    configuration shipped with the ROC - the "MEGA Patch" (see :ref:`posting-the-mega-patch`).
+
 Provisioning a new UE
 ---------------------
 
@@ -51,31 +56,98 @@
 TODO: This file will probably be placed under gitops control once the 5G ROC is deployed. Document
 the new location of the file.
 
+.. _configure_device_group:
+
 Configure Connectivity Service for a new UE
 -------------------------------------------
 
-To receive connectivity service, a UE must be added to a DeviceGroup. An enterprise is typically
-organized into one or more sites, each site which may contain one or more DeviceGroups. Navigate
-to the site you where the device will be deployed, find the appropriate device group, and add
+To receive connectivity service, a UE must be added to a DeviceGroup. An Enterprise is typically
+organized into one or more Sites, each of which may contain one or more DeviceGroups. Navigate
+to the appropriate DeviceGroup associated with the Site you wish to deploy on, and add
 the UE's IMSI to the DeviceGroup.
 
-TODO: Describe GUI process and add Picture
+The Site details can be seen by navigating to the Site list view.
 
-Note: For 4G service, a UE may participate in at most one DeviceGroup, and that DeviceGroup may
-participate in at most one VCS. For 5G service, a UE can participate in many DeviceGroups, and each
-DeviceGroup may participate in many VCSes.
+.. image:: images/aether-roc-gui-sites-list.png
+    :width: 755
+    :alt: Sites List View in Aether ROC GUI showing site details
+
+In the ROC GUI, navigate to the Device Groups list view to see the list of
+Device Groups and their associations with Sites.
+
+    |DEVICEGROUP-LIST|
+
+In the DeviceGroup *New York POS* example above, an Imsi Range **store** of **70-73** means the set of Imsi
+IDs (when the *format* specifier of the *starbucks-newyork* Site is applied to
+its *MCC*, *MNC* and *Enterprise*) of:
+
+* 021032002000070 (021-032-002-000070)
+* 021032002000071
+* 021032002000072
+* 021032002000073
+
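+For illustration, the same expansion can be reproduced with a small shell loop (the *021032002*
+prefix is the MCC, MNC and Enterprise from the example breakdown above, and the Imsi ID is zero
+padded to the 6 **S** characters of the *format* specifier)::
+
+    # Expand the Imsi Range 70-73 into full 15 digit IMSIs
+    for id in $(seq 70 73); do
+        printf "021032002%06d\n" "${id}"
+    done
+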
+.. note::
+    For 4G service, a UE may participate in at most one DeviceGroup, and that DeviceGroup may
+    participate in at most one VCS. For 5G service, a UE can participate in many DeviceGroups, and each
+    DeviceGroup may participate in many VCSes.
+
+Editing
+*******
+Edit the DeviceGroup by clicking on the Edit icon, and in the Edit page,
+adjust an existing range or create a new range (by clicking on the `+` icon).
+
+    |DEVICEGROUP-EDIT|
+
+The following restrictions apply:
+
+#. The Imsi ID specified in "from" or "to" is relative to *MCC*, *MNC* and *Enterprise* of the Site.
+#. The maximum value of an Imsi ID is defined by the number of **S** characters in the `format` specifier of the Site.
+#. Imsi Ranges are contiguous ranges of Imsi IDs. To accommodate non-contiguous Imsi IDs, add extra Ranges.
+#. Imsi Ranges can have a maximum span of 100 between "from" and "to" Imsi IDs. Break bigger spans into multiple ranges.
+#. Imsi Ranges within a DeviceGroup cannot overlap.
+
+When the entries on the DeviceGroup edit page are valid, the **Update** button becomes available:
+
+* Click this to add the changes to the **Basket** of configuration changes
+* Observe that the **Basket** icon (2nd icon from top right) displays the number of changes
+
+.. note::
+    The changes are not committed to **aether-config** until the **Basket** is committed.
+    This allows several changes to be gathered together in one transaction and checked before committing.
+
+.. _committing:
+
+Committing
+**********
+To commit the changes:
+
+#. click on the **Basket** icon (2nd icon from top right) to see the Basket view
+#. inspect the changes to be committed (optional)
+#. click **commit** to perform the commit
+#. observe the temporarily displayed response showing the success or failure of the commit
+
+.. image:: images/aether-roc-gui-basket-view-new-range.png
+    :width: 635
+    :alt: Basket View with some changes ready to be committed
 
 Remove Connectivity Service from an existing UE
 -----------------------------------------------
 
-Using the ROC GUI, navigate to the Device Group that contains the UE,
+Using the ROC GUI, navigate to the DeviceGroup that contains the UE,
 then remove that UE's IMSI from the list. If you are removing a single UE, and the
 DeviceGroup is configured with a range specifier that includes several IMSIs,
 then it might be necessary to split that range into multiple ranges.
 
-TODO: Describe GUI process and add Picture
+* If the UE to be removed has an Imsi ID in the middle of an existing Imsi Range:
+    click the *trash can* icon next to that *Imsi Range* and
+    use the *+* icon to add new Ranges for the remaining Imsi IDs.
+* Alternatively, if the UE to be removed has an Imsi ID at the start or end of an existing Imsi Range:
+    adjust the *from* or *to* value accordingly.
 
-Note: The UE may continue to have connectivity until its next detach/attach cycle.
+    |DEVICEGROUP-EDIT|
+
+.. note::
+    The UE may continue to have connectivity until its next detach/attach cycle.
 
 Create a new DeviceGroup
 ------------------------
@@ -84,15 +156,35 @@
 a default DeviceGroup, but additional DeviceGroups may be created. For example, placing all IP
 Cameras in an my-site-ip-cameras DeviceGroup would allow you to group IP Cameras together.
 
-TODO: Describe GUI process and add Picture
+To add a DeviceGroup, navigate to the list of DeviceGroups and click `Add` in the upper right.
+(This may be greyed out if you do not have appropriate permissions).
+
+* Specify a unique **id** for the DeviceGroup
+    40 characters max and only alphanumeric and `-`, `_` and `.` allowed
+* Choose a *Site* from the preconfigured list
+    It will not be possible to add Imsi Ranges until the Site is chosen
+* Imsi Ranges can be added at this stage or later
+
+.. image:: images/aether-roc-gui-devicegroup-add.png
+    :width: 490
+    :alt: Adding a new Device Group requires an *id* and choosing a Site
 
 Delete a DeviceGroup
 --------------------
 
-IF a DeviceGroup is no longer needed, it can be deleted. Deleting a DeviceGroup will not cause
+If a DeviceGroup is no longer needed, it can be deleted. Deleting a DeviceGroup will not cause
 the UEs participating in the group to automatically be moved elsewhere.
 
-TODO: Describe GUI process and add Picture
+.. note::
+    If a Device Group is being used by an existing VCS, then it cannot be removed.
+    Delete the VCS first, and then the DeviceGroup.
+
+A DeviceGroup can be deleted from the DeviceGroup list view, by clicking the *trash can* icon
+next to it. The deletion is added to the **Basket** directly. Navigate to the *Basket View*
+to commit the change.
+
+    |DEVICEGROUP-LIST|
+
 
 Add a DeviceGroup to a Virtual Connectivity Service (VCS)
 ---------------------------------------------------------
@@ -100,10 +192,76 @@
 In order to participate in the connectivity service, a DeviceGroup must be associated with
 a Virtual Connectivity Service (VCS).
 
-TODO: Describe GUI process and add Picture
+Navigate to the *VCS* list view to see the list of VCSes and their associations with DeviceGroups.
+
+    |VCS-LIST|
+
+To edit a *VCS* click on the *edit* button next to it in this list.
+
+This brings up the VCS edit page where (among many other things) zero, one or many
+DeviceGroups can be associated with it.
+
+* Click the *trash can* symbol to remove a DeviceGroup from the VCS
+* Click the *+* icon to add a DeviceGroup
+* Click the *Allow* slider to Allow or Disallow the DeviceGroup
+    This is a way of disabling or re-enabling the DeviceGroup within a VCS without having to remove it
+
+.. image:: images/aether-roc-gui-vcs-edit.png
+    :width: 562
+    :alt: VCS Edit View in Aether ROC GUI showing DeviceGroup association editing
 
 Remove a DeviceGroup from a Virtual Connectivity Service (VCS)
 --------------------------------------------------------------
 
-TODO: Describe GUI process and add Picture
+The procedure is covered in the above section.
 
+.. _monitor_device_group:
+
+Monitoring a DeviceGroup
+------------------------
+
+The performance of a Device Group can be monitored by clicking its |monitor| (**monitor**) icon,
+which can be reached in several ways:
+
+* From the *VCS Monitor* page, which shows all DeviceGroups belonging to a VCS.
+* From the DeviceGroup List Page - click the |monitor| icon for the DeviceGroup.
+* When editing an existing DeviceGroup - in the Edit page, the |monitor| icon is next to the *id*
+
+The *monitor* page itself shows:
+
+* An information panel for each *IMSI Range* in the *DeviceGroup*
+
+    * Each UE has a |monitor| button that allows further drill down
+    * Each UE is shown with its fully expanded IMSI number (a combination of *Imsi ID* and *Site* parameters)
+* An information panel for the *Site* and *IP Domain* of the *DeviceGroup*
+
+    * Clicking on the down arrow expands each panel
+
+.. image:: images/aether-roc-gui-devicegroup-monitor.png
+    :width: 600
+    :alt: DeviceGroup Monitor View with UE links and information panels
+
+The per UE Monitor panel contains:
+
+* a graph of the UE's Throughput and Latency over the last 15 minutes
+* a graph of the UE's connectivity over the last 15 minutes
+
+.. image:: images/aether-roc-gui-ue-monitor.png
+    :width: 600
+    :alt: UE Monitor View with throughput, latency and connectivity graphs
+
+
+.. |monitor| image:: images/monitor-icon.png
+    :width: 28
+    :alt: Monitor icon
+
+.. |DEVICEGROUP-LIST| image:: images/aether-roc-gui-devicegroups-list.png
+    :width: 755
+    :alt: Device Groups List View in Aether ROC GUI showing Site association and Imsi Range of all DeviceGroups
+
+.. |DEVICEGROUP-EDIT| image:: images/aether-roc-gui-devicegroup-edit.png
+    :width: 755
+    :alt: Device Groups Edit View in Aether ROC GUI showing Imsi Range
+
+.. |VCS-LIST| image:: images/aether-roc-gui-vcs-list.png
+    :width: 920
+    :alt: VCS List View in Aether ROC GUI showing DeviceGroup association
diff --git a/operations/vcs.rst b/operations/vcs.rst
new file mode 100644
index 0000000..2c87613
--- /dev/null
+++ b/operations/vcs.rst
@@ -0,0 +1,128 @@
+..
+   SPDX-FileCopyrightText: © 2020 Open Networking Foundation <support@opennetworking.org>
+   SPDX-License-Identifier: Apache-2.0
+
+VCS Management
+==============
+
+A **VCS** (Virtual Cellular Service) is a slice of network access for a set of UEs with a defined set of
+QoS parameters.
+
+Defining a VCS requires it to be associated with:
+
+* one or more **Application**
+* one or more **DeviceGroup**
+* an **AccessPointList**
+* a **UPF**
+* a **TrafficClass**
+
+and must also be created with attributes like:
+
+* **SD** (slice differentiator)
+* **SST** (slice/service type)
+* **Uplink** (data rate in Mbps)
+* **Downlink** (data rate in Mbps)
+
+Provisioning a new VCS
+----------------------
+
+.. note::
+    This section refers to a fully installed ROC GUI, properly secured and with Enterprises, Connectivity Services,
+    Applications, and Sites already configured by a ROC Administrator. The examples shown below are taken from an example
+    configuration shipped with the ROC - the "MEGA Patch" (see :ref:`posting-the-mega-patch`).
+
+This procedure assumes you have already set up one or more DeviceGroups, containing
+configuration for a number of UEs. Follow the procedure in :ref:`configure_device_group`
+to configure DeviceGroups.
+
+To add a new VCS, click the **Add** button in the VCS List View.
+
+    |VCS-LIST|
+
+In the resulting VCS edit page:
+
+#. enter a VCS ID (this must be unique across the whole system).
+#. enter a Display Name (optional).
+#. enter a Description (optional).
+#. choose a template
+
+    * this will copy over values from that template, which may be edited individually at this creation stage
+    * they will not be editable afterwards.
+#. choose an *Access Point List* from the drop-down list.
+#. choose a *UPF* from the drop-down list.
+
+.. image:: images/aether-roc-gui-add-vcs.png
+    :width: 500
+    :alt: VCS Edit page adding a new VCS
+
+One or more Applications and/or DeviceGroups can be associated with the VCS at this
+stage or later, by clicking on the *+* icon.
+
+When chosen, they appear as a list in the VCS edit page, and are automatically enabled/allowed:
+
+.. image:: images/aether-roc-gui-vcs-edit-showing-app-dg.png
+    :width: 300
+    :alt: VCS Edit showing Application and Device Group choice lists
+
+Click on **Update** to add these changes to the *Basket*.
+
+Click **Commit** in the *Basket View* to commit the changes. See :ref:`committing`.
+
+Editing an existing VCS
+-----------------------
+When editing an existing VCS, it will not be possible to change:
+
+* the **id**
+* the **template** or any of the parameters beneath it
+
+Existing *Applications* or *DeviceGroups* can be removed by clicking the *trash can* icon next to them.
+
+Alternatively, existing *Applications* or *DeviceGroups* can be *disabled/disallowed* by clicking the slider
+next to them. This has the same effect as removing them, without having to delete the association.
+
+If one of the *DeviceGroups* or *Applications*, or the *Access Point List*, *Traffic Class* or *UPF*
+itself is modified, then the changes will take effect on the VCS whenever the changes to those
+objects are committed.
+
+Removing a VCS
+--------------
+Removing a VCS can be achieved by clicking the *trash can* icon next to the VCS in the
+VCS List page.
+
+   |VCS-LIST|
+
+Monitoring a VCS
+----------------
+
+The performance of a VCS can be monitored by clicking its |monitor| (**monitor**) icon,
+which can be reached in several ways:
+
+* From the **Dashboard** page, which shows all VCSes belonging to an Enterprise.
+* From the VCS List Page - click the |monitor| icon for the VCS.
+* When editing an existing VCS - in the Edit page, the |monitor| icon is next to the *id*
+
+The *monitor* page itself shows:
+
+* A stacked bar graph of the Connectivity count of UEs over the last 15 minutes
+
+    * This shows the count of UEs in the 3 different states - Active, Inactive and Idle
+* A line graph of the Throughput, Latency and Jitter of the VCS over the last 15 minutes
+* The live Throughput, Latency and Jitter values
+* Information panels for each sub-object of the VCS
+
+    * Clicking on the down arrow expands each panel
+
+Each DeviceGroup associated with the VCS has its own |monitor| button that allows
+monitoring of that DeviceGroup. See :ref:`monitor_device_group`.
+
+.. image:: images/aether-roc-vcs-monitor.png
+    :width: 920
+    :alt: VCS Monitor View with Connectivity and Performance Charts
+
+
+.. |VCS-LIST| image:: images/aether-roc-gui-vcs-list.png
+    :width: 920
+    :alt: VCS List View in Aether ROC GUI
+
+.. |monitor| image:: images/monitor-icon.png
+    :width: 28
+    :alt: Monitor icon