AETHER-2011 (part1)

Add basic 1.5 release documentation framework

Update edge deployment docs

- Describe different topologies and add deployment diagrams
- Split out Pronto BOM and diagrams
- Make BESS diagram an SVG

Update Sphinx and modules; fix linkcheck to use a timeout/retry for
faulty webservers.
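
The ``conf.py`` settings this implies might look roughly like the following
(the option names are standard Sphinx linkcheck options; the values are
illustrative assumptions, not the ones committed):

```python
# Illustrative conf.py fragment: bound linkcheck requests so one faulty
# webserver cannot stall the build, and retry transient failures before
# reporting a link as broken. Values are assumptions.
linkcheck_timeout = 15   # seconds before a request is abandoned
linkcheck_retries = 3    # attempts before a link is reported broken
```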

Fix spelling and formatting issues across the entire site, remove "Smart
quotes" that cause the spellchecker to throw errors. Add many more
dictionary entries. Make spelling errors fail the build. Fix all spelling
and grammar errors that triggered failures.
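
Making spelling failures break the build with ``sphinxcontrib-spelling``
could be configured roughly like this (the extension and options are real;
the word-list filename and the use of ``sphinx-build -W`` are assumptions):

```python
# Illustrative conf.py fragment: enable the spelling checker with a custom
# dictionary of accepted words. With spelling_warning = True, misspellings
# are emitted as warnings, so building with `sphinx-build -W` turns them
# into build failures. The word-list filename is an assumption.
extensions = ["sphinxcontrib.spelling"]
spelling_word_list_filename = "dict.txt"  # extra accepted words, one per line
spelling_warning = True
```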

Add autosectionlabel, and manage duplicate section names where they
existed.
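
Enabling ``autosectionlabel`` and heading off label collisions might look
like this sketch (both options are standard Sphinx; shown for illustration):

```python
# Illustrative conf.py fragment: autosectionlabel generates a :ref: target
# for every section title. Prefixing labels with the document path keeps
# two pages that share a section name (e.g. "Overview") from colliding.
extensions = ["sphinx.ext.autosectionlabel"]
autosectionlabel_prefix_document = True
```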

Update readme on image/diagram embedding

Add docs on PoE power cycle with Aruba switches (Pronto)

Change-Id: I7f9f7afae13788f9fe29bfe2683a295ba7b8914e
diff --git a/testing/about_system_tests.rst b/testing/about_system_tests.rst
index 02f97d5..fcba383 100644
--- a/testing/about_system_tests.rst
+++ b/testing/about_system_tests.rst
@@ -9,7 +9,7 @@
 provides highly scalable and low maintenance code which will help cover various
 categories of tests.  Framework includes libraries and tools that allows both
 component level and integration level tests. Robot Framework will be used for
-covering integration tests. Component level test coverages have been
+covering integration tests. Component level test coverage has been
 accomplished by leveraging the existing test frameworks that were developed in
 their respective projects. Component level tests include tests for TOST, PDP,
 SD-CORE areas. For detailed information on component tests, please see their
diff --git a/testing/acceptance_specification.rst b/testing/acceptance_specification.rst
index c4c72e1..32617ea 100644
--- a/testing/acceptance_specification.rst
+++ b/testing/acceptance_specification.rst
@@ -205,7 +205,7 @@
 |6. Find out the S1SetupRequest message and       |                                    |
 |   open the detailed packet information          |                                    |
 |                                                 |                                    |
-|7. Go to "Item 0: id-Global-ENB-ID" section      |                                    |
+|7. Go to "Item 0: id-Global-eNB-ID" section      |                                    |
 |   and check "eNB-ID: macroENB-ID"               |                                    |
 +-------------------------------------------------+------------------------------------+
 
diff --git a/testing/aether-roc-tests.rst b/testing/aether-roc-tests.rst
index 023fec4..2b5aa29 100644
--- a/testing/aether-roc-tests.rst
+++ b/testing/aether-roc-tests.rst
@@ -1,5 +1,5 @@
-Instructions For Running The ROC Tests
-======================================
+ROC Testing
+===========
 
 The REST API and the GUI of the Aether ROC is tested utilizing the Robot Framework.
 The tests are located inside the aether-system-tests repository and they are run nightly using
@@ -11,23 +11,24 @@
 This can be done with the use of Helm (see instructions on
 `this page <https://docs.onosproject.org/onos-docs/docs/content/developers/deploy_with_helm/>`_).
 
-Additionally, it is necessary to add the sdran chart repo with the following command:
+Additionally, it is necessary to add the SD-RAN chart repo with the following command:
 
 .. code-block:: shell
 
     helm repo add sdran --username USER --password PASSWORD https://sdrancharts.onosproject.org
 
-where USER and PASSWORD can be obtained from the Aether Login Information file, which is
-accessibble to the ``onfstaff`` group.
+where USER and PASSWORD can be obtained from the Aether Login Information file,
+which is accessible to the ``onfstaff`` group.
 
-Finally, the ROC GUI tests are running on the Firefox browser, so it is nesessary to have the Firefox browser and the
-Firefox web driver (geckodriver) installed on the system in order to run these tests.
+Finally, the ROC GUI tests run in the Firefox browser, so it is necessary
+to have the Firefox browser and the Firefox web driver (``geckodriver``)
+installed on the system in order to run these tests.
 
 Running the ROC API tests
 -------------------------
 Follow the steps below to access the ROC API:
 
-1. Deploy the aether-roc-umbrella chart from the sdran repo with the following command:
+1. Deploy the ``aether-roc-umbrella`` chart from the SD-RAN repo with the following command:
 
 .. code-block:: shell
 
@@ -125,13 +126,17 @@
 
 Running the ROC GUI tests
 -------------------------
-We are testing the ROC GUI by installing the ROC on a local dex server. To install the dex server, please follow
-the steps under the "Helm install" section of the Readme file in `this repository <https://github.com/onosproject/onos-helm-charts/tree/master/dex-ldap-umbrella>`_.
 
-Once that you have installed the ``dex-ldap-umbrella`` chart, follow the steps below to install the ROC
-on a local dex server:
+We test the ROC GUI by installing the ROC on a local Dex server. To install the
+Dex server, please follow the steps under the "Helm install" section of the
+readme file in `this repository
+<https://github.com/onosproject/onos-helm-charts/tree/master/dex-ldap-umbrella>`_.
 
-1. Deploy the aether-roc-umbrella chart from the sdran repo with the following command:
+Once you have installed the ``dex-ldap-umbrella`` chart, follow the steps
+below to install the ROC on a local Dex server:
+
+1. Deploy the ``aether-roc-umbrella`` chart from the SD-RAN repo with the
+   following command:
 
 .. code-block:: shell
 
@@ -176,7 +181,7 @@
 
     kubectl -n micro-onos port-forward $(kubectl -n micro-onos get pods -l type=api -o name) 8181
 
-3. Finalluy, port-forward the dex service to port 5556:
+3. Finally, port-forward the Dex service to port 5556:
 
 .. code-block:: shell
 
@@ -227,11 +232,11 @@
 
     mkdir results
 
-7. Run any Robot Framework test file from the ``3_0_0`` directory.
-Each test file corresponds to one of the Aether 3.0.0 models.
+7. Run any Robot Framework test file from the ``3_0_0`` directory.  Each test
+   file corresponds to one of the Aether 3.0.0 models.
 
 .. code-block:: shell
 
     robot -d results <model-name>.robot
 
-| This will generate test reports and logs in the ``results`` directory.
+This will generate test reports and logs in the ``results`` directory.
diff --git a/testing/pdp_testing.rst b/testing/pdp_testing.rst
index 10108a6..65e0012 100644
--- a/testing/pdp_testing.rst
+++ b/testing/pdp_testing.rst
@@ -3,42 +3,51 @@
    SPDX-License-Identifier: Apache-2.0
 
 PDP Testing
-==============
+===========
 
 
 Test Framework
 --------------
 
-We use `TestVectors`_ to connect to Stratum hardware switches using gRPC and execute gNMI and P4Runtime tests.
-For Aether, we convert existing ptf unit tests written for fabric-tna to TestVectors and execute them on Stratum
-hardware switches in loopback mode using `TestVectors-Runner`_.
+We use `TestVectors`_ to connect to Stratum hardware switches using gRPC and
+execute gNMI and P4Runtime tests.
 
-We use Jenkins to schedule and trigger our tests which run against a set of hardware switches.
+For Aether, we convert existing ptf unit tests written for ``fabric-tna`` to
+TestVectors and execute them on Stratum hardware switches in loopback mode
+using `TestVectors-Runner`_.
+
+We use Jenkins to schedule and trigger our tests, which run against a set of
+hardware switches.
 
 Test Scenarios
 -------------------
 
-fabric-tna is a P4 program based on the Tofino Native Architecture(TNA). Currently 4 profiles are supported for
-compiling the fabric-tna P4 program.
+``fabric-tna`` is a P4 program based on the Tofino Native Architecture (TNA).
+Currently, four profiles are supported for compiling the ``fabric-tna`` P4 program.
 
-1. fabric
-2. fabric-spgw
-3. fabric-int
-4. fabric-spgw-int
+1. ``fabric``
+2. ``fabric-spgw``
+3. ``fabric-int``
+4. ``fabric-spgw-int``
 
-Based on the ptf unit tests for fabric-tna, we generate TestVectors for each profile to run on Stratum hardware
-switches. The names of generated tests can be found in `Test List`_.
+Based on the ptf unit tests for ``fabric-tna``, we generate TestVectors for each
+profile to run on Stratum hardware switches. The names of generated tests can
+be found in `Test List`_.
 
 Prerequisites to Generate, Run Tests
---------------------------------------
+------------------------------------
+
 1. Stratum running on a hardware switch with 4 ports running in loopback mode.
-2. Create a port-map similar to `port-map`_. The three fields are ptf_port, p4_port and iface_name.
-   ptf_port is the port id used in the test, p4_port is a valid port id from Stratum hardware.
-   iface_name is ignored in TestVector generation, this can be any value.
+
+2. Create a port-map similar to `port-map`_. The three fields are ``ptf_port``,
+   ``p4_port`` and ``iface_name``.  ``ptf_port`` is the port id used in the
+   test, and ``p4_port`` is a valid port id from Stratum hardware.
+   ``iface_name`` is ignored in TestVector generation, so it can be any value.
 
 How to Generate TestVectors
---------------------------------------
-1. Checkout fabric-tna repo.
+---------------------------
+
+1. Check out the ``fabric-tna`` repo.
 
 .. code-block:: shell
 
@@ -62,10 +71,11 @@
 
 switch_ip and switch_port are IP and port where Stratum is running.
 cpu_port is 192 for dual pipe switch and 320 for quad pipe switch.
-Generated TestVectors are stored under 'fabric-tna/ptf/TestVectors'.
+Generated TestVectors are stored under ``fabric-tna/ptf/TestVectors``.
 
 How to Run TestVectors
---------------------------------------
+----------------------
+
 1. Checkout `TestVectors-Runner`_ repo.
 
 .. code-block:: shell
@@ -73,7 +83,7 @@
    $git clone https://github.com/stratum/testvectors-runner -b support-fabric-tna
    $cd testvectors-runner
 
-2. Build tv-runner docker image.
+2. Build the ``tv-runner`` docker image.
 
 .. code-block:: shell
 
@@ -103,10 +113,14 @@
 
    $IMAGE_NAME=tvrunner:fabric-tna-binary ./tvrunner.sh --dp-mode loopback --match-type in --target ${tv_dir}/target.pb.txt --portmap ${tv_dir}/portmap.pb.txt --tv-dir ${tv_dir}/${test_name}/teardown
 
-tv_dir is the directory where TestVectors are stored. In this case, tv_dir is 'fabric-tna/ptf/TestVectors'.
-tv_name is the name of the test case. It's also the directory name of the test under 'fabric-tna/ptf/TestVectors'.
+``tv_dir`` is the directory where TestVectors are stored. In this case,
+``tv_dir`` is ``fabric-tna/ptf/TestVectors``.
 
-Results for each test are generated under 'testvectors-runner/results' directory in csv format.
+``test_name`` is the name of the test case. It's also the directory name of
+the test under ``fabric-tna/ptf/TestVectors``.
+
+Results for each test are generated under the ``testvectors-runner/results``
+directory in CSV format.
 
 .. _TestVectors: https://github.com/stratum/testvectors
 .. _TestVectors-Runner: https://github.com/stratum/testvectors-runner/tree/support-fabric-tna
diff --git a/testing/sdcore_testing.rst b/testing/sdcore_testing.rst
index 73613c4..ed3be94 100644
--- a/testing/sdcore_testing.rst
+++ b/testing/sdcore_testing.rst
@@ -9,17 +9,14 @@
 --------------
 
 NG40
-~~~~
-
-Overview
-^^^^^^^^
+""""
 
 NG40 tool is used as RAN emulator in SD-Core testing. NG40 runs inside a VM
 which is connected to both Aether control plane and data plane. In testing
-scenarios that involve data plane verifications, NG40 also emulates a few
+scenarios that involve data plane verification, NG40 also emulates a few
 application servers which serve as the destinations of data packets.
 
-A typical NG40 test case involves UE attaching, data plane verifications and
+A typical NG40 test case involves UE attaching, data plane verification and
 UE detaching. During the test NG40 acts as UEs and eNBs and talks to the
 mobile core to complete attach procedures for each UE it emulates. Then NG40
 verifies that data plane works for each attached UE by sending traffic between
@@ -27,7 +24,7 @@
 procedures for each attached UE.
 
 Test cases
-^^^^^^^^^^
+''''''''''
 
 Currently the following NG40 test cases are implemented:
 
@@ -35,7 +32,7 @@
 
 1. ``4G_M2AS_PING_FIX`` (attach, dl ping, detach)
 2. ``4G_M2AS_UDP`` (attach, dl+ul udp traffic, detach)
-3. ``4G_M2AS_TCP`` (attach, relaese, service request, dl+ul tcp traffic, detach)
+3. ``4G_M2AS_TCP`` (attach, release, service request, dl+ul tcp traffic, detach)
 4. ``4G_AS2M_PAGING`` (attach, release, dl udp traffic, detach)
 5. ``4G_M2AS_SRQ_UDP`` (attach, release, service request, dl+ul udp traffic)
 6. ``4G_M2CN_PS`` (combined IMSI/PTMSI attach, detach)
@@ -61,7 +58,7 @@
 take different arguments to run 10K UE attaches with a high attach rate.
 
 Test suites
-^^^^^^^^^^^
+'''''''''''
 
 The test cases are atomic testing units and can be combined to build test
 suites. The following test suites have been built so far:
@@ -81,7 +78,7 @@
    to understand how the system performs under different loads.
 
 Robot Framework
-~~~~~~~~~~~~~~~
+"""""""""""""""
 
 Robot Framework was chosen to build test cases that involve interacting with
 not only NG40 but also other parts of the system. In these scenarios Robot
@@ -89,7 +86,7 @@
 of the system using component specific libraries including NG40.
 
 Currently the ``Integration test suite`` is implemented using Robot
-Framework. In the integration tests Robot Framework calls ng40 library to
+Framework. In the integration tests, Robot Framework calls the ng40 library to
 perform normal attach/detach procedures. Meanwhile it injects failures into
 the system (container restarts, link down etc.) by calling functions
 implemented in the k8s library.
@@ -108,10 +105,7 @@
 --------------
 
 Nightly Tests
-~~~~~~~~~~~~~
-
-Overview
-^^^^^^^^
+"""""""""""""
 
 SD-Core nightly tests are a set of jobs managed by Aether Jenkins.
 All four test suites we mentioned above are scheduled to run nightly.
@@ -132,15 +126,15 @@
 2. ``staging`` pod: `func_staging`, `scale_staging`, `perf_staging`, `integ_staging`
 3. ``qa`` pod: `func_qa`, `scale_qa`, `perf_qa`, `integ_qa`
 
-Job structure
-^^^^^^^^^^^^^
+Nightly Job structure
+"""""""""""""""""""""
 
 Take `sdcore_scale_ci-4g` job as an example. It runs the following downstream jobs:
 
-1. `omec_deploy_ci-4g`: this job re-deploys the ci-4g pod with latest OMEC images.
+1. `omec_deploy_ci-4g`: this job re-deploys the ``ci-4g`` pod with the latest OMEC images.
 
 .. Note::
-  only the ci-4g and ci-5g pod jobs trigger deployment downstream job. No
+  only the ``ci-4g`` and ``ci-5g`` pod jobs trigger the downstream deployment job. No
   re-deployment is performed on the staging and qa pod before the tests
 
 2. `ng40-test_ci-4g`: this job executes the scalability test suite.
@@ -165,26 +159,22 @@
    scale tests
 
 Patchset Tests
-~~~~~~~~~~~~~~
+--------------
 
-Overview
-^^^^^^^^
-
-SD-Core pre-merge verifications cover the following public Github repos: ``c3po``,
+SD-Core pre-merge verification covers the following public Github repos: ``c3po``,
 ``Nucleus``, ``upf-epc`` and the following private Github repos: ``spgw``. ``amf``,
 ``smf``, ``ausf``, ``nssf``, ``nrf``, ``pcf``, ``udm``, ``udr``, ``webconsole``.
-SD-Core CI includes the following verifications:
+SD-Core CI verifies the following:
 
 1. ONF CLA verification
-2. License verifications (FOSSA/Reuse)
+2. License verification (FOSSA/Reuse)
 3. NG40 tests
 
-These verifications are automatically triggered by submitted or updated PR to
-the repos above. They can also be triggered manually by commenting ``retest
-this please`` to the PR. At this moment only CLI and NG40 verifications are
-mandatory.
+These jobs are automatically triggered by a submitted or updated PR to the repos
+above. They can also be triggered manually by commenting ``retest this please``
+on the PR. At this moment only the CLA and NG40 verifications are mandatory.
 
-The NG40 verifications are a set of jobs running on both opencord Jenkins and
+The NG40 verification is a set of jobs running on both opencord Jenkins and
 Aether Jenkins (private). The jobs run on opencord Jenkins include
 
 1. `omec_c3po_container_remote <https://jenkins.opencord.org/job/omec_c3po_container_remote/>`_ (public)
@@ -208,35 +198,38 @@
 12. `udr_premerge_ci-5g`
 13. `webconsole_premerge_ci-5g`
 
-Job structure
-^^^^^^^^^^^^^
+Patchset Job structure
+""""""""""""""""""""""
 
-Take c3po jobs as an example. c3po PR triggers a public job `omec_c3po_container_remote <https://jenkins.opencord.org/job/omec_c3po_container_remote/>`__
-job running on opencord Jenkins through Github webhooks,
-which then triggers a private job `c3po_premerge_ci-4g` running on Aether Jenkins
-using a Jenkins plugin called `Parameterized Remote Trigger Plugin <https://www.jenkins.io/doc/pipeline/steps/Parameterized-Remote-Trigger/>`__.
+Take ``c3po`` jobs as an example. A ``c3po`` PR triggers a public job,
+`omec_c3po_container_remote
+<https://jenkins.opencord.org/job/omec_c3po_container_remote/>`_, running on
+opencord Jenkins through Github webhooks, which then triggers a private job
+`c3po_premerge_ci-4g` running on Aether Jenkins using a Jenkins plugin called
+`Parameterized Remote Trigger Plugin
+<https://www.jenkins.io/doc/pipeline/steps/Parameterized-Remote-Trigger/>`_.
 
-The private c3po job runs the following downstream jobs sequentially:
+The private ``c3po`` job runs the following downstream jobs sequentially:
 
-1. `docker-publish-github_c3po`: this job downloads the c3po PR, runs docker
-   build and publishes the c3po docker images to `Aether registry`.
+1. `docker-publish-github_c3po`: this job downloads the ``c3po`` PR, runs a
+   docker build and publishes the ``c3po`` docker images to the `Aether registry`.
 2. `omec_deploy_ci-4g`: this job deploys the images built from previous job onto
-   the omec ci-4g pod.
+   the omec ``ci-4g`` pod.
 3. `ng40-test_ci-4g`: this job executes the functionality test suite.
 4. `archive-artifacts_ci-4g`: this job collects and uploads k8s and container logs.
 
 After all the downstream jobs are finished, the upstream job (`c3po_premerge_ci-4g`)
-copies artifacts including k8s/container/NG40 logs and pcap files from
+copies artifacts including k8s/container/NG40 logs and pcap files from
 downstream jobs and saves them as Jenkins job artifacts.
 
 These artifacts are also copied to and published by the public job
-(`omec_c3po_container_remote <https://jenkins.opencord.org/job/omec_c3po_container_remote/>`__)
+(`omec_c3po_container_remote <https://jenkins.opencord.org/job/omec_c3po_container_remote/>`_)
 on opencord Jenkins so that they can be accessed by the OMEC community.
 
 Pre-merge jobs for other SD-Core repos share the same structure.
 
 Post-merge
-^^^^^^^^^^
+""""""""""
 
 The following jobs are triggered as post-merge jobs when PRs are merged to
 SD-Core repos:
@@ -255,12 +248,12 @@
 12. `docker-publish-github-merge_udr`
 13. `docker-publish-github-merge_webconsole`
 
-Again take the c3po job as an example. The post-merge job (`docker-publish-github-merge_c3po`)
+Again take the ``c3po`` job as an example. The post-merge job (`docker-publish-github-merge_c3po`)
 runs the following downstream jobs sequentially:
 
 1. `docker-publish-github_c3po`: this is the same job as the one in pre-merge
-   section. It checks out the latest c3po code, runs docker build and
-   publishes the c3po docker images to `docker hub <https://hub.docker.com/u/omecproject>`__.
+   section. It checks out the latest ``c3po`` code, runs docker build and
+   publishes the ``c3po`` docker images to `docker hub <https://hub.docker.com/u/omecproject>`__.
 
 .. Note::
   the images for private repos are published to Aether registry instead of docker hub