[CORD-2585]
Lint check documentation with markdownlint

Change-Id: Iefed0b4beb1da4da8125513d931e6ebfba280f64
(cherry picked from commit 1a3798cca1b900c66a5ac04024265751833c4f65)
diff --git a/docs/README.md b/docs/README.md
index a6ff9b1..3ae8a2c 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,33 +1,36 @@
 # Testing CORD
 
-CORD Tester is an automation framework that has been developed to test CORD. The
-framework currently includes extensions to test R-CORD. Few framework
+CORD Tester is an automation framework that has been developed to test CORD.
+The framework currently includes extensions to test R-CORD. A few framework
 modules have been developed to test M-CORD and E-CORD basic tests.
 
-CORD Tester framework is typically deployed as one or more Docker containers, either on
-the CORD POD or adjacent to the POD and interacts with the POD through the interfaces.
-The Cord-Tester container is deployed as a docker container, residing on the headnode
-of the POD. It is brought up with double vlan tagged interfaces (s-tags and c-tags of
-subscriber's traffic) to conduct dataplane traffic testing. The following reference
-diagram gives a brief overview of how the test container interacts with the POD.
+The CORD Tester framework is typically deployed as one or more Docker
+containers, either on the CORD POD or adjacent to the POD, and interacts with
+the POD through its interfaces. The Cord-Tester container resides on the head
+node of the POD. It is brought up with double VLAN tagged interfaces (the
+s-tags and c-tags of subscriber traffic) to conduct dataplane traffic testing.
+The following reference diagram gives a brief overview of how the test
+container interacts with the POD.
 
 ![Cord Test Container](images/test_container.png)
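+
+For readers unfamiliar with double VLAN tagging, the sketch below shows how an
+s-tag/c-tag pair can be stacked with standard Linux tooling; the interface
+name and VLAN IDs are examples only, not values configured by the tester.
+
+```shell
+sudo ip link add link eth0 name eth0.500 type vlan proto 802.1ad id 500
+sudo ip link add link eth0.500 name eth0.500.111 type vlan proto 802.1q id 111
+sudo ip link set eth0.500 up
+sudo ip link set eth0.500.111 up
+```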
 
-The framework is modular, making it easy to test all the components
-that make up CORD. It supports both end-to-end tests and
-functional tests of individual components.
-The suite of tests is constantly evolving with more features and tests.
+The framework is modular, making it easy to test all the components that make
+up CORD. It supports both end-to-end tests and functional tests of individual
+components. The suite of tests is constantly evolving, with more features and
+tests being added.
 
 Few links below provide detailed information of System Test Guide, Test Plans
 and Test Results.
 
 * [System Test Guide](https://wiki.opencord.org/display/CORD/System+Test+Guide)
 * [System Test Plans](https://wiki.opencord.org/display/CORD/System+Test+Plans)
-* [System Test Results](https://wiki.opencord.org/display/CORD/System+Test+Results)
+* [System Test
+  Results](https://wiki.opencord.org/display/CORD/System+Test+Results)
 
-Additional information about the CORD Tester framework can be found
-on the GitHub:
+Additional information about the CORD Tester framework can be found on
+GitHub:
 
 * [Prerequisites](https://github.com/opencord/cord-tester/blob/master/src/test/setup/prerequisites.sh)
 
 * [Source Code](https://github.com/opencord/cord-tester)
+
diff --git a/docs/contributing.md b/docs/contributing.md
deleted file mode 100644
index e69de29..0000000
--- a/docs/contributing.md
+++ /dev/null
diff --git a/docs/cord-tester-for-ciab-setup.md b/docs/cord-tester-for-ciab-setup.md
index 3b7a17d..63dfb5b 100644
--- a/docs/cord-tester-for-ciab-setup.md
+++ b/docs/cord-tester-for-ciab-setup.md
@@ -1,79 +1,102 @@
-# Testing CiaB(CORD-IN-A-BOX) Using CORD TESTER
-The CORD Automated Tester Suite is an extensible end-to-end system test suite now targeting CORD in a BOX also.
+# Testing CiaB (CORD-in-a-Box) Using the CORD Tester
 
-* [How to install](#how_to_install)
-* [How to use](#how_to_use)
+The CORD Automated Tester Suite is an extensible end-to-end system test suite
+that now also targets CORD-in-a-Box.
 
 ## Prerequisites
 
 * Python 2.7 or later
 * Docker
 
-##  <a name="how_to_install">How to install
+## How to install
 
-```bash
-$ git clone https://github.com/opencord/cord-tester.git
-$ cd cord-tester
-$ cd src/test/setup/
-$ Run prerequisites.sh --cord
-  (It gets you needed dependencies and tools to start)
+```shell
+git clone https://github.com/opencord/cord-tester.git
+cd cord-tester
+cd src/test/setup/
+```
+
+Run `prerequisites.sh --cord` to install the dependencies and tools needed to
+get started.
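+
+A minimal invocation, assuming it is run with sudo from the setup directory as
+elsewhere in this guide:
+
+```shell
+sudo ./prerequisites.sh --cord
+```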
+
 * Build all required test container images
-$ sudo ./cord-test.py build all
+
+```shell
+sudo ./cord-test.py build all
+```
+
+* If you want, you can also pull the latest ONOS image from Docker Hub for the
+  test setup.
-$ sudo docker pull onosproject/onos:latest
-* Else setup for test with onos instances (onos-cord and onos-fabric) running in CiaB.
+
+```shell
+sudo docker pull onosproject/onos:latest
+```
+
+* Otherwise, set up the test with the ONOS instances (onos-cord and
+  onos-fabric) running in CiaB.
+
 * For Onos cord (Access side onos)
-$ sudo ./cord-test.py setup -m manifest-cord.json
+
+```shell
+sudo ./cord-test.py setup -m manifest-cord.json
+```
+
 * For Fabric onos
-$ sudo ./cord-test.py setup -m manifest-fabric.json
+
+```shell
+sudo ./cord-test.py setup -m manifest-fabric.json
+```
+
 * For running tests using specific test container.
-$ sudo ./cord-test.py run -t tls:eap_auth_exchange.test_eap_tls -c cord-tester1
+
+```shell
+sudo ./cord-test.py run -t tls:eap_auth_exchange.test_eap_tls -c cord-tester1
 ```
-##   <a name="how_to_use">How to use
+
+## How to use
+
+Help:
+
+```shell
+sudo ./cord-test.py -h
 ```
-* Running test case of indivdual modules, some examples
+
+List test cases for individual modules or list all tests.
+
+```shell
+sudo ./cord-test.py list -t <module name>/ all
 ```
+
+Cleanup:
+
+```shell
+sudo ./cord-test.py cleanup -m manifest-cord.json
 ```
-* TLS
+
+## Individual tests
+
+Running test cases of individual modules, some examples:
+
+### TLS
+
+```shell
+sudo ./cord-test.py  run -t tls:eap_auth_exchange.test_eap_tls
 ```
+
+### IGMP
+
+```shell
+sudo ./cord-test.py  run -t igmp:igmp_exchange.test_igmp_join_verify_traffic
 ```
-$ sudo ./cord-test.py  run -t tls:eap_auth_exchange.test_eap_tls
+
+### VROUTER
+
+```shell
+sudo ./cord-test.py run -t vrouter:vrouter_exchange.test_vrouter_with_5_routes
 ```
-```
-* IGMP
-```
-```
-$ sudo ./cord-test.py  run -t igmp:igmp_exchange.test_igmp_join_verify_traffic
-```
-```
-* VROUTER
-```
-```
-$ sudo ./cord-test.py run -t vrouter:vrouter_exchange.test_vrouter_with_5_routes
-```
-```
-* DHCP
-```
-```
-$ sudo ./cord-test.py  run -t dhcp:dhcp_exchange.test_dhcp_1request
-```
-```
-* For help and usage use -h option in all levels of menu
-```
-```
-$ sudo ./cord-test.py -h
-```
-```
-* For listing test cases of indivisual module or list all tests.
-```
-```
-$ sudo ./cord-test.py list -t <module name>/ all
-```
-```
-* Cleanup
-```
-```
-$ sudo ./cord-test.py cleanup -m manifest-cord.json
+
+### DHCP
+
+```shell
+sudo ./cord-test.py  run -t dhcp:dhcp_exchange.test_dhcp_1request
 ```
 
 
diff --git a/docs/cord-tester-for-voltha.md b/docs/cord-tester-for-voltha.md
index 52e4dbc..570406b 100644
--- a/docs/cord-tester-for-voltha.md
+++ b/docs/cord-tester-for-voltha.md
@@ -1,50 +1,67 @@
-#Steps to test VOLTHA using CORD-TESTER with PONSIM ONU & OLT
+# Steps to test VOLTHA using CORD-TESTER with PONSIM ONU & OLT
 
 ## Install CORD-TESTER
-```
+
+```shell
 ~$ git clone https://github.com/opencord/cord-tester.git
 ~$ cd cord-tester
 ~$ cd /cord-tester/src/test/setup/
 ~$ sudo bash prerequisites.sh
 ~$ sudo ./cord-test.py build all
 ```
-## Install VOLTHA, following this link:
-```
-   https://github.com/opencord/voltha/blob/master/BUILD.md
+
+## Install VOLTHA, following this link
+
+```text
+https://github.com/opencord/voltha/blob/master/BUILD.md
 ```
 
-## Get into setup directory of cord tester,
+## Get into the setup directory of cord-tester
+
+```shell
+cd cord-tester/src/test/setup/
 ```
-   $cord-tester/src/test/setup/
-```
+
 ## Please make sure of VOLTHA location in manifest-ponsim.json
-```
-   For e.g "voltha_loc" : "/home/ubuntu/cord/incubator/voltha"
-```
-## Run following command to clean up previous installs:
-```
-   sudo ./cord-test.py cleanup -m manifest-ponsim.json
-```
-## Run following command to setup the testing stage with ponsim OLT & ONU:
-   ***This makes a setup of cord-test container (cord-tester1) and hooks up pon interface to UNI port of PONSIM ONU.***
-```
-   sudo ./cord-test.py setup -m manifest-ponsim.json
-```
-## Now run following command to provision the OLT & ONU and run cord subscriber test.
-```
-   sudo ./cord-test.py run -m manifest-ponsim.json -t cordSubscriber:subscriber_exchange.test_cord_subscriber_voltha
-```
-   * This will start the cord tester to run cord subscriber test
-      * CORD Subcriber emulation with AAA TLS & IGMP subscriber channel surfing test for you.
-        Have a look for steps followed to test in output log of test run.
-      * AAA TLS test will validate exchange of multiple messages of eap, hello, certificates, verify data
-        between cord tester TLS client and Radius Server with a validation of flows installed
-        in OLT & ONU
-      * IGMP test will surf channels joining a group and validating the multicast traffic received on it
-        with the flows installed
 
-## Now you can manually also validate on voltha cli for confirmation:
+For example:
+
+```json
+"voltha_loc" : "/home/ubuntu/cord/incubator/voltha"
 ```
+
+## Run the following command to clean up previous installs
+
+```shell
+sudo ./cord-test.py cleanup -m manifest-ponsim.json
+```
+
+## Run the following command to set up the testing stage with PONSIM OLT & ONU
+
+This sets up the cord-test container (cord-tester1) and hooks up the pon
+interface to the UNI port of the PONSIM ONU.
+
+```shell
+sudo ./cord-test.py setup -m manifest-ponsim.json
+```
+
+## Provision the OLT & ONU and run the cord subscriber test
+
+```shell
+sudo ./cord-test.py run -m manifest-ponsim.json -t cordSubscriber:subscriber_exchange.test_cord_subscriber_voltha
+```
+
+* This starts the cord tester and runs the cord subscriber test
+    * CORD Subscriber emulation with AAA TLS & IGMP subscriber channel
+      surfing tests. Have a look at the steps followed in the output log of
+      the test run.
+    * The AAA TLS test validates the exchange of multiple messages (EAP,
+      hello, certificates, verify data) between the cord tester TLS client
+      and the Radius server, with a validation of the flows installed in the
+      OLT & ONU
+    * The IGMP test surfs channels by joining a group and validating the
+      multicast traffic received on it against the flows installed
+
+## You can also validate manually on the VOLTHA CLI for confirmation
+
+```shell
  ~$(voltha)devices
  ~$(voltha)device <OLT deviceid>
  ~$(device OLT deviceid)flows  <--- for ONU
@@ -54,3 +71,5 @@
  ~$(device ONU deviceid)flows  <--- for ONU
  ~$(device ONU deviceid)ports  <--- for UNI & PON Ports
 ```
+
diff --git a/docs/qa_testenv.md b/docs/qa_testenv.md
index 4d4bf6c..9fa5371 100644
--- a/docs/qa_testenv.md
+++ b/docs/qa_testenv.md
@@ -1,12 +1,16 @@
 # CORD Test Environment
 
 Several jenkins based jobs are created to run tests on the following platforms
+
 * Physical POD
 * Virtual POD(Cord-in-a-Box)
 * VMs
 
 ## Test Beds
-Following picture below describes various test environments that are used to setup CORD and a brief overview on the type of tests that are performed on that test bed.
+
+The picture below describes the various test environments that are used to
+set up CORD and gives a brief overview of the type of tests that are performed
+on each test bed.
 
 ![Test Beds](images/qa-testbeds.png)
 
@@ -16,10 +20,12 @@
 
 ![QA Jenkins Setup](images/qa-jenkins.png)
 
-* To view results from recent runs of the jenkins jobs, please view the [Jenkins dashboard](https://jenkins.opencord.org/)
+* To view results from recent runs of the Jenkins jobs, please see the
+  [Jenkins dashboard](https://jenkins.opencord.org/)
 
 ## Jenkins Integration with Physical POD
 
 The following diagram shows how Jenkins interconnects with a Physical POD.
 
 ![QA Physical POD setup](images/qa-pod-setup.png)
+
diff --git a/docs/qa_testsetup.md b/docs/qa_testsetup.md
index e45c6da..58ca1e9 100644
--- a/docs/qa_testsetup.md
+++ b/docs/qa_testsetup.md
@@ -2,64 +2,81 @@
 
 ## Configure Automation Framework
 
-* When the POD/Cord-in-a-Box is installed, cord-tester repo is downloaded on the head node at `/opt/cord/test` directory
-* Tests can be run directly from the headnode or from a different VM then it can be done using the following command
+* When the POD/Cord-in-a-Box is installed, the cord-tester repo is downloaded
+  to the `/opt/cord/test` directory on the head node
 
-```bash
-$ git clone https://gerrit.opencord.org/cord-tester
-```
-* Before executing any tests, proper modules need to be installed which can be done using the following command
+* Tests can be run directly from the head node, or from a different VM after
+  cloning the repo with the following command:
 
-```bash
-cd /opt/cord/test/cord-tester/src/test/setup
-sudo ./prerequisites.sh --cord
-```
+  ```bash
+  git clone https://gerrit.opencord.org/cord-tester
+  ```
+
+* Before executing any tests, the required modules need to be installed, which
+  can be done using the following commands:
+
+  ```bash
+  cd /opt/cord/test/cord-tester/src/test/setup
+  sudo ./prerequisites.sh --cord
+  ```
 
 ## Executing Tests
 
- Most of the tests in cord-tester framework are written in `python` and `RobotFramework`.
- Few examples for test execution are shown below
+Most of the tests in the cord-tester framework are written in `python` and
+`RobotFramework`. A few examples of test execution are shown below.
 
- * Executing a sample test
+* Executing a sample test
 
-```bash
-cd /opt/cord/test/cord-tester/src/test/robot/
-pybot SanityPhyPOD.robot
-```
+  ```bash
+  cd /opt/cord/test/cord-tester/src/test/robot/
+  pybot SanityPhyPOD.robot
+  ```
 
 ### Executing Control Plane Tests
-* Each control plane test uses input data in `json` format which are present under `/opt/cord/test/cord-tester/src/test/cord-api/Tests/data`
-* Before running control plane tests, a properties file need to be edited as shown below.
-  Update the following attributes accordingly
 
-```bash
-cd  /opt/cord/test/cord-tester/src/test/cord-api/Properties
-$ cat RestApiProperties.py
+* Each control plane test uses input data in `json` format, which is present
+  under `/opt/cord/test/cord-tester/src/test/cord-api/Tests/data`
 
-SERVER_IP = 'localhost'
-SERVER_PORT = '9101'
-USER = 'xosadmin@opencord.org'
-PASSWD = ''
-```
+* Before running control plane tests, a properties file needs to be edited as
+  shown below. Update the following attributes accordingly:
+
+  ```bash
+  $ cd /opt/cord/test/cord-tester/src/test/cord-api/Properties
+  $ cat RestApiProperties.py
+
+  SERVER_IP = 'localhost'
+  SERVER_PORT = '9101'
+  USER = 'xosadmin@opencord.org'
+  PASSWD = ''
+  ```
 
 * To run tests
 
-```bash
-$ cd /opt/cord/test/cord-tester/src/test/cord-api/
-$ pybot <testcase.txt>
-```
-## Executing Functional/Module Tests
-* There are several functional tests written to test various modules of CORD independently.
-* Before executing module based tests, following steps need to be performed which will create a `test container` and sets up the environment in the container to run tests.
+  ```bash
+  cd /opt/cord/test/cord-tester/src/test/cord-api/
+  pybot <testcase.txt>
+  ```
 
-```bash
-cd /opt/cord/test/cord-tester/src/test/setup/
-sudo ./cord-test.py setup -m manifest-cord.json
-```
+## Executing Functional/Module Tests
+
+* There are several functional tests written to test various modules of CORD
+  independently.
+
+* Before executing module-based tests, the following steps need to be
+  performed, which create a `test container` and set up the environment in
+  the container to run tests.
+
+  ```bash
+  cd /opt/cord/test/cord-tester/src/test/setup/
+  sudo ./cord-test.py setup -m manifest-cord.json
+  ```
 
 * Run a single test from a module
 
-```bash
-sudo ./cord-test.py  run -t dhcp:dhcp_exchange.test_dhcp_1request
-```
-For more detailed explanations of the cord-tester options please check https://github.com/opencord/cord-tester/blob/master/docs/running.md
+  ```bash
+  sudo ./cord-test.py  run -t dhcp:dhcp_exchange.test_dhcp_1request
+  ```
+
+  For more detailed explanations of the cord-tester options, please see
+  [Running Tests](running.md).
+
diff --git a/docs/running.md b/docs/running.md
index 6f16620..c7d8d6a 100644
--- a/docs/running.md
+++ b/docs/running.md
@@ -7,7 +7,7 @@
 * Docker
 * vagrant(Optional)
 
-##  How to Install cord-tester
+## How to Install cord-tester
 
 To install `cord-tester`, execute the following:
 
@@ -20,6 +20,7 @@
 $ Run prerequisites.sh which would setup the runtime for cord-tester
 $ sudo ./prerequisites.sh
 $ sudo ./cord-test.py -h
+
 usage: cord-test.py [-h] {run,setup,xos,list,build,metrics,start,cleanup} ...
 
 Cord Tester
@@ -91,13 +92,13 @@
                         test cases.
 ```
 
-If you want to run `cord-tester` without Vagrant and already have an
-ubuntu 14.04 or 16.04 server installed, do the following:
+If you want to run `cord-tester` without Vagrant and already have an Ubuntu
+14.04 or 16.04 server installed, do the following:
 
-```
-$ git clone https://github.com/opencord/cord-tester.git
-$ cd cord-tester/src/test/setup/
-$ sudo ./prerequisites.sh
+```shell
+git clone https://github.com/opencord/cord-tester.git
+cd cord-tester/src/test/setup/
+sudo ./prerequisites.sh
 ```
 
 Then follow the same instructions as described above.
@@ -106,47 +107,47 @@
 
 `eval.sh` runs all the test cases for you.
 
-```
-$ sudo ./eval.sh
+```shell
+sudo ./eval.sh
 ```
 
 To run all test cases in a module (e.g., for DHCP):
 
-```
-$ sudo ./cord-test.py run -t dhcp
+```shell
+sudo ./cord-test.py run -t dhcp
 ```
 
 To run a single test case in a module:
 
-```
-$ sudo ./cord-test.py  run -t dhcp:dhcp_exchange.test_dhcp_1request
+```shell
+sudo ./cord-test.py  run -t dhcp:dhcp_exchange.test_dhcp_1request
 ```
 
 To run all test cases:
 
-```
-$ sudo ./cord-test.py  run -t all
+```shell
+sudo ./cord-test.py  run -t all
 ```
 
 To check the list of test cases:
 
-```
-$ sudo ./cord-test.py list -t all/<Module name>
+```shell
+sudo ./cord-test.py list -t all/<Module name>
 ```
 
 To check the list of a specific module:
 
-```
-$ sudo ./cord-test.py list -t dhcp
+```shell
+sudo ./cord-test.py list -t dhcp
 ```
 
 To cleanup all test containers:
 
-```
-$ sudo ./cord-test.py cleanup
+```shell
+sudo ./cord-test.py cleanup
 ```
 
-For other options, run with -h option.
+For other options, run with the `-h` option.
 
 ## CORD API Tests
 
@@ -159,16 +160,16 @@
 To install `robotframework` do the following:
 
 ```bash
-     $ sudo pip install robotframework
-     $ sudo pip install pygments
-     $ sudo apt-get install python-wxgtk2.8
-     $ sudo pip install robotframework-ride
+sudo pip install robotframework
+sudo pip install pygments
+sudo apt-get install python-wxgtk2.8
+sudo pip install robotframework-ride
 ```
 
 To bring up IDE for the robot framework, do the following:
 
 ```bash
-   $ ride.py
+ride.py
 ```
 
 ### Execute testcases
@@ -177,11 +178,12 @@
 line:
 
 ```bash
-     $ cd cord-tester/src/test/cord-api/Tests
-     $ pybot <testcase.txt>
+cd cord-tester/src/test/cord-api/Tests
+pybot <testcase.txt>
 ```
 
 ### Input Files for testcases
 
 Input files for the testcases are present in the `tests/data`
 directory.
+
diff --git a/docs/setup.md b/docs/setup.md
deleted file mode 100644
index e69de29..0000000
--- a/docs/setup.md
+++ /dev/null
diff --git a/docs/testcases-listings.md b/docs/testcases-listings.md
index b81149d..8d0be95 100644
--- a/docs/testcases-listings.md
+++ b/docs/testcases-listings.md
@@ -2,16 +2,17 @@
 
 Information of the all testcases listed here can be found at [CORD System Test wiki](https://wiki.opencord.org/display/CORD/Functional)
 
-##  XOS Based Tests
-1.  Ch_defaultImagesCheck.txt
-2.  Ch_DefaultServiceCheck.txt
-3.  Ch_DeploymentTest.txt
-4.  Ch_MultiInstanceTest.txt
-5.  Ch_NodeTest.txt
-6.  Ch_SanityFlavors.txt
-7.  Ch_SanityInstance.txt
-8.  Ch_ServiceTest.txt
-9.  Ch_SingleInstanceTest.txt
+## XOS Based Tests
+
+1. Ch_defaultImagesCheck.txt
+2. Ch_DefaultServiceCheck.txt
+3. Ch_DeploymentTest.txt
+4. Ch_MultiInstanceTest.txt
+5. Ch_NodeTest.txt
+6. Ch_SanityFlavors.txt
+7. Ch_SanityInstance.txt
+8. Ch_ServiceTest.txt
+9. Ch_SingleInstanceTest.txt
 10. Ch_SiteTest.txt
 11. Ch_SliceTest.txt
 12. Ch_SubscriberTest.txt
@@ -22,7 +23,7 @@
 
 ## MODULE BASED TESTS
 
-##  IPERF
+### IPERF
 
 1. test_tcp_using_iperf
 2. test_udp_using_iperf
@@ -35,7 +36,7 @@
 9. test_tcp_mss_with_1490Bytes_using_iperf
 10. test_tcp_mss_with_9000Bytes_for_max_throughput_using_iperf
 
-## DHCP
+### DHCP
 
 11. test_dhcp_1request
 12. test_dhcp_1request_with_invalid_source_mac_broadcast
@@ -71,11 +72,11 @@
 42. test_dhcp_server_client_transactions_per_second
 43. test_dhcp_server_consecutive_successful_clients_per_second
 
-## FABRIC
+### FABRIC
 
 44. test_fabric
 
-## XOS-CONTAINERS-APIS
+### XOS-CONTAINERS-APIS
 
 45. test_xos_base_container_status
 46. test_xos_base_container_ping
diff --git a/docs/testcases.md b/docs/testcases.md
index 711acb7..df5025a 100644
--- a/docs/testcases.md
+++ b/docs/testcases.md
@@ -1,9 +1,10 @@
 # CORD POD Test-cases
 
-This is a rough sketch of planned test-cases, organized in areas.
-Regard it as a wish-list.
-Feel free to contribute to the list and also use the list to get idea(s) where test
-implementation is needed.
+This is a rough sketch of planned test-cases, organized by area. Regard it as
+a wish-list.
+
+Feel free to contribute to the list and also use it to get ideas about where
+test implementation is needed.
 
 ## Test-Cases
 
@@ -21,44 +22,60 @@
 
 ### Deployment Tests
 
-The scope and objective of these test-cases is to run the automated deployment process on a "pristine" CORD POD and verify that at the end the system gets into a known (verifiable) baseline state, as well as that the feedback from the automated deployment process is consistent with the outcome (no false positives or negatives).
+The scope and objective of these test-cases is to run the automated deployment
+process on a "pristine" CORD POD and verify that at the end the system gets
+into a known (verifiable) baseline state, as well as that the feedback from the
+automated deployment process is consistent with the outcome (no false positives
+or negatives).
 
 Positive test-cases:
 
 * Bring-up and verify basic infrastructure assumptions
-  * Head-end is available, configured correctly, and available for software load
-  * Compute nodes are available and configured correctly, and available for software load
-* Execute automated deployment of CORD infrastructure and verify baseline state. Various options needs to be supported:
-  * Single head-node setup (no clustering)
-  * Triple-head-node setup (clustered)
-  * Single data-plane up-link from servers (no high availability)
-  * Dual data-plane up-link from servers (with high availability)
+    * Head-end is available, configured correctly, and available for software
+      load
+    * Compute nodes are available and configured correctly, and available for
+      software load
+* Execute automated deployment of CORD infrastructure and verify baseline
+  state. Various options need to be supported:
+    * Single head-node setup (no clustering)
+    * Triple-head-node setup (clustered)
+    * Single data-plane up-link from servers (no high availability)
+    * Dual data-plane up-link from servers (with high availability)
 
 Negative test-cases:
 
 * Verify that deployment automation detects missing equipment
 * Verify that deployment notifies operator of missing configuration
 * Verify that deployment automation detects missing cable
-* Verify that deployment automation detects mis-cabling of fabric and provides useful feedback to remedy the issue
-* Verify that deployment automation detects mis-cabling of servers and provides useful feedback to remedy the issue
+* Verify that deployment automation detects mis-cabling of fabric and provides
+  useful feedback to remedy the issue
+* Verify that deployment automation detects mis-cabling of servers and provides
+  useful feedback to remedy the issue
 
 ### Baseline Readiness Tests
 
 * Verify API availability (XOS, ONOS, OpenStack, etc.)
-* Verify software process inventory (of those processes that are covered by the baseline bring-up)
+* Verify software process inventory (of those processes that are covered by the
+  baseline bring-up)
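+
+A quick spot check of the API availability item above might look like the
+following; hosts, ports, paths, and credentials are illustrative and depend on
+the deployment (9101 matches the XOS port used elsewhere in these docs):
+
+```shell
+# ONOS REST API reachability (default credentials shown; adjust as needed)
+curl -su onos:rocks http://localhost:8181/onos/v1/devices
+# XOS API reachability check by HTTP status code
+curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9101/
+```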
 
 ### Functional End-User Tests
 
 Positive test-cases:
 
 * Verify that a new OLT can be added to the POD and it is properly initialized
-* Verify that a new ONU can be added to the OLT and it becomes visible in the system
-* Verify that a ONU port going down triggers unprovisioning of service for a subscriber
-* Verify that a new RG can authenticate and gets admitted to the system (receives an IP address, deployment dependent)
+* Verify that a new ONU can be added to the OLT and it becomes visible in the
+  system
+* Verify that an ONU port going down triggers unprovisioning of service for a
+  subscriber
+* Verify that a new RG can authenticate and gets admitted to the system
+  (receives an IP address, deployment dependent)
 * Verify that the RG can access the Intranet and the Internet
-* Verify that the RG receives periodic IGMP Query messages and forwards to set top boxes.
-* Verify that the RG can join a multicast channel and starts receiving bridge flow
-* Verify that the RG, after joining, starts receiving multicast flow within tolerance interval
+* Verify that the RG receives periodic IGMP Query messages and forwards them
+  to set top boxes.
+* Verify that the RG can join a multicast channel and starts receiving bridge
+  flow
+* Verify that the RG, after joining, starts receiving multicast flow within
+  tolerance interval
 * Verify that the RG can join multiple multicast streams simultaneously
 * Verify that the RG receives periodic IGMP reports
 
@@ -72,11 +89,14 @@
 Negative test-cases:
 
 * Verify that a subscriber that is not registered cannot join the network
-* Verify that a subscriber RG cannot be added unless it is on the pre-prescribed port (OLT/ONU port?)
-* Verify that a subscriber that has no Internet access cannot reach the Internet
-* Verify that a subscriber with limited channel access cannot subscribe to disabled/prohibited channels
-* Verify that a subscriber identity cannot be re-used at a different RG (no two RGs
-with the same certificate can ever be logged into the system)
+* Verify that a subscriber RG cannot be added unless it is on the
+  pre-prescribed port (OLT/ONU port?)
+* Verify that a subscriber that has no Internet access cannot reach the
+  Internet
+* Verify that a subscriber with limited channel access cannot subscribe to
+  disabled/prohibited channels
+* Verify that a subscriber identity cannot be re-used at a different RG (no two
+  RGs with the same certificate can ever be logged into the system)
 
 ### Transient, fault, HA Tests
 
@@ -84,7 +104,9 @@
 
 Hardware disruption scenarios cycling scenarios:
 
-In the following scenarios, in cases of non-HA setups, the system shall at least recover after the hardware component is restored. In HA scenarios, the system shall be able to ride these scenarios through without service interrupt.
+In the following scenarios, in cases of non-HA setups, the system shall at
+least recover after the hardware component is restored. In HA scenarios, the
+system shall be able to ride through these scenarios without service
+interruption.
 
 * Power cycling OLT
 * Power cycling ONU
@@ -96,7 +118,8 @@
 * Replacing a server-to-leaf cable
 * Replacing a leaf-to-spine cable
 
-In HA scenarios, the following shall result in only degraded service, but not loss of service:
+In HA scenarios, the following shall result in only degraded service, but not
+loss of service:
 
 * Powering off a server (and keep it powered off)
 * Powering off a spine fabric switch
@@ -130,33 +153,38 @@
 * Subscriber channel change rate
 * Subscriber aggregate traffic load to Internet
 
-In addition to healthy operation, the following is the list contains what needs to be measured quantitatively, as a function of input load:
+In addition to healthy operation, the following list contains what needs to be
+measured quantitatively, as a function of input load:
 
 * CPU utilization per each server
 * Disk utilization per each server
 * Memory utilization per each server
 * Network utilization at various capture points (fabric ports to start with)
-* Channel change "response time" (how long it takes to start receiving bridge traffic as well as real multicast feed)
+* Channel change "response time" (how long it takes to start receiving bridge
+  traffic as well as real multicast feed)
 * Internet access round-trip time
 * CPU/DISK/Memory/Network trends in relationship to number of subscribers
-* After removal of all subscribers system should be "identical" to the new install state (or reasonably similar)
+* After removal of all subscribers, the system should be "identical" to the
+  new install state (or reasonably similar)
 
 ### Security Tests
 
-The purpose of these tests is to detect vulnerabilities across the various surfaces of CORD, including:
+The purpose of these tests is to detect vulnerabilities across the various
+surfaces of CORD, including:
 
 * PON ports (via ONU ports)
 * NBI APIs
 * Internet up-link
 * CORD POD-Local penetration tests
-  * Via patch cable into management switch
-  * Via fabric ports
-  * Via unused NIC ports of server(s)
-  * Via local console (only if secure boot is enabled)
+    * Via patch cable into management switch
+    * Via fabric ports
+    * Via unused NIC ports of server(s)
+    * Via local console (only if secure boot is enabled)
 
 Tests shall include:
 
-* Port scans on management network: only a pre-defined list of ports shall be open
+* Port scans on management network: only a pre-defined list of ports shall be
+  open
 * Local clustering shall be VLAN-isolated from the management network
 * Qualys free scan
 * SSH vulnerability scans
@@ -164,41 +192,44 @@
 
 [TBD: define more specific test scenarios]
 
-In addition, proprietary scans, such as Nessus Vulnerability Scan will be performed prior to major releases by commercial CORD vendor Ciena.
-
+In addition, proprietary scans, such as the Nessus Vulnerability Scan, will be
+performed prior to major releases by the commercial CORD vendor Ciena.
 
 ### Soak Tests
 
-This is really one comprehensive multi-faceted test run on the POD, involving the following steps:
+This is really one comprehensive multi-faceted test run on the POD, involving
+the following steps:
 
 Preparation phase:
 
 1. Deploy system using the automated deployment process
-1. Verify baseline acceptance
-1. Admit a preset number of RGs
-1. Subscribe to a pre-configured set of multicast feeds
-1. Start a nominal Internet access load pattern on each RG
-1. Optionally (per test config): start background scaled-up load (dpdk-pktgen based)
-1. Capture baseline resource usage (memory, disk utilization per server, per vital process)
+2. Verify baseline acceptance
+3. Admit a preset number of RGs
+4. Subscribe to a pre-configured set of multicast feeds
+5. Start a nominal Internet access load pattern on each RG
+6. Optionally (per test config): start background scaled-up load (dpdk-pktgen
+   based)
+7. Capture baseline resource usage (memory, disk utilization per server, per
+   vital process)
 
 Soak phase (sustained for a preset time period (8h, 24h, 72h, etc.):
 
 1. Periodically monitor health of ongoing sessions (emulated RGs happy?)
-1. Periodically test presence of all processes
-1. Check for stable process ids (rolling id can be a sign of a restarted process)
-1. Periodically capture resource usage, including:
-   * CPU load
-   * process memory use
-   * file descriptors
-   * disk space
-   * disk io
-   * flow table entries in soft and fabric switches
+2. Periodically test presence of all processes
+3. Check for stable process ids (a rolling id can be a sign of a restarted
+   process)
+4. Periodically capture resource usage (see the sketch after this list),
+   including:
+    * CPU load
+    * process memory use
+    * file descriptors
+    * disk space
+    * disk io
+    * flow table entries in soft and fabric switches
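+
+A rough capture sketch for the metrics above; the chosen process, bridge name,
+and intervals are illustrative and depend on the deployment:
+
+```shell
+uptime                                           # CPU load averages
+ps -o pid,rss,comm -p $(pidof ovs-vswitchd)      # process memory use (example)
+sudo ls /proc/$(pidof ovs-vswitchd)/fd | wc -l   # open file descriptors
+df -h                                            # disk space
+iostat -x 1 3                                    # disk io (sysstat package)
+sudo ovs-ofctl dump-flows br-int | wc -l         # flows in the soft switch
+```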
 
 Final check:
 
 1. Final capture of resource utilization and health report
 
-
 ## Baseline Acceptance Criteria
 
 The baseline acceptance is based on a list of criteria, including:
@@ -211,6 +242,8 @@
 * Verify kernel driver options for NICs (latest driver)
 * Verify kernel settings
 * Verify software inventory (presence and version) of following as applicable
-  * DPDK version
-  * ovs version
-  * etc.
+
+    * DPDK version
+    * ovs version
+    * etc.
+
diff --git a/docs/validate_pods.md b/docs/validate_pods.md
index 48e642c..7fa5df7 100644
--- a/docs/validate_pods.md
+++ b/docs/validate_pods.md
@@ -1,8 +1,8 @@
 # Validating PODs
 
-PODs are deployed everynight using Jenkins Build System.
-After a successful installation of the POD, test jobs are triggered which validate the
-following categories of tests
+PODs are deployed every night using the Jenkins build system. After a
+successful installation of the POD, test jobs are triggered which validate the
+following categories of tests:
 
 * Post Installation Verification
 
@@ -52,6 +52,7 @@
 cd /opt/cord/test/cord-tester/src/test/robot
 pybot SanityPhyPOD.robot
 ```
+
 ## Functional Tests
 
 Control and Data plane tests can be executed on the POD once the
@@ -74,6 +75,7 @@
 USER = 'xosadmin@opencord.org'
 PASSWD = ''
 ```
+
 * To run the test
 
 ```bash
@@ -97,7 +99,7 @@
 * Configures a dhclient on the Cord-Tester containers interface that is being
   used as the vSG Subscriber
 * Validates a DHCP IP address is received from the vCPE Container and external
-connectivity is reachable through the vCPE
+  connectivity is reachable through the vCPE
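+
+Roughly, the manual equivalent of the last two checks above is shown below;
+the interface name is illustrative (it is the tester interface facing the vSG
+subscriber side):
+
+```shell
+sudo dhclient -v eth0.222.111      # request an IP from the vCPE via DHCP
+ping -c 3 -I eth0.222.111 8.8.8.8  # verify external connectivity via the vCPE
+```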
 
 To run a data plane test, perform the following steps
 
@@ -141,10 +143,12 @@
 pybot vsg_dataplane_test.robot
 ```
 
->NOTE: All the control and data plane tests can also be executed on a `Virtual POD(Cord-in-a-Box)`
->using the above procedure. Except for the data plane tests, where it needs to be run
->using a different option as there are no crossconnects required to be provisioned on CiaB.
+> NOTE: All the control and data plane tests can also be executed on a
+> `Virtual POD (Cord-in-a-Box)` using the above procedure, except for the data
+> plane tests, which need to be run using a different option, as there are no
+> crossconnects required to be provisioned on CiaB.
 
 ```bash
 pybot -e xconnect vsg_dataplane_test.robot
 ```
+