[CORD-2585]
Lint check documentation with markdownlint

Change-Id: I692660818730aa0838492eb8a62d689e3fccc54d
diff --git a/docs/Makefile b/docs/Makefile
index ff4b7d6..08a88da 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -40,6 +40,15 @@
 	ln -s ../../../orchestration/profiles/mcord/docs profiles/mcord
 	ln -s ../../../orchestration/profiles/opencloud/docs profiles/opencloud
 
+lint:
+	@echo "markdownlint(mdl) version: `mdl --version`"
+	@echo "rule descriptions: https://github.com/markdownlint/markdownlint/blob/master/docs/RULES.md"
+	@echo "style config:"
+	@echo "---"
+	@cat mdlstyle.rb
+	@echo "---"
+	mdl -s mdlstyle.rb `find . ! -path "./_book/*" ! -path "./node_modules/*" ! -path "./cord-tester/modules/*" -name "*.md"`
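+
+# Example usage, assuming the markdownlint gem that provides `mdl` is
+# installed (e.g. via `gem install mdl`):
+#
+#   cd docs && make lint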
+
 xos:
 	ln -s ../../orchestration/xos/docs xos
 
diff --git a/docs/README.md b/docs/README.md
index 92f6c75..aa4134f 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -28,15 +28,15 @@
 
 ## Making Changes to Documentation
 
-The [http://guide.opencord.org](http://guide.opencord.org) website is built using the
-[GitBook Toolchain](https://toolchain.gitbook.com/), with the documentation
-root in [build/docs](https://github.com/opencord/cord/blob/{{ book.branch
-}}/docs) in a checked out source tree.  It is build with `make`, and requires
-that gitbook, python, and a few other tools are installed.
+The [http://guide.opencord.org](http://guide.opencord.org) website is built
+using the [GitBook Toolchain](https://toolchain.gitbook.com/), with the
+documentation root in
+[build/docs](https://github.com/opencord/cord/blob/{{ book.branch }}/docs) in a
+checked-out source tree.  It is built with `make`, and requires that gitbook,
+python, and a few other tools are installed.
 
 Source for individual guides is available in the [CORD code
 repository](https://gerrit.opencord.org); look in the `docs` directory of each
 project, with the documentation rooted in `build/docs`. Updates and
 improvements to this documentation can be submitted through Gerrit.
 
-
diff --git a/docs/appendix_basic_config.md b/docs/appendix_basic_config.md
index e4dc6fa..db4bb09 100644
--- a/docs/appendix_basic_config.md
+++ b/docs/appendix_basic_config.md
@@ -1,26 +1,39 @@
-#  Basic Configuration
+# Basic Configuration
 
-The following provides instructions on how to configure the fabric on an installed full POD with two leaf and two spine switches.
-The fabric needs to be configured to forward traffic between the different components of the POD. More info about how to configure the fabric can be found [here](https://wiki.opencord.org/pages/viewpage.action?pageId=3014916).
+The following provides instructions on how to configure the fabric on an
+installed full POD with two leaf and two spine switches.  The fabric needs to
+be configured to forward traffic between the different components of the POD.
+More info about how to configure the fabric can be found
+[here](https://wiki.opencord.org/pages/viewpage.action?pageId=3014916).
 
-Each leaf switch on the fabric corresponds to a separate IP subnet.  The recommended configuration is a POD with two leaves; the leaf1 subnet is `10.6.1.0/24` and the leaf2 subnet is `10.6.2.0/24`.
+Each leaf switch on the fabric corresponds to a separate IP subnet.  The
+recommended configuration is a POD with two leaves; the leaf1 subnet is
+`10.6.1.0/24` and the leaf2 subnet is `10.6.2.0/24`.
 
-##Configure the Fabric:  Overview
+## Configure the Fabric: Overview
 
-A service running on the head node can produce an ONOS network configuration to control the leaf and spine network fabric. This configuration is generated by querying ONOS for the known switches and compute nodes and producing a JSON structure that can be posted to ONOS to implement the fabric.
+A service running on the head node can produce an ONOS network configuration to
+control the leaf and spine network fabric. This configuration is generated by
+querying ONOS for the known switches and compute nodes and producing a JSON
+structure that can be posted to ONOS to implement the fabric.
 
-The configuration generator can be invoked using the `cord generate` command, which prints the configuration to standard output.
+The configuration generator can be invoked using the `cord generate` command,
+which prints the configuration to standard output.
 
-##Remove Stale ONOS Data
+## Remove Stale ONOS Data
 
-Before generating a configuration you need to make sure that the instance of ONOS controlling the fabric doesn't contain any stale data and that has processed a packet from each of the switches and computes nodes.
+Before generating a configuration, you need to make sure that the instance of
+ONOS controlling the fabric doesn't contain any stale data and that it has
+processed a packet from each of the switches and compute nodes.
 
-ONOS needs to process a packet because it does not have a mechanism to automatically discover the network elements. Thus, to be aware of a device on the network ONOS needs to first receive a packet from it.
+ONOS needs to process a packet because it does not have a mechanism to
+automatically discover the network elements. Thus, to be aware of a device on
+the network ONOS needs to first receive a packet from it.
 
 To remove stale data from ONOS, the ONOS CLI `wipe-out` command can be used:
 
-```
-ssh -p 8101 onos@onos-fabric wipe-out -r -j please
+```shell
+$ ssh -p 8101 onos@onos-fabric wipe-out -r -j please
 Warning: Permanently added '[onos-fabric]:8101,[10.6.0.1]:8101' (RSA) to the list of known hosts.
 Password authentication
 Password:  (password rocks)
@@ -36,13 +49,16 @@
 
->NOTE: When prompted, use password "rocks".
+> NOTE: When prompted, use password `rocks`.
 
-To ensure ONOS is aware of all the switches and the compute nodes, you must have each switch "connected" to the controller and let each compute node ping over its fabric interface to the controller.
+To ensure ONOS is aware of all the switches and the compute nodes, you must
+have each switch "connected" to the controller and let each compute node ping
+over its fabric interface to the controller.
 
-##Connect the Fabric Switches to ONOS
+## Connect the Fabric Switches to ONOS
 
-If the switches are not already connected, the following command on the head node CLI will initiate a connection.
+If the switches are not already connected, the following command on the head
+node CLI will initiate a connection.
 
-```
+```shell
 for s in $(cord switch list | grep -v IP | awk '{print $3}'); do
 ssh -i ~/.ssh/cord_rsa -qftn root@$s ./connect -bg 2>&1  > $s.log
 done
@@ -50,8 +66,10 @@
 
 You can verify ONOS has recognized the devices using the following command:
 
-```
-ssh -p 8101 onos@onos-fabric devices
+> NOTE: When prompted, use password `rocks`.
+
+```shell
+$ ssh -p 8101 onos@onos-fabric devices
 
 Warning: Permanently added '[onos-fabric]:8101,[10.6.0.1]:8101' (RSA) to the list of known hosts.
 Password authentication
@@ -62,90 +80,109 @@
 id=of:0000cc37ab7cbf6c, available=true, role=MASTER, type=SWITCH, mfr=Broadcom Corp., hw=OF-DPA 2.0, sw=OF-DPA 2.0, serial=, driver=ofdpa, channelId=10.6.0.22:44136, managementAddress=10.6.0.22, protocol=OF_13
 ```
 
->NOTE: This is a sample output that won’t necessarily reflect your output
+> NOTE: This is a sample output that won’t necessarily reflect your output.
+>
+> It may take a few seconds for the switches to initialize and connect to ONOS.
 
->NOTE: When prompt, use password "rocks".
+## Configure the Compute Nodes
 
->NOTE: It may take a few seconds for the switches to initialize and connect to ONOS
+Before connecting to ONOS, the compute nodes must be configured with data plane
+IP addresses appropriate to their subnet.  The POD build process assigns data
+plane IP addresses to nodes, but it is not subnet-aware and so IP addresses
+must be changed for compute nodes on the leaf2 switch.
 
-##Configure the Compute Nodes
+### Assign IP addresses
 
-Before connecting to ONOS, the compute nodes must be configured with data plane IP addresses appropriate to their subnet.  The POD build process assigns data plane IP addresses to nodes, but it is not subnet-aware and so IP addresses must be changed for compute nodes on the leaf2 switch.
+Log into the XOS GUI and click on `Nodes`. For nodes connected to leaf2, change
+the `dataPlaneIp` attribute to a unique IP address on the `10.6.2.0/24` subnet
+and click `Save`.
 
-###Assign IP addresses
+XOS will communicate the new IP address to the ONOS VTN app, which will change
+it on the nodes.  Once the switches are connected to ONOS as part of the fabric
+configuration process (see subsequent sections), log into each compute node and
+verify that `br-int` has the new IP address:
 
-Log into the XOS GUI an click on `Nodes`. For nodes connected to leaf2, change the `dataPlaneIp` attribute to a unique IP address on the `10.6.2.0/24` subnet and click `Save`.
-
-XOS will communicate the new IP address to the ONOS VTN app, which will change it on the nodes.  Once the switches are connected to ONOS as part of the fabric configuration process (see subsequent sections), log into each compute node and verify that `br-int` has the new IP address:
-
-```
+```shell
 ip addr list br-int
 ```
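+
+On a leaf2 node, the output should now include the new address, for example
+(illustrative output, trimmed to the relevant line):
+
+```shell
+$ ip addr list br-int
+    inet 10.6.2.10/24 scope global br-int
+```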
 
-###Add Routes to Fabric Subnets
+### Add Routes to Fabric Subnets
 
-Routes must be manually configured on the compute nodes so that traffic between nodes on different leaves will be forwarded via the local spine switch.
+Routes must be manually configured on the compute nodes so that traffic between
+nodes on different leaves will be forwarded via the local spine switch.
 
 Run commands of this form on each compute node:
 
-```
+```shell
 sudo ip route add <remote-leaf-subnet> via <local-spine-ip>
 ```
 
-In this configuration, on the nodes attached to leaf1 (including the head node), run:
+In this configuration, on the nodes attached to leaf1 (including the head
+node), run:
 
-```
+```shell
 sudo ip route add 10.6.2.0/24 via 10.6.1.254
 ```
 
 Likewise, on the compute nodes attached to leaf2, run:
 
-```
+```shell
 sudo ip route add 10.6.1.0/24 via 10.6.2.254
 ```
 
->NOTE: it’s strongly suggested to add it as a permanent route to the nodes, so the route will still be there after a reboot
+> NOTE: It’s strongly suggested to make this a permanent route on the nodes, so
+> the route will still be there after a reboot.
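+
+One way to make the route permanent, assuming the nodes use Ubuntu's
+`ifupdown`, is a `post-up` line in the fabric interface stanza of
+`/etc/network/interfaces`; for example, on a node attached to leaf1:
+
+```shell
+post-up ip route add 10.6.2.0/24 via 10.6.1.254
+```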
 
-##Configure NAT Gateway on the Head Node (Optional)
+## Configure NAT Gateway on the Head Node (Optional)
 
-In a production POD, a vRouter is responsible for providing connectivity between the fabric and the Internet, but this requires configuring BGP peering between the vRouter and an upstream router.  In environments where this is not feasible, it is possible to use the head node as a NAT gateway for the fabric by configuring some routes on the head node and in ONOS as described below.
+In a production POD, a vRouter is responsible for providing connectivity
+between the fabric and the Internet, but this requires configuring BGP peering
+between the vRouter and an upstream router.  In environments where this is not
+feasible, it is possible to use the head node as a NAT gateway for the fabric
+by configuring some routes on the head node and in ONOS as described below.
 
-###Add Routes for Fabric Subnets
+### Add Routes for Fabric Subnets
 
-The default POD configuration uses the `10.7.1.0/24` subnet for vSG traffic to the Internet, and `10.8.1.0/24` for other Internet traffic.  Add routes on the head node to forward traffic to these subnets into the fabric:
+The default POD configuration uses the `10.7.1.0/24` subnet for vSG traffic to
+the Internet, and `10.8.1.0/24` for other Internet traffic.  Add routes on the
+head node to forward traffic to these subnets into the fabric:
 
-```
+```shell
 sudo route add -net 10.7.1.0/24 gw 10.6.1.254
 sudo route add -net 10.8.1.0/24 gw 10.6.1.254
 ```
 
-###Add Default Route to Head Node from Fabric
+### Add Default Route to Head Node from Fabric
 
-ONOS must be configured to forward all outgoing Internet traffic to the head node's fabric interface, which by default has IP address `10.6.1.1`:
+ONOS must be configured to forward all outgoing Internet traffic to the head
+node's fabric interface, which by default has IP address `10.6.1.1`:
 
-```
+```shell
 ssh -p 8101 onos@onos-fabric route-add 0.0.0.0/0 10.6.1.1
 ```
 
->NOTE: When prompted, use password "rocks".
+> NOTE: When prompted, use password `rocks`.
 
-##Connect Compute Nodes to ONOS
+## Connect Compute Nodes to ONOS
 
-To make sure that ONOS is aware of the compute nodes, the following commands will send a ping over the fabric interface on the head node and each compute node.
+To make sure that ONOS is aware of the compute nodes, the following commands
+will send a ping over the fabric interface on the head node and each compute
+node.
 
-```
+```shell
 ping -c 1 10.6.1.254
 for h in $(cord prov list | grep "^node" | awk '{print $2}'); do
 ssh -i ~/.ssh/cord_rsa -qftn ubuntu@$h ping -c 1 10.6.1.254;
 done
 ```
 
-> NOTE: It is fine if the `ping` command fails; the purpose is to register the node with ONOS.
+> NOTE: It is fine if the `ping` command fails; the purpose is to register the
+> node with ONOS.
 
 You can verify ONOS has recognized the nodes using the following command:
 
-```
-ssh -p 8101 onos@onos-fabric hosts
+```shell
+$ ssh -p 8101 onos@onos-fabric hosts
 Warning: Permanently added '[onos-fabric]:8101,[10.6.0.1]:8101' (RSA) to the list of known hosts.
 Password authentication
 Password:
@@ -153,50 +190,53 @@
 id=3C:FD:FE:9E:94:28/None, mac=3C:FD:FE:9E:94:28, location=of:0000cc37ab7cba58/4, vlan=None, ip(s)=[10.6.1.2], configured=false
 ```
 
->NOTE: When prompt, use password rocks
+> NOTE: When prompted, use password `rocks`.
 
-##Generate the Network Configuration
+## Generate the Network Configuration
 
-To modify the fabric configuration for your environment, generate on the head node a new network configuration using the following commands:
+To modify the fabric configuration for your environment, generate a new
+network configuration on the head node using the following commands:
 
-```
+```shell
 cd /opt/cord_profile && \
 cp fabric-network-cfg.json{,.$(date +%Y%m%d-%H%M%S)} && \
 cord generate > fabric-network-cfg.json
 ```
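+
+You can sanity-check that the generated file is valid JSON with, for example:
+
+```shell
+python -m json.tool /opt/cord_profile/fabric-network-cfg.json
+```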
 
-##Load Network Configuration
+## Load Network Configuration
 
-Once these steps are done load the new configuration into XOS, and restart the apps in ONOS:
+Once these steps are done, load the new configuration into XOS and restart the
+apps in ONOS:
 
-###Install Dependencies
+### Install Dependencies
 
-```
+```shell
 sudo pip install httpie
 ```
 
-###Delete Old Configuration
+### Delete Old Configuration
 
-```
+```shell
 http -a onos:rocks DELETE http://onos-fabric:8181/onos/v1/network/configuration/
 ```
 
-###Load New Configuration
+### Load New Configuration
 
-```
+```shell
 http -a onos:rocks POST http://onos-fabric:8181/onos/v1/network/configuration/ < /opt/cord_profile/fabric-network-cfg.json
 ```
 
-###Restart ONOS Apps
+### Restart ONOS Apps
 
-```
+```shell
 http -a onos:rocks DELETE http://onos-fabric:8181/onos/v1/applications/org.onosproject.segmentrouting/active
 http -a onos:rocks POST http://onos-fabric:8181/onos/v1/applications/org.onosproject.segmentrouting/active
 ```
 
-To verify that XOS has pushed the configuration to ONOS, log into ONOS in the onos-fabric VM and run netcfg:
+To verify that XOS has pushed the configuration to ONOS, log into ONOS in the
+onos-fabric VM and run `netcfg`:
 
-```
+```shell
 $ ssh -p 8101 onos@onos-fabric netcfg
 Password authentication
 Password:
@@ -229,28 +269,32 @@
 
 ## Verify Connectivity over the Fabric
 
-Once the new ONOS configuration is active, the fabric interface on each node should be reachable from the other nodes.  From each compute node, ping the IP address of head node's fabric interface (e.g., `10.6.1.1`).
+Once the new ONOS configuration is active, the fabric interface on each node
+should be reachable from the other nodes.  From each compute node, ping the IP
+address of the head node's fabric interface (e.g., `10.6.1.1`).
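+
+For example, from each compute node:
+
+```shell
+ping -c 3 10.6.1.1
+```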
 
-Sometimes ping fails for various reasons. Reconnecting switches to ONOS often solves the problem:
+Sometimes ping fails for various reasons. Reconnecting switches to ONOS often
+solves the problem.
 
 Log into the switch from the head node:
 
-```
+```shell
 ssh root@SWITCH_IP
 ```
 
->NOTE: Switch IPs can be found by running "cord switch list" on the head node
+> NOTE: Switch IPs can be found by running `cord switch list` on the head node.
 
 Kill the current connection and restart a new one:
 
-```
+```shell
 ./killit
 ./connect -bg
 ```
 
-Sometimes restarting ONOS segmentrouting app also helps:
+Sometimes restarting the ONOS segmentrouting app also helps:
 
-```
+```shell
 http -a onos:rocks DELETE http://onos-fabric:8181/onos/v1/applications/org.onosproject.segmentrouting/active
 http -a onos:rocks POST http://onos-fabric:8181/onos/v1/applications/org.onosproject.segmentrouting/active
 ```
+
diff --git a/docs/appendix_images.md b/docs/appendix_images.md
index a047172..4af9cd5 100644
--- a/docs/appendix_images.md
+++ b/docs/appendix_images.md
@@ -1,27 +1,39 @@
-#  Container Images 
+# Container Images
 
-In the installation process CORD fetches, builds, and deploys a set of container images. 
-These include:
+In the installation process CORD fetches, builds, and deploys a set of
+container images.  These include:
 
-* cord-maas-bootstrap - (directory: bootstrap) run during MaaS installation time to customize the MaaS instance via REST interfaces. 
+* cord-maas-bootstrap - (directory: bootstrap) run during MaaS installation
+  time to customize the MaaS instance via REST interfaces.
 
-* cord-maas-automation - (directory: automation) daemon on the head node to automate PXE booted servers through the MaaS bare metal deployment workflow. 
+* cord-maas-automation - (directory: automation) daemon on the head node to
+  automate PXE booted servers through the MaaS bare metal deployment workflow.
 
-* cord-maas-switchq - (directory: switchq) daemon on the head node that watches for new switches being added to the POD and triggers provisioning when a switch is identified (via the OUI on MAC address). 
+* cord-maas-switchq - (directory: switchq) daemon on the head node that watches
+  for new switches being added to the POD and triggers provisioning when a
+  switch is identified (via the OUI on MAC address).
 
-* cord-maas-provisioner - (directory: provisioner) daemon on the head node that manages the execution of ansible playbooks against switches and compute nodes as they are added to the POD. 
+* cord-maas-provisioner - (directory: provisioner) daemon on the head node that
+  manages the execution of Ansible playbooks against switches and compute nodes
+  as they are added to the POD.
 
-* cord-ip-allocator - (directory: ip-allocator) daemon on the head node used to allocate IP address for the fabric interfaces. 
+* cord-ip-allocator - (directory: ip-allocator) daemon on the head node used to
+  allocate IP addresses for the fabric interfaces.
 
-* cord-dhcp-harvester - (directory: harvester) run on the head node to facilitate CORD / DHCP / DNS integration so that all hosts can be resolved via DNS. 
+* cord-dhcp-harvester - (directory: harvester) run on the head node to
+  facilitate CORD / DHCP / DNS integration so that all hosts can be resolved
+  via DNS.
 
-* opencord/mavenrepo - custom CORD maven repository image to support ONOS application loading from a local repository. 
+* opencord/mavenrepo - custom CORD maven repository image to support ONOS
+  application loading from a local repository.
 
-* cord-test/nose - container from which cord tester test cases originate and validate traffic through the CORD infrastructure. 
+* cord-test/nose - container from which cord tester test cases originate and
+  validate traffic through the CORD infrastructure.
 
-* cord-test/quagga - BGP virtual router to support uplink from CORD fabric network to Internet. 
+* cord-test/quagga - BGP virtual router to support uplink from the CORD fabric
+  network to the Internet.
 
-* cord-test/radius - Radius server to support cord-tester capability. 
+* cord-test/radius - Radius server to support cord-tester capability.
 
-* opencord/onos - custom version of ONOS for use within the CORD platform. 
+* opencord/onos - custom version of ONOS for use within the CORD platform.
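+
+Once a POD is installed, you can check which of these images are present on a
+node with, for example:
+
+```shell
+docker images | grep cord
+```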
 
diff --git a/docs/appendix_network_settings.md b/docs/appendix_network_settings.md
index 5f8a6a7..7788ab0 100644
--- a/docs/appendix_network_settings.md
+++ b/docs/appendix_network_settings.md
@@ -1,24 +1,43 @@
-# Network Settings 
+# Network Settings
 
-The CORD POD uses two core network interfaces: fabric and mgmtbr. 
-The fabric interface is used to bond all interfaces meant to be used for CORD data traffic and the mgmtbr will be used to bridge all interfaces used for POD management (signaling) traffic. An additional interface of import on the head node is the external interface, or the interface through which the management network accesses upstream servers, such as the Internet. 
+The CORD POD uses two core network interfaces: fabric and mgmtbr.
 
-How physical interfaces are identified and mapped to either the external, the fabric or mgmtbr interface is a combination of their name, NIC driver, and/or bus type. 
+The fabric interface is used to bond all interfaces meant to be used for CORD
+data traffic and the mgmtbr will be used to bridge all interfaces used for POD
+management (signaling) traffic. An additional interface of import on the head
+node is the external interface, or the interface through which the management
+network accesses upstream servers, such as the Internet.
 
-You can verify how your network card matches these informations using on the compute nodes (including the one with head capabilities) 
+How physical interfaces are identified and mapped to either the external, the
+fabric or mgmtbr interface is a combination of their name, NIC driver, and/or
+bus type.
 
-```
+You can verify how your network card matches this information by running the
+following on the compute nodes (including the one with head capabilities):
+
+```shell
 ethtool -i <name>
 ```
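+
+The fields of interest are `driver` (the kernel module) and `bus-info`; for
+example, illustrative output for an Intel XL710 NIC:
+
+```shell
+$ ethtool -i eth2
+driver: i40e
+version: 1.4.25
+firmware-version: 5.05
+bus-info: 0000:02:00.0
+```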
 
-By default, any interface that has a module or kernel driver of tun, bridge, bonding, or veth will be ignored when selecting devices for the fabric and mgmtbr interfaces, as well as any interface that is not associated with a bus type or has a bus type of N/A or tap. All other interfaces that are not ignored will be considered for selection to either the fabric or the mbmtbr interface. 
+By default, any interface that has a module or kernel driver of tun, bridge,
+bonding, or veth will be ignored when selecting devices for the fabric and
+mgmtbr interfaces, as well as any interface that is not associated with a bus
+type or has a bus type of N/A or tap. All other interfaces that are not ignored
+will be considered for selection to either the fabric or the mgmtbr interface.
 
-When deciding which interfaces are in this bond, the deployment script selects the list of available interfaces and filters them on the criteria below. The output is the list of interfaces that should be associated with the bond interface. The resultant list is sorted alphabetically. Finally, the interfaces are configured to be in the bond interface with the first interface in the list being the primary. 
+When deciding which interfaces are in this bond, the deployment script selects
+the list of available interfaces and filters them on the criteria below. The
+output is the list of interfaces that should be associated with the bond
+interface. The resultant list is sorted alphabetically. Finally, the interfaces
+are configured to be in the bond interface with the first interface in the list
+being the primary.
 
-The network configuration can be customized before deploying, using a set of variables that can be set in your deployment configuration file, for example `podX.yml`, in the dev VM, under `/cord/build/podconfig`. 
-Below, an example of most commonly used network variables is reported:
+The network configuration can be customized before deploying, using a set of
+variables that can be set in your deployment configuration file, for example
+`podX.yml`, in the dev VM, under `/cord/build/podconfig`.  Below is an example
+of the most commonly used network variables:
 
-```
+```yaml
 'fabric_include_names'='<name1>,<name2>'
 'fabric_include_module_types'='<mod1>,<mod2>'
 'fabric_include_bus_types'='<bus1>,<bus2>'
@@ -39,39 +58,73 @@
 'management_ignore_bus_types'='<bus1>,<bus2>'
 ```
 
-Each of the criteria is specified as a comma separated list of regular expressions. 
+Each of the criteria is specified as a comma-separated list of regular
+expressions.
 
-There is a set of include, exclude, and ignore variables, that operate on the interface names, module types and bus types. By setting values on these variables it is fairly easy to customize the network settings. 
+There is a set of include, exclude, and ignore variables that operate on the
+interface names, module types, and bus types. By setting values on these
+variables, it is fairly easy to customize the network settings.
 
-The options are processed as following:
+The options are processed as follows:
 
-1. If a given interface matches an ignore option, it is not available to be selected into either the fabric or mgmtbr interface and will not be modified in the `/etc/network/interface`. 
+1. If a given interface matches an ignore option, it is not available to be
+   selected into either the fabric or mgmtbr interface and will not be modified
+   in the `/etc/network/interfaces` file.
 
-2. If no include criteria are specified and the given interfaces matches then exclude criteria then the interface will be set as manual configuration in the `/etc/network/interface` file and will not be auto activated 
+2. If no include criteria are specified and the given interface matches the
+   exclude criteria, then the interface will be set as manual configuration in
+   the `/etc/network/interfaces` file and will not be auto-activated.
 
-3. If no include criteria are specified and the given interface does NOT match the exclude criteria then this interface will be included in either the frabric or mgmtbr interface. 
+3. If no include criteria are specified and the given interface does NOT match
+   the exclude criteria, then this interface will be included in either the
+   fabric or mgmtbr interface.
 
-4. If include criteria are specified and the given interface does not match the criteria then the interface will be ignored and its configuration will NOT be modified 
+4. If include criteria are specified and the given interface does not match the
+   criteria, then the interface will be ignored and its configuration will NOT
+   be modified.
 
-5. If include criteria are specified and the given interface matches the criteria then if the given interface also matches the exclude criteria then this interface will be set as manual configuration in the /etc/network/interface file and will not be auto activated 
+5. If include criteria are specified and the given interface matches the
+   criteria, but it also matches the exclude criteria, then this interface
+   will be set as manual configuration in the `/etc/network/interfaces` file
+   and will not be auto-activated.
 
-6. If include criteria are specified and the given interface matches the criteria and if it does NOT match the exclude criteria then this interface will be included in either the fabric or mgmtbr interface. 
+6. If include criteria are specified and the given interface matches the
+   criteria and does NOT match the exclude criteria, then this interface will
+   be included in either the fabric or mgmtbr interface.
 
-7. By default, the only criteria that are specified is the fabric include module types and they are set to i40e, mlx4_en. 
+7. By default, the only criteria specified are the fabric include module
+   types, which are set to `i40e` and `mlx4_en`.
 
-8. If the fabric include module types is specified and the management exclude module types are not specified, then by default the fabric include module types are used as the management exclude module types. This ensures that by default the fabric and the mgmtbr do not intersect on interface module types. 
+8. If the fabric include module types is specified and the management exclude
+   module types are not specified, then by default the fabric include module
+   types are used as the management exclude module types. This ensures that by
+   default the fabric and the mgmtbr do not intersect on interface module
+   types.
 
-9. If an external interface is specified in the deployment configuration, this interface will be added to the farbric and management ignore names list. 
+9. If an external interface is specified in the deployment configuration, this
+   interface will be added to the fabric and management ignore names list.
 
-A common question is how a non-standard card can be used as fabric network card on the compute nodes. To do that, you should check the driver type for the card you want to use with ethtool -i <name>, and insert that in the list under the line `fabric_include_module_types`. 
+A common question is how a non-standard card can be used as a fabric network
+card on the compute nodes. To do that, you should check the driver type of the
+card you want to use with `ethtool -i <name>`, and insert it in the list under
+the line `fabric_include_module_types`.
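+
+For example, assuming `ethtool -i` reports a driver of `ixgbe` for the card
+you want to use, the entry in your POD config might look like:
+
+```yaml
+'fabric_include_module_types'='i40e,mlx4_en,ixgbe'
+```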
 
 Some notes:
 
-* If the fabric include module types is specified and the management exclude module types are not specified, then by default the fabric include module types are used as the management exclude module types. This ensures that by default the fabric and the mgmtbr do not intersect on interface module types. 
+* If the fabric include module types is specified and the management exclude
+  module types are not specified, then by default the fabric include module
+  types are used as the management exclude module types. This ensures that by
+  default the fabric and the mgmtbr do not intersect on interface module types.
 
-* If an external interface is specified in the deployment configuration, this interface will be added to the fabric and management ignore names list. 
+* If an external interface is specified in the deployment configuration, this
+  interface will be added to the fabric and management ignore names list.
 
-* Each of the criteria is specified as a comma separated list of regular expressions. 
+* Each of the criteria is specified as a comma-separated list of regular
+  expressions.
 
->WARNING: The Ansible scripts configure the head node to provide DHCP/DNS/PXE services out of its internal / management network interfaces, to be able to reach the other components of the POD (i.e. the switches and the other compute nodes). These services are instead not exposed out of the external network. 
+> WARNING: The Ansible scripts configure the head node to provide DHCP/DNS/PXE
+> services out of its internal / management network interfaces, to be able to
+> reach the other components of the POD (i.e. the switches and the other
+> compute nodes). These services are instead not exposed out of the external
+> network.
 
diff --git a/docs/appendix_vsg.md b/docs/appendix_vsg.md
index dd17cd8..cf91187 100644
--- a/docs/appendix_vsg.md
+++ b/docs/appendix_vsg.md
@@ -1,6 +1,9 @@
 # vSG Configuration
 
->NOTE: This section is only relevant if you wish to change the default IP address block or gateway MAC address associated with the vSG subnet.  One reason you might want to do this is to associate more IP addresses with the vSG network (the default is a /24).
+> NOTE: This section is only relevant if you wish to change the default IP
+> address block or gateway MAC address associated with the vSG subnet.  One
+> reason you might want to do this is to associate more IP addresses with the
+> vSG network (the default is a /24).
 
 First, login to the CORD head node (`ssh head1` in *CiaB*) and go to the
 `/opt/cord_profile` directory. To configure the fabric gateway, you will need
@@ -16,54 +19,55 @@
       gateway_mac: a4:23:05:06:01:01
 ```
 
-Edit this section so that it reflects the fabric address block that you wish to assign to the
-vSGs, as well as the gateway IP and the MAC address that the vSG should use to
-reach the Internet (e.g., for the vRouter)
+Edit this section so that it reflects the fabric address block that you wish to
+assign to the vSGs, as well as the gateway IP and the MAC address that the vSG
+should use to reach the Internet (e.g., for the vRouter).
 
 Once the `cord-services.yaml` TOSCA file has been edited as described above,
 push it to XOS by running the following:
 
-```
+```shell
 cd /opt/cord_profile
-docker-compose -p rcord exec xos_ui python /opt/xos/tosca/run.py xosadmin@opencord.org
-/opt/cord_profile/cord-services.yaml
+docker-compose -p rcord exec xos_ui python /opt/xos/tosca/run.py xosadmin@opencord.org /opt/cord_profile/cord-services.yaml
 ```
 
 This step is complete once you see the correct information in the VTN app
-configuration in ONOS.  To check that XOS has successfully pushed the network configuration to the ONOS VTN app:
+configuration in ONOS.  To check that XOS has successfully pushed the network
+configuration to the ONOS VTN app:
 
- 1.  Log into ONOS from the head node
+1. Log into ONOS from the head node
 
     * Command: `ssh -p 8102 onos@onos-cord`
     * Password: `rocks`
 
- 2. Run the `netcfg` command. Verify that the updated gateway information is
-present under publicGateways:
+2. Run the `netcfg` command. Verify that the updated gateway information is
+   present under `publicGateways`:
 
-```json
-onos> netcfg
-"publicGateways" : [
-  {
-    "gatewayIp" : "10.7.1.1",
-    "gatewayMac" : "a4:23:05:06:01:01"
-  }, {
-    "gatewayIp" : "10.8.1.1",
-    "gatewayMac" : "a4:23:05:06:01:01"
-  }
-],
-```
+    ```json
+    onos> netcfg
+    "publicGateways" : [
+      {
+        "gatewayIp" : "10.7.1.1",
+        "gatewayMac" : "a4:23:05:06:01:01"
+      }, {
+        "gatewayIp" : "10.8.1.1",
+        "gatewayMac" : "a4:23:05:06:01:01"
+      }
+    ],
+    ```
 
-> NOTE: The above output is just a sample; you should see the values you configured
+    > NOTE: The above output is just a sample; you should see the values you
+    > configured.
 
- 3. Run the `cordvtn-nodes` command.  This will look like the following:
+3. Run the `cordvtn-nodes` command.  This will look like the following:
 
-```
-onos> cordvtn-nodes
-  Hostname                      Management IP       Data IP             Data Iface     Br-int                  State
-  sturdy-baseball               10.1.0.14/24        10.6.1.2/24         fabric         of:0000525400d7cf3c     COMPLETE
-  Total 1 nodes
-```
+    ```shell
+    onos> cordvtn-nodes
+      Hostname                      Management IP       Data IP             Data Iface     Br-int                  State
+      sturdy-baseball               10.1.0.14/24        10.6.1.2/24         fabric         of:0000525400d7cf3c     COMPLETE
+      Total 1 nodes
+    ```
 
- 4. Verify that the information for all nodes is correct
+4. Verify that the information for all nodes is correct
 
- 5. Verify that the initialization status of all nodes is `COMPLETE`.
+5. Verify that the initialization status of all nodes is `COMPLETE`.
+
diff --git a/docs/build_images.md b/docs/build_images.md
index 91e5597..34934aa 100644
--- a/docs/build_images.md
+++ b/docs/build_images.md
@@ -34,7 +34,7 @@
 following in the logs. If this happens, imagebuilder will attempt to build the
 image from scratch rather than pulling it:
 
-```
+```python
 NotFound: 404 Client Error: Not Found ("{"message":"manifest for xosproject/xos-gui-extension-builder:<hash> not found"}")
 ```
 
@@ -44,46 +44,47 @@
 
 The imagebuilder program performs the following steps when run:
 
- 1. Reads the [repo manifest file](https://github.com/opencord/manifest/blob/master/default.xml)
-    (checked out as `.repo/manifest`) to get a list of the CORD git repositories.
+1. Reads the [repo manifest file](https://github.com/opencord/manifest/blob/master/default.xml)
+   (checked out as `.repo/manifest`) to get a list of the CORD git repositories.
 
- 2. Reads the [build/docker_images.yml](https://github.com/opencord/cord/blob/{{ book.branch }}/docker_images.yml)
-    file and the generated `cord/build/genconfig/config.yml` file (which
-    contains a `docker_image_whitelist` list from the scenario), to determine
-    which containers are needed for this POD configuration.
+2. Reads the
+   [build/docker_images.yml](https://github.com/opencord/cord/blob/{{ book.branch }}/docker_images.yml)
+   file and the generated `cord/build/genconfig/config.yml` file (which
+   contains a `docker_image_whitelist` list from the scenario), to determine
+   which containers are needed for this POD configuration.
 
- 3. For every container that is needed, reads the Dockerfile and determines if
-    any parent images are needed, and creates a tree to order image building.
+3. For every container that is needed, reads the Dockerfile and determines if
+   any parent images are needed, and creates a tree to order image building.
 
- 4. Determines which images need to be rebuilt based on:
+4. Determines which images need to be rebuilt based on:
 
-   - Whether the image exists and is has current tags added to it.
-   - If the Docker build context is *dirty* or differs (is on a different
-     branch) from the git tag specified in the repo manifest
-   - If the image's parent (or grandparent, etc.) needs to be rebuilt
+    * Whether the image exists and has current tags added to it.
+    * If the Docker build context is *dirty* or differs (is on a different
+      branch) from the git tag specified in the repo manifest.
+    * If the image's parent (or grandparent, etc.) needs to be rebuilt.
 
- 5. Using this information downloads (pulls) or builds images as needed in a
-    way that is consistent with the CORD source that is on disk.  If an image
-    build is needed, the Docker output of that build is saved to
-    `build/image_logs` on the system where Imagebuilder executes (the
-    `buildhost` in inventory).
+5. Using this information, downloads (pulls) or builds images as needed in a
+   way that is consistent with the CORD source that is on disk.  If an image
+   build is needed, the Docker output of that build is saved to
+   `build/image_logs` on the system where Imagebuilder executes (the
+   `buildhost` in inventory).
 
- 6. Tags the image with the `candidate` and (if clean) git hash tags.
+6. Tags the image with the `candidate` and (if clean) git hash tags.
 
- 7. Creates a YAML output file that describes the work it performed, for later
-    use (pushing images, retagging, etc.), and optional a graphviz `.dot` graph
-    file showing the relationships between images.
+7. Creates a YAML output file that describes the work it performed, for later
+   use (pushing images, retagging, etc.), and optionally a graphviz `.dot` graph
+   file showing the relationships between images.
 
 ## Image Tagging
 
 CORD container images frequently have multiple tags. The two most common ones
 are:
 
- * The string `candidate`, which says that the container is ready to be
-   deployed on a CORD POD
- * The git commit hash, which is either pulled from DockerHub, or applied when
-   a container is built from an untouched (according to git) source tree.
-   Images built from a modified source tree will not be tagged in this way.
+* The string `candidate`, which says that the container is ready to be deployed
+  on a CORD POD
+* The git commit hash, which is either pulled from DockerHub, or applied when a
+  container is built from an untouched (according to git) source tree.  Images
+  built from a modified source tree will not be tagged in this way.
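+
+For example, to list the tags applied to a local copy of an image (the image
+name here is illustrative):
+
+```shell
+docker images xosproject/xos
+```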
 
-Imagebuilder use this git hash tag as well as labels on the image of the git
+Imagebuilder uses this git hash tag as well as labels on the image of the git
 repos of parent images to determine whether an image is correctly built from
@@ -99,26 +100,26 @@
 
 Required labels for every CORD image:
 
- - `org.label-schema.version`
- - `org.label-schema.name`
- - `org.label-schema.vcs-url`
- - `org.label-schema.build-date`
+* `org.label-schema.version`
+* `org.label-schema.name`
+* `org.label-schema.vcs-url`
+* `org.label-schema.build-date`
 
 Required for clean builds:
 
- - `org.label-schema.version` : *git branch name, ex: `opencord/master`,
-   `opencord/cord-4.0`, , etc.*
- - `org.label-schema.vcs-ref` : *the full 40 character SHA-1 git commit hash,
-   not shortened*
+* `org.label-schema.version` : *git branch name, ex: `opencord/master`,
+  `opencord/cord-4.0`, etc.*
+* `org.label-schema.vcs-ref` : *the full 40 character SHA-1 git commit hash,
+  not shortened*
 
 Required for dirty builds:
 
- - `org.label-schema.version` : *set to the string `dirty` if there is any
-   differences from the master commit to the build context (either on a
-   different branch, or untracked/changed files in context)*
- - `org.label-schema.vcs-ref` - *set to a commit hash if build context is clean
-   (ie, on another unnamed branch/patchset), or the empty string if the build
-   context contains untracked/changed files.*
+* `org.label-schema.version` : *set to the string `dirty` if there are any
+  differences from the master commit to the build context (either on a
+  different branch, or untracked/changed files in context)*
+* `org.label-schema.vcs-ref` - *set to a commit hash if build context is clean
+  (i.e., on another unnamed branch/patchset), or the empty string if the build
+  context contains untracked/changed files.*
 
 For images that use components from another repo (like chameleon being
 integrated with the XOS containers, or maven repo which contains artifacts from
@@ -127,15 +128,15 @@
 `<reponame>`, and the value being the same value as the label-schema
 one would be:
 
- - `org.opencord.component.<reponame>.version`
- - `org.opencord.component.<reponame>.vcs-ref`
- - `org.opencord.component.<reponame>.vcs-url`
+* `org.opencord.component.<reponame>.version`
+* `org.opencord.component.<reponame>.vcs-ref`
+* `org.opencord.component.<reponame>.vcs-url`
 
 These labels are applied by using the `ARG` and `LABEL` option in the
 Dockerfile. The following is an example set of labels for an image that uses
 files from the chameleon and XOS repositories as components:
 
-```
+```dockerfile
 # Label image
 ARG org_label_schema_schema_version=1.0
 ARG org_label_schema_name=openstack-synchronizer
diff --git a/docs/cord_in_china.md b/docs/cord_in_china.md
index a1da811..56d99db 100644
--- a/docs/cord_in_china.md
+++ b/docs/cord_in_china.md
@@ -1,43 +1,57 @@
 # Operating CORD in China
 
-Different community members reported problems operating CORD in China. This section provides a practical guide and some suggestions on how to install the main platform, the use cases, as well as how to manage them after their setup.
+Different community members have reported problems operating CORD in China.
+This section provides a practical guide and some suggestions on how to
+install the main platform and the use cases, as well as how to manage them
+after their setup.
 
-# Install the CORD platform (4+)
-The guide explains how to install the CORD platform in China, providing some specific suggestions and customizations.
+## Install the CORD platform (4+)
 
-> Note: the guide assumes you've already read the main [CORD installation guide](install_physical.md), that you've understood the hardware and software requirements, and that you're well aware of how the standard installation process works.
+The guide explains how to install the CORD platform in China, providing some
+specific suggestions and customizations.
 
-## Update Ubuntu repositories
-We know that the default Ubuntu repositories are quite slow in China. The following repository should work faster:
+> Note: the guide assumes you've already read the main [CORD installation
+> guide](install_physical.md), that you've understood the hardware and software
+> requirements, and that you're well aware of how the standard installation
+> process works.
 
-```
-https://mirrors.tuna.tsinghua.edu.cn/help/ubuntu/
-```
+### Update Ubuntu repositories
 
-Update your Ubuntu repository, both on the development machine and on the head node.
+We know that the default Ubuntu repositories are quite slow in China. The
+following repository should work faster:
 
-## Prepare the dev node machine
+[https://mirrors.tuna.tsinghua.edu.cn/help/ubuntu/](https://mirrors.tuna.tsinghua.edu.cn/help/ubuntu/)
 
-By default, CORD uses an [automated script](quickstarts.md#pod-quickstarts) to prepare the development machine. Unfortunately, the script won't work in China, since the standard Vagrant repositories are not reachable.
+Update your Ubuntu repository, both on the development machine and on the head
+node.
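+
+For example, assuming the default `sources.list` entries, you can switch a
+machine to the mirror with:
+
+```shell
+sudo sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
+sudo apt-get update
+```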
 
-As a work-around, you can execute some manual commands. Copy and paste to your dev node the following:
+### Prepare the dev node machine
 
-Install essential software
-```
+By default, CORD uses an [automated script](quickstarts.md#pod-quickstarts) to
+prepare the development machine. Unfortunately, the script won't work in China,
+since the standard Vagrant repositories are not reachable.
+
+As a work-around, you can execute some manual commands. Copy and paste the
+following to your dev node:
+
+Install essential software:
+
+```shell
 sudo apt-get update &&
 sudo apt-get -y install apt-transport-https build-essential curl git python-dev python-netaddr python-pip software-properties-common sshpass qemu-kvm libvirt-bin libvirt-dev nfs-kernel-server &&
 ```
 
-Install Ansible and related software
-```
+Install Ansible and related software:
+
+```shell
 sudo apt-add-repository -y ppa:ansible/ansible &&
 sudo apt-get update &&
 sudo apt-get install -y ansible &&
 sudo pip install gitpython graphviz
 ```
 
-Install Vagrant and related plugins
-```
+Install Vagrant and related plugins:
+
+```shell
-curl -o /tmp/vagrant.deb https://releases.hashicorp.com/vagrant/1.9.3/ vagrant_1.9.3_x86_64.deb &&
+curl -o /tmp/vagrant.deb https://releases.hashicorp.com/vagrant/1.9.3/vagrant_1.9.3_x86_64.deb &&
 sudo dpkg -i /tmp/vagrant.deb &&
 vagrant plugin list | grep -q vagrant-libvirt || vagrant plugin install vagrant-libvirt --plugin-version 0.0.35 &&
@@ -45,30 +59,41 @@
 vagrant plugin list | grep -q vagrant-hosts || vagrant plugin install vagrant-hosts &&
 ```
 
-Download Vagrant VMs and mutate it to be used with Libvirt
-```
+Download the Vagrant VM image and mutate it to be used with Libvirt:
+
+```shell
 wget https://github.com/opencord/platform-install/releases/download/vms/trusty-server-cloudimg-amd64-vagrant-disk1.box &&
 vagrant box add ubuntu/trusty64 trusty-server-cloudimg-amd64-vagrant-disk1.box &&
 vagrant mutate ubuntu/trusty64 libvirt --input-provider virtualbox
 ```
 
-Install repo
-```
+Install repo:
+
+```shell
 curl -o /tmp/repo 'https://gerrit.opencord.org/gitweb?p=repo.git;a=blob_plain;f=repo;hb=refs/heads/stable' &&
 echo "$REPO_SHA256SUM  /tmp/repo" | sha256sum -c - &&
 sudo mv /tmp/repo /usr/local/bin/repo &&
 sudo chmod a+x /usr/local/bin/repo
 ```
 
-> Note: *repo* is a tool from Google, that CORD uses to automatically manage multiple git repositories used in the project together. The original version of repo connects to Google at every initialization to download some software. This special version will let you connect to ONF instead at every initialization.
+> Note: *repo* is a tool from Google, that CORD uses to automatically manage
+> multiple git repositories used in the project together. The original version
+> of repo connects to Google at every initialization to download some software.
+> This special version will let you connect to ONF instead at every
+> initialization.
 
-## Download the CORD software
+### Download the CORD software
 
-Before it wasn't possible to download directly the CORD repository using repo, since the tool was trying to connect at every initialization to Google to check for updates.
-With the custom version (see above) this is not anymore an issue. You can now download the code as anyone else!
+Previously it wasn't possible to download the CORD repository directly using
+repo, since the tool tried to connect to Google at every initialization to
+check for updates.
+
+With the custom version (see above) this is no longer an issue. You can now
+download the code like anyone else!
 
 To download the code, follow the steps below:
-```
+
+```shell
 git config --global user.name 'Test User' &&
 git config --global user.email 'test@null.com' &&
 git config --global color.ui false &&
@@ -76,20 +101,31 @@
 repo sync
 ```
 
-> Warning: please, note that master is just an example. You can replace it with the branch you prefer.
+> Warning: please note that `master` is just an example. You can replace it
+> with the branch you prefer.
 
-From now on in the guide, the cord directory just extracted will be referenced as CORD_ROOT. The CORD_ROOT directory should be ~/cord.
+From now on in the guide, the cord directory just extracted will be referred to
+as `CORD_ROOT`. The `CORD_ROOT` directory should be `~/cord`.
 
-## Replace google.com address with opennetworking.org
+### Replace google.com address with opennetworking.org
-The repository has been generally cleaned-up from references to Google to avoid issues. Anyway, there are still some portions of code referring to it. We suggest you to look carefully in the entire repositories and replace any google.com occurrence with opennetworking.org (or any other preferred address).
 
-## Replace Google DNS
-Google DNS won't work over there. Unfortunately, it is still the default for all CORD services. Look in the entire repository for all the occurrences of 8.8.8.8, 8.8.8.4, or 8.8.4.4, and replace them with your preferred DNS server.
+The repository has been generally cleaned up from references to Google to avoid
+issues. However, some portions of code still refer to it. We suggest looking
+carefully through the entire repository and replacing any google.com occurrence
+with opennetworking.org (or any other preferred address).
 
-## Replace Maven mirror
-Maven is used to build ONOS apps. The default Maven mirror doesn't work. In CORD_ROOT/onos-apps/settings.xml, add:
+### Replace Google DNS
 
-```
+Google DNS won't work from China. Unfortunately, it is still the default for
+all CORD services. Look in the entire repository for all the occurrences of
+8.8.8.8, 8.8.8.4, or 8.8.4.4, and replace them with your preferred DNS server.
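+
+For example, to find the occurrences to edit, run from the CORD_ROOT
+directory:
+
+```shell
+grep -rn --exclude-dir=.git -e '8\.8\.8\.8' -e '8\.8\.8\.4' -e '8\.8\.4\.4' .
+```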
+
+### Replace Maven mirror
+
+Maven is used to build ONOS apps. The default Maven mirror doesn't work. In
+`CORD_ROOT/onos-apps/settings.xml`, add:
+
+```xml
 <mirror>
     <id>nexus-aliyun</id>
     <mirrorOf>*</mirrorOf>
@@ -98,67 +134,89 @@
 </mirror>
 ```
 
-## Replace Docker mirror
-The default Docker mirror won't work. Docker just opened a new mirror in China, which should be used instead.
+### Replace Docker mirror
+
+The default Docker mirror won't work. Docker just opened a new mirror in China,
+which should be used instead.
 
 Reference guide can be found at: <https://www.docker-cn.com/registry-mirror>
-Set the option ```--registry-mirror=https://registry.docker-cn.com``` in the following files:
+Set the option `--registry-mirror=https://registry.docker-cn.com` in the
+following files:
+
 * CORD_ROOT/build/ansible/roles/docker/templates/docker.cfg
 * CORD_ROOT/build/maas/roles/compute-node/tasks/main.yml
 
-## Replace NPM mirrors
+### Replace NPM mirrors
 
-Add a line ```npm config set registry https://registry.npm.taobao.org``` just before the “npm install” command is invoked, in the following files:
+Add a line `npm config set registry https://registry.npm.taobao.org` just
+before the `npm install` command is invoked, in the following files:
+
 * CORD_ROOT/orchestration/xos-gui/Dockerfile
-* CORD_ROOT/orchestration/xos-gui/Dockerfile.xos-gui-extension- builder
+* CORD_ROOT/orchestration/xos-gui/Dockerfile.xos-gui-extension-builder
 
-Create a file named ```npmrc``` in ```CORD_ROOT/orchestration/xos-gui/``` and add the following content:
+Create a file named `npmrc` in `CORD_ROOT/orchestration/xos-gui/` and add
+the following content:
 
-```
+```cfg
 registry = https://registry.npm.taobao.org
 sass-binary-site = http://npm.taobao.org/mirrors/node-sass
 phantomjs_cdnurl = https://npm.taobao.org/dist/phantomjs
 ```
 
-Add the line ```COPY ${CODE_SOURCE}/npmrc /root/.npmrc``` to the following files, before the ```npm install``` command is invoked, and after the ```CODE_SOURCE``` variable is defined:
+Add the line `COPY ${CODE_SOURCE}/npmrc /root/.npmrc` to the following files,
+before the `npm install` command is invoked, and after the `CODE_SOURCE`
+variable is defined:
+
 * CORD_ROOT/orchestration/xos-gui/Dockerfile
 * CORD_ROOT/orchestration/xos-gui/Dockerfile.xos-gui-extension-builder
 
-## Fix Google Maps for China
-This is specific to ```E-CORD```, which uses Google Maps to visualize where Central Offices are located. Default Google Maps APIs are not available from China.
-To fix this, replace
+### Fix Google Maps for China
 
-```
+This is specific to `E-CORD`, which uses Google Maps to visualize where
+Central Offices are located. Default Google Maps APIs are not available from
+China.  To fix this, replace
+
+```html
 <div map-lazy-load="https://maps.googleapis.com/maps/api/js?key={API_KEY}">
 ```
 
-with
+with:
 
-```
+```html
 <div map-lazy-load="http://maps.google.cn/maps/api/js?key={API_KEY}">
 ```
 
 in
 
-```
+```html
 CORD_ROOT/orchestration/xos_services/vnaas/xos/gui/src/app/components/vnaasMap.component.html
 ```
 
-## You're all set!
+### You're all set
 
-You're finally ready to deploy CORD following the standard installation procedure described [here](install_physical.md)!
+You're finally ready to deploy CORD following the standard installation
+procedure described [here](install_physical.md)!
 
-# Developing for CORD
+## Developing for CORD
 
-Now that repo issues have been solved, you can start developing for CORD also from China, and go through the default developer workflow described [here](develop.md). Happy coding!
+Now that repo issues have been solved, you can start developing for CORD also
+from China, and go through the default developer workflow described
+[here](develop.md). Happy coding!
 
-# Mailing lists
+## Mailing lists
 
-CORD mailing lists are hosted on Google, but this doesn't mean you can't send and receive emails!
+CORD mailing lists are hosted on Google, but this doesn't mean you can't send
+and receive emails!
 
 Mailing lists are:
+
 * <cord-discuss@opencord.org>
 * <cord-dev@opencord.org>
 
-You don't need to join the mailing list to be able to write to it, but you need that in order to receive automated updates.
-To subscribe to a mailing list, so to see emails from the rest of the community, send a blank email to ```NAME_OF_THE_ML+subscribe@opencord.org```, for example <cord-dev+subscribe@opencord.org>.
+You don't need to join a mailing list to be able to write to it, but you do
+need to join in order to receive automated updates.  To subscribe to a mailing
+list and see emails from the rest of the community, send a blank email to
+`NAME_OF_THE_ML+subscribe@opencord.org`, for example
+<cord-dev+subscribe@opencord.org>.
+
diff --git a/docs/getting_the_code.md b/docs/getting_the_code.md
index bf4ad99..70c68cd 100644
--- a/docs/getting_the_code.md
+++ b/docs/getting_the_code.md
@@ -18,19 +18,22 @@
 sudo chmod a+x /usr/local/bin/repo
 ```
 
-**NOTE**: As mentioned above, you may want to install *repo* using the official repository instead. We forked the original repository and host a copy of the file to make repo downloadable also by organizations that don't have access to Google servers.
+> NOTE: As mentioned above, you may want to install *repo* using the official
+> repository instead. We forked the original repository and host a copy of the
+> file to make repo downloadable also by organizations that don't have access
+> to Google servers.
 
 ## Download CORD repositories
 
 The `cord` repositories are usually checked out to `~/cord` in most of our
 examples and deployments:
 
-<pre><code>
+```shell
 mkdir ~/cord && \
 cd ~/cord && \
 repo init -u https://gerrit.opencord.org/manifest -b {{ book.branch }} && \
 repo sync
-</code></pre>
+```
 
-> NOTE: `-b` specifies the branch name. Development work goes on in `master,
+> NOTE: `-b` specifies the branch name. Development work goes on in `master`,
 > and there are also specific stable branches such as `cord-4.0` that can be
@@ -41,7 +44,7 @@
 
 ```sh
 $ ls
-build		component	incubator	onos-apps	orchestration	test
+build component incubator onos-apps orchestration test
 ```
 
 ## Download patchsets
@@ -49,7 +52,7 @@
 Once you've downloaded a CORD source tree, you can download patchsets from
 Gerrit with the following command:
 
-```
+```shell
 repo download orchestration/xos 1234/3
 ```
 
diff --git a/docs/install.md b/docs/install.md
index 9904a29..a011453 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -23,18 +23,18 @@
 CORD has a unified build system for development and deployment which uses the
 following tools:
 
- - [Ansible](https://docs.ansible.com/ansible/intro_installation.html), *tested
-   with v2.4*
- - [Repo](https://source.android.com/source/downloading#installing-repo),
-   *tested with v1.23 of repo (launcher)*
+- [Ansible](https://docs.ansible.com/ansible/intro_installation.html), *tested
+  with v2.4*
+- [Repo](https://source.android.com/source/downloading#installing-repo),
+  *tested with v1.23 of repo (launcher)*
 
 And either:
 
- - [Docker](https://www.docker.com/community-edition), for *local* build
-   scenarios, *tested with Community Edition version 17.06*
- - [Vagrant](https://www.vagrantup.com/downloads.html), for all other scenarios
-   *tested with version 1.9.3, requires specific plugins and modules if using
-   with libvirt, see `cord-bootstrap.sh` for more details *
+- [Docker](https://www.docker.com/community-edition), for *local* build
+  scenarios, *tested with Community Edition version 17.06*
+- [Vagrant](https://www.vagrantup.com/downloads.html), for all other scenarios,
+  *tested with version 1.9.3, requires specific plugins and modules if using
+  with libvirt, see `cord-bootstrap.sh` for more details*
 
 You can manually install these on your development system - see [Getting the
 Source Code](getting_the_code.md) for more detailed instructions for checking
@@ -46,14 +46,17 @@
 `cord-bootstrap.sh` script to install these tools and check out the CORD source
 tree to `~/cord`.
 
-<pre><code>
+```shell
 curl -o ~/cord-bootstrap.sh https://raw.githubusercontent.com/opencord/cord/{{ book.branch }}/scripts/cord-bootstrap.sh
 chmod +x cord-bootstrap.sh
-</code></pre>
+```
+
+> NOTE: Change the `master` path component in the URL to your desired version
+> branch (ex: `cord-5.0`) if required.
 
 The bootstrap script has the following options:
 
-```
+```shell
 Usage for ./cord-bootstrap.sh:
   -d                           Install Docker for local scenario.
   -h                           Display this help message.
@@ -70,7 +73,7 @@
 `<project path>:<changeset>/<revision>`.  It can be used multiple
 times. For example:
 
-```
+```shell
 ./cord-bootstrap.sh -p build/platform-install:1233/4 -p orchestration/xos:1234/2
 ```
 
@@ -87,7 +90,7 @@
 In some cases, you may see a message like this if you install software that
 adds you to a group and you aren't already a member:
 
-```
+```shell
 You are not in the group: libvirtd, please logout/login.
 You are not in the group: docker, please logout/login.
 ```
@@ -95,7 +98,7 @@
 In such cases, please logout and login to the system to gain the proper group
 membership.  Another way to tell if you're in the right groups:
 
-```
+```shell
 ~$ groups
 xos-PG0 root
 ~$ vagrant status
@@ -118,24 +121,24 @@
 ### POD Config
 
 The top level configuration for a build is the *POD config* file, which is a
-YAML file stored in
-[build/podconfig](https://github.com/opencord/cord/tree/{{ book.branch }}/podconfig) that
-contains a list of variables that control how the build proceeds, and can
-override the configuration of the rest of the build.
+YAML file stored in [build/podconfig](https://github.com/opencord/cord/tree/{{
+book.branch }}/podconfig) that contains a list of variables that control how
+the build proceeds, and can override the configuration of the rest of the
+build.
 
 A minimal POD Config file must define two variables:
 
 `cord_scenario` - the name of the *scenario* to use, which is defined in a
 directory under [build/scenarios](https://github.com/opencord/cord/tree/{{
-  book.branch }}/scenarios).
+book.branch }}/scenarios).
 
 `cord_profile` - the name of a *profile* to use, defined as a YAML file in
 [build/platform-install/profile_manifests](https://github.com/opencord/platform-install/tree/{{
-  book.branch }}/profile_manifests).
+book.branch }}/profile_manifests).
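+
+For example, a minimal POD config might contain only the following (a sketch;
+substitute the scenario and profile appropriate for your deployment):
+
+```yaml
+# Minimal POD config: select a scenario and a profile
+cord_scenario: cord
+cord_profile: rcord
+```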
 
-The included POD configs must be named `<profile>-<scenario>.yml`, except
-for the `physical-example.yml` file which is used for a [Physical
-  POD](install_physical.md) and requires a bit more work to configured.
+The included POD configs must be named `<profile>-<scenario>.yml`, except for
+the `physical-example.yml` file which is used for a [Physical
+POD](install_physical.md) and requires a bit more work to configure.
 
 POD configs are used during a build by passing them with the `PODCONFIG`
 variable to `make` - ex: `make PODCONFIG=rcord-virtual.yml config`
diff --git a/docs/install_offline.md b/docs/install_offline.md
index 0813615..2a09e1b 100644
--- a/docs/install_offline.md
+++ b/docs/install_offline.md
@@ -15,26 +15,30 @@
 acting as a mirror using http. If the head node host has enough disk space, it
 can be the mirror repository.
 
-### Creating the Mirror
+## Creating the Mirror
+
 To create the mirror you will need a host running `Ubuntu 14.04 LTS Server`
 that has access, at least temporarily, to the Internet. On this host
 `apt-mirror` and `docker-engine` should be installed. `apt-mirror` is a
 utility used to download the Debian packages and index files for the mirror and
 `docker-engine` is used to front the mirror using an `nginx` container.
 
-#### Installing `apt-mirror`
+### Installing `apt-mirror`
+
 To install `apt-mirror` the following commands should suffice.
 
-```
+```shell
 sudo apt-get update
 sudo apt-get install -y apt-mirror
 ```
 
-#### Installing `docker-engine`
-To install `docker-engine` please follow the directions provided by Docker at
-https://docs.docker.com/engine/installation/linux/ubuntulinux/.
+### Installing `docker-engine`
 
-#### `apt-mirror` Configuration
+To install `docker-engine` please follow the directions provided by Docker at
+[https://docs.docker.com/engine/installation/linux/ubuntulinux/](https://docs.docker.com/engine/installation/linux/ubuntulinux/).
+
+### `apt-mirror` Configuration
+
 `apt-mirror` takes a configuration that downloads the Debian packages and
 indexes to support a mirror. Save the following `apt-mirror` configuration to
 a local file, e.g., `cord-mirror.list`. This will create a mirror for
@@ -48,7 +52,7 @@
 additional repositories required by CORD, but this is an exercise left up to
 the reader.
 
-```
+```shell
 ############# config ##################
 #
 # set base_path    /var/spool/apt-mirror
@@ -86,7 +90,7 @@
 After the `apt-mirror` configuration is ready, you can start the mirroring
 process using the command
 
-```
+```shell
 sudo apt-mirror cord-mirror.list
 ```
 
@@ -99,14 +103,15 @@
 take a nap, enjoy the great outdoors, or simply spend some time on
 Facebook or Netflix.
 
-#### Staring the `HTTP` Archive Server
+## Starting the `HTTP` Archive Server
+
 Before the archive server is started, the `nginx` docker image should be
 downloaded from dockerhub.com. Once this image is downloaded the host should
 no longer require Internet access and thus the install of CORD can be
 completed offline. To download (pull) the `nginx` image use the following
 command:
 
-```
+```shell
 sudo docker pull nginx:1.10
 ```
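+
+To confirm that the image is now available locally, you can, for example, run:
+
+```shell
+sudo docker images nginx
+```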
 
@@ -117,7 +122,7 @@
 `-p`. The command line option `-v` is used to specify the location where
 the mirror files were downloaded, `/var/spool/apt-mirror` by default.
 
-```
+```shell
 sudo docker run \
     --name local-repository \
     --restart unless-stopped \
@@ -126,7 +131,8 @@
     -d nginx:1.10
 ```
 
-### Using the Local Repository
+## Using the Local Repository
+
 To use a local repository to deploy CORD, the POD deployment configuration must
 be modified to point to the mirror or local repository that you will be using.
 This is done by setting the following variables in the POD deployment
@@ -137,7 +143,7 @@
 values is that this is the value that will be used in the *Ansible*
 `apt_repository` task under the parameter `repo`.
 
-```
+```yaml
 seedServer:
   extraVars:
     - ubuntu_apt_repo="deb [arch=amd64] http://10.10.10.10:8888/mirror/archive.ubuntu.com/ubuntu trusty main universe"
@@ -149,3 +155,4 @@
     - dell_apt_repo="deb [arch=amd64] http://10.10.10.10:8888/mirror/linux.dell.com/repo/community trusty openmanage"
     - juju_apt_repo="deb [arch=amd64] http://10.10.10.10:8888/mirror/ppa.launchpad.net/juju/stable/ubuntu trusty main"
 ```
+
diff --git a/docs/install_physical.md b/docs/install_physical.md
index 7df96a9..56e257e 100644
--- a/docs/install_physical.md
+++ b/docs/install_physical.md
@@ -17,13 +17,13 @@
 access devices or any upstream connectivity to the metro network; those details
 are included later in this section.
 
-<img src="images/physical-overview.png" alt="Drawing" style="width: 400px;"/>
+![Physical Network Overview](images/physical-overview.png)
 
 ### Logical Configuration: Data Plane Network
 
 The following diagram is a high level logical representation of a typical CORD POD.
 
-<img src="images/dataplane.png" alt="Drawing" style="width: 700px;"/>
+![Logical Data Plane Network](images/dataplane.png)
 
 The figure shows 40G data plane connections (red), where end-user traffic
 goes from the access devices to the metro network (green). User traffic
@@ -37,7 +37,7 @@
 The following diagram shows in blue how the components of the system are
 connected through the management network.
 
-<img src="images/controlplane.png" alt="Drawing" style="width: 500px;"/>
+![Logical Control Plane Network](images/controlplane.png)
 
 As shown in this figure, the head node is the only server in the POD connected
 both to the Internet and to the other components of the system. The compute nodes
@@ -95,15 +95,15 @@
 * 3x Physical Servers: one to be used as head node, two to be used as compute
   nodes.
 
-   * Suggested Model: OCP-qualified QuantaGrid D51B-1U server. Each server is
-     configured with 2x Intel E5-2630 v4 10C 2.2GHz 85W, 64GB of RAM 2133MHz
-     DDR4, 2x 500GB HDD, and a 40 Gig adapter.
+    * Suggested Model: OCP-qualified QuantaGrid D51B-1U server. Each server is
+      configured with 2x Intel E5-2630 v4 10C 2.2GHz 85W, 64GB of RAM 2133MHz
+      DDR4, 2x 500GB HDD, and a 40 Gig adapter.
 
-   * Strongly Suggested NIC:
-       * Intel Ethernet Converged Network Adapters XL710 10/40 GbE PCIe 3.0, x8
-         Dual port.
-       * ConnectX®-3 EN Single/Dual-Port 10/40/56GbE Adapters w/ PCI Express
-         3.0.
+    * Strongly Suggested NIC:
+        * Intel Ethernet Converged Network Adapters XL710 10/40 GbE PCIe 3.0, x8
+          Dual port.
+        * ConnectX®-3 EN Single/Dual-Port 10/40/56GbE Adapters w/ PCI Express
+          3.0.
 
 > NOTE: while the machines mentioned above are generic standard x86 servers,
 > and can be potentially substituted with any other machine, it’s quite
@@ -113,14 +113,14 @@
 > see [Network Settings](appendix_network_settings.md) for more information.
 
 * 4x Fabric Switches
-     * Suggested Model: OCP-qualified Accton 6712 switch. Each switch is
-       configured with 32x40GE ports; produced by EdgeCore and HP.
+    * Suggested Model: OCP-qualified Accton 6712 switch. Each switch is
+      configured with 32x40GE ports; produced by EdgeCore and HP.
 
 * 7x Fiber Cables with QSFP+ (Intel compatible) or 7 DAC QSFP+ (Intel
   compatible) cables
 
-     * Suggested Model: Robofiber QSFP-40G-03C QSFP+ 40G direct attach passive
-       copper cable, 3m length - S/N: QSFP-40G-03C.
+    * Suggested Model: Robofiber QSFP-40G-03C QSFP+ 40G direct attach passive
+      copper cable, 3m length - S/N: QSFP-40G-03C.
 
 * 1x 1G L2 copper management switch supporting VLANs or 2x 1G L2 copper
   management switches
@@ -147,21 +147,23 @@
 The figure also shows data plane connections in red (as described in the next
 paragraph).
 
-<img src="images/physical-cabling-diagram.png" alt="Drawing" style="width: 800px;"/>
+![Physical Cabling Diagram](images/physical-cabling-diagram.png)
 
 The external and the management networks can be separated either by using two
 different physical switches, or by using a single physical switch with VLANs.
 
-> NOTE: Head node IPMI connectivity is optional.
+#### IPMI Management
 
-> NOTE: IPMI ports do not have to be necessarily connected to the external
-> network. The requirement is that compute node IPMI interfaces need to be
-> reachable from the head node. This is possible also through the internal /
-> management network.
+Head node IPMI connectivity is optional.
 
-> NOTE: Vendors often allow a shared management port to provide IPMI
-> functionalities. One of the NICs used for system management (e.g., eth0) can
-> be shared, to be used at the same time also as IPMI port.
+IPMI ports do not necessarily have to be connected to the external network.
+The requirement is that compute node IPMI interfaces are reachable from the
+head node, which is also possible through the internal / management network.
+
+Vendors often allow a shared management port to provide IPMI functionality.
+One of the NICs used for system management (e.g., eth0) can be shared, so
+that it also acts as an IPMI port.
 
 #### External Network
 
@@ -201,7 +203,7 @@
 (in green), from the access devices to the point the POD connects to the metro
 network.
 
-<img src="images/dataplane.png" alt="Drawing" style="width: 700px;"/>
+![Logical Data Plane Network](images/dataplane.png)
 
 The fabric switches are assembled to form a leaf and spine topology. A typical
 full POD has two leaves and two spines. Currently, this is a pure 40G network.
@@ -263,7 +265,7 @@
 
 #### Create a User with "sudoer" permissions (no password)
 
-```
+```shell
 sudo adduser cord && \
 sudo adduser cord sudo && \
 echo 'cord ALL=(ALL) NOPASSWD:ALL' | sudo tee --append /etc/sudoers.d/90-cloud-init-users
@@ -324,7 +326,7 @@
 Once the POD config YAML file has been created, the composite configuration
 file should be generated with the following command.
 
-```
+```shell
 cd ~/cord/build && \
 make PODCONFIG_PATH={PATH_TO_YOUR_PODCONFIG_FILE.yml} config
 ```
@@ -349,19 +351,19 @@
 
 This step is started with the following command:
 
-```
+```shell
 cd ~/cord/build && \
 make build
 ```
 
-> NOTE: Be patient: this step can take an hour to complete.
+> NOTE: Be patient: this step can take over an hour to complete.
+
+This step is complete when the command successfully runs.
 
 > WARNING: This command sometimes fails for various reasons.  Simply re-running
 > the command often solves the problem. If the command fails it’s better to
 > start from a clean head node.
 
-This step is complete when the command successfully runs.
-
 ### MAAS
 
 As previously mentioned, once the deployment is complete the head node becomes
@@ -384,7 +386,7 @@
 the URL `http://head-node-ip-address/MAAS/images/`, or via the command line
 from the head node with the following command:
 
-```
+```shell
 APIKEY=$(sudo maas-region-admin apikey --user=cord) && \
 maas login cord http://localhost/MAAS/api/1.0 "$APIKEY" && \
 maas cord boot-resources read | jq 'map(select(.type != "Synced"))'
@@ -426,7 +428,7 @@
   should appear here, as soon as they get an IP and are recognized by MaaS. To
   see if your devices have been recognized, use the following command:
 
-```
+```shell
 cord harvest list
 ```
 
@@ -434,7 +436,7 @@
   process that happens soon after the OS has been installed on your devices. To
   see the provisioning status of your devices, use the following command:
 
-```
+```shell
 cord prov list
 ```
 
@@ -454,7 +456,7 @@
 For a given node, the provisioning re-starts automatically if the related entry
 gets manually removed. This can be done with the following command:
 
-```
+```shell
 cord prov delete node_name
 ```
 
@@ -467,14 +469,14 @@
 To help you, a sample file is available: `/etc/dhcp/dhcpd.reservations.sample`.
 For each host to which you want to statically assign an IP, use this syntax:
 
-```
+```shell
 host <name-of-your-choice> {
-	hardware ethernet <host-mac-address>;
-	fixed-address  <desired-ip>;
-	}
+  hardware ethernet <host-mac-address>;
+  fixed-address <desired-ip>;
+}
 ```
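+
+For example, a hypothetical reservation (the name, MAC address, and IP below
+are placeholders only):
+
+```shell
+host fabric-switch-1 {
+  hardware ethernet cc:37:ab:7c:b7:4c;
+  fixed-address 10.6.0.50;
+}
+```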
 
-#### Compute Nodes
+#### Compute Node Provisioning
 
 The compute node provisioning process installs the servers as
 OpenStack compute nodes.
@@ -492,7 +494,7 @@
 
 * From the OpenStack CLI on the head node, using the command
 
-```
+```shell
 source /opt/cord_profile/admin-openrc.sh && \
 nova hypervisor-list
 ```
@@ -511,8 +513,8 @@
 
 After a correct provisioning you should see something similar to:
 
-```
-cord prov list
+```shell
+$ cord prov list
 ID                                         NAME                   MAC                IP          STATUS      MESSAGE
 node-c22534a2-bd0f-11e6-a36d-2c600ce3c239  steel-ghost.cord.lab   2c:60:0c:cb:00:3c  10.6.0.107  Complete
 node-c238ea9c-bd0f-11e6-8206-2c600ce3c239  feline-shirt.cord.lab  2c:60:0c:e3:c4:2e  10.6.0.108  Complete
@@ -536,8 +538,8 @@
 As with the compute nodes, following the harvest process, the provisioning will
 happen.  After a correct provisioning you should see something similar to:
 
-```
-cord prov list
+```shell
+$ cord prov list
 ID                                         NAME                    MAC                IP          STATUS      MESSAGE
 cc:37:ab:7c:b7:4c                          UKN-ABCD                cc:37:ab:7c:b7:4c  10.6.0.23   Complete
 cc:37:ab:7c:ba:58                          UKN-EFGH                cc:37:ab:7c:ba:58  10.6.0.20   Complete
@@ -551,7 +553,7 @@
 Once the post-deployment provisioning on the fabric switches has finished, the
 task is complete.
 
-##Access to CORD Services
+## Access to CORD Services
 
 Your POD is now installed. You can now try to access the basic services as
 described below.
@@ -578,15 +580,15 @@
 
 From the head node CLI
 
-```
-$ sudo lxc list
+```shell
+sudo lxc list
 ```
 
 lists the set of LXC containers running the various OpenStack-related services.
 These containers can be entered as follows:
 
-```
-$ ssh ubuntu@<container-name>
+```shell
+ssh ubuntu@<container-name>
 ```
 
 ### XOS UI
@@ -595,5 +597,7 @@
 define new service and service dependencies. You can access XOS at:
 
 * Using the XOS GUI at `http://<head-node-ip>/xos`
-* The username is `xosadmin@opencord.org` and the auto-generated password can be found in `/opt/credentials/xosadmin@opencord.org` on the head node
+
+* The username is `xosadmin@opencord.org` and the auto-generated password can
+  be found in `/opt/credentials/xosadmin@opencord.org` on the head node
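+
+To read the password from the head node, for example (`sudo` may be needed,
+depending on the permissions of `/opt/credentials`):
+
+```shell
+cat /opt/credentials/xosadmin@opencord.org
+```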
 
diff --git a/docs/install_virtual.md b/docs/install_virtual.md
index 6f06cff..897e71c 100644
--- a/docs/install_virtual.md
+++ b/docs/install_virtual.md
@@ -19,9 +19,9 @@
 ### Target server requirements
 
 * 64-bit AMD64/x86-64 server, with:
-  * 48GB+ RAM
-  * 12+ CPU cores
-  * 200GB+ disk
+    * 48GB+ RAM
+    * 12+ CPU cores
+    * 200GB+ disk
 * Access to the Internet (no enterprise proxies)
 * Ubuntu 14.04.5 LTS freshly installed with updates
 * User account used to install CORD-in-a-Box has password-less *sudo*
@@ -82,7 +82,7 @@
 Once the system has been bootstrapped, run the following `make` commands to
 launch the build:
 
-```
+```shell
 cd ~/cord/build
 make PODCONFIG=rcord-virtual.yml config
 make -j4 build |& tee ~/build.out
@@ -99,11 +99,13 @@
 If the build completed without errors, you can use the following command to run
 basic end-to-end tests:
 
-```
+```shell
 cd ~/cord/build
 make pod-test
 ```
-> NOTE: This test can only be conducted on the `rcord-virtual` profile. Other profile tests are still WIP. 
+
+> NOTE: This test can only be conducted on the `rcord-virtual` profile. Other
+> profile tests are still WIP.
 
 The output of the tests will be displayed, as well as stored in
 `~/cord/build/logs/<iso8601_datetime>_pod-test`.
@@ -115,7 +117,7 @@
 environment variable to `~/cord/build/scenarios/cord` and running `vagrant
 status`:
 
-```
+```shell
 ~$ cd cord/build
 ~/cord/build$ export VAGRANT_CWD=~/cord/build/scenarios/cord
 ~/cord/build$ vagrant status
@@ -140,8 +142,8 @@
 bare-metal provisioning) and the ONOS, XOS, and OpenStack services in
 containers.  This VM can be entered as follows:
 
-```
-$ ssh corddev
+```shell
+ssh corddev
 ```
 
 The CORD source tree is mounted at `/opt/cord` inside this VM.
@@ -152,13 +154,13 @@
 ONOS, and XOS services inside containers.  It also simulates a subscriber
 device using a container.  To enter it, simply type:
 
-```
-$ ssh head1
+```shell
+ssh head1
 ```
 
 Inside the VM, a number of services run in Docker and LXD containers.
 
-```
+```shell
 vagrant@head1:~$ docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}"
 CONTAINER ID        NAMES                                 IMAGE
 84c09b156774        rcord_xos_gui_1                       docker-registry:5000/xosproject/xos-gui:candidate
@@ -199,7 +201,7 @@
 also a Docker image registry, a Maven repository containing the CORD ONOS apps,
 and a number of microservices used in bare-metal provisioning.
 
-```
+```shell
 vagrant@head1:~$ sudo lxc list
 +-------------------------+---------+------------------------------+------+------------+-----------+
 |          NAME           |  STATE  |             IPV4             | IPV6 |    TYPE    | SNAPSHOTS |
@@ -234,16 +236,16 @@
 OpenStack-related services. These containers can be
 entered as follows:
 
-```
-$ ssh ubuntu@<container-name>
+```shell
+ssh ubuntu@<container-name>
 ```
 
 The `testclient` container runs the simulated subscriber device used
 for running simple end-to-end connectivity tests. Its only connectivity is
 to the vSG, but it can be entered using:
 
-```
-$ sudo lxc exec testclient bash
+```shell
+sudo lxc exec testclient bash
 ```
 
 ### compute1 VM
@@ -253,15 +255,15 @@
 node name (assigned by MAAS).  The node name will be something like
 `bony-alley.cord.lab`; in this case, to login you'd run:
 
-```
-$ ssh ubuntu@bony-alley.cord.lab
+```shell
+ssh ubuntu@bony-alley.cord.lab
 ```
 
 Virtual machines created via XOS/OpenStack will be instantiated on this
 compute node.  To login to an OpenStack VM, first get the management IP
 address (172.27.0.x):
 
-```
+```shell
 vagrant@head1:~$ source /opt/cord_profile/admin-openrc.sh
 vagrant@head1:~$ nova list --all-tenants
 +--------------------------------------+-------------------------+--------+------------+-------------+---------------------------------------------------+
@@ -276,7 +278,7 @@
 IP of 172.27.0.2.  Then run `ssh-agent` and add the default key (used to access
 the OpenStack VMs):
 
-```
+```shell
 vagrant@head1:~$ ssh-agent bash
 vagrant@head1:~$ ssh-add
 ```
@@ -285,7 +287,7 @@
 management IP obtained above.  So if the compute node name is
 `bony-alley.cord.lab` and the management IP is 172.27.0.2:
 
-```
+```shell
 vagrant@head1:~$ ssh -A ubuntu@bony-alley.cord.lab
 ubuntu@bony-alley:~$ ssh ubuntu@172.27.0.2
 
@@ -293,7 +295,6 @@
 ubuntu@mysite-vsg-1:~$
 ```
 
-
 ### MAAS GUI
 
 You can access the MAAS (Metal-as-a-Service) GUI by pointing your browser to
@@ -345,18 +346,18 @@
 
 This tests the E2E connectivity of the POD by performing the following steps:
 
- * Sets up a sample CORD subscriber in XOS
- * Launches a vSG for that subscriber on the CORD POD
- * Creates a test client, corresponding to a device in the subscriber's
-   household
- * Connects the test client to the vSG using a simulated OLT
- * Runs `ping` in the client to a public IP address in the Internet
+* Sets up a sample CORD subscriber in XOS
+* Launches a vSG for that subscriber on the CORD POD
+* Creates a test client, corresponding to a device in the subscriber's
+  household
+* Connects the test client to the vSG using a simulated OLT
+* Runs `ping` in the client to a public IP address in the Internet
 
 Success means that traffic is flowing between the subscriber household and the
 Internet via the vSG.  If it succeeded, you should see some lines like these in
 the output:
 
-```
+```shell
 TASK [test-vsg : Output from ping test] ****************************************
 Thursday 27 October 2016  15:29:17 +0000 (0:00:03.144)       0:19:21.336 ******
 ok: [10.100.198.201] => {
@@ -380,18 +381,18 @@
 the *exampleservice* is to demonstrate how new subscriber-facing services can
 be easily deployed to a CORD POD. This test performs the following steps:
 
- * On-boards *exampleservice* into the CORD POD
- * Creates an *exampleservice* tenant, which causes a VM to be created and
-   Apache to be loaded and configured inside
- * Runs a `curl` from the subscriber test client, through the vSG, to the
-   Apache server.
+* On-boards *exampleservice* into the CORD POD
+* Creates an *exampleservice* tenant, which causes a VM to be created and
+  Apache to be loaded and configured inside
+* Runs a `curl` from the subscriber test client, through the vSG, to the Apache
+  server.
 
 Success means that the Apache server launched by the *exampleservice* tenant is
 fully configured and is reachable from the subscriber client via the vSG.  If
 it succeeded, you should see the following lines near the end the `make
 pod-test` output:
 
-```
+```shell
 TASK [test-exampleservice : Output from curl test] *****************************
 Thursday 27 October 2016  15:34:40 +0000 (0:00:01.116)       0:24:44.732 ******
 ok: [10.100.198.201] => {
@@ -418,28 +419,28 @@
 For more information about how the build works, see [Troubleshooting and Build
 Internals](troubleshooting.md).
 
-#### Failed: TASK \[maas-provision : Wait for node to become ready\]
+### Failed: TASK \[maas-provision : Wait for node to become ready\]
 
 This issue occurs when the virtual compute node is not automatically enrolled
 in MAAS.  It may be useful to attach to the console of the compute node to
 see if there are any messages displayed.
 
- * Create an SSH tunnel that forwards port 5902 from the local machine to the
-CIAB server: `ssh -L 5902:localhost:5902 <ciab-server>`
+* Create an SSH tunnel that forwards port 5902 from the local machine to the
+  CIAB server: `ssh -L 5902:localhost:5902 <ciab-server>`
 
- * Connect a VNC client (e.g., [VNC Viewer](https://www.realvnc.com/en/connect/download/viewer/))
-to `localhost:5902` on the local machine.  There is
-no password.
+* Connect a VNC client (e.g., [VNC
+  Viewer](https://www.realvnc.com/en/connect/download/viewer/)) to
+  `localhost:5902` on the local machine.  There is no password.
 
 If you see a stack trace or error message, please post it to the CORD Slack channel.
 
-#### Failed: TASK \[maas-provision : Wait for node to be fully provisioned\]
+### Failed: TASK \[maas-provision : Wait for node to be fully provisioned\]
 
 This means that the node has enlisted in MAAS but something has gone wrong with
 the provisioning process.  In the `head1` VM look in `/etc/maas/ansible/logs/node-<id>.log`
 for the step that has failed.  Post it to the CORD Slack channel to get help.
 
-## Congratulations!
+## Congratulations
 
 If you got this far, you successfully built, deployed, and tested your first
 (virtual) CORD POD.
diff --git a/docs/mdlstyle.rb b/docs/mdlstyle.rb
new file mode 100644
index 0000000..ce1c0a6
--- /dev/null
+++ b/docs/mdlstyle.rb
@@ -0,0 +1,12 @@
+# use all rules
+all
+
+# Indent lists with 4 spaces
+rule 'MD007', :indent => 4
+
+# Don't enforce line length limitations within code blocks and tables
+rule 'MD013', :code_blocks => false, :tables => false
+
+# Numbered lists should have the correct order
+rule 'MD029', :style => "ordered"
+
diff --git a/docs/operate/elk_stack.md b/docs/operate/elk_stack.md
index cc0c4fe..9cc9710 100644
--- a/docs/operate/elk_stack.md
+++ b/docs/operate/elk_stack.md
@@ -1,17 +1,17 @@
 # ELK Stack Logs
 
-CORD uses ELK Stack for logging information at all levels. CORD’s
-ELK Stack logger collects information from several components,
-including the XOS Core, API, and various Synchronizers. On a running
-POD, the logs can be accessed at `http://<head-node>:8080/app/kibana`.
+CORD uses ELK Stack for logging information at all levels. CORD’s ELK Stack
+logger collects information from several components, including the XOS Core,
+API, and various Synchronizers. On a running POD, the logs can be accessed at
+`http://<head-node>:8080/app/kibana`.
 
 There is also a second way of accessing low-level logs with additional
 verbosity that do not make it into ELK Stack. This involves accessing log
-messages in various containers directly. You may do so by running the
-following command on the head node.
+messages in various containers directly. You may do so by running the following
+command on the head node.
 
-```
-$ docker logs < container-name
+```shell
+docker logs <container-name>
 ```
 
 For most purposes, the logs in ELK Stack should contain enough information
@@ -19,32 +19,34 @@
 multiple components by using the identifiers of XOS data model objects.
 
 > Important!
-> 
-> Before you can start using ELK stack, you must initialize its index. 
-> To do so:
-> 
+>
+> Before you can start using ELK stack, you must initialize its index.  To do
+> so:
+>
 > 1) Replace `logstash-*` with `*` in the text box marked "Index pattern."
-> 
+>
 > 2) Pick `@timestamp` as the "Time Filter Field Name."
-> 
-> Configuring the default logstash- index pattern will lead to HTTP errors in your browser. If you did this by accident, then delete it under Management -> Index Patterns, and create another pattern as described above.
+>
+> Configuring the default `logstash-*` index pattern will lead to HTTP errors
+> in your browser. If you did this by accident, then delete it under Management
+> -> Index Patterns, and create another pattern as described above.
 
 More information about using
 [Kibana](https://www.elastic.co/guide/en/kibana/current/getting-started.html)
-to access ELK Stack logs is available elsewhere, but to illustrate how the logging
-system is used in CORD, consider the following example quieries.
+to access ELK Stack logs is available elsewhere, but to illustrate how the
+logging system is used in CORD, consider the following example queries.
 
 The first example query lists log messages in the implementation of a
 particular service synchronizer, in a given time range:
 
-```
+```sql
 +synchronizer_name:vtr-synchronizer AND +@timestamp:[now-1h TO now]
 ```
 
 A second query gets log messages that are linked to the _Network_ data model
 across all services:
 
-```
+```sql
 +model_name: Network
 ```
 
@@ -52,7 +54,7 @@
 _Network_ object in question. You can obtain the object id from the object’s
 page in the XOS GUI.
 
-```
+```sql
 +model_name: Network AND +pk:7
 ```
 
@@ -60,9 +62,7 @@
 contain Python exceptions, and will usually correspond to anomalous
 execution:
 
-```
+```sql
 +synchronizer_name: vtr-synchronizer AND +exception
 ```
 
-
-
diff --git a/docs/operate/power_up.md b/docs/operate/power_up.md
index 06596aa..7afd590 100644
--- a/docs/operate/power_up.md
+++ b/docs/operate/power_up.md
@@ -1,63 +1,69 @@
 # Powering Up a POD
 
-This guide describes how to power up a previously installed CORD POD that
-has been powered down (cleanly or otherwise). The end goal of the power up
+This guide describes how to power up a previously installed CORD POD that has
+been powered down (cleanly or otherwise). The end goal of the power up
 procedure is a fully functioning CORD POD.
 
 ## Boot the Head Node
 
 * **Physical  POD:** Power on the head node
 * **CiaB:** Bring up the head1 VM:
-```
-$ cd ~/cord/build; VAGRANT_CWD=~/cord/build/scenarios/cord vagrant up head1 --provider libvirt
+
+```shell
+cd ~/cord/build; VAGRANT_CWD=~/cord/build/scenarios/cord vagrant up head1 --provider libvirt
 ```
 
 ## Check the Head Node Services
 
 1. Verify that `mgmtbr` and `fabric` interfaces are up and have IP addresses
 2. Verify that MAAS UI is running and accessible:
-  * **Physical POD:** `http://<head-node>/MAAS`
-  * **CiaB:** `http://<ciab-server>:8080/MAAS`
-> **Troubleshooting: MAAS UI not available on CiaB.**
-> If you are running a CiaB and there is no webserver on port 8080, it might
-> be necessary to refresh port forwarding to the prod VM.
-> Run `ps ax|grep 8080`
-> and look for an SSH command (will look something like this):
-```
+    * **Physical POD:** `http://<head-node>/MAAS`
+    * **CiaB:** `http://<ciab-server>:8080/MAAS`
+
+> **Troubleshooting: MAAS UI not available on CiaB.** If you are running a CiaB
+> and there is no webserver on port 8080, it might be necessary to refresh port
+> forwarding to the prod VM.  Run `ps ax|grep 8080` and look for an SSH command
+> (will look something like this):
+
+```shell
 31353 pts/5    S      0:00 ssh -o User=vagrant -o Port=22 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ForwardX11=no -o IdentityFile="/users/acb/cord/build/scenarios/cord/.vagrant/machines/head1/libvirt/private_key" -L *:8080:192.168.121.14:80 -N 192.168.121.14
 ```
+
 > A workaround is to kill this process, and then copy and paste the command
-> above into another window on the CiaB server to set up a new SSH port forwarding connection.
+> above into another window on the CiaB server to set up a new SSH port
+> forwarding connection.
 
-3. Verify that the following Docker containers are running: mavenrepo, switchq, automation, provisioner, generator, harvester, storage, allocator, registry
+3. Verify that the following Docker containers are running: mavenrepo, switchq,
+   automation, provisioner, generator, harvester, storage, allocator, registry
+   (a quick check is sketched after this list)
 
-4. Use `sudo lxc list` to ensure that juju lxc containers are running. If any are stopped, use `sudo lxc start <name>` to restart them.
+4. Use `sudo lxc list` to ensure that juju lxc containers are running. If any
+   are stopped, use `sudo lxc start <name>` to restart them.
 
 5. Run: `source /opt/cord_profile/admin-openrc.sh`
 
 6. Verify that the following OpenStack commands work:
-  * `$ keystone user-list`
-  * `$ nova list --all-tenants`
-  * `$ neutron net-list`
-> **Troubleshooting: OpenStack commands give SSL error.**
-> Sometimes Keystone starts up in a strange state and OpenStack
-> commands will fail with various SSL errors.
-> To fix this, it is often sufficient to run:
-`ssh ubuntu@keystone sudo service apache2 restart`
+    * `$ keystone user-list`
+    * `$ nova list --all-tenants`
+    * `$ neutron net-list`
 
+> **Troubleshooting: OpenStack commands give SSL error.** Sometimes Keystone
+> starts up in a strange state and OpenStack commands will fail with various
+> SSL errors.  To fix this, it is often sufficient to run: `ssh ubuntu@keystone
+> sudo service apache2 restart`
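+
+As a quick check for step 3 above, you can, for example, list the names of the
+running containers and compare against that list (a sketch):
+
+```shell
+sudo docker ps --format '{{.Names}}'
+```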
 
 ## Power on Leaf and Spine Switches
 
-* **Physical POD:** power on the switches.  
-* **CiaB:** nothing to do; CiaB using OvS switches that come up when the CiaB server is booted.
+* **Physical POD:** power on the switches.
+* **CiaB:** nothing to do; CiaB uses OvS switches that come up when the CiaB
+  server is booted.
 
 ## Check the Switches
 
 * **Physical POD:** On the head node:
-1. Get switch IPs by running: cord prov list
-2. Verify that ping works for all switch IPs 
-
-* **CiaB:** run `sudo ovs-vsctl show` on the CiaB server; you should see `leaf1` and `spine1`.
+    1. Get switch IPs by running: `cord prov list`
+    2. Verify that ping works for all switch IPs
+* **CiaB:** run `sudo ovs-vsctl show` on the CiaB server; you should see
+  `leaf1` and `spine1`.
 
 ## Boot the Compute Nodes
 
@@ -71,8 +77,11 @@
 1. Login to the head node
 2. Run: `source /opt/cord_profile/admin-openrc.sh`
 3. Verify that `nova service-list` shows the compute node as “up”.
+
 > It may take a few minutes until the node's status is updated in Nova.
-4. Verify that you can log into the compute nodes from the head node as the ubuntu user
+
+4. Verify that you can log into the compute nodes from the head node as the
+   ubuntu user
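+
+For step 4, for example (the compute node name is assigned by MAAS and will
+differ on your POD):
+
+```shell
+ssh ubuntu@<compute-node-name>
+```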
 
 ## Check XOS
 
@@ -81,10 +90,11 @@
 * **Physical POD:** `http://<head-node>/xos`
 * **CiaB:** `http://<ciab-server>:8080/xos`
 
-If it's not working, try restarting XOS (replace `rcord` with the name of your profile):
+If it's not working, try restarting XOS (replace `rcord` with the name of your
+profile):
 
-```
-$ cd /opt/cord_profile; docker-compose -p rcord restart
+```shell
+cd /opt/cord_profile; docker-compose -p rcord restart
 ```
 
 ## Check VTN
@@ -94,29 +104,33 @@
 1. Run `onos> cordvtn-nodes`
 2. Make sure the compute nodes have COMPLETE status.
 3. Prior to rebooting existing OpenStack VMs:
-  * Run `onos> cordvtn-ports`
-  * Make sure some ports show up
-  * If not, try this:
-    - `onos> cordvtn-sync-neutron-states <keystone-url> admin admin <password>`
-    - `onos> cordvtn-sync-xos-states <xos-url> xosadmin@opencord.org <password>`
+    * Run `onos> cordvtn-ports`
+    * Make sure some ports show up
+    * If not, try this:
+        * `onos> cordvtn-sync-neutron-states <keystone-url> admin admin <password>`
+        * `onos> cordvtn-sync-xos-states <xos-url> xosadmin@opencord.org <password>`
 
-##Boot OpenStack VMs
+## Boot OpenStack VMs
 
 To bring up OpenStack VMs that were running before the POD was shut down:
 
 1. Run `source /opt/cord_profile/admin-openrc.sh`
 2. Get list of VM IDs: `nova list --all-tenants`
 3. For each VM:
-  * `$ nova start <vm-id>`
-  * `$ nova console-log <vm-id>`
-  * Inspect the console log to make sure that the network interfaces get IP addresses.
+    * `$ nova start <vm-id>`
+    * `$ nova console-log <vm-id>`
+    * Inspect the console log to make sure that the network interfaces get IP
+      addresses.
 
 To restart a vSG inside the vSG VM:
 
 1. SSH to the vSG VM
 2. Run: `sudo rm /root/network_is_setup`
 3. Save the vSG Tenant in the XOS UI
-4. Once the synchronizer has re-run, make sure you can ping 8.8.8.8 from inside the vSG container
-```
+4. Once the synchronizer has re-run, make sure you can ping 8.8.8.8 from inside
+   the vSG container
+
+```shell
 sudo docker exec -ti vcpe-222-111 ping 8.8.8.8
 ```
+
diff --git a/docs/quickstarts.md b/docs/quickstarts.md
index 6efb850..bc3bdae 100644
--- a/docs/quickstarts.md
+++ b/docs/quickstarts.md
@@ -11,20 +11,24 @@
 Virtual pod, you can use the
 [cord-bootstrap.sh](install.html#cord-bootstrapsh-script) script:
 
-<pre><code>
-curl -o ~/cord-bootstrap.sh https://raw.githubusercontent.com/opencord/cord/{{ book.branch }}/scripts/cord-bootstrap.sh
+```shell
+curl -o ~/cord-bootstrap.sh https://raw.githubusercontent.com/opencord/cord/master/scripts/cord-bootstrap.sh
 chmod +x cord-bootstrap.sh
 ./cord-bootstrap.sh -v
-</code></pre>
+```
+
+> NOTE: Change the `master` path component in the URL to your desired version
+> branch (ex: `cord-5.0`) if required.
 
 ## Virtual POD (CORD-in-a-Box)
 
-This is a summary of [Installing a Virtual Pod (CORD-in-a-Box)](install_virtual.md).
+This is a summary of [Installing a Virtual Pod
+(CORD-in-a-Box)](install_virtual.md).
 
 To install a CiaB, on a [suitable](#target-server-requirements) Ubuntu 14.04
 system, run the following commands:
 
-```
+```shell
 cd ~/cord/build && \
 make PODCONFIG=rcord-virtual.yml config && \
 make -j4 build |& tee ~/build.out && \
@@ -39,7 +43,7 @@
 option on the `cord-bootstrap.sh` script to download source, then run all the
 make targets involved in doing a build and test cycle:
 
-```
+```shell
 ./cord-bootstrap.sh -v -t "PODCONFIG=rcord-virtual.yml config" -t "build" -t "pod-test"
 ```
 
@@ -52,7 +56,7 @@
 14.04 on a [suitable head node](install_physical.md#detailed-requirements). On
 the target head node, add a `cord` user with `sudo` rights:
 
-```
+```shell
 sudo adduser cord && \
 sudo usermod -a -G sudo cord && \
 echo 'cord ALL=(ALL) NOPASSWD:ALL' | sudo tee --append /etc/sudoers.d/90-cloud-init-users
@@ -61,7 +65,7 @@
 [Create a POD configuration](install.md#pod-config) file in the
 `~/cord/build/podconfig` directory, then run:
 
-```
+```shell
 cd ~/cord/build && \
 make PODCONFIG={YOUR_PODCONFIG_FILE.yml} config && \
 make -j4 build |& tee ~/build.out
diff --git a/docs/release-notes/sd-dot-one.md b/docs/release-notes/sd-dot-one.md
index 86fdbe6..17c5bf9 100644
--- a/docs/release-notes/sd-dot-one.md
+++ b/docs/release-notes/sd-dot-one.md
@@ -9,41 +9,45 @@
 
 The 4.1 release of E-CORD includes the following functionality:
 
-* A hierarchical multi-pod orchestration deployment managed by XOS
-and ONOS. Each of the local PODs registers with the global node, that in
-turn asks for functionality and network/service setup.
+* A hierarchical multi-pod orchestration deployment managed by XOS and ONOS.
+  Each of the local PODs registers with the global node, which in turn asks for
+  functionality and network/service setup.
 
 * Through a hierarchical architecture, an enterprise customer is able to
-provision an end-to-end *E-Line* carrier ethernet service (L2VPN).
+  provision an end-to-end *E-Line* Carrier Ethernet service (L2VPN).
 
-* L2 Monitoring through OAM and CFM  can be provisioned for the
-E-Line, with details such as delay and jitter observed in the
-local POD where the E-Line starts.
+* L2 Monitoring through OAM and CFM can be provisioned for the E-Line, with
+  details such as delay and jitter observed in the local POD where the E-Line
+  starts.
 
-* Each local POD deploys an R-CORD like service chain to connect to
-the Public Internet. This connectivity configuration is currently
-not automated.
+* Each local POD deploys an R-CORD-like service chain to connect to the Public
+  Internet. This connectivity configuration is currently not automated.
 
-* Documentation includes 
-
-  * An overview of E-CORD.
-  * How to install a full E-CORD, including two local PODs and a global node.
-  * A CFM setup guide.
-  * A Developer guide to manipulate and develop on top of E-CORD.
-  * A Troubleshooting guide with reset scripts.
+* Documentation includes
+    * An overview of E-CORD.
+    * How to install a full E-CORD, including two local PODs and a global node.
+    * A CFM setup guide.
+    * A Developer guide to manipulate and develop on top of E-CORD.
+    * A Troubleshooting guide with reset scripts.
 
 ## M-CORD Profile
 
 The 4.1 release of M-CORD supports 3GPP LTE connectivity, including:
 
-* A CUPS compliant open source SPGW (in the form of two VNFs: SPGW-u and SPGW-c).
+* A CUPS compliant open source SPGW (in the form of two VNFs: SPGW-u and
+  SPGW-c).
 * An MME emulator with an integrated HSS, eNBs and UEs.
-  * Emulates one eNB
-  * Emulates up to ten UEs
-  * Includes one MME with an integrated HSS
-  * Can connect one SPGW-C/U
-  * Allows users to attach, detach and send user plane traffic
+    * Emulates one eNB
+    * Emulates up to ten UEs
+    * Includes one MME with an integrated HSS
+    * Can connect one SPGW-C/U
+    * Allows users to attach, detach and send user plane traffic
 * An application that emulates connectivity to the Internet.
 * A composite EPC-as-a-Service that sets up and manages the other services.
 
-The MME emulator is not open source. It comes as a binary, courtesy of ng4T, who provides free trial licenses for limited use. Users will need to apply for a free M-CORD license with ng4T, as described more fully in the Guide. Full versions of the emulator can also be licensed by contacting [ng4t](http://www.ng4t.com) directly.
+The MME emulator is not open source. It comes as a binary, courtesy of ng4T,
+who provides free trial licenses for limited use. Users will need to apply for
+a free M-CORD license with ng4T, as described more fully in the Guide. Full
+versions of the emulator can also be licensed by contacting
+[ng4t](http://www.ng4t.com) directly.
+
diff --git a/docs/release-notes/sd-jira.md b/docs/release-notes/sd-jira.md
index 1ea4f26..a205f52 100644
--- a/docs/release-notes/sd-jira.md
+++ b/docs/release-notes/sd-jira.md
@@ -1,255 +1,265 @@
-### Shared-Delusion: Jira
+# Shared-Delusion: Jira
 
 The following are Epics and Stories culled from Jira.
 
-#### Epic: New Model Policy Framework 
+## Epic: New Model Policy Framework
 
-* Document xproto policies 
-* Debugging and end-to-end testing 
-* Port old security policies to new framework 
-* Implement security enforcements at the API boundary 
-* End-to-end test validation 
-* Design new model policy framework (python part) 
-* Design new model policy framework (xproto part) 
+* Document xproto policies
+* Debugging and end-to-end testing
+* Port old security policies to new framework
+* Implement security enforcements at the API boundary
+* End-to-end test validation
+* Design new model policy framework (python part)
+* Design new model policy framework (xproto part)
 
-#### Epic: Data Model Cleanup 
+## Epic: Data Model Cleanup
 
-* Eliminate Many-To-Many between Image/Flavor and deployment 
-* Cherry-pick and E2E test attic-less development for CORD-3.0 
-* Document attic-less service development for 3.0 + Write helper target 
-* Surgical cleanup for 3.0.1 to make up for not merging data model cleanup 
-* Support packaging and namespacing for models 
-* Autogenerate security policy enforcements 
-* Modify xproto parser to support policies 
-* xproto extension to specify security policies alongside data model definitions 
-* Implement generalized privileges 
-* Design security policies 
-* Port existing code to data validation strategy 
-* Autogenerate data validation code when models are saved 
-* Support operations on field sets and generate unique-together 
-* Merge new API generator + Modeldefs generator 
-* Collapse core models into a single file 
-* Convert xproto howto Google Doc into Markdown/gitbooks 
-* Generate plcorebase and rename it to XOSBase 
-* Update service onboarding Tutorial with documentation on xproto 
-* Make it possible to fetch properties of parent models into child models 
-* Extend IR to support implicit, reverse links 
-* Recreate modeldefs API via xproto 
-* Modify core api content_type_ip and mappings to use string 
-* Support end-to-end singularization and pluralization 
-* Support multiple inputs in xosgen 
-* Implement pure-protobuf support for policy extension in xproto 
-* Implement policy extension in xproto 
-* Implement new credentials system 
-* Generate unicode function from xproto 
-* Delete old generative toolchain 
-* Generate Synchronizer configuration from xproto 
-* Unit test framework for client-side ORM 
-* Eliminate dead code in core attic 
-* Remove obsolete models and fields; rename models and fields 
+* Eliminate Many-To-Many between Image/Flavor and deployment
+* Cherry-pick and E2E test attic-less development for CORD-3.0
+* Document attic-less service development for 3.0 + Write helper target
+* Surgical cleanup for 3.0.1 to make up for not merging data model cleanup
+* Support packaging and namespacing for models
+* Autogenerate security policy enforcements
+* Modify xproto parser to support policies
+* xproto extension to specify security policies alongside data model
+  definitions
+* Implement generalized privileges
+* Design security policies
+* Port existing code to data validation strategy
+* Autogenerate data validation code when models are saved
+* Support operations on field sets and generate unique-together
+* Merge new API generator + Modeldefs generator
+* Collapse core models into a single file
+* Convert xproto howto Google Doc into Markdown/gitbooks
+* Generate plcorebase and rename it to XOSBase
+* Update service onboarding Tutorial with documentation on xproto
+* Make it possible to fetch properties of parent models into child models
+* Extend IR to support implicit, reverse links
+* Recreate modeldefs API via xproto
+* Modify core api content_type_ip and mappings to use string
+* Support end-to-end singularization and pluralization
+* Support multiple inputs in xosgen
+* Implement pure-protobuf support for policy extension in xproto
+* Implement policy extension in xproto
+* Implement new credentials system
+* Generate unicode function from xproto
+* Delete old generative toolchain
+* Generate Synchronizer configuration from xproto
+* Unit test framework for client-side ORM
+* Eliminate dead code in core attic
+* Remove obsolete models and fields; rename models and fields
 
-#### Epic: Remove Hard-coded Service Dependencies 
+## Epic: Remove Hard-coded Service Dependencies
 
-* Eliminate hardcoded dependencies (VTN) 
+* Eliminate hardcoded dependencies (VTN)
 
-#### Epic Rewrite TOSCA Engine 
+## Epic: Rewrite TOSCA Engine
 
-* Implement the new TOSCA Engine 
+* Implement the new TOSCA Engine
 
-#### Epic: Service/Tenancy Models 
+## Epic: Service/Tenancy Models
 
-* Clean up pass over ExampleService Models 
-* Replace "kind" properties removed in Service Model refactoring 
-* Port existing R-CORD services to new Service Models 
-* Implement ServiceInstance, ServiceInterface, InterfaceType, and ServiceInterfaceLink models 
-* Implement ServiceDependency Models 
+* Clean up pass over ExampleService Models
+* Replace "kind" properties removed in Service Model refactoring
+* Port existing R-CORD services to new Service Models
+* Implement ServiceInstance, ServiceInterface, InterfaceType, and
+  ServiceInterfaceLink models
+* Implement ServiceDependency Models
 
-#### Epic: Categorize, Unify, and Refactor Synchronizers 
+## Epic: Categorize, Unify, and Refactor Synchronizers
 
-* Better diagnostics for the synchronizer 
-* Refactor synchronizer event loop 
+* Better diagnostics for the synchronizer
+* Refactor synchronizer event loop
 
-#### Epic: Eliminate xos-ui Container 
+## Epic: Eliminate xos-ui Container
 
 * Create a "Debug" tab that can be enabled to show hidden fields
-* Create a method to seed basic models to bootstrap the system 
-* Get suggestions for fields that can be populated by a list of values. Eg: isolation can be only container or vm 
-* Add the ability in xProto to hide fields/models from the GUI 
-* Remove use of handcrafted XOS APIs in VTN 
-* Add API to VTN model to signal that VTN app wants a resync 
-* Inline navigation for related models 
+* Create a method to seed basic models to bootstrap the system
+* Get suggestions for fields that can be populated by a list of values. Eg:
+  isolation can be only container or vm
+* Add the ability in xProto to hide fields/models from the GUI
+* Remove use of handcrafted XOS APIs in VTN
+* Add API to VTN model to signal that VTN app wants a resync
+* Inline navigation for related models
 
-#### Epic: XOS Configuration Management 
+## Epic: XOS Configuration Management
 
-* Config Module - Implementation and Unit Tests 
-* Config Module Design 
+* Config Module - Implementation and Unit Tests
+* Config Module Design
 
-#### Epic: Deployments and Build Automation 
+## Epic: Deployments and Build Automation
 
-* Refactor installation guide on the wiki 
-* Automate fabric configuration in ONOS and load POD configuartion files in pod-configs repo 
-* Automate switch software installation 
-* Refactor Jenkinsfile and automated build process to use yaml configuration file instead of Jenkins variables 
-* Refactor Jenkins file putting parameters and methods for existing commonly used functions 
+* Refactor installation guide on the wiki
+* Automate fabric configuration in ONOS and load POD configuration files in
+  pod-configs repo
+* Automate switch software installation
+* Refactor Jenkinsfile and automated build process to use yaml configuration
+  file instead of Jenkins variables
+* Refactor Jenkins file putting parameters and methods for existing commonly
+  used functions
 
-#### Epic: Refactor Build/Deploy 
+## Epic: Refactor Build/Deploy
 
-* Add "mock" targets to master Makefile 
-* Update Ansible and docker-compose versions 
-* PI Role Cleanup 
-* Volume mount certificate instead of baking into container 
+* Add "mock" targets to master Makefile
+* Update Ansible and docker-compose versions
+* PI Role Cleanup
+* Volume mount certificate instead of baking into container
 
-#### Epic: Uniform Development Environments 
+## Epic: Uniform Development Environments
 
-* Parallel build support for CiaB 
-* Add Jenkins job for new CiaB build 
-* Separate "build" and "head" nodes in POD build 
-* "node" Docker image on headnode pulled from Docker Hub 
-* "single" pod scenario, mock w/ synchronizers 
-* Bootstrap mavenrepo container using platform-install 
-* Clean up Vagrantfile 
-* CiaB: "prod" VM has minimal dependencies installed 
-* Docker image tagging for deployment 
-* Evaluate imagebuilder.py 
-* Determine where to inventory which container images are built 
-* Add configure targets that generate credentials for each build layer 
-* Add make targets for XOS profile development 
-* Add make targets to install versioned POD or CiaB 
-* Build and push tagged images to Docker Hub 
-* Add CiaB targets to master Makefile 
-* Build ONOS apps in a way that does not require installing Java / Gradle in the dev-env 
-* Unify bootstapping dev-env 
+* Parallel build support for CiaB
+* Add Jenkins job for new CiaB build
+* Separate "build" and "head" nodes in POD build
+* "node" Docker image on headnode pulled from Docker Hub
+* "single" pod scenario, mock w/ synchronizers
+* Bootstrap mavenrepo container using platform-install
+* Clean up Vagrantfile
+* CiaB: "prod" VM has minimal dependencies installed
+* Docker image tagging for deployment
+* Evaluate imagebuilder.py
+* Determine where to inventory which container images are built
+* Add configure targets that generate credentials for each build layer
+* Add make targets for XOS profile development
+* Add make targets to install versioned POD or CiaB
+* Build and push tagged images to Docker Hub
+* Add CiaB targets to master Makefile
+* Build ONOS apps in a way that does not require installing Java / Gradle in
+  the dev-env
+* Unify bootstrapping dev-env
 * Write build environment design document
 * Investigate build environment design considerations
-* Write build environment design document 
-* Investigate build environment design considerations
 
-#### Epic: Expand QA Coverage 
+## Epic: Expand QA Coverage
 
-* Intel CORD manual installation/debugging 
-* Extensive gRPC api tests 
-* Expand API tests to cover combinations of all parameters 
-* Integrate DHCP relay in cord-tester 
-* Implementing DHCP relay test cases in cord-tester for voltha 
-* DHCP relay setup in voltha 
-* Implementation of igmp-proxy test scenarios in VOLTHA context 
-* Scale module development in cord-tester and validating test scenarios on physical pod 
-* Realign XOS based tests to run on jenkins job 
-* Security concern of automatically triggering ng40 test on bare metal as root 
-* Update QA environments to use new make-based build 
-* Add Sanity Tests to physical POD 
-* gRPC API : Generate grpc unit-tests using xosgenx 
-* Develop test plan for scale tests 
-* Develop scale tests for subscribers. 
+* Intel CORD manual installation/debugging
+* Extensive gRPC api tests
+* Expand API tests to cover combinations of all parameters
+* Integrate DHCP relay in cord-tester
+* Implementing DHCP relay test cases in cord-tester for voltha
+* DHCP relay setup in voltha
+* Implementation of igmp-proxy test scenarios in VOLTHA context
+* Scale module development in cord-tester and validating test scenarios on
+  physical pod
+* Realign XOS based tests to run on jenkins job
+* Security concern of automatically triggering ng40 test on bare metal as root
+* Update QA environments to use new make-based build
+* Add Sanity Tests to physical POD
+* gRPC API : Generate grpc unit-tests using xosgenx
+* Develop test plan for scale tests
+* Develop scale tests for subscribers.
 * Develop and validate scale tests for vrouter, igmp
 * Develop and validate scale test cases for vsg, vcpe
-* Add automation script for dpdk pktgen in cord tester 
-* Integrate cord-tester framework to test scale 
-* 3.0.1 Release Tests: Run all existing tests and document verification 
-* Prepare jenkins job environment for ng-core 
-* Implementing Test scenarios in vSG and Exampleservice modules in cord-tester 
-* Implementing test cases in cord-tester to test voltha using ponsim olt and onu, verifying tls, dhcp and igmp flow on it.
-* gRPC API tests: Phase-1 - Framework Analysis 
-* Dynamic Input file generation for tests 
-* New tests to validate Images/services dynamically based on the profile loaded 
-* Chameleon REST API testing 
+* Add automation script for dpdk pktgen in cord tester
+* Integrate cord-tester framework to test scale
+* 3.0.1 Release Tests: Run all existing tests and document verification
+* Prepare jenkins job environment for ng-core
+* Implementing Test scenarios in vSG and Exampleservice modules in cord-tester
+* Implementing test cases in cord-tester to test voltha using ponsim olt and
+  onu, verifying tls, dhcp and igmp flow on it.
+* gRPC API tests: Phase-1 - Framework Analysis
+* Dynamic Input file generation for tests
+* New tests to validate Images/services dynamically based on the profile loaded
+* Chameleon REST API testing
 * Add functional tests (vSG, VTN) for POD
-* Integrate chameleon API tests into jenkins job 
+* Integrate chameleon API tests into jenkins job
 
-#### Epic: Unit Testing Framework 
+## Epic: Unit Testing Framework
 
-* Set up node for Sonarqube 
-* Identify unit test frameworks for all relevant components 
+* Set up node for SonarQube
+* Identify unit test frameworks for all relevant components
 
-#### Epic: Fabric Features & Improvements 
+## Epic: Fabric Features & Improvements
 
-* Upgrade to OFDPA 3.0 EA4  
-* Update fabric synchronizer to push routes instead of hosts  
-* Support enable/disable ports on STANDBY nodes  
-* DHCP server HA supported by DHCP relay app  
-* DHCPv6 option de/serializers 
-* DHCP Relay Manager 
-* DHCP Relay Store 
-* Create DHCPv6 serializer and deserializer 
-* Add keepalive messages to FPM protocol 
-* Dual-home host failover 
-* Extend Network Cofnig Host Provider to support multihoming 
-* Update host location when port goes down 
-* Extend HostStore to track multiple locations 
-* Add keepalives for FPM connections 
-* Improve hash-groups scaling and routing logic to prepare for dual homing 
-* VLAN support for DHCP Relay and HostLocationProvider 
-* CLI for DHCP relay manager 
-* Test multilink support on CORD pod 
+* Upgrade to OFDPA 3.0 EA4
+* Update fabric synchronizer to push routes instead of hosts
+* Support enable/disable ports on STANDBY nodes
+* DHCP server HA supported by DHCP relay app
+* DHCPv6 option de/serializers
+* DHCP Relay Manager
+* DHCP Relay Store
+* Create DHCPv6 serializer and deserializer
+* Add keepalive messages to FPM protocol
+* Dual-home host failover
+* Extend Network Config Host Provider to support multihoming
+* Update host location when port goes down
+* Extend HostStore to track multiple locations
+* Add keepalives for FPM connections
+* Improve hash-groups scaling and routing logic to prepare for dual homing
+* VLAN support for DHCP Relay and HostLocationProvider
+* CLI for DHCP relay manager
+* Test multilink support on CORD pod
 
-#### Epic: DHCP Relay 
+## Epic: DHCP Relay
 
-* DHCP relay app configuration change for multiple servers 
-* Refactor DHCP relay app 
-* Support DHCPv6 by HostLocationProvider 
+* DHCP relay app configuration change for multiple servers
+* Refactor DHCP relay app
+* Support DHCPv6 by HostLocationProvider
 
-#### Epic: Dual Homing 
+## Epic: Dual Homing
 
-* Extend HostLocationProvider to detect dual-homed hosts 
+* Extend HostLocationProvider to detect dual-homed hosts
 
-#### Epic: Logging 
+## Epic: Logging
 
-* Support reloads for logging module 
-* Implement logging component 
-* Design new logging component 
+* Support reloads for logging module
+* Implement logging component
+* Design new logging component
 
-#### Epic: Add OVS-DPDK support 
+## Epic: Add OVS-DPDK support
 
-* Jenkins pipeline for setting up DPDK-enabled POD on QCT2 
-* Validate DPDK setup on CiaB 
-* Bind fabric interfaces to DPDK 
-* Add portbindings to `networking_onos` Neutron plugin 
-* Change kernel boot options for nodes 
-* Modify the nova-compute charm to install, configure OVS-DPDK 
-* Add OpenStack config options to` juju_config.yml`
-* Configure Nova with DPDK-enabled flavor(s) 
+* Jenkins pipeline for setting up DPDK-enabled POD on QCT2
+* Validate DPDK setup on CiaB
+* Bind fabric interfaces to DPDK
+* Add portbindings to `networking_onos` Neutron plugin
+* Change kernel boot options for nodes
+* Modify the nova-compute charm to install, configure OVS-DPDK
+* Add OpenStack config options to `juju_config.yml`
+* Configure Nova with DPDK-enabled flavor(s)
 
-#### Epic: R-CORD 
+## Epic: R-CORD
 
-* Resurrect VOLT synchronizer and get it configuring the ONOS vOLT app 
-* Deploy VOLTHA with ponsim on CORD POD 
+* Resurrect VOLT synchronizer and get it configuring the ONOS vOLT app
+* Deploy VOLTHA with ponsim on CORD POD
 
-#### Epic: Maintenance 
+## Epic: Maintenance
 
-* Autogenerated OpenStack passwords if composed o only digits cause OpenStack synchronizer failures
-* Specify files for GUI Extensions 
-* Add description and human readable name to modeldefs 
-* Fix 3.0 `xos_base` build 
-* Fix wrong import of service models from core in A-CORD 
-* Update ONOS to 1.10 
-* Update CiaB fabric to use OVS instead of CPqD 
-* Update generated fabric config 
-* CiaB: remove incorrect VAGRANT_CWD from quickstart.md 
-* VRouter tenants are not being reaped 
-* Nuisance errors in `xos_ui` container 
-* Use tmux or mosh to install CiaB 
-* General refactor of the physical POD installation guide 
-* Update ExampleService tutorial 
-* Update VTN section of quickstart_physical.md for CORD 3.0 
-* Cut 3.0.1 release 
-* Duplicate NetworkSlice objects are created 
+* Autogenerated OpenStack passwords composed of only digits cause OpenStack
+  synchronizer failures
+* Specify files for GUI Extensions
+* Add description and human-readable name to modeldefs
+* Fix 3.0 `xos_base` build
+* Fix wrong import of service models from core in A-CORD
+* Update ONOS to 1.10
+* Update CiaB fabric to use OVS instead of CPqD
+* Update generated fabric config
+* CiaB: remove incorrect VAGRANT_CWD from quickstart.md
+* VRouter tenants are not being reaped
+* Nuisance errors in `xos_ui` container
+* Use tmux or mosh to install CiaB
+* General refactor of the physical POD installation guide
+* Update ExampleService tutorial
+* Update VTN section of quickstart_physical.md for CORD 3.0
+* Cut 3.0.1 release
+* Duplicate NetworkSlice objects are created
 * Upgrade ONOS to 1.8.7, cut 3.0.0-rc3
 
-#### Other (not assigned to an Epic)
+## Other (not assigned to an Epic)
 
-* Optimize api-sanity-pipeline job for master branch 
-* China Mobile E-CORD setup 2007-08-13 through -19 
-* Add action to xosTable to navigate to the detail page 
-* Standardize makefiles for maas 
-* Bring Up XOS Services fails on master branch 
-* For local scenarios, non-superuser creation of config file directories 
-* `last_login` field doesn't show up in newly generated API 
-* Investigate mock-rcord breakage 
-* Look into xos-base build failure 
-* Investigate API Sanity Pipeline breakage 
-* Update CORD 3.0 to use release version of plyprotobuf 
-* Create cross linkage of Gerrit commits to Jira stories 
-* Modify CORD Jira workflow to match ONOS 
-* Zombie processes are present as soon as the head node gets deployed 
-* Add CLI command to view routes in FPM layer 
+* Optimize api-sanity-pipeline job for master branch
+* China Mobile E-CORD setup 2017-08-13 through -19
+* Add action to xosTable to navigate to the detail page
+* Standardize makefiles for maas
+* Bring Up XOS Services fails on master branch
+* For local scenarios, non-superuser creation of config file directories
+* `last_login` field doesn't show up in newly generated API
+* Investigate mock-rcord breakage
+* Look into xos-base build failure
+* Investigate API Sanity Pipeline breakage
+* Update CORD 3.0 to use release version of plyprotobuf
+* Create cross-linkage of Gerrit commits to Jira stories
+* Modify CORD Jira workflow to match ONOS
+* Zombie processes are present as soon as the head node gets deployed
+* Add CLI command to view routes in FPM layer
 
diff --git a/docs/scripts/defaults.md.j2 b/docs/scripts/defaults.md.j2
index 04738cb..35e8d29 100644
--- a/docs/scripts/defaults.md.j2
+++ b/docs/scripts/defaults.md.j2
@@ -1,22 +1,21 @@
 # Build System Config Glossary
 
-{{ def_docs['frontmatter']['description'] }}
-
+{{ frontmatter }}
 {% for key, val in def_docs|dictsort %}
-### {{ key }}
-
+## {{ key }}
+{% if val['description'] != '' %}
 {{ val['description'] }}
-
+{% endif %}
 Default value:
-```
+
+```yaml
 {{ val['defval_pp'] }}
 ```
 
 Used in:
 
 {% for file in val['reflist']|sort(attribute='path') -%}
- - [{{ file.path }}]({{ file.link }})
+- [{{ file.path }}]({{ file.link }})
 {% endfor -%}
-
 {% endfor %}
 
diff --git a/docs/scripts/defaultsdoc.py b/docs/scripts/defaultsdoc.py
index f75fa85..c17ee49 100644
--- a/docs/scripts/defaultsdoc.py
+++ b/docs/scripts/defaultsdoc.py
@@ -20,7 +20,6 @@
 import jinja2
 import logging
 import os
-import pprint
 import re
 import sys
 import xml.etree.ElementTree as ET
@@ -56,7 +55,7 @@
 args = parser.parse_args()
 
 # find the branch we're on via the repo manifest
-manifest_path =  os.path.abspath("../../.repo/manifest.xml")
+manifest_path = os.path.abspath("../../.repo/manifest.xml")
 try:
     tree = ET.parse(manifest_path)
     manifest_xml = tree.getroot()
@@ -69,15 +68,12 @@
 role_defs = []
 profile_defs = []
 group_defs = []
-
-# frontmatter section is any text at the top of the descriptions.md file, and
-# comes before all other sections
-def_docs = {'frontmatter':{'description':''}}
+def_docs = {}
 
 # find all the files to be processed
 for dirpath, dirnames, filenames in os.walk(args.playbook_dir):
     basepath = re.sub(args.playbook_dir, '', dirpath)
-    for filename in filenames :
+    for filename in filenames:
         filepath = os.path.join(basepath, filename)
 
         if fnmatch.fnmatch(filepath, "roles/*/defaults/*.yml"):
@@ -90,7 +86,6 @@
             group_defs.append(filepath)
 
 
-
 for rd in role_defs:
     rd_vars = {}
     # trim slash so basename grabs the final directory name
@@ -100,10 +95,10 @@
         rd_partialpath = os.path.join(rd_basedir, rd)
 
         # partial URL, without line nums
-        rd_url = "https://github.com/opencord/platform-install/tree/%s/%s" % (repo_branch, rd)
+        rd_url = "https://github.com/opencord/platform-install/tree/%s/%s" % (
+            repo_branch, rd)
 
-        
-        rd_fh= open(rd_fullpath, 'r')
+        rd_fh = open(rd_fullpath, 'r')
 
         # markedloader is for line #'s
         loader = markedyaml.MarkedLoader(rd_fh.read())
@@ -124,27 +119,39 @@
 
         for key, val in rd_vars.iteritems():
 
-           # build full URL to lines. Lines numbered from zero, so +1 on them to match github
-           if marked_vars[key].start_mark.line == marked_vars[key].end_mark.line:
-               full_url = "%s#L%d" % (rd_url, marked_vars[key].start_mark.line+1)
-           else:
-               full_url = "%s#L%d-L%d" % (rd_url, marked_vars[key].start_mark.line, marked_vars[key].end_mark.line)
+            # build full URL to lines. Lines numbered from zero, so +1 on them
+            # to match github
+            if marked_vars[key].start_mark.line == marked_vars[
+                    key].end_mark.line:
+                full_url = "%s#L%d" % (rd_url,
+                                       marked_vars[key].start_mark.line + 1)
+            else:
+                full_url = "%s#L%d-L%d" % (rd_url,
+                                           marked_vars[key].start_mark.line,
+                                           marked_vars[key].end_mark.line)
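+            # e.g. "<rd_url>#L10" for a single line, or "<rd_url>#L10-L12"
+            # for a range (illustrative line numbers)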
 
-           if key in def_docs:
+            if key in def_docs:
                 if def_docs[key]['defval'] == val:
-                    def_docs[key]['reflist'].append({'path':rd_partialpath, 'link':full_url})
+                    def_docs[key]['reflist'].append(
+                        {'path': rd_partialpath, 'link': full_url})
                 else:
-                    LOG.error(" %s has different default > %s : %s" % (rd, key, val))
-           else:
-                to_print = { str(key): val }
-                pp = yaml.dump(to_print, indent=4, allow_unicode=False, default_flow_style=False)
+                    LOG.error(
+                        " %s has different default > %s : %s" %
+                        (rd, key, val))
+            else:
+                to_print = {str(key): val}
+                pp = yaml.dump(
+                    to_print,
+                    indent=4,
+                    allow_unicode=False,
+                    default_flow_style=False)
 
                 def_docs[key] = {
-                        'defval': val,
-                        'defval_pp': pp,
-                        'description': "",
-                        'reflist': [{'path':rd_partialpath, 'link':full_url}],
-                        }
+                    'defval': val,
+                    'defval_pp': pp,
+                    'description': "",
+                    'reflist': [{'path': rd_partialpath, 'link': full_url}],
+                }
 
 # read in descriptions file
 descriptions = {}
@@ -158,7 +165,7 @@
 
         if desc_header:
             # add previous description to dict
-            descriptions[desc_name] = desc_lines
+            descriptions[desc_name] = desc_lines.strip()
 
             # set this as the next name, wipe out lines
             desc_name = desc_header.group(1)
@@ -166,14 +173,19 @@
         else:
             desc_lines += d_l
 
-    descriptions[desc_name] = desc_lines
+    descriptions[desc_name] = desc_lines.strip()
+
+# Get the frontmatter out of descriptions, and remove the header line
+frontmatter = re.sub(r'^#.*\n\n', '', descriptions.pop('frontmatter', ''))
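+# (for descriptions.md, the header stripped here is
+# "# Build Variable Descriptions")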
 
 # add descriptions to def_docs
 for d_name, d_text in descriptions.iteritems():
     if d_name in def_docs:
         def_docs[d_name]['description'] = d_text
     else:
-        LOG.error("Description exists for '%s' but doesn't exist in defaults" % d_name)
+        LOG.error(
+            "Description exists for '%s' but doesn't exist in defaults" %
+            d_name)
 
 # check for missing descriptions
 for key in sorted(def_docs):
@@ -182,10 +194,10 @@
 
 # Add to template and write to output file
 j2env = jinja2.Environment(
-    loader = jinja2.FileSystemLoader('.')
+    loader=jinja2.FileSystemLoader('.')
 )
 
 template = j2env.get_template(args.template)
 
 with open(args.output, 'w') as f:
-    f.write(template.render(def_docs=def_docs))
+    f.write(template.render(def_docs=def_docs, frontmatter=frontmatter))
diff --git a/docs/scripts/descriptions.md b/docs/scripts/descriptions.md
index f87b69a..91eae23 100644
--- a/docs/scripts/descriptions.md
+++ b/docs/scripts/descriptions.md
@@ -1,3 +1,4 @@
+# Build Variable Descriptions
 
 This page documents all the configuration variables that can be set in a [POD
 config](install.md#pod-config), [scenario](install.md#scenarios), or
@@ -5,9 +6,9 @@
 
 These variables are used in and apply to the following repositories:
 
- - [cord](https://github.com/opencord/cord) (aka "build" when checked out)
- - [maas](https://github.com/opencord/maas)
- - [platform-install](https://github.com/opencord/platform-install)
+- [cord](https://github.com/opencord/cord) (aka "build" when checked out)
+- [maas](https://github.com/opencord/maas)
+- [platform-install](https://github.com/opencord/platform-install)
 
 ## addresspool_public_cidr
 
@@ -370,8 +371,6 @@
 
 Hostname (or IP) for the ElasticStack logging host machine.
 
-
-
 ## management_hosts_net_cidr
 
 CIDR for the management_hosts VTN network.
@@ -536,11 +535,13 @@
 
 ## site_name
 
-Machine readable name to use for the CORD site. This should be one word, without spaces.
+Machine-readable name to use for the CORD site. This should be one word,
+without spaces.
 
 ## site_suffix
 
-The DNS suffix applied to all machines created for this site. Must be a valid DNS name.
+The DNS suffix applied to all machines created for this site. Must be a valid
+DNS name.
 
 ## ssh_ca_phrase
 
@@ -763,6 +764,7 @@
 profile. Deprecated, see: [xos_new_tosca_config_templates](#xosnewtoscaconfigtemplates)
 
 ## xos_new_tosca_config_templates
+
 List of XOS tosca templates to load that make up the service graph of a
 profile.
 
diff --git a/docs/scripts/markedyaml.py b/docs/scripts/markedyaml.py
index f7c1484..95cd641 100644
--- a/docs/scripts/markedyaml.py
+++ b/docs/scripts/markedyaml.py
@@ -20,17 +20,17 @@
 # Request for licensing clarification made on 2017-09-19
 # Contains improvements to support more types (bool/int/etc.)
 
-import yaml
 from yaml.composer import Composer
-from yaml.reader import Reader
-from yaml.scanner import Scanner
-from yaml.composer import Composer
-from yaml.resolver import Resolver
+from yaml.constructor import SafeConstructor
 from yaml.parser import Parser
-from yaml.constructor import Constructor, BaseConstructor, SafeConstructor
+from yaml.reader import Reader
+from yaml.resolver import Resolver
+from yaml.scanner import Scanner
+
 
 def create_node_class(cls):
     class node_class(cls):
+
         def __init__(self, x, start_mark, end_mark):
             cls.__init__(self, x)
             self.start_mark = start_mark
@@ -41,18 +41,21 @@
     node_class.__name__ = '%s_node' % cls.__name__
     return node_class
 
+
 dict_node = create_node_class(dict)
 list_node = create_node_class(list)
 unicode_node = create_node_class(unicode)
 int_node = create_node_class(int)
 float_node = create_node_class(float)
 
+
 class NodeConstructor(SafeConstructor):
     # To support lazy loading, the original constructors first yield
     # an empty object, then fill them in when iterated. Due to
     # laziness we omit this behaviour (and will only do "deep
     # construction") by first exhausting iterators, then yielding
     # copies.
+
     def construct_yaml_map(self, node):
         obj, = SafeConstructor.construct_yaml_map(self, node)
         return dict_node(obj, node.start_mark, node.end_mark)
@@ -80,31 +83,33 @@
 
 
 NodeConstructor.add_constructor(
-        u'tag:yaml.org,2002:map',
-        NodeConstructor.construct_yaml_map)
+    u'tag:yaml.org,2002:map',
+    NodeConstructor.construct_yaml_map)
 
 NodeConstructor.add_constructor(
-        u'tag:yaml.org,2002:seq',
-        NodeConstructor.construct_yaml_seq)
+    u'tag:yaml.org,2002:seq',
+    NodeConstructor.construct_yaml_seq)
 
 NodeConstructor.add_constructor(
-        u'tag:yaml.org,2002:str',
-        NodeConstructor.construct_yaml_str)
+    u'tag:yaml.org,2002:str',
+    NodeConstructor.construct_yaml_str)
 
 NodeConstructor.add_constructor(
-        u'tag:yaml.org,2002:bool',
-        NodeConstructor.construct_yaml_bool)
+    u'tag:yaml.org,2002:bool',
+    NodeConstructor.construct_yaml_bool)
 
 NodeConstructor.add_constructor(
-        u'tag:yaml.org,2002:int',
-        NodeConstructor.construct_yaml_int)
+    u'tag:yaml.org,2002:int',
+    NodeConstructor.construct_yaml_int)
 
 NodeConstructor.add_constructor(
-        u'tag:yaml.org,2002:float',
-        NodeConstructor.construct_yaml_float)
+    u'tag:yaml.org,2002:float',
+    NodeConstructor.construct_yaml_float)
 
 
-class MarkedLoader(Reader, Scanner, Parser, Composer, NodeConstructor, Resolver):
+class MarkedLoader(Reader, Scanner, Parser, Composer,
+                   NodeConstructor, Resolver):
+
     def __init__(self, stream):
         Reader.__init__(self, stream)
         Scanner.__init__(self)
@@ -113,6 +118,6 @@
         NodeConstructor.__init__(self)
         Resolver.__init__(self)
 
+
 def get_data(stream):
     return MarkedLoader(stream).get_data()
-
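+
+# Usage sketch: values returned by get_data() carry zero-indexed
+# start_mark/end_mark attributes, e.g.
+#   data = get_data("foo: bar\n")
+#   data.start_mark.line        # -> 0
+#   data['foo'].start_mark.line # -> 0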
diff --git a/docs/service-profiles.md b/docs/service-profiles.md
index 679768d..e7ee9dc 100644
--- a/docs/service-profiles.md
+++ b/docs/service-profiles.md
@@ -1,13 +1,18 @@
 # Service Profiles
 
-This guide describes each of the service profiles that can be built on top of the CORD platform. The content in this guide is currently thin, but this is the place to document various profiles going forward.
+This guide describes each of the service profiles that can be built on top of
+the CORD platform. The content in this guide is currently thin, but this is the
+place to document various profiles going forward.
 
 Both *Services* and *Service Profiles* are classified as either
 
-* **Official** – Passed QA tests, officially approved by the TST, to be supported in maintenance releases.
+* **Official** – Passed QA tests, officially approved by the TST, to be
+  supported in maintenance releases.
 
-* **Development** –  Not fully tested, included in the release for evaluation, not currently supported.
+* **Development** – Not fully tested, included in the release for evaluation,
+  not currently supported.
 
-These classifications are defined by the CORD
-[Technical Steering Team (TST)](https://wiki.opencord.org/display/CORD/Technical+Steering+Team)
-and subject to change. 
+These classifications are defined by the CORD [Technical Steering Team
+(TST)](https://wiki.opencord.org/display/CORD/Technical+Steering+Team) and are
+subject to change.
+
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 6f9a0fd..328e144 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -19,7 +19,7 @@
 log file in the `~/cord/build/logs` directory - this output isn't interleaved
 with other make targets and will be much easier to read.
 
-```
+```shell
 ~/cord/build$ make printconfig
 Scenario: 'cord'
 Profile: 'rcord'
@@ -55,7 +55,7 @@
 milestones to the list of logs, you can determine which logs failed to result
 in a milestone, and thus did not complete:
 
-```
+```shell
 $ cd ~/cord/build
 $ ls -ltr milestones ; ls -ltr logs
 -rw-r--r-- 1 zdw xos-PG0 181 Oct  3 13:23 README.md
@@ -88,23 +88,22 @@
 `make: *** [targetname] Error #`, and can be found in log files with this
 `grep` command:
 
-```
+```shell
 $ grep -F "make: ***" ~/build.out
 make: *** [milestones/build-maas-images] Error 2
 make: *** Waiting for unfinished jobs....
 make: *** [milestones/build-onos-apps] Error 2
 ```
 
-
 ## Collecting POD diagnostics
 
 There is a `collect-diag` make target that will collect diagnostic information
 for a Virtual or Physical POD. It is run automatically when the `pod-test`
 target is run, but can be run manually at any time:
 
-```
-$ cd ~/cord/build
-$ make collect-diag
+```shell
+cd ~/cord/build
+make collect-diag
 ```
 
 Once it's finished running, ssh to the head node and look for a directory named
@@ -114,7 +113,7 @@
 diagnostic commands run on the head node, Juju status, ONOS diagnostics, and
 OpenStack status:
 
-```
+```shell
 $ ssh head1
 vagrant@head1:~$ ls
 diag-20171003T203058
@@ -183,14 +182,14 @@
 
 Systems that support nested virtualization:
 
--   VMWare - <https://communities.vmware.com/docs/DOC-8970>, <https://communities.vmware.com/community/vmtn/bestpractices/nested>
--   Xen - <http://wiki.xenproject.org/wiki/Nested_Virtualization_in_Xen>
--   KVM - <https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM> , <https://wiki.archlinux.org/index.php/KVM#Nested_virtualization>
--   Hyper V - <https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/user_guide/nesting>
+- VMware - <https://communities.vmware.com/docs/DOC-8970>, <https://communities.vmware.com/community/vmtn/bestpractices/nested>
+- Xen - <http://wiki.xenproject.org/wiki/Nested_Virtualization_in_Xen>
+- KVM - <https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM>, <https://wiki.archlinux.org/index.php/KVM#Nested_virtualization>
+- Hyper-V - <https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/user_guide/nesting>
 
 Systems that lack nested virtualization:
 
--   Virtualbox - track this feature request: <https://www.virtualbox.org/ticket/4032>
+- VirtualBox - track this feature request: <https://www.virtualbox.org/ticket/4032>
 
 ### DNS Lookup Check
 
@@ -309,45 +308,45 @@
 
 There are various utility targets:
 
- - `printconfig`: Prints the configured scenario and profile.
+- `printconfig`: Prints the configured scenario and profile.
 
- - `xos-teardown`: Stop and remove a running set of XOS docker containers,
-   removing the database.
+- `xos-teardown`: Stop and remove a running set of XOS docker containers,
+  removing the database.
 
- - `xos-update-images`: Rebuild the images used by XOS, without tearing down
-   running XOS containers.
+- `xos-update-images`: Rebuild the images used by XOS, without tearing down
+  running XOS containers.
 
- - `collect-diag`: Collect detailed diagnostic information on a deployed head
-   and compute nodes, into `diag-<datestamp>` directory on the head node.
+- `collect-diag`: Collect detailed diagnostic information on a deployed head
+  and compute nodes, into `diag-<datestamp>` directory on the head node.
 
- - `compute-node-refresh`: Reload compute nodes brought up by MaaS into XOS,
-   useful in the cord virtual and physical scenarios
+- `compute-node-refresh`: Reload compute nodes brought up by MaaS into XOS,
+  useful in the cord virtual and physical scenarios
 
- - `pod-test`: Run the `platform-install/pod-test-playbook.yml`, testing the
-   virtual/physical cord scenario.
+- `pod-test`: Run the `platform-install/pod-test-playbook.yml`, testing the
+  virtual/physical cord scenario.
 
- - `vagrant-destroy`: Destroy Vagrant containers (for mock/virtual/physical
-   installs)
+- `vagrant-destroy`: Destroy Vagrant containers (for mock/virtual/physical
+  installs)
 
- - `clean-images`: Have containers rebuild during the next build cycle. Does
-   not actually delete any images, just causes imagebuilder to be run again.
+- `clean-images`: Have containers rebuild during the next build cycle. Does
+  not actually delete any images, just causes imagebuilder to be run again.
 
- - `clean-genconfig`: Deletes the `make config` generated config files in
-   `genconfig`, useful when switching between POD configs
+- `clean-genconfig`: Deletes the `make config` generated config files in
+  `genconfig`, useful when switching between POD configs
 
- - `clean-onos`: Stops the ONOS containers on the head node
+- `clean-onos`: Stops the ONOS containers on the head node
 
- - `clean-openstack`: Cleans up and deletes all instances and networks created
-   in OpenStack.
+- `clean-openstack`: Cleans up and deletes all instances and networks created
+  in OpenStack.
 
- - `clean-profile`: Deletes the `cord_profile` directory
+- `clean-profile`: Deletes the `cord_profile` directory
 
- - `clean-all`: Runs `vagrant-destroy`, `clean-genconfig`, and `clean-profile`
-   targets, removes all milestones. Good for resetting a dev environment back
-   to an unconfigured state.
+- `clean-all`: Runs `vagrant-destroy`, `clean-genconfig`, and `clean-profile`
+  targets, removes all milestones. Good for resetting a dev environment back
+  to an unconfigured state (see the example below).
 
- - `clean-local`:  `clean-all` but for the `local` scenario - Runs
-   `clean-genconfig` and `clean-profile` targets, removes local milestones.
+- `clean-local`: `clean-all` but for the `local` scenario - Runs
+  `clean-genconfig` and `clean-profile` targets, removes local milestones.
 
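+For example, resetting a development environment back to an unconfigured
+state (assuming a checked-out build tree):
+
+```shell
+cd ~/cord/build
+make clean-all
+```
+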
 The `clean-*` utility targets should modify the contents of the milestones
 directory appropriately to cause the steps they clean up after to be rerun on
@@ -359,7 +358,7 @@
 
 To rebuild and update XOS container images, run:
 
-```
+```shell
 make xos-update-images
 make -j4 build
 ```
@@ -370,7 +369,7 @@
 If you additionally want to stop all the XOS containers, clear the database,
 and reload the profile, use `xos-teardown`:
 
-```
+```shell
 make xos-teardown
 make -j4 build
 ```
diff --git a/docs/vrouter.md b/docs/vrouter.md
index d6825a6..5aeb443 100644
--- a/docs/vrouter.md
+++ b/docs/vrouter.md
@@ -38,7 +38,7 @@
 bond is fabric, so if you run `ifconfig` on the compute node you have selected
 to deploy Quagga, you should see this bonded interface appear in the output.
 
-```
+```shell
 ubuntu@fumbling-reason:~$ ifconfig fabric
 fabric    Link encap:Ethernet  HWaddr 00:02:c9:1e:b4:e0
           inet addr:10.6.1.2  Bcast:10.6.1.255  Mask:255.255.255.0
@@ -49,25 +49,25 @@
           collisions:0 txqueuelen:0
           RX bytes:89101760 (89.1 MB)  TX bytes:0 (0.0 B)
 ```
-          
+
 We need to dedicate one of these fabric interfaces to the Quagga container, so
 we'll need to remove it from the bond. You should first identify the name of
 the interface that you want to dedicate. In this example we'll assume it is
 called mlx1. You can then remove it from the bond by editing the
 /etc/network/interfaces file:
 
-```
+```shell
 sudo vi /etc/network/interfaces
 ```
 
 You should see a stanza that looks like this:
 
-```
+```shell
 auto mlx1
 iface mlx1 inet manual
     bond-master fabric
 ```
-    
+
 Simply remove the line `bond-master fabric`, save the file, then restart the
 networking service on the compute node.
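+
+For example (one option; the exact command depends on the Ubuntu release):
+
+```shell
+sudo service networking restart
+```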
 
@@ -105,7 +105,7 @@
 
 ## Install and Configure vRouter on ONOS
 
-The vRouter will be run on the `onos-fabric` cluster that controls the physical 
+The vRouter will be run on the `onos-fabric` cluster that controls the physical
 fabric switches.
 
 ### Interface Configuration
@@ -115,7 +115,7 @@
 This is where we configure the second IP address that we allocated from the
 peering subnet. The following shows a configuration example:
 
-```
+```json
 {
     "ports" : {
         "of:0000000000000001/1" : {
@@ -155,7 +155,7 @@
 initial fabric configuration. Then you can run the following command to refresh
 the configuration in ONOS:
 
-```
+```shell
 docker-compose -p rcord exec xos_ui python /opt/xos/tosca/run.py xosadmin@opencord.org /opt/cord_profile/fabric-service.yaml
 ```
 
@@ -167,13 +167,13 @@
 The `onos-fabric` CLI can be accessed with the following command run on the
 head node:
 
-```
-$ ssh karaf@onos-fabric -p 8101
+```shell
+ssh karaf@onos-fabric -p 8101
 ```
 
 On the `onos-fabric` CLI, deactivate and reactivate segment routing:
 
-```
+```shell
 onos> app deactivate org.onosproject.segmentrouting
 onos> app activate org.onosproject.segmentrouting
 ```
@@ -189,14 +189,14 @@
 CORD uses a slightly modified version of Quagga, so the easiest way to deploy
 this is to use the provided docker image.
 
-```
+```shell
 docker pull opencord/quagga
 ```
 
 We also need to download the `pipework` tool which will be used to connect the
 docker image to the physical interface that we set aside earlier.
 
-```
+```shell
 wget https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework
 chmod +x pipework
 ```
@@ -204,21 +204,21 @@
 Create a directory for your Quagga configuration files, and create a
 `bgpd.conf` and `zebra.conf` in there. More on configuring Quagga later.
 
-```
+```shell
 mkdir configs
 ```
 
 Now run the docker image (make sure the path to the config directory matches
 what is on your system):
 
-```
+```shell
 sudo docker run --privileged -d -v $(pwd)/configs:/etc/quagga --name quagga opencord/quagga
 ```
 
 Finally, we can use the pipework tool to add the physical interface into the
 container so that Quagga can talk out over the fabric:
 
-```
+```shell
 sudo ./pipework mlx1 -i eth1 quagga 10.0.1.3/24
 ```
 
@@ -231,7 +231,7 @@
 the Quagga configuration) you can remove the original container and run a new
 one:
 
-```
+```shell
 docker rm -f quagga
 sudo docker run --privileged -d -v $(pwd)/configs:/etc/quagga --name quagga opencord/quagga
 ```
@@ -260,7 +260,7 @@
 
 A minimal Zebra configuration might look like this:
 
-```
+```shell
 !
 hostname cord-zebra
 password cord
@@ -268,6 +268,7 @@
 fpm connection ip 10.6.0.1 port 2620
 !
 ```
+
 The FPM connection IP address is the IP address of one of the `onos-fabric`
 cluster instances running the vRouter app.
 
@@ -279,7 +280,7 @@
 An example simple BGP configuration for peering with one BGP peer might look
 like this:
 
-```
+```shell
 hostname bgp
 password cord
 !