Merge "[CORD-2550] Storing info on who created the model"
diff --git a/docs/README.md b/docs/README.md
index f970044..fc32baf 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -2,24 +2,40 @@
 
 The M-CORD (Mobile CORD) profile is `Official` as of 4.1.
 
-## Service Manifest 
+## Service Manifest
+
 M-CORD includes service manifests:
 
-#### [mcord-ng40](https://github.com/opencord/platform-install/blob/cord-4.1/profile_manifests/mcord-ng40.yml)
+### [mcord-ng40](https://github.com/opencord/platform-install/blob/cord-4.1/profile_manifests/mcord-ng40.yml)
 
-| Service              | Source Code         |
+| Service      | Source Code         |
 |--------------|---------------|
-| epc-service                     | https://github.com/opencord/epc-service |
-| Fabric                     | https://github.com/opencord/fabric |
-| ONOS                 | https://github.com/opencord/onos-service |
-| OpenStack                 | https://github.com/opencord/openstack |
-| vENB                 | https://github.com/opencord/venb |
-| vSPGWC                 | https://github.com/opencord/vspgwc |
-| vSPGWU                 | https://github.com/opencord/vspgwu |
-| VTN                 | https://github.com/opencord/vtn |
+| epc-service  | https://github.com/opencord/epc-service |
+| Fabric       | https://github.com/opencord/fabric |
+| ONOS         | https://github.com/opencord/onos-service |
+| OpenStack    | https://github.com/opencord/openstack |
+| vENB         | https://github.com/opencord/venb |
+| vSPGWC       | https://github.com/opencord/vspgwc |
+| vSPGWU       | https://github.com/opencord/vspgwu |
+| VTN          | https://github.com/opencord/vtn |
 
-## Model Extensions 
-M-CORD does not extend CORD's core models.
+## Model Extensions
 
-## GUI Extensions 
+M-CORD extends CORD's core models with the following model specification
+[mcord.xproto](https://github.com/opencord/mcord/blob/master/xos/models/mcord.xproto),
+which represents the subscriber that anchors a chain of ServiceInstances:
+
+```proto
+message MCordSubscriberInstance (ServiceInstance) {
+    option verbose_name = "MCORD Subscriber";
+    option description = "This model holds the information of a Mobile Subscriber in CORD";
+
+    required string imsi_number = 1 [max_length = 30, content_type = "stripped", blank = False, null = False, db_index = False];
+    optional string apn_number = 2 [max_length = 30, content_type = "stripped", blank = True, null = True, db_index = False];
+    optional int32 ue_status = 3 [max_length = 30, choices = "(('0', 'Detached'), ('1', 'Attached'))", blank = True, null = True, db_index = False];
+}
+```
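+
+Assuming the standard XOS northbound REST conventions used elsewhere in this
+guide, a subscriber record based on this model could be created with a call
+like the following. The `/xosapi/v1/mcord/mcordsubscriberinstances` endpoint
+path and the example field values are illustrative assumptions, not confirmed
+by this document:
+
+```shell
+# Create an MCORD Subscriber via the XOS northbound API (hypothetical endpoint)
+curl -u xosadmin@opencord.org:<password> -X POST \
+  http://<head node>/xosapi/v1/mcord/mcordsubscriberinstances \
+  -H "Content-Type: application/json" \
+  -d '{"imsi_number": "001010000000001", "apn_number": "internet"}'
+```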
+
+## GUI Extensions
+
 M-CORD doesn’t include any GUI extension.
diff --git a/docs/dev_guide.md b/docs/dev_guide.md
index bb4258e..d3b3cf9 100644
--- a/docs/dev_guide.md
+++ b/docs/dev_guide.md
@@ -1,37 +1,63 @@
 # Developer Guide
 
-The paragraph described general guidelines for developers who want to download and work on the M-CORD source code, or need to to mock special development environments.
+This section describes general guidelines for developers who want to download
+and work on the M-CORD source code, or need to mock special development
+environments.
 
 ## Download the M-CORD source code
 
-M-CORD is part of the default CORD code base. To know how you can download the cord source code, go at <https://guide.opencord.org/getting_the_code.html>.
-Each M-CORD service lives in a specific repository. A list of M-CORD services and links to their repositories is available in the main page of this guide, at <https://guide.opencord.org/profiles/mcord/>.
+M-CORD is part of the default CORD code base. To learn how to [download the
+CORD source code, go here](/getting_the_code.md).
+
+Each M-CORD service lives in a specific repository. A list of M-CORD services
+and links to their repositories is available in the [main page of this
+guide](/profiles/mcord/).
 
 > Note: M-CORD source code is available from the 4.1 release (branch) of CORD.
 
 ## Developer environments
 
-As for other CORD profiles, M-CORD can also be deployed in environments other than physical PODs. This creates a more convenient environment for developers, using less resources and providing a faster development life-cycle.
+As for other CORD profiles, M-CORD can also be deployed in environments other
+than physical PODs. This creates a more convenient environment for developers,
+using fewer resources and providing a faster development life-cycle.
 
 Two environments are available, depending on your needs:
-* **Mock/Local Developer Machine**: a development environment running directly on your laptop
+
+* **Mock/Local Developer Machine**: a development environment running directly
+  on your laptop
 * **CORD-in-a-Box**
 
 ### Mock/local Machine Development Environment
 
-To understand what a local development environment is, what it can help you with, and how to build it, look at <https://guide.opencord.org/xos/dev/workflow_mock_single.html>.
-When it’s time to specify the PODCONFIG file, use mcord-ng4t-mock.yml, instead of the default value (rcord-mock.yml)
+To understand what a local development environment is, what it can help you
+with, and how to build it, [see here](/xos/dev/workflow_mock_single.md).
+
+When it’s time to specify the PODCONFIG file, use `mcord-ng4t-mock.yml` instead
+of the default value (`rcord-mock.yml`).
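+
+As a sketch, assuming the standard CORD 4.x build workflow (the `~/cord/build`
+path and make targets are assumptions; the workflow guide linked above is
+authoritative):
+
+```shell
+cd ~/cord/build
+# Point the build at the M-CORD mock pod configuration
+make PODCONFIG=mcord-ng4t-mock.yml config
+```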
 
 ### CORD-in-a-Box (CiaB) Development
 
-To understand what CiaB is and what it can help you with, look at <https://guide.opencord.org/xos/dev/workflow_pod.html>.  Note that, in general, CiaB is useful for validating basic functionality and not for testing performance.
+To understand what CiaB is and what it can help you with, [see
+here](/xos/dev/workflow_pod.md). Note that, in general, CiaB is useful for
+validating basic functionality and not for testing performance.
 
-To build M-CORD CiaB, follow the steps at <https://guide.opencord.org/install_virtual.html>.
+To build M-CORD CiaB, follow the [virtual install steps](/install_virtual.md).
 
-> Note: If you are building on CloudLab, specify the profile `MCORD-in-a-Box` rather than `OnePC-Ubuntu14.04.5`.  This will select a machine with enough resources to run M-CORD.  
+> Note: If you are building on CloudLab, specify the profile `MCORD-in-a-Box`
+> rather than `OnePC-Ubuntu14.04.5`.  This will select a machine with enough
+> resources to run M-CORD.
 
-When it’s time to specify the PODCONFIG file, use `mcord-ng40-virtual.yml`, instead of the default value, `rcord-virtual.yml`.
+When it’s time to specify the PODCONFIG file, use `mcord-ng40-virtual.yml`,
+instead of the default value, `rcord-virtual.yml`.
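+
+For example, following the same build workflow as other CORD virtual installs
+(the `~/cord/build` path and make targets are assumptions; follow the linked
+virtual install guide for the authoritative steps):
+
+```shell
+cd ~/cord/build
+# Select the M-CORD virtual pod configuration instead of the R-CORD default
+make PODCONFIG=mcord-ng40-virtual.yml config
+make build
+```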
 
-> Warning: At today, given the number of VNFs that M-CORD provides, it requires more resources than what other CORD use-cases do. For this reason, in order to experiment with M-CORD-in-a-Box you’ll need a bigger physical server than the ones required to build other [physical PODs](https://guide.opencord.org/install_physical.html#bill-of-materials-bom--hardware-requirements). Specifically, you'll need to have processors with at least a total of 24 physical cores.
+> Warning: As of today, given the number of VNFs that M-CORD provides, it
+> requires more resources than other CORD use-cases do. For this reason, in
+> order to experiment with M-CORD-in-a-Box you’ll need a bigger physical server
+> than the ones required to build other [physical
+> PODs](/install_physical.md#bill-of-materials-bom--hardware-requirements).
+> Specifically, you'll need processors with at least a total of 24 physical
+> cores.
 
-More detailed instructions on how to develop and deploy using CiaB can be found in the general CORD troubleshooting guide, at <https://guide.opencord.org/troubleshooting.html>.
+More detailed instructions on how to develop and deploy using CiaB can be found
+in the general [CORD troubleshooting guide](/troubleshooting.md).
+
diff --git a/docs/installation_guide.md b/docs/installation_guide.md
index c585d8d..7523866 100644
--- a/docs/installation_guide.md
+++ b/docs/installation_guide.md
@@ -4,37 +4,60 @@
 
 ## Hardware Requirements
 
-M-CORD by default uses the NG40 software emulator including the RAN, an MME, and a traffic generator. For this reason, it does not require any additional hardware, other than the ones listed for a “traditional” [CORD POD](../../install_physical.md#bill-of-materials-bom--hardware-requirements).
+By default, M-CORD uses the NG40 software emulator, which includes the RAN, an
+MME, and a traffic generator. For this reason, it does not require any
+additional hardware beyond what is listed for a “traditional” [CORD
+POD](/install_physical.md#bill-of-materials-bom--hardware-requirements).
 
-> Warning: The NG40 vTester requires a compute node with Intel XEON CPU with Westmere microarchitecture or better (<https://en.wikipedia.org/wiki/List_of_Intel_CPU_microarchitectures>).
+> Warning: The NG40 vTester requires a compute node with an Intel Xeon CPU of
+> [Westmere microarchitecture or
+> newer](https://en.wikipedia.org/wiki/List_of_Intel_CPU_microarchitectures).
 
 ## NG40 vTester M-CORD License
 
-As mentioned above, ng4T provides a limited version of its NG40 software, which requires a free license in order to work. The specific NG40 installation steps described in the next paragraph assume the Operator has obtained the license and saved it into  a specific location on the development server.
+As mentioned above, ng4T provides a limited version of its NG40 software, which
+requires a free license to work. The NG40 installation steps described in the
+next paragraph assume the Operator has obtained the license and saved it into a
+specific location on the development server.
 
-In order to download a free M-CORD trial license, go to the NG40 M-CORD website (<https://mcord.ng40.com/>) and register. You will be asked for your company name and your company email. After successful authentication of your email and acknowledgment of the free M-CORD NG40 License, you can download the license file called "ng40-license".
+To download a free M-CORD trial license, go to the [NG40 M-CORD
+website](https://mcord.ng40.com/) and register. You will be asked for your
+company name and your company email. After successful authentication of your
+email and acknowledgment of the free M-CORD NG40 License, you can download the
+license file called `ng40-license`.
 
 ## M-CORD POD Installation
 
-To install the local node you should follow the steps described in the main physical POD installation (<https://guide.opencord.org/install_physical.html>).
+To install the local node you should follow the steps described in the main
+[physical POD installation](/install_physical.md).
 
-As soon as you have the CORD repository on your development machine, transfer the downloaded M-CORD NG40 license file, to
+As soon as you have the CORD repository on your development machine, transfer
+the downloaded M-CORD NG40 license file to:
 
 `$CORD_ROOT/orchestration/xos_services/venb/xos/synchronizer/files/ng40-license`
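+
+For example, if the license file was downloaded to your home directory (the
+source path is illustrative):
+
+```shell
+# Copy the downloaded license into the vENB synchronizer files directory
+cp ~/ng40-license \
+  $CORD_ROOT/orchestration/xos_services/venb/xos/synchronizer/files/ng40-license
+```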
 
-When it’s time to write your pod configuration, use the physical-example.yml file as a template (<https://github.com/opencord/cord/blob/master/podconfig/physical-example.yml>). Either modify it or make a copy of it in the same directory. Fill in the configuration with your own head node data.
+When it’s time to write your pod configuration, use the [physical-example.yml
+file as a
+template](https://github.com/opencord/cord/blob/master/podconfig/physical-example.yml).
+Either modify it or make a copy of it in the same directory. Fill in the
+configuration with your own head node data.
 
 As cord_scenario, use `cord`.
 
 As cord_profile, use `mcord-ng40`.
 
-> Warning: After you’ve finished the basic installation, configure the fabric and your computes, as described at <https://guide.opencord.org/appendix_basic_config.html>.
+> Warning: After you’ve finished the basic installation, configure the fabric
+> and your computes, as described [here](/appendix_basic_config.md).
 
 ## Create an EPC instance
 
-The EPC is the only component that needs to be manually configured, after the fabric gets properly setup. An EPC instance can be created in two ways.
-Create an EPC instance using the XOS-UI
+The EPC is the only component that needs to be manually configured, after the
+fabric is properly set up. An EPC instance can be created in two ways.
+
+### Create an EPC instance using the XOS-UI
+
 Open the XOS UI in your browser: `http://<your head node>/xos`
+
 * Log in
 * From the left panel, select `vEPC`
 * Click on `Virtual Evolved Packet Core ServiceInstances`
@@ -44,30 +67,36 @@
 * Set `Site id` to `MySite`
 * Press `Save`
 
-## Create an EPC instance using the XOS northbound API
+### Create an EPC instance using the XOS northbound API
 
-`curl -u xosadmin@opencord.org:<password> -X POST http://<ip address of pod>/xosapi/v1/vepc/vepcserviceinstances -H "Content-Type: application/json" -d '{"blueprint":"build", "site_id": 1}'`
+```shell
+curl -u xosadmin@opencord.org:<password> -X POST http://<ip address of pod>/xosapi/v1/vepc/vepcserviceinstances -H "Content-Type: application/json" -d '{"blueprint":"build", "site_id": 1}'
+```
 
 ## Verify a Successful Installation
 
-To verify if the installation was successful, ssh into the head node and follow these steps:
+To verify that the installation was successful, ssh into the head node and
+follow these steps:
 
-Verify that the service synchronizers are running.  Use the `docker ps` command on head node, you should be able to see the following M-CORD synchronizers:
+Verify that the service synchronizers are running. Run `docker ps` on the head
+node; you should see the following M-CORD synchronizers:
+
 * mcordng40_venb-synchronizer_1
 * mcordng40_vspgwc-synchronizer_1
 * mcordng40_vspgwu-synchronizer_1
 * mcordng40_vepc-synchronizer_1
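+
+As a convenience (a filter we suggest here, not part of the official
+instructions), the synchronizer containers can be listed by name alone:
+
+```shell
+# Show only the names of running synchronizer containers
+docker ps --format '{{.Names}}' | grep synchronizer
+```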
 
-Check that the ServiceInstances are running on the head node. Run these commands on the head node:  
+Check that the ServiceInstances are running on the head node. Run these
+commands on the head node:
 
-```
+```shell
 source /opt/cord_profile/admin-openrc.sh
 nova list --all-tenants
 ```
 
 You should see three VMs like this:
 
-```
+```shell
 +--------------------------------------+-----------------+--------+------------+-------------+---------------------------------------------------------------------------------------------+
 | ID                                   | Name            | Status | Task State | Power State | Networks                                                                                    |
 +--------------------------------------+-----------------+--------+------------+-------------+---------------------------------------------------------------------------------------------+
@@ -77,25 +106,35 @@
 +--------------------------------------+-----------------+--------+------------+-------------+---------------------------------------------------------------------------------------------+
 ```
 
-> Note: It may take a few minutes to provision the instances. If you don’t see them immediately, try again after some time.
+> Note: It may take a few minutes to provision the instances. If you don’t see
+> them immediately, try again after some time.
 
 Check that the service synchronizers have run successfully:
-* Log into the XOS GUI on the head node
-* In left panel, click on each item listed below and verify that there is a check mark sign under “backend status”
-  * Vspgwc, then Virtual Serving PDN Gateway -- Control Plane Service Instances
-  * Vspgwu, then Virtual Serving Gateway User Plane Service Instances
-  * Venb, then Virtual eNodeB Service Instances
 
-> Note: It may take a few minutes to run the synchronizers.  If you don’t see a check mark immediately, try again after some time.
+* Log into the XOS GUI on the head node
+* In left panel, click on each item listed below and verify that there is a
+  check mark sign under “backend status”
+    * Vspgwc, then Virtual Serving PDN Gateway -- Control Plane Service
+      Instances
+    * Vspgwu, then Virtual Serving Gateway User Plane Service Instances
+    * Venb, then Virtual eNodeB Service Instances
+
+> NOTE: It may take a few minutes to run the synchronizers.  If you don’t see a
+> check mark immediately, try again after some time.
 
 ## Using the NG40 vTester Software
-You’re now ready to generate some traffic and test M-CORD. To proceed, do the following.
 
-* SSH into the NG40 VNF VM with username and password ng40/ng40. To understand how to access your VNFs look [here](troubleshooting.md#how-to-log-into-a-vnf-vm).
+You’re now ready to generate some traffic and test M-CORD. To proceed, do the
+following.
+
+* SSH into the NG40 VNF VM with username and password `ng40`/`ng40`. To
+  understand how to access your VNFs, look
+  [here](troubleshooting.md#how-to-log-into-a-vnf-vm).
 * Run `~/verify_quick.sh`
 
 You should see the following output:
-```
+
+```shell
 **** Load Profile Settings ****
 $pps = 1000
 $bps = 500000000
@@ -112,10 +151,13 @@
 Watchtime: 9 Timeout: 1800
 ```
 
-Both the downlink and uplink should show packets counters increasing. The downlink shows packets flowing from the AS (Application Server) to the UE. The uplink shows packets going from the UE to the AS.
+Both the downlink and uplink should show packet counters increasing. The
+downlink shows packets flowing from the AS (Application Server) to the UE. The
+uplink shows packets going from the UE to the AS.
 
-The result for all commands should look like: 
-```
+The result for all commands should look like:
+
+```shell
 Verdict(tc_attach_www) = VERDICT_PASS
 **** Packet Loss ****
 DL Loss= AS_PktTx-S1uPktRx=     0(pkts); 0.00(%)
@@ -123,71 +165,109 @@
 ```
 
 The verdict is configured to check the control plane only.
-For the user plane verification you see the absolute number and percentage of lost packets.
 
-There are multiple test commands available (You can get the parameter description with the flags -h or -?):
+For the user plane verification you see the absolute number and percentage of
+lost packets.
+
+There are multiple test commands available (you can get the parameter
+descriptions with the `-h` or `-?` flags):
 
 * **verify_attach.sh**
-```
+
+```shell
 Usage: ./verify_attach.sh [<UEs> [<rate>]]
        UEs: Number of UEs 1..10, default 1
        rate: Attach rate 1..10, default 1
 ```
 
 Send only attach and detach.
+
 Used to verify basic control plane functionality.
 
 * **verify_attach_data.sh**
-```
+
+```shell
 Usage: ./verify_attach_data.sh [<UEs> [<rate>]]
        UEs: Number of UEs 1..10, default 1
        rate: Attach rate 1..10, default 1
 ```
 
 Send attach, detach and a few user plane packets.
+
 Used to verify basic user plane functionality.
-Downlink traffic will be sent without waiting for uplink traffic to arrive at the Application Server.
+
+Downlink traffic will be sent without waiting for uplink traffic to arrive at
+the Application Server.
 
 * **verify_quick.sh**
-```
+
+```shell
 Usage: ./verify_quick.sh [<UEs> [<rate> [<pps>]]]
        UEs: Number of UEs 1..10, default 2
        rate: Attach rate 1..10, default 1
        pps: Packets per Second 1000..60000, default 1000
 ```
 
-Send attach, detach and 1000 pps user plane. Userplane ramp up and down ~20 seconds and total userplane transfer time ~70 seconds.
-500.000.000 bps (maximum bit rate to calculate packet size from pps setting, MTU 1450).
-Used for control plane and userplane verification with low load for a short time.
-Downlink traffic will only be send when uplink traffic arrives at the Application Server.
+Send attach, detach, and 1000 pps of user plane traffic. User plane ramp-up and
+ramp-down take ~20 seconds, and the total user plane transfer time is ~70
+seconds.
+
+500,000,000 bps (the maximum bit rate used to calculate the packet size from
+the pps setting, MTU 1450).
+
+Used for control plane and user plane verification with low load for a short
+time.
+
+Downlink traffic will only be sent when uplink traffic arrives at the
+Application Server.
 
 * **verify_short.sh**
-```
+
+```shell
 Usage: ./verify_short.sh [<UEs> [<rate> [<pps>]]]
        UEs: Number of UEs 1..10, default 10
        rate: Attach rate 1..10, default 5
        pps: Packets per Second 1000..60000, default 60000
 ```
 
-Send attach, detach and 60000 pps user plane. Userplane ramp up and down ~20 seconds and total userplane transfer time ~70 seconds.
-500.000.000 bps (maximum bit rate to calculate packet size from pps setting, MTU 1450). Used for control plane and userplane verification with medium load for a short time. Downlink traffic will only be send when uplink traffic arrives at the Application Server.
+Send attach, detach, and 60000 pps of user plane traffic. User plane ramp-up
+and ramp-down take ~20 seconds, and the total user plane transfer time is ~70
+seconds.
+
+500,000,000 bps (the maximum bit rate used to calculate the packet size from
+the pps setting, MTU 1450). Used for control plane and user plane verification
+with medium load for a short time. Downlink traffic will only be sent when
+uplink traffic arrives at the Application Server.
 
 * **verify_long.sh**
-```
+
+```shell
 Usage: ./verify_long.sh [<UEs> [<rate> [<pps>]]]
        UEs: Number of UEs 1..10, default 10
        rate: Attach rate 1..10, default 5
        pps: Packets per Second 1000..60000, default 60000
 ```
 
-Send attach, detach and 60000 pps user plane. Userplane ramp up and down ~200 seconds and total userplane transfer time ~700 seconds.
-500.000.000 bps (maximum bit rate to calculate packet size from pps setting, MTU 1450).
-Used for control plane and userplane verification with medium load for a longer time.
-Downlink traffic will only be send when uplink traffic arrives at the Application Server.
+Send attach, detach, and 60000 pps of user plane traffic. User plane ramp-up
+and ramp-down take ~200 seconds, and the total user plane transfer time is
+~700 seconds.
+
+500,000,000 bps (the maximum bit rate used to calculate the packet size from
+the pps setting, MTU 1450).
+
+Used for control plane and user plane verification with medium load for a
+longer time.
+
+Downlink traffic will only be sent when uplink traffic arrives at the
+Application Server.
 
 ### Request / Update the NG40 vTester Software License
 
-If you forgot to request an NG40 license at the beginning of the installation, or if you would like to extend it, you can input your updated license once the NG40 VM is up, following the steps below:
-SSH into the NG40 VNF VM with username and password ng40/ng40. To understand how to access your VNFs, look [here](troubleshooting.md#how-to-log-into-a-vnf-vm).
+If you forgot to request an NG40 license at the beginning of the installation,
+or if you would like to extend it, you can input your updated license once the
+NG40 VM is up, following the steps below:
+
+SSH into the NG40 VNF VM with username and password ng40/ng40. To understand
+how to access your VNFs, look
+[here](/troubleshooting.md#how-to-log-into-a-vnf-vm).
+
 Add the license file named `ng40-license` to the folder `~/install/`.
 Run the command `~/install/ng40init`.
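+
+The two steps above can be sketched as follows (the source path of the license
+file is illustrative):
+
+```shell
+# Inside the NG40 VNF VM: install the new license and re-initialize NG40
+cp ~/ng40-license ~/install/ng40-license
+~/install/ng40init
+```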
+
diff --git a/docs/overview.md b/docs/overview.md
index b989eaf..134dbd2 100644
--- a/docs/overview.md
+++ b/docs/overview.md
@@ -2,85 +2,181 @@
 
 ## What is M-CORD?
 
-M-CORD is a CORD use-case for mobile edge cloud. The architecture allows Service Providers to disaggregate both the RAN and the core, as well as to virtualize their components, either as VNFs or as SDN applications. Moreover, the architecture enables programmatic control of the RAN, as well as core services chaining. M-CORD-based edge clouds are also capable of hosting MEC services.
-As for any other CORD flavor, the services management of both VNFs and SDN applications is orchestrated by XOS.
-M-CORD is a powerful platform that allows to rapidly innovate cellular networks, towards 5G. As such, it features some 5G specific functionalities, such as split-RAN (C-RAN), RAN user plane and control plane separation (xRAN), programmable network slicing (ProgRAN), and MME-disaggregation. These features have already been demonstrated.
+M-CORD is a CORD use-case for mobile edge cloud. The architecture allows
+Service Providers to disaggregate both the RAN and the core, as well as to
+virtualize their components, either as VNFs or as SDN applications. Moreover,
+the architecture enables programmatic control of the RAN, as well as core
+services chaining. M-CORD-based edge clouds are also capable of hosting MEC
+services.
 
-The first release (4.1) of M-CORD ships with the basic CORD building blocks (ONOS, XOS, Docker, and Open Stack), as well as with a number of VNFs to support 3GPP LTE connectivity. These VNFs include a CUPS compliant open source SPGW (in the form of two VNFs: SPGW-u and SPGW-c), as well as a VNF emulating an MME with an integrated HSS, eNBs and UEs. The emulator is not open source. It comes as a binary, courtesy of ng4T (<http://www.ng4t.com>), who provides free trial licenses for limited use.
+As for any other CORD flavor, the services management of both VNFs and SDN
+applications is orchestrated by XOS.
 
-Following releases of M-CORD will complete the open source EPC suite, by offering an MME, an HSS, as well as a PCRF VNF. The inclusion of these additional VNFs is targeted for the 6.0 release.    
+M-CORD is a powerful platform for rapidly innovating cellular networks toward
+5G. As such, it features some 5G-specific functionalities, such as split-RAN
+(C-RAN), RAN user plane and control plane separation (xRAN), programmable
+network slicing (ProgRAN), and MME disaggregation. These features have already
+been demonstrated.
+
+The first release (4.1) of M-CORD ships with the basic CORD building blocks
+(ONOS, XOS, Docker, and OpenStack), as well as with a number of VNFs to
+support 3GPP LTE connectivity. These VNFs include a CUPS-compliant open source
+SPGW (in the form of two VNFs: SPGW-u and SPGW-c), as well as a VNF emulating
+an MME with an integrated HSS, eNBs, and UEs. The emulator is not open source.
+It comes as a binary, courtesy of [ng4T](http://www.ng4t.com), which provides
+free trial licenses for limited use.
+
+Future releases of M-CORD will complete the open source EPC suite by offering
+MME, HSS, and PCRF VNFs. The inclusion of these additional VNFs is targeted
+for the 6.0 release.
 
 The picture below shows a diagram representing a generic M-CORD POD.
 
-<img src="static/images/overview.png" alt="M-CORD overview" style="width: 800px;"/>
+![M-CORD Overview](static/images/overview.png)
 
-As shown, M-CORD provides connectivity for wireless User Equipments (UEs) to Packet Data Networks (PDNs). The PDNs are Service Provider specific networks, such as VoLTE networks, and public networks, or such as the Internet. Connectivity wise, at a high level M-CORD uses two networks: a Radio Access Network (RAN) and a core (EPC in LTE and NG core in 5G) network. The RAN is composed of a number of base stations (eNBs in LTE / gNBs in 5G) that provide wireless connectivity to UEs while they move. eNBs are the M-CORD peripherals. Both non-disaggregated and split eNBs architectures are supported. In case of a split-RAN solution, the eNBs are split in two components: a Distributed Unit (DU) and a Centralized Unit (CU). The CU is virtualized and implemented as a VNF, exposed as a service (CUaaS) that can be onboarded, configured and instantiated in XOS. 
+As shown, M-CORD provides connectivity for wireless User Equipments (UEs) to
+Packet Data Networks (PDNs). The PDNs are Service Provider-specific networks,
+such as VoLTE networks, or public networks, such as the Internet.
+Connectivity-wise, at a high level M-CORD uses two networks: a Radio Access
+Network (RAN) and a core (EPC in LTE and NG core in 5G) network. The RAN is
+composed of a number of base stations (eNBs in LTE / gNBs in 5G) that provide
+wireless connectivity to UEs while they move. eNBs are the M-CORD peripherals.
+Both non-disaggregated and split eNB architectures are supported. In the case
+of a split-RAN solution, the eNBs are split into two components: a Distributed
+Unit (DU) and a Centralized Unit (CU). The CU is virtualized and implemented
+as a VNF, exposed as a service (CUaaS) that can be onboarded, configured, and
+instantiated in XOS.
 
-In the RAN, both the non-disaggregated eNB and the RU, are connected to the M-CORD POD through a physical connection to one of the leaf switches. The traffic generated by the UEs first travels wirelessly to the eNBs, which are connected to the M-CORD fabric.
-The 3GPP cellular connectivity requires a number of core network components. The eNBs need to pass the UEs’ traffic to the SPGW-u VNF through the fabric and the soft-switch, e.g. OVS, running on the server on which the VNF itself is instantiated. 3GPP has its own control plane, which is responsible for mobility and session management, authentication, QoS policy enforcement, billing, charging, etc. For this reason, in addition to the SPGW-u, the eNB needs also connectivity to the MME VNF to exchange 3GPP control messages. The 3GPP control plane service graph also requires connectivity between the MME, the HSS, the SPGW-c, the SPGW-u, and the PCRF VNFs. It’s the SPGW-u VNF that carries the UE traffic out of the M-CORD POD through a leaf switch, towards an external PDN.
+In the RAN, both the non-disaggregated eNB and the RU are connected to the
+M-CORD POD through a physical connection to one of the leaf switches. The
+traffic generated by the UEs first travels wirelessly to the eNBs, which are
+connected to the M-CORD fabric.
 
-M-CORD is a powerful edge cloud solution: it allows Service Providers to push all the core functionalities to the edge, while they distribute CORD across their edge and central clouds, thereby extending core services across multiple clouds. 
+3GPP cellular connectivity requires a number of core network components. The
+eNBs need to pass the UEs’ traffic to the SPGW-u VNF through the fabric and
+the soft-switch (e.g. OVS) running on the server on which the VNF itself is
+instantiated. 3GPP has its own control plane, which is responsible for
+mobility and session management, authentication, QoS policy enforcement,
+billing, charging, etc. For this reason, in addition to the SPGW-u, the eNB
+also needs connectivity to the MME VNF to exchange 3GPP control messages. The
+3GPP control plane service graph also requires connectivity between the MME,
+the HSS, the SPGW-c, the SPGW-u, and the PCRF VNFs. It’s the SPGW-u VNF that
+carries the UE traffic out of the M-CORD POD through a leaf switch, towards an
+external PDN.
+
+M-CORD is a powerful edge cloud solution: it allows Service Providers to push
+all the core functionalities to the edge, while they distribute CORD across
+their edge and central clouds, thereby extending core services across multiple
+clouds.
 
 ## Glossary
 
-EXPERTS! Maybe we’re saying something obvious here, but we want to make sure we’ve a common understanding on the basic terminology, before going through the rest of the guide. If you already know all this, just skip this section.
+> EXPERTS! Maybe we’re saying something obvious here, but we want to make sure
+> we’ve a common understanding on the basic terminology, before going through
+> the rest of the guide. If you already know all this, just skip this section.
 
 Following is a list of basic terms used in M-CORD:
 
-* **Base Station**: a radio receiver/transmitter of a wireless communications station.
+* **Base Station**: a radio receiver/transmitter of a wireless communications
+  station.
 * **eNodeB** (Evolved Node B): the base station used in 4G/LTE network.
-* **EPC** (Evolved Packet Core): it’s the core network of an LTE system. It allows user mobility, wireless data connections, routing, and authentication
-* **HSS** (Home Subscriber Server): a central database that contains user-related and subscription-related information interacting with the MME.
-* **MME** (Mobility Management Entity): it’s the key control node used in LTE access networks. An MME provides mobility management, session establishment, and authentication.
-* **PCRF** (Policy and Charging Rules Function): a network function defined in the 4G/LTE standard. It computes in real-time the network resources to allocate for an end-user and the related charging policies.
-* **RAN** (Radio Access Network): it describes a technology, a set of devices, to connect UEs to other parts of a network through radio connections
-* **SP-GW-C** (Serving Gateway and PDN Gateway Control plane): a control plane node, responsible for signaling termination, IP address allocation, maintaining UE’s contexts, charging.
-* **SP-GW-U** (Serving Gateway and PDN Gateway User plane): a user plane node connecting the EPC to the external IP networks and to non-3GPP services
-* **UE** (User Equipment): any device used directly by an end-user to communicate to the base station, for example a cell phone.
+* **EPC** (Evolved Packet Core): the core network of an LTE system. It allows
+  user mobility, wireless data connections, routing, and authentication.
+* **HSS** (Home Subscriber Server): a central database that contains
+  user-related and subscription-related information interacting with the MME.
+* **MME** (Mobility Management Entity): the key control node in LTE access
+  networks. An MME provides mobility management, session establishment, and
+  authentication.
+* **PCRF** (Policy and Charging Rules Function): a network function defined in
+  the 4G/LTE standard. It computes in real-time the network resources to
+  allocate for an end-user and the related charging policies.
+* **RAN** (Radio Access Network): the technology and set of devices that
+  connect UEs to other parts of a network through radio connections.
+* **SP-GW-C** (Serving Gateway and PDN Gateway Control plane): a control plane
+  node, responsible for signaling termination, IP address allocation,
+  maintaining UE contexts, and charging.
+* **SP-GW-U** (Serving Gateway and PDN Gateway User plane): a user plane node
+  connecting the EPC to external IP networks and to non-3GPP services.
+* **UE** (User Equipment): any device used directly by an end-user to
+  communicate to the base station, for example a cell phone.
 
 ## System Overview
+
 The current release of M-CORD includes:
-* An open source EPC, providing an SPGW control plane and an SPGW user plane (respectively represented in the system by two VMs deployed on the compute nodes). The current release of the EPC doesn’t yet provide MME, HSS and PCRF services.
-* A closed source test suite, emulating UEs, eNodeBs, a minimal version of an MME with an integrated HSS, and an application server (used to emulate the upstream connectivity).
 
-With customizations the system is able to use real hardware base stations, but the released version supports just simple tests with emulated traffic.
+* An open source EPC, providing an SPGW control plane and an SPGW user plane
+  (respectively represented in the system by two VMs deployed on the compute
+  nodes). The current release of the EPC doesn’t yet provide MME, HSS and PCRF
+  services.
+* A closed source test suite, emulating UEs, eNodeBs, a minimal version of an
+  MME with an integrated HSS, and an application server (used to emulate the
+  upstream connectivity).
 
-At high level, a UE emulator generates some traffic, that passes through the EPC, goes to an emulated application server, and then goes back to the traffic generator again.
+With customizations, the system can use real hardware base stations, but the
+released version supports only simple tests with emulated traffic.
 
-Looking at the system from another perspective, three VMs are provided (some of which implement multiple services:
+At a high level, a UE emulator generates traffic that passes through the EPC
+to an emulated application server, and then flows back to the traffic
+generator.
 
-* **mysite_venb-X**: a test suite that emulates the RAN components (UEs, eNodeBs), an application server, and some of the EPC components (MME and HSS)
-* **mysite_vspgwc-X**: a component of the EPC implementing the S-GW and the P-GW control plane functionalities
-* **mysite_vspgwu-X**: a component of the EPC implementing the S-GW and the P-GW user plane functionalities
+Looking at the system from another perspective, three VMs are provided (some
+of which implement multiple services):
 
-> Note: in the list above, X is a number that varies, and that is automatically generated by the system. More information on how to get the list of VMs can be found in the installation and troubleshooting guides, below.
+* **mysite_venb-X**: a test suite that emulates the RAN components (UEs,
+  eNodeBs), an application server, and some of the EPC components (MME and HSS)
+* **mysite_vspgwc-X**: a component of the EPC implementing the S-GW and the
+  P-GW control plane functionalities
+* **mysite_vspgwu-X**: a component of the EPC implementing the S-GW and the
+  P-GW user plane functionalities
+
+> NOTE: in the list above, X is a number that varies, and that is automatically
+> generated by the system. More information on how to get the list of VMs can
+> be found in the installation and troubleshooting guides, below.
 
 The picture below describes how the VMs are connected.
 
-<img src="static/images/vms_diagram.png" alt="M-CORD VMs wiring diagram" style="width: 800px;"/>
+![M-CORD VMs Wiring Diagram](static/images/vms_diagram.png)
 
 Following, is a list of the networks between the VMs:
-* **S11_net**: used to exchange control plane traffic between MME and SPGW (i.e. tunnel and IP address allocation)
-* **S1U_net**: used to exchange user traffic. This specific network is used to exchange data between the RAN components (UEs, eNodeBs) and the EPC
-* **SGI_net**: used to exchange user traffic. This specific network is used to exchange data between the EPC and the Application Server, running on the mysite_venb-X VM
-* **spgw_net**: network dedicated to the communication of the vSPGW components (control plane and user plane)
 
-User traffic is generated on the mysite_venb-X VM. It goes uplink via S1U_net, reaches the EPC, flows through the SGI_net to the application server, emulating Internet. The application server replies, and the answer flows back in downlink, via SGI_net, through the EPC, via S1U_net, back to the emulated RAN  (eNB and UEs).
+* **S11_net**: used to exchange control plane traffic between MME and SPGW
+  (e.g. tunnel and IP address allocation)
+* **S1U_net**: used to exchange user traffic. This specific network is used to
+  exchange data between the RAN components (UEs, eNodeBs) and the EPC
+* **SGI_net**: used to exchange user traffic. This specific network is used to
+  exchange data between the EPC and the Application Server, running on the
+  mysite_venb-X VM
+* **spgw_net**: network dedicated to the communication of the vSPGW components
+  (control plane and user plane)
+
+User traffic is generated on the mysite_venb-X VM. It goes uplink via S1U_net,
+reaches the EPC, and flows through SGI_net to the application server, which
+emulates the Internet. The application server replies, and the answer flows
+back downlink via SGI_net, through the EPC, and via S1U_net back to the
+emulated RAN (eNB and UEs).
 
 ## Evolved Packet Core (EPC)
 
-The EPC shipped with M-CORD is called “Next Generation Infrastructure Core” (NGIC). It’s provided as an open source reference implementation by Intel. In the current release it includes two services, implemented in separate VMs: the vSPGW-C and the vSPGW-U. 
+The EPC shipped with M-CORD is called “Next Generation Infrastructure Core”
+(NGIC). It’s provided as an open source reference implementation by Intel. In
+the current release it includes two services, implemented in separate VMs: the
+vSPGW-C and the vSPGW-U.
 
-The vSPGW-C and the vSPGW-U are the Control User Plane Separated (CUPS) implementation of the conventional SAE-GW (S-GW and P-GW) which deals with
-converged voice and data services on Long Term Evolution (LTE) networks. The NGIC CUPS architecture is aligned with the 3GPP 5G direction. It has been developed using data plane development kit (DPDK) version optimized for Intel Architecture.
+The vSPGW-C and the vSPGW-U are a Control and User Plane Separation (CUPS)
+implementation of the conventional SAE-GW (S-GW and P-GW), which deals with
+converged voice and data services on Long Term Evolution (LTE) networks. The
+NGIC CUPS architecture is aligned with the 3GPP 5G direction. It has been
+developed using a Data Plane Development Kit (DPDK) version optimized for
+Intel architecture.
 
-If you’re interested to know more and explore the EPC code, go to <https://gerrit.opencord.org/#/admin/projects/ngic>.
+If you’re interested to know more and explore the EPC code, [see
+here](https://gerrit.opencord.org/#/admin/projects/ngic).
 
 ## NG40 vTester software
 
 Within M-CORD, ng4T provides for free a limited version of its NG40 software.
 
 The free version:
+
 * Expires on the April 1st 2018
 * Emulates up to 1 eNB
 * Emulates up to 10 UE
@@ -89,36 +185,102 @@
 * Allows users to attach, detach and send user plane traffic
 * Support only standard Linux interfaces, no DPDK
 
-In order to use the NG40, the Operator will need to apply for a free NG40 M-CORD license with ng4T at the beginning of the setup. Detailed steps can be found in the Installation section below in this guide.
+To use the NG40, the Operator will need to apply for a free NG40 M-CORD
+license with ng4T at the beginning of the setup. Detailed steps can be found
+in the Installation section of this guide.
 
-Full versions of the NG40 vTester can do much more than this. This requires users to apply for additional licenses. In order to apply for licenses, users will need to contact directly ng4T, using the email address <support@ng4t.com>. More details about the NG40 can be found at <http://www.ng4t.com>.
+Full versions of the NG40 vTester can do much more than this. This requires
+users to apply for additional licenses. To apply for licenses, users should
+contact ng4T directly, using the email address `support@ng4t.com`. More
+details about the NG40 can be found at [their website](http://www.ng4t.com).
 
 ## XOS Service Graph
 
-XOS is CORD’s default orchestrator. It is described in detail in the XOS guide. XOS lets service developers describe their services in high-level data models. It then translates those models into configurations of system mechanisms, such as VMs, containers, and overlay networks with the assistance of components such as OpenStack and ONOS. Services can be linked together in the form of graphs. In XOS, everything, including low-level system mechanisms, is implemented as a service.
+XOS is CORD’s default orchestrator. It is described in detail in the XOS guide.
+XOS lets service developers describe their services in high-level data models.
+It then translates those models into configurations of system mechanisms, such
+as VMs, containers, and overlay networks with the assistance of components such
+as OpenStack and ONOS. Services can be linked together in the form of graphs.
+In XOS, everything, including low-level system mechanisms, is implemented as a
+service.
 
-XOS comes with a UI for instantiating and onboarding services, and creating service graphs. In the current implementation of M-CORD, the service graph shown in earlier sections has the following representation in XOS:
+XOS comes with a UI for instantiating and onboarding services, and creating
+service graphs. In the current implementation of M-CORD, the service graph
+shown in earlier sections has the following representation in XOS:
 
-<img src="static/images/services_diagram.png" alt="M-CORD XOS services wiring diagarm" style="width: 800px;"/>
+![M-CORD XOS services wiring diagram](static/images/services_diagram.png)
 
-Instances of services are connected to each other in the data plane via private networks. On the orchestration side, they are connected via XOS-defined relations called ServiceInstanceLinks. Using ServiceInstanceLinks, services can query other services in a given instance of the service graph, for example to discover each other’s configurations. Besides ServiceInstanceLinks, several other constructs are involved in the construction of the service graph. These are outlined below:
-* **Services**: Services are representations of a deployed VNF software. There is only one instance of a service on a given pod. In M-CORD, there is a single service object each for NG40 vTester (vENBService), SPGWC (vSPGWCService) and SPGWU (vSPGWUService). These service objects are brought up by a TOSCA recipe when an MCORD pod is built.
-* **ServiceInstances**: ServiceInstances are representations of deployed VNFs for a single subscriber or as is the case for M-CORD, a class of subscribers. ServiceInstances are created by another one of MCORD’s services, vEPC-as-a-service (described below).
-* **Slices**: Slices are units of compute and network resource allocations. All VMs and Networks created by XOS are associated with slices.
+Instances of services are connected to each other in the data plane via private
+networks. On the orchestration side, they are connected via XOS-defined
+relations called ServiceInstanceLinks. Using ServiceInstanceLinks, services can
+query other services in a given instance of the service graph, for example to
+discover each other’s configurations. Besides ServiceInstanceLinks, several
+other constructs are involved in the construction of the service graph. These
+are outlined below:
+
+* **Services**: Services are representations of deployed VNF software. There
+  is only one instance of a service on a given pod. In M-CORD, there is a
+  single service object each for NG40 vTester (vENBService), SPGWC
+  (vSPGWCService) and SPGWU (vSPGWUService). These service objects are brought
+  up by a TOSCA recipe when an M-CORD pod is built.
+* **ServiceInstances**: ServiceInstances are representations of deployed VNFs
+  for a single subscriber or, as is the case for M-CORD, a class of
+  subscribers. ServiceInstances are created by another one of M-CORD’s
+  services, vEPC-as-a-service (described below).
+* **Slices**: Slices are units of compute and network resource allocations. All
+  VMs and Networks created by XOS are associated with slices.
 
 ## vEPC-as-a-Service
 
-vEPC-as-a-Service is a special service that only operates in the service control plane, and has no data-plane functionality. Its job is to bring up and help configure instances of the service graph described in this document. The implementation of vEPC-as-a-Service contains a declarative description of the service graph in its config file, <https://github.com/opencord/epc-service/blob/cord-4.1/xos/synchronizer/vepc_config.yaml>. It is contained in an option called “blueprints.” While there is currently only one such blueprint graph, more may be added in the future. 
+vEPC-as-a-Service is a special service that only operates in the service
+control plane, and has no data-plane functionality. Its job is to bring up and
+help configure instances of the service graph described in this document. The
+implementation of vEPC-as-a-Service contains a declarative description of the
+service graph in its config file,
+<https://github.com/opencord/epc-service/blob/cord-4.1/xos/synchronizer/vepc_config.yaml>.
+The description is contained in an option called “blueprints.” While there is
+currently only one such blueprint graph, more may be added in the future.
 
-In the blueprint graph, the network section configures the networks, and the graph section defines ServiceInstances and links them together via those networks. 
+In the blueprint graph, the network section configures the networks, and the
+graph section defines ServiceInstances and links them together via those
+networks.
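
As a rough illustration of that shape, a blueprint couples a network section
with a graph section along the following lines. This is a schematic sketch
only: the field names and the blueprint name are illustrative assumptions, and
the authoritative schema is the `vepc_config.yaml` file linked above (the
network names `s11_net` and `spgw_net` come from the diagram earlier in this
guide):

```yaml
# Schematic sketch only -- field names are illustrative, not the real schema.
blueprints:
  - name: epc-blueprint        # hypothetical blueprint name
    networks:                  # the network section: networks to create
      - name: s11_net
      - name: spgw_net
    graph:                     # the graph section: ServiceInstances and links
      - name: vspgwc
        networks: [s11_net, spgw_net]
      - name: vspgwu
        networks: [spgw_net]
```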
 
-When a new ServiceInstance of vEPC-as-a-Service is created via the UI, the entire service graph in this blueprint is instantiated and linked to that vEPC-as-a-Service instance. The XOS linkages in the graph ensure that services are instantiated in the correct order based on dependencies between them, while services that have no dependencies are instantiated in parallel.
+When a new ServiceInstance of vEPC-as-a-Service is created via the UI, the
+entire service graph in this blueprint is instantiated and linked to that
+vEPC-as-a-Service instance. The XOS linkages in the graph ensure that services
+are instantiated in the correct order based on dependencies between them, while
+services that have no dependencies are instantiated in parallel.
 
-Note that vEPC-as-a-Service does not perform any operations that cannot be invoked via the UI, via the REST API, or via the TOSCA engine. It creates data model objects via XOS APIs, just as a user could do manually via the UI, REST, or TOSCA. However, it conveniently performs all of the operations in a single step, and in a valid order so that when an object is created, its dependencies are guaranteed to be met.
+Note that vEPC-as-a-Service does not perform any operations that cannot be
+invoked via the UI, via the REST API, or via the TOSCA engine. It creates data
+model objects via XOS APIs, just as a user could do manually via the UI, REST,
+or TOSCA. However, it conveniently performs all of the operations in a single
+step, and in a valid order so that when an object is created, its dependencies
+are guaranteed to be met.
 
 ## XOS Synchronizers
-Once the objects in the XOS data model have been created, it is up to the XOS synchronizers to translate them and make the service operational. Synchronizers are controllers that run in their own containers and react to changes in the data model to communicate those changes to the VNF in question. In the M-CORD setup, there are synchronizers for each of the services: vENB, vSPGWU, vSPGWC as well as vEPC-as-a-service. In addition, there are also synchronizers for the back-end components: OpenStack and ONOS.
 
-There are two parts of a synchronizer: `model policies` and `sync steps`. Model Policies operate on the data model. vEPC-as-a-service is a good example of a synchronizer that only contains model policies, as it does not have any data-plane functionality. When a vEPC-as-a-service instance is created, it simply creates the corresponding M-CORD service objects, links them together and retreats, leaving it up to other service synchronizers, as well as ONOS and OpenStack to instantiate VMs, configure them, and create networks and network interfaces.
+Once the objects in the XOS data model have been created, it is up to the XOS
+synchronizers to translate them and make the service operational. Synchronizers
+are controllers that run in their own containers and react to changes in the
+data model to communicate those changes to the VNF in question. In the M-CORD
+setup, there are synchronizers for each of the services: vENB, vSPGWU, and
+vSPGWC, as well as for vEPC-as-a-service. In addition, there are also
+synchronizers for the back-end components: OpenStack and ONOS.
 
-“Sync steps” operate on the rest of the system, with the data model excluded. A sync step is typically linked to an Ansible playbook that configures a piece of software. Aside from the playbook, it contains Python code that collects configurations from the XOS data model, via its own data model, and those of neighbouring services in the service graph, and having done so, translates it into arguments for the playbook.
+There are two parts of a synchronizer: `model policies` and `sync steps`. Model
+policies operate on the data model. vEPC-as-a-service is a good example of a
+synchronizer that only contains model policies, as it does not have any
+data-plane functionality. When a vEPC-as-a-service instance is created, it
+simply creates the corresponding M-CORD service objects, links them together
+and retreats, leaving it up to the other service synchronizers, as well as
+ONOS and OpenStack, to instantiate VMs, configure them, and create networks
+and network interfaces.
+
+“Sync steps” operate on the rest of the system, with the data model excluded.
+A sync step is typically linked to an Ansible playbook that configures a piece
+of software. Aside from the playbook, it contains Python code that collects
+configurations from the XOS data model, via its own data model and those of
+neighbouring services in the service graph, and translates them into arguments
+for the playbook.
+
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 78c3124..9a0c583 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -1,329 +1,432 @@
 # Troubleshooting
 
-Sometimes, components may not come up in a clean state. In this case, following paragraph may help you to debug, and fix the issues.
+Sometimes, components may not come up in a clean state. In this case, the
+following paragraphs may help you debug and fix the issues.
 
-Most of the times, debug means do the procedure that the automated process would do, manually. Here are few manual configuration examples.
+Most of the time, debugging means manually performing the procedure that the
+automated process would normally do. Here are a few manual configuration
+examples.
 
-Before reading this paragraph, make sure you’ve already covered the general CORD troubleshooting guide, at in the main [troubleshooting section](../../troubleshooting.md).
+Before reading this section, make sure you’ve already covered the general
+CORD troubleshooting guide, in the main [troubleshooting
+section](/troubleshooting.md).
 
-The troubleshooting guide often mentions interfaces, IPs and networks. For reference, you can use the diagram below.
+The troubleshooting guide often mentions interfaces, IPs and networks. For
+reference, you can use the diagram below.
 
-<img src="static/images/vms_diagram.png" alt="M-CORD VMs wiring diagram" style="width: 800px;"/>
+![M-CORD VMs wiring diagram](static/images/vms_diagram.png)
 
 ## See the status and the IP addresses of your VNFs
 
-You may often need to check the status of your M-CORD VNFs, or access them to apply some extra configurations or to debug. To check the status or to know the IP of your VNF, do the following:
+You may often need to check the status of your M-CORD VNFs, or access them to
+apply some extra configurations or to debug. To check the status or to know the
+IP of your VNF, do the following:
 
 * SSH into the head node
 * Type following commands:
-```
+
+```shell
 source /opt/cord_profile/admin-openrc.sh
 nova list --all-tenants
 ```
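
If the listing is long, the M-CORD VNF rows can be picked out by their name
pattern. The sketch below runs on canned sample rows (placeholders, not real
`nova list` output); on a real head node you would pipe the output of
`nova list --all-tenants` instead:

```shell
# Count sample rows whose VM name matches the M-CORD VNF naming pattern.
# The rows below are made-up placeholders, not real nova output.
matches=$(printf '%s\n' \
  '| mysite_venb-2 | ACTIVE |' \
  '| mysite_vspgwc-3 | ACTIVE |' \
  '| some_other_vm-1 | ACTIVE |' |
  grep -cE 'mysite_(venb|vspgwc|vspgwu)-[0-9]+')
echo "$matches"
```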
 
 ## View interface details for a specific VNF
+
 * SSH into the head node
-* List your VNF VMs, following the procedure at <troubleshooting.md#see-the-status-and-the-ip-addresses-of-your-vnfs>
-* Find the ID of the specific instance (i.e. `92dff317-732f-4a6a-aa0d-56a225e9efae`)
+* List your VNF VMs, following the procedure in [this
+  section](troubleshooting.md#see-the-status-and-the-ip-addresses-of-your-vnfs)
+* Find the ID of the specific instance (e.g.
+  `92dff317-732f-4a6a-aa0d-56a225e9efae`)
 * To get the interfaces details of the VNF, do
-```
+
+```shell
 nova interface-list ID_YOU_FOUND_ABOVE
 ```
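
To script the lookup, the instance ID can be extracted from the `nova list`
table by VM name. The rows below reuse the sample ID from this guide but are
otherwise placeholders, not real output:

```shell
# Extract the ID column for the row whose name matches a given VNF.
id=$(printf '%s\n' \
  '| 92dff317-732f-4a6a-aa0d-56a225e9efae | mysite_vspgwu-3 | ACTIVE |' \
  '| 11111111-2222-3333-4444-555555555555 | mysite_venb-2 | ACTIVE |' |
  awk -F'|' '$3 ~ /vspgwu/ { gsub(/ /, "", $2); print $2 }')
echo "$id"
```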
 
 ## How to log into a VNF VM
-Sometimes, you may need to access a VNF (VM) running on one of the compute nodes. To access a VNF, do the following.
+
+Sometimes, you may need to access a VNF (VM) running on one of the compute
+nodes. To access a VNF, do the following.
+
 * SSH into the head node
-* From the head node, SSH into the compute node running the VNF. Use the following commands:
-```
+* From the head node, SSH into the compute node running the VNF. Use the
+  following commands:
+
+```shell
 ssh-agent bash
 ssh-add
 ssh -A ubuntu@COMPUTE_NODE_NAME
 ```
 
-> Note: You can get your compute node name typing cord prov list on the head node
+> NOTE: You can get your compute node name by typing `cord prov list` on the
+> head node
 
 * SSH into the VM from compute node
-```
+
+```shell
 ssh ubuntu@IP_OF_YOUR_VNF_VM
 ```
 
-> Note: To know the IP of your VNF, refer to [this paragraph](troubleshooting.md#see-the-status-and-the-ip-addresses-of-your-vnfs). The IP you need is the one reported under the `management network`.
+> NOTE: To know the IP of your VNF, refer to [this
+> paragraph](troubleshooting.md#see-the-status-and-the-ip-addresses-of-your-vnfs).
+> The IP you need is the one reported under the `management network`.
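
The two-hop login above can also be captured once in an SSH client
configuration, so a single `ssh mcord-vnf` reaches the VM. Everything in this
fragment is a placeholder sketch: the host aliases are invented here, and the
`COMPUTE_NODE_NAME` / `IP_OF_YOUR_VNF_VM` values come from the steps above:

```
# ~/.ssh/config fragment -- all names below are placeholders
Host mcord-compute
    HostName COMPUTE_NODE_NAME
    User ubuntu
    ForwardAgent yes

Host mcord-vnf
    HostName IP_OF_YOUR_VNF_VM
    User ubuntu
    ProxyJump mcord-compute
```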
 
 ## View an interface name, inside a VNF
 
-In the troubleshooting steps below you’ll be often asked to provide a specific VNF name. To do that, follow the steps below:
-* From the head node, find the IP address of the VNF interface attached to a specific network. To do that, refer to the steps reported [here](troubleshooting.md#see-the-status-and-the-ip-addresses-of-your-vnfs).
-* SSH into the VNF, following the steps [here](troubleshooting.md#how-to-log-into-a-vnf-vm).
-* Run ifconfig inside the VNF. Look for the interface IP you discovered at the steps above. You should see listed on the side the interface name.
+In the troubleshooting steps below you’ll often be asked to provide a specific
+VNF interface name. To find it, follow the steps below:
+
+* From the head node, find the IP address of the VNF interface attached to a
+  specific network. To do that, refer to the steps reported
+  [here](troubleshooting.md#see-the-status-and-the-ip-addresses-of-your-vnfs).
+* SSH into the VNF, following the steps
+  [here](troubleshooting.md#how-to-log-into-a-vnf-vm).
+* Run `ifconfig` inside the VNF. Look for the interface with the IP you
+  discovered in the steps above; the interface name is listed beside it.
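
The lookup in the last step can also be scripted. The sketch below works on
canned `ifconfig`-style text; the interface names, addresses, and output
format are assumptions, not taken from a real VNF:

```shell
# Find the interface whose "inet addr:" line carries a given IP.
sample='eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02
          inet addr:10.6.1.135  Bcast:10.6.1.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 02:42:ac:11:00:03
          inet addr:192.168.0.7  Bcast:192.168.0.255  Mask:255.255.255.0'
iface=$(printf '%s\n' "$sample" | awk -v ip='192.168.0.7' '
  /^[A-Za-z]/ { name = $1 }
  index($0, "inet addr:" ip) { print name; exit }')
echo "$iface"
```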
 
 ## Understand the M-CORD Synchronizers logs
 
-As the word says, synchronizers are XOS components responsible to synchronize the VNFs status with the configuration input by the Operator. More informations about what synchronizers are and how they work, can be found [here](../../xos/dev/synchronizers.md>.
+As the name suggests, synchronizers are the XOS components responsible for
+synchronizing the VNF state with the configuration provided by the operator.
+More information about what synchronizers are and how they work can be found
+[here](/xos/dev/synchronizers.md).
 
-In case of issues, users may need to check the XOS synchronizers logs. Synchronizers are no more than Docker containers running on the head node. Users can access their logs simply using standard Docker commands:
+In case of issues, users may need to check the XOS synchronizer logs.
+Synchronizers are just Docker containers running on the head node, so users
+can access their logs using standard Docker commands:
 
 * SSH into the head node
 * Type the following
-```
+
+```shell
 docker logs -f NAME_OF_THE_SYNCHRONIZER
 ```
 
-> Note: to list the containers running on the head node (including the ones operating as synchronizers), use the command `docker ps`.
+> NOTE: to list the containers running on the head node (including the ones
+> operating as synchronizers), use the command `docker ps`.
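
Synchronizer containers can be picked out of the `docker ps` listing by name.
The sketch below counts matches in canned sample names (the container names
are assumptions, not a real listing); on the head node you would pipe
`docker ps --format '{{.Names}}'` instead:

```shell
# Count sample container names that look like synchronizers.
# The names below are made-up placeholders.
names='vspgwc-synchronizer
vspgwu-synchronizer
xos-ui'
syncs=$(printf '%s\n' "$names" | grep -c 'synchronizer')
echo "$syncs"
```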
 
 It may happen that some error messages appear in the logs of your M-CORD VNF synchronizers.
 
 Following, is a list of the most common cases.
 
-* **Case 1**: “Exception: defer object `<something>_instance<#>` due to waiting on instance”
-It means a VNF cannot come up correctly.
-To check the overall instances status, follow the procedure described [here](troubleshooting.md#see-the-status-and-the-ip-addresses-of-your-vnfs). If your instances are in any status other than `ACTIVE` or `BUILD` there’s an issue. This might happen simply because something temporarily failed during the provisioning process (so you should try to rebuild your VNFs again, following [these instructions](troubleshooting.md#configure-build-and-run-the-spgw-c-and-the-spgw-u), or because there are more serious issues.
+* **Case 1**: “Exception: defer object `<something>_instance<#>` due to waiting
+  on instance”. This means a VNF cannot come up correctly. To check the overall
+  instance status, follow the procedure described
+  [here](troubleshooting.md#see-the-status-and-the-ip-addresses-of-your-vnfs).
+  If your instances are in any status other than `ACTIVE` or `BUILD`, there’s
+  an issue. This might happen simply because something temporarily failed
+  during the provisioning process (in that case, try to rebuild your VNFs,
+  following [these
+  instructions](troubleshooting.md#configure-build-and-run-the-spgw-c-and-the-spgw-u)),
+  or because there are more serious issues.
 
-* **Case 2**: “Exception: IP of SSH proxy not available. Synchronization deferred”
-It means that the Ansible playbook wasn’t able to access some images, since SSH proxy wasn’t available yet. The SSH proxy usually need some time to become available, and the error message automatically disappear when this happens, so you shouldn’t worry about it, as long as the message doesn’t keep showing up.
+* **Case 2**: “Exception: IP of SSH proxy not available. Synchronization
+  deferred”. This means the Ansible playbook wasn’t able to access some
+  images, since the SSH proxy wasn’t available yet. The SSH proxy usually
+  needs some time to become available, and the error message disappears
+  automatically when this happens, so you shouldn’t worry about it, as long
+  as the message doesn’t keep showing up.
 
-* **Case 3**: Any failed message of ansible playbook
-Finding errors related to Ansible is quite common. While it may be possible to fix the issue manually, it’s generally desirable to build and deploy again the VNFs. This will run the entire Ansible playbook again. See [here](troubleshooting.md#configure-build-and-run-the-spgw-c-and-the-spgw-u) for more details.
+* **Case 3**: any failed Ansible playbook message. Errors related to Ansible
+  are quite common. While it may be possible to fix the issue manually, it’s
+  generally better to build and deploy the VNFs again, which runs the entire
+  Ansible playbook again. See
+  [here](troubleshooting.md#configure-build-and-run-the-spgw-c-and-the-spgw-u)
+  for more details.
 
-* **Any other issue?**
-Please report it to us at <cord-dev@opencord.org>. We will try to fix it as soon as possible.
+* **Any other issue?** Please report it to us at <cord-dev@opencord.org>. We
+  will try to fix it as soon as possible.
 
 ## Check the SPGW-C and the SPGW-U Status
 
-Sometimes, you may want to double check if your SPGW-C and SPGW-U components status. To do that, follow the steps below.
+Sometimes you may want to double-check the status of your SPGW-C and SPGW-U
+components. To do that, follow the steps below.
 
 ### SPGW-C
 
-* SSH into `SPGW-C` VNF with credentials `ngic/ngic` (to access the VNF follow [this guidelines](troubleshooting.md#how-to-log-into-a-vnf-vm))
+* SSH into the `SPGW-C` VNF with credentials `ngic/ngic` (to access the VNF,
+  follow [these guidelines](troubleshooting.md#how-to-log-into-a-vnf-vm))
 * Become the root user (the system will pause for a few seconds)
-```
-$ sudo bash
-```
-* Go to `/root/ngic/cp`
-  * If the file `finish_flag_interface_config` exists, the `SPGW-C` has been successfully configured
-  * It the file `finish_flag_build_and_run` exists, the `SPGW-C` build process has successfully completed
-* Open the `results` file. It’s the `SPGW-C` build log file. If the file has similar contents to the one shown below, it means the SPGW-C has been successfully started.
 
-```
-            rx       tx  rx pkts  tx pkts   create   modify  b resrc   create   delete   delete           rel acc               ddn
- time     pkts     pkts     /sec     /sec  session   bearer      cmd   bearer   bearer  session     echo   bearer      ddn      ack
-    0        0        0        0        0        0        0        0        0        0        0        0        0        0        0
-    1        0        0        0        0        0        0        0        0        0        0        0        0        0        0
-```
+  ```shell
+  sudo bash
+  ```
+
+* Go to `/root/ngic/cp`
+
+    * If the file `finish_flag_interface_config` exists, the `SPGW-C` has been
+      successfully configured
+    * If the file `finish_flag_build_and_run` exists, the `SPGW-C` build
+      process has successfully completed
+
+* Open the `results` file. This is the `SPGW-C` build log. If its contents
+  look similar to the output shown below, the `SPGW-C` has been successfully
+  started.
+
+  ```shell
+              rx       tx  rx pkts  tx pkts   create   modify  b resrc   create   delete   delete           rel acc               ddn
+   time     pkts     pkts     /sec     /sec  session   bearer      cmd   bearer   bearer  session     echo   bearer      ddn      ack
+      0        0        0        0        0        0        0        0        0        0        0        0        0        0        0
+      1        0        0        0        0        0        0        0        0        0        0        0        0        0        0
+  ```
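
Where scripting helps, the flag-file checks above can be automated. This is a
hedged sketch (the `check_spgw_flags` helper and its output strings are my own,
not part of NGIC); point it at `/root/ngic/cp` on the `SPGW-C`:

```shell
# Hypothetical helper: report the two flag files described above.
# Usage: check_spgw_flags /root/ngic/cp
check_spgw_flags() {
  dir="$1"
  # finish_flag_interface_config => configuration completed
  [ -f "$dir/finish_flag_interface_config" ] && echo "configured" || echo "not configured"
  # finish_flag_build_and_run => build completed
  [ -f "$dir/finish_flag_build_and_run" ] && echo "built" || echo "not built"
}
```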
 
 ### SPGW-U
 
-* SSH into `SPGW-U` VNF with credentials `ngic/ngic` (to access the VNF follow [these guidelines](troubleshooting.md#how-to-log-into-a-vnf-vm))
+* SSH into `SPGW-U` VNF with credentials `ngic/ngic` (to access the VNF follow
+  [these guidelines](troubleshooting.md#how-to-log-into-a-vnf-vm))
 * Go to `/root/ngic/dp`
-  * If the file `finish_flag_interface_config` exists, the `SPGW-U` has been successfully configured
-  * It the file `finish_flag_build_and_run` exists, the `SPGW-U` build process completed successfully
-* Open the `results` file. It’s the `SPGW-U` build log file. If the file has similar contents to the one shown below, it means the `SPGW-U` has been successfully started.
+    * If the file `finish_flag_interface_config` exists, the `SPGW-U` has been
+      successfully configured
+    * If the file `finish_flag_build_and_run` exists, the `SPGW-U` build
+      process completed successfully
+* Open the `results` file. This is the `SPGW-U` build log. If its contents
+  look similar to the output shown below, the `SPGW-U` has been successfully
+  started.
 
-```
-DP: RTE NOTICE enabled on lcore 1
-DP: RTE INFO enabled on lcore 1
-DP: RTE NOTICE enabled on lcore 0
-DP: RTE INFO enabled on lcore 0
-DP: RTE NOTICE enabled on lcore 3
-DP: RTE INFO enabled on lcore 3
-DP: RTE NOTICE enabled on lcore 5
-DP: RTE INFO enabled on lcore 5
-DP: RTE NOTICE enabled on lcore 4
-DP: RTE INFO enabled on lcore 4
-API: RTE NOTICE enabled on lcore 4
-API: RTE INFO enabled on lcore 4
-DP: RTE NOTICE enabled on lcore 6
-DP: RTE INFO enabled on lcore 6
-DP: RTE NOTICE enabled on lcore 2
-DP: RTE INFO enabled on lcore 2
-DP: MTR_PROFILE ADD: index 1 cir:64000, cbd:3072, ebs:3072
-Logging CDR Records to ./cdr/20171122165107.cur
-DP: ACL DEL:SDF rule_id:99999
-DP: MTR_PROFILE ADD: index 2 cir:125000, cbd:3072, ebs:3072
-DP: MTR_PROFILE ADD: index 3 cir:250000, cbd:3072, ebs:3072
-DP: MTR_PROFILE ADD: index 4 cir:500000, cbd:3072, ebs:3072
-```
+  ```shell
+  DP: RTE NOTICE enabled on lcore 1
+  DP: RTE INFO enabled on lcore 1
+  DP: RTE NOTICE enabled on lcore 0
+  DP: RTE INFO enabled on lcore 0
+  DP: RTE NOTICE enabled on lcore 3
+  DP: RTE INFO enabled on lcore 3
+  DP: RTE NOTICE enabled on lcore 5
+  DP: RTE INFO enabled on lcore 5
+  DP: RTE NOTICE enabled on lcore 4
+  DP: RTE INFO enabled on lcore 4
+  API: RTE NOTICE enabled on lcore 4
+  API: RTE INFO enabled on lcore 4
+  DP: RTE NOTICE enabled on lcore 6
+  DP: RTE INFO enabled on lcore 6
+  DP: RTE NOTICE enabled on lcore 2
+  DP: RTE INFO enabled on lcore 2
+  DP: MTR_PROFILE ADD: index 1 cir:64000, cbd:3072, ebs:3072
+  Logging CDR Records to ./cdr/20171122165107.cur
+  DP: ACL DEL:SDF rule_id:99999
+  DP: MTR_PROFILE ADD: index 2 cir:125000, cbd:3072, ebs:3072
+  DP: MTR_PROFILE ADD: index 3 cir:250000, cbd:3072, ebs:3072
+  DP: MTR_PROFILE ADD: index 4 cir:500000, cbd:3072, ebs:3072
+  ```
 
-> Note: if the last three lines don’t show up, you should re-build the `SPGW-C` and the `SPGW-U`. See [this](troubleshooting.md#configure-build-and-run-the-spgw-c-and-the-spgw-u) for more specific instructions.
+> NOTE: If the last three lines don’t show up, you should re-build the `SPGW-C`
+> and the `SPGW-U`. See
+> [this](troubleshooting.md#configure-build-and-run-the-spgw-c-and-the-spgw-u)
+> for more specific instructions.
 
 ## Configure, build and run the SPGW-C and the SPGW-U
 
-In most cases, the `SPGW-C` and the `SPGW-U` are automatically configured, built, and started, during the installation process. However, unexpected errors may occur, or you may simply want to re-configure these components. To do that, follow the steps below.
+In most cases, the `SPGW-C` and the `SPGW-U` are automatically configured,
+built, and started, during the installation process. However, unexpected errors
+may occur, or you may simply want to re-configure these components. To do that,
+follow the steps below.
 
-> Warning: Make sure you follow the steps in the order described. The SPGW-U should be built and configured before the SPGW-C
+> WARNING: Make sure you follow the steps in the order described. The SPGW-U
+> should be built and configured before the SPGW-C.
 
 ### Configuring the SPGW-U
 
-* SSH into `SPGW-U` VNF with credentials `ngic/ngic` (to access the VNF follow [these guidelines](troubleshooting.md#how-to-log-into-a-vnf-vm))
+* SSH into `SPGW-U` VNF with credentials `ngic/ngic` (to access the VNF follow
+  [these guidelines](troubleshooting.md#how-to-log-into-a-vnf-vm))
+
 * Become sudo
 
-```
-sudo su
-```
+  ```shell
+  sudo su
+  ```
 
 * Go to the configuration directory
 
-```
-/root/ngic/config
-```
+  ```shell
+  cd /root/ngic/config
+  ```
 
 * Edit the `interface.cfg` file:
-  * **dp_comm_ip**: the IP address of the `SPGW-U` interface, attached to the spgw_network
-  * **cp_comm_ip**: the IP address of the SPGW-C interface, attached to the spgw_network
+    * **dp_comm_ip**: the IP address of the `SPGW-U` interface, attached to the spgw_network
+    * **cp_comm_ip**: the IP address of the SPGW-C interface, attached to the spgw_network
 * Edit the `dp_config.cfg` file:
-  * **S1U_IFACE**: the name of the SPGW-U interface, attached to the s1u_network
-  * **SGI_IFACE**: the name of the SPGW-U interface, attached to the sgi_network
-  * **S1U_IP**: the IP address of the SPGW-U interface, attached to the s1u_network
-  * **S1U_MAC**: the MAC address of the SPGW-U interface, attached to the s1u_network
-  * **SGI_IP**: the IP address of the SPGW-U interface attached to the sgi_network
-  * **SGI_MAC**: the MAC address of the SPGW-U interface attached to the sgi_network
+    * **S1U_IFACE**: the name of the SPGW-U interface, attached to the s1u_network
+    * **SGI_IFACE**: the name of the SPGW-U interface, attached to the sgi_network
+    * **S1U_IP**: the IP address of the SPGW-U interface, attached to the s1u_network
+    * **S1U_MAC**: the MAC address of the SPGW-U interface, attached to the s1u_network
+    * **SGI_IP**: the IP address of the SPGW-U interface attached to the sgi_network
+    * **SGI_MAC**: the MAC address of the SPGW-U interface attached to the sgi_network
 * Edit the `static_arp.cfg` file:
-  * Below the line `[sgi]`, you should find another line with a similar pattern: `IP_addr1 IP_addr2 = MAC_addr`
-    * `IP_addr1` and `IP_addr2` are both the IP address of the vENB interface attached to the sgi_network
-    * `MAC_addr` is the MAC address of the vENB interface, attached to the sgi_network
-  * Below the line `[s1u]`, you should find another line with a similar pattern `IP_addr1 IP_addr2 = MAC_addr`
-    * `IP_addr1` and `IP_addr2` are the IP addresses of the vENB interfaces attached to the s1u_network
-    * `MAC_addr` is the MAC address of the vENB attached to the s1u_network
+    * Below the line `[sgi]`, you should find another line with a similar
+      pattern: `IP_addr1 IP_addr2 = MAC_addr`
+        * `IP_addr1` and `IP_addr2` are both set to the IP address of the
+          vENB interface attached to the sgi_network
+        * `MAC_addr` is the MAC address of the vENB interface, attached to the
+          sgi_network
+    * Below the line `[s1u]`, you should find another line with a similar
+      pattern: `IP_addr1 IP_addr2 = MAC_addr`
+        * `IP_addr1` and `IP_addr2` are both set to the IP address of the
+          vENB interface attached to the s1u_network
+        * `MAC_addr` is the MAC address of the vENB interface attached to the
+          s1u_network
 
-> Note: To know the IP addresses and MAC addresses mentioned above, follow the instructions [here](troubleshooting.md#view-interface-details-for-a-specific-vnf).
-
-> Note: To know the interface names mentioned above, follow the instructions [here](troubleshooting.md#view-an-interface-name-inside-a-vnf).
+> Note: To know the IP addresses and MAC addresses mentioned above, follow the
+> instructions
+> [here](troubleshooting.md#view-interface-details-for-a-specific-vnf).
+>
+> To know the interface names mentioned above, follow the instructions
+> [here](troubleshooting.md#view-an-interface-name-inside-a-vnf).
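
As a quick complement to the linked instructions, interface names and MAC
addresses can also be read directly from sysfs inside the VNF. This is a
generic Linux sketch (the `list_ifaces` helper is my own, not part of NGIC):

```shell
# Print every network interface name with its MAC address via sysfs.
list_ifaces() {
  for path in /sys/class/net/*; do
    [ -e "$path" ] || continue
    printf '%s %s\n' "$(basename "$path")" "$(cat "$path/address")"
  done
}
list_ifaces
```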
 
 ### Configuring the SPGW-C
 
-* SSH into `SPGW-C` VNF with credentials `ngic/ngic` (to access the VNF follow [these instructions](troubleshooting.md#how-to-log-into-a-vnf-vm))
+* SSH into `SPGW-C` VNF with credentials `ngic/ngic` (to access the VNF follow
+  [these instructions](troubleshooting.md#how-to-log-into-a-vnf-vm))
 * Become sudo
-```
-sudo su
-```
+
+  ```shell
+  sudo su
+  ```
+
 * Go to the configuration directory
-```cd /root/ngic/config
-```
+
+  ```shell
+  cd /root/ngic/config
+  ```
+
 * Edit the `interface.cfg` file:
-  * **dp_comm_ip**: the IP address of the `SPGW-U` interface, attached to the spgw_network
-  * **cp_comm_ip**: the IP address of the `SPGW-C` interface, attached to the spgw_network
+    * **dp_comm_ip**: the IP address of the `SPGW-U` interface, attached to the spgw_network
+    * **cp_comm_ip**: the IP address of the `SPGW-C` interface, attached to the spgw_network
 * Edit the `cp_config.cfg` file:
-  * **S11_SGW_IP**: the IP address of the SPGW-C interface, attached to the s11_network
-  * **S11_MME_IP**: the IP address of the vENB interface, attached to the s11_network
-  * **S1U_SGW_IP**: the IP addresses of the SPGW-U interface, attached to the s1u_network
+    * **S11_SGW_IP**: the IP address of the SPGW-C interface, attached to the s11_network
+    * **S11_MME_IP**: the IP address of the vENB interface, attached to the s11_network
+    * **S1U_SGW_IP**: the IP addresses of the SPGW-U interface, attached to the s1u_network
 
-> Note: To know the IP addresses mentioned above, follow the instructions here.
-
-> Note: To know the interface names mentioned above, follow the instructions here.
+> Note: To know the IP addresses mentioned above, follow the instructions
+> [here](troubleshooting.md#view-interface-details-for-a-specific-vnf).
+>
+> To know the interface names mentioned above, follow the instructions
+> [here](troubleshooting.md#view-an-interface-name-inside-a-vnf).
 
 ### Building and running the SPGW-U
 
-* SSH into `SPGW-U` VNF with credentials ngic/ngic (to access the VNF follow [these instructions](troubleshooting.md#how-to-log-into-a-vnf-vm))
+* SSH into `SPGW-U` VNF with credentials ngic/ngic (to access the VNF follow
+  [these instructions](troubleshooting.md#how-to-log-into-a-vnf-vm))
+
 * Become sudo
-```
-sudo su
-```
+
+  ```shell
+  sudo su
+  ```
+
 * Go to the configuration directory
-```
-cd /root/ngic/dp
-```
+
+  ```shell
+  cd /root/ngic/dp
+  ```
+
 * Run the following commands:
-```
-../setenv.sh
-make build
-make
-./udev.sh > results &
-```
 
-* Open `results` file. If the process succeeds, you should see an output similar to the one below
+  ```shell
+  ../setenv.sh
+  make build
+  make
+  ./udev.sh > results &
+  ```
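
Because `./udev.sh` runs in the background, the `results` file may take a
moment to appear. A small polling helper avoids reading it too early (this is
my own sketch, not part of the NGIC tree):

```shell
# Poll until a log file exists and is non-empty, up to `tries` seconds.
# Returns 0 once the file has content, 1 on timeout.
wait_for_log() {
  file="$1"; tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    [ -s "$file" ] && return 0
    sleep 1
    i=$((i+1))
  done
  return 1
}
```

For example, `wait_for_log results && tail -n 20 results` from `/root/ngic/dp`.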
 
-```
-DP: RTE NOTICE enabled on lcore 1
-DP: RTE INFO enabled on lcore 1
-DP: RTE NOTICE enabled on lcore 0
-DP: RTE INFO enabled on lcore 0
-DP: RTE NOTICE enabled on lcore 3
-DP: RTE INFO enabled on lcore 3
-DP: RTE NOTICE enabled on lcore 5
-DP: RTE INFO enabled on lcore 5
-DP: RTE NOTICE enabled on lcore 4
-DP: RTE INFO enabled on lcore 4
-API: RTE NOTICE enabled on lcore 4
-API: RTE INFO enabled on lcore 4
-DP: RTE NOTICE enabled on lcore 6
-DP: RTE INFO enabled on lcore 6
-DP: RTE NOTICE enabled on lcore 2
-DP: RTE INFO enabled on lcore 2
-Building and running the SPGW-C
-SSH into the head node
-SSH into SPGW-C VNF with credentials ngic/ngic account (to access the VNF follow the guide, here)
-Become sudo (sudo su)
-Go to the configuration dir
-ectory: cd /root/ngic/cp
-Run the following commands
-../setenv.sh
-make build
-make
-./run.sh > results &
-Open the “results” file. If the process succeeds, you should see an output similar to the one below
+* Open the `results` file. If the process succeeds, you should see output
+  similar to the one below
 
-            rx       tx  rx pkts  tx pkts   create   modify  b resrc   create   delete   delete           rel acc               ddn
- time     pkts     pkts     /sec     /sec  session   bearer      cmd   bearer   bearer  session     echo   bearer      ddn      ack
-    0        0        0        0        0        0        0        0        0        0        0        0        0        0        0
-    1        0        0        0        0        0        0        0        0        0        0        0        0        0        0
-```
+  ```shell
+  DP: RTE NOTICE enabled on lcore 1
+  DP: RTE INFO enabled on lcore 1
+  DP: RTE NOTICE enabled on lcore 0
+  DP: RTE INFO enabled on lcore 0
+  DP: RTE NOTICE enabled on lcore 3
+  DP: RTE INFO enabled on lcore 3
+  DP: RTE NOTICE enabled on lcore 5
+  DP: RTE INFO enabled on lcore 5
+  DP: RTE NOTICE enabled on lcore 4
+  DP: RTE INFO enabled on lcore 4
+  API: RTE NOTICE enabled on lcore 4
+  API: RTE INFO enabled on lcore 4
+  DP: RTE NOTICE enabled on lcore 6
+  DP: RTE INFO enabled on lcore 6
+  DP: RTE NOTICE enabled on lcore 2
+  DP: RTE INFO enabled on lcore 2
+  ```
 
-* Go back to `SPGW-U` VNF and open `results` file in `/root/ngic/dp`. If the process succeeds, you should see a new output similar to one below
 
-```
-DP: MTR_PROFILE ADD: index 2 cir:125000, cbd:3072, ebs:3072
-DP: MTR_PROFILE ADD: index 3 cir:250000, cbd:3072, ebs:3072
-DP: MTR_PROFILE ADD: index 4 cir:500000, cbd:3072, ebs:3072
-```
+* Go back to the `SPGW-U` VNF and open the `results` file in `/root/ngic/dp`.
+  If the process succeeds, you should see new output similar to the one below
+
+  ```shell
+  DP: MTR_PROFILE ADD: index 2 cir:125000, cbd:3072, ebs:3072
+  DP: MTR_PROFILE ADD: index 3 cir:250000, cbd:3072, ebs:3072
+  DP: MTR_PROFILE ADD: index 4 cir:500000, cbd:3072, ebs:3072
+  ```
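
The presence of these meter-profile lines can be checked mechanically. A hedged
sketch (the `check_mtr_profiles` helper and its messages are mine; the grep
pattern comes from the log excerpt above):

```shell
# Count the meter-profile lines in the SPGW-U results log; fewer than
# three suggests the SPGW-C and SPGW-U need to be re-built.
check_mtr_profiles() {
  n=$(grep -c 'MTR_PROFILE ADD: index' "$1")
  if [ "$n" -ge 3 ]; then echo "profiles loaded"; else echo "re-build needed"; fi
}
```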
 
 ### Building and running the SPGW-C
 
 * SSH into the head node
-* SSH into `SPGW-C` VNF with credentials `ngic/ngic` (to access the VNF follow [these instructions](troubleshooting.md#how-to-log-into-a-vnf-vm))
+* SSH into `SPGW-C` VNF with credentials `ngic/ngic` (to access the VNF follow
+  [these instructions](troubleshooting.md#how-to-log-into-a-vnf-vm))
 * Become sudo
 
-```
-sudo su
-```
+  ```shell
+  sudo su
+  ```
 
 * Go to the configuration directory:
 
-```
-cd /root/ngic/cp
-```
+  ```shell
+  cd /root/ngic/cp
+  ```
 
 * Run the following commands
 
-```
-../setenv.sh
-make build
-make
-./run.sh > results &
-```
+  ```shell
+  ../setenv.sh
+  make build
+  make
+  ./run.sh > results &
+  ```
 
-* Open the `results` file. If the process succeeds, you should see an output similar to the one below
+* Open the `results` file. If the process succeeds, you should see an output
+  similar to the one below
 
-```
-            rx       tx  rx pkts  tx pkts   create   modify  b resrc   create   delete   delete           rel acc               ddn
- time     pkts     pkts     /sec     /sec  session   bearer      cmd   bearer   bearer  session     echo   bearer      ddn      ack
-    0        0        0        0        0        0        0        0        0        0        0        0        0        0        0
-    1        0        0        0        0        0        0        0        0        0        0        0        0        0        0
-```
+  ```shell
+              rx       tx  rx pkts  tx pkts   create   modify  b resrc   create   delete   delete           rel acc               ddn
+   time     pkts     pkts     /sec     /sec  session   bearer      cmd   bearer   bearer  session     echo   bearer      ddn      ack
+      0        0        0        0        0        0        0        0        0        0        0        0        0        0        0
+      1        0        0        0        0        0        0        0        0        0        0        0        0        0        0
+  ```
 
-* Go back to `SPGW-U` VNF and open `results` file in `/root/ngic/dp`. If the process succeeds, you should see a new output similar to one below
+* Go back to the `SPGW-U` VNF and open the `results` file in `/root/ngic/dp`.
+  If the process succeeds, you should see new output similar to the one below
 
-```
-DP: MTR_PROFILE ADD: index 2 cir:125000, cbd:3072, ebs:3072
-DP: MTR_PROFILE ADD: index 3 cir:250000, cbd:3072, ebs:3072
-DP: MTR_PROFILE ADD: index 4 cir:500000, cbd:3072, ebs:3072
-```
+  ```shell
+  DP: MTR_PROFILE ADD: index 2 cir:125000, cbd:3072, ebs:3072
+  DP: MTR_PROFILE ADD: index 3 cir:250000, cbd:3072, ebs:3072
+  DP: MTR_PROFILE ADD: index 4 cir:500000, cbd:3072, ebs:3072
+  ```
 
 ## Troubleshooting with the NG40 vTester
 
-The guide describes how the NG40 results should be interpreted for troubleshooting.
+This guide describes how to interpret the NG40 results for troubleshooting.
 
 ### Failure on S11 or SPGW-C not running
 
-```
+```shell
 Signaling Table
            AttUE     ActCTXT   BrReq     BrAct     RelRq     RelAc     ActS1
 ng40_ran_1 0         0         0         0         0         0         1
@@ -347,15 +450,23 @@
 Testlist verdict = VERDICT_FAIL
 ```
 
-When you see a `VERDICT_FAIL` and there are 0 values for `BrReq` and `BrAct` in the Signaling table, check if the `SPGW-C` is running. If it is, check the `S11` connection between the `NG40` VM and the `SPGW-C` VM. The `verify_attach.sh` test can help you to verify the status of the control plane connectivity on the `S11` network.
+When you see a `VERDICT_FAIL` and there are 0 values for `BrReq` and `BrAct` in
+the Signaling table, check if the `SPGW-C` is running. If it is, check the
+`S11` connection between the `NG40` VM and the `SPGW-C` VM. The
+`verify_attach.sh` test can help you to verify the status of the control plane
+connectivity on the `S11` network.
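
For repeated runs, this check can be scripted by parsing the first data row of
the Signaling Table (a sketch based on the column layout shown above; the
`check_s11` helper and its messages are my own):

```shell
# ng40_ran_1 row columns: AttUE ActCTXT BrReq BrAct RelRq RelAc ActS1,
# so field 4 is BrReq and field 5 is BrAct.
check_s11() {
  awk '/^ng40_ran_1/ { if ($4 == 0 && $5 == 0) print "check SPGW-C / S11"; else print "S11 signaling ok" }' "$1"
}
```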
 
-If the `ActS1` counter is 0, there’s an internal error on the `S1mme` interface, or the NG40 components did not start  correctly.
-Run the command `ng40forcecleanup all` to restart all the `NG40` components. The `NG40` system components may take some time to start.
+If the `ActS1` counter is 0, there’s an internal error on the `S1mme`
+interface, or the NG40 components did not start correctly.
+
+Run the command `ng40forcecleanup all` to restart all the `NG40` components.
+The `NG40` system components may take some time to start.
+
 Wait 10 seconds before you start a new test.
 
 ### Failure on S1u and Sgi or SPGW-U
 
-```
+```shell
 Signaling Table
            AttUE     ActCTXT   BrReq     BrAct     RelRq     RelAc     ActS1
 ng40_ran_1 0         0         1         1         0         0         1
@@ -376,13 +487,17 @@
 UL Loss= S1uPktTx-AS_PktRx=     64(pkts); 100.00(%)
 ```
 
-If you running `attach_verify_data.sh` and you see `VERDICT_PASS` but 100% `DL` and `UL` Loss values, check if the `SGPW-U` is running. If it is, check the `S1u` and the `SGi` connections.
+If you run `attach_verify_data.sh` and see `VERDICT_PASS` but 100% `DL` and
+`UL` Loss values, check whether the `SPGW-U` is running. If it is, check the
+`S1u` and the `SGi` connections.
 
-When packets are generated (see values `AS_PktTx` and `S1uPktTx`), but are not sent to Ethernet (see values `AS_EthTx` and `S1uEthTx`) it may be that no `NG40` ARP requests get answered.
+When packets are generated (see values `AS_PktTx` and `S1uPktTx`) but are not
+sent to Ethernet (see values `AS_EthTx` and `S1uEthTx`), it may be that the
+`NG40` ARP requests are not being answered.
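
When running the test repeatedly, the loss figure can be pulled out of the
counters block with sed (line format as in the excerpt above; the
`ul_loss_pct` helper name is mine):

```shell
# Print the UL loss percentage from an NG40 counters log, e.g. "100.00".
ul_loss_pct() {
  sed -n 's/.*UL Loss=.*; *\([0-9.]*\)(%).*/\1/p' "$1"
}
```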
 
 ### Routing problem on S1u
 
-```
+```shell
 Signaling Table
            AttUE     ActCTXT   BrReq     BrAct     RelRq     RelAc     ActS1
 ng40_ran_1 0         0         1         1         0         0         1
@@ -403,12 +518,26 @@
 UL Loss= S1uPktTx-AS_PktRx=     64(pkts); 100.00(%)
 ```
 
-When you run `attach_verify_data.sh` and you see `Data sent` on `S1u` and `SGi` (see values `AS_EthTx` and `S1uEthTx`), but no data received at the Application Server (see values `AS_EthRx` and `AS_PktRx`), check the `SPGW-U CDRs`. If the packets going `uplink` are processed in the `CDRs`, either your routing or the allowed IP settings on the `SGi` interface are not correct.
+When you run `attach_verify_data.sh` and you see `Data sent` on `S1u` and `SGi`
+(see values `AS_EthTx` and `S1uEthTx`), but no data received at the Application
+Server (see values `AS_EthRx` and `AS_PktRx`), check the `SPGW-U CDRs`. If the
+packets going `uplink` are processed in the `CDRs`, either your routing or the
+allowed IP settings on the `SGi` interface are not correct.
 
-If packets are processed, but 0 bytes are sent through the `uplink`, there’s a mismatch in the `GTPU header size` configuration. For example the `SPGW-U` is compiled together with a `SessionID`,  but the `NG40` VM is configured without `SessionID`.
+If packets are processed, but 0 bytes are sent through the `uplink`, there’s a
+mismatch in the `GTPU header size` configuration. For example, the `SPGW-U` is
+compiled with a `SessionID`, but the `NG40` VM is configured without a
+`SessionID`.
 
 ### Other problems
-If you see timeouts, exceptions, strange behaviors or the `ActS1` counter is 0, try one of the solutions below:
-* Cleanup and restart the NG40 process, running `ng40forcecleanup all`. Wait about 10 seconds to allow the system to restart. Then, run a new test.
-* Check the NG40 license is still active, and in case install a new one, as described [here](installation_guide.md#request--update-the-ng40-vtester-software-license).
-Re-initialize the NG40 processes, running `~/install/ng40init`.
\ No newline at end of file
+
+If you see timeouts, exceptions, strange behaviors, or the `ActS1` counter is
+0, try one of the solutions below:
+
+* Clean up and restart the NG40 processes by running `ng40forcecleanup all`.
+  Wait about 10 seconds to allow the system to restart. Then, run a new test.
+* Check that the NG40 license is still active, and install a new one if
+  needed, as described
+  [here](installation_guide.md#request--update-the-ng40-vtester-software-license).
+* Re-initialize the NG40 processes by running `~/install/ng40init`.