[CORD-2585]
Further lint fixes

Change-Id: I6aff140217a104618d9beda641234992a64d3ac6
(cherry picked from commit 00992c5c11bfdead739619978a6966f24348729a)
diff --git a/docs/tutorials/local_synchronizer_dev_loop.md b/docs/tutorials/local_synchronizer_dev_loop.md
index 41983d0..293c6fd 100644
--- a/docs/tutorials/local_synchronizer_dev_loop.md
+++ b/docs/tutorials/local_synchronizer_dev_loop.md
@@ -1,37 +1,47 @@
-# Development:
-## Synchronizers in the local scenario
+# Development of synchronizers in the local scenario
 
-In some cases is possible to completely write a synchronizer in using the local scenario,
-if that is possible for the integration with your VNF, this workflow will speed up your development cycle by a lot.
+In some cases it is possible to write a synchronizer entirely using the local
+scenario. If that is possible for the integration with your VNF, this workflow
+will speed up your development cycle significantly.
 
-Note that this document assume that you are already confident with writing XOS services,
-the build system and the CORD terminology. It also assumes that you have an XOS service in a good status.
+Note that this document assumes that you are already confident writing XOS
+services and familiar with the build system and CORD terminology. It also
+assumes that you have an XOS service in a working state.
 
 It’s possible to work on a synchronizer locally as long as:
-- The VNF can be executed somewhere that we can connect to from our machine
-- The VNF does not require OpenStack to be deployed
 
-> Note that some of this steps can be used also in a more complex scenario, for example “virtual” also know as Cord-in-a-box
+* The VNF can be executed somewhere that we can connect to from our machine
+* The VNF does not require OpenStack to be deployed
 
-From now on this guide will assume that
-- you have a local scenario up and running, with the service you are working on onboarded
-- You obtained the source code as per [https://guide.opencord.org/getting_the_code.html](https://guide.opencord.org/getting_the_code.html)
+> NOTE: Some of these steps can also be used in more complex scenarios, for
+> example “virtual”, also known as CORD-in-a-Box.
+
+From now on this guide will assume that:
+
+* You have a local scenario up and running, with the service you are working
+  on already onboarded
+
+* You obtained the source code as per [Getting the Code](/getting_the_code.md)
 
 ## Tweak your docker-compose file
 
-There are few changes you need to make to the docker-compose.yml file in order to really shorten your development loop.
+There are a few changes you need to make to the `docker-compose.yml` file in
+order to really shorten your development loop.
 
-A `docker-compose.yml` file for XOS has been generated during the build and it is located in the cord_profile directory.
-Note that the cord_profile directory is generated on the side of your cord root folder,
-so you should find your compose file in `~/cord_profile/docker-compose.yml`
+A `docker-compose.yml` file for XOS has been generated during the build and is
+located in the `cord_profile` directory. Note that the `cord_profile` directory
+is generated alongside your CORD root folder, so you should find your compose
+file at `~/cord_profile/docker-compose.yml`.
 
-Open it with you favorite editor and locate your service synchronizer.
-You’ll need to add a `command: sleep 86440` to prevent the synchronizer from starting automatically
-and a volume mount to share the synchronizer code with your filesystem.
+Open it with your favorite editor and locate your service synchronizer. You’ll
+need to add `command: sleep 86400` to prevent the synchronizer from starting
+automatically, and a volume mount to share the synchronizer code with your
+filesystem.
 
-Here is an example of a modified synchronizer block (only the meaningful fields have been reported here):
+Here is an example of a modified synchronizer block (only the relevant fields
+are shown here):
 
-```
+```yaml
 <service>-synchronizer:
     image: xosproject/<servicename>-synchronizer:candidate
     command: sleep 86400
@@ -43,55 +53,66 @@
       - /home/user/cord/orchestration/xos_services/<service>/xos/synchronizer:/opt/xos/synchronizers/<service>
 ```
 
-> Note that the important bits here are the sleep command and the last volume mount, leave everything else untouched.
+> NOTE: The important bits here are the `sleep` command and the last volume
+> mount; leave everything else untouched.
 
 ## Development loop
 
-As first we’ll need to restart the project to apply the changes we made in the docker-compose file.
-To do this we can use docker-compose native commands, so from the cord_profile directory execute:
+First, we’ll need to restart the project to apply the changes we made to the
+docker-compose file. To do this we can use native docker-compose commands, so
+from the `cord_profile` directory execute:
 
-```
+```shell
 docker-compose -p <profile-name> up -d
 ```
 
-> Note that the <profile- name> is the first part of any XOS container name, so you can easily discover it with docker ps
+> NOTE: The `<profile-name>` is the first part of any XOS container name, so
+> you can easily discover it with `docker ps`.
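+
+For example (the output below is illustrative, assuming an `rcord` profile):
+
+```shell
+docker ps --format '{{.Names}}'
+# rcord_xos_ui_1
+# rcord_<service>-synchronizer_1
+# -> the profile name here is "rcord"
+```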
 
-At this point everything is up and running, except our synchronizer, since that is up but sleeping.
-We need to connect to the docker container with:
+At this point everything is up and running except our synchronizer, which is
+up but sleeping. We need to connect to the docker container with:
 
-```
+```shell
 docker exec -it <synchronizer-container> bash
 ```
 
-We’ll find ourself in the synchronizer folder, and to start the synchronizer it’s enough to call
+We’ll find ourselves in the synchronizer folder; to start the synchronizer,
+it’s enough to call `bash run.sh`.
 
-`bash run.sh` (_note that the filename can be different here and you can also directly start the python process_)
+> NOTE: The filename can differ here, and you can also start the Python
+> process directly.
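+
+In many services `run.sh` is just a thin wrapper around the synchronizer’s
+Python entry point, so (the file name below is illustrative, check your
+service) you could equivalently run:
+
+```shell
+python <service>-synchronizer.py
+```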
 
-From now on, you can just make changes at the code on your local filesystem and restart the process inside the container to see the changes.
+From now on, you can just make changes to the code on your local filesystem
+and restart the process inside the container to see them take effect.
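+
+A typical iteration then looks like this (paths are illustrative):
+
+```shell
+# on your host: edit the synchronizer code
+vim ~/cord/orchestration/xos_services/<service>/xos/synchronizer/steps/sync_<model>.py
+# inside the container: stop the running process (Ctrl+C), then restart it
+bash run.sh
+```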
 
 ## Appendix
 
-Note that if you have the VNF running on you machine and you need to connect to it,
-you can find the host ip from inside a docker container using:
+Note that if you have the VNF running on your machine and you need to connect
+to it, you can find the host IP address from inside a docker container using:
 
-```
+```shell
 /sbin/ip route|awk '/default/ { print $3 }'
 ```
 
-So you easily can have an onos running on you machine, and have your synchronizer talk to it to quickly verify the changes.
+So you can easily have an ONOS instance running on your machine and have your
+synchronizer talk to it to quickly verify the changes.
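+
+For example, one way to bring up a local ONOS is the image published on Docker
+Hub (image name and ports as commonly published; adjust to your setup):
+
+```shell
+# 8181 is the ONOS REST/GUI port, 8101 the karaf SSH console
+docker run -d --name onos -p 8181:8181 -p 8101:8101 onosproject/onos
+```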
 
-The same exact workflow will apply to changes in model policies, while if you make changes to the `xproto` model definition
-or to the `_decl` model extension, you will have to rebuild the core container.
+The exact same workflow applies to changes in model policies. If you make
+changes to the `xproto` model definition or to the `_decl` model extension,
+however, you will have to rebuild the core container.
 
-If the model changes are in the logic only (eg: you are overriding the default save method)
-you can rebuild and restart the container, and here is a command that you use:
+If the model changes are in the logic only (e.g., you are overriding the
+default `save` method), you can rebuild and restart the container with:
 
-```
+```shell
 rm milestones/local-start-xos && rm milestones/local-core-image && make build
 ```
 
-While if you made model changes (eg: added/remove a field) you need to teardown the database container and recreate it, so the command will be:
+If instead you made model changes (e.g., added/removed a field), you need to
+tear down the database container and recreate it:
 
-```
+```shell
 make xos-teardown && make build
 ```
+
diff --git a/docs/xos_vtn.md b/docs/xos_vtn.md
index 044f1c5..23e1947 100644
--- a/docs/xos_vtn.md
+++ b/docs/xos_vtn.md
@@ -193,7 +193,6 @@
 2. `PUT xosapi/v1/vtn/vtnservices/{service_id}` with data `{"resync": true}`.
    `{service_id}` is the identifier you retrieved in step (1).
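+
+For example, step (2) could be issued with `curl`; the host and credentials
+below are placeholders, use your deployment’s values:
+
+```shell
+curl -u xosadmin@opencord.org:<password> -X PUT \
+  -H "Content-Type: application/json" \
+  -d '{"resync": true}' \
+  http://<xos-host>/xosapi/v1/vtn/vtnservices/{service_id}
+```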
 
-
 ## VTN Provided API
 
 ### ServicePorts
@@ -220,7 +219,8 @@
 | floating_address_pairs | list | Additional public addresses allowed to the port interface.|
 | ip_address | string | Additional public IP address.|
 | mac_address | string | Additional MAC address mapped to the public IP address.|
-_* fields are mandatory for creating a new service port._
+
+> NOTE: `*` fields are mandatory for creating a new service port.
 
 Example json request:
 
@@ -254,7 +254,7 @@
 * `GET onos/cordvtn/serviceNetworks`  List service networks including the
   details
 
-* `GET onos/cordvtn/serviceNetworks/{network_id} `  Show service network
+* `GET onos/cordvtn/serviceNetworks/{network_id}`  Show service network
   details
 
 * `PUT onos/cordvtn/serviceNetworks/{network_id}`  Update a service network
@@ -267,18 +267,18 @@
 | Parameters | Type | Description |
 | --------- | ---- | --------- |
 | id * | UUID | The UUID of the service network. |
-| name	| string | The name of the service network. |
+| name | string | The name of the service network. |
 | type * | string | The type of the service network |
-|segment_id | integer | The ID of the isolated segment on the physical network. Currently, only VXLAN based isolation is supported and this ID is a VNI. |
+| segment_id | integer | The ID of the isolated segment on the physical network. Currently, only VXLAN based isolation is supported and this ID is a VNI. |
 | subnet | string | The associated subnet. |
 | providers | list | The list of the provider service networks.|
 | id | string | The UUID of the provider service network.|
 | bidirectional | boolean | The dependency, which is bidirectional (true) or unidirectional (false).|
-_* fields are mandatory for creating a new service network_
+
+> NOTE: `*` fields are mandatory for creating a new service network.
 
 #### Service Network Types
 
-
 * PRIVATE: virtual network for the instances in the same service
 * PUBLIC: externally accessible network
 * MANAGEMENT_LOCAL: instance management network which does not span compute