[CORD-2585]
Lint check documentation with markdownlint

Change-Id: I9098a990aaa3bbeb5b8c0180307aea285f1f3a24
diff --git a/docs/SUMMARY.md b/docs/SUMMARY.md
index 054a8ed..0fbdc90 100644
--- a/docs/SUMMARY.md
+++ b/docs/SUMMARY.md
@@ -1,12 +1,12 @@
 # Summary
 
 * [Introduction](README.md)
-    - [Contribute to the documentation](contribute_docs.md)
+    * [Contribute to the documentation](contribute_docs.md)
 * Modules
-	- [XOS Config](modules/xosconfig.md)
+    * [XOS Config](modules/xosconfig.md)
 * Developer How Tos
-    - [Workflow: Mock Config](dev/workflow_mock.md)
-    - [Workflow: CORD-in-a-Box](dev/workflow_ciab.md)
-         	- [Local Dev Environmet](dev/local_env.md)
-    - [xproto](dev/xproto.md)
+    * [Workflow: Mock Config](dev/workflow_mock.md)
+    * [Workflow: CORD-in-a-Box](dev/workflow_ciab.md)
+        * [Local Dev Environment](dev/local_env.md)
+    * [xproto](dev/xproto.md)
 
diff --git a/docs/contribute_docs.md b/docs/contribute_docs.md
index 6b3419b..aa00fbc 100644
--- a/docs/contribute_docs.md
+++ b/docs/contribute_docs.md
@@ -1,15 +1,20 @@
 # Contribute to the documentation
 
-The XOS documentation is generated using [GitBooks](https://www.gitbook.com) and you can learn more about the GitBooks Toolchain [here](https://toolchain.gitbook.com/)
+The XOS documentation is generated using [GitBook](https://www.gitbook.com),
+and you can learn more about the GitBook Toolchain
+[here](https://toolchain.gitbook.com/).
 
 ## View the documentation as a website
 
-If you have [NodeJs](https://nodejs.org/en/) installed in your system, you can easily install the `gitbook-cli` using:
-```
+If you have [Node.js](https://nodejs.org/en/) installed on your system, you
+can easily install `gitbook-cli` using:
+
+```shell
 npm install gitbook-cli -g
 ```
 
 and then serve the documentation with:
-```
+
+```shell
 gitbook serve
-```
\ No newline at end of file
+```
diff --git a/docs/core_models.md b/docs/core_models.md
index 68dce2e..aecaba8 100644
--- a/docs/core_models.md
+++ b/docs/core_models.md
@@ -1,248 +1,226 @@
 # Core Models
 
-The XOS modeling framework provides a foundation for building CORD,
-but it is just a tool to defining a set of core models. It is these core
-models that provide a coherent interface for configuring,
-controlling, and applying policy to a CORD POD. This gives operators a
-way to specify and reason about the behavior of CORD, while allowing
-for a wide range of implementation choices for the underlying software
-components.
+The XOS modeling framework provides a foundation for building CORD, but it is
+just a tool for defining a set of core models. It is these core models that
+provide a coherent interface for configuring, controlling, and applying policy
+to a CORD POD. This gives operators a way to specify and reason about the
+behavior of CORD, while allowing for a wide range of implementation choices for
+the underlying software components.
 
 ## Services, Slices, and ServiceInstances
 
 CORD's core starts with the **Service** model, which represents all
-functionality that can be on-boarded into CORD. This model is designed
-to meet two requirements. The first is to be implementation-agnostic,
-supporting both *server-based* implementations (e.g., legacy VNFs
-running in VMs and micro-services running in containers) and
-*switch-based* implementations (e.g., SDN control applications that
-install flow rules into white-box switches). The second is to be
-multi-tenant, supporting isolated and virtualized instances that can
-be created on behalf of both trusted and untrusted tenants.
+functionality that can be on-boarded into CORD. This model is designed to meet
+two requirements. The first is to be implementation-agnostic, supporting both
+*server-based* implementations (e.g., legacy VNFs running in VMs and
+micro-services running in containers) and *switch-based* implementations (e.g.,
+SDN control applications that install flow rules into white-box switches). The
+second is to be multi-tenant, supporting isolated and virtualized instances
+that can be created on behalf of both trusted and untrusted tenants.
 
-To realize these two requirements, the Service model builds upon two
-other models—**Slices** and **ServiceInstances**—as shown in the
-figure below. Specifically, a Service is bound to one or more
-Slices, each of which represents a distributed resource container in
-which the Service runs. This resource container, in turn, consists of
-a scalable set of **Instances** (VMs or containers) and a set of
-**Networks** that interconnect those VMs and containers. Similarly, a
-Service is bound to one or more ServiceInstances, each of which
-represents the virtualized partition of the service allocated to some
+To realize these two requirements, the Service model builds upon two other
+models—**Slices** and **ServiceInstances**—as shown in the figure below.
+Specifically, a Service is bound to one or more Slices, each of which
+represents a distributed resource container in which the Service runs. This
+resource container, in turn, consists of a scalable set of **Instances** (VMs
+or containers) and a set of **Networks** that interconnect those VMs and
+containers. Similarly, a Service is bound to one or more ServiceInstances, each
+of which represents the virtualized partition of the service allocated to some
 tenant.
 
-<img src="Service.png" alt="Drawing" style="width: 300px;"/>
+![Service](Service.png)
 
-Slices model the compute and network resources used to implement a
-service. By creating and provisioning a Slice, the Service acquires
-the resources it needs to run the VNF image or the SDN control
-application that defines its behavior. Services are often bound to a
-single Slice, but multiple Slices are supported for Services
-implemented as a collection of micro-services that scales
-independently.
+Slices model the compute and network resources used to implement a service. By
+creating and provisioning a Slice, the Service acquires the resources it needs
+to run the VNF image or the SDN control application that defines its behavior.
+Services are often bound to a single Slice, but multiple Slices are supported
+for Services implemented as a collection of micro-services that scales
+independently.
 
-ServiceInstances model the virtualized/isolated partition of the
-service allocated to a particular tenant. It defines the context in
-which a tenant accesses and controls its virtualized instantiation of
-the Service. In practice, this means the ServiceInstance maintains
-tenant-specific state, whereas the Service maintains Service-wide
-state.
+ServiceInstances model the virtualized/isolated partition of the service
+allocated to a particular tenant. Each defines the context in which a tenant
+accesses and controls its virtualized instantiation of the Service. In
+practice, this means the ServiceInstance maintains tenant-specific state,
+whereas the Service maintains Service-wide state.
 
-How ServiceInstances isolate tenants—and the extent of isolation
-(e.g., namespace isolation, failure isolation, performance
-isolation)—is an implementation choice. One option, as depicted by the
-dotted line in the figure shown above, is for each ServiceInstance to
-correspond to an underlying compute Instance. Because compute
-Instances provide isolation, the ServiceInstances are also
-isolated. But this is just one possible implementation. A second is
-that the ServiceInstance corresponds to a logically isolated partition
-of a horizontally scalable set of compute Instances. A third example
-is that each ServiceInstances corresponds to an isolated virtual
-network/channel implemented by some SDN control application. These
-three example implementations correspond to the vSG, vCDN, and vRouter
-Services in CORD, respectively.
+How ServiceInstances isolate tenants—and the extent of isolation (e.g.,
+namespace isolation, failure isolation, performance isolation)—is an
+implementation choice. One option, as depicted by the dotted line in the figure
+shown above, is for each ServiceInstance to correspond to an underlying compute
+Instance. Because compute Instances provide isolation, the ServiceInstances are
+also isolated. But this is just one possible implementation. A second is that
+the ServiceInstance corresponds to a logically isolated partition of a
+horizontally scalable set of compute Instances. A third example is that each
+ServiceInstance corresponds to an isolated virtual network/channel implemented
+by some SDN control application. These three example implementations correspond
+to the vSG, vCDN, and vRouter Services in CORD, respectively.
 
-One important takeaway is that ServiceInstances and compute Instances
-are not necessarily one-to-one: the former represents a virtualized
-instance of a service and the latter represents a virtualized instance
-of a compute resource. Only in certain limited cases is the first
-implemented by the latter.
+One important takeaway is that ServiceInstances and compute Instances are not
+necessarily one-to-one: the former represents a virtualized instance of a
+service and the latter represents a virtualized instance of a compute resource.
+Only in certain limited cases is the first implemented by the latter.
 
-M-CORD’s vSGW Service is a fourth example, one that is worth calling
-out because it does not fully utilize the degrees-of-freedom that the
-three models provide. vSGW is representative of many legacy VNFs in
-that it requires only one Slice that consists of a single VM (i.e., it
-does not necessarily leverage the Slice’s ability to scale across
-multiple compute Instances). And because the VNF was not designed to
-support multiple tenant contexts, there is no value in creating
-ServiceInstances (i.e., there is only Service-wide configuration).
-There is no harm in creating a ServiceInstance, representing the
-context in which all subscribers use the vSGW service, but doing so is
-not necessary since there is no need to control vSGW on a
-per-subscriber basis.
+M-CORD’s vSGW Service is a fourth example, one that is worth calling out
+because it does not fully utilize the degrees-of-freedom that the three models
+provide. vSGW is representative of many legacy VNFs in that it requires only
+one Slice that consists of a single VM (i.e., it does not necessarily leverage
+the Slice’s ability to scale across multiple compute Instances). And because
+the VNF was not designed to support multiple tenant contexts, there is no value
+in creating ServiceInstances (i.e., there is only Service-wide configuration).
+There is no harm in creating a ServiceInstance, representing the context in
+which all subscribers use the vSGW service, but doing so is not necessary since
+there is no need to control vSGW on a per-subscriber basis.
 
 ## Service Graphs and Service Chains
 
-Given a set of Services (and their corresponding Slices and
-ServiceInstances), CORD also defines two core models for
-interconnecting them: **ServiceDependencies** and
-**ServiceInstanceLinks**. The first defines a dependency of one
-Service on another, thereby forming a CORD-wide *Service Graph*. The
-second defines a dependency between a pair of ServiceInstances,
+Given a set of Services (and their corresponding Slices and ServiceInstances),
+CORD also defines two core models for interconnecting them:
+**ServiceDependencies** and **ServiceInstanceLinks**. The first defines a
+dependency of one Service on another, thereby forming a CORD-wide *Service
+Graph*. The second defines a dependency between a pair of ServiceInstances,
 thereby forming a per-subscriber *Service Chain*.
 
-> Note: Service Graphs and Service Chains are not explicit models in CORD, but rather, they are defined by a set of vertices (Services, ServiceInstances) and edges (ServiceDependency, ServiceInstanceLink).
+> NOTE: Service Graphs and Service Chains are not explicit models in CORD, but
+> rather, they are defined by a set of vertices (Services, ServiceInstances)
+> and edges (ServiceDependency, ServiceInstanceLink).
 
 The following figure illustrates an example service graph configured into CORD,
-along with an example collection of service chains. It does not show
-the related Slices.
+along with an example collection of service chains. It does not show the
+related Slices.
 
-<img src="ServiceChain.png" alt="Drawing" style="width: 500px;"/>
+![Service Chain](ServiceChain.png)
 
-This example is overly simplistic in three
-ways. One, the Service Graph is not necessarily linear. It generally
-forms an arbitrary mesh. Two, the Service Chains are not necessarily
-isomorphic to the Service Graph nor equivalent to each other. Each
-generally corresponds to a subscriber-specific path through the
-Service Graph. The path corresponding to one subscriber may be
-different from the path corresponding to another subscriber. Three,
-Service Chains are also not necessarily linear. In general, a Service
-Chain may include “forks” and “joins” that subscriber traffic might
-follow based on runtime decisions made on a packet-by-packet for
-flow-by-flow basis.
+This example is overly simplistic in three ways. One, the Service Graph is not
+necessarily linear. It generally forms an arbitrary mesh. Two, the Service
+Chains are not necessarily isomorphic to the Service Graph nor equivalent to
+each other. Each generally corresponds to a subscriber-specific path through
+the Service Graph. The path corresponding to one subscriber may be different
+from the path corresponding to another subscriber. Three, Service Chains are
+also not necessarily linear. In general, a Service Chain may include “forks”
+and “joins” that subscriber traffic might follow based on runtime decisions
+made on a packet-by-packet or flow-by-flow basis.
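
A hypothetical sketch of these ideas (the service names and data structures
are illustrative, not actual CORD code): the Service Graph is a mesh of
dependencies, and each per-subscriber Service Chain is one path through it.

```python
# Edges of a toy Service Graph: subscriber service -> provider services.
service_graph = {
    "vSG": ["vRouter", "vCDN"],  # a fork: traffic may take either branch
    "vRouter": [],
    "vCDN": [],
}

def is_valid_chain(chain):
    """A Service Chain is valid if every hop follows a Service Graph edge."""
    return all(b in service_graph[a] for a, b in zip(chain, chain[1:]))

# Two subscribers traverse different paths through the same graph.
print(is_valid_chain(["vSG", "vRouter"]))  # True
print(is_valid_chain(["vSG", "vCDN"]))     # True
```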
 
-ServiceDependencies effectively define a template for how
-ServiceInterfaceLinks are implemented. For example, the
-ServiceDependency connecting some Service A (the
-**subscriber_service**) to some Service B (the **provider_service**)
-might indicate that they communicate in the data plane using one of
-the private networks associated with Service B. In general, this
-dependency is parameterized by a **connect_method** that defines
-how the two services are interconnected in the underlying network data
-plane. The design is general enough to interconnect two server-based
-services, two switch-based services, or a server-based and a
-switch-based service pair. This makes it possible to construct a
-service graph without regard to how the underlying services are
+ServiceDependencies effectively define a template for how ServiceInstanceLinks
+are implemented. For example, the ServiceDependency connecting some Service A
+(the **subscriber_service**) to some Service B (the **provider_service**) might
+indicate that they communicate in the data plane using one of the private
+networks associated with Service B. In general, this dependency is
+parameterized by a **connect_method** that defines how the two services are
+interconnected in the underlying network data plane. The design is general
+enough to interconnect two server-based services, two switch-based services, or
+a server-based and a switch-based service pair. This makes it possible to
+construct a service graph without regard to how the underlying services are
 implemented.
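
As a hedged illustration of this parameterization (the helper functions and
strings below are invented for the example, not CORD APIs), a
`ServiceDependency` can be viewed as a template whose `connect_method`
selects the data-plane wiring:

```python
from dataclasses import dataclass

# Invented connect-method implementations, for illustration only.
def connect_private(subscriber, provider):
    # The subscriber attaches to one of the provider's private networks.
    return f"{subscriber} joins a private network of {provider}"

def connect_public(subscriber, provider):
    # The two services talk over a publicly routable network.
    return f"{subscriber} reaches {provider} over a routable network"

CONNECT_METHODS = {
    "none": lambda s, p: "no data-plane connection",
    "private": connect_private,
    "public": connect_public,
}

@dataclass
class ServiceDependency:
    subscriber_service: str
    provider_service: str
    connect_method: str

    def connect(self):
        # The same dependency model drives different data-plane wiring
        # depending on connect_method.
        wire = CONNECT_METHODS[self.connect_method]
        return wire(self.subscriber_service, self.provider_service)

dep = ServiceDependency("vSG", "vRouter", "private")
print(dep.connect())
```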
 
-The ServiceInterfaceLink associated with a ServiceInstance of A and a
-the corresponding ServiceInstance of B would then record specific
-state about that data plane connection (e.g., what address each is
-known by).
+The ServiceInstanceLink associated with a ServiceInstance of A and the
+corresponding ServiceInstance of B would then record specific state about that
+data plane connection (e.g., what address each is known by).
 
-## Model Glossary 
+## Model Glossary
 
-CORD's core models are defined by a set of [xproto](dev/xproto.md) 
-specifications. They are defined in their full detail in the source 
-code (see
+CORD's core models are defined by a set of [xproto](dev/xproto.md)
+specifications. They are defined in their full detail in the source code (see
 [core.xproto](https://github.com/opencord/xos/blob/master/xos/core/models/core.xproto)).
-The following summarizes these core models—along with the 
-key relationships (bindings) among them—in words. 
+The following summarizes these core models—along with the key relationships
+(bindings) among them—in words.
 
-* **Service:** Represents an elastically scalable, multi-tenant
-program, including the declarative state needed to instantiate,
-control, and scale functionality.
+* **Service:** Represents an elastically scalable, multi-tenant program,
+  including the declarative state needed to instantiate, control, and scale
+  functionality.
 
-   - Bound to a set of `Slices` that contains the collection of
-      virtualized resources (e.g., compute, network) in which the
-      `Service` runs.
+    * Bound to a set of `Slices` that contains the collection of virtualized
+      resources (e.g., compute, network) in which the `Service` runs.
 
-   - Bound to a set of `ServiceInstances` that record per-tenant
-      context for a virtualized partition of the `Service`. 
+    * Bound to a set of `ServiceInstances` that record per-tenant
+      context for a virtualized partition of the `Service`.
 
   In many CORD documents you will see mention of each service also
   having a "controller" which effectively corresponds to the
   `Service` model itself (i.e., its purpose is to generate a "control
   interface" for the service). There  is no "Controller" model
-  bound to a service. (Confusingly, CORD does include a `Controller` 
+  bound to a service. (Confusingly, CORD does include a `Controller`
   model, but it represents information about OpenStack. There is
   also a `ServiceController` construct in the TOSCA interface for
   CORD, which provides a means to load the `Service` model for
   a given service into CORD.)
-   
-* **ServiceDependency:** Represents a dependency between a *Subscriber*
-service on a *Provider*  service. The set of `ServiceDependency` 
-and `Service` models defined in CORD collectively represent the edges 
-and verticies of a *Service Graph*, but there is no explicit
-"ServiceGraph" model in CORD. The dependency between a pair of
-services is parameterized by the `connect_method` by which the service are
-interconnected in the data plane.Connect methods include:
 
-   - **None:** The two services are not connected in the data plane. 
-   - **Private:** The two services are connected by a common private network. 
-   - **Public:** The two services are connected by a publicly routable 
-   network. 
-   
+* **ServiceDependency:** Represents a dependency of a *Subscriber* service
+  on a *Provider* service. The set of `ServiceDependency` and `Service` models
+  defined in CORD collectively represent the edges and vertices of a *Service
+  Graph*, but there is no explicit "ServiceGraph" model in CORD. The dependency
+  between a pair of services is parameterized by the `connect_method` by which
+  the services are interconnected in the data plane. Connect methods include:
 
-* **ServiceInstance:** Represents an instance of a service
-  instantiated on behalf of a particular tenant. This is a
-  generalization of the idea of a Compute-as-a-Service spinning up
-  individual "compute instances," or using another common
-  example, the `ServiceInstance` corresponding to a Storage Service
-  might be called a "Volume" or a "Bucket." Confusingly, there are
-  also instances of a `Service` model that represent different
-  services, but this is a consequence of standard modeling
-  terminology, whereas  `ServiceInstance` is a core model in CORD
-  (and yes, there are instances of the `ServiceInstance` model).
+    * **None:** The two services are not connected in the data plane.
+    * **Private:** The two services are connected by a common private network.
+    * **Public:** The two services are connected by a publicly routable
+      network.
+
+* **ServiceInstance:** Represents an instance of a service instantiated on
+  behalf of a particular tenant. This is a generalization of the idea of a
+  Compute-as-a-Service spinning up individual "compute instances," or using
+  another common example, the `ServiceInstance` corresponding to a Storage
+  Service might be called a "Volume" or a "Bucket." Confusingly, there are also
+  instances of a `Service` model that represent different services, but this is
+  a consequence of standard modeling terminology, whereas `ServiceInstance` is
+  a core model in CORD (and yes, there are instances of the `ServiceInstance`
+  model).
 
 * **ServiceInstanceLink:** Represents a logical connection between
-`ServiceInstances` of two `Services`. A related model, `ServiceInterface`,
-types the `ServiceInstanceLink` between two `ServiceInstances`. A
-connected sequence of `ServiceInstances` and `ServiceInstanceLinks` form
-what is often called a *Service Chain*, but there is no explicit
-"ServiceChain" model in CORD.
+  `ServiceInstances` of two `Services`. A related model, `ServiceInterface`,
+  types the `ServiceInstanceLink` between two `ServiceInstances`. A connected
+  sequence of `ServiceInstances` and `ServiceInstanceLinks` forms what is often
+  called a *Service Chain*, but there is no explicit "ServiceChain" model in
+  CORD.
 
-* **Slice:** Represents a distributed resource container that includes
-the compute and network resources that belong to (are used by) some
-`Service`.
+* **Slice:** Represents a distributed resource container that includes the
+  compute and network resources that belong to (are used by) some `Service`.
 
-   - Bound to a set of `Instances` that provide compute resources for
-      the `Slice`.
+    * Bound to a set of `Instances` that provide compute resources for the
+      `Slice`.
 
-   - Bound to a set of `Networks` that connect the  slice's `Instances` to
+    * Bound to a set of `Networks` that connect the slice's `Instances` to
       each other.
-  
-   - Bound to  a default `Flavor` that represents a bundle of
-      resources (e.g., disk, memory, and cores) allocated to an
-      instance. Current flavors borrow from EC2. 
 
-   - Bound to a default `Image` that boots in each of the slice's`Instances`.
+    * Bound to a default `Flavor` that represents a bundle of resources (e.g.,
+      disk, memory, and cores) allocated to an instance. Current flavors borrow
+      from EC2.
+
+    * Bound to a default `Image` that boots in each of the slice's `Instances`.
       Each `Image` implies a virtualization layer (e.g., Docker, KVM).
 
-
-* **Instance:** Represents a single compute instance associated
-   with a Slice and instantiated on some physical Node. Each Instance
-   is of some `isolation` type: `vm` (implemented as a KVM virtual machine),
-   `container` (implemented as a Docker container), or `container_vm`
-   (implemented as a Docker container running inside a KVM virtual machine).
+* **Instance:** Represents a single compute instance associated with a Slice
+  and instantiated on some physical Node. Each Instance is of some `isolation`
+  type: `vm` (implemented as a KVM virtual machine), `container` (implemented
+  as a Docker container), or `container_vm` (implemented as a Docker container
+  running inside a KVM virtual machine).
 
 * **Network:** Represents a virtual network associated with a `Slice`. The
-behavior of a given `Network`is defined by a `NetworkTemplate`, which
-specifies a set of parameters, including `visibility` (set to `public` or
-`private`),  `access` (set to `direct` or `indirect`), `translation`
-(set to `none`or `nat`), and `topology_kind` (set to `bigswitch`,
-`physical` or `custom`). There is also a `vtn_kind` parameter
-(indicating the `Network` is manged by VTN), with possible settings:
-`PRIVATE`, `PUBLIC`, `MANAGEMENT_LOCAL`, `MANAGEMENT_HOST`,
-`VSG`, or `ACCESS__AGENT`.
+  behavior of a given `Network` is defined by a `NetworkTemplate`, which
+  specifies a set of parameters, including `visibility` (set to `public` or
+  `private`), `access` (set to `direct` or `indirect`), `translation` (set to
+  `none` or `nat`), and `topology_kind` (set to `bigswitch`, `physical` or
+  `custom`). There is also a `vtn_kind` parameter (indicating the `Network` is
+  managed by VTN), with possible settings: `PRIVATE`, `PUBLIC`,
+  `MANAGEMENT_LOCAL`, `MANAGEMENT_HOST`, `VSG`, or `ACCESS__AGENT`.
 
-* **Node:** Represents a physical server that can be virtualized and host Instances.
+* **Node:** Represents a physical server that can be virtualized and host
+  Instances.
 
-   - Bound to the `Site` where the `Node` is physically located.
-
+    * Bound to the `Site` where the `Node` is physically located.
 
 * **User:** Represents an authenticated principal that is granted a set of
-  privileges to invoke operations on a set of models, objects, and
-  fields in the data model.
+  privileges to invoke operations on a set of models, objects, and fields in
+  the data model.
 
-* **Privilege:** Represents the right to perform a set of read, write,
-  or grant operations on a set of models, objects, and fields.
+* **Privilege:** Represents the right to perform a set of read, write, or grant
+  operations on a set of models, objects, and fields.
 
-* **Site:** Represents a logical grouping of `Nodes` that are
-  co-located at the same geographic location, which also typically
-  corresponds to the nodes' location in the physical network.
-  The typical use case involves one configuration of a CORD POD 
-  deployed at a single location, although the underlying core includes 
-  allows for multi-site deployments.
+* **Site:** Represents a logical grouping of `Nodes` that are co-located at the
+  same geographic location, which also typically corresponds to the nodes'
+  location in the physical network. The typical use case involves one
+  configuration of a CORD POD deployed at a single location, although the
+  underlying core allows for multi-site deployments.
 
-  - Bound to a set of `Nodes` located at the `Site`.
+    * Bound to a set of `Nodes` located at the `Site`.
+
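The glossary's key bindings can be summarized in a minimal sketch (plain
Python classes for illustration; the field names follow the prose above, not
the actual `core.xproto` definitions):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Instance:
    name: str
    isolation: str = "vm"  # "vm", "container", or "container_vm"

@dataclass
class Slice:
    name: str
    instances: List[Instance] = field(default_factory=list)  # compute
    networks: List[str] = field(default_factory=list)        # connectivity
    default_flavor: str = "m1.small"  # flavors borrow from EC2
    default_image: str = "ubuntu"     # implies a virtualization layer

@dataclass
class Service:
    name: str
    slices: List[Slice] = field(default_factory=list)
    service_instances: List[str] = field(default_factory=list)  # per-tenant

svc = Service("vSG", slices=[Slice("vsg", instances=[Instance("vsg-1")])])
print(svc.slices[0].instances[0].isolation)  # vm
```
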
diff --git a/docs/dev/local_env.md b/docs/dev/local_env.md
index 330877b..af3b61e 100644
--- a/docs/dev/local_env.md
+++ b/docs/dev/local_env.md
@@ -1,22 +1,30 @@
 # How to set up a local dev environment
 
-As now this is useful for working on libraries and will give you access to the `xossh` cli tool.
+As of now, this is useful for working on libraries and will give you access to
+the `xossh` CLI tool.
 
 ## Create a python virtual-env
 
-We are providing an helper script to setup a python virtual environment and install all the required dependencies, to use it: 
+We provide a helper script that sets up a Python virtual environment and
+installs all the required dependencies. To use it:
+
 ```bash
 source scripts/setup_venv.sh
 ```
 
-At this point xos libraries are installed as python modules, so you can use any cli tool, for instance:
+At this point the xos libraries are installed as Python modules, so you can
+use any CLI tool, for instance:
+
 ```bash
 xossh
 ```
-will open an xos shell to operate on models.
-For more informations on `xossh` look at `xos/xos_client/README.md`
 
->NOTE: The `xossh` tool accept parameters to be configured, for example to use it against a local installation (frontend VM) you use:
->```bash
+will open an xos shell to operate on models. For more information on `xossh`,
+look at `xos/xos_client/README.md`.
+
+> NOTE: The `xossh` tool accepts configuration parameters; for example, to use
+> it against a local installation (frontend VM), run:
+>
+> ```bash
 > xossh -G 192.168.46.100:50055 -S 192.168.46.100:50051
->```
\ No newline at end of file
+> ```
diff --git a/docs/dev/sync_arch.md b/docs/dev/sync_arch.md
index f58a6df..c647e86 100644
--- a/docs/dev/sync_arch.md
+++ b/docs/dev/sync_arch.md
@@ -1,80 +1,194 @@
-## Design Guidelines
+# Synchronizer Design Guidelines
 
-Synchronizers act as the link between the data model and the functional half of the system. The data model contains a clean, abstract and declarative representation of the system curated by service developers and operators. This representation is not subject to the idiosyncrasies of distributed system behavior. It defines the authoritative state of the system. The functional half of the system, on the other hand, consists of the software that implements services along with the resources on which they run. Unlike the data model, its configuration is error-prone, liable to reach anomalous states, and involves mechanisms whose implementation and management sometimes do not follow best practices. 
+Synchronizers act as the link between the data model and the functional half of
+the system. The data model contains a clean, abstract and declarative
+representation of the system curated by service developers and operators. This
+representation is not subject to the idiosyncrasies of distributed system
+behavior. It defines the authoritative state of the system. The functional half
+of the system, on the other hand, consists of the software that implements
+services along with the resources on which they run. Unlike the data model, its
+configuration is error-prone, liable to reach anomalous states, and involves
+mechanisms whose implementation and management sometimes do not follow best
+practices.
 
-A Synchronizer bridges these two sides of the system robustly through the use of an approach we call “goal-oriented synchronization.” Rather than tracking and relaying changes from the data model to the back-end system in the form of events, a synchronizer tracks and drives the system towards a final “goal state” corresponding to the current state defined by the data model. And it does so irrespective of the particular combination of changes that led to that state. As a consequence, an opportunity is made available at every step of synchronization to correct for anomalies created by prior steps, or ones that arise due to ambient system activity. 
+A Synchronizer bridges these two sides of the system robustly through the use
+of an approach we call “goal-oriented synchronization.” Rather than tracking
+and relaying changes from the data model to the back-end system in the form of
+events, a synchronizer tracks and drives the system towards a final “goal
+state” corresponding to the current state defined by the data model. And it
+does so irrespective of the particular combination of changes that led to that
+state. As a consequence, an opportunity is made available at every step of
+synchronization to correct for anomalies created by prior steps, or ones that
+arise due to ambient system activity.
 
-The specific method we use to accomplish this property is to require synchronization actions to be idempotent. This requirement boils down to two constraints on the implementation of a synchronizer. The first is for a synchronizer to compute a delta between the current and desired state of the service component it manages, and to then apply that delta. The second is to ensure that changes can never propagate back from the synchronizer to the data model in a way that affects the Synchronizer’s behavior. Of these, the first requirement is a burden on the service developer who implements a particular synchronizer, and the second requirement is fulfilled by the synchronizer core, which all service synchronizers share. The specific details of how the flow of details is kept unidirectional are provided in detail in later sections. For now, we will introduce the actors in a synchronizer that interact with the data model. 
+The specific method we use to accomplish this property is to require
+synchronization actions to be idempotent. This requirement boils down to two
+constraints on the implementation of a synchronizer. The first is for a
+synchronizer to compute a delta between the current and desired state of the
+service component it manages, and to then apply that delta. The second is to
+ensure that changes can never propagate back from the synchronizer to the data
+model in a way that affects the Synchronizer’s behavior. Of these, the first
+requirement is a burden on the service developer who implements a particular
+synchronizer, and the second requirement is fulfilled by the synchronizer core,
+which all service synchronizers share. The details of how this flow is kept
+unidirectional are provided in later sections. For
+now, we will introduce the actors in a synchronizer that interact with the data
+model.
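
The delta-then-apply discipline described above can be sketched in a few
lines. This is an illustrative model only (the names are hypothetical, not
the XOS API): an idempotent actuator computes the difference between the
goal state and the observed state, applies only that difference, and
converges to a no-op.

```python
# Illustrative sketch of goal-oriented, idempotent synchronization.
# All names are hypothetical; the real XOS synchronizer core differs.

def compute_delta(desired, observed):
    """Return only the fields whose observed value differs from the goal."""
    return {k: v for k, v in desired.items() if observed.get(k) != v}

def synchronize(desired, observed):
    """Drive the backend toward the goal state; a no-op when already there."""
    delta = compute_delta(desired, observed)
    observed.update(delta)  # stand-in for applying changes to the backend
    return observed

goal = {"ip": "10.0.0.1", "mtu": 1500}
backend = {"ip": "10.0.0.9", "mtu": 1500}
synchronize(goal, backend)
# Re-running against the same goal changes nothing (idempotence).
assert compute_delta(goal, backend) == {}
```

Because only the delta is applied, it does not matter which sequence of
changes produced the current backend state.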
 
-### Actors and Types of State 
+## Actors and Types of State
 
 There are three actors in a Synchronizer that interact with a Data Model:
 
-* **Synchronizer Actuators:** An actuator is notified of changes to the data model, upon which it refers to the current state of its service in the data model, and idempotently translates it into a configuration for that service. A given data model can only have one actuator, scheduled by the synchronizer core in an ordering consistent with the dependencies on the model that it synchronizes, with possible retries and error management. 
+* **Synchronizer Actuators:** An actuator is notified of changes to the data
+  model, upon which it refers to the current state of its service in the data
+  model, and idempotently translates it into a configuration for that service.
+  A given data model can only have one actuator, scheduled by the synchronizer
+  core in an ordering consistent with the dependencies on the model that it
+  synchronizes, with possible retries and error management.
 
-* **Synchronizer Watchers:** A watcher is also notified of changes to a data model, with the difference that it is not responsible for actuating its state by applying it to the system substrate. A Watcher for a given model is an actuator for a different model (i.e., not the one it watches). It subscribes to the watched model to gather information that it needs in addition to the data model that it synchronizes. For example, a synchronizer for a daemon may watch the IP address of the host it runs on, not because it configures the host with the IP address, but because it needs to advertise its address to clients that use the daemon. 
+* **Synchronizer Watchers:** A watcher is also notified of changes to a data
+  model, with the difference that it is not responsible for actuating its state
+  by applying it to the system substrate. A Watcher for a given model is an
+  actuator for a different model (i.e., not the one it watches). It subscribes
+  to the watched model to gather information that it needs in addition to the
+  data model that it synchronizes. For example, a synchronizer for a daemon may
+  watch the IP address of the host it runs on, not because it configures the
+  host with the IP address, but because it needs to advertise its address to
+  clients that use the daemon.
 
-* **Model Policies:** A model policy encapsulates data relationships between related data models, such as “for every Network there must be at least one interface.” Concretely in this example, a model policy would intercept creations of Network models and create Interface models accordingly. 
+* **Model Policies:** A model policy encapsulates data relationships between
+  related data models, such as “for every Network there must be at least one
+  interface.” Concretely in this example, a model policy would intercept
+  creations of Network models and create Interface models accordingly.
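
As a sketch of the Network/Interface example (class names and the dispatch
mechanism here are hypothetical, not the real XOS model-policy API), a
model policy might look like:

```python
# Hypothetical sketch of a model policy; class names and the dispatch
# mechanism are illustrative, not the real XOS model-policy API.

class Interface:
    def __init__(self, network):
        self.network = network

class Network:
    def __init__(self, name):
        self.name = name
        self.interfaces = []

def network_created_policy(network):
    """Enforce 'every Network has at least one Interface'."""
    if not network.interfaces:
        network.interfaces.append(Interface(network))

net = Network("mgmt")
network_created_policy(net)  # would fire when a Network object is created
assert len(net.interfaces) == 1
```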
 
-The Data Model represents the authoritative and abstract state of the system. By authoritative, we mean that if there is a conflict, then it is given precedence over the internal configuration of services. This state is a combination of two types of fields: 
+The Data Model represents the authoritative and abstract state of the system.
+By authoritative, we mean that if there is a conflict, then it is given
+precedence over the internal configuration of services. This state is a
+combination of two types of fields:
 
-* **Declarative:** Declarative state is sufficient to recreate the full operational state of the system, with the help of a particular synchronizer. 
+* **Declarative:** Declarative state is sufficient to recreate the full
+  operational state of the system, with the help of a particular synchronizer.
 
-* **Feedback:** Feedback state is derivative. It is the result of Synchronizer actions, preserved as a cache for later accesses to the backend objects created as a consequence of those actions. 
+* **Feedback:** Feedback state is derivative. It is the result of Synchronizer
+  actions, preserved as a cache for later accesses to the backend objects
+  created as a consequence of those actions.
 
-Synchronizers are mainly interested in declarative state, as that is the basis on which they configure the service they implement. The core synchronizer machinery ensures that synchronizers are notified of changes to declarative state, that they are invoked in an appropriate order, and also provide a degree of resilience to failure. 
+Synchronizers are mainly interested in declarative state, as that is the basis
+on which they configure the service they implement. The core synchronizer
+machinery ensures that synchronizers are notified of changes to declarative
+state, that they are invoked in an appropriate order, and also provides a
+degree of resilience to failure.
 
 The actors of a synchronizer interact with this state in the following manner:
 
 * Actuators can:
-  - Read Declarative state 
-  - Read/Write Feedback state 
-  - Be scheduled upon changes to Declarative state 
+    * Read Declarative state
+    * Read/Write Feedback state
+    * Be scheduled upon changes to Declarative state
 
 * Watchers can:
-  - Read Declarative state 
-  - Read Feedback state 
-  - Subscribe to changes to Declarative state (meaning no dependency ordering, no retries, no 
-error propagation) 
+    * Read Declarative state
+    * Read Feedback state
+    * Subscribe to changes to Declarative state (meaning no dependency
+      ordering, no retries, no error propagation)
 
-* Model Policies can 
-  - Read/Write Declarative state 
-  - Subscribe to changes to Declarative state 
+* Model Policies can
+    * Read/Write Declarative state
+    * Subscribe to changes to Declarative state
 
-### Relationships Between Synchronizers and Data Models 
+## Relationships Between Synchronizers and Data Models
 
-A single synchronizer can synchronize multiple data models, usually through an actuator per model. However, a given model can only be handled by one actuator. Furthermore, a single actuator only synchronizes one data model. The act of synchronizing may generate feedback state in the same model, but watching never generates/modifies feedback state in the model being watched. (Watching model A may be part of synchronizing mode B, and so generates feedback state in B.) 
+A single synchronizer can synchronize multiple data models, usually through an
+actuator per model. However, a given model can only be handled by one actuator.
+Furthermore, a single actuator only synchronizes one data model. The act of
+synchronizing may generate feedback state in the same model, but watching never
+generates/modifies feedback state in the model being watched. (Watching model A
+may be part of synchronizing model B, and so generates feedback state in B.)
 
-But how are these relationships established? The answer lies in the linkages between models in the data model. The data model, which is implemented using Django, lets us link one model to another through references called foreign keys and many-to-many keys. Apart from enabling organizational patterns such as aggregation, composition, proxies, etc. this linkage is used to establish two levels of dependencies: ones between models, and ones between objects. If a field interface in a model for a daemon references an Interface model, then it implies that the daemon’s model depends on interface. Furthermore, that an object of type daemon depends on an object of type interface if the interface field of the latter contains a reference to the latter. 
+But how are these relationships established? The answer lies in the linkages
+between models in the data model. The data model, which is implemented using
+Django, lets us link one model to another through references called foreign
+keys and many-to-many keys. Apart from enabling organizational patterns such as
+aggregation, composition, proxies, etc. this linkage is used to establish two
+levels of dependencies: ones between models, and ones between objects. If a
+field interface in a model for a daemon references an Interface model, then it
+implies that the daemon’s model depends on interface. Furthermore, an object of
+type daemon depends on an object of type interface if the interface field of
+the former contains a reference to the latter.
 
 Dependencies between models can be specified in two ways:
 
-* Implicitly through linkages in the data model 
-* Explicitly through annotations, which are in turn read by the synchronizer core 
+* Implicitly through linkages in the data model
+* Explicitly through annotations, which are in turn read by the synchronizer
+  core
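
A minimal sketch of the explicit approach, assuming a simplified annotation
format loosely modeled on `xos_links` (the attribute layout here is
illustrative, not the real one):

```python
# Hypothetical sketch of the synchronizer core reading explicit link
# annotations into a model-level dependency map. The attribute layout is
# loosely modeled on xos_links but is illustrative only.

class Interface:
    pass

class Daemon:
    # "Daemon depends on Interface", declared as (target model, link field).
    xos_links = [("Interface", "interface")]

def extract_dependencies(model):
    """Return the names of the models that `model` depends on."""
    return [target for target, _field in getattr(model, "xos_links", [])]

assert extract_dependencies(Daemon) == ["Interface"]
assert extract_dependencies(Interface) == []
```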
 
-Once these dependencies have been extracted, they decide whether synchronizer modules are actuators or they are watchers. They also configure the scheduling of actuators in a way that they are run in dependency order, and so that errors in the execution of an actuator are propagated to its dependencies. Consider the diagram below. 
+Once these dependencies have been extracted, they determine whether
+synchronizer modules are actuators or watchers. They also configure the
+scheduling
+of actuators in a way that they are run in dependency order, and so that errors
+in the execution of an actuator are propagated to its dependencies. Consider
+the diagram below.
 
-### Loops 
+## Loops
 
-The separation of declarative and feedback state in the data model eliminates the possibility of loops involving actions, caused by a synchronizer directly modifying its declarative state. Such loops involve repeated executions of one or more actions by the synchronizer core. But it does not eliminate loops of the following kind 
+The separation of declarative and feedback state in the data model eliminates
+the possibility of loops involving actions, caused by a synchronizer directly
+modifying its declarative state. Such loops involve repeated executions of one
+or more actions by the synchronizer core. But it does not eliminate loops of
+the following kinds:
 
-1. Loops caused because a synchronizer modifies declarative state indirectly - say by triggering an external action that modifies the state via the API. 
+1. Loops caused because a synchronizer modifies declarative state indirectly,
+   say by triggering an external action that modifies the state via the API.
 
-2. Loops in which feedback state written by one Synchronizer is watched (read) by a second Synchronizer, and feedback state written by the second Synchronizer is watched (read) by the first Synchronizer. Of course, this type of interference can also happen across a chain of Synchronizers. These loops can be detected by analyzing the synchronizer-watcher-model graph. 
+2. Loops in which feedback state written by one Synchronizer is watched (read)
+   by a second Synchronizer, and feedback state written by the second
+   Synchronizer is watched (read) by the first Synchronizer. Of course, this
+   type of interference can also happen across a chain of Synchronizers. These
+   loops can be detected by analyzing the synchronizer-watcher-model graph.
 
-3. Spin loops and other general loops found in programs. 
+3. Spin loops and other general loops found in programs.
 
-The second possibility is unlikely in practice because it would be akin to a data model version of a layering violation: Layer _i_ depends on Layer _i+1_, while at the same time Layer _i+1_ would depend on Layer _i_. 
+The second possibility is unlikely in practice because it would be akin to a
+data model version of a layering violation: Layer _i_ depends on Layer _i+1_,
+while at the same time Layer _i+1_ would depend on Layer _i_.
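
Detecting the watch loops described in the second case amounts to cycle
detection over the synchronizer-watcher-model graph. A hedged sketch, with
a made-up edge representation:

```python
# Illustrative cycle detection over a synchronizer-watcher-model graph.
# An edge A -> B means synchronizer A watches feedback state written by B.

def has_cycle(watches):
    """Depth-first search for a cycle in the watch graph."""
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return False
        if node in visiting:
            return True  # back edge: a watch loop exists
        visiting.add(node)
        if any(visit(n) for n in watches.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(n) for n in watches)

# Synchronizer 1 watches state written by 2, and vice versa: a loop.
assert has_cycle({"sync1": ["sync2"], "sync2": ["sync1"]})
assert not has_cycle({"sync1": ["sync2"], "sync2": []})
```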
 
-### Dependencies and Data Consistency 
+## Dependencies and Data Consistency
 
-XOS enforces sequential consistency, without real-time bounds. This is to say that no guarantees are made on when the goal state will be transferred from the data model to the back end, but it is guaranteed that the components of the states defined by individual data models will be actuated in a valid order. This order is implied by the dependencies described in the previous section. For example, if a host model depends on an interface model, then it is guaranteed that the actuator of a host will execute only when the actuator of the corresponding interface has completed successfully. Note that this sequencing guarantee does not apply to watchers. The watchers for a model are executed in an arbitrary order. 
+XOS enforces sequential consistency, without real-time bounds. This is to say
+that no guarantees are made on when the goal state will be transferred from the
+data model to the back end, but it is guaranteed that the components of the
+states defined by individual data models will be actuated in a valid order.
+This order is implied by the dependencies described in the previous section.
+For example, if a host model depends on an interface model, then it is
+guaranteed that the actuator of a host will execute only when the actuator of
+the corresponding interface has completed successfully. Note that this
+sequencing guarantee does not apply to watchers. The watchers for a model are
+executed in an arbitrary order.
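
The host/interface ordering can be sketched with a standard topological
sort (this uses Python's `graphlib` for illustration; the real synchronizer
core's scheduler differs):

```python
# Sketch of dependency-ordered actuation using a topological sort.
from graphlib import TopologicalSorter

# Each key's actuator runs only after the actuators of all of its
# predecessors have completed successfully.
deps = {"host": {"interface"}, "interface": set()}
order = list(TopologicalSorter(deps).static_order())

# The interface actuator is scheduled before the host actuator.
assert order.index("interface") < order.index("host")
```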
 
-Outside of the ordering mandated by dependencies in the data model, operations may be rearranged randomly, or to favor the concurrent scheduling of actuators. This property poses an important task for a service designer, making it necessary for him to specify all ordering constraints comprehensively in the service data model. If any orderings are missed, then even if changes to a set of models are properly ordered at the source, their actuation may be reordered into sequences that are invalid. 
+Outside of the ordering mandated by dependencies in the data model, operations
+may be rearranged randomly, or to favor the concurrent scheduling of actuators.
+This property poses an important task for the service designer, who must
+specify all ordering constraints comprehensively in the
+service data model. If any orderings are missed, then even if changes to a set
+of models are properly ordered at the source, their actuation may be reordered
+into sequences that are invalid.
 
-### Error Handling and Idempotence 
+## Error Handling and Idempotence
 
-The synchronizer is designed to be robust to unforeseeable faults in the back-end system. The main source of this robustness is the idempotence of actuators. Rather than blindly executing an operation on the current state, actuators target a goal state. This means that they are expected to make a reasonable effort to compensate for anomalies. Goal-directed synchronization, i.e., the strategy of driving towards the end state, rather than simply “replaying” events is central to this outcome. In the latter case, actuators would have no other choice than to dutifully apply incoming updates, even if the start state is anomalous, and likely lead to an anomalous end state. 
+The synchronizer is designed to be robust to unforeseeable faults in the
+back-end system. The main source of this robustness is the idempotence of
+actuators. Rather than blindly executing an operation on the current state,
+actuators target a goal state. This means that they are expected to make a
+reasonable effort to compensate for anomalies. Goal-directed synchronization,
+i.e., the strategy of driving towards the end state, rather than simply
+“replaying” events, is central to this outcome. In the latter case, actuators
+would have no other choice than to dutifully apply incoming updates, even if
+the start state is anomalous, and likely lead to an anomalous end state.
 
-A synchronizer tries to schedule as many actuators as it can concurrently without violating dependencies. Dependencies are tracked at the object level. For example, in the example mentioned previously, the failure of the synchronization of an interface would hold up a host if the interface is bound to it, but not if that interface is bound to a different node. When there is a failure, the synchronizer core re-executes the actuator at a later time, and then again at increasing intervals. 
+A synchronizer tries to schedule as many actuators as it can concurrently
+without violating dependencies. Dependencies are tracked at the object level.
+In the example mentioned previously, the failure of the
+synchronization of an interface would hold up a host if the interface is bound
+to it, but not if that interface is bound to a different node. When there is a
+failure, the synchronizer core re-executes the actuator at a later time, and
+then again at increasing intervals.
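
The retry-at-increasing-intervals behavior is essentially exponential
backoff. A hedged sketch with made-up parameters (the core's actual
intervals and retry policy may differ):

```python
# Illustrative exponential backoff for re-running a failed actuator.
# The actual intervals used by the synchronizer core may differ.

def backoff_intervals(base=1.0, factor=2.0, retries=5):
    """Delays (in seconds) before each successive retry."""
    return [base * factor ** i for i in range(retries)]

def run_with_retries(actuator, retries=5):
    """Invoke an actuator, retrying on failure at growing intervals."""
    for delay in backoff_intervals(retries=retries):
        try:
            return actuator()
        except Exception:
            pass  # a real core would sleep(delay) and record the failure
    raise RuntimeError("actuator failed after all retries")

assert backoff_intervals(retries=4) == [1.0, 2.0, 4.0, 8.0]
```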
 
diff --git a/docs/dev/sync_impl.md b/docs/dev/sync_impl.md
index 25451fc..b0ca5d3 100644
--- a/docs/dev/sync_impl.md
+++ b/docs/dev/sync_impl.md
@@ -1,8 +1,14 @@
-## Implementation Details 
+# Synchronizer Implementation Details
 
-There are three types of synchronizers: _Work-based_, _Event-based_, and _Hybrid_ (the last of which subsume the functionalities of the first two). Work-based synchronizers are somewhat cumbersome to implement, but offer strong robustness guarantees such as causal consistency, retries in the face of failure, model-dependency analysis and concurrent scheduling of synchronization modules. Event-based synchronizers are simpler to implement, but lack the aforementioned guarantees. 
+There are three types of synchronizers: _Work-based_, _Event-based_, and
+_Hybrid_ (the last of which subsumes the functionalities of the first two).
+Work-based synchronizers are somewhat cumbersome to implement, but offer strong
+robustness guarantees such as causal consistency, retries in the face of
+failure, model-dependency analysis and concurrent scheduling of synchronization
+modules. Event-based synchronizers are simpler to implement, but lack the
+aforementioned guarantees.
 
-### Differences between Work-based and Event-based Synchronizers 
+## Differences between Work-based and Event-based Synchronizers
 
 |   Mechanism   |   Work-Based Synchronizers   |   Event-based Synchronizers   |
 |--------------------|------------------------------------------|------------------------------------------|
@@ -12,203 +18,289 @@
 | Concurrency | Non-dependent modules are executed concurrently | Modules are executed sequentially |
 | Error handling | Errors are propagated to dependencies; retries on failure | No error dependency; it’s up to the Synchronizer to cope with event loss |
 | Ease of implementation | Moderate | Easy |
- 
-### Implementing an Event-based Synchronizer 
 
-An Event-based Synchronizer is a collection of _Watcher_ modules. Each Watcher module listens for (i.e., watches) events pertaining to a particular model. The Synchronizer developer must provide the set of these modules. The steps for assembling a synchronizer once these modules have been implemented, are as follows:
+### Implementing an Event-based Synchronizer
+
+An Event-based Synchronizer is a collection of _Watcher_ modules. Each Watcher
+module listens for (i.e., watches) events pertaining to a particular model. The
+Synchronizer developer must provide the set of these modules. The steps for
+assembling a synchronizer, once these modules have been implemented, are as
+follows:
 
 1. Run the generate watcher script: `gen_watcher.py <name of your app>`
 
-2. Set your Synchronizer-specific config options in the config file, and also set `observer_enable_watchers` to true. 
+2. Set your Synchronizer-specific config options in the config file, and also
+   set `observer_enable_watchers` to true.
 
-3. Install python-redis by running `pip install redis` in your Synchronizer container 
+3. Install python-redis by running `pip install redis` in your Synchronizer
+   container
 
-4. Link the redis container that comes packaged with XOS with your Synchronizer container as `redis`. 
+4. Link the redis container that comes packaged with XOS with your Synchronizer
+   container as `redis`.
 
-5. Drop your watcher modules in the directory `/opt/xos/synchronizers/<your synchronizer>/steps`
+5. Drop your watcher modules in the directory
+   `/opt/xos/synchronizers/<your synchronizer>/steps`
 
-6. Run your synchronizer by running `/opt/xos/synchronizers/<your synchronizer>/run-synchronizer.sh`
+6. Run your synchronizer by running
+   `/opt/xos/synchronizers/<your synchronizer>/run-synchronizer.sh`
 
-### Watcher Module API 
+### Watcher Module API
 
-* `def handle_watched_object(self, o)`: A method, called every time a watched object is added, deleted, or updated. 
+* `def handle_watched_object(self, o)`: A method, called every time a watched
+  object is added, deleted, or updated.
 
-* `int watch_degree`: A variable of type `int` that defines the set of watched models _implicitly_. If this module synchronizes models A and B, then the watched set is defined by the models that are a distance `watch_degree` from A or from B in the model dependency graph. 
+* `int watch_degree`: A variable of type `int` that defines the set of watched
+  models _implicitly_. If this module synchronizes models A and B, then the
+  watched set is defined by the models that are a distance `watch_degree` from
+  A or from B in the model dependency graph.
 
-* `ModelLink watched`: A list of type `ModelLink` that defines the set of watched models _explicitly_. If this is defined, then `watch_degree` is ignored. 
+* `ModelLink watched`: A list of type `ModelLink` that defines the set of
+  watched models _explicitly_. If this is defined, then `watch_degree` is
+  ignored.
 
-* `Model synchronizes`: A list of type `Model` that identifies the model that this  module synchronizes. 
+* `Model synchronizes`: A list of type `Model` that identifies the model that
+  this module synchronizes.
 
-The main body of a watcher module is the function `handle_watched_object`, which responds to operations on objects that the module synchronizes. If the module responds to multiple object types, then it must determine the type of object, and proceed to process it accordingly. 
- 
-```python 
+The main body of a watcher module is the function `handle_watched_object`,
+which responds to operations on objects that the module watches. If the
+module responds to multiple object types, then it must determine the type of
+object, and proceed to process it accordingly.
+
+```python
 def handle_watched_object(self, o):
     if (type(o) is Slice):
-        self.handle_changed_slice(o) 
+        self.handle_changed_slice(o)
     elif (type(o) is Node):
-        self.handle_changed_node(o) 
+        self.handle_changed_node(o)
 ```
 
-#### Linking the Watcher into the Synchronizer 
+#### Linking the Watcher into the Synchronizer
 
-There are two ways of linking in a Watcher. Using them both does not hurt. The first method is complex but robust, and involves making the declaration in the data model, by ensuring that the model that your synchronizer would like to watch is linked to the model that it actuates. For instance, if your synchronizer actuates a service model called Fabric, which links the Instance model, then you would ensure that Instance is a dependency of Fabric by making the following annotation in the Fabric model:
+There are two ways of linking in a Watcher. Using them both does not hurt. The
+first method is complex but robust, and involves making the declaration in the
+data model, by ensuring that the model that your synchronizer would like to
+watch is linked to the model that it actuates. For instance, if your
+synchronizer actuates a service model called Fabric, which links the Instance
+model, then you would ensure that Instance is a dependency of Fabric by making
+the following annotation in the Fabric model:
 
-```python 
+```python
 class Fabric(Service):
-    ... 
-    ... 
+    ...
+    ...
     xos_links = [ModelLink(Instance,via='instance',into='ip')]
 ```
-	
-There can be several `ModelLink` specifications in a single `xos_links` declaration, each encapsulating the referenced model, the field in the current model that links to it, and the destination field in which the watcher is interested. If into is omitted, then the watcher is notified of all changes in the linked model, irrespective of the fields that change. 
-	
-The above change needs to be backed up with an instruction to the synchronizer that the watcher is interested in being notified of changes to its dependencies. This is done through a `watch_degree` annotation. 
-	
-```python 
+
+There can be several `ModelLink` specifications in a single `xos_links`
+declaration, each encapsulating the referenced model, the field in the current
+model that links to it, and the destination field in which the watcher is
+interested. If `into` is omitted, then the watcher is notified of all changes
+in the linked model, irrespective of the fields that change.
+
+The above change needs to be backed up with an instruction to the synchronizer
+that the watcher is interested in being notified of changes to its
+dependencies. This is done through a `watch_degree` annotation.
+
+```python
 class SyncFabricService(SyncStep):
-   watch_degree=1 
+   watch_degree=1
 ```
 
-By default, `watch_degree = 0`, means the Synchronizer watches nothing. When watch degree is 1, it watches one level of dependencies removed, and so on. If the `watch_degree` in the above code were 2, then this module would also get notified of changes in dependencies of the `Instance` model. 
+By default, `watch_degree = 0`, meaning the Synchronizer watches nothing. When
+watch degree is 1, it watches one level of dependencies removed, and so on. If
+the `watch_degree` in the above code were 2, then this module would also get
+notified of changes in dependencies of the `Instance` model.
 
-The second way of linking in a watcher is to hardcode the watched model directly in the synchronizer:
+The second way of linking in a watcher is to hardcode the watched model
+directly in the synchronizer:
 
-```python 
+```python
 class SyncFabricService(SyncStep):
     watched = [ModelLink(Instance,via='instance',into='ip')]
 ```
 
-#### Linking the Watcher into the Synchronizer 
+#### Activate the Watcher by connecting the Synchronizer Container to Redis
 
-* Set the `observer_enable_watchers` option to true in your XOS synchronizer config file. 
+* Set the `observer_enable_watchers` option to true in your XOS synchronizer
+  config file.
 
-* Add a link between your synchronizer container and the redis container by including the following lines in the definition of your synchronizer's docker-compose file. You may need to adapt these to the name of the project used (e.g. cordpod) 
+* Add a link between your synchronizer container and the redis container by
+  including the following lines in the definition of your synchronizer's
+  docker-compose file. You may need to adapt these to the name of the project
+  used (e.g. cordpod)
 
-  - `external_links:`
-       - `xos_redis:redis`
+  ```yaml
+  external_links:
+    - xos_redis:redis
+  ```
 
-* Ensure that there is a similar link between your XOS UI container and the redis container. 
+* Ensure that there is a similar link between your XOS UI container and the
+  redis container.
 
-In addition to the above development tasks, you also need to make the following changes to your configuration to activate watchers. 
+In addition to the development tasks described earlier, the configuration
+changes above are required to activate watchers.
 
-### Implementing a Work-based Synchronizer 
+### Implementing a Work-based Synchronizer
 
-A work-based Synchronizer is a collection of _Actuator_ modules. Each Actuator module is invoked when a model is found to be outdated relative to its last synchronization. An actuator module can be self-contained and written entirely in Python, or it can be broken into a "dispatcher" and "payload", with the dispatcher implemented in Python and the payload implemented using Ansible. The Synchronizer core has built-in support for the dispatch of Ansible modules and helps extract parameters from the synchronized model and translate them into the parameters required by the corresponding Ansible script. It also tracks an hierarchically structured list of such ansible scripts on the filesystem, for operators to use to inspect and debug a system. The procedure for building a work-based synchronizer is as follows:
+A work-based Synchronizer is a collection of _Actuator_ modules. Each Actuator
+module is invoked when a model is found to be outdated relative to its last
+synchronization. An actuator module can be self-contained and written entirely
+in Python, or it can be broken into a "dispatcher" and "payload", with the
+dispatcher implemented in Python and the payload implemented using Ansible. The
+Synchronizer core has built-in support for the dispatch of Ansible modules and
+helps extract parameters from the synchronized model and translate them into
+the parameters required by the corresponding Ansible script. It also tracks a
+hierarchically structured list of such Ansible scripts on the filesystem, for
+operators to use to inspect and debug a system. The procedure for building a
+work-based synchronizer is as follows:
 
-1. Run the gen_workbased.py script. gen_workbased <app name>. 
+1. Run the generate script: `gen_workbased.py <app name>`.
 
-2. Set your Synchronizer-specific config options in the config file, and also set observer_enable_watchers to False. 
+2. Set your Synchronizer-specific config options in the config file, and also
+   set `observer_enable_watchers` to false.
 
-3. Drop your actuator modules in the directory `/opt/xos/synchronizers/<your synchronizer>/steps`
+3. Drop your actuator modules in the directory
+   `/opt/xos/synchronizers/<your synchronizer>/steps`
 
-4. Run your synchronizer by running `/opt/xos/synchronizers/<your synchronizer>/run-synchronizer.sh`
+4. Run your synchronizer by running
+   `/opt/xos/synchronizers/<your synchronizer>/run-synchronizer.sh`
 
-### Actuator Module API 
+### Actuator Module API
 
-* `Model synchronizes`: A list of type `Model` that records the set of models that the module synchronizes. 
+* `Model synchronizes`: A list of type `Model` that records the set of models
+  that the module synchronizes.
 
-* `def sync_record(self, object)`: A method that handles outdated objects. 
+* `def sync_record(self, object)`: A method that handles outdated objects.
 
-* `def delete_record(self, object)`" A method that handles object delection. 
+* `def delete_record(self, object)`: A method that handles object deletion.
 
-* `def get_extra_attributes(self, object)`: A method that maps an object to 
-the parameters required by its Ansible payload. Returns a `dict` with those 
-parameters and their values. 
+* `def get_extra_attributes(self, object)`: A method that maps an object to the
+  parameters required by its Ansible payload. Returns a `dict` with those
+  parameters and their values.
 
-* `def fetch_pending(self, deleted)`: A method that fetches the set of pending 
-objects from the database. The synchronizer core provides a default implementation. 
-Override only if you have a reason to do so. 
+* `def fetch_pending(self, deleted)`: A method that fetches the set of pending
+  objects from the database. The synchronizer core provides a default
+  implementation. Override only if you have a reason to do so.
 
-* `string template_name`: The name of the Ansible script that directly interacts 
-with the underlying substrate. 
+* `string template_name`: The name of the Ansible script that directly
+  interacts with the underlying substrate.
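
Put together, a step might look like the minimal sketch below. It is
illustrative only: `MyServiceInstance`, the file name in `template_name`, and
the field names are hypothetical stand-ins, and a real step derives from the
synchronizer framework's base step class rather than plain `object`.

```python
# Illustrative actuator-step sketch; MyServiceInstance is a hypothetical
# model, and a real step would derive from the synchronizer framework's
# base step class instead of plain object.

class MyServiceInstance(object):
    """Stand-in model exposing the fields the step reads."""
    def __init__(self, package_location, server_port):
        self.package_location = package_location
        self.server_port = server_port

class SyncMyServiceInstance(object):
    # Ansible script that interacts with the substrate (assumed filename)
    template_name = "sync_myserviceinstance.yaml"

    def get_extra_attributes(self, o):
        # Map object fields to the parameters the Ansible recipe expects
        fields = {}
        fields['package_location'] = o.package_location
        fields['server_port'] = o.server_port
        return fields

    def delete_record(self, o):
        # Handle object deletion against the underlying substrate
        pass
```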
 
-#### Implementing a Step with Ansible 
+#### Implementing a Step with Ansible
 
-To implement a step using Ansible, a developer must provide two things: an Ansible recipe, and a `get_extra_attributes` method, which maps attributes of the object into a dictionary that configures that Ansible recipe. The Ansible recipe comes in two parts, an inner payload and a wrapper that delivers that payload to the VMs associated with the service. The wrapper itself comes in two parts. A part that sets up the preliminaries:
- 
- ```python 
+To implement a step using Ansible, a developer must provide two things: an
+Ansible recipe, and a `get_extra_attributes` method, which maps attributes of
+the object into a dictionary that configures that Ansible recipe. The Ansible
+recipe comes in two parts, an inner payload and a wrapper that delivers that
+payload to the VMs associated with the service. The wrapper itself comes in
+two parts. The first sets up the preliminaries:
 
+```yaml
 ---
- 
 - hosts: "{{ instance_name }}"
-  connection: ssh 
-  user: ubuntu 
-  sudo: yes 
-  gather_facts: no 
+  connection: ssh
+  user: ubuntu
+  sudo: yes
+  gather_facts: no
   vars:
     - package_location: "{{ package_location }}"
     - server_port: "{{ server_port }}"
 ```
 
-The template variables `package_location` and `server_port`
-come out of the Python part of the Synchronizer implementation 
-(discussed below). The outer wrapper then includes a set of Ansible 
-roles that perform the required actions:
+The template variables `package_location` and `server_port` come out of the
+Python part of the Synchronizer implementation (discussed below). The outer
+wrapper then includes a set of Ansible roles that perform the required actions:
 
-```python 
+```yaml
 roles:
-  - download_packages 
-  - configure_packages 
-  - start_server 
+  - download_packages
+  - configure_packages
+  - start_server
 ```
 
- The "payload" of the Ansible recipe contains an implementation 
- of the roles, in this case, `download_packages`, `configure_packages`,
- and `start_server`. The concrete values of parameters required by the 
- Ansible payload are provided in the implementation of the `get_extra_attributes`
- method in the Python part of the Synchronizer. This method receives an object 
- from the data model and is charged with the task of converting the properties of 
- that object into the set of properties required by the Ansible recipe, which are 
- returned as a Python dictionary. 
- 
-```python 
+The "payload" of the Ansible recipe contains an implementation of the roles, in
+this case, `download_packages`, `configure_packages`, and `start_server`. The
+concrete values of parameters required by the Ansible payload are provided in
+the implementation of the `get_extra_attributes` method in the Python part of
+the Synchronizer. This method receives an object from the data model and is
+charged with the task of converting the properties of that object into the set
+of properties required by the Ansible recipe, which are returned as a Python
+dictionary.
+
+```python
 def get_extra_attributes(self, o):
         fields = {}
-        fields['package_location'] = o.package_location 
-        fields['server_port'] = o.server_port 
-        return fields 
+        fields['package_location'] = o.package_location
+        fields['server_port'] = o.server_port
+        return fields
 ```
 
-#### Implementing a Step without Ansible 
+#### Implementing a Step without Ansible
 
-To implement a step without using Ansible, a developer need only implement the `sync_record` and `delete_record` methods, which get called for every pending 
-object. These methods interact directly with the underlying substrate. 
+To implement a step without using Ansible, a developer need only implement the
+`sync_record` and `delete_record` methods, which get called for every pending
+object. These methods interact directly with the underlying substrate.
 
-#### Managing Dependencies 
+#### Managing Dependencies
 
-If your data models have dependencies between them, so that for one to be synchronized, another must already have been synchronized, then you can define such dependencies in your data model. The Synchronizer automatically picks up such dependencies and ensures that the steps corresponding to the models in questions are executed in a valid order. It also ensures that any errors that arise propagate from the affected objects to its dependents, and that the dependents are held up until the errors have been resolved and the dependencies have been successfully synchronized. 
-In the absence of failures, the Synchronizer tries to execute your synchronization steps concurrently to whatever extent this is possible while still honoring dependencies. 
+If your data models have dependencies between them, so that for one to be
+synchronized, another must already have been synchronized, then you can define
+such dependencies in your data model. The Synchronizer automatically picks up
+such dependencies and ensures that the steps corresponding to the models in
+questions are executed in a valid order. It also ensures that any errors that
+arise propagate from the affected objects to its dependents, and that the
+dependents are held up until the errors have been resolved and the dependencies
+have been successfully synchronized.  In the absence of failures, the
+Synchronizer tries to execute your synchronization steps concurrently to
+whatever extent this is possible while still honoring dependencies.
 
-```python 
+```python
 <in the definition of your model>
 xos_links = [ModelLink(dest=MyServiceUser,via='user'),ModelLink(dest=MyServiceDevice,via='device') ]
 ```
 
-In the above example, the `xos_links` field declares two dependencies. The name `xos_links` is key, and so the field should be named as such. The dependencies are contained in a list of type `ModelLink`, each of which defines a type of object (a model) and an "accessor" field via which a related object of that type can be accessed. 
+In the above example, the `xos_links` field declares two dependencies. The
+field must be named `xos_links` for the Synchronizer to find it. The
+dependencies are contained in a list of type `ModelLink`, each of which
+defines a type of object (a model) and an "accessor" field via which a
+related object of that type can be accessed.
 
-#### Handling Errors 
+#### Handling Errors
 
-To fault synchronization, you can raise an exception in any of the methods of your step that are automatically called by the synchronizer core. These include `fetch_pending`, `sync_record` and `delete_record`. The outcome of such exceptions has multiple parts:
+To signal a synchronization fault, raise an exception in any of the methods of
+your step that are automatically called by the synchronizer core. These
+include `fetch_pending`, `sync_record`, and `delete_record`. Such an
+exception has several consequences:
 
-1. The synchronization of the present object is deferred. 
+1. The synchronization of the present object is deferred.
 
-2. The synchronization of dependent objects is deferred, if those objects are accessible via the current object (see the `via` field). 
+2. The synchronization of dependent objects is deferred, if those objects are
+   accessible via the current object (see the `via` field).
 
-3. A string representation of your exception is propagated into a scratchpad in your model, which in turn appears in your UI. When you click the object in question, in the UI, you should see the error message. 
+3. A string representation of your exception is propagated into a scratchpad in
+   your model, which in turn appears in your UI. When you click the object in
+   question, in the UI, you should see the error message.
 
-4. The synchronization state of your object, and of dependent objects changes to "Error" and a red icon appears next to it. 
+4. The synchronization state of your object and of dependent objects changes
+   to "Error", and a red icon appears next to it.
 
-5. If the object repeatedly fails to synchronize, then its synchronization interval is increased exponentially. 
+5. If the object repeatedly fails to synchronize, then its synchronization
+   interval is increased exponentially.
 
-Sometimes, you may encounter a temporary error, which you think may be resolved shortly, by the time the Synchronizer runs again. In these cases, you can raise a `DeferredException`. This error type differs from a general exception in two ways:
+Sometimes you may encounter a temporary error that you expect to be resolved
+by the time the Synchronizer runs again. In these cases, you can raise
+a `DeferredException`. This error type differs from a general exception in two
+ways:
 
-1. It does not put your object in error state. 
+1. It does not put your object in error state.
 
-2. It disables exponential backoff (i.e., the Synchronizer tries to synchronize your object every single time). 
+2. It disables exponential backoff (i.e., the Synchronizer tries to synchronize
+   your object every single time).
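
As a sketch of this pattern, with `DeferredException` defined inline as a
stand-in for the class the synchronizer framework provides, and a hypothetical
`reachable` attribute standing in for whatever transient condition your step
checks:

```python
class DeferredException(Exception):
    """Stand-in for the framework's DeferredException: defers the object
    without marking it as errored or triggering exponential backoff."""

class SyncExampleInstance(object):
    def sync_record(self, o):
        # Hypothetical transient condition: the backing instance may not
        # be up yet, so retry on the next synchronizer run.
        if not getattr(o, 'reachable', False):
            raise DeferredException("instance %s is not reachable yet" % o.name)
        # ... perform the actual synchronization here ...
```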
 
-#### Synchronizer Configuration Options 
+#### Synchronizer Configuration Options
 
-The following table summarizes the available configuration options. For historical reasons, they are called `observer_foo` since Synchronizers were called Observers in an earlier version of XOS. 
+The following table summarizes the available configuration options. For
+historical reasons, they are called `observer_foo` since Synchronizers were
+called Observers in an earlier version of XOS.
 
 |    Option    |   Default    |     Purpose     |
 |---------|----------|-----------|
@@ -224,3 +316,4 @@
 | `observer_logstash_hostport` | N/A | The host name and port number (e.g. `xosmonitor.org:4132`) to which the Synchronizer streams its logs, on which a logstash server is running. |
 | `observer_log_file` | N/A | The log file into which the Synchronizer logs are published. |
 | `observer_model_policies_dir` | N/A | The directory in which model policies are stored.|
+
diff --git a/docs/dev/unittest.md b/docs/dev/unittest.md
index ef3884c..3dd979a 100644
--- a/docs/dev/unittest.md
+++ b/docs/dev/unittest.md
@@ -1,27 +1,42 @@
 # Unit Testing
 
-XOS supports automated unit tests using the `nose2` unit testing framework. 
+XOS supports automated unit tests using the `nose2` unit testing framework.
 
 ## Setting up a unit testing environment
 
-To run unit tests, an environment needs to be setup with the appropriate python libraries used by the unit testing framework and by the XOS libraries that are being tested. One way to accomplish this is to setup a [virtual-env](local_env.md). You will also need to copy Chameleon from `component/chameleon` to `containers/xos/tmp.chameleon`. Here is a set of commands that may prove useful:
+To run unit tests, an environment needs to be set up with the appropriate
+Python libraries used by the unit testing framework and by the XOS libraries
+that are being tested. One way to accomplish this is to set up a
+[virtual-env](local_env.md). You will also need to copy Chameleon from
+`component/chameleon` to `containers/xos/tmp.chameleon`. Here is a set of
+commands that may prove useful:
 
-```brew install graphviz
+```shell
+brew install graphviz
 pip install --install-option="--include-path=/usr/local/include/" --install-option="--library-path=/usr/local/lib/" pygraphviz
 source scripts/setup_venv.sh
 pip install nose2 mock
-cp -R ../../component/chameleon containers/xos/tmp.chameleon```
+cp -R ../../component/chameleon containers/xos/tmp.chameleon
+```
 
 ## Running unit tests
 
 To run unit tests, go to the root of the xos repository and run the following:
 
-```nose2 --verbose --exclude-ignored-files```
+```shell
+nose2 --verbose --exclude-ignored-files
+```
 
 ## Writing new unit tests
 
-New test filename should start with the string `test`. For example, `test_mymodule.py`. If named properly, then `nose2` will automatically pick them up. 
+New test filenames should start with the string `test`, for example
+`test_mymodule.py`. If named properly, `nose2` will automatically pick them
+up.
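
For example, a minimal `test_mymodule.py` might look like this; the `add`
function is a hypothetical unit under test, and `nose2` discovers standard
`unittest` test cases in files whose names start with `test`:

```python
# test_mymodule.py -- discovered by nose2 because the filename starts
# with "test"; the function under test here is purely illustrative.
import unittest

def add(a, b):
    # Hypothetical unit under test
    return a + b

class TestMyModule(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)
```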
 
 ## Ignoring unwanted unit tests
 
-Some tests are still being migrated to the unit testing framework and/or require features that may not be present in the virtual-env. Placing the string `# TEST_FRAMEWORK: IGNORE` anywhere in a python file will prevent it from being executed automatically by the test framework. 
+Some tests are still being migrated to the unit testing framework and/or
+require features that may not be present in the virtual-env. Placing the string
+`# TEST_FRAMEWORK: IGNORE` anywhere in a python file will prevent it from being
+executed automatically by the test framework.
+
diff --git a/docs/dev/workflow_local.md b/docs/dev/workflow_local.md
index c515469..d86e0ec 100644
--- a/docs/dev/workflow_local.md
+++ b/docs/dev/workflow_local.md
@@ -8,9 +8,9 @@
 The `local` scenario is suitable for working on (and verifying the correctness
 of):
 
-- `core` models
-- `service` models
-- `gui`
+* `core` models
+* `service` models
+* `gui`
 
 It runs in a set of local docker containers, and is the most lightweight of all
 CORD development environments.
@@ -30,7 +30,7 @@
 
 You can setup a `local` POD config on your machine as follows.
 
-```
+```shell
 cd ~/cord/build
 make PODCONFIG=rcord-local.yml config
 make build
@@ -43,13 +43,13 @@
 deploy an Apache proxy and set `/etc/hosts` variables to allow it to proxy the
 connection remotely:
 
-```
+```shell
 make local-ubuntu-dev-env
 ```
 
 Example combining all of these using `cord-boostrap.sh`:
 
-```
+```shell
 bash ./cord-bootstrap.sh -d -t "PODCONFIG=rcord-local.yml config" -t "local-ubuntu-dev-env" -t "build"
 ```
 
@@ -58,7 +58,7 @@
 This is the workflow that you'll need to follow if you want to start from a
 fresh XOS installation. Note that it wipes the out the XOS database.
 
-```
+```shell
 cd ~/cord/build
 make local-xos-teardown
 make build
diff --git a/docs/dev/workflow_mock_single.md b/docs/dev/workflow_mock_single.md
index 9e76192..1b95cdc 100644
--- a/docs/dev/workflow_mock_single.md
+++ b/docs/dev/workflow_mock_single.md
@@ -8,17 +8,17 @@
 The `mock` scenario is suitable for working on (and verifying the
 correctness of):
 
-- `core` models
-- `service` models
-- `gui`
-- `profile` configurations
+* `core` models
+* `service` models
+* `gui`
+* `profile` configurations
 
 The `single` scenario also runs the CORD synchronizer containers and can
 optionally run ONOS and ElasticStack, and may be suitable for working on:
 
-- `synchronizer` steps
-- Interaction between XOS's ONOS synchronizer and ONOS
-- Logging with ElasticStack
+* `synchronizer` steps
+* Interaction between XOS's ONOS synchronizer and ONOS
+* Logging with ElasticStack
 
 ## Requirements
 
@@ -36,7 +36,7 @@
 You can setup a `mock` deployment on your machine as follows. If using
 `single`, replace `rcord-mock.yml` with `rcord-single.yml`:
 
-```
+```shell
 cd ~/cord/build
 make PODCONFIG=rcord-mock.yml config
 make -j4 build
@@ -45,8 +45,8 @@
 This setups a `Vagrant VM`, and once the install is complete,
 you can access:
 
-- the XOS GUI at `192.168.46.100:8080/xos`
-- the Vagrant VM via `ssh headnode`
+* the XOS GUI at `192.168.46.100:8080/xos`
+* the Vagrant VM via `ssh headnode`
 
 ### Configure Your Deployment
 
@@ -54,7 +54,7 @@
 prefer to use `VirtualBox` (this is the typical Mac OS case), you can invoke
 the build command as:
 
-```
+```shell
 VAGRANT_PROVIDER=virtualbox make -j4 build
 ```
 
@@ -74,14 +74,14 @@
 
 Note that the code is shared in the VM so that:
 
-- `~/cord` is mounted on `/opt/cord`
-- `~/cord_profile` is mounted on `/opt/cord_profile`
-- `~/cord/platform-install/credentials/` is mounted on `~/opt/credentials`
+* `~/cord` is mounted on `/opt/cord`
+* `~/cord_profile` is mounted on `/opt/cord_profile`
+* `~/cord/platform-install/credentials/` is mounted on `~/opt/credentials`
   (only in the `single` scenario)
 
 ### Update the Code Running in the Containers
 
-```
+```shell
 cd ~/cord/build
 make xos-update-images
 make -j4 build
@@ -93,7 +93,7 @@
 to start from a fresh XOS installation. Note that it wipes the
 out the XOS database.
 
-```
+```shell
 cd ~/cord/build
 make xos-teardown
 make -j4 build
@@ -101,7 +101,7 @@
 
 ### Update the Profile Configuration
 
-```
+```shell
 cd ~/cord/build
 make clean-profile
 make PODCONFIG=rcord-mock.yml config
@@ -116,7 +116,7 @@
 To use these, you would invoke the ONOS or ElasticStack milestone target before
 the `build` target:
 
-```
+```shell
 make PODCONFIG=rcord-single.yml config
 make -j4 milestones/deploy-elasticstack
 make -j4 build
@@ -124,7 +124,7 @@
 
 or
 
-```
+```shell
 make PODCONFIG=rcord-single.yml config
 make -j4 milestones/deploy-onos
 make -j4 build
diff --git a/docs/dev/xossh.md b/docs/dev/xossh.md
index dfe37c0..df3f0c0 100644
--- a/docs/dev/xossh.md
+++ b/docs/dev/xossh.md
@@ -17,11 +17,11 @@
   > <   | |  | |  \___ \   \___ \  |  __  |
  / . \  | |__| |  ____) |  ____) | | |  | |
 /_/ \_\  \____/  |_____/  |_____/  |_|  |_|
- 
+
 XOS Core server at xos-core.cord.lab:50051
 Type "listObjects()" for a list of all objects
 Type "listUtility()" for a list of utility functions
 Type "login("username", "password")" to switch to a secure shell
 Type "examples()" for some examples
 xossh >>>
-```
\ No newline at end of file
+```
diff --git a/docs/dev/xproto.md b/docs/dev/xproto.md
index 95ca43a..30e5e1d 100644
--- a/docs/dev/xproto.md
+++ b/docs/dev/xproto.md
@@ -1,27 +1,49 @@
 # XOS Modeling Framework
 
-XOS defines a modeling framework: a language for specifying data models (_xproto_) and a tool chain for generating code based on the set of models (_xosgenx_).
+XOS defines a modeling framework: a language for specifying data models
+(_xproto_) and a tool chain for generating code based on the set of models
+(_xosgenx_).
 
-The xproto language is based on [Google’s protocol buffers](https://developers.google.com/protocol-buffers/) (protobufs), borrowing their syntax, but extending their semantics to express additional behavior. Although these extensions can be written in syntactically valid protobufs (using the protobuf option feature), the resulting model definitions are cumbersome and the semantics are under-specified.
+The xproto language is based on [Google’s protocol
+buffers](https://developers.google.com/protocol-buffers/) (protobufs),
+borrowing their syntax, but extending their semantics to express additional
+behavior. Although these extensions can be written in syntactically valid
+protobufs (using the protobuf option feature), the resulting model definitions
+are cumbersome and the semantics are under-specified.
 
-Whereas protobufs primarily facilitate one operation on models, namely, data serialization, xproto goes beyond protobufs to provide a framework for implementing custom operators. 
+Whereas protobufs primarily facilitate one operation on models, namely, data
+serialization, xproto goes beyond protobufs to provide a framework for
+implementing custom operators.
 
-Users are free to define models using standard protobufs instead of the xproto syntax, but doing so obscures the fact that packing new behavior into the options field renders protobuf’s semantics under-specified. Full details are given below, but as two examples: (1) xproto supports relationships (foreign keys) among objects defined by the models, and (2) xproto supports boolean predicates (policies) that can be applied to objects defined by the  models.
+Users are free to define models using standard protobufs instead of the xproto
+syntax, but doing so obscures the fact that packing new behavior into the
+options field renders protobuf’s semantics under-specified. Full details are
+given below, but as two examples: (1) xproto supports relationships (foreign
+keys) among objects defined by the models, and (2) xproto supports boolean
+predicates (policies) that can be applied to objects defined by the models.
 
-The xosgenx tool chain generates code based on a set of models loaded into the XOS Core. This tool chain can be used to produce multiple targets, including:
+The xosgenx tool chain generates code based on a set of models loaded into the
+XOS Core. This tool chain can be used to produce multiple targets, including:
 
-* Object Relation Mapping (ORM) – maps the data model onto a persistent database.
+* Object Relation Mapping (ORM) – maps the data model onto a persistent
+  database.
 * gRPC Interface – how all the other containers communicate with XOS Core.
 * TOSCA API – one of the UI/Views used to access CORD.
 * Security Policies – governs which principals can read/write which objects.
-* Synchronizer Framework – execution environment in which Ansible playbooks run.
+* Synchronizer Framework – execution environment in which Ansible playbooks
+  run.
 * Unit Tests – auto-generate API unit tests.
 
-The next two sections describe xproto (first the models and then policies that can be applied to the models), and the following section describes xosgenx and how it can be used to generate different targets.
+The next two sections describe xproto (first the models and then policies that
+can be applied to the models), and the following section describes xosgenx and
+how it can be used to generate different targets.
 
 ## Models
 
-The xproto syntax for models is based on Google Protobufs. This means that any protobuf file also qualifies as xproto. We currently use the Protobuf v2 syntax. For example, the file below specifies a model that describes container images:
+The xproto syntax for models is based on Google Protobufs. This means that any
+protobuf file also qualifies as xproto. We currently use the Protobuf v2
+syntax. For example, the file below specifies a model that describes container
+images:
 
 ```protobuf
 message Image {
@@ -34,22 +56,29 @@
 }
 ```
 
-We use standard protobuf scalar types, for example: `int32`, `uint32`, `string`, `bool`, and `float`.
+We use standard protobuf scalar types, for example: `int32`, `uint32`,
+`string`, `bool`, and `float`.
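
For example, a hypothetical message using several of these scalar types
(illustrative only, not a real XOS model):

```protobuf
message ExampleModel {
     required string label = 1;
     required int32 count = 2;
     optional bool enabled = 3;
     optional float ratio = 4;
}
```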
 
-xproto contains several extensions, encoded as Protobuf options, which the xosgenx toolchain recognizes at the top level. The xproto extensions to Google Protobufs are as follows.
+xproto contains several extensions, encoded as Protobuf options, which the
+xosgenx toolchain recognizes at the top level. The xproto extensions to Google
+Protobufs are as follows.
 
 ### Inheritance
 
-Inheritance instructs the xproto processor that a model inherits the fields of a set of base models. These base model fields are not copied into the derived model automatically. However, the fields can be accessed in an xproto target.
+Inheritance instructs the xproto processor that a model inherits the fields of
+a set of base models. These base model fields are not copied into the derived
+model automatically. However, the fields can be accessed in an xproto target.
 
-- xproto
+* xproto
+
   ```protobuf
   message EC2Instance (Instance, EC2Object) {
         // EC2Instance inherits the fields of  Instance
   }
   ```
 
-- protobuf
+* protobuf
+
   ```protobuf
   message EC2Instance  {
         option bases = "Instance,EC2Object"
@@ -58,37 +87,49 @@
 
 ### Links
 
-Links are references to one model from another. A link specifies the type of the reference (manytoone, manytomany, onetomany, or onetoone), name of the field that contains the reference (_slice_ in the following example), its type (e.g., _Slice_), the name of the field in the peer model that points back to the current model, and a “through” field, specifying a model declared separately as an xproto message, that stores properties of the link.
+Links are references to one model from another. A link specifies the type of
+the reference (manytoone, manytomany, onetomany, or onetoone), the name of the
+field that contains the reference (_slice_ in the following example), its type
+(e.g., _Slice_), the name of the field in the peer model that points back to
+the current model, and a “through” field, specifying a model declared
+separately as an xproto message, that stores properties of the link.
 
-- xproto
+* xproto
+
   ```protobuf
   message Instance {
         required manytoone slice:Slice->instances = 1;
   }
   ```
 
-- protobuf
+* protobuf
+
   ```protobuf
   message Instance {
         required int32 slice = 1 [model="Slice", link="manytoone", src_port="slice", dst_port="instances"];
   }
   ```
 
-The example shown below illustrates a manytomany link from Image to Deployment, which goes through the model `ImageDeployments`:
+The example shown below illustrates a manytomany link from Image to Deployment,
+which goes through the model `ImageDeployments`:
 
-- xproto
+* xproto
+
   ```protobuf
   required manytomany deployments->Deployment/ImageDeployments:images = 7 [help_text = "Select which images should be instantiated on this deployment", null = False, db_index = False, blank = True];
   ```
 
-- Protobuf
+* protobuf
+
   ```protobuf
   required int32 deployments = 7 [help_text = "Select which images should be instantiated on this deployment", null = False, db_index = False, blank = True, model="Deployment", through="ImageDeployments", dst_port="images", link="manytomany"];
   ```
 
 ### Access Policies
 
-Associates a policy (a boolean expression) with a model  to control access to instances of that model. How policies (e.g., `slicle_policy`) are specified is described below.
+Associates a policy (a boolean expression) with a model to control access to
+instances of that model. How policies (e.g., `slice_policy`) are specified is
+described below.
 
 ```protobuf
 message Slice::slice_policy (XOSBase) {
@@ -98,17 +139,26 @@
 
 ### Model Options
 
-Model Options declare information about models. They can be declared for individual models, or at the top level in the xproto definition, in which case they are inherited by all of the models in the file, unless they are overridden by a particular model.
+Model Options declare information about models. They can be declared for
+individual models, or at the top level in the xproto definition, in which case
+they are inherited by all of the models in the file, unless they are overridden
+by a particular model.
 
-Currently supported model options include: `name`, `app_label`, `verbose_name`, `legacy`, `tosca_description`, `validators`, `plural`, `singular`, and `gui_hidden`.
+Currently supported model options include: `name`, `app_label`, `verbose_name`,
+`legacy`, `tosca_description`, `validators`, `plural`, `singular`, and
+`gui_hidden`.
 
-The name option is a short name used to refer to your service. For example, in the Virtual Subscriber Gateway service, the name option is set to `vSG`. 
+The name option is a short name used to refer to your service. For example, in
+the Virtual Subscriber Gateway service, the name option is set to `vSG`.
 
 ```protobuf
 option name = "vSG"
 ```
 
-The app\_label option is a short programmatic name that does not need to be easily understood by humans. It should not include whitespaces, and should preferrably be all lowercase. If app\_label is not specified, then its value defaults to the name option described above.
+The app\_label option is a short programmatic name that does not need to be
+easily understood by humans. It should not include whitespace, and should
+preferably be all lowercase. If app\_label is not specified, then its value
+defaults to the name option described above.
 
 ```protobuf
 option app_label = "vsg"
@@ -120,18 +170,18 @@
 option verbose_name = "Virtual Subscriber Gateway Service";
 ```
 
-
 ```protobuf
 option legacy = "True"
 ```
 
 The legacy option is for services that require custom Python code in their
-generated models. Without this option set, for any given model (`VSGService`) the
-toolchain generates model classes in a self-contained file (`vsgservice.py`).
-With this option set, the toolchain generates the models in a file called
-`vsgservice_decl.py`. All of the models in this file have the suffix `_decl`.
-It is then up to the service developer to provide the final models. The code below gives
-an example of custom models that inherit from such intermediate `decl` models:
+generated models. Without this option set, for any given model (`VSGService`)
+the toolchain generates model classes in a self-contained file
+(`vsgservice.py`). With this option set, the toolchain generates the models in
+a file called `vsgservice_decl.py`. All of the models in this file have the
+suffix `_decl`. It is then up to the service developer to provide the final
+models. The code below gives an example of custom models that inherit from such
+intermediate `decl` models:
 
 ```python
 class VSGService(VSGService__decl):
@@ -143,7 +193,9 @@
 You can use the xproto `service_extender` target to generate a stub for your
 final model definitions.
 
-The plural and singular options provide the grammatically correct plural and singular forms of your model name to ensure that autogenerated API endpoints are valid.
+The plural and singular options provide the grammatically correct plural and
+singular forms of your model name to ensure that autogenerated API endpoints
+are valid.
 
 ```protobuf
 option singular = "slice" # Singular of slice is not slouse, as computed by Python's pattern.en library
@@ -151,16 +203,23 @@
 option plural = "ports" # Plural of ports is not portss
 ```
 
-The tosca\_description option is a description for the service entry in the autogenerated TOSCA schema.
+The tosca\_description option is a description for the service entry in the
+autogenerated TOSCA schema.
 
-The `validators` option contains a set of declarative object validators applied to every object of the present model when it is saved. Validators are a comma separated
-list of tuples, where the two elements of each tuple are separated by a ':'. The first element of the tuple is a reference to an XOS policy (described in another section of this document). The second element is an error message that is returned to an API client that attempts an operation that does not pass validation.
+The `validators` option contains a set of declarative object validators applied
+to every object of the present model when it is saved. Validators are a
+comma-separated list of tuples, where the two elements of each tuple are
+separated by a ':'. The first element of the tuple is a reference to an XOS
+policy (described in another section of this document). The second element is
+an error message that is returned to an API client that attempts an operation
+that does not pass validation.
 
 ```protobuf
 option validators = "instance_creator:Instance has no creator, instance_isolation: Container instance {obj.name} must use container image, instance_isolation_container_vm_parent:Container-vm instance {obj.name} must have a parent";
 ```
 
-The gui\_hidden option is a directive to the XOS GUI to exclude the present model from the default view provided to users.
+The gui\_hidden option is a directive to the XOS GUI to exclude the present
+model from the default view provided to users.
 
 ```protobuf
 option gui_hidden = True
@@ -168,9 +227,11 @@
 
 ### Field Options
 
-Options are also supported on a per-field basis. The following lists the currently available field options.
+Options are also supported on a per-field basis. The following lists the
+currently available field options.
 
-The null option specifies whether a field has to be set or not (equivalent to annotating the field as `required` or `optional`):
+The null option specifies whether a field has to be set or not (equivalent to
+annotating the field as `required` or `optional`):
 
 ```protobuf
 option null = True
@@ -212,7 +273,8 @@
 option gui_hidden = True;
 ```
 
-The set of valid values for a field, where each inner-tuple specifies equivalence classes (e.g., vm is equivalent to Virtual Machine):
+The set of valid values for a field, where each inner-tuple specifies
+equivalence classes (e.g., `vm` is equivalent to `Virtual Machine`):
 
 ```protobuf
 option choices = "(('vm', 'Virtual Machine'), ('container', 'Container'))";
@@ -233,11 +295,13 @@
 option content_type = "ip";
 ```
 
-Whether an assignment to a field is permitted, where the option setting is a named policy:
+Whether an assignment to a field is permitted, where the option setting is a
+named policy:
 
 ```protobuf
 option validators = "port_validator:Slice is not allowed to connect to network";
 ```
+
 How policies (e.g., `port_validator`) are specified is described below.
 
 Whether a field should be shown in the GUI:
@@ -246,16 +310,22 @@
 option gui_hidden = True;
 ```
 
-Identify a field that is used as key by the TOSCA engine. A model can have multiple keys in case we need a composite key:
+Identify a field that is used as a key by the TOSCA engine. A model can have
+multiple keys when a composite key is needed:
 
 ```protobuf
 option tosca_key = True;
 ```
-Identify a field that is used as key by the TOSCA engine. This needs to be used in case a composite key can be composed by different combination of fields:
+
+Identify a field that is used as a key by the TOSCA engine. Use this when a
+composite key can be composed of different combinations of fields:
+
 ```protobuf
 tosca_key_one_of = "<field_name>"
 ```
+
 For example, in the `ServiceInstanceLink` model:
+
 ```protobuf
 message ServiceInstanceLink (XOSBase) {
      required manytoone provider_service_instance->ServiceInstance:provided_links = 1 [db_index = True, null = False, blank = False, tosca_key=True];
@@ -265,51 +335,56 @@
      optional manytoone subscriber_network->Network:subscribed_links = 5 [db_index = True, null = True, blank = True, tosca_key_one_of=subscriber_service_instance];
 }
 ```
-the key is composed by `provider_service_instance` and one of `subscriber_service_instance`, `subscriber_service`, `subscriber_network`
+
+The key is composed of `provider_service_instance` and one of
+`subscriber_service_instance`, `subscriber_service`, or `subscriber_network`.
 
 ### Naming Conventions
 
-Model names should use _CamelCase_ without underscore. Model names should always
-be singular, never plural. For example: `Slice`, `Network`, `Site`.
+Model names should use _CamelCase_ without underscores. Model names should
+always be singular, never plural. For example: `Slice`, `Network`, `Site`.
 
-Sometimes a model is used to relate two other models, and
-should be named after the two models that it relates. For example, a model that
-relates the `Controller` and `User` models should be called `ControllerUser`.
+Sometimes a model is used to relate two other models, and should be named after
+the two models that it relates. For example, a model that relates the
+`Controller` and `User` models should be called `ControllerUser`.
 
-Field names use lower-case with underscores separating names. Examples of
-valid field names are: name, `disk_format`, `controller_format`.
+Field names use lower-case with underscores separating names. Examples of valid
+field names are: `name`, `disk_format`, `controller_format`.
 
-### Declarative vs Feedback State 
+### Declarative vs Feedback State
 
-By convention, the fields that make up a model are classified as
-holding one of two kinds of state: *declarative* and *feedback*.
+By convention, the fields that make up a model are classified as holding one of
+two kinds of state: *declarative* and *feedback*.
 
-Fields set by the operator to specify (declare) the expected state of
-CORD's underlying components are said to hold *declarative state*.
-In contrast, fields that record operational data reported from CORD's
-underlying (backend) components are said to hold *feedback state*.
+Fields set by the operator to specify (declare) the expected state of CORD's
+underlying components are said to hold *declarative state*.  In contrast,
+fields that record operational data reported from CORD's underlying (backend)
+components are said to hold *feedback state*.
 
-For more information about declarative and feedback state, and the
-role they play in synchornizing the data model with the backend
-components, read about the [Synchronizer Architecture](sync_arch.md). 
+For more information about declarative and feedback state, and the role they
+play in synchronizing the data model with the backend components, read about
+the [Synchronizer Architecture](sync_arch.md).
 
 ## Policies
 
-Policies are boolean expressions that can be associated with models. Consider two examples. In the first, `grant_policy` is a predicate applied to instances of the `Privilege` model. It is used to generate and inject security checks into the API.
+Policies are boolean expressions that can be associated with models. Consider
+two examples. In the first, `grant_policy` is a predicate applied to instances
+of the `Privilege` model. It is used to generate and inject security checks
+into the API.
 
 ```protobuf
 policy grant_policy < ctx.user.is_admin
                       | exists Privilege:Privilege.object_type = obj.object_type
                         & Privilege.object_id = obj.object_id
                         & Privilege.accessor_type = "User"
-                        & Privilege.accessor_id = ctx.user.id 
+                        & Privilege.accessor_id = ctx.user.id
                         & Privilege.permission = "role:admin" >
-    
+
 message Privilege::grant_policy (XOSBase) {
      required int32 accessor_id = 1 [null = False];
      required string accessor_type = 2 [null = False, max_length=1024];
      required int32 controller_id = 3 [null = True];
-     required int32 object_id = 4 [null = False];  
+     required int32 object_id = 4 [null = False];
      required string object_type = 5 [null = False, max_length=1024];
      required string permission = 6 [null = False, default = "all", max_length=1024];
      required string granted = 7 [content_type = "date", auto_now_add = True, max_length=1024];
@@ -321,11 +396,15 @@
 
 * The object on which the policy is invoked (e.g., `obj.object_type`).
 * The context in which the policy is invoked (e.g., `ctx.user`).
-* The data model as a whole (e.g., `exists Privilege:Privilege.accessor_id = ctx.user.id`).
+* The data model as a whole (e.g., `exists Privilege:Privilege.accessor_id =
+  ctx.user.id`).
 
-Available context information includes the principal that invoked the operation (`ctx.user`) and the type of access that principal is requesting (`ctx.write_access` and `ctx.read_access`).
+Available context information includes the principal that invoked the operation
+(`ctx.user`) and the type of access that principal is requesting
+(`ctx.write_access` and `ctx.read_access`).
 
-A second example involves the `Port` model and two related policies, `port_validator` and `port_policy`. 
+A second example involves the `Port` model and two related policies,
+`port_validator` and `port_policy`.
 
 ```protobuf
 policy port_validator < (obj.instance.slice in obj.network.permitted_slices.all()) | (obj.instance.slice = obj.network.owner) | obj.network.permit_all_slices >
@@ -342,30 +421,42 @@
 }
 ```
 
-Similar to the previous example, `port_policy` is associated with the `Port` model, but unlike `grant_policy` shown above (which is an expression over a set of objects in the data model), `port_policy` is defined by reference to two other policies: `instance_policy` and `network_policy` (not shown). 
+Similar to the previous example, `port_policy` is associated with the `Port`
+model, but unlike `grant_policy` shown above (which is an expression over a set
+of objects in the data model), `port_policy` is defined by reference to two
+other policies: `instance_policy` and `network_policy` (not shown).
 
-This example also shows the use of _validators_, which enforce invariants on how objects of a given model are used. In this case, policy `port_validator` checks to make sure the slice associated with a given port is included in the set of permitted networks.
+This example also shows the use of _validators_, which enforce invariants on
+how objects of a given model are used. In this case, policy `port_validator`
+checks to make sure the slice associated with a given port is included in the
+network's set of permitted slices.
 
 Policy expressions may include the following operators:
-conjunction ( `&` ),
-disjunction ( `|` ),
-equality ( `=` ),
-negation ( `not` ),
-set membership ( `in` ),
-implication ( `->` ),
-qualifiers ( `exists`, `forall` ),
-sub-policy reference ( `* <policy name>` ),
-python escapes (`{{ python expression }}`).
+
+* conjunction ( `&` ),
+* disjunction ( `|` ),
+* equality ( `=` ),
+* negation ( `not` ),
+* set membership ( `in` ),
+* implication ( `->` ),
+* qualifiers ( `exists`, `forall` ),
+* sub-policy reference ( `* <policy name>` ),
+* python escapes ({% raw %}`{{ python expression }}`{% endraw %}).
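For instance, a toy policy combining several of these operators might look
like the following (the policy and field names here are illustrative, not
taken from the XOS core models):

```protobuf
policy slice_ready < obj.enabled
                     & (ctx.user.is_admin | obj.creator = ctx.user)
                     & not obj.site_disabled >
```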
 
 ## Tool Chain
 
-The xosgenx tool converts a xproto file into an intermediate representation and passes it to a target, which in turn generates the output code. The target has access to a library of auxiliary functions implemented in Python. The target itself is written as a jinja2 template. The following figure depicts the processing pipeline.
+The xosgenx tool converts an xproto file into an intermediate representation and
+passes it to a target, which in turn generates the output code. The target has
+access to a library of auxiliary functions implemented in Python. The target
+itself is written as a jinja2 template. The following figure depicts the
+processing pipeline.
 
-<img src="toolchain.png" alt="Drawing" style="width: 500px;"/>
+![xosgenx toolchain](toolchain.png)
 
 ### Intermediate Representation (IR)
 
-The IR is a representation of a parsed xproto file in the form of nested Python dictionaries. Here is a description of its structure.
+The IR is a representation of a parsed xproto file in the form of nested Python
+dictionaries. Here is a description of its structure.
 
 ```protobuf
 "proto": {
@@ -383,17 +474,26 @@
 
 ### Library Functions
 
-xproto targets can use a set of library functions implemented in Python. These can be found in the file `lib.py` in the `genx/tool` directory. These functions are listed below:
+xproto targets can use a set of library functions implemented in Python. These
+can be found in the file `lib.py` in the `genx/tool` directory. These functions
+are listed below:
 
-- `xproto_unquote(string)` Unquotes a string. For example, `"This is a help string"` is converted into `This is a help string.`  
-  
-- `xproto_singularize(field)` Converts an English plural into its singular. It is extracted from the `singular` option for a field if such an option is specified. Otherwise, it performs the conversion automatically using the library `pattern.en`.
+* `xproto_unquote(string)` Unquotes a string. For example, `"This is a help
+  string"` is converted into `This is a help string`.
 
-- `xproto_pluralize(field)` The reverse of `xproto_singularize`.
+* `xproto_singularize(field)` Converts an English plural into its singular. It
+  is extracted from the `singular` option for a field if such an option is
+  specified. Otherwise, it performs the conversion automatically using the
+  library `pattern.en`.
+
+* `xproto_pluralize(field)` The reverse of `xproto_singularize`.
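To illustrate, here is a minimal sketch of what `xproto_unquote` does,
assuming it simply strips one layer of surrounding quotes (the actual
implementation in `lib.py` may differ):

```python
# Hypothetical re-implementation of xproto_unquote, based only on the
# behavior described above; the real version lives in genx/tool/lib.py.
def xproto_unquote(s):
    # Strip one layer of surrounding double quotes, if present.
    if len(s) >= 2 and s[0] == '"' and s[-1] == '"':
        return s[1:-1]
    return s

print(xproto_unquote('"This is a help string"'))  # This is a help string
```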
 
 ### Targets
 
-A target is a template written in jinja2 that takes the IR as input and generates code (a text file) as output. Common targets are Python, Protobufs, unit tests, and so on. The following example shows how to generate a GraphViz dot file from a set of xproto specifications:
+A target is a template written in jinja2 that takes the IR as input and
+generates code (a text file) as output. Common targets are Python, Protobufs,
+unit tests, and so on. The following example shows how to generate a GraphViz
+dot file from a set of xproto specifications:
 
 ```python
 digraph {
@@ -405,9 +505,11 @@
 }
 ```
 
-This template loops through all of the messages in a proto definition and then through the links in each message. For each link, it formats and outputs an edge in a graph in Graphviz dot notation.
+This template loops through all of the messages in a proto definition and then
+through the links in each message. For each link, it formats and outputs an
+edge in a graph in Graphviz dot notation.
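As a sketch of how such a target is rendered (assuming `jinja2` is installed;
the IR field names `messages`, `links`, and `peer` here are simplified
illustrations of the nested-dict structure, not the exact XOS IR):

```python
from jinja2 import Template

# A toy IR fragment in the nested-dict shape described earlier.
proto = {
    "messages": [
        {"name": "Slice", "links": [{"peer": {"name": "Site"}}]},
    ],
}

# A miniature target template that emits Graphviz dot edges.
target = Template(
    "digraph {\n"
    "{% for m in proto.messages %}{% for l in m.links %}"
    "  {{ m.name }} -> {{ l.peer.name }};\n"
    "{% endfor %}{% endfor %}"
    "}\n"
)
print(target.render(proto=proto))
```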
 
-```
+```python
 {{ proto }}
 ```
 
@@ -422,13 +524,18 @@
 {% endfor -%}
 ```
 
-The example target outputs a Python function that enumerates the ids of the objects from which the current object is linked.
+The example target outputs a Python function that enumerates the ids of the
+objects from which the current object is linked.
 
 ### Running xosgenx
 
-It is possible to run the xosgenx tool chain directly. This is useful, for example, when developing a new target.
+It is possible to run the xosgenx tool chain directly. This is useful, for
+example, when developing a new target.
 
-To do do, first setup the python virtual environment as described [here](local_env.md). Then drop an xproto file in your working directory. For example, you can copy-and-paste the following content into a file named `slice.xproto`:
+To do so, first set up the Python virtual environment as described
+[here](local_env.md). Then drop an xproto file in your working directory. For
+example, you can copy-and-paste the following content into a file named
+`slice.xproto`:
 
 ```protobuf
 message Slice::slice_policy (XOSBase) {
@@ -453,15 +560,20 @@
 }
 ```
 
-One of the existing targets is Django, which currently serves as the Object-Relational Mapping (ORM) tool used in CORD. To generate a Django model starting from this xproto file you can use:
+One of the existing targets is Django, which currently serves as the
+Object-Relational Mapping (ORM) tool used in CORD. To generate a Django model
+starting from this xproto file you can use:
 
-`xosgenx --target="django.xtarget" --output=. --write-to-file="model" --dest-extension="py" slice.xproto`
+```shell
+xosgenx --target="django.xtarget" --output=. --write-to-file="model" --dest-extension="py" slice.xproto
+```
 
-This generates a file called `slice.py` in your current directory. If there were multiple files, then it generates python Django models for each of them.
+This generates a file called `slice.py` in your current directory. If there
+are multiple input files, it generates Python Django models for each of them.
 
 You can print the tool’s syntax by running `xosgenx --help`.
 
-```
+```shell
 usage: xosgenx [-h] [--rev] --target TARGET [--output OUTPUT] [--attic ATTIC]
                [--kvpairs KV] [--write-to-file {single,model,target}]
                [--dest-file DEST_FILE | --dest-extension DEST_EXTENSION]
@@ -488,3 +600,4 @@
                         Output file extension (if write-to-file is set to
                         single)
 ```
+
diff --git a/docs/migrate_4.0.md b/docs/migrate_4.0.md
index ea7e698..784f865 100644
--- a/docs/migrate_4.0.md
+++ b/docs/migrate_4.0.md
@@ -4,37 +4,49 @@
 
 CORD-4.0 makes the following changes:
 
-- Renames `Tenant` to `ServiceInstance`.
-- Replaces CORD-3.0's many-to-one tenancy links with a new many-to-many link object called `ServiceInstanceLink`. 
-- Introduces the concept of service interfaces using the `InterfaceType` and `ServiceInterface` models.
-- Makes `ServiceDependency` a separate model not directly related to Tenancy models. 
+* Renames `Tenant` to `ServiceInstance`.
+* Replaces CORD-3.0's many-to-one tenancy links with a new many-to-many link
+  object called `ServiceInstanceLink`.
+* Introduces the concept of service interfaces using the `InterfaceType` and
+  `ServiceInterface` models.
+* Makes `ServiceDependency` a separate model not directly related to Tenancy
+  models.
 
-Note that for the purposes of this document, we still refer to some R-CORD models using the suffix "Tenant" rather than "ServiceInstance". As time permits, those R-CORD models will be renamed. New services are recommended to use the suffix ServiceInstance rather than Tenant.  
+Note that for the purposes of this document, we still refer to some R-CORD
+models using the suffix "Tenant" rather than "ServiceInstance". As time
+permits, those R-CORD models will be renamed. New services are recommended to
+use the suffix ServiceInstance rather than Tenant.
 
 ### Migrating existing Tenants
 
-The base class has been changed from `Tenant` to `ServiceInstance`. This may require an  `xproto` change, for example:
+The base class has been changed from `Tenant` to `ServiceInstance`. This may
+require an `xproto` change, for example:
 
     - message VTRTenant (Tenant){
     + message VTRTenant (ServiceInstance){
 
-Note that `TenantWithContainer` has not yet been renamed (at some point in the future it may become `ServiceInstanceWithContainer`), so models inheriting from `TenantWithContainer` are fine for now.
+Note that `TenantWithContainer` has not yet been renamed (at some point in the
+future it may become `ServiceInstanceWithContainer`), so models inheriting from
+`TenantWithContainer` are fine for now.
 
 ### Differences in ServiceInstance fields
 
 A few fields in ServiceInstance have been changed from what they were in `Tenant`:
- 
-- `Tenant.provider_service` --> `ServiceInstance.owner` 
+
+* `Tenant.provider_service` --> `ServiceInstance.owner`
 
 ### Creating links between Service Instances (`ServiceInstanceLink` objects)
 
-A common pattern in CORD-3.0 model policies is to create a new Tenant and link it to an existing model. For example,
+A common pattern in CORD-3.0 model policies is to create a new Tenant and link
+it to an existing model. For example,
 
     t = VRouterTenant(provider_service=vrouter_service,
                       subscriber_tenant=some_vsg_tenant)
     t.save()
 
-In the above example, the relationship between the new VRouterTenant and `some_vsg_tenant` was captured by the field `subscriber_tenant`. This field no longer exists in CORD-4.0 and needs to be replaced with a link:
+In the above example, the relationship between the new VRouterTenant and
+`some_vsg_tenant` was captured by the field `subscriber_tenant`. This field no
+longer exists in CORD-4.0 and needs to be replaced with a link:
 
     t = VRouterTenant(owner=vrouter_service)
     t.save()
@@ -44,7 +56,8 @@
 
 ### Traversing links between Service Instances
 
-In CORD-3.0, it was possible to determine the subscriber of a Tenant by looking at the Tenant's `subscriber_*` properties. For example,
+In CORD-3.0, it was possible to determine the subscriber of a Tenant by looking
+at the Tenant's `subscriber_*` properties. For example,
 
     subscriber = some_vrouter_tenant.subscriber_tenant
     vsg_tenant = VSGTenant.objects.get(id = subscriber.id)
@@ -59,16 +72,24 @@
             # now, do something with vsg_tenant
 
 You can also walk in the opposite direction:
-    
+
     for link in some_vsg_tenant.subscribed_links.all():
         if link.subscriber_service_instance:
             provider = link.provider_service_instance
             vrouter_tenant = provider.leaf_model
             # now, do something with vrouter_tenant
 
-Note that since the service instance graph now supports true many-to-many relations, it's common to have to use for loops as described above to cover cases where an object may be linked to many providers or many subscribers. If it's a known constraint that only one object may be linked, then it may be reasonable to omit the for loop and use `provided_links.first()` or `subscribed_links.first()` instead of `.all()`. 
+Note that since the service instance graph now supports true many-to-many
+relations, it's common to have to use for loops as described above to cover
+cases where an object may be linked to many providers or many subscribers. If
+it's a known constraint that only one object may be linked, then it may be
+reasonable to omit the for loop and use `provided_links.first()` or
+`subscribed_links.first()` instead of `.all()`.
 
-Also note that `leaf_model` is a property that will automatically cast any base object to its descendant class. For example, if you have a generic `ServiceInstance` object, and that `ServiceInstance` is really a `VSGTenant`, then `leaf_model` will perform that conversion for you automatically.
+Also note that `leaf_model` is a property that will automatically cast any base
+object to its descendant class. For example, if you have a generic
+`ServiceInstance` object, and that `ServiceInstance` is really a `VSGTenant`,
+then `leaf_model` will perform that conversion for you automatically.
 
 ### Removing links between Service Instances
 
@@ -76,11 +97,14 @@
 
     # delete all links between some_vsg_tenant and some_vrouter_tenant
     for link in ServiceInstanceLink.objects.filter(provider_service_instance_id=some_vsg_tenant.id, subscriber_service_instance_id=some_vrouter_instance.id):
-        link.delete() 
+        link.delete()
 
 ### Creating ServiceInterfaces
 
-`ServiceInterfaces` allow you to type the links between `ServiceInstances`. For example, if one `ServiceInstance` provides a `WAN` interface and another `ServiceInstance` uses a `LAN` interface, you can explicitly connect those two interfaces. These are currently created in Tosca. For example,
+`ServiceInterfaces` allow you to type the links between `ServiceInstances`. For
+example, if one `ServiceInstance` provides a `WAN` interface and another
+`ServiceInstance` uses a `LAN` interface, you can explicitly connect those two
+interfaces. These are currently created in TOSCA. For example,
 
     in#lanside:
       type: tosca.nodes.InterfaceType
@@ -112,17 +136,23 @@
             node: in#lanside
             relationship: tosca.relationships.IsType
 
-This example creates a `lanside` interface that is present in both the `VOLT` and `VSG` services. 
+This example creates a `lanside` interface that is present in both the `VOLT`
+and `VSG` services.
 
-Interfaces are currently optional, but may become mandatory in the next release. Until then, you can optionally associate links with interfaces. For example,
+Interfaces are currently optional, but may become mandatory in the next
+release. Until then, you can optionally associate links with interfaces. For
+example,
 
     interface_type = InterfaceType.objects.get(name="lanside", direction="in")
     interface = VSGService.Interfaces.get(interface_type=interface_type)
     t = VSGTenant(owner=vsg_service)
     t.save()
-    l = ServiceInstanceLink(provider_service_instance = t, 
+    l = ServiceInstanceLink(provider_service_instance = t,
                             provider_service_interface = interface,
                             subscriber_service_interface=some_volt_tenant)
     l.save()
 
-As `ServiceInterface` are not mandatory, it's suggested that you perform the other migration steps, and leave `ServiceInterfaces` until everything else is working.
\ No newline at end of file
+As `ServiceInterface` objects are not mandatory, it's suggested that you
+perform the other migration steps, and leave `ServiceInterfaces` until
+everything else is working.
+
diff --git a/docs/modeling_conventions.md b/docs/modeling_conventions.md
index 8f0371f..0d19677 100644
--- a/docs/modeling_conventions.md
+++ b/docs/modeling_conventions.md
@@ -1,33 +1,32 @@
-#Modeling Conventions
+# Modeling Conventions
 
-CORD adopts the following terminology and data modeling conventions
-(some of which is carried over from an earlier Django-based implementation).
+CORD adopts the following terminology and data modeling conventions (some of
+which are carried over from an earlier Django-based implementation).
 
 ## Terminology
 
-A *Model*  consists of a set of *Fields*. Each
-Field has a *Type* and each Type has a set of *Attributes*. Some of these
-Attributes are core (common across all Types) and some are
-Type-specific. *Relationships* between Models are expressed by Fields
-with one of a set of distinguished relationship-oriented Types (e.g,
+A *Model* consists of a set of *Fields*. Each Field has a *Type* and each Type
+has a set of *Attributes*. Some of these Attributes are core (common across all
+Types) and some are Type-specific. *Relationships* between Models are expressed
+by Fields with one of a set of distinguished relationship-oriented Types (e.g.,
 *OneToOneField*). Finally, an *Object* is an instance (instantiation) of a
-Model, where each Object has a unique primary key (or more precisely,
-a primary index into the table that implements the Model). By
-convention, that index/key is auto-generated for any Model that has
-not identified a separate unique primary key. The default primary key
-is always `id` for system level tables, and `pk` for model tables.
+Model, where each Object has a unique primary key (or more precisely, a primary
+index into the table that implements the Model). By convention, that index/key
+is auto-generated for any Model that has not identified a separate unique
+primary key. The default primary key is always `id` for system level tables,
+and `pk` for model tables.
 
 ## Naming Conventions
 
 Model names should use CamelCase without underscore. Model names should always
 be singular, never plural. For example: `Slice`, `Network`, `Site`.
 
-Sometimes a model is used to relate two other models, and
-should be named after the two models that it relates. For example, a model that
-relates the `Controller` and `User` models should be called `ControllerUser`.
+Sometimes a model is used to relate two other models, and should be named after
+the two models that it relates. For example, a model that relates the
+`Controller` and `User` models should be called `ControllerUser`.
 
-Field names use lower case with underscores separating names. Examples of
-valid field names are: name, `disk_format`, `controller_format`.
+Field names use lower case with underscores separating names. Examples of valid
+field names are: `name`, `disk_format`, `controller_format`.
 
 ## Field Types
 
@@ -56,21 +55,22 @@
 | help_text="..." | Provides some context-based help for the field; will show up in the GUI display.|
 | default=... | Allows a predefined default value to be specified.|
 | choices=CHOICE_LIST | An iterable (list or tuple). Allows the field to be filled in from an enumerated choice. For example, *ROLE_CHOICES = (('admin', 'Admin'), ('pi', 'Principal Investigator'), ('user','User'))*|
-| unique=True |	Requires that the field be unique across all entries.|
+| unique=True | Requires that the field be unique across all entries.|
 | blank=True | Allows the field to be present but empty.|
 | null=True | Allows the field to have a value of null if the field is blank.|
 | editable=False | If you would like to make this a readOnly field to the user.|
 | gui_hidden=True | Hide a particular field from the GUI. This can be specified for an entire model.|
 
-The following Field-level optional attributes should not be used (or use judiciously).
+The following Field-level optional attributes should not be used (or should be
+used judiciously).
 
 | Attribute          | Why                |
 |--------------------|--------------------|
 | primary_key        | Some of the plugins we use, particularly in the REST area, do not do well with CharField's as the primary key. In general, it is best to use the system primary key instead, and put a *db_index=True, unique=True* on the CharField you would have used.|
 | db_column, db_tablespace | Convention is to use the Field name as the db column, and use verbose_name if you want to change the display. For tablespace, all models should be defined within the application they are specified in. Overwriting the tablespace will make it more challenging for the next developer to find and fix any issues that might arise.|
 
-The following Field-level optional attributes are not currently used but may
-be used at some point.
+The following Field-level optional attributes are not currently used but may be
+used at some point.
 
 | Attribute          | Effect             |
 |--------------------|--------------------|
@@ -88,11 +88,11 @@
 | ManyToManyField    | Used to represent an N-to-N relationship. For example: Deployments may have 0 or more Sites; Sites may have 0 or more Deployments.|
 | OneToOneField      | Not currently in use, but would be useful for applications that wanted to augment a core class with their own additional settings. This has the same affect as a ForeignKey with unique=True.  The difference is that the reverse side of the relationship will always be 1 object (not a list).|
 | GenericForeignKey | Not currently in use, but can be used to specify a non specific relation to "another object." Meaning object A relates to any other object. This relationship requires a reverse attribute in the "other" object to see the relationship -- but would primarily be accessed through the GenericForeignKey owner Model.
-The nuances of these relationships is brought about by the additional optional attributes that can be ascribed to each Field.
+The nuances of these relationships are brought about by the additional optional attributes that can be ascribed to each Field. |
 
->Note that we should likely convert our Tags to use GenericForeignKey
->so that all objects can be extensible during development, but then
->converted/promoted to attributes once the model has stabilized.
+> Note: We should likely convert our Tags to use GenericForeignKey so that all
+> objects can be extensible during development, but then converted/promoted to
+> attributes once the model has stabilized.
 
 ## Optional Attribute Side Effects
 
@@ -107,8 +107,8 @@
 
 ## Avoid List
 
-Avoid using the following optional attributes as they can have adverse
-effects on data integrity and REST relationships:
+Avoid using the following optional attributes as they can have adverse effects
+on data integrity and REST relationships:
 
 | Attribute          | Effect             |
 |--------------------|--------------------|
@@ -126,5 +126,3 @@
 | description | Provide an explanation of the model. It's rendered in the GUI to help the operator.|
 | gui_hidden=True | Hide a particular model from the GUI. This can be specified for a single field.|
 
-
-
diff --git a/docs/modules/xosconfig.md b/docs/modules/xosconfig.md
index b908c23..ae3708f 100644
--- a/docs/modules/xosconfig.md
+++ b/docs/modules/xosconfig.md
@@ -1,26 +1,25 @@
 # XOS Configuration
 
-The `xosconfig` module is used to read, validate and distribute
-configuration information for all XOS-related components.
+The `xosconfig` module is used to read, validate and distribute configuration
+information for all XOS-related components.
 
 The code for this module can be found in `lib/xos-config`.
 
-The `xosconfig` module uses a combination of parameters provided
-via a `.yaml` file and a service discovery mechanism.
+The `xosconfig` module uses a combination of parameters provided via a `.yaml`
+file and a service discovery mechanism.
 
 ## How to Use This Module
 
-This module needs to be initialized once (and only once) when XOS
-starts. You can do it with:
+This module needs to be initialized once (and only once) when XOS starts. You
+can do it with:
 
 ```python
 from xosconfig import Config
 Config.init()
 ```
 
-By default, `xosconfig` looks for a configuration file
-in `/opt/xos/config.yaml`. Passing a
-different config file can be done with:
+By default, `xosconfig` looks for a configuration file in
+`/opt/xos/config.yaml`. Passing a different config file can be done with:
 
 ```python
 from xosconfig import Config
@@ -29,18 +28,19 @@
 
 ### Configuration Defaults
 
-Defaults are defined for some of the configuration items
-in `lib/xos-config/xosconfig/default.py`.
+Defaults are defined for some of the configuration items in
+`lib/xos-config/xosconfig/default.py`.
 
 ### Reading Data from the Configuration File
 
-To access static information defined in the `config.yaml` file, use
-the following API:
+To access static information defined in the `config.yaml` file, use the
+following API:
 
 ```python
 from xosconfig import Config
 res = Config.get('database')
 ```
+
 This call returns something like:
 
 ```python
@@ -50,28 +50,27 @@
 }
 ```
 
-Since the configuration supports a nested dictionary, it is possible to
-query directly nested values using `dot` notation. For example:
+Since the configuration supports a nested dictionary, it is possible to query
+directly nested values using `dot` notation. For example:
 
 ```python
 from xosconfig import Config
 res = Config.get('database.username')
 ```
 
-returns
+returns:
 
 ```python
 "test"
 ```
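
The dot-notation lookup shown above can be sketched as a simple walk over the
nested configuration dictionary. This is an illustrative approximation, not the
actual `xosconfig` implementation; the `get_path` helper and the sample
dictionary are hypothetical:

```python
# Hypothetical sketch of dot-notation lookup over a nested config dict;
# Config.get in xosconfig wraps logic along these lines.
def get_path(config, path):
    """Walk a nested dict using a dot-separated key path."""
    value = config
    for key in path.split("."):
        value = value[key]
    return value

config = {"database": {"username": "test", "password": "safe"}}
print(get_path(config, "database.username"))  # prints: test
```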
 
-**The configuration schema is defined in `/lib/xos-config/config-schema.yaml`**
+The configuration schema is defined in `/lib/xos-config/config-schema.yaml`.
 
 ### Reading Service Information
 
-XOS is composed of a set of services. To discover these services and
-their address, use the
-[registrator](https://github.com/gliderlabs/registrator) tool.
- 
+XOS is composed of a set of services. To discover these services and their
+addresses, use the [registrator](https://github.com/gliderlabs/registrator) tool.
+
 #### Retrieving a List of Services
 
 Invoking
@@ -95,10 +94,11 @@
 ]
 ```
 
->You can get the same information on the `head node` using:
->```bash
+> NOTE: You can get the same information on the `head node` using:
+>
+> ```bash
 > curl consul:8500/v1/catalog/services
->```
+> ```
 
 #### Retrieving Information for a Single Service
 
@@ -118,10 +118,11 @@
     'port': 5432
 }
 ```
->You can get the same information on the `head node` using:
->```bash
+
+> NOTE: You can get the same information on the `head node` using:
+>
+> ```bash
 > curl consul:8500/v1/catalog/service/xos-db
->```
+> ```
 
 #### Retrieving Endpoint for a Single Service
 
@@ -137,3 +138,4 @@
 ```python
 "http://172.18.0.4:5432"
 ```
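
The endpoint string above is just the service's address and port joined with a
scheme. A minimal sketch, assuming a discovery record shaped like the
`{'url': ..., 'port': ...}` dict shown earlier (the `make_endpoint` helper is
illustrative, not part of the xosconfig API):

```python
# Illustrative only: compose an endpoint URL from a discovery record
# shaped like the {'url': ..., 'port': ...} dict shown above.
def make_endpoint(info, scheme="http"):
    return "%s://%s:%s" % (scheme, info["url"], info["port"])

print(make_endpoint({"url": "172.18.0.4", "port": 5432}))  # http://172.18.0.4:5432
```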
diff --git a/docs/security_policies.md b/docs/security_policies.md
index 5f2842e..c49496d 100644
--- a/docs/security_policies.md
+++ b/docs/security_policies.md
@@ -1,53 +1,50 @@
 # Security Policies
 
-CORD security policies are implemented by XOS. These policies answer
-the question: *Who can do what?* The *who* in this case generally
-refers to a user (represented by a *User* model), but it can also
-refer to an API context. The *what* refers to two things: (1) the
-piece of information being accessed (a model, an object, or a field
-within that object), and (2) the access type (whether it is a read, a
-write, or a privilege update).
+CORD security policies are implemented by XOS. These policies answer the
+question: *Who can do what?* The *who* in this case generally refers to a user
+(represented by a *User* model), but it can also refer to an API context. The
+*what* refers to two things: (1) the piece of information being accessed (a
+model, an object, or a field within that object), and (2) the access type
+(whether it is a read, a write, or a privilege update).
 
-The mechanism for expressing these policies is provided by xproto’s
-policy extensions. The policies are enforced at the API boundary. When
-an API call is made, the appropriate policy is executed to determine
-whether or not access should be granted, and an audit trail is left
-behind. The policy enforcers are
-auto-generated by the generative toolchain as part of the model
-generation process.
+The mechanism for expressing these policies is provided by xproto’s policy
+extensions. The policies are enforced at the API boundary. When an API call is
+made, the appropriate policy is executed to determine whether or not access
+should be granted, and an audit trail is left behind. The policy enforcers are
+auto-generated by the generative toolchain as part of the model generation
+process.
 
 > Note: Auditing is still todo.
 
-Policies are generic logic expressions and can operate on any model
-or on the environment, but they frequently use the *Privilege* model.
-Specifically, when a policy cannot be expressed as a general principle
-(e.g., “a user can do whatever they want to a slice if he or she is
-its creator”) and instead depends on dynamic conditions, then it is
-encoded with the help of Privilege objects. For example, a Privilege
-object may be created to indicate that a user who is not a slice’s
-creator has admin privileges on it.
+Policies are generic logic expressions and can operate on any model or on the
+environment, but they frequently use the *Privilege* model. Specifically, when
+a policy cannot be expressed as a general principle (e.g., “a user can do
+whatever they want to a slice if they are its creator”) and instead depends
+on dynamic conditions, then it is encoded with the help of Privilege objects.
+For example, a Privilege object may be created to indicate that a user who is
+not a slice’s creator has admin privileges on it.
 
-The set of security policies is being bootstrapped into the following
-state:
+The set of security policies is being bootstrapped into the following state:
 
-* Privilege objects are automatically created for Slices. Most access
-control (e.g., to Networks and Instances) is via Slice, so this
-privilege covers the bulk of the access control.
+* Privilege objects are automatically created for Slices. Most access control
+  (e.g., to Networks and Instances) is via Slice, so this privilege covers the
+  bulk of the access control.
 
-* Privileges for other models need to be created manually via the API
-(e.g., Sites, Services).
+* Privileges for other models need to be created manually via the API (e.g.,
+  Sites, Services).
 
-* Any principal that has access to object *X* is also granted access
-to object *ControllerX*.
+* Any principal that has access to object *X* is also granted access to object
+  *ControllerX*.
 
-* There are three types of access permissions: *Read*, *Write*, and
-*Grant*. Grant arbitrates access to Privilege objects (e.g., a slice
-admin could grant slice admin privileges to a user).
+* There are three types of access permissions: *Read*, *Write*, and *Grant*.
+  Grant arbitrates access to Privilege objects (e.g., a slice admin could grant
+  slice admin privileges to a user).
 
 The current policies are defined in
-[core.xproto](https://github.com/opencord/xos/blob/master/xos/core/models/core.xproto). For
-example, the following `site_policy` controls access to instances of
-the `Site` model:
+[core.xproto](https://github.com/opencord/xos/blob/master/xos/core/models/core.xproto).
+
+For example, the following `site_policy` controls access to instances of the
+`Site` model:
 
 ```python
 // Everyone has read access
@@ -55,10 +52,11 @@
 policy site_policy <
          ctx.user.is_admin
          | (ctx.write_access -> exists Privilege:
-		 Privilege.object_type = "Site" & Privilege.object_id = obj.id
-		 & Privilege.accessor_id = ctx.user.id & Privilege.permission
-		 = "role:admin") >
+     Privilege.object_type = "Site" & Privilege.object_id = obj.id
+     & Privilege.accessor_id = ctx.user.id & Privilege.permission
+     = "role:admin") >
 ```
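
To make the semantics of this expression concrete, here is a hand-written
Python approximation of how an enforcer for `site_policy` might evaluate. It is
a sketch only, not the code the generative toolchain actually produces; the
dict-based shapes of `ctx`, `obj`, and `privileges` are assumptions:

```python
# Hand-written approximation of a site_policy enforcer (illustrative).
# is_admin short-circuits; otherwise "write_access -> exists Privilege"
# means reads are always allowed and writes need a matching Privilege.
def site_policy(ctx, obj, privileges):
    if ctx["user"]["is_admin"]:
        return True
    if not ctx["write_access"]:
        return True  # the implication holds vacuously for reads
    return any(
        p["object_type"] == "Site"
        and p["object_id"] == obj["id"]
        and p["accessor_id"] == ctx["user"]["id"]
        and p["permission"] == "role:admin"
        for p in privileges
    )
```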
 
 For more information about security policy definitions, read about
-[xproto and xosgenx](dev/xproto.md).
+[xproto and xosgenx](/xos/dev/xproto.md).
diff --git a/docs/tutorials/example_service.md b/docs/tutorials/example_service.md
index 6d1d07e..0cfa008 100644
--- a/docs/tutorials/example_service.md
+++ b/docs/tutorials/example_service.md
@@ -32,28 +32,31 @@
 of files, all located in the `xos` directory of the `exampleservice`
 repository. When checked out, these files live in the
 `CORD_ROOT/orchestration/xos_services/exampleservice` directory on
-your local development machine. 
+your local development machine.
 
 | Component | Source Code (https://github.com/opencord/exampleservice/) |
 |----------|-----------------------------------------------------|
 | Data Model  | `xos/synchronizer/models/exampleservice.xproto` |
 | Synchronizer Program | `xos/synchronizer/exampleservice-synchronizer.py` `xos/synchronizer/exampleservice_config.yaml` `xos/synchronizer/model-deps` `xos/synchronizer/Dockerfile.synchronizer` |
 | Sync Steps  | `xos/synchronizer/steps/sync_exampletenant.py` `xos/synchronizer/steps/exampletenant_playbook.yaml` |
-| Model Policies | `xos/synchronizer/model_policies/model_policy_exampleserviceinstance.py` | 
-| On-Boarding Spec	| `xos/exampleservice-onboard.yaml`
+| Model Policies | `xos/synchronizer/model_policies/model_policy_exampleserviceinstance.py` |
+| On-Boarding Spec | `xos/exampleservice-onboard.yaml` |
 
 Earlier releases (3.0 and before) required additional files (mostly Python
 code) to on-board a service, including a REST API, a TOSCA API, and an Admin
 GUI. These components are now auto-generated from the models rather than coded
 by hand, although it is still possible to [extend the
-GUI](../xos-gui/developer/README.md).
+GUI](/xos-gui/developer/README.md).
 
-In addition to implementing these service-specific files, the final
-step to on-boarding a service requires you to modify an existing
-(or write a new)
-[service profile](https://guide.opencord.org/service-profiles.html).
-This tutorial uses the existing R-CORD profile for illustrative
-purposes. These profile definitions currently live in the (https://github.com/opencord/rcord) repository. Additional related playbooks reside in the (https://github.com/opencord/platform-install/) for historical reasons.
+In addition to implementing these service-specific files, the final step to
+on-boarding a service requires you to modify an existing (or write a new)
+[service profile](/service-profiles.md). This tutorial uses the existing
+R-CORD profile for illustrative purposes. These profile definitions currently
+live in the
+[https://github.com/opencord/rcord](https://github.com/opencord/rcord)
+repository. Additional related playbooks reside in the
+[https://github.com/opencord/platform-install/](https://github.com/opencord/platform-install/)
+for historical reasons.
 
 ## Development Environment
 
@@ -69,21 +72,28 @@
 
 ## Create the synchronizer directory
 
-The synchronizer directory holds the model declarations and the synchronizer for the service. Usually this directory is `xos/synchronizer`. This tutorial will first walk through creating the models, and then discuss creating the synchronizer itself.
+The synchronizer directory holds the model declarations and the synchronizer
+for the service. Usually this directory is `xos/synchronizer`. This tutorial
+will first walk through creating the models, and then discuss creating the
+synchronizer itself.
 
-Make a new root directory for your service, and within that directory,
-create an `xos` subdirectory. The `xos` subdirectory will hold all xos-related files for your service.
+Make a new root directory for your service, and within that directory, create
+an `xos` subdirectory. The `xos` subdirectory will hold all xos-related files
+for your service.
 
-Within the `xos` subdirectory, create a `synchronizer` subdirectory. The `synchronizer` subdirectory holds the subset of files that end up built into the `synchronizer` container image. 
+Within the `xos` subdirectory, create a `synchronizer` subdirectory. The
+`synchronizer` subdirectory holds the subset of files that end up built into
+the `synchronizer` container image.
 
 ## Define a Model
 
-Your models live in a file named `exampleservice.xproto` in your service's `xos/synchronizer/models` directory. This
-file encodes the models in the service in a format called
-[xproto](../xos/dev/xproto.md) which is a combination of Google Protocol
-Buffers and some XOS-specific annotations to facilitate the generation of
-service components, such as the GRPC and REST APIs, security policies, and
-database models among other things. It consists of two parts:
+Your models live in a file named `exampleservice.xproto` in your service's
+`xos/synchronizer/models` directory. This file encodes the models in the
+service in a format called [xproto](../xos/dev/xproto.md), which is a
+combination of Google Protocol Buffers and some XOS-specific annotations to
+facilitate the generation of service components, such as the GRPC and REST
+APIs, security policies, and database models among other things. It consists of
+two parts:
 
 * The XPROTO Header, which contains options that are global to the rest of the file.
 
@@ -96,22 +106,28 @@
 
 Some options are typically specified at the top of your xproto file:
 
-```
+```protobuf
 option name = "exampleservice";
-option app_label = "exampleservice";
+option app_label = "exampleservice";
 ```
 
-`name` specifies a name for your service. This is used as a default in several places, for example it will be used for `app_label` if you don't specifically choose an `app_label`. Normally it suffices to set this the name of your service, lower case, with no spaces.
+`name` specifies a name for your service. This is used as a default in several
+places, for example it will be used for `app_label` if you don't specifically
+choose an `app_label`. Normally it suffices to set this to the name of your
+service, lower case, with no spaces.
 
-`app_label` configures the internal xos database application that is attached to these models. As with `name`, it suffices to set this the name of your service, lower case, with no spaces.
+`app_label` configures the internal xos database application that is attached
+to these models. As with `name`, it suffices to set this to the name of your
+service, lower case, with no spaces.
 
 ### Service Model (Service-wide state)
 
 A Service model extends (inherits from) the XOS base *Service* model.  At its
-head is a set of option declarations such as `verbose_name`, which specifies a human-readable name for the service model. Then follows a set of field
+head is a set of option declarations such as `verbose_name`, which specifies a
+human-readable name for the service model. Then follows a set of field
 definitions.
 
-```
+```protobuf
 message ExampleService (Service){
     option verbose_name = "Example Service";
     required string service_message = 1 [help_text = "Service Message to Display", max_length = 254, null = False, db_index = False, blank = False];
@@ -123,7 +139,7 @@
 Your ServiceInstance model will extend the core `TenantWithContainer` class,
 which is a Tenant that creates a VM instance:
 
-```
+```protobuf
 message ExampleServiceInstance (TenantWithContainer){
      option verbose_name = "Example Service Instance";
      required string tenant_message = 1 [help_text = "Tenant Message to Display", max_length = 254, null = False, db_index = False, blank = False];
@@ -133,7 +149,7 @@
 The following field specifies the message that will be displayed on a
 per-Tenant basis:
 
-```
+```python
 tenant_message = models.CharField(max_length=254, help_text="Tenant Message to Display")
 ```
 
@@ -152,15 +168,18 @@
 your service.
 
 > Note: Earlier versions included a tool to track model dependencies, but today
-> it is sufficient to create a file named `model-deps` with the contents:` {}`.
+> it is sufficient to create a file named `model-deps` with the contents: `{}`.
 
-The Synchronizer has three parts: The synchronizer python program, model policies which enact changes on the data model, and a playbook (typically Ansible) that configures the underlying system. The following describes how to construct these.
+The Synchronizer has three parts: The synchronizer python program, model
+policies which enact changes on the data model, and a playbook (typically
+Ansible) that configures the underlying system. The following describes how to
+construct these.
 
 ### Synchronizer Python Program
 
 First, create a file named `exampleservice-synchronizer.py`:
 
-```
+```python
 #!/usr/bin/env python
 # Runs the standard XOS synchronizer
 
@@ -184,7 +203,7 @@
 named `exampleservice_config.yaml`, which specifies various configuration and
 logging options:
 
-```
+```yaml
 name: exampleservice
 accessor:
   username: xosadmin@opencord.org
@@ -201,7 +220,9 @@
 models_dir: "/opt/xos/synchronizers/exampleservice/models"
 ```
 
-Make sure the `name` in your synchronizer config file is that same as the app_label in your `xproto` file. Otherwise the models won't be dynamically loaded correctly.
+Make sure the `name` in your synchronizer config file is the same as the
+`app_label` in your `xproto` file. Otherwise the models won't be dynamically
+loaded correctly.
 
 > NOTE: Historically, synchronizers were named “observers”, so
 > `s/observer/synchronizer/` when you come upon this term in the XOS code and
@@ -210,7 +231,7 @@
 Second, create a directory within your synchronizer directory named `steps`. In
 steps, create a file named `sync_exampleserviceinstance.py`:
 
-```
+```python
 import os
 import sys
 from synchronizers.new_base.SyncInstanceUsingAnsible import SyncInstanceUsingAnsible
@@ -227,7 +248,7 @@
 `SyncInstanceUsingAnsible` which will run the Ansible playbook in the Instance
 VM.
 
-```
+```python
 class SyncExampleServiceInstance(SyncInstanceUsingAnsible):
 
     provides = [ExampleServiceInstance]
@@ -271,7 +292,7 @@
 
 Third, create a `run-from-api.sh` file for your synchronizer.
 
-```
+```shell
 python exampleservice-synchronizer.py
 ```
 
@@ -279,7 +300,7 @@
 `Dockerfile.synchronizer` and place it in the `synchronizer` directory with the
 other synchronizer files:
 
-```
+```dockerfile
 FROM xosproject/xos-synchronizer-base:candidate
 
 COPY . /opt/xos/synchronizers/exampleservice
@@ -319,25 +340,33 @@
 
 CMD bash -c "cd /opt/xos/synchronizers/exampleservice; ./run-from-api.sh"
 ```
+
 ### Synchronizer Model Policies
 
-Model policies are used to implement change within the data model. When an `ExampleServiceInstance` object is saved, we want an `Instance` to be automatically created that will hold the ExampleServiceInstance's web server. Fortunately, there's a base class that implements this functionality for us, so minimal coding needs to be done at this time. Create the `model_policies` subdirectory and within that subdirectory create the file `model_policy_exampleserviceinstance.py`:
+Model policies are used to implement change within the data model. When an
+`ExampleServiceInstance` object is saved, we want an `Instance` to be
+automatically created that will hold the ExampleServiceInstance's web server.
+Fortunately, there's a base class that implements this functionality for us, so
+minimal coding needs to be done at this time. Create the `model_policies`
+subdirectory and within that subdirectory create the file
+`model_policy_exampleserviceinstance.py`:
 
-```
-from synchronizers.new_base.modelaccessor import *
-from synchronizers.new_base.model_policies.model_policy_tenantwithcontainer import TenantWithContainerPolicy
-
-class ExampleServiceInstancePolicy(TenantWithContainerPolicy):
+```python
+from synchronizers.new_base.modelaccessor import *
+from synchronizers.new_base.model_policies.model_policy_tenantwithcontainer import TenantWithContainerPolicy
+
+class ExampleServiceInstancePolicy(TenantWithContainerPolicy):
     model_name = "ExampleServiceInstance"
 ```
 
 ### Synchronizer Playbooks
 
-In the same `steps` directory where you created `sync_exampleserviceinstance.py`, create an Ansible playbook named
+In the same `steps` directory where you created
+`sync_exampleserviceinstance.py`, create an Ansible playbook named
 `exampleserviceinstance_playbook.yml` which is the “master playbook” for this
 set of plays:
 
-```
+```yaml
 # exampletenant_playbook
 
 - hosts: "{{ instance_name }}"
@@ -353,7 +382,7 @@
 This sets some basic configuration, specifies the host this Instance will run
 on, and the two variables that we’re passing to the playbook.
 
-```
+```yaml
 roles:
   - install_apache
   - create_index
@@ -373,7 +402,7 @@
 directory, a file named `main.yml`. This will contain the set of plays for the
 `install_apache` role. To that file add the following:
 
-```
+```yaml
 - name: Install apache using apt
   apt:
     name=apache2
@@ -385,7 +414,7 @@
 Next, within `create_index`, create two directories, `tasks` and `templates`.
 In `templates`, create a file named `index.html.j2`, with the contents:
 
-```
+```html
 ExampleService
  Service Message: "{{ service_message }}"
  Tenant Message: "{{ tenant_message }}"
@@ -396,7 +425,7 @@
 
 In the `tasks` directory, create a file named `main.yml`, with the contents:
 
-```
+```yaml
 - name: Write index.html file to apache document root
   template:
     src=index.html.j2
@@ -421,7 +450,8 @@
 for your synchronizer. For example, here is the on-boarding recipe for
   *ExampleService*:
 
-```
+```yaml
+---
 tosca_definitions_version: tosca_simple_yaml_1_0
 
 description: Onboard the exampleservice
@@ -442,14 +472,14 @@
 ```
 
 This is a legacy recipe that (when executed) on-boards *ExampleService* in the
-sense that it registers the service with the system, but it does not provision the service or create instances of the service. These latter steps can be done
-through CORD's GUI or REST API, or by submitting yet other TOSCA
-workflows to a running CORD POD (all based on end-points that are
-auto-generated from these on-boarded models). Additional information
-on how to provision and use the service is given in the last section
-of this tutorial.
+sense that it registers the service with the system, but it does not provision
+the service or create instances of the service. These latter steps can be done
+through CORD's GUI or REST API, or by submitting yet other TOSCA workflows to a
+running CORD POD (all based on end-points that are auto-generated from these
+on-boarded models). Additional information on how to provision and use the
+service is given in the last section of this tutorial.
 
-NOTE: This file may soon be removed. 
+NOTE: This file may soon be removed.
 
 ## Include the Service in a Profile
 
@@ -474,7 +504,7 @@
 the build system at the model and synchronizer specifications you've
 just defined.
 
-```
+```yaml
 xos_services:
   ... (lines omitted)...
   - name: exampleservice
@@ -488,7 +518,7 @@
 uses the `trusty-server-multi-nic` that is included in R-CORD
 for other purposes.
 
-```
+```yaml
 xos_images:
   - name: "trusty-server-multi-nic"
     url: "http://www.vicci.org/opencloud/trusty-server-cloudimg-amd64-disk1.img.20170201"
@@ -509,11 +539,10 @@
 
 * Add the service's synchronizer image to `build/docker_images.yml`
 
-* Because the build system is integrated with the `git` and `repo`
-tools, if your service is not already checked into
-`gerrit.opencord.org`, you will also need to add the service to
-the manifest file `CORD_ROOT/.repo/manifest.xml`.
-Then run `git init` in the service’s source tree.
+* Because the build system is integrated with the `git` and `repo` tools, if
+  your service is not already checked into `gerrit.opencord.org`, you will also
+  need to add the service to the manifest file `CORD_ROOT/.repo/manifest.xml`.
+  Then run `git init` in the service’s source tree.
 
 ## Provision, Control, and Use the Service
 
@@ -532,10 +561,12 @@
 POD and executed.
 
 The *ExampleService*  template is defined by the following file:
-```
+
+```shell
 build/platform-install/roles/exampleservice-config/templates/test-exampleservice.yaml.j2
 ```
-It is an historical artifact that this template is in the 
+
+It is an historical artifact that this template is in the
 `build/platform-install/roles/exampleservice-config/templates`
 directory. Templates for new services are instead located in
 `build/platform-install/roles/cord-profile/templates`. For example,
@@ -543,11 +574,12 @@
 similar to the one used for *ExampleService*.
 
 The first part of `test-exampleservice.yaml.j2` includes some
-core object reference that *ExampleService* uses, for example, 
+core object references that *ExampleService* uses, for example,
 the `trusty-server-multi-nic` image, the `small` flavor, and
 both the `management_network` and the `public_network`.
 
-```
+```yaml
+---
 tosca_definitions_version: tosca_simple_yaml_1_0
 
 imports:
@@ -609,7 +641,8 @@
 
 This is followed by the specification of a `private` network used by
 *ExampleService* :
-```
+
+```yaml
     exampleservice_network:
       type: tosca.nodes.Network
       properties:
@@ -628,7 +661,7 @@
 instances and networks) in which *ExampleService*  runs. These
 definitions reference the dependencies established above.
 
-```
+```yaml
 # CORD Slices
     {{ site_name }}_exampleservice:
       description: Example Service Slice
@@ -687,7 +720,7 @@
 *ExampleService* (`exampleservice`) and spins up a Service Instance
 on behalf of the first tenant (`exampletenant1`).
 
-```
+```yaml
    exampleservice:
       type: tosca.nodes.ExampleService
       properties:
@@ -708,10 +741,10 @@
             node: exampleservice
             relationship: tosca.relationships.BelongsToOne
 ```
-			
+
 Note that these definitions initialize the `service_message` and
-`tenant_message`, respectively. As a consequence, sending an
-HTTP GET request to *ExampleService* will result in the response:
-`hello world`. Subsequently, the user can interact with
-*ExampleService* via CORD's GUI or REST API to change those
-values.
+`tenant_message`, respectively. As a consequence, sending an HTTP GET request
+to *ExampleService* will result in the response: `hello world`. Subsequently,
+the user can interact with *ExampleService* via CORD's GUI or REST API to
+change those values.
diff --git a/docs/xos_internals.md b/docs/xos_internals.md
index 08807d5..4d261eb 100644
--- a/docs/xos_internals.md
+++ b/docs/xos_internals.md
@@ -1,9 +1,8 @@
 # XOS Containers
 
-XOS is made up of a set of Docker containers that cooperate to provide
-platform controller functionaly, including the data model,
-synchronizers, and northbound APIs. The following is an inventory of
-those containers:
+XOS is made up of a set of Docker containers that cooperate to provide platform
+controller functionality, including the data model, synchronizers, and
+northbound APIs. The following is an inventory of those containers:
 
 | Name | Description | Ports |
 | ---- | ----------- | ----- |
@@ -15,9 +14,10 @@
 | xos-ws | Listens to `redis` events and propagates them over web-sockets for notifications| 3000|
 | xos-chameleon | Northbound REST interface, accessible at `/xosapi/v1/..` (`swagger` is published at `/apidocs/`)| 3000|
 
-Additionally some infrastructure helpers such as `consul` and `registrator` are  deployed to facilitate service discovery.
+Additionally some infrastructure helpers such as `consul` and `registrator` are
+deployed to facilitate service discovery.
 
-All the communication between containers happen over `gRPC` except for `xos-gui` 
-where it is a combination of REST and web-socket.
+All the communication between containers happens over `gRPC` except for
+`xos-gui` where it is a combination of REST and web-socket.
 
 ![xos-containers](./static/xos_containers.png)
diff --git a/docs/xos_vtn.md b/docs/xos_vtn.md
index fc44cd5..044f1c5 100644
--- a/docs/xos_vtn.md
+++ b/docs/xos_vtn.md
@@ -1,107 +1,213 @@
 # VTN and Service Composition
 
-CORD's support for service composition depends on VTN. The following
-focuses on the interface VTN exports, and how XOS interacts with VTN
-interconnect services. For more information about VTN’s internals and its
-relationship to the CORD fabric, see the
-[Trellis](https://wiki.opencord.org/display/CORD/Trellis%3A+CORD+Network+Infrastructure) documentation.
+CORD's support for service composition depends on VTN. The following focuses on
+the interface VTN exports, and how XOS interacts with VTN interconnect
+services. For more information about VTN’s internals and its relationship to
+the CORD fabric, see the
+[Trellis](https://wiki.opencord.org/display/CORD/Trellis%3A+CORD+Network+Infrastructure)
+documentation.
 
 ## VTN-Provided Networks
 
-Each Service is connected to one or two *Management Networks* and one
-or more *Data Networks*. VTN implements both types of networks.
+Each Service is connected to one or two *Management Networks* and one or more
+*Data Networks*. VTN implements both types of networks.
 
-VTN defines two management networks that instances can join: 
+VTN defines two management networks that instances can join:
 
-* **MANAGEMENT_LOCAL:** This puts the instance on the 172.27.0.0/24 network, which is limited to the local compute node. The compute node's root context is always 172.27.0.1 on this local network. Synchronizers currently use this network to SSH into an Instance to configure it. They SSH to the compute node's root context, and then from there into the Instance.
+* **MANAGEMENT_LOCAL:** This puts the instance on the 172.27.0.0/24 network,
+  which is limited to the local compute node. The compute node's root context
+  is always 172.27.0.1 on this local network. Synchronizers currently use this
+  network to SSH into an Instance to configure it. They SSH to the compute
+  node's root context, and then from there into the Instance.
 
-* **MANAGEMENT_HOST:** This puts the instance on the 10.1.0.0/24 network that does span compute nodes and does offer end-to-end connectivity to the head node. This network currently runs over the physical management network.
+* **MANAGEMENT_HOST:** This puts the instance on the 10.1.0.0/24 network that
+  does span compute nodes and does offer end-to-end connectivity to the head
+  node. This network currently runs over the physical management network.
 
-These two management networks are completely independent. A Slice can choose to participate in either of them, neither of them, or both of them. In the latter case, each instance has two management interfaces, one with a 172.27.0.0/24 address and one with a 10.1.0.0/24 address. Instances on different compute nodes can talk to each other on MANAGEMENT_HOST, but they cannot talk to each other on MANAGEMENT_LOCAL.
+These two management networks are completely independent. A Slice can choose to
+participate in either of them, neither of them, or both of them. In the latter
+case, each instance has two management interfaces, one with a 172.27.0.0/24
+address and one with a 10.1.0.0/24 address. Instances on different compute
+nodes can talk to each other on MANAGEMENT_HOST, but they cannot talk to each
+other on MANAGEMENT_LOCAL.
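
The split between the two subnets can be illustrated with a quick check using
Python's standard `ipaddress` module. The subnet values are the CORD-in-a-Box
defaults from the text; the helper itself is hypothetical:

```python
# Which VTN management network does an address belong to?
# Subnets are the CORD-in-a-Box defaults described above.
import ipaddress

MANAGEMENT_LOCAL = ipaddress.ip_network("172.27.0.0/24")
MANAGEMENT_HOST = ipaddress.ip_network("10.1.0.0/24")

def management_network(addr):
    ip = ipaddress.ip_address(addr)
    if ip in MANAGEMENT_LOCAL:
        return "MANAGEMENT_LOCAL"
    if ip in MANAGEMENT_HOST:
        return "MANAGEMENT_HOST"
    return None

print(management_network("172.27.0.1"))  # the compute node's root context
```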
 
->Note: These two management networks are entirely configurable: 172.27.0.0/24 and 10.1.0.0/24 are what been set for CORD-in-a-Box but need not necessarily be the same on a physical POD.
+> NOTE: These two management networks are entirely configurable: 172.27.0.0/24
+> and 10.1.0.0/24 are what has been set for CORD-in-a-Box but need not
+> necessarily be the same on a physical POD.
 
-The rest of this guide focuses on the Data Network, which connects all the instances in a Service to each other (by default), but optionally can also connect that network to other networks in CORD. VTN is responsible for both implementing the base (default) Data Network and for splicing that network to other networks. These “other” networks are provided by both VTN (i.e., the other network is some other Service’s base Data Network) and by other CORD services (e.g., vOLT, vRouter).
+The rest of this guide focuses on the Data Network, which connects all the
+instances in a Service to each other (by default), but optionally can also
+connect that network to other networks in CORD. VTN is responsible for both
+implementing the base (default) Data Network and for splicing that network to
+other networks. These “other” networks are provided by both VTN (i.e., the
+other network is some other Service’s base Data Network) and by other CORD
+services (e.g., vOLT, vRouter).
 
-When first created, each Data Network is *Private* by default, analogous to a private network in OpenStack (i.e., it connects only the Instances in the Service). The network can remain private, or it can be connected onto one or more other networks. These other networks can themselves be the Data Network of some other Service, but it is also possible to connect a Service’s Data Network to the public Internet, making the Instances Internet-routable. This framing of how to connect a Service to the public Internet is unusual (i.e., one connects to a public network by augmenting an existing a Private network), but it is helpful to view the public Internet being just like any other network in CORD: it is provided by some service. In the case of the public Internet, this service is currently provided by vRouter.
+When first created, each Data Network is *Private* by default, analogous to a
+private network in OpenStack (i.e., it connects only the Instances in the
+Service). The network can remain private, or it can be connected onto one or
+more other networks. These other networks can themselves be the Data Network of
+some other Service, but it is also possible to connect a Service’s Data Network
+to the public Internet, making the Instances Internet-routable. This framing of
+how to connect a Service to the public Internet is unusual (i.e., one connects
+to a public network by augmenting an existing Private network), but it is
+helpful to view the public Internet as just another network in CORD:
+it is provided by some service. In the case of the public Internet, this
+service is currently provided by vRouter.
 
 > Note: While it is natural to expect an Instance to connect directly to a
-Public network (as opposed to first connecting to a private network and then splicing that network to a public network), that exploits a race condition involving Neutron. Should Neutron have had the opportunity to come up before the public network is connected, a private address rather than a public address would be assigned to each Instance's Port. Instead, our approach is that (a) the private network be established as a first step, and (b) it is possible to assign a second address (in this case public) to each Port. This allows a private network to be made public at any time, and in general, being able to assign multiple addresses to a Port is a requirement.
+> Public network (as opposed to first connecting to a private network and then
+> splicing that network to a public network), doing so exploits a race
+> condition involving Neutron. If Neutron comes up before the public network
+> is connected, a private address rather than a public address is assigned to
+> each Instance's Port. Instead, our approach is that (a) the private network
+> is established as a first step, and (b) it is possible to assign a second
+> address (in this case public) to each Port. This allows a private network to
+> be made public at any time, and in general, being able to assign multiple
+> addresses to a Port is a requirement.
 
-In addition to adding public connectivity to a private network, it is also possible to create a network that is public by default. This is done by setting the network's template's `VTN_KIND` to `public`.
+In addition to adding public connectivity to a private network, it is also
+possible to create a network that is public by default. This is done by setting
+the network's template's `VTN_KIND` to `public`.
 
 ## Interconnecting Networks
 
-VTN programs the underlying software switches (e.g., OvS) to forward packets to/from the Ports of the Service’s Instances. Connecting a network onto an existing Data Network means Instances in the two networks can exchange packets according to the parameters of the splicing operation (see below) and it may result in a new address being assigned to each Port.
+VTN programs the underlying software switches (e.g., OvS) to forward packets
+to/from the Ports of the Service’s Instances. Connecting a network onto an
+existing Data Network means Instances in the two networks can exchange packets
+according to the parameters of the splicing operation (see below) and it may
+result in a new address being assigned to each Port.
 
-A Service’s Data Network is interconnected to other networks as a consequence or the corresponding Services being composed in the Service Graph. There are four cases, depending on whether the two services being composed are implemented in the network control plane or the network data plane. We start with the example most people assume (the two services implement VNFs in the data plane), and then show how the same principle generalizes to other possible compositions.
+A Service’s Data Network is interconnected to other networks as a consequence
+of the corresponding Services being composed in the Service Graph. There are
+four cases, depending on whether the two services being composed are
+implemented in the network control plane or the network data plane. We start
+with the example most people assume (the two services implement VNFs in the
+data plane), and then show how the same principle generalizes to other possible
+compositions.
 
-When a dependency is established between a pair of data plane services, denoted A → B, it causes the Data Networks of A and B to be interconnected according to attribute’s assigned to their respective Data Networks. (To simplify the discussion we assume each Service has one Slice and each Slice has one Network. In practice, it is necessary to specify which of the Service’s Networks are being interconnected.) Two attribute vectors are (currently) defined:
+When a dependency is established between a pair of data plane services, denoted
+A → B, it causes the Data Networks of A and B to be interconnected according to
+attributes assigned to their respective Data Networks. (To simplify the
+discussion we assume each Service has one Slice and each Slice has one Network.
+In practice, it is necessary to specify which of the Service’s Networks are
+being interconnected.) Two attribute vectors are (currently) defined:
 
 * *Direct* vs *Indirect*
 
 * *Unidirectional* vs *Bidirectional*
 
-The first defines whether A uses *Direct* addressing for the Instances of B (using a unique address for each Instance) or *Indirect* addressing (using a Service-wide address, where VTN implements load balancing by forwarding the packet to a specific Instance). The second defines whether communication is *Unidirectional* (A can send packets to B but not vice versa) or *Bidirectional* (A and B can send packets to each other).
+The first defines whether A uses *Direct* addressing for the Instances of B
+(using a unique address for each Instance) or *Indirect* addressing (using a
+Service-wide address, where VTN implements load balancing by forwarding the
+packet to a specific Instance). The second defines whether communication is
+*Unidirectional* (A can send packets to B but not vice versa) or
+*Bidirectional* (A and B can send packets to each other).
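+
+As a concrete sketch, assuming the JSON shapes described under the
+ServiceNetwork details later in this section (the UUIDs here are purely
+illustrative), a bidirectional dependency of A on B would show up as an entry
+in the `providers` list of A's network:
+
+```json
+{
+   "ServiceNetwork":{
+      "id":"e4974238-448c-4b5c-9a45-b27c9477eb6a",
+      "type":"PRIVATE",
+      "providers":[
+         {
+            "id":"b8ba3d85-1dec-49f9-8503-6f1b90399152",
+            "bidirectional":true
+         }
+      ]
+   }
+}
+```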
 
-Technically, these attributes are associated with Service B’s Network rather than with the interconnection of A to B. When A interconnects with B, the properties of B’s Data Network dictate the terms of the interconnection. But as we expand the capability, it may be possible that attributes of the interconnection will be specified with the interconnection model rather than one or other of the networks being interconnected.
+Technically, these attributes are associated with Service B’s Network rather
+than with the interconnection of A to B. When A interconnects with B, the
+properties of B’s Data Network dictate the terms of the interconnection. But as
+we expand the capability, it may be possible that attributes of the
+interconnection will be specified with the interconnection model rather than
+one or the other of the networks being interconnected.
 
-The above corresponds to a service implemented by one set of Instances being composed with a service implemented by another set of Instances. This results in two VTN-implemented Data Networks being connected together. Because VTN allocates a disjoint block of addresses drawn from a common private address space to each Data Network, this means a new block of addresses becomes visible to (routable from) each Service; each Instance continues to have the same private IP address assigned to its Ports.
+The above corresponds to a service implemented by one set of Instances being
+composed with a service implemented by another set of Instances. This results
+in two VTN-implemented Data Networks being connected together. Because VTN
+allocates a disjoint block of addresses drawn from a common private address
+space to each Data Network, this means a new block of addresses becomes visible
+to (routable from) each Service; each Instance continues to have the same
+private IP address assigned to its Ports.
 
-In general, however, one or both of A and B might be “control plane” Services, in which case it implements a network rather than uses a network. Two examples are vOLT and vRouter, meaning we have two “mixed” service interconnections:
+In general, however, one or both of A and B might be “control plane” Services,
+in which case that service implements a network rather than using one. Two
+examples are vOLT and vRouter, giving two “mixed” service interconnections:
 
 * vOLT → vSG (a control plane service connects to a data plane service)
 
 * vSG → vRouter (a data plane service connects to a control plane service)
 
-What this means is that rather than VTN splicing together two VTN-based Data Networks, we are asking VTN to connect some “other” network to the VTN-defined Data Network. 
+What this means is that rather than VTN splicing together two VTN-based Data
+Networks, we are asking VTN to connect some “other” network to the VTN-defined
+Data Network.
 
 Similarly, this generalizes to account for other Networks-as-a-Service; e.g.,
 
-* vSG → vNaaS (a container connects to a wide-area virtual Network-as-a-Service)
+* vSG → vNaaS (a container connects to a wide-area virtual
+  Network-as-a-Service)
 
-Finally, although VTN does not yet support this case, one can imagine a situation where one control plane service connects to another control plane service, in which case VTN will need to interconnect two networks (neither of which VTN implements itself). For example:
+Finally, although VTN does not yet support this case, one can imagine a
+situation where one control plane service connects to another control plane
+service, in which case VTN will need to interconnect two networks (neither of
+which VTN implements itself). For example:
 
 * vOLT → vRouter (from R-CORD)
 
 * vEE → vNaaS (from E-CORD)
 
-Looking across this set of examples, there are two subcases. In the first, when interconnecting two VTN-based networks, the result is basically the union of the two original networks (with restrictions). In the second, when interconnecting a Service to some ONOS-provided network, the result is to dynamically add the ServiceInstances to that new network, with the side-effect of the instances being assigned a new address on that network. These two subcases can be traced back to the two roles VTN plays: (1) it connects instances to networks, and (2) it provides a private network for a set of instances.
+Looking across this set of examples, there are two subcases. In the first, when
+interconnecting two VTN-based networks, the result is basically the union of
+the two original networks (with restrictions). In the second, when
+interconnecting a Service to some ONOS-provided network, the result is to
+dynamically add the ServiceInstances to that new network, with the side-effect
+of the instances being assigned a new address on that network. These two
+subcases can be traced back to the two roles VTN plays: (1) it connects
+instances to networks, and (2) it provides a private network for a set of
+instances.
 
 ## Components and Interfaces
 
-Two interfaces (one provided by VTN and the other provided by XOS) are necessary to support service composition. XOS invokes the VTN-provided API to interconnect networks belonging to two composed services. VTN invokes the XOS-provided API to restore interconnection state (e.g., if VTN restarts), but this interface is not involved in typical XOS/VTN interaction. The following diagram shows the relationship between XOS, OpenStack Neutron, and the various sub-systems of VTN.
+Two interfaces (one provided by VTN and the other provided by XOS) are
+necessary to support service composition. XOS invokes the VTN-provided API to
+interconnect networks belonging to two composed services. VTN invokes the
+XOS-provided API to restore interconnection state (e.g., if VTN restarts), but
+this interface is not involved in typical XOS/VTN interaction. The following
+diagram shows the relationship between XOS, OpenStack Neutron, and the various
+sub-systems of VTN.
 
-<img src="vtn-xos.jpeg" alt="Drawing" style="width: 700px;"/>
+![VTN relationships](vtn-xos.jpeg)
 
-In a configuration that includes OpenStack, XOS indirectly calls VTN via Neutron. In this case, Neutron's ML2 plugin informs OpenStack Nova about the virtual network connecting a set of instances. XOS can also create a virtual network by directly calling VTN without Neutron's involvement (e.g,. to interconnect Docker containers).
+In a configuration that includes OpenStack, XOS indirectly calls VTN via
+Neutron. In this case, Neutron's ML2 plugin informs OpenStack Nova about the
+virtual network connecting a set of instances. XOS can also create a virtual
+network by directly calling VTN without Neutron's involvement (e.g., to
+interconnect Docker containers).
 
-Because VTN provides a CLI to purge its internal state, it uses the XOS-provided API to resync with XOS. This VTN-to-XOS interface is not shown in the figure.
+Because VTN provides a CLI to purge its internal state, it uses the
+XOS-provided API to resync with XOS. This VTN-to-XOS interface is not shown in
+the figure.
 
 ## XOS Provided API
 
 * `GET xosapi/v1/vtn/vtnservices` Get a list of VTN services
 * `PUT xosapi/v1/vtn/vtnservices/{service_id}` Update a VTN service
 
-To cause VTN to be resynchronized from XOS to the VTN app, the following steps are performed:
+To cause VTN to be resynchronized from XOS to the VTN app, the following steps
+are performed:
 
-1. `GET xosapi/v1/vtn/vtnservices/` This will provide a list of registered VTN services. There's usually only one, and it's `id` is typically set to `1`, but we recommend always getting the list of services rather than assuming an the id.
-2. `PUT xosapi/v1/vtn/vtnservices/{service_id}` with data `{"resync": true}`. `{service_id}` is the identifier you retrieved in step (1). 
+1. `GET xosapi/v1/vtn/vtnservices/` This will provide a list of registered VTN
+   services. There's usually only one, and its `id` is typically set to `1`,
+   but we recommend always getting the list of services rather than assuming
+   the id.
+
+2. `PUT xosapi/v1/vtn/vtnservices/{service_id}` with data `{"resync": true}`.
+   `{service_id}` is the identifier you retrieved in step (1).
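+
+For example, with `curl` (the XOS host, credentials, and the service id shown
+here are illustrative and deployment-specific):
+
+```shell
+# Step 1: list the registered VTN services and note the "id" field
+curl -u $XOS_USER:$XOS_PASSWD http://$XOS_HOST/xosapi/v1/vtn/vtnservices/
+
+# Step 2: request a resync using the id retrieved in step 1 (assumed 1 here)
+curl -u $XOS_USER:$XOS_PASSWD -X PUT \
+  -H "Content-Type: application/json" \
+  -d '{"resync": true}' \
+  http://$XOS_HOST/xosapi/v1/vtn/vtnservices/1
+```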
 
 
 ## VTN Provided API
 
 ### ServicePorts
 
-* `POST onos/cordvtn/servicePorts`  Create a service port 
+* `POST onos/cordvtn/servicePorts`  Create a service port
 
-* `GET onos/cordvtn/servicePorts`  List service ports including service port details
+* `GET onos/cordvtn/servicePorts`  List service ports including service port
+  details
 
-* `GET onos/cordvtn/servicePorts/{port_id}`  Show service port details 
+* `GET onos/cordvtn/servicePorts/{port_id}`  Show service port details
 
 * `DELETE onos/cordvtn/servicePorts/{port_id}`  Delete a service port
 
-_Service Port Details_
+#### Service Port Details
 
 | Parameters | Type | Description |
 | --------- | ---- | --------- |
@@ -118,7 +224,7 @@
 
 Example json request:
 
-```
+```json
 {
    "ServicePort":{
       "id":"b8ba3d85-1dec-49f9-8503-6f1b90399152",
@@ -140,26 +246,29 @@
    }
 }
 ```
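+
+As a sketch, a request body like the one above (saved as `service-port.json`)
+could be submitted with `curl`; the ONOS host, port, and credentials are
+deployment-specific:
+
+```shell
+# Create the service port described in service-port.json
+curl -u $ONOS_USER:$ONOS_PASSWD -X POST \
+  -H "Content-Type: application/json" \
+  -d @service-port.json \
+  http://$ONOS_HOST:8181/onos/cordvtn/servicePorts
+```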
-   
+
 ### ServiceNetworks
-   
+
 * `POST onos/cordvtn/serviceNetworks`  Create a service network
 
-* `GET onos/cordvtn/serviceNetworks`  List service networks including the details
+* `GET onos/cordvtn/serviceNetworks`  List service networks including the
+  details
 
-* `GET onos/cordvtn/serviceNetworks/{network_id} `  Show service network details
+* `GET onos/cordvtn/serviceNetworks/{network_id}`  Show service network
+  details
 
-* `PUT onos/cordvtn/serviceNetworks/{network_id}`  Update a service network dependencies 
+* `PUT onos/cordvtn/serviceNetworks/{network_id}`  Update a service network's
+  dependencies
 
 * `DELETE onos/cordvtn/serviceNetworks/{network_id}`  Delete a service network
 
-Service Network Details
+#### Service Network Details
 
 | Parameters | Type | Description |
 | --------- | ---- | --------- |
 | id * | UUID | The UUID of the service network. |
-| name	| string | The name of the service network. |
+| name | string | The name of the service network. |
-| type * | string | The type of the service network | 
+| type * | string | The type of the service network |
-|segment_id | integer | The ID of the isolated segment on the physical network. Currently, only VXLAN based isolation is supported and this ID is a VNI. |
+| segment_id | integer | The ID of the isolated segment on the physical network. Currently, only VXLAN based isolation is supported and this ID is a VNI. |
 | subnet | string | The associated subnet. |
 | providers | list | The list of the provider service networks.|
@@ -167,16 +276,19 @@
 | bidirectional | boolean | The dependency, which is bidirectional (true) or unidirectional (false).|
 _* fields are mandatory for creating a new service network_
 
-_ServiceNetwork Types_
-* PRIVATE: virtual network for the instances in the same service 
-* PUBLIC: externally accessible network 
-* MANAGEMENT_LOCAL: instance management network which does not span compute nodes, only accessible from the host machine 
-* MANAGEMENT_HOST: real management network which spans compute and head nodes 
-* ACCESS_AGENT: network for access agent infrastructure service 
+#### Service Network Types
+
+* PRIVATE: virtual network for the instances in the same service
+* PUBLIC: externally accessible network
+* MANAGEMENT_LOCAL: instance management network which does not span compute
+  nodes, only accessible from the host machine
+* MANAGEMENT_HOST: real management network which spans compute and head nodes
+* ACCESS_AGENT: network for access agent infrastructure service
 
 Example json request:
 
-```
+```json
 {
    "ServiceNetwork":{
       "id":"e4974238-448c-4b5c-9a45-b27c9477eb6a",
@@ -197,27 +309,35 @@
 
 ## Relationship to Core Models
 
-Two of CORD's [core models](core_models.md) play a role in service composition, and hence, in the XOS/VTN interaction.
+Two of CORD's [core models](core_models.md) play a role in service composition,
+and hence, in the XOS/VTN interaction.
 
-The first is the *ServiceDependency* model, which defines an edge in the Service Graph,
-connecting a consumer service to a provider service. XOS uses this model to instruct
-VTN in how to interconnect the two services in the underlying data plane. This
-interconnection is specified by a `connect_method` field, with the following values
-currently supported:
+The first is the *ServiceDependency* model, which defines an edge in the
+Service Graph, connecting a consumer service to a provider service. XOS uses
+this model to instruct VTN in how to interconnect the two services in the
+underlying data plane. This interconnection is specified by a `connect_method`
+field, with the following values currently supported:
 
-* `None`  No network connectivity is provided (services not connected in data plane)
+* `None`  No network connectivity is provided (services not connected in data
+  plane)
 
 * `Public` Connected via a public network (currently implemented by vRouter)
 
-* `Private-unidirectional` Connected via a private network with unidirectional connectivity (currently implemented by VTN)
+* `Private-unidirectional` Connected via a private network with unidirectional
+  connectivity (currently implemented by VTN)
 
-* `Private-bidirectional` Connected via a private network with bidirectional connectivity (currently implemented by VTN) 
+* `Private-bidirectional` Connected via a private network with bidirectional
+  connectivity (currently implemented by VTN)
 
 * `Other` Connected via some other network (how specified is TBD)
 
->Note: The `Other` choice does not currently exist. We expect to add it in the near future as we reconcile how networks are parameterized (see below).
+> NOTE: The `Other` choice does not currently exist. We expect to add it in the
+> near future as we reconcile how networks are parameterized (see below).
 
-The second is the *NetworkTemplate* model, which defines the parameters by which all networks are set up, including any VTN-provided networks (which corresponds to the situation where `connect_method = Private`). This model includes a `vtn_kind` field, with the following values currently supported:
+The second is the *NetworkTemplate* model, which defines the parameters by
+which all networks are set up, including any VTN-provided networks (which
+corresponds to the situation where `connect_method = Private`). This model
+includes a `vtn_kind` field, with the following values currently supported:
 
 * `PRIVATE` Provides a private network for the instances in the same service
 
@@ -229,7 +349,15 @@
 
 * `VSG` Provides an access-side network
 
-* `ACCESS_AGENT` Provides a network for access agent infrastructure service 
+* `ACCESS_AGENT` Provides a network for access agent infrastructure service
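+
+Since NetworkTemplate is a core model, one way to set `vtn_kind` is through
+the XOS REST API; the sketch below assumes the auto-generated core endpoint
+and an illustrative template id:
+
+```shell
+# Mark a NetworkTemplate as public (host, credentials, and id are examples)
+curl -u $XOS_USER:$XOS_PASSWD -X PUT \
+  -H "Content-Type: application/json" \
+  -d '{"vtn_kind": "PUBLIC"}' \
+  http://$XOS_HOST/xosapi/v1/core/networktemplates/1
+```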
 
->Note: The NetworkTemplate model needs to be cleaned up and reconciled with the ServiceDependency model. For example, there are currently three different places one can specify some version of public versus private, and the `choices` imposed on various fields are not currently enforced. The logic that controls how XOS invokes VTN can be found in the VTN synchronizer, and can be summarized as follows: If a ServiceDependency exists between Services A and B, then VTN will connect every eligible Network in A to every eligible network in B, where a network is eligible if its NetworkTemplate's `vtn_kind` field is set of `VSG` or `Private`.
+> NOTE: The NetworkTemplate model needs to be cleaned up and reconciled with
+> the ServiceDependency model. For example, there are currently three different
+> places one can specify some version of public versus private, and the
+> `choices` imposed on various fields are not currently enforced. The logic
+> that controls how XOS invokes VTN can be found in the VTN synchronizer, and
+> can be summarized as follows: If a ServiceDependency exists between Services
+> A and B, then VTN will connect every eligible Network in A to every eligible
+> network in B, where a network is eligible if its NetworkTemplate's `vtn_kind`
+> field is set to `VSG` or `Private`.