Merge "CORD-3130 Add link to Openstack service developer documentation"
diff --git a/README.md b/README.md
index 7fff017..ffadeed 100644
--- a/README.md
+++ b/README.md
@@ -1,34 +1,35 @@
-# Install CORD
+# Installation Guide
 
-The following section describes how to deploy CORD. To install, follow either the side menu or the links below in the page.
+This guide describes how to install CORD.
 
-## Hardware requirements
+## Prerequisites
 
-Start putting together the [hardware](./prereqs/hardware.md) you need to deploy CORD.
+Start by satisfying the following prerequisites:
 
-## Networking Connectivity
-
-[Connect](./prereqs/networking.md) together the hardware components. Discover what the [connectivity requirements](./prereqs/networking.md) are.
-
-## Software Requirements
-
-You'll need to satisfy a very minimum set of [software requirements](./prereqs/software.md) before proceeding with the installation. The section provides useful pointers and scripts to help you installing Kubernetes and more.
+* [Hardware Requirements](./prereqs/hardware.md)
+* [Connectivity Requirements](./prereqs/networking.md)
+* [Software Requirements](./prereqs/software.md)
 
 ## Deploy CORD
 
-You're finally ready to install the CORD components. Choose the component you'd like to install.
+The next step is to select the configuration (profile) you want to
+install:
 
-- [RCORD-lite](./profiles/rcord/install.md)
-- [MCORD](./profiles/mcord/install.md)
+* [R-CORD](./profiles/rcord/install.md)
+* [M-CORD](./profiles/mcord/install.md)
 
-## More
+## Additional Information
 
-Here is a list of optional secitons you may want to follow.
+The following are optional steps you may want to take:
 
-### Offline Installation / local docker registry support
+### Offline Installation
 
-Can't have your POD connected to Internet? Want to deploy your own containers to the POD? The [docker registry](./prereqs/docker-registry.md) will help.
+If your environment does not permit connecting your POD to the public
+Internet, you may want to take advantage of a local Docker registry.
+The following [registry setup](./prereqs/docker-registry.md) will help.
 
-### OpenStack-helm integration
+### OpenStack Installation
 
-Need OpenStack support to deploy VMs on your POD? Follow [this seciton](./prereqs/openstack-helm.md).
+If you need OpenStack included in your deployment, so you can bring up
+VMs on your POD, you will need to follow the
+[OpenStack deployment](./prereqs/openstack-helm.md) guide.
diff --git a/SUMMARY.md b/SUMMARY.md
index c25d05e..b2781e6 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -3,17 +3,17 @@
 * [Overview](overview.md)
 * [Installation Guide](README.md)
     * [Hardware Requirements](prereqs/hardware.md)
-    * [Networking Connectivity](prereqs/networking.md)
+    * [Connectivity Requirements](prereqs/networking.md)
     * [Software Requirements](prereqs/software.md)
         * [Kubernetes](prereqs/kubernetes.md)
-            * [Single Node KB8s](prereqs/k8s-single-node.md)
-            * [Multi Node KB8s](prereqs/k8s-multi-node.md)
+            * [Single Node](prereqs/k8s-single-node.md)
+            * [Multi-Node](prereqs/k8s-multi-node.md)
         * [Helm](prereqs/helm.md)
         * [Docker Registry (optional)](prereqs/docker-registry.md)
-        * [OpenStack Support (M-CORD)](prereqs/openstack-helm.md)
-    * [Fabric setup](prereqs/fabric-setup.md)
-    * [Install CORD](profiles/intro.md)
-        * [RCORD Lite](profiles/rcord/install.md)
+        * [OpenStack (optional)](prereqs/openstack-helm.md)
+    * [Fabric Setup](prereqs/fabric-setup.md)
+    * [Bringing Up CORD](profiles/intro.md)
+        * [R-CORD](profiles/rcord/install.md)
             * [OLT Setup](openolt/README.md)
         * [MCORD](profiles/mcord/install.md)
             * [EnodeB Setup](profiles/mcord/enodeb-setup.md)
@@ -24,9 +24,9 @@
         * [Kafka](charts/kafka.md)
         * [Hippie OSS](charts/hippie-oss.md)
         * [Base OpenStack](charts/base-openstack.md)
-            * [VTN Prerequisites](prereqs/vtn-setup.md)
+            * [VTN Setup](prereqs/vtn-setup.md)
         * [M-CORD](charts/mcord.md)
-* [Operating CORD](operating_cord/operating_cord.md)
+* [Operations Guide](operating_cord/operating_cord.md)
     * General info
         * [Diagnostics](operating_cord/diag.md)
         * [REST API](operating_cord/rest_apis.md)
@@ -39,7 +39,7 @@
     * [Services](operating_cord/services.md)
         * [Fabric](fabric/README.md)
         * [vRouter](vrouter/README.md)
-* [Defining Models in CORD](xos/README.md)
+* [Modeling Guide](xos/README.md)
     * [XOS Support for Models](xos/dev/xproto.md)
     * [Core Models](xos/core_models.md)
     * [Security Policies](xos/security_policies.md)
@@ -62,7 +62,7 @@
             * [Data Sources](xos-gui/architecture/data-sources.md)
         * [Tests](xos-gui/developer/tests.md)
     * [Unit Tests](xos/dev/unittest.md)
-* [Testing CORD](cord-tester/README.md)
+* [Testing Guide](cord-tester/README.md)
     * [Test Setup](cord-tester/qa_testsetup.md)
     * [Test Environment](cord-tester/qa_testenv.md)
     * [System Tests](cord-tester/validate_pods.md)
diff --git a/charts/base-openstack.md b/charts/base-openstack.md
index aa404bd..b48eb96 100644
--- a/charts/base-openstack.md
+++ b/charts/base-openstack.md
@@ -1,4 +1,4 @@
-# Deploying the Base Openstack Chart
+# Deploy Base OpenStack
 
 XOS can be configured to manage an existing OpenStack installation
 (e.g., deployed using [openstack-helm](../prereqs/openstack-helm.md)) by
@@ -12,7 +12,7 @@
 Prior to installing the chart, make sure that VTN's requirements are
 satisfied by following [this guide](../prereqs/vtn-setup.md)
 
-## Single-node configuration
+## Single-Node Configuration
 
 Here is an example of deploying the `xos-profiles/base-openstack` chart
 on a single-node OpenStack server set up by the
@@ -25,7 +25,7 @@
     --set vtn-service.sshUser=`whoami`
 ```
 
-## Multi-node configuration
+## Multi-Node Configuration
 
 If you are deploying on a multi-node OpenStack cluster, create a YAML
 file containing information for each node, and pass it as an argument
diff --git a/charts/helm.md b/charts/helm.md
index 6decfd9..5592cc7 100644
--- a/charts/helm.md
+++ b/charts/helm.md
@@ -1,14 +1,14 @@
-# Helm
+# Helm Reference Guide
 
-For informations on how to install `helm` please refer to [Installing helm](../prereqs/helm.md)
+For information on how to install `helm`, please refer to [Installing helm](../prereqs/helm.md)
 
 ## What is Helm?
 
 {% include "/partials/helm/description.md" %}
 
-## How to get CORD Helm charts
+## CORD Helm Charts
 
-### Donwload the helm-charts repository
+### Download the helm-charts Repository
 
-You can get the CORD helm-chars by cloning the `helm-charts` repository:
+You can get the CORD helm charts by cloning the `helm-charts` repository:
 
@@ -16,13 +16,13 @@
 git clone https://gerrit.opencord.org/helm-charts
 ```
 
-> If you have downloaded the CORD code following the [Getting the Source
+> **Note:** If you have downloaded the CORD code following the [Getting the Source
 > Code](../developer/getting_the_code.md) guide, you'll find it in
 > `~/cord/helm-charts`.
 
-**IMPORTANT: All the helm commands needs to be executed from within this directory**
+**IMPORTANT: All the helm commands need to be executed from within this directory**
 
-### Add the CORD repository to helm
+### Add the CORD Repository to Helm
 
-If you don't want to download the repository, you can just add the OPENCord charts to your helm repo:
+If you don't want to download the repository, you can just add the CORD charts to your helm repo:
 
@@ -31,8 +31,8 @@
 helm repo update
 ```
 
-If you decide to follow this route the `cord/` prefix needs to be added to specify the repo to use,
-for example
+If you decide to follow this route, the `cord/` prefix needs to be
+added to specify the repo to use. For example:
 
 ```shell
 helm install -n xos-core xos-core
@@ -44,7 +44,7 @@
 helm install -n xos-core cord/xos-core
 ```
 
-## CORD example values
+## CORD Example Values
 
 As you may have noticed, there is an `example` folder
 in the `helm-chart` repository.
diff --git a/charts/hippie-oss.md b/charts/hippie-oss.md
index 9b1f07e..0d11701 100644
--- a/charts/hippie-oss.md
+++ b/charts/hippie-oss.md
@@ -1,4 +1,4 @@
-# Deploying Hippie OSS
+# Deploy Hippie OSS
 
 ```shell
 helm install -n hippie-oss xos-services/hippie-oss
diff --git a/charts/mcord.md b/charts/mcord.md
index bcab325..b7e6730 100644
--- a/charts/mcord.md
+++ b/charts/mcord.md
@@ -1,6 +1,6 @@
-# Deploying the M-CORD profile chart
+# Deploy M-CORD Profile
 
-To deploy the M-CORD profile chart:
+To deploy the M-CORD profile, run the following:
 
 ```shell
 helm dep update xos-profiles/mcord
diff --git a/charts/onos.md b/charts/onos.md
index 81f575c..72f2d5e 100644
--- a/charts/onos.md
+++ b/charts/onos.md
@@ -22,7 +22,7 @@
 
 ## onos-voltha
 
-> NOTE: This requires [VOLTHA](voltha.md) to be installed
+> **Note:** This requires [VOLTHA](voltha.md) to be installed
 
 ```shell
 helm install -n onos-voltha -f configs/onos-voltha.yaml onos
diff --git a/charts/voltha.md b/charts/voltha.md
index b185e0d..f583f35 100644
--- a/charts/voltha.md
+++ b/charts/voltha.md
@@ -1,6 +1,6 @@
 # Deploy VOLTHA
 
-## First time installation
+## First Time Installation
 
 Add the kubernetes helm charts incubator repository
 ```shell
@@ -13,7 +13,9 @@
 helm dep build
 ```
 
-There's an etcd-operator **known bug** we're trying to solve that prevents users to deploy Voltha straight since the first time. We found a workaround. 
+There's an etcd-operator **known bug** we're trying to solve that
+prevents users from deploying VOLTHA successfully on the first
+attempt. We found a workaround.
 
-Few steps:
+A few steps:
 
@@ -32,13 +34,13 @@
 helm install -n voltha voltha
 ```
 
-## Standard installation process
+## Standard Installation Process
 
 ```shell
 helm install -n voltha voltha
 ```
 
-## Nodeports exposed
+## Exposed NodePorts
 
 * Voltha CLI
     * Inner port: 5022
diff --git a/charts/xos-core.md b/charts/xos-core.md
index 723e833..50c7b1f 100644
--- a/charts/xos-core.md
+++ b/charts/xos-core.md
@@ -1,4 +1,4 @@
-# Deploying XOS-CORE
+# Deploy XOS-CORE
 
 ```shell
 helm dep update xos-core
diff --git a/partials/push-images-to-registry.md b/partials/push-images-to-registry.md
index 5d57042..52a4612 100644
--- a/partials/push-images-to-registry.md
+++ b/partials/push-images-to-registry.md
@@ -1,6 +1,7 @@
-## Tag and push images to the docker registry
+## Tag and Push Images to the Docker Registry
 
-In order for the images to be consumed on the Kubernetes pod, they'll need to be tagged first (prefixing them with the ), and pushed to the local registry
+For the images to be consumed on the Kubernetes cluster, they first
+need to be tagged, and then pushed to the local registry.
 
 Supposing your docker-registry address is:
 ```shell
@@ -27,10 +28,12 @@
 docker push 192.168.0.1:30500/xosproject/vsg-synchronizer:candidate
 ```
 
-The image should now be in the local docker registry on your pod.
+The image should now be in the local docker registry on your cluster.
 
-## Use the tag-and-push script
+## Use the tag-and-push Script
 
-Sometimes you may need to download, tag and push lots of images. This may become a long and error prone operation if done manually. For this reason, we provide an optional tool that automates the tag and push procedures.
-
-The script can be found [here](https://github.com/opencord/automation-tools/tree/master/developer).
+Sometimes you may need to download, tag, and push many images.
+This can become a long and error-prone operation if done manually.
+For this reason, we provide an optional tool that automates the tag
+and push procedures. The script can be found
+[here](https://github.com/opencord/automation-tools/tree/master/developer).
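+As a rough sketch of what such a script automates (the registry
+address matches the example above; the image names are placeholders,
+not a definitive list):
+
+```shell
+# Tag and push each locally built image to the local registry
+REGISTRY=192.168.0.1:30500
+for image in xosproject/vsg-synchronizer xosproject/xos-core; do
+  docker tag ${image}:candidate ${REGISTRY}/${image}:candidate
+  docker push ${REGISTRY}/${image}:candidate
+done
+```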
diff --git a/prereqs/docker-registry.md b/prereqs/docker-registry.md
index 17a295b..97a409e 100644
--- a/prereqs/docker-registry.md
+++ b/prereqs/docker-registry.md
@@ -2,20 +2,23 @@
 
-The guide describes how to install an **insecure** *docker registry* in Kubernetes, using the standard Kubernetes helm charts.
+This guide describes how to install an **insecure** *docker registry* in Kubernetes, using the standard Kubernetes helm charts.
 
-Local docker registries can be used to push container images directly to the pod, which could be useful for example in the following cases:
+Local docker registries can be used to push container images directly to the cluster,
+which could be useful for example in the following cases:
 
-* The CORD POD has no Internet access, so container images cannot be downloaded directly from DockerHub to the POD
-* You're developing new CORD components, or modifying existing ones. You may want to test your changes before uploading the image to the official docker repository. So, you build your new container and you push it to the local registry.
+* The CORD POD has no Internet access, so container images cannot be downloaded directly from DockerHub to the POD.
 
-More informations about docker registries at <https://docs.docker.com/registry/>
+* You are developing new CORD components, or modifying existing ones. You may want to test your changes before uploading the image to the official docker repository. In this case, your workflow might be to build your new container and push it to the local registry.
+
+More information about docker registries can be found at <https://docs.docker.com/registry/>.
 
-> NOTE: *Insecure* registries can be used for development, POCs or lab trials. **You should not use this in production.** There are planty of documents online that guide you through secure registries setup.
+> **Note:** *Insecure* registries can be used for development, POCs, or lab trials. **You should not use this in production.** There are plenty of documents online that guide you through secure registry setup.
 
-## Deploy an insecure docker registry on Kubernetes using helm
+## Deploy a Registry Using Helm
 
-Helm provides a default helm chart to deploy an insecure registry on your Kubernetes pod.
-
-The following command deploys the registry and exposes the nodeport *30500* (you may want to change it with any value that fit your deployment needs) to access it:
+Helm provides a default helm chart to deploy an insecure registry on your
+Kubernetes cluster. The following command deploys the registry and exposes
+node port *30500*. (You may want to change this to any value that fits
+your deployment needs.)
 
 ```shell
 helm install stable/docker-registry --set service.nodePort=30500,service.type=NodePort -n docker-registry
@@ -29,14 +32,19 @@
 
 {% include "/partials/push-images-to-registry.md" %}
 
-## Modify the default helm charts to use your images, instead of the default ones
+## Modify the Helm Charts to Use Your Images
 
-Now that your custom images are in the local docker registry on the Kubernetes pod, you can modify the CORD helm charts to instruct the system to consume them, instead of using the default ones (from DockerHub).
+Now that your custom images are in the local docker registry on the Kubernetes
+cluster, you can modify the CORD helm charts to instruct the system to consume
+them instead of using the default images from DockerHub.
 
-Image names and tags are specified in the *values.yaml* file of each chart (just look in the main chart folder), or -alternatively- in the configuration files, in the config folder.
+Image names and tags are specified in the *values.yaml* file of each chart
+(look in the main chart directory), or alternatively, in the configuration
+files in the config directory.
 
-Simply modify the values as needed, uninstall the containers previously deployed, and deploy them again.
+Simply modify the values as needed, uninstall the containers previously deployed,
+and deploy them again.
 
-> **NOTE**: it's better to extend the existing helm charts, rather than directly modifying them. This way you can keep the original configuration as it is, and just override some values when needed. You can do this writing your additional configuration yaml file, and parsing it as needed, adding -f my-additional-config.yml to your helm commands.
+> **Note:** It's better to extend the existing helm charts, rather than directly modifying them. This way you can keep the original configuration as it is, and just override some values when needed. You can do this by writing an additional configuration yaml file, and passing it as needed by adding `-f my-additional-config.yml` to your helm commands.
 
 The full CORD helm charts reference documentation is available [here](../charts/helm.md).
diff --git a/prereqs/fabric-setup.md b/prereqs/fabric-setup.md
index 947aea5..471fe2d 100644
--- a/prereqs/fabric-setup.md
+++ b/prereqs/fabric-setup.md
@@ -1 +1,2 @@
-# How to install fabric switches
+# Fabric Setup
+
diff --git a/prereqs/hardware.md b/prereqs/hardware.md
index a177f09..5af720b 100644
--- a/prereqs/hardware.md
+++ b/prereqs/hardware.md
@@ -1,8 +1,8 @@
-# Hardware requirements
+# Hardware Requirements
 
-To build CORD you'll need different hardware components, depending on the specific requirements of your deployment.
+A CORD POD is built using the following hardware components.
 
-## Generic hardware guidelines
+## Generic Hardware Guidelines
 
-* **Compute machines**: CORD can be in principle deployed both on any x86 machine, either physical or virtual. For development, demos or lab trials you may want to use only one machine (even your laptop could be fine, as long as it can provide enough hardware resources). For more realistic deployments it's anyway suggested to use at least three machines; better if all equals to one each other. The characteristics of these machines depends by lots of factors. At high level, at the very minimum, each machine should have a 4 cores CPU, 32GB of RAM and 100G of disk capacity. More sophisticated use-cases, for example M-CORD require more resources. Look at paragraphs below for more informations.
+* **Compute machines**: In principle, CORD can be deployed on any x86 machine, either physical or virtual. For development, demos, or lab trials you may want to use only one machine (even your laptop may be fine, as long as it provides enough hardware resources). For more realistic deployments, at least three machines are suggested, ideally identical. The characteristics of these machines depend on many factors. At a high level, each machine should have at least a 4-core CPU, 32GB of RAM, and 100G of disk capacity. More sophisticated use cases, for example M-CORD, require more resources. See the paragraphs below for more information.
 
@@ -16,9 +16,10 @@
 
-* **Other**: Besides all above, you will need a development/management machine and a L2 management swich to connect things together. Usually a laptop is enough for the former, and a legacy L2 switch is enough for the latter.
+* **Other**: Besides all of the above, you will need a development/management machine and an L2 management switch to connect things together. Usually a laptop is enough for the former, and a legacy L2 switch is enough for the latter.
 
-## Suggested hardware
+## Suggested Hardware
 
-Following is a list of hardware that people from the ONF community have tested over time in lab trials.
+Following is a list of hardware that people from the ONF community
+have tested over time in lab trials.
 
 * **Compute machines**
     * OCP Inspired&trade; QuantaGrid D51B-1U server. Each
@@ -58,13 +59,14 @@
     * **eNodeBs**:
         * Cavium Octeon Fusion CNF7100 (for more info <kin-yip.liu@cavium.com>)
 
-## BOM examples
+## BOM Examples
 
-Following are some BOM examples you may hopefully take inspiration form.
+The following are some BOM examples you might wish to adopt.
 
-### Basic lab tests
+### Basic Lab Tests
 
-The goal is to try CORD, maybe modify/develop some software, and deploy locally in a lab.
+This configuration is sufficient to modify/develop basic software
+components, and deploy locally in a lab.
 
 * 1x x86 server (maybe with a 10G interface if need to support VNFs)
 * 1x fabric switch (10G)
@@ -74,9 +76,10 @@
 * 1x or more developers' workstations (i.e. laptop) to develop and deploy
 * 1x L2 legacy management switch
 
-### More complex lab tests
+### Complex Lab Tests
 
-Want to make sure you have a good representation of a realistic deployment. Want to run in a lab more complex deployments and tests.
+For a more realistic deployment, you can build a POD with the
+following elements:
 
 * 3x x86 server (maybe 10G/40G/100G interfaces if need to support VNFs)
 * 4x fabric switches (10G/40G/100G)
diff --git a/prereqs/helm.md b/prereqs/helm.md
index 140084e..1fb5b6b 100644
--- a/prereqs/helm.md
+++ b/prereqs/helm.md
@@ -1,33 +1,32 @@
-# Helm Installation guide
-
-The paragraph assumes that *Kubernetes* has already been installed and *kubectl* can access the pod.
-
-CORD uses helm to deploy containers on Kubernetes. As such helm should be installed before trying to deploy any CORD container.
-
-Helm documentation can be found at <https://docs.helm.sh/>
-
-## What is helm?
+# Helm
 
 {% include "/partials/helm/description.md" %}
 
-## Install helm (and tiller)
+The following assumes that *Kubernetes* has already been installed
+and *kubectl* can access the POD. CORD uses helm to deploy containers
+on Kubernetes, and as such, it should be installed before trying to
+deploy any CORD container.
 
-Helm is made of two components:
+Helm documentation can be found at <https://docs.helm.sh/>. It consists
+of two components:
 
-* the helm client, most times also called simply helm: the client component, basically the CLI utility
-* tiller: the server side component, interpreting the client commands and executing tasks on the Kubernetes pod
+* `helm`: The helm client is basically a CLI utility.
+* `tiller`: The server side component, which executes client commands on the Kubernetes cluster.
 
-Helm can be installed on any device able to reach the Kubernetes POD (i.e. the developer laptop, another server in the network). Tiller should be installed on the Kubernetes pod itself, through the kubectl CLI.
+Helm can be installed on any device that is able to reach the
+Kubernetes POD (i.e. the developer laptop, another server in the
+network). Tiller should be installed on the Kubernetes cluster itself.
 
-> **Note**: if you've installed Minikube you'll likely need to install *socat* as well before proceeding, otherwise errors will be thrown. For example, on Ubuntu do *sudo apt-get install socat*.
+> **Note:** If you've installed Minikube you'll likely need to install *socat* as well before proceeding, otherwise errors will be thrown. For example, on Ubuntu run *sudo apt-get install socat*.
 
-### Install helm client
+## Install Helm Client
 
 Follow the instructions at <https://github.com/kubernetes/helm/blob/master/docs/install.md#installing-the-helm-client>
 
-### Install tiller
+## Install Tiller
 
-To install tiller type the following commands from any device already able to access the Kubernetes pod.
+Enter the following commands from any device that is already
+able to access the Kubernetes cluster.
 
 ```shell
 helm init
@@ -37,10 +36,11 @@
 helm init --service-account tiller --upgrade
 ```
 
-Once *helm* and *tiller* are installed you should be able to run the command *helm ls* without errors.
+Once *helm* and *tiller* are installed you should be able to run the
+command *helm ls* without errors.
 
 ## Done?
 
-You're ready to deploy CORD components through helm charts! [Install CORD](../profiles/intro.md).
-
-The CORD helm charts reference guide can be found [here](../charts/helm.md).
+Once you are done, you are ready to deploy CORD components using their
+helm charts! See [Bringing Up CORD](../profiles/intro.md). For more detailed
+information, see the [helm chart reference guide](../charts/helm.md).
diff --git a/prereqs/k8s-multi-node.md b/prereqs/k8s-multi-node.md
index 496c980..48ed502 100644
--- a/prereqs/k8s-multi-node.md
+++ b/prereqs/k8s-multi-node.md
@@ -1,23 +1,20 @@
-# Multi-node Kubernetes
+# Multi-Node Kubernetes
 
-Usually multi-node Kubernetes installation are suggested for production and larger trials.
+A multi-node Kubernetes installation is recommended for
+production deployments and larger trials.
 
-## Kubernetes multi-node well-known releases
+Kubespray is a popular tool for deploying Kubernetes on multiple nodes:
 
 * **Kubespray**
     * Documentation: <https://github.com/kubernetes-incubator/kubespray>
     * Minimum requirements:
         * At least three machines (more info on hardware requirements on the Kubernetes website)
 
-## Kubespray demo / lab-trial installation scripts
-
-For simplicity, CORD provides some easy-to-use automated scripts to quickly setup Kubespray on an arbitrary number of target machines in few commands.
-
+For simplicity, CORD provides some easy-to-use automation scripts to
+quickly set up Kubespray on an arbitrary number of target machines.
 This is meant only for *lab trials* and *demo use*.
 
-At the end of the procedure, Kubespray should be installed.
-
-### Requirements
+## Requirements
 
 * **Operator machine** (1x, either physical or virtual machine)
     * Has Git installed
@@ -30,28 +27,28 @@
     * Have the same user *cord* configured, that you can use to remotely access them from the operator machine
     * The user *cord* is sudoer on each machine, and it doesn't need a password to get sudoer privileges
 
-### Get the Kubespray installation scripts
+## Download the Kubespray Installation Scripts
 
 On the operator machine
 ```shell
 git clone https://gerrit.opencord.org/automation-tools
 ```
 
-Inside, you will find a folder called *kubespray-installer*
-From now on the guide will assume you're running commands in this folder.
+Inside the cloned repository, you will find a directory called
+*kubespray-installer*; the following assumes you are running commands
+in that directory.
 
-### More on the Kubespray installation scripts
-
-The main script (*setup.sh*) provides an helper with instructions. To see it, run *./setup.sh --help*.
+The main script (*setup.sh*) provides a help message with
+instructions. To see it, run *./setup.sh --help*.
 
 The two main functions are:
 
 * Install Kubespray on an arbitrary number of target machines
-* Export the k8s configuration file path as environment variable to let the user access a specific deployment
+* Export the k8s configuration file path as an environment variable to
+   let the user access a specific deployment
 
-### Install Kubespray
+## Install Kubespray
 
-In the following example we assume that
+The following example assumes that:
 
 * Remote machines have the following IP addresses:
     * 10.90.0.101
@@ -60,7 +57,7 @@
 
 * The deployment/POD has been given the arbitrary name *onf*
 
-The installation procedure goes through the following steps (right in this order):
+The installation procedure goes through the following steps (in this order):
 
 * Cleans up any old Kubespray installation folder (may be there from previous installations)
 * Clones the official Kubespray installation repository
@@ -74,35 +71,44 @@
 ./setup.sh -i onf 10.90.0.101 10.90.0.102 10.90.0.103
 ```
 
-At the beginning of the installation you'll be asked to insert your password multiple times.
+At the beginning of the installation you will be asked to enter your
+password multiple times.
 
-At the end of the procedure, Kubespray should be installed and running on the remote machines.
+At the end of the procedure, Kubespray should be installed and running
+on the remote machines.
 
-The configuration file to access the POD will be saved in the subfolder *configs/onf.conf*.
+The configuration file to access the POD will be saved in the
+sub-directory *configs/onf.conf*.
 
-Want to deploy another POD without affecting your existing deployment?
-
-Runt the following:
+If you want to deploy another POD without affecting your existing
+deployment, run the following:
 ```shell
 ./setup.sh -i my_other_deployment 192.168.0.1 192.168.0.2 192.168.0.3
 ```
 
-Your *onf.conf* configuration will be always there, and your new *my_other_deployment.conf* file as well!
+Your *onf.conf* configuration will still be there, alongside your
+new *my_other_deployment.conf* file.
 
-### Access the Kubespray deployment
+## Access the Kubespray Deployment
 
-Kubectl and helm (look [here](kubernetes.md) for more details) need to be pointed to a specific cluster, before being used. This is done through standard KUBECONFIG files.
+Kubectl and helm (see [here](kubernetes.md) for more details) need to
+be pointed to a specific cluster before being used. This is done
+through standard KUBECONFIG files.
 
-The script also helps you to automatically export the path pointing to an existing KUBECONFIG file, previously generated during the installation.
+The script also helps you to automatically export the path pointing to
+an existing KUBECONFIG file, previously generated during the installation.
 
-To do that -for example against the onf pod just deployed, simply type
+To do so, for example against the *onf* POD just deployed, simply type:
 
 ```shell
 source setup.sh -s onf
 ```
 
-At this point, you can start to use *kubectl* and *helm*.
+At this point, you can start to use *kubectl* and *helm*.
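+
+For example, to verify that the cluster is reachable (assuming
+*kubectl* and *helm* are already installed):
+
+```shell
+# List the cluster nodes and any deployed helm releases
+kubectl get nodes
+helm ls
+```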
 
 ## Done?
 
-Are you done? You're ready to install kubectl and helm. Instructions [here](kubernetes.md#get-your-kubeconfig-file).
+Once you are done, you are ready to install kubectl and Helm, so return
+to [this section](kubernetes.md#get-your-kubeconfig-file) in the
+installation guide.
diff --git a/prereqs/k8s-single-node.md b/prereqs/k8s-single-node.md
index caafec3..b9b4cc2 100644
--- a/prereqs/k8s-single-node.md
+++ b/prereqs/k8s-single-node.md
@@ -1,8 +1,9 @@
-# Install Minikube on a single node
+# Single-Node Kubernetes
 
-Usually single-node Kubernetes installation are suggested for development, testing, and small lab-trial deployments.
+We suggest a single-node Kubernetes installation for most development,
+testing, and small lab-trial deployments.
 
-## Kubernetes single-node well-known releases
+There are two popular single-node versions of Kubernetes:
 
 * **Minikube**
     * Documentation: <https://kubernetes.io/docs/getting-started-guides/minikube/>
@@ -12,22 +13,18 @@
     * Documentation: <https://microk8s.io/>
     * One machine, Linux based, either physical machine or virtual. It could also be your own PC!
 
-## Minikube installation walkthrough
+We recommend Minikube, which is easy to set up and use. There are two
+installation options to consider:
 
-Install Minikube is so easy that there's no need for us to provide additional custom scripts. What we can do instead, is to point you to the official Minikube installation guide:
+* If you want to install Minikube on a Linux machine (either a
+  physical machine or a VM on your laptop or in the cloud), you will
+  need to follow the instructions at <https://github.com/kubernetes/minikube#linux-continuous-integration-without-vm-support>.
 
-### Install Minikube directly on the Linux OS (no VM support)
-
-**Suggested if you want to install Minikube on a Linux machine, either that this is a physical machine or a VM you created (even runing on your laptop)**
-
-Instructions avaialble at <https://github.com/kubernetes/minikube#linux-continuous-integration-without-vm-support>
-
-### Standard Minikube installation (VM support)
-
-**Suggested if you want to run Minikube directly on your Windows or macOS system**
-
-Instructions available at <https://kubernetes.io/docs/getting-started-guides/minikube/#installation>
+* If you want to run Minikube directly on your Windows or macOS
+  system, you will need to follow the instructions at
+  <https://kubernetes.io/docs/getting-started-guides/minikube/#installation>.
 
 ## Done?
 
-Are you done? You're ready to install kubectl and helm. Instructions [here](kubernetes.md#get-your-kubeconfig-file).
+Once you are done, you are ready to install kubectl and Helm, so return
+to [this section](kubernetes.md#get-your-kubeconfig-file) in the
+installation guide.
diff --git a/prereqs/kubernetes.md b/prereqs/kubernetes.md
index 05c680e..c259de9 100644
--- a/prereqs/kubernetes.md
+++ b/prereqs/kubernetes.md
@@ -1,34 +1,45 @@
 # Kubernetes
 
-A generic CORD installation can run on any version of Kubernetes (>=1.9) and Helm.
+CORD runs on any version of Kubernetes (1.9 or greater), and uses the
+Helm client-side tool. If you are new to Kubernetes, we recommend
+<https://kubernetes.io/docs/tutorials/> as a good place to start.
 
-Internet is full of different releases of Kubernetes, as of resources that can help to get you going. If on one side it may sound confusing, the good news is that you’re not alone!
+Although you are free to set up Kubernetes and Helm in whatever way makes
+sense for your deployment, the following provides guidelines, pointers, and
+automated scripts that might be helpful.
 
-Pointing you to a specific version of Kubernetes wouldn’t probably make much sense, since each Kubernetes version may be specific to different deployment needs. Anyway, we think it’s good to point you to some well known releases, that can be used for different types of deployments.
+## Install Kubernetes
 
-**New to Kubernetes?** Tutorials are a good place to start. More at <https://kubernetes.io/docs/tutorials/>.
+The following sections offer pointers and scripts for installing your favorite
+version of Kubernetes. Start there, then come back and complete the
+steps in the next three subsections.
 
-Following paragraphs provide guidelines, pointers and automated scripts to let you quickly install both Kubernetes and Helm.
+## Export KUBECONFIG
 
-## Step by step installation
-
-### Install Kubernetes
-
-First, choose what version of Kubernetes you'd like to run. In the following sections of the guide we offer pointers and scripts to get your favorite version of Kubernetes installed. Start from there. Then, come back here and continue over the next paragraphs, below.
-
-### Get your KUBECONFIG file
-
-Once Kubernetes is installed, you should have a KUBECONFIG configuration file containing all the details of your deployment (address of the machine(s), credentials, ...). The file can be used to access your Kubernetes deployment from any client able to communicate with the Kubernetes installation. To manage the pod, export a KUBECONFIG variable containing the path to the configuration file:
+Once Kubernetes is installed, you should have a KUBECONFIG configuration file containing all the details of your deployment: address of the machine(s),
+credentials, and so on. The file can be used to access your Kubernetes deployment
+from any client able to communicate with the Kubernetes installation. To manage
+the POD, export a KUBECONFIG variable containing the path to the configuration
+file:
 
 ```shell
 export KUBECONFIG=/path/to/your/kubeconfig/file
 ```
 
-You can also permanently export this environment variable, so you don’t have to export it every time you open a new window in your terminal. More info on this topic at <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/>.
+You can also permanently export this environment variable, so you don’t have to
+export it every time you open a new window in your terminal. More info on this
+topic at
+<https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/>.
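For example, assuming a bash shell and the conventional `$HOME/.kube/config` location (adjust the path for your deployment), you could persist the setting like this:

```shell
# Point KUBECONFIG at the conventional kubeconfig location
# (substitute the path your installer actually produced)
export KUBECONFIG=$HOME/.kube/config

# Persist the setting so new terminal sessions pick it up automatically
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
```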
 
-### Install kubectl
+## Install Kubectl
 
-You've installed Kubernetes. Now it's time to install the CLI tools to interact with it. *kubectl* is the basic tool you need. It can be installed on any device able to reach the Kubernetes just installed (i.e. the development laptop, another server, the same machine where Kubernetes is installed). To install kubectl, follow this guide: <https://kubernetes.io/docs/tasks/tools/install-kubectl/>.
+Again assuming Kubernetes is already installed, the next step is to
+install the CLI tools used to interact with it. *kubectl* is the basic tool
+you need. It can be installed on any device able to reach the Kubernetes
+cluster just installed (e.g., your development laptop, another server, or the
+same machine where Kubernetes is installed).
+
+To install kubectl, follow this step-by-step guide: <https://kubernetes.io/docs/tasks/tools/install-kubectl/>.
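For instance, on a Linux amd64 machine that guide boils down to downloading the latest stable release binary and putting it on your PATH (the URL pattern below is the one given in that guide; check it for other platforms):

```shell
# Fetch the latest stable kubectl release binary for Linux amd64
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

# Make it executable and move it onto your PATH
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```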
 
 To test the kubectl installation run:
 
@@ -36,8 +47,12 @@
 kubectl get pods
 ```
 
-Kubernetes should reply to the request showing the pods already deployed. If you've just installed Kubernetes, likely you won't see any pod, yet. That's fine, as long as you don't see errors.
+Kubernetes should reply to the request by listing the pods already deployed.
+If you've just installed Kubernetes, you likely won't see any pods yet.
+That's fine, as long as you don't see errors.
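Assuming kubectl can reach the cluster, a couple of additional sanity checks are worth running:

```shell
# Show which cluster the current context points at
kubectl cluster-info

# List the cluster nodes; each should eventually report STATUS "Ready"
kubectl get nodes
```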
 
-### Install helm
+## Install Helm
 
-CORD uses a tool called helm to deploy containers on Kubernetes. As such, helm needs to be installed before being able to deploy CORD containers. More info on helm and how to install it can be found [here](helm.md).
+CORD uses a tool called Helm to deploy containers on Kubernetes.
+As such, Helm must be installed before you can deploy CORD containers.
+More info on Helm and how to install it can be found [here](helm.md).
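Once Helm is installed per that page, a quick sanity check (Helm 2 syntax, where a Tiller server runs inside the cluster) looks like:

```shell
# Both the client and the in-cluster server (Tiller) should report a version
helm version

# List installed releases; empty output is fine on a fresh install
helm list
```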
diff --git a/prereqs/networking.md b/prereqs/networking.md
index bde9f69..e72f549 100644
--- a/prereqs/networking.md
+++ b/prereqs/networking.md
@@ -1,23 +1,29 @@
-# Network Connectivity
+# Connectivity Requirements
 
-Network requirments are very easy. There are two networks: a management network for operators' management, and (in some use-cases) a dataplane network for end-users' traffic.
+CORD expects two networks: a management network (for control traffic between the control plane containers) and a dataplane network (for end-user traffic).
 
-## Management network
+## Management Network
 
-It's the network that connects all physical devices (compute machines, fabric switches, access devices, development machine...) together, allowing them to talk one each other, and allowing operators to manage CORD.
-The network is usually a 1G copper network, but this may vary deployment by deployment.
-Network devices (access devices and fabric switches) usually connect to this network through a dedicated management 1G port.
-If everything is setup correctly, any device should be able to communicate with the others at L3 (basically devices should ping one each other).
-This network is usually used to access Internet for the underlay infrastructure setup (CORD doesn't necessarilly need Internet access). For example, you'll likely need to have Internet access through this network to install your OS or updates of it, switch software, Kubernetes.
+The management network connects all physical devices (compute machines,
+fabric switches, access devices, development machines), allowing them to
+communicate with one another and allowing operators to manage CORD. This is
+usually a 1G copper network, but may vary deployment by deployment. Network
+devices (access devices and fabric switches) usually connect to this network
+through a dedicated 1G management port. If everything is set up correctly,
+any device should be able to communicate with the others at L3 (i.e., devices
+should be able to ping one another).
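As a quick check, from any machine on the management network you should be able to reach the others (the addresses below are hypothetical examples; substitute your own):

```shell
# Hypothetical management-network addresses -- substitute your own
ping -c 3 10.90.0.2   # e.g., a compute node
ping -c 3 10.90.0.5   # e.g., a fabric switch management port
```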
 
-Below you can see a diagram of a typical management network.
+The management network is usually used to access the Internet for the underlay
+infrastructure setup, although CORD doesn't require Internet access. For
+example, you will likely need to have Internet access through this network to
+install or update your OS, switch software, Kubernetes, and so on.
+
+The following is a diagram of a typical management network.
 
 ![CORD management network](../images/mgmt_net.png)
 
-## Dataplane network
+## Dataplane Network
 
-This is the network that carries the users' traffic. Depending on the requirements it may vary and go from 1G to any speed. This is completely separate from the management network. Usually this network has access to Internet to allow subscribers to go to Internet.
+The dataplane network carries the users' traffic, connecting subscribers to
+the Internet (which is the whole purpose of CORD). The following is a diagram
+of a reference dataplane network for CORD.
 
-An example diagram including the dataplane network is shown below.
-
-![CORD management network](../images/data_net.png)
\ No newline at end of file
+![CORD dataplane network](../images/data_net.png)
diff --git a/prereqs/openstack-helm.md b/prereqs/openstack-helm.md
index 19ae7c6..5c80bb8 100644
--- a/prereqs/openstack-helm.md
+++ b/prereqs/openstack-helm.md
@@ -1,24 +1,27 @@
-# OpenStack Support (M-CORD)
+# OpenStack (optional)
 
 The [openstack-helm](https://github.com/openstack/openstack-helm)
 project can be used to install a set of Kubernetes nodes as OpenStack
 compute nodes, with the OpenStack control services (nova, neutron,
-keystone, glance, etc.) running as containers on Kubernetes.
-Instructions for installing `openstack-helm` on a single node or a multi-node
-cluster can be found at [https://docs.openstack.org/openstack-helm/latest/index.html](https://docs.openstack.org/openstack-helm/latest/index.html).
+keystone, glance, etc.) running as containers on Kubernetes. This is
+necessary, for example, to run the M-CORD profile.
 
-This page describes steps for installing `openstack-helm`, including how to
+Instructions for installing `openstack-helm` on a single node or a
+multi-node cluster can be found at
+[https://docs.openstack.org/openstack-helm/latest/index.html](https://docs.openstack.org/openstack-helm/latest/index.html).
+
+The following describes steps for installing `openstack-helm`, including how to
 customize the documented install procedure with specializations for CORD.
-CORD uses the VTN ONOS app to control Open vSwitch on the compute nodes
-and configure virtual networks between VMs on the OpenStack cluster.
-Neutron must be configured to pass control to ONOS rather than using
-`openvswitch-agent` to manage OvS.
+Specifically, CORD uses the VTN ONOS app to control Open vSwitch on
+the compute nodes and configure virtual networks between VMs on the
+OpenStack cluster. Neutron must be configured to pass control to ONOS
+rather than using `openvswitch-agent` to manage OvS.
 
 After the install process is complete, you won't yet have a
 fully-working OpenStack system; you will need to install the
 [base-openstack](../charts/base-openstack.md) chart first.
 
-## Single node quick start
+## Single-Node Quick Start
 
 For convenience, a script to install Kubernetes, Helm, and `openstack-helm`
 on a _single Ubuntu 16.04 node_ is provided in the `automation-tools`
@@ -33,9 +36,9 @@
 If you run this script you can skip the instructions on the rest of
 this page.
 
-## Customizing the openstack-helm install for CORD
+## Customizing the openstack-helm Install for CORD
 
-In order to enable the VTN app to control Open vSwitch on the compute
+To enable the VTN app to control Open vSwitch on the compute
 nodes, it is necessary to customize the `openstack-helm` installation.
 The customization occurs through specifiying `values.yaml` files to use
 when installing the Helm charts.
@@ -116,7 +119,7 @@
 export OSH_EXTRA_HELM_ARGS_NEUTRON="-f /tmp/neutron-cord.yaml"
 ```
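The `openstack-helm` deployment scripts append this variable to their Helm invocation when the neutron chart is installed; conceptually the effect is something like the following (the chart path and namespace are assumptions based on the openstack-helm repository layout, not an exact reproduction of the scripts):

```shell
# Roughly what the openstack-helm scripts run for the neutron chart,
# with the CORD overrides from the variable appended
helm upgrade --install neutron ./neutron \
  --namespace=openstack \
  ${OSH_EXTRA_HELM_ARGS_NEUTRON}
```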
 
-## Install process for openstack-helm
+## Install Process for openstack-helm
 
 Please see the `openstack-helm` documentation for instructions on how to
 install openstack-helm on a single node (for development and testing) or
diff --git a/prereqs/software.md b/prereqs/software.md
index ac8aed1..d8c00fe 100644
--- a/prereqs/software.md
+++ b/prereqs/software.md
@@ -1,16 +1,14 @@
-# Software requirements
+# Software Requirements
 
-CORD is distributed as a set of containers that can potentially run on any Kubernetes environment.
+CORD is distributed as a set of containers that can run on
+virtually any Kubernetes environment. How you install Kubernetes
+is up to you, although this section describes automation
+scripts we have found useful.
 
-As such, you can choose what operating system to use, how to configure it, and how to install Kubernetes on it.
-
-**M-CORD is the exception**,
-since its components still run on OpenStack. OpenStack is
-deployed as a set of Kubernetes containers using the
-[openstack-helm](https://github.com/openstack/openstack-helm)
-project. Successfully installing the OpenStack Helm charts requires
-some additional system configuration besides just installing Kubernetes
-and Helm. You can find more informations about this in the [OpenStack
-Support](./openstack-helm.md) installation section.
-
-Following sections describe what specifically CORD containers require and some pointers to DEMO automated-installation scripts.
+> **Note:** M-CORD is the exception since its components still depend on
+> OpenStack, which is in turn deployed as a set of Kubernetes containers
+> using the [openstack-helm](https://github.com/openstack/openstack-helm)
+> project. Successfully installing the OpenStack Helm charts requires
+> some additional system configuration besides just installing Kubernetes
+> and Helm. You can find more information about this in the
+> [OpenStack Support](./openstack-helm.md) installation section.
diff --git a/prereqs/vtn-setup.md b/prereqs/vtn-setup.md
index 3608cbf..d523c3b 100644
--- a/prereqs/vtn-setup.md
+++ b/prereqs/vtn-setup.md
@@ -1,4 +1,4 @@
-# VTN Prerequisites
+# VTN Setup
 
 The ONOS VTN app provides virtual networking between VMs on an OpenStack cluster.  Prior to installing the [base-openstack](../charts/base-openstack.md) chart that installs and configures VTN, make sure that the following requirements are satisfied.
 
diff --git a/profiles/intro.md b/profiles/intro.md
index 2c4c75c..37646b3 100644
--- a/profiles/intro.md
+++ b/profiles/intro.md
@@ -1,5 +1,12 @@
-# Profiles
+# Bringing Up CORD
 
-Lorem ipsum dolor sit amet, consectetur adipisicing elit. Nobis veritatis
-eligendi vitae dolorem animi non unde odio, hic quasi totam recusandae repellat
-minima provident aliquam eveniet a tempora saepe. Iusto.
+CORD is a general-purpose platform that is able to run one or more
+profiles, each of which includes some access technology (e.g., OLT,
+RAN) and some collection of services (called a service graph or
+service mesh).
+
+Although in principle arbitrarily many different profiles are
+possible, and profiles can be dynamically extended with new services
+at runtime, each release of CORD includes a collection of reference
+profiles. This section describes two.
diff --git a/profiles/mcord/enodeb-setup.md b/profiles/mcord/enodeb-setup.md
index a277eee..7e2eaa6 100644
--- a/profiles/mcord/enodeb-setup.md
+++ b/profiles/mcord/enodeb-setup.md
@@ -1 +1,2 @@
-# How to install a physical eNodeB
+# eNodeB Setup
+
diff --git a/profiles/mcord/install.md b/profiles/mcord/install.md
index 79e8efb..6a93a23 100644
--- a/profiles/mcord/install.md
+++ b/profiles/mcord/install.md
@@ -2,7 +2,8 @@
 
 ## Prerequisites
 
-M-CORD requires OpenStack to run VNFs.  The OpenStack installation must be customized with the *onos_ml2* Neutron plugin.
+M-CORD requires OpenStack to run VNFs.  The OpenStack installation
+must be customized with the *onos_ml2* Neutron plugin.
 
 - To install Kubernetes, Helm, and a customized Openstack-Helm on a single node or a multi-node cluster, follow [this guide](../../prereqs/openstack-helm.md)
 - To configure the nodes so that VTN can provide virtual networking for OpenStack, follow [this guide](../../prereqs/vtn-setup.md)
diff --git a/profiles/rcord/install.md b/profiles/rcord/install.md
index c73df05..755e433 100644
--- a/profiles/rcord/install.md
+++ b/profiles/rcord/install.md
@@ -1,4 +1,10 @@
-# R-CORD Lite
+# R-CORD Profile
+
+The latest version of R-CORD differs from versions included in earlier
+releases in that it does not include the vSG service. In the code this
+configuration is called RCORD-Lite, but since it is the only version
+of Residential CORD currently supported, we usually simply call it
+"R-CORD."
 
 ## Prerequisites
 
@@ -15,7 +21,7 @@
 - [onos-fabric](../../charts/onos.md#onos-fabric)
 - [onos-voltha](../../charts/onos.md#onos-voltha)
 
-## Install the RCORD-Lite helm chart
+## Install the RCORD-Lite Helm Chart
 
 ```shell
 helm install -n rcord-lite xos-profiles/rcord-lite
@@ -24,7 +30,7 @@
 Now that the your RCORD-Lite deployment is complete, please read this 
 to understand how to configure it: [Configure RCORD-Lite](configuration.md)
 
-## How to customize the RCORD-Lite helm chart
+## How to Customize the RCORD-Lite Helm Chart
 
 Define a `my-rcord-lite-values.yaml` that looks like: