Merge "Exposing onos-cord ui and ssh port for debugging purposes"
diff --git a/SUMMARY.md b/SUMMARY.md
index 3437415..dce2483 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -41,19 +41,21 @@
         * [Fabric](fabric/README.md)
         * [vRouter](vrouter/README.md)
 * [Modeling Guide](xos/README.md)
-    * [XOS Support for Models](xos/dev/xproto.md)
+    * [XOS Modeling Framework](xos/dev/xproto.md)
     * [Core Models](xos/core_models.md)
     * [Security Policies](xos/security_policies.md)
     * [Writing Synchronizers](xos/dev/synchronizers.md)
         * [Design Guidelines](xos/dev/sync_arch.md)
         * [Implementation Details](xos/dev/sync_impl.md)
+        * [Synchronizer Reference](xos/dev/sync_reference.md)
 * [Development Guide](developer/developer.md)
     * [Getting the Source Code](developer/getting_the_code.md)
     * [Developer Workflows](developer/workflows.md)
-        * [Building Docker Images](developer/imagebuilder.md)
-    * [Kubernetes Service](kubernetes-service/kubernetes-service.md)
-    * [OpenStack Service](openstack/openstack-service.md)
-    * [VTN and Service Composition](xos/xos_vtn.md)
+    * [Building Docker Images](developer/imagebuilder.md)
+    * [Platform Services](developer/platform.md)
+        * [Kubernetes](kubernetes-service/kubernetes-service.md)
+        * [OpenStack](openstack/openstack-service.md)
+        * [VTN and Service Composition](xos/xos_vtn.md)
     * [GUI Development](xos-gui/developer/README.md)
         * [Quickstart](xos-gui/developer/quickstart.md)
         * [Service Graph](xos-gui/developer/service_graph.md)
diff --git a/charts/helm.md b/charts/helm.md
index 5592cc7..a4c68ff 100644
--- a/charts/helm.md
+++ b/charts/helm.md
@@ -1,4 +1,4 @@
-# Helm Reference Guide
+# Helm Reference
 
 For information on how to install `helm` please refer to [Installing helm](../prereqs/helm.md)
 
@@ -8,9 +8,57 @@
 
 ## CORD Helm Charts
 
+All helm charts used to install CORD can be found in the `helm-charts`
+repository. Most of the top-level directories in that repository
+(e.g., `onos`, `voltha`, `xos-core`) correspond to components of
+CORD that can be installed independently. For example, it is possible
+to bring up `onos` without `voltha`, and vice versa. You can also
+bring up XOS by itself (`xos-core`) or XOS with its GUI (`xos-core`
+and `xos-gui`). This can be useful if you want to work on just the
+CORD data models, without any backend components.
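+
+For example, a minimal sketch of bringing up just the XOS core,
+assuming default chart values and a checked-out copy of the
+repository (cloning is described below); add the `xos-gui` chart in
+the same way if you also want the GUI:
+
+```shell
+cd helm-charts
+helm install -n xos-core xos-core
+```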
+
+The `xos-services` and `xos-profiles` directories contain helm
+charts for individual services and profiles (a mesh of services),
+respectively. While it is possible to use Helm to bring up an
+individual service, collections of related services are typically
+installed as a unit; we call this unit a *profile.* Looking in the
+`xos-profiles` directory, `rcord-lite` is an example profile. It
+corresponds to R-CORD, and inspecting its `requirements.yaml`
+file shows that it, in turn, depends on the `volt` and `vrouter`
+services, among several others.
+
+Some of the profiles bring up sub-systems that other profiles then
+build upon. For example, `base-openstack` brings up three
+platform-related services (`onos-service`, `openstack`, and `vtn-service`),
+which effectively provisions CORD to support OpenStack-based VNFs.
+Once the services in the `base-openstack` profile are running, it
+is then possible to bring up the `mcord` profile, which corresponds
+to ~10 other services. It is also possible to bring up an individual
+service by executing its helm chart; for example
+`xos-services/exampleservice`.
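+
+As a sketch, installing that single service on top of an
+already-running core with default values (the release name is
+arbitrary) would look like:
+
+```shell
+helm install -n exampleservice xos-services/exampleservice
+```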
+
+Similarly, the `base-kubernetes` profile brings up Kubernetes in
+support of container-based VNFs. This corresponds to the
+`kubernetes-service`, not to be confused with CORD's use of
+Kubernetes to deploy the CORD control plane. Once this profile is
+running, it is possible to bring up an example VNF in a container
+by executing its helm chart; for example
+`xos-services/simpleexampleservice`.
+
+> **Note:** The `base-kubernetes` configuration does not yet
+> incorporate VTN. Doing so is work-in-progress.
+
+Finally, note that the `templates` sub-directory in both the
+`xos-services` and `xos-profiles` directories includes one or
+more TOSCA-related files. These play a role in configuring the
+service graph and provisioning the individual services contained
+in that service graph. This happens once the helm charts have
+done their job, and is technically a post-install operation, as
+discussed in the [Operations Guide](../operating_cord/operating_cord.md).
+
 ### Download the helm-charts Repository
 
-You can get the CORD helm-chars by cloning the `helm-charts` repository:
+You can get the CORD helm charts by cloning the `helm-charts` repository:
 
 ```shell
 git clone https://gerrit.opencord.org/helm-charts
@@ -46,9 +94,8 @@
 
 ## CORD Example Values
 
-As you may have noticed, there is an `example` folder
-in the `helm-chart` repository.
-The files contained in that repository are examples of possible overrides
+There is an `examples` directory in the `helm-charts` repository.
+The files contained in that directory are examples of possible overrides
 to obtain a custom deployment.
 
 For example, it is possible to deploy a single instance of `kafka`,
diff --git a/charts/kafka.md b/charts/kafka.md
index a2c1260..051754d 100644
--- a/charts/kafka.md
+++ b/charts/kafka.md
@@ -8,4 +8,11 @@
 ```shell
 helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
 helm install --name cord-kafka incubator/kafka
+```
+
+If you are experiencing problems with a multi-instance installation of kafka,
+you can try installing a single instance instead:
+
+```shell
+helm install --name cord-kafka incubator/kafka -f examples/kafka-single.yaml
 ```
\ No newline at end of file
diff --git a/charts/voltha.md b/charts/voltha.md
index f583f35..6ddabc0 100644
--- a/charts/voltha.md
+++ b/charts/voltha.md
@@ -48,3 +48,12 @@
 * Voltha REST APIs
     * Inner port: 8882
     * Nodeport: 30125
+
+## How to access the VOLTHA CLI
+
+Assuming you have not changed the default ports in the chart,
+you can use this command to access the VOLTHA CLI:
+
+```shell
+ssh voltha@<pod-ip> -p 30110
+```
diff --git a/developer/developer.md b/developer/developer.md
index d66f3cb..fef811f 100644
--- a/developer/developer.md
+++ b/developer/developer.md
@@ -1 +1,11 @@
-# Development Guide
\ No newline at end of file
+# Development Guide
+
+This guide describes workflows and best practices for developers. If
+you are a service developer, you will need to consult this guide and
+the companion [Modeling Guide](../xos/README.md) that describes how
+to define models and synchronizers for services being onboarded into
+CORD. If you are a platform developer, you will find information about
+the platform services typically integrated into CORD (e.g.,
+Kubernetes, OpenStack, VTN). Service developers may be interested in
+what's under the covers, but they should not need to understand these
+internals to develop individual services.
diff --git a/developer/getting_the_code.md b/developer/getting_the_code.md
index 4c3d308..4c0ddaa 100644
--- a/developer/getting_the_code.md
+++ b/developer/getting_the_code.md
@@ -2,14 +2,13 @@
 
 ## Install repo
 
-[repo](https://code.google.com/archive/p/git-repo/) is a tool from Google that
-works with Gerrit and allows us to manage the multiple git repos that make up
-the CORD code base.
+We use the [repo](https://code.google.com/archive/p/git-repo/) tool
+from Google, which works with Gerrit, to manage the multiple git repos
+that make up the CORD code base.
 
-If you don't already have `repo` installed, this may be possible with your
-system package manager, or using the [instructions on the android source
-site](https://source.android.com/source/downloading#installing-repo), or by
-using the following commands which download/verify/install it:
+If you don't already have `repo` installed, you may be able to install
+it with your system package manager, you can follow the
+[instructions on the android source site](https://source.android.com/source/downloading#installing-repo),
+or you can download, verify, and install it with the following commands:
 
 ```sh
 curl -o /tmp/repo 'https://gerrit.opencord.org/gitweb?p=repo.git;a=blob_plain;f=repo;hb=refs/heads/stable'
@@ -18,20 +17,20 @@
 sudo chmod a+x /usr/local/bin/repo
 ```
 
-> NOTE: As mentioned above, you may want to install *repo* using the official
+> **Note:** As mentioned above, you may want to install *repo* using the official
 > repository instead. We forked the original repository and host a copy of the
 > file to make repo downloadable also by organizations that don't have access
 > to Google servers.
 
-## Download CORD repositories
+## Download CORD Repositories
 
 The `cord` repositories are usually checked out to `~/cord` in most of our
 examples and deployments:
 
 {% include "/partials/repo-download.md" %}
 
-> NOTE: `-b` specifies the branch name. Development work goes on in `master`,
-> and there are also specific stable branches such as `cord-4.0` that can be
+> **Note:** `-b` specifies the branch name. Development work goes on in `master`,
+> and there are also specific stable branches such as `cord-6.0` that can be
 > used.
 
 When this is complete, a listing (`ls`) inside this directory should yield
@@ -43,7 +42,7 @@
 build                   docs                    incubator               orchestration           test
 ```
 
-## Download patchsets
+## Download Patchsets
 
 Once you've downloaded a CORD source tree, you can download patchsets from
 Gerrit with the following command:
@@ -52,10 +51,9 @@
 repo download orchestration/xos 1234/3
 ```
 
-Which downloads a patch for the `xos` git repo, patchset number `1234` and
-version `3`.
+which downloads patchset number `1234` and version `3` for the `xos` git repo.
 
-## Contributing code to CORD
+## Contributing Code to CORD
 
 We use [Gerrit](https://gerrit.opencord.org) to manage the CORD code base. For
 more information see [Working with
@@ -65,14 +63,14 @@
 project, see [Contributing to
 CORD](https://wiki.opencord.org/display/CORD/Contributing+to+CORD).
 
-## Downloading testing and QA repositories
+## Testing and QA Repositories
 
-Whie not useful for deploying a CORD POD, the repo manifest files and the
-infrastructure code used to configure our test and QA systems, including
-Jenkins jobs created with [Jenkins Job
+While not part of the standard process for deploying a CORD POD, the
+repo manifest files and the infrastructure code used to configure our
+test and QA systems, including Jenkins jobs created with [Jenkins Job
 Builder](https://docs.openstack.org/infra/jenkins-job-builder/) can be
-downloaded with repo.  The `ci-management` repo uses git submodules, so those
-need to be checked out as well:
+downloaded with repo, too.  The `ci-management` repo uses git
+submodules, so those need to be checked out as well:
 
 ```shell
 mkdir cordqa
diff --git a/developer/imagebuilder.md b/developer/imagebuilder.md
index 06693dd..6dc23bc 100644
--- a/developer/imagebuilder.md
+++ b/developer/imagebuilder.md
@@ -1,32 +1,31 @@
 # Building Docker Images with imagebuilder
 
-The current CORD implementation consists of many interrelated Docker images.
-Making sure that the images used in a deployment are consistent with the source
-tree on disk is a challenge and required a tool, `imagebuilder`, to be
-developed to perform image rebuilds in a consistent and efficient manner.
+CORD consists of many interrelated Docker images.
+Making sure that the images used in a deployment are consistent with
+the source tree is a challenge we address with a tool called `imagebuilder`.
+`imagebuilder` is currently used to build the XOS, ONOS, and the `mavenrepo`
+(source of ONOS Apps used in CORD) images.
 
-Imagebuilder is currently used to build the XOS, ONOS, and the `mavenrepo`
-(source of ONOS Apps used in CORD) images, and pull down other required images.
+While `imagebuilder` pulls down required images from DockerHub and
+builds/tags images, it does not push those images or delete obsolete
+ones. These tasks are left to other software (Ansible, Jenkins), which
+should take in YAML output from `imagebuilder` and then perform the
+appropriate actions.
 
-While imagebuilder will pull down required images from DockerHub and build/tag
-images, it does not push those images or delete obsolete ones.  These tasks are
-left to other software (Ansible, Jenkins) which should take in imagebuilder's
-YAML output and take the appropriate actions.
+## Obtaining and Rebuilding Images
 
-## Obtaining and rebuilding images
-
-For the normal build process, you won't need to manually download images as the
-`docker-images` make target that runs imagebuilder will automatically be run as
-a part of the build process.
+For the normal build process, you won't need to manually download
+images as the `docker-images` make target that runs `imagebuilder`
+will automatically be run as a part of the build process.
 
 If you do need to rebuild images, there is a `make clean-images` target that
-will force imagebuilder to be run again and images to be moved into place.
+forces `imagebuilder` to be run again and images to be moved into place.
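+
+For example, a sketch of a forced rebuild cycle (assuming the make
+targets are invoked from the CORD `build` directory of your checkout):
+
+```shell
+cd ~/cord/build
+make clean-images   # force imagebuilder to run again
+make docker-images  # rebuild and tag the images
+```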
 
-## Adding a new Docker image to CORD
+## Adding a new Docker Image to CORD
 
-There are several cases where an Image would need to be added to CORD.
+There are several cases where an image might need to be added to CORD.
 
-### Adding an image developed outside of CORD
+### Adding an Image Developed Outside of CORD
 
 There are cases where a 3rd party image developed outside of CORD may be
 needed. This is the case with ONOS, Redis, and a few other pieces of software
@@ -39,9 +38,9 @@
 file with a `docker_image_whitelist` list - see
 `cord/helm-charts/examples/*-images.yaml` for examples.
 
-These images will be retagged with a `candidate` tag after being pulled.
+These images are retagged with a `candidate` tag after being pulled.
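+
+For example, a hypothetical fragment of such an override file (the
+image names and tags here are purely illustrative; see the files under
+`cord/helm-charts/examples/` for the authoritative format):
+
+```yaml
+docker_image_whitelist:
+  - "redis:latest"
+  - "onosproject/onos:1.13.5"
+```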
 
-### Adding a synchronizer image
+### Adding a Synchronizer Image
 
 Adding a synchronizer image is usually as simple as adding it to the
 `buildable_images` list in the `automation-tools/developer/docker_images.yml`
@@ -52,9 +51,9 @@
 list it in `build/docker_images.yml`, so it can build the synchronizer image
 locally.
 
-### Adding other CORD images
+### Adding Other CORD Images
 
-If you want imagebuilder to build an image from a Dockerfile somewhere in the
+If you want `imagebuilder` to build an image from a Dockerfile somewhere in the
 CORD source tree, you need to add it to the `buildable_images` list in the
 `docker_images.yml` file (see that file for the specific format), then making
 sure the image name is listed in the `docker_image_whitelist` list.
@@ -65,7 +64,7 @@
 
 ## Debugging imagebuilder
 
-If you get a different error or  think that imagebuilder isn't working
+If you get a different error or think that `imagebuilder` isn't working
 correctly, please rerun it with the `-vv` ("very verbose") option, read through
 the output carefully, and then post about the issue on the mailing list or
 Slack.
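+
+For example, a verbose run using the same filter file referenced
+elsewhere in this guide:
+
+```shell
+cd ~/cord/automation-tools/developer
+python imagebuilder.py -f ../../helm-charts/examples/filter-images.yaml -vv
+```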
@@ -82,9 +81,9 @@
 
 Run `imagebuilder.py -h` for a list of other supported arguments.
 
-## How Imagebuilder works
+## How imagebuilder Works
 
-The imagebuilder program performs the following steps when run:
+The `imagebuilder` program performs the following steps when run:
 
 1. Reads the [repo manifest file](https://github.com/opencord/manifest/blob/master/default.xml)
    (checked out as `.repo/manifest`) to get a list of the CORD git repositories.
@@ -117,8 +116,8 @@
 
 ## Image Tagging
 
-CORD container images frequently have multiple tags. The two most common ones
-are:
+CORD container images frequently have multiple tags. The two most
+common tags are:
 
 * The string `candidate`, which says that the container is ready to be deployed
   on a CORD POD
@@ -126,17 +125,18 @@
   container is built from an untouched (according to git) source tree.  Images
   built from a modified source tree will not be tagged in this way.
 
-Imagebuilder use this git hash tag as well as labels on the image of the git
+`imagebuilder` uses this git hash tag as well as labels on the image of the git
 repos of parent images to determine whether an image is correctly built from
 the checked out source tree.
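+
+A quick way to see which locally built images carry the `candidate`
+tag is a plain Docker listing, for example:
+
+```shell
+docker images | grep candidate
+```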
 
-## Image labels
+## Image Labels
 
-Imagebuilder uses a Docker label scheme to determine whether an image needs to
-be rebuilt, which is added to the image when it is built.  Docker images used
-in CORD must apply labels in their Dockerfiles which are specified by
-[label-schema.org](http://label-schema.org) - see there for examples, and below
-for a few notes that clear up the ambiguity within that spec.
+`imagebuilder` uses a Docker label scheme, applied to each image when
+it is built, to determine whether an image needs to be rebuilt.
+Docker images used in CORD must apply the labels specified by
+[label-schema.org](http://label-schema.org) in their Dockerfiles;
+look there for examples, and below for a few notes that clear up the
+ambiguity within that spec.
 
 Required labels for every CORD image:
 
@@ -209,7 +209,7 @@
 
 Labels on a built image can be seen by running `docker inspect <image name or id>`
 
-## Automating image builds
+## Automating Image Builds
 
 There is a [Jenkinsfile.imagebuilder](https://github.com/opencord/cord/blob/{{
 book.branch }}/Jenkinsfile.imagebuilder) that can be run in a Jenkins
diff --git a/developer/platform.md b/developer/platform.md
new file mode 100644
index 0000000..25b387f
--- /dev/null
+++ b/developer/platform.md
@@ -0,0 +1,8 @@
+# Platform Services
+
+Everything is a service in CORD, including the "platform" on top of
+which other services run. This includes infrastructure services like
+Kubernetes and OpenStack, SDN controllers like ONOS, and overlay
+services like VTN. This section includes information about how these
+platform-level services are integrated into CORD, and the role they play
+in supporting other services.
diff --git a/developer/workflows.md b/developer/workflows.md
index 9a8a99e..3480ee8 100644
--- a/developer/workflows.md
+++ b/developer/workflows.md
@@ -1,32 +1,30 @@
 # Developer Workflows
 
-This document is intended to describe the workflow to develop the control plane
-of CORD.
+This section describes a typical workflow for developing the CORD
+control plane. This workflow does not include any data plane
+elements (e.g., the underlying switching fabric or access devices).
 
-## Setting up a local development environment
+## Setting Up a Local Development Environment
 
-The first thing you’ll need to work on the control plane of CORD, known as XOS,
-is to setup a local Kubernetes environment.
-The suggested way to achieve that is to use Minikube on your laptop,
-and this guide assume that it will be the environment going forward.
-
-You can follow this guide to get started with Minikube:
+It is straightforward to set up a local Kubernetes environment on your laptop.
+The recommended way to do this is to use Minikube. This guide assumes
+you have done that. See the
+[Single-Node](../prereqs/k8s-single-node.md) case in the
+Installation Guide for more information, or you can go directly
+to the documentation for Minikube:
 <https://kubernetes.io/docs/getting-started-guides/minikube/#installation>
 
-> Note: If you are going to do development on Minikube you may want to increase
-> it’s memory from the default 512MB, you can do that using this command to
+> **Note:** If you are going to do development on Minikube you may want to increase
+> its memory from the default 512MB. You can do this using this command to
 > start Minikube: `minikube start --cpus 2 --memory 4096`
 
-Once Minikube is up and running on your laptop you can proceed with
-the following steps to bring XOS up.
+In addition to Minikube running on your laptop, you will also need to
+install Helm: <https://docs.helm.sh/using_helm/#installing-helm>.
 
-Once Minikube is installed you’ll need to install Helm:
-<https://docs.helm.sh/using_helm/#installing-helm>
-
-At this point you should be able to deploy the core components of XOS
-and the services required by R-CORD from images published on dockerhub.
-
-> NOTE: You can replace the `xos-profile` with the one you need to work on.
+Once both Helm and Minikube are installed, you can deploy the
+core components of XOS, along with the services that make
+up, for example, the R-CORD profile. This uses images published
+on DockerHub:
 
 ```shell
 cd ~/cord/build/helm-charts
@@ -35,30 +33,31 @@
 helm install xos-profiles/rcord-lite -n rcord-lite
 ```
 
-### Deply a single instance of kafka
+> **Note:** You can replace the `rcord-lite` profile with the one you want to work on. 
 
-Some profiles require a `kafka` message bus to properly working.
+### Deploy a Single Instance of Kafka
+
+Some profiles require a `kafka` message bus to work properly.
 If you need to deploy it for development purposes, a single instance
-deployment will be enough.
-
-You can install it by using:
+deployment will be enough. You can do so as follows:
 
 ```shell
 helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
 helm install --name cord-kafka incubator/kafka -f examples/kafka-single.yaml
 ```
 
-## Making changes and deploy them
+## Making and Deploying Changes
 
-You can follow this guide to [get the CORD source code](getting_the_code.md).
+Assuming you have
+[downloaded the CORD source code](getting_the_code.md) and the entire
+source tree for CORD is under `~/cord`, you can edit and re-deploy the
+code as follows.
 
-We assume that now you have the entire CORD tree under `~/cord`
-
-> Note: to develop a single synchronizer you may not need the full CORD source,
+> **Note:** To develop a single synchronizer you may not need the full CORD source,
+> but this assumes that you have a good knowledge of the system and you know
 > what you’re doing.
 
-As first you’ll need to point Docker to the one provided by Minikube
+First you will need to point Docker at the Docker daemon provided by Minikube
 (_note that you don’t need to have docker installed,
 as it comes with the Minikube installation_).
 
@@ -66,21 +65,21 @@
 eval $(minikube docker-env)
 ```
 
-Then you’ll need to build the XOS containers from source:
+You will then need to build the containers from source:
 
 ```shell
 cd ~/cord/automation-tools/developer
 python imagebuilder.py -f ../../helm-charts/examples/filter-images.yaml -x
 ```
 
-At this point the images containing your changes will be available
+At this point, the images containing your changes will be available
 in the Docker environment used by Minikube.
 
-> Note: in some cases you can rebuild a single docker image to make the process
+> **Note:** In some cases you can rebuild a single docker image to make the process
+> faster, but this assumes that you have a good knowledge of the system and you
 > know what you’re doing.
 
-All that is left is to teardown and redeploy the containers.
+All that is left is to tear down and re-deploy the containers.
 
 ```shell
 helm del --purge xos-core
@@ -90,20 +89,22 @@
 helm install xos-profiles/rcord-lite -n rcord-lite -f examples/image-tag-candidate.yaml -f examples/imagePullPolicy-IfNotPresent.yaml
 ```
 
-In some cases is possible to use the helm upgrade command,
-but if you made changes to the models we suggest to redeploy everything
+In some cases it is possible to use the `helm upgrade` command,
+but if you made changes to the XOS models we suggest you redeploy
+everything.
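+
+For example, a sketch of upgrading the `rcord-lite` release in place
+(only safe when the models have not changed; the value files mirror
+the install command above):
+
+```shell
+helm upgrade rcord-lite xos-profiles/rcord-lite -f examples/image-tag-candidate.yaml -f examples/imagePullPolicy-IfNotPresent.yaml
+```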
 
-> Note: if your changes are only in the synchronizer steps, after rebuilding
+> **Note:** If your changes are only in the synchronizer steps, after rebuilding
 > the containers, you can just delete the corresponding POD and kubernetes will
-> restart it with the new image
+> restart it with the new image.
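+
+For example, a minimal sketch of that shortcut (the pod name below is
+a placeholder; use `kubectl get pods` to find your synchronizer pod):
+
+```shell
+kubectl get pods
+kubectl delete pod <synchronizer-pod-name>
+```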
 
-## Pushing changes to a remote registry
+## Pushing Changes to a Remote Registry
 
-If you have a remote POD you want to test your changes on, you need to push your
-docker images on a registry that can be accessed from the POD itself.
+If you have a remote POD that you want to test your changes on, you
+need to push your docker images to a registry that can be accessed
+from the POD.
 
-The way we suggest to do this is via a private docker-registry,
-you can find more informations about what a
+The way we recommend doing this is via a private docker-registry.
+You can find more information about what a
 docker-registry is [here](../prereqs/docker-registry.md).
 
 {% include "/partials/push-images-to-registry.md" %}
diff --git a/overview.md b/overview.md
index c1a4dd7..8119167 100644
--- a/overview.md
+++ b/overview.md
@@ -20,5 +20,6 @@
 
 Source for individual guides is available in the [CORD code
 repository](https://gerrit.opencord.org); look in the `docs` directory of each
-project, with the documentation rooted in `build/docs`. Updates and
-improvements to this documentation can be submitted through Gerrit.
+project, with the documentation rooted in the top-level `/docs`
+directory. Updates and improvements to this documentation can be
+submitted through Gerrit.
diff --git a/prereqs/docker-registry.md b/prereqs/docker-registry.md
index 97a409e..2b7c74d 100644
--- a/prereqs/docker-registry.md
+++ b/prereqs/docker-registry.md
@@ -1,8 +1,8 @@
 # Docker Registry (optional)
 
-The guide describes how to install an **insecure** *docker registry* in Kubernetes, using the standard Kubernetes helm charts.
+This section describes how to install an **insecure** *docker registry* in Kubernetes, using the standard Kubernetes helm charts.
 
-Local docker registries can be used to push container images directly to the cluster,
+A local docker registry can be used to push container images directly to the cluster,
 which could be useful for example in the following cases:
 
 * The CORD POD has no Internet access, so container images cannot be downloaded directly from DockerHub to the POD.
@@ -11,7 +11,7 @@
 
 More information about docker registries can be found at <https://docs.docker.com/registry/>.
 
-> NOTE: *Insecure* registries can be used for development, POCs or lab trials. **You should not use this in production.** There are planty of documents online that guide you through secure registries setup.
+> **Note:** *Insecure* registries can be used for development, POCs or lab trials. **You should not use this in production.** There are plenty of documents online that guide you through secure registry setup.
 
 ## Deploy a Registry Using Helm
 
@@ -45,6 +45,6 @@
 Simply modify the values as needed, uninstall the containers previously deployed,
 and deploy them again.
 
-> **NOTE**: it's better to extend the existing helm charts, rather than directly modifying them. This way you can keep the original configuration as it is, and just override some values when needed. You can do this by writing your additional configuration yaml file, and parsing it as needed, adding -f my-additional-config.yml to your helm commands.
+> **Note**: It is better to extend the existing helm charts, rather than directly modifying them. This way you can keep the original configuration as it is, and just override some values when needed. You can do this by writing your own additional configuration yaml file and passing it to your helm commands by adding `-f my-additional-config.yml`.
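+
+For example, a sketch of the pattern (the chart and release names
+below are placeholders):
+
+```shell
+helm install <registry-chart> --name <release-name> -f my-additional-config.yml
+```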
 
 The full CORD helm charts reference documentation is available [here](../charts/helm.md).
diff --git a/prereqs/hardware.md b/prereqs/hardware.md
index 5af720b..f29c1fc 100644
--- a/prereqs/hardware.md
+++ b/prereqs/hardware.md
@@ -4,24 +4,49 @@
 
 ## Generic Hardware Guidelines
 
-* **Compute machines**: CORD can be in principle deployed both on any x86 machine, either physical or virtual. For development, demos or lab trials you may want to use only one machine (even your laptop could be fine, as long as it can provide enough hardware resources). For more realistic deployments it's anyway suggested to use at least three machines; better if all equals to one each other. The characteristics of these machines depends by lots of factors. At high level, at the very minimum, each machine should have a 4 cores CPU, 32GB of RAM and 100G of disk capacity. More sophisticated use-cases, for example M-CORD require more resources. Look at paragraphs below for more informations.
+* **Compute Machines**: CORD can in principle be deployed on any x86
+  machine, either physical or virtual. For development, demos or lab
+  trials you may want to use only one machine (even your laptop could
+  be fine, as long as it has enough resources). For more realistic
+  deployments, we suggest using at least three machines (preferably
+  all the same). The characteristics of these machines depend on
+  several factors. At the very minimum, each machine should have a
+  4-core CPU, 32GB of RAM, and 100G of disk capacity. More sophisticated
+  use-cases, for example M-CORD, require more resources (see below).
 
-* **Network cards**: Whatever server you want to use, it should have at the very minimum a 1G network interface for management.
+* **Network Cards**: Whatever server you use, it should have at the
+  very minimum a 1G network interface for management.
 
-* **Fabric switches**: Fabric switches should be compatible with the ONOS Trellis application that controls them. In this case, it's strongly suggested to stick with one of the models suggested, depending on the requirements. 10G switches are usually preferred for initial functional tests / lab deployments, since cheaper. Moreover, 10G ports can be usually downgraded to 1G speed, and the user can connect copper SFPs to them. The number of switches largely depends by your needs. For basic scenarios one may be enough, for more complete fabric tests, it's suggested to use at least four switches. More for more complex deployments. Developers sometimes emulate the fabric in software (using Mininet), but this can only apply to specific use-cases.
+* **Fabric Switches**: Fabric switches should be compatible with the
+  ONOS Trellis application that controls them. We strongly recommend
+  using one of the tested models listed below. 10G switches are usually
+  preferred for initial functional tests and lab deployments since
+  they are less expensive. Moreover, 10G ports can usually be
+  downgraded to 1G speed, and it's possible to connect them using
+  copper SFPs. The number of switches largely depends on your needs.
+  For basic scenarios one may be enough. For more complete fabric
+  tests, we recommend at least four switches. Developers sometimes
+  emulate the fabric in software (e.g., using Mininet), but this applies
+  only to specific use-cases.
 
-* **Access equipment**: At the moment, both R-CORD and M-CORD work with very specific access equipment. It's strongly suggested to stick with the models suggested in the following paragraphs.
+* **Access Devices**: At the moment, both R-CORD and M-CORD work
+  with very specific access devices, as described below. We strongly
+  recommend using these tested devices.
 
-* **Optics and cabling**: Some hardware may be picky on the optics. Both optics and cable models tested by the community are provided below.
+* **Optics and Cabling**: Some hardware may be picky about the optics.
+  Both optics and cable models tested by the community are provided below.
 
-* **Other**: Besides all above, you will need a development/management machine and a L2 management swich to connect things together. Usually a laptop is enough for the former, and a legacy L2 switch is enough for the latter.
+* **Other**: In addition to the above, you will need a
+  development/management machine and an L2 management switch to
+  connect things together. Usually a laptop is enough for the former,
+  and a legacy L2 switch is enough for the latter.
 
-## Suggested Hardware
+## Recommended Hardware
 
 Following is a list of hardware that people from the ONF community
 have tested over time in lab trials.
 
-* **Compute machines**
+* **Compute Machines**
     * OCP Inspired&trade; QuantaGrid D51B-1U server. Each
     server is configured with 2x Intel E5-2630 v4 10C 2.2GHz 85W, 64GB of RAM 2133MHz DDR4, 2x 500GB HDD, and a 40 Gig adapter.
 
@@ -36,7 +61,7 @@
         * OCP Accepted&trade; EdgeCore AS7712-32X
         * QuantaMesh BMS T7032-IX1/IX1B
 
-* **Fabric optics and DACs**
+* **Fabric Optics and DACs**
     * **10G DACs**
         * Robofiber QSFP-10G-03C SFP+ 10G direct attach passive
         copper cable, 3m length - S/N: SFP-10G-03C
@@ -44,7 +69,7 @@
         * Robofiber QSFP-40G-03C QSFP+ 40G direct attach passive
         copper cable, 3m length - S/N: QSFP-40G-03C
 
-* **R-CORD access equipment and optics**
+* **R-CORD Access Devices and Optics**
     * **XGS-PON**
         * **OLT**: EdgeCore ASFVOLT16 (for more info <bartek_raszczyk@edge-core.com>)
         * Compatible **OLT optics**
@@ -54,7 +79,7 @@
         * Compatible **ONU optics**
             * Hisense/Ligent: LTF7225-BC, LTF7225-BH+
 
-* **M-CORD specific requirements**
+* **M-CORD Specific Requirements**
     * **Servers**: Some components of CORD require at least a Intel XEON CPU with Haswell microarchitecture or better.
     * **eNodeBs**:
         * Cavium Octeon Fusion CNF7100 (for more info <kin-yip.liu@cavium.com>)
diff --git a/prereqs/helm.md b/prereqs/helm.md
index 1fb5b6b..9c8ded4 100644
--- a/prereqs/helm.md
+++ b/prereqs/helm.md
@@ -42,5 +42,5 @@
 ## Done?
 
 Once you are done, you are ready to deploy CORD components using their
-helm charts! See [Bringup Up CORD](../profiles/intro.md). For more detailed
+helm charts! See [Bringing Up CORD](../profiles/intro.md). For more detailed
 information, see the [helm chart reference guide](../charts/helm.md).
diff --git a/prereqs/k8s-multi-node.md b/prereqs/k8s-multi-node.md
index 48ed502..d606416 100644
--- a/prereqs/k8s-multi-node.md
+++ b/prereqs/k8s-multi-node.md
@@ -16,12 +16,12 @@
 
 ## Requirements
 
-* **Operator machine** (1x, either physical or virtual machine)
+* **Operator/Developer Machine** (1x, either physical or virtual machine)
     * Has Git installed
     * Has Python3 installed (<https://www.python.org/downloads/>)
-    * Has Stable version of Ansible installed (<http://docs.ansible.com/ansible/latest/intro_installation.html>)
+    * Has a stable version of Ansible installed (<http://docs.ansible.com/ansible/latest/intro_installation.html>)
     * Is able to reach the target servers (ssh into them)
-* **Target machines** (at least 3x, either physical or virtual machines)
+* **Target/Cluster Machines** (at least 3x, either physical or virtual machines)
     * Run Ubuntu 16.04 server
     * Able to communicate with each other (ping one another)
     * Have the same user *cord* configured, that you can use to remotely access them from the operator machine
diff --git a/prereqs/k8s-single-node.md b/prereqs/k8s-single-node.md
index b9b4cc2..ad08a06 100644
--- a/prereqs/k8s-single-node.md
+++ b/prereqs/k8s-single-node.md
@@ -5,26 +5,17 @@
 
 There are two popular single-node versions of Kubernetes.
 
-* **Minikube**
+* **Minikube** (recommended)
     * Documentation: <https://kubernetes.io/docs/getting-started-guides/minikube/>
+    * Installation on a Linux machine (either physical or a VM on your laptop or in the cloud): <https://github.com/kubernetes/minikube#linux-continuous-integration-without-vm-support> (must have Docker installed: <https://docs.docker.com/install/>)
+    * Installation directly on your Windows or MacOS System: <https://kubernetes.io/docs/getting-started-guides/minikube/#installation>
     * Minimum requirements:
         * One machine, either a physical machine or a VM. It could also be your own PC! It installs natively also on macOS.
 * **MicroK8s**
     * Documentation: <https://microk8s.io/>
     * One machine, Linux based, either physical machine or virtual. It could also be your own PC!
 
-We recommend Minikube, which is easy to set up and use. The following
-comments on two considerations:
-
-* If you want to install Minikube on a Linux machine (either a
-  physical machine or a VM on your laptop or in the cloud), you will
-  need to follow the instructions at <https://github.com/kubernetes/minikube#linux-continuous-integration-without-vm-support>.
-
-* If you want to run Minikube directly on your Windows or MacOS
-  system, you will need to follow the instructions at
-  <https://kubernetes.io/docs/getting-started-guides/minikube/#installation>.
-
 ## Done?
 
-Once you are done, you are ready to install Kubctl and Helm, so return to 
+Once you are done, you are ready to install kubectl and Helm, so return to
 [here](kubernetes.md#get-your-kubeconfig-file) in the installation guide.
diff --git a/prereqs/kubernetes.md b/prereqs/kubernetes.md
index c259de9..7a3e3d0 100644
--- a/prereqs/kubernetes.md
+++ b/prereqs/kubernetes.md
@@ -10,7 +10,7 @@
 
 ## Install Kubernetes
 
-The following sections offer pointers and scripts to install your favorite
+The following sections, [Single Node Cluster](k8s-single-node.md) and [Multi Node Cluster](k8s-multi-node.md), offer pointers and scripts to install your favorite
 version of Kubernetes. Start there, then come back here and follow the
 steps in the following three subsections.
 
diff --git a/profiles/rcord/configuration.md b/profiles/rcord/configuration.md
index a2847f5..e321692 100644
--- a/profiles/rcord/configuration.md
+++ b/profiles/rcord/configuration.md
@@ -125,6 +125,119 @@
 please refer to the [RCORD Service](../../rcord/README.md) guide for
 more information.
 
+### Create a subscriber in RCORD
+
+To create a subscriber in CORD you need to gather the following information:
+
+- ONU Serial Number
+- UNI Port ID
+- Mac Address
+- IP Address
+
+We'll focus on the first two, as the others are self-explanatory.
+
+**Find the ONU Serial Number**
+
+Once your POD is set up and the OLT has been pushed and activated in VOLTHA,
+XOS will discover the ONUs available in the system.
+
+You can find them through:
+
+- the XOS UI, on the left side click on `vOLT > ONUDevices`
+- the REST API: `http://<pod-id>:<chameleon-port|30006>/xosapi/v1/volt/onudevices` (see the sketch after this list)
+- the VOLTHA [cli](../../charts/voltha.md#how-to-access-the-voltha-cli)
+
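+If you prefer the REST API, a quick sketch using `curl` (the address
+and credentials are placeholders; adjust them for your deployment):
+
+```shell
+curl -s -u <username>:<password> http://<pod-id>:30006/xosapi/v1/volt/onudevices
+```
+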
+If you are connected to the VOLTHA CLI you can use the command:
+
+```shell
+(voltha) devices
+Devices:
++------------------+--------------+------+------------------+-------------+-------------+----------------+----------------+------------------+----------+-------------------------+----------------------+------------------------------+
+|               id |         type | root |        parent_id | admin_state | oper_status | connect_status | parent_port_no |    host_and_port | vendor_id| proxy_address.device_id | proxy_address.onu_id | proxy_address.onu_session_id |
++------------------+--------------+------+------------------+-------------+-------------+----------------+----------------+------------------+----------+-------------------------+----------------------+------------------------------+
+| 0001941bd45e71d8 |      openolt | True | 000100000a5a0072 |     ENABLED |      ACTIVE |      REACHABLE |                | 10.90.0.114:9191 |          |                         |                      |                              |
+| 00015698e67dc060 | broadcom_onu | True | 0001941bd45e71d8 |     ENABLED |      ACTIVE |      REACHABLE |      536870912 |                  |      BRCM|        0001941bd45e71d8 |                    1 |                            1 |
++------------------+--------------+------+------------------+-------------+-------------+----------------+----------------+------------------+----------+-------------------------+----------------------+------------------------------+
+```
+to list all the existing devices, and locate the correct ONU, then:
+
+```shell
+(voltha) device 00015698e67dc060
+(device 00015698e67dc060) show
+Device 00015698e67dc060
++------------------------------+------------------+
+|                        field |            value |
++------------------------------+------------------+
+|                           id | 00015698e67dc060 |
+|                         type |     broadcom_onu |
+|                         root |             True |
+|                    parent_id | 0001941bd45e71d8 |
+|                       vendor |         Broadcom |
+|                        model |              n/a |
+|             hardware_version |     to be filled |
+|             firmware_version |     to be filled |
+|                 images.image |        1 item(s) |
+|                serial_number |     BRCM22222222 |
++------------------------------+------------------+
+|                      adapter |     broadcom_onu |
+|                  admin_state |                3 |
+|                  oper_status |                4 |
+|               connect_status |                2 |
+|      proxy_address.device_id | 0001941bd45e71d8 |
+|         proxy_address.onu_id |                1 |
+| proxy_address.onu_session_id |                1 |
+|               parent_port_no |        536870912 |
+|                    vendor_id |             BRCM |
+|                        ports |        2 item(s) |
++------------------------------+------------------+
+|                  flows.items |        5 item(s) |
++------------------------------+------------------+
+```
+to find the correct serial number.
+
+**Find the UNI Port Id**
+
+From the VOLTHA CLI, in the device command prompt execute:
+
+```shell
+(device 00015698e67dc060) ports
+Device ports:
++---------+----------+--------------+-------------+-------------+------------------+-----------------------------------------------------+
+| port_no |    label |         type | admin_state | oper_status |        device_id |                                               peers |
++---------+----------+--------------+-------------+-------------+------------------+-----------------------------------------------------+
+|     100 | PON port |      PON_ONU |     ENABLED |      ACTIVE | 00015698e67dc060 | [{'port_no': 16, 'device_id': u'0001941bd45e71d8'}] |
+|      16 |   uni-16 | ETHERNET_UNI |     ENABLED |      ACTIVE | 00015698e67dc060 |                                                     |
++---------+----------+--------------+-------------+-------------+------------------+-----------------------------------------------------+
+```
+and locate the `ETHERNET_UNI` port.
+The `port_no` for that port is the value you are looking for.
+
+**Push a subscriber into CORD**
+
+Once you have the information you need about your subscriber,
+you can create it by customizing this TOSCA:
+
+```yaml
+tosca_definitions_version: tosca_simple_yaml_1_0
+imports:
+  - custom_types/rcordsubscriber.yaml
+description: Create a test subscriber
+topology_template:
+  node_templates:
+    # A subscriber
+    my_house:
+      type: tosca.nodes.RCORDSubscriber
+      properties:
+        name: My House
+        c_tag: 111
+        onu_device: BRCM1234 # Serial Number of the ONU Device to which this subscriber is connected
+        uni_port_id: 16 # UNI PORT ID in VOLTHA
+        mac_address: 00:AA:00:00:00:01 # subscriber mac address
+        ip_address: 10.8.2.1 # subscriber IP
+```
+
+_For instructions on how to push TOSCA, please refer to this [guide](../../xos-tosca/README.md)_
+
 ### Zero-Touch Subscriber Provisioning
 
 This feature, also referred to as "bottom-up provisioning" enables auto-discovery
diff --git a/profiles/rcord/install.md b/profiles/rcord/install.md
index cd3a9f3..0231ae1 100644
--- a/profiles/rcord/install.md
+++ b/profiles/rcord/install.md
@@ -24,7 +24,7 @@
 ## Install the RCORD-Lite Helm Chart
 
 ```shell
-helm dep update xos-profile/rcord-lite
+helm dep update xos-profiles/rcord-lite
 helm install -n rcord-lite xos-profiles/rcord-lite
 ```