Merge "Add guide for connecting fabric switches to ONOS"
diff --git a/README.md b/README.md
index 87b8637..268c88d 100644
--- a/README.md
+++ b/README.md
@@ -8,8 +8,8 @@
 * [M-CORD](./profiles/mcord/install.md)
 
 If you are anxious to jump straight to a [Quick Start](quickstart.md)
-procedure that brings up the an emulated version of CORD running
+procedure that brings up an emulated version of CORD running
 on your laptop (sorry, no subscriber data plane), then that's an option.
 
-Alternatively, if you want to get a broader layof-of-the-land, you
+Alternatively, if you want to get a broader lay of the land, you
 might step back and start with an [Overview](overview.md).
diff --git a/SUMMARY.md b/SUMMARY.md
index 9c2393b..610afda 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -3,6 +3,8 @@
 * [Overview](overview.md)
     * [Navigating CORD](navigate.md)
     * [Quick Start](quickstart.md)
+        * [MacOS](macos.md)
+        * [Linux](linux.md)
 * [Installation Guide](README.md)
     * [Prerequisites](prereqs/README.md)
         * [Hardware Requirements](prereqs/hardware.md)
@@ -19,6 +21,7 @@
     * [Bringing Up CORD](profiles/intro.md)
         * [R-CORD](profiles/rcord/install.md)
             * [OLT Setup](openolt/README.md)
+            * [Emulated OLT/ONU](profiles/rcord/emulate.md)
         * [M-CORD](profiles/mcord/install.md)
             * [EnodeB Setup](profiles/mcord/enodeb-setup.md)
     * [Helm Reference](charts/helm.md)
@@ -62,6 +65,7 @@
         * [Core Models](xos/core_models.md)
         * [Security Policies](xos/security_policies.md)
     * [Developer Workflows](developer/workflows.md)
+        * [Working on R-CORD Without an OLT/ONU](developer/configuration_rcord.md)
     * [Building Docker Images](developer/imagebuilder.md)
     * [Platform Services](developer/platform.md)
         * [Kubernetes](kubernetes-service/kubernetes-service.md)
@@ -77,6 +81,7 @@
             * [Data Sources](xos-gui/architecture/data-sources.md)
         * [Tests](xos-gui/developer/tests.md)
     * [Unit Tests](xos/dev/unittest.md)
+    * [Versions and Releases](versioning.md)
 * [Testing Guide](cord-tester/README.md)
     * [Test Setup](cord-tester/qa_testsetup.md)
     * [Test Environment](cord-tester/qa_testenv.md)
diff --git a/charts/helm.md b/charts/helm.md
index 63ce16f..8b1c2e3 100644
--- a/charts/helm.md
+++ b/charts/helm.md
@@ -34,7 +34,18 @@
 is then possible to bring up the `mcord` profile, which corresponds
 to ~10 other services. It is also possible to bring up an individual
 service by executing its helm chart; for example
-`xos-services/exampleservice`.
+`xos-services/simpleexampleservice`.
+
+> **Note:** Sometimes we install individual services by first
+> "wrapping" them in a profile. For example,
+> `SimpleExampleService` is deployed from the
+> `xos-profiles/demo-simpleexampleservice` profile, rather
+> than directly from `xos-services/simpleexampleservice`.
+> The latter is included by reference from the former.
+> This is not a fundamental limitation, but we do it when we
+> want to run the `tosca-loader` that loads a TOSCA workflow
+> into CORD. This feature is currently available only at
+> the profile level.
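+
+For example, a plausible way to bring up `SimpleExampleService` via its
+wrapping profile (the same commands appear in the Quick Start guides) is:
+
+```shell
+helm dep update xos-profiles/demo-simpleexampleservice
+helm install xos-profiles/demo-simpleexampleservice -n demo-simpleexampleservice
+```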
 
 Similarly, the `base-kubernetes` profile brings up Kubernetes in
 support of container-based VNFs. This corresponds to the
@@ -42,7 +53,7 @@
 Kubernetes to deploy the CORD control plane. Once this profile is
 running, it is possible to bring up an example VNF in a container
 by executing its helm chart; for example
-`xos-services/simpleexampleservice`.
+`xos-profiles/demo-simpleexampleservice`.
 
 > **Note:** The `base-kubernetes` configuration does not yet
 > incorporate VTN. Doing so is work-in-progress.
diff --git a/charts/hippie-oss.md b/charts/hippie-oss.md
index 0d11701..7afdb4c 100644
--- a/charts/hippie-oss.md
+++ b/charts/hippie-oss.md
@@ -1,5 +1,8 @@
 # Deploy Hippie OSS
 
+To install a minimal (permissive) OSS container in support of subscriber
+provisioning for R-CORD, run the following:
+
 ```shell
 helm install -n hippie-oss xos-services/hippie-oss
 ```
diff --git a/charts/local-persistent-volume.md b/charts/local-persistent-volume.md
new file mode 100644
index 0000000..b2bd8d8
--- /dev/null
+++ b/charts/local-persistent-volume.md
@@ -0,0 +1,40 @@
+# Local Persistent Volume Helm chart
+
+## Introduction
+
+The `local-persistent-volume` helm chart is a utility chart. It was
+created mainly to persist the `xos-core` DB data, but it can be used
+to persist any data.
+
+It uses a relatively new Kubernetes feature (a beta feature
+in Kubernetes 1.10.x) that allows us to define an independent persistent
+store in a Kubernetes cluster.
+
+The helm chart mainly consists of the following kubernetes resources:
+
+- A storage class resource representing a local persistent volume
+- A persistent volume resource associated with the storage class and a specific directory on a specific node
+- A persistent volume claim resource that claims a certain portion of the persistent volume on behalf of a pod
+
+The following variables are configurable in the helm chart:
+
+- `storageClassName`: The name of the storage class resource
+- `persistentVolumeName`: The name of the persistent volume resource
+- `pvClaimName`: The name of the persistent volume claim resource
+- `volumeHostName`: The name of the kubernetes node on which the data will be persisted
+- `hostLocalPath`: The directory or volume mount path on the chosen node where data will be persisted
+- `pvStorageCapacity`: The capacity of the volume available to the persistent volume resource (e.g. 10Gi)
+
+Note: For this helm chart to work, the volume mount path or directory specified in the `hostLocalPath` variable needs to exist before the helm chart is deployed.
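+
+As a sketch, a custom values file overriding these variables might look
+like the following (the node name and path are hypothetical; substitute
+values that match your cluster):
+
+```yaml
+# my-local-store.yaml
+storageClassName: local-storage
+persistentVolumeName: xos-db-pv
+pvClaimName: xos-db-pvc
+volumeHostName: node1
+hostLocalPath: /mnt/local-store
+pvStorageCapacity: 10Gi
+```
+
+It could then be passed to the install command with
+`helm install -n local-store local-persistent-volume -f my-local-store.yaml`.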
+
+## Standard Install
+
+```shell
+helm install -n local-store local-persistent-volume
+```
+
+## Standard Uninstall
+
+```shell
+helm delete --purge local-store
+```
diff --git a/charts/voltha.md b/charts/voltha.md
index 1402dda..16cd2f4 100644
--- a/charts/voltha.md
+++ b/charts/voltha.md
@@ -2,7 +2,7 @@
 
 ## First Time Installation
 
-Add the kubernetes helm charts incubator repository
+Add the kubernetes-charts `incubator` repository to your local Helm configuration:
 
 ```shell
 helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
@@ -14,31 +14,43 @@
 helm dep build voltha
 ```
 
-There's an etcd-operator **known bug** we're trying to solve that
-prevents users to deploy Voltha straight since the first time. We
-found a workaround.
+Install the kafka dependency
 
-Few steps:
+```shell
+helm install --name voltha-kafka \
+--set replicas=1 \
+--set persistence.enabled=false \
+--set zookeeper.servers=1 \
+--set zookeeper.persistence.enabled=false \
+incubator/kafka
+```
 
-Install Voltha (without etcd operator)
+There is an `etcd-operator` **known bug** that prevents deploying
+Voltha correctly the first time. We suggest the following workaround:
+
+First, install Voltha without an `etcd` custom resource definition:
 
 ```shell
 helm install -n voltha --set etcd-operator.customResources.createEtcdClusterCRD=false voltha
 ```
 
-Uninstall Voltha
+Then upgrade Voltha, which defaults to using the `etcd` custom
+resource definition:
+
+```shell
+helm upgrade voltha ./voltha
+```
+
+After this first installation, you can use the standard
+install/uninstall procedure described below.
+
+## Standard Uninstall
 
 ```shell
 helm delete --purge voltha
 ```
 
-Deploy Voltha
-
-```shell
-helm install -n voltha voltha
-```
-
-## Standard Installation Process
+## Standard Install
 
 ```shell
 helm install -n voltha voltha
@@ -53,7 +65,7 @@
     * Inner port: 8882
     * Nodeport: 30125
 
-## How to access the VOLTHA CLI
+## Accessing the VOLTHA CLI
 
 Assuming you have not changed the default ports in the chart,
 you can use this command to access the VOLTHA CLI:
diff --git a/charts/xos-core.md b/charts/xos-core.md
index b0ecc55..7d2f44c 100644
--- a/charts/xos-core.md
+++ b/charts/xos-core.md
@@ -1,20 +1,43 @@
 # Deploy XOS-CORE
 
+To deploy the XOS core and affiliated containers, run the following:
+
 ```shell
 helm dep update xos-core
 helm install -n xos-core xos-core
 ```
 
-> We highly suggest to override the default values of
-> `xosAdminUser` and `xosAdminPassword` with custom values
+## Customizing security information
 
-You can do it using a [`values.yaml`](https://docs.helm.sh/chart_template_guide/#values-files) file or using this command:
+We strongly recommend that you override the default values of `xosAdminUser` and
+`xosAdminPassword` with custom values.
+
+You can do this using a [`values.yaml`](https://docs.helm.sh/chart_template_guide/#values-files)
+file like this one:
+
+```yaml
+# custom-security.yaml
+xosAdminUser: 'admin@onf.org'
+xosAdminPassword: 'foobar'
+```
+
+and add it to the install command:
+
+```shell
+helm install -n xos-core xos-core -f custom-security.yaml
+```
+
+Alternatively, you can override the values from the CLI:
 
 ```shell
 helm install -n xos-core xos-core --set xosAdminUser=MyUser --set xosAdminPassword=MySuperSecurePassword
 ```
+
+> **Important!**
+> If you override security values in the `xos-core` chart, you will need to pass
+> the same values, either via a file or CLI arguments, to all the XOS-related charts
+> you install, e.g., `rcord-lite`, `base-openstack`, ...
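+
+For example, reusing the hypothetical `custom-security.yaml` file shown
+above when installing the `rcord-lite` profile might look like:
+
+```shell
+helm install -n rcord-lite xos-profiles/rcord-lite -f custom-security.yaml
+```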
 
 ## Deploy kafka
 
-Some flavors of XOS require kafka, to install it please
-follow refer to the [kafka](kafka.md) instructions.
+Some flavors of XOS require kafka. To install it, please
+refer to the [kafka](kafka.md) instructions.
diff --git a/charts/xossh.md b/charts/xossh.md
index 5a31d74..675a23e 100644
--- a/charts/xossh.md
+++ b/charts/xossh.md
@@ -1,5 +1,7 @@
 # Deploy XOSSH
 
+To deploy the XOS-Shell, run the following:
+
 ```shell
 helm install xos-tools/xossh -n xossh
 ```
diff --git a/developer/configuration_rcord.md b/developer/configuration_rcord.md
new file mode 100644
index 0000000..d8066af
--- /dev/null
+++ b/developer/configuration_rcord.md
@@ -0,0 +1,113 @@
+# Working on R-CORD Without an OLT/ONU
+
+This section describes a developer workflow that works in scenarios
+where you do not have a real OLT or ONU. It combines steps from
+the "bottom-up" and "top-down" subscriber provisioning sequences
+described [here](../profiles/rcord/configuration.md).
+
+The idea is to add the access devices (OLT/PONPORT/ONU) to the XOS
+data model through "top-down" provisioning, and to simulate the
+"bottom-up" action of VOLTHA publishing a newly discovered ONU to
+the Kafka bus using a Python script.
+
+## Prerequisites
+
+- All the components needed for the R-CORD profile are up and running
+   on your POD (xos-core, rcord-lite, voltha, onos-voltha).
+- Configure `OLT/PONPORT/ONU` devices using the sample
+   TOSCA config given below:
+
+```yaml
+tosca_definitions_version: tosca_simple_yaml_1_0
+imports:
+  - custom_types/oltdevice.yaml
+  - custom_types/onudevice.yaml
+  - custom_types/ponport.yaml
+  - custom_types/voltservice.yaml
+description: Create a simulated OLT Device in VOLTHA
+topology_template:
+  node_templates:
+
+    device#olt:
+      type: tosca.nodes.OLTDevice
+      properties:
+        device_type: simulated_olt
+        host: 172.17.0.1
+        port: 50060
+        must-exist: true
+
+    pon_port:
+      type: tosca.nodes.PONPort
+      properties:
+        name: test_pon_port_1
+        port_no: 2
+        s_tag: 222
+      requirements:
+        - olt_device:
+            node: device#olt
+            relationship: tosca.relationships.BelongsToOne
+
+    onu:
+      type: tosca.nodes.ONUDevice
+      properties:
+        serial_number: BRCM1234
+        vendor: Broadcom
+      requirements:
+        - pon_port:
+            node: pon_port
+            relationship: tosca.relationships.BelongsToOne
+```
+
+- Deploy `kafka` as described in [these instructions](../charts/kafka.md).
+
+- Deploy `hippie-oss` as described in [these instructions](../charts/hippie-oss.md).
+
+## Push "onu-event" to Kafka
+
+The following event needs to be pushed to Kafka manually.
+
+```python
+event = json.dumps({
+    'status': 'activated',
+    'serial_number': 'BRCM1234',
+    'uni_port_id': 16,
+    'of_dpid': 'of:109299321'
+})
+```
+
+Make sure that the `serial_number` in the event matches the
+`serial_number` you configured when adding the ONU device.
+XOS uses the serial number to verify that the device is actually
+listed (`volt/onudevices`).
+
+The script for pushing the `onu-event` to Kafka
+(`onu_activate_event.py`) is already available in the container
+running `volt-synchronizer` and you may execute it as:
+
+```shell
+cordserver@cordserver:~$ kubectl get pods | grep rcord-lite-volt
+rcord-lite-volt-dd98f78d6-rwwhz                                   1/1       Running            0          10d
+
+cordserver@cordserver:~$ kubectl exec rcord-lite-volt-dd98f78d6-rwwhz python /opt/xos/synchronizers/volt/onu_activate_event.py
+```
+
+If you need to update the contents of the event file, you will first
+need to install an editor in the container (e.g., `apt update && apt install vim`).
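+
+Alternatively, if you would rather construct and push the event from
+outside the container, a minimal sketch using the `kafka-python` library
+might look like the following. The broker address and topic name are
+assumptions; check the `volt-synchronizer` configuration (or the
+`onu_activate_event.py` script itself) for the actual values.
+
+```python
+import json
+from kafka import KafkaProducer
+
+# Assumed broker address and topic; verify against your deployment.
+KAFKA_BROKER = "cord-kafka.default.svc.cluster.local:9092"
+TOPIC = "onu.events"
+
+event = json.dumps({
+    'status': 'activated',
+    'serial_number': 'BRCM1234',   # must match the ONU added via TOSCA
+    'uni_port_id': 16,
+    'of_dpid': 'of:109299321'
+})
+
+producer = KafkaProducer(bootstrap_servers=KAFKA_BROKER)
+producer.send(TOPIC, event.encode('utf-8'))
+producer.flush()
+```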
+
+## Verification
+
+- Verify that the `hippie-oss` instance is created for the event
+   (i.e., verify the serial number of the ONU). The `hippie-oss` container
+   is intended to verify the ONU serial number with an external OSS-DB,
+   but it is currently configured to always validate the ONU.
+- Verify a new `rcord-subscriber` service instance is created.
+- Once the `rcord-subscriber` service instance is created, make sure
+   new service instances are created for the `volt` and `vsg-hw` models.
+
+```shell
+curl -X GET http://172.17.8.101:30006/xosapi/v1/hippie-oss/hippieossserviceinstances -u "admin@opencord.org:letmein"
+curl -X GET http://172.17.8.101:30006/xosapi/v1/rcord/rcordsubscribers -u "admin@opencord.org:letmein"
+curl -X GET http://172.17.8.101:30006/xosapi/v1/volt/voltserviceinstances -u "admin@opencord.org:letmein"
+curl -X GET http://172.17.8.101:30006/xosapi/v1/vsg-hw/vsghwserviceinstances -u "admin@opencord.org:letmein"
+```
diff --git a/fabric-setup.md b/fabric-setup.md
index 79ddd93..d0de4c2 100644
--- a/fabric-setup.md
+++ b/fabric-setup.md
@@ -11,7 +11,8 @@
 
 ## Operating System
 
-All CORD-compatible switches use [Open Networking Linux (ONL)](https://opennetlinux.org/) as operating system.
+All CORD-compatible switches use
+[Open Networking Linux (ONL)](https://opennetlinux.org/) as the operating system.
 The [latest compatible ONL image](https://github.com/opencord/OpenNetworkLinux/releases/download/2017-10-19.2200-1211610/ONL-2.0.0_ONL-OS_2017-10-19.2200-1211610_AMD64_INSTALLED_INSTALLER) can be downloaded from [here](https://github.com/opencord/OpenNetworkLinux/releases/download/2017-10-19.2200-1211610/ONL-2.0.0_ONL-OS_2017-10-19.2200-1211610_AMD64_INSTALLED_INSTALLER).
 
 **Checksum**: *sha256:2db316ea83f5dc761b9b11cc8542f153f092f3b49d82ffc0a36a2c41290f5421*
@@ -26,7 +27,7 @@
 
 ## OFDPA Drivers
 
-Once ONL is installed OFDPA drivers will need to be installed as well.
+Once ONL is installed, OFDPA drivers will need to be installed as well.
 Each switch model requires a specific version of OFDPA. All driver packages are distributed as DEB packages, which makes the installation process straightforward.
 
 First, copy the package to the switch. For example
@@ -41,7 +42,7 @@
 dpkg -i your-ofdpa.deb
 ```
 
-Two OFDPA drivers are available:
+Three OFDPA drivers are available:
 
 * [EdgeCore 5712-54X / 5812-54X / 6712-32X](https://github.com/onfsdn/atrium-docs/blob/master/16A/ONOS/builds/ofdpa_3.0.5.5%2Baccton1.7-1_amd64.deb?raw=true) - *checksum: sha256:db228b6e79fb15f77497b59689235606b60abc157e72fc3356071bcc8dc4c01f*
 * [QuantaMesh T3048-LY8](https://github.com/onfsdn/atrium-docs/blob/master/16A/ONOS/builds/ofdpa-ly8_0.3.0.5.0-EA5-qct-01.01_amd64.deb?raw=true) - *checksum: sha256:f8201530b1452145c1a0956ea1d3c0402c3568d090553d0d7b3c91a79137da9e*
diff --git a/linux.md b/linux.md
new file mode 100644
index 0000000..6e22f34
--- /dev/null
+++ b/linux.md
@@ -0,0 +1,224 @@
+# Quick Start: Linux
+
+This section walks you through an example installation sequence on
+Linux, assuming a fresh install of Ubuntu 16.04.
+
+## Prerequisites
+
+You need to first install Docker and Python:
+
+```shell
+sudo apt update
+sudo apt-get install python
+sudo apt-get install python-pip
+pip install requests
+sudo apt install -y docker.io
+sudo systemctl start docker
+sudo systemctl enable docker
+```
+
+Now, verify the docker version.
+
+```shell
+docker --version
+```
+
+## Minikube & Kubectl
+
+Install `minikube` and `kubectl`:
+
+```shell
+curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
+chmod +x minikube
+sudo mv minikube /usr/local/bin/
+curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
+chmod +x ./kubectl
+sudo mv ./kubectl /usr/local/bin/kubectl
+```
+
+Issue the following commands:
+
+```shell
+export MINIKUBE_WANTUPDATENOTIFICATION=false
+export MINIKUBE_WANTREPORTERRORPROMPT=false
+export MINIKUBE_HOME=$HOME
+export CHANGE_MINIKUBE_NONE_USER=true
+mkdir -p $HOME/.kube
+touch $HOME/.kube/config
+
+export KUBECONFIG=$HOME/.kube/config
+```
+
+Navigate to the `/usr/local/bin/` directory and issue the following
+command. Make sure there are no errors afterwards:
+
+```shell
+sudo -E ./minikube start --vm-driver=none
+```
+
+You can run
+
+```shell
+kubectl cluster-info
+```
+
+to verify that your Minikube cluster is up and running.
+
+## Export the KUBECONFIG File
+
+Locate the `KUBECONFIG` file:
+
+```shell
+sudo updatedb
+locate kubeconfig
+```
+
+Export a `KUBECONFIG` variable containing the path to the
+configuration file found above. For example, if your `kubeconfig`
+file is located at `/var/lib/localkube/kubeconfig`,
+the command you issue would look like this:
+
+```shell
+export KUBECONFIG=/var/lib/localkube/kubeconfig
+```
+
+## Download CORD
+
+There are two general ways you might download CORD. The following
+walks through both, but you need to follow only one. (For simplicity, we
+recommend the first.)
+
+The first simply clones the CORD `helm-chart` repository using `git`.
+This is sufficient for downloading just the Helm charts you will need
+to deploy the set of containers that comprise CORD. These containers
+will be pulled down from DockerHub.
+
+The second uses the `repo` tool to download all the source code that
+makes up CORD, including the Helm charts needed to deploy the CORD
+containers. You might find this useful if you want to look at the
+internals of CORD more closely.
+
+In either case, following these instructions will result in a
+directory `~/cord/helm-charts`, which will be where you go next to
+continue the installation process.
+
+### Download: `git clone`
+
+Create a CORD directory and run the following `git` command in it:
+
+```shell
+mkdir ~/cord
+cd ~/cord
+git clone https://gerrit.opencord.org/helm-charts
+cd helm-charts
+```
+
+### Download: `repo`
+
+Make sure you have a `bin/` directory in your home directory and
+that it is included in your path:
+
+```shell
+mkdir ~/bin
+PATH=~/bin:$PATH
+```
+
+Download the Repo tool and ensure that it is executable:
+
+```shell
+curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
+chmod a+x ~/bin/repo
+```
+
+Make a `/cord` directory and navigate into it:
+
+```shell
+mkdir ~/cord
+cd ~/cord
+```
+
+Configure `git` with your real name and email address:
+
+```shell
+git config --global user.name "Your Name"
+git config --global user.email "you@example.com"
+```
+
+Initialize `repo` and download the CORD source tree to your working
+directory:
+
+```shell
+repo init -u https://gerrit.opencord.org/manifest -b master
+repo sync
+```
+
+## Helm
+
+Run the Helm installer script that will automatically grab the latest
+version of the Helm client and install it locally:
+
+```shell
+curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
+chmod 700 get_helm.sh
+./get_helm.sh
+```
+
+## Tiller
+
+Issue the following:
+
+```shell
+sudo helm init
+sudo kubectl create serviceaccount --namespace kube-system tiller
+sudo kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
+sudo kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'      
+sudo helm init --service-account tiller --upgrade
+```
+
+Install `socat` to fix a port-forwarding error:
+
+```shell
+sudo apt-get install socat
+```
+
+Issue the following and make sure no errors come up:
+
+```shell
+helm ls
+```
+
+## Deploy CORD Helm Charts
+
+Deploy the service profiles corresponding to the `xos-core`,
+`base-kubernetes`, and `demo-simpleexampleservice` helm-charts:
+
+```shell
+cd ~/cord/helm-charts
+helm init
+sudo helm dep update xos-core
+sudo helm install xos-core -n xos-core
+sudo helm dep update xos-profiles/base-kubernetes
+sudo helm install xos-profiles/base-kubernetes -n base-kubernetes
+sudo helm dep update xos-profiles/demo-simpleexampleservice
+sudo helm install xos-profiles/demo-simpleexampleservice -n demo-simpleexampleservice
+```
+
+Use `kubectl get pods` to verify that all containers in the profile 
+are successful and none are in the error state. 
+
+> **Note:** It will take some time for the various helm charts to
+> deploy and the containers to come online. The `tosca-loader`
+> container may error and retry several times as it waits for
+> services to be dynamically loaded. This is normal, and eventually
+> the `tosca-loader` containers will enter the completed state.
+
+## Next Steps 
+
+This completes our example walk-through. At this point, you can do one 
+of the following:
+
+* Explore other [installation options](README.md). 
+* Take a tour of the [operational interfaces](operating_cord/general.md). 
+* Drill down on the internals of [SimpleExampleService](simpleexampleservice/simple-example-service.md). 
diff --git a/macos.md b/macos.md
new file mode 100644
index 0000000..59e4b6d
--- /dev/null
+++ b/macos.md
@@ -0,0 +1,148 @@
+# Quick Start: MacOS
+
+This section walks you through an example installation sequence on
+MacOS. It was tested on version 10.12.6.
+
+## Prerequisites
+
+You need to install Docker. Visit `https://docs.docker.com/docker-for-mac/install/` for instructions.
+
+You also need to install VirtualBox. Visit `https://www.virtualbox.org/wiki/Downloads` for instructions.
+
+The following assumes you've installed the Homebrew package manager. Visit
+`https://brew.sh/` for instructions.
+
+## Install Minikube and Kubectl
+
+To install Minikube, run the following command:
+
+```shell
+curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.28.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
+```
+To install Kubectl, run the following command:
+
+```shell
+brew install kubectl
+```
+
+## Install Helm and Tiller
+
+The following installs both Helm and Tiller.
+
+```shell
+brew install kubernetes-helm
+```
+
+## Bring Up a Kubernetes Cluster
+
+Start a minikube cluster as follows. This automatically runs inside VirtualBox.
+
+```shell
+minikube start
+```
+
+To see that it's running, type
+
+```shell
+kubectl cluster-info
+```
+
+You should see something like the following
+
+```shell
+Kubernetes master is running at https://192.168.99.100:8443
+KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
+
+To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
+```
+
+You can also see how the cluster is configured by looking at `~/.kube/config`.
+Other tools described on this page use this configuration file to find your cluster.
+
+If you want, you can see minikube running by looking at the VirtualBox dashboard.
+Or alternatively, you can visit the Minikube dashboard:
+
+```shell
+minikube dashboard
+```
+
+As a final step, you need to start Tiller on the Kubernetes cluster.
+
+```shell
+helm init
+```
+
+## Download CORD Helm-Charts
+
+You don't need to download all of CORD. You just need to download a set of helm charts. They will, in turn, download a collection of CORD containers from Docker
+Hub. The rest of this section assumes all CORD-related downloads are placed in
+directory `~/cord`.
+
+```shell
+mkdir ~/cord
+cd ~/cord
+git clone https://gerrit.opencord.org/helm-charts
+cd helm-charts
+```
+
+## Bring Up CORD
+
+Deploy the service profiles corresponding to the `xos-core`,
+`base-kubernetes`, and `demo-simpleexampleservice` helm-charts.
+To do this, execute the following from the `~/cord/helm-charts` directory.
+
+```shell
+helm dep update xos-core
+helm install xos-core -n xos-core
+helm dep update xos-profiles/base-kubernetes
+helm install xos-profiles/base-kubernetes -n base-kubernetes
+helm dep update xos-profiles/demo-simpleexampleservice
+helm install xos-profiles/demo-simpleexampleservice -n demo-simpleexampleservice
+```
+
+Use `kubectl get pods` to verify that all containers in the profile
+are successful and none are in the error state.
+
+> **Note:** It will take some time for the various helm charts to
+> deploy and the containers to come online. The `tosca-loader`
+> container may error and retry several times as it waits for
+> services to be dynamically loaded. This is normal, and eventually
+> the `tosca-loader` will enter the completed state.
+
+When all the containers are successfully up and running, `kubectl get pod`
+will return output that looks something like this:
+
+```shell
+NAME                                           READY     STATUS    RESTARTS   AGE
+base-kubernetes-kubernetes-55c55bd897-rn9ln    1/1       Running   0          2m
+base-kubernetes-tosca-loader-vs6pv             1/1       Running   1          2m
+demo-simpleexampleservice-787454b84b-ckpn2     1/1       Running   0          1m
+demo-simpleexampleservice-tosca-loader-4q7zg   1/1       Running   0          1m
+xos-chameleon-6f49b67f68-pdf6n                 1/1       Running   0          2m
+xos-core-57fd788db-8b97d                       1/1       Running   0          2m
+xos-db-f9ddc6589-rtrml                         1/1       Running   0          2m
+xos-gui-7fcfcd4474-prhfb                       1/1       Running   0          2m
+xos-redis-74c5cdc969-ppd7z                     1/1       Running   0          2m
+xos-tosca-7c665f97b6-krp5k                     1/1       Running   0          2m
+xos-ws-55d676c696-pxsqk                        1/1       Running   0          2m
+```
+
+## Visit CORD Dashboard
+
+Finally, to view the CORD dashboard, run the following:
+
+```shell
+minikube service xos-gui
+```
+
+This will launch a window in your default browser. Administrator login
+and password are defined in `~/cord/helm-charts/xos-core/values.yaml`.
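+
+The relevant keys in that file are the same `xosAdminUser` and
+`xosAdminPassword` values described in the `xos-core` chart page, so the
+entries you are looking for plausibly resemble:
+
+```yaml
+xosAdminUser: 'admin@opencord.org'
+xosAdminPassword: 'letmein'
+```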
+
+## Next Steps
+
+This completes our example walk-through. At this point, you can do one
+of the following:
+
+* Explore other [installation options](README.md).
+* Take a tour of the [operational interfaces](operating_cord/general.md).
+* Drill down on the internals of [SimpleExampleService](simpleexampleservice/simple-example-service.md).
diff --git a/mdl_relaxed.rb b/mdl_relaxed.rb
index 92e3fe8..bebc671 100644
--- a/mdl_relaxed.rb
+++ b/mdl_relaxed.rb
@@ -48,3 +48,6 @@
 
 # Exclude rule: Emphasis used instead of a header
 exclude_rule 'MD036'
+
+# Gitbook won't care about multiple blank lines
+exclude_rule 'MD012'
diff --git a/navigate.md b/navigate.md
index 4e2aced..0a6db80 100644
--- a/navigate.md
+++ b/navigate.md
@@ -1,8 +1,8 @@
 # Navigating CORD
 
-The relationship between installing, operating, and developing
-CORD—and the corresponding toolsets and specification files
-used by each stage—is helpful in navigating CORD.
+Understanding the relationship between installing, operating, and developing
+CORD—and the corresponding toolsets and specification files used by
+each stage—is helpful in navigating CORD.
 
 * **Installation (Helm):** Installing CORD means installing a collection
   of Docker containers in a Kubernetes cluster. We use Helm to carry out
@@ -42,11 +42,13 @@
   TOSCA workflow into a newly deployed set of services. This is how a
   service graph is typically instantiated.
 
-* Not all services run as Docker containers. Some services run in VMs
-  managed by OpenStack (this is currently the case for M-CORD) and
-  some services are implemented as ONOS applications that have been
-  packaged using Maven. In such cases, the VM image and the Maven
-  package are still specified in the TOSCA workflow.
+* While the CORD control plane is deployed as a set of Docker
+  containers, not all of the services themselves run in containers.
+  Some services run in VMs managed by OpenStack (this is currently
+  the case for M-CORD) and some services are implemented as ONOS
+  applications that have been packaged using Maven. In such cases,
+  the VM image and the Maven package are still specified in the TOSCA
+  workflow.
 
 * Every service (whether implemented in Docker, OpenStack, or ONOS)
   has a counter-part *synchronizer* container running as part of the CORD
diff --git a/partials/helm/description.md b/partials/helm/description.md
index 1c04574..a2f8538 100644
--- a/partials/helm/description.md
+++ b/partials/helm/description.md
@@ -1,3 +1,3 @@
 Helm is the package manager for Kubernetes. It lets you define, install,
-and upgrade Kubernetes base application. For more information about Helm,
-please the visit the official website: <https://helm.sh>.
+and upgrade Kubernetes-based applications. For more information about Helm,
+please visit the official website: <https://helm.sh>.
diff --git a/prereqs/kubernetes.md b/prereqs/kubernetes.md
index e840b38..996b60a 100644
--- a/prereqs/kubernetes.md
+++ b/prereqs/kubernetes.md
@@ -1,9 +1,22 @@
 # Kubernetes
 
-CORD runs on any version of Kubernetes (1.9 or greater), and uses the
+CORD runs on any version of Kubernetes (1.10 or greater), and uses the
 Helm client-side tool. If you are new to Kubernetes, we recommend
 <https://kubernetes.io/docs/tutorials/> as a good place to start.
 
+Note: We are using a feature in Kubernetes 1.10 that allows local persistence of data.
+This is a beta feature in Kubernetes 1.10.x as of this writing and should be enabled by default.
+However, if it is not, you will need to enable it as a feature gate when
+launching Kubernetes, using the following feature gate settings:
+
+```shell
+PersistentLocalVolumes=true
+VolumeScheduling=true
+MountPropagation=true
+```
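+
+For example, if you are bringing up a single-node cluster with Minikube,
+these gates can plausibly be passed at start-up like this (flag syntax
+assumed; adjust to however you launch Kubernetes):
+
+```shell
+sudo minikube start --vm-driver=none \
+  --feature-gates=PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true
+```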
+
+More information about feature gates can be found [here](https://github.com/kubernetes-incubator/external-storage/tree/local-volume-provisioner-v2.0.0/local-volume#enabling-the-alpha-feature-gates).
+
 Although you are free to set up Kubernetes and Helm in whatever way makes
 sense for your deployment, the following provides guidelines, pointers, and
 automated scripts that might be helpful.
diff --git a/profiles/mcord/install.md b/profiles/mcord/install.md
index d043ae4..4df44df 100644
--- a/profiles/mcord/install.md
+++ b/profiles/mcord/install.md
@@ -66,8 +66,16 @@
 ssh -p 8101 onos@onos-cord-ssh.default.svc.cluster.local cordvtn-nodes
 ```
 
+> NOTE: If the `cordvtn-nodes` command is not present, or if it does not show any nodes,
+> the most common cause is an issue with resolving the server's hostname.
+> See [this section on adding a hostname to kube-dns](../../prereqs/vtn-setup.md#dns-setup)
+> for a fix; the command should be present shortly after the hostname is added.
+
 You should see all nodes in `COMPLETE` state.
 
+> NOTE: If the node is in `INIT` state rather than `COMPLETE`, try running
+> `cordvtn-node-init <node>` and see if that resolves the issue.
+
 Next, check that the VNF images are loaded into OpenStack (they are quite large
 so this may take a while to complete):
 
@@ -144,3 +152,14 @@
 | 4a5960b5-b5e4-4777-8fe4-f257c244f198 | mysite_vspgwc-3 | ACTIVE | management=172.27.0.7; spgw_network=117.0.0.8; s11_network=112.0.0.4                               | image_spgwc_v0.1 | m1.large  |
 +--------------------------------------+-----------------+--------+----------------------------------------------------------------------------------------------------+------------------+-----------+
 ```
+
+Log in to the XOS GUI and verify that the service synchronizers have run.  The
+GUI is available at URL `http://<master-node>:30001` with username
+`admin@opencord.org` and password `letmein`.  Verify that the status of all
+ServiceInstance objects is `OK`.
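+
+As a quick command-line check, you can also query the ServiceInstance
+objects through the XOS REST API. The endpoint shown here follows the
+pattern used elsewhere in this guide and is an assumption; adjust host,
+port, and credentials to your deployment:
+
+```shell
+curl -X GET http://<master-node>:30006/xosapi/v1/core/serviceinstances -u "admin@opencord.org:letmein"
+```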
+
+> NOTE: If you see a status message of `SSH Error: data could not be sent to
+> remote host`, the most common cause is the inability of the synchronizers to
+> resolve the server's hostname.  See [this section on adding a hostname to
+> kube-dns](../../prereqs/vtn-setup.md#dns-setup) for a fix; the issue should
+> resolve itself after the hostname is added.
diff --git a/profiles/rcord/configuration.md b/profiles/rcord/configuration.md
index 887f644..7c865c8 100644
--- a/profiles/rcord/configuration.md
+++ b/profiles/rcord/configuration.md
@@ -1,9 +1,10 @@
 # R-CORD Configuration
 
 Once all the components needed for the R-CORD profile are up and
-running on your POD, you'll need to configure XOS with the proper configuration.
-Since this configuration is environment specific, you'll need to create your own,
-but the following can serve as a reference for it:
+running on your POD, you will need to configure it. This is typically
+done using TOSCA. This configuration is environment specific, so
+you will need to create your own, but the following can serve as a
+reference:
 
 ```yaml
 tosca_definitions_version: tosca_simple_yaml_1_0
@@ -120,34 +121,34 @@
             relationship: tosca.relationships.BelongsToOne
 ```
 
-_For instructions on how to push TOSCA, please refer to this [guide](../../xos-tosca/README.md)_
+For instructions on how to push TOSCA into a CORD POD, please
+refer to this [guide](../../xos-tosca/README.md).
 
-Once the POD has been configured, you can create a subscriber,
-please refer to the [RCORD Service](../../rcord/README.md) guide for
-more informations.
+## Top-Down Subscriber Provisioning
 
-## Create a subscriber in RCORD
+Once the POD has been configured, you can create a subscriber. This
+section describes a "top-down" approach for doing that. (The following
+section describes an alternative, "bottom up" approach.)
 
-To create a subscriber in CORD you need to retrieve some informations:
+To create a subscriber, you need to retrieve some information:
 
 - ONU Serial Number
 - Mac Address
 - IP Address
 
-We'll focus on the first two as the others are pretty self-explaining.
-
-**Find the ONU Serial Number**
+### Find ONU Serial Number
 
 Once your POD is set up and the OLT has been pushed and activated in VOLTHA,
 XOS will discover the ONUs available in the system.
 
-You can find them trough:
+You can find them through:
 
-- the XOS UI, on the left side click on `vOLT > ONUDevices`
-- the rest APIs `http://<pod-id>:<chameleon-port|30006>/xosapi/v1/volt/onudevices`
-- the VOLTHA [cli](../../charts/voltha.md#how-to-access-the-voltha-cli)
+- XOS GUI: on the left side click on `vOLT > ONUDevices`
+- XOS Rest API: `http://<pod-id>:<chameleon-port|30006>/xosapi/v1/volt/onudevices`
+- VOLTHA CLI: [Command Line Interface](../../charts/voltha.md#how-to-access-the-voltha-cli)
 
-If you are connected to the VOLTHA CLI you can use the command:
+If you are connected to the VOLTHA CLI you can use the following
+command to list all the existing devices:
 
 ```shell
 (voltha) devices
@@ -160,7 +161,7 @@
 +------------------+--------------+------+------------------+-------------+-------------+----------------+----------------+------------------+----------+-------------------------+----------------------+------------------------------+
 ```
 
-to list all the existing devices, and locate the correct ONU, then:
+Locate the correct ONU, then:
 
 ```shell
 (voltha) device 00015698e67dc060
@@ -197,10 +198,10 @@
 
 to find the correct serial number.
 
-**Push a subscriber into CORD**
+### Push a Subscriber into CORD
 
-Once you have the informations you need about your subscriber,
-you can create it by customizing this TOSCA:
+Once you have this information, you can create the subscriber by
+customizing the following TOSCA and passing it into the POD:
 
 ```yaml
 tosca_definitions_version: tosca_simple_yaml_1_0
@@ -220,42 +221,47 @@
         ip_address: 10.8.2.1 # subscriber IP
 ```
 
-_For instructions on how to push TOSCA, please refer to this [guide](../../xos-tosca/README.md)_
+For instructions on how to push TOSCA into a CORD POD, please
+refer to this [guide](../../xos-tosca/README.md).
 
 ## Zero-Touch Subscriber Provisioning
 
-This feature, also referred to as "bottom-up provisioning" enables auto-discovery
-of subscriber and their validation through an external OSS.
+This feature, also referred to as "bottom-up" provisioning,
+enables auto-discovery of subscribers and validates them
+using an external OSS.
 
-Here is the expected workflow:
+The expected workflow is as follows:
 
-- when an ONU is attached to the POD, VOLTHA will discover it and send an event to XOS
-- XOS receive the ONU activated events and through an OSS-Service query the upstream OSS to validate wether that ONU has a valid serial number
-- once the OSS has approved the ONU, XOS will create `ServiceInstance` chain for this particular subscriber and configure the POD to give him connectivity
+- When an ONU is attached to the POD, VOLTHA will discover it and send
+  an event to XOS
+- XOS receives the ONU activation event and, through an OSS proxy,
+  queries the upstream OSS to validate whether that ONU has a valid serial number
+- Once the OSS has approved the ONU, XOS will create a `ServiceInstance`
+  chain for this particular subscriber and configure the POD to enable connectivity
 
-If you want to enable the "Zero touch provisioning" feature you'll need
-to deploy and configure some extra pieces in the system before attaching
+To enable the zero-touch provisioning feature, you will need to deploy
+and configure some extra pieces in the system before attaching
 subscribers:
 
-**Kafka**
+### Deploy Kafka
 
-To enable this feature XOS needs to receive events from `onos-voltha`
+To enable this feature XOS needs to receive events from `onos-voltha`,
 so a kafka bus needs to be deployed.
-To deploy it please follow [this instructions](../../charts/kafka.md)
+To deploy Kafka, please follow these [instructions](../../charts/kafka.md).
 
-**OSS Service**
+### Deploy OSS Proxy
 
-This is the piece of code that is responsible to enable the communication
-between CORD and you OSS Database.
-For reference we are providing a sample implemetation, available here:
+This is the piece of code that is responsible for connecting CORD to an
+external OSS database. As a simple reference, we provide a sample
+implementation, available here:
 [hippie-oss](https://github.com/opencord/hippie-oss)
 
 > **Note:** This implementation currently validates any subscriber that comes online.
 
 To deploy the `hippie-oss` service you can look [here](../../charts/hippie-oss.md).
 
-Once the chart has come online, you'll need to add it to your service graph,
-and you can use this TOSCA for that:
+Once the chart has come online, you will need to add the Hippie-OSS service
+to your service graph. You can use the following TOSCA to do that:
 
 ```yaml
 tosca_definitions_version: tosca_simple_yaml_1_0
@@ -296,4 +302,5 @@
             relationship: tosca.relationships.BelongsToOne
 ```
 
-_For instructions on how to push TOSCA, please refer to this [guide](../../xos-tosca/README.md)_
+For instructions on how to push TOSCA into a CORD POD, please
+refer to this [guide](../../xos-tosca/README.md).
diff --git a/profiles/rcord/emulate.md b/profiles/rcord/emulate.md
new file mode 100644
index 0000000..30efab0
--- /dev/null
+++ b/profiles/rcord/emulate.md
@@ -0,0 +1,17 @@
+# Emulated OLT/ONU
+
+Support for emulating the OLT/ONU using `ponsim` is still a
+work-in-progress, so it is not currently possible to bring up R-CORD
+without the necessary access hardware. In the meantime, it is possible
+to set up a development environment that includes just the R-CORD
+control plane. Doing so involves installing the following helm charts:
+
+- [xos-core](../../charts/xos-core.md)
+- [cord-kafka](../../charts/kafka.md)
+- [hippie-oss](../../charts/hippie-oss.md)
+
+in addition to `rcord-lite`. This would typically be done
+on a [single node platform](../../prereqs/k8s-single-node.md) in
+support of a developer workflow that [emulates subscriber
+provisioning](../../developer/configuration_rcord.md).
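+
+As a rough sketch, the sequence of helm commands might look like the
+following (the `cord-kafka` invocation is an assumption; see the linked
+chart pages for the exact commands and any required values files):
+
+```shell
+helm dep update xos-core
+helm install -n xos-core xos-core
+helm install -n cord-kafka incubator/kafka
+helm install -n hippie-oss xos-services/hippie-oss
+helm dep update xos-profiles/rcord-lite
+helm install -n rcord-lite xos-profiles/rcord-lite
+```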
+
diff --git a/profiles/rcord/install.md b/profiles/rcord/install.md
index c08c1cf..7f260bd 100644
--- a/profiles/rcord/install.md
+++ b/profiles/rcord/install.md
@@ -1,37 +1,58 @@
 # R-CORD Profile
 
 The latest version of R-CORD differs from versions included in earlier
-releases in that it does not include the vSG service. In the code this
-configuration is called `rcord-lite`, but since it is the only version
-of Residential CORD currently supported, we usually simply call it
-the "R-CORD" profile.
+releases in that it does not include the vSG service. In the code,
+this new configuration is called `rcord-lite`, but since it is the
+only version of Residential CORD currently supported, we simply
+call it the *R-CORD profile.*
 
 ## Prerequisites
 
-- A Kubernetes cluster (you can follow one of this guide to install a [single
-  node cluster](../../prereqs/k8s-single-node.md) or a [multi node
-  cluster](../../prereqs/k8s-multi-node.md))
-- Helm, follow [this guide](../../prereqs/helm.md)
+- Kubernetes: Follow one of these guides to install either a [single
+   node](../../prereqs/k8s-single-node.md) or a [multi
+   node](../../prereqs/k8s-multi-node.md) cluster.
+- Helm: Follow this [guide](../../prereqs/helm.md).
 
-## CORD Components
+## Install VOLTHA
 
-R-CORD has dependencies on this charts, so they need to be installed first:
+When running on a physical POD with OLT/ONU hardware, the
+first step to bringing up R-CORD is to install the
+[VOLTHA helm chart](../../charts/voltha.md).
+
+## Install CORD Platform
+
+The R-CORD profile has dependencies on the following platform
+charts, so they need to be installed next:
 
 - [xos-core](../../charts/xos-core.md)
 - [onos-fabric](../../charts/onos.md#onos-fabric)
 - [onos-voltha](../../charts/onos.md#onos-voltha)
 
-## Installing the R-CORD Profile
+## Install R-CORD Profile
 
-```shell
+You are now ready to install the R-CORD profile:
+
+```shell
 helm dep update xos-profiles/rcord-lite
 helm install -n rcord-lite xos-profiles/rcord-lite
 ```
 
-Now that your R-CORD deployment is complete, please read this
-to understand how to configure it: [Configure R-CORD](configuration.md)
+Optionally, if you want to use the "bottom up" subscriber provisioning
+workflow described in the [Operations Guide](configuration.md), you
+will also need to install the following two charts:
 
-## Customizing an R-CORD Install
+- [cord-kafka](../../charts/kafka.md)
+- [hippie-oss](../../charts/hippie-oss.md)
+
+> **Note:** If you install both VOLTHA and the optional Kafka, you
+> will end up with two instantiations of Kafka: `kafka-voltha` and
+> `kafka-cord`.
+
+Once your R-CORD deployment is complete, please read the
+following guide to understand how to configure it:
+[Configure R-CORD](configuration.md)
+
+## Customize an R-CORD Install
 
 Define a `my-rcord-values.yaml` that looks like:
 
@@ -62,4 +83,3 @@
 ```shell
 helm install -n rcord-lite xos-profiles/rcord-lite -f my-rcord-values.yaml
 ```
-
diff --git a/quickstart.md b/quickstart.md
index b42cf11..bf1354d 100644
--- a/quickstart.md
+++ b/quickstart.md
@@ -1,8 +1,16 @@
 # Quick Start
 
-This section walks you through the installation sequence to bring up a
-demonstration configuration of CORD that includes a simple example
-service. If you'd prefer to understand the installation process in more
-depth, you might start with the [Installation Guide](README.md).
+This section walks you through an example installation sequence on two
+different platforms. If you'd prefer to understand the installation
+process in more depth, including the full range of deployment options,
+you might start with the [Installation Guide](README.md) instead.
 
-More to come...
+This Quick Start describes how to install the R-CORD profile, plus a
+`SimpleExampleService`, on a single machine. Once you complete these
+steps, you might be interested in jumping ahead to the
+[SimpleExampleService Tutorial](simpleexampleservice/simple-example-service.md)
+to learn more about the make-up of a CORD service. Another option
+would be to explore CORD's [operational interfaces](operating_cord/general.md). 
+
+* [MacOS](macos.md)
+* [Linux](linux.md)
diff --git a/versioning.md b/versioning.md
new file mode 100644
index 0000000..dd1a00c
--- /dev/null
+++ b/versioning.md
@@ -0,0 +1,112 @@
+# Versions and Releases of CORD
+
+The 5.0 and earlier releases of CORD were done on patch branches, named
+`cord-4.1`, `cord-5.0`, etc., which received bug fixes as required.
+
+Starting with 6.0, the decision was made that individual components of CORD
+would be released and versioned separately.  The versioning method chosen was
+[Semantic Versioning](https://semver.org/), with version numbers incrementing
+per the MAJOR.MINOR.PATCH method as changes are made.
+
+For development versions, the recommended format is either SemVer or,
+for Python code, the slightly different
+[PEP440](https://www.python.org/dev/peps/pep-0440/) syntax (`.dev#` instead of `-dev#`).
+
+To avoid confusion, all components that existed prior to 6.0 started their
+independent versioning at version `6.0.0`.
+
+## CORD Releases
+
+Formal CORD releases are the tags on two repositories,
+[helm-charts](https://github.com/opencord/helm-charts) and
+[automation-tools](https://github.com/opencord/automation-tools).  The helm
+charts refer to specific container versions used with CORD, which encapsulate
+individual components that are versioned independently.  For example, a future
+6.1.0 release might still include some components that use the
+original 6.0.0 version.
+
+While not created by default during a release, patch branches may be created
+for tracking ongoing work, but this is left up to the owners/developers of the
+individual components.
+
+Support and patches are provided for the currently released version and the
+last point revision of the previous release - for example, while developing
+6.0, we continued to support both 5.0 and 4.1.  Once 6.0 is released, support
+will be provided for 6.0 and 5.0, dropping 4.1.
+
+## How to create a versioned release of an individual component
+
+1. Update the `VERSION` file to a released version (ex: `6.0.1`)
+
+2. Make sure that any Docker parents are using a released version (`FROM
+   xosproject/xos-base:6.0.0`), or some other acceptably specific version (ex: `FROM
+   scratch` or `FROM ubuntu:16.04`)
+
+3. Create a patchset with these changes and submit it to Gerrit
+
+4. If it passes tests, upon merge the commit will be tagged in git with the
+   version string, and Docker images with the tag will be built and sent to
+   Docker Hub.
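+
+As a concrete sketch of steps 1 and 2, the `VERSION` file and Dockerfile
+parent might look like this (component and image names are illustrative):
+
+```text
+$ cat VERSION
+6.0.1
+
+$ head -1 Dockerfile
+FROM xosproject/xos-base:6.0.0
+```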
+
+## Details of the release process
+
+To create a new version, the version string is updated using the language- or
+framework-specific method. For most of CORD, a file named `VERSION` is created
+in the root of the git repository and contains a single line with the version
+string.  Once a commit that changes the released version number has been merged
+to a branch, the version number is added as a git tag on the repo.
+
+During development, the version is usually set to a development value, such as
+`6.0.0.dev1`. There can be multiple patchsets using the same non-release
+development version, and these versions don't create git tags on merge.
+
+As it's confusing to have multiple git commits that contain the same released
+version string in the `VERSION` file, Jenkins jobs have been put in place to
+prevent this from happening. The implementation is as follows:
+
+- When a patchset is submitted to Gerrit, the `tag-collision-reject` Jenkins
+  job runs. This checks that the version string in `VERSION` does not
+  already exist as a git tag, and rejects any patchsets that have duplicate
+  released versions. It ignores development and non-SemVer version strings.
+
+  This job also checks that if a released version number is used, any
+  Dockerfile parent images are also using a fixed parent version, to better
+  ensure repeatability of the image builds.
+
+- Once the patchset is approved and merged, the `version-tag` Jenkins job runs.
+  If the patchset uses a SemVer released version, additional checks are
+  performed and if they pass a git tag is created on the git repo pointing to
+  the commit.
+
+  Once the tag is created, if there are Docker images that need to be created
+  from the commit, the `publish-imagebuilder` job runs and creates tagged
+  images corresponding to the git tags and branches and pushes them to Docker
+  Hub.
+
+Git history is public so it shouldn't be rewritten to abandon already merged
+commits, which means that there's not a way to un-release a version.
+
+Reverting a commit leaves it in the git history, so if a broken version is
+released the correct action is to create a new fixed version, not try to fix
+the already released version.
+
+## Troubleshooting
+
+### Patchsets fail job after rebasing
+
+If you've rebased your patchset onto a released version, the `VERSION` file may
+be at a released version, which violates the "no two patchsets can contain the
+same released version" rule.  For example, an error like this:
+
+```text
+Version string '1.0.1' found in 'VERSION' is a SemVer released version!
+ERROR: Duplicate tag: 1.0.1
+```
+
+This means that when you rebased your code, the check found `1.0.1` in the `VERSION`
+file, which violates the "two commits can't have the same version" policy.
+
+To fix this issue, you would change the contents of the `VERSION` file to
+either increment to a dev version (ex: `1.0.2.dev1`) or a release version (ex:
+`1.0.2`) and resubmit your patchset.
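+
+For example, one plausible way to do this from the command line is:
+
+```shell
+echo "1.0.2.dev1" > VERSION
+git add VERSION
+git commit --amend --no-edit
+git review        # or however you normally resubmit the patchset to Gerrit
+```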
+