Merge "Troubleshooting page for failing authentication."
diff --git a/Makefile b/Makefile
index 5b2c180..e8a972f 100644
--- a/Makefile
+++ b/Makefile
@@ -13,7 +13,7 @@
 
 # Other repos with documentation that's included in the gitbook
 # edit the `git_refs` file with the commit/tag/branch that you want to use
-OTHER_REPO_DOCS ?= att-workflow-driver cord-tester fabric fabric-crossconnect hippie-oss kubernetes-service olt-service onos-service openolt openstack rcord simpleexampleservice exampleservice vrouter xos xos-gui xos-tosca
+OTHER_REPO_DOCS ?= att-workflow-driver cord-tester fabric fabric-crossconnect hippie-oss kubernetes-service olt-service onos-service openolt openstack rcord simpleexampleservice exampleservice vrouter xos xos-gui xos-tosca vtn-service
 GENERATED_DOCS  ?= # should be 'swagger', but currently broken
 ALL_DOCS        ?= $(OTHER_REPO_DOCS) $(GENERATED_DOCS)
 
diff --git a/SUMMARY.md b/SUMMARY.md
index e75ac60..3bf8973 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -106,7 +106,7 @@
     * [AT&T Workflow Driver](att-workflow-driver/README.md)
     * [Kubernetes](kubernetes-service/kubernetes-service.md)
     * [OpenStack](openstack/openstack-service.md)
-    * [VTN](xos/xos_vtn.md)
+    * [VTN](vtn-service/README.md)
 * [Helm Reference](charts/helm.md)
     * [XOS-CORE](charts/xos-core.md)
     * [ONOS](charts/onos.md)
diff --git a/git_refs b/git_refs
index 8659fbc..9e9dd1a 100644
--- a/git_refs
+++ b/git_refs
@@ -28,4 +28,4 @@
 xos-gui               /docs    master
 xos-tosca             /docs    master
 xos                   /docs    master
-
+vtn-service           /docs    master
diff --git a/navigate.md b/navigate.md
index 7e3479e..f307986 100644
--- a/navigate.md
+++ b/navigate.md
@@ -1,10 +1,10 @@
 # Navigating CORD
 
-## Assembled in Layers
+## Assembled from Components
 
-A given instance of CORD is constructed from a set of disaggregated
+A given instance of CORD is assembled from a set of disaggregated
 components. This assembly is done according to the general pattern
-shown in the following conceptual diagram.
+shown in the following diagram.
 
 ![Layers](images/layers.png)
 
@@ -12,11 +12,13 @@
 
 * **Kubernetes:** All elements of the CORD control plane run in
   Kubernetes containers. CORD assumes a Kubernetes foundation,
-  but does not prescribe how the hardware or Kubernetes are installed.
+  but does not prescribe how Kubernetes (or the underlying hardware)
+  is installed.
 
 * **Platform:** The Platform layer consists of ONOS, XOS,
   Kafka, and collection of Logging and Monitoring micro-services,
-  all running on a Kubernetes foundation.
+  all running on a Kubernetes foundation. The platform is common
+  to all Profiles.
 
 * **Profile:** Each unique CORD configuration corresponds to a
   Profile. It consists of a set of services (e.g., access services,
@@ -27,8 +29,8 @@
   
 * **Workflow:** A Profile includes one or more workflows, each of
   which defines the business logic and state machine for one of the
-  access technologies. A workflow augments/parameterizes a Profile for
-  the target deployment environment; it is not a layer, per se.  SEBA's
+  access technologies. A workflow customizes a Profile for the target
+  deployment environment; it is not a layer, per se.  SEBA's
   [AT&T Workflow](profiles/seba/workflows/att-install.md) is an example.
 
 The diagram also shows a hardware bill-of-materials, which must be
diff --git a/offline-install.md b/offline-install.md
index dc209ef..304c6fb 100644
--- a/offline-install.md
+++ b/offline-install.md
@@ -22,27 +22,34 @@
 
 * A **local webserver** to host the ONOS applications (.oar files), that are instead normally downloaded from Sonatype. If you don't have one, follow the notes below to quickly deploy a webserver on top of your existing Kubernetes cluster.
 
-* **Kubernetes servers default gateway**: In order for kube-dns to work, a default route (even pointing to a non-exisiting/working gateway) needs to be set on the machines hosting Kubernetes. This is something related to Kubernetes, not to CORD.
-
 ## Prepare the offline installation
 
 This should be done from a machine that has access to Internet.
 
-* Clone the *helm-charts* repository
+* Add the *cord* repository to the list of your local repositories and download the repository index.
 
 ```shell
-git clone https://gerrit.opencord.org/helm-charts
+helm repo add cord https://charts.opencord.org
+helm repo update
 ```
 
-Next steps largely depend on the type of profile you want to install.
+* Add other third-party helm repositories used by CORD and pull external dependencies.
 
-* Add external helm repositories and pull external dependencies.
+* Fetch all the charts locally from the remote repositories.
 
-* Fetch helm-charts not available locally.
+```shell
+helm fetch name-of-the-repo/name-of-the-chart --untar
+```
 
-* Modify the CORD helm charts to instruct kubernetes to pull the images from your local registry (where images will be pushed), instead of DockerHub. One option is to modify the *values.yaml* in each chart. A better option consists in extending the charts, rather than directly modifying them. This way, the original configuration can be kept as is, just overriding some values as needed. You can do this by writing your additional configuration yaml file, and parsing it as needed, adding `-f my-additional-config.yml` while using the helm install/upgrade commands. The full CORD helm charts reference documentation is available [here](../charts/helm.md).
+* Fetch (where needed) the chart dependencies
 
-* Download the ONOS applications (OAR files). Informations about the oar applications used can be found here: <https://github.com/opencord/helm-charts/blob/master/xos-profiles/att-workflow/values.yaml>
+```shell
+helm dep update name-of-the-repo/name-of-the-chart
+```
+
+* Create a file to override the default values of the charts, in order to instruct Kubernetes to pull the Docker images from your local registry (where the images will be pushed) instead of DockerHub, and the ONOS applications from the local webserver. One option is to modify the *values.yaml* file of each chart. A better option is to extend the charts rather than modifying them directly: this way, the original configuration is kept as is, and only the values that need to change are overridden. To do this, write an additional configuration yaml file and pass it with `-f my-additional-config.yml` to the helm install/upgrade commands. The full CORD helm charts reference documentation is available [here](../charts/helm.md).
+
+* Download the ONOS applications (OAR files) used in your profile. For SEBA, they can be found here: <https://github.com/opencord/helm-charts/blob/master/xos-profiles/seba-services/values.yaml>
 
 * Pull from DockerHub all the Docker images that need to be used on your POD. The *automation-tools* repository has a *images_from_charts.sh* utility inside the *developer* folder that can help you to get all the image names given the helm-chart repository and a list of chart names. More informations in the sections below.
 
@@ -50,7 +57,7 @@
 
 * Optionally download the OpenOLT driver deb files from your vendor website (i.e. EdgeCore).
 
-* Optionally, save as tar files the Docker images downloaded. This can be useful if you'll use a different machine to upload the images on the local registry running in your infrastructure. To do that, for each image use the Docker command.
+* Optionally, save the downloaded Docker images as tar files. This can be useful if you'll use a different machine to upload the images to the local registry running in your infrastructure. To do that, run the following Docker command for each image.
 
 ```shell
 docker save IMAGE_NAME:TAG > FILE_NAME.tar
@@ -72,7 +79,7 @@
 
 * Copy the ONOS applications to your local web server. The procedure largely varies from the web server you run, its configuration, and what ONOS applications you need.
 
-* Deploy CORD/SEBA using the helm charts. Remember to load with the *-f* option the additional configuration file to extend the helm charts, if any.
+* Deploy CORD and its profile(s) using the helm charts. Remember to pass the additional configuration file that overrides the chart values, if any, with the *-f* option.
 
 {% include "/partials/push-images-to-registry.md" %}
 
@@ -82,6 +89,8 @@
 
 If you don't have a local Docker registry deployed in your infrastructure, you can install an **insecure** one using the official Kubernetes helm-chart.
 
+Since this specific Docker registry is packaged as a Kubernetes pod and shipped as a helm chart, you'll need Internet connectivity to install it.
+
 > **Note:** *Insecure* registries can be used for development, POCs or lab trials. **You should not use this in production.** There are planty of documents online that guide you through secure registries setup.
 
 The following command deploys the registry and exposes the port *30500*. (You may want to change it with any value that fit your deployment needs).
@@ -106,7 +115,7 @@
 helm fetch bitnami/nginx --untar
 
 # Then, while deploying offline
-helm install -n mavenrepo --set service.nodePorts.http=30160,service.type=NodePort nginx
+helm install -n mavenrepo --set service.type=NodePort --set service.nodePorts.http=30160 bitnami/nginx
 ```
 
 The webserver will be up in few seconds and you'll be able to reach the root web page using the IP of one of your Kubernetes nodes, port *30160*. For example, you can do:
@@ -146,46 +155,34 @@
 ### Prepare the installation
 
 ```shell
-# Clone repos
+# Clone the automation-tools repo
 git clone https://gerrit.opencord.org/automation-tools
-git clone https://gerrit.opencord.org/helm-charts
 
-# Copy automation scripts in the right place
-cp automation-tools/developer/images_from_charts.sh helm-charts
-cp automation-tools/developer/pull_images.sh helm-charts
-cp automation-tools/developer/tag_and_push.sh helm-charts
-
-cd helm-charts
-
-# Add online helm repositories and update dependencies
-helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
-helm repo add rook-beta https://charts.rook.io/beta
+# Add the online helm repositories and update indexes
+helm repo add cord https://charts.opencord.org
+helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
 helm repo add bitnami https://charts.bitnami.com/bitnami
-helm dep update voltha
-helm dep update xos-core
-helm dep update xos-profiles/att-workflow
-helm dep update xos-profiles/base-kubernetes
-helm dep update nem-monitoring
-helm dep update logging
-helm dep update storage/rook-operator
+helm repo update
 
-# Fetch helm-charts not available locally
+# Fetch helm charts
+helm fetch cord/cord-platform --version 6.1.0 --untar
+helm fetch cord/seba --version 1.0.0 --untar
+helm fetch cord/att-workflow --version 1.0.0 --untar
 helm fetch stable/docker-registry --untar
-helm fetch --version 0.8.0 stable/etcd-operator --untar
-helm fetch --version 0.8.8 incubator/kafka --untar
 helm fetch bitnami/nginx --untar
 
-# Update Kafka dependencies
-helm dep update kafka
+# Update chart dependencies
+helm dep update cord-platform
+helm dep update seba
 
 # For demo, install the local, helm-based Docker Registry on the remote POD (this will require POD connectivity to download the docker registry image)
 helm install stable/docker-registry --set service.nodePort=30500,service.type=NodePort -n docker-registry
 
 # For demo, install the local web-server to host ONOS images
-helm install -n mavenrepo bitnami/nginx
+helm install -n mavenrepo --set service.type=NodePort --set service.nodePorts.http=30160 bitnami/nginx
 
 # Identify images form the official helm charts and pull images from DockerHub. If you see some "skipped value for filters" warning that's fine
-bash images_from_charts.sh kafka etcd-cluster etcd-operator voltha onos xos-core xos-profiles/att-workflow xos-profiles/base-kubernetes nem-monitoring logging storage/rook-operator | bash pull_images.sh > images
+bash automation-tools/developer/images_from_charts.sh cord-platform seba seba/charts/voltha/charts/etcd-cluster att-workflow | bash automation-tools/developer/pull_images.sh > images
 
 # Download ONOS apps
 curl -L "https://oss.sonatype.org/service/local/artifact/maven/redirect?r=snapshots&g=org.opencord&a=olt-app&v=2.1.0-SNAPSHOT&e=oar" > olt.oar
@@ -194,81 +191,91 @@
 curl -L "https://oss.sonatype.org/service/local/artifact/maven/redirect?r=snapshots&g=org.opencord&a=aaa&v=1.8.0-SNAPSHOT&e=oar" > aaa.oar
 curl -L "https://oss.sonatype.org/service/local/artifact/maven/redirect?r=snapshots&g=org.opencord&a=kafka&v=1.0.0-SNAPSHOT&e=oar" > kafka.oar
 
-# Create file to extend the helm charts (call it extend.yaml)
+# Create file to override the default helm chart values (call it extend.yaml)
 global:
   registry: 192.168.0.100:30500/
 
-image: 192.168.0.100:30500/confluentinc/cp-kafka
-imageTag: 4.1.2-2
+# CORD platform overrides
+kafka:
+  image: 192.168.0.100:30500/confluentinc/cp-kafka
+  imageTag: 4.1.2-2
+  configurationOverrides:
+    zookeeper.connection.timeout.ms: 60000
+    zookeeper.session.timeout.ms: 60000
 
-zookeeper:
-  image:
-    repository: 192.168.0.100:30500/gcr.io/google_samples/k8szk
+  zookeeper:
+    image:
+      repository: 192.168.0.100:30500/gcr.io/google_samples/k8szk
 
-etcd-cluster:
-  spec:
-    repository: 192.168.0.100:30500/quay.io/coreos/etcd
-  pod:
-    busyboxImage: 192.168.0.100:30500/busybox:1.28.1-glibc
-
-etcdOperator:
-  image:
-    repository: 192.168.0.100:30500/quay.io/coreos/etcd-operator
-backupOperator:
-  image:
-    repository: 192.168.0.100:30500/quay.io/coreos/etcd-operator
-restoreOperator:
-  image:
-    repository: 192.168.0.100:30500/quay.io/coreos/etcd-operator
-
-grafana:
-  image:
-    repository: 192.168.0.100:30500/grafana/grafana
-  sidecar:
-    image: 192.168.0.100:30500/kiwigrid/k8s-sidecar:0.0.3
-
-prometheus:
-  server:
+logging:
+  elasticsearch:
     image:
-      repository: 192.168.0.100:30500/prom/prometheus
-  alertmanager:
-    image:
-      repository: 192.168.0.100:30500/prom/alertmanager
-  configmapReload:
-    image:
-      repository: 192.168.0.100:30500/jimmidyson/configmap-reload
-  kubeStateMetrics:
-    image:
-      repository: 192.168.0.100:30500/quay.io/coreos/kube-state-metrics
-  nodeExporter:
-    image:
-      repository: 192.168.0.100:30500/prom/node-exporter
-  pushgateway:
-    image:
-      repository: 192.168.0.100:30500/prom/pushgateway
-  initChownData:
-    image:
+      repository: 192.168.0.100:30500/docker.elastic.co/elasticsearch/elasticsearch-oss
+    initImage:
       repository: 192.168.0.100:30500/busybox
 
-elasticsearch:
-  image:
-    repository: 192.168.0.100:30500/docker.elastic.co/elasticsearch/elasticsearch-oss
-  initImage:
-    repository: 192.168.0.100:30500/busybox
+  kibana:
+    image:
+      repository: 192.168.0.100:30500/docker.elastic.co/kibana/kibana-oss
 
-kibana:
-  image:
-    repository: 192.168.0.100:30500/docker.elastic.co/kibana/kibana-oss
+  logstash:
+    image:
+      repository: 192.168.0.100:30500/docker.elastic.co/logstash/logstash-oss
 
-logstash:
-  image:
-    repository: 192.168.0.100:30500/docker.elastic.co/logstash/logstash-oss
+nem-monitoring:
+  grafana:
+    image:
+      repository: 192.168.0.100:30500/grafana/grafana
+    sidecar:
+      image: 192.168.0.100:30500/kiwigrid/k8s-sidecar:0.0.3
 
-oltAppUrl: http://192.168.0.100:30160/olt.oar
-sadisAppUrl: http://192.168.0.100:30160/sadis.oar
-dhcpL2RelayAppUrl: http://192.168.0.100:30160/dhcpl2relay.oar
-aaaAppUrl: http://192.168.0.100:30160/aaa.oar
-kafkaAppUrl: http://192.168.0.100:30160/kafka.oar
+  prometheus:
+    server:
+      image:
+        repository: 192.168.0.100:30500/prom/prometheus
+    alertmanager:
+      image:
+        repository: 192.168.0.100:30500/prom/alertmanager
+    configmapReload:
+      image:
+        repository: 192.168.0.100:30500/jimmidyson/configmap-reload
+    kubeStateMetrics:
+      image:
+        repository: 192.168.0.100:30500/quay.io/coreos/kube-state-metrics
+    nodeExporter:
+      image:
+        repository: 192.168.0.100:30500/prom/node-exporter
+    pushgateway:
+      image:
+        repository: 192.168.0.100:30500/prom/pushgateway
+    initChownData:
+      image:
+        repository: 192.168.0.100:30500/busybox
+
+# SEBA specific overrides
+voltha:
+  etcd-cluster:
+    spec:
+      repository: 192.168.0.100:30500/quay.io/coreos/etcd
+    pod:
+      busyboxImage: 192.168.0.100:30500/busybox:1.28.1-glibc
+
+  etcdOperator:
+    image:
+      repository: 192.168.0.100:30500/quay.io/coreos/etcd-operator
+  backupOperator:
+    image:
+      repository: 192.168.0.100:30500/quay.io/coreos/etcd-operator
+  restoreOperator:
+    image:
+      repository: 192.168.0.100:30500/quay.io/coreos/etcd-operator
+
+seba-services:
+  oltAppUrl: http://192.168.0.100:30160/olt.oar
+  sadisAppUrl: http://192.168.0.100:30160/sadis.oar
+  dhcpL2RelayAppUrl: http://192.168.0.100:30160/dhcpl2relay.oar
+  aaaAppUrl: http://192.168.0.100:30160/aaa.oar
+  kafkaAppUrl: http://192.168.0.100:30160/kafka.oar
 
 # Download the openolt.deb driver installation file from the vendor website (command varies)
 scp/wget... openolt.deb
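Before installing, it can be worth checking that the overrides in `extend.yaml` actually take effect. A minimal sketch of such a check (the `check_images` helper and the registry address are illustrative examples, not part of the CORD tooling): render the charts with `-f extend.yaml` and list any image reference that still does not point at the local registry.

```shell
# List image references in rendered manifests that do NOT point at the
# local registry; this should print nothing once the overrides are correct.
# Hypothetical helper, not part of automation-tools.
REGISTRY=192.168.0.100:30500
check_images() {
  grep -E '^[[:space:]]*image:' \
    | sed -e 's/^[[:space:]]*image:[[:space:]]*//' -e 's/"//g' \
    | grep -v "^$REGISTRY/" || true
}

# Usage: helm template cord-platform -f extend.yaml | check_images
```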
@@ -277,10 +284,8 @@
 ### Offline deployment
 
 ```shell
-cd helm-charts
-
 # Tag and push the images to the local Docker registry
-cat images | bash tag_and_push.sh -r 192.168.0.100:30500
+cat images | bash automation-tools/developer/tag_and_push.sh -r 192.168.0.100:30500
 
 # Copy the ONOS applications to the local web server
 MAVEN_REPO=$(kubectl get pods | grep mavenrepo | awk '{print $1;}')
@@ -290,16 +295,10 @@
 kubectl cp aaa.oar $MAVEN_REPO:/opt/bitnami/nginx/html
 kubectl cp kafka.oar $MAVEN_REPO:/opt/bitnami/nginx/html
 
-# Install SEBA
-helm install -n etcd-operator -f extend.yaml --version 0.8.0 etcd-operator
-helm install -n cord-kafka -f examples/kafka-single.yaml -f extend.yaml --version 0.8.8 kafka
-helm install -n voltha -f extend.yaml voltha
-helm install -n onos -f configs/onos.yaml -f extend.yaml onos
-helm install -n xos-core -f extend.yaml xos-core
-helm install -n att-workflow -f extend.yaml xos-profiles/att-workflow
-helm install -n base-kubernetes -f extend.yaml xos-profiles/base-kubernetes
-helm install -n nem-monitoring -f extend.yaml nem-monitoring
-helm install -n logging -f extend.yaml logging
+# Install the CORD platform, the SEBA profile and the ATT workflow
+helm install -n cord-platform -f extend.yaml cord-platform
+helm install -n seba -f extend.yaml seba
+helm install -n att-workflow -f extend.yaml att-workflow
 
 # On the OLT, copy, install and run openolt.deb
 scp openolt.deb root@192.168.0.200:
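Conceptually, the tag-and-push step above reads the image names from the `images` file, prefixes each with the local registry address, re-tags, and pushes. A rough sketch of that loop (hypothetical reimplementation; the real logic lives in `automation-tools/developer/tag_and_push.sh`):

```shell
# Prefix an image reference with the local registry address.
local_tag() {
  printf '%s/%s\n' "$1" "$2"
}

# Re-tag and push every image name read on stdin.
push_to_registry() {
  registry="$1"
  while read -r image; do
    target=$(local_tag "$registry" "$image")
    docker tag "$image" "$target"
    docker push "$target"
  done
}

# Usage: cat images | push_to_registry 192.168.0.100:30500
```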
diff --git a/overview.md b/overview.md
index a67e6db..1a16dfb 100644
--- a/overview.md
+++ b/overview.md
@@ -29,11 +29,11 @@
  included in CORD.
 
 These are all fairly obvious. What's less obvious is the relationship among
-these stages, which is helpful in [Navigating CORD](navigate.md).
+these stages, which is explained in [Navigating CORD](navigate.md).
 
 ## Navigating the References
 
-CORD is built from components and the aggregation of components into a
+CORD is built from disaggregated components that are assembled into a
 composite solution. The References are organized accordingly:
 
 * [Profile Reference](profiles/intro.md): Installation and
diff --git a/quickstart.md b/quickstart.md
index 3dfab6b..382d310 100644
--- a/quickstart.md
+++ b/quickstart.md
@@ -2,7 +2,7 @@
 
 This section walks you through an example installation sequence on two
 different Unix-based platforms. This is just a surface introduction to
-CORD. If you'd prefer to understand the installation process in more
+CORD. If you prefer to understand the installation process in more
 depth, including the full range of deployment options, you should
 start with the [Installation Guide](README.md) instead.
 
@@ -16,6 +16,7 @@
 * [MacOS](macos.md)
 * [Linux](linux.md)
 
-Instead if you want to quickly get started with a complete CORD system together
-with the CORD platform, a profile such as SEBA, an exemplar operator workflow and
-an emulated data-plane, consider [SEBA-in-a-Box](./profiles/seba/siab-overview.md).
+If you want to quickly get started with a complete CORD system
+running on your laptop—including the CORD platform, the SEBA profile,
+an exemplar operator workflow, and an emulated data-plane—you could
+give [SEBA-in-a-Box](./profiles/seba/siab-overview.md) a try.