Merge "Adding offline installation docs"
diff --git a/SUMMARY.md b/SUMMARY.md
index 5ad8044..115f433 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -38,6 +38,7 @@
         * [XOSSH](charts/xossh.md)
         * [Logging and Monitoring](charts/logging-monitoring.md)
         * [Persistent Storage](charts/storage.md)
+        * [BBSim](charts/bbsim.md)
 * [Operations Guide](operating_cord/operating_cord.md)
     * [General Info](operating_cord/general.md)
         * [GUI](operating_cord/gui.md)
diff --git a/charts/bbsim.md b/charts/bbsim.md
new file mode 100644
index 0000000..cabc6c6
--- /dev/null
+++ b/charts/bbsim.md
@@ -0,0 +1,53 @@
+# BBSim Helm Chart
+
+This chart lets you install the BBSim broadband simulator.
+Note that this chart depends on the [kafka](kafka.md) chart.
+
+```shell
+helm install -n bbsim bbsim
+```
+
+## Set a different number of ONUs
+
+You can configure the number of ONUs through an installation parameter:
+
+```shell
+helm install -n bbsim bbsim --set onus_per_pon_port={number_of_onus}
+```
+
+## Set a different mode
+
+By default BBSim brings up a certain number of ONUs and then starts sending
+authentication requests (via EAPOL) and DHCP requests.
+
+You can change the behaviour via:
+
+```shell
+helm install -n bbsim bbsim --set emulation_mode="{both|aaa|default}"
+```
+
+Where:
+
+- `both` stands for authentication and DHCP
+- `aaa` stands for authentication only
+- `default` will just activate the devices
+
+## Start BBSim without Kafka
+
+Kafka is used to aggregate the logs in CORD's [logging](logging-monitoring.md)
+framework.
+
+If you want to start BBSim without pushing the logs to Kafka, you can install it
+with:
+
+```shell
+helm install -n bbsim bbsim --set kafka_broker=""
+```
+
+## Provision the BBSim OLT in NEM
+
+You can use this file to bring up the BBSim OLT in NEM: [bbsim-16.yaml](https://github.com/opencord/pod-configs/blob/master/tosca-configs/bbsim/bbsim-16.yaml).
+
+Note that this file contains some configuration for the `dhcpl2relay` application
+in ONOS that instructs it to send DHCP packets back to the OLT. This differs
+from a POD where those packets are sent out through the fabric.
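+For reference, a sketch of the kind of network-config fragment involved (not
+the exact contents of that file): the ONOS `dhcpl2relay` application exposes a
+`useOltUplinkForServerPktInOut` option, which when `true` sends DHCP packets
+back through the OLT uplink, while a fabric POD would typically set it to
+`false` and list explicit `dhcpServerConnectPoints` instead:
+
+```json
+{
+  "apps": {
+    "org.opencord.dhcpl2relay": {
+      "dhcpl2relay": {
+        "useOltUplinkForServerPktInOut": true
+      }
+    }
+  }
+}
+```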
\ No newline at end of file
diff --git a/charts/voltha.md b/charts/voltha.md
index 26641a0..05e748c 100644
--- a/charts/voltha.md
+++ b/charts/voltha.md
@@ -12,30 +12,33 @@
 helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
 ```
 
+Install the etcd-operator helm chart. This chart provides a convenient way of
+creating and managing etcd clusters. When VOLTHA is installed it will attempt
+to use etcd-operator to create its etcd cluster. Once installed, etcd-operator
+can be left running.
+
+```shell
+helm install -n etcd-operator stable/etcd-operator --version 0.8.0
+```
+
+Allow etcd-operator enough time to create the EtcdCluster
+CustomResourceDefinition. This should only take a couple of seconds after the
+etcd-operator pods are running. Check that the CRD is ready by running:
+
+```shell
+kubectl get crd | grep etcd
+```
+
 Update dependencies within the voltha chart:
 
 ```shell
 helm dep up voltha
 ```
 
-There is an `etcd-operator` **known bug** that prevents deploying
-Voltha correctly the first time. We suggest the following workaround:
-
-First, install Voltha without an `etcd` custom resource definition:
+Install the voltha helm chart. This creates the voltha pods and the
+etcd-cluster pods.
 
 ```shell
-helm install -n voltha --set etcd-operator.customResources.createEtcdClusterCRD=false voltha
+helm install -n voltha voltha
 ```
 
-Then upgrade Voltha, which defaults to using the `etcd` custom
-resource definition:
-
-```shell
-helm upgrade --set etcd-operator.customResources.createEtcdClusterCRD=true voltha ./voltha
-```
-
-After this first installation, you can use the standard
-install/uninstall procedure described below.
+Allow enough time for the 3 etcd-cluster pods to start before using the voltha pods.
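+You can check on their progress with something like the following (the
+`etcd-cluster` pod-name prefix is an assumption; verify it against the actual
+output of `kubectl get pods`):
+
+```shell
+kubectl get pods | grep etcd-cluster
+```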
 
 ## Standard Uninstall
 
@@ -89,11 +92,32 @@
 
 ```yaml
 # voltha-values.yaml
-envoyForEtcdImage: 'voltha/voltha-envoy:dev'
-netconfImage: 'voltha/voltha-netconf:dev'
-ofagentImage: 'voltha/voltha-ofagent:dev'
-vcliImage: 'voltha/voltha-cli:dev'
-vcoreImage: 'voltha/voltha-voltha:dev'
+images:
+  vcore:
+    repository: '192.168.99.100:30500/voltha-voltha'
+    tag: 'dev'
+    pullPolicy: 'Always'
+
+  vcli:
+    repository: '192.168.99.100:30500/voltha-cli'
+    tag: 'dev'
+    pullPolicy: 'Always'
+
+  ofagent:
+    repository: '192.168.99.100:30500/voltha-ofagent'
+    tag: 'dev'
+    pullPolicy: 'Always'
+
+  netconf:
+    repository: '192.168.99.100:30500/voltha-netconf'
+    tag: 'dev'
+    pullPolicy: 'Always'
+
+  envoy_for_etcd:
+    repository: '192.168.99.100:30500/voltha-envoy'
+    tag: 'dev'
+    pullPolicy: 'Always'
+
 ```
 
 and you can install VOLTHA using:
diff --git a/prereqs/k8s-multi-node.md b/prereqs/k8s-multi-node.md
index 841cb67..95791eb 100644
--- a/prereqs/k8s-multi-node.md
+++ b/prereqs/k8s-multi-node.md
@@ -89,7 +89,7 @@
 on the remote machines.
 
 The configuration file to access the POD will be saved in the
-sub-directory *configs/onf.conf*.
+sub-directory *inventories/onf/artifacts/admin.conf*.
 
 If you want to deploy another POD without affecting your existing
 deployment run the following: