Reorganizing docs

- Adding quickstart examples for the different workflows
- Adding an environment overview
- Adding workflow definition
- Adding operation guide

Change-Id: I474dd1a2ea6e916512041f80f2ca3039718ab219
diff --git a/.gitignore b/.gitignore
index 8ed6db6..c0fb2d1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,5 +1,6 @@
 # build related
 venv_docs
+doc_venv
 _build
 repos
 
@@ -11,6 +12,7 @@
 voltha-go
 voltha-openolt-adapter
 voltha-openonu-adapter
+voltha-openonu-adapter-go
 kind-voltha
 voltha-protos
 voltha-system-tests
@@ -19,3 +21,6 @@
 
 # IDEs
 .idea
+
+# OS
+.DS_Store
diff --git a/Makefile b/Makefile
index e674052..25ca56e 100644
--- a/Makefile
+++ b/Makefile
@@ -11,7 +11,7 @@
 
 # Other repos with documentation to include.
 # edit the `git_refs` file with the commit/tag/branch that you want to use
-OTHER_REPO_DOCS ?= bbsim cord-tester ofagent-go openolt voltctl voltha-openolt-adapter voltha-openonu-adapter voltha-protos voltha-system-tests kind-voltha
+OTHER_REPO_DOCS ?= bbsim cord-tester ofagent-go openolt voltctl voltha-openolt-adapter voltha-openonu-adapter voltha-openonu-adapter-go voltha-protos voltha-system-tests kind-voltha
 
 # Static docs, built by other means (usually robot framework)
 STATIC_DOCS    := _static/voltha-system-tests _static/cord-tester
diff --git a/_static/voltha_cluster_overview.png b/_static/voltha_cluster_overview.png
new file mode 100644
index 0000000..9ba8a0b
--- /dev/null
+++ b/_static/voltha_cluster_overview.png
Binary files differ
diff --git a/_static/voltha_cluster_virtual.png b/_static/voltha_cluster_virtual.png
new file mode 100644
index 0000000..b31707f
--- /dev/null
+++ b/_static/voltha_cluster_virtual.png
Binary files differ
diff --git a/conf.py b/conf.py
index 2e6b04d..4be53dd 100644
--- a/conf.py
+++ b/conf.py
@@ -62,6 +62,7 @@
     'sphinxcontrib.seqdiag',
     'sphinxcontrib.spelling',
     "sphinx_multiversion",
+    "sphinx.ext.intersphinx",
 #    'sphinxcontrib.golangdomain',
 #    'autoapi.extension',
 ]
@@ -117,6 +118,7 @@
 # This pattern also affects html_static_path and html_extra_path.
 exclude_patterns = [
         '*/LICENSE.md',
+        '*/RELEASE_NOTES.md',
         '*/vendor',
         '.DS_Store',
         'Thumbs.db',
@@ -135,6 +137,7 @@
         'bbsim/README.md',
         'CODE_OF_CONDUCT.md',
         '*/CODE_OF_CONDUCT.md',
+        'doc_venv/*'
 ]
 
 # The name of the Pygments (syntax highlighting) style to use.
diff --git a/git_refs b/git_refs
index 1fa3f0a..f81e56e 100644
--- a/git_refs
+++ b/git_refs
@@ -11,15 +11,16 @@
 
 _REPO NAME_             _DIR_    _REF_
 
-bbsim                   /        master
-cord-tester             /        master
-ofagent-go              /        master
-openolt                 /        master
-pyvoltha                /        master
-voltctl                 /        master
-voltha-go               /        master
-voltha-openolt-adapter  /        master
-voltha-openonu-adapter  /        master
-voltha-protos           /        master
-voltha-system-tests     /        master
-kind-voltha             /        master
+bbsim                      /        master
+cord-tester                /        master
+ofagent-go                 /        master
+openolt                    /        master
+pyvoltha                   /        master
+voltctl                    /        master
+voltha-go                  /        master
+voltha-openolt-adapter     /        master
+voltha-openonu-adapter     /        master
+voltha-openonu-adapter-go  /        master
+voltha-protos              /        master
+voltha-system-tests        /        master
+kind-voltha                /        master
diff --git a/index.rst b/index.rst
index c1c9d01..4817d9c 100644
--- a/index.rst
+++ b/index.rst
@@ -33,6 +33,12 @@
 
    VOLTHA Component Diagram
 
+Here are some quick links to get you started:
+
+- :doc:`./overview/deployment_environment`
+- :doc:`./overview/workflows`
+- :doc:`./overview/quickstart`
+- :doc:`./overview/troubleshooting`
 
 Community
 ---------
@@ -48,9 +54,12 @@
    :hidden:
    :glob:
 
-   overview/*
+   overview/deployment_environment.rst
+   overview/workflows.rst
+   overview/quickstart.rst
+   overview/operate.rst
+   overview/troubleshooting.rst
    readme
-   VOLTHA Deployment Tool (kind-voltha) <kind-voltha/README.md>
 
 .. toctree::
    :maxdepth: 1
@@ -60,9 +69,12 @@
    BBSIM <bbsim/docs/source/index.rst>
    OpenFlow Agent <ofagent-go/README.md>
    OpenOlt Adapter <voltha-openolt-adapter/README.md>
+   OpenOnu Adapter <voltha-openonu-adapter/README.md>
+   OpenOnu Adapter Go <voltha-openonu-adapter-go/README.md>
    Openolt Agent <openolt/README.md>
    VOLTHA CLI <voltctl/README.md>
    VOLTHA Protos <voltha-protos/README.md>
+   Kind-voltha <kind-voltha/README.md>
 
 .. toctree::
    :maxdepth: 1
diff --git a/overview/deployment_environment.rst b/overview/deployment_environment.rst
new file mode 100644
index 0000000..44f877f
--- /dev/null
+++ b/overview/deployment_environment.rst
@@ -0,0 +1,68 @@
+VOLTHA Deployment Environment
+=============================
+
+All the components in the VOLTHA project are containerized and the default
+deployment environment is ``kubernetes``.
+
+Generally VOLTHA is installed in one of two setups:
+
+- A physical ``kubernetes`` cluster, generally used for production deployments.
+- A virtual ``kind`` cluster, generally used for development.
+
+Regardless of the chosen environment, the deployment process is the same
+(more on this later) and the installation can be managed in the same way.
+
+Managing a VOLTHA deployment
+----------------------------
+
+VOLTHA components on top of ``kubernetes`` are managed via ``helm`` charts.
+For more information about ``helm`` please refer to the `Official Documentation
+<https://helm.sh>`_.
+For the sake of this guide all you need to know is that a
+``helm`` chart describes all the information required to deploy a component on top of
+``kubernetes``, such as containers, exposed ports and configuration parameters.
+
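+As an illustrative sketch (chart and release names may differ in your setup, and the
+flags follow the ``helm`` 2 syntax used elsewhere in these docs), installing a single
+component boils down to adding the ONF chart repository and installing the chart:
+
+.. code:: bash
+
+    # add the ONF chart repository (hosts the VOLTHA charts) and refresh the index
+    helm repo add onf https://charts.opencord.org
+    helm repo update
+    # install the VOLTHA core chart into the voltha namespace
+    helm install --namespace voltha --name voltha onf/voltha
+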
+A VOLTHA deployment is composed, at a minimum, of:
+
+* Infrastructure
+
+   * A ``kafka`` cluster (can also be a single node)
+   * An ``etcd`` cluster (can also be a single node)
+   * ``ONOS`` (single or multi instance)
+   * [Optional] ``radius`` (for EAPOL based authentication)
+* ``VOLTHA``
+
+   * ``voltha-core`` and ``ofAgent`` (contained in the same ``helm`` chart)
+* Adapters
+
+   * [one or more] adapter pair(s) (OLT adapter + ONU Adapter)
+
+.. figure:: ../_static/voltha_cluster_overview.png
+   :alt: VOLTHA Component Diagram
+   :width: 100%
+
+   VOLTHA Kubernetes deployment
+
+Note that the ``Infrastructure`` components can be deployed outside of the
+``kubernetes`` cluster.
+
+You can read more about VOLTHA deployments in:
+
+- :doc:`lab_setup`
+- :doc:`pod_physical`
+- :doc:`dev_virtual`
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+   :glob:
+
+   ./lab_setup.rst
+   ./pod_physical.rst
+   ./dev_virtual.rst
+
+Tooling
+-------
+
+To simplify the installation of VOLTHA we provide a tool called ``kind-voltha``.
+You can read more on :doc:`kind-voltha <../kind-voltha/README>` in its own documentation.
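+
+As a minimal sketch, fetching the ``kind-voltha`` installer script looks like this
+(see the :ref:`quickstart` for the full set of options):
+
+.. code:: bash
+
+    # download the kind-voltha installer script and make it executable
+    curl -sSL https://raw.githubusercontent.com/opencord/kind-voltha/master/voltha --output ./voltha
+    chmod +x ./voltha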
diff --git a/overview/dev_virtual.rst b/overview/dev_virtual.rst
index 22f17db..586b160 100644
--- a/overview/dev_virtual.rst
+++ b/overview/dev_virtual.rst
@@ -3,11 +3,31 @@
 Developing code with a virtual VOLTHA POD
 =========================================
 
-A guide to install a virtual POD. This is generally used to gain familiarity with the
-environment or for development purposes.
+A guide to installing a virtual POD. A virtual POD is generally used to gain familiarity with the
+environment or for development and testing purposes.
 
-Most of the `helm` and `voltctl` commands found in the :ref:`pod_physical` also
-apply in the virtual environment.
+How is it different from a Physical deployment?
+-----------------------------------------------
+
+The main difference is in the ``kubernetes`` cluster itself.
+In a Physical deployment we assume that the ``kubernetes`` cluster is installed
+on 3 (or more) physical nodes.
+When installing a ``virtual`` cluster we refer to a ``kind`` (``kubernetes-in-docker``)
+cluster.
+
+Another common difference is that a Physical deployment is generally associated
+with one or more physical OLTs while a Virtual deployment normally emulates the
+PON network using :doc:`BBSim <../bbsim/docs/source/index>`.
+
+.. figure:: ../_static/voltha_cluster_virtual.png
+   :alt: VOLTHA Component Diagram
+   :width: 100%
+
+   VOLTHA Kubernetes kind deployment
+
+Note that it is still possible to connect a physical OLT to a virtual cluster, as
+long as the OLT is reachable from the ``kind`` host machine. If you need to control
+your OLT "in-band" then connecting it to a virtual cluster is not advised.
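+
+For example (assuming the OLT management interface is reachable from the host), the OLT
+can be added to a virtual cluster exactly as on a physical POD:
+
+.. code:: bash
+
+    voltctl device create -t openolt -H <olt-management-ip>:9191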
 
 Quickstart
 ----------
@@ -20,57 +40,9 @@
 
 .. code:: bash
 
-    TYPE=minimal WITH_RADIUS=y CONFIG_SADIS=y ONLY_ONE=y WITH_BBSIM=y ./voltha up
+    TYPE=minimal WITH_RADIUS=y CONFIG_SADIS=y WITH_BBSIM=y ./voltha up
 
-For more information you can visit the `kind-voltha page <kind-voltha/README.md>`_.
-
-Install BBSIM (Broad Band OLT/ONU Simulator)
---------------------------------------------
-
-BBSIM provides a simulation of a BB device. It can be useful for
-testing.
-
-Create BBSIM Device
-^^^^^^^^^^^^^^^^^^^
-
-After having deployed BBSIM either through `kind-voltha` or manually `bbsim <bbsim/docs/source/index.rst>`_ you can create a similated OLT.
-
-.. code:: bash
-
-   voltctl device create -t openolt -H $(kubectl get -n voltha service/bbsim -o go-template='{{.spec.clusterIP}}'):50060
-
-Enable BBSIM Device
-^^^^^^^^^^^^^^^^^^^
-
-.. code:: bash
-
-   voltctl device enable $(voltctl device list --filter Type~openolt -q)
-
-Observing the newly created device in ONOS
-------------------------------------------
-
-At this point you should be able to see a new device in ONOS.
-
-You can SSH into ONOS via
-
-.. code:: bash
-
-    ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 8201 karaf@localhost
-
-with password ``karaf``.
-
-Now when issuing `ports` command you should see something along the lines of:
-
-.. code:: bash
-
-   karaf@root > ports
-   id=of:00000a0a0a0a0a00, available=true, local-status=connected 29s ago, role=MASTER, type=SWITCH, mfr=VOLTHA Project, hw=open_pon, sw=open_pon, serial=BBSIM_OLT_0, chassis=a0a0a0a0a00, driver=voltha, channelId=10.244.2.7:48630, managementAddress=10.244.2.7, protocol=OF_13
-   port=16, state=enabled, type=fiber, speed=0 , adminState=enabled, portMac=08:00:00:00:00:10, portName=BBSM00000001-1
-   port=17, state=disabled, type=fiber, speed=0 , adminState=enabled, portMac=08:00:00:00:00:11, portName=BBSM00000001-2
-   port=18, state=disabled, type=fiber, speed=0 , adminState=enabled, portMac=08:00:00:00:00:12, portName=BBSM00000001-3
-   port=19, state=disabled, type=fiber, speed=0 , adminState=enabled, portMac=08:00:00:00:00:13, portName=BBSM00000001-4
-   port=1048576, state=enabled, type=fiber, speed=0 , adminState=enabled, portMac=0a:0a:0a:0a:0a:00, portName=nni-1048576
-
+For more information you can visit the :doc:`kind-voltha page <../kind-voltha/README>`.
 
 Developing changes on a virtual pod
 -----------------------------------
@@ -121,42 +93,3 @@
 .. code:: bash
 
     $ DEPLOY_K8S=no ./voltha down && DEPLOY_K8S=no EXTRA_HELM_FLAGS="-f dev-values.yaml" ./voltha up
-
-Create Kubernetes Cluster
--------------------------
-
-Kind provides a command line control tool to easily create Kubernetes
-clusters using just a basic Docker environment. The following commands
-will create the desired deployment of Kubernetes and then configure your
-local copy of ``kubectl`` to connect to this cluster.
-
-.. code:: bash
-
-   kind create cluster --name=voltha-$TYPE --config $TYPE-cluster.cfg
-   export KUBECONFIG="$(kind get kubeconfig-path --name="voltha-$TYPE")"
-   kubectl cluster-info
-
-Initialize Helm
----------------
-
-Helm provide a capability to install and manage Kubernetes applications.
-VOLTHA’s default deployment mechanism utilized Helm. Before Helm can be
-used to deploy VOLTHA it must be initialized and the repositories that
-container the artifacts required to deploy VOLTHA must be added to Helm.
-
-.. code:: bash
-
-   # Initialize Helm and add the required chart repositories
-   helm init
-   helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
-   helm repo add stable https://kubernetes-charts.storage.googleapis.com
-   helm repo add onf https://charts.opencord.org
-   helm repo update
-
-   # Create and k8s service account so that Helm can create pods
-   kubectl create serviceaccount --namespace kube-system tiller
-   kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
-   kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
-
-From this point follow the :ref:`physical VOLTHA POD installation instructions
-<installation_steps>`. Come back here once done.
diff --git a/overview/lab_setup.rst b/overview/lab_setup.rst
index 2659b0d..d0fb41b 100644
--- a/overview/lab_setup.rst
+++ b/overview/lab_setup.rst
@@ -19,7 +19,8 @@
    VOLTHA Lab Setup
 
 *The image above represents the data plane connections in a LAB setup.
-It does not include the kubernetes cluster for simplicity.*
+It does not include the ``kubernetes`` cluster for simplicity, but the ``dev server``
+listed above can be one of your ``kubernetes`` nodes.*
 
 What you’ll need to emulate E2E traffic is:
 
@@ -28,6 +29,8 @@
   - 1 1G Ethernet port
   - 1 10G Ethernet port (this can be a second 1G interface as long as you have a media converter)
 
+.. _setting-up-a-client:
+
 Setting up a client
 -------------------
 
diff --git a/overview/operate.rst b/overview/operate.rst
new file mode 100644
index 0000000..f43d19e
--- /dev/null
+++ b/overview/operate.rst
@@ -0,0 +1,154 @@
+.. _operate:
+
+Operate a VOLTHA POD
+====================
+
+In this page we assume that you have a VOLTHA POD (either Physical or Virtual) up and running.
+
+Provision an OLT
+----------------
+
+The first step in operating a VOLTHA POD is to add an OLT to it.
+
+If you deployed a Virtual cluster you can create a BBSim based OLT in VOLTHA with:
+
+.. code:: bash
+
+    voltctl device create -t openolt -H bbsim0.voltha.svc:50060
+
+*If you have deployed multiple BBSim instances using the ``NUM_OF_BBSIM`` variable,
+you can list all the available BBSim OLTs with ``kubectl get svc --all-namespaces | grep bbsim``*
+
+If you are connecting to a Physical OLT:
+
+.. code:: bash
+
+    voltctl device create -t openolt -H <olt-management-ip>:9191
+
+Regardless of the OLT the command to ``enable`` an OLT in VOLTHA is always the same:
+
+.. code:: bash
+
+    voltctl device enable <device-id>
+
+*The ``device id`` is the output of the ``device create`` command, or can be retrieved
+with ``voltctl device list``*
+
+If you have just one OLT created you can use:
+
+.. code:: bash
+
+    voltctl device enable $(voltctl device list --filter Type~openolt -q)
+
+Once the OLT is ``enabled`` in VOLTHA you should be able to see the ONU attached
+to it by listing the devices:
+
+.. code:: bash
+
+    voltctl device list
+
+Authentication
+--------------
+
+If the use-case you installed (e.g. AT&T) expects EAPOL based authentication you want to make
+sure that it is working. Visit :ref:`workflows` for more information.
+
+In a **Physical POD** you need to trigger authentication on your client
+(if it doesn't do so automatically). You can refer to :ref:`setting-up-a-client`.
+
+In a **Virtual POD** installed with the ``WITH_EAPOL="yes"`` flag authentication
+happens automatically.
+
+You can check the authentication state for your subscribers via the ONOS CLI:
+
+.. code:: bash
+
+    ssh -p 8101 karaf@localhost # (pwd: karaf)
+    karaf@root > aaa-users
+    of:00000a0a0a0a0a00/16: AUTHORIZED_STATE, last-changed=5m14s ago, mac=2E:60:70:00:00:01, subid=BBSM00000001-1, username=user
+
+*Note that if ONOS was not installed as part of VOLTHA the ``ssh`` command may differ*
+
+Subscriber provisioning
+-----------------------
+
+*Note that, depending on the workflow, authentication is not a requirement of subscriber provisioning*
+
+The process referred to as ``Subscriber provisioning`` causes traffic flows to be created in ONOS and
+the data plane path to be configured in the device, enabling different services on a specific UNI port.
+
+In order to provision a subscriber you need to identify it. In ONOS a subscriber
+is viewed as an enabled port (UNI) on the logical switch that VOLTHA exposes, for example:
+
+.. code:: bash
+
+    ssh -p 8101 karaf@localhost # (pwd: karaf)
+    karaf@root > ports -e
+    id=of:00000a0a0a0a0a00, available=true, local-status=connected 8m27s ago, role=MASTER, type=SWITCH, mfr=VOLTHA Project, hw=open_pon, sw=open_pon, serial=BBSIM_OLT_0, chassis=a0a0a0a0a00, driver=voltha, channelId=10.244.2.7:53576, managementAddress=10.244.2.7, protocol=OF_13
+      port=16, state=enabled, type=fiber, speed=0 , adminState=enabled, portMac=08:00:00:00:00:10, portName=BBSM00000001-1
+      port=1048576, state=enabled, type=fiber, speed=0 , adminState=enabled, portMac=0a:0a:0a:0a:0a:00, portName=nni-1048576
+
+Once the port number representing a subscriber has been retrieved, you can provision it via:
+
+.. code:: bash
+
+    ssh -p 8101 karaf@localhost # (pwd: karaf)
+    karaf@root > volt-add-subscriber-access of:00000a0a0a0a0a00 16
+
+Where ``of:00000a0a0a0a0a00`` is the OpenFlow ID of the Logical Device representing the OLT
+and ``16`` is the port representing that particular subscriber.
+
+To verify that the subscriber has been provisioned:
+
+.. code:: bash
+
+    ssh -p 8101 karaf@localhost # (pwd: karaf)
+    karaf@root > volt-programmed-subscribers
+    location=of:00000a0a0a0a0a00/16 tagInformation=UniTagInformation{uniTagMatch=0, ponCTag=900, ponSTag=900, usPonCTagPriority=-1, usPonSTagPriority=-1, dsPonCTagPriority=-1, dsPonSTagPriority=-1, technologyProfileId=64, enableMacLearning=false, upstreamBandwidthProfile='Default', downstreamBandwidthProfile='Default', serviceName='', configuredMacAddress='A4:23:05:00:00:00', isDhcpRequired=true, isIgmpRequired=false}
+
+You can also verify that the expected flows have been created and ``ADDED`` to VOLTHA:
+
+.. code:: bash
+
+    ssh -p 8101 karaf@localhost # (pwd: karaf)
+    karaf@root > flows -s
+    deviceId=of:00000a0a0a0a0a00, flowRuleCount=8
+      ADDED, bytes=0, packets=0, table=0, priority=10000, selector=[IN_PORT:16, ETH_TYPE:eapol, VLAN_VID:900], treatment=[immediate=[OUTPUT:CONTROLLER], meter=METER:1, metadata=METADATA:384004000000000/0]
+      ADDED, bytes=0, packets=0, table=0, priority=10000, selector=[IN_PORT:16, ETH_TYPE:ipv4, VLAN_VID:900, IP_PROTO:17, UDP_SRC:68, UDP_DST:67], treatment=[immediate=[OUTPUT:CONTROLLER], meter=METER:1, metadata=METADATA:4000000000/0]
+      ADDED, bytes=0, packets=0, table=0, priority=10000, selector=[IN_PORT:1048576, ETH_TYPE:lldp], treatment=[immediate=[OUTPUT:CONTROLLER]]
+      ADDED, bytes=0, packets=0, table=0, priority=10000, selector=[IN_PORT:1048576, ETH_TYPE:ipv4, IP_PROTO:17, UDP_SRC:67, UDP_DST:68], treatment=[immediate=[OUTPUT:CONTROLLER]]
+      ADDED, bytes=0, packets=0, table=0, priority=1000, selector=[IN_PORT:16, VLAN_VID:0], treatment=[immediate=[VLAN_ID:900], transition=TABLE:1, meter=METER:1, metadata=METADATA:384004000100000/0]
+      ADDED, bytes=0, packets=0, table=0, priority=1000, selector=[IN_PORT:1048576, METADATA:384, VLAN_VID:900], treatment=[immediate=[VLAN_POP], transition=TABLE:1, meter=METER:1, metadata=METADATA:384004000000010/0]
+      ADDED, bytes=0, packets=0, table=1, priority=1000, selector=[IN_PORT:1048576, METADATA:10, VLAN_VID:900], treatment=[immediate=[VLAN_ID:0, OUTPUT:16], meter=METER:1, metadata=METADATA:4000000000/0]
+      ADDED, bytes=0, packets=0, table=1, priority=1000, selector=[IN_PORT:16, VLAN_VID:900], treatment=[immediate=[VLAN_PUSH:vlan, VLAN_ID:900, OUTPUT:1048576], meter=METER:1, metadata=METADATA:4000000000/0]
+
+*The flows above may vary in form and number from workflow to workflow; the example is given for the ATT workflow*
+
+DHCP Allocation
+---------------
+
+If the use-case you installed expects DHCP to be handled by ONOS it's time to check
+that an IP has correctly been allocated to the subscriber.
+
+In a **Physical POD** you need to trigger a DHCP request on your client
+(if it doesn't do so automatically). You can refer to :ref:`setting-up-a-client`.
+
+In a **Virtual POD** installed with the ``WITH_DHCP="yes"`` flag a DHCP request
+happens automatically.
+
+You can check the DHCP state for your subscribers via the ONOS CLI:
+
+.. code:: bash
+
+    ssh -p 8101 karaf@localhost # (pwd: karaf)
+    karaf@root > dhcpl2relay-allocations
+    01SubscriberId=BBSM00000001-1,ConnectPoint=of:00000a0a0a0a0a00/16,State=DHCPACK,MAC=2E:60:70:00:00:01,CircuitId=BBSM00000001-1,IP Allocated=192.168.240.6,Allocation Timestamp=2020-07-27T22:39:24.140361Z
+
+Data plane validation
+---------------------
+
+If you deployed a **Virtual POD** with a BBSim OLT you are done. BBSim does not support
+data plane emulation at the moment.
+
+If you deployed a **Physical POD** then you should now be able to reach the internet from
+your client attached to the UNI port you provisioned during the ``subscriber provisioning`` step.
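+
+A quick sanity check from the client can be as simple as pinging a public address
+(the target below is just an example):
+
+.. code:: bash
+
+    ping -c 4 8.8.8.8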
diff --git a/overview/pod_physical.rst b/overview/pod_physical.rst
index 8a35abe..91be3a6 100644
--- a/overview/pod_physical.rst
+++ b/overview/pod_physical.rst
@@ -3,10 +3,7 @@
 Deploy a physical VOLTHA POD
 ============================
 
-Quickstart
-----------
-
-The quickstart assumes you POD is already correctly cabled, if not you can
+This document assumes your POD is already correctly cabled; if not you can
 refer to :ref:`lab_setup`
 
 Requires:
@@ -21,262 +18,39 @@
 
 .. code:: bash
 
-    DEPLOY_K8S=no WITH_RADIUS=y CONFIG_SADIS=y ./voltha up
+    DEPLOY_K8S=no WITH_RADIUS=y CONFIG_SADIS=y SADIS_CFG="my-sadis-cfg.json" ./voltha up
+
+*``my-sadis-cfg.json`` is a reference to your own ``sadis`` configuration.
+This is needed to specify the appropriate values for your devices and subscribers*
 
 If you already have a ``radius`` server that you want to use, change the flag to ``WITH_RADIUS=n``
 and `configure ONOS accordingly <https://github.com/opencord/aaa>`_.
 
-For more information please check `kind-voltha page <kind-voltha/README.md>`_.
+For more information please check :doc:`kind-voltha page <../kind-voltha/README>`.
 
-TLDR;
------
+After the deployment please refer to :ref:`operate`.
 
-Below are the complete steps to install a physical cluster. It assumes
-``kubectl`` and ``helm`` commands are already available.
+HA Cluster
+----------
 
-Configure Helm
---------------
+To deploy ONOS in a multi instance environment for redundancy, high availability and scale, you can add
+``NUM_OF_ONOS=3 NUM_OF_ATOMIX=3`` to any of the workflow commands. You can pick the number of instances of ONOS
+and ATOMIX independently; a good suggestion is 3 or 5.
 
-Helm provide a capability to install and manage Kubernetes applications.
-VOLTHA’s default deployment mechanism utilized Helm. Before Helm can be
-used to deploy VOLTHA it must be initialized and the repositories that
-container the artifacts required to deploy VOLTHA must be added to Helm.
+If you are planning to support a large number of ONUs we suggest horizontally scaling
+the ``openonu-adapter``; you can do so by setting the ``NUM_OF_OPENONU`` variable.
+Generally speaking a single ``openonu-adapter`` instance can support up to 200 ONU devices.
+
+As an example for the ATT workflow:
 
 .. code:: bash
 
-    helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
-    helm repo add stable https://kubernetes-charts.storage.googleapis.com
-    helm repo add onf https://charts.opencord.org
-    helm repo update
+    WITH_RADIUS=y CONFIG_SADIS=y SADIS_CFG="my-sadis-cfg.json" NUM_OF_ONOS=3 NUM_OF_ATOMIX=3 NUM_OF_OPENONU=8 ./voltha up
 
-.. _installation_steps:
+Configuration for in-band OLT control
+-------------------------------------
 
-Install EtcdOperator
---------------------
-
-ETCD Operator is a utility that allows applications to create and manage
-ETCD key/value clusters as Kubernetes resources. VOLTHA utilizes this
-utility to create its key/value store. *NOTE: it is not required that
-VOLTHA create its own datastore as VOLTHA can utilize an existing
-datastore, but for this example VOLTHA will creates its own datastore*
-
-.. code:: bash
-
-   helm install -f $TYPE-values.yaml --namespace voltha --name etcd-operator stable/etcd-operator
-
-Wait for operator pods
-~~~~~~~~~~~~~~~~~~~~~~
-
-Before continuing, the Kubernetes pods associated with ETCD Operator must
-be in the ``Running`` state.
-
-.. code:: bash
-
-   kubectl get -n voltha pod
-
-Once all the pods are in the ``Running`` state the output, for a
-**full** deployment should be similar to the output below. For a
-**minimal** deployment there will only be a single pod, the
-``etcd-operator-etcd-operator-etcd-operator`` pod.
-
-.. code:: bash
-
-   NAME                                                              READY     STATUS    RESTARTS   AGE
-   etcd-operator-etcd-operator-etcd-backup-operator-7897665cfq75w2   1/1       Running   0          2m
-   etcd-operator-etcd-operator-etcd-operator-7d579799f7-bjdnj        1/1       Running   0          2m
-   etcd-operator-etcd-operator-etcd-restore-operator-7d77d878wwcn7   1/1       Running   0          2m
-
-It is not just VOLTHA
----------------------
-
-To demonstrate the capability of VOLTHA other *partner* applications are
-required, such as ONOS. The following sections describe how to install
-and configure these *partner* applications.
-
-*NOTE: It is important to start ONOS before VOLTHA as if they are started in
-the reverse order the ``ofagent`` sometimes does not connect to the SDN
-controller*\ `VOL-1764 <https://jira.opencord.org/browse/VOL-1764>`__.
-
-ONOS (OpenFlow Controller)
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-VOLTHA exposes an OLT and its connected ONUs as an OpenFlow switch. To control
-that virtual OpenFlow switch an OpenFlow controller is required.  For most
-VOLTHA deployments that controller is ONOS, with a set of ONOS applications
-installed. To install ONOS use the following Helm command:
-
-.. code:: bash
-
-   helm install -f $TYPE-values.yaml --name onos onf/onos
-
-Exposing ONOS Services
-^^^^^^^^^^^^^^^^^^^^^^
-
-.. code:: bash
-
-   screen -dmS onos-ui kubectl port-forward service/onos-ui 8181:8181
-   screen -dmS onos-ssh kubectl port-forward service/onos-ssh 8101:8101
-
-Configuring ONOS Applications
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Configuration files have been provided to configure aspects of the ONOS
-deployment. The following curl commands push those configurations to the
-ONOS instance. It is possible (likely) that ONOS won’t be immediately
-ready to accept REST requests, so the first ``curl`` command may need
-retried until ONOS is ready to accept REST connections.
-
-.. code:: bash
-
-   curl --fail -sSL --user karaf:karaf \
-       -X POST -H Content-Type:application/json \
-       http://127.0.0.1:8181/onos/v1/network/configuration/apps/org.opencord.kafka \
-       --data @onos-files/onos-kafka.json
-   curl --fail -sSL --user karaf:karaf \
-       -X POST -H Content-Type:application/json \
-       http://127.0.0.1:8181/onos/v1/network/configuration/apps/org.opencord.dhcpl2relay \
-       --data @onos-files/onos-dhcpl2relay.json
-   curl --fail -sSL --user karaf:karaf \
-       -X POST -H Content-Type:application/json \
-       http://127.0.0.1:8181/onos/v1/configuration/org.opencord.olt.impl.Olt \
-       --data @onos-files/olt-onos-olt-settings.json
-   curl --fail -sSL --user karaf:karaf \
-       -X POST -H Content-Type:application/json \
-       http://127.0.0.1:8181/onos/v1/configuration/org.onosproject.net.flow.impl.FlowRuleManager \
-       --data @onos-files/olt-onos-enableExtraneousRules.json
-
-SADIS Configuration
-^^^^^^^^^^^^^^^^^^^
-
-The ONOS applications leverage the *Subscriber and Device Information
-Store (SADIS)* when processing EAPOL and DHCP packets from VOLTHA
-controlled devices. In order for VOLTHA to function properly, SADIS
-entries must be configured into ONOS.
-
-The repository contains two example SADIS configuration that can be used
-with ONOS depending if you using VOLTHA with *tech profile* support
-(``onos-files/onos-sadis-no-tp.json``) or without *tech profile* support
-(``onos-files/onos-sadis-tp.json``). Either of these configurations can
-be pushed to ONOS using the following command:
-
-.. code:: bash
-
-   curl --fail -sSL --user karaf:karaf \
-       -X POST -H Content-Type:application/json \
-       http://127.0.0.1:8181/onos/v1/network/configuration/apps/org.opencord.sadis \
-       --data @<selected SADIS configuration file>
-
-Install VOLTHA Core
--------------------
-
-VOLTHA has two main *parts*: core and adapters. The **core** provides
-the main logic for the VOLTHA application and the **adapters** contain
-logic to adapter vendor neutral operations to vendor specific devices.
-
-Before any adapters can be deployed the VOLTHA core must be installed
-and in the ``Running`` state. The following Helm command installs the
-core components of VOLTHA based on the desired deployment type.
-
-.. code:: bash
-
-   helm install -f $TYPE-values.yaml --set use_go=true --set defaults.log_level=WARN \
-       --namespace voltha --name voltha onf/voltha
-
-During the install of the core VOLTHA components some containers may
-"crash" or restart. This is normal as there are dependencies, such as
-the read/write cores cannot start until the ETCD cluster is established
-and so they crash until the ETCD cluster is operational. Eventually all
-the containers should be in a ``Running`` state as queried by the
-command:
-
-.. code:: bash
-
-   kubectl get -n voltha pod
-
-The output should be similar to the following with a different number of
-``etcd-operator`` and ``voltha-etcd-cluster`` pods depending on the
-deployment type.
-
-.. code:: bash
-
-   NAME                                                         READY     STATUS    RESTARTS   AGE
-   etcd-operator-etcd-operator-etcd-operator-7d579799f7-xq6f2   1/1       Running   0          19m
-   ofagent-8ccb7f5fb-hwgfn                                      1/1       Running   0          4m
-   ro-core-564f5cdcc7-2pch8                                     1/1       Running   0          4m
-   rw-core1-7fbb878cdd-6npvr                                    1/1       Running   2          4m
-   rw-core2-7fbb878cdd-k7w9j                                    1/1       Running   3          4m
-   voltha-api-server-5f7c8b5b77-k6mrg                           2/2       Running   0          4m
-   voltha-cli-server-5df4c95b7f-kcpdl                           1/1       Running   0          4m
-   voltha-etcd-cluster-4rsqcvpwr4                               1/1       Running   0          4m
-   voltha-kafka-0                                               1/1       Running   0          4m
-   voltha-zookeeper-0                                           1/1       Running   0          4m
-
-Install Adapters
-----------------
-
-The following commands install both the simulated OLT and ONU adapters
-as well as the adapters for an OpenOLT and OpenONU device.
-
-.. code:: bash
-
-   helm install -f $TYPE-values.yaml -set use_go=true --set defaults.log_level=WARN \
-       --namespace voltha --name sim onf/voltha-adapter-simulated
-   helm install -f $TYPE-values.yaml -set use_go=true --set defaults.log_level=WARN \
-       --namespace voltha --name open-olt onf/voltha-adapter-openolt
-   helm install -f $TYPE-values.yaml -set use_go=true --set defaults.log_level=WARN \
-       --namespace voltha --name open-onu onf/voltha-adapter-openonu
-
-Exposing VOLTHA Services
-------------------------
-
-At this point VOLTHA is deployed, and from within the Kubernetes cluster
-the VOLTHA services can be reached. However, from outside the Kubernetes
-cluster the services cannot be reached.
-
-.. code:: bash
-
-   screen -dmS voltha-api kubectl port-forward -n voltha service/voltha-api 55555:55555
-   screen -dmS voltha-ssh kubectl port-forward -n voltha service/voltha-cli 5022:5022
-
-Install FreeRADIUS Service
---------------------------
-
-.. code:: bash
-
-   helm install -f minimal-values.yaml --namespace voltha --name radius onf/freeradius
-
-Configure ``voltctl`` to Connect to VOLTHA
-------------------------------------------
-
-In order for ``voltctl`` to connect to the VOLTHA instance deployed in
-the Kubernetes cluster it must know which IP address and port to use.
-This configuration can be persisted to a local config file using the
-following commands.
-
-.. code:: bash
-
-   mkdir -p $HOME/.volt
-   voltctl -a v2 -s localhost:55555 config > $HOME/.volt/config
-
-To test the connectivity you can query the version of the VOLTHA client
-and server::
-
-   voltctl version
-
-The output should be similar to the following::
-
-   Client:
-    Version        unknown-version
-    Go version:    unknown-goversion
-    Vcs reference: unknown-vcsref
-    Vcs dirty:     unknown-vcsdirty
-    Built:         unknown-buildtime
-    OS/Arch:       unknown-os/unknown-arch
-
-   Cluster:
-    Version        2.1.0-dev
-    Go version:    1.12.6
-    Vcs feference: 28f120f1f4751284cadccf73f2f559ce838dd0a5
-    Vcs dirty:     false
-    Built:         2019-06-26T16:58:22Z
-    OS/Arch:       linux/amd64
+If the OLT is being used in in-band connectivity mode, the following
+`document <https://docs.google.com/document/d/1OKDJCPEFVTEythAFUS_I7Piew4jHmhk25llK6UF04Wg>`_
+details the configuration aspects in ONOS and the aggregation switch to
+trunk/switch in-band packets from the OLT to BNG or Voltha.
diff --git a/overview/quickstart.rst b/overview/quickstart.rst
new file mode 100644
index 0000000..18e5fbc
--- /dev/null
+++ b/overview/quickstart.rst
@@ -0,0 +1,77 @@
+.. _quickstart:
+
+Quickstart
+==========
+
+This page contains a set of one-liners useful to set up different VOLTHA use-cases on
+a Virtual POD emulating the PON through :doc:`BBSim <../bbsim/docs/source/index>`.
+
+For more information on how to set up a :doc:`physical POD <./pod_physical>` or
+use a :doc:`Virtual POD <./dev_virtual>` for development,
+refer to the respective guides.
+
+Common setup
+------------
+
+In order to install VOLTHA you need to have ``golang`` and ``docker`` installed.
+
+.. code:: bash
+
+    export KINDVOLTHADIR=~/kind-voltha
+    mkdir $KINDVOLTHADIR
+    cd $KINDVOLTHADIR
+    curl -sSL https://raw.githubusercontent.com/opencord/kind-voltha/master/voltha --output ./voltha
+    chmod +x ./voltha
+
+Now select the use-case you want to deploy:
+
+ATT Workflow
+------------
+
+The ATT Workflow expects EAPOL based authentication and DHCP to be handled within
+the VOLTHA POD.
+
+.. code:: bash
+
+    cd $KINDVOLTHADIR
+    WITH_BBSIM="yes" WITH_EAPOL="yes" WITH_DHCP="yes" WITH_RADIUS="yes" CONFIG_SADIS="bbsim" ./voltha up
+
+DT Workflow
+------------
+
+The DT workflow does not require EAPOL based authentication or DHCP packet handling
+in the VOLTHA POD.
+
+.. code:: bash
+
+    cd $KINDVOLTHADIR
+    WITH_BBSIM="yes" WITH_EAPOL="no" WITH_DHCP="no" CONFIG_SADIS="bbsim" EXTRA_HELM_FLAGS="--set bbsim.sadisFormat=dt" ./voltha up
+
+TT Workflow
+------------
+
+The TT workflow does not require EAPOL based authentication but expects DHCP packets
+for multiple services to be handled within the POD.
+
+*Note that the TT workflow is not fully supported yet*
+
+.. code:: bash
+
+    cd $KINDVOLTHADIR
+    WITH_BBSIM="yes" WITH_EAPOL="no" WITH_DHCP="yes" CONFIG_SADIS="bbsim" EXTRA_HELM_FLAGS="--set bbsim.sadisFormat=tt" ./voltha up
+
+Post deploy actions
+-------------------
+
+Once the deployment has completed, make sure to export the environment
+variables that ``kind-voltha`` outputs (the paths below are examples and will
+differ on your machine):
+
+.. code:: bash
+
+    export KUBECONFIG="/Users/teone/.kube/kind-config-voltha-minimal"
+    export VOLTCONFIG="/Users/teone/.volt/config-minimal"
+    export PATH=/Users/teone/kind-voltha/bin:$PATH
+
+Once you have the POD up and running you can refer to the :doc:`./operate` guide.
+
+For more information please check :doc:`kind-voltha page <../kind-voltha/README>`.
diff --git a/overview/ubuntu_dev_env.rst b/overview/ubuntu_dev_env.rst
deleted file mode 100644
index 36b9901..0000000
--- a/overview/ubuntu_dev_env.rst
+++ /dev/null
@@ -1,510 +0,0 @@
-Setting up an Ubuntu Development Environment
-============================================
-
-These notes describe the checking out and building from the multiple
-gerrit repositories needed to run a VOLTHA 2.x environment with
-docker-compose. Starting point is a basic Ubuntu 16.04 or 18.04
-installation with internet access.
-
-These notes are intended for iterative development only. The testing
-environments and production environments will run a Kubernetes and Helm
-based deployment.
-
-Install prerequisites
----------------------
-
-Patch and updated
-
-.. code:: sh
-
-   sudo apt update
-   sudo apt dist-upgrade
-
-Add ``docker-ce`` apt repo and install docker and build tools
-
-.. code:: sh
-
-   curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
-   sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
-   sudo apt update
-   sudo apt install build-essential docker-ce git
-
-Install current ``docker-compose``. Older versions may cause docker
-build problems:
-https://github.com/docker/docker-credential-helpers/issues/103
-
-.. code:: sh
-
-   sudo curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
-   sudo chmod 755 /usr/local/bin/docker-compose
-
-Install the Golang ppa apt repository and install **golang 1.13**.
-
-.. code:: sh
-
-   sudo add-apt-repository ppa:longsleep/golang-backports
-   sudo apt update
-   sudo apt install golang-1.13
-
-Setup environment
-~~~~~~~~~~~~~~~~~
-
-Setup a local Golang and docker-compose environment, verifying the
-golang-1.13 binaries are in your path. Also add your local ``GOPATH``
-bin folder to ``PATH`` Add export statements to your ``~/.profile`` to
-persist.
-
-.. code:: sh
-
-   mkdir $HOME/source
-   mkdir $HOME/go
-   export GO111MODULE=on
-   export GOPATH=$HOME/go
-   export DOCKER_TAG=latest
-   export PATH=$PATH:/usr/lib/go-1.13/bin:$GOPATH/bin
-   go version
-
-Allow your current non-root user ``$USER`` docker system access
-
-.. code:: sh
-
-   sudo usermod -a -G docker $USER
-
-Logout/Login to assume new group membership needed for running docker as
-non-root user and verify any environment variables set in
-``~/.profile``.
-
-Checkout source and build images
---------------------------------
-
-VOLTHA 2.x Core Containers
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Checkout needed source from gerrit. Build the ``voltha-rw-core`` docker
-image.
-
-.. code:: sh
-
-   cd ~/source/
-   git clone https://gerrit.opencord.org/voltha-go.git
-   cd ~/source/voltha-go
-   make build
-
-For more details regarding building and debugging the 2.x core outside
-of Docker refer to voltha-go BUILD.md.
-
-https://github.com/opencord/voltha-go/blob/master/BUILD.md
-
-VOLTHA 2.x OpenOLT Container
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Checkout needed source from gerrit. Build the ``voltha-openolt-adapter``
-docker image.
-
-.. code:: sh
-
-   cd ~/source/
-   git clone https://gerrit.opencord.org/voltha-openolt-adapter.git
-   cd ~/source/voltha-openolt-adapter/
-   make build
-
-VOLTHA 2.x OpenONU Container
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Checkout needed source from gerrit. Build the ``voltha-openonu-adapter``
-docker image.
-
-.. code:: sh
-
-   cd ~/source/
-   git clone https://gerrit.opencord.org/voltha-openonu-adapter.git
-   cd ~/source/voltha-openonu-adapter/
-   make build
-
-VOLTHA 2.x OFAgent
-~~~~~~~~~~~~~~~~~~
-
-Checkout needed source from gerrit. Build the ``voltha-ofagent`` docker
-image.
-
-.. code:: sh
-
-   cd ~/source/
-   git clone https://gerrit.opencord.org/ofagent-go.git
-   cd ~/source/ofagent-go/
-   make docker-build
-
-ONOS Container with VOLTHA Compatible Apps
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-By default the standard ONOS docker image does not contain nor start any
-apps needed by VOLTHA. If you use the standard image then you need to
-use the ONOS restful API to load needed apps separately.
-
-For development convenience there is an ONOS docker image build that
-adds in the current compatible VOLTHA apps. Checkout and build the ONOS
-image with added ONOS apps (olt, aaa, sadis, dhcpl2relay, and kafka).
-
-.. code:: sh
-
-   cd ~/source/
-   git clone https://gerrit.opencord.org/voltha-onos.git
-   cd ~/source/voltha-onos
-   make build
-
-Install voltctl VOLTHA Command Line Management Tool
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-A working Golang build environment is required as ``voltctl`` is build
-and run directly from the host. Build the ``voltctl`` executable and
-install in ~/go/bin which should already be in your $PATH
-
-.. code:: sh
-
-   cd ~/source/
-   git clone https://gerrit.opencord.org/voltctl.git
-   cd ~/source/voltctl
-   make build
-   make install
-
-Configure the ``voltctl`` environment configuration files
-``~/.volt/config`` and ``~/.volt/command_options`` to point at the local
-``voltha-rw-core`` instance that will be running.
-
-.. code:: sh
-
-   mkdir ~/.volt/
-
-   cat << EOF > ~/.volt/config
-   apiVersion: v2
-   server: localhost:50057
-   tls:
-     useTls: false
-     caCert: ""
-     cert: ""
-     key: ""
-     verify: ""
-   grpc:
-     timeout: 10s
-   EOF
-
-   cat << EOF > ~/.volt/command_options
-   device-list:
-     format: table{{.Id}}\t{{.Type}}\t{{.Root}}\t{{.ParentId}}\t{{.SerialNumber}}\t{{.Address}}\t{{.AdminState}}\t{{.OperStatus}}\t{{.ConnectStatus}}\t{{.Reason}}
-     order: -Root,SerialNumber
-
-   device-ports:
-     order: PortNo
-
-   device-flows:
-     order: Priority,EthType
-
-   logical-device-list:
-     order: RootDeviceId,DataPathId
-
-   logical-device-ports:
-     order: Id
-
-   logical-device-flows:
-     order: Priority,EthType
-
-   adapter-list:
-     order: Id
-
-   component-list:
-     order: Component,Name,Id
-
-   loglevel-get:
-     order: ComponentName,PackageName,Level
-
-   loglevel-list:
-     order: ComponentName,PackageName,Level
-   EOF
-
-
-Install VOLTHA bbsim olt/onu Simulator (Optional)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If you do not have physical OLT/ONU hardware you can build a simulator.
-
-.. code:: sh
-
-   cd ~/source/
-   git clone https://gerrit.opencord.org/bbsim.git
-   cd ~/source/bbsim
-   make docker-build
-
-Test
-----
-
-Startup
-~~~~~~~
-
-Run the combined docker-compose yaml configuration file that starts the
-core, its dependent systems (etcd, zookeeper, and kafka) and the openonu
-and openolt adapters. Export the ``DOCKER_HOST_IP`` environment variable
-to your non-localhost IP address needed for inter-container
-communication. This can be the IP assigned to ``eth0`` or the
-``docker0`` bridge (typically 172.17.0.1)
-
-For convenience you can also export ``DOCKER_TAG`` to signify the docker
-images tag you would like to use. Though for typical development you may
-have to edit ``compose/system-test.yml`` to override the specific docker
-image ``DOCKER_TAG`` needed. The ``DOCKER_REGISTRY`` and
-``DOCKER_REPOSITORY`` variables are not needed unless you wish to
-override the docker image path. See the ``system-test.yml`` file for
-details on the docker image path creation string.
-
-.. code:: sh
-
-   export DOCKER_HOST_IP=172.17.0.1
-   export DOCKER_TAG=latest
-
-   cd ~/source/voltha-go
-   docker-compose -f compose/system-test.yml up -d
-
-
-   WARNING: The DOCKER_REGISTRY variable is not set. Defaulting to a blank string.
-   WARNING: The DOCKER_REPOSITORY variable is not set. Defaulting to a blank string.
-   Creating network "compose_default" with driver "bridge"
-   Pulling zookeeper (wurstmeister/zookeeper:latest)...
-   latest: Pulling from wurstmeister/zookeeper
-   a3ed95caeb02: Pull complete
-   ef38b711a50f: Pull complete
-   e057c74597c7: Pull complete
-   666c214f6385: Pull complete
-   c3d6a96f1ffc: Pull complete
-   3fe26a83e0ca: Pull complete
-   3d3a7dd3a3b1: Pull complete
-   f8cc938abe5f: Pull complete
-   9978b75f7a58: Pull complete
-   4d4dbcc8f8cc: Pull complete
-   8b130a9baa49: Pull complete
-   6b9611650a73: Pull complete
-   5df5aac51927: Pull complete
-   76eea4448d9b: Pull complete
-   8b66990876c6: Pull complete
-   f0dd38204b6f: Pull complete
-   Digest: sha256:7a7fd44a72104bfbd24a77844bad5fabc86485b036f988ea927d1780782a6680
-   Status: Downloaded newer image for wurstmeister/zookeeper:latest
-   Pulling kafka (wurstmeister/kafka:2.11-2.0.1)...
-   2.11-2.0.1: Pulling from wurstmeister/kafka
-   4fe2ade4980c: Pull complete
-   6fc58a8d4ae4: Pull complete
-   819f4a45746c: Pull complete
-   a3133bc2e3e5: Pull complete
-   72f0dc369677: Pull complete
-   1e1130fc942d: Pull complete
-   Digest: sha256:20d08a6849383b124bccbe58bc9c48ec202eefb373d05e0a11e186459b84f2a0
-   Status: Downloaded newer image for wurstmeister/kafka:2.11-2.0.1
-   Pulling etcd (quay.io/coreos/etcd:v3.2.9)...
-   v3.2.9: Pulling from coreos/etcd
-   88286f41530e: Pull complete
-   2fa4a2c3ffb5: Pull complete
-   539b8e6ccce1: Pull complete
-   79e70e608afa: Pull complete
-   f1bf8f503bff: Pull complete
-   c4abfc27d146: Pull complete
-   Digest: sha256:1913dd980d55490fa50640bbef0f4540d124e5c66d6db271b0b4456e9370a272
-   Status: Downloaded newer image for quay.io/coreos/etcd:v3.2.9
-   Creating compose_kafka_1           ... done
-   Creating compose_cli_1             ... done
-   Creating compose_adapter_openolt_1 ... done
-   Creating compose_rw_core_1         ... done
-   Creating compose_adapter_openonu_1 ... done
-   Creating compose_etcd_1            ... done
-   Creating compose_onos_1            ... done
-   Creating compose_ofagent_1         ... done
-   Creating compose_zookeeper_1       ... done
-
-Verify containers have continuous uptime and no restarts
-
-.. code:: sh
-
-   $ docker-compose -f compose/system-test.yml ps
-   WARNING: The DOCKER_REGISTRY variable is not set. Defaulting to a blank string.
-   WARNING: The DOCKER_REPOSITORY variable is not set. Defaulting to a blank string.
-             Name                         Command               State                                             Ports
-   ---------------------------------------------------------------------------------------------------------------------------------------------------------------
-   compose_adapter_openolt_1   /app/openolt --kafka_adapt ...   Up      0.0.0.0:50062->50062/tcp
-   compose_adapter_openonu_1   /voltha/adapters/brcm_open ...   Up
-   compose_etcd_1              etcd --name=etcd0 --advert ...   Up      0.0.0.0:2379->2379/tcp, 0.0.0.0:32929->2380/tcp, 0.0.0.0:32928->4001/tcp
-   compose_kafka_1             start-kafka.sh                   Up      0.0.0.0:9092->9092/tcp
-   compose_ofagent_1           /app/ofagent --controller= ...   Up
-   compose_onos_1              ./bin/onos-service server        Up      6640/tcp, 0.0.0.0:6653->6653/tcp, 0.0.0.0:8101->8101/tcp, 0.0.0.0:8181->8181/tcp, 9876/tcp
-   compose_rw_core_1           /app/rw_core -kv_store_typ ...   Up      0.0.0.0:50057->50057/tcp
-   compose_zookeeper_1         /bin/sh -c /usr/sbin/sshd  ...   Up      0.0.0.0:2181->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
-
-.. code:: sh
-
-   $ docker ps
-   CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                                                                                        NAMES
-   08a0e7a1ee5c        voltha-openolt-adapter:latest   "/app/openolt --kafk…"   31 seconds ago      Up 27 seconds       0.0.0.0:50062->50062/tcp                                                                     compose_adapter_openolt_1
-   1f364cf7912d        wurstmeister/zookeeper:latest   "/bin/sh -c '/usr/sb…"   31 seconds ago      Up 27 seconds       22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp                                           compose_zookeeper_1
-   ab1822baed41        wurstmeister/kafka:2.11-2.0.1   "start-kafka.sh"         31 seconds ago      Up 24 seconds       0.0.0.0:9092->9092/tcp                                                                       compose_kafka_1
-   22a4fe4b2eb4        voltha-ofagent-go:latest        "/app/ofagent --cont…"   31 seconds ago      Up 23 seconds                                                                                                    compose_ofagent_1
-   d34e1c976db5        voltha-rw-core:latest           "/app/rw_core -kv_st…"   31 seconds ago      Up 26 seconds       0.0.0.0:50057->50057/tcp                                                                     compose_rw_core_1
-   f6ef52975dc0        voltha-openonu-adapter:latest   "/voltha/adapters/br…"   31 seconds ago      Up 29 seconds                                                                                                    compose_adapter_openonu_1
-   7ce8bcf7436c        voltha-onos:latest              "./bin/onos-service …"   31 seconds ago      Up 25 seconds       0.0.0.0:6653->6653/tcp, 0.0.0.0:8101->8101/tcp, 6640/tcp, 9876/tcp, 0.0.0.0:8181->8181/tcp   compose_onos_1
-   60ac172726f5        quay.io/coreos/etcd:v3.4.1      "etcd --name=etcd0 -…"   31 seconds ago      Up 28 seconds       0.0.0.0:2379->2379/tcp, 0.0.0.0:32931->2380/tcp, 0.0.0.0:32930->4001/tcp                     compose_etcd_1
-
-Verify Cluster Communication
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Use ``voltctl`` commands to verify core and adapters are running.
-
-.. code:: sh
-
-   voltctl adapter list
-   ID                   VENDOR            VERSION      SINCELASTCOMMUNICATION
-   brcm_openomci_onu    VOLTHA OpenONU    2.3.2-dev    UNKNOWN
-   openolt              VOLTHA OpenOLT    2.3.5-dev    UNKNOWN
-
-List “devices” to verify no devices exist.
-
-.. code:: sh
-
-   voltctl device list
-   ID    TYPE    ROOT    PARENTID    SERIALNUMBER    ADDRESS    ADMINSTATE    OPERSTATUS    CONNECTSTATUS    REASON
-
-At this point create/preprovision and enable an olt device and add flows
-via onos and ofagent.
-
-Physical OLT/ONU Testing with Passing Traffic
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Start a physical OLT and ONU. Tested with Edgecore OLT, Broadcom based
-ONU, and RG capable of EAPoL. Create/Preprovision the OLT and enable.
-The create command returns the device ID needed for enable and
-subsequent commands.
-
-**Add device to VOLTHA**
-
-.. code:: sh
-
-   $ voltctl device create -t openolt -H 10.64.1.206:9191
-   db87c4b48843bb99567d3d94
-
-   $ voltctl device enable db87c4b48843bb99567d3d94
-   db87c4b48843bb99567d3d94
-
-**Verify device state**
-
-.. code:: sh
-
-   $ voltctl device list
-   ID                          TYPE                 ROOT     PARENTID                    SERIALNUMBER    ADDRESS             ADMINSTATE    OPERSTATUS    CONNECTSTATUS    REASON
-   db87c4b48843bb99567d3d94    openolt              true     a82bb53678ae                EC1721000221    10.64.1.206:9191    ENABLED       ACTIVE        REACHABLE
-   082d7c2e628325ccc3336275    brcm_openomci_onu    false    db87c4b48843bb99567d3d94    ALPHe3d1cf57    unknown             ENABLED       ACTIVE        REACHABLE        omci-flows-pushed
-
-.. code:: sh
-
-   $ voltctl device port list db87c4b48843bb99567d3d94
-   PORTNO       LABEL            TYPE            ADMINSTATE    OPERSTATUS    DEVICEID    PEERS
-   1048576      nni-1048576      ETHERNET_NNI    ENABLED       ACTIVE                    []
-   536870912    pon-536870912    PON_OLT         ENABLED       ACTIVE                    [{082d7c2e628325ccc3336275 536870912}]
-   536870913    pon-536870913    PON_OLT         ENABLED       ACTIVE                    []
-   536870914    pon-536870914    PON_OLT         ENABLED       ACTIVE                    []
-   536870915    pon-536870915    PON_OLT         ENABLED       ACTIVE                    []
-   536870916    pon-536870916    PON_OLT         ENABLED       ACTIVE                    []
-   536870917    pon-536870917    PON_OLT         ENABLED       ACTIVE                    []
-   536870918    pon-536870918    PON_OLT         ENABLED       ACTIVE                    []
-   536870919    pon-536870919    PON_OLT         ENABLED       ACTIVE                    []
-   536870920    pon-536870920    PON_OLT         ENABLED       ACTIVE                    []
-   536870921    pon-536870921    PON_OLT         ENABLED       ACTIVE                    []
-   536870922    pon-536870922    PON_OLT         ENABLED       ACTIVE                    []
-   536870923    pon-536870923    PON_OLT         ENABLED       ACTIVE                    []
-   536870924    pon-536870924    PON_OLT         ENABLED       ACTIVE                    []
-   536870925    pon-536870925    PON_OLT         ENABLED       ACTIVE                    []
-   536870926    pon-536870926    PON_OLT         ENABLED       ACTIVE                    []
-   536870927    pon-536870927    PON_OLT         ENABLED       ACTIVE                    []
-
-.. code:: sh
-
-   $ voltctl device port list 082d7c2e628325ccc3336275
-   PORTNO       LABEL       TYPE            ADMINSTATE    OPERSTATUS    DEVICEID    PEERS
-   16           uni-16      ETHERNET_UNI    ENABLED       ACTIVE                    []
-   17           uni-17      ETHERNET_UNI    ENABLED       DISCOVERED                []
-   18           uni-18      ETHERNET_UNI    ENABLED       DISCOVERED                []
-   19           uni-19      ETHERNET_UNI    ENABLED       DISCOVERED                []
-   20           uni-20      ETHERNET_UNI    ENABLED       DISCOVERED                []
-   536870912    PON port    PON_ONU         ENABLED       ACTIVE                    [{db87c4b48843bb99567d3d94 536870912}]
-
-Verify ONOS device state and eventual EAPoL authentication. ONOS default
-Username is ``karaf``, Password is ``karaf``
-
-.. code:: sh
-
-   ssh -p 8101 karaf@localhost
-
-Display the device and ports discovered
-
-.. code:: sh
-
-   karaf@root > ports
-
-   id=of:0000a82bb53678ae, available=true, local-status=connected 4m27s ago, role=MASTER, type=SWITCH, mfr=VOLTHA Project, hw=open_pon, sw=open_pon, serial=EC1721000221, chassis=a82bb53678ae, driver=voltha, channelId=172.27.0.1:59124, managementAddress=172.27.0.1, protocol=OF_13
-     port=16, state=enabled, type=fiber, speed=0 , adminState=enabled, portMac=08:00:00:00:00:10, portName=ALPHe3d1cf57-1
-     port=17, state=disabled, type=fiber, speed=0 , adminState=enabled, portMac=08:00:00:00:00:11, portName=ALPHe3d1cf57-2
-     port=18, state=disabled, type=fiber, speed=0 , adminState=enabled, portMac=08:00:00:00:00:12, portName=ALPHe3d1cf57-3
-     port=19, state=disabled, type=fiber, speed=0 , adminState=enabled, portMac=08:00:00:00:00:13, portName=ALPHe3d1cf57-4
-     port=20, state=disabled, type=fiber, speed=0 , adminState=enabled, portMac=08:00:00:00:00:14, portName=ALPHe3d1cf57-5
-     port=1048576, state=enabled, type=fiber, speed=0 , adminState=enabled, portMac=a8:2b:b5:36:78:ae, portName=nni-1048576
-
-EAPoL may take up to 30 seconds to complete.
-
-.. code:: sh
-
-   karaf@root > aaa-users
-
-   of:0000a82bb53678ae/16: AUTHORIZED_STATE, last-changed=4m22s ago, mac=94:CC:B9:DA:AB:D1, subid=PON 1/1/3/1:2.1.1, username=94:CC:B9:DA:AB:D1
-
-**Provision subscriber flows**
-
-.. code:: sh
-
-   karaf@root > volt-add-subscriber-access of:0000a82bb53678ae 16
-
-   karaf@root > volt-programmed-subscribers
-
-   location=of:0000a82bb53678ae/16 tagInformation=UniTagInformation{uniTagMatch=0, ponCTag=20, ponSTag=11, usPonCTagPriority=-1, usPonSTagPriority=-1, dsPonCTagPriority=-1, dsPonSTagPriority=-1, technologyProfileId=64, enableMacLearning=false, upstreamBandwidthProfile='Default', downstreamBandwidthProfile='Default', serviceName='', configuredMacAddress='A4:23:05:00:00:00', isDhcpRequired=true, isIgmpRequired=false}
-
-After about 30 seconds the RG should attempt DHCP which should be
-visible in onos. At this point the RG should be able to pass database
-traffic via the ONU/OLT.
-
-.. code:: sh
-
-   karaf@root > dhcpl2relay-allocations
-
-   SubscriberId=ALPHe3d1cf57-1,ConnectPoint=of:0000a82bb53678ae/16,State=DHCPREQUEST,MAC=94:CC:B9:DA:AB:D1,CircuitId=PON 1/1/3/1:2.1.1,IP Allocated=29.29.206.20,Allocation Timestamp=2020-02-17T15:34:31.572746Z
-
-BBSIM Simulated OLT/ONU Testing Control Plane Traffic
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If you do not have physical OLT/ONU hardware you can start VOLTHA
-containers and the bbsim olt/onu hardware simulator using a different
-docker-compose yaml file. Verify containers are running as above with
-the addition of the bbsim and radius server containers.
-
-.. code:: sh
-
-   export DOCKER_HOST_IP=172.17.0.1
-   export DOCKER_TAG=latest
-
-   cd ~/source/voltha-go
-   docker-compose -f compose/system-test-bbsim.yml up -d
-
-Create/Preprovision and enable similarly to the physical OLT above,
-providing the local bbsim IP and listening port
-
-.. code:: sh
-
-   voltctl device create -t openolt -H 172.17.0.1:50060
-   ece94c86e93c6e06dd0a544b
-
-   voltctl device enable ece94c86e93c6e06dd0a544b
-   ece94c86e93c6e06dd0a544b
-
-Proceed with the verification and ONOS provisioning commands similar to
-the physical OLT described above.
diff --git a/overview/workflows.rst b/overview/workflows.rst
new file mode 100644
index 0000000..25bb7e6
--- /dev/null
+++ b/overview/workflows.rst
@@ -0,0 +1,101 @@
+.. _workflows:
+
+Operator workflows
+==================
+
+``Workflow`` is a term that spilled over from the SEBA Reference Design (RD) into VOLTHA.
+
+In SEBA a workflow is defined as the set of operations that control the lifecycle
+of a subscriber, authentication logic, customer tag management, etc. Such a workflow is operator specific.
+
+A full description of the workflows can be `found here <https://drive.google.com/drive/folders/1MfxwoDSvAR_rgFHt6n9Sai7IuiJPrHxF>`_.
+
+A big part of the workflow in SEBA is defined within NEM (Network Edge Mediator).
+Given that NEM is not available in a plain VOLTHA deployment, the definition
+of a workflow here is a subset of the SEBA one, and comprises:
+
+- Customer tags allocation
+- Technology profile
+- Bandwidth profile
+- Flow management (EAPOL/DHCP/Data path)
+- Group management
+
+The `workflows` are often referred to as `use-cases` and the two terms are interchangeable
+in the VOLTHA environment.
+
+To deploy a specific workflow through ``kind-voltha`` please visit :ref:`quickstart`.
+
+How is the workflow defined in VOLTHA?
+----------------------------------------
+
+Customer tag allocation
+***********************
+
+The VLAN tags for a particular subscriber are defined in the ``sadis`` configuration.
+`Sadis <https://github.com/opencord/sadis>`_ stands for `Subscriber and Device Information Service`
+and is the ONOS application responsible for storing and distributing subscriber information.
+
+Information on different ``sadis`` configurations can be found here:
+https://docs.google.com/document/d/1JLQ51CZg4jsXsBQcrJn-fc2kVvXH6lw0PYoyIclwmBs
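+
+As a sketch, a ``sadis`` configuration is typically pushed to the running ONOS instance
+via its REST API (the payload file is operator specific; ``my-sadis-cfg.json`` below is
+just a placeholder):
+
+.. code:: bash
+
+    curl --fail -sSL --user karaf:karaf \
+        -X POST -H Content-Type:application/json \
+        http://127.0.0.1:8181/onos/v1/network/configuration/apps/org.opencord.sadis \
+        --data @my-sadis-cfg.json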
+
+Technology profile
+******************
+
+Technology profiles describe technology specific attributes required to implement
+Subscriber Services on an OpenFlow managed Logical Switch overlaid upon an OLT
+or other technology specific platform.
+
+More on Technology profiles here:
+https://wiki.opencord.org/display/CORD/Technology+Profiles#TechnologyProfiles-IntroductiontoTechnologyProfiles
+
+Technology profiles in VOLTHA are stored in ETCD. If you want to load a custom
+Technology profile in your stack you can do so by:
+
+.. code:: bash
+
+    ETCD_POD=$(kubectl get pods | grep etcd | awk 'NR==1{print $1}')
+    kubectl cp <my-tech-profile>.json $ETCD_POD:/tmp/tp.json
+    kubectl exec -it $ETCD_POD -- /bin/sh -c 'cat /tmp/tp.json | ETCDCTL_API=3 etcdctl put service/voltha/technology_profiles/XGS-PON/64'
+
+*Note that `XGS-PON` represents the technology of your OLT device and `64` is
+the default id of the technology profile. If you want to use a non-default
+technology profile for a particular subscriber, that needs to be configured
+in `sadis`.*
+
+Bandwidth profile
+*****************
+
+Bandwidth profiles control the bandwidth allocation for a particular subscriber.
+They are defined in the `sadis` configuration.
+An example:
+
+.. code-block:: json
+
+    {
+      "id" : "Default",
+      "cir" : 1000000,
+      "cbs" : 1001,
+      "eir" : 1002,
+      "ebs" : 1003,
+      "air" : 1004
+    }
+
+Each bandwidth profile is then translated into an OpenFlow Meter for configuration on the OLT.
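+
+To verify the result you can, for example, list the meters installed on the logical
+device from the ONOS CLI (the OpenFlow ID below is just an example and the output
+depends on your bandwidth profiles):
+
+.. code:: bash
+
+    ssh -p 8101 karaf@localhost # (pwd: karaf)
+    karaf@root > meters of:00000a0a0a0a0a00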
+
+Flow management
+***************
+
+Flows are managed in ONOS by the `olt` application. Through the configuration of
+this application you can define whether your setup will create:
+
+- An `EAPOL` trap flow
+- A `DHCP` trap flow
+- An `IGMP` trap flow
+
+in addition to the default data plane flows.
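+
+As a sketch, the current configuration of the `olt` application can be inspected (and
+changed) with the standard ONOS component configuration commands:
+
+.. code:: bash
+
+    ssh -p 8101 karaf@localhost # (pwd: karaf)
+    karaf@root > cfg get org.opencord.olt.impl.Olt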
+
+Group management
+****************
+
+Groups are managed in ONOS by the `mcast` application. Through the configuration of
+this application you can achieve multicast for services such as IPTV.
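+
+As an example, the multicast groups programmed on the devices can be listed from the
+ONOS CLI (the output depends on the provisioned services):
+
+.. code:: bash
+
+    ssh -p 8101 karaf@localhost # (pwd: karaf)
+    karaf@root > groups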