Merge "[CORD-3110] Synchronizer hello world"
diff --git a/Makefile b/Makefile
index b5c7dd9..f1d1b93 100644
--- a/Makefile
+++ b/Makefile
@@ -27,7 +27,7 @@
test: linkcheck lint
linkcheck: build
- linkchecker --check-extern -a _book/
+ linkchecker -a _book/
lint:
@echo "markdownlint(mdl) version: `mdl --version`"
diff --git a/README.md b/README.md
index ffadeed..268c88d 100644
--- a/README.md
+++ b/README.md
@@ -1,35 +1,15 @@
# Installation Guide
-This guide describes how to install CORD.
-
-## Prerequisites
-
-Start by satisfying the following prerequisites:
-
-* [Hardware Requirements](./prereqs/hardware.md)
-* [Connectivity Requirements](./prereqs/networking.md)
-* [Software Requirements](./prereqs/software.md)
-
-## Deploy CORD
-
-The next step is select the configuration (profile) you want to
-install:
+This guide describes how to install CORD. It identifies a set of
+[prerequisites](prereqs/README.md), and then walks through
+the steps involved in bringing up one of two CORD profiles:
* [R-CORD](./profiles/rcord/install.md)
* [M-CORD](./profiles/mcord/install.md)
-## Additional Information
+If you are anxious to jump straight to a [Quick Start](quickstart.md)
+procedure that brings up an emulated version of CORD running
+on your laptop (sorry, no subscriber data plane), then that's an option.
-The following are optional steps you may want to take
-
-### Offline Installation
-
-If your environment does not permit connecin your POD to ther public
-Internet, you may want to take advantage of a local Docker registery.
-The following [registry setup](./prereqs/docker-registry.md) will help.
-
-### OpenStack Installation
-
-If you need OpenStack included in your deployment, so you can bring up
-VMs on your POD, you will need to following the following
-[OpenStack deployment](./prereqs/openstack-helm.md) guide.
+Alternatively, if you want to get a broader lay-of-the-land, you
+might step back and start with an [Overview](overview.md).
diff --git a/SUMMARY.md b/SUMMARY.md
index 798a13d..70dc248 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -1,20 +1,27 @@
# Summary
* [Overview](overview.md)
+ * [Navigating CORD](navigate.md)
+ * [Quick Start](quickstart.md)
+ * [MacOS](macos.md)
+ * [Linux](linux.md)
* [Installation Guide](README.md)
- * [Hardware Requirements](prereqs/hardware.md)
- * [Connectivity Requirements](prereqs/networking.md)
- * [Software Requirements](prereqs/software.md)
- * [Kubernetes](prereqs/kubernetes.md)
- * [Single Node](prereqs/k8s-single-node.md)
- * [Multi-Node](prereqs/k8s-multi-node.md)
- * [Helm](prereqs/helm.md)
- * [Docker Registry (optional)](prereqs/docker-registry.md)
- * [OpenStack (optional)](prereqs/openstack-helm.md)
- * [Fabric Setup](fabric-setup.md)
+ * [Prerequisites](prereqs/README.md)
+ * [Hardware Requirements](prereqs/hardware.md)
+ * [Connectivity Requirements](prereqs/networking.md)
+ * [Software Requirements](prereqs/software.md)
+ * [Kubernetes](prereqs/kubernetes.md)
+ * [Single Node](prereqs/k8s-single-node.md)
+ * [Multi-Node](prereqs/k8s-multi-node.md)
+ * [Helm](prereqs/helm.md)
+ * [Optional Packages](prereqs/optional.md)
+ * [Docker Registry](prereqs/docker-registry.md)
+ * [OpenStack](prereqs/openstack-helm.md)
+ * [Fabric Software Setup](fabric-setup.md)
* [Bringing Up CORD](profiles/intro.md)
* [R-CORD](profiles/rcord/install.md)
* [OLT Setup](openolt/README.md)
+ * [Emulated OLT/ONU](profiles/rcord/emulate.md)
* [M-CORD](profiles/mcord/install.md)
* [EnodeB Setup](profiles/mcord/enodeb-setup.md)
* [Helm Reference](charts/helm.md)
@@ -26,8 +33,11 @@
* [Base OpenStack](charts/base-openstack.md)
* [VTN Setup](prereqs/vtn-setup.md)
* [M-CORD](charts/mcord.md)
+ * [XOSSH](charts/xossh.md)
* [Operations Guide](operating_cord/operating_cord.md)
* [General Info](operating_cord/general.md)
+ * [GUI](operating_cord/gui.md)
+ * [Configuring the Service Graph](xos-gui/developer/service_graph.md)
* [REST API](operating_cord/rest_apis.md)
* [TOSCA](xos-tosca/README.md)
* [XOSSH](xos/dev/xossh.md)
@@ -43,17 +53,19 @@
* [RCORD](rcord/README.md)
* [vOLT](olt-service/README.md)
* [vRouter](vrouter/README.md)
-* [Modeling Guide](xos/README.md)
- * [XOS Modeling Framework](xos/dev/xproto.md)
- * [Core Models](xos/core_models.md)
- * [Security Policies](xos/security_policies.md)
- * [Writing Synchronizers](xos/dev/synchronizers.md)
- * [Design Guidelines](xos/dev/sync_arch.md)
- * [Implementation Details](xos/dev/sync_impl.md)
- * [Synchronizer Reference](xos/dev/sync_reference.md)
* [Development Guide](developer/developer.md)
* [Getting the Source Code](developer/getting_the_code.md)
+ * [Writing Models and Synchronizers](xos/intro.md)
+ * [XOS Modeling Framework](xos/dev/xproto.md)
+ * [XOS Tool Chain (Internals)](xos/dev/xosgenx.md)
+ * [XOS Synchronizer Framework](xos/dev/synchronizers.md)
+ * [Synchronizer Design](xos/dev/sync_arch.md)
+ * [Synchronizer Implementation](xos/dev/sync_impl.md)
+ * [Synchronizer Reference](xos/dev/sync_reference.md)
+ * [Core Models](xos/core_models.md)
+ * [Security Policies](xos/security_policies.md)
* [Developer Workflows](developer/workflows.md)
+ * [Working on R-CORD Without an OLT/ONU](developer/configuration_rcord.md)
* [Building Docker Images](developer/imagebuilder.md)
* Tutorials
* [Synchronizer Hello World](developer/tutorials/basic-synchronizer/intro.md)
@@ -65,13 +77,13 @@
* [SimpleExampleService](simpleexampleservice/simple-example-service.md)
* [GUI Development](xos-gui/developer/README.md)
* [Quickstart](xos-gui/developer/quickstart.md)
- * [Service Graph](xos-gui/developer/service_graph.md)
* [GUI Extensions](xos-gui/developer/gui_extensions.md)
* [GUI Internals](xos-gui/architecture/README.md)
* [Module Strucure](xos-gui/architecture/gui-modules.md)
* [Data Sources](xos-gui/architecture/data-sources.md)
* [Tests](xos-gui/developer/tests.md)
* [Unit Tests](xos/dev/unittest.md)
+ * [Versions and Releases](versioning.md)
* [Testing Guide](cord-tester/README.md)
* [Test Setup](cord-tester/qa_testsetup.md)
* [Test Environment](cord-tester/qa_testenv.md)
diff --git a/charts/helm.md b/charts/helm.md
index a4c68ff..8b1c2e3 100644
--- a/charts/helm.md
+++ b/charts/helm.md
@@ -1,11 +1,10 @@
# Helm Reference
-For information on how to install `helm` please refer to [Installing helm](../prereqs/helm.md)
-
-## What is Helm?
-
{% include "/partials/helm/description.md" %}
+For information on how to install `helm`, please refer to
+[Installing Helm](../prereqs/helm.md).
+
## CORD Helm Charts
All helm charts used to install CORD can be found in the `helm-chart`
@@ -35,7 +34,18 @@
is then possible to bring up the `mcord` profile, which corresponds
to ~10 other services. It is also possible to bring up an individual
service by executing its helm chart; for example
-`xos-services/exampleservice`.
+`xos-services/simpleexampleservice`.
+
+> **Note:** Sometimes we install individual services by first
+> "wrapping" them in a profile. For example,
+> `SimpleExampleService` is deployed from the
+> `xos-profiles/demo-simpleexampleservice` profile, rather
+> than directly from `xos-services/simpleexampleservice`.
+> The latter is included by reference from the former.
+> This is not a fundamental limitation, but we do it when we
+> want to run the `tosca-loader` that loads a TOSCA workflow
+> into CORD. This feature is currently available only at
+> the profile level.
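+
+For example, a sketch of bringing up `SimpleExampleService` via its
+wrapper profile (the same commands used in the Quick Start sections):
+
+```shell
+helm dep update xos-profiles/demo-simpleexampleservice
+helm install -n demo-simpleexampleservice xos-profiles/demo-simpleexampleservice
+```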
Similarly, the `base-kubernetes` profile brings up Kubernetes in
support of container-based VNFs. This corresponds to the
@@ -43,7 +53,7 @@
Kubernetes to deploy the CORD control plane. Once this profile is
running, it is possible to bring up an example VNF in a container
by executing its helm chart; for example
-`xos-services/simpleexampleservice`.
+`xos-profiles/demo-simpleexampleservice`.
> **Note:** The `base-kubernetes` configuration does not yet
> incorporate VTN. Doing so is work-in-progress.
diff --git a/charts/hippie-oss.md b/charts/hippie-oss.md
index 0d11701..7afdb4c 100644
--- a/charts/hippie-oss.md
+++ b/charts/hippie-oss.md
@@ -1,5 +1,8 @@
# Deploy Hippie OSS
+To install a minimal (permissive) OSS container in support of subscriber
+provisioning for R-CORD, run the following:
+
```shell
helm install -n hippie-oss xos-services/hippie-oss
```
diff --git a/charts/local-persistent-volume.md b/charts/local-persistent-volume.md
new file mode 100644
index 0000000..b2bd8d8
--- /dev/null
+++ b/charts/local-persistent-volume.md
@@ -0,0 +1,40 @@
+# Local Persistent Volume Helm Chart
+
+## Introduction
+
+The `local-persistent-volume` helm chart is a utility chart. It was
+created mainly to persist the `xos-core` DB data, but it can be used
+to persist any data.
+
+It uses a relatively new Kubernetes feature (a beta feature as of
+Kubernetes 1.10.x) that allows us to define an independent persistent
+store in a Kubernetes cluster.
+
+The helm chart mainly consists of the following Kubernetes resources:
+
+- A storage class resource representing a local persistent volume
+- A persistent volume resource associated with the storage class and a specific directory on a specific node
+- A persistent volume claim resource that claims certain portion of the persistent volume on behalf of a pod
+
+The following variables are configurable in the helm chart:
+
+- `storageClassName`: The name of the storage class resource
+- `persistentVolumeName`: The name of the persistent volume resource
+- `pvClaimName`: The name of the persistent volume claim resource
+- `volumeHostName`: The name of the Kubernetes node on which the data will be persisted
+- `hostLocalPath`: The directory or volume mount path on the chosen node where data will be persisted
+- `pvStorageCapacity`: The capacity of the volume available to the persistent volume resource (e.g. 10Gi)
+
+> **Note:** For this helm chart to work, the volume mount path or directory specified in the `hostLocalPath` variable needs to exist before the helm chart is deployed.
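+
+As an illustration (all values here are hypothetical), the variables can
+be overridden with a custom values file passed to `helm install`:
+
+```yaml
+# local-store-values.yaml (example overrides; adjust for your cluster)
+storageClassName: local-storage
+persistentVolumeName: xos-db-pv
+pvClaimName: xos-db-pvc
+volumeHostName: node1
+hostLocalPath: /mnt/local-storage
+pvStorageCapacity: 10Gi
+```
+
+```shell
+helm install -n local-store local-persistent-volume -f local-store-values.yaml
+```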
+
+## Standard Install
+
+```shell
+helm install -n local-store local-persistent-volume
+```
+
+## Standard Uninstall
+
+```shell
+helm delete --purge local-store
+```
diff --git a/charts/voltha.md b/charts/voltha.md
index 6ddabc0..e39a45a 100644
--- a/charts/voltha.md
+++ b/charts/voltha.md
@@ -2,39 +2,55 @@
## First Time Installation
-Add the kubernetes helm charts incubator repository
+Add the helm charts `incubator` repository:
+
```shell
-cd voltha
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
```
Build dependencies
+
```shell
-helm dep build
+helm dep build voltha
```
-There's an etcd-operator **known bug** we're trying to solve that
-prevents users to deploy Voltha straight since the first time. We
-found a workaround.
+Install the kafka dependency
-Few steps:
+```shell
+helm install --name voltha-kafka \
+--set replicas=1 \
+--set persistence.enabled=false \
+--set zookeeper.servers=1 \
+--set zookeeper.persistence.enabled=false \
+incubator/kafka
+```
-Install Voltha (without etcd operator)
+There is an `etcd-operator` **known bug** that prevents deploying
+Voltha correctly the first time. We suggest the following workaround:
+
+First, install Voltha without an `etcd` custom resource definition:
+
```shell
helm install -n voltha --set etcd-operator.customResources.createEtcdClusterCRD=false voltha
```
-Uninstall Voltha
+Then upgrade Voltha, which defaults to using the `etcd` custom
+resource definition:
+
+```shell
+helm upgrade --set etcd-operator.customResources.createEtcdClusterCRD=true voltha ./voltha
+```
+
+After this first installation, you can use the standard
+install/uninstall procedure described below.
+
+## Standard Uninstall
+
```shell
helm delete --purge voltha
```
-Deploy Voltha
-```shell
-helm install -n voltha voltha
-```
-
-## Standard Installation Process
+## Standard Install
```shell
helm install -n voltha voltha
@@ -49,7 +65,7 @@
* Inner port: 8882
* Nodeport: 30125
-## How to access the VOLTHA CLI
+## Accessing the VOLTHA CLI
Assuming you have not changed the default ports in the chart,
you can use this command to access the VOLTHA CLI:
diff --git a/charts/xos-core.md b/charts/xos-core.md
index 50c7b1f..7d2f44c 100644
--- a/charts/xos-core.md
+++ b/charts/xos-core.md
@@ -1,11 +1,43 @@
# Deploy XOS-CORE
+To deploy the XOS core and affiliated containers, run the following:
+
```shell
helm dep update xos-core
helm install -n xos-core xos-core
```
+## Customizing security information
+
+We strongly recommend overriding the default values of `xosAdminUser` and
+`xosAdminPassword` with custom values.
+
+You can do this using a [`values.yaml`](https://docs.helm.sh/chart_template_guide/#values-files)
+file like this one:
+
+```yaml
+# custom-security.yaml
+xosAdminUser: 'admin@onf.org'
+xosAdminPassword: 'foobar'
+```
+
+and add it to the install command:
+
+```shell
+helm install -n xos-core xos-core -f custom-security.yaml
+```
+
+or you can override the values from the CLI:
+
+```shell
+helm install -n xos-core xos-core --set xosAdminUser=MyUser --set xosAdminPassword=MySuperSecurePassword
+```
+> **Important!**
+> If you override security values in the `xos-core` chart, you'll need to pass
+> these values, either via a file or CLI arguments, to all the XOS-related charts
+> you install, e.g., `rcord-lite`, `base-openstack`, ...
+
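+For example, assuming the `rcord-lite` profile chart lives at
+`xos-profiles/rcord-lite`, the same values file can be reused when
+installing it:
+
+```shell
+helm install -n rcord-lite xos-profiles/rcord-lite -f custom-security.yaml
+```
+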
## Deploy kafka
-Some flavors of XOS require kafka, to install it please
-follow refer to the [kafka](kafka.md) instructions.
+Some flavors of XOS require kafka. To install it, please
+refer to the [kafka](kafka.md) instructions.
diff --git a/charts/xossh.md b/charts/xossh.md
new file mode 100644
index 0000000..675a23e
--- /dev/null
+++ b/charts/xossh.md
@@ -0,0 +1,7 @@
+# Deploy XOSSH
+
+To deploy the XOS-Shell, run the following:
+
+```shell
+helm install xos-tools/xossh -n xossh
+```
diff --git a/developer/configuration_rcord.md b/developer/configuration_rcord.md
new file mode 100644
index 0000000..d8066af
--- /dev/null
+++ b/developer/configuration_rcord.md
@@ -0,0 +1,113 @@
+# Working on R-CORD Without an OLT/ONU
+
+This section describes a developer workflow that works in scenarios
+where you do not have a real OLT or ONU. It combines steps from
+the "bottom-up" and "top-down" subscriber provisioning sequences
+described [here](../profiles/rcord/configuration.md).
+
+The idea is to add the access devices (OLT/PONPORT/ONU) to the XOS
+data model through "top-down" provisioning, while simulating with a
+Python script the "bottom-up" action of VOLTHA publishing a newly
+discovered ONU to the Kafka bus.
+
+## Prerequisites
+
+- All the components needed for the R-CORD profile are up and running
+ on your POD (xos-core, rcord-lite, voltha, onos-voltha).
+- Configure `OLT/PONPORT/ONU` devices using the sample
+ TOSCA config given below:
+
+```yaml
+tosca_definitions_version: tosca_simple_yaml_1_0
+imports:
+ - custom_types/oltdevice.yaml
+ - custom_types/onudevice.yaml
+ - custom_types/ponport.yaml
+ - custom_types/voltservice.yaml
+description: Create a simulated OLT Device in VOLTHA
+topology_template:
+ node_templates:
+
+ device#olt:
+ type: tosca.nodes.OLTDevice
+ properties:
+ device_type: simulated_olt
+ host: 172.17.0.1
+ port: 50060
+ must-exist: true
+
+ pon_port:
+ type: tosca.nodes.PONPort
+ properties:
+ name: test_pon_port_1
+ port_no: 2
+ s_tag: 222
+ requirements:
+ - olt_device:
+ node: device#olt
+ relationship: tosca.relationships.BelongsToOne
+
+ onu:
+ type: tosca.nodes.ONUDevice
+ properties:
+ serial_number: BRCM1234
+ vendor: Broadcom
+ requirements:
+ - pon_port:
+ node: pon_port
+ relationship: tosca.relationships.BelongsToOne
+```
+
+- Deploy `kafka` as described in [these instructions](../charts/kafka.md).
+
+- Deploy `hippie-oss` as described in [these instructions](../charts/hippie-oss.md).
+
+## Push "onu-event" to Kafka
+
+The following event needs to be pushed to Kafka manually.
+
+```python
+import json
+
+event = json.dumps({
+ 'status': 'activated',
+ 'serial_number': 'BRCM1234',
+ 'uni_port_id': 16,
+ 'of_dpid': 'of:109299321'
+})
+```
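+
+For reference, here is a minimal sketch of publishing such an event
+yourself, assuming the `kafka-python` client; the broker address and
+topic name below are assumptions, not values taken from the script:
+
+```python
+import json
+from kafka import KafkaProducer
+
+# Broker address and topic are assumptions; check onu_activate_event.py
+# in the volt-synchronizer container for the values actually used.
+producer = KafkaProducer(bootstrap_servers='cord-kafka:9092')
+event = json.dumps({
+    'status': 'activated',
+    'serial_number': 'BRCM1234',
+    'uni_port_id': 16,
+    'of_dpid': 'of:109299321'
+})
+producer.send('onu.events', event.encode('utf-8'))
+producer.flush()
+```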
+
+Make sure that the `serial_number` in the event matches the
+`serial_number` you configured when adding the ONU device.
+XOS uses the serial number to make sure the device is actually
+listed (`volt/onudevices`).
+
+The script for pushing the `onu-event` to Kafka
+(`onu_activate_event.py`) is already available in the container
+running `volt-synchronizer` and you may execute it as:
+
+```shell
+cordserver@cordserver:~$ kubectl get pods | grep rcord-lite-volt
+rcord-lite-volt-dd98f78d6-rwwhz 1/1 Running 0 10d
+
+cordserver@cordserver:~$ kubectl exec rcord-lite-volt-dd98f78d6-rwwhz python /opt/xos/synchronizers/volt/onu_activate_event.py
+```
+
+If you need to update the contents of the event file, you first have to
+run `apt update` and `apt install vim` within the container to get an editor.
+
+## Verification
+
+- Verify that the `hippie-oss` instance is created for the event
+ (i.e., verify the serial number of the ONU). The `hippie-oss` container
+ is intended to verify the ONU serial number against an external OSS-DB,
+ but it is currently configured to always validate the ONU.
+- Verify a new `rcord-subscriber` service instance is created.
+- Once the `rcord-subscriber` service instance is created, make sure
+ new service instances are created for the `volt` and `vsg-hw` models.
+
+```shell
+curl -X GET http://172.17.8.101:30006/xosapi/v1/hippie-oss/hippieossserviceinstances -u "admin@opencord.org:letmein"
+curl -X GET http://172.17.8.101:30006/xosapi/v1/rcord/rcordsubscribers -u "admin@opencord.org:letmein"
+curl -X GET http://172.17.8.101:30006/xosapi/v1/volt/voltserviceinstances -u "admin@opencord.org:letmein"
+curl -X GET http://172.17.8.101:30006/xosapi/v1/vsg-hw/vsghwserviceinstances -u "admin@opencord.org:letmein"
+```
diff --git a/developer/developer.md b/developer/developer.md
index fef811f..a1fc60c 100644
--- a/developer/developer.md
+++ b/developer/developer.md
@@ -1,9 +1,8 @@
# Development Guide
-This guide describes workflows and best practices for developers. If
-you are a service developer, you will need to consult this guide and
-the companion [Modeling Guide](../xos/README.md) that describes how
-define models and synchronizers for services being onboarded into
+This guide describes workflows and best practices for developers.
+If you are a service developer, this includes information on how to
+write models and synchronizers for services being on-boarded into
CORD. If you are a platform developer, you will find information about
the platform services typically integrated into CORD (e.g.,
Kubernetes, OpenStack, VTN). Service developers may be interested in
diff --git a/developer/workflows.md b/developer/workflows.md
index aa8cf5f..ea8b5da 100644
--- a/developer/workflows.md
+++ b/developer/workflows.md
@@ -31,7 +31,7 @@
```
In this folder you can choose from the different charts which one to deploy.
-For example to deploy rcord-lite you can follow [this guide](../profiles/rcord/install.md)
+For example, to deploy R-CORD you can follow [this guide](../profiles/rcord/install.md)
### Deploy a Single Instance of Kafka
diff --git a/fabric-setup.md b/fabric-setup.md
index e4f0d2d..d0de4c2 100644
--- a/fabric-setup.md
+++ b/fabric-setup.md
@@ -1,31 +1,34 @@
-# Fabric switches software setup
+# Fabric Software Setup
CORD uses the Trellis fabric to connect the data plane components together.
+This section describes how to set up the software for these switches.
-The full [latest Trellis fabric documentation](https://wiki.opencord.org/display/CORD/Trellis%3A+CORD+Network+Infrastructure) can still be found on the old CORD wiki.
+The latest [Trellis Fabric](https://wiki.opencord.org/display/CORD/Trellis%3A+CORD+Network+Infrastructure) documentation can be found on the CORD wiki.
-## Supported switches
+## Supported Switches
-The list of supported hardware can be found in the [hardware requirements page](prereqs/hardware.html#generic-hardware-guidelines).
+The list of supported hardware can be found in the [hardware requirements page](prereqs/hardware.md).
-## Operating system
+## Operating System
-At today, all compatible switches use [Open Networking Linux (ONL)](https://opennetlinux.org/) as operating system.
-
+All CORD-compatible switches use
+[Open Networking Linux (ONL)](https://opennetlinux.org/) as the operating system.
The [latest compatible ONL image](https://github.com/opencord/OpenNetworkLinux/releases/download/2017-10-19.2200-1211610/ONL-2.0.0_ONL-OS_2017-10-19.2200-1211610_AMD64_INSTALLED_INSTALLER) can be downloaded from [here](https://github.com/opencord/OpenNetworkLinux/releases/download/2017-10-19.2200-1211610/ONL-2.0.0_ONL-OS_2017-10-19.2200-1211610_AMD64_INSTALLED_INSTALLER).
**Checksum**: *sha256:2db316ea83f5dc761b9b11cc8542f153f092f3b49d82ffc0a36a2c41290f5421*
-Deployment guidelines on how to install ONL on top of an ONIE compatible device can be found directly on the [ONL website](https://opennetlinux.org/docs/deploy).
+Guidelines on how to install ONL on top of an ONIE compatible device can be found directly on the [ONL website](https://opennetlinux.org/docs/deploy).
-This specific version of ONL has been already customized to accept an IP address through DHCP on the management interface, *ma0*. If you'd like to use a static IP, give it first an IP through DHCP, login and change the configuration in */etc/network/interfaces*.
+This specific version of ONL has been customized to accept an IP address through DHCP on the management interface, *ma0*. If you'd like to use a static IP, first give
+it an IP address through DHCP, then log in and change the configuration in
+*/etc/network/interfaces*.
The default *username* and *password* are *root* / *onl*.
-## OFDPA drivers
+## OFDPA Drivers
-Once ONL is installed OFDPA drivers will need to be installed as well.
-Each switch model requires a specific version of OFDPA. All driver packages are distributed as DEB packages. This makes the installation process very easy.
+Once ONL is installed, OFDPA drivers will need to be installed as well.
+Each switch model requires a specific version of OFDPA. All driver packages are distributed as DEB packages, which makes the installation process straightforward.
First, copy the package to the switch. For example
@@ -33,13 +36,38 @@
scp your-ofdpa.deb root@fabric-switch-ip:
```
-Then, install the deb package
+Then, install the DEB package
```shell
dpkg -i your-ofdpa.deb
```
-## OFDPA drivers download
+Three OFDPA drivers are available:
* [EdgeCore 5712-54X / 5812-54X / 6712-32X](https://github.com/onfsdn/atrium-docs/blob/master/16A/ONOS/builds/ofdpa_3.0.5.5%2Baccton1.7-1_amd64.deb?raw=true) - *checksum: sha256:db228b6e79fb15f77497b59689235606b60abc157e72fc3356071bcc8dc4c01f*
* [QuantaMesh T3048-LY8](https://github.com/onfsdn/atrium-docs/blob/master/16A/ONOS/builds/ofdpa-ly8_0.3.0.5.0-EA5-qct-01.01_amd64.deb?raw=true) - *checksum: sha256:f8201530b1452145c1a0956ea1d3c0402c3568d090553d0d7b3c91a79137da9e*
+* [QuantaMesh BMS T7032-IX1/IX1B](https://github.com/onfsdn/atrium-docs/blob/master/16A/ONOS/builds/ofdpa-ix1_0.3.0.5.0-EA5-qct-01.00_amd64.deb?raw=true) - *checksum: sha256:278b8ffed8a8fc705a1b60d16f8e70377e78342a27a11568a1d80b1efd706a46*
+
+## Connect the Fabric Switches to ONOS
+
+If the switches are not already connected, ssh to each switch and configure */etc/ofagent/ofagent.conf* by uncommenting and editing the following line:
+
+```shell
+OPT_ARGS="-d 2 -c 2 -c 4 -t K8S_NODE_IP:31653 -i $DPID"
+```
+
+Then start ofagent by running
+
+```shell
+service ofagentd start
+```
+
+You can verify ONOS has recognized the devices using the following command:
+
+> NOTE: When prompted, use password `rocks`.
+
+```shell
+ssh -p 31101 onos@K8S_NODE_IP devices
+```
+
+> NOTE: It may take a few seconds for the switches to initialize and connect to ONOS.
diff --git a/linux.md b/linux.md
new file mode 100644
index 0000000..6e22f34
--- /dev/null
+++ b/linux.md
@@ -0,0 +1,224 @@
+# Quick Start: Linux
+
+This section walks you through an example installation sequence on
+Linux, assuming a fresh install of Ubuntu 16.04.
+
+## Prerequisites
+
+You need to first install Docker and Python:
+
+```shell
+sudo apt update
+sudo apt-get install python
+sudo apt-get install python-pip
+pip install requests
+sudo apt install -y docker.io
+sudo systemctl start docker
+sudo systemctl enable docker
+```
+
+Now, verify the Docker version:
+
+```shell
+docker --version
+```
+
+## Minikube & Kubectl
+
+Install `minikube` and `kubectl`:
+
+```shell
+curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
+chmod +x minikube
+sudo mv minikube /usr/local/bin/
+curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
+chmod +x ./kubectl
+sudo mv ./kubectl /usr/local/bin/kubectl
+```
+
+Issue the following commands:
+
+```shell
+export MINIKUBE_WANTUPDATENOTIFICATION=false
+export MINIKUBE_WANTREPORTERRORPROMPT=false
+export MINIKUBE_HOME=$HOME
+export CHANGE_MINIKUBE_NONE_USER=true
+mkdir -p $HOME/.kube
+touch $HOME/.kube/config
+
+export KUBECONFIG=$HOME/.kube/config
+```
+
+Navigate to the `/usr/local/bin/` directory and issue the following
+command. Make sure there are no errors afterwards:
+
+```shell
+sudo -E ./minikube start --vm-driver=none
+```
+
+You can run
+
+```shell
+kubectl cluster-info
+```
+
+to verify that your Minikube cluster is up and running.
+
+## Export the KUBECONFIG File
+
+Locate the `KUBECONFIG` file:
+
+```shell
+sudo updatedb
+locate kubeconfig
+```
+
+Export a `KUBECONFIG` variable containing the path to the
+configuration file found above. For example, if your `kubeconfig`
+file is located at `/var/lib/localkube/kubeconfig`,
+the command you issue would look like this:
+
+```shell
+export KUBECONFIG=/var/lib/localkube/kubeconfig
+```
+
+## Download CORD
+
+There are two general ways you might download CORD. The following
+walks through both, but you need to follow only one. (For simplicity, we
+recommend the first.)
+
+The first simply clones the CORD `helm-chart` repository using `git`.
+This is sufficient for downloading just the Helm charts you will need
+to deploy the set of containers that comprise CORD. These containers
+will be pulled down from DockerHub.
+
+The second uses the `repo` tool to download all the source code that
+makes up CORD, including the Helm charts needed to deploy the CORD
+containers. You might find this useful if you want to look at the
+internals of CORD more closely.
+
+In either case, following these instructions will result in a
+directory `~/cord/helm-charts`, which will be where you go next to
+continue the installation process.
+
+### Download: `git clone`
+
+Create a CORD directory and run the following `git` command in it:
+
+```shell
+mkdir ~/cord
+cd ~/cord
+git clone https://gerrit.opencord.org/helm-charts
+cd helm-charts
+```
+
+### Download: `repo`
+
+Make sure you have a `bin/` directory in your home directory and
+that it is included in your path:
+
+```shell
+mkdir ~/bin
+PATH=~/bin:$PATH
+```
+
+Download the Repo tool and ensure that it is executable:
+
+```shell
+curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
+chmod a+x ~/bin/repo
+```
+
+Make a `~/cord` directory and navigate into it:
+
+```shell
+mkdir ~/cord
+cd ~/cord
+```
+
+Configure `git` with your real name and email address:
+
+```shell
+git config --global user.name "Your Name"
+git config --global user.email "you@example.com"
+```
+
+Initialize `repo` and download the CORD source tree to your working
+directory:
+
+```shell
+repo init -u https://gerrit.opencord.org/manifest -b master
+repo sync
+```
+
+## Helm
+
+Run the Helm installer script that will automatically grab the latest
+version of the Helm client and install it locally:
+
+```shell
+curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
+chmod 700 get_helm.sh
+./get_helm.sh
+```
+
+## Tiller
+
+Issue the following:
+
+```shell
+sudo helm init
+sudo kubectl create serviceaccount --namespace kube-system tiller
+sudo kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
+sudo kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
+sudo helm init --service-account tiller --upgrade
+```
+
+Install `socat` to fix a port-forwarding error:
+
+```shell
+sudo apt-get install socat
+```
+
+Issue the following and make sure no errors come up:
+
+```shell
+helm ls
+```
+
+## Deploy CORD Helm Charts
+
+Deploy the service profiles corresponding to the `xos-core`,
+`base-kubernetes`, and `demo-simpleexampleservice` helm-charts:
+
+```shell
+cd ~/cord/helm-charts
+helm init
+sudo helm dep update xos-core
+sudo helm install xos-core -n xos-core
+sudo helm dep update xos-profiles/base-kubernetes
+sudo helm install xos-profiles/base-kubernetes -n base-kubernetes
+sudo helm dep update xos-profiles/demo-simpleexampleservice
+sudo helm install xos-profiles/demo-simpleexampleservice -n demo-simpleexampleservice
+```
+
+Use `kubectl get pods` to verify that all containers in the profile
+are successful and none are in the error state.
+
+> **Note:** It will take some time for the various helm charts to
+> deploy and the containers to come online. The `tosca-loader`
+> container may error and retry several times as it waits for
+> services to be dynamically loaded. This is normal, and eventually
+> the `tosca-loader` containers will enter the completed state.
+
+## Next Steps
+
+This completes our example walk-through. At this point, you can do one
+of the following:
+
+* Explore other [installation options](README.md).
+* Take a tour of the [operational interfaces](operating_cord/general.md).
+* Drill down on the internals of [SimpleExampleService](simpleexampleservice/simple-example-service.md).
diff --git a/macos.md b/macos.md
new file mode 100644
index 0000000..59e4b6d
--- /dev/null
+++ b/macos.md
@@ -0,0 +1,148 @@
+# Quick Start: MacOS
+
+This section walks you through an example installation sequence on
+MacOS. It was tested on version 10.12.6.
+
+## Prerequisites
+
+You need to install Docker. Visit `https://docs.docker.com/docker-for-mac/install/` for instructions.
+
+You also need to install VirtualBox. Visit `https://www.virtualbox.org/wiki/Downloads` for instructions.
+
+The following assumes you've installed the Homebrew package manager. Visit
+`https://brew.sh/` for instructions.
+
+## Install Minikube and Kubectl
+
+To install Minikube, run the following command:
+
+```shell
+curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.28.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
+```
+To install Kubectl, run the following command:
+
+```shell
+brew install kubectl
+```
+
+## Install Helm and Tiller
+
+The following installs both Helm and Tiller.
+
+```shell
+brew install kubernetes-helm
+```
+
+## Bring Up a Kubernetes Cluster
+
+Start a minikube cluster as follows. This automatically runs inside VirtualBox.
+
+```shell
+minikube start
+```
+
+To see that it's running, type
+
+```shell
+kubectl cluster-info
+```
+
+You should see something like the following:
+
+```shell
+Kubernetes master is running at https://192.168.99.100:8443
+KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
+
+To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
+```
+
+You can also see how the cluster is configured by looking at `~/.kube/config`.
+Other tools described on this page use this configuration file to find your cluster.
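+
+For example, `kubectl config view` prints this merged configuration
+(with certificate data redacted):
+
+```shell
+kubectl config view
+```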
+
+If you want, you can see minikube running by looking at the VirtualBox dashboard.
+Alternatively, you can visit the Minikube dashboard:
+
+```shell
+minikube dashboard
+```
+
+As a final step, you need to start Tiller on the Kubernetes cluster.
+
+```shell
+helm init
+```
+
+## Download CORD Helm-Charts
+
+You don't need to download all of CORD. You just need to download a set of helm charts. They will, in turn, download a collection of CORD containers from Docker
+Hub. The rest of this section assumes all CORD-related downloads are placed in
+directory `~/cord`.
+
+```shell
+mkdir ~/cord
+cd ~/cord
+git clone https://gerrit.opencord.org/helm-charts
+cd helm-charts
+```
+
+## Bring Up CORD
+
+Deploy the service profiles corresponding to the `xos-core`,
+`base-kubernetes`, and `demo-simpleexampleservice` helm-charts.
+To do this, execute the following from the `~/cord/helm-charts` directory.
+
+```shell
+helm dep update xos-core
+helm install xos-core -n xos-core
+helm dep update xos-profiles/base-kubernetes
+helm install xos-profiles/base-kubernetes -n base-kubernetes
+helm dep update xos-profiles/demo-simpleexampleservice
+helm install xos-profiles/demo-simpleexampleservice -n demo-simpleexampleservice
+```
+
+Use `kubectl get pods` to verify that all containers in the profile
+are successful and none are in the error state.
+
+> **Note:** It will take some time for the various helm charts to
+> deploy and the containers to come online. The `tosca-loader`
+> container may error and retry several times as it waits for
+> services to be dynamically loaded. This is normal, and eventually
+> the `tosca-loader` will enter the completed state.
+
+When all the containers are successfully up and running, `kubectl get pod`
+will return output that looks something like this:
+
+```shell
+NAME READY STATUS RESTARTS AGE
+base-kubernetes-kubernetes-55c55bd897-rn9ln 1/1 Running 0 2m
+base-kubernetes-tosca-loader-vs6pv 1/1 Running 1 2m
+demo-simpleexampleservice-787454b84b-ckpn2 1/1 Running 0 1m
+demo-simpleexampleservice-tosca-loader-4q7zg 1/1 Running 0 1m
+xos-chameleon-6f49b67f68-pdf6n 1/1 Running 0 2m
+xos-core-57fd788db-8b97d 1/1 Running 0 2m
+xos-db-f9ddc6589-rtrml 1/1 Running 0 2m
+xos-gui-7fcfcd4474-prhfb 1/1 Running 0 2m
+xos-redis-74c5cdc969-ppd7z 1/1 Running 0 2m
+xos-tosca-7c665f97b6-krp5k 1/1 Running 0 2m
+xos-ws-55d676c696-pxsqk 1/1 Running 0 2m
+```
+
+## Visit CORD Dashboard
+
+Finally, to view the CORD dashboard, run the following:
+
+```shell
+minikube service xos-gui
+```
+
+This will launch a window in your default browser. Administrator login
+and password are defined in `~/cord/helm-charts/xos-core/values.yaml`.
+
+## Next Steps
+
+This completes our example walk-through. At this point, you can do one
+of the following:
+
+* Explore other [installation options](README.md).
+* Take a tour of the [operational interfaces](operating_cord/general.md).
+* Drill down on the internals of [SimpleExampleService](simpleexampleservice/simple-example-service.md).
diff --git a/mdl_relaxed.rb b/mdl_relaxed.rb
index 92e3fe8..bebc671 100644
--- a/mdl_relaxed.rb
+++ b/mdl_relaxed.rb
@@ -48,3 +48,6 @@
# Exclude rule: Emphasis used instead of a header
exclude_rule 'MD036'
+
+# Gitbook won't care about multiple blank lines
+exclude_rule 'MD012'
diff --git a/navigate.md b/navigate.md
new file mode 100644
index 0000000..0a6db80
--- /dev/null
+++ b/navigate.md
@@ -0,0 +1,71 @@
+# Navigating CORD
+
+Understanding the relationship between installing, operating, and developing
+CORD—and the corresponding toolsets and specification files used by
+each stage—is helpful in navigating CORD.
+
+* **Installation (Helm):** Installing CORD means installing a collection
+ of Docker containers in a Kubernetes cluster. We use Helm to carry out
+ the installation, with the valid configurations defined by a set of
+ `helm-charts`. These charts specify the version of each container to be
+ deployed, and so they also play a role in upgrading a running system.
+ More information about `helm-charts` can be found [here](charts/helm.md).
+
+* **Operations (TOSCA):** A running CORD POD supports multiple Northbound
+ Interfaces (e.g., a GUI and REST API), but we typically use `TOSCA` to specify
+ a workflow for configuring and provisioning a running system. A freshly
+ installed CORD POD has a set of control plane and platform level containers
+ running (e.g., XOS, ONOS, OpenStack), but until provisioned using `TOSCA`,
+ there are no services and no service graph. More information about `TOSCA`
+ can be found [here](xos-tosca/README.md).
+
+* **Development (XOS):** The services running in an operational system
+ are typically deployed as Docker containers, paired with a model that
+ specifies how the service is to be on-boarded into CORD. This model is
+ written in the `xproto` modeling language, and processed by the XOS
+ tool-chain. Among other things, this tool-chain generates the
+ TOSCA-engine that is used to process the configuration and provisioning
+ workflows used to operate CORD. More information about `xproto` (and
+ other details about on-boarding a service) can be found
+ [here](xos/dev/xproto.md).
+
+These tools and containers are inter-related as follows:
+
+* An initial install brings up a set of XOS-related containers (e.g., `xos-core`,
+ `xos-gui`, `xos-tosca`) that have been configured with a base set of models.
+ Of these, the `xos-tosca` container implements the TOSCA engine, which
+ takes TOSCA workflows as input and configures/provisions CORD accordingly.
+
+* While the install and operate stages are distinct, for convenience,
+ some helm-charts elect to launch a `tosca-loader` container
+ (in Kubernetes parlance, it's a *job* and not a *service*) to load an initial
+ TOSCA workflow into a newly deployed set of services. This is how a
+ service graph is typically instantiated.
+
+* While the CORD control plane is deployed as a set of Docker
+ containers, not all of the services themselves run in containers.
+ Some services run in VMs managed by OpenStack (this is currently
+ the case for M-CORD) and some services are implemented as ONOS
+ applications that have been packaged using Maven. In such cases,
+ the VM image and the Maven package are still specified in the TOSCA
+ workflow.
+
+* Every service (whether implemented in Docker, OpenStack, or ONOS)
+ has a counter-part *synchronizer* container running as part of the CORD
+ control plane (e.g., `volt-synchronizer` for the vOLT service). Typically,
+ the helm-chart for a service launches this synchronizer container, whereas
+ the TOSCA workflow creates, provisions, and initializes the backend container,
+ VM, or ONOS app.
+
+* Bringing up additional services in a running POD involves executing
+ helm-charts to install the new service's synchronizer container, which
+ in turn loads the corresponding new models into XOS. This load then
+ triggers an upgrade and restart of the TOSCA engine (and other NBIs),
+ which is a pre-requisite for configuring and provisioning that new service.
+
+* Upgrading an existing service is similar to bringing up a new service,
+ where we depend on Kubernetes to incrementally roll out the containers
+ that implement the service (and roll back if necessary), and we depend
+ on XOS to migrate from the old model to the new model (and support
+ both old and new APIs during the transition period). Upgrading existing
+ services has not been thoroughly tested.
diff --git a/operating_cord/general.md b/operating_cord/general.md
index c5ae0d3..49a06ce 100644
--- a/operating_cord/general.md
+++ b/operating_cord/general.md
@@ -1,13 +1,15 @@
# General Info
-CORD's operations and management interface is primarily defined by
-its Northbound API. There is typically more than one variant of this
-interface, and they are auto-generated from the models loaded into
+CORD's operations and management interface is primarily defined by
+its Northbound API. There is typically more than one variant of this
+interface, and they are auto-generated from the models loaded into
CORD, as described [elsewhere](../xos/README.md). Most notably:
-* A RESTful version of this API is documented [here](rest_apis.md).
+* A graphical interface is documented [here](gui.md).
-* A TOSCA version is typically used to configure and provision a
- POD. Later sections of this guide give examples of TOSCA workflows
+* A RESTful version of this API is documented [here](rest_apis.md).
+
+* A TOSCA version is typically used to configure and provision a
+ POD. Later sections of this guide give examples of TOSCA workflows
used to provision and configure various [profiles](profiles.md)
and [services](services.md).
diff --git a/operating_cord/gui.md b/operating_cord/gui.md
new file mode 100644
index 0000000..6a5db25
--- /dev/null
+++ b/operating_cord/gui.md
@@ -0,0 +1,37 @@
+# GUI
+
+The GUI is useful for development and demos. At the moment it is not
+designed to support the scale of data one might expect in a production
+deployment.
+
+## How to Access the GUI
+
+Once you have CORD up and running, you can find the port on which the
+GUI is available by running:
+
+```shell
+kubectl get service xos-gui
+
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+xos-gui NodePort 10.102.239.199 <none> 4000:30001/TCP 2h
+```
+
+By default, the GUI can be accessed on port `30001`.
+
+To connect to the GUI you can just open a browser at `<cluster-ip>:<gui-port>`,
+where `cluster-ip` is the IP of any node in your Kubernetes cluster.
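+
+For example, a quick way to find a node IP is to list the nodes along
+with their addresses:
+
+```shell
+kubectl get nodes -o wide
+```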
+
+The *username* and *password* for the GUI are defined in
+the [`xos-core`](../charts/xos-core.md) helm chart.
+
+## Opening the GUI in minikube
+
+The above works the same way when running on `minikube`, but
+this helper is also available:
+
+```shell
+minikube service xos-gui
+```
+
+This command opens the GUI in your default browser.
diff --git a/overview.md b/overview.md
index 8119167..c6a67da 100644
--- a/overview.md
+++ b/overview.md
@@ -9,7 +9,22 @@
and design notes that have shaped [CORD's
architecture](https://wiki.opencord.org/display/CORD/Documentation).
-## Making Changes to Documentation
+## Navigating the Guide
+
+The guide is organized around the major stages in the lifecycle of CORD:
+
+* [Installation](README.md): Installing (and later upgrading) CORD.
+* [Operations](operating_cord/operating_cord.md): Operating an already
+ installed CORD deployment.
+* [Development](developer/developer.md): Developing new functionality
+ to be included in CORD.
+* [Testing](cord-tester/README.md): Testing functionality to be
+ included in CORD.
+
+These are all fairly obvious. What's less obvious is the relationship among
+these stages, which is spelled out in [Navigating CORD](navigate.md).
+
+## Making Changes to the Guide
The [http://guide.opencord.org](http://guide.opencord.org) website is built
using the [GitBook Toolchain](https://toolchain.gitbook.com/), with the
diff --git a/partials/helm/description.md b/partials/helm/description.md
index 383710a..a2f8538 100644
--- a/partials/helm/description.md
+++ b/partials/helm/description.md
@@ -1,5 +1,3 @@
Helm is the package manager for Kubernetes. It lets you define, install,
-and upgrade Kubernetes base application.
-
-For more informations about helm,
-please the visit the official website: <https://helm.sh>
\ No newline at end of file
+and upgrade Kubernetes-based applications. For more information about Helm,
+please visit the official website: <https://helm.sh>.
diff --git a/partials/push-images-to-registry.md b/partials/push-images-to-registry.md
index 52a4612..cac2de6 100644
--- a/partials/push-images-to-registry.md
+++ b/partials/push-images-to-registry.md
@@ -4,26 +4,31 @@
be first tagged, and pushed to the local registry:
Supposing your docker-registry address is:
+
```shell
192.168.0.1:30500
```
and that your original image name is called:
+
```shell
xosproject/vsg-synchronizer
```
you'll need to tag the image as
+
```shell
192.168.0.1:30500/xosproject/vsg-synchronizer
```
For example, you can use the *docker tag* command to do this:
+
```shell
docker tag xosproject/vsg-synchronizer:candidate 192.168.0.1:30500/xosproject/vsg-synchronizer:candidate
```
Now, you can push the image to the registry. For example, with *docker push*:
+
```shell
docker push 192.168.0.1:30500/xosproject/vsg-synchronizer:candidate
```
diff --git a/prereqs/README.md b/prereqs/README.md
new file mode 100644
index 0000000..a3fc2f4
--- /dev/null
+++ b/prereqs/README.md
@@ -0,0 +1,12 @@
+# Prerequisites
+
+The latest release of CORD decouples setting up the deployment environment from
+installing CORD. This means more prerequisites must be satisfied (as enumerated
+in this section), but doing so provides more latitude in how you prep a POD to best
+match your local environment.
+
+There are three categories of requirements that must be met before installing CORD:
+
+* [Hardware Requirements](hardware.md)
+* [Connectivity Requirements](networking.md)
+* [Software Requirements](software.md)
diff --git a/prereqs/docker-registry.md b/prereqs/docker-registry.md
index 2b7c74d..a94587e 100644
--- a/prereqs/docker-registry.md
+++ b/prereqs/docker-registry.md
@@ -1,4 +1,4 @@
-# Docker Registry (optional)
+# Docker Registry (Optional)
The section describes how to install an **insecure** *docker registry* in Kubernetes, using the standard Kubernetes helm charts.
diff --git a/prereqs/helm.md b/prereqs/helm.md
index 9c8ded4..b3a6cd0 100644
--- a/prereqs/helm.md
+++ b/prereqs/helm.md
@@ -32,14 +32,14 @@
helm init
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
-kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
+kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade
```
Once *helm* and *tiller* are installed you should be able to run the
command *helm ls* without errors.
-## Done?
+## Next Step
Once you are done, you are ready to deploy CORD components using their
helm charts! See [Bringing Up CORD](../profiles/intro.md). For more detailed
diff --git a/prereqs/k8s-multi-node.md b/prereqs/k8s-multi-node.md
index febe128..7b70562 100644
--- a/prereqs/k8s-multi-node.md
+++ b/prereqs/k8s-multi-node.md
@@ -25,11 +25,12 @@
* Run Ubuntu 16.04 server
* Able to communicate together (ping one each other)
* Have the same user *cord* configured, that you can use to remotely access them from the operator machine
- * The user *cord* is sudoer on each machine, and it doesn't need a password to get sudoer privileges (see to authorize a password-less access from the development/management machine in the sections below)
+ * A user (e.g., *cord*) is a sudoer on each machine, and does not need a password to get sudoer privileges (see the sections below on authorizing password-less access from the development/management machine)
## Download the Kubespray Installation Scripts
On the operator machine
+
```shell
git clone https://gerrit.opencord.org/automation-tools
```
@@ -40,7 +41,7 @@
The main script (*setup.sh*) provides a help message with
instructions. To see it, run *./setup.sh --help*.
-The two main functions are:
+The main functions are:
* Install Kubespray on an arbitrary number of target machines
* Export the k8s configuration file path as environment variable to
@@ -51,7 +52,8 @@
Before starting the installation make sure that
* The development/management machine has password-less access to the target machine(s), meaning the public key of the development/management machine has been copied in the authorization_keys files on the target machines. If you don't know how to do a script called *copy-ssh-keys.sh* is provided. To copy your public key to a target machine run *./copy-ssh-keys.sh TARGET_MACHINE_IP*. Repeat this procedure for each target machine.
-* All target machines don't mount any swap partition. It's easy as simply installing Ubuntu without a swap partition or -once the OS is already installed- commenting out the corresponding line in */etc/fstab* and reboot.
+* No target machine mounts a swap partition. The setup script should disable swap automatically, but this doesn't always work as it should. Doing it manually is as easy as installing Ubuntu without a swap partition or, once the OS is already installed, commenting out the corresponding line in */etc/fstab* and rebooting.
+* By default the installation script assumes that the user on all the target machines is *cord*. If this is not the case, export an environment variable: *export REMOTE_SSH_USER='my-remote-user'*.
## Install Kubespray
@@ -73,13 +75,14 @@
* Downloads and exports the access configuration outside the Kubespray folder, so it won’t be removed at the next execution of the script (for example while trying to re-deploy the POD, or while deploying a different POD)
To run the installation script, type
+
```shell
./setup.sh -i onf 10.90.0.101 10.90.0.102 10.90.0.103
```
> **NOTE:** at the beginning of the installation you will be asked to insert your
password multiple times.
-> **NOTE:** the official Kubespray instalation procedure -run by the script- will automatically change the hostname of the target machine(s) with nodeX (where X is an incremental number starting from 1).
+> **NOTE:** the official Kubespray installation script will automatically change the hostname of the target machine(s) to nodeX (where X is an incremental number starting from 1).
At the end of the procedure, Kubespray should be installed and running
on the remote machines.
@@ -89,6 +92,7 @@
If you want to deploy another POD without affecting your existing
deployment run the following:
+
```shell
./setup.sh -i my_other_deployment 192.168.0.1 192.168.0.2 192.168.0.3
```
@@ -113,9 +117,9 @@
At this point, you can start to use *kubectl* and *helm*.
-## Done?
+## Next Step
-Once you are done, you are ready to install Kubctl and Helm, so return to
+Once you are done, you are ready to install Kubectl and Helm, so return to
[here](kubernetes.md#get-your-kubeconfig-file) in the installation
guide.
diff --git a/prereqs/k8s-single-node.md b/prereqs/k8s-single-node.md
index ad08a06..9e3877c 100644
--- a/prereqs/k8s-single-node.md
+++ b/prereqs/k8s-single-node.md
@@ -15,7 +15,7 @@
* Documentation: <https://microk8s.io/>
* One machine, Linux based, either physical machine or virtual. It could also be your own PC!
-## Done?
+## Next Step
Once you are done, you are ready to install Kubctl and Helm, so return to
[here](kubernetes.md#get-your-kubeconfig-file) in the installation guide.
diff --git a/prereqs/kubernetes.md b/prereqs/kubernetes.md
index 7a3e3d0..996b60a 100644
--- a/prereqs/kubernetes.md
+++ b/prereqs/kubernetes.md
@@ -1,9 +1,22 @@
# Kubernetes
-CORD runs on any version of Kubernetes (1.9 or greater), and uses the
+CORD runs on any version of Kubernetes (1.10 or greater), and uses the
Helm client-side tool. If you are new to Kubernetes, we recommend
<https://kubernetes.io/docs/tutorials/> as a good place to start.
+Note: We are using a feature in Kubernetes 1.10 to allow local persistence of data.
+This is a beta feature in Kubernetes 1.10.x as of this writing and should be enabled by default.
+However, if it is not, you will need to enable it as a feature gate when
+launching Kubernetes, with the following feature gate settings:
+
+```shell
+PersistentLocalVolumes=true
+VolumeScheduling=true
+MountPropagation=true
+```
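+
+For example (a sketch, assuming a minikube-based cluster), these gates
+can be passed at startup:
+
+```shell
+minikube start --feature-gates=PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true
+```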
+
+More information about feature gates can be found [here](https://github.com/kubernetes-incubator/external-storage/tree/local-volume-provisioner-v2.0.0/local-volume#enabling-the-alpha-feature-gates).
+
Although you are free to set up Kubernetes and Helm in whatever way makes
sense for your deployment, the following provides guidelines, pointers, and
automated scripts that might be helpful.
@@ -51,8 +64,3 @@
If you've just installed Kubernetes, likely you won't see any pod, yet.
That's fine, as long as you don't see errors.
-## Install Helm
-
-CORD uses a tool called Helm to deploy containers on Kubernetes.
-As such, Helm needs to be installed before being able to deploy CORD containers.
-More info on Helm and how to install it can be found [here](helm.md).
diff --git a/prereqs/openstack-helm.md b/prereqs/openstack-helm.md
index 1d9f651..d60f9a8 100644
--- a/prereqs/openstack-helm.md
+++ b/prereqs/openstack-helm.md
@@ -1,4 +1,4 @@
-# OpenStack (optional)
+# OpenStack (Optional)
The [openstack-helm](https://github.com/openstack/openstack-helm)
project can be used to install a set of Kubernetes nodes as OpenStack
@@ -148,6 +148,6 @@
* Install software like Kubernetes and Helm
* Build the Helm charts and install them in a local Helm repository
* Install requried packages
-* Configure DNS on the nodes
+* Configure DNS on the nodes (_NOTE: The `openstack-helm` install overwrites `/etc/resolv.conf` on the compute hosts and points the upstream nameservers to Google DNS. If a local upstream is required, [see this note](https://docs.openstack.org/openstack-helm/latest/install/developer/kubernetes-and-common-setup.html#clone-the-openstack-helm-repos)_.)
* Generate `values.yaml` files based on the environment and install Helm charts using these files
* Run post-install tests on the OpenStack services
diff --git a/prereqs/optional.md b/prereqs/optional.md
new file mode 100644
index 0000000..370041e
--- /dev/null
+++ b/prereqs/optional.md
@@ -0,0 +1,14 @@
+# Optional Packages
+
+Although not required, you may want to install one or both of the following
+packages:
+
+* **Local Registry:** If your environment does not permit connecting your
+ POD to the public Internet, you may want to take advantage of a local Docker
+ registry. The following [registry setup](docker-registry.md) will help.
+ (Having a local registry is also useful when doing local development, as outlined
+ in the [Developer Guide](../developer/workflows.md).)
+
+* **OpenStack:** If you need to include OpenStack in your deployment,
+ so you can bring up VMs on your POD, you will need to follow the
+ [OpenStack deployment](openstack-helm.md) guide.
diff --git a/prereqs/software.md b/prereqs/software.md
index d8c00fe..03b0a07 100644
--- a/prereqs/software.md
+++ b/prereqs/software.md
@@ -7,8 +7,8 @@
> **Note:** M-CORD is the exception since its components still depend on
> OpenStack, which is in turn deployed as a set of Kubernetes containers
->using the [openstack-helm](https://github.com/openstack/openstack-helm)
->project. Successfully installing the OpenStack Helm charts requires
->some additional system configuration besides just installing Kubernetes
->and Helm. You can find more informations about this in the
->[OpenStack Support](./openstack-helm.md) installation section.
+> using the [openstack-helm](https://github.com/openstack/openstack-helm)
+> project. Successfully installing the OpenStack Helm charts requires
+> some additional system configuration besides just installing Kubernetes
+> and Helm. You can find more information about this in the
+> [OpenStack Support](./openstack-helm.md) installation section.
diff --git a/prereqs/vtn-setup.md b/prereqs/vtn-setup.md
index e5ff99b..1d41e2e 100644
--- a/prereqs/vtn-setup.md
+++ b/prereqs/vtn-setup.md
@@ -2,7 +2,12 @@
The ONOS VTN app provides virtual networking between VMs on an OpenStack cluster. Prior to installing the [base-openstack](../charts/base-openstack.md) chart that installs and configures VTN, make sure that the following requirements are satisfied.
-First, VTN requires the ability to SSH to each compute node _using an account with passwordless `sudo` capability_. Before installing this chart, first create an SSH keypair and copy it to the `authorized_keys` files of all nodes in the cluster:
+## SSH access to hosts
+
+VTN requires the ability to SSH to each compute node _using an account with
+passwordless `sudo` capability_. Before installing this chart, first create
+an SSH keypair and copy it to the `authorized_keys` files of all nodes in the
+cluster:
Generate a keypair:
@@ -22,7 +27,38 @@
cp ~/.ssh/id_rsa xos-profiles/base-openstack/files/node_key
```
-Second, the VTN app requires a fabric interface on the compute nodes. VTN will not successfully initialize if this interface is not present. By default the name of this interface is expected to be named `fabric`. If there is not an actual fabric interface on the compute node, create a dummy interface as follows:
+## Fabric interface
+
+The VTN app requires a fabric interface on the compute nodes. VTN will not
+successfully initialize if this interface is not present. By default the name
+of this interface is expected to be `fabric`.
+
+### Interface not named 'fabric'
+
+If you have a fabric interface on the compute node but it is not named
+`fabric`, create a bridge named `fabric` and add the interface to it.
+Assuming the fabric interface is named `eth2`:
+
+```shell
+sudo brctl addbr fabric       # create a bridge named fabric
+sudo brctl addif fabric eth2  # add the physical interface to the bridge
+sudo ifconfig fabric up       # bring up the bridge
+sudo ifconfig eth2 up         # bring up the physical interface
+```
+
+To make this configuration persistent, add the following to
+`/etc/network/interfaces`:
+
+```text
+auto fabric
+iface fabric inet manual
+ bridge_ports eth2
+```
+
+### Dummy interface
+
+If there is no actual fabric interface on the compute node, create a
+dummy interface as follows:
```shell
sudo modprobe dummy
@@ -30,7 +66,9 @@
sudo ifconfig fabric up
```
-Finally, in order to be added to the VTN configuration, each compute node must
+## DNS setup
+
+In order to be added to the VTN configuration, each compute node must
be resolvable in DNS. If a server's hostname is not resolvable, it can be
added to the local `kube-dns` server (substitute _HOSTNAME_ with the output of
the `hostname` command, and _HOST-IP-ADDRESS_ with the node's primary IP
diff --git a/profiles/mcord/configuration.md b/profiles/mcord/configuration.md
index 9e55602..6bdf4af 100644
--- a/profiles/mcord/configuration.md
+++ b/profiles/mcord/configuration.md
@@ -1,6 +1,6 @@
-# M-CORD Configuration
+# M-CORD Configuration
Once all the components needed for M-CORD are up and running on your POD,
-you'll need to configure XOS with the proper configuration.
+you'll need to load the proper configuration into XOS.
Since this configuration is environment specific, you'll need to create your own,
but the following can serve as a reference for it:
diff --git a/profiles/mcord/install.md b/profiles/mcord/install.md
index 3fda193..4df44df 100644
--- a/profiles/mcord/install.md
+++ b/profiles/mcord/install.md
@@ -6,16 +6,38 @@
node, suitable for evaluation or testing. Requirements:
- An _Ubuntu 16.04.4 LTS_ server with at least 64GB of RAM and 32 virtual CPUs
+- Latest versions of released software installed on the server: `sudo apt update; sudo apt -y upgrade`
- User invoking the script has passwordless `sudo` capability
+- Open access to the Internet (not behind a proxy)
+- Google DNS servers (e.g., 8.8.8.8) are accessible
-```bash
-git clone https://gerrit.opencord.org/automation-tools
-automation-tools/mcord/mcord-in-a-box.sh
-```
+### Target server on CloudLab (optional)
+
+If you do not have a target server available that meets the above
+requirements, you can borrow one on [CloudLab](https://www.cloudlab.us). Sign
+up for an account using your organization's email address and choose "Join
+Existing Project"; for "Project Name" enter `cord-testdrive`.
+
+> NOTE: CloudLab is supporting CORD as a courtesy. It is expected that you will not use CloudLab resources for purposes other than evaluating CORD. If, after a week or two, you wish to continue using CloudLab to experiment with or develop CORD, then you must apply for your own separate CloudLab project.
+
+Once your account is approved, start an experiment using the
+`OnePC-Ubuntu16.04-HWE` profile on the Wisconsin cluster. This will provide
+you with a temporary target server meeting the above requirements.
+
+Refer to the [CloudLab documentation](http://docs.cloudlab.us/) for more information.
+
+### Convenience Script
The following convenience script takes about an hour to complete. If you run
it, you can skip directly to
[Validating the Installation](#validating-the-installation) below.
+```bash
+# Create a working directory and fetch the automation tools
+mkdir ~/cord
+cd ~/cord
+git clone https://gerrit.opencord.org/automation-tools
+# Run the M-CORD-in-a-box installer (takes about an hour)
+automation-tools/mcord/mcord-in-a-box.sh
+```
+
## Prerequisites
M-CORD requires OpenStack to run VNFs. The OpenStack installation
@@ -44,8 +66,16 @@
ssh -p 8101 onos@onos-cord-ssh.default.svc.cluster.local cordvtn-nodes
```
+> NOTE: If the `cordvtn-nodes` command is not present, or if it does not show any nodes,
+> the most common cause is an issue with resolving the server's hostname.
+> See [this section on adding a hostname to kube-dns](../../prereqs/vtn-setup.md#dns-setup)
+> for a fix; the command should be present shortly after the hostname is added.
+
You should see all nodes in `COMPLETE` state.
+> NOTE: If the node is in `INIT` state rather than `COMPLETE`, try running
+> `cordvtn-node-init <node>` and see if that resolves the issue.
+
Next, check that the VNF images are loaded into OpenStack (they are quite large
so this may take a while to complete):
@@ -122,3 +152,14 @@
| 4a5960b5-b5e4-4777-8fe4-f257c244f198 | mysite_vspgwc-3 | ACTIVE | management=172.27.0.7; spgw_network=117.0.0.8; s11_network=112.0.0.4 | image_spgwc_v0.1 | m1.large |
+--------------------------------------+-----------------+--------+----------------------------------------------------------------------------------------------------+------------------+-----------+
```
+
+Log in to the XOS GUI and verify that the service synchronizers have run. The
+GUI is available at URL `http://<master-node>:30001` with username
+`admin@opencord.org` and password `letmein`. Verify that the status of all
+ServiceInstance objects is `OK`.
+
+> NOTE: If you see a status message of `SSH Error: data could not be sent to
+> remote host`, the most common cause is the inability of the synchronizers to
+> resolve the server's hostname. See [this section on adding a hostname to
+> kube-dns](../../prereqs/vtn-setup.md#dns-setup) for a fix; the issue should
+> resolve itself after the hostname is added.
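+
+As a quick command-line spot-check, you can query the same objects through
+the REST API (a sketch that assumes the core models are exposed via the
+`xosapi` pattern shown elsewhere in this guide, on the default Chameleon
+port):
+
+```shell
+# Dump the backend status of all ServiceInstance objects
+curl -s -u admin@opencord.org:letmein \
+  http://<master-node>:30006/xosapi/v1/core/serviceinstances | grep backend_status
+```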
diff --git a/profiles/rcord/configuration.md b/profiles/rcord/configuration.md
index d0ff9af..7c865c8 100644
--- a/profiles/rcord/configuration.md
+++ b/profiles/rcord/configuration.md
@@ -1,9 +1,10 @@
-# R-CORD Configuration
+# R-CORD Configuration
-Once all the components needed for RCORD-Lite are up and running on your POD,
-you'll need to configure XOS with the proper configuration.
-Since this configuration is environment specific, you'll need to create your own,
-but the following can serve as a reference for it:
+Once all the components needed for the R-CORD profile are up and
+running on your POD, you will need to configure it. This is typically
+done using TOSCA. This configuration is environment specific, so
+you will need to create your own, but the following can serve as a
+reference:
```yaml
tosca_definitions_version: tosca_simple_yaml_1_0
@@ -37,6 +38,7 @@
type: tosca.nodes.SwitchPort
properties:
portId: 1
+ host_learning: false
requirements:
- switch:
node: switch#my_fabric_switch
@@ -119,35 +121,34 @@
relationship: tosca.relationships.BelongsToOne
```
-_For instructions on how to push TOSCA, please refer to this [guide](../../xos-tosca/README.md)_
+For instructions on how to push TOSCA into a CORD POD, please
+refer to this [guide](../../xos-tosca/README.md).
-Once the POD has been configured, you can create a subscriber,
-please refer to the [RCORD Service](../../rcord/README.md) guide for
-more informations.
+## Top-Down Subscriber Provisioning
-### Create a subscriber in RCORD
+Once the POD has been configured, you can create a subscriber. This
+section describes a "top-down" approach for doing that. (The following
+section describes an alternative, "bottom-up" approach.)
-To create a subscriber in CORD you need to retrieve some informations:
+To create a subscriber, you need to retrieve some information:
- ONU Serial Number
-- UNI Port ID
- Mac Address
- IP Address
-We'll focus on the first two as the others are pretty self-explaining.
-
-**Find the ONU Serial Number**
+### Find ONU Serial Number
Once your POD is set up and the OLT has been pushed and activated in VOLTHA,
XOS will discover the ONUs available in the system.
-You can find them trough:
+You can find them through:
-- the XOS UI, on the left side click on `vOLT > ONUDevices`
-- the rest APIs `http://<pod-id>:<chameleon-port|30006>/xosapi/v1/volt/onudevices`
-- the VOLTHA [cli](../../charts/voltha.md#how-to-access-the-voltha-cli)
+- XOS GUI: on the left side click on `vOLT > ONUDevices`
+- XOS REST API: `http://<pod-id>:<chameleon-port|30006>/xosapi/v1/volt/onudevices`
+- VOLTHA CLI: [Command Line Interface](../../charts/voltha.md#how-to-access-the-voltha-cli)
-If you are connected to the VOLTHA CLI you can use the command:
+If you are connected to the VOLTHA CLI you can use the following
+command to list all the existing devices:
```shell
(voltha) devices
@@ -159,7 +160,8 @@
| 00015698e67dc060 | broadcom_onu | True | 0001941bd45e71d8 | ENABLED | ACTIVE | REACHABLE | 536870912 | | BRCM| 0001941bd45e71d8 | 1 | 1 |
+------------------+--------------+------+------------------+-------------+-------------+----------------+----------------+------------------+----------+-------------------------+----------------------+------------------------------+
```
-to list all the existing devices, and locate the correct ONU, then:
+
+Locate the correct ONU, then:
```shell
(voltha) device 00015698e67dc060
@@ -193,29 +195,13 @@
| flows.items | 5 item(s) |
+------------------------------+------------------+
```
+
to find the correct serial number.
-**Find the UNI Port Id**
+### Push a Subscriber into CORD
-From the VOLTHA CLI, in the device command prompt execute:
-
-```shell
-(device 00015698e67dc060) ports
-Device ports:
-+---------+----------+--------------+-------------+-------------+------------------+-----------------------------------------------------+
-| port_no | label | type | admin_state | oper_status | device_id | peers |
-+---------+----------+--------------+-------------+-------------+------------------+-----------------------------------------------------+
-| 100 | PON port | PON_ONU | ENABLED | ACTIVE | 00015698e67dc060 | [{'port_no': 16, 'device_id': u'0001941bd45e71d8'}] |
-| 16 | uni-16 | ETHERNET_UNI | ENABLED | ACTIVE | 00015698e67dc060 | |
-+---------+----------+--------------+-------------+-------------+------------------+-----------------------------------------------------+
-```
-and locate the `ETHERNET_UNI` port.
-The `port_no` for that port is the value you are looking for.
-
-**Push a subscriber into CORD**
-
-Once you have the informations you need about your subscriber,
-you can create it by customizing this TOSCA:
+Once you have this information, you can create the subscriber by
+customizing the following TOSCA and passing it into the POD:
```yaml
tosca_definitions_version: tosca_simple_yaml_1_0
@@ -231,47 +217,51 @@
name: My House
c_tag: 111
onu_device: BRCM1234 # Serial Number of the ONU Device to which this subscriber is connected
- uni_port_id: 16 # UNI PORT ID in VOLTHA
mac_address: 00:AA:00:00:00:01 # subscriber mac address
ip_address: 10.8.2.1 # subscriber IP
```
-_For instructions on how to push TOSCA, please refer to this [guide](../../xos-tosca/README.md)_
+For instructions on how to push TOSCA into a CORD POD, please
+refer to this [guide](../../xos-tosca/README.md).
-### Zero-Touch Subscriber Provisioning
+## Zero-Touch Subscriber Provisioning
-This feature, also referred to as "bottom-up provisioning" enables auto-discovery
-of subscriber and their validation through an external OSS.
+This feature, also referred to as "bottom-up" provisioning,
+enables auto-discovery of subscribers and validates them
+using an external OSS.
-Here is the expected workflow:
+The expected workflow is as follows:
-- when an ONU is attached to the POD, VOLTHA will discover it and send an event to XOS
-- XOS receive the ONU activated events and through an OSS-Service query the upstream OSS to validate wether that ONU has a valid serial number
-- once the OSS has approved the ONU, XOS will create `ServiceInstance` chain for this particular subscriber and configure the POD to give him connectivity
+- When an ONU is attached to the POD, VOLTHA will discover it and send
+ an event to XOS
+- XOS receives the ONU activation event and, through an OSS proxy,
+ queries the upstream OSS to validate whether that ONU has a valid serial number
+- Once the OSS has approved the ONU, XOS will create a `ServiceInstance`
+ chain for this particular subscriber and configure the POD to enable connectivity
-If you want to enable the "Zero touch provisioning" feature you'll need
-to deploy and configure some extra pieces in the system before attaching
+To enable the zero-touch provisioning feature, you will need to deploy
+and configure some extra pieces in the system before attaching
subscribers:
-**Kafka**
+### Deploy Kafka
-To enable this feature XOS needs to receive events from `onos-voltha`
+To enable this feature, XOS needs to receive events from `onos-voltha`,
so a Kafka bus needs to be deployed.
-To deploy it please follow [this instructions](../../charts/kafka.md)
+To deploy Kafka, please follow these [instructions](../../charts/kafka.md).
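+
+A minimal sketch of the deployment (the `incubator/kafka` chart and repo URL
+are assumptions here; the linked page has the authoritative values):
+
+```shell
+# Add the incubator chart repo and install Kafka as release cord-kafka
+helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
+helm install -n cord-kafka incubator/kafka
+```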
-**OSS Service**
+### Deploy OSS Proxy
-This is the piece of code that is responsible to enable the communication
-between CORD and you OSS Database.
-For reference we are providing a sample implemetation, available here:
+This is the piece of code responsible for connecting CORD to an
+external OSS Database. As a simple reference, we provide a sample
+implementation, available here:
[hippie-oss](https://github.com/opencord/hippie-oss)
> **Note:** This implementation currently validates any subscriber that comes online.
To deploy the `hippie-oss` service you can look [here](../../charts/hippie-oss.md).
-Once the chart has come online, you'll need to add it to your service graph,
-and you can use this TOSCA for that:
+Once the chart has come online, you will need to add the Hippie-OSS service
+to your service graph. You can use the following TOSCA to do that:
```yaml
tosca_definitions_version: tosca_simple_yaml_1_0
@@ -312,4 +302,5 @@
relationship: tosca.relationships.BelongsToOne
```
-_For instructions on how to push TOSCA, please refer to this [guide](../../xos-tosca/README.md)_
+For instructions on how to push TOSCA into a CORD POD, please
+refer to this [guide](../../xos-tosca/README.md).
diff --git a/profiles/rcord/emulate.md b/profiles/rcord/emulate.md
new file mode 100644
index 0000000..30efab0
--- /dev/null
+++ b/profiles/rcord/emulate.md
@@ -0,0 +1,17 @@
+# Emulated OLT/ONU
+
+Support for emulating the OLT/ONU using `ponsim` is still a
+work-in-progress, so it is not currently possible to bring up R-CORD
+without the necessary access hardware. In the meantime, it is possible
+to set up a development environment that includes just the R-CORD
+control plane. Doing so involves installing the following helm charts:
+
+- [xos-core](../../charts/xos-core.md)
+- [cord-kafka](../../charts/kafka.md)
+- [hippie-oss](../../charts/hippie-oss.md)
+
+in addition to `rcord-lite`. This would typically be done
+on a [single node platform](../../prereqs/k8s-single-node.md) in
+support of a developer workflow that [emulates subscriber
+provisioning](../../developer/configuration_rcord.md).
+
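+As a rough sketch, the install sequence might look like the following
+(the chart paths within the helm-charts repo are assumptions; see the
+linked chart pages for the authoritative commands):
+
+```shell
+helm install -n xos-core xos-core
+helm install -n cord-kafka incubator/kafka
+helm install -n hippie-oss xos-services/hippie-oss
+helm dep update xos-profiles/rcord-lite
+helm install -n rcord-lite xos-profiles/rcord-lite
+```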
diff --git a/profiles/rcord/install.md b/profiles/rcord/install.md
index 0231ae1..7f260bd 100644
--- a/profiles/rcord/install.md
+++ b/profiles/rcord/install.md
@@ -1,39 +1,60 @@
# R-CORD Profile
The latest version of R-CORD differs from versions included in earlier
-releases in that it does not include the vSG service. In the code this
-configuration is called RCORD-Lite, but since it is the only version
-of Residential CORD currently supported, we usually simply call it
-"R-CORD."
+releases in that it does not include the vSG service. In the code,
+this new configuration is called `rcord-lite`, but since it is the
+only version of Residential CORD currently supported, we simply
+call it the *R-CORD profile.*
## Prerequisites
-- A Kubernetes cluster (you can follow one of this guide to install a [single
- node cluster](../../prereqs/k8s-single-node.md) or a [multi node
- cluster](../../prereqs/k8s-multi-node.md))
-- Helm, follow [this guide](../../prereqs/helm.md)
+- Kubernetes: Follow one of these guides to install either a [single
+ node](../../prereqs/k8s-single-node.md) or a [multi
+ node](../../prereqs/k8s-multi-node.md) cluster.
+- Helm: Follow this [guide](../../prereqs/helm.md).
-## CORD Components
+## Install VOLTHA
-RCORD-Lite has dependencies on this charts, so they need to be installed first:
+When running on a physical POD with OLT/ONU hardware, the
+first step to bringing up R-CORD is to install the
+[VOLTHA helm chart](../../charts/voltha.md).
+
+## Install CORD Platform
+
+The R-CORD profile has dependencies on the following platform
+charts, so they need to be installed next:
- [xos-core](../../charts/xos-core.md)
- [onos-fabric](../../charts/onos.md#onos-fabric)
- [onos-voltha](../../charts/onos.md#onos-voltha)
-## Install the RCORD-Lite Helm Chart
+## Install R-CORD Profile
-```shell
+You are now ready to install the R-CORD profile:
+
+```shell
helm dep update xos-profiles/rcord-lite
helm install -n rcord-lite xos-profiles/rcord-lite
```
-Now that the your RCORD-Lite deployment is complete, please read this
-to understand how to configure it: [Configure RCORD-Lite](configuration.md)
+Optionally, if you want to use the "bottom-up" subscriber provisioning
+workflow described in the [Operations Guide](configuration.md), you
+will also need to install the following two charts:
-## How to Customize the RCORD-Lite Helm Chart
+- [cord-kafka](../../charts/kafka.md)
+- [hippie-oss](../../charts/hippie-oss.md)
-Define a `my-rcord-lite-values.yaml` that looks like:
+> **Note:** If you install both VOLTHA and the optional Kafka, you
+> will end up with two instantiations of Kafka: `kafka-voltha` and
+> `kafka-cord`.
+
+Once your R-CORD deployment is complete, please read the
+following guide to understand how to configure it:
+[Configure R-CORD](configuration.md)
+
+## Customize an R-CORD Install
+
+Define a `my-rcord-values.yaml` that looks like:
```yaml
# in service charts
@@ -60,6 +81,5 @@
and use it during the installation with:
```shell
-helm install -n rcord-lite xos-profiles/rcord-lite -f my-rcord-lite-values.yaml
+helm install -n rcord-lite xos-profiles/rcord-lite -f my-rcord-values.yaml
```
-
diff --git a/quickstart.md b/quickstart.md
new file mode 100644
index 0000000..bf1354d
--- /dev/null
+++ b/quickstart.md
@@ -0,0 +1,16 @@
+# Quick Start
+
+This section walks you through an example installation sequence on two
+different platforms. If you'd prefer to understand the installation
+process in more depth, including the full range of deployment options,
+you might start with the [Installation Guide](README.md) instead.
+
+This Quick Start describes how to install the R-CORD profile, plus a
+`SimpleExampleService`, on a single machine. Once you complete these
+steps, you might be interested in jumping ahead to the
+[SimpleExampleService Tutorial](simpleexampleservice/simple-example-service.md)
+to learn more about the make-up of a CORD service. Another option
+would be to explore CORD's [operational interfaces](operating_cord/general.md).
+
+* [MacOS](macos.md)
+* [Linux](linux.md)
diff --git a/versioning.md b/versioning.md
new file mode 100644
index 0000000..dd1a00c
--- /dev/null
+++ b/versioning.md
@@ -0,0 +1,112 @@
+# Versions and Releases of CORD
+
+The 5.0 and earlier releases of CORD were done on a patch branch, named
+`cord-4.1`, `cord-5.0`, etc., which received bug fixes as required.
+
+Starting with 6.0, the decision was made that individual components of CORD
+would be released and versioned separately. The versioning method chosen was
+[Semantic Versioning](https://semver.org/), with version numbers incrementing
+per the MAJOR.MINOR.PATCH scheme as changes are made.
+
+For development versions, the recommended format is SemVer, or the slightly
+different [PEP440](https://www.python.org/dev/peps/pep-0440/) syntax
+(`.dev#` instead of `-dev#`) for Python code. An example of each is shown below.
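+
+For illustration, here is how a few hypothetical version strings would be
+treated:
+
+```text
+6.1.0        # SemVer released version (tagged in git on merge)
+6.1.0-dev2   # SemVer development version (never tagged)
+6.1.0.dev2   # PEP440 development version, used for Python code
+```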
+
+To avoid confusion, all components that existed prior to 6.0 started their
+independent versioning at version `6.0.0`.
+
+## CORD Releases
+
+Formal CORD releases are tags on two repositories,
+[helm-charts](https://github.com/opencord/helm-charts) and
+[automation-tools](https://github.com/opencord/automation-tools). The helm
+charts refer to the specific container versions used with CORD, and those
+containers encapsulate individual components that are versioned independently.
+For example, a future 6.1.0 release might still include some components
+using the original 6.0.0 version.
+
+Patch branches are not created by default during a release. They may be
+created for tracking ongoing work, but this is left up to the
+owners/developers of the individual components.
+
+Support and patches are provided for the currently released version and the
+last point revision of the previous release. For example, while developing
+6.0, we continued to support both 5.0 and 4.1; once 6.0 is released, support
+will be provided for 6.0 and 5.0, dropping 4.1.
+
+## How to create a versioned release of an individual component
+
+1. Update the `VERSION` file to a released version (ex: `6.0.1`)
+
+2. Make sure that any Docker parents are using a released version (`FROM
+ xosproject/xos-base:6.0.0`), or some other acceptably specific version (ex: `FROM
+ scratch` or `FROM ubuntu-16.04`)
+
+3. Create a patchset with these changes and submit it to Gerrit (a sketch
+   of steps 1-3 appears after this list)
+
+4. If it passes tests, upon merge the commit will be tagged in git with the
+ version string, and Docker images with the tag will be built and sent to
+ Docker Hub.
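+
+A sketch of steps 1-3 (this assumes the `git-review` tool commonly used with
+Gerrit; the file names follow the conventions described above):
+
+```shell
+# Step 1: set a released version
+echo "6.0.1" > VERSION
+
+# Step 2: confirm the Dockerfile parent is pinned to a released version
+grep '^FROM' Dockerfile   # e.g., FROM xosproject/xos-base:6.0.0
+
+# Step 3: commit the change and submit the patchset to Gerrit
+git add VERSION
+git commit -m "Release version 6.0.1"
+git review
+```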
+
+## Details of the release process
+
+To create a new version, the version string is updated using the language- or
+framework-specific method. For most of CORD, a file named `VERSION` is created
+in the root of the git repository and contains a single line with the version
+string. Once a commit has been merged to a branch that changes the released
+version number, the version number is added as a git tag on the repo.
+
+During development, the version is usually set to a development value, such as
+`6.0.0.dev1`. There can be multiple patchsets using the same non-release
+development version, and these versions don't create git tags on merge.
+
+As it's confusing to have multiple git commits that contain the same released
+version string in the `VERSION` file, Jenkins jobs have been put in place to
+prevent this from happening. The implementation is as follows:
+
+- When a patchset is submitted to Gerrit, the `tag-collision-reject` Jenkins
+ job runs. This checks that the version string in `VERSION` does not
+ already exist as a git tag, and rejects any patchsets that have duplicate
+ released versions. It ignores development and non-SemVer version strings.
+
+ This job also checks that if a released version number is used, any
+ Dockerfile parent images are also using a fixed parent version, to better
+ ensure repeatability of the image builds.
+
+- Once the patchset is approved and merged, the `version-tag` Jenkins job runs.
+ If the patchset uses a SemVer released version, additional checks are
+ performed and if they pass a git tag is created on the git repo pointing to
+ the commit.
+
+ Once the tag is created, if there are Docker images that need to be created
+ from the commit, the `publish-imagebuilder` job runs and creates tagged
+ images corresponding to the git tags and branches and pushes them to Docker
+ Hub.
+
+Git history is public, so it shouldn't be rewritten to abandon already merged
+commits, which means there is no way to un-release a version.
+
+Reverting a commit leaves it in the git history, so if a broken version is
+released the correct action is to create a new fixed version, not try to fix
+the already released version.
+
+## Troubleshooting
+
+### Patchsets fail job after rebasing
+
+If you've rebased your patchset onto a released version, the `VERSION` file may
+be at a released version, which violates the "no two patchsets can contain the
+same released version" rule. For example, an error like this:
+
+```text
+Version string '1.0.1' found in 'VERSION' is a SemVer released version!
+ERROR: Duplicate tag: 1.0.1
+```
+
+This error means that when you rebased your code, the job found `1.0.1` in the
+`VERSION` file, which violates the "two commits can't have the same version"
+policy.
+
+To fix this issue, change the contents of the `VERSION` file to either a new
+development version (ex: `1.0.2.dev1`) or a new release version (ex: `1.0.2`)
+and resubmit your patchset, as sketched below.
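+
+The following assumes the `git-review` workflow used with Gerrit:
+
+```shell
+echo "1.0.2.dev1" > VERSION    # or "1.0.2" for a new release version
+git add VERSION
+git commit --amend --no-edit   # fold the fix into the existing patchset
+git review                     # resubmit to Gerrit
+```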
+