Improve prerequisites for COMAC

Change-Id: I14f9c32d1f20d729b6bdd395f06dd3bd4e8a692b
diff --git a/prereqs/hardware.md b/prereqs/hardware.md
index e201bfd..f682012 100644
--- a/prereqs/hardware.md
+++ b/prereqs/hardware.md
@@ -99,9 +99,13 @@
                     * SUNSTAR D22799-STCC, EZconn ETP69966-7TB4-I2
 
 * **COMAC Specific Requirements**
-    * **Servers**: COMAC requires at least Intel XEON CPU with Haswell microarchitecture or better.
-    * **eNodeBs**:
-        * For this release, we tested a commercial enodeb: Accelleran E1000.
+    * **Compute Machines**:
+        * Intel Haswell CPUs or newer with VT-d support
+        * SR-IOV capable network card (for a list of Intel NICs with SR-IOV support, see [here](https://www.intel.com/content/www/us/en/support/articles/000005722/network-and-i-o/ethernet-products.html))
+    * **eNodeB**:
+        * Accelleran E1000
+    * **UE**:
+        * Samsung J5 with Android v7.1.1
 
 ## BOM Examples
 
@@ -150,3 +154,32 @@
 * 1 or more developers' workstations to develop and deploy
 * A workstation/server to simulate BNG
 * 1x L2 legacy management switch
+
+### COMAC BOM
+
+**Single Cluster**
+
+* 3x x86 server (1G management and 10G/25G/40G/100G data with SR-IOV enabled)
+* 1x fabric switch (10G/25G/40G/100G)
+* 1x L2 legacy management switch
+* DAC breakout cables as needed
+* Ethernet copper cables as needed
+* 1x eNB
+* 1x or more UEs
+* A workstation/server to develop and deploy
+
+**Multi-Cluster**
+
+* A workstation/server with access to both clusters to develop and deploy
+* **Central**
+    * 3x x86 server (1G management)
+    * 1x L2 legacy management switch
+    * Ethernet copper cables as needed
+* **Edge**
+    * 3x x86 server (1G management and 10G/25G/40G/100G data)
+    * 1x fabric switch (10G/25G/40G/100G)
+    * 1x L2 legacy management switch
+    * DAC breakout cables as needed
+    * Ethernet copper cables as needed
+    * 1x eNB
+    * 1x or more UEs
diff --git a/prereqs/kubernetes.md b/prereqs/kubernetes.md
index 996b60a..c4c0c3f 100644
--- a/prereqs/kubernetes.md
+++ b/prereqs/kubernetes.md
@@ -16,6 +16,7 @@
 ```
 
 More information about feature gates can be found [here](https://github.com/kubernetes-incubator/external-storage/tree/local-volume-provisioner-v2.0.0/local-volume#enabling-the-alpha-feature-gates).
+More sophisticated use cases, such as COMAC, require additional settings (see [COMAC prerequisites](../profiles/comac/install/prerequisites.md)).
 
 Although you are free to set up Kubernetes and Helm in whatever way makes
 sense for your deployment, the following provides guidelines, pointers, and
@@ -44,6 +45,15 @@
 topic at
 <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/>.
 
+If you have to manage multiple clusters, as in a COMAC multi-cluster deployment,
+extend the KUBECONFIG variable to include the configuration files for all clusters.
+
+```shell
+export KUBECONFIG=/path/to/central/kubeconfig/file:/path/to/edge/kubeconfig/file
+```
+
+After updating KUBECONFIG, you can quickly switch between clusters with the `kubectl config use-context` command, as shown below.
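+
+For example, to list the available contexts and switch to the edge cluster
+(the context names are illustrative; check the output of
+`kubectl config get-contexts` for the names in your environment):
+
+```shell
+# Show contexts from all kubeconfig files listed in KUBECONFIG
+kubectl config get-contexts
+
+# Make the edge cluster the active one (context name is an example)
+kubectl config use-context edge-cluster
+```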
+
 ## Install Kubectl
 
 Again assuming Kubernetes is already installed, the next step is to
diff --git a/profiles/comac/images/central-edge-connectivity.png b/profiles/comac/images/central-edge-connectivity.png
new file mode 100644
index 0000000..c2013a2
--- /dev/null
+++ b/profiles/comac/images/central-edge-connectivity.png
Binary files differ
diff --git a/profiles/comac/install/prerequisites.md b/profiles/comac/install/prerequisites.md
index 7b20e71..dd44ff3 100644
--- a/profiles/comac/install/prerequisites.md
+++ b/profiles/comac/install/prerequisites.md
@@ -1,159 +1,186 @@
 # Prerequisites
 
-This page will introduce the pre-installation before installing OMEC, which includes:
+This page describes the prerequisites for deploying COMAC, which include:
 
-* Hardware Requirements;
-* Nodes Setup for COMAC;
-* Install Kubernetes;
-* Install CORD platform and COMAC profile;
-* Setup Underlay Fabric.
+* Hardware Requirements
+* Connectivity Requirements
+* Software Requirements
 
-The introduction is based on multi-cluster: Edge and Central. If you want to install single cluster, you can ignore the central part.
+Before proceeding further, please read the general [CORD prerequisites](../../../prereqs/README.md) first.
+This page addresses only the COMAC-specific settings that go beyond the general
+requirements.
 
 ## Hardware Requirements
 
-Based on the description of "Generic Hardware Guidelines" <https://guide.opencord.org/prereqs/hardware.html#generic-hardware-guidelines>, we are going to introduce the specific requirements for COMAC in this page.
+Hardware requirements and the COMAC BOM are described on [this page](../../../prereqs/hardware.md).
 
+## Connectivity Requirements
 
-* **Compute Machines**: Same as described in "Generic Hardware Guidelines" page. But you want to use multi-cluster COMAC, then you can prepare two same setups. Also, COMAC requires at least Intel XEON CPU with Haswell microarchitecture or better.
+Read [this page](https://guide.opencord.org/prereqs/networking.html) first for the
+general connectivity requirements of a CORD cluster.
+The same setup applies when you run COMAC in a single cluster.
 
-* **Network Cards**: For 3GPP data plane, COMAC supports high performance with SR-IOV interfaces. So besides the first 1G NIC for management, COMAC also need another 10G NIC on computer machines for user data traffic.
+In a multi-cluster setup such as the example below, you need to provide
+a method of inter-cluster communication for exchanging control packets
+between applications running on different clusters.
+There are various ways to meet this requirement, but we usually set up a
+site-to-site VPN between the management networks of the clusters.
 
-* **Access Devices**: In COMAC, the access devices here refer to Enodebs. The enodeb for this release we use Accelleran E1000. The rest of the hardware are same with "Hardware Requirements" section on "Generic Hardware Guidelines" page.
+![example-setup](../images/central-edge-connectivity.png)
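+
+As one illustration (not part of the COMAC tooling), a minimal site-to-site
+tunnel between the two management networks could be built with WireGuard;
+every name, key, and subnet below is a placeholder:
+
+```shell
+# On the central management gateway (apply the mirror-image config on the edge)
+sudo tee /etc/wireguard/wg0.conf << EOF
+[Interface]
+Address = 10.100.0.1/24
+ListenPort = 51820
+PrivateKey = <central-private-key>
+
+[Peer]
+PublicKey = <edge-public-key>
+Endpoint = <edge-gateway-public-ip>:51820
+AllowedIPs = 10.100.0.2/32, <edge-management-subnet>
+EOF
+sudo wg-quick up wg0
+```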
 
-* **COMAC BOM Example**:
+Note: COMAC currently provides only `NodePort` as a way to expose services outside
+of the cluster. If the two clusters in your environment are already routed or
+reachable in some way, no special attention is needed.
 
-  One cluster with one OpenFlow switch setup example:
+Here is the list of default `NodePort` numbers that need to be opened externally
+in case you need to set up port forwarding.
 
-![](../images/3nodes-hardware-setup.png)
+|Cluster|Description|Default NodePort Number|
+|---------|-----------|-------|
+|Central|SPGWC-SPGWU communication|30021|
+|Edge|SPGWC-SPGWU communication|30020|
+|Central|CDN remote service HTTP UI|32080|
+|Central|CDN remote service RTMP|30935|
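+
+For example, assuming the default ports above, you can quickly check from
+outside the central cluster that the CDN remote service HTTP UI is reachable:
+
+```shell
+curl http://<central-node-ip>:32080
+```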
 
-  3x x86 server (10G NIC)
-  1x OpenFlow switch (40G NIC)
-  1x DAC breakout cable
-  5x Ethernet copper cables
-  2x layer 2 switches, one is for management, another is as converter between 1G and 10G NICs
+## Software Requirements
 
+### Node Setup
 
-## Nodes Setup for COMAC
+#### Operating System
 
-In this section, we need to prepare some COMAC specific work before install K8S.
+COMAC runs on Kubernetes, so any Linux distribution that supports Kubernetes
+should work. So far, Ubuntu 16.04 and 18.04 have been tested.
 
-* **OS requirement**
+#### Enable SR-IOV
 
-  COMAC runs on K8S, so any OS distribution should work, we tested Ubuntu 16.04 and 18.04 so far.
+It is recommended to enable SR-IOV on the data plane interface to
+accelerate the data plane. Enabling SR-IOV includes the following steps:
 
-* **Download repos**
+* Enable VT-d
+* Enable IOMMU
+* Create VFs
+* Bind VFs to the VFIO driver
 
-  Download automation-tools, configurations and the helm charts:
+The last step should be done only on the node where you want to run SPGWU.
+Both data plane components in COMAC, `SPGWU` and `CDN-local`, support SR-IOV,
+but they require different types of drivers for the VFs.
+`SPGWU` is implemented as a DPDK application, so it requires VFs bound
+to the VFIO driver, while `CDN-local` requires VFs bound to normal kernel drivers.
 
-  ```shell
-  git clone https://gerrit.opencord.org/automation-tools
-  git clone https://gerrit.opencord.org/pod-configs
-  git clone https://gerrit.opencord.org/helm-charts
-  ```
-
-* **SCTP Setup**
-
-   The protocol for S1-MME interface is SCTP, but SCTP is not loaded by ubuntu OS by default. So we need to setup SCTP on all nodes:
-
-  ```shell
-  sudo modprobe nf_conntrack_proto_sctp
-  echo ‘nf_conntrack_proto_sctp’ >> /etc/modules
-  ```
-
-  You can verify whether he sctp module is loaded by command:
-
-  ```shell
-  sudo lsmod | grep sctp
-  ```
-
-* **SR-IOV Setup**
-
-   In this release, we pre-setup the SR-IOV support on the nodes which will run SPGWU and CDN containers.
-
-   COMAC use “*VFIO driver*” for userspace APP with DPDK for SPGWU. To setup SR-IOV support on nodes, COMAC team provides a script inside automation-tools repo. This script will help you to setup the SR-IOV, including: check whether the hardware virtualization is enabled in BIOS, enable IOMMU, enable Hugepage, enable the SR-IOV, etc.
-
-   So what you need to do is just run the following command lines on the node with SR-IOV supported NIC:
-
-  ```shell
-  git clone https://gerrit.opencord.org/automation-tools
-  sudo automation-tools/comac/scripts/node-setup.sh <SR-IOV-supported-NIC-name>
-  ```
-
-  You can verify it with command:
-
-  ```shell
- ip link show
-  ```
-   You should see the 63 VF interfaces in the result like this:
-
-   ![](../images/SR-IOV-result.png)
-
-  COMAC use “*Netdevice driver*” for CDN. Run the following command on the node where you want to run CDN container:
-
-  ```shell
- sudo su
- echo '8' > /sys/class/net/eth2/device/sriov_numvfs
-  ```
- You can verify it with command:
-
-  ```shell
-# ip link show
-  ```
-  You should see the 8 VF interfaces in the result like this:
-
-  ![](../images/cdn-vf-result.png)
-
-## Install Kubernetes
-
-You can refer to the [Kubernetes page](https://guide.opencord.org/prereqs/kubernetes.html) for installation. In this section, we only describe the COMAC specific work.
-
-As we described before, SCTP protocol is used on S1-MME interface between BBU and MME containers. To enable this feature, COMAC needs to enable the SCTP feature by adding the following line in "*inventory/comac/extra-vars.yaml*" file.
+We provide a script that automates all of the above steps.
+Run the `node-setup.sh` script with the `--vfio` option to create VFIO-bound VFs.
 
 ```shell
-kube_feature_gates: [SCTPSupport=True]
+$ git clone https://gerrit.opencord.org/automation-tools
+$ cd automation-tools/comac/scripts/
+$ sudo ./node-setup.sh -i [iface name] --vfio
+  OK: vmx is enabled
+INFO: IOMMU is disabled
+      Added "intel_iommu=on" is to kernel parameters
+INFO: Hugepage is disabled
+      Added "hugepages=32" is to kernel parameters
+      Added "default_hugepagesz=1G" is to kernel parameters
+INFO: SR-IOV VF does not exist
+      Configured VFs on [iface name]
+INFO: SR-IOV VF 0 does not exist or is not binded to vfio-pci
+HINT: Grub was updated, reboot for changes to take effect
 ```
-In COMAC, most containers have multiple and different types of interfaces. Take the SPGWU container for example, one is used to talk to SPGWC for receiving police info and commands to setup GTP tunnels which does not need high performace, this interface is based on calico. The second and the third interfaces are S1U and SGI interfaces, which will run user traffic, those interfaces are based on SR-IOV.
 
-[Multus](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/multus.md) is a meta Container Network Interfaces (CNI) plugin that provides multiple network interface support to pods, which is needed by COMAC.
-
-So in "*inventory/comac/extra-vars.yaml*" file, for the Container CNI plugins, COMAC needs to add the following lines:
+For VFs bound to the normal kernel driver, run the same command without the `--vfio` option.
 
 ```shell
+cd automation-tools/comac/scripts/
+sudo ./node-setup.sh -i [iface name]
+```
+
+You'll need to reboot after running the script, and then run the script again
+after the reboot to verify the settings.
+
+```shell
+$ sudo ./node-setup.sh -i [iface name] --vfio
+  OK: vmx is enabled
+  OK: IOMMU is enabled
+  OK: Hugepage is enabled
+  OK: SR-IOV is enabled on [iface name]
+```
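+
+You can also inspect the created VFs directly. Listing the interface shows the
+VFs, and for VFIO-bound VFs the PCI devices should report `vfio-pci` as the
+driver in use (the grep pattern below assumes Intel NIC VFs):
+
+```shell
+# List the VFs created on the data plane interface
+ip link show [iface name]
+
+# Check which driver each VF is bound to
+lspci -nnk | grep -A 3 "Virtual Function"
+```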
+
+#### Load SCTP module
+
+The protocol used on the S1-MME interface is SCTP.
+Make sure the SCTP kernel module is loaded permanently on all nodes:
+
+```shell
+sudo modprobe nf_conntrack_proto_sctp
+echo "nf_conntrack_proto_sctp" >> /etc/modules
+```
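+
+You can verify that the module is loaded with:
+
+```shell
+lsmod | grep sctp
+```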
+
+### Kubernetes
+
+Read [this page](https://guide.opencord.org/prereqs/kubernetes.html) first for a
+basic understanding of how to install Kubernetes in your environment.
+To run COMAC on Kubernetes, some additional settings are required.
+
+* Enable the `SCTPSupport` feature gate
+* Install the `Multus` CNI plugin
+* Change the NodePort range to `2000-36767`
+
+Here is an example [Kubespray](https://github.com/kubernetes-sigs/kubespray)
+configuration file that includes the additional settings listed above.
+You can pass this file when running the Kubespray ansible-playbook.
+Note that it has been tested with Kubespray version `release-2.11`.
+
+```shell
+$ cat >> extra-vars.yaml << EOF
+# OS
+disable_swap: true
+populate_inventory_to_hosts_file: true
+
+# etcd
+etcd_deployment_type: docker
+etcd_memory_limit: 8192M
+
+# K8S
+kubelet_deployment_type: host
+kubectl_localhost: true
+kubeconfig_localhost: true
+
+kube_feature_gates: [SCTPSupport=True]
+kube_apiserver_node_port_range: 2000-36767
 kube_network_plugin: calico
 kube_network_plugin_multus: true
 multus_version: stable
+local_volume_provisioner_enabled: true
+
+# Applications
+dns_mode: coredns
+dns_cores_per_replica: 256
+dns_min_replicas: 1
+
+helm_enabled: true
+helm_deployment_type: host
+helm_version: v2.14.2
+EOF
+
+$ ansible-playbook -b -i inventory/comac/edge.ini -e @inventory/comac/extra-vars.yaml cluster.yml
 ```
 
+We also provide sample Kubespray inventories and configuration files under the `automation-tools/comac/sample` directory.
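+
+After the playbook completes, a quick sanity check with standard `kubectl`
+commands confirms that the cluster is up:
+
+```shell
+kubectl get nodes -o wide
+kubectl -n kube-system get pods
+```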
 
-## Install CORD Platform and COMAC Profile
+Once you have installed Kubernetes, the next step is to install the CORD platform.
+Refer to [this page](../../../installation/platform.md) for the
+basic instructions. For configuring `nem-monitoring` for COMAC, see [this page](../configure/monitoring.md).
 
-* **Install CORD Platform**
+### Trellis for Fabric
+
+You may use Trellis for configuring the data plane networks.
+You'll need to create a Tosca configuration file for your networks and then push
+the configuration to `XOS`. XOS exposes itself via `NodePort`, so you can
+reach it at any node IP in the cluster.
 
 ```shell
- helm init --wait --client-only
- helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
- helm repo add cord https://charts.opencord.org
- helm repo update
- helm install -n cord-platform cord/cord-platform --version 7.0.0 -f automation-tools/comac/sample/omec-override-values-multi.yaml
+curl -H "xos-username: admin@opencord.org" \
+     -H "xos-password: letmein" -X POST \
+     --data-binary @comac-fabric.yaml \
+     http://<nodeIP>:30007/run
 ```
 
-* **Install COMAC Profile**
-
-```shell
- helm install -n comac-platform --version 0.0.6 cord/comac-platform --set mcord-setup.enabled=false --set etcd-cluster.enabled=false
-```
-## Fabric Configuration
-
-You can refer to [Trellis Fabric Documentation](https://docs.trellisfabric.org/) for more info on how to config the fabric.
-
-  You can modify the exmpale file "mcord-local-cluster-fabric-accelleran.yaml" according to your netowrk, and insert fabric configuration with command:
-
-  ```shell
-$ cd pod-configs/tosca-configs/mcord
-curl -H "xos-username: admin@opencord.org" -H "xos-password: letmein" -X POST --data-binary @mcord-local-cluster-fabric-accelleran.yaml http://<cluster-ip>:30007/run
-
-  ```
-
-
-
+You can find the Tosca file for the example setup [here](https://github.com/opencord/pod-configs/blob/master/tosca-configs/mcord/mcord-local-cluster-fabric-accelleran.yaml). See the [Trellis Fabric Documentation](https://docs.trellisfabric.org/) for more information.