Add prerequisites for COMAC install

Change-Id: Iffbf5fa488c5fd10cb77b0133f1471177133baf0
diff --git a/profiles/comac/configure/Prerequisites.md b/profiles/comac/configure/Prerequisites.md
deleted file mode 100644
index ca2e729..0000000
--- a/profiles/comac/configure/Prerequisites.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Prerequisites
-
-## Hardware Requirements
-
-Based on the description of "Generic Hardware Guidelines" <https://guide.opencord.org/prereqs/hardware.html#generic-hardware-guidelines>, we are going to introduce the specific requirements for COMAC in this page.
-
-
-* **Compute Machines**: Same as described in "Generic Hardware Guidelines" page. But you want to use multi-cluster COMAC, then you can prepare two same setups. Also, COMAC requires at least Intel XEON CPU with Haswell microarchitecture or better.
-
-* **Network Cards**: For 3GPP data plane, COMAC supports high performance with SR-IOV interfaces. So besides the first 1G NIC for management, COMAC also need another 10G NIC on computer machines for user data traffic.
-
-* **Access Devices**: In COMAC, the access devices here refer to Enodebs. The enodeb for this release we use Accelleran E1000.
-
-The rest of the hardware are same with "Hardware Requirements" section on "Generic Hardware Guidelines" page.
-
-## COMAC BOM Example
-
-One cluster with one OpenFlow switch setup example:  
-![](../images/3nodes-hardware-setup.png)
-
-3x x86 server (10G NIC)  
-1x OpenFlow switch (40G NIC)  
-1x DAC breakout cable     
-5x Ethernet copper cables 
-2x layer 2 switches, one is for management, another is as converter between 1G and 10G NICs  
-
-## Software Requirements
-
-* **Kernel Modules**:  
-  (1) “*nf_conntrack_proto_sctp*” for SCTP protocol;  
-  (2) “*vfio-pci*" for SR-IOV.
-  
-* **Software List**:  
-   Download kubespray, automation-tools, configurations and the helm charts:
-
-  `git clone https://github.com/kubernetes-incubator/kubespray.git -b release-2.11`
-  `git clone https://gerrit.opencord.org/automation-tools`
-  `git clone https://gerrit.opencord.org/pod-configs`
-  `git clone https://gerrit.opencord.org/helm-charts`
-
-
-
-  
-
-
diff --git a/profiles/comac/images/SR-IOV-result.png b/profiles/comac/images/SR-IOV-result.png
new file mode 100644
index 0000000..2334039
--- /dev/null
+++ b/profiles/comac/images/SR-IOV-result.png
Binary files differ
diff --git a/profiles/comac/images/cdn-vf-result.png b/profiles/comac/images/cdn-vf-result.png
new file mode 100644
index 0000000..c8aa715
--- /dev/null
+++ b/profiles/comac/images/cdn-vf-result.png
Binary files differ
diff --git a/profiles/comac/install/prerequisites.md b/profiles/comac/install/prerequisites.md
index a22b2d5..a52ec79 100644
--- a/profiles/comac/install/prerequisites.md
+++ b/profiles/comac/install/prerequisites.md
@@ -1,197 +1,157 @@
 # Prerequisites
 
-This page will introduce the pre-installation before installing OMEC, which includes:   
+This page describes the preparation needed before installing OMEC, which includes:
 
-* Installing OS;  
-* Nodes Configuration;  
-* SCTP Setup;  
-* SR-IOV Setup;
+* Hardware Requirements;
+* Node Setup for COMAC;
 * Install Kubernetes;
-* Install CORD and COMAC;
-* Fabric Configuration.
+* Install the CORD platform and COMAC profile;
+* Set up the underlay fabric.
 
-The introduction is based on multi-cluster: Edge and Central. If you want to install single cluster, you can ignore the central part.  
+These instructions assume a multi-cluster deployment with an Edge and a Central cluster. If you want to install a single cluster, you can skip the Central parts.
 
-## Install OS on All Nodes 
+## Hardware Requirements
 
-COMAC supports both Ubuntu 16.04 or Ubuntu 18.04. You can select any of them.
+This page builds on the "Generic Hardware Guidelines" (<https://guide.opencord.org/prereqs/hardware.html#generic-hardware-guidelines>) and describes the requirements specific to COMAC.
 
 
-## Config All Nodes
+* **Compute Machines**: Same as described in the "Generic Hardware Guidelines" page. If you want a multi-cluster COMAC deployment, prepare two identical setups. COMAC also requires Intel Xeon CPUs with the Haswell microarchitecture or newer.
 
-* **Configure cluster node names**
-    
-  COMAC will install kubernets on the first node. So on edge 1 and central 1, add other node name and IP addresses to "/etc/hosts" file:
-  
+* **Network Cards**: For the 3GPP data plane, COMAC achieves high performance with SR-IOV interfaces. Besides the first 1G NIC for management, each compute machine therefore needs an additional 10G NIC for user data traffic.
+
+* **Access Devices**: In COMAC, the access devices refer to eNodeBs; this release uses the Accelleran E1000. The rest of the hardware is the same as in the "Hardware Requirements" section of the "Generic Hardware Guidelines" page.
+
+* **COMAC BOM Example**:
+
+  An example setup with one cluster and one OpenFlow switch:
+
+![](../images/3nodes-hardware-setup.png)
+
+  3x x86 server (10G NIC)  
+  1x OpenFlow switch (40G NIC)  
+  1x DAC breakout cable  
+  5x Ethernet copper cables  
+  2x layer 2 switches, one for management and the other as a converter between the 1G and 10G NICs
+
+
+## Node Setup for COMAC
+
+This section covers the COMAC-specific preparation needed before installing Kubernetes.
+
+* **OS requirement**
+
+  COMAC runs on Kubernetes, so any OS distribution should work; Ubuntu 16.04 and 18.04 have been tested so far.
+
+* **Download repos**
+
+  Download the automation-tools, pod-configs, and helm-charts repositories:
+
   ```shell
-  127.0.0.1 localhost localhost.localdomain
-  192.168.170.3 edge1
-  192.168.170.4 edge2
-  192.168.170.5 edge3
-  ```
-  ```shell
-  192.168.171.3 central1
-  192.168.171.4 central2
-  192.168.171.5 central3
-  ```
-  If you just want to run a single cluster, you only need to config the edge cluster.
-  
-
-* **IP address configuration**
-  
-  After installing OS to nodes, we need to config the two NICs on each node. As described in the hardware requirements section, each nodes should have 2 NICs and the 1G NIC is for management network and 10G NIC is for user dataplane traffic.  
-  
-  For example, if the 1G inerface for management network is: 10.90.0.0/16, the 10G interface for fabric is 119.0.0.0/24. Then we can config the cluster like this:
- 
-  Edge1:
- 
-  ```shell   
-  auto eth0
-  iface eth0 inet static  
-  address 192.168.170.3 
-  netmask 255.255.0.0 
-  gateway 10.90.0.1
- 
-  auto eth2  
-  iface eth2 inet static
-  address 119.0.0.101
-  netmask 255.255.255.0
+  git clone https://gerrit.opencord.org/automation-tools
+  git clone https://gerrit.opencord.org/pod-configs
+  git clone https://gerrit.opencord.org/helm-charts
   ```
 
-  Edge2:
-
-  ```shell   
-  auto eth0
-  iface eth0 inet static  
-  address 192.168.170.4 
-  netmask 255.255.0.0 
-  gateway 10.90.0.1
- 
-  auto eth2  
-  iface eth2 inet static
-  address 119.0.0.102
-  netmask 255.255.255.0
-  ```
-
-  Edge3:
- 
-  ```shell   
-  auto eth0
-  iface eth0 inet static  
-  address 192.168.170.5
-  netmask 255.255.0.0 
-  gateway 10.90.0.1
-
-  auto eth2  
-  iface eth2 inet static
-  address 119.0.0.103
-  netmask 255.255.255.0
-  ```
-
-  If you want to run multi-cluster, you can config the second cluster in the same way.  
-
-
-* **SSH Key Configuration** 
-   
-  COMAC uses kubespray to insalll the kubernetes cluster. The Ansible tool inside kubespray needs to ssh into each node and execute the playbook. So we need to setup ssh login with key instead of password for each node.
-  
-  Login Edge1, run the following commands:
-  
-  ```shell
-  cord@edge1:~$ ssh-keygen
-  cord@edge1:~$ ssh-copy-id localhost
-  cord@edge1:~$ ssh-copy-id edge2
-  cord@edge1:~$ ssh-copy-id edge3
-  ```
-  
-  Then ssh into each node, make sure the ssh key works without password.
- 
-* **Clone repos** 
-  
-  On Edge1:
-  
-  ```shell
-  cord@edge1:~$ git clone https://github.com/kubernetes-incubator/kubespray.git -b release-2.11
-  cord@edge1:~$ git clone https://gerrit.opencord.org/automation-tools
-  cord@edge1:~$ git clone https://gerrit.opencord.org/pod-configs
-  cord@edge1:~$ git clone https://gerrit.opencord.org/helm-charts
-  ```
-
-## SCTP Setup
+* **SCTP Setup**
 
   The S1-MME interface uses the SCTP protocol, which Ubuntu does not load by default, so we need to set up SCTP on all nodes:
 
   ```shell
-$sudo modprobe nf_conntrack_proto_sctp
-$echo ‘nf_conntrack_proto_sctp’ >> /etc/modules
+  sudo modprobe nf_conntrack_proto_sctp
+  echo 'nf_conntrack_proto_sctp' | sudo tee -a /etc/modules
   ```
-  
+
  You can verify that the sctp module is loaded with the following command:
-  
-  ```shell
-$sudo lsmod | grep sctp
-  ```
-
-## SR-IOV Setup
-
-   In this release, we pre-setup the SR-IOV support on the nodes which will run SPGWU and CDN containers. Also with COMAC, you can specify which node to run the SPGWU and which node to run CDN container. By default, COMAC run SPGWU on edge3 and CDN on edge2. 
-   
-   The name of 10G interface on each node is eth2.
-   
-   COMAC use “*VFIO driver*” for userspace APP with DPDK for SPGWU. Run the following command on the node where you want to run SPGWU container on edge3:  
 
   ```shell
-cord@edge3:~$ git clone https://gerrit.opencord.org/automation-tools
-cord@edge3:~$ sudo automation-tools/comac/scripts/node-setup.sh eth2
+  sudo lsmod | grep sctp
   ```
+
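+  If the module is loaded, you should see it listed in the output; a hedged example (module sizes vary by kernel):
+
+  ```shell
+  nf_conntrack_proto_sctp    16384  0
+  nf_conntrack              131072  1 nf_conntrack_proto_sctp
+  ```
+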
+* **SR-IOV Setup**
+
+   In this release, we set up SR-IOV support in advance on the nodes that will run the SPGWU and CDN containers.
+
+   COMAC uses the “*VFIO driver*” for the DPDK-based userspace SPGWU application. To set up SR-IOV support on the nodes, the COMAC team provides a script in the automation-tools repo. The script handles the SR-IOV setup for you, including checking that hardware virtualization is enabled in the BIOS, enabling the IOMMU, enabling hugepages, and enabling SR-IOV (a few manual spot checks are sketched after the verification step below).
+
+   All you need to do is run the following commands on the node with the SR-IOV-capable NIC:
+
+  ```shell
+  git clone https://gerrit.opencord.org/automation-tools
+  sudo automation-tools/comac/scripts/node-setup.sh <SR-IOV-supported-NIC-name>
+  ```
+
  You can verify it with the following command:
-  
+
   ```shell
-cord@edge3:~$ ip link show
+  ip link show
   ```
-  
-  COMAC use “*Netdevice driver*” for CDN. Run the following command on the node where you want to run CDN container on edge2:
-    
+   You should see 63 VF interfaces in the output, like this:
+
+   ![](../images/SR-IOV-result.png)
+
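+   If you want to double-check what the script did, a few hedged spot checks (the NIC placeholder is the same one used above; `intel_iommu` applies to the Intel CPUs COMAC requires):
+
+   ```shell
+   # IOMMU should be enabled on the kernel command line
+   grep -o 'intel_iommu=on' /proc/cmdline
+   # Hugepages should be reserved
+   grep Huge /proc/meminfo
+   # The NIC should report the number of virtual functions created
+   cat /sys/class/net/<SR-IOV-supported-NIC-name>/device/sriov_numvfs
+   ```
+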
+  COMAC uses the “*Netdevice driver*” for CDN. Run the following commands on the node where you want to run the CDN container:
+
   ```shell
-cord@edge2:~$ sudo su
-cord@edge2:/home/cord# echo '8' > /sys/class/net/eth2/device/sriov_numvfs
+  sudo su
+  echo '8' > /sys/class/net/<SR-IOV-supported-NIC-name>/device/sriov_numvfs
   ```
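+
+  If you prefer not to open a root shell, an equivalent hedged one-liner writes the same sysfs file through sudo tee:
+
+  ```shell
+  echo '8' | sudo tee /sys/class/net/<SR-IOV-supported-NIC-name>/device/sriov_numvfs
+  ```
+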
 You can verify it with the following command:
-  
-  ```shell
-cord@edge2:/home/cord# ip link show
-  ```
 
-## Install Kubernetes 
+  ```shell
+  ip link show
+  ```
+  You should see 8 VF interfaces in the output, like this:
+
+  ![](../images/cdn-vf-result.png)
+
+## Install Kubernetes
 
 You can refer to the [Kubernetes page](https://guide.opencord.org/prereqs/kubernetes.html) for installation. In this section, we only describe the COMAC-specific work.
 
-## Install CORD and COMAC
-
-* **Install CORD** 
+As described above, the SCTP protocol is used on the S1-MME interface between the BBU and MME containers. To enable it, COMAC turns on the SCTP feature gate by adding the following line to the "*inventory/comac/extra-vars.yaml*" file.
 
 ```shell
-cord@edge1:~$ helm init --wait --client-only
-cord@edge1:~$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
-cord@edge1:~$ helm repo add cord https://charts.opencord.org
-cord@edge1:~$ helm repo update
-cord@edge1:~$ helm install -n cord-platform cord/cord-platform --version 7.0.0 -f automation-tools/comac/sample/omec-override-values-multi.yaml
+kube_feature_gates: [SCTPSupport=True]
+```
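+
+Once the cluster is up, a hedged sanity check that the feature gate took effect (run on a master node; it assumes the kube-apiserver process is visible to ps, which holds for typical kubespray deployments):
+
+```shell
+ps -ef | grep kube-apiserver | grep -o 'SCTPSupport=[A-Za-z]*'
+```
+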
+In COMAC, most containers have multiple interfaces of different types. Take the SPGWU container for example: one interface is used to talk to SPGWC, receiving policy information and commands to set up GTP tunnels; it does not need high performance and is based on Calico. The second and third interfaces are the S1U and SGI interfaces, which carry user traffic and are based on SR-IOV.
+
+[Multus](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/multus.md) is a meta CNI (Container Network Interface) plugin that provides multiple-network-interface support to pods, which COMAC needs.
+
+So, in the "*inventory/comac/extra-vars.yaml*" file, COMAC needs to add the following lines for the CNI plugins:
+
+```shell
+kube_network_plugin: calico
+kube_network_plugin_multus: true
+multus_version: stable
 ```
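+
+After Kubernetes is installed, one hedged way to confirm Multus is active (the pod name varies by kubespray version):
+
+```shell
+kubectl get pods -n kube-system | grep multus
+```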
 
-* **Install COMAC** 
+
+## Install CORD Platform and COMAC Profile
+
+* **Install CORD Platform**
 
 ```shell
-cord@edge1:~$ helm install -n comac-platform --version 0.0.6 cord/comac-platform --set mcord-setup.enabled=false --set etcd-cluster.enabled=false 
+helm init --wait --client-only
+helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
+helm repo add cord https://charts.opencord.org
+helm repo update
+helm install -n cord-platform cord/cord-platform --version 7.0.0 -f automation-tools/comac/sample/omec-override-values-multi.yaml
+```
+
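+The platform chart deploys a number of pods; you can watch progress with plain kubectl until everything reports Running (nothing COMAC-specific is assumed here):
+
+```shell
+kubectl get pods --all-namespaces -w
+```
+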
+* **Install COMAC Profile**
+
+```shell
+helm install -n comac-platform --version 0.0.6 cord/comac-platform --set mcord-setup.enabled=false --set etcd-cluster.enabled=false
 ```
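+
+Both releases should now show up as DEPLOYED; a quick hedged check:
+
+```shell
+helm ls | grep -E 'cord-platform|comac-platform'
+```
+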
 ## Fabric Configuration
-  
-You can refer to [Trellis Underlay Fabric](https://wiki.opencord.org/display/CORD/) for more info on how to config the fabric. 
+
+You can refer to [Trellis Underlay Fabric](https://wiki.opencord.org/display/CORD/) for more information on how to configure the fabric.
 
  You can modify the example file "mcord-local-cluster-fabric-accelleran.yaml" according to your network, and insert the fabric configuration with the following command:
-  
+
   ```shell
cd pod-configs/tosca-configs/mcord
-curl -H "xos-username: admin@opencord.org" -H "xos-password: letmein" -X POST --data-binary @mcord-local-cluster-fabric-accelleran.yaml http://192.168.87.151:30007/run
+curl -H "xos-username: admin@opencord.org" -H "xos-password: letmein" -X POST --data-binary @mcord-local-cluster-fabric-accelleran.yaml http://<cluster-ip>:30007/run
 
   ```
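+
+  The TOSCA engine responds with the result of the recipe. If you just want to confirm it was accepted, a hedged variant that prints only the HTTP status code (a non-2xx code means the recipe was rejected):
+
+  ```shell
+  curl -s -o /dev/null -w "%{http_code}\n" -H "xos-username: admin@opencord.org" \
+    -H "xos-password: letmein" -X POST \
+    --data-binary @mcord-local-cluster-fabric-accelleran.yaml http://<cluster-ip>:30007/run
+  ```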