Merge "Fixing example yaml file" into cord-4.1
diff --git a/docs/installation_guide.md b/docs/installation_guide.md
index dab486c..7bebfaf 100644
--- a/docs/installation_guide.md
+++ b/docs/installation_guide.md
@@ -30,12 +30,48 @@
  
 **NOTE**: The role of the CPE is to get users’ traffic, tag it with a VLAN id, and forward it to the Ethernet Edge switch. Additionally, the CPE sends and receives OAM probes to let CORD monitor the status of the network. For lab trials, a combination of two components has been used to emulate the CPE functionalities: a media converter, used to collect users’ traffic from an Ethernet CAT5/6 interface (where a traditional host, like a laptop, is connected) and send it out from its other SFP interface; a programmable SFP (plugged into the SFP port of the media converter), that a) tags the traffic with a specific VLAN id and forwards it to the Ethernet Edge switch; b) sends and receives OEM probes to let CORD monitor the network. The programmable SFP is currently configured through NETCONF, using the ONOS Flow Rule abstraction translated into NETCONF XML for the drivers, and the ONOS-based CarrierEthernet application to generate the Flow Rules based on requests.
 
+## Local site connectivity diagram
+The main CORD physical POD [installation guide](https://guide.opencord.org/install_physical.html#connectivity-requirements) already provides a basic POD connectivity diagram. Those connections are also required to bring up an E-CORD local site. Please review them carefully before going through this section.
+
+<img src="static/images/connectivity-diagram.png" alt="E-CORD connectivity diagram" style="width: 800px;"/>
+
+### Legend
+* **Red lines**: data plane connections
+* **Light blue lines**: control plane connections
+* **Extra bold lines**: 10G/40G fiber network connections, depending on your hardware
+* **Bold lines**: 1G fiber network connections
+* **Thin lines**: 1G copper network connections
+
+The diagram is annotated with letters and numbers that reference specific devices and ports. A letter alone references a device (e.g. A is the CPE). A letter followed by a number references a port. For example, A1 is the "fiber" port on the CPE.
+
+* **A** - the CPE
+* **B** - the Ethernet Edge Device
+* **B1** - the Ethernet Edge Device port facing the CPE
+* **B2** - the Ethernet Edge Device port facing the fabric switch
+* **C** - the fabric switch
+* **C1** - the fabric switch port facing the Ethernet Edge Device
+* **C2** - the fabric switch port facing the head node (if any)
+* **C3** - the fabric switch port facing the compute node (if any)
+* **C4** - the main fabric switch port connecting the POD to the upstream network (or directly to the fabric switch of the second POD)
+* **C5** - the main fabric switch port connecting the POD to the upstream network (or directly to the fabric switch of the third POD)
+* **E** - the remote fabric switch (of "POD2" in lab trials with 3 PODs)
+* **E1** - the remote fabric switch port of "POD2", facing the fabric switch in POD1
+* **F** - the remote fabric switch (of "POD3" in lab trials with 3 PODs)
+* **F1** - the remote fabric switch port of "POD3", facing the fabric switch in POD1
+
+A letter plus "N" represents a generic port on a specific device (for example, BN is any port on the Ethernet Edge Device).
+
+### References and information needed: letters and numbers in the diagram
+The letters and numbers above are needed to
+* Properly configure E-CORD, later in this guide
+* Provide a reference when debugging an installation
+
 ## Installing the global node
 To install the global orchestrator on a physical node, you should follow the steps described in the main [physical POD installation](https://guide.opencord.org/install_physical.html).
 
 At a high level, bootstrap the development machine and download the code.
 
-### Local POD configuration file
+### Global node configuration file
 When it’s time to write your POD configuration, use the [physical-example.yml](https://github.com/opencord/cord/blob/master/podconfig/physical-example.yml) file as a template. Either modify it or make a copy of it in the same directory.
 Fill in the configuration with your own head node data.
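+
+Purely as an illustration of what a filled-in template might look like (the authoritative key names are those in *physical-example.yml* itself; the profile name, IP address and credentials below are placeholders):
+
+```yaml
+# Hypothetical excerpt of a POD configuration based on physical-example.yml.
+# All values are placeholders; copy the real template and fill it in with
+# your own head node data.
+cord_profile: ecord-global
+
+headnode:
+  ip: '10.90.0.2'            # management IP of your head node
+  user: 'cordadmin'          # account the build machine uses to SSH in
+  password: 'cordpassword'   # password for that account
+```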
 
@@ -174,9 +210,52 @@
     ```
 
 ## Configure the local sites
-Local sites configuration consists of two parts:
+Local site configuration consists of four parts:
+* Ethernet Edge (Centec V350) configuration
+* **Optional** Fabric Breakout configuration
 * ONOS_Fabric configuration
 * The ONOS_CORD configuration
+
+### Configure the Ethernet Edge device - B - (Centec v350)
+The steps below assume that:
+* The Centec device is connected to an A(Access)-leaf fabric switch
+* ONOS_CORD is running
+
+Follow the steps below to assign an IP address to the Ethernet Edge device and connect it to ONOS_CORD.
+
+#### Set a management IP address on the switch OOB interface
+The switch management interface should be configured with a static IP address (DHCP is not supported yet), in the same subnet as the POD internal/management network (by default 10.6.0.0/24).
+
+**NOTE**: Please use a high value for the last octet of the IP address, since lower values are usually allocated first by the MAAS DHCP server running on the head node.
+
+To configure the static IP address, do the following:
+* Log into the CLI of the Centec switch (through SSH or console cable)
+* *configure terminal*
+* *management ip address YOUR_MGMT_ADDRESS netmask YOUR_NETMASK*
+* *end*
+* *show management ip address*
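+
+As a concrete example, assuming the default 10.6.0.0/24 management subnet and a high last octet as recommended above:
+
+```
+configure terminal
+management ip address 10.6.0.200 netmask 255.255.255.0
+end
+show management ip address
+```
+
+Here 10.6.0.200 and 255.255.255.0 are placeholders; use an address and netmask valid for your own management network.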
+
+### **Optional** Configure the breakout cable on the fabric switch
+If you use a fabric switch with 40G interfaces and a 4 x 10G breakout cable to reach the Centec Ethernet Edge, you need to properly configure the interface the breakout cable is connected to.
+By default, all 32 ports run in 1 x 40G mode. The */etc/accton/ofdpa.conf* file needs to be modified to break out 1 x 40G into 4 x 10G.
+Do the following:
+* ssh into the fabric switch (username and password are usually root/onl)
+* *vi /etc/accton/ofdpa.conf*
+* uncomment the line *port_mode_1=4x10g    # front port 1* for the port the breakout cable is connected to
+* save the file and exit
+* *cd ~*
+* *./killit*
+* *./connect -bg*
+
+For further reference, see this step in the [fabric configuration guide](https://wiki.opencord.org/display/CORD/Hardware+Switch+Installation+Guide#HardwareSwitchInstallationGuide-C3).
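+
+For reference, the relevant fragment of */etc/accton/ofdpa.conf* looks roughly like the sketch below (shown for front port 1 as an assumption; pick the *port_mode_* entry that matches the front port your breakout cable is plugged into):
+
+```
+# Before: the entry is commented out, so front port 1 runs in 1 x 40G mode
+#port_mode_1=4x10g    # front port 1
+
+# After: uncommented, front port 1 is broken out into 4 x 10G
+port_mode_1=4x10g    # front port 1
+```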
+
+### Set ONOS-CORD as the OpenFlow controller
+To set ONOS-CORD as the default switch OpenFlow controller and verify the configuration:
+* Log into the CLI of the Centec switch (through SSH or console cable)
+* *configure terminal*
+* *openflow set controller tcp YOUR-LOCAL-SITE-HEAD-IP 6654*
+* *end*
+* *show openflow controller status*
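+
+For example, if the local site head node were reachable at 10.6.0.1 on the internal network (an assumption; substitute your own YOUR-LOCAL-SITE-HEAD-IP), the commands would be:
+
+```
+configure terminal
+openflow set controller tcp 10.6.0.1 6654
+end
+show openflow controller status
+```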
  
 The local site configurations explained below need to happen on the head nodes of all local sites.
 
@@ -215,7 +294,7 @@
             "port": "8181",
             "username": "onos",
             "password": "rocks",
-            "deviceId": "of:YOUR-FABRIC-SW-DPID"
+            "deviceId": "of:C-DPID"
           }
         },
         "org.opencord.ce.local.bigswitch" : {
@@ -223,12 +302,12 @@
           [
             {
               "mefPortType" : "INNI",
-              "connectPoint" : "of:YOUR-FABRIC-SW-DPID/PORT-ON-FABRIC-CONNECTING-TO-EE",
+              "connectPoint" : "of:C-DPID/C1",
               "interlinkId" : "EE-1-to-fabric"
             },
             {
               "mefPortType" : "ENNI",
-              "connectPoint" : "of:YOUR-FABRIC-SW-TO-UPSTREAM-DPID/PORT (duplicate as many time as needed)",
+              "connectPoint" : "of:C-DPID/C4 (duplicate as many times as needed, depending on how many uplinks / connections to other PODs you have)",
               "interlinkId" : "fabric-1-to-fabric-2"
             }
           ]
@@ -255,8 +334,8 @@
 Under the key “mefPorts” there is the list of physical ports that have to be exposed to the global node. These ports represent MEF ports and can belong to different physical devices, but they will be part of a single abstract “bigswitch” in the topology of the global ONOS (see [E-CORD topology abstraction](https://guide.opencord.org/profiles/ecord/overview.html#e-cord-topology-abstraction)). These ports represent also the boundary between physical topologies controlled by different ONOS controllers.
 
 In the Json above:
-* *of:YOUR-FABRIC-SW-DPID/PORT-ON-FABRIC-CONNECTING-TO-EE* is the DPID and the port of the fabric device where the ethernet edge is connected to. 
-* *of:YOUR-FABRIC-SW-TO-UPSTREAM-DPID/PORT* is the DPID and the port of the fabric device where the transport network is connected to (in the example fabric switches are connected together). 
+* *of:C-DPID/C1* is the DPID and port of the fabric device that the Ethernet Edge Device is connected to.
+* *of:C-DPID/C4* is the DPID and port of the fabric device that the transport network is connected to (in the example, the fabric switches are connected directly together).
 
 This is hinted through the interlinkId.
 
@@ -291,7 +370,7 @@
        "netconf": {
          "username": "admin",
          "password": "admin",
-         "ip": "YOUR-CPE-IP",
+         "ip": "A-IP",
          "port": "830"
        },
        "basic": {
@@ -303,12 +382,12 @@
       }
      },
      "links": {
-      "netconf:YOUR-CPE-IP:830/0-of:WHERE-YOUR-CPE-IS-CONNECTED (EE DPID/PORT)" : {
+      "netconf:A-IP:830/0-of:B-DPID/B1" : {
        "basic" : {
         "type" : "DIRECT"
        }
       },
-      "of:WHERE-YOUR-CPE-IS-CONNECTED (EE DPID/PORT)-netconf:YOUR-CPE-IP:830/0" : {
+      "of:B-DPID/B1-netconf:A-IP:830/0" : {
        "basic" : {
         "type" : "DIRECT"
        }
@@ -320,11 +399,11 @@
         [
          {
           "mefPortType" : "UNI",
-          "connectPoint" : "netconf:YOUR-CPE-IP:830/0"
+          "connectPoint" : "netconf:A-IP:830/0"
          },
          {
           "mefPortType" : "INNI",
-          "connectPoint" : "of:DPID-AND-PORT-OF-EE-CONNECTING-TO-FABRIC",
+          "connectPoint" : "of:B-DPID/B2",
           "interlinkId" : "EE-2-fabric"
          }
         ]
@@ -346,34 +425,49 @@
     curl -X POST -H "content-type:application/json" http://YOUR-LOCAL-SITE-HEAD-IP:8182/onos/v1/network/configuration -d @YOUR-JSON-FILE.json --user onos:rocks
     ```
 
-**Warning** The Json above tries to congiure devices and links at the same time. It may happen that ONOS denies your request of link creation, since it does not find devices present (because their creation is still in progress). If this happens, just wait few seconds and try to push again the same configuration, using the *curl* command above.
-
-## Configure the Ethernet Edge device (Centec v350)
-The steps below assume that
-* The Centec device to an A(Access)-leaf fabric switch
-* ONOS_CORD is running
-
-Follow the steps below to assign an IP address to the Ethernet Edge device and connect it to ONOS_CORD.
-
-### Set a management IP address on the switch OOB interface
-The switch management interface should be set with a static IP address (DHCP not supported yet), in the same subnet of the POD internal/management network (by default 10.6.0.0/24).
-
-**NOTE**: Please, use high values for the IP last octet, since lower values are usually allocated first by the MAAS DHCP server running on the head node.
-
-To configure the static IP address, do the following:
-* Log into the CLI of the Centec switch (through SSH or console cable)
-* *configure terminal*
-* *management ip address YOUR_MGMT_ADDRESS netmask YOUR_NETMASK*
-* *end*
-* *show management ip address*
-
-### Set ONOS-CORD as the Openflow controller
-To set ONOS-CORD as the default switch OpenFlow controller and verify the configuration:
-* Log into the CLI of the Centec switch (through SSH or console cable)
-* *configure terminal*
-* *openflow set controller tcp YOUR-LOCAL-SITE-HEAD-IP 6654*
-* *end*
-* *show openflow controller status*
+**Warning** The JSON above tries to configure devices and links at the same time. ONOS may reject the link creation request because the devices are not present yet (their creation is still in progress). If this happens, just wait a few seconds and push the same configuration again, using the *curl* command above.
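+
+One way to check whether the devices have already been created, before pushing the configuration again, is to query the ONOS device list through the REST API:
+
+```
+# Lists the devices currently known to ONOS_CORD; the CPE and the Ethernet Edge
+# should appear in the output before the link configuration can be accepted.
+curl --user onos:rocks http://YOUR-LOCAL-SITE-HEAD-IP:8182/onos/v1/devices
+```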
 
 # Done!
 After the global node and the local sites are properly configured, the global node should maintain an abstract view of the topology and you should see UNIs distributed on the map of the XoS GUI. You can start requests to setup Ethernet Virtual Circuit (EVC) from XoS.
+
+# Demo: create an E-Line through the UI
+
+* SSH into the global node
+* Copy the XOS password from */opt/credentials/xosadmin@opencord.org*
+* From a computer able to reach the global node, open a browser and go to *http://YOUR_GLOBAL_NODE_IP/xos*
+* Log in with the following username/password:
+    * Username: *xosadmin@opencord.org*
+    * Password: YOUR_PASSWORD_COPIED_AT_THE_STEP_BEFORE
+* From the left menu of the XOS UI, choose *VNaaS GUI*
+* You should see CORD symbols on the map (your UNIs / end-points)
+    * Click on one of them
+    * Choose “Create connection”
+* From the right menu
+    * Choose the bandwidth profile you prefer (e.g. “Gold”)
+    * Input a CORD Site name (e.g. test demo)
+    * Input a VLAN id (e.g. 100)
+* Click on another CORD symbol
+* Click “Finish connection”. This will populate the field “Connect point 2 ID” in the right menu
+* Click save changes on the right menu
+
+A line should appear between the two icons, meaning that a request to connect the two end-points has been saved. The line should turn green within a few seconds; a green line indicates a working environment.
+
+## End-point communication
+The two end-points connected to the CPEs should now be able to communicate with each other. Each of the two end-points needs to be configured to send out packets tagged with the same VLAN id(s).
+Assuming you just configured VLAN id 100 in the UI, and that the two end-points will communicate using the 192.168.1.0/24 subnet, on each head node do the following:
+* *sudo apt-get install vlan*
+* *sudo modprobe 8021q*
+* Make the 8021q (vlan) module load automatically at startup: *sudo sh -c 'grep -q 8021q /etc/modules || echo 8021q >> /etc/modules'*
+* *vi /etc/network/interfaces* and add a VLAN-tagged alias interface
+    ```
+	auto YOUR_INTERFACE_CONNECTED_TO_INTERNAL_NETWORK.100
+	iface YOUR_INTERFACE_CONNECTED_TO_INTERNAL_NETWORK.100 inet static
+	address 192.168.1.1 (for the first head node, or 2 for the second one)
+	netmask 255.255.255.0
+	```
+* Save
+* Bring up the interface with *sudo ifup YOUR_INTERFACE_CONNECTED_TO_INTERNAL_NETWORK.100*
+
+## Success, Ping!
+If everything works, the two hosts should be able to ping each other.
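+
+For example, from the head node configured with 192.168.1.1 (addressing as assumed above):
+
+```
+# send three ICMP echo requests to the other head node over the VLAN 100 interface
+ping -c 3 192.168.1.2
+```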
diff --git a/docs/static/images/connectivity-diagram.png b/docs/static/images/connectivity-diagram.png
new file mode 100644
index 0000000..80e1659
--- /dev/null
+++ b/docs/static/images/connectivity-diagram.png
Binary files differ