[CORD-1992]E-CORD docs

Change-Id: I8bb292e00216be479aba6711be7788e5f44a765e
(cherry picked from commit 49ba71cef681d4652d4c53a60c0b4ac2406b5301)
diff --git a/docs/README.md b/docs/README.md
index c905344..505eceb 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,7 +1,9 @@
 # E-CORD Profile
 
-This repository is intended to host XOS Models definitions and XOS GUI Extensions that are horizontal to the profile and not tight to a single service.
+This repository is intended to host the E-CORD documentation, XOS Models definitions and XOS GUI Extensions that are horizontal to the profile and not tied to a single service.
+
+It contains an overview of E-CORD and an installation guide that outlines, step by step, how to deploy an end-to-end E-CORD solution.
 
 It contains models and GUIs for both `ecord-local` and `ecord-global` profiles.
 
-## Stay tuned. More documentation coming soon!
+
diff --git a/docs/installation_guide.md b/docs/installation_guide.md
new file mode 100644
index 0000000..3cfad1c
--- /dev/null
+++ b/docs/installation_guide.md
@@ -0,0 +1,376 @@
+# Installation guide
+
+This guide describes how to install and configure the E-CORD global node and the local sites on a physical POD.
+
+## Hardware requirements (BOM)
+Following is a list of the hardware needed to create a typical E-CORD deployment. References are often made to the generic CORD BOM.
+ 
+**NOTE**: The hardware suggested is a reference implementation. The hardware listed has been used by ONF and its community for lab trials and deployments, to validate the platform and demonstrate its capabilities. You’re very welcome to replace any of the components and bring yours into the ecosystem. As a community, we would be happy to acknowledge your contribution and add newly tested devices to the BOM as well.
+
+### Global node
+* 1x development machine - same model used for a [generic CORD POD](https://guide.opencord.org/install_physical.html#bill-of-materials-bom--hardware-requirements)
+* 1x compute node (server) - same model used for a [generic CORD POD](https://guide.opencord.org/install_physical.html#bill-of-materials-bom--hardware-requirements)
+
+### Local site (POD)
+The hardware listed is needed for each POD.
+
+* Everything listed in the BOM of a generic CORD POD
+**WARNING**: E-CORD currently supports only one fabric switch per POD. Support for multiple fabric switches is coming soon, but for now there is no reason to buy more than one fabric switch per local site.
+* 1x Centec v350, used as “Ethernet Edge switch”
+* 1x CPE, composed of
+    * 1x 2-port TP-Link Gigabit SFP Media converter, model MC220L(UN)
+    * 1x Microsemi EA1000 programmable SFP
+    * 1x fiber cable (or DAC) to connect the CPE to the Ethernet Edge switch
+* 1x 40G to 4x10G  QSFP+ module to connect the Ethernet Edge switch to the access leaf fabric switch (EdgeCore QSFP to 4x SFP+ DAC, model ET6402-10DAC-3M, part M0OEC6402T06Z)
+ 
+**NOTE**: The role of the CPE is to get users’ traffic, tag it with a VLAN id, and forward it to the Ethernet Edge switch. Additionally, the CPE sends and receives OAM probes to let CORD monitor the status of the network. For lab trials, a combination of two components has been used to emulate the CPE functionalities: a media converter, used to collect users’ traffic from an Ethernet CAT5/6 interface (where a traditional host, like a laptop, is connected) and send it out from its other SFP interface; and a programmable SFP (plugged into the SFP port of the media converter), that a) tags the traffic with a specific VLAN id and forwards it to the Ethernet Edge switch; b) sends and receives OAM probes to let CORD monitor the network. The programmable SFP is currently configured through NETCONF, using the ONOS Flow Rule abstraction translated into NETCONF XML by the drivers, and the ONOS-based CarrierEthernet application to generate the Flow Rules based on requests.
+
+## Installing the global node
+To install the global orchestrator on a physical node, you should follow the steps described in the main [physical POD installation](https://guide.opencord.org/install_physical.html).
+
+At a high level, bootstrap the development machine and download the code.
+
+### Local POD configuration file
+When it’s time to write your POD configuration, use the [physical-example.yml](https://github.com/opencord/cord/blob/master/podconfig/physical-example.yml) file as a template. Either modify it or make a copy of it in the same directory.
+Fill in the configuration with your own head node data.
+
+As cord_scenario, use *single*. This won’t install OpenStack and other software, which are not needed on the global node.
+
+As cord_profile, use *ecord-global*.
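+
+For reference, the relevant lines of the POD configuration might look like the fragment below (key names assumed to match those in *physical-example.yml*):
+
+```
+cord_scenario: 'single'
+cord_profile: 'ecord-global'
+```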
+
+### POD Build
+Continue the installation as described in the guide by running the make build target:
+
+```
+make build
+```
+
+### DNS Services restart
+As soon as the procedure is finished, you need to restart two services on the POD.
+
+```
+sudo service nsd restart
+sudo service unbound restart
+```
+
+Now your global node is ready to be connected to the local sites.
+
+## Installing an E-CORD local site
+To install the local node you should follow the steps described in the main [physical POD installation](https://guide.opencord.org/install_physical.html). Bootstrap the development machine and download the code.
+When it’s time to write your pod configuration, use the [physical-example.yml](https://github.com/opencord/cord/blob/master/podconfig/physical-example.yml) file as a template. Either modify it or make a copy of it in the same directory.
+Fill in the configuration with your own head node data.
+
+As cord_scenario use *cord*.
+
+As cord_profile use *ecord*.
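+
+As before, the corresponding fragment of the POD configuration might look like the sketch below (key names assumed to match those in *physical-example.yml*):
+
+```
+cord_scenario: 'cord'
+cord_profile: 'ecord'
+```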
+
+## Configure the Global node
+It’s essential to configure the global node properly, to make it aware of the existing local sites and keep them connected and coordinated.
+Configuring the global node consists of two parts: an XOS/TOSCA configuration and an ONOS/JSON configuration.
+
+### Configuring XOS (Tosca Configuration)
+The first part consists of instructing the XOS instance running on the global node about the other XOS instances orchestrating the local sites.
+ 
+To configure your XOS instance, do the following:
+* Create your TOSCA file, using as template the file on your development/management machine, under *CORD_ROOT/orchestration/profiles/ecord/examples/vnaasglobal-service-reference.yaml* (available also online, [here](https://github.com/opencord/ecord/blob/master/examples/vnaasglobal-service-reference.yaml))
+* Save it on the global node.
+* SSH into your global node
+* On the global node, run the following command
+
+```
+python /opt/cord/build/platform-install/scripts/run_tosca.py 9000 xosadmin@opencord.org YOUR_XOS_PASSWORD PATH_TO_YOUR_TOSCA_FILE
+```
+
+**NOTE**: If the XOS password has been auto-generated, you can find it on the global node, in 
+```
+/opt/credentials/xosadmin@opencord.org
+```
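+
+Putting the two steps together, a hedged sketch (*PATH_TO_YOUR_TOSCA_FILE* stands for the file you saved in the previous step):
+
+```
+# Read the auto-generated password, then load the TOSCA file
+XOS_PW=$(sudo cat /opt/credentials/xosadmin@opencord.org)
+python /opt/cord/build/platform-install/scripts/run_tosca.py 9000 xosadmin@opencord.org "$XOS_PW" PATH_TO_YOUR_TOSCA_FILE
+```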
+
+### Configuring ONOS (JSON configuration)
+To configure ONOS on the global node:
+* SSH into the global node
+* Login to ONOS_CORD: *ssh -p 8102 onos@onos-cord*
+* In the ONOS CLI (*onos>*) verify that apps are loaded by executing: *apps -a -s*
+    The following applications should be enabled:
+    ```
+    org.onosproject.drivers
+    org.opencord.ce.api
+    org.opencord.ce.global
+    ```
+    If one or more apps mentioned above are not present in the list, they can be activated with *app activate APP-NAME*
+* Logout from ONOS (CTRL+D or exit)
+* Anywhere, either on the global node itself, or on any machine able to reach the global node, write your ONOS configuration file. The following is an example configuration for a global node that communicates with two local sites, with domain names site1 and site2.
+    
+    ```
+    {
+      "apps" : {
+        "org.opencord.ce.global.vprovider" : {
+          "xos" : {
+            "username" : "xosadmin@opencord.org",
+            "password" : "YOUR_XOS_PASSWORD (see note below)",
+            "address" : "YOUR_GLOBAL_NODE_IP",
+            "resource" : "/xosapi/v1/vnaas/usernetworkinterfaces/"
+          }
+        },
+        "org.opencord.ce.global.channel.http" : {
+          "endPoints" : {
+            "port" : "8182",
+            "topics" : [
+              "ecord-domains-topic-one"
+            ],
+            "domains" :
+            [
+              {
+                "domainId" : "YOUR-SITE1-EXTERNAL-IP-fabric-onos",
+                "publicIp" : "YOUR-SITE1-EXTERNAL-IP",
+                "port" : "8181",
+                "username" : "onos",
+                "password" : "rocks",
+                "topic" : "ecord-domains-topic-one"
+              },
+              {
+                "domainId" : "YOUR-SITE1-EXTERNAL-IP-cord-onos",
+                "publicIp" : "YOUR-SITE1-EXTERNAL-IP",
+                "port" : "8182",
+                "username" : "onos",
+                "password" : "rocks",
+                "topic" : "ecord-domains-topic-one"
+              },
+              {
+                "domainId" : "YOUR-SITE2-EXTERNAL-IP-fabric-onos",
+                "publicIp" : "YOUR-SITE2-EXTERNAL-IP",
+                "port" : "8181",
+                "username" : "onos",
+                "password" : "rocks",
+                "topic" : "ecord-domains-topic-one"
+              },
+              {
+                "domainId" : "YOUR-SITE2-EXTERNAL-IP-cord-onos",
+                "publicIp" : "YOUR-SITE2-EXTERNAL-IP",
+                "port" : "8182",
+                "username" : "onos",
+                "password" : "rocks",
+                "topic" : "ecord-domains-topic-one"
+              }
+            ]
+          }
+        }
+      }
+    }
+    ```
+    
+    **NOTE**: Under the key “topics” you can specify as many topics (strings of your choice) as you prefer, to load-balance the communication with the underlying (local-site) controllers among the instances of the global ONOS cluster. For each domain you also have to specify one of these topics.
+    
+    **NOTE**: If the XOS password has been auto-generated, you can find it on the global node, in /opt/credentials/xosadmin@opencord.org
+
+* Use curl to push your file:
+    ```
+    curl -X POST -H "content-type:application/json"  http://YOUR-GLOBAL-NODE-IP:8182/onos/v1/network/configuration -d @YOUR-JSON-FILE.json --user onos:rocks
+    ```
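+
+To verify that the configuration has been accepted, you can read it back through the same REST API (a sketch; the response should include the *org.opencord.ce.global.channel.http* section you just pushed):
+
+```
+curl --user onos:rocks http://YOUR-GLOBAL-NODE-IP:8182/onos/v1/network/configuration/apps
+```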
+
+## Configure the local sites
+Local site configuration consists of two parts:
+* The ONOS_Fabric configuration
+* The ONOS_CORD configuration
+ 
+The local site configurations explained below need to happen on the head nodes of all local sites.
+
+### ONOS_Fabric configuration
+ONOS_Fabric manages the underlay network (the connectivity between the fabric switches).
+
+To configure ONOS_Fabric do the following:
+* SSH on the local site head node
+* Log into ONOS_Fabric: *ssh -p 8101 onos@onos-fabric*
+* In the ONOS CLI (*onos>*) verify that apps are loaded: *apps -a -s*
+    The list of applications enabled should include:
+    ```
+    org.onosproject.segmentrouting
+    org.opencord.ce.api
+    org.opencord.ce.local.bigswitch
+    org.opencord.ce.local.channel.http
+    org.opencord.ce.local.fabric
+    ```
+    If one or more apps mentioned above are not present in the list, they can be activated with *app activate APP-NAME*
+* Check that the site domain id is correctly set in ONOS, by running
+    ```
+    cfg get org.opencord.ce.local.bigswitch.BigSwitchManager
+    ```
+    The value should be *YOUR-HEAD-NODE-IP-fabric-onos*
+* Check that the fabric switch is connected to ONOS by typing *devices*. If no devices are listed, make sure your fabric switch is connected to ONOS and go through the [Fabric configuration guide](https://guide.opencord.org/appendix_basic_config.html#connect-the-fabric-switches-to-onos).
+* Logout from ONOS (CTRL+D or exit)
+* Anywhere, either on the head node itself, or on any machine able to reach the head node, write your ONOS configuration file.
+    The following is an example configuration for a local site.
+    
+    ```
+    {
+      "apps" : {
+        "org.opencord.ce.local.fabric" : {
+          "segmentrouting_ctl": {
+            "publicIp": "YOUR-HEAD-NODE-IP",
+            "port": "8181",
+            "username": "onos",
+            "password": "rocks",
+            "deviceId": "of:YOUR-FABRIC-SW-DPID"
+          }
+        },
+        "org.opencord.ce.local.bigswitch" : {
+          "mefPorts" :
+          [
+            {
+              "mefPortType" : "INNI",
+              "connectPoint" : "of:YOUR-FABRIC-SW-DPID/PORT-ON-FABRIC-CONNECTING-TO-EE",
+              "interlinkId" : "EE-1-to-fabric"
+            },
+            {
+              "mefPortType" : "ENNI",
                "connectPoint" : "of:YOUR-FABRIC-SW-TO-UPSTREAM-DPID/PORT (duplicate as many times as needed)",
+              "interlinkId" : "fabric-1-to-fabric-2"
+            }
+          ]
+        },
+        "org.opencord.ce.local.channel.http" : {
+          "global" : {
+            "publicIp" : "YOUR-GLOBAL-NODE-IP",
+            "port" : "8182",
+            "username" : "onos",
+            "password" : "rocks",
+            "topic" : "ecord-domains-topic-one"
+          }
+        }
+      }
+    }
+    ```
+
+* Use curl to push your file:
+    ```
+    curl -X POST -H "content-type:application/json"  http://YOUR-LOCAL-HEAD-NODE-IP:8181/onos/v1/network/configuration -d @YOUR-JSON-FILE.json --user onos:rocks
+    ```
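+
+As for the global node, you can read the configuration back to verify it was accepted (a sketch using the standard ONOS REST API):
+
+```
+curl --user onos:rocks http://YOUR-LOCAL-HEAD-NODE-IP:8181/onos/v1/network/configuration/apps
+```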
+
+#### The MEF ports
+Under the key “mefPorts” is the list of physical ports that have to be exposed to the global node. These ports represent MEF ports and can belong to different physical devices, but they will be part of a single abstract “bigswitch” in the topology of the global ONOS (see [E-CORD topology abstraction](https://guide.opencord.org/profiles/ecord/overview.html#e-cord-topology-abstraction)). These ports also represent the boundary between physical topologies controlled by different ONOS controllers.
+
+In the JSON above:
+* *of:YOUR-FABRIC-SW-DPID/PORT-ON-FABRIC-CONNECTING-TO-EE* is the DPID and port of the fabric device the Ethernet Edge is connected to.
+* *of:YOUR-FABRIC-SW-TO-UPSTREAM-DPID/PORT* is the DPID and port of the fabric device the transport network is connected to (in the example, the fabric switches are connected together).
+
+This is hinted through the *interlinkId*.
+
+#### The topic attribute
+The *topic* attribute under the *org.opencord.ce.local.channel.http* is a string of your choice. It is used to run an election in an ONOS cluster to choose the ONOS instance that interacts with the global node.
+
+### ONOS_CORD configuration
+ONOS_CORD manages the overlay network. It controls both the users' CPEs and the Ethernet Edge devices.
+The programmable Microsemi SFP - part of your emulated CPE - is configured and managed through NETCONF (more information about NETCONF and ONOS can be found [here](https://wiki.onosproject.org/display/ONOS/NETCONF)).  
+The Ethernet Edge device is managed through OpenFlow.  
+At the end of the configuration procedure, both devices should show up in ONOS_CORD, which can be confirmed by typing *devices* in the ONOS CLI (*onos>*).
+
+To configure ONOS_CORD do the following:
+* SSH into the head node
+* Login to onos-cord: *ssh -p 8102 onos@onos-cord*
+* At the ONOS CLI verify that apps are loaded: *apps -a -s*
+    The following applications should be enabled:
+    ```
+    org.onosproject.drivers.microsemi
+    org.onosproject.cfm
+    org.opencord.ce.api
+    org.opencord.ce.local.bigswitch
+    org.opencord.ce.local.channel.http
+    org.opencord.ce.local.vee
+    ```
+    If one or more apps mentioned above are not present in the list, they can be activated with *app activate APP-NAME*
+* Create a new JSON file and write your configuration. As a template, you can use the JSON below:
+    ```
+    {
+     "devices": {
+      "netconf:YOUR-CPE-IP:830": {
+       "netconf": {
+         "username": "admin",
+         "password": "admin",
+         "ip": "YOUR-CPE-IP",
+         "port": "830"
+       },
+       "basic": {
+        "driver": "microsemi-netconf",
+        "type": "SWITCH",
+        "manufacturer": "Microsemi",
+        "hwVersion": "EA1000"
+       }
+      }
+     },
+     "links": {
+      "netconf:YOUR-CPE-IP:830/0-of:WHERE-YOUR-CPE-IS-CONNECTED (EE DPID/PORT)" : {
+       "basic" : {
+        "type" : "DIRECT"
+       }
+      },
+      "of:WHERE-YOUR-CPE-IS-CONNECTED (EE DPID/PORT)-netconf:YOUR-CPE-IP:830/0" : {
+       "basic" : {
+        "type" : "DIRECT"
+       }
+      }
+     },
+     "apps" : {
+      "org.opencord.ce.local.bigswitch" : {
+       "mefPorts" :
+        [
+         {
+          "mefPortType" : "UNI",
+          "connectPoint" : "netconf:YOUR-CPE-IP:830/0"
+         },
+         {
+          "mefPortType" : "INNI",
+          "connectPoint" : "of:DPID-AND-PORT-OF-EE-CONNECTING-TO-FABRIC",
+          "interlinkId" : "EE-2-fabric"
+         }
+        ]
+      },
+      "org.opencord.ce.local.channel.http" : {
+       "global" : {
+        "publicIp" : "YOUR-GLOBAL-NODE-IP",
+        "port" : "8182",
+        "username" : "onos",
+        "password" : "rocks",
+        "topic" : "ecord-domains-topic-one"
+       }
+      }
+     }
+    }
+    ```
+* Load the JSON file just created, using curl:
+    ```
+    curl -X POST -H "content-type:application/json" http://YOUR-LOCAL-SITE-HEAD-IP:8182/onos/v1/network/configuration -d @YOUR-JSON-FILE.json --user onos:rocks
+    ```
+
+**Warning**: The JSON above tries to configure devices and links at the same time. ONOS may deny the link-creation part of the request if it does not yet find the devices (because their creation is still in progress). If this happens, just wait a few seconds and push the same configuration again, using the *curl* command above.
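+
+A minimal retry sketch of the push above (same placeholders as the earlier *curl* command; the attempt count and pause are arbitrary):
+
+```
+# Retry the push until ONOS accepts the devices-plus-links configuration
+for attempt in 1 2 3; do
+  curl -sf -X POST -H "content-type:application/json" \
+    http://YOUR-LOCAL-SITE-HEAD-IP:8182/onos/v1/network/configuration \
+    -d @YOUR-JSON-FILE.json --user onos:rocks && break
+  sleep 5
+done
+```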
+
+## Configure the Ethernet Edge device (Centec v350)
+The steps below assume that:
+* The Centec device is connected to an A(Access)-leaf fabric switch
+* ONOS_CORD is running
+
+Follow the steps below to assign an IP address to the Ethernet Edge device and connect it to ONOS_CORD.
+
+### Set a management IP address on the switch OOB interface
+The switch management interface should be set with a static IP address (DHCP is not supported yet), in the same subnet as the POD internal/management network (by default 10.6.0.0/24).
+
+**NOTE**: Please use high values for the last octet of the IP address, since lower values are usually allocated first by the MAAS DHCP server running on the head node.
+
+To configure the static IP address, do the following:
+* Log into the CLI of the Centec switch (through SSH or console cable)
+* *configure terminal*
+* *management ip address YOUR_MGMT_ADDRESS netmask YOUR_NETMASK*
+* *end*
+* *show management ip address*
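+
+Put together, the session on the Centec CLI might look like the transcript below (the prompt, example address and netmask are illustrative only):
+
+```
+Centec# configure terminal
+Centec(config)# management ip address 10.6.0.200 netmask 255.255.255.0
+Centec(config)# end
+Centec# show management ip address
+```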
+
+### Set ONOS-CORD as the OpenFlow controller
+To set ONOS-CORD as the default switch OpenFlow controller and verify the configuration:
+* Log into the CLI of the Centec switch (through SSH or console cable)
+* *configure terminal*
+* *openflow set controller tcp YOUR-LOCAL-SITE-HEAD-IP 6654*
+* *end*
+* *show openflow controller status*
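+
+Similarly, a hedged transcript for the controller setup (prompt illustrative; replace the placeholder with your head node IP):
+
+```
+Centec# configure terminal
+Centec(config)# openflow set controller tcp YOUR-LOCAL-SITE-HEAD-IP 6654
+Centec(config)# end
+Centec# show openflow controller status
+```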
+
+# Done!
+After the global node and the local sites are properly configured, the global node should maintain an abstract view of the topology, and you should see UNIs distributed on the map of the XOS GUI. You can now submit requests to XOS to set up Ethernet Virtual Circuits (EVCs).
diff --git a/docs/overview.md b/docs/overview.md
new file mode 100644
index 0000000..5c59139
--- /dev/null
+++ b/docs/overview.md
@@ -0,0 +1,100 @@
+# What is E-CORD?
+
+Enterprise CORD (E-CORD) is a CORD use-case that offers enterprise connectivity services over metro and wide area networks, using open source software and commodity hardware.
+
+<img src="static/images/overview.png" alt="E-CORD overview" style="width: 800px;"/>
+
+E-CORD builds on the CORD infrastructure to support enterprise customers, and allows Service Providers to offer enterprise connectivity services (L2 and L3VPN).
+It can go far beyond these simple connectivity services, as it includes Virtual Network Functions (VNFs) and service composition capabilities to support disruptive cloud-­based enterprise services.
+In turn, enterprise customers can use E-CORD to rapidly create on-­demand networks between any number of endpoints or company branches. These networks are dynamically configurable, implying connection attributes and SLAs can be specified and provisioned on the fly. Furthermore, enterprise customers may choose to run network functions such as firewalls, WAN accelerators, traffic analytic tools, virtual routers, etc. as on­-demand services that are provisioned and maintained inside the service provider network.
+
+# Glossary
+
+This section provides a list of the basic terms used in E-CORD.
+ 
+* **CO/Local POD**: Each local CORD POD (also identified in this guide as an E-CORD site) is a standard CORD POD equipped with specific access equipment, such as the “Enterprise Edge”. It is usually located in the Service Provider’s Central Offices and is mainly used to: a) connect the enterprise user to the Service Provider network; b) run value-added user services at the edge of the network, close to the user, such as firewalls, traffic analytic tools or WAN accelerators. Upstream, the POD usually connects to the Service Provider metro/transport network or to the public Internet.
+* **Global Node**: The global node is a single machine running either in the cloud, or in any other part of the Service Provider network, that acts as a general orchestrator coordinating all the local PODs of the E-CORD deployment. It is composed of an instance of XOS and an instance of ONOS.
+
+# System overview
+
+A typical E-CORD deployment is made of one orchestrator “global node” and multiple (at least two) CORD sites (PODs), connected to the same transport network.
+
+<img src="static/images/ecord-dataplane.png" alt="E-CORD dataplane" style="width: 800px;"/>
+
+Each site is a “typical” CORD POD comprised of one or more compute nodes, and one or more fabric switches (see here for details).
+The transport network provides connectivity between the CORD sites. It can be anything from a converged packet-optical network, to a single packet switch. The transport network can be composed of white-box switches, legacy equipment, or a mix of both. The minimum requirement in order to deploy E-CORD is to provide Layer 2 connectivity between the PODs, specifically between the leaf fabric switches, facing the upstream/metro network of the COs. 
+
+**INFO** Usually, for lab trials, the leaf switches of the two sites (PODs) are connected directly through a cable, or through a legacy L2 switch.
+
+## The E-CORD global node
+The global node is responsible for orchestrating users’ requests and for managing connectivity and service provisioning.
+ 
+It runs only an instance of the orchestrator, XOS, and an instance of ONOS (specifically, ONOS_CORD).
+ 
+The ONOS instance (ONOS_CORD) manages the end-to-end traffic forwarding between the sites.
+ 
+The global ONOS is composed of three modules:
+* **Carrier Ethernet Orchestrator** application, exposing a NB REST interface to receive requests to create Ethernet Virtual Circuits (EVCs).
+* **Virtual Provider**, that manages the abstract view of the local E-CORD sites topologies.
+* **Global HTTP channel** for the communication with the underlying local sites. 
+
+## The E-CORD local sites
+Local sites are responsible for collecting users’ traffic and forwarding it either to other local sites or to the Internet.
+Each local site comes with two ONOS controllers that are part of the reference architecture of CORD: ONOS_CORD and ONOS_Fabric.
+ 
+The Carrier Ethernet application of E-CORD uses both controllers to provision the physical network:
+* **ONOS_CORD** runs the application that controls the edge access: the CPE devices and the Ethernet Edge (EE) devices.
+* **ONOS_Fabric** runs the application configuring the cross connections within the fabric of CORD (Trellis), to bridge the CPEs to the transport network and to the remote sites. 
+ 
+The components added to the ONOS_CORD instance are these applications:
+* CE-API
+* Bigswitch service
+* HTTP channel 
+* CE-VEE
+ 
+The components added to the ONOS_Fabric instance are these applications:
+* CE-API
+* Bigswitch service
+* HTTP channel 
+* CE-fabric
+ 
+The CE-API, the bigswitch service and the HTTP channel are common to both ONOS_CORD and ONOS_Fabric. They are used to enable communication between the ONOS instance running on the global node and the ones running on the local sites.
+The local sites abstract their network topology and expose it to the global node; they receive requests from the global ONOS, and provision the network for the EVC setup.
+
+## XOS service chain
+XOS is the default CORD orchestrator. More info about it can be found here.
+ 
+The global POD implements only a single service, **vNaaS** (**virtual Network as a Service**), which is responsible for orchestrating the end-to-end connectivity between the PODs.
+ 
+E-CORD local sites implement multiple services -wired together- that control the underlying hardware and software, and are thus able to connect the Enterprise Subscriber from the edge of the Central Office to the upstream network.
+ 
+A representation of the default E-CORD local site service chain is shown in the picture below.
+
+<img src="static/images/xos-service-chain.png" alt="XOS Service Chain" style="width: 600px;"/>
+
+The local chain comprises five services, each with different responsibilities.
+ 
+* **vCPE (virtual Customer Premise Equipment)** inserts an 802.1ad (QinQ) header into the upstream packets of an Enterprise Subscriber and removes it for downstream flows. Packets are tagged with a vlanId (the Service Provider tag, s-tag) and then forwarded to the Ethernet Edge. The s-tag is associated with one or more c-tags (customer tags, the vlanId of the 802.1Q protocol) to isolate the traffic within the Service Provider network. The vCPE function is very similar to that of vOLT in R-CORD.
+* **vEE (virtual Ethernet Edge)** aggregates traffic from multiple customers with CPEs and applies policing and forwarding to the fabric. The vEE is also responsible for making routing decisions on the traffic. The vEE filters the traffic meant for the enterprise network and sends it through pseudo-wire in the fabric directly to the transport network and the other enterprise branch. If the vEE recognizes that the traffic is instead meant for the public Internet, it will send it to the vEG and then to the vRouter.
+* **vEG (virtual Enterprise Gateway)** runs the subscriber’s desired VNFs, such as bandwidth metering, firewall, diagnostics, etc., and possibly forwards the traffic to the vRouter.
+* **vRouter (virtual Router)** is responsible for sending traffic out of the CORD POD to the public Internet. It is the same service that is present in the R-CORD service chain. More information about it can be found [here](https://guide.opencord.org/vrouter.html#connecting-to-upstream-networks-using-vrouter)
+* **PWaaS** enables a “pass-through” in the leaf-spine fabric (Trellis), allowing traffic to go directly out to the metro network. This is done through a configuration of the segment routing application.
+
+## E-CORD Topology Abstraction
+The global node maintains an abstract view of the underlying topology for the sake of scalability and to separate domain-specific and domain-agnostic concerns. Note that in a single CORD POD there are two independent ONOS controllers: ONOS_CORD and ONOS_Fabric. For each local site, the E-CORD application exposes two abstract devices to the global node: one exposed by ONOS_CORD and the other by ONOS_Fabric. They represent an aggregation of the real network elements that compose the topology of a local site.
+
+The picture below shows how the two topologies get abstracted and exposed to the ONOS instance running on the global node.
+
+<img src="static/images/topology-abstraction-01.png" alt="XOS Service Chain" style="width: 600px;"/>
+
+This way, the global ONOS has fewer devices and link data structures to deal with. Path computation will involve only these aggregated items, while the actual network provisioning will be achieved by the local site controllers.
+ 
+The relevant topology information for the global node includes:
+* **User-to-Network Interfaces (UNIs)** - see [MEF specs](https://www.mef.net/resources/technical-specifications)
+* **Network-to-Network Interfaces (NNIs)** - see [MEF specs](https://www.mef.net/resources/technical-specifications)
+* **Associated bandwidth capacities** for admission control
+ 
+The NNI ports are annotated with an *interlinkId* so that the global node can understand which ports are at the ends of which inter-domain links.
+An inter-domain link, from now on simply called an interlink, is a link that connects two devices controlled by different ONOS controllers at the local level (see the ONOS_Fabric and ONOS_CORD JSON configuration sections in the installation guide for more examples).
+
+<img src="static/images/topology-abstraction-02.png" alt="XOS Service Chain" style="width: 600px;"/>
diff --git a/docs/static/images/ecord-dataplane.png b/docs/static/images/ecord-dataplane.png
new file mode 100644
index 0000000..fdc186d
--- /dev/null
+++ b/docs/static/images/ecord-dataplane.png
Binary files differ
diff --git a/docs/static/images/overview.png b/docs/static/images/overview.png
new file mode 100644
index 0000000..74b1414
--- /dev/null
+++ b/docs/static/images/overview.png
Binary files differ
diff --git a/docs/static/images/topology-abstraction-01.png b/docs/static/images/topology-abstraction-01.png
new file mode 100644
index 0000000..50bdda7
--- /dev/null
+++ b/docs/static/images/topology-abstraction-01.png
Binary files differ
diff --git a/docs/static/images/topology-abstraction-02.png b/docs/static/images/topology-abstraction-02.png
new file mode 100644
index 0000000..04b93a0
--- /dev/null
+++ b/docs/static/images/topology-abstraction-02.png
Binary files differ
diff --git a/docs/static/images/xos-service-chain.png b/docs/static/images/xos-service-chain.png
new file mode 100644
index 0000000..780111d
--- /dev/null
+++ b/docs/static/images/xos-service-chain.png
Binary files differ