This directory holds files that are used to configure a development POD for CORD. For more information on the CORD project, check out the CORD website.
XOS is composed of several core services, each of which runs in a separate Docker container. The containers are built automatically by Docker Hub using the HEAD of the XOS repository.
Installing a CORD POD involves these steps:

1. Install OpenStack on the cluster.
2. Bring up ONOS in the onos-cord VM.
3. Set up external connectivity for VMs (if you are not using the CORD fabric).
4. Bring up XOS with the CORD services.

To install OpenStack, follow the instructions in the README.md file of the open-cloud/openstack-cluster-setup repository.
The OpenStack installer above creates a VM called onos-cord on the head node. To bring up ONOS in this VM, log into the head node and run:
```
$ ssh ubuntu@onos-cord
ubuntu@onos-cord:~$ cd cord; sudo docker-compose up -d
```
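To confirm that the ONOS containers came up, you can list them from the same directory. This check is optional and simply assumes that docker-compose is managing the containers, as the command above indicates:

```
ubuntu@onos-cord:~/cord$ sudo docker-compose ps
```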
The CORD fabric is responsible for providing external (Internet) connectivity for VMs created on CORD. If you are running on CloudLab (or another development environment) and want external connectivity without the fabric, download this script and run it as root:
```
$ sudo compute-ext-net.sh
```
The script creates a bridge (databr) on the node as well as a veth pair (veth0/veth1). The veth0 interface is added as a port on databr, and VTN is configured to use veth1 as its data-plane interface. Traffic coming from databr is NATed to the external network via iptables. The configuration assumes that databr takes on the MAC address of veth0 when it is added as a port; this appears to always be the case, though the reason is unclear.
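For a mental model of what the script does, here is a rough sketch of the kind of commands involved. It is illustrative only: the 10.168.0.0/24 subnet is a placeholder, and the real compute-ext-net.sh may use different commands and addresses.

```
# Illustrative sketch only -- consult compute-ext-net.sh for the real commands.
# Create the bridge and veth pair, and attach veth0 to the bridge.
brctl addbr databr
ip link add veth0 type veth peer name veth1
brctl addif databr veth0
ip link set databr up
ip link set veth0 up
ip link set veth1 up

# Give the bridge an address on the (placeholder) VM data subnet, then
# NAT traffic from that subnet out to the external network.
ip addr add 10.168.0.1/24 dev databr
iptables -t nat -A POSTROUTING -s 10.168.0.0/24 ! -d 10.168.0.0/24 -j MASQUERADE
```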
Note that setting up the full fabric is beyond the scope of this README.
The OpenStack installer above creates a VM called xos on the head node. To bring up XOS in this VM, first log into the head node and run:
```
$ ssh ubuntu@xos
ubuntu@xos:~$ cd xos/xos/configurations/cord-pod
```
Next, check that the following files exist in this directory:
They will have been put there for you by the cluster installation scripts.
If your setup uses the CORD fabric, you need to modify the autogenerated VTN configuration and edit `cord-vtn-vsg.yml` as follows.
The VTN app configuration is autogenerated by XOS. For more information about the configuration, see this page on the ONOS Wiki, under the ONOS Settings heading. To see the generated configuration, go to http://xos/admin/onos/onosapp/, click on VTN_ONOS_app, then the Attributes tab, and look for the `rest_onos/v1/network/configuration/` attribute. You can edit this configuration after deleting the `autogenerate` attribute (otherwise XOS will overwrite your changes), or you can change the other attributes and delete `rest_onos/v1/network/configuration/` so that XOS regenerates it.
Modify `cord-vtn-vsg.yml` and set these parameters to the appropriate values for the fabric (see the sketch after this list):

- `public_addresses:properties:addresses` (the IP address block of the fabric)
- `service_vsg:properties:wan_container_gateway_ip` (same as `publicGateway:gatewayIp` from the VTN configuration)
- `service_vsg:properties:wan_container_gateway_mac` (same as `publicGateway:gatewayMac` from the VTN configuration)
- `service_vsg:properties:wan_container_netbits` (the number of bits in the netmask of the fabric's IP address block)

If you're not using the fabric, then the default values should be OK.
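For orientation, the fragment below sketches roughly where those properties live in `cord-vtn-vsg.yml`. The property paths come from the list above; the values shown (addresses, gateway IP/MAC, netbits) are placeholders, and the actual file is a TOSCA template with additional fields not shown here.

```yaml
# Illustrative fragment only -- all values are placeholders.
public_addresses:
  properties:
    addresses: 10.6.1.128/26                      # fabric IP address block

service_vsg:
  properties:
    wan_container_gateway_ip: 10.6.1.129          # publicGateway:gatewayIp
    wan_container_gateway_mac: 02:42:0a:06:01:01  # publicGateway:gatewayMac
    wan_container_netbits: 26                     # netmask bits of the fabric block
```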
XOS can then be brought up for CORD by running a few `make` commands:

```
ubuntu@xos:~/xos/xos/configurations/cord-pod$ make
ubuntu@xos:~/xos/xos/configurations/cord-pod$ make vtn
ubuntu@xos:~/xos/xos/configurations/cord-pod$ make cord
```
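If you want to verify that the XOS containers started after these commands, one simple check is to list the running containers on the xos VM; the container names you see will depend on the compose files used by the Makefile:

```
ubuntu@xos:~/xos/xos/configurations/cord-pod$ sudo docker ps
```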
After the first `make` command above, you will be able to log in to XOS at http://xos/ using the username/password padmin@vicci.org / letmein.
The above series of `make` commands will spin up a vSG for a sample subscriber. The vSG is implemented as a Docker container (using the andybavier/docker-vcpe image hosted on Docker Hub) running inside an Ubuntu VM. Once the VM is created, you can log in as the `ubuntu` user at the management network IP (172.27.0.x) on the compute node hosting the VM, using the private key generated on the head node by the install process. For example, in the single-node development POD configuration, you can log in to the VM with management IP 172.27.0.2 using a ProxyCommand as follows:
```
ubuntu@pod:~$ ssh -o ProxyCommand="ssh -W %h:%p ubuntu@nova-compute" ubuntu@172.27.0.2
```
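If you connect to this VM often, an equivalent convenience is a host entry in ~/.ssh/config on the head node. This is optional, and the alias vsg-vm below is made up for illustration:

```
# ~/.ssh/config on the head node (the Host alias is illustrative)
Host vsg-vm
    HostName 172.27.0.2
    User ubuntu
    ProxyCommand ssh -W %h:%p ubuntu@nova-compute
```

With this entry in place, `ssh vsg-vm` behaves like the ProxyCommand invocation above.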
Alternatively, you could copy the generated private key to the compute node and log in from there:
```
ubuntu@pod:~$ scp ~/.ssh/id_rsa ubuntu@nova-compute:~/.ssh
ubuntu@pod:~$ ssh ubuntu@nova-compute
ubuntu@nova-compute:~$ ssh ubuntu@172.27.0.2
```
Once logged in to the VM, you can run `sudo docker ps` to see the running vSG containers:

```
ubuntu@mysite-vsg-1:~$ sudo docker ps
CONTAINER ID        IMAGE                    COMMAND           CREATED      STATUS      PORTS     NAMES
2b0bfb3662c7        andybavier/docker-vcpe   "/sbin/my_init"   5 days ago   Up 5 days             vcpe-222-111
```
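To look around inside a vSG container, you can open a shell in it with docker exec. The container name vcpe-222-111 is taken from the sample output above and will differ on your POD:

```
ubuntu@mysite-vsg-1:~$ sudo docker exec -it vcpe-222-111 /bin/bash
```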