XOS Configuration for CORD development POD

Introduction

This directory holds files that are used to configure a development POD for CORD. For more information on the CORD project, including how to get started, check out the CORD wiki.

XOS is composed of several core services that are typically containerized. Dynamic On-boarding System and Service Profiles describes these containers and how they fit together.

This document primarily describes how to start the cord-pod service profile. On an installed POD, this profile is usually located at ~/service-profile/cord-pod/ inside the xos virtual machine.

Prerequisites

The following prerequisites should be met:

  1. OpenStack should be installed, and OpenStack services (keystone, nova, neutron, glance, etc.) should be started.
  2. ONOS should be installed, and at a minimum, ONOS-Cord should be running the VTN app.
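As a quick sanity check of the OpenStack prerequisite, you can query the service catalog from a node with admin credentials loaded. The credentials file name varies by install; the one shown here is an assumption:

```shell
# Load admin credentials first (file name depends on your install).
source admin-openrc.sh

# Keystone, nova, neutron, glance, etc. should appear in the catalog.
openstack service list

# Compute services should report state "up".
openstack compute service list
```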

Makefile Targets to launch this service-profile

These are generally executed in sequence:

make local_containers

Builds the xosproject/xos, xosproject/xos-synchronizer, and xosproject/xos-onboarding-synchronizer container images from source.

make

Bootstraps XOS and onboards a stack of typical CORD services. Although the services are onboarded, they are not yet configured.

make vtn

Configures the vtn service. If you are using a custom platform that differs from a typical single-node-pod experiment, run make vtn-external.yaml to generate the configuration file, edit the generated vtn-external.yaml to match your platform, and then run make vtn.
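For the custom-platform case described above, the sequence looks like this (the choice of editor is arbitrary):

```shell
# Generate the VTN configuration file without applying it.
make vtn-external.yaml

# Edit the generated file to match your platform's network setup.
vi vtn-external.yaml

# Apply the (now customized) VTN configuration.
make vtn
```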

make fabric

Configures the fabric service.

make cord

Configures the cord stack.

make cord-subscriber

Creates a sample subscriber in the cord stack.

make exampleservice

Builds an example service that launches a web server.

Utility Makefile targets

make stop

Stops all running containers.

make rm

Stops all running containers and then permanently destroys them. Because the database is destroyed, all data will be lost.

make cleanup

Performs both make stop and make rm, and then goes to some extra effort to destroy associated networks, VMs, etc. This is handy when developing with single-node-pod, as it cleans up the XOS installation and allows the profile to be started fresh.

Developer workflow

A common developer workflow that involves completely restarting the profile is:

  1. Upload new code
  2. Execute make cleanup; make; make vtn; make fabric; make cord; make cord-subscriber; make exampleservice
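Run from the cord-pod service-profile directory, the restart sequence above expands to:

```shell
# Tear down the previous installation, then bring the profile up
# and configure each service in order.
make cleanup
make
make vtn
make fabric
make cord
make cord-subscriber
make exampleservice
```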

Useful diagnostics

Checking that VTN is functional

Before proceeding, check that the VTN app is controlling Open vSwitch on the compute nodes. Log into ONOS and run the cordvtn-nodes command:

$ ssh -p 8101 karaf@onos-cord   # password is karaf
onos> cordvtn-nodes
hostname=nova-compute, hostMgmtIp=192.168.122.177/24, dpIp=192.168.199.1/24, br-int=of:0000000000000001, dpIntf=veth1, init=COMPLETE
Total 1 nodes

The important part is the init=COMPLETE at the end. If you do not see this, refer to the CORD VTN page on the ONOS Wiki for help fixing the problem. This must be working to bring up VMs on the POD.

Inspecting the vSG

The above series of make commands will spin up a vSG for a sample subscriber. The vSG is implemented as a Docker container (using the andybavier/docker-vcpe image hosted on Docker Hub) running inside an Ubuntu VM. Once the VM is created, you can log in as the ubuntu user at the management network IP (172.27.0.x) on the compute node hosting the VM, using the private key generated on the head node by the install process. For example, in the single-node development POD configuration, you can log in to the VM with management IP 172.27.0.2 using a ProxyCommand as follows:

ubuntu@pod:~$ ssh -o ProxyCommand="ssh -W %h:%p ubuntu@nova-compute" ubuntu@172.27.0.2

Alternatively, you could copy the generated private key to the compute node and log in from there:

ubuntu@pod:~$ scp ~/.ssh/id_rsa ubuntu@nova-compute:~/.ssh
ubuntu@pod:~$ ssh ubuntu@nova-compute
ubuntu@nova-compute:~$ ssh ubuntu@172.27.0.2

Once logged in to the VM, you can run sudo docker ps to see the running vSG containers:

ubuntu@mysite-vsg-1:~$ sudo docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS               NAMES
2b0bfb3662c7        andybavier/docker-vcpe   "/sbin/my_init"     5 days ago          Up 5 days                               vcpe-222-111

Logging into XOS on CloudLab (or any remote host)

The XOS service is accessible on the POD at http://xos/, but xos maps to a private IP address on the management network. If you install CORD on CloudLab, you will not be able to reach the XOS GUI directly. To access the XOS GUI from the browser on your local machine (desktop or laptop), set up an SSH tunnel to your CloudLab node. Assuming that <your-cloudlab-node> is the DNS name of the CloudLab node hosting your experiment, run the following on your local machine to create the tunnel:

$ ssh -L 8888:xos:80 <your-cloudlab-node>

Then you should be able to access the XOS GUI by pointing your browser to http://localhost:8888. Default username/password is padmin@vicci.org/letmein.
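To confirm the tunnel is working before opening the browser, you can probe the forwarded port with curl (assuming curl is available on your local machine):

```shell
# With the tunnel from above still open in another terminal,
# fetch just the response headers from the forwarded XOS port.
curl -I http://localhost:8888/
```

An HTTP response (even a redirect to the login page) indicates the tunnel is forwarding traffic to XOS.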