This directory holds files that are used to configure a development POD for CORD. For more information on the CORD project, including how to get started, check out the CORD wiki.
XOS is composed of several core services that are typically containerized. Dynamic On-boarding System and Service Profiles describes these containers and how they fit together.
This document is primarily focused on how to start the cord-pod service profile. On an installed POD, this profile is usually located at ~/service-profile/cord-pod/ inside the xos virtual machine.
A set of prerequisites must be met before starting the profile; the usual way to meet them is by following one of the methods of building a CORD POD on the CORD Wiki.
The following `make` targets are generally executed in sequence:
- `make local_containers`: Builds the `xosproject/xos`, `xosproject/xos-synchronizer`, and `xosproject/xos-onboarding-synchronizer` container images from source.
- `make`: Bootstraps XOS and onboards a stack of typical CORD services. While the services are onboarded, they are not yet configured.
- `make vtn`: Configures the VTN service. If you are using a custom platform that differs from a typical single-node-pod experiment, you may wish to run `make vtn-external.yaml`, edit the autogenerated `vtn-external.yaml`, and then run `make vtn`.
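For a custom platform, the VTN customization steps above can be sketched as the following shell sequence (the `make` targets and file name are as given above; the choice of editor is illustrative):

```shell
# Generate the VTN configuration without applying it
make vtn-external.yaml

# Adjust the autogenerated configuration for your platform
vi vtn-external.yaml

# Apply the (now customized) VTN configuration
make vtn
```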
- `make fabric`: Configures the fabric service.
- `make vrouter`: Configures the vrouter service.
- `make cord`: Configures the cord stack.
- `make cord-subscriber`: Creates a sample subscriber in the cord stack.
- `make exampleservice`: Builds an example service that launches a web server.
- `make cleanup`: Performs both `make stop` and `make rm`, and then goes to some extra effort to destroy associated networks, VMs, etc. This is handy when developing using single-node-pod, as it cleans up the XOS installation and allows the profile to be started fresh.
A common developer workflow that involves completely restarting the profile is:

```
make cleanup; make local_containers; make; make vtn; make fabric; make cord; make cord-subscriber; make exampleservice
```
This workflow exercises many of the capabilities of a CORD POD. Among other things, it rebuilds the container images, brings up the CORD stack, onboards exampleservice (described in the Tutorial on Assembling and On-Boarding Services), and creates an exampleservice tenant (which creates a VM and loads and configures Apache in it).

Before proceeding, check that the VTN app is controlling Open vSwitch on the compute nodes. Log into ONOS and run the `cordvtn-nodes` command:
```
$ ssh -p 8101 karaf@onos-cord   # password is karaf
onos> cordvtn-nodes
hostname=nova-compute, hostMgmtIp=192.168.122.177/24, dpIp=192.168.199.1/24, br-int=of:0000000000000001, dpIntf=veth1, init=COMPLETE
Total 1 nodes
```
The important part is the `init=COMPLETE` at the end. If you do not see this, refer to the CORD VTN Configuration Guide for help fixing the problem. This must be working to bring up VMs on the POD.
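If you want to script this readiness check, one sketch (assuming the ONOS CLI is reachable as shown above) is to capture the command output and grep for the `init=COMPLETE` marker:

```shell
# Sketch: check that a cordvtn node reports init=COMPLETE.
# On a live POD the output would be captured from ONOS, e.g.:
#   output=$(ssh -p 8101 karaf@onos-cord cordvtn-nodes)
# For illustration, we use the sample output from this guide.
output="hostname=nova-compute, hostMgmtIp=192.168.122.177/24, dpIp=192.168.199.1/24, br-int=of:0000000000000001, dpIntf=veth1, init=COMPLETE"

if echo "$output" | grep -q 'init=COMPLETE'; then
  echo "VTN node initialization complete"
else
  echo "VTN node NOT ready -- see the CORD VTN Configuration Guide" >&2
fi
```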
The above series of `make` commands will spin up a vSG for a sample subscriber. The vSG is implemented as a Docker container (using the andybavier/docker-vcpe image hosted on Docker Hub) running inside an Ubuntu VM. Once the VM is created, you can log in as the `ubuntu` user at the management network IP (172.27.0.x) on the compute node hosting the VM, using the private key generated on the head node by the install process. For example, in the single-node development POD configuration, you can log in to the VM with management IP 172.27.0.2 using a ProxyCommand as follows:
```
ubuntu@pod:~$ ssh -o ProxyCommand="ssh -W %h:%p ubuntu@nova-compute" ubuntu@172.27.0.2
```
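If you connect to this VM often, the same ProxyCommand can be made persistent with an ssh_config entry. This is a sketch assuming the host names from the example above; the `vsg-vm` alias is an illustrative name, not part of the install:

```
# ~/.ssh/config on the head node (hypothetical alias)
Host vsg-vm
    HostName 172.27.0.2
    User ubuntu
    ProxyCommand ssh -W %h:%p ubuntu@nova-compute
```

With this in place, `ssh vsg-vm` is equivalent to the one-line command above.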
Alternatively, you could copy the generated private key to the compute node and log in from there:

```
ubuntu@pod:~$ scp ~/.ssh/id_rsa ubuntu@nova-compute:~/.ssh
ubuntu@pod:~$ ssh ubuntu@nova-compute
ubuntu@nova-compute:~$ ssh ubuntu@172.27.0.2
```
Once logged in to the VM, you can run `sudo docker ps` to see the running vSG containers:

```
ubuntu@mysite-vsg-1:~$ sudo docker ps
CONTAINER ID   IMAGE                    COMMAND           CREATED      STATUS      PORTS   NAMES
2b0bfb3662c7   andybavier/docker-vcpe   "/sbin/my_init"   5 days ago   Up 5 days           vcpe-222-111
```
The CORD POD installation process forwards port 80 on the head node to the xos VM. You should be able to access the XOS GUI by simply pointing your browser at the head node. The default username/password is `padmin@vicci.org` / `letmein`.