This tutorial walks through the steps to bring up a demonstration CORD "POD", running in virtual machines on a single physical server (a.k.a. "CORD-in-a-Box"). The purpose of this demonstration POD is to enable those interested in understanding how CORD works to examine and interact with a running CORD environment. It is a good place for novice CORD users to start.
NOTE: This tutorial installs a simplified version of a CORD POD on a single server using virtual machines. If you are looking for instructions on how to install a multi-node POD, or for more details about the actual build process, see `quickstart_physical.md`.
You will need a target server, which will run both a development environment in a Vagrant VM (used to deploy CORD) and CORD-in-a-Box itself.
Target server requirements:

* A 64-bit server powerful enough to host the several VMs that the build creates (the CloudLab machines described below are known to be sufficient)
* Ubuntu 14.04 LTS, freshly installed
* A user account with password-less `sudo` capability (e.g., user `ubuntu`)

If you do not have a target server available that meets the above requirements, you can borrow one on CloudLab. Sign up for an account using your organization's email address and choose "Join Existing Project"; for "Project Name" enter `cord-testdrive`.
NOTE: CloudLab is supporting CORD as a courtesy. It is expected that you will not use CloudLab resources for purposes other than evaluating CORD. If, after a week or two, you wish to continue using CloudLab to experiment with or develop CORD, then you must apply for your own separate CloudLab project.
Once your account is approved, start an experiment using the `OnePC-Ubuntu14.04.5` profile on either the Wisconsin or Clemson cluster. This will provide you with a temporary target server meeting the above requirements. Refer to the CloudLab documentation for more information.
On the target server, download the script that installs CORD-in-a-Box. Then run it, saving the screen output to a file called `install.out`:

```
curl -o ~/cord-in-a-box.sh https://raw.githubusercontent.com/opencord/cord/master/scripts/cord-in-a-box.sh
bash ~/cord-in-a-box.sh -t | tee ~/install.out
```
The script takes a long time (at least two hours) to run. Be patient! If it hasn't completely failed yet, then assume all is well!
The script builds CORD-in-a-Box and runs a couple of tests to ensure that things are working as expected. Once it has finished running, you'll see a `BUILD SUCCESSFUL` message.
The file `~/install.out` contains the full output of the build process.
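A quick way to verify the outcome (a minimal check that simply searches the saved log for the success marker mentioned above):

```
grep "BUILD SUCCESSFUL" ~/install.out
```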
There is a `-b` option to `cord-in-a-box.sh` that will check out a specific changeset from a Gerrit repo during the run. The syntax for this is `<project path>:<changeset>/<revision>`. It can be used multiple times; for example:

```
bash ~/cord-in-a-box.sh -b build/platform-install:1233/4 -b orchestration/service-profile:1234/2
```
This will check out the `platform-install` repo with changeset 1233, revision 4, and the `service-profile` repo with changeset 1234, revision 2.
You can find the project path used by the `repo` tool in the `manifest/default.xml` file.
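For example (a hypothetical lookup, assuming the CORD tree was checked out to `~/opencord` on the target server as in the examples below):

```
grep "platform-install" ~/opencord/manifest/default.xml
```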
CORD-in-a-Box creates a virtual CORD POD running inside Vagrant VMs. You can inspect their current status as follows:
```
~$ cd opencord/build/
~/opencord/build$ vagrant status
Current machine states:

corddev                   running (libvirt)
prod                      running (libvirt)
switch                    not created (libvirt)
testbox                   not created (libvirt)
compute_node-1            running (libvirt)
compute_node-2            not created (libvirt)
```
The `corddev` VM is a development machine used by the `cord-in-a-box.sh` script to drive the installation. It downloads and builds Docker containers and publishes them to the virtual head node (see below). It then installs MaaS on the virtual head node (for bare-metal provisioning) and the ONOS, XOS, and OpenStack services in containers. This VM can be entered as follows:
```
$ ssh corddev
```
The CORD build environment is located in `/cord/build` inside this VM. It is possible to manually run individual steps in the build process here if you wish; see `quickstart_physical.md` for more information on how to run build steps.
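For instance, a single step might be run by hand like this (an illustrative sketch; the Gradle task name is an assumption here, so consult `quickstart_physical.md` for the authoritative list of build tasks):

```
$ ssh corddev
corddev$ cd /cord/build
corddev$ ./gradlew fetch   # assumed task name: fetches the software used by the build
```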
The `prod` VM is the virtual head node of the POD. It runs the OpenStack, ONOS, and XOS services inside containers. It also simulates a subscriber device using a container. To enter it, simply type:
```
$ ssh prod
```
Inside the VM, a number of services run in Docker and LXD containers.
NOTE: The ONOS containers are not included in the listing below.
```
vagrant@prod:~$ docker ps
CONTAINER ID  IMAGE                                                COMMAND                   CREATED            STATUS            PORTS                             NAMES
649bb35c54aa  xosproject/xos-ui                                    "python /opt/xos/mana"    About an hour ago  Up About an hour  8000/tcp, 0.0.0.0:8888->8888/tcp  cordpod_xos_ui_1
700f23298686  xosproject/xos-synchronizer-exampleservice           "bash -c 'sleep 120; "    About an hour ago  Up About an hour  8000/tcp                          cordpod_xos_synchronizer_exampleservice_1
05266a97d245  xosproject/xos-synchronizer-vtr                      "bash -c 'sleep 120; "    2 hours ago        Up 2 hours        8000/tcp                          cordpod_xos_synchronizer_vtr_1
1fcd23a4b0b5  xosproject/xos-synchronizer-vsg                      "bash -c 'sleep 120; "    2 hours ago        Up 2 hours        8000/tcp                          cordpod_xos_synchronizer_vsg_1
72741c1d431a  xosproject/xos-synchronizer-onos                     "bash -c 'sleep 120; "    2 hours ago        Up 2 hours        8000/tcp                          cordpod_xos_synchronizer_onos_1
b31eb4a938ff  xosproject/xos-synchronizer-fabric                   "bash -c 'sleep 120; "    2 hours ago        Up 2 hours        8000/tcp                          cordpod_xos_synchronizer_fabric_1
b14213da2ae2  xosproject/xos-synchronizer-openstack                "bash -c 'sleep 120; "    2 hours ago        Up 2 hours        8000/tcp                          cordpod_xos_synchronizer_openstack_1
159c937b84d5  xosproject/xos-synchronizer-vtn                      "bash -c 'sleep 120; "    2 hours ago        Up 2 hours        8000/tcp                          cordpod_xos_synchronizer_vtn_1
741c5465bd69  xosproject/xos                                       "python /opt/xos/mana"    2 hours ago        Up 2 hours        0.0.0.0:81->81/tcp, 8000/tcp      cordpodbs_xos_bootstrap_ui_1
90452f1e31e9  xosproject/xos                                       "bash -c 'cd /opt/xos"    2 hours ago        Up 2 hours        8000/tcp                          cordpodbs_xos_synchronizer_onboarding_1
5d744962e86c  redis                                                "docker-entrypoint.sh"    2 hours ago        Up 2 hours        6379/tcp                          cordpodbs_xos_redis_1
bd29494712dc  xosproject/xos-postgres                              "/usr/lib/postgresql/"    2 hours ago        Up 2 hours        5432/tcp                          cordpodbs_xos_db_1
c127d493f194  docker-registry:5000/mavenrepo:candidate             "nginx -g 'daemon off"    3 hours ago        Up 3 hours        443/tcp, 0.0.0.0:8080->80/tcp     mavenrepo
9d4b98d49e69  docker-registry:5000/cord-maas-automation:candidate  "/go/bin/cord-maas-au"    3 hours ago        Up 3 hours                                          automation
2f0f8bba4c4e  docker-registry:5000/cord-maas-switchq:candidate     "/go/bin/switchq"         3 hours ago        Up 3 hours        0.0.0.0:4244->4244/tcp            switchq
53e3d81ddb56  docker-registry:5000/cord-provisioner:candidate      "/go/bin/cord-provisi"    3 hours ago        Up 3 hours        0.0.0.0:4243->4243/tcp            provisioner
5853b72e0f99  docker-registry:5000/config-generator:candidate      "/go/bin/config-gener"    3 hours ago        Up 3 hours        1337/tcp, 0.0.0.0:4245->4245/tcp  generator
3605c5208cb9  docker-registry:5000/cord-ip-allocator:candidate     "/go/bin/cord-ip-allo"    3 hours ago        Up 3 hours        0.0.0.0:4242->4242/tcp            allocator
dda7030d7028  docker-registry:5000/consul:candidate                "docker-entrypoint.sh"    3 hours ago        Up 3 hours                                          storage
775dbcf4c719  docker-registry:5000/cord-dhcp-harvester:candidate   "/go/bin/harvester"       3 hours ago        Up 3 hours        0.0.0.0:8954->8954/tcp            harvester
97a6c43fb405  registry:2.4.0                                       "/bin/registry serve "    4 hours ago        Up 3 hours        0.0.0.0:5000->5000/tcp            registry
5a768a06e913  registry:2.4.0                                       "/bin/registry serve "    4 hours ago        Up 3 hours        0.0.0.0:5001->5000/tcp            registry-mirror
```
The above shows Docker containers launched by XOS (image names starting with `xosproject`). There is also a Docker image registry, a Maven repository containing the CORD ONOS apps, and a number of microservices used in bare-metal provisioning.
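As an aside, the local Docker registry speaks the standard registry v2 HTTP API, so you can list the images it holds (a minimal check, run from the `prod` VM where port 5000 is mapped as shown above):

```
vagrant@prod:~$ curl -s http://localhost:5000/v2/_catalog
```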
```
vagrant@prod:~$ sudo lxc list
+-------------------------+---------+------------------------------+------+------------+-----------+
|          NAME           |  STATE  |             IPV4             | IPV6 |    TYPE    | SNAPSHOTS |
+-------------------------+---------+------------------------------+------+------------+-----------+
| ceilometer-1            | RUNNING | 10.1.0.4 (eth0)              |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| glance-1                | RUNNING | 10.1.0.5 (eth0)              |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| juju-1                  | RUNNING | 10.1.0.3 (eth0)              |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| keystone-1              | RUNNING | 10.1.0.6 (eth0)              |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| mongodb-1               | RUNNING | 10.1.0.13 (eth0)             |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| nagios-1                | RUNNING | 10.1.0.8 (eth0)              |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| neutron-api-1           | RUNNING | 10.1.0.9 (eth0)              |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| nova-cloud-controller-1 | RUNNING | 10.1.0.10 (eth0)             |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| openstack-dashboard-1   | RUNNING | 10.1.0.11 (eth0)             |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| percona-cluster-1       | RUNNING | 10.1.0.7 (eth0)              |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| rabbitmq-server-1       | RUNNING | 10.1.0.12 (eth0)             |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
| testclient              | RUNNING | 192.168.0.244 (eth0.222.111) |      | PERSISTENT | 0         |
+-------------------------+---------+------------------------------+------+------------+-----------+
```
The LXD containers with names ending in `-1` are running OpenStack-related services. These containers can be entered as follows:
```
$ ssh ubuntu@<container-name>
```
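For example, to enter the Keystone container listed above:

```
vagrant@prod:~$ ssh ubuntu@keystone-1
```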
The `testclient` container runs the simulated subscriber device used for running simple end-to-end connectivity tests. Its only connectivity is to the vSG, but it can be entered using:
```
$ sudo lxc exec testclient bash
```
The `compute_node-1` VM is the virtual compute node controlled by OpenStack. This VM can be entered from the `prod` VM. Run `cord prov list` to get the node name (assigned by MaaS) and then run:
```
$ ssh ubuntu@<compute-node-name>
```
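For instance, if `cord prov list` reports a node named `bony-alley` (MaaS picks these names at random, so yours will differ):

```
vagrant@prod:~$ ssh ubuntu@bony-alley
```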
Virtual machines created via XOS/OpenStack will be instantiated inside the `compute_node` VM. To log in to an OpenStack VM, first get the management IP address (172.27.0.x):
```
vagrant@prod:~$ nova list --all-tenants
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------------------------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks                                          |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------------------------------------+
| 3ba837a0-81ff-47b5-8f03-020175eed6b3 | mysite_exampleservice-2 | ACTIVE | -          | Running     | management=172.27.0.3; public=10.6.1.194          |
| 549ffc1e-c454-4ef8-9df7-b02ab692eb36 | mysite_vsg-1            | ACTIVE | -          | Running     | management=172.27.0.2; mysite_vsg-access=10.0.2.2 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------------------------------------+
```
Then run `ssh-agent` and add the default key; this key was loaded into the VM when it was created:
```
vagrant@prod:~$ ssh-agent bash
vagrant@prod:~$ ssh-add
```
SSH to the compute node with the `-A` option and then to the VM using the management IP obtained above. So if the compute node name is `bony-alley` and the management IP is 172.27.0.2:
```
vagrant@prod:~$ ssh -A ubuntu@bony-alley
ubuntu@bony-alley:~$ ssh ubuntu@172.27.0.2

# Now you're inside the mysite-vsg-1 VM
ubuntu@mysite-vsg-1:~$
```
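Equivalently, the two hops can be combined into one command (a convenience variant of the same steps, using the same example node name and IP):

```
vagrant@prod:~$ ssh -A -t ubuntu@bony-alley ssh ubuntu@172.27.0.2
```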
You can access the MaaS (Metal-as-a-Service) GUI by pointing your browser to the URL `http://<target-server>/MAAS/`. Username and password are both `cord`. For more information on MaaS, see the MaaS documentation.
You can access the XOS GUI by pointing your browser to the URL `http://<target-server>/xos/`. Username is `padmin@vicci.org` and password is `letmein`.
At this point, all CORD services have been onboarded to XOS. You can see them in the GUI by clicking Services at left. Clicking on the name of a service will show more details about it.
A sample CORD subscriber has also been created. A nice way to drill down into the configuration is to click Customize at left, add the Diagnostic dashboard, and then click Diagnostic at left. To see the details of the subscriber in this dashboard, click the green box next to Subscriber and select `cordSubscriber-1`. The dashboard will change to show information specific to that subscriber.
After CORD-in-a-Box was set up, a couple of basic health tests were executed on the platform. The results of these tests can be found near the end of `~/install.out`.
The first test, `test-vsg`, checks the E2E connectivity of the POD by performing the following steps:

* Sets up a sample CORD subscriber in XOS
* Launches a vSG for that subscriber on the CORD POD
* Creates a test client, corresponding to a device in the subscriber's household
* Runs a `ping` in the client to a public IP address in the Internet

Success means that traffic is flowing between the subscriber household and the Internet via the vSG. If it succeeded, you should see some lines like these in the output:
```
TASK [test-vsg : Output from ping test] ****************************************
Thursday 27 October 2016  15:29:17 +0000 (0:00:03.144)       0:19:21.336 ******
ok: [10.100.198.201] => {
    "pingtest.stdout_lines": [
        "PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.",
        "64 bytes from 8.8.8.8: icmp_seq=1 ttl=47 time=29.7 ms",
        "64 bytes from 8.8.8.8: icmp_seq=2 ttl=47 time=29.2 ms",
        "64 bytes from 8.8.8.8: icmp_seq=3 ttl=47 time=29.1 ms",
        "",
        "--- 8.8.8.8 ping statistics ---",
        "3 packets transmitted, 3 received, 0% packet loss, time 2003ms",
        "rtt min/avg/max/mdev = 29.176/29.367/29.711/0.243 ms"
    ]
}
```
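You can repeat this check by hand at any time (a minimal re-run of the same ping, using the `testclient` container described earlier; run it on the `prod` VM):

```
vagrant@prod:~$ sudo lxc exec testclient -- ping -c 3 8.8.8.8
```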
The second test, `test-exampleservice`, builds on `test-vsg` by loading the `exampleservice` described in the Tutorial on Assembling and On-Boarding Services. The purpose of the `exampleservice` is to demonstrate how new subscriber-facing services can be easily deployed to a CORD POD. This test performs the following steps:

* On-boards `exampleservice` into the CORD POD
* Creates an `exampleservice` tenant, which causes a VM to be created and Apache to be loaded and configured inside it
* Runs a `curl` from the subscriber test client, through the vSG, to the Apache server

Success means that the Apache server launched by the `exampleservice` tenant is fully configured and is reachable from the subscriber client via the vSG. If it succeeded, you should see some lines like these in the output:
```
TASK [test-exampleservice : Output from curl test] *****************************
Thursday 27 October 2016  15:34:40 +0000 (0:00:01.116)       0:24:44.732 ******
ok: [10.100.198.201] => {
    "curltest.stdout_lines": [
        "ExampleService",
        " Service Message: \"hello\"",
        " Tenant Message: \"world\""
    ]
}
```
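You can make a similar request by hand (an illustrative sketch: it assumes the test client reaches the Apache server at the `public` address shown in the `nova list` output above, 10.6.1.194, which will differ on your POD):

```
vagrant@prod:~$ sudo lxc exec testclient -- curl -s http://10.6.1.194
```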
If you got this far, you successfully built, deployed, and tested your first CORD POD.
You are now ready to bring up a multi-node POD with a real switching fabric and multiple physical compute nodes. The process for doing so is described in `quickstart_physical.md`.