This tutorial walks through the steps to bring up a demonstration CORD "POD", running in virtual machines on a single physical server. The purpose of this demonstration POD is to let those interested in understanding how CORD works examine and interact with a running CORD environment. It is a good place for novice CORD users to start.
NOTE: If you are looking for instructions on how to install a multi-node POD, you will find them in quickstart_physical.md.
Specifically, the tutorial covers setting up a development environment, fetching the CORD artifacts, deploying a single-node POD, and running basic health tests against it.
You will need a build machine (can be your developer laptop) and a target server.
Build host:

* git (2.5.4 or later)
* Vagrant (1.8.1 or later)

Target server:

* Ubuntu 14.04 LTS
* Passwordless sudo capability for the ubuntu user

If you do not have a target server available, you can borrow one on CloudLab. Sign up for an account using your organization's email address and choose "Join Existing Project"; for "Project Name" enter cord-testdrive.
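A quick way to confirm that the build-host tools meet these minimums is a `sort -V` comparison. The helper below is a hypothetical sketch (it assumes a `sort` that supports version ordering, such as GNU coreutils); compare the numbers reported by `git --version` and `vagrant --version` against the minimums above:

```shell
# version_ok <installed> <required> -- succeeds when the installed
# version is at least the required one, using sort -V for comparison.
version_ok() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example checks against the minimums listed above:
version_ok "2.7.4" "2.5.4" && echo "git version ok"
version_ok "1.8.1" "1.8.1" && echo "vagrant version ok"
```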
NOTE: CloudLab is supporting CORD as a courtesy. It is expected that you will not use CloudLab resources for purposes other than evaluating CORD. If, after a week or two, you wish to continue using CloudLab to experiment with or develop CORD, then you must apply for your own separate CloudLab project.
Once your account is approved, start an experiment using the OnePC-Ubuntu14.04.4
profile on either the Wisconsin or Clemson cluster. This will provide you with a temporary target server meeting the above requirements.
Refer to the CloudLab documentation for more information.
Follow the instructions in devel_env_setup.md to set up the Vagrant development machine for CORD on your build host.
The rest of the tasks in this guide are run from inside the Vagrant development machine, in the /cord
directory.
The fetching phase of the deployment pulls Docker images from the public repository down to the local machine as well as clones any git
submodules that are part of the project. This phase can be initiated with the following command:
./gradlew fetch
Once the fetch command has successfully been run, this step is complete. After this command completes you should be able to see the Docker images that were downloaded using the docker images
command on the development machine:
```
docker images
REPOSITORY            TAG          IMAGE ID        CREATED         SIZE
python                2.7-alpine   836fa7aed31d    5 days ago      56.45 MB
consul                <none>       62f109a3299c    2 weeks ago     41.05 MB
registry              2.4.0        8b162eee2794    9 weeks ago     171.1 MB
abh1nav/dockerui      latest       6e4d05915b2a    19 months ago   469.5 MB
```
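If you want to script a check that a particular image arrived, the listing can be filtered with `awk`. The sketch below runs against a captured sample of the listing, so it does not require a Docker daemon; in practice you would pipe `docker images` into the same filter:

```shell
# Filter an image listing for one repository name. The sample here
# mirrors the listing above; replace `echo "$listing"` with
# `docker images` on the development machine.
listing='REPOSITORY TAG IMAGE_ID
python 2.7-alpine 836fa7aed31d
consul <none> 62f109a3299c
registry 2.4.0 8b162eee2794'

echo "$listing" | awk '$1 == "registry" { print $1, $2 }'
```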
Edit the configuration file /cord/components/platform-install/config/default.yml
. Add the IP address of your target server as well as the username / password
for accessing the server. You can skip adding the password if you can SSH to the target server from inside the Vagrant VM as username
without one (e.g., by running ssh-agent
).
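For orientation, the relevant part of the file looks something like the fragment below. The key names and values shown are illustrative, not authoritative, so edit the keys that actually appear in your copy of default.yml rather than pasting this in:

```yaml
# Illustrative fragment only -- key names may differ in your file.
seedServer:
  ip: '10.100.198.201'     # IP address of your target server
  user: 'ubuntu'           # account with passwordless sudo
  password: 'ubuntu'       # can be omitted if SSH key/agent auth works
```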
If you are planning on deploying the single-node POD to a CloudLab host, uncomment the following lines in the configuration file:
```
#extraVars:
#  - 'on_cloudlab=True'
```
This will signal the install process to set up extra disk space on the CloudLab node for use by CORD.
Before proceeding, verify that you can SSH to the target server from the development environment using the IP address, username, and password that you entered into the configuration file. Also verify that the user account can sudo without a password (for example, sudo -n true should succeed on the target server).
Deploy the CORD software to the target server and configure it to form a running POD.
./gradlew -PdeployConfig=/cord/components/platform-install/config/default.yml deploySingle
What this does:
This command uses an Ansible playbook (cord-single-playbook.yml) to install OpenStack services, ONOS, and XOS in VMs on the target server. It also brings up a compute node as a VM.
This step usually takes at least an hour to complete. Be patient!
This step is completed once the Ansible playbook finishes without errors. If an error is encountered when running this step, the first thing to try is just running the above gradlew
command again.
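Since the deploy step is safe to re-run, the retry can also be scripted. In the sketch below, `run_deploy` is a hypothetical stand-in for the `gradlew` command above (defined here to fail, so the retry path is visible):

```shell
# Retry a re-runnable deploy step a few times before giving up.
run_deploy() {
  # Stand-in for:
  #   ./gradlew -PdeployConfig=/cord/components/platform-install/config/default.yml deploySingle
  false
}

for attempt in 1 2 3; do
  if run_deploy; then
    echo "deploy succeeded on attempt ${attempt}"
    break
  fi
  echo "attempt ${attempt} failed; retrying"
done
```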
Once the step completes, two instances of ONOS are running, in the onos-cord-1
and onos-fabric-1
VMs, though only onos-cord-1
is used in the single-node install. OpenStack is also running on the target server with a virtual compute node called nova-compute-1
. Finally, XOS is running inside the xos-1
VM and is controlling ONOS and OpenStack. You can get a deeper understanding of the configuration of the target server by visiting head_node_services.md.
After the single-node POD is set up, you can execute a set of basic health tests on the platform by running this command:
./gradlew -PdeployConfig=/cord/components/platform-install/config/default.yml postDeployTests
Currently this tests the E2E connectivity of the POD by setting up a test client for a subscriber and running ping from the client to a public IP address on the Internet. Success of this test means that traffic is flowing between the subscriber household and the Internet via the vSG. If the test succeeds, the end of the test output should contain lines like this:
```
TASK [post-deploy-tests : Test external connectivity in test client] ***********
Monday 18 July 2016  22:21:19 +0000 (0:00:04.381)       0:01:37.751 ***********
changed: [128.104.222.194]

TASK [post-deploy-tests : Output from ping test] *******************************
Monday 18 July 2016  22:21:25 +0000 (0:00:05.603)       0:01:43.355 ***********
ok: [128.104.222.194] => {
    "pingtest.stdout_lines": [
        "nova-compute-1 | SUCCESS | rc=0 >>",
        "PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.",
        "64 bytes from 8.8.8.8: icmp_seq=1 ttl=46 time=29.9 ms",
        "64 bytes from 8.8.8.8: icmp_seq=2 ttl=46 time=29.2 ms",
        "64 bytes from 8.8.8.8: icmp_seq=3 ttl=46 time=29.5 ms",
        "",
        "--- 8.8.8.8 ping statistics ---",
        "3 packets transmitted, 3 received, 0% packet loss, time 2002ms",
        "rtt min/avg/max/mdev = 29.254/29.567/29.910/0.334 ms"
    ]
}

PLAY RECAP *********************************************************************
128.104.222.194            : ok=15   changed=8    unreachable=0    failed=0
```
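If you want to build further automated checks on top of this output, the packet-loss figure in the ping summary is the useful field. The sketch below extracts it from the summary line shown above:

```shell
# Pull the packet-loss percentage out of a ping summary line.
summary="3 packets transmitted, 3 received, 0% packet loss, time 2002ms"
loss=$(echo "$summary" | sed 's/.*, \([0-9]*\)% packet loss.*/\1/')
echo "packet loss: ${loss}%"
[ "$loss" -eq 0 ] && echo "connectivity OK"
```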
Once you are finished deploying the single-node POD, you can exit from the development environment on the build host and destroy it:
```
exit
vagrant destroy -f
```
If you got this far, you successfully built, deployed, and tested your first CORD POD.
You are now ready to bring up a multi-node POD with a real switching fabric and multiple physical compute nodes. The process for doing so is described in quickstart_physical.md.