
CORD development environment

This configuration can be used to set up a CORD development environment. It does the following:

  • Sets up a basic dataplane for testing end-to-end packet flow between a subscriber client and the Internet
  • Brings up ONOS apps for controlling the dataplane: virtualbng, olt
  • Configures XOS with the CORD services: vCPE, vBNG, vOLT

NOTE: This configuration is under active development and is not yet finished! Some features are not fully working yet.

End-to-end dataplane

The configuration uses XOS to set up an end-to-end dataplane for development of the XOS services and ONOS apps used in CORD. It abstracts away most of the complexity of the CORD hardware using virtual networks and Open vSwitch (OvS) switches. At a high level the dataplane looks like this:

             olt                 virtualbng
             ----                  ----
             ONOS                  ONOS
              |                     |
client ----> CPqD ----> vCPE ----> OvS ----> Internet
         1         2          3         4

On the datapath are two software switches: a CPqD softswitch controlled by the olt ONOS application, and an OvS switch controlled by the virtualbng ONOS application. Once all the pieces are in place, the client at left should be able to obtain an IP address via DHCP from the vCPE and send packets out to the Internet.

All of the components in the above diagram (i.e., client, software switches, ONOS, and vCPE) currently run in distinct VMs created by XOS. The numbers in the diagram correspond to networks set up by XOS:

  1. subscriber_network
  2. lan_network
  3. wan_network
  4. public_network
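Assuming the Kilo-era neutron CLI is available (as it is on the CloudLab ctl node described below), the four networks listed above can be checked for with a short script. This is a hedged sketch: on a machine without the CLI it simply reports every network as missing.

```shell
# Sketch: confirm that XOS created the four CORD dataplane networks.
# Assumes the 'neutron' CLI (OpenStack Kilo era); degrades gracefully elsewhere.
expected="subscriber_network lan_network wan_network public_network"
if command -v neutron >/dev/null 2>&1; then
    nets=$(neutron net-list 2>/dev/null || true)
else
    nets=""   # not on the ctl node; nothing to query
fi
for n in $expected; do
    if printf '%s\n' "$nets" | grep -q "$n"; then
        echo "$n: present"
    else
        echo "$n: missing (expected once setup has completed)"
    fi
done
```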

How to run it

The configuration is intended to be run on CloudLab. It launches an XOS container on CloudLab that runs the XOS develserver; the container is left running in the background.

To get started on CloudLab:

  • Create an experiment using the OpenStack-CORD profile. (You can also use the OpenStack profile, but choose Kilo and disable security groups.)
  • Wait until you get an email from CloudLab with title "OpenStack Instance Finished Setting Up".
  • Login to the ctl node of your experiment and run:
ctl:~$ git clone https://github.com/open-cloud/xos.git
ctl:~$ cd xos/xos/configurations/cord/
ctl:~/xos/xos/configurations/cord$ make

Running make in this directory creates the XOS Docker container and runs the TOSCA engine with cord.yaml to configure XOS with the CORD services. In addition, a number of VMs are created:

  1. Slice mysite_onos: one VM runs the ONOS Docker container with the virtualbng app loaded
  2. Slice mysite_onos: a second VM runs the ONOS Docker container with the olt app loaded
  3. Slice mysite_vbng: for running OvS with the virtualbng app as controller
  4. Slice mysite_volt: for running OvS with the olt app as controller
  5. Slice mysite_clients: a subscriber client for end-to-end testing
  6. Slice mysite_vcpe: runs the vCPE Docker container
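Once make finishes, the slices above should each have active VMs. A hedged sketch of a check using the Kilo-era nova CLI (available on the ctl node; elsewhere the script just prints a hint):

```shell
# Sketch: list the VMs created for the mysite_* slices.
# Assumes the 'nova' CLI; falls back with a message if it is not installed.
if command -v nova >/dev/null 2>&1; then
    vms=$(nova list --all-tenants 2>/dev/null | grep mysite_ || true)
else
    vms=""
fi
if [ -n "$vms" ]; then
    printf '%s\n' "$vms"
else
    echo "no mysite_ VMs visible (run this on the ctl node after 'make')"
fi
```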

Once all the VMs are up and the ONOS apps are configured, XOS should be able to get an address mapping from the virtualbng ONOS app for the vCPE. To verify that it has received an IP address mapping, look at the "Routeable subnet:" field of the appropriate Vbng tenant object in XOS. It should contain an IP address in the 10.254.0.0/24 subnet.
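The subnet check can also be scripted. A minimal sketch, where the pool address comes from the paragraph above and in_vbng_pool is a hypothetical helper:

```shell
# Sketch: check whether an address lies in the virtualbng public pool (10.254.0.0/24).
in_vbng_pool() {
    case "$1" in
        10.254.0.*) return 0 ;;   # inside the /24 pool
        *)          return 1 ;;   # anything else
    esac
}
in_vbng_pool "10.254.0.129" && echo "10.254.0.129: in pool"
in_vbng_pool "10.0.1.3"     || echo "10.0.1.3: not in pool"
```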

After launching the ONOS apps, it is necessary to configure software switches along the dataplane so that ONOS can control them. To do this, from the cord configuration directory:

ctl:~/xos/xos/configurations/cord$ cd dataplane/
ctl:~/xos/xos/configurations/cord/dataplane$ ./gen-inventory.sh > hosts
ctl:~/xos/xos/configurations/cord/dataplane$ ansible-playbook -i hosts dataplane.yaml
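After the playbook finishes, each software switch should appear as a connected device in its controlling ONOS instance. An illustrative (trimmed) sketch of what to expect from the ONOS CLI; the exact device IDs and fields will differ on your deployment:

```
onos> devices
id=of:0000000000000001, available=true, role=MASTER, type=SWITCH, ...
```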

Currently the vOLT switch is not forwarding ARP, so it is necessary to set up static ARP entries between the client and the vCPE. Log into the client and add an ARP entry for the vCPE:

client:$ sudo arp -s 192.168.0.1 <mac-of-eth1-in-vCPE-container>

Inside the vCPE container add a similar entry for the client:

vcpe:$ arp -s 192.168.0.2 <mac-of-br-sub-on-client>
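The <mac-...> placeholders above must be filled in by hand (e.g., from the interface listing on each side), and a malformed value produces a cryptic failure from arp. A hypothetical helper sketch for sanity-checking the value first:

```shell
# Sketch: validate a MAC address string before passing it to 'arp -s'.
# is_mac is a hypothetical helper, not part of the CORD tooling.
is_mac() {
    printf '%s\n' "$1" | grep -Eq '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$'
}
is_mac "02:42:ac:11:00:02" && echo "valid MAC"
is_mac "not-a-mac"         || echo "invalid MAC"
```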

Now SSH into the ONOS instance running the olt app (see "How to log into ONOS" below) and activate the subscriber. The arguments to add-subscriber-access are the device ID of the OLT switch, the client-facing port number, and the subscriber's VLAN tag:

onos> add-subscriber-access of:0000000000000001 1 432

At this point you should be able to ping 192.168.0.1 from the client. The final step is to set the vCPE as the gateway on the client:

client:$ sudo route del default gw 10.11.10.5
client:$ sudo route add default gw 192.168.0.1

The client should now be able to surf the Internet through the dataplane.
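A quick end-to-end check, run on the client VM, can confirm both facts at once: the default route should now point at the vCPE (192.168.0.1) and public addresses should be reachable. This is a hedged sketch; 8.8.8.8 is only an example target, and the script merely reports status wherever it runs:

```shell
# Sketch: show the default route, then probe the vCPE gateway and one public address.
if command -v ip >/dev/null 2>&1; then
    ip route show default
fi
checked=0
for target in 192.168.0.1 8.8.8.8; do
    if ping -c 1 -W 2 "$target" >/dev/null 2>&1; then
        echo "$target: reachable"
    else
        echo "$target: unreachable"
    fi
    checked=$((checked+1))
done
```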

How to log into ONOS

The ONOS Docker container runs in the VMs belonging to the mysite_onos slice. All ports exposed by the ONOS container are forwarded to the outside and can be accessed from the ctl node using the flat-lan-1-net address of the hosting VM. For example, if the IP address of the VM is 10.11.10.30, then it is possible to SSH to ONOS as follows (the password is "karaf"):

$ ssh -p 8101 karaf@10.11.10.30
Password authentication
Password:
Welcome to Open Network Operating System (ONOS)!
     ____  _  ______  ____
    / __ \/ |/ / __ \/ __/
   / /_/ /    / /_/ /\ \
   \____/_/|_/\____/___/


Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown ONOS.

onos>

For instance, to check the IP address mappings managed by the virtualbng app:

onos> vbngs
   Private IP - Public IP
   10.0.1.3 - 10.254.0.129
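Beyond vbngs, the standard ONOS CLI offers other commands useful for debugging the dataplane: devices lists the switches connected to that ONOS instance, hosts lists the end hosts it has learned, and flows shows the flow rules installed on each device (output omitted here; availability and format may vary by ONOS version):

```
onos> devices
onos> hosts
onos> flows
```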