VTN notes:
see also: https://github.com/hyunsun/documentations/wiki/Neutron-ONOS-Integration-for-CORD-VTN#onos-setup
VTN doesn't seem to like CloudLab's networks (flat-net-1, ext-net, etc.). I've placed a script in xos/scripts/ called destroy-all-networks.sh that automates tearing down all of CloudLab's Neutron networks.
    cd xos/scripts
    ./destroy-all-networks.sh
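For reference, the core of such a teardown is just iterating over Neutron's networks and deleting them. This is only a rough sketch of the idea (the real destroy-all-networks.sh also has to clean up ports, subnets, routers, etc., and it assumes admin credentials are already sourced):

    # rough sketch only -- the real destroy-all-networks.sh does more cleanup
    for net in $(neutron net-list -f value -c id); do
        neutron net-delete "$net"    # will fail if ports are still attached
    done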
Inside the XOS container, update the configuration. Make sure to restart the OpenStack Synchronizer afterward; it might be a good idea to restart the XOS UI as well:
    python /opt/xos/tosca/run.py padmin@vicci.org /opt/xos/tosca/samples/vtn.yaml

    emacs /opt/xos/xos_configuration/xos_common_config
        [networking]
        use_vtn=True

    supervisorctl restart observer
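A quick way to double-check that both changes took effect (assuming supervisord manages the observer process, as the restart command above implies):

    # still inside the XOS container
    grep -A 1 '\[networking\]' /opt/xos/xos_configuration/xos_common_config   # expect use_vtn=True
    supervisorctl status observer                                             # expect RUNNING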
ctl node:
    # set ONOS_VTN_HOSTNAME to the host where the VTN container was installed
    ONOS_VTN_HOSTNAME="cp-2.smbaker-xos5.xos-pg0.clemson.cloudlab.us"

    apt-get -y install python-pip
    pip install -U setuptools pip
    pip install testrepository

    git clone https://github.com/openstack/networking-onos.git
    cd networking-onos
    python setup.py install
    # the above fails the first time with an error about pbr.json
    # I ran it again and it succeeded, but I am skeptical there's
    # not still an issue lurking...

    cat > /usr/local/etc/neutron/plugins/ml2/conf_onos.ini <<EOF
    [onos]
    url_path = http://$ONOS_VTN_HOSTNAME:8181/onos/openstackswitching
    username = karaf
    password = karaf
    EOF

    emacs /etc/neutron/plugins/ml2/ml2_conf.ini
        # update settings as per vtn docs ([ml2] and [ml2_type_vxlan] sections)

    systemctl stop neutron-server
    # I started neutron manually to make sure it's using exactly the right config
    # files. Maybe it can be restarted using systemctl instead...
    /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /usr/local/etc/neutron/plugins/ml2/conf_onos.ini
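For reference, the [ml2] and [ml2_type_vxlan] settings look roughly like the snippet below, per the VTN wiki linked at the top. Treat the exact mechanism driver name (onos_ml2) and the VNI range as assumptions to be checked against your networking-onos version and the wiki:

    # /etc/neutron/plugins/ml2/ml2_conf.ini (snippet only; other settings unchanged)
    [ml2]
    tenant_network_types = vxlan
    type_drivers = vxlan
    mechanism_drivers = onos_ml2

    [ml2_type_vxlan]
    vni_ranges = 1001:2000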
Compute nodes and nm nodes:
    cd xos/configurations/cord/dataplane
    ./generate-bm.sh > hosts-bm
    ansible-playbook -i hosts-bm dataplane-vtn.yaml

    # the playbook will:
    # 1) turn off neutron openvswitch-agent
    # 2) set openvswitch to listen on port 6641
    # 3) restart openvswitch
    # 4) delete any existing br-int bridge
    # 5) [nm only] turn off neutron-dhcp-agent
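If you need to do (or debug) those steps by hand on a node, the equivalent commands are roughly the following; the Ubuntu service names are assumptions and may differ on other distros:

    # sketch of the by-hand equivalent of the playbook steps
    service neutron-plugin-openvswitch-agent stop      # 1) stop the OVS agent
    service openvswitch-switch restart                 # 3) restart openvswitch
    # 2) have ovsdb-server also listen on TCP port 6641 for VTN; note that
    #    add-remote is not persistent, so redo it if ovsdb-server restarts
    ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6641
    ovs-vsctl --if-exists del-br br-int                # 4) delete any existing br-int
    service neutron-dhcp-agent stop                    # 5) [nm nodes only]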
Additional compute node stuff:
The br-flat-lan-1 bridge needs to be deleted, since VTN will attach br-int directly to the eth device that br-flat-lan-1 was using. Additionally, we need to assign an IP address to br-int (it sounds like Hyunsun is working on having VTN do that for us). Adding the route was not in Hyunsun's instructions, but I found I had to do it in order to get the compute nodes to talk to one another.
    ovs-vsctl del-br br-tun
    ovs-vsctl del-br br-flat-lan-1
    ip addr add <addr-that-was-assigned-to-flat-lan-1> dev br-int
    ip link set br-int up
    ip route add <network-that-was-assigned-to-flat-lan-1> dev br-int
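Concretely, if br-flat-lan-1 had (hypothetical example values) the address 10.0.1.5 on the 10.0.1.0/24 network, the last three commands would be:

    # hypothetical example: flat-lan-1 previously held 10.0.1.5 on 10.0.1.0/24
    ip addr add 10.0.1.5/24 dev br-int
    ip link set br-int up
    # with the /24 prefix above the connected route usually appears on its own;
    # add it explicitly only if it is missing
    ip route add 10.0.1.0/24 dev br-int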
For development, I suggest using the bash configuration (remember to start the ONOS observer manually) so that there aren't a bunch of preexisting Neutron networks and Nova instances in the way.
Notes:
There is no management network yet, so there is no way to SSH into the slices. I've been setting up a VNC tunnel instead, like this:
    # on compute node, run the following and note the IP address and port number
    virsh vncdisplay <instance-id>

    # from home
    ssh -o "GatewayPorts yes" -L <port+5900>:<IP>:<port+5900> <username>@<compute_node_hostname>

    # example
    ssh -o "GatewayPorts yes" -L 5901:192.168.0.7:5901 smbaker@cp-1.smbaker-xos3.xos-pg0.clemson.cloudlab.us
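For what it's worth, virsh vncdisplay prints a display number such as ":1", which corresponds to TCP port 5901 (display + 5900). With the example tunnel above, a VNC client on your local machine (vncviewer here is just one possible client) would connect with:

    # local display :1 maps to the forwarded port 5901
    vncviewer localhost:1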
Then open a VNC session to the local port on your local machine, and you'll have a console on the instance. The username is "Ubuntu" and the password can be obtained from your CloudLab experiment description.
Things that can be tested:
Testing service composition