See also: https://github.com/hyunsun/documentations/wiki/Neutron-ONOS-Integration-for-CORD-VTN#onos-setup
VTN doesn't seem to like CloudLab's networks (flat-net-1, ext-net, etc.). I've placed a script in xos/tools/ called destroy-all-networks.sh that automates tearing down all of CloudLab's Neutron networks.
cd xos/tools
./destroy-all-networks.sh
Inside the XOS container, update the configuration. Make sure to restart the OpenStack Synchronizer afterward. It might be a good idea to restart the XOS UI as well:
python /opt/xos/tosca/run.py padmin@vicci.org /opt/xos/tosca/samples/vtn.yaml

emacs /opt/xos/xos_configuration/xos_common_config
    [networking]
    use_vtn=True

supervisorctl restart observer
# set ONOS_VTN_HOSTNAME to the host where the VTN container was installed
ONOS_VTN_HOSTNAME="cp-2.smbaker-xos5.xos-pg0.clemson.cloudlab.us"

apt-get -y install python-pip
pip install -U setuptools pip
pip install testrepository
git clone https://github.com/openstack/networking-onos.git
cd networking-onos
python setup.py install
# the above fails the first time with an error about pbr.json
# I ran it again and it succeeded, but I am skeptical there's
# not still an issue lurking...

cat > /usr/local/etc/neutron/plugins/ml2/conf_onos.ini <<EOF
[onos]
url_path = http://$ONOS_VTN_HOSTNAME:8181/onos/cordvtn
username = karaf
password = karaf
EOF

emacs /etc/neutron/plugins/ml2/ml2_conf.ini
    update settings as per VTN docs ([ml2] and [ml2_type_vxlan] sections)

systemctl stop neutron-server
# I started neutron manually to make sure it's using exactly the right config
# files. Maybe it can be restarted using systemctl instead...
/usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /usr/local/etc/neutron/plugins/ml2/conf_onos.ini
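For reference, here's a sketch of what those two sections of ml2_conf.ini might end up looking like, based on the VTN wiki linked at the top (the vni_ranges value in particular is an assumption; use whatever your deployment calls for):

# assumed values, per the Neutron-ONOS integration wiki; adjust for your deployment
[ml2]
tenant_network_types = vxlan
type_drivers = vxlan
mechanism_drivers = onos_ml2

[ml2_type_vxlan]
vni_ranges = 1001:2000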
cd xos/configurations/cord/dataplane
./generate-bm.sh > hosts-bm
ansible-playbook -i hosts-bm dataplane-vtn.yaml
# the playbook will:
# 1) turn off neutron openvswitch-agent
# 2) set openvswitch to listen on port 6641
# 3) restart openvswitch
# 4) delete any existing br-int bridge
# 5) [nm only] turn off neutron-dhcp-agent
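To sanity-check the playbook's work on a compute node afterward, something like the following should do (these checks are mine, not part of the playbook, and the agent service name may differ on your distro):

netstat -tlnp | grep 6641                          # ovsdb-server should be listening on 6641
ovs-vsctl list-br                                  # br-int should no longer be listed
service neutron-plugin-openvswitch-agent status    # agent should be stopped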
Additional compute node stuff:
I've been deleting any existing unused bridges. Not sure if it's necessary.
ovs-vsctl del-br br-tun
ovs-vsctl del-br br-flat-lan-1
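If you want to see what's there before deleting (bridge names vary from node to node):

ovs-vsctl list-br    # lists the bridges OVS currently knows about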
To get the management network working, we need to create a management network template, slice, and network. configurations/cord/vtn.yaml will do this for you. Then add a connection to the management network for any slice that needs management connectivity.
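Assuming the same run.py invocation as above and that the configuration tree is mounted under /opt/xos inside the container (adjust the path if yours differs):

python /opt/xos/tosca/run.py padmin@vicci.org /opt/xos/configurations/cord/vtn.yaml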
In case the management network isn't working, you can use a VNC tunnel, like this:
# on compute node, run the following and note the IP address and port number
virsh vncdisplay <instance-id>

# from home
ssh -o "GatewayPorts yes" -L <port+5900>:<IP>:<port+5900> <username>@<compute_node_hostname>

# example
ssh -o "GatewayPorts yes" -L 5901:192.168.0.7:5901 smbaker@cp-1.smbaker-xos3.xos-pg0.clemson.cloudlab.us
Then open a VNC session to the local port on your local machine. You'll have a console on the instance. The username is "Ubuntu" and the password can be obtained from your CloudLab experiment description.
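For example, using a VNC client such as vncviewer against the tunnel from the example above:

vncviewer localhost:5901    # port matches the -L 5901:... from the ssh tunnel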
On the head node:
ovs-vsctl del-br br-flat-lan-1
ifconfig eth2 10.123.0.1
iptables --table nat --append POSTROUTING --out-interface br-ex -j MASQUERADE
#arp -s 10.123.0.3 fa:16:3e:ea:11:0a
sysctl net.ipv4.conf.all.send_redirects
sysctl net.ipv4.conf.all.send_redirects=0
sysctl net.ipv4.conf.default.send_redirects=0
sysctl net.ipv4.conf.eth0.send_redirects=0
sysctl net.ipv4.conf.br-ex.send_redirects=0
Substitute for your installation:
10.123.0.3        = wan_ip of vSG
10.123.0.1        = wan gateway
fa:16:3e:ea:11:0a = wan_mac of vSG
00:8c:fa:5b:09:d8 = wan_mac of gateway
Before setting up VTN, create a bridge and attach it to the dataplane device on each compute node:
brctl addbr br-inject
brctl addif br-inject eth3   # substitute dataplane eth device here, may be different on each compute node
ip link set br-inject up
ip link set dev br-inject promisc on
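A quick sanity check that the bridge came up correctly (my own check, not from the VTN docs):

brctl show br-inject     # the dataplane eth device should be listed under interfaces
ip link show br-inject   # should be UP with the PROMISC flag set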
Then update the network-config attribute of the VTN ONOS App in XOS to use a dataplaneIntf of br-inject instead of the eth device. Bring up VTN and a vSG. WAN connectivity and everything else should be working fine.
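For illustration only, a hypothetical fragment of that network-config change; everything here except the dataplaneIntf key and the cordvtn app name is a guess, so follow the schema your VTN version actually uses:

# hypothetical fragment -- field names besides dataplaneIntf are assumptions
{
  "apps": {
    "org.onosproject.cordvtn": {
      "cordvtn": {
        "nodes": [
          {
            "hostname": "cp-2.example.org",
            "dataplaneIntf": "br-inject"
          }
        ]
      }
    }
  }
}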
Add a new slice, mysite_client, and make sure to give it both a private and a management network. Bring up an instance on the same node as the vSG you want to test. On the compute node, run the following:
MAC=<make-up-some-mac>
INSTANCE=<instance-id>
virsh attach-interface --domain $INSTANCE --type bridge --source br-inject --model virtio --mac $MAC --config --live
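For example, with made-up values (pick a locally administered MAC that won't collide with anything Neutron hands out):

MAC=fe:16:3e:00:00:01        # made-up locally administered MAC
INSTANCE=instance-0000002d   # made-up libvirt domain name
virsh attach-interface --domain $INSTANCE --type bridge --source br-inject --model virtio --mac $MAC --config --live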
Log into the vSG via the management interface. Inside the vSG, run the following:
STAG=<your s-tag here>
CTAG=<your c-tag here>
ip link add link eth2 eth2.$STAG type vlan id $STAG
ip link add link eth2.$STAG eth2.$STAG.$CTAG type vlan id $CTAG
ip link set eth2.$STAG up
ip link set eth2.$STAG.$CTAG up
ip addr add 192.168.0.2/24 dev eth2.$STAG.$CTAG
ip route del default
ip route add default via 192.168.0.1
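Once the tags are up, a couple of quick checks from inside the vSG (targets are just examples):

ping -c 3 192.168.0.1   # gateway reachable over the double-tagged interface?
ping -c 3 8.8.8.8       # WAN reachable via the new default route?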