Merge branch 'feature/vRouter'
diff --git a/xos/configurations/cord-pod/Makefile b/xos/configurations/cord-pod/Makefile
index 49a1c98..9296f10 100644
--- a/xos/configurations/cord-pod/Makefile
+++ b/xos/configurations/cord-pod/Makefile
@@ -6,7 +6,7 @@
sudo docker-compose run xos python /opt/xos/tosca/run.py padmin@vicci.org /root/setup/nodes.yaml
sudo docker-compose run xos python /opt/xos/tosca/run.py padmin@vicci.org /root/setup/images.yaml
-vtn: vtn_network_cfg_json
+vtn:
sudo docker-compose run xos python /opt/xos/tosca/run.py padmin@vicci.org /root/setup/vtn-external.yaml
cord: virtualbng_json
@@ -15,7 +15,7 @@
sudo docker-compose run xos python /opt/xos/tosca/run.py padmin@vicci.org /root/setup/cord-vtn-vsg.yaml
exampleservice:
- sudo docker-compose run xos python /opt/xos/tosca/run.py padmin@vicci.org /root/setup/pod-exampleservice.yaml
+ sudo docker-compose run xos python /opt/xos/tosca/run.py padmin@vicci.org /root/setup/pod-exampleservice.yaml
cord-ceilometer: ceilometer_custom_images cord
sudo docker-compose run xos python /opt/xos/tosca/run.py padmin@vicci.org /root/setup/ceilometer.yaml
diff --git a/xos/configurations/cord-pod/README.md b/xos/configurations/cord-pod/README.md
index 2c74c15..3d7a60f 100644
--- a/xos/configurations/cord-pod/README.md
+++ b/xos/configurations/cord-pod/README.md
@@ -77,18 +77,33 @@
They will have been put there for you by the cluster installation scripts.
-If your setup uses the CORD fabric, you need to edit `make-vtn-networkconfig-json.sh`
-and `cord-vtn-vsg.yml` as appropriate. Specifically, in
-`make-vtn-networkconfig-json.sh` you need to set these parameters for VTN:
- * gatewayIp
- * gatewayMac
- * PHYPORT
+**If your setup uses the CORD fabric**, you need to modify the autogenerated VTN
+configuration and node tags, and edit `cord-vtn-vsg.yml` as follows.
-And in `cord-vtn-vsg.yml`:
- * public_addresses -> properties -> addresses
- * service_vsg -> properties -> wan_container_gateway_ip
- * service_vsg -> properties -> wan_container_gateway_mac
- * service_vsg -> properties -> wan_container_netbits
+ 1. The VTN app configuration is autogenerated by XOS. For more information
+about the configuration, see [this page on the ONOS Wiki](https://wiki.onosproject.org/display/ONOS/CORD+VTN),
+under the **ONOS Settings** heading. To see the generated
+configuration, go to http://xos/admin/onos/onosapp/, click on
+*VTN_ONOS_app*, then the *Attributes* tab, and look for the
+`rest_onos/v1/network/configuration/` attribute. You can edit this
+configuration after deleting the `autogenerate` attribute (otherwise XOS will
+overwrite your changes), or you can change the other
+attributes and delete `rest_onos/v1/network/configuration/` in order
+to get XOS to regenerate it.
+
+ 2. The process of autoconfiguring VTN also assigns some default values to per-node parameters. Go to
+ http://xos/admin/core/node/, select a node, then select the *Tags* tab. Configure the following:
+ * `bridgeId` (the ID to set on the node's br-int)
+ * `dataPlaneIntf` (the data plane interface for the fabric on the node)
+ * `dataPlaneIp` (the IP address for the node on the fabric)
+
+ 3. Modify `cord-vtn-vsg.yml` and set these parameters to the
+appropriate values for the fabric:
+ * `public_addresses:properties:addresses` (IP address block of fabric)
+ * `service_vsg:properties:wan_container_gateway_ip` (same as `publicGateway:gatewayIp` from VTN configuration)
+ * `service_vsg:properties:wan_container_gateway_mac` (same as `publicGateway:gatewayMac` from VTN configuration)
+ * `service_vsg:properties:wan_container_netbits` (bits in fabric IP address block netmask)
+
If you're not using the fabric then the default values should be OK.
@@ -104,20 +119,20 @@
### Inspecting the vSG
-The above series of `make` commands will spin up a vSG for a sample subscriber. The
-vSG is implemented as a Docker container (using the
-[andybavier/docker-vcpe](https://hub.docker.com/r/andybavier/docker-vcpe/) image
+The above series of `make` commands will spin up a vSG for a sample subscriber. The
+vSG is implemented as a Docker container (using the
+[andybavier/docker-vcpe](https://hub.docker.com/r/andybavier/docker-vcpe/) image
hosted on Docker Hub) running inside an Ubuntu VM. Once the VM is created, you
can login as the `ubuntu` user at the management network IP (172.27.0.x) on the compute node
hosting the VM, using the private key generated on the head node by the install process.
-For example, in the single-node development POD configuration, you can login to the VM
+For example, in the single-node development POD configuration, you can login to the VM
with management IP 172.27.0.2 using a ProxyCommand as follows:
```
ubuntu@pod:~$ ssh -o ProxyCommand="ssh -W %h:%p ubuntu@nova-compute" ubuntu@172.27.0.2
```
-Alternatively, you could copy the generated private key to the compute node
+Alternatively, you could copy the generated private key to the compute node
and login from there:
```
@@ -126,7 +141,7 @@
ubuntu@nova-compute:~$ ssh ubuntu@172.27.0.2
```
-Once logged in to the VM, you can run `sudo docker ps` to see the running
+Once logged in to the VM, you can run `sudo docker ps` to see the running
vSG containers:
```
@@ -134,4 +149,3 @@
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2b0bfb3662c7 andybavier/docker-vcpe "/sbin/my_init" 5 days ago Up 5 days vcpe-222-111
```
-
diff --git a/xos/configurations/cord-pod/images/.gitignore b/xos/configurations/cord-pod/images/.gitignore
new file mode 100644
index 0000000..6949d1f
--- /dev/null
+++ b/xos/configurations/cord-pod/images/.gitignore
@@ -0,0 +1,3 @@
+*.img
+*.qcow2
+*.qcow
diff --git a/xos/configurations/cord-pod/make-vtn-networkconfig-json.sh b/xos/configurations/cord-pod/make-vtn-networkconfig-json.sh
deleted file mode 100755
index a1e5137..0000000
--- a/xos/configurations/cord-pod/make-vtn-networkconfig-json.sh
+++ /dev/null
@@ -1,83 +0,0 @@
-FN=$SETUPDIR/vtn-network-cfg.json
-
-echo "Writing to $FN"
-
-rm -f $FN
-
-cat >> $FN <<EOF
-{
- "apps" : {
- "org.onosproject.cordvtn" : {
- "cordvtn" : {
- "privateGatewayMac" : "00:00:00:00:00:01",
- "localManagementIp": "172.27.0.1/24",
- "ovsdbPort": "6641",
- "sshPort": "22",
- "sshUser": "root",
- "sshKeyFile": "/root/node_key",
- "publicGateways": [
- {
- "gatewayIp": "10.168.0.1",
- "gatewayMac": "02:42:0a:a8:00:01"
- }
- ],
- "nodes" : [
-EOF
-
-NODES=$( sudo bash -c "source $SETUPDIR/admin-openrc.sh ; nova hypervisor-list" |grep -v ID|grep -v +|awk '{print $4}' )
-
-# XXX disabled - we don't need or want the nm node at this time
-# also configure ONOS to manage the nm node
-#NM="neutron-gateway"
-#NODES="$NODES $NM"
-
-NODECOUNT=0
-for NODE in $NODES; do
- ((NODECOUNT++))
-done
-
-I=0
-for NODE in $NODES; do
- echo $NODE
- NODEIP=`getent hosts $NODE | awk '{ print $1 }'`
-
- PHYPORT=veth1
- # How to set LOCALIP?
- LOCALIPNET="192.168.199"
-
- ((I++))
- cat >> $FN <<EOF
- {
- "hostname": "$NODE",
- "hostManagementIp": "$NODEIP/24",
- "bridgeId": "of:000000000000000$I",
- "dataPlaneIntf": "$PHYPORT",
- "dataPlaneIp": "$LOCALIPNET.$I/24"
-EOF
- if [[ "$I" -lt "$NODECOUNT" ]]; then
- echo " }," >> $FN
- else
- echo " }" >> $FN
- fi
-done
-
-# get the openstack admin password and username
-source $SETUPDIR/admin-openrc.sh
-NEUTRON_URL=`keystone endpoint-get --service network|grep publicURL|awk '{print $4}'`
-
-cat >> $FN <<EOF
- ]
- }
- },
- "org.onosproject.openstackinterface" : {
- "openstackinterface" : {
- "do_not_push_flows" : "true",
- "neutron_server" : "$NEUTRON_URL/v2.0/",
- "keystone_server" : "$OS_AUTH_URL/",
- "user_name" : "$OS_USERNAME",
- "password" : "$OS_PASSWORD"
- }
- }
- }
-}
-EOF
diff --git a/xos/configurations/cord-pod/vtn-external.yaml b/xos/configurations/cord-pod/vtn-external.yaml
index 0aaee67..74e7ef7 100644
--- a/xos/configurations/cord-pod/vtn-external.yaml
+++ b/xos/configurations/cord-pod/vtn-external.yaml
@@ -25,6 +25,4 @@
relationship: tosca.relationships.TenantOfService
properties:
dependencies: org.onosproject.drivers, org.onosproject.drivers.ovsdb, org.onosproject.openflow-base, org.onosproject.ovsdb-base, org.onosproject.dhcp, org.onosproject.cordvtn, org.onosproject.olt, org.onosproject.igmp, org.onosproject.cordmcast
- rest_onos/v1/network/configuration/: { get_artifact: [ SELF, vtn_network_cfg_json, LOCAL_FILE ] }
- artifacts:
- vtn_network_cfg_json: /root/setup/vtn-network-cfg.json
+ autogenerate: vtn-network-cfg
diff --git a/xos/configurations/cord/README.md b/xos/configurations/cord/README.md
index 606f12a..64075d9 100644
--- a/xos/configurations/cord/README.md
+++ b/xos/configurations/cord/README.md
@@ -7,8 +7,10 @@
* Brings up ONOS apps for controlling the dataplane: virtualbng, olt
* Configures XOS with the CORD services: vCPE, vBNG, vOLT
-**NOTE:** This configuration is under **active development** and is not yet finished! Some features are not
-fully working yet.
+**NOTE: This configuration is stale and likely not working at present. If you are looking to evaluate
+and/or contribute to [CORD](http://opencord.org/),
+you should look instead at the [cord-pod](../cord-pod) configuration. Almost
+all CORD developers have transitioned to [cord-pod](../cord-pod).**
## End-to-end dataplane
diff --git a/xos/synchronizers/onos/steps/sync_onosapp.py b/xos/synchronizers/onos/steps/sync_onosapp.py
index 2dfdfbd..1fc6579 100644
--- a/xos/synchronizers/onos/steps/sync_onosapp.py
+++ b/xos/synchronizers/onos/steps/sync_onosapp.py
@@ -1,7 +1,6 @@
import hashlib
import os
import socket
-import socket
import sys
import base64
import time
@@ -14,7 +13,7 @@
from synchronizers.base.syncstep import SyncStep
from synchronizers.base.ansible import run_template_ssh
from synchronizers.base.SyncInstanceUsingAnsible import SyncInstanceUsingAnsible
-from core.models import Service, Slice, ControllerSlice, ControllerUser
+from core.models import Service, Slice, Controller, ControllerSlice, ControllerUser, Node, TenantAttribute, Tag
from services.onos.models import ONOSService, ONOSApp
from xos.logger import Logger, logging
@@ -117,6 +116,127 @@
raise Exception("Controller user object for %s does not exist" % instance.creator)
return cuser.kuser_id
+
+ def node_tag_default(self, o, node, tagname, default):
+ tags = Tag.select_by_content_object(node).filter(name=tagname)
+ if tags:
+ value = tags[0].value
+ else:
+ value = default
+ logger.info("node %s: saving default value %s for tag %s" % (node.name, value, tagname))
+ service = self.get_onos_service(o)
+ tag = Tag(service=service, content_object=node, name=tagname, value=value)
+ tag.save()
+ return value
+
+ # Scan attrs for attribute name
+ # If it's not present, save it as a TenantAttribute
+ def attribute_default(self, tenant, attrs, name, default):
+ if name in attrs:
+ value = attrs[name]
+ else:
+ value = default
+ logger.info("saving default value %s for attribute %s" % (value, name))
+ ta = TenantAttribute(tenant=tenant, name=name, value=value)
+ ta.save()
+ return value
+
+ # This function currently assumes a single Deployment and Site
+ def get_vtn_config(self, o, attrs):
+
+ # The "attrs" argument contains a list of all service and tenant attributes
+ # If an attribute is present, use it in the configuration
+ # Otherwise save the attribute with a reasonable (for a CORD devel pod) default value
+ # The admin will see all possible configuration values and the assigned defaults
+ privateGatewayMac = self.attribute_default(o, attrs, "privateGatewayMac", "00:00:00:00:00:01")
+ localManagementIp = self.attribute_default(o, attrs, "localManagementIp", "172.27.0.1/24")
+ ovsdbPort = self.attribute_default(o, attrs, "ovsdbPort", "6641")
+ sshPort = self.attribute_default(o, attrs, "sshPort", "22")
+ sshUser = self.attribute_default(o, attrs, "sshUser", "root")
+ sshKeyFile = self.attribute_default(o, attrs, "sshKeyFile", "/root/node_key")
+
+ # OpenStack endpoints and credentials
+ keystone_server = "http://keystone:5000/v2.0/"
+ user_name = "admin"
+ password = "ADMIN_PASS"
+ controllers = Controller.objects.all()
+ if controllers:
+ controller = controllers[0]
+ keystone_server = controller.auth_url
+ user_name = controller.admin_user
+ password = controller.admin_password
+
+ # Put this in the controller object? Or fetch it from Keystone?
+ # Seems like VTN should be pulling it from Keystone
+ # For now let it be specified by a service / tenant attribute
+ neutron_server = self.attribute_default(o, attrs, "neutron_server", "http://neutron-api:9696/v2.0/")
+
+ data = {
+ "apps" : {
+ "org.onosproject.cordvtn" : {
+ "cordvtn" : {
+ "privateGatewayMac" : privateGatewayMac,
+ "localManagementIp": localManagementIp,
+ "ovsdbPort": ovsdbPort,
+ "sshPort": sshPort,
+ "sshUser": sshUser,
+ "sshKeyFile": sshKeyFile,
+ "publicGateways": [],
+ "nodes" : []
+ }
+ },
+ "org.onosproject.openstackinterface" : {
+ "openstackinterface" : {
+ "do_not_push_flows" : "true",
+ "neutron_server" : neutron_server,
+ "keystone_server" : keystone_server,
+ "user_name" : user_name,
+ "password" : password
+ }
+ }
+ }
+ }
+
+ # Generate apps->org.onosproject.cordvtn->cordvtn->nodes
+
+ # We need to generate a CIDR address for the physical node's
+ # address on the management network
+ mgmtSubnetBits = self.attribute_default(o, attrs, "mgmtSubnetBits", "24")
+
+ nodes = Node.objects.all()
+ for node in nodes:
+ nodeip = socket.gethostbyname(node.name)
+
+ try:
+ bridgeId = self.node_tag_default(o, node, "bridgeId", "of:0000000000000001")
+ dataPlaneIntf = self.node_tag_default(o, node, "dataPlaneIntf", "veth1")
+ # This should be generated from the AddressPool if not present
+ dataPlaneIp = self.node_tag_default(o, node, "dataPlaneIp", "192.168.199.1/24")
+ except Exception:
+ logger.error("not adding node %s to the VTN configuration" % node.name)
+ continue
+
+ node_dict = {
+ "hostname": node.name,
+ "hostManagementIp": "%s/%s" % (nodeip, mgmtSubnetBits),
+ "bridgeId": bridgeId,
+ "dataPlaneIntf": dataPlaneIntf,
+ "dataPlaneIp": dataPlaneIp
+ }
+ data["apps"]["org.onosproject.cordvtn"]["cordvtn"]["nodes"].append(node_dict)
+
+ # Generate apps->org.onosproject.cordvtn->cordvtn->publicGateways
+ # This should come from the vRouter service, but stick it in an attribute for now
+ gatewayIp = self.attribute_default(o, attrs, "gatewayIp", "10.168.0.1")
+ gatewayMac = self.attribute_default(o, attrs, "gatewayMac", "02:42:0a:a8:00:01")
+ gateway_dict = {
+ "gatewayIp": gatewayIp,
+ "gatewayMac": gatewayMac
+ }
+ data["apps"]["org.onosproject.cordvtn"]["cordvtn"]["publicGateways"].append(gateway_dict)
+
+ return json.dumps(data, indent=4, sort_keys=True)
+
def write_configs(self, o):
o.config_fns = []
o.rest_configs = []
@@ -153,6 +273,33 @@
file(os.path.join(o.files_dir, fn),"w").write(" " +value)
o.early_rest_configs.append( {"endpoint": endpoint, "fn": fn} )
+ # Generate config files and save them to the appropriate tenant attributes
+ autogen = []
+ for key, value in attrs.iteritems():
+ if key == "autogenerate" and value:
+ autogen.append(value)
+ for label in autogen:
+ config = None
+ value = None
+ if label == "vtn-network-cfg":
+ # Generate the VTN config file... where should this live?
+ config = "rest_onos/v1/network/configuration/"
+ value = self.get_vtn_config(o, attrs)
+ if config:
+ tas = TenantAttribute.objects.filter(tenant=o, name=config)
+ if tas:
+ ta = tas[0]
+ if ta.value != value:
+ logger.info("updating %s with autogenerated config" % config)
+ ta.value = value
+ ta.save()
+ attrs[config] = value
+ else:
+ logger.info("saving autogenerated config %s" % config)
+ ta = TenantAttribute(tenant=o, name=config, value=value)
+ ta.save()
+ attrs[config] = value
+
for name in attrs.keys():
value = attrs[name]
if name.startswith("config_"):
diff --git a/xos/synchronizers/openstack/steps/sync_images.py b/xos/synchronizers/openstack/steps/sync_images.py
index 8049ac1..1638fd0 100644
--- a/xos/synchronizers/openstack/steps/sync_images.py
+++ b/xos/synchronizers/openstack/steps/sync_images.py
@@ -27,7 +27,7 @@
if os.path.exists(images_path):
for f in os.listdir(images_path):
filename = os.path.join(images_path, f)
- if os.path.isfile(filename):
+ if os.path.isfile(filename) and filename.endswith(".img"):
available_images[f] = filename
logger.info("SyncImages: available_images = %s" % str(available_images))
diff --git a/xos/synchronizers/vcpe/steps/sync_vcpetenant_vtn.yaml b/xos/synchronizers/vcpe/steps/sync_vcpetenant_vtn.yaml
index 04521dc..618e9de 100644
--- a/xos/synchronizers/vcpe/steps/sync_vcpetenant_vtn.yaml
+++ b/xos/synchronizers/vcpe/steps/sync_vcpetenant_vtn.yaml
@@ -294,7 +294,7 @@
service: name={{ container_name }} state=started
- name: reload ufw
- shell: docker exec {{ container_name }} bash -c "/sbin/iptables -t nat -F PREROUTING; /usr/sbin/ufw reload"
+ shell: docker exec {{ container_name }} bash -c "/sbin/iptables -t nat -F PREROUTING; /sbin/iptables -t nat -F POSTROUTING; /usr/sbin/ufw reload"
- name: rerun /etc/rc.local
shell: docker exec {{ container_name }} bash -c "/etc/rc.local"
diff --git a/xos/tosca/custom_types/xos.m4 b/xos/tosca/custom_types/xos.m4
index 2ccb0ee..fd4f8ec 100644
--- a/xos/tosca/custom_types/xos.m4
+++ b/xos/tosca/custom_types/xos.m4
@@ -204,6 +204,9 @@
rest_onos/v1/network/configuration/:
type: string
required: false
+ autogenerate:
+ type: string
+ required: false
tosca.nodes.VSGService:
description: >
diff --git a/xos/tosca/custom_types/xos.yaml b/xos/tosca/custom_types/xos.yaml
index 2502690..9bdcfc3 100644
--- a/xos/tosca/custom_types/xos.yaml
+++ b/xos/tosca/custom_types/xos.yaml
@@ -262,6 +262,9 @@
rest_onos/v1/network/configuration/:
type: string
required: false
+ autogenerate:
+ type: string
+ required: false
tosca.nodes.VSGService:
description: >
diff --git a/xos/tosca/resources/onosapp.py b/xos/tosca/resources/onosapp.py
index 321600d..72511b3 100644
--- a/xos/tosca/resources/onosapp.py
+++ b/xos/tosca/resources/onosapp.py
@@ -61,7 +61,8 @@
self.set_tenant_attr(obj, k, v)
elif k.startswith("component_config"):
self.set_tenant_attr(obj, k, v)
+ elif k == "autogenerate":
+ self.set_tenant_attr(obj, k, v)
def can_delete(self, obj):
return super(XOSONOSApp, self).can_delete(obj)
-
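The attribute-default pattern introduced in `sync_onosapp.py` above (look up a value in the collected attributes, fall back to a devel-pod default, and persist the default so the admin can see every tunable) can be sketched standalone as follows. `TenantAttribute` here is a stand-in stub for the real XOS model, and `build_vtn_config` only covers a few of the cordvtn keys for illustration:

```python
# Minimal sketch of the attribute-default pattern used by get_vtn_config().
# TenantAttribute is a stub, not the real XOS model.

import json

class TenantAttribute:
    saved = []  # records every attribute persisted with a default value
    def __init__(self, tenant, name, value):
        self.tenant, self.name, self.value = tenant, name, value
    def save(self):
        TenantAttribute.saved.append((self.name, self.value))

def attribute_default(tenant, attrs, name, default):
    # Use the operator-supplied value when present; otherwise persist
    # the default so the tunable becomes visible in the admin UI.
    if name in attrs:
        return attrs[name]
    TenantAttribute(tenant, name, default).save()
    return default

def build_vtn_config(tenant, attrs):
    cfg = {
        "apps": {
            "org.onosproject.cordvtn": {
                "cordvtn": {
                    "privateGatewayMac": attribute_default(
                        tenant, attrs, "privateGatewayMac", "00:00:00:00:00:01"),
                    "localManagementIp": attribute_default(
                        tenant, attrs, "localManagementIp", "172.27.0.1/24"),
                    "ovsdbPort": attribute_default(
                        tenant, attrs, "ovsdbPort", "6641"),
                    "publicGateways": [],
                    "nodes": [],
                }
            }
        }
    }
    return json.dumps(cfg, indent=4, sort_keys=True)

attrs = {"ovsdbPort": "6640"}  # operator overrides one value
config = build_vtn_config("vtn", attrs)
cordvtn = json.loads(config)["apps"]["org.onosproject.cordvtn"]["cordvtn"]
print(cordvtn["ovsdbPort"])    # the override wins; only the defaults were persisted
```

The real synchronizer then stores the generated JSON in the `rest_onos/v1/network/configuration/` tenant attribute when the TOSCA recipe requests `autogenerate: vtn-network-cfg`, and skips the update when the stored value is unchanged.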