Merge branch 'master' of github.com:jermowery/xos into AddVPNService
diff --git a/xos/configurations/cord-pod/README.md b/xos/configurations/cord-pod/README.md
index 0fcdb13..1824ab6 100644
--- a/xos/configurations/cord-pod/README.md
+++ b/xos/configurations/cord-pod/README.md
@@ -1,77 +1,79 @@
-# XOS Docker Images
+# XOS Configuration for CORD development POD
## Introduction
- XOS is comprised of 3 core services:
+This directory holds files that are used to configure a development POD for
+CORD. For more information on the CORD project, check out
+[the CORD website](http://cord.onosproject.org/).
+
+XOS is composed of several core services:
* A database backend (postgres)
* A webserver front end (django)
- * A synchronizer daemon that interacts with the openstack backend.
+ * A synchronizer daemon that interacts with the openstack backend
+ * A synchronizer for each configured XOS service
-We have created separate dockerfiles for each of these services, making it
-easier to build the services independently and also deploy and run them in
-isolated environments.
+Each service runs in a separate Docker container. The containers are built
+automatically by [Docker Hub](https://hub.docker.com/u/xosproject/) using
+the HEAD of the XOS repository.
-#### Database Container
+## How to bring up CORD
-To build the database container:
+Installing a CORD POD requires three steps:
+ 1. Installing OpenStack on a cluster
+ 2. Setting up the ONOS VTN app and configuring OVS on the nova-compute nodes to be
+ controlled by VTN
+ 3. Bringing up XOS with the CORD services
+### Installing OpenStack
+
+Follow the instructions in the [README.md](https://github.com/open-cloud/openstack-cluster-setup/blob/master/README.md)
+file of the [open-cloud/openstack-cluster-setup](https://github.com/open-cloud/openstack-cluster-setup/)
+repository.
+
+### Setting up ONOS VTN
+
+The OpenStack installer above creates a VM called *onos-cord* on the head node.
+To bring up ONOS in this VM, log into the head node and run:
```
-$ cd postgresql; make build
+$ ssh ubuntu@onos-cord
+ubuntu@onos-cord:~$ cd cord; docker-compose up -d
```
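+To verify that ONOS has come up, you can attach to its CLI (a sketch, assuming
+the standard ONOS karaf CLI port 8101 is reachable from the VM; the default
+user/password is karaf/karaf):
+```
+ubuntu@onos-cord:~$ ssh -p 8101 karaf@localhost
+```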
-#### XOS Container
-
-To build the XOS webserver container:
-
+Currently it's also necessary to do some manual configuration on each compute
+node. As root, do the following:
+ 1. Disable neutron-plugin-openvswitch-agent, if running:
```
-$ cd xos; make build
+$ service neutron-plugin-openvswitch-agent stop
+$ echo manual > /etc/init/neutron-plugin-openvswitch-agent.override
+```
+ 2. Delete *br-int* and all other bridges from OVS
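+For example, a minimal sketch (check the current bridge list with `ovs-vsctl show` first):
+```
+$ for br in $(ovs-vsctl list-br); do ovs-vsctl del-br $br; done
+```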
+ 3. Configure OVS to listen for connections from VTN:
+```
+$ ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6641
```
-#### Synchronizer Container
+### Bringing up XOS
-The Synchronizer shares many of the same dependencies as the XOS container. The
-synchronizer container takes advantage of this by building itself on top of the
-XOS image. This means you must build the XOS image before building the
-synchronizer image. Assuming you have already built the XOS container,
-executing the following will build the Synchronizer container:
-
+The OpenStack installer above creates a VM called *xos* on the head node.
+To bring up XOS in this VM, first log into the head node and run:
```
-$ cd synchronizer; make build
+$ ssh ubuntu@xos
+ubuntu@xos:~$ cd xos/xos/configurations/cord-pod
```
-#### Solution Compose File
+Next, put the following files in this directory:
-[Docker Compose](https://docs.docker.com/compose/) is a tool for defining and
-running multi-container Docker applications. With Compose, you use a Compose
-file to configure your application’s services. Then, using a single command, you
-create, start, scale, and manage all the services from your configuration.
+ * *admin-openrc.sh*: Admin credentials for your OpenStack cloud
+ * *id_rsa[.pub]*: A keypair that will be used by the various services
+ * *node_key*: A private key that allows root login to the compute nodes
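+
+Before proceeding, verify that the OpenStack credentials work from this VM,
+e.g.:
+```
+ubuntu@xos:~/xos/xos/configurations/cord-pod$ source ./admin-openrc.sh; nova list
+```
+If you don't already have a keypair for the services, one can be generated in
+place (a sketch; adjust the key type and options to your environment):
+```
+ubuntu@xos:~/xos/xos/configurations/cord-pod$ ssh-keygen -t rsa -f id_rsa -N ""
+```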
-Included is a compose file in *YAML* format with content defined by the [Docker
-Compose Format](https://docs.docker.com/compose/compose-file/). With the compose
-file a complete XOS solution based on Docker containers can be instantiated
-using a single command. To start the instance you can use the command:
-
+Then XOS can be brought up for CORD by running a few `make` commands:
```
-$ docker-compose up -d
+ubuntu@xos:~/xos/xos/configurations/cord-pod$ make
+ubuntu@xos:~/xos/xos/configurations/cord-pod$ make vtn
+ubuntu@xos:~/xos/xos/configurations/cord-pod$ make cord
```
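+You can check that the XOS containers came up (assuming the Makefile drives
+Docker Compose, as in the other XOS configurations):
+```
+ubuntu@xos:~/xos/xos/configurations/cord-pod$ docker-compose ps
+```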
-You should now be able to access the login page by visiting
-`http://localhost:8000` and log in using the default `padmin@vicci.org` account
-with password `letmein`.
-
-#### Configuring XOS for OpenStack
-
-If you have your own OpenStack cluster, and you would like to configure XOS to
-control it, copy the `admin-openrc.sh` credentials file for your cluster to
-this directory. Make sure that OpenStack commands work from the local machine
-using the credentials, e.g., `source ./admin-openrc.sh; nova list`. Then run:
-
-```
-$ make
-```
-
-XOS will be launched (the Makefile will run the `docker-compose up -d` command
-for you) and configured with the nodes and images available in your
-OpenStack cloud. You can then log in to XOS as described above and start creating
-slices and instances.
+After the first `make` command above, you will be able to log in to XOS at
+*http://xos/* using the username/password `padmin@vicci.org` / `letmein`.
diff --git a/xos/configurations/cord-pod/vtn-external.yaml b/xos/configurations/cord-pod/vtn-external.yaml
index 9c1a550..315fc20 100644
--- a/xos/configurations/cord-pod/vtn-external.yaml
+++ b/xos/configurations/cord-pod/vtn-external.yaml
@@ -24,7 +24,7 @@
node: service_ONOS_VTN
relationship: tosca.relationships.TenantOfService
properties:
- dependencies: org.onosproject.drivers, org.onosproject.drivers.ovsdb, org.onosproject.lldpprovider, org.onosproject.openflow-base, org.onosproject.ovsdb-base, org.onosproject.dhcp, org.onosproject.openstackswitching, org.onosproject.cordvtn
+ dependencies: org.onosproject.drivers, org.onosproject.drivers.ovsdb, org.onosproject.openflow-base, org.onosproject.ovsdb-base, org.onosproject.dhcp, org.onosproject.openstackswitching, org.onosproject.cordvtn, org.onosproject.olt, org.onosproject.igmp, org.onosproject.cordmcast
rest_onos/v1/network/configuration/: { get_artifact: [ SELF, vtn_network_cfg_json, LOCAL_FILE ] }
artifacts:
vtn_network_cfg_json: /root/setup/vtn-network-cfg.json
diff --git a/xos/configurations/cord/README-VTN.md b/xos/configurations/cord/README-VTN.md
index 3d61940..b3c0c61 100644
--- a/xos/configurations/cord/README-VTN.md
+++ b/xos/configurations/cord/README-VTN.md
@@ -1,4 +1,4 @@
-vtn notes:
+# VTN notes
see also: https://github.com/hyunsun/documentations/wiki/Neutron-ONOS-Integration-for-CORD-VTN#onos-setup
@@ -15,7 +15,7 @@
use_vtn=True
supervisorctl restart observer
-ctl node:
+### ctl node:
# set ONOS_VTN_HOSTNAME to the host where the VTN container was installed
ONOS_VTN_HOSTNAME="cp-2.smbaker-xos5.xos-pg0.clemson.cloudlab.us"
@@ -41,7 +41,7 @@
# files. Maybe it can be restarted using systemctl instead...
/usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /usr/local/etc/neutron/plugins/ml2/conf_onos.ini
-Compute nodes and nm nodes:
+### Compute nodes and nm nodes:
cd xos/configurations/cord/dataplane
./generate-bm.sh > hosts-bm
@@ -55,22 +55,14 @@
Additional compute node stuff:
-Br-flat-lan-1 needs to be deleted, since VTN will be attaching br-int directly to the eth device that br-flat-lan-1 was using. Additionally, we need to assign an IP address to br-int (sounds like Hyunsun is working on having VTN do that for us). Adding the route was not in Hyunsun's instructions, but I found I had to do it in order to get the compute nodes to talk to one another.
+I've been deleting any existing unused bridges; I'm not sure whether it's necessary.
ovs-vsctl del-br br-tun
ovs-vsctl del-br br-flat-lan-1
- ip addr add <addr-that-was-assinged-to-flat-lan-1> dev br-int
- ip link set br-int up
- ip route add <network-that-was-assigned-to-flat-lan-1>/24 dev br-int
-
-To get the management network working, we need to create management network template, slice, and network. configurations/cord/vtn.yaml will do this for you. Then add a connection to the management network for any slice that needs management connectivity. Note the subnet that gets assigned to the management network. Management-gateway-ip is the .1 address on the subnet. On the compute node:
- ip addr add <management-gateway-ip>/24 dev br-int
+To get the management network working, we need to create a management network template, slice, and network; configurations/cord/vtn.yaml will do this for you. Then add a connection to the management network for any slice that needs management connectivity.
-For development, I suggest using the bash configuration (remember to start the ONOS observer manually) so that
-there aren't a bunch of preexisting Neutron networks and nova instances to get in the way.
-
-Notes:
+### Notes:
* I've configured the Open vSwitch switches to use port 6641 instead of port 6640. This is because the VTN app listens on 6640
itself, and since we're running it in docker 'host' networking mode now, it would conflict with an Open vSwitch that was
also listening on 6640.
@@ -79,7 +71,7 @@
* Note that the VTN Synchronizer isn't started automatically. It's only used for inter-Service connectivity, so there's no need to mess with it until intra-Slice connectivity is working first.
* Note that the VTN Synchronizer won't connect non-access networks. Any network templates you want VTN to connect must have Access set to "Direct" or "Indirect".
-There is no management network yet, so no way to SSH into the slices. I've been setting up a VNC tunnel, like this:
+In case the management network isn't working, you can use a VNC tunnel, like this:
# on compute node, run the following and note the IP address and port number
virsh vncdisplay <instance-id>
@@ -92,13 +84,13 @@
Then open a VNC session to the local port on your local machine. You'll have a console on the Instance. The username is "Ubuntu" and the password can be obtained from your CloudLab experiment description.
-Things that can be tested:
+### Things that can be tested:
* Create an Instance, it should have a Private network, and there should be a tap attached from the instance to br-int
* Two Instances in the same Slice can talk to one another. They can be on the same machine or different machines.
* Two Slices can talk to one another if the slices are associated with Services and those Services have a Tenancy relationship between them. Note that 1) The VTN Synchronizer must be running, 2) There must be a Private network with Access=[Direct|Indirect], and 3) The connectivity is unidirectional, from subscriber service to provider service.
-Testing service composition
+### Testing service composition
1. Change the private network template's 'Access' field from None to Direct
2. Create a Service, Service-A
@@ -113,21 +105,11 @@
11. You should see the pings arrive and responses sent out. Note that the ping responses will not reach Slice-1, since VTN traffic is unidirectional.
12. Delete the Tenancy relation you created in Step #7. The ping traffic should no longer appear in the tcpdump.
-Getting external connectivity working on cloudlab
+### Getting external connectivity working on CloudLab
-Inside of vSG:
-
- ip link add link eth0 eth0.500 type vlan id 500
- ifconfig eth0.500 up
- route del default gw 172.27.0.1
- /sbin/ifconfig eth0.500 10.123.0.3
- route del -net 10.0.0.0 netmask 255.0.0.0 dev eth0.500 # only need to do this if this route exists
- route add -net 10.123.0.0 netmask 255.255.255.0 dev eth0.500
- route add default gw 10.123.0.1
- arp -s 10.123.0.1 00:8c:fa:5b:09:d8
-
On head node:
+ ovs-vsctl del-br br-flat-lan-1
ifconfig eth2 10.123.0.1
iptables --table nat --append POSTROUTING --out-interface br-ex -j MASQUERADE
arp -s 10.123.0.3 fa:16:3e:ea:11:0a
@@ -139,3 +121,31 @@
fa:16:3e:ea:11:0a = wan_mac of vSG
00:8c:fa:5b:09:d8 = wan_mac of gateway
+### Setting up a test-client
+
+Before setting up VTN, create a bridge and attach it to the dataplane device on each compute node:
+
+ brctl addbr br-inject
+ brctl addif br-inject eth3 # substitute dataplane eth device here, may be different on each compute node
+ ip link set br-inject up
+ ip link set dev br-inject promisc on
+
+Then update the network-config attribute of the VTN ONOS App in XOS to use a dataplaneIntf of br-inject instead of the eth device. Bring up VTN and a vSG. WAN connectivity and everything else should work fine.
+
+Add a new slice, mysite_client, and make sure to give it both a private and a management network. Bring up an instance on the same node as the vSG you want to test. On the compute node, run the following:
+
+ MAC=<make-up-some-mac>
+ INSTANCE=<instance-id>
+ virsh attach-interface --domain $INSTANCE --type bridge --source br-inject --model virtio --mac $MAC --config --live
+
+Log into the vSG via the management interface. Inside the vSG, run the following:
+
+ STAG=<your s-tag here>
+ CTAG=<your c-tag here>
+ ip link add link eth2 eth2.$STAG type vlan id $STAG
+ ip link add link eth2.$STAG eth2.$STAG.$CTAG type vlan id $CTAG
+ ip link set eth2.$STAG up
+ ip link set eth2.$STAG.$CTAG up
+ ip addr add 192.168.0.2/24 dev eth2.$STAG.$CTAG
+ ip route del default
+ ip route add default via 192.168.0.1
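+
+To check that the tagged interface works (assuming 192.168.0.1 is reachable as
+the far-end gateway), ping it from inside the vSG:
+
+ ping -c 3 192.168.0.1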
diff --git a/xos/configurations/vtn/cord-vtn-vsg.yaml b/xos/configurations/vtn/cord-vtn-vsg.yaml
index f08a1b9..1b26bba 100644
--- a/xos/configurations/vtn/cord-vtn-vsg.yaml
+++ b/xos/configurations/vtn/cord-vtn-vsg.yaml
@@ -37,6 +37,7 @@
wan_container_gateway_ip: 10.123.0.1
wan_container_gateway_mac: 00:8c:fa:5b:09:d8
wan_container_netbits: 24
+ dns_servers: 8.8.8.8, 8.8.4.4
artifacts:
pubkey: /opt/xos/synchronizers/vcpe/vcpe_public_key
diff --git a/xos/configurations/vtn/docker-compose.yml b/xos/configurations/vtn/docker-compose.yml
index 7fb68f1..0fa718b 100644
--- a/xos/configurations/vtn/docker-compose.yml
+++ b/xos/configurations/vtn/docker-compose.yml
@@ -42,7 +42,7 @@
xos:
image: xosproject/xos
- command: python /opt/xos/manage.py runserver 0.0.0.0:8000 --insecure --makemigrations
+ command: bash -c "python /opt/xos/manage.py runserver 0.0.0.0:8000 --insecure --makemigrations"
ports:
- "9999:8000"
links:
diff --git a/xos/core/admin.py b/xos/core/admin.py
index f5578ec..b66f5a6 100644
--- a/xos/core/admin.py
+++ b/xos/core/admin.py
@@ -1259,7 +1259,7 @@
]
readonly_fields = ('backend_status_text', )
- suit_form_tabs =(('general','Image Details'),('instances','Instances'),('imagedeployments','Deployments'), ('controllerimages', 'Controllers'))
+ suit_form_tabs =(('general','Image Details'),('instances','Instances'),('imagedeployments','Deployments'), ('admin-only', 'Admin-Only'))
inlines = [InstanceInline, ControllerImagesInline]
diff --git a/xos/core/xoslib/methods/ceilometerview.py b/xos/core/xoslib/methods/ceilometerview.py
index 5f99b61..9e46aa7 100644
--- a/xos/core/xoslib/methods/ceilometerview.py
+++ b/xos/core/xoslib/methods/ceilometerview.py
@@ -1246,8 +1246,13 @@
query = make_query(tenant_id=meter["project_id"],resource_id=meter["resource_id"])
if additional_query:
query = query + additional_query
- statistics = statistic_list(request, meter["name"],
+ try:
+ statistics = statistic_list(request, meter["name"],
ceilometer_url=tenant_ceilometer_url, query=query, period=3600*24)
+ except Exception as e:
+ logger.error('Exception during statistics query for meter %(meter)s: %(reason)s' % {'meter':meter["name"], 'reason':str(e)})
+ statistics = None
+
if not statistics:
continue
statistic = statistics[-1]
@@ -1398,8 +1403,13 @@
query = make_query(tenant_id=meter["project_id"],resource_id=meter["resource_id"])
if additional_query:
query = query + additional_query
- statistics = statistic_list(request, meter["name"],
+ try:
+ statistics = statistic_list(request, meter["name"],
ceilometer_url=tenant_ceilometer_url, query=query, period=3600*24)
+ except Exception as e:
+ logger.error('Exception during statistics query for meter %(meter)s: %(reason)s' % {'meter':meter["name"], 'reason':str(e)})
+ statistics = None
+
if not statistics:
continue
statistic = statistics[-1]
diff --git a/xos/services/cord/admin.py b/xos/services/cord/admin.py
index 40e0f29..76b505c 100644
--- a/xos/services/cord/admin.py
+++ b/xos/services/cord/admin.py
@@ -107,6 +107,7 @@
wan_container_gateway_ip = forms.CharField(required=False)
wan_container_gateway_mac = forms.CharField(required=False)
wan_container_netbits = forms.CharField(required=False)
+ dns_servers = forms.CharField(required=False)
def __init__(self,*args,**kwargs):
super (VSGServiceForm,self ).__init__(*args,**kwargs)
@@ -119,6 +120,7 @@
self.fields['wan_container_gateway_ip'].initial = self.instance.wan_container_gateway_ip
self.fields['wan_container_gateway_mac'].initial = self.instance.wan_container_gateway_mac
self.fields['wan_container_netbits'].initial = self.instance.wan_container_netbits
+ self.fields['dns_servers'].initial = self.instance.dns_servers
def save(self, commit=True):
self.instance.bbs_api_hostname = self.cleaned_data.get("bbs_api_hostname")
@@ -129,6 +131,7 @@
self.instance.wan_container_gateway_ip = self.cleaned_data.get("wan_container_gateway_ip")
self.instance.wan_container_gateway_mac = self.cleaned_data.get("wan_container_gateway_mac")
self.instance.wan_container_netbits = self.cleaned_data.get("wan_container_netbits")
+ self.instance.dns_servers = self.cleaned_data.get("dns_servers")
return super(VSGServiceForm, self).save(commit=commit)
class Meta:
@@ -136,14 +139,16 @@
class VSGServiceAdmin(ReadOnlyAwareAdmin):
model = VSGService
- verbose_name = "vCPE Service"
- verbose_name_plural = "vCPE Service"
+ verbose_name = "vSG Service"
+ verbose_name_plural = "vSG Service"
list_display = ("backend_status_icon", "name", "enabled")
list_display_links = ('backend_status_icon', 'name', )
fieldsets = [(None, {'fields': ['backend_status_text', 'name','enabled','versionNumber', 'description', "view_url", "icon_url", "service_specific_attribute",],
'classes':['suit-tab suit-tab-general']}),
- ("backend config", {'fields': [ "backend_network_label", "bbs_api_hostname", "bbs_api_port", "bbs_server", "bbs_slice", "wan_container_gateway_ip", "wan_container_gateway_mac", "wan_container_netbits"],
- 'classes':['suit-tab suit-tab-backend']}) ]
+ ("backend config", {'fields': [ "backend_network_label", "bbs_api_hostname", "bbs_api_port", "bbs_server", "bbs_slice"],
+ 'classes':['suit-tab suit-tab-backend']}),
+ ("vSG config", {'fields': [ "wan_container_gateway_ip", "wan_container_gateway_mac", "wan_container_netbits", "dns_servers"],
+ 'classes':['suit-tab suit-tab-vsg']}) ]
readonly_fields = ('backend_status_text', "service_specific_attribute")
inlines = [SliceInline,ServiceAttrAsTabInline,ServicePrivilegeInline]
form = VSGServiceForm
@@ -154,6 +159,7 @@
suit_form_tabs =(('general', 'Service Details'),
('backend', 'Backend Config'),
+ ('vsg', 'vSG Config'),
('administration', 'Administration'),
#('tools', 'Tools'),
('slices','Slices'),
diff --git a/xos/services/cord/models.py b/xos/services/cord/models.py
index 959bf19..37ee78e 100644
--- a/xos/services/cord/models.py
+++ b/xos/services/cord/models.py
@@ -402,7 +402,8 @@
("backend_network_label", "hpc_client"),
("wan_container_gateway_ip", ""),
("wan_container_gateway_mac", ""),
- ("wan_container_netbits", "24") )
+ ("wan_container_netbits", "24"),
+ ("dns_servers", "8.8.8.8") )
def __init__(self, *args, **kwargs):
super(VSGService, self).__init__(*args, **kwargs)
diff --git a/xos/synchronizers/vcpe/steps/sync_vcpetenant.py b/xos/synchronizers/vcpe/steps/sync_vcpetenant.py
index cd8a292..5e48837 100644
--- a/xos/synchronizers/vcpe/steps/sync_vcpetenant.py
+++ b/xos/synchronizers/vcpe/steps/sync_vcpetenant.py
@@ -165,7 +165,8 @@
"wan_container_netbits": vcpe_service.wan_container_netbits,
"wan_vm_mac": wan_vm_mac,
"wan_vm_ip": wan_vm_ip,
- "safe_browsing_macs": safe_macs}
+ "safe_browsing_macs": safe_macs,
+ "dns_servers": [x.strip() for x in vcpe_service.dns_servers.split(",")] }
# add in the sync_attributes that come from the SubscriberRoot object
diff --git a/xos/synchronizers/vcpe/steps/sync_vcpetenant.yaml b/xos/synchronizers/vcpe/steps/sync_vcpetenant.yaml
index d887547..585f68a 100644
--- a/xos/synchronizers/vcpe/steps/sync_vcpetenant.yaml
+++ b/xos/synchronizers/vcpe/steps/sync_vcpetenant.yaml
@@ -33,6 +33,10 @@
{% for bbs_addr in bbs_addrs %}
- {{ bbs_addr }}
{% endfor %}
+ dns_servers:
+ {% for dns_server in dns_servers %}
+ - {{ dns_server }}
+ {% endfor %}
nat_ip: {{ nat_ip }}
nat_mac: {{ nat_mac }}
lan_ip: {{ lan_ip }}
diff --git a/xos/synchronizers/vcpe/steps/sync_vcpetenant_new.yaml b/xos/synchronizers/vcpe/steps/sync_vcpetenant_new.yaml
index 59047c1..071c30a 100644
--- a/xos/synchronizers/vcpe/steps/sync_vcpetenant_new.yaml
+++ b/xos/synchronizers/vcpe/steps/sync_vcpetenant_new.yaml
@@ -34,6 +34,10 @@
{% for bbs_addr in bbs_addrs %}
- {{ bbs_addr }}
{% endfor %}
+ dns_servers:
+ {% for dns_server in dns_servers %}
+ - {{ dns_server }}
+ {% endfor %}
nat_ip: {{ nat_ip }}
nat_mac: {{ nat_mac }}
lan_ip: {{ lan_ip }}
diff --git a/xos/synchronizers/vcpe/steps/sync_vcpetenant_vtn.yaml b/xos/synchronizers/vcpe/steps/sync_vcpetenant_vtn.yaml
index 96dc16c..819dcc5 100644
--- a/xos/synchronizers/vcpe/steps/sync_vcpetenant_vtn.yaml
+++ b/xos/synchronizers/vcpe/steps/sync_vcpetenant_vtn.yaml
@@ -33,6 +33,10 @@
{% for bbs_addr in bbs_addrs %}
- {{ bbs_addr }}
{% endfor %}
+ dns_servers:
+ {% for dns_server in dns_servers %}
+ - {{ dns_server }}
+ {% endfor %}
nat_ip: {{ nat_ip }}
nat_mac: {{ nat_mac }}
lan_ip: {{ lan_ip }}
diff --git a/xos/synchronizers/vcpe/templates/dnsmasq_servers.j2 b/xos/synchronizers/vcpe/templates/dnsmasq_servers.j2
index c89c762..3682cdf 100644
--- a/xos/synchronizers/vcpe/templates/dnsmasq_servers.j2
+++ b/xos/synchronizers/vcpe/templates/dnsmasq_servers.j2
@@ -10,5 +10,7 @@
{% endif %}
# use the DNS servers configured for this service (default: Google's)
-server=8.8.8.8
-server=8.8.4.4
+{% for dns_server in dns_servers %}
+server={{ dns_server }}
+{% endfor %}
+
diff --git a/xos/tosca/custom_types/xos.m4 b/xos/tosca/custom_types/xos.m4
index 80c0d91..15e9710 100644
--- a/xos/tosca/custom_types/xos.m4
+++ b/xos/tosca/custom_types/xos.m4
@@ -227,6 +227,9 @@
wan_container_netbits:
type: string
required: false
+ dns_servers:
+ type: string
+ required: false
tosca.nodes.VBNGService:
derived_from: tosca.nodes.Root
diff --git a/xos/tosca/custom_types/xos.yaml b/xos/tosca/custom_types/xos.yaml
index 2f404dc..88b3388 100644
--- a/xos/tosca/custom_types/xos.yaml
+++ b/xos/tosca/custom_types/xos.yaml
@@ -329,6 +329,9 @@
wan_container_netbits:
type: string
required: false
+ dns_servers:
+ type: string
+ required: false
tosca.nodes.VBNGService:
derived_from: tosca.nodes.Root
diff --git a/xos/tosca/resources/vcpeservice.py b/xos/tosca/resources/vcpeservice.py
index 1794010..5c7b2a7 100644
--- a/xos/tosca/resources/vcpeservice.py
+++ b/xos/tosca/resources/vcpeservice.py
@@ -12,5 +12,8 @@
class XOSVsgService(XOSService):
provides = "tosca.nodes.VSGService"
xos_model = VSGService
- copyin_props = ["view_url", "icon_url", "enabled", "published", "public_key", "private_key_fn", "versionNumber", "backend_network_label", "wan_container_gateway_ip", "wan_container_gateway_mac", "wan_container_netbits"]
+ copyin_props = ["view_url", "icon_url", "enabled", "published", "public_key",
+ "private_key_fn", "versionNumber", "backend_network_label",
+ "wan_container_gateway_ip", "wan_container_gateway_mac",
+ "wan_container_netbits", "dns_servers"]