Merge pull request #116 from teone/master
XOS dashboard view generator
diff --git a/xos/configurations/cord/README.md b/xos/configurations/cord/README.md
index e1b2d2e..606f12a 100644
--- a/xos/configurations/cord/README.md
+++ b/xos/configurations/cord/README.md
@@ -12,7 +12,7 @@
## End-to-end dataplane
-The configuration uses XOS to set up an end-to-end dataplane for development of the XOS services and ONOS apps
+The configuration uses XOS to set up an end-to-end dataplane for development of the XOS services and ONOS apps
used in CORD. It abstracts away most of the complexity of the CORD hardware using virtual networks
and Open vSwitch (OvS) switches. At a high level the dataplane looks like this:
@@ -49,10 +49,6 @@
```
ctl:~$ git clone https://github.com/open-cloud/xos.git
ctl:~$ cd xos/xos/configurations/cord/
-```
-Edit `cord.yaml` in this directory. Change the hostnames `cp-1.devel.xos-pg0.clemson.cloudlab.us` and
-`cp-2.devel.xos-pg0.clemson.cloudlab.us` to the names of the compute nodes in your experiment. Now run:
-```
ctl:~/xos/xos/configurations/cord$ make
```
@@ -62,12 +58,12 @@
1. *Slice mysite_onos*: runs the ONOS Docker container with the `virtualbng` app loaded
1. *Slice mysite_onos*: runs the ONOS Docker container with the `olt` app loaded
1. *Slice mysite_vbng*: for running OvS with the `virtualbng` app as controller
-1. *Slice mysite_volt*: for running OvS with the `olt` app as controller
+1. *Slice mysite_volt*: for running the CPqD switch with the `olt` app as controller
1. *Slice mysite_clients*: a subscriber client for end-to-end testing
-1. *Slice mysite_vcpe*: runs the vCPE Docker container
+1. *Slice mysite_vcpe*: runs the vCPE Docker container (if not using containers on bare metal)
Once all the VMs are up and the ONOS apps are configured, XOS should be able to get an address mapping from the `virtualbng`
-ONOS app for the vCPE. To verify that it has received an IP address mapping, look at the **Routeable subnet:** field in
+ONOS app for the vCPE. To verify that it has received an IP address mapping, look at the **Routeable subnet:** field in
the appropriate *Vbng tenant* object in XOS. It should contain an IP address in the 10.254.0.0/24 subnet.
After launching the ONOS apps, it is necessary to configure software switches along the dataplane so that ONOS can control
@@ -84,36 +80,27 @@
ctl:~/xos/xos/configurations/cord/dataplane$ sudo ansible-playbook -i hosts-bm dataplane-bm.yaml
```
-Check that the vCPE container has started, by going into the XOS UI, selecting 'Services', 'service_vcpe', 'Administration', 'Vcpe Tenants', and make sure there's a green icon next to the vCPE.
+Check that the vCPE container has started by going into the XOS UI, selecting 'Services', 'service_vcpe', 'Administration', 'Vcpe Tenants', and making sure there's a green icon next to the vCPE.
If the vCPE Tenant is still red, the Instance may be in exponential backoff due to errors that occurred while trying to sync before `dataplane.yaml` was run. You can reset the backoff by tracking down the vCPE Instance (Slices->mysite_vcpe->Instances; find the Instance associated with the vCPE Tenant) and hitting the save button.
-Currently the vOLT switch is not forwarding ARP and so it is necessary to set up ARP mappings between the client
-and vCPE. Log into the client and add an ARP entry for the vCPE:
-```
-client:$ sudo arp -s 192.168.0.1 <mac-of-eth1-in-vCPE-container>
-```
-Inside the vCPE container add a similar entry for the client:
-```
-vcpe:$ arp -s 192.168.0.2 <mac-of-br-sub-on-client>
-```
-
Now SSH into ONOS running the OLT app (see below) and activate the subscriber:
```
onos> add-subscriber-access of:0000000000000001 1 432
```
-At this point you should be able to ping 192.168.0.1 from the client. The final step is to set the
-vCPE as the gateway on the client:
+At this point the client should be able to get an IP address from the vCPE via
+DHCP. To set up the IP address and default route on the client:
```
client:$ sudo route del default gw 10.11.10.5
-client:$ sudo route add default gw 192.168.0.1
+client:$ sudo dhclient br-sub
```
-The client should now be able to surf the Internet through the dataplane.
+Once `dhclient` returns, the client should be able to surf the Internet
+through the dataplane.
## Setting up /etc/hosts
-To make it easy to log into the various VMs that make up the dataplane, add entries for them into `/etc/hosts` on the
+To make it easy to log into the various VMs that make up the dataplane, add entries for them into `/etc/hosts` on the
*ctl* node. As root, run:
```
ctl:~/xos/xos/configurations/cord/dataplane$ ./gen-etc-hosts.sh >> /etc/hosts
@@ -125,7 +112,7 @@
## How to log into ONOS
-ONOS apps are run inside Docker containers hosted in VMs. All ports exposed by the ONOS container are forwarded to the
+ONOS apps are run inside Docker containers hosted in VMs. All ports exposed by the ONOS container are forwarded to the
outside, and can be accessed from the *ctl* node over the `flat-lan-1-net` network. Assuming that `/etc/hosts`
has been configured as described above, it is possible to SSH to the ONOS running the `virtualbng` app as follows (password is *karaf*):
diff --git a/xos/configurations/cord/dataplane/dataplane.yaml b/xos/configurations/cord/dataplane/dataplane.yaml
index 489609a..a0950be 100644
--- a/xos/configurations/cord/dataplane/dataplane.yaml
+++ b/xos/configurations/cord/dataplane/dataplane.yaml
@@ -212,7 +212,7 @@
- name: Run some commands on br-sub
shell: "{{ item }}"
with_items:
- - ifconfig br-sub 192.168.0.2 mtu 1400 up
+ - ifconfig br-sub 0.0.0.0 mtu 1400 up
- ethtool -K br-sub tso off
- ethtool -K br-sub tx off
diff --git a/xos/core/xoslib/methods/ceilometerview.py b/xos/core/xoslib/methods/ceilometerview.py
index a453524..a416110 100644
--- a/xos/core/xoslib/methods/ceilometerview.py
+++ b/xos/core/xoslib/methods/ceilometerview.py
@@ -903,6 +903,8 @@
query = make_query(tenant_id=meter["project_id"])
statistics = statistic_list(request, meter["name"],
ceilometer_url=tenant_ceilometer_url, query=query, period=3600*24)
+ if not statistics:
+ continue
statistic = statistics[0]
row = {"name": 'none',
"project": meter["project_name"],
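The `if not statistics: continue` guard added above prevents an `IndexError` when a meter's statistics query comes back empty. A minimal standalone sketch of the pattern, using hypothetical meter data and a stubbed fetch function rather than the real Ceilometer client:

```python
def summarize(meters, fetch_statistics):
    """Build one row per meter, skipping meters that returned no statistics."""
    rows = []
    for meter in meters:
        statistics = fetch_statistics(meter["name"])
        if not statistics:  # the new guard: skip meters with no data
            continue        # instead of crashing on statistics[0]
        rows.append({"project": meter["project_name"],
                     "latest": statistics[0]})
    return rows

# Hypothetical data: "memory" has no samples, so it is skipped.
meters = [{"name": "cpu_util", "project_name": "mysite"},
          {"name": "memory", "project_name": "mysite"}]
stats = {"cpu_util": [0.42]}
print(summarize(meters, lambda name: stats.get(name, [])))
# → [{'project': 'mysite', 'latest': 0.42}]
```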
@@ -928,13 +930,17 @@
if (not tenant_ceilometer_url):
raise XOSMissingField("Tenant ceilometer URL is missing")
meter_name = request.QUERY_PARAMS.get('meter', None)
- tenant_id = request.QUERY_PARAMS.get('tenant', None)
if not meter_name:
raise XOSMissingField("Meter name in query params is missing")
+ limit = request.QUERY_PARAMS.get('limit', 10)
+ tenant_id = request.QUERY_PARAMS.get('tenant', None)
+ resource_id = request.QUERY_PARAMS.get('resource', None)
query = []
if tenant_id:
- query.extend(make_query(tenant_id="default_admin_tenant"))
+ query.extend(make_query(tenant_id=tenant_id))
+ if resource_id:
+ query.extend(make_query(resource_id=resource_id))
query.append({"field": "meter", "op": "eq", "value": meter_name})
samples = sample_list(request, meter_name,
- ceilometer_url=tenant_ceilometer_url, query=query, limit=10)
+ ceilometer_url=tenant_ceilometer_url, query=query, limit=limit)
return Response(samples)
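The reworked endpoint above now builds its filter list only from the query parameters the caller actually supplied, and passes `limit` through instead of hard-coding 10. A self-contained sketch of that filter-building logic; this `make_query` is a simplified stand-in for the helper used in `ceilometerview.py`, and the field names are illustrative:

```python
def make_query(**filters):
    # Simplified stand-in: one equality filter dict per keyword argument.
    return [{"field": field, "op": "eq", "value": value}
            for field, value in filters.items()]

def build_sample_query(meter_name, tenant_id=None, resource_id=None):
    # Mirror the diff: add a filter only for each parameter that was supplied,
    # then always constrain on the meter name.
    query = []
    if tenant_id:
        query.extend(make_query(tenant_id=tenant_id))
    if resource_id:
        query.extend(make_query(resource_id=resource_id))
    query.append({"field": "meter", "op": "eq", "value": meter_name})
    return query

print(build_sample_query("cpu_util", tenant_id="abc123"))
# → [{'field': 'tenant_id', 'op': 'eq', 'value': 'abc123'},
#    {'field': 'meter', 'op': 'eq', 'value': 'cpu_util'}]
```

With no optional parameters, the query degrades gracefully to a single meter-name filter, which is why the endpoint no longer needs the old hard-coded `default_admin_tenant` fallback.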
diff --git a/xos/observers/vcpe/steps/sync_vcpetenant.yaml b/xos/observers/vcpe/steps/sync_vcpetenant.yaml
index c3b7246..f7cc02f 100644
--- a/xos/observers/vcpe/steps/sync_vcpetenant.yaml
+++ b/xos/observers/vcpe/steps/sync_vcpetenant.yaml
@@ -133,14 +133,12 @@
- name: vCPE basic dnsmasq config
copy: src=/opt/xos/observers/vcpe/files/vcpe.dnsmasq dest=/etc/vcpe-{{ s_tags[0] }}-{{ c_tags[0] }}/dnsmasq.d/vcpe.conf owner=root group=root
notify:
- - stop dnsmasq
- - start dnsmasq
+ - restart dnsmasq
- name: dnsmasq config
template: src=/opt/xos/observers/vcpe/templates/dnsmasq_servers.j2 dest=/etc/vcpe-{{ s_tags[0] }}-{{ c_tags[0] }}/dnsmasq.d/servers.conf owner=root group=root
notify:
- - stop dnsmasq
- - start dnsmasq
+ - restart dnsmasq
# These are samples, not necessary for correct function of demo
@@ -154,11 +152,9 @@
service: name=vcpe-{{ s_tags[0] }}-{{ c_tags[0] }} state=started
handlers:
- - name: stop dnsmasq
- shell: docker exec vcpe-{{ s_tags[0] }}-{{ c_tags[0] }} /usr/bin/killall dnsmasq
-
- - name: start dnsmasq
- shell: docker exec vcpe-{{ s_tags[0] }}-{{ c_tags[0] }} /usr/sbin/service dnsmasq start
+ # Dnsmasq is automatically restarted in the container
+ - name: restart dnsmasq
+ shell: docker exec vcpe-{{ s_tags[0] }}-{{ c_tags[0] }} killall dnsmasq
- name: restart vcpe
shell: service vcpe-{{ s_tags[0] }}-{{ c_tags[0] }} stop; sleep 1; service vcpe-{{ s_tags[0] }}-{{ c_tags[0] }} start
diff --git a/xos/observers/vcpe/steps/sync_vcpetenant_new.yaml b/xos/observers/vcpe/steps/sync_vcpetenant_new.yaml
index e96f3c5..bb64a71 100644
--- a/xos/observers/vcpe/steps/sync_vcpetenant_new.yaml
+++ b/xos/observers/vcpe/steps/sync_vcpetenant_new.yaml
@@ -56,30 +56,14 @@
- name: vCPE basic dnsmasq config
copy: src=/opt/xos/observers/vcpe/files/vcpe.dnsmasq dest=/var/container_volumes/{{ container_name }}/etc/dnsmasq.d/vcpe.conf owner=root group=root
notify:
- - stop dnsmasq
- - start dnsmasq
+ - restart dnsmasq
- name: dnsmasq config
template: src=/opt/xos/observers/vcpe/templates/dnsmasq_servers.j2 dest=/var/container_volumes/{{ container_name }}/etc/dnsmasq.d/servers.conf owner=root group=root
notify:
- - stop dnsmasq
- - start dnsmasq
+ - restart dnsmasq
handlers:
- - name: stop dnsmasq
+ # Dnsmasq is automatically restarted in the container
+ - name: restart dnsmasq
shell: docker exec {{ container_name }} /usr/bin/killall dnsmasq
-
- - name: start dnsmasq
- shell: docker exec {{ container_name }} /usr/sbin/service dnsmasq start
-
- - name: restart vcpe
- shell: service {{ container_name }} stop; sleep 1; service vcpe-{{ vlan_ids[0] }} start
-
- - name: stop vcpe
- service: name={{ container_name }} state=stopped
-
- - name: remove container
- docker: name={{ container_name }} state=absent image=docker-vcpe
-
- - name: start vcpe
- service: name={{ container_name }} state=started