Initial CDN deployment playbooks and Ansible modules
Change-Id: Ib2c5a8f3d22459bf3c540289f7b7cc1b3fdf4457
diff --git a/setup/README.md b/setup/README.md
new file mode 100644
index 0000000..d91ee5b
--- /dev/null
+++ b/setup/README.md
@@ -0,0 +1,103 @@
+## Set up a new CDN
+
+### CDN - headnode prep
+
+1. Copy AMC.qcow2 and CentOS-6-cdnnode-0.4.qcow2 to /opt/cord_profile/images/
+2. Create the node flavor: nova flavor-create --is-public true m1.cdnnode auto 8192 120 4
+3. Run the deploy-cdn playbook:
+        cd /opt/cord/orchestration/xos_services/hypercache/setup
+        ansible-playbook -i /opt/cord/build/platform-install/inventory/head-localhost --extra-vars @/opt/cord/build/genconfig/config.yml deploy-cdn-playbook.yml
+4. Ensure the private keys are in /opt/cord/orchestration/xos_services/hypercache/setup/private
+5. Install ACT:
+    * sudo easy_install -Z auraclienttools-0.4.2_parguera-py2.7.egg
+
+### CDN - cmi setup
+
+1. Wait for the images (AMC, CentOS-6-cdnnode-0.4) to be loaded into glance (check glance image-list for status)
+2. XOS UI: add the cmi and CentOS images to MyDeployment
+3. Instantiate the CMI instance in mysite_cdn_control
+    * flavor: m1.cdnnode
+    * image: AMC
+4. Edit group_vars/all
+    * update cmi_compute_node and cmi_mgmt_ip
+    * do not update cmi_private_key -- the public half is baked into the image
+5. Run the generate-inventory playbook:
+        ansible-playbook -i /opt/cord/build/platform-install/inventory/head-localhost --extra-vars @/opt/cord/build/genconfig/config.yml generate-inventory-playbook.yml
+6. Run setup-cmi.sh
+    * this SSHes into the CMI, runs setup, then modifies some settings
+    * it may take a long time, 10-20 minutes or more
+7. Log into the CMI (ssh-cmi.sh) and set up socat to attach the CMI to eth1:
+    * socat TCP-LISTEN:443,bind=172.27.0.9,reuseaddr,fork TCP4:10.6.1.196:443
+8. Set up port forwarding from the prod VM to the CMI:
+    * ssh -L 0.0.0.0:3456:172.27.0.9:443 ubuntu@offbeat-pin
+    * (substitute the IP address of your CMI instance for 172.27.0.9)
+
+### CDN - cdnnode setup
+
+1. Instantiate the cdnnode instance in mysite_cdn_nodes
+    * flavor: m1.cdnnode
+    * image: CentOS-6-cdnnode-0.4.img
+2. Log into the compute node and attach the disk
+    * on cloudlab w/ supersized compute node:
+    * virsh attach-disk <instance_name> /dev/vdb vdb --cache none
+    * (make sure this disk isn't used anywhere else!)
+3. Enroll the new node in the CDN
+    * ansible-playbook -i "localhost," example-node-playbook.yaml
+    * find the bootscript in /tmp
+4. Log into the cdnnode VM
+    * make sure the default gateway is good (check public connectivity)
+    * make sure the arp table is good
+    * make sure the CMI is reachable from the cdnnode
+    * run the takeover script that was created by the CMI
+    * (consider commenting out the final reboot -f and verifying the rest of the script succeeded before rebooting manually)
+    * the node will take a long time to install
+5. Log into the cdnnode
+    * to SSH into the cdnnode, go into the CMI, enter the coplc vserver, cd /etc/planetlab, and use debug_ssh_key.rsa with the root user
+    * check the default gateway
+    * fix the arp entry for the default gateway
+
+### CDN - request router setup
+
+1. Instantiate the request router instance in mysite_cdn_nodes
+    * flavor: m1.cdnnode
+    * image: CentOS-6-cdnnode-0.4.img
+2. Enroll the new RR in the CDN
+    * ansible-playbook -i "localhost," example-rr-playbook.yaml
+    * find the bootscript in /tmp
+3. Log into the request router VM
+    * run the bootscript
+    * (consider commenting out the final reboot -f and verifying the rest of the script succeeded before rebooting manually)
+    * the node will take a long time to install
+
+### CDN - setup content
+
+1. Run the following playbook:
+ * ansible-playbook -i "localhost," example-content-playbook.yaml
+
+### CDN - important notes
+
+We manually edited synchronizers/vcpe/templates/dnsmasq_safe_servers.j2 inside the vcpe synchronizer VM:
+
+ # temporary for ONS demo
+ address=/z.cdn.turner.com/207.141.192.134
+ address=/cnn-vh.akamaihd.net/207.141.192.134
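For reference, dnsmasq `address=/domain/ip` lines pin a domain and all of its subdomains to a fixed IP. A minimal sketch of that matching behavior (an illustration only, not dnsmasq's actual code):

```python
def parse_address_lines(text):
    """Parse dnsmasq-style lines like 'address=/z.cdn.turner.com/207.141.192.134'."""
    table = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("address="):
            continue  # ignore comments and other directives
        _, domain, ip = line.split("/", 2)
        table[domain.lower()] = ip
    return table

def resolve(table, name):
    """Return the pinned IP for name or any parent domain, else None."""
    labels = name.lower().split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in table:
            return table[candidate]
    return None
```

This mirrors why the two entries above are enough to capture every hostname under z.cdn.turner.com and cnn-vh.akamaihd.net.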
+
+### Test Commands
+
+* First, make sure the vSG is the only DNS server available in the test client.
+* Second, make sure the cdn_enable bit is set in the CordSubscriber object for your vSG.
+* curl -L -vvvv http://downloads.onosproject.org/vm/onos-tutorial-1.1.0r220-ovf.zip > /dev/null
+* curl -L -vvvv http://onlab.vicci.org/onos-videos/Nov-planning-day1/Day1+00+Bill+-+Community+Growth.mp4 > /dev/null
+* curl -L -vvvv http://downloads.onosproject.org/release/onos-1.2.0.zip > /dev/null
+
+## Restart CDN after power-down
+
+To do...
+
+
+## notes
+
+socat TCP-LISTEN:443,bind=172.27.0.9,reuseaddr,fork TCP4:10.6.1.196:443
+
+ssh -L 0.0.0.0:3456:172.27.0.9:443 ubuntu@offbeat-pin
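The socat and `ssh -L` commands in the notes both act as plain TCP relays in front of the CMI. A rough Python sketch of what such a relay does (hosts and ports here are placeholders, not the real CMI addresses):

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes from src to dst until EOF, then close dst's write side.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve_one(listen_host, target_addr):
    # Accept one client and relay traffic both ways, like
    # "socat TCP-LISTEN:...,fork TCP4:..." does per connection.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((listen_host, 0))  # port 0: pick any free port
    srv.listen(1)

    def run():
        client, _ = srv.accept()
        upstream = socket.create_connection(target_addr)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=run, daemon=True).start()
    return srv.getsockname()  # (host, port) clients should connect to
```

In the deployment itself the real socat/ssh commands should be used; this sketch just shows the traffic path being set up.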
diff --git a/setup/amc_id_rsa.pub b/setup/amc_id_rsa.pub
new file mode 100644
index 0000000..85bdeb6
--- /dev/null
+++ b/setup/amc_id_rsa.pub
@@ -0,0 +1 @@
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAn4MTRJZDelmNS59zA9zVlv97auuNmItyCix38xmxhziWuPvnzdUUKJMsyVMc4ggVsQiOb9EZId1PX9/PKZpQDuJtgS4Q7cFV4eff9+AS2hakl/J+FLlpZzaQEwDwMCgy45NBjZFMmEzCM2vw8OhL5b/MOQEcjZgI9F6AdrAR0K0CqYIfXffeiieTuqaM3wRvlWTXrdUb5yAyUuBPXwAlW8qSxeES3FFgUsv2xzAZMo/3puRTWsWWW2w2XpJEFznYtnN0IwepqYXnFIJffdJFWWmid4zDYiMrzIAqwlbF2qhIkUfvQ9fWC0Q7h8goreVwt4XuJO1rYS+JljtJgegv smbaker@fc16-64.lan
diff --git a/setup/cdnnode_id_rsa.pub b/setup/cdnnode_id_rsa.pub
new file mode 100644
index 0000000..8512002
--- /dev/null
+++ b/setup/cdnnode_id_rsa.pub
@@ -0,0 +1 @@
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDId6Eo1rvuCQW7c/OcIVmn/XWI0amQn5pj25IWblsYgjTzZQ4JxXzndNMIwhG057O+Ir1TSxxSp+mJzQqSXHTAD76z1YKObl8d2ckBrsuZsocRbrEo0gMhyoPa0jyjA/j251vvsNAc0qx8fHkMghfULDT7K76cRJfGhBw4Vp3rNJE/AdBEXC4nMPh6tAJI+QIN08UMXVu49JETVs0vyfChPcMBr53h+HRsWTyN+qjiE47nH3wrY7+XO3zjoFY3HmpCSVhvuMOFQIewINPfUcEkWpvu3nGUsdvtX46cXyugjzFF3QFRbRPZHwTQ0dYBvSFDBfDBZkyQ2AgZQPtAo9Q7 smbaker@fc16-64.lan
diff --git a/setup/deploy-cdn-playbook.yml b/setup/deploy-cdn-playbook.yml
new file mode 100644
index 0000000..1e7d732
--- /dev/null
+++ b/setup/deploy-cdn-playbook.yml
@@ -0,0 +1,28 @@
+---
+
+- name: Include vars
+ hosts: all
+ tasks:
+ - name: Include variables
+ include_vars: "{{ item }}"
+ with_items:
+ - "/opt/cord/build/platform-install/profile_manifests/{{ cord_profile }}.yml"
+ - /opt/cord/build/platform-install/profile_manifests/local_vars.yml
+
+- name: Create config and inventory for talking to cmi
+ hosts: all
+ roles:
+ - generate-inventory
+
+- name: Create CORD profile
+ hosts: config
+ roles:
+ - cdn-cord-profile
+
+- include: /opt/cord/build/platform-install/add-onboard-containers-playbook.yml
+
+- name: Check to see if XOS UI is ready, apply profile config
+ hosts: xos_ui
+ connection: docker
+ roles:
+ - cdn-xos-config
diff --git a/setup/example-content-playbook.yaml b/setup/example-content-playbook.yaml
new file mode 100644
index 0000000..dbd118c
--- /dev/null
+++ b/setup/example-content-playbook.yaml
@@ -0,0 +1,62 @@
+---
+- hosts: localhost
+ vars:
+ amc_hostname: "localhost:3456"
+ amc_username: "co@opencloud.us"
+ amc_password: "XOScdn123$"
+ amc_plc_name: "CoBlitz Test"
+ tasks:
+ - name: Create service provider
+ act_sp:
+ name: cord
+ account: cord
+ enabled: true
+ username: "{{ amc_username }}"
+ password: "{{ amc_password }}"
+ hostname: "{{ amc_hostname }}"
+ plc_name: "{{ amc_plc_name }}"
+ state: present
+
+ - name: Create content provider
+ act_cp:
+ name: cord
+ account: cord
+ enabled: true
+ service_provider: cord
+ username: "{{ amc_username }}"
+ password: "{{ amc_password }}"
+ hostname: "{{ amc_hostname }}"
+ plc_name: "{{ amc_plc_name }}"
+ state: present
+
+ - name: Create origin server
+ act_origin:
+ url: "{{ item }}"
+ content_provider: cord
+ username: "{{ amc_username }}"
+ password: "{{ amc_password }}"
+ hostname: "{{ amc_hostname }}"
+ plc_name: "{{ amc_plc_name }}"
+ state: present
+ with_items:
+ - http://www.cs.arizona.edu
+ - http://onlab.vicci.org
+
+ - name: Create CDN Prefix
+ act_cdnprefix:
+ cdn_prefix: "{{ item.cdn_prefix }}"
+ default_origin_server: "{{ item.default_origin_server }}"
+ content_provider: cord
+ enabled: True
+ username: "{{ amc_username }}"
+ password: "{{ amc_password }}"
+ hostname: "{{ amc_hostname }}"
+ plc_name: "{{ amc_plc_name }}"
+ state: present
+ with_items:
+ - cdn_prefix: test.vicci.org
+ default_origin_server: http://www.cs.arizona.edu
+ - cdn_prefix: onlab.vicci.org
+ default_origin_server: http://onlab.vicci.org
+
+
diff --git a/setup/example-node-playbook.yaml b/setup/example-node-playbook.yaml
new file mode 100644
index 0000000..d5e197a
--- /dev/null
+++ b/setup/example-node-playbook.yaml
@@ -0,0 +1,57 @@
+---
+- hosts: localhost
+ vars:
+ amc_hostname: "localhost:3456"
+ amc_username: "co@opencloud.us"
+ amc_password: "XOScdn123$"
+ amc_plc_name: "CoBlitz Test"
+ amc_remote_hostname: "10.6.1.197"
+ cachenode_hostname: "hpc1.lab.local"
+ tasks:
+ - name: Create site
+ act_site:
+ name: examplesite
+ username: "{{ amc_username }}"
+ password: "{{ amc_password }}"
+ hostname: "{{ amc_hostname }}"
+ plc_name: "{{ amc_plc_name }}"
+ state: present
+
+ - name: Get license
+ set_fact: license="{{ lookup('file', 'license.txt') }}"
+
+ - name: Create node
+ act_cachenode:
+ name: "{{ cachenode_hostname }}"
+ site: examplesite
+ dns:
+ - "8.8.8.8"
+ - "8.8.4.4"
+ interfaces:
+ - mac_addr: "DE:AD:BE:EF:01:01"
+ management: True
+ if_name: eth0
+ IpAddresses:
+ - netmask: "16"
+ address: "192.168.1.2"
+ logical:
+ - Client-Serving
+ Routes:
+ - subnet: 0.0.0.0/0
+ metric: 0
+ nexthop: 192.168.1.1
+ license: "{{ license }}"
+ username: "{{ amc_username }}"
+ password: "{{ amc_password }}"
+ hostname: "{{ amc_hostname }}"
+ plc_name: "{{ amc_plc_name }}"
+ remote_hostname: "{{ amc_remote_hostname }}"
+ state: present
+ force: true
+ register: cachenode
+
+ - name: Save bootscript
+ copy:
+ content: "{{ cachenode.setupscript }}"
+ dest: "/tmp/{{ cachenode_hostname }}"
+ when: cachenode.changed
diff --git a/setup/example-rr-playbook.yaml b/setup/example-rr-playbook.yaml
new file mode 100644
index 0000000..576c480
--- /dev/null
+++ b/setup/example-rr-playbook.yaml
@@ -0,0 +1,53 @@
+---
+- hosts: localhost
+ vars:
+ amc_hostname: "localhost:3456"
+ amc_username: "co@opencloud.us"
+ amc_password: "XOScdn123$"
+ amc_plc_name: "CoBlitz Test"
+ amc_remote_hostname: "10.6.1.197"
+ rrnode_hostname: "rr1.lab.local"
+ tasks:
+ - name: Create site
+ act_site:
+ name: examplesite
+ username: "{{ amc_username }}"
+ password: "{{ amc_password }}"
+ hostname: "{{ amc_hostname }}"
+ plc_name: "{{ amc_plc_name }}"
+ state: present
+
+ - name: Create rr node
+ act_rr:
+ name: "{{ rrnode_hostname }}"
+ site: examplesite
+ dns:
+ - "8.8.8.8"
+ - "8.8.4.4"
+ interfaces:
+ - mac_addr: "DE:AD:BE:EF:01:01"
+ management: True
+ if_name: eth0
+ IpAddresses:
+ - netmask: "16"
+ address: "192.168.1.200"
+ logical:
+ - Client-Serving
+ Routes:
+ - subnet: 0.0.0.0/0
+ metric: 0
+ nexthop: 192.168.1.1
+ username: "{{ amc_username }}"
+ password: "{{ amc_password }}"
+ hostname: "{{ amc_hostname }}"
+ plc_name: "{{ amc_plc_name }}"
+ remote_hostname: "{{ amc_remote_hostname }}"
+ state: present
+ force: true
+ register: rrnode
+
+ - name: Save bootscript
+ copy:
+ content: "{{ rrnode.setupscript }}"
+ dest: "/tmp/{{ rrnode_hostname }}"
+ when: rrnode.changed
diff --git a/setup/generate-inventory-playbook.yml b/setup/generate-inventory-playbook.yml
new file mode 100644
index 0000000..c0aa7ca
--- /dev/null
+++ b/setup/generate-inventory-playbook.yml
@@ -0,0 +1,15 @@
+---
+
+- name: Include vars
+ hosts: all
+ tasks:
+ - name: Include variables
+ include_vars: "{{ item }}"
+ with_items:
+ - "/opt/cord/build/platform-install/profile_manifests/{{ cord_profile }}.yml"
+ - /opt/cord/build/platform-install/profile_manifests/local_vars.yml
+
+- name: Create config and inventory for talking to cmi
+ hosts: all
+ roles:
+ - generate-inventory
diff --git a/setup/group_vars/all b/setup/group_vars/all
new file mode 100644
index 0000000..8a8e346
--- /dev/null
+++ b/setup/group_vars/all
@@ -0,0 +1,19 @@
+---
+cmi_compute_node: 10.1.0.16
+cmi_compute_node_key: /opt/cord_profile/node_key
+cmi_mgmt_ip: 172.27.0.7
+cmi_root_user: root
+cmi_private_key: private/amc_id_rsa
+
+eth_device: eth0
+eth_mac: 02:42:CF:8D:C0:82
+cmi_password: XOScdn123$
+cmi_domain: xos-cloudlab-cmi-vtn
+cmi_hostname: xos-cloudlab-cmi-vtn.opencloud.us
+cmi_dns: 8.8.8.8
+cdn_site: CoBlitz Test
+cdn_short_name: cobtest
+cdn_name: CoBlitz
+gateway_ip: 207.141.192.129
+gateway_mac: a4:23:05:45:56:79
+node_hostname: xos-cloudlab-node1-vtn.opencloud.us
\ No newline at end of file
diff --git a/setup/library/act_cachenode.py b/setup/library/act_cachenode.py
new file mode 100644
index 0000000..03a5690
--- /dev/null
+++ b/setup/library/act_cachenode.py
@@ -0,0 +1,69 @@
+#!/usr/bin/python
+
+import json
+import os
+import requests
+import sys
+import traceback
+
+from ansible.module_utils.basic import AnsibleModule
+from auraclienttools import LCDNAPI, LCDNFault
+
+def main():
+ module = AnsibleModule(
+ argument_spec = dict(
+ name = dict(required=True, type='str'),
+ site = dict(required=True, type='str'),
+ dns = dict(required=True, type='list'),
+ interfaces= dict(required=True, type='list'),
+ license = dict(required=True, type='str'),
+
+ state = dict(required=True, type='str', choices=["present", "absent"]),
+ force = dict(default=False, type="bool"),
+ username = dict(required=True, type='str'),
+ password = dict(required=True, type='str'),
+ hostname = dict(required=True, type='str'),
+ plc_name = dict(required=True, type='str'),
+
+ remote_hostname = dict(default=None, type="str"),
+ )
+ )
+
+ credentials = {"username": module.params["username"],
+ "password": module.params["password"],
+ "hostname": module.params["hostname"],
+ "plc_name": module.params["plc_name"]}
+
+ state = module.params["state"]
+ node_hostname = module.params["name"]
+ force = module.params["force"]
+
+ api = LCDNAPI(credentials)
+
+ nodes = api.ListAll("Node", {"hostname": node_hostname})
+
+ if (nodes or force) and (state=="absent"):
+ api.deleteCache(node_hostname)
+ module.exit_json(changed=True, msg="cachenode deleted")
+ elif ((not nodes) or force) and (state=="present"):
+ if nodes:
+ # must have been called with force=True, so delete the node so we can re-create it
+ api.deleteCache(node_hostname)
+
+ hpc = {"hostname": node_hostname,
+ "site": module.params["site"],
+ "dns": module.params["dns"],
+ "Interfaces": module.params["interfaces"],
+ "license": module.params["license"]}
+ ret = api.createCache(**hpc)
+ setupscript=ret["setupscript"]
+
+ if module.params["remote_hostname"]:
+ setupscript = setupscript.replace(module.params["hostname"], module.params["remote_hostname"])
+
+ module.exit_json(changed=True, msg="cachenode created", setupscript=setupscript)
+ else:
+ module.exit_json(changed=False)
+
+if __name__ == '__main__':
+ main()
diff --git a/setup/library/act_cdnprefix.py b/setup/library/act_cdnprefix.py
new file mode 100644
index 0000000..8c050d0
--- /dev/null
+++ b/setup/library/act_cdnprefix.py
@@ -0,0 +1,68 @@
+#!/usr/bin/python
+
+import json
+import os
+import requests
+import sys
+import traceback
+
+from ansible.module_utils.basic import AnsibleModule
+from auraclienttools import LCDNAPI, LCDNFault
+
+def main():
+ module = AnsibleModule(
+ argument_spec = dict(
+ cdn_prefix= dict(required=True, type='str'),
+ enabled = dict(required=True, type="bool"),
+ service = dict(default="HyperCache", type="str"),
+ content_provider=dict(required=True, type="str"),
+ default_origin_server = dict(required=True, type="str"),
+
+ state = dict(required=True, type='str', choices=["present", "absent"]),
+ force = dict(default=False, type="bool"),
+ username = dict(required=True, type='str'),
+ password = dict(required=True, type='str'),
+ hostname = dict(required=True, type='str'),
+ plc_name = dict(required=True, type='str'),
+ )
+ )
+
+ credentials = {"username": module.params["username"],
+ "password": module.params["password"],
+ "hostname": module.params["hostname"],
+ "plc_name": module.params["plc_name"]}
+
+ state = module.params["state"]
+ cdn_prefix = module.params["cdn_prefix"]
+ force = module.params["force"]
+
+ api = LCDNAPI(credentials, experimental=True)
+
+ content_providers = api.onevapi.ListAll("ContentProvider", {"name": module.params["content_provider"]})
+ if not content_providers:
+ raise Exception("Unable to find %s" % module.params["content_provider"])
+ content_provider = content_providers[0]
+
+ prefixes = api.onevapi.ListAll("CDNPrefix", {"cdn_prefix": cdn_prefix})
+
+    if (prefixes or force) and (state=="absent"):
+        api.onevapi.Delete("CDNPrefix", prefixes[0]["cdn_prefix_id"])
+        module.exit_json(changed=True, msg="cdn prefix deleted")
+    elif ((not prefixes) or force) and (state=="present"):
+        if prefixes:
+            # must have been called with force=True, so delete the prefix so we can re-create it
+            api.onevapi.Delete("CDNPrefix", prefixes[0]["cdn_prefix_id"])
+
+ cdn_prefix = {"cdn_prefix": cdn_prefix,
+ "enabled": module.params["enabled"],
+ "service": module.params["service"],
+ "content_provider_id": content_provider["content_provider_id"],
+ "default_origin_server": module.params["default_origin_server"]}
+ ret = api.onevapi.Create("CDNPrefix", cdn_prefix)
+
+        module.exit_json(changed=True, msg="cdn prefix created")
+ else:
+ module.exit_json(changed=False)
+
+if __name__ == '__main__':
+ main()
diff --git a/setup/library/act_cp.py b/setup/library/act_cp.py
new file mode 100644
index 0000000..465c760
--- /dev/null
+++ b/setup/library/act_cp.py
@@ -0,0 +1,66 @@
+#!/usr/bin/python
+
+import json
+import os
+import requests
+import sys
+import traceback
+
+from ansible.module_utils.basic import AnsibleModule
+from auraclienttools import LCDNAPI, LCDNFault
+
+def main():
+ module = AnsibleModule(
+ argument_spec = dict(
+ name = dict(required=True, type='str'),
+ account = dict(required=True, type='str'),
+ enabled = dict(required=True, type="bool"),
+ service_provider = dict(required=True, type="str"),
+
+ state = dict(required=True, type='str', choices=["present", "absent"]),
+ force = dict(default=False, type="bool"),
+ username = dict(required=True, type='str'),
+ password = dict(required=True, type='str'),
+ hostname = dict(required=True, type='str'),
+ plc_name = dict(required=True, type='str'),
+ )
+ )
+
+ credentials = {"username": module.params["username"],
+ "password": module.params["password"],
+ "hostname": module.params["hostname"],
+ "plc_name": module.params["plc_name"]}
+
+ state = module.params["state"]
+ cp_name = module.params["name"]
+ force = module.params["force"]
+
+ api = LCDNAPI(credentials, experimental=True)
+
+ service_providers = api.onevapi.ListAll("ServiceProvider", {"name": module.params["service_provider"]})
+ if not service_providers:
+ raise Exception("Unable to find %s" % module.params["service_provider"])
+ service_provider = service_providers[0]
+
+ cps = api.onevapi.ListAll("ContentProvider", {"name": cp_name})
+
+    if (cps or force) and (state=="absent"):
+        api.onevapi.Delete("ContentProvider", cps[0]["content_provider_id"])
+        module.exit_json(changed=True, msg="cp deleted")
+    elif ((not cps) or force) and (state=="present"):
+        if cps:
+            # must have been called with force=True, so delete the content provider so we can re-create it
+            api.onevapi.Delete("ContentProvider", cps[0]["content_provider_id"])
+
+ sp = {"account": module.params["account"],
+ "name": cp_name,
+ "enabled": module.params["enabled"],
+ "service_provider_id": service_provider["service_provider_id"]}
+ ret = api.onevapi.Create("ContentProvider", sp)
+
+ module.exit_json(changed=True, msg="cp created")
+ else:
+ module.exit_json(changed=False)
+
+if __name__ == '__main__':
+ main()
diff --git a/setup/library/act_origin.py b/setup/library/act_origin.py
new file mode 100644
index 0000000..4fdf224
--- /dev/null
+++ b/setup/library/act_origin.py
@@ -0,0 +1,64 @@
+#!/usr/bin/python
+
+import json
+import os
+import requests
+import sys
+import traceback
+
+from ansible.module_utils.basic import AnsibleModule
+from auraclienttools import LCDNAPI, LCDNFault
+
+def main():
+ module = AnsibleModule(
+ argument_spec = dict(
+ url = dict(required=True, type='str'),
+ service_type = dict(default="HyperCache", type="str"),
+ content_provider = dict(required=True, type="str"),
+
+ state = dict(required=True, type='str', choices=["present", "absent"]),
+ force = dict(default=False, type="bool"),
+ username = dict(required=True, type='str'),
+ password = dict(required=True, type='str'),
+ hostname = dict(required=True, type='str'),
+ plc_name = dict(required=True, type='str'),
+ )
+ )
+
+ credentials = {"username": module.params["username"],
+ "password": module.params["password"],
+ "hostname": module.params["hostname"],
+ "plc_name": module.params["plc_name"]}
+
+ state = module.params["state"]
+ origin_url = module.params["url"]
+ force = module.params["force"]
+
+ api = LCDNAPI(credentials, experimental=True)
+
+ content_providers = api.onevapi.ListAll("ContentProvider", {"name": module.params["content_provider"]})
+ if not content_providers:
+ raise Exception("Unable to find %s" % module.params["content_provider"])
+ content_provider = content_providers[0]
+
+ origins = api.onevapi.ListAll("OriginServer", {"url": origin_url})
+
+    if (origins or force) and (state=="absent"):
+        api.onevapi.Delete("OriginServer", origins[0]["origin_server_id"])
+        module.exit_json(changed=True, msg="origin server deleted")
+    elif ((not origins) or force) and (state=="present"):
+        if origins:
+            # must have been called with force=True, so delete the origin server so we can re-create it
+            api.onevapi.Delete("OriginServer", origins[0]["origin_server_id"])
+
+ origin = {"url": origin_url,
+ "service_type": module.params["service_type"],
+ "content_provider_id": content_provider["content_provider_id"]}
+ ret = api.onevapi.Create("OriginServer", origin)
+
+ module.exit_json(changed=True, msg="origin server created")
+ else:
+ module.exit_json(changed=False)
+
+if __name__ == '__main__':
+ main()
diff --git a/setup/library/act_rr.py b/setup/library/act_rr.py
new file mode 100644
index 0000000..75b37e2
--- /dev/null
+++ b/setup/library/act_rr.py
@@ -0,0 +1,67 @@
+#!/usr/bin/python
+
+import json
+import os
+import requests
+import sys
+import traceback
+
+from ansible.module_utils.basic import AnsibleModule
+from auraclienttools import LCDNAPI, LCDNFault
+
+def main():
+ module = AnsibleModule(
+ argument_spec = dict(
+ name = dict(required=True, type='str'),
+ site = dict(required=True, type='str'),
+ dns = dict(required=True, type='list'),
+ interfaces= dict(required=True, type='list'),
+
+ state = dict(required=True, type='str', choices=["present", "absent"]),
+ force = dict(default=False, type="bool"),
+ username = dict(required=True, type='str'),
+ password = dict(required=True, type='str'),
+ hostname = dict(required=True, type='str'),
+ plc_name = dict(required=True, type='str'),
+
+ remote_hostname = dict(default=None, type="str"),
+ )
+ )
+
+ credentials = {"username": module.params["username"],
+ "password": module.params["password"],
+ "hostname": module.params["hostname"],
+ "plc_name": module.params["plc_name"]}
+
+ state = module.params["state"]
+ node_hostname = module.params["name"]
+ force = module.params["force"]
+
+ api = LCDNAPI(credentials, experimental=True)
+
+ nodes = api.ListAll("Node", {"hostname": node_hostname})
+
+ if (nodes or force) and (state=="absent"):
+ api.deleteRR(node_hostname)
+        module.exit_json(changed=True, msg="rr deleted")
+ elif ((not nodes) or force) and (state=="present"):
+ if nodes:
+ # must have been called with force=True, so delete the node so we can re-create it
+ api.deleteRR(node_hostname)
+
+ rr = {"hostname": node_hostname,
+ "site": module.params["site"],
+ "dns": module.params["dns"],
+ "Interfaces": module.params["interfaces"]}
+ ret = api.createRR(**rr)
+ setupscript=ret["setupscript"]
+
+ if module.params["remote_hostname"]:
+ setupscript = setupscript.replace(module.params["hostname"], module.params["remote_hostname"])
+
+ module.exit_json(changed=True, msg="rr created", setupscript=setupscript)
+ else:
+ module.exit_json(changed=False)
+
+if __name__ == '__main__':
+ main()
diff --git a/setup/library/act_site.py b/setup/library/act_site.py
new file mode 100644
index 0000000..0894649
--- /dev/null
+++ b/setup/library/act_site.py
@@ -0,0 +1,47 @@
+#!/usr/bin/python
+
+import json
+import os
+import requests
+import sys
+import traceback
+
+from ansible.module_utils.basic import AnsibleModule
+from auraclienttools import LCDNAPI, LCDNFault
+
+def main():
+ module = AnsibleModule(
+ argument_spec = dict(
+ name = dict(required=True, type='str'),
+ state = dict(required=True, type='str', choices=["present", "absent"]),
+ username = dict(required=True, type='str'),
+ password = dict(required=True, type='str'),
+ hostname = dict(required=True, type='str'),
+ plc_name = dict(required=True, type='str'),
+ )
+ )
+
+ credentials = {"username": module.params["username"],
+ "password": module.params["password"],
+ "hostname": module.params["hostname"],
+ "plc_name": module.params["plc_name"]}
+
+ state = module.params["state"]
+ siteName = module.params["name"]
+
+ api = LCDNAPI(credentials)
+
+ sites = api.ListAll("Site", {"name": siteName})
+
+ if sites and (state=="absent"):
+ api.deleteSite(siteName)
+ module.exit_json(changed=True, msg="site deleted")
+ elif (not sites) and (state=="present"):
+ api.createSite(siteName)
+ module.exit_json(changed=True, msg="site created")
+ else:
+ module.exit_json(changed=False)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/setup/library/act_sp.py b/setup/library/act_sp.py
new file mode 100644
index 0000000..e683218
--- /dev/null
+++ b/setup/library/act_sp.py
@@ -0,0 +1,59 @@
+#!/usr/bin/python
+
+import json
+import os
+import requests
+import sys
+import traceback
+
+from ansible.module_utils.basic import AnsibleModule
+from auraclienttools import LCDNAPI, LCDNFault
+
+def main():
+ module = AnsibleModule(
+ argument_spec = dict(
+ name = dict(required=True, type='str'),
+ account = dict(required=True, type='str'),
+ enabled = dict(required=True, type="bool"),
+
+ state = dict(required=True, type='str', choices=["present", "absent"]),
+ force = dict(default=False, type="bool"),
+ username = dict(required=True, type='str'),
+ password = dict(required=True, type='str'),
+ hostname = dict(required=True, type='str'),
+ plc_name = dict(required=True, type='str'),
+ )
+ )
+
+ credentials = {"username": module.params["username"],
+ "password": module.params["password"],
+ "hostname": module.params["hostname"],
+ "plc_name": module.params["plc_name"]}
+
+ state = module.params["state"]
+ sp_name = module.params["name"]
+ force = module.params["force"]
+
+ api = LCDNAPI(credentials, experimental=True)
+
+ sps = api.onevapi.ListAll("ServiceProvider", {"name": sp_name})
+
+    if (sps or force) and (state=="absent"):
+        api.onevapi.Delete("ServiceProvider", sps[0]["service_provider_id"])
+        module.exit_json(changed=True, msg="sp deleted")
+    elif ((not sps) or force) and (state=="present"):
+        if sps:
+            # must have been called with force=True, so delete the service provider so we can re-create it
+            api.onevapi.Delete("ServiceProvider", sps[0]["service_provider_id"])
+
+ sp = {"account": module.params["account"],
+ "name": sp_name,
+ "enabled": module.params["enabled"]}
+ ret = api.onevapi.Create("ServiceProvider", sp)
+
+ module.exit_json(changed=True, msg="sp created")
+ else:
+ module.exit_json(changed=False)
+
+if __name__ == '__main__':
+ main()
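The act_* modules in this change all follow one convention: `state=present`/`state=absent` plus a `force` flag that deletes and re-creates an existing object. A minimal sketch of that decision logic, using a hypothetical `FakeAPI` that stands in for `auraclienttools.LCDNAPI` (`ensure` is an illustrative condensation, not a function from the modules):

```python
def ensure(api, name, state, force=False):
    """Return (changed, action) following the modules' present/absent/force logic."""
    exists = bool(api.list(name))
    if (exists or force) and state == "absent":
        api.delete(name)
        return True, "deleted"
    if (not exists or force) and state == "present":
        if exists:
            api.delete(name)  # force=True: drop the old object and re-create it
        api.create(name)
        return True, "created"
    return False, "unchanged"  # already in the desired state

class FakeAPI:
    """Tiny in-memory stand-in for the real CDN API, for illustration only."""
    def __init__(self):
        self.objects = set()
    def list(self, name):
        return [name] if name in self.objects else []
    def create(self, name):
        self.objects.add(name)
    def delete(self, name):
        self.objects.discard(name)
```

The `changed`/`unchanged` split is what makes repeated playbook runs idempotent: Ansible reports a change only when the module actually touched the CDN.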
diff --git a/setup/old/cmi-logicalinterfaces.yaml b/setup/old/cmi-logicalinterfaces.yaml
new file mode 100644
index 0000000..d45b63a
--- /dev/null
+++ b/setup/old/cmi-logicalinterfaces.yaml
@@ -0,0 +1,11 @@
+---
+- hosts: cmi
+ connection: ssh
+ user: root
+ tasks:
+ - name: copy over cmi logical interface template
+ template: src=templates/setup_cmi_logicalinterfaces.sh dest=/vservers/coplc/root/setup_cmi_logicalinterfaces.sh
+
+ - name: run logical interface script
+ command: vserver coplc exec onevsh /root/setup_cmi_logicalinterfaces.sh
+
diff --git a/setup/old/cmi-settings.sh b/setup/old/cmi-settings.sh
new file mode 100644
index 0000000..db6c5f3
--- /dev/null
+++ b/setup/old/cmi-settings.sh
@@ -0,0 +1,12 @@
+# This holds the connection information necessary to talk to your CMI.
+# It will be used by setup-cmi.sh and ssh-cmi.sh
+
+#COMPUTE_NODE=cp-2.smbaker-xos-vtn.xos-pg0.clemson.cloudlab.us
+#MGMT_IP=172.27.0.22
+#NODE_KEY=/root/setup/id_rsa
+#VM_KEY=cmi_id_rsa
+
+COMPUTE_NODE=10.90.0.65
+MGMT_IP=172.27.0.17
+NODE_KEY=cord_pod_node_key
+VM_KEY=cmi_id_rsa
diff --git a/setup/old/cmi.yaml b/setup/old/cmi.yaml
new file mode 100644
index 0000000..62abe01
--- /dev/null
+++ b/setup/old/cmi.yaml
@@ -0,0 +1,69 @@
+---
+- hosts: cmi
+ connection: ssh
+ user: root
+ vars:
+ eth_device: eth0
+ eth_mac: 02:42:CF:8D:C0:82
+ cmi_password: XOScdn123$
+ cmi_hostname: xos-cloudlab-cmi-vtn.opencloud.us
+ cmi_dns: 8.8.8.8
+ cdn_site: CoBlitz Test
+ cdn_short_name: cobtest
+ cdn_name: CoBlitz
+# gateway_ip: 10.124.0.1
+# gateway_mac: 00:8c:fa:5b:09:d8
+ gateway_ip: 207.141.192.129
+ gateway_mac: a4:23:05:45:56:79
+ node_hostname: xos-cloudlab-node1-vtn.opencloud.us
+ tasks:
+ - name: fix the networking
+ shell: "{{ item }}"
+ with_items:
+ - ifconfig {{ eth_device }} hw ether {{ eth_mac }}
+ - ip route del default || true
+ - ip route add default via {{ gateway_ip }}
+ - arp -s {{ gateway_ip }} {{ gateway_mac }}
+
+ - name: copy over setup answers
+ template: src=templates/setup_answers.txt dest=/root/setup_answers.txt
+
+ - name: run the setup script
+ shell: /a/sbin/setup.sh < /root/setup_answers.txt
+ args:
+ creates: /a/var/log/setup.log
+
+ - name: fix onevapi CDNPrefix bug
+ shell: sed -i 's/hostname/str/g' /vservers/coplc/usr/share/cob_api/COB/PublicObjects/CDNPrefix.py
+
+ - name: fix onevapi OriginServer bug
+ shell: sed -i 's/attrToCheck = "edge_hosttype"/attrToCheck = "edge_hosttype_broken"/g' /vservers/coplc/usr/share/cob_api/COB/PublicObjects/OriginServer.py
+
+ - name: copy over cmi setup template
+ template: src=templates/setup_cmi_onevsh.sh dest=/vservers/coplc/root/setup_cmi_onevsh.sh
+
+ - name: run cmi setup script
+ command: vserver coplc exec onevsh /root/setup_cmi_onevsh.sh
+
+ - name: copy over cmi node setup template
+ template: src=templates/setup_cmi_node.sh dest=/vservers/coplc/root/setup_cmi_node.sh
+
+ - name: run node setup script
+ command: vserver coplc exec plcsh /root/setup_cmi_node.sh
+ args:
+ creates: /vservers/coplc/root/takeover-{{ node_hostname }}
+
+ - name: retrieve node takeover script
+ fetch: src=/vservers/coplc/root/takeover-{{ node_hostname }} dest=takeovers/takeover-{{ node_hostname }}
+
+ - name: update all keys script
+ copy: src=private/allkeys.template dest=/vservers/coplc/etc/onevantage/services/HPC/templates/usr/local/CoBlitz/var/allkeys.template
+
+ - name: install keygen
+ copy: src=private/keygen dest=/vservers/coplc/etc/onevantage/services/HPC/templates/usr/local/CoBlitz/var/keygen mode=0755
+
+ - name: download socat
+ get_url: url=http://pkgs.repoforge.org/socat/socat-1.7.2.1-1.el6.rf.x86_64.rpm dest=/root/socat-1.7.2.1-1.el6.rf.x86_64.rpm
+
+ - name: install socat
+ yum: name=/root/socat-1.7.2.1-1.el6.rf.x86_64.rpm state=present
diff --git a/setup/old/setup-cmi-logicalinterfaces.sh b/setup/old/setup-cmi-logicalinterfaces.sh
new file mode 100644
index 0000000..b1acd65
--- /dev/null
+++ b/setup/old/setup-cmi-logicalinterfaces.sh
@@ -0,0 +1,18 @@
+#! /bin/bash
+
+source cmi-settings.sh
+
+echo "[ssh_connection]" > cmi.conf
+echo "ssh_args = -o \"ProxyCommand ssh -q -i $NODE_KEY -o StrictHostKeyChecking=no root@$COMPUTE_NODE nc $MGMT_IP 22\"" >> cmi.conf
+echo "scp_if_ssh = True" >> cmi.conf
+echo "pipelining = True" >> cmi.conf
+echo >> cmi.conf
+echo "[defaults]" >> cmi.conf
+echo "host_key_checking = False" >> cmi.conf
+
+echo "cmi ansible_ssh_private_key_file=$VM_KEY" > cmi.hosts
+
+export ANSIBLE_CONFIG=cmi.conf
+export ANSIBLE_HOSTS=cmi.hosts
+
+ansible-playbook -v --step cmi-logicalinterfaces.yaml
diff --git a/setup/old/templates/setup_cmi_logicalinterfaces.sh b/setup/old/templates/setup_cmi_logicalinterfaces.sh
new file mode 100644
index 0000000..2ac8422
--- /dev/null
+++ b/setup/old/templates/setup_cmi_logicalinterfaces.sh
@@ -0,0 +1,14 @@
+lab="External"
+for service in ["HyperCache", "RequestRouter"]:
+ for node in ListAll("Node"):
+ node_id = node["node_id"]
+ for interface_id in node["interface_ids"]:
+ iface=Read("Interface", interface_id)
+ if iface["is_primary"] and len(iface["ip_address_ids"])==1:
+ ip_id = iface["ip_address_ids"][0]
+ if ListAll("LogicalInterface", {"node_id": node_id, "ip_address_ids": [ip_id], "label": lab, "service": service}):
+ print "External label exists for node", node_id, "ip", ip_id, "service", service
+ else:
+ print "Adding external label for node", node_id, "ip", ip_id, "service", service
+ li = Create("LogicalInterface", {"node_id": node_id, "label": lab, "service": service})
+ Bind("LogicalInterface", li, "IpAddress", ip_id)
diff --git a/setup/old/templates/setup_cmi_node.sh b/setup/old/templates/setup_cmi_node.sh
new file mode 100644
index 0000000..93435a3
--- /dev/null
+++ b/setup/old/templates/setup_cmi_node.sh
@@ -0,0 +1,20 @@
+site_id=GetSites()[0]["site_id"]
+nodeinfo = {'hostname': "{{ node_hostname }}", 'dns': "8.8.8.8"}
+n_id = AddNode(site_id, nodeinfo)
+mac = "DE:AD:BE:EF:00:01"
+interfacetemplate = {'mac': mac, 'kind': 'physical', 'method': 'static', 'is_primary': True, 'if_name': 'eth0'}
+i_id = AddInterface(n_id, interfacetemplate)
+ip_addr = "169.254.169.1" # TODO: get this from Neutron in the future
+netmask = "255.255.255.254" # TODO: get this from Neutron in the future
+ipinfo = {'ip_addr': ip_addr, 'netmask': netmask, 'type': 'ipv4'}
+ip_id = AddIpAddress(i_id, ipinfo)
+routeinfo = {'interface_id': i_id, 'next_hop': "127.0.0.127", 'subnet': '0.0.0.0', 'metric': 1}
+r_id = AddRoute(n_id, routeinfo)
+hpc_slice_id = GetSlices({"name": "co_coblitz"})[0]["slice_id"]
+AddSliceToNodes(hpc_slice_id, [n_id])
+dnsdemux_slice_id = GetSlices({"name": "co_dnsdemux"})[0]["slice_id"]
+dnsredir_slice_id = GetSlices({"name": "co_dnsredir_coblitz"})[0]["slice_id"]
+AddSliceToNodes(dnsdemux_slice_id, [n_id])
+AddSliceToNodes(dnsredir_slice_id, [n_id])
+takeoverscript=GetBootMedium(n_id, "node-cloudinit", '')
+file("/root/takeover-{{ node_hostname }}","w").write(takeoverscript)
diff --git a/setup/old/templates/setup_cmi_onevsh.sh b/setup/old/templates/setup_cmi_onevsh.sh
new file mode 100644
index 0000000..c517780
--- /dev/null
+++ b/setup/old/templates/setup_cmi_onevsh.sh
@@ -0,0 +1,19 @@
+def CreateOrFind(kind, args):
+ objs=ListAll(kind, args.copy())
+ if objs:
+ id_name = {"ServiceProvider": "service_provider_id",
+ "ContentProvider": "content_provider_id",
+ "OriginServer": "origin_server_id",
+ "CDNPrefix": "cdn_prefix_id"}
+ print kind, "exists with args", args
+ return objs[0].get(id_name[kind])
+ else:
+ print "create", kind, "with args", args
+ return Create(kind, args)
+sp=CreateOrFind("ServiceProvider", {"account": "cord", "name": "cord", "enabled": True})
+cp=CreateOrFind("ContentProvider", {"account": "test", "name": "test", "enabled": True, "service_provider_id": sp})
+ors=CreateOrFind("OriginServer", {"url": "http://www.cs.arizona.edu", "content_provider_id": cp, "service_type": "HyperCache"})
+pre=CreateOrFind("CDNPrefix", {"service": "HyperCache", "enabled": True, "content_provider_id": cp, "cdn_prefix": "test.vicci.org", "default_origin_server": "http://www.cs.arizona.edu"})
+cp=CreateOrFind("ContentProvider", {"account": "onlab", "name": "onlab", "enabled": True, "service_provider_id": sp})
+ors=CreateOrFind("OriginServer", {"url": "http://onlab.vicci.org", "content_provider_id": cp, "service_type": "HyperCache"})
+pre=CreateOrFind("CDNPrefix", {"service": "HyperCache", "enabled": True, "content_provider_id": cp, "cdn_prefix": "onlab.vicci.org", "default_origin_server": "http://onlab.vicci.org"})
diff --git a/setup/private/README b/setup/private/README
new file mode 100644
index 0000000..e5cfbc1
--- /dev/null
+++ b/setup/private/README
@@ -0,0 +1 @@
+Files in this directory are private and will not be uploaded to GitHub.
diff --git a/setup/roles/cdn-cord-profile/files/cdn_service.yml b/setup/roles/cdn-cord-profile/files/cdn_service.yml
new file mode 100644
index 0000000..f83e4d0
--- /dev/null
+++ b/setup/roles/cdn-cord-profile/files/cdn_service.yml
@@ -0,0 +1,111 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Setup the CDN on the pod
+
+imports:
+ - custom_types/xos.yaml
+
+topology_template:
+ node_templates:
+
+ Private:
+ type: tosca.nodes.NetworkTemplate
+
+ management:
+ type: tosca.nodes.network.Network.XOS
+ properties:
+ no-create: true
+ no-delete: true
+ no-update: true
+
+# cdn-public:
+# type: tosca.nodes.network.Network
+# properties:
+# ip_version: 4
+# cidr: 207.141.192.128/28
+# requirements:
+# - network_template:
+# node: Private
+# relationship: tosca.relationships.UsesNetworkTemplate
+# - owner:
+# node: mysite_cdn
+# relationship: tosca.relationships.MemberOfSlice
+# - connection:
+# node: mysite_cdn
+# relationship: tosca.relationships.ConnectsToSlice
+
+ mysite:
+ type: tosca.nodes.Site
+
+ public:
+ type: tosca.nodes.network.Network.XOS
+ properties:
+ no-create: true
+ no-delete: true
+ no-update: true
+
+ service#cdn:
+ type: tosca.nodes.Service
+ properties:
+ public_key: { get_artifact: [ SELF, pubkey, LOCAL_FILE] }
+ private_key_fn: /opt/xos/services/cdn/keys/amc_id_rsa
+ artifacts:
+ pubkey: /opt/cord_profile/key_import/cdnnode_id_rsa.pub
+
+ mysite_cdn_control:
+ description: This slice holds the controller for the CDN
+ type: tosca.nodes.Slice
+ properties:
+ network: noauto
+ requirements:
+ - site:
+ node: mysite
+ relationship: tosca.relationships.MemberOfSite
+ - management:
+ node: management
+ relationship: tosca.relationships.ConnectsToNetwork
+ - public:
+ node: public
+ relationship: tosca.relationships.ConnectsToNetwork
+ - cdn_service:
+ node: service#cdn
+ relationship: tosca.relationships.MemberOfService
+
+
+ mysite_cdn_nodes:
+ description: This slice holds the hypercache/rr nodes for the CDN
+ type: tosca.nodes.Slice
+ properties:
+ network: noauto
+ requirements:
+ - site:
+ node: mysite
+ relationship: tosca.relationships.MemberOfSite
+ - management:
+ node: management
+ relationship: tosca.relationships.ConnectsToNetwork
+ - public:
+ node: public
+ relationship: tosca.relationships.ConnectsToNetwork
+ - cdn_service:
+ node: service#cdn
+ relationship: tosca.relationships.MemberOfService
+
+ mysite_cdn_cli:
+ description: This slice holds the CLI tools for the CDN
+ type: tosca.nodes.Slice
+ properties:
+ network: noauto
+ requirements:
+ - site:
+ node: mysite
+ relationship: tosca.relationships.MemberOfSite
+ - management:
+ node: management
+ relationship: tosca.relationships.ConnectsToNetwork
+ - public:
+ node: public
+ relationship: tosca.relationships.ConnectsToNetwork
+ - cdn_service:
+ node: service#cdn
+ relationship: tosca.relationships.MemberOfService
\ No newline at end of file
diff --git a/setup/roles/cdn-cord-profile/files/setup_headnode.yml b/setup/roles/cdn-cord-profile/files/setup_headnode.yml
new file mode 100644
index 0000000..edf2aa4
--- /dev/null
+++ b/setup/roles/cdn-cord-profile/files/setup_headnode.yml
@@ -0,0 +1,21 @@
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Some basic fixtures
+
+imports:
+ - custom_types/xos.yaml
+
+topology_template:
+ node_templates:
+ m1.cdnnode:
+ type: tosca.nodes.Flavor
+
+ image#AMC:
+ type: tosca.nodes.Image
+ properties:
+ path: /opt/xos/images/AMC.qcow2
+
+ image#CentOS-6-cdnnode-0.4:
+ type: tosca.nodes.Image
+ properties:
+ path: /opt/xos/images/CentOS-6-cdnnode-0.4.qcow2
diff --git a/setup/roles/cdn-cord-profile/tasks/main.yml b/setup/roles/cdn-cord-profile/tasks/main.yml
new file mode 100644
index 0000000..5110500
--- /dev/null
+++ b/setup/roles/cdn-cord-profile/tasks/main.yml
@@ -0,0 +1,16 @@
+---
+
+- name: Copy over commonly used and utility TOSCA files
+ copy:
+ src: "{{ item }}"
+ dest: "/opt/cord_profile/{{ item }}"
+ with_items:
+ - setup_headnode.yml
+ - cdn_service.yml
+
+- name: Copy over the public key
+ copy:
+ src: "{{ item }}"
+ dest: "/opt/cord_profile/key_import/{{ item }}"
+ with_items:
+ - cdnnode_id_rsa.pub
diff --git a/setup/roles/cdn-xos-config/defaults/main.yml b/setup/roles/cdn-xos-config/defaults/main.yml
new file mode 100644
index 0000000..12210de
--- /dev/null
+++ b/setup/roles/cdn-xos-config/defaults/main.yml
@@ -0,0 +1,3 @@
+---
+
+xos_admin_user: "xosadmin@opencord.org"
diff --git a/setup/roles/cdn-xos-config/tasks/main.yml b/setup/roles/cdn-xos-config/tasks/main.yml
new file mode 100644
index 0000000..67edd97
--- /dev/null
+++ b/setup/roles/cdn-xos-config/tasks/main.yml
@@ -0,0 +1,10 @@
+---
+# cdn-xos-config/tasks/main.yml
+
+- name: Setup cdn-specific objects
+ command: "python /opt/xos/tosca/run.py {{ xos_admin_user }} /opt/cord_profile/{{ item }}"
+ with_items:
+ - "setup_headnode.yml"
+ - "cdn_service.yml"
+ tags:
+ - skip_ansible_lint # TOSCA loading should be idempotent
\ No newline at end of file
diff --git a/setup/roles/generate-inventory/tasks/main.yml b/setup/roles/generate-inventory/tasks/main.yml
new file mode 100644
index 0000000..9eaf267
--- /dev/null
+++ b/setup/roles/generate-inventory/tasks/main.yml
@@ -0,0 +1,8 @@
+- name: create config file
+ local_action: template src=cmi.conf.j2 dest={{ playbook_dir }}/cmi.conf
+
+- name: create inventory file
+ local_action: template src=cmi-inventory.j2 dest={{ playbook_dir }}/cmi-inventory
+
+- name: create ssh-cmi.sh file
+ local_action: template src=ssh-cmi.sh.j2 dest={{ playbook_dir }}/ssh-cmi.sh mode=0755
diff --git a/setup/roles/generate-inventory/templates/cmi-inventory.j2 b/setup/roles/generate-inventory/templates/cmi-inventory.j2
new file mode 100644
index 0000000..f48da15
--- /dev/null
+++ b/setup/roles/generate-inventory/templates/cmi-inventory.j2
@@ -0,0 +1 @@
+cmi ansible_ssh_private_key_file={{ cmi_private_key }} ansible_user={{ cmi_root_user }}
diff --git a/setup/roles/generate-inventory/templates/cmi.conf.j2 b/setup/roles/generate-inventory/templates/cmi.conf.j2
new file mode 100644
index 0000000..43df5f6
--- /dev/null
+++ b/setup/roles/generate-inventory/templates/cmi.conf.j2
@@ -0,0 +1,6 @@
+[ssh_connection]
+ssh_args = -o "ProxyCommand ssh -q -i {{ cmi_compute_node_key }} -o StrictHostKeyChecking=no root@{{ cmi_compute_node }} nc {{ cmi_mgmt_ip }} 22"
+scp_if_ssh = True
+pipelining = True
+[defaults]
+host_key_checking = False
diff --git a/setup/roles/generate-inventory/templates/ssh-cmi.sh.j2 b/setup/roles/generate-inventory/templates/ssh-cmi.sh.j2
new file mode 100644
index 0000000..aaa42af
--- /dev/null
+++ b/setup/roles/generate-inventory/templates/ssh-cmi.sh.j2
@@ -0,0 +1,2 @@
+#! /bin/bash
+ssh -i {{ cmi_private_key }} -o "ProxyCommand ssh -q -i {{ cmi_compute_node_key }} -o StrictHostKeyChecking=no root@{{ cmi_compute_node }} nc {{ cmi_mgmt_ip }} 22" {{ cmi_root_user }}@cmi
diff --git a/setup/roles/setup-cmi/tasks/main.yml b/setup/roles/setup-cmi/tasks/main.yml
new file mode 100644
index 0000000..8393efc
--- /dev/null
+++ b/setup/roles/setup-cmi/tasks/main.yml
@@ -0,0 +1,25 @@
+---
+#- name: fix the networking
+# shell: "{{ item }}"
+# with_items:
+# - ifconfig {{ eth_device }} hw ether {{ eth_mac }}
+# - ip route del default || true
+# - ip route add default via {{ gateway_ip }}
+# - arp -s {{ gateway_ip }} {{ gateway_mac }}
+
+
+- name: download socat
+ get_url: url=http://ftp.tu-chemnitz.de/pub/linux/dag/redhat/el6/en/x86_64/rpmforge/RPMS/socat-1.7.1.3-1.el6.rf.x86_64.rpm dest=/root/socat-1.7.1.3-1.el6.rf.x86_64.rpm
+
+- name: install socat
+ yum: name=/root/socat-1.7.1.3-1.el6.rf.x86_64.rpm state=present
+
+- name: copy over setup answers
+ template:
+ src: templates/setup_answers.txt.j2
+ dest: /root/setup_answers.txt
+
+- name: run the setup script
+ shell: /a/sbin/setup.sh < /root/setup_answers.txt
+ args:
+ creates: /a/var/log/setup.log
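The `creates:` argument on the task above is what makes the setup step safe to re-run: Ansible skips the command when the named file already exists, so a completed setup is never executed twice. A sketch of those semantics (the command and marker path here are examples, not the real setup script):

```python
# Sketch of Ansible's "creates:" guard: run the command only if the
# marker file is absent, mirroring how /a/sbin/setup.sh is guarded
# by /a/var/log/setup.log in the task above.
import os
import subprocess
import tempfile

def run_once(cmd, creates):
    """Run cmd unless the 'creates' path already exists."""
    if os.path.exists(creates):
        return "skipped"
    subprocess.run(cmd, shell=True, check=True)
    return "changed"

with tempfile.TemporaryDirectory() as d:
    marker = os.path.join(d, "setup.log")
    # First run: marker absent, command runs and creates it.
    assert run_once("touch {}".format(marker), marker) == "changed"
    # Second run: marker present, command is skipped.
    assert run_once("touch {}".format(marker), marker) == "skipped"
```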
diff --git a/setup/roles/setup-cmi/templates/setup_answers.txt.j2 b/setup/roles/setup-cmi/templates/setup_answers.txt.j2
new file mode 100644
index 0000000..50aa853
--- /dev/null
+++ b/setup/roles/setup-cmi/templates/setup_answers.txt.j2
@@ -0,0 +1,19 @@
+y
+{{ cmi_password }}
+{{ cmi_password }}
+n
+{{ eth_device }}
+y
+{{ cmi_domain }}
+{{ cmi_hostname }}
+{{ eth_device }}
+
+
+{{ cdn_site }}
+{{ cdn_short_name }}
+{{ cmi_dns }}
+
+{{ cdn_name }}
+{{ cmi_password }}
+{{ cmi_password }}
+y
diff --git a/setup/setup-cmi-playbook.yml b/setup/setup-cmi-playbook.yml
new file mode 100644
index 0000000..22f727c
--- /dev/null
+++ b/setup/setup-cmi-playbook.yml
@@ -0,0 +1,15 @@
+---
+
+- name: Include vars
+ hosts: all
+ tasks:
+ - name: Include variables
+ include_vars: "{{ item }}"
+ with_items:
+ - "/opt/cord/build/platform-install/profile_manifests/{{ cord_profile }}.yml"
+ - /opt/cord/build/platform-install/profile_manifests/local_vars.yml
+
+- name: Create CORD profile
+ hosts: cmi
+ roles:
+ - setup-cmi
diff --git a/setup/setup-cmi.sh b/setup/setup-cmi.sh
new file mode 100755
index 0000000..ba8b28c
--- /dev/null
+++ b/setup/setup-cmi.sh
@@ -0,0 +1,3 @@
+#! /bin/bash
+
+ANSIBLE_CONFIG=cmi.conf ANSIBLE_HOSTS=cmi-inventory ansible-playbook --extra-vars @/opt/cord/build/genconfig/config.yml -v setup-cmi-playbook.yml