Checkpoint commit to ensure things are saved in more than one place. This commit contains the first integration of the Docker build artifacts as well as the first integration of a VirtualBox based automation test environment for MAAS.
Change-Id: I236f12392501b4ed589aba2b748ba0c45e148f2e
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..d77fa41
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,89 @@
+# Compiled Object files, Static and Dynamic libs (Shared Objects)
+*.o
+*.a
+*.so
+
+# Folders
+_obj
+_test
+
+# Architecture specific extensions/prefixes
+*.[568vq]
+[568vq].out
+
+*.cgo1.go
+*.cgo2.c
+_cgo_defun.c
+_cgo_gotypes.go
+_cgo_export.*
+
+_testmain.go
+
+*.exe
+*.test
+*.prof
+bin
+src
+maas-flow
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+env/
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*,cover
+.hypothesis/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# IPython Notebook
+.ipynb_checkpoints
diff --git a/QUICKSTART.md b/QUICKSTART.md
new file mode 100644
index 0000000..71317f3
--- /dev/null
+++ b/QUICKSTART.md
@@ -0,0 +1,157 @@
+# Quick Start
+This guide is meant to enable the user to quickly exercise the capabilities provided by the artifacts of this
+repository. There are three high-level tasks that can be exercised:
+ - Create development environment
+ - Build / Tag / Publish Docker images that support bare metal provisioning
+ - Deploy the bare metal provisioning capabilities to a virtual machine (head node) and PXE boot a compute node
+
+**Prerequisite: Vagrant is installed and operational.**
+_Note: This quick start guide has only been tested against Vagrant + VirtualBox, specifically on macOS._
+
+## Create Development Environment
+The development environment is required for the other tasks in this repository. The other tasks could technically
+be done outside this Vagrant-based development environment, but it would be left to the user to ensure
+connectivity and that the required tools are installed. It is far easier to leverage the Vagrant-based environment.
+
+### Create Development Machine
+To create the development machine the following single Vagrant command can be used. This will create an Ubuntu
+14.04 LTS based virtual machine and install some basic required packages, such as Docker, Docker Compose, and
+Oracle Java 8.
+```
+vagrant up maasdev
+```
+
+### Connect to the Development Machine
+To connect to the development machine the following vagrant command can be used.
+```
+vagrant ssh maasdev -- -L 8888:10.100.198.202:80
+```
+
+__Ignore the extra options at the end of this command after the `--`. These are used for port forwarding and
+will be explained later in the section on Verifying MAAS.__
+
+### Complete
+Once you have created and connected to the development environment this task is complete. The `maas` repository
+files can be found on the development machine under `/maasdev`. This directory is mounted from the host machine,
+so changes made to files in this directory will be reflected on the host machine and vice versa.
+
+## Build / Tag / Publish Docker Images
+Bare metal provisioning leverages three (3) utilities built and packaged as Docker container images. These
+utilities are:
+
+ - cord-maas-bootstrap - (directory: bootstrap) run at MAAS installation time to customize the MAAS instance
+ via REST interfaces
+ - cord-maas-automation - (directory: automation) run on the head node to automate PXE booted servers
+ through the MAAS bare metal deployment work flow
+ - cord-maas-dhcp-harvester - (directory: harvester) run on the head node to facilitate CORD / DHCP / DNS
+ integration so that all hosts can be resolved via DNS
+
+### Build
+
+Each of the Docker images can be built using a command of the form `./gradlew build<Util>Image`, where `<Util>`
+can be `Bootstrap`, `Automation`, or `Harvester`. Building is the process of creating a local Docker image
+for each utility.
+
+_NOTE: The first time you run `./gradlew` it will download from the Internet the `gradle` binary and install it
+locally. This is a one time operation._
+
+```
+./gradlew buildBootstrapImage
+./gradlew buildAutomationImage
+./gradlew buildHarvesterImage
+```
+
+Additionally, you can build all the images by issuing the following command:
+
+```
+./gradlew buildImages
+```
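All of the per-utility task names follow the `build<Util>Image` pattern described above; as a quick sketch, the full set can be generated like so:

```shell
# Compose the per-utility gradle task names from the build<Util>Image pattern.
for util in Bootstrap Automation Harvester; do
  echo "./gradlew build${util}Image"
done
```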
+
+### Tag
+
+Each of the Docker images can be tagged using a command of the form `./gradlew tag<Util>Image`, where `<Util>`
+can be `Bootstrap`, `Automation`, or `Harvester`. Tagging is the process of applying a local name and version to
+the utility Docker images.
+
+```
+./gradlew tagBootstrapImage
+./gradlew tagAutomationImage
+./gradlew tagHarvesterImage
+```
+
+Additionally, you can tag all the images by issuing the following command:
+
+```
+./gradlew tagImages
+```
+
+### Publish
+
+Each of the Docker images can be published using a command of the form `./gradlew publish<Util>Image`, where
+`<Util>` can be `Bootstrap`, `Automation`, or `Harvester`. Publishing is the process of uploading the locally
+named and tagged Docker image to a local Docker image registry.
+
+```
+./gradlew publishBootstrapImage
+./gradlew publishAutomationImage
+./gradlew publishHarvesterImage
+```
+
+Additionally, you can publish all the images by issuing the following command:
+
+```
+./gradlew publishImages
+```
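Conceptually, publishing amounts to pushing the locally tagged image to the local Docker registry, with the registry address prefixed onto the image name. A minimal sketch (the registry address comes from `ansible/group_vars/all`; the image name here is an assumption for illustration):

```shell
# Build the registry-qualified name that a publish would push.
registry="10.100.198.200:5000"         # local registry from group_vars
image="cord-maas-bootstrap"            # assumed image name, illustrative only
target="${registry}/${image}"
echo "docker push ${target}"
```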
+
+### Complete
+Once you have built, tagged, and published the utility Docker images this task is complete.
+
+## Deploy Bare Metal Provisioning Capabilities
+There are two parts to deploying bare metal: deploying the head node PXE server (`MAAS`) and test PXE
+booting a compute node. These tasks are accomplished utilizing additional Vagrant machines as well
+as by executing `gradle` tasks in the Vagrant development machine.
+
+### Create and Deploy MAAS into Head Node
+The first task is to create the Vagrant based head node. This will create an additional Ubuntu virtual
+machine. **This task is executed on your host machine and not in the development virtual machine.** To create
+the head node Vagrant machine issue the following command:
+
+```
+vagrant up headnode
+```
+
+### Deploy MAAS
+Canonical MAAS provides the PXE and other bare metal provisioning services for CORD and will be deployed on the
+head node via `Ansible`. To initiate this deployment issue the following `gradle` command. This `gradle` command
+executes `ansible-playbook -i 10.100.198.202, --skip-tags=switch_support,interface_config`. The IP address
+`10.100.198.202` is the IP address assigned to the head node on a private network. The `skip-tags` option
+excludes Ansible tasks not required when utilizing the Vagrant based head node.
+
+```
+./gradlew deployMaas
+```
+
+This task can take some time so be patient. It should complete without errors; if an error is encountered,
+something went horribly wrong (tm).
+
+### Verifying MAAS
+
+After the Ansible script is complete the MAAS install can be validated by viewing the MAAS UI. When we
+connected to the `maasdev` Vagrant machine the flags `-- -L 8888:10.100.198.202:80` were added to the end of
+the `vagrant ssh` command. These flags inform Vagrant to expose port `80` on machine `10.100.198.202`
+as port `8888` on your local machine. Essentially, this exposes the MAAS UI on port `8888` on your local machine.
+To view the MAAS UI simply browse to `http://localhost:8888/MAAS`.
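The forwarding flags use the standard ssh syntax `-L <local-port>:<remote-host>:<remote-port>`; a minimal sketch of how the pieces fit together:

```shell
# -L maps local port 8888 to port 80 on the head node (10.100.198.202).
flags="-L 8888:10.100.198.202:80"
echo "vagrant ssh maasdev -- ${flags}"
```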
+
+You can login to MAAS using the username `cord` and the password `cord`.
+
+Browse around the UI and get familiar with MAAS via the documentation at `http://maas.io`.
+
+## Create and Boot Compute Node
diff --git a/Vagrantfile b/Vagrantfile
new file mode 100644
index 0000000..e17000f
--- /dev/null
+++ b/Vagrantfile
@@ -0,0 +1,66 @@
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+Vagrant.configure(2) do |config|
+
+ if (/cygwin|mswin|mingw|bccwin|wince|emx/ =~ RUBY_PLATFORM) != nil
+ config.vm.synced_folder ".", "/maasdev", mount_options: ["dmode=700,fmode=600"]
+ else
+ config.vm.synced_folder ".", "/maasdev"
+ end
+
+ config.vm.define "maasdev" do |d|
+ d.vm.box = "ubuntu/trusty64"
+ d.vm.hostname = "maasdev"
+ d.vm.network "private_network", ip: "10.100.198.200"
+ d.vm.provision :shell, path: "scripts/bootstrap_ansible.sh"
+ d.vm.provision :shell, inline: "PYTHONUNBUFFERED=1 ansible-playbook /maasdev/ansible/maasdev.yml -c local"
+ d.vm.provider "virtualbox" do |v|
+ v.memory = 2048
+ end
+ end
+
+ config.vm.define "prod" do |d|
+ d.vm.box = "ubuntu/trusty64"
+ d.vm.hostname = "prod"
+ d.vm.network "private_network", ip: "10.100.198.201"
+ d.vm.provider "virtualbox" do |v|
+ v.memory = 1024
+ end
+ end
+
+ config.vm.define "headnode" do |h|
+ h.vm.box = "ubuntu/trusty64"
+ h.vm.hostname = "headnode"
+ h.vm.network "private_network",
+ ip: "10.100.198.202"
+ h.vm.network "private_network",
+ ip: "10.6.0.1",
+ virtualbox__intnet: "cord-test-network"
+ h.vm.provider "virtualbox" do |v|
+ v.memory = 2048
+ end
+ end
+
+ config.vm.define "computenode" do |c|
+ #c.vm.box = "ubuntu/trusty64"
+ c.vm.box = "clink15/pxe"
+    c.vm.synced_folder '.', '/vagrant', disabled: true
+ c.vm.communicator = "none"
+ c.vm.hostname = "computenode"
+ c.vm.network "private_network",
+ adapter: "1",
+ type: "dhcp",
+ auto_config: false,
+ virtualbox__intnet: "cord-test-network"
+ c.vm.provider "virtualbox" do |v|
+ v.memory = 1048
+      v.gui = true
+ end
+ end
+
+ if Vagrant.has_plugin?("vagrant-cachier")
+ config.cache.scope = :box
+ end
+
+end
diff --git a/ansible/ansible.cfg b/ansible/ansible.cfg
new file mode 100644
index 0000000..bd331b2
--- /dev/null
+++ b/ansible/ansible.cfg
@@ -0,0 +1,9 @@
+[defaults]
+callback_plugins=/etc/ansible/callback_plugins/
+host_key_checking=False
+deprecation_warnings=False
+
+[privilege_escalation]
+become=True
+become_method=sudo
+become_user=root
diff --git a/ansible/group_vars/all b/ansible/group_vars/all
new file mode 100644
index 0000000..5c59599
--- /dev/null
+++ b/ansible/group_vars/all
@@ -0,0 +1,9 @@
+ip: "{{ facter_ipaddress_eth1 }}"
+consul_extra: ""
+proxy_url: http://{{ facter_ipaddress_eth1 }}
+proxy_url2: http://{{ facter_ipaddress_eth1 }}
+registry_url: 10.100.198.200:5000/
+jenkins_ip: 10.100.198.200
+debian_version: trusty
+docker_cfg: docker.cfg
+docker_cfg_dest: /etc/default/docker
diff --git a/ansible/headnode.yml b/ansible/headnode.yml
new file mode 100644
index 0000000..698667a
--- /dev/null
+++ b/ansible/headnode.yml
@@ -0,0 +1,5 @@
+- hosts: localhost
+ remote_user: vagrant
+ serial: 1
+ roles:
+ - maas
diff --git a/ansible/host_vars/10.100.198.200 b/ansible/host_vars/10.100.198.200
new file mode 100644
index 0000000..48505cb
--- /dev/null
+++ b/ansible/host_vars/10.100.198.200
@@ -0,0 +1 @@
+ansible_ssh_private_key_file: /opencord/.vagrant/machines/cd/virtualbox/private_key
diff --git a/ansible/maasdev.yml b/ansible/maasdev.yml
new file mode 100644
index 0000000..5bdfa0b
--- /dev/null
+++ b/ansible/maasdev.yml
@@ -0,0 +1,10 @@
+- hosts: localhost
+ remote_user: vagrant
+ serial: 1
+ roles:
+ - common
+ - docker
+ - docker-compose
+ - consul-template
+ - registry
+ - java8-oracle
diff --git a/ansible/roles/common/defaults/main.yml b/ansible/roles/common/defaults/main.yml
new file mode 100644
index 0000000..4ccfffb
--- /dev/null
+++ b/ansible/roles/common/defaults/main.yml
@@ -0,0 +1,8 @@
+hosts: [
+ { host_ip: "10.100.198.200", host_name: "corddev"},
+ { host_ip: "10.100.198.201", host_name: "prod"},
+]
+
+obsolete_services:
+ - puppet
+ - chef-client
diff --git a/ansible/roles/common/tasks/main.yml b/ansible/roles/common/tasks/main.yml
new file mode 100644
index 0000000..32b60e8
--- /dev/null
+++ b/ansible/roles/common/tasks/main.yml
@@ -0,0 +1,21 @@
+- name: JQ is present
+ apt:
+ name: jq
+ force: yes
+ tags: [common]
+
+- name: Host is present
+ lineinfile:
+ dest: /etc/hosts
+ regexp: "^{{ item.host_ip }}"
+ line: "{{ item.host_ip }} {{ item.host_name }}"
+ with_items: hosts
+ tags: [common]
+
+- name: Services are not running
+ service:
+ name: "{{ item }}"
+ state: stopped
+ ignore_errors: yes
+ with_items: obsolete_services
+ tags: [common]
diff --git a/ansible/roles/consul-template/files/consul-template b/ansible/roles/consul-template/files/consul-template
new file mode 100755
index 0000000..46262c6
--- /dev/null
+++ b/ansible/roles/consul-template/files/consul-template
Binary files differ
diff --git a/ansible/roles/consul-template/files/example.conf.tmpl b/ansible/roles/consul-template/files/example.conf.tmpl
new file mode 100644
index 0000000..fbedd1d
--- /dev/null
+++ b/ansible/roles/consul-template/files/example.conf.tmpl
@@ -0,0 +1 @@
+The address is {{getv "/nginx/nginx"}}
diff --git a/ansible/roles/consul-template/files/example.ctmpl b/ansible/roles/consul-template/files/example.ctmpl
new file mode 100644
index 0000000..8a215ab
--- /dev/null
+++ b/ansible/roles/consul-template/files/example.ctmpl
@@ -0,0 +1,3 @@
+{{range service "nginx"}}
+The address is {{.Address}}:{{.Port}}
+{{end}}
diff --git a/ansible/roles/consul-template/files/example.toml b/ansible/roles/consul-template/files/example.toml
new file mode 100644
index 0000000..4c2d3a1
--- /dev/null
+++ b/ansible/roles/consul-template/files/example.toml
@@ -0,0 +1,6 @@
+[template]
+src = "example.conf.tmpl"
+dest = "/tmp/example.conf"
+keys = [
+ "/nginx/nginx"
+]
\ No newline at end of file
diff --git a/ansible/roles/consul-template/tasks/main.yml b/ansible/roles/consul-template/tasks/main.yml
new file mode 100644
index 0000000..06a5e03
--- /dev/null
+++ b/ansible/roles/consul-template/tasks/main.yml
@@ -0,0 +1,19 @@
+- name: Directory is created
+ file:
+ path: /data/consul-template
+ state: directory
+ tags: [consul-template]
+
+- name: File is copied
+ copy:
+ src: consul-template
+ dest: /usr/local/bin/consul-template
+ mode: 0755
+ tags: [consul-template]
+
+- name: Example template is copied
+ copy:
+ src: example.ctmpl
+ dest: /data/consul-template/example.ctmpl
+ mode: 0644
+ tags: [consul-template]
diff --git a/ansible/roles/docker-compose/tasks/main.yml b/ansible/roles/docker-compose/tasks/main.yml
new file mode 100644
index 0000000..3845f4a
--- /dev/null
+++ b/ansible/roles/docker-compose/tasks/main.yml
@@ -0,0 +1,5 @@
+- name: Executable is present
+ get_url:
+ url: https://github.com/docker/compose/releases/download/1.6.2/docker-compose-Linux-x86_64
+ dest: /usr/local/bin/docker-compose
+ mode: 0755
diff --git a/ansible/roles/docker/defaults/main.yml b/ansible/roles/docker/defaults/main.yml
new file mode 100644
index 0000000..338d16e
--- /dev/null
+++ b/ansible/roles/docker/defaults/main.yml
@@ -0,0 +1,6 @@
+docker_extra: ""
+
+centos_files: [
+ { src: "docker.centos.repo", dest: "/etc/yum.repos.d/docker.repo" },
+ { src: "docker.centos.service", dest: "/lib/systemd/system/docker.service" },
+]
\ No newline at end of file
diff --git a/ansible/roles/docker/files/docker.centos.repo b/ansible/roles/docker/files/docker.centos.repo
new file mode 100644
index 0000000..b472187
--- /dev/null
+++ b/ansible/roles/docker/files/docker.centos.repo
@@ -0,0 +1,6 @@
+[dockerrepo]
+name=Docker Repository
+baseurl=https://yum.dockerproject.org/repo/main/centos/7
+enabled=1
+gpgcheck=1
+gpgkey=https://yum.dockerproject.org/gpg
\ No newline at end of file
diff --git a/ansible/roles/docker/files/docker.centos.service b/ansible/roles/docker/files/docker.centos.service
new file mode 100644
index 0000000..3bbef84
--- /dev/null
+++ b/ansible/roles/docker/files/docker.centos.service
@@ -0,0 +1,17 @@
+[Unit]
+Description=Docker Application Container Engine
+Documentation=https://docs.docker.com
+After=network.target docker.socket
+Requires=docker.socket
+
+[Service]
+EnvironmentFile=-/etc/sysconfig/docker
+Type=notify
+ExecStart=/usr/bin/docker daemon --insecure-registry 10.100.198.200:5000 -H fd://
+MountFlags=slave
+LimitNOFILE=1048576
+LimitNPROC=1048576
+LimitCORE=infinity
+
+[Install]
+WantedBy=multi-user.target
diff --git a/ansible/roles/docker/tasks/centos.yml b/ansible/roles/docker/tasks/centos.yml
new file mode 100644
index 0000000..a8910d4
--- /dev/null
+++ b/ansible/roles/docker/tasks/centos.yml
@@ -0,0 +1,23 @@
+- name: CentOS files are copied
+ copy:
+ src: "{{ item.src }}"
+ dest: "{{ item.dest }}"
+ with_items: centos_files
+ tags: [docker]
+
+- name: CentOS package is installed
+ yum:
+ name: docker-engine
+ state: present
+ tags: [docker]
+
+- name: CentOS Daemon is reloaded
+ command: systemctl daemon-reload
+ tags: [docker]
+
+- name: CentOS service is running
+ service:
+ name: docker
+ state: running
+ tags: [docker]
+
diff --git a/ansible/roles/docker/tasks/debian.yml b/ansible/roles/docker/tasks/debian.yml
new file mode 100644
index 0000000..aa10934
--- /dev/null
+++ b/ansible/roles/docker/tasks/debian.yml
@@ -0,0 +1,61 @@
+- name: Debian add Docker repository and update apt cache
+ apt_repository:
+ repo: deb https://apt.dockerproject.org/repo ubuntu-{{ debian_version }} main
+ update_cache: yes
+ state: present
+ tags: [docker]
+
+- name: Debian Docker is present
+ apt:
+ name: docker-engine
+ state: latest
+ force: yes
+ tags: [docker]
+
+- name: Debian python-pip is present
+ apt: name=python-pip state=present
+ tags: [docker]
+
+- name: Debian docker-py is present
+ pip:
+ name: docker-py
+ version: 1.6.0
+ state: present
+ tags: [docker]
+
+- name: Debian files are present
+ template:
+ src: "{{ docker_cfg }}"
+ dest: "{{ docker_cfg_dest }}"
+ register: copy_result
+ tags: [docker]
+
+- name: Debian Daemon is reloaded
+ command: systemctl daemon-reload
+ when: copy_result|changed and is_systemd is defined
+ tags: [docker]
+
+- name: vagrant user is added to the docker group
+ user:
+ name: vagrant
+ group: docker
+ register: user_result
+ tags: [docker]
+
+- name: Debian Docker service is restarted
+ service:
+ name: docker
+ state: restarted
+ when: copy_result|changed or user_result|changed
+ tags: [docker]
+
+- name: DockerUI is running
+ docker:
+ image: abh1nav/dockerui
+ name: dockerui
+ ports: 9000:9000
+ privileged: yes
+ volumes:
+ - /var/run/docker.sock:/var/run/docker.sock
+ when: not skip_ui is defined
+ tags: [docker]
\ No newline at end of file
diff --git a/ansible/roles/docker/tasks/main.yml b/ansible/roles/docker/tasks/main.yml
new file mode 100644
index 0000000..1495847
--- /dev/null
+++ b/ansible/roles/docker/tasks/main.yml
@@ -0,0 +1,5 @@
+- include: debian.yml
+ when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
+
+- include: centos.yml
+ when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
diff --git a/ansible/roles/docker/templates/docker-swarm-master.service b/ansible/roles/docker/templates/docker-swarm-master.service
new file mode 100644
index 0000000..1ec64aa
--- /dev/null
+++ b/ansible/roles/docker/templates/docker-swarm-master.service
@@ -0,0 +1,21 @@
+[Unit]
+Description=Docker Application Container Engine
+Documentation=https://docs.docker.com
+After=network.target docker.socket
+Requires=docker.socket
+
+[Service]
+Type=notify
+ExecStart=/usr/bin/docker daemon -H fd:// \
+ --insecure-registry 10.100.198.200:5000 \
+ --registry-mirror=http://10.100.198.200:5001 \
+ --cluster-store=consul://{{ ip }}:8500/swarm \
+ --cluster-advertise={{ ip }}:2375 {{ docker_extra }}
+MountFlags=master
+LimitNOFILE=1048576
+LimitNPROC=1048576
+LimitCORE=infinity
+
+[Install]
+WantedBy=multi-user.target
+
diff --git a/ansible/roles/docker/templates/docker-swarm-node.service b/ansible/roles/docker/templates/docker-swarm-node.service
new file mode 100644
index 0000000..09f5141
--- /dev/null
+++ b/ansible/roles/docker/templates/docker-swarm-node.service
@@ -0,0 +1,23 @@
+[Unit]
+Description=Docker Application Container Engine
+Documentation=https://docs.docker.com
+After=network.target docker.socket
+Requires=docker.socket
+
+[Service]
+Type=notify
+ExecStart=/usr/bin/docker daemon -H fd:// \
+ -H tcp://0.0.0.0:2375 \
+ -H unix:///var/run/docker.sock \
+ --insecure-registry 10.100.198.200:5000 \
+ --registry-mirror=http://10.100.198.200:5001 \
+ --cluster-store=consul://{{ ip }}:8500/swarm \
+ --cluster-advertise={{ ip }}:2375 {{ docker_extra }}
+MountFlags=slave
+LimitNOFILE=1048576
+LimitNPROC=1048576
+LimitCORE=infinity
+
+[Install]
+WantedBy=multi-user.target
+
diff --git a/ansible/roles/docker/templates/docker.cfg b/ansible/roles/docker/templates/docker.cfg
new file mode 100644
index 0000000..ac03f17
--- /dev/null
+++ b/ansible/roles/docker/templates/docker.cfg
@@ -0,0 +1 @@
+DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 10.100.198.200:5000 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=http://10.100.198.200:5001"
\ No newline at end of file
diff --git a/ansible/roles/java8-oracle/tasks/main.yml b/ansible/roles/java8-oracle/tasks/main.yml
new file mode 100644
index 0000000..809fbee
--- /dev/null
+++ b/ansible/roles/java8-oracle/tasks/main.yml
@@ -0,0 +1,20 @@
+---
+- name: Install add-apt-repository
+ sudo: yes
+ apt: name=software-properties-common state=latest
+
+- name: Add Oracle Java repository
+ sudo: yes
+ apt_repository: repo='ppa:webupd8team/java'
+
+- name: Accept Java 8 license
+ sudo: yes
+ debconf: name='oracle-java8-installer' question='shared/accepted-oracle-license-v1-1' value='true' vtype='select'
+
+- name: Install Oracle Java 8
+ sudo: yes
+ apt: name={{item}} state=latest
+ with_items:
+ - oracle-java8-installer
+ - ca-certificates
+ - oracle-java8-set-default
diff --git a/ansible/roles/registry/files/mirror-config.yml b/ansible/roles/registry/files/mirror-config.yml
new file mode 100644
index 0000000..65ff62c
--- /dev/null
+++ b/ansible/roles/registry/files/mirror-config.yml
@@ -0,0 +1,23 @@
+version: 0.1
+log:
+ fields:
+ service: registry
+storage:
+ cache:
+ blobdescriptor: inmemory
+ filesystem:
+ rootdirectory: /var/lib/registry
+ delete:
+ enabled: true
+http:
+ addr: :5000
+ headers:
+ X-Content-Type-Options: [nosniff]
+health:
+ storagedriver:
+ enabled: true
+ interval: 10s
+ threshold: 3
+
+proxy:
+ remoteurl: https://registry-1.docker.io
diff --git a/ansible/roles/registry/tasks/main.yml b/ansible/roles/registry/tasks/main.yml
new file mode 100644
index 0000000..ceb8e46
--- /dev/null
+++ b/ansible/roles/registry/tasks/main.yml
@@ -0,0 +1,35 @@
+- name: Directories are present
+ file:
+ path: "{{ item }}"
+ state: directory
+ recurse: yes
+ with_items:
+ - /data/registry-mirror/conf
+ tags: [registry]
+
+- name: Configuration is copied
+ copy:
+ src: mirror-config.yml
+ dest: /data/registry-mirror/conf/config.yml
+ tags: [registry]
+
+- name: Registry container is running
+ docker:
+ name: registry
+ image: registry:2.4.0
+ ports: 5000:5000
+ volumes:
+ - /vagrant/registry:/var/lib/registry/docker/registry
+ - /data/registry/conf:/conf
+ tags: [registry]
+
+- name: Mirror container is running
+ docker:
+ name: registry-mirror
+ image: registry:2.4.0
+ ports: 5001:5000
+ volumes:
+ - /vagrant/registry-mirror:/var/lib/registry/docker/registry
+ - /data/registry-mirror/conf:/conf
+ command: /conf/config.yml
+ tags: [registry]
diff --git a/automation/Dockerfile b/automation/Dockerfile
new file mode 100644
index 0000000..5e1be43
--- /dev/null
+++ b/automation/Dockerfile
@@ -0,0 +1,8 @@
+FROM golang:alpine
+
+RUN apk --update add git
+
+WORKDIR /go
+RUN go get github.com/ciena/cord-maas-automation
+
+ENTRYPOINT ["/go/bin/cord-maas-automation"]
diff --git a/automation/Godeps/Godeps.json b/automation/Godeps/Godeps.json
new file mode 100644
index 0000000..77ff54f
--- /dev/null
+++ b/automation/Godeps/Godeps.json
@@ -0,0 +1,18 @@
+{
+ "ImportPath": "_/Users/dbainbri/src/develop/mf",
+ "GoVersion": "go1.5",
+ "Packages": [
+ "github.com/ciena/maas-flow"
+ ],
+ "Deps": [
+ {
+ "ImportPath": "github.com/juju/gomaasapi",
+ "Rev": "e173bc8d8d3304ff11b0ded5f6d4eea0cb560a40"
+ },
+ {
+ "ImportPath": "gopkg.in/mgo.v2/bson",
+ "Comment": "r2015.12.06-2-g03c9f3e",
+ "Rev": "03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64"
+ }
+ ]
+}
diff --git a/automation/LICENSE b/automation/LICENSE
new file mode 100644
index 0000000..8dada3e
--- /dev/null
+++ b/automation/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/automation/Makefile b/automation/Makefile
new file mode 100644
index 0000000..d3aa83a
--- /dev/null
+++ b/automation/Makefile
@@ -0,0 +1,17 @@
+.PHONY: help
+help:
+ @echo "image - create docker image for the MAAS deploy flow utility"
+ @echo "save - save the docker image for the MAAS deployment flow utility to a local tar file"
+ @echo "clean - remove any generated files"
+ @echo "help - this message"
+
+.PHONY: image
+image:
+ docker build -t cord/maas-automation:0.1-prerelease .
+
+save: image
+ docker save -o cord_maas-automation.1-prerelease.tar cord/maas-automation:0.1-prerelease
+
+.PHONY: clean
+clean:
+ rm -f cord_maas-automation.1-prerelease.tar
diff --git a/automation/README.md b/automation/README.md
new file mode 100644
index 0000000..ee49648
--- /dev/null
+++ b/automation/README.md
@@ -0,0 +1,86 @@
+# Metal as a Service Automation (maas-flow)
+This is a utility that works in conjunction with an Ubuntu Metal as a Service
+([MAAS](http://maas.io)) deployment. By default, the MAAS system allows an
+operator to manually control the lifecycle of a compute host as it comes
+online, leveraging PXE, DHCP, DNS, etc.
+
+The utility leverages the MAAS REST API to periodically monitor the **status**
+of the hosts under control of MAAS and continuously attempts to move those hosts
+into a **deployed** state. (Note: this will likely change in the future to
+support additional target states.)
+
+### Filtering Hosts on which to Operate
+Using a filter the operator can control on which hosts automation acts. The
+filter is a basic **JSON** object and can either be specified as a string on
+the command line or as a file that contains the filter. When specifying a
+file, the value of the **-filter** command line option should be a **@**
+followed by the name of the file, i.e. @$HOME/some/file; the file name may
+contain environment variables.
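A minimal sketch of this `@file` handling (the `resolveFilterSpec` name is illustrative, not from the utility; it assumes the same `os.ExpandEnv`-style expansion the utility performs):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveFilterSpec sketches how a "@"-prefixed -filter value becomes a
// file name, with environment variables expanded via os.ExpandEnv.
// Loading and parsing the file itself is omitted here.
func resolveFilterSpec(spec string) (isFile bool, value string) {
	if strings.HasPrefix(spec, "@") {
		return true, os.ExpandEnv(spec[1:])
	}
	return false, spec
}

func main() {
	os.Setenv("HOME", "/home/operator")
	isFile, name := resolveFilterSpec("@$HOME/maas/filter.json")
	fmt.Println(isFile, name) // true /home/operator/maas/filter.json
}
```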
+
+The structure of the filter object is:
+```
+{
+ "hosts" : {
+ "include" : [],
+ "exclude" : []
+ },
+ "zones" : {
+ "include" : [],
+ "exclude" : []
+ }
+}
+```
+For **hosts** the **include** and **exclude** values are a list of regular
+expressions which are matched against the hostname of a device under control
+of MAAS.
+
+For **zones** the **include** and **exclude** values are a list of regular
+expressions which are matched against the zone with which a host is associated.
+
+When both **include** and **exclude** values are specified the **include**
+is processed followed by the **exclude**.
+
+The default filter, if none is specified, is depicted below. Essentially it
+specifies that the automation will act on all hosts in only the **default**
+zone. (*NOTE: This default filter may change in the future.*)
+```
+{
+ "hosts" : {
+ "include" : [],
+ "exclude" : []
+ },
+ "zones" : {
+ "include" : ["default"],
+ "exclude" : []
+ }
+}
+```
+
+*NOTE:* Only **include** is currently (January 26, 2016) supported.
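A minimal sketch of the include-matching semantics described above (the `matches` and `emptyMatchesAll` names are illustrative, not from the utility): an empty hosts include matches every host, while zones are only matched against a non-empty include list.

```go
package main

import (
	"fmt"
	"regexp"
)

// matches reports whether target matches any regular expression in
// patterns. An empty pattern list either matches everything (hosts
// semantics) or nothing (zones semantics), per emptyMatchesAll.
func matches(patterns []string, target string, emptyMatchesAll bool) bool {
	if len(patterns) == 0 {
		return emptyMatchesAll
	}
	for _, p := range patterns {
		if regexp.MustCompile(p).MatchString(target) {
			return true
		}
	}
	return false
}

func main() {
	include := []string{"cord-r1-.*"}
	fmt.Println(matches(include, "cord-r1-s1", true)) // true
	fmt.Println(matches(include, "lab-host-1", true)) // false
	fmt.Println(matches(nil, "anything", true))       // true: empty hosts include
}
```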
+
+### Connecting to MAAS
+The connection to MAAS is controlled by command line parameters, specifically:
+* **-apiVersion** - (default: *1.0*) specifies the version of the MAAS API to use
+* **-apikey** - (default: *none*) specifies the API key used to authenticate to
+the MAAS server. For a given user this can be found under their account
+settings in the MAAS UI. This value is important as the automation is acting
+on behalf of this user and the SSH keys that are pushed to hosts will be the
+SSH keys associated with this user.
+* **-maas** - (default: *http://localhost/MAAS*) specifies the base URL on which
+to contact the MAAS server.
+* **-period** - (default: *15s*) specifies how often the automation queries the
+MAAS server to retrieve the state of the hosts. Automation must query the state
+of the hosts from MAAS as MAAS does not support an asynchronous change
+mechanism today. This value should be set such that the automation can fully
+process all the hosts within a period.
+
+### Docker Image
+The project contains a `Dockerfile` that can be used to construct a docker
+image from the repository. The docker image is also provided via Docker Hub at
+https://hub.docker.com/r/ciena/maas-flow/.
+
+### State machine
+The state machine on which the MAAS automation is based is depicted below.
+Currently (January 26, 2016) the automation only supports a deployed target
+state and will not act on hosts that are in a failed, broken, or error state.
+![](lifecycle.png)
diff --git a/automation/lifecycle.png b/automation/lifecycle.png
new file mode 100644
index 0000000..422f247
--- /dev/null
+++ b/automation/lifecycle.png
Binary files differ
diff --git a/automation/maas-flow.go b/automation/maas-flow.go
new file mode 100644
index 0000000..1514e32
--- /dev/null
+++ b/automation/maas-flow.go
@@ -0,0 +1,167 @@
+package main
+
+import (
+ "encoding/json"
+ "flag"
+ "log"
+ "net/url"
+ "os"
+ "strings"
+ "time"
+ "unicode"
+
+ maas "github.com/juju/gomaasapi"
+)
+
+const (
+ // defaultFilter specifies the default filter to use when none is specified
+ defaultFilter = `{
+ "hosts" : {
+ "include" : [ ".*" ],
+ "exclude" : []
+ },
+ "zones" : {
+ "include" : ["default"],
+ "exclude" : []
+ }
+ }`
+ defaultMapping = "{}"
+)
+
+var apiKey = flag.String("apikey", "", "key with which to access MAAS server")
+var maasURL = flag.String("maas", "http://localhost/MAAS", "url over which to access MAAS")
+var apiVersion = flag.String("apiVersion", "1.0", "version of the API to access")
+var queryPeriod = flag.String("period", "15s", "frequency the MAAS service is polled for node states")
+var preview = flag.Bool("preview", false, "displays the action that would be taken, but does not do the action, in this mode the nodes are processed only once")
+var mappings = flag.String("mappings", "{}", "the mac to name mappings")
+var always = flag.Bool("always-rename", true, "attempt to rename at every stage of workflow")
+var verbose = flag.Bool("verbose", false, "display verbose logging")
+var filterSpec = flag.String("filter", strings.Map(func(r rune) rune {
+ if unicode.IsSpace(r) {
+ return -1
+ }
+ return r
+}, defaultFilter), "constrain by hostname what will be automated")
+
+// checkError fatally logs the message if the given err is not nil; otherwise
+// it returns false.
+func checkError(err error, message string, v ...interface{}) bool {
+ if err != nil {
+		log.Fatalf("[error] "+message, v...)
+ }
+ return false
+}
+
+// checkWarn if the given err is not nil, then log the message as a warning and
+// return true, else return false.
+func checkWarn(err error, message string, v ...interface{}) bool {
+ if err != nil {
+		log.Printf("[warn] "+message, v...)
+ return true
+ }
+ return false
+}
+
+// fetchNodes does an HTTP GET to the MAAS server to query all the nodes
+func fetchNodes(client *maas.MAASObject) ([]MaasNode, error) {
+ nodeListing := client.GetSubObject("nodes")
+ listNodeObjects, err := nodeListing.CallGet("list", url.Values{})
+ if checkWarn(err, "unable to get the list of all nodes: %s", err) {
+ return nil, err
+ }
+ listNodes, err := listNodeObjects.GetArray()
+ if checkWarn(err, "unable to get the node objects for the list: %s", err) {
+ return nil, err
+ }
+
+ var nodes = make([]MaasNode, len(listNodes))
+ for index, nodeObj := range listNodes {
+ node, err := nodeObj.GetMAASObject()
+ if !checkWarn(err, "unable to retrieve object for node: %s", err) {
+ nodes[index] = MaasNode{node}
+ }
+ }
+ return nodes, nil
+}
+
+func main() {
+
+ flag.Parse()
+
+ options := ProcessingOptions{
+ Preview: *preview,
+ Verbose: *verbose,
+ AlwaysRename: *always,
+ }
+
+	// Determine the filter, this can either be specified on the command
+ // line as a value or a file reference. If none is specified the default
+ // will be used
+ if len(*filterSpec) > 0 {
+ if (*filterSpec)[0] == '@' {
+ name := os.ExpandEnv((*filterSpec)[1:])
+ file, err := os.OpenFile(name, os.O_RDONLY, 0)
+ checkError(err, "[error] unable to open file '%s' to load the filter : %s", name, err)
+ decoder := json.NewDecoder(file)
+ err = decoder.Decode(&options.Filter)
+ checkError(err, "[error] unable to parse filter configuration from file '%s' : %s", name, err)
+ } else {
+ err := json.Unmarshal([]byte(*filterSpec), &options.Filter)
+ checkError(err, "[error] unable to parse filter specification: '%s' : %s", *filterSpec, err)
+ }
+ } else {
+ err := json.Unmarshal([]byte(defaultFilter), &options.Filter)
+		checkError(err, "[error] unable to parse default filter specification: '%s' : %s", defaultFilter, err)
+ }
+
+	// Determine the mac to name mapping, this can either be specified on the command
+ // line as a value or a file reference. If none is specified the default
+ // will be used
+ if len(*mappings) > 0 {
+ if (*mappings)[0] == '@' {
+ name := os.ExpandEnv((*mappings)[1:])
+ file, err := os.OpenFile(name, os.O_RDONLY, 0)
+ checkError(err, "[error] unable to open file '%s' to load the mac name mapping : %s", name, err)
+ decoder := json.NewDecoder(file)
+ err = decoder.Decode(&options.Mappings)
+ checkError(err, "[error] unable to parse filter configuration from file '%s' : %s", name, err)
+ } else {
+ err := json.Unmarshal([]byte(*mappings), &options.Mappings)
+ checkError(err, "[error] unable to parse mac name mapping: '%s' : %s", *mappings, err)
+ }
+ } else {
+ err := json.Unmarshal([]byte(defaultMapping), &options.Mappings)
+ checkError(err, "[error] unable to parse default mac name mappings: '%s' : %s", defaultMapping, err)
+ }
+
+ // Verify the specified period for queries can be converted into a Go duration
+ period, err := time.ParseDuration(*queryPeriod)
+	checkError(err, "[error] unable to parse specified query period duration: '%s': %s", *queryPeriod, err)
+
+ authClient, err := maas.NewAuthenticatedClient(*maasURL, *apiKey, *apiVersion)
+ if err != nil {
+ checkError(err, "[error] Unable to use specified client key, '%s', to authenticate to the MAAS server: %s", *apiKey, err)
+ }
+
+ // Create an object through which we will communicate with MAAS
+ client := maas.NewMAAS(*authClient)
+
+	// This utility essentially polls the MAAS server for node state and
+	// processes each node toward its next state, kicking off the processing
+	// every specified duration. Relying on the ticker alone would delay the
+	// first pass by one "period"; the desired behavior is to process once
+	// now and then once every "period" thereafter, so the code does one
+	// pass immediately.
+ nodes, _ := fetchNodes(client)
+ ProcessAll(client, nodes, options)
+
+ if !(*preview) {
+ // Create a ticker and fetch and process the nodes every "period"
+ ticker := time.NewTicker(period)
+ for t := range ticker.C {
+ log.Printf("[info] query server at %s", t)
+ nodes, _ := fetchNodes(client)
+ ProcessAll(client, nodes, options)
+ }
+ }
+}
diff --git a/automation/mappings.json b/automation/mappings.json
new file mode 100644
index 0000000..be99f7a
--- /dev/null
+++ b/automation/mappings.json
@@ -0,0 +1,26 @@
+{
+ "2c:60:0c:e3:c0:f1":{
+ "hostname":"cord-r1-s1"
+ },
+ "2c:60:0c:e3:c4:bd":{
+ "hostname":"cord-r1-s2"
+ },
+ "2c:60:0c:e3:c2:83":{
+ "hostname":"cord-r1-s3"
+ },
+ "2c:60:0c:e3:bb:ae":{
+ "hostname":"cord-r1-s4"
+ },
+ "2c:60:0c:e3:bf:b0":{
+ "hostname":"cord-r1-s5"
+ },
+ "2c:60:0c:e3:be:ff":{
+ "hostname":"cord-r1-s6"
+ },
+ "2c:60:0c:e3:c5:fe":{
+ "hostname":"cord-r1-s7"
+ },
+ "2c:60:0c:e3:bd:10":{
+ "hostname":"cord-r1-s8"
+ }
+}
diff --git a/automation/node.go b/automation/node.go
new file mode 100644
index 0000000..d364fe0
--- /dev/null
+++ b/automation/node.go
@@ -0,0 +1,116 @@
+package main
+
+import (
+ "fmt"
+
+ maas "github.com/juju/gomaasapi"
+)
+
+// MaasNodeStatus MAAS lifecycle status for nodes
+type MaasNodeStatus int
+
+// MAAS Node Statuses
+const (
+ Invalid MaasNodeStatus = -1
+ New MaasNodeStatus = 0
+ Commissioning MaasNodeStatus = 1
+ FailedCommissioning MaasNodeStatus = 2
+ Missing MaasNodeStatus = 3
+ Ready MaasNodeStatus = 4
+ Reserved MaasNodeStatus = 5
+ Deployed MaasNodeStatus = 6
+ Retired MaasNodeStatus = 7
+ Broken MaasNodeStatus = 8
+ Deploying MaasNodeStatus = 9
+ Allocated MaasNodeStatus = 10
+ FailedDeployment MaasNodeStatus = 11
+ Releasing MaasNodeStatus = 12
+ FailedReleasing MaasNodeStatus = 13
+ DiskErasing MaasNodeStatus = 14
+ FailedDiskErasing MaasNodeStatus = 15
+)
+
+var names = []string{"New", "Commissioning", "FailedCommissioning", "Missing", "Ready", "Reserved",
+ "Deployed", "Retired", "Broken", "Deploying", "Allocated", "FailedDeployment",
+ "Releasing", "FailedReleasing", "DiskErasing", "FailedDiskErasing"}
+
+func (v MaasNodeStatus) String() string {
+ return names[v]
+}
+
+// FromString looks up the constant value for a given node state name
+func FromString(name string) (MaasNodeStatus, error) {
+ for i, v := range names {
+ if v == name {
+ return MaasNodeStatus(i), nil
+ }
+ }
+ return -1, fmt.Errorf("Unknown MAAS node state name, '%s'", name)
+}
+
+// MaasNode convenience wrapper for a MAAS node on top of a generic MAAS object
+type MaasNode struct {
+ maas.MAASObject
+}
+
+// GetString get attribute value as string
+func (n *MaasNode) GetString(key string) (string, error) {
+ return n.GetMap()[key].GetString()
+}
+
+// GetFloat64 get attribute value as float64
+func (n *MaasNode) GetFloat64(key string) (float64, error) {
+ return n.GetMap()[key].GetFloat64()
+}
+
+// ID get the system id of the node
+func (n *MaasNode) ID() string {
+ id, _ := n.GetString("system_id")
+ return id
+}
+
+// PowerState get the power state of the node
+func (n *MaasNode) PowerState() string {
+ state, _ := n.GetString("power_state")
+ return state
+}
+
+// Hostname get the hostname
+func (n *MaasNode) Hostname() string {
+ hn, _ := n.GetString("hostname")
+ return hn
+}
+
+// MACs get the MAC Addresses
+func (n *MaasNode) MACs() []string {
+ macsObj, _ := n.GetMap()["macaddress_set"]
+ macs, _ := macsObj.GetArray()
+ if len(macs) == 0 {
+ return []string{}
+ }
+ result := make([]string, len(macs))
+ for i, mac := range macs {
+ obj, _ := mac.GetMap()
+ addr, _ := obj["mac_address"]
+ s, _ := addr.GetString()
+ result[i] = s
+ }
+
+ return result
+}
+
+// Zone get the zone
+func (n *MaasNode) Zone() string {
+ zone := n.GetMap()["zone"]
+ attrs, _ := zone.GetMap()
+ v, _ := attrs["name"].GetString()
+ return v
+}
+
+// GetInteger get attribute value as integer
+func (n *MaasNode) GetInteger(key string) (int, error) {
+ v, err := n.GetMap()[key].GetFloat64()
+ if err != nil {
+ return 0, err
+ }
+ return int(v), nil
+}
diff --git a/automation/sample-filter.json b/automation/sample-filter.json
new file mode 100644
index 0000000..2a81a99
--- /dev/null
+++ b/automation/sample-filter.json
@@ -0,0 +1,14 @@
+{
+ "hosts":{
+ "include":[
+ ".*"
+ ]
+ },
+ "zones":{
+ "include":[
+ "default",
+ "petaluma-lab"
+ ]
+ }
+}
+
diff --git a/automation/state.go b/automation/state.go
new file mode 100644
index 0000000..4b94089
--- /dev/null
+++ b/automation/state.go
@@ -0,0 +1,422 @@
+package main
+
+import (
+ "fmt"
+ "log"
+ "net/url"
+ "regexp"
+ "strconv"
+ "strings"
+
+ maas "github.com/juju/gomaasapi"
+)
+
+// Action is the operation used to move a node from its current state toward the target state
+type Action func(*maas.MAASObject, MaasNode, ProcessingOptions) error
+
+// Transition maps a current state to the action used to reach the target state
+type Transition struct {
+ Target string
+ Current string
+ Using Action
+}
+
+// ProcessingOptions used to determine on what hosts to operate
+type ProcessingOptions struct {
+ Filter struct {
+ Zones struct {
+ Include []string
+ Exclude []string
+ }
+ Hosts struct {
+ Include []string
+ Exclude []string
+ }
+ }
+ Mappings map[string]interface{}
+ Verbose bool
+ Preview bool
+ AlwaysRename bool
+}
+
+// Transitions the actual map
+//
+// Currently this is a hand compiled / optimized "next step" table. This should
+// really be generated from the state machine chart input. Once this has been
+// accomplished you should be able to determine the action to take given your
+// target state and your current state.
+var Transitions = map[string]map[string]Action{
+ "Deployed": {
+ "New": Commission,
+ "Deployed": Done,
+ "Ready": Aquire,
+ "Allocated": Deploy,
+ "Retired": AdminState,
+ "Reserved": AdminState,
+ "Releasing": Wait,
+ "DiskErasing": Wait,
+ "Deploying": Wait,
+ "Commissioning": Wait,
+ "Missing": Fail,
+ "FailedReleasing": Fail,
+ "FailedDiskErasing": Fail,
+ "FailedDeployment": Fail,
+ "Broken": Fail,
+ "FailedCommissioning": Fail,
+ },
+}
+
+const (
+ // defaultStateMachine Would be nice to drive from a graph language
+ defaultStateMachine string = `
+ (New)->(Commissioning)
+ (Commissioning)->(FailedCommissioning)
+ (FailedCommissioning)->(New)
+ (Commissioning)->(Ready)
+ (Ready)->(Deploying)
+ (Ready)->(Allocated)
+ (Allocated)->(Deploying)
+ (Deploying)->(Deployed)
+ (Deploying)->(FailedDeployment)
+ (FailedDeployment)->(Broken)
+ (Deployed)->(Releasing)
+ (Releasing)->(FailedReleasing)
+ (FailedReleasing)->(Broken)
+ (Releasing)->(DiskErasing)
+ (DiskErasing)->(FailedEraseDisk)
+ (FailedEraseDisk)->(Broken)
+ (Releasing)->(Ready)
+ (DiskErasing)->(Ready)
+ (Broken)->(Ready)`
+)
+
+// updateNodeName changes the name of the MAAS node based on the configuration file
+func updateNodeName(client *maas.MAASObject, node MaasNode, options ProcessingOptions) error {
+ macs := node.MACs()
+
+ // Get current node name and strip off domain name
+ current := node.Hostname()
+ if i := strings.IndexRune(current, '.'); i != -1 {
+ current = current[:i]
+ }
+ for _, mac := range macs {
+ if entry, ok := options.Mappings[mac]; ok {
+ if name, ok := entry.(map[string]interface{})["hostname"]; ok && current != name.(string) {
+ nodesObj := client.GetSubObject("nodes")
+ nodeObj := nodesObj.GetSubObject(node.ID())
+ log.Printf("RENAME '%s' to '%s'\n", node.Hostname(), name.(string))
+
+ if !options.Preview {
+ nodeObj.Update(url.Values{"hostname": []string{name.(string)}})
+ }
+ }
+ }
+ }
+ return nil
+}
+
+// Done we are at the target state, nothing to do
+var Done = func(client *maas.MAASObject, node MaasNode, options ProcessingOptions) error {
+ // As devices are normally in the "COMPLETED" state we don't want to
+ // log this fact unless we are in verbose mode. I suspect it would be
+ // nice to log it once when the device transitions from a non COMPLETE
+ // state to a complete state, but that would require keeping state.
+ if options.Verbose {
+ log.Printf("COMPLETE: %s", node.Hostname())
+ }
+
+ if options.AlwaysRename {
+ updateNodeName(client, node, options)
+ }
+
+ return nil
+}
+
+// Deploy cause a node to deploy
+var Deploy = func(client *maas.MAASObject, node MaasNode, options ProcessingOptions) error {
+ log.Printf("DEPLOY: %s", node.Hostname())
+
+ if options.AlwaysRename {
+ updateNodeName(client, node, options)
+ }
+
+ if !options.Preview {
+ nodesObj := client.GetSubObject("nodes")
+ myNode := nodesObj.GetSubObject(node.ID())
+ // Start the node with the trusty distro. This should really be looked up or
+ // a parameter default
+ _, err := myNode.CallPost("start", url.Values {"distro_series" : []string{"trusty"}})
+ if err != nil {
+ log.Printf("ERROR: DEPLOY '%s' : '%s'", node.Hostname(), err)
+ return err
+ }
+ }
+ return nil
+}
+
+// Aquire acquires (allocates) a machine for a specific operator
+var Aquire = func(client *maas.MAASObject, node MaasNode, options ProcessingOptions) error {
+ log.Printf("AQUIRE: %s", node.Hostname())
+ nodesObj := client.GetSubObject("nodes")
+
+ if options.AlwaysRename {
+ updateNodeName(client, node, options)
+ }
+
+ if !options.Preview {
+		// With a new version of MAAS we have to make sure the node is linked
+		// to the subnet via DHCP before we move to the Acquire state. To do
+		// this we need to unlink the interface from the subnet and then relink it.
+ //
+ // Iterate through all the interfaces on the node, searching for ones
+ // that are valid and not DHCP and move them to DHCP
+ ifcsObj := client.GetSubObject("nodes").GetSubObject(node.ID()).GetSubObject("interfaces")
+ ifcsListObj, err := ifcsObj.CallGet("", url.Values{})
+ if err != nil {
+ return err
+ }
+
+ ifcsArray, err := ifcsListObj.GetArray()
+ if err != nil {
+ return err
+ }
+
+ for _, ifc := range ifcsArray {
+ ifcMap, err := ifc.GetMap()
+ if err != nil {
+ return err
+ }
+
+			// Iterate over the links associated with the interface, looking for
+			// links with a subnet as well as a mode of "auto"
+ links, ok := ifcMap["links"]
+ if ok {
+ linkArray, err := links.GetArray()
+ if err != nil {
+ return err
+ }
+
+ for _, link := range linkArray {
+ linkMap, err := link.GetMap()
+ if err != nil {
+ return err
+ }
+ subnet, ok := linkMap["subnet"]
+ if ok {
+ subnetMap, err := subnet.GetMap()
+ if err != nil {
+ return err
+ }
+
+ val, err := linkMap["mode"].GetString()
+ if err != nil {
+ return err
+ }
+
+ if val == "auto" {
+ // Found one we like, so grab the subnet from the data and
+ // then relink this as DHCP
+ cidr, err := subnetMap["cidr"].GetString()
+ if err != nil {
+ return err
+ }
+
+ fifcID, err := ifcMap["id"].GetFloat64()
+ if err != nil {
+ return err
+ }
+ ifcID := strconv.Itoa(int(fifcID))
+
+ flID, err := linkMap["id"].GetFloat64()
+ if err != nil {
+ return err
+ }
+ lID := strconv.Itoa(int(flID))
+
+ ifcObj := ifcsObj.GetSubObject(ifcID)
+ _, err = ifcObj.CallPost("unlink_subnet", url.Values{"id": []string{lID}})
+ if err != nil {
+ return err
+ }
+ _, err = ifcObj.CallPost("link_subnet", url.Values{"mode": []string{"DHCP"}, "subnet": []string{cidr}})
+ if err != nil {
+ return err
+ }
+ }
+ }
+ }
+ }
+ }
+ _, err = nodesObj.CallPost("acquire",
+ url.Values{"name": []string{node.Hostname()}})
+ if err != nil {
+ log.Printf("ERROR: AQUIRE '%s' : '%s'", node.Hostname(), err)
+ return err
+ }
+ }
+ return nil
+}
+
+// Commission cause a node to be commissioned
+var Commission = func(client *maas.MAASObject, node MaasNode, options ProcessingOptions) error {
+ updateNodeName(client, node, options)
+
+ // Need to understand the power state of the node. We only want to move to "Commissioning" if the node
+ // power is off. If the node power is not off, then turn it off.
+ state := node.PowerState()
+ switch state {
+ case "on":
+ // Attempt to turn the node off
+ log.Printf("POWER DOWN: %s", node.Hostname())
+ if !options.Preview {
+ //POST /api/1.0/nodes/{system_id}/ op=stop
+ nodesObj := client.GetSubObject("nodes")
+ nodeObj := nodesObj.GetSubObject(node.ID())
+ _, err := nodeObj.CallPost("stop", url.Values{"stop_mode" : []string{"soft"}})
+ if err != nil {
+				log.Printf("ERROR: Commission '%s' : changing power state to off : '%s'", node.Hostname(), err)
+ }
+ return err
+ }
+ break
+ case "off":
+ // We are off so move to commissioning
+		log.Printf("COMMISSION: %s", node.Hostname())
+ if !options.Preview {
+ nodesObj := client.GetSubObject("nodes")
+ nodeObj := nodesObj.GetSubObject(node.ID())
+
+ updateNodeName(client, node, options)
+
+ _, err := nodeObj.CallPost("commission", url.Values{})
+ if err != nil {
+ log.Printf("ERROR: Commission '%s' : '%s'", node.Hostname(), err)
+ }
+ return err
+ }
+ break
+ default:
+ // We are in a state from which we can't move forward.
+ log.Printf("ERROR: %s has invalid power state '%s'", node.Hostname(), state)
+ break
+ }
+ return nil
+}
+
+// Wait a do-nothing state while work is being done
+var Wait = func(client *maas.MAASObject, node MaasNode, options ProcessingOptions) error {
+ log.Printf("WAIT: %s", node.Hostname())
+ return nil
+}
+
+// Fail a state from which we cannot, currently, automatically recover
+var Fail = func(client *maas.MAASObject, node MaasNode, options ProcessingOptions) error {
+ log.Printf("FAIL: %s", node.Hostname())
+ return nil
+}
+
+// AdminState an administrative state from which we should make no automatic transition
+var AdminState = func(client *maas.MAASObject, node MaasNode, options ProcessingOptions) error {
+ log.Printf("ADMIN: %s", node.Hostname())
+ return nil
+}
+
+func findAction(target string, current string) (Action, error) {
+ targets, ok := Transitions[target]
+ if !ok {
+ log.Printf("[warn] unable to find transitions to target state '%s'", target)
+ return nil, fmt.Errorf("Could not find transition to target state '%s'", target)
+ }
+
+ action, ok := targets[current]
+ if !ok {
+ log.Printf("[warn] unable to find transition from current state '%s' to target state '%s'",
+ current, target)
+ return nil, fmt.Errorf("Could not find transition from current state '%s' to target state '%s'",
+ current, target)
+ }
+
+ return action, nil
+}
+
+// ProcessNode determines and applies the next action needed to move a node toward the Deployed state
+func ProcessNode(client *maas.MAASObject, node MaasNode, options ProcessingOptions) error {
+ substatus, err := node.GetInteger("substatus")
+ if err != nil {
+ return err
+ }
+ action, err := findAction("Deployed", MaasNodeStatus(substatus).String())
+ if err != nil {
+ return err
+ }
+
+ if options.Preview {
+ action(client, node, options)
+ } else {
+ go action(client, node, options)
+ }
+ return nil
+}
+
+func buildFilter(filter []string) ([]*regexp.Regexp, error) {
+
+ results := make([]*regexp.Regexp, len(filter))
+ for i, v := range filter {
+ r, err := regexp.Compile(v)
+ if err != nil {
+ return nil, err
+ }
+ results[i] = r
+ }
+ return results, nil
+}
+
+func matchedFilter(include []*regexp.Regexp, target string) bool {
+ for _, e := range include {
+ if e.MatchString(target) {
+ return true
+ }
+ }
+ return false
+}
+
+// ProcessAll applies the host and zone filters and processes each matching node toward the target state
+func ProcessAll(client *maas.MAASObject, nodes []MaasNode, options ProcessingOptions) []error {
+ errors := make([]error, len(nodes))
+ includeHosts, err := buildFilter(options.Filter.Hosts.Include)
+ if err != nil {
+ log.Fatalf("[error] invalid regular expression for include filter '%s' : %s", options.Filter.Hosts.Include, err)
+ }
+
+ includeZones, err := buildFilter(options.Filter.Zones.Include)
+ if err != nil {
+ log.Fatalf("[error] invalid regular expression for include filter '%v' : %s", options.Filter.Zones.Include, err)
+ }
+
+ for i, node := range nodes {
+		// For hostnames an empty include filter matches every host
+		if len(includeHosts) == 0 || matchedFilter(includeHosts, node.Hostname()) {
+
+			// For zones an empty include filter matches no zone
+			if len(includeZones) > 0 && matchedFilter(includeZones, node.Zone()) {
+ err := ProcessNode(client, node, options)
+ if err != nil {
+ errors[i] = err
+ } else {
+ errors[i] = nil
+ }
+ } else {
+ if options.Verbose {
+ log.Printf("[info] ignoring node '%s' as its zone '%s' didn't match include zone name filter '%v'",
+ node.Hostname(), node.Zone(), options.Filter.Zones.Include)
+ }
+ }
+ } else {
+ if options.Verbose {
+ log.Printf("[info] ignoring node '%s' as it didn't match include hostname filter '%v'",
+ node.Hostname(), options.Filter.Hosts.Include)
+ }
+ }
+ }
+ return errors
+}
diff --git a/bar/Dockerfile b/bar/Dockerfile
deleted file mode 100644
index 2e7f20b..0000000
--- a/bar/Dockerfile
+++ /dev/null
@@ -1,6 +0,0 @@
-FROM golang:alpine
-RUN mkdir /app
-ADD . /app
-WORKDIR /app
-RUN go build -o main .
-CMD [ "/app/main" ]
diff --git a/bar/README.md b/bar/README.md
deleted file mode 100644
index 609ef2b..0000000
--- a/bar/README.md
+++ /dev/null
@@ -1 +0,0 @@
-Placeholder for a sub-component Docker image build
diff --git a/bar/bar.go b/bar/bar.go
deleted file mode 100644
index 73fc28a..0000000
--- a/bar/bar.go
+++ /dev/null
@@ -1,8 +0,0 @@
-package main
-
-import "fmt"
-
-func main() {
- fmt.Printf("hello, bar\n")
-}
-
diff --git a/bootstrap/Dockerfile b/bootstrap/Dockerfile
new file mode 100644
index 0000000..6d387a6
--- /dev/null
+++ b/bootstrap/Dockerfile
@@ -0,0 +1,13 @@
+FROM ubuntu:14.04
+MAINTAINER David Bainbridge <dbainbri@ciena.com>
+
+RUN apt-get update -y && \
+ apt-get install -y python-pip
+
+RUN pip install maasclient==0.3 && \
+ pip install requests_oauthlib && \
+ pip install ipaddress
+
+ADD bootstrap.py /bootstrap.py
+
+ENTRYPOINT [ "/bootstrap.py" ]
diff --git a/bootstrap/LICENSE b/bootstrap/LICENSE
new file mode 100644
index 0000000..8dada3e
--- /dev/null
+++ b/bootstrap/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/bootstrap/Makefile b/bootstrap/Makefile
new file mode 100644
index 0000000..49d7ff2
--- /dev/null
+++ b/bootstrap/Makefile
@@ -0,0 +1,9 @@
+help:
+ -@echo "Available actions"
+ -@echo " docker - builds the docker container"
+
+docker: bootstrap.py
+ docker build -t cord/maas-bootstrap:0.1-prerelease .
+
+run:
+ docker run -ti --rm=true cord/maas-bootstrap:0.1-prerelease --apikey=$(CORD_APIKEY) --sshkey="$(CORD_SSHKEY)" --url=$(CORD_URL)
diff --git a/bootstrap/bootstrap.py b/bootstrap/bootstrap.py
new file mode 100755
index 0000000..4c00853
--- /dev/null
+++ b/bootstrap/bootstrap.py
@@ -0,0 +1,340 @@
+#!/usr/bin/python
+
+from __future__ import print_function
+import sys
+import json
+import ipaddress
+import requests
+from optparse import OptionParser
+from maasclient.auth import MaasAuth
+from maasclient import MaasClient
+
+# The maasclient library doesn't provide a put method,
+# so we add one here.
+def put(client, url, params=None):
+ return requests.put(url=client.auth.api_url + url,
+ auth=client._oauth(),
+ data=params)
+
+def add_or_update_node_group_interface(client, ng, gw, foundIfc, ifcName, subnet):
+ ip = ipaddress.IPv4Network(unicode(subnet, 'utf-8'))
+ hosts = list(ip.hosts())
+
+    # if the caller specified a default gateway then honor that, otherwise use the first host
+ gw = gw or str(hosts[0])
+
+ ifc = {
+ 'ip_range_high': str(hosts[-1]),
+ 'ip_range_low': str(hosts[2]),
+ 'static_ip_range_high' : None,
+ 'static_ip_range_low' : None,
+ 'management': 2,
+ 'name': ifcName,
+ #'router_ip' : gw,
+ #'gateway_ip' : gw,
+ 'ip': str(hosts[0]),
+ 'subnet_mask': str(ip.netmask),
+ 'broadcast_ip': str(ip.broadcast_address),
+ 'interface': ifcName,
+ }
+
+ if foundIfc is not None:
+ print("INFO: network for specified interface, '%s', already exists" % (ifcName))
+
+ resp = client.get('/nodegroups/' + ng['uuid'] + '/interfaces/' + ifcName + '/', dict())
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to read specified interface, '%s', '%d : %s'"
+ % (ifcName, resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+
+        # A bit of a hack: MAAS won't return the router_ip / gateway_ip values,
+        # so we can't tell whether they are set correctly. Compare the values we
+        # can see to decide whether to report "CHANGED", but always set all values.
+
+ # Save the compare value
+ same = ifc == json.loads(resp.text)
+
+ # Add router_ip and gateway_ip to the desired state so that those will be set
+ ifc['router_ip'] = gw
+ ifc['gateway_ip'] = gw
+
+ # If the network already exists, update it with the information we want
+ resp = put(client, '/nodegroups/' + ng['uuid'] + '/interfaces/' + ifcName + '/', ifc)
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to update specified network, '%s', on specified interface '%s', '%d : %s'"
+ % (subnet, ifcName, resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+
+ if not same:
+ print("CHANGED: updated network, '%s', for interface '%s'" % (subnet, ifcName))
+ else:
+ print("INFO: Network settings for interface '%s' unchanged" % ifcName)
+
+ else:
+ # Add the operation
+ ifc['op'] = 'new'
+ ifc['router_ip'] = gw
+ ifc['gateway_ip'] = gw
+
+ resp = client.post('/nodegroups/' + ng['uuid'] + '/interfaces/', ifc)
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to create specified network, '%s', on specified interface '%s', '%d : %s'"
+ % (subnet, ifcName, resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+ else:
+ print("CHANGED: created network, '%s', for interface '%s'" % (subnet, ifcName))
+
+ # Add the first host to the subnet as the dns_server
+ subnets = None
+ resp = client.get('/subnets/', dict())
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to query subnets: '%d : %s'" % (resp.status_code, resp.text))
+ sys.exit(1)
+ else:
+ subnets = json.loads(resp.text)
+
+ id = None
+ for sn in subnets:
+ if sn['name'] == subnet:
+ id = str(sn['id'])
+ break
+
+    if id is None:
+ print("ERROR: unable to find subnet entry for network '%s'" % (subnet))
+ sys.exit(1)
+
+ resp = client.get('/subnets/' + id + '/')
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to query subnet '%s': '%d : %s'" % (subnet, resp.status_code, resp.text))
+ sys.exit(1)
+
+ data = json.loads(resp.text)
+
+ found = False
+ for ns in data['dns_servers']:
+ if unicode(ns) == unicode(hosts[0]):
+ found = True
+
+ if not found:
+        resp = put(client, '/subnets/' + id + '/', dict(dns_servers=[str(hosts[0])]))
+        if int(resp.status_code / 100) != 2:
+            print("ERROR: unable to update subnet '%s': '%d : %s'" % (subnet, resp.status_code, resp.text))
+ sys.exit(1)
+ else:
+ print("CHANGED: updated DNS server information")
+ else:
+ print("INFO: DNS already set correctly")
+
+
+def main():
+ parser = OptionParser()
+ parser.add_option('-c', '--config', dest='config_file',
+ help="specifies file from which configuration should be read", metavar='FILE')
+ parser.add_option('-a', '--apikey', dest='apikey',
+ help="specifies the API key to use when accessing MAAS")
+ parser.add_option('-u', '--url', dest='url', default='http://localhost/MAAS/api/1.0',
+ help="specifies the URL on which to contact MAAS")
+ parser.add_option('-z', '--zone', dest='zone', default='administrative',
+ help="specifies the zone to create for manually managed hosts")
+ parser.add_option('-i', '--interface', dest='interface', default='eth0:1',
+ help="the interface on which to set up DHCP for POD local hosts")
+ parser.add_option('-n', '--network', dest='network', default='10.0.0.0/16',
+ help="subnet to use for POD local DHCP")
+ parser.add_option('-b', '--bridge', dest='bridge', default='mgmtbr',
+ help="bridge to use for host local VM allocation")
+ parser.add_option('-t', '--bridge-subnet', dest='bridge_subnet', default='172.18.0.0/16',
+ help="subnet to assign from for bridged hosts")
+ parser.add_option('-r', '--cluster', dest='cluster', default='Cluster master',
+ help="name of cluster to user for POD / DHCP")
+ parser.add_option('-s', '--sshkey', dest='sshkey', default=None,
+ help="specifies public ssh key")
+ parser.add_option('-d', '--domain', dest='domain', default='cord.lab',
+ help="specifies the domain to configure in maas")
+ parser.add_option('-g', '--gateway', dest='gw', default=None,
+ help="specifies the gateway to configure for servers")
+ (options, args) = parser.parse_args()
+
+ if len(args) > 0:
+ print("unknown command line arguments specified", file=sys.stderr)
+ parser.print_help()
+ sys.exit(1)
+
+ # If a config file was specified then read the config from that
+ config = {}
+    if options.config_file is not None:
+        with open(options.config_file) as config_file:
+            config = json.load(config_file)
+
+    # Override the config with any command line options
+    if options.apikey is None:
+        print("must specify a MAAS API key", file=sys.stderr)
+        sys.exit(1)
+    else:
+        config['key'] = options.apikey
+    if options.url is not None:
+        config['url'] = options.url
+    if options.zone is not None:
+        config['zone'] = options.zone
+    if options.interface is not None:
+        config['interface'] = options.interface
+    if options.network is not None:
+        config['network'] = options.network
+    if options.bridge is not None:
+        config['bridge'] = options.bridge
+    if options.bridge_subnet is not None:
+        config['bridge-subnet'] = options.bridge_subnet
+    if options.cluster is not None:
+        config['cluster'] = options.cluster
+    if options.domain is not None:
+        config['domain'] = options.domain
+    if options.gw is not None:
+        config['gw'] = options.gw
+    if 'gw' not in config:
+        config['gw'] = None
+    if options.sshkey is None:
+        print("must specify an SSH key to use for the cord user", file=sys.stderr)
+        sys.exit(1)
+    else:
+        config['sshkey'] = options.sshkey
+
+ auth = MaasAuth(config['url'], config['key'])
+ client = MaasClient(auth)
+
+ resp = client.get('/account/prefs/sshkeys/', dict(op='list'))
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to query SSH keys from server '%d : %s'"
+ % (resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+
+ found_key = False
+ keys = json.loads(resp.text)
+ for key in keys:
+ if key['key'] == config['sshkey']:
+ print("INFO: specified SSH key already exists")
+ found_key = True
+
+ # Add the SSH key to the user
+ # POST /api/2.0/account/prefs/sshkeys/ op=new
+ if not found_key:
+ resp = client.post('/account/prefs/sshkeys/', dict(op='new', key=config['sshkey']))
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to add sshkey for user: '%d : %s'"
+ % (resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+ else:
+ print("CHANGED: updated ssh key")
+
+ # Check to see if an "administrative" zone exists and if not
+ # create one
+ found = None
+ zones = client.zones
+ for zone in zones:
+ if zone['name'] == config['zone']:
+ found=zone
+
+ if found is not None:
+ print("INFO: administrative zone, '%s', already exists" % config['zone'], file=sys.stderr)
+ else:
+ if not client.zone_new(config['zone'], "Zone for manually administrated nodes"):
+ print("ERROR: unable to create administrative zone '%s'" % config['zone'], file=sys.stderr)
+ sys.exit(1)
+ else:
+ print("CHANGED: Zone '%s' created" % config['zone'])
+
+    # If the interface doesn't already exist in the cluster then
+    # create it. Look for the node group whose cluster name matches
+    # the configured cluster name; if it is not found, error out.
+ found = None
+ ngs = client.nodegroups
+ for ng in ngs:
+ if ng['cluster_name'] == config['cluster']:
+ found = ng
+ break
+
+ if found is None:
+ print("ERROR: unable to find cluster with specified name, '%s'" % config['cluster'], file=sys.stderr)
+ sys.exit(1)
+
+ resp = client.get('/nodegroups/' + ng['uuid'] + '/', dict())
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to get node group information for cluster '%s': '%d : %s'"
+ % (config['cluster'], resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+
+ data = json.loads(resp.text)
+
+ # Set the DNS domain name (zone) for the cluster
+ if data['name'] != config['domain']:
+ resp = put(client, '/nodegroups/' + ng['uuid'] + '/', dict(name=config['domain']))
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to set the DNS domain name for the cluster with specified name, '%s': '%d : %s'"
+ % (config['cluster'], resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+ else:
+ print("CHANGE: updated name of cluster to '%s' : %s" % (config['domain'], resp))
+ else:
+ print("INFO: domain name already set")
+
+ found = None
+ resp = client.get('/nodegroups/' + ng['uuid'] + '/interfaces/', dict(op='list'))
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to fetch interfaces for cluster with specified name, '%s': '%d : %s'"
+ % (config['cluster'], resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+ ifcs = json.loads(resp.text)
+
+ localIfc = hostIfc = None
+ for ifc in ifcs:
+ localIfc = ifc if ifc['name'] == config['interface'] else localIfc
+ hostIfc = ifc if ifc['name'] == config['bridge'] else hostIfc
+
+ add_or_update_node_group_interface(client, ng, config['gw'], localIfc, config['interface'], config['network'])
+ add_or_update_node_group_interface(client, ng, config['gw'], hostIfc, config['bridge'], config['bridge-subnet'])
+
+    # Update the server settings to forward upstream DNS requests to Google
+ # POST /api/2.0/maas/ op=set_config
+ resp = client.get('/maas/', dict(op='get_config', name='upstream_dns'))
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to get the upstream DNS servers: '%d : %s'"
+ % (resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+
+    if unicode(json.loads(resp.text)) != u'8.8.8.8 8.8.4.4':
+        resp = client.post('/maas/', dict(op='set_config', name='upstream_dns', value='8.8.8.8 8.8.4.4'))
+        if int(resp.status_code / 100) != 2:
+            print("ERROR: unable to set the upstream DNS servers: '%d : %s'"
+                % (resp.status_code, resp.text), file=sys.stderr)
+        else:
+            print("CHANGED: updated upstream DNS servers")
+ else:
+ print("INFO: Upstream DNS servers correct")
+
+ # Start the download of boot images
+ resp = client.get('/boot-resources/', None)
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to read existing images download: '%d : %s'" % (resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+
+ imgs = json.loads(resp.text)
+ found = False
+ for img in imgs:
+ if img['name'] == u'ubuntu/trusty' and img['architecture'] == u'amd64/hwe-t':
+ found = True
+
+ if not found:
+ resp = client.post('/boot-resources/', dict(op='import'))
+ if int(resp.status_code / 100) != 2:
+ print("ERROR: unable to start image download: '%d : %s'" % (resp.status_code, resp.text), file=sys.stderr)
+ sys.exit(1)
+ else:
+ print("CHANGED: Image download started")
+ else:
+ print("INFO: required images already available")
+
+if __name__ == '__main__':
+    main()
diff --git a/build.gradle b/build.gradle
index 7786ec3..e82f774 100644
--- a/build.gradle
+++ b/build.gradle
@@ -24,32 +24,46 @@
}
-// ~~~~~~~~~~~~~~~~~ Example helper tasks ~~~~~~~~~~~~~~~~~~~~
-
-task buildFooImage(type: Exec) {
- commandLine '/usr/bin/docker', 'build', '-t', "foo", "./foo"
+task buildBootstrapImage(type: Exec) {
+ commandLine '/usr/bin/docker', 'build', '-t', 'cord-maas-bootstrap', './bootstrap'
}
-task tagFooImage(type: Exec) {
- commandLine '/usr/bin/docker', 'tag', 'foo', "$targetReg/foo:$targetTag"
+task tagBootstrapImage(type: Exec) {
+ dependsOn buildBootstrapImage
+ commandLine '/usr/bin/docker', 'tag', 'cord-maas-bootstrap', "$targetReg/cord-maas-bootstrap:$targetTag"
}
-task publishFooImage(type: Exec) {
- dependsOn tagFooImage
- commandLine '/usr/bin/docker', 'push', "$targetReg/foo:$targetTag"
+task publishBootstrapImage(type: Exec) {
+ dependsOn tagBootstrapImage
+ commandLine '/usr/bin/docker', 'push', "$targetReg/cord-maas-bootstrap:$targetTag"
}
-task buildBarImage(type: Exec) {
- commandLine '/usr/bin/docker', 'build', '-t', "bar", "./bar"
+task buildAutomationImage(type: Exec) {
+ commandLine '/usr/bin/docker', 'build', '-t', "cord-maas-automation", "./automation"
}
-task tagBarImage(type: Exec) {
- commandLine '/usr/bin/docker', 'tag', 'bar', "$targetReg/bar:$targetTag"
+task tagAutomationImage(type: Exec) {
+ dependsOn buildAutomationImage
+ commandLine '/usr/bin/docker', 'tag', 'cord-maas-automation', "$targetReg/cord-maas-automation:$targetTag"
}
-task publishBarImage(type: Exec) {
- dependsOn tagBarImage
- commandLine '/usr/bin/docker', 'push', "$targetReg/bar:$targetTag"
+task publishAutomationImage(type: Exec) {
+ dependsOn tagAutomationImage
+ commandLine '/usr/bin/docker', 'push', "$targetReg/cord-maas-automation:$targetTag"
+}
+
+task buildHarvesterImage(type: Exec) {
+ commandLine '/usr/bin/docker', 'build', '-t', "cord-maas-dhcp-harvester", "./harvester"
+}
+
+task tagHarvesterImage(type: Exec) {
+ dependsOn buildHarvesterImage
+ commandLine '/usr/bin/docker', 'tag', 'cord-maas-dhcp-harvester', "$targetReg/cord-maas-dhcp-harvester:$targetTag"
+}
+
+task publishHarvesterImage(type: Exec) {
+ dependsOn tagHarvesterImage
+ commandLine '/usr/bin/docker', 'push', "$targetReg/cord-maas-dhcp-harvester:$targetTag"
}
// ~~~~~~~~~~~~~~~~~~~ Global tasks ~~~~~~~~~~~~~~~~~~~~~~~
@@ -59,18 +73,40 @@
// This is where we fetch upstream artifacts so that the build phase does not need internet access.
// Placeholder example:
commandLine "/usr/bin/docker", "pull", "golang:alpine"
+ commandLine "/usr/bin/docker", "pull", "python:2.7-alpine"
}
// To be used to generate all needed binaries that need to be present on the target
// as docker images in the local docker runner.
task buildImages {
- dependsOn buildFooImage
- dependsOn buildBarImage
- println "This is where we build the docker images for MAAS"
+ dependsOn buildBootstrapImage
+ dependsOn buildHarvesterImage
+ dependsOn buildAutomationImage
+}
+
+task tagImages {
+ dependsOn tagBootstrapImage
+ dependsOn tagHarvesterImage
+ dependsOn tagAutomationImage
}
task publish {
- // this is where we publish the properly tagged image into the target registry
- dependsOn publishFooImage
- dependsOn publishBarImage
+ dependsOn publishBootstrapImage
+ dependsOn publishHarvesterImage
+ dependsOn publishAutomationImage
}
+
+// ~~~~~~~~~~~~~~~~~~~ Deployment / Test Tasks ~~~~~~~~~~~~~~~~~~~~~~~
+
+// This task invokes the Ansible configuration against the Vagrant head node. The Ansible deployment is
+// executed remotely against the head node, as this is a more realistic scenario for a production deployment.
+// The assumption is that this task is executed from the maasdev virtual machine, as that machine accesses
+// the head node VirtualBox VM over a private network.
+//
+// TODO: Currently the deployment of the head node does not use the locally built docker containers; it
+// should be modified to do so. This likely means that we need to configure docker on the head node
+// to access the docker registry on the maasdev VirtualBox VM.
+task deployMaas(type: Exec) {
+ commandLine '/usr/bin/ansible-playbook', '-i', '10.100.198.202,', '--skip-tags=switch_support,interface_config', 'dev-head-node.yml'
+}
+
diff --git a/dev-head-node.yml b/dev-head-node.yml
new file mode 100644
index 0000000..283f9a3
--- /dev/null
+++ b/dev-head-node.yml
@@ -0,0 +1,8 @@
+- hosts: 10.100.198.202
+ remote_user: vagrant
+ serial: 1
+ vars:
+ virtualbox_support: 1
+ ansible_ssh_pass: vagrant
+ roles:
+ - maas
diff --git a/foo/Dockerfile b/foo/Dockerfile
deleted file mode 100644
index 2e7f20b..0000000
--- a/foo/Dockerfile
+++ /dev/null
@@ -1,6 +0,0 @@
-FROM golang:alpine
-RUN mkdir /app
-ADD . /app
-WORKDIR /app
-RUN go build -o main .
-CMD [ "/app/main" ]
diff --git a/foo/README.md b/foo/README.md
deleted file mode 100644
index 609ef2b..0000000
--- a/foo/README.md
+++ /dev/null
@@ -1 +0,0 @@
-Placeholder for a sub-component Docker image build
diff --git a/foo/foo.go b/foo/foo.go
deleted file mode 100644
index 7550055..0000000
--- a/foo/foo.go
+++ /dev/null
@@ -1,8 +0,0 @@
-package main
-
-import "fmt"
-
-func main() {
- fmt.Printf("hello, foo\n")
-}
-
diff --git a/harvester/Dockerfile b/harvester/Dockerfile
new file mode 100644
index 0000000..057e257
--- /dev/null
+++ b/harvester/Dockerfile
@@ -0,0 +1,6 @@
+FROM python:2.7-alpine
+
+RUN apk update && apk add bind
+
+ADD dhcpharvester.py /dhcpharvester.py
+ENTRYPOINT [ "python", "/dhcpharvester.py" ]
diff --git a/harvester/README.md b/harvester/README.md
new file mode 100644
index 0000000..509c26d
--- /dev/null
+++ b/harvester/README.md
@@ -0,0 +1,44 @@
+# DHCP/DNS Name and IP Harvester
+This Python application and Docker image provide a utility that periodically parses DHCP lease files and updates the `bind9` DNS configuration so that hosts
+that are assigned IP addresses dynamically via DHCP can be looked up through DNS.
+
+### Integration
+There are several keys to making all this work. The utility needs to be able to read the DHCP lease file as well as write a file to a location that can be read
+by the DNS server, so more than likely this utility should run on the same host that runs DHCP and DNS. Additionally, this utility needs to be able to
+run the bind9 utility `rndc` to reload the DNS zone, which means it needs an `rndc` key and secret to access the DNS server.
+
+Lastly, this utility generates a file that can be `$INCLUDE`-ed into a bind9 zone file, so the original zone file needs to be augmented with a `$INCLUDE` statement
+that pulls in the file to which the utility is configured to write via the `-dest` command line option.
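As a sketch of the include-file mechanism above, the generated fragment is just a list of A records that the zone file pulls in via `$INCLUDE`. The `render_include` helper and record layout below are hypothetical, not the utility's actual output format:

```python
# Hypothetical sketch: render harvested (hostname, ip) pairs as bind A records,
# suitable for a fragment pulled into a zone file via an $INCLUDE statement.
def render_include(leases):
    """leases: iterable of (hostname, ip) tuples -> bind zone fragment text."""
    lines = []
    for hostname, ip in sorted(leases):
        # One A record per harvested lease; TTL is inherited from the zone.
        lines.append("%s\tIN\tA\t%s" % (hostname, ip))
    return "\n".join(lines) + "\n"

print(render_include([("node-1", "10.6.1.2"), ("node-2", "10.6.1.3")]), end="")
```

Each run of the harvester would rewrite this fragment and then trigger `rndc reload` so the zone picks up the new records.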
+
+### Docker Build
+To build the docker image use the command:
+```
+docker build -t harvester .
+```
+
+### Docker Run
+To run the utility, a docker command similar to what is below may be used
+
+```
+docker run -d --name=dhcpharvester \
+ -v `pwd`/key:/key -v /var/lib/maas/dhcp:/dhcp -v /etc/bind/maas:/bind harvester \
+ -f '^(?!cord)' -u -s 192.168.42.231 -p 954 -k /key/mykey.conf -z cord.lab -r 5m \
+ -y -t 1s
+```
+
+### API
+There is a simple REST API on this utility so that an external client can invoke the DHCP harvest behavior on demand. The API is
+synchronous in that the request will not return a response until the harvest is complete. To invoke it, an `HTTP POST` request is
+sent to the utility, for example with curl:
+```
+curl -XPOST http://<apiserver>:<apiport>/harvest
+```
+Currently there is no security around this API, so it could be abused. There is some protection in that if the system is sent multiple requests
+it won't actually re-harvest until a quiet period has expired; the purpose is to keep the system from being overloaded.
+
+### Implementation Details
+Internally the implementation uses threads and queues to communicate between the threads when the utility is running in its periodic
+harvest mode.
+
+For the verification of IP addresses, i.e. pinging the hosts, worker threads are used to support concurrency, thus making the verification
+process faster.
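The worker-pool pattern described above can be sketched roughly as follows; the function name is hypothetical and the `reachable` callable stands in for the real ping check:

```python
# Rough sketch of the verification worker pool: a shared queue of addresses
# fans out to worker threads, each of which records reachability in a shared
# dict. The `reachable` callable is a stand-in for a real ICMP ping.
import threading
from queue import Queue, Empty

def verify_all(addresses, workers=4, reachable=lambda ip: True):
    work, results, lock = Queue(), {}, threading.Lock()
    for ip in addresses:
        work.put(ip)

    def worker():
        while True:
            try:
                ip = work.get_nowait()
            except Empty:
                return  # no more addresses to verify
            ok = reachable(ip)
            with lock:
                results[ip] = ok

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because each ping can block for seconds on an unreachable host, fanning the checks out across workers keeps total verification time close to that of the slowest single host rather than the sum of all of them.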
diff --git a/harvester/dhcpharvester.py b/harvester/dhcpharvester.py
new file mode 100755
index 0000000..cc4e372
--- /dev/null
+++ b/harvester/dhcpharvester.py
@@ -0,0 +1,614 @@
+#!/usr/bin/python
+import sys, threading, thread, subprocess, re, time, datetime, bisect, BaseHTTPServer
+from optparse import OptionParser
+from Queue import Queue
+
+def parse_timestamp(raw_str):
+ tokens = raw_str.split()
+
+ if len(tokens) == 1:
+ if tokens[0].lower() == 'never':
+            return 'never'
+
+ else:
+ raise Exception('Parse error in timestamp')
+
+ elif len(tokens) == 3:
+ return datetime.datetime.strptime(' '.join(tokens[1:]),
+ '%Y/%m/%d %H:%M:%S')
+
+ else:
+ raise Exception('Parse error in timestamp')
+
+def timestamp_is_ge(t1, t2):
+ if t1 == 'never':
+ return True
+
+ elif t2 == 'never':
+ return False
+
+ else:
+ return t1 >= t2
+
+
+def timestamp_is_lt(t1, t2):
+ if t1 == 'never':
+ return False
+
+ elif t2 == 'never':
+ return t1 != 'never'
+
+ else:
+ return t1 < t2
+
+
+def timestamp_is_between(t, tstart, tend):
+ return timestamp_is_ge(t, tstart) and timestamp_is_lt(t, tend)
+
+
+def parse_hardware(raw_str):
+ tokens = raw_str.split()
+
+ if len(tokens) == 2:
+ return tokens[1]
+
+ else:
+ raise Exception('Parse error in hardware')
+
+
+def strip_endquotes(raw_str):
+ return raw_str.strip('"')
+
+
+def identity(raw_str):
+ return raw_str
+
+
+def parse_binding_state(raw_str):
+ tokens = raw_str.split()
+
+ if len(tokens) == 2:
+ return tokens[1]
+
+ else:
+ raise Exception('Parse error in binding state')
+
+
+def parse_next_binding_state(raw_str):
+ tokens = raw_str.split()
+
+ if len(tokens) == 3:
+ return tokens[2]
+
+ else:
+ raise Exception('Parse error in next binding state')
+
+
+def parse_rewind_binding_state(raw_str):
+ tokens = raw_str.split()
+
+ if len(tokens) == 3:
+ return tokens[2]
+
+ else:
+        raise Exception('Parse error in rewind binding state')
+
+def parse_res_fixed_address(raw_str):
+ return raw_str
+
+def parse_res_hardware(raw_str):
+ tokens = raw_str.split()
+ return tokens[1]
+
+def parse_reservation_file(res_file):
+ valid_keys = {
+ 'hardware' : parse_res_hardware,
+ 'fixed-address' : parse_res_fixed_address,
+ }
+
+ res_db = {}
+ res_rec = {}
+ in_res = False
+ for line in res_file:
+ if line.lstrip().startswith('#'):
+ continue
+ tokens = line.split()
+
+ if len(tokens) == 0:
+ continue
+
+ key = tokens[0].lower()
+
+ if key == 'host':
+ if not in_res:
+ res_rec = {'hostname' : tokens[1]}
+ in_res = True
+
+ else:
+ raise Exception("Parse error in reservation file")
+ elif key == '}':
+ if in_res:
+ for k in valid_keys:
+ if callable(valid_keys[k]):
+ res_rec[k] = res_rec.get(k, '')
+ else:
+ res_rec[k] = False
+
+ hostname = res_rec['hostname']
+
+ if hostname in res_db:
+ res_db[hostname].insert(0, res_rec)
+
+ else:
+ res_db[hostname] = [res_rec]
+
+ res_rec = {}
+ in_res = False
+
+ else:
+ raise Exception('Parse error in reservation file')
+
+ elif key in valid_keys:
+ if in_res:
+ value = line[(line.index(key) + len(key)):]
+ value = value.strip().rstrip(';').rstrip()
+
+ if callable(valid_keys[key]):
+ res_rec[key] = valid_keys[key](value)
+ else:
+ res_rec[key] = True
+
+ else:
+ raise Exception('Parse error in reservation file')
+
+ else:
+ if in_res:
+ raise Exception('Parse error in reservation file')
+
+ if in_res:
+ raise Exception('Parse error in reservation file')
+
+ # Turn the leases into an array
+ results = []
+ for res in res_db:
+ results.append({
+ 'client-hostname' : res_db[res][0]['hostname'],
+ 'hardware' : res_db[res][0]['hardware'],
+ 'ip_address' : res_db[res][0]['fixed-address'],
+ })
+ return results
+
+
+def parse_leases_file(leases_file):
+ valid_keys = {
+ 'starts': parse_timestamp,
+ 'ends': parse_timestamp,
+ 'tstp': parse_timestamp,
+ 'tsfp': parse_timestamp,
+ 'atsfp': parse_timestamp,
+ 'cltt': parse_timestamp,
+ 'hardware': parse_hardware,
+ 'binding': parse_binding_state,
+ 'next': parse_next_binding_state,
+ 'rewind': parse_rewind_binding_state,
+ 'uid': strip_endquotes,
+ 'client-hostname': strip_endquotes,
+ 'option': identity,
+ 'set': identity,
+ 'on': identity,
+ 'abandoned': None,
+ 'bootp': None,
+ 'reserved': None,
+ }
+
+ leases_db = {}
+
+ lease_rec = {}
+ in_lease = False
+ in_failover = False
+
+ for line in leases_file:
+ if line.lstrip().startswith('#'):
+ continue
+
+ tokens = line.split()
+
+ if len(tokens) == 0:
+ continue
+
+ key = tokens[0].lower()
+
+ if key == 'lease':
+ if not in_lease:
+ ip_address = tokens[1]
+
+ lease_rec = {'ip_address' : ip_address}
+ in_lease = True
+
+ else:
+ raise Exception('Parse error in leases file')
+
+ elif key == 'failover':
+ in_failover = True
+ elif key == '}':
+ if in_lease:
+ for k in valid_keys:
+ if callable(valid_keys[k]):
+ lease_rec[k] = lease_rec.get(k, '')
+ else:
+ lease_rec[k] = False
+
+ ip_address = lease_rec['ip_address']
+
+ if ip_address in leases_db:
+ leases_db[ip_address].insert(0, lease_rec)
+
+ else:
+ leases_db[ip_address] = [lease_rec]
+
+ lease_rec = {}
+ in_lease = False
+
+ elif in_failover:
+ in_failover = False
+ continue
+ else:
+ raise Exception('Parse error in leases file')
+
+ elif key in valid_keys:
+ if in_lease:
+ value = line[(line.index(key) + len(key)):]
+ value = value.strip().rstrip(';').rstrip()
+
+ if callable(valid_keys[key]):
+ lease_rec[key] = valid_keys[key](value)
+ else:
+ lease_rec[key] = True
+
+ else:
+ raise Exception('Parse error in leases file')
+
+ else:
+ if in_lease:
+ raise Exception('Parse error in leases file')
+
+ if in_lease:
+ raise Exception('Parse error in leases file')
+
+ return leases_db
+
+
+def round_timedelta(tdelta):
+ return datetime.timedelta(tdelta.days,
+ tdelta.seconds + (0 if tdelta.microseconds < 500000 else 1))
+
+
+def timestamp_now():
+    n = datetime.datetime.utcnow()
+    # Truncate to whole seconds
+    return datetime.datetime(n.year, n.month, n.day, n.hour, n.minute, n.second)
+
+
+def lease_is_active(lease_rec, as_of_ts):
+ return lease_rec['binding'] != 'free' and timestamp_is_between(as_of_ts, lease_rec['starts'],
+ lease_rec['ends'])
+
+
+def ipv4_to_int(ipv4_addr):
+ parts = ipv4_addr.split('.')
+ return (int(parts[0]) << 24) + (int(parts[1]) << 16) + \
+ (int(parts[2]) << 8) + int(parts[3])
+
+def select_active_leases(leases_db, as_of_ts):
+ retarray = []
+ sortedarray = []
+
+ for ip_address in leases_db:
+ lease_rec = leases_db[ip_address][0]
+
+ if lease_is_active(lease_rec, as_of_ts):
+ ip_as_int = ipv4_to_int(ip_address)
+ insertpos = bisect.bisect(sortedarray, ip_as_int)
+ sortedarray.insert(insertpos, ip_as_int)
+ retarray.insert(insertpos, lease_rec)
+
+ return retarray
+
+def matched(patterns, target):
+    if patterns is None:
+        return False
+
+    for r in patterns:
+        if re.match(r, target) != None:
+            return True
+    return False
+
+def convert_to_seconds(time_val):
+    num = int(time_val[:-1])
+    if time_val.endswith('s'):
+        return num
+    elif time_val.endswith('m'):
+        return num * 60
+    elif time_val.endswith('h'):
+        return num * 60 * 60
+    elif time_val.endswith('d'):
+        return num * 60 * 60 * 24
+    raise ValueError("Unknown time unit in '%s'" % time_val)
+
+def ping(ip, timeout):
+ cmd = ['ping', '-c', '1', '-w', timeout, ip]
+ try:
+ out = subprocess.check_output(cmd)
+ return True
+ except subprocess.CalledProcessError as e:
+ return False
+
+def ping_worker(leases, to, respQ):
+    for lease in leases:
+        respQ.put(
+            {
+                'verified': ping(lease['ip_address'], to),
+                'lease' : lease,
+            })
+
+def interruptable_get(q):
+    # Loop with a timeout so the blocking get can be interrupted, e.g. by Ctrl-C
+    from Queue import Empty
+    while True:
+        try:
+            return q.get(timeout=1000)
+        except Empty:
+            pass
+
+##############################################################################
+
+def harvest(options):
+
+ ifilter = None
+ if options.include != None:
+ ifilter = options.include.translate(None, ' ').split(',')
+
+ rfilter = None
+ if options.filter != None:
+ rfilter = options.filter.split(',')
+
+ myfile = open(options.leases, 'r')
+ leases = parse_leases_file(myfile)
+ myfile.close()
+
+ reservations = []
+    try:
+        with open(options.reservations, 'r') as res_file:
+            reservations = parse_reservation_file(res_file)
+    except IOError:
+        # The reservation file is optional; without it fixed addresses are simply skipped
+        pass
+
+ now = timestamp_now()
+ report_dataset = select_active_leases(leases, now) + reservations
+
+ verified = []
+ if options.verify:
+
+        # To verify that lease information is valid, i.e. that the host which was granted the lease is still
+        # responding, we ping the host. Not perfect, but good for the main use case. As the lease file can get
+        # long, a little concurrency is used. The lease list is divided among the workers and each worker takes
+        # a share.
+ respQ = Queue()
+ to = str(convert_to_seconds(options.timeout))
+ share = int(len(report_dataset) / options.worker_count)
+ extra = len(report_dataset) % options.worker_count
+ start = 0
+ for idx in range(0, options.worker_count):
+ end = start + share
+ if extra > 0:
+ end = end + 1
+ extra = extra - 1
+ worker = threading.Thread(target=ping_worker, args=(report_dataset[start:end], to, respQ))
+ worker.daemon = True
+ worker.start()
+ start = end
+
+        # All the verification work has been farmed out to worker threads, so sit back and wait for responses.
+        # Once all responses are received we are done. Probably should put a timeout here as well, but for
+        # now we expect a response for every lease, either positive or negative.
+ count = 0
+ while count != len(report_dataset):
+ resp = interruptable_get(respQ)
+ count = count + 1
+ if resp['verified']:
+ print("INFO: verified host '%s' with address '%s'" % (resp['lease']['client-hostname'], resp['lease']['ip_address']))
+ verified.append(resp['lease'])
+ else:
+ print("INFO: dropping host '%s' with address '%s' (not verified)" % (resp['lease']['client-hostname'], resp['lease']['ip_address']))
+ else:
+ verified = report_dataset
+
+ # Look for duplicate names and add the compressed MAC as a suffix
+ names = {}
+ for lease in verified:
+ # If no client hostname use MAC
+        name = lease.get('client-hostname', '')
+        if len(name) == 0:
+ name = "UNK-" + lease['hardware'].translate(None, ':').upper()
+
+ if name in names:
+ names[name] = '+'
+ else:
+ names[name] = '-'
+
+ size = 0
+ count = 0
+ for lease in verified:
+        name = lease.get('client-hostname', '')
+        if len(name) == 0:
+ name = "UNK-" + lease['hardware'].translate(None, ':').upper()
+
+ if (ifilter != None and name in ifilter) or matched(rfilter, name):
+ if names[name] == '+':
+ lease['client-hostname'] = name + '-' + lease['hardware'].translate(None, ':').upper()
+ size = max(size, len(lease['client-hostname']))
+ count += 1
+
+ if options.dest == '-':
+ out=sys.stdout
+ else:
+ out=open(options.dest, 'w+')
+
+ for lease in verified:
+        name = lease.get('client-hostname', '')
+        if len(name) == 0:
+            name = "UNK-" + lease['hardware'].translate(None, ':').upper()
+
+        if (ifilter != None and name in ifilter) or matched(rfilter, name):
+ out.write(format(name, '<'+str(size)) + ' IN A ' + lease['ip_address'] + '\n')
+ if options.dest != '-':
+ out.close()
+ return count
+
+def reload_zone(rndc, server, port, key, zone):
+ cmd = [rndc, '-s', server]
+ if key != None:
+ cmd.extend(['-c', key])
+ cmd.extend(['-p', port, 'reload'])
+ if zone != None:
+ cmd.append(zone)
+
+ try:
+ out = subprocess.check_output(cmd)
+        print("INFO: [%s UTC] updated DNS server" % time.asctime(time.gmtime()))
+ except subprocess.CalledProcessError as e:
+ print("ERROR: failed to update DNS server, exit code %d" % e.returncode)
+ print(e.output)
+
+def handleRequestsUsing(requestQ):
+ return lambda *args: ApiHandler(requestQ, *args)
+
+class ApiHandler(BaseHTTPServer.BaseHTTPRequestHandler):
+ def __init__(s, requestQ, *args):
+ s.requestQ = requestQ
+ BaseHTTPServer.BaseHTTPRequestHandler.__init__(s, *args)
+
+ def do_HEAD(s):
+ s.send_response(200)
+ s.send_header("Content-type", "application/json")
+ s.end_headers()
+
+ def do_POST(s):
+ if s.path == '/harvest':
+ waitQ = Queue()
+ s.requestQ.put(waitQ)
+ resp = waitQ.get(block=True, timeout=None)
+ s.send_response(200)
+ s.send_header('Content-type', 'application/json')
+ s.end_headers()
+
+ if resp == "QUIET":
+ s.wfile.write('{ "response" : "QUIET" }')
+ else:
+ s.wfile.write('{ "response" : "OK" }')
+
+ else:
+ s.send_response(404)
+
+ def do_GET(s):
+ """Respond to a GET request."""
+ s.send_response(404)
+
+def do_api(hostname, port, requestQ):
+ server_class = BaseHTTPServer.HTTPServer
+ httpd = server_class((hostname, int(port)), handleRequestsUsing(requestQ))
+ print("INFO: [%s UTC] Start API server on %s:%s" % (time.asctime(time.gmtime()), hostname, port))
+ try:
+ httpd.serve_forever()
+ except KeyboardInterrupt:
+ pass
+ httpd.server_close()
+ print("INFO: [%s UTC] Stop API server on %s:%s" % (time.asctime(time.gmtime()), hostname, port))
+
+def harvester(options, requestQ):
+ quiet = convert_to_seconds(options.quiet)
+ last = -1
+ resp = "OK"
+ while True:
+ responseQ = requestQ.get(block=True, timeout=None)
+ if last == -1 or (time.time() - last) > quiet:
+ work_field(options)
+ last = time.time()
+ resp = "OK"
+ else:
+ resp = "QUIET"
+
+ if responseQ != None:
+ responseQ.put(resp)
+
+def work_field(options):
+ start = datetime.datetime.now()
+ print("INFO: [%s UTC] starting to harvest hosts from DHCP" % (time.asctime(time.gmtime())))
+ count = harvest(options)
+ end = datetime.datetime.now()
+ delta = end - start
+ print("INFO: [%s UTC] harvested %d hosts, taking %d seconds" % (time.asctime(time.gmtime()), count, delta.seconds))
+ if options.update:
+ reload_zone(options.rndc, options.server, options.port, options.key, options.zone)
+
+def main():
+ parser = OptionParser()
+ parser.add_option('-l', '--leases', dest='leases', default='/dhcp/dhcpd.leases',
+ help="specifies the DHCP lease file from which to harvest")
+ parser.add_option('-x', '--reservations', dest='reservations', default='/etc/dhcp/dhcpd.reservations',
+            help="specifies the reservation file, as ISC DHCP doesn't update the lease file for fixed addresses")
+ parser.add_option('-d', '--dest', dest='dest', default='/bind/dhcp_harvest.inc',
+ help="specifies the file to write the additional DNS information")
+ parser.add_option('-i', '--include', dest='include', default=None,
+ help="list of hostnames to include when harvesting DNS information")
+ parser.add_option('-f', '--filter', dest='filter', default=None,
+ help="list of regex expressions to use as an include filter")
+ parser.add_option('-r', '--repeat', dest='repeat', default=None,
+ help="continues to harvest DHCP information every specified interval")
+ parser.add_option('-c', '--command', dest='rndc', default='rndc',
+ help="shell command to execute to cause reload")
+ parser.add_option('-k', '--key', dest='key', default=None,
+ help="rndc key file to use to access DNS server")
+ parser.add_option('-s', '--server', dest='server', default='127.0.0.1',
+ help="server to reload after generating updated dns information")
+ parser.add_option('-p', '--port', dest='port', default='954',
+ help="port on server to contact to reload server")
+ parser.add_option('-z', '--zone', dest='zone', default=None,
+ help="zone to reload after generating updated dns information")
+ parser.add_option('-u', '--update', dest='update', default=False, action='store_true',
+ help="update the DNS server, by reloading the zone")
+ parser.add_option('-y', '--verify', dest='verify', default=False, action='store_true',
+ help="verify the hosts with a ping before pushing them to DNS")
+ parser.add_option('-t', '--timeout', dest='timeout', default='1s',
+ help="specifies the duration to wait for a verification ping from a host")
+ parser.add_option('-a', '--apiserver', dest='apiserver', default='0.0.0.0',
+ help="specifies the interfaces on which to listen for API requests")
+ parser.add_option('-e', '--apiport', dest='apiport', default='8954',
+ help="specifies the port on which to listen for API requests")
+ parser.add_option('-q', '--quiet', dest='quiet', default='1m',
+            help="specifies a minimum quiet period between harvests")
+ parser.add_option('-w', '--workers', dest='worker_count', type='int', default=5,
+ help="specifies the number of workers to use when verifying IP addresses")
+
+ (options, args) = parser.parse_args()
+
+ # Kick off a thread to listen for HTTP requests to force a re-evaluation
+ requestQ = Queue()
+ api = threading.Thread(target=do_api, args=(options.apiserver, options.apiport, requestQ))
+ api.daemon = True
+ api.start()
+
+ if options.repeat == None:
+ work_field(options)
+ else:
+ secs = convert_to_seconds(options.repeat)
+ farmer = threading.Thread(target=harvester, args=(options, requestQ))
+ farmer.daemon = True
+ farmer.start()
+ while True:
+ cropQ = Queue()
+ requestQ.put(cropQ)
+ interruptable_get(cropQ)
+ time.sleep(secs)
+
+if __name__ == "__main__":
+ main()
diff --git a/harvester/harvest-compose.yml b/harvester/harvest-compose.yml
new file mode 100644
index 0000000..c21504d
--- /dev/null
+++ b/harvester/harvest-compose.yml
@@ -0,0 +1,13 @@
+harvester:
+ image: cord/dhcpharvester
+ container_name: harvester
+  restart: "no"
+ labels:
+ - "lab.cord.component=Controller"
+ volumes:
+ - "/var/lib/maas/dhcp:/dhcp"
+ - "/etc/bind/maas:/bind"
+ - "/home/ubuntu/compose-services/dhcpharvester/key:/key"
+ ports:
+ - "8954:8954"
+ command: [ "--server", "192.168.42.231", "--port", "954", "--key", "/key/mykey.conf", "--zone", "cord.lab", "--update", "--verify", "--timeout", "1s", "--repeat", "5m", "--quiet", "2s", "--workers", "10", "--filter", "^" ]
diff --git a/harvester/key/mykey.conf b/harvester/key/mykey.conf
new file mode 100644
index 0000000..5c1ee5a
--- /dev/null
+++ b/harvester/key/mykey.conf
@@ -0,0 +1,8 @@
+key "rndc-maas-key" {
+ algorithm hmac-md5;
+ secret "3wUD5ethlazwlMKLGe2PViPJoPl2Cen5r9BePqwyHac=";
+};
+
+options {
+ default-key "rndc-maas-key";
+};
diff --git a/roles/compute-node/tasks/i40e_driver.yml b/roles/compute-node/tasks/i40e_driver.yml
index 5f6b199..69c14cd 100644
--- a/roles/compute-node/tasks/i40e_driver.yml
+++ b/roles/compute-node/tasks/i40e_driver.yml
@@ -2,15 +2,15 @@
- name: Copy i40e Interface Driver
unarchive:
src=files/i40e-1.4.25.tar.gz
- dest={{ ansible_env.HOME }}
- owner=ubuntu
- group=ubuntu
+ dest=/home/{{ ansible_user }}
+ owner={{ ansible_user }}
+ group={{ ansible_user }}
- name: Build i40e Driver
command: make
args:
chdir: i40e-1.4.25/src
- creates: "{{ ansible_env.HOME }}/i40e-1.4.25/src/i40e/i40e.ko"
+ creates: /home/{{ ansible_user }}/i40e-1.4.25/src/i40e/i40e.ko
- name: Unload i40e Driver
become: yes
@@ -36,5 +36,5 @@
- name: Remove Build Files
file:
- path={{ ansible_env.HOME }}/i40e-1.4.25
+ path=/home/{{ ansible_user }}/i40e-1.4.25
state=absent
diff --git a/roles/compute-node/tasks/main.yml b/roles/compute-node/tasks/main.yml
index f3ee4aa..f8edc77 100644
--- a/roles/compute-node/tasks/main.yml
+++ b/roles/compute-node/tasks/main.yml
@@ -8,29 +8,29 @@
- name: Set Default Password
become: yes
user:
- name=ubuntu
+ name={{ ansible_user }}
password="$6$TjhJuOgh8xp.v$z/4GwFbn5koVmkD6Ex9wY7bgP7L3uP2ujZkZSs1HNdzQdz9YclbnZH9GvqMC/M1iwC0MceL05.13HoFz/bai0/"
- name: Authorize SSH Key
become: yes
authorized_key:
key="{{ pub_ssh_key }}"
- user=ubuntu
+ user={{ ansible_user }}
state=present
- name: Verify Private SSH Key
become: yes
stat:
- path=/home/ubuntu/.ssh/id_rsa
+ path=/home/{{ ansible_user }}/.ssh/id_rsa
register: private_key
-- name: Ensure Private SSH Key
+- name: Ensure SSH Key
become: yes
copy:
src=files/{{ item }}
- dest=/home/ubuntu/.ssh/{{ item }}
- owner=ubuntu
- group=ubuntu
+ dest=/home/{{ ansible_user }}/.ssh/{{ item }}
+ owner={{ ansible_user }}
+ group={{ ansible_user }}
mode=0600
with_items:
- id_rsa
@@ -49,6 +49,7 @@
command: modinfo --field=version i40e
register: i40e_version
changed_when: False
+ failed_when: False
tags:
- interface_config
diff --git a/roles/maas/files/amt.template b/roles/maas/files/amt.template
new file mode 100755
index 0000000..2f2df8c
--- /dev/null
+++ b/roles/maas/files/amt.template
@@ -0,0 +1,48 @@
+#!/bin/bash
+
+POWER_ADDRESS={{power_address}}
+POWER_CHANGE={{power_change}}
+POWER_PASS={{power_pass}}
+POWER_MAC={{mac_address}}
+IP_ADDRESS={{ip_address}}
+BOOT_MODE={{boot_mode}}
+
+get_uuid () {
+ local DATA=$(echo -n "$1" | sed -e 's/://g')
+    ssh "$POWER_PASS@$POWER_ADDRESS" vboxmanage list vms | grep "$DATA" | awk '{print $NF}' | sed -e 's/[{}]//g'
+}
+
+query_state () {
+ local state=$(ssh $POWER_PASS@$POWER_ADDRESS vboxmanage showvminfo $1 | grep "^State" | grep -i running | wc -l)
+ if [ $state -eq 1 ]; then
+ echo 'on'
+ else
+ echo 'off'
+ fi
+}
+
+power_on () {
+ ssh $POWER_PASS@$POWER_ADDRESS vboxmanage startvm $1
+ return 0
+}
+
+power_off () {
+ ssh $POWER_PASS@$POWER_ADDRESS vboxmanage controlvm $1 poweroff
+ return 0
+}
+
+main () {
+ case "${POWER_CHANGE}" in
+ 'on')
+ power_on "$1"
+ ;;
+ 'off')
+ power_off "$1"
+ ;;
+ 'query')
+ query_state "$1"
+ ;;
+ esac
+}
+
+main "$(get_uuid $POWER_MAC)" "$POWER_CHANGE"
diff --git a/roles/maas/files/ssh_config b/roles/maas/files/ssh_config
new file mode 100644
index 0000000..f30d239
--- /dev/null
+++ b/roles/maas/files/ssh_config
@@ -0,0 +1,2 @@
+Host *
+ StrictHostKeyChecking no
diff --git a/roles/maas/tasks/main.yml b/roles/maas/tasks/main.yml
index c5be210..65edae6 100644
--- a/roles/maas/tasks/main.yml
+++ b/roles/maas/tasks/main.yml
@@ -158,6 +158,8 @@
with_items:
- { url : "https://www.dropbox.com/s/eqxs4kx84omtkha/onie-installer-x86_64-accton_as6712_32x-r0?dl=1", dest : "onie-installer-x86_64-accton_as6712_32x-r0" }
- { url : "https://www.dropbox.com/s/eqxs4kx84omtkha/onie-installer-x86_64-accton_as6712_32x-r0?dl=1", dest : "onie-installer-x86_64-accton_as5712_54x-r0" }
+ tags:
+ - switch_support
- name: Wait for MAAS to Intialize (start)
pause:
@@ -166,7 +168,7 @@
- name: Configure MAAS
become: yes
- command: docker run -ti ciena/cord-maas-bootstrap:0.1-prerelease --apikey='{{apikey.stdout}}' --sshkey='{{maas.user_sshkey}}' --url='http://{{mgmt_ip_address.stdout}}/MAAS/api/1.0' --network='{{networks.management}}' --interface='{{interfaces.management}}' --zone='administrative' --cluster='Cluster master' --domain='{{maas.domain}}' --bridge='{{networks.bridge_name}}' --bridge-subnet='{{networks.bridge}}'
+ command: docker run ciena/cord-maas-bootstrap:0.1-prerelease --apikey='{{apikey.stdout}}' --sshkey='{{maas.user_sshkey}}' --url='http://{{mgmt_ip_address.stdout}}/MAAS/api/1.0' --network='{{networks.management}}' --interface='{{interfaces.management}}' --zone='administrative' --cluster='Cluster master' --domain='{{maas.domain}}' --bridge='{{networks.bridge_name}}' --bridge-subnet='{{networks.bridge}}'
register: maas_config_result
changed_when: maas_config_result.stdout.find("CHANGED") != -1
failed_when: "'ERROR' in maas_config_result.stdout"
@@ -194,6 +196,33 @@
register: dns_template_changed
changed_when: dns_template_changed.stdout == 'true'
+- name: Ensure Nameserver
+ become: yes
+ lineinfile:
+ dest: /etc/resolvconf/resolv.conf.d/head
+ state: present
+ insertafter: EOF
+ line: "nameserver {{ mgmt_ip_address.stdout }}"
+ register: ns_nameserver
+
+- name: Ensure Domain Search
+ become: yes
+ lineinfile:
+ dest: /etc/resolvconf/resolv.conf.d/base
+ state: present
+ insertafter: EOF
+ line: 'search cord.lab'
+ register: ns_search
+
+- name: Ensure DNS
+ become: yes
+ command: resolvconf -u
+ when: ns_nameserver.changed or ns_search.changed
+
+- name: Ensure VirtualBox Power Management
+ include: virtualbox.yml
+ when: virtualbox_support is defined
+
- name: Custom Automation Compose Configurations
become: yes
template:
diff --git a/roles/maas/tasks/virtualbox.yml b/roles/maas/tasks/virtualbox.yml
new file mode 100644
index 0000000..b263886
--- /dev/null
+++ b/roles/maas/tasks/virtualbox.yml
@@ -0,0 +1,38 @@
+- name: VirtualBox Power Support
+ become: yes
+ apt: name={{ item }} state=latest
+ with_items:
+ - amtterm
+ - wsmancli
+
+- name: VirtualBox Power Script
+ become: yes
+ copy:
+ src: files/amt.template
+ dest: /etc/maas/templates/power/amt.template
+ owner: maas
+ group: maas
+ mode: 0755
+
+- name: Ensure SSH Directory
+ become: yes
+ file:
+ path: /var/lib/maas/.ssh
+ state: directory
+ owner: maas
+ group: maas
+ mode: 0700
+
+- name: VirtualBox SSH Support
+ become: yes
+ copy:
+ src: files/{{ item.src }}
+ dest: /var/lib/maas/.ssh/{{ item.dest }}
+ owner: maas
+ group: maas
+ mode: 0600
+ with_items:
+ - { src: cord_id_rsa, dest: id_rsa }
+ - { src: cord_id_rsa.pub, dest: id_rsa.pub }
+ - { src: ssh_config, dest: config }
+
diff --git a/roles/onos-fabric/tasks/main.yml b/roles/onos-fabric/tasks/main.yml
index b2bccc5..74b07e8 100644
--- a/roles/onos-fabric/tasks/main.yml
+++ b/roles/onos-fabric/tasks/main.yml
@@ -1,18 +1,18 @@
---
- name: User Local bin directory
file:
- path={{ ansible_env.HOME }}/bin
+ path=/home/{{ ansible_user }}/bin
state=directory
- owner=ubuntu
- group=ubuntu
+ owner={{ ansible_user }}
+ group={{ ansible_user }}
mode=0755
- name: Copy Utility Commands
copy:
src=files/bin/{{ item }}
- dest={{ ansible_env.HOME }}/bin
- owner=ubuntu
- group=ubuntu
+ dest=/home/{{ ansible_user }}/bin
+ owner={{ ansible_user }}
+ group={{ ansible_user }}
mode=0755
with_items:
- minify
@@ -23,7 +23,7 @@
- name: Include Utility Commands in User Path
lineinfile:
- dest={{ ansible_env.HOME }}/.bashrc
+ dest=/home/{{ ansible_user }}/.bashrc
line="PATH=$HOME/bin:$PATH"
state=present
insertafter=EOF
@@ -31,15 +31,15 @@
- name: Custom ONOS
unarchive:
src=files/onos-1.6.0.ubuntu.tar.gz
- dest={{ ansible_env.HOME }}
- owner=ubuntu
- group=ubuntu
+ dest=/home/{{ ansible_user }}
+ owner={{ ansible_user }}
+ group={{ ansible_user }}
- name: ONOS Fabric Configuration
template:
src=templates/fabric-network-config.json.j2
- dest={{ ansible_env.HOME }}/fabric-network.config.json
- owner=ubuntu
- group=ubuntu
+ dest=/home/{{ ansible_user }}/fabric-network.config.json
+ owner={{ ansible_user }}
+ group={{ ansible_user }}
mode=0644
diff --git a/scripts/bootstrap_ansible.sh b/scripts/bootstrap_ansible.sh
new file mode 100755
index 0000000..a5e8549
--- /dev/null
+++ b/scripts/bootstrap_ansible.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+#
+# Copyright 2012 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+set -e
+
+echo "Installing Ansible..."
+apt-get install -y software-properties-common ca-certificates
+apt-add-repository ppa:ansible/ansible
+apt-get update
+apt-get install -y ansible
+cp /maasdev/ansible/ansible.cfg /etc/ansible/ansible.cfg