VOL-572: Integration testing with Kubernetes

* Updated test_voltha_alarm_filters to run in the single-node Kubernetes environment,
as well as in docker-compose.

* Fixed voltha-ponsim-k8s-start.sh, which is used by test_dispatcher and will be
used by test_persistence.

Change-Id: I0ca3bd3c108d170c5704620d3cfe3a134efdef56
diff --git a/tests/itests/README.md b/tests/itests/README.md
index ebbba5a..e90ca54 100644
--- a/tests/itests/README.md
+++ b/tests/itests/README.md
@@ -189,7 +189,7 @@
 device and verify that alarms are generated by the device.
 
 To run the test in the docker-compose environment,
-start the Voltha ensemble and then run the test:
+start the Voltha ensemble and then execute the test:
 ```
 cd /cord/incubator/voltha
 . ./env.sh
@@ -227,7 +227,8 @@
 along with a filter against one of the devices.  The test will validate that alarms are received
 for the unfiltered device and alarms will be suppressed for the filtered device.
 
-After starting the Voltha ensemble, run the test:
+To run the test in the docker-compose environment,
+start the Voltha ensemble and then execute the test:
 ```
 cd /cord/incubator/voltha
 . ./env.sh
@@ -235,6 +236,22 @@
 docker-compose -f compose/docker-compose-system-test.yml up -d
 nosetests -s tests/itests/voltha/test_voltha_alarm_filters.py
 ```
+To run the test in a single-node Kubernetes environment (see document voltha/BUILD.md),
+stop any previous Voltha deployment and then start the Voltha ensemble:
+```
+./tests/itests/env/voltha-k8s-stop.sh
+./tests/itests/env/voltha-k8s-start.sh
+```
+Wait until all the Voltha pods are in the Running state and take note of the
+Kafka pod's IP address (see the description of the Voltha_alarm_events test).
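+The pod states and IP addresses can be listed with a standard kubectl query, for
+example (this assumes the pods are deployed in the voltha namespace):
+```
+kubectl get pods -n voltha -o wide
+```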
+Then add the following line to the /etc/hosts file:
+```
+<kafka-pod-IP-address> kafka-0.kafka.voltha.svc.cluster.local
+```
+Finally, execute the test:
+```
+nosetests -s tests/itests/voltha/test_voltha_alarm_filters.py --tc-file=tests/itests/env/k8s-consul.ini
+```
 
 * **Dispatcher**:  This test exercises the requests forwarding via the Global 
 handler.
@@ -264,7 +281,7 @@
 ``` 
 
 To run the test in Kubernetes, set up a single-node environment by following
-document voltha/BUILD.md. The test is fully automated; simple execute:
+document voltha/BUILD.md. The test is fully automated; simply execute:
 ```
 nosetests -s tests/itests/voltha/test_dispatcher.py --tc-file=tests/itests/env/k8s-consul.ini
 ```
diff --git a/tests/itests/env/voltha-ponsim-k8s-start.sh b/tests/itests/env/voltha-ponsim-k8s-start.sh
index e2d82e2..55f087d 100755
--- a/tests/itests/env/voltha-ponsim-k8s-start.sh
+++ b/tests/itests/env/voltha-ponsim-k8s-start.sh
@@ -40,9 +40,24 @@
 
 # An ONU container creates the pon0 bridge
 kubectl apply -f k8s/onu.yml
-sleep 30
+
 echo 8 > tests/itests/env/tmp_pon0_group_fwd_mask
-sudo cp tests/itests/env/tmp_pon0_group_fwd_mask /sys/class/net/pon0/bridge/group_fwd_mask
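+# Wait up to 30 seconds for the ONU container to create the pon0 bridge
+# before updating its group_fwd_mask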
+RETRY=30
+while [ $RETRY -gt 0 ];
+do
+    if [ -f /sys/class/net/pon0/bridge/group_fwd_mask ]; then
+        echo "pon0 found"
+        sudo cp tests/itests/env/tmp_pon0_group_fwd_mask /sys/class/net/pon0/bridge/group_fwd_mask
+        break
+    else
+        echo "waiting for pon0..."
+        RETRY=$(expr $RETRY - 1)
+        sleep 1
+    fi
+done
+if [ $RETRY -eq 0 ]; then
+    echo "Timed out waiting for creation of bridge pon0"
+fi
 rm tests/itests/env/tmp_pon0_group_fwd_mask
 
 kubectl apply -f k8s/rg.yml
diff --git a/tests/itests/voltha/test_voltha_alarm_filters.py b/tests/itests/voltha/test_voltha_alarm_filters.py
index ccb2929..0096745 100644
--- a/tests/itests/voltha/test_voltha_alarm_filters.py
+++ b/tests/itests/voltha/test_voltha_alarm_filters.py
@@ -4,15 +4,23 @@
 from google.protobuf.json_format import MessageToDict
 
 from common.utils.consulhelpers import get_endpoint_from_consul
-from tests.itests.test_utils import \
+from tests.itests.test_utils import get_pod_ip, \
     run_long_running_command_with_timeout
 from tests.itests.voltha.rest_base import RestBase
 from voltha.protos.device_pb2 import Device
 from voltha.protos.voltha_pb2 import AlarmFilter
+from testconfig import config
 
 # ~~~~~~~ Common variables ~~~~~~~
 
 LOCAL_CONSUL = "localhost:8500"
+ENV_DOCKER_COMPOSE = 'docker-compose'
+ENV_K8S_SINGLE_NODE = 'k8s-single-node'
+
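+# Select the orchestration environment from the test config; default to docker-compose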
+orch_env = ENV_DOCKER_COMPOSE
+if 'test_parameters' in config and 'orch_env' in config['test_parameters']:
+    orch_env = config['test_parameters']['orch_env']
+print 'orchestration-environment: %s' % orch_env
 
 COMMANDS = dict(
     kafka_client_run="kafkacat -b {} -L",
@@ -25,15 +33,17 @@
 
 
 class VolthaAlarmFilterTests(RestBase):
-    # Retrieve details on the REST entry point
-    rest_endpoint = get_endpoint_from_consul(LOCAL_CONSUL, 'envoy-8443')
+    # Get the REST and Kafka endpoints for the selected orchestration environment
+    if orch_env == ENV_K8S_SINGLE_NODE:
+        rest_endpoint = get_pod_ip('voltha') + ':8443'
+        kafka_endpoint = get_pod_ip('kafka')
+    else:
+        rest_endpoint = get_endpoint_from_consul(LOCAL_CONSUL, 'voltha-envoy-8443')
+        kafka_endpoint = get_endpoint_from_consul(LOCAL_CONSUL, 'kafka')
 
     # Construct the base_url
     base_url = 'https://' + rest_endpoint
 
-    # Start by querying consul to get the endpoint details
-    kafka_endpoint = get_endpoint_from_consul(LOCAL_CONSUL, 'kafka')
-
     # ~~~~~~~~~~~~ Tests ~~~~~~~~~~~~
 
     def test_1_alarm_topic_exists(self):
@@ -63,8 +73,8 @@
         self.verify_rest()
 
         # Create a new device
-        device_not_filtered = self.add_device()
-        device_filtered = self.add_device()
+        device_not_filtered = self.add_device('00:00:00:00:00:01')
+        device_filtered = self.add_device('00:00:00:00:00:02')
 
         self.add_device_id_filter(device_filtered['id'])
 
@@ -87,9 +97,10 @@
         self.get('/api/v1')
 
     # Create a new simulated device
-    def add_device(self):
+    def add_device(self, mac_address):
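+        # The caller supplies the MAC address so that each simulated device is
+        # created with a distinct one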
         device = Device(
             type='simulated_olt',
+            mac_address=mac_address
         )
         device = self.post('/api/v1/devices', MessageToDict(device),
                            expected_http_code=200)