Reintegrate with olt-oftest

* Added updated notes on how to run voltha with olt-oftest
  in fake_dataplane mode (for now)
* Relocated test_flow_decomposer to the unit test suite
* Added new comprehensive unit test for LogicalDeviceAgent.
  Note that this class covers the following important
  business logic:
  * handling flow_mod and group_mod requests coming from the
    NBI
  * building route table and default flows/groups tables
    for the physical devices
  * performing flow decomposition from the logical device
    flows/groups to the physical device flows/groups
  All three functions are covered by tests now.
* Many small fixes/improvements picked up by the tests.

Change-Id: I34d341830e39bec29bcb8a2ed2eaf2027595c0e3
diff --git a/docs/pon-testing/olt-oftest-notes.md b/docs/pon-testing/olt-oftest-notes.md
new file mode 100644
index 0000000..d76c398
--- /dev/null
+++ b/docs/pon-testing/olt-oftest-notes.md
@@ -0,0 +1,165 @@
+# Notes on How to Run olt-oftest with Voltha
+
+[Still raw notes.]
+
+Steps:
+
+### Bring up dev host and install prerequisites
+
+Assuming a fresh Vagrant machine:
+
+```
+cd ~/voltha  # or wherever your Voltha repo is checked out
+rm -fr venv-linux  # to make sure we don't have residues
+vagrant destroy -f  # ditto
+vagrant up
+vagrant ssh
+cd /voltha
+git clone git@bitbucket.org:corddesign/olt-oftest.git
+git clone https://github.com/floodlight/oftest.git
+git clone git://github.com/mininet/mininet
+./mininet/utils/install.sh
+pip install pypcap
+```
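+
+To sanity-check the installation, you can try importing the Python packages just installed (optional; the module names below are an assumption based on how these packages typically install):
+
+```
+python -c "import pcap"     # pypcap installs the 'pcap' module
+python -c "import mininet"  # installed by the mininet install script
+```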
+
+### Build Voltha proto derivatives and start Voltha
+
+On the above Vagrant box:
+
+```
+cd /voltha
+. env.sh
+make protos
+docker-compose -f compose/docker-compose-system-test.yml up -d consul zookeeper kafka registrator fluentd
+docker-compose -f compose/docker-compose-system-test.yml ps  # to see if all are up and happy
+```
+
+For development purposes it is better to run voltha, chameleon and ofagent in the foreground, each in its own terminal, so that is what we do.
+
+Open three terminals on the Vagrant host. In terminal one, start voltha:
+
+```
+cd /voltha
+. env.sh
+./voltha/main.py --kafka=@kafka
+```
+
+In the second terminal, start chameleon:
+
+```
+cd /voltha
+. env.sh
+./chameleon/main.py
+```
+
+In the third terminal, start ofagent:
+
+```
+cd /voltha
+. env.sh
+./ofagent/main.py
+```
+
+Open a fourth terminal and run some sanity checks:
+
+To verify that we can reach Voltha via REST:
+
+```
+curl -s http://localhost:8881/health | jq '.'
+```
+
+and
+
+```
+curl -s -H 'Get-Depth: 2' http://localhost:8881/api/v1/local | jq '.'
+```
+
+To verify that there is exactly one logical device (this is important, since olt-oftest assumes it):
+
+```
+curl -s http://localhost:8881/api/v1/local/logical_devices | jq '.items'
+```
+
+Check that the output contains exactly one entry in the logical device list, along these lines:
+
+```
+[
+  {
+    "datapath_id": "1",
+    "root_device_id": "simulated_olt_1",
+    "switch_features": {
+      "auxiliary_id": 0,
+      "n_tables": 2,
+      "datapath_id": "0",
+      "capabilities": 15,
+      "n_buffers": 256
+    },
+    "id": "simulated1",
+    "ports": [],
+    "desc": {
+      "dp_desc": "n/a",
+      "sw_desc": "simualted pon",
+      "hw_desc": "simualted pon",
+      "serial_num": "1cca4175aa8d4163b8b4aed9bc65c380",
+      "mfr_desc": "cord porject"
+    }
+  }
+]
+```
+
+
+To verify that the above logical device has all three logical ports, run this:
+
+```
+curl -s http://localhost:8881/api/v1/local/logical_devices/simulated1/ports | jq '.items'
+```
+
+This should list three entries: one OLT NNI port and two ONU (UNI) ports. Make note of the corresponding
+```ofp_port.port_no``` numbers. They should be as follows:
+
+* For OLT port (```id=olt1```): ```ofp_port.port_no=129```
+* For ONU1 port (```id=onu1```): ```ofp_port.port_no=1```
+* For ONU2 port (```id=onu2```): ```ofp_port.port_no=2```
+
+If they are different, you will need to adjust the olt-oftest input arguments accordingly.
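+
+A quick way to list the ```id```/```ofp_port.port_no``` pairs is a small jq query (the field paths are assumed to match the port listing shown by the command above):
+
+```
+curl -s http://localhost:8881/api/v1/local/logical_devices/simulated1/ports | \
+    jq '.items[] | {id: .id, port_no: .ofp_port.port_no}'
+```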
+
+Finally, check the flow and flow_group tables of the logical device; they should both be empty at this point:
+
+
+```
+curl -s http://localhost:8881/api/v1/local/logical_devices/simulated1/flows | jq '.items'
+curl -s http://localhost:8881/api/v1/local/logical_devices/simulated1/flow_groups | jq '.items'
+```
+
+### Create fake interfaces needed by olt-oftest
+
+Even though we will run olt-oftest in "fake_dataplane" mode, meaning that it will not attempt to send or receive dataplane traffic, it still wants to be able to open its usual dataplane interfaces. We make it happy by creating a few veth pairs:
+
+```
+sudo ip link add type veth
+sudo ip link add type veth
+sudo ip link add type veth
+sudo ip link add type veth
+sudo ifconfig veth0 up
+sudo ifconfig veth2 up
+sudo ifconfig veth4 up
+sudo ifconfig veth6 up
+```
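+
+To confirm the veth interfaces exist and the even-numbered ones are up (optional check):
+
+```
+ip link show type veth
+```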
+
+### Start olt-oftest in fake_dataplane mode
+
+```
+cd /voltha
+sudo -s
+export PYTHONPATH=/voltha/voltha/adapters/tibit_olt:/voltha/mininet
+./oftest/oft --test-dir=olt-oftest/ \
+    -t "fake_dataplane=True;olt_port=129;onu_port=1;onu_port2=2" \
+    -i 1@veth0 \
+    -i 2@veth2 \
+    -i 129@veth4 \
+    -p 6633 -V 1.3 -vv -T olt-complex
+```
+
+The above should finish with OK (showing seven (7) or more tests completed).
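+
+To see what the test programmed into Voltha, you can re-run the earlier flow and flow_group queries against the logical device, for example counting the entries (whether entries remain installed after the run depends on the test's cleanup behavior):
+
+```
+curl -s http://localhost:8881/api/v1/local/logical_devices/simulated1/flows | jq '.items | length'
+curl -s http://localhost:8881/api/v1/local/logical_devices/simulated1/flow_groups | jq '.items | length'
+```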
+
+
diff --git a/tests/itests/docutests/OLT-TESTING.md b/tests/itests/docutests/OLT-TESTING.obsolete.md
similarity index 100%
rename from tests/itests/docutests/OLT-TESTING.md
rename to tests/itests/docutests/OLT-TESTING.obsolete.md
diff --git a/tests/utests/voltha/core/flow_helpers.py b/tests/utests/voltha/core/flow_helpers.py
new file mode 100644
index 0000000..70d8638
--- /dev/null
+++ b/tests/utests/voltha/core/flow_helpers.py
@@ -0,0 +1,26 @@
+"""
+Mixin class to help in flow inspection
+"""
+from unittest import TestCase
+
+from google.protobuf.json_format import MessageToDict
+from jsonpatch import make_patch
+from simplejson import dumps
+
+
+class FlowHelpers(TestCase):
+
+    # ~~~~~~~~~~~~~~~~~~~~~~~~~ HELPER METHODS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    def assertFlowsEqual(self, flow1, flow2):
+        if flow1 != flow2:
+            self.fail('flow1 %s differs from flow2; differences: \n%s' % (
+                      dumps(MessageToDict(flow1), indent=4),
+                      self.diffMsgs(flow1, flow2)))
+
+    def diffMsgs(self, msg1, msg2):
+        msg1_dict = MessageToDict(msg1)
+        msg2_dict = MessageToDict(msg2)
+        diff = make_patch(msg1_dict, msg2_dict)
+        return dumps(diff.patch, indent=2)
+
diff --git a/tests/itests/voltha/test_flow_decomposer.py b/tests/utests/voltha/core/test_flow_decomposer.py
similarity index 89%
rename from tests/itests/voltha/test_flow_decomposer.py
rename to tests/utests/voltha/core/test_flow_decomposer.py
index f131a28..9d1630d 100644
--- a/tests/itests/voltha/test_flow_decomposer.py
+++ b/tests/utests/voltha/core/test_flow_decomposer.py
@@ -1,17 +1,13 @@
-from unittest import TestCase, main
+from unittest import main
 
-from jsonpatch import make_patch
-from simplejson import dumps
-
+from tests.utests.voltha.core.flow_helpers import FlowHelpers
 from voltha.core.flow_decomposer import *
-from voltha.core.logical_device_agent import \
-    flow_stats_entry_from_flow_mod_message
+from voltha.protos import third_party
 from voltha.protos.device_pb2 import Device, Port
 from voltha.protos.logical_device_pb2 import LogicalPort
-from google.protobuf.json_format import MessageToDict
 
 
-class TestFlowDecomposer(TestCase, FlowDecomposer):
+class TestFlowDecomposer(FlowHelpers, FlowDecomposer):
 
     def setUp(self):
         self.logical_device_id = 'pon'
@@ -142,41 +138,6 @@
                      _devices['olt'].ports[1]),
         ],
 
-        # UPSTREAM CONTROLLER-BOUND (IN-BAND SENDING TO DATAPLANE
-
-        (1, ofp.OFPP_CONTROLLER): [
-            RouteHop(_devices['onu1'],
-                     _devices['onu1'].ports[1],
-                     _devices['onu1'].ports[0]),
-            RouteHop(_devices['olt'],
-                     _devices['olt'].ports[0],
-                     _devices['olt'].ports[1]),
-        ],
-        (2, ofp.OFPP_CONTROLLER): [
-            RouteHop(_devices['onu2'],
-                     _devices['onu2'].ports[1],
-                     _devices['onu2'].ports[0]),
-            RouteHop(_devices['olt'],
-                     _devices['olt'].ports[0],
-                     _devices['olt'].ports[1]),
-        ],
-        (3, ofp.OFPP_CONTROLLER): [
-            RouteHop(_devices['onu3'],
-                     _devices['onu3'].ports[1],
-                     _devices['onu3'].ports[0]),
-            RouteHop(_devices['olt'],
-                     _devices['olt'].ports[0],
-                     _devices['olt'].ports[1]),
-        ],
-        (4, ofp.OFPP_CONTROLLER): [
-            RouteHop(_devices['onu4'],
-                     _devices['onu4'].ports[1],
-                     _devices['onu4'].ports[0]),
-            RouteHop(_devices['olt'],
-                     _devices['olt'].ports[0],
-                     _devices['olt'].ports[1]),
-        ],
-
         # UPSTREAM NEXT TABLE BASED
 
         (1, None): [
@@ -219,8 +180,16 @@
                      _devices['olt'].ports[1],
                      _devices['olt'].ports[0]),
             None  # 2nd hop is not known yet
-        ]
+        ],
 
+        # UPSTREAM WILD-CARD
+        (None, 0): [
+            None,  # 1st hop is wildcard
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]
+                     )
+        ]
     }
 
     _default_rules = {
@@ -309,22 +278,12 @@
         return self._default_rules[device_id]
 
     def get_route(self, in_port_no, out_port_no):
+        if out_port_no is not None and \
+                        (out_port_no & 0x7fffffff) == ofp.OFPP_CONTROLLER:
+            # treat it as if the output port is the NNI of the OLT
+            out_port_no = 0  # OLT NNI port
         return self._routes[(in_port_no, out_port_no)]
 
-    # ~~~~~~~~~~~~~~~~~~~~~~~~~ HELPER METHODS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    def assertFlowsEqual(self, flow1, flow2):
-        if flow1 != flow2:
-            self.fail('flow1 %s differs from flow2; differences: \n%s' % (
-                      dumps(MessageToDict(flow1), indent=4),
-                      self.diffMsgs(flow1, flow2)))
-
-    def diffMsgs(self, msg1, msg2):
-        msg1_dict = MessageToDict(msg1)
-        msg2_dict = MessageToDict(msg2)
-        diff = make_patch(msg1_dict, msg2_dict)
-        return dumps(diff.patch, indent=2)
-
     # ~~~~~~~~~~~~~~~~~~~~~~~~ ACTUAL TEST CASES ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     def test_eapol_reroute_rule_decomposition(self):
@@ -459,6 +418,48 @@
             ]
         ))
 
+    def test_wildcarded_igmp_reroute_rule_decomposition(self):
+        flow = mk_flow_stat(
+            match_fields=[
+                eth_type(0x0800),
+                ip_proto(2)
+            ],
+            actions=[output(ofp.OFPP_CONTROLLER)],
+            priority=2000,
+            cookie=140
+        )
+        device_rules = self.decompose_rules([flow], [])
+        onu1_flows, onu1_groups = device_rules['onu1']
+        olt_flows, olt_groups = device_rules['olt']
+        self.assertEqual(len(onu1_flows), 1)
+        self.assertEqual(len(onu1_groups), 0)
+        self.assertEqual(len(olt_flows), 2)
+        self.assertEqual(len(olt_groups), 0)
+        self.assertFlowsEqual(onu1_flows.values()[0], mk_flow_stat(
+            match_fields=[
+                in_port(2),
+                vlan_vid(ofp.OFPVID_PRESENT | 0),
+            ],
+            actions=[
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 101)),
+                output(1)
+            ]
+        ))
+        self.assertFlowsEqual(olt_flows.values()[1], mk_flow_stat(
+            priority=2000,
+            cookie=140,
+            match_fields=[
+                in_port(1),
+                eth_type(0x0800),
+                ip_proto(2)
+            ],
+            actions=[
+                push_vlan(0x8100),
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 4000)),
+                output(2)
+            ]
+        ))
+
     def test_unicast_upstream_rule_decomposition(self):
         flow1 = mk_flow_stat(
             priority=500,
diff --git a/tests/utests/voltha/core/test_logical_device_agent.py b/tests/utests/voltha/core/test_logical_device_agent.py
new file mode 100644
index 0000000..5e97f6c
--- /dev/null
+++ b/tests/utests/voltha/core/test_logical_device_agent.py
@@ -0,0 +1,611 @@
+from unittest import main
+
+from mock import Mock
+
+from tests.utests.voltha.core.flow_helpers import FlowHelpers
+from voltha.core import logical_device_agent
+from voltha.core.flow_decomposer import *
+from voltha.core.logical_device_agent import LogicalDeviceAgent
+from voltha.protos import third_party
+from voltha.protos.device_pb2 import Device, Port
+from voltha.protos.logical_device_pb2 import LogicalDevice, LogicalPort
+from voltha.protos.openflow_13_pb2 import Flows, FlowGroups
+
+
+class test_logical_device_agent(FlowHelpers):
+
+    def setup_mock_registry(self):
+        registry = Mock()
+        logical_device_agent.registry = registry
+
+    def setUp(self):
+        self.setup_mock_registry()
+
+        self.flows = Flows(items=[])
+        self.groups = FlowGroups(items=[])
+        self.ld_ports = [
+            LogicalPort(
+                id='0',
+                device_id='olt',
+                device_port_no=0,
+                root_port=True,
+                ofp_port=ofp.ofp_port(
+                    port_no=0
+                )
+            ),
+            LogicalPort(
+                id='1',
+                device_id='onu1',
+                device_port_no=0,
+                ofp_port=ofp.ofp_port(
+                    port_no=1
+                )
+            ),
+            LogicalPort(
+                id='2',
+                device_id='onu2',
+                device_port_no=0,
+                ofp_port=ofp.ofp_port(
+                    port_no=2
+                )
+            )
+        ]
+
+        self.devices = {
+            'olt': Device(id='olt', root=True, parent_id='id'),
+            'onu1': Device(id='onu1', parent_id='olt', parent_port_no=1),
+            'onu2': Device(id='onu2', parent_id='olt', parent_port_no=1),
+        }
+
+        self.ports = {
+            'olt': [
+                Port(port_no=0, type=Port.ETHERNET_NNI, device_id='olt'),
+                Port(port_no=1, type=Port.PON_OLT, device_id='olt',
+                     peers=[
+                         Port.PeerPort(device_id='onu1', port_no=1),
+                         Port.PeerPort(device_id='onu2', port_no=1)
+                     ]
+                )
+            ],
+            'onu1': [
+                Port(port_no=0, type=Port.ETHERNET_UNI, device_id='onu1'),
+                Port(port_no=1, type=Port.PON_ONU, device_id='onu1',
+                     peers=[
+                         Port.PeerPort(device_id='olt', port_no=1),
+                     ]
+                )
+            ],
+            'onu2': [
+                Port(port_no=0, type=Port.ETHERNET_UNI, device_id='onu2'),
+                Port(port_no=1, type=Port.PON_ONU, device_id='onu2',
+                     peers=[
+                         Port.PeerPort(device_id='olt', port_no=1),
+                     ]
+                )
+            ],
+        }
+
+        self.device_flows = {
+            'olt': Flows(),
+            'onu1': Flows(),
+            'onu2': Flows()
+        }
+
+        self.device_groups = {
+            'olt': FlowGroups(),
+            'onu1': FlowGroups(),
+            'onu2': FlowGroups()
+        }
+
+        self.ld = LogicalDevice(id='id', root_device_id='olt')
+
+        self.root_proxy = Mock()
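+        # emulate read access to the /devices subtree of the config tree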
+        def get_devices(path):
+            if path == '':
+                return self.devices.values()
+            if path.endswith('/ports'):
+                return self.ports[path[:-len('/ports')]]
+            elif path.find('/') == -1:
+                return self.devices[path]
+            else:
+                raise Exception(
+                    'Nothing to yield for path /devices/{}'.format(path))
+        def update_devices(path, data):
+            if path.endswith('/flows'):
+                self.device_flows[path[:-len('/flows')]] = data
+            elif path.endswith('/flow_groups'):
+                self.device_groups[path[:-len('/flow_groups')]] = data
+            else:
+                raise NotImplementedError(
+                    'not handling path /devices/{}'.format(path))
+
+        self.root_proxy.get = lambda p: \
+            get_devices(p[len('/devices/'):]) if p.startswith('/devices') \
+                else None
+        self.root_proxy.update = lambda p, d: \
+            update_devices(p[len('/devices/'):], d) \
+                if p.startswith('/devices') \
+                else None
+        self.ld_proxy = Mock()
+        self.ld_proxy.get = lambda p: \
+            self.ld_ports if p == '/ports' else (
+                self.ld if p == '/' else None
+            )
+
+        self.flows_proxy = Mock()
+        self.flows_proxy.get = lambda _: self.flows  # always '/' path
+        def update_flows(_, flows):  # always '/' path
+            self.flows = flows
+        self.flows_proxy.update = update_flows
+
+        self.groups_proxy = Mock()
+        self.groups_proxy.get = lambda _: self.groups  # always '/' path
+        def update_groups(_, groups):  # always '/' path
+            self.groups = groups
+        self.groups_proxy.update = update_groups
+
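+        # dispatch core.get_proxy() calls to the right mock proxy based on the path suffix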
+        self.core = Mock()
+        self.core.get_proxy = lambda path: \
+            self.root_proxy if path == '/' else (
+                self.ld_proxy if path.endswith('id') else (
+                    self.flows_proxy if path.endswith('flows') else
+                    self.groups_proxy
+                )
+            )
+
+        self.lda = LogicalDeviceAgent(self.core, self.ld)
+
+    def test_init(self):
+        pass  # really just tests the setUp method
+
+    # ~~~~~~~~~~~~~~~~~~~ TEST FLOW TABLE MANIPULATION ~~~~~~~~~~~~~~~~~~~~~~~~
+
+    def test_add_flow(self):
+        flow_mod = mk_simple_flow_mod(
+            match_fields=[],
+            actions=[]
+        )
+        self.lda.update_flow_table(flow_mod)
+
+        expected_flows = Flows(items=[
+            flow_stats_entry_from_flow_mod_message(flow_mod)
+        ])
+        self.assertFlowsEqual(self.flows, expected_flows)
+
+    def test_add_redundant_flows(self):
+        flow_mod = mk_simple_flow_mod(
+            match_fields=[],
+            actions=[]
+        )
+        self.lda.update_flow_table(flow_mod)
+        self.lda.update_flow_table(flow_mod)
+        self.lda.update_flow_table(flow_mod)
+        self.lda.update_flow_table(flow_mod)
+
+        expected_flows = Flows(items=[
+            flow_stats_entry_from_flow_mod_message(flow_mod)
+        ])
+        self.assertFlowsEqual(self.flows, expected_flows)
+
+    def test_add_different_flows(self):
+        flow_mod1 = mk_simple_flow_mod(
+            match_fields=[
+                in_port(1)
+            ],
+            actions=[]
+        )
+        flow_mod2 = mk_simple_flow_mod(
+            match_fields=[
+                in_port(2)
+            ],
+            actions=[]
+        )
+        self.lda.update_flow_table(flow_mod1)
+        self.lda.update_flow_table(flow_mod2)
+
+        expected_flows = Flows(items=[
+            flow_stats_entry_from_flow_mod_message(flow_mod1),
+            flow_stats_entry_from_flow_mod_message(flow_mod2)
+        ])
+        self.assertFlowsEqual(self.flows, expected_flows)
+
+    def test_delete_all_flows(self):
+        for i in range(5):
+            flow_mod = mk_simple_flow_mod(
+                match_fields=[in_port(i)],
+                actions=[output(i + 1)]
+            )
+            self.lda.update_flow_table(flow_mod)
+        self.assertEqual(len(self.flows.items), 5)
+
+        self.lda.update_flow_table(mk_simple_flow_mod(
+            command=ofp.OFPFC_DELETE,
+            out_port=ofp.OFPP_ANY,
+            out_group=ofp.OFPG_ANY,
+            match_fields=[],
+            actions=[]
+        ))
+        self.assertEqual(len(self.flows.items), 0)
+
+    def test_delete_specific_flows(self):
+        for i in range(5):
+            flow_mod = mk_simple_flow_mod(
+                match_fields=[in_port(i)],
+                actions=[output(i + 1)]
+            )
+            self.lda.update_flow_table(flow_mod)
+        self.assertEqual(len(self.flows.items), 5)
+
+        self.lda.update_flow_table(mk_simple_flow_mod(
+            command=ofp.OFPFC_DELETE_STRICT,
+            match_fields=[in_port(2)],
+            actions=[]
+        ))
+        self.assertEqual(len(self.flows.items), 4)
+
+    # ~~~~~~~~~~~~~~~~~~~ TEST GROUP TABLE MANIPULATION ~~~~~~~~~~~~~~~~~~~~~~~
+
+    def test_add_group(self):
+        group_mod = mk_multicast_group_mod(
+            group_id=2,
+            buckets=[
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(1)
+                ]),
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(2)
+                ]),
+            ]
+        )
+        self.lda.update_group_table(group_mod)
+
+        expected_groups = FlowGroups(items=[
+            group_entry_from_group_mod(group_mod)
+        ])
+        self.assertEqual(self.groups, expected_groups)
+
+    def test_add_redundant_groups(self):
+        group_mod = mk_multicast_group_mod(
+            group_id=2,
+            buckets=[
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(1)
+                ]),
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(2)
+                ]),
+            ]
+        )
+        self.lda.update_group_table(group_mod)
+        self.lda.update_group_table(group_mod)
+        self.lda.update_group_table(group_mod)
+        self.lda.update_group_table(group_mod)
+        self.lda.update_group_table(group_mod)
+
+        expected_groups = FlowGroups(items=[
+            group_entry_from_group_mod(group_mod)
+        ])
+        self.assertEqual(self.groups, expected_groups)
+
+    def test_modify_group(self):
+        group_mod = mk_multicast_group_mod(
+            group_id=2,
+            buckets=[
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(1)
+                ]),
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(2)
+                ]),
+            ]
+        )
+        self.lda.update_group_table(group_mod)
+
+        group_mod = mk_multicast_group_mod(
+            command=ofp.OFPGC_MODIFY,
+            group_id=2,
+            buckets=[
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(i)
+                ]) for i in range(1, 4)
+            ]
+        )
+        self.lda.update_group_table(group_mod)
+
+        self.assertEqual(len(self.groups.items), 1)
+        self.assertEqual(len(self.groups.items[0].desc.buckets), 3)
+
+    def test_delete_all_groups(self):
+        for i in range(10):
+            group_mod = mk_multicast_group_mod(
+                group_id=i,
+                buckets=[
+                    ofp.ofp_bucket(actions=[
+                        pop_vlan(),
+                        output(i)
+                    ]) for j in range(1, 4)
+                ]
+            )
+            self.lda.update_group_table(group_mod)
+        self.assertEqual(len(self.groups.items), 10)
+
+        # now delete all
+        self.lda.update_group_table(mk_multicast_group_mod(
+            command=ofp.OFPGC_DELETE,
+            group_id=ofp.OFPG_ALL,
+            buckets=[]
+        ))
+        self.assertEqual(len(self.groups.items), 0)
+
+    def test_delete_specific_group(self):
+        for i in range(10):
+            group_mod = mk_multicast_group_mod(
+                group_id=i,
+                buckets=[
+                    ofp.ofp_bucket(actions=[
+                        pop_vlan(),
+                        output(i)
+                    ]) for j in range(1, 4)
+                ]
+            )
+            self.lda.update_group_table(group_mod)
+        self.assertEqual(len(self.groups.items), 10)
+
+        # now delete all
+        self.lda.update_group_table(mk_multicast_group_mod(
+            command=ofp.OFPGC_DELETE,
+            group_id=3,
+            buckets=[]
+        ))
+        self.assertEqual(len(self.groups.items), 9)
+
+    # ~~~~~~~~~~~~~~~~~~~~ DEFAULT RULES AND ROUTES ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    def test_default_rules(self):
+        rules = self.lda.get_all_default_rules()
+        # we assume one default flow and no default group for each of 3 devs
+        self.assertEqual(len(rules['olt'][0]), 1)
+        self.assertEqual(len(rules['olt'][1]), 0)
+        self.assertEqual(len(rules['onu1'][0]), 1)
+        self.assertEqual(len(rules['onu1'][1]), 0)
+        self.assertEqual(len(rules['onu2'][0]), 1)
+        self.assertEqual(len(rules['onu2'][1]), 0)
+
+    def test_routes(self):
+        self.lda.get_all_default_rules()  # this will prepare the _routes
+        routes = self.lda._routes
+        self.assertEqual(len(routes), 4)
+        self.assertEqual(set(routes.keys()),
+                         set([(0, 1), (0, 2), (1, 0), (2, 0)]))
+
+        # verify all routes
+        route = routes[(0, 1)]
+        self.assertEqual(len(route), 2)
+        self.assertEqual(route[0].device, self.devices['olt'])
+        self.assertEqual(route[0].ingress_port, self.ports['olt'][0])
+        self.assertEqual(route[0].egress_port, self.ports['olt'][1])
+        self.assertEqual(route[1].device, self.devices['onu1'])
+        self.assertEqual(route[1].ingress_port, self.ports['onu1'][1])
+        self.assertEqual(route[1].egress_port, self.ports['onu1'][0])
+
+        route = routes[(0, 2)]
+        self.assertEqual(len(route), 2)
+        self.assertEqual(route[0].device, self.devices['olt'])
+        self.assertEqual(route[0].ingress_port, self.ports['olt'][0])
+        self.assertEqual(route[0].egress_port, self.ports['olt'][1])
+        self.assertEqual(route[1].device, self.devices['onu2'])
+        self.assertEqual(route[1].ingress_port, self.ports['onu2'][1])
+        self.assertEqual(route[1].egress_port, self.ports['onu2'][0])
+
+        route = routes[(1, 0)]
+        self.assertEqual(len(route), 2)
+        self.assertEqual(route[0].device, self.devices['onu1'])
+        self.assertEqual(route[0].ingress_port, self.ports['onu1'][0])
+        self.assertEqual(route[0].egress_port, self.ports['onu1'][1])
+        self.assertEqual(route[1].device, self.devices['olt'])
+        self.assertEqual(route[1].ingress_port, self.ports['olt'][1])
+        self.assertEqual(route[1].egress_port, self.ports['olt'][0])
+
+        route = routes[(2, 0)]
+        self.assertEqual(len(route), 2)
+        self.assertEqual(route[0].device, self.devices['onu2'])
+        self.assertEqual(route[0].ingress_port, self.ports['onu2'][0])
+        self.assertEqual(route[0].egress_port, self.ports['onu2'][1])
+        self.assertEqual(route[1].device, self.devices['olt'])
+        self.assertEqual(route[1].ingress_port, self.ports['olt'][1])
+        self.assertEqual(route[1].egress_port, self.ports['olt'][0])
+
+    # ~~~~~~~~~~~~~~~~~~~~~~~~~~ FLOW DECOMP TESTS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    def test_eapol_flow_decomp_case(self):
+        self.lda.update_flow_table(mk_simple_flow_mod(
+            priority=1000,
+            match_fields=[in_port(1), eth_type(0x888e)],
+            actions=[output(ofp.OFPP_CONTROLLER)]
+        ))
+        self.lda._flow_table_updated(self.flows)
+        self.assertEqual(len(self.device_flows['olt'].items), 2)
+        self.assertEqual(len(self.device_flows['onu1'].items), 1)
+        self.assertEqual(len(self.device_flows['onu2'].items), 1)
+        self.assertEqual(len(self.device_groups['olt'].items), 0)
+        self.assertEqual(len(self.device_groups['onu1'].items), 0)
+        self.assertEqual(len(self.device_groups['onu2'].items), 0)
+
+        # the only non-default flow (check without the id field)
+        self.assertFlowsEqual(self.device_flows['olt'].items[1], mk_flow_stat(
+                priority=1000,
+                match_fields=[in_port(1), eth_type(0x888e)],
+                actions=[
+                    push_vlan(0x8100),
+                    set_field(vlan_vid(4096 + 4000)),
+                    output(0)
+                ]
+            ))
+
+    def test_wildcarded_igmp_rule(self):
+        self.lda.update_flow_table(mk_simple_flow_mod(
+            priority=1000,
+            match_fields=[eth_type(0x800), ip_proto(2)],
+            actions=[output(ofp.OFPP_CONTROLLER)]
+        ))
+        self.lda._flow_table_updated(self.flows)
+        self.assertEqual(len(self.device_flows['olt'].items), 2)
+        self.assertEqual(len(self.device_flows['onu1'].items), 1)
+        self.assertEqual(len(self.device_flows['onu2'].items), 1)
+        self.assertEqual(len(self.device_groups['olt'].items), 0)
+        self.assertEqual(len(self.device_groups['onu1'].items), 0)
+        self.assertEqual(len(self.device_groups['onu2'].items), 0)
+
+        # the only non-default flow
+        self.assertFlowsEqual(self.device_flows['olt'].items[1], mk_flow_stat(
+            priority=1000,
+            match_fields=[in_port(1), eth_type(0x800), ip_proto(2)],
+            actions=[
+                push_vlan(0x8100),
+                set_field(vlan_vid(4096 + 4000)),
+                output(0)
+            ]
+        ))
+
+    def test_multicast_group_with_one_subscriber(self):
+        self.lda.update_group_table(mk_multicast_group_mod(
+            group_id=2,
+            buckets=[
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(1)
+                ]),
+            ]
+        ))
+        self.lda.update_flow_table(mk_simple_flow_mod(
+            priority=1000,
+            match_fields=[
+                in_port(0),
+                eth_type(0x800),
+                vlan_vid(4096 + 140),
+                ipv4_dst(0xe60a0a0a)
+            ],
+            actions=[group(2)]
+        ))
+        self.lda._flow_table_updated(self.flows)
+        self.assertEqual(len(self.device_flows['olt'].items), 2)
+        self.assertEqual(len(self.device_flows['onu1'].items), 2)
+        self.assertEqual(len(self.device_flows['onu2'].items), 1)
+        self.assertEqual(len(self.device_groups['olt'].items), 0)
+        self.assertEqual(len(self.device_groups['onu1'].items), 0)
+        self.assertEqual(len(self.device_groups['onu2'].items), 0)
+
+        self.assertFlowsEqual(self.device_flows['olt'].items[1], mk_flow_stat(
+            priority=1000,
+            match_fields=[in_port(0), vlan_vid(4096 + 140)],
+            actions=[
+                pop_vlan(),
+                output(1)
+            ]
+        ))
+        self.assertFlowsEqual(self.device_flows['onu1'].items[1], mk_flow_stat(
+            priority=1000,
+            match_fields=[in_port(1), eth_type(0x800), ipv4_dst(0xe60a0a0a)],
+            actions=[
+                output(0)
+            ]
+        ))
+
+    def test_multicast_group_with_two_subscribers(self):
+        self.lda.update_group_table(mk_multicast_group_mod(
+            group_id=2,
+            buckets=[
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(1)
+                ]),
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(2)
+                ]),
+            ]
+        ))
+        self.lda.update_flow_table(mk_simple_flow_mod(
+            priority=1000,
+            match_fields=[
+                in_port(0),
+                eth_type(0x800),
+                vlan_vid(4096 + 140),
+                ipv4_dst(0xe60a0a0a)
+            ],
+            actions=[group(2)]
+        ))
+        self.lda._flow_table_updated(self.flows)
+        self.assertEqual(len(self.device_flows['olt'].items), 2)
+        self.assertEqual(len(self.device_flows['onu1'].items), 2)
+        self.assertEqual(len(self.device_flows['onu2'].items), 2)
+        self.assertEqual(len(self.device_groups['olt'].items), 0)
+        self.assertEqual(len(self.device_groups['onu1'].items), 0)
+        self.assertEqual(len(self.device_groups['onu2'].items), 0)
+
+        self.assertFlowsEqual(self.device_flows['olt'].items[1], mk_flow_stat(
+            priority=1000,
+            match_fields=[in_port(0), vlan_vid(4096 + 140)],
+            actions=[
+                pop_vlan(),
+                output(1)
+            ]
+        ))
+        self.assertFlowsEqual(self.device_flows['onu1'].items[1], mk_flow_stat(
+            priority=1000,
+            match_fields=[in_port(1), eth_type(0x800), ipv4_dst(0xe60a0a0a)],
+            actions=[
+                output(0)
+            ]
+        ))
+        self.assertFlowsEqual(self.device_flows['onu2'].items[1], mk_flow_stat(
+            priority=1000,
+            match_fields=[in_port(1), eth_type(0x800), ipv4_dst(0xe60a0a0a)],
+            actions=[
+                output(0)
+            ]
+        ))
+
+    def test_multicast_group_with_no_subscribers(self):
+        self.lda.update_group_table(mk_multicast_group_mod(
+            group_id=2,
+            buckets=[]  # No subscribers
+        ))
+        self.lda.update_flow_table(mk_simple_flow_mod(
+            priority=1000,
+            match_fields=[
+                in_port(0),
+                eth_type(0x800),
+                vlan_vid(4096 + 140),
+                ipv4_dst(0xe60a0a0a)
+            ],
+            actions=[group(2)]
+        ))
+        self.lda._flow_table_updated(self.flows)
+        self.assertEqual(len(self.device_flows['olt'].items), 2)
+        self.assertEqual(len(self.device_flows['onu1'].items), 1)
+        self.assertEqual(len(self.device_flows['onu2'].items), 1)
+        self.assertEqual(len(self.device_groups['olt'].items), 0)
+        self.assertEqual(len(self.device_groups['onu1'].items), 0)
+        self.assertEqual(len(self.device_groups['onu2'].items), 0)
+
+        self.assertFlowsEqual(self.device_flows['olt'].items[1], mk_flow_stat(
+            priority=1000,
+            match_fields=[in_port(0), vlan_vid(4096 + 140)],
+            actions=[
+                pop_vlan(),
+                output(1)
+            ]
+        ))
+
+
+
+if __name__ == '__main__':
+    main()
diff --git a/voltha/core/device_graph.py b/voltha/core/device_graph.py
index 537124d..5db6a37 100644
--- a/voltha/core/device_graph.py
+++ b/voltha/core/device_graph.py
@@ -62,7 +62,7 @@
                 for peer in port.peers:
                     if peer.device_id not in devices_added:
                         peer_device = root_proxy.get(
-                            'devices/{}'.format(peer.device_id))
+                            '/devices/{}'.format(peer.device_id))
                         add_device(peer_device)
                     else:
                         peer_port_id = (peer.device_id, peer.port_no)
diff --git a/voltha/core/flow_decomposer.py b/voltha/core/flow_decomposer.py
index 2bd690c..bb346d4 100644
--- a/voltha/core/flow_decomposer.py
+++ b/voltha/core/flow_decomposer.py
@@ -452,7 +452,8 @@
 
         device_rules = {}  # accumulator
 
-        if (out_port_no & 0x7fffffff) == ofp.OFPP_CONTROLLER:
+        if out_port_no is not None and \
+                        (out_port_no & 0x7fffffff) == ofp.OFPP_CONTROLLER:
 
             # UPSTREAM CONTROLLER-BOUND FLOW
 
diff --git a/voltha/core/logical_device_agent.py b/voltha/core/logical_device_agent.py
index 41badb4..42b88d9 100644
--- a/voltha/core/logical_device_agent.py
+++ b/voltha/core/logical_device_agent.py
@@ -65,6 +65,8 @@
 
         self.log = structlog.get_logger(logical_device_id=logical_device.id)
 
+        self._routes = None
+
     def start(self):
         self.log.debug('starting')
         self.log.info('started')
@@ -305,7 +307,7 @@
         # indicates the flow entry matches
         match = flow_mod.match
         assert isinstance(match, ofp.ofp_match)
-        if not match.oxm_list:
+        if not match.oxm_fields:
             # If we got this far and the match is empty in the flow spec,
             # than the flow matches
             return True
@@ -589,9 +591,33 @@
 
     def get_route(self, ingress_port_no, egress_port_no):
         self._assure_cached_tables_up_to_date()
-        if (egress_port_no & 0x7fffffff) == ofp.OFPP_CONTROLLER:
+        if egress_port_no is not None and \
+                        (egress_port_no & 0x7fffffff) == ofp.OFPP_CONTROLLER:
             # treat it as if the output port is the NNI of the OLT
             egress_port_no = self._nni_logical_port_no
+
+        # If ingress_port is not specified (None), it may be a wildcarded
+        # route if egress_port is OFPP_CONTROLLER or _nni_logical_port,
+        # in which case we need to create a half-route where only the egress
+        # hop is filled in; the first hop is None
+        if ingress_port_no is None and \
+                        egress_port_no == self._nni_logical_port_no:
+            # We can use the 2nd hop of any upstream route, so just find the
+            # first upstream:
+            for (ingress, egress), route in self._routes.iteritems():
+                if egress == self._nni_logical_port_no:
+                    return [None, route[1]]
+            raise Exception('not a single upstream route')
+
+        # If egress_port is not specified (None), we can also return a
+        # "half" route
+        if egress_port_no is None and \
+            ingress_port_no == self._nni_logical_port_no:
+            for (ingress, egress), route in self._routes.iteritems():
+                if ingress == self._nni_logical_port_no:
+                    return [route[0], None]
+            raise Exception('not a single downstream route')
+
         return self._routes[(ingress_port_no, egress_port_no)]
 
     def get_all_default_rules(self):