Cleaner Tibit ONU handling.

Change-Id: I487d190d20917d10c28afaf724c5ccf4d0792260
diff --git a/voltha/adapters/tibit_olt/README.md b/voltha/adapters/tibit_olt/README.md
index ecf2c52..d37856e 100644
--- a/voltha/adapters/tibit_olt/README.md
+++ b/voltha/adapters/tibit_olt/README.md
@@ -1,44 +1,67 @@
 # Developer notes:
 
-Before auto-discovery is implemented, you can follow the steps below to activate a Tibit PON.
-These steps assume:
+Before auto-discovery is implemented, you can follow the steps below
+to activate a Tibit PON.  These steps assume:
 
+* Voltha code was downloaded and compiled successfully
 * Voltha starts with fresh state (was just launched in single-instance mode)
-* Tibit OLT and ONU(s) are powered on and properly connected with splitters.
-* There is network reach from Voltha's host environment to the Tibit OLT via
+* Tibit OLT and ONU(s) are powered on and properly connected with splitters
+* There is network reach from Voltha's host environment to the Tibit OLT via
   a specific interface of the host OS. We symbolically refer to this Linux
   interface as \<interface\>.
-* All commands are to be executed from the root dir of the voltha project after
-  env.sh was sourced, and at least 'make protos' was executed on the latest code.
+* All commands are to be executed from the root dir of the voltha project
 
 
-## Step 1: Launch Voltha with the proper interface value.
+## Step 1: Launch Voltha support applications and Chameleon
+
+Open a shell and execute the following commands.
 
 ```
-./voltha/main.py -I <interface>  # example: ./voltha/main.py -I eth2
+$ cd voltha
+$ . ./env.sh
+(venv-linux)$ docker-compose -f compose/docker-compose-system-test.yml up -d consul kafka zookeeper fluentd registrator
 ```
 
-## Step 2: Launch Chameleon (in a different terminal)
+In the same shell, launch Chameleon. The command below assumes that you are in the top-level Voltha directory.
 
 ```
-./chamaleon/main.py
+(venv-linux)$ ./chameleon/main.py
+```
+
+## Step 2: Launch Voltha with the proper interface value
+
+Note: For Voltha to properly access the interface, it must be run with sudo privileges.
+
+```
+$ sudo -s
+# cd ~/cord/incubator/voltha
+# . ./env.sh
+(venv-linux)# ./voltha/main.py --interface <interface>
+```
+
+The interface used by Voltha must also be in promiscuous mode.  To
+enable promiscuous mode, use the following command.
+
+```
+$ sudo ip link set <interface> promisc on
 ```
 
 ## Step 3: Verify Tibit adapters loaded
 
-In a third terminal, issue the following RESt requests:
+In a third terminal, issue the following REST requests:
 
 ```
-curl -s http://localhost:8881/api/v1/local/adapters | jq
+$ curl -s http://localhost:8881/api/v1/local/adapters | jq
 ```
 
-This should list (among other entries) two entries for Tibit devices:
+This should list (among other entries) two entries for Tibit devices,
 one for the Tibit OLT and one for the Tibit ONU.
 
 The following request should show the device types supported:
 
 ```
-curl -s http://localhost:8881/api/v1/local/device_types | jq
+$ curl -s http://localhost:8881/api/v1/local/device_types | jq
 ```
 
 This should include two entries for Tibit devices, one for the OLT
@@ -53,9 +76,9 @@
     http://localhost:8881/api/v1/local/devices | jq '.' | tee olt.json
 ```
 
-This shall return with a complete Device JSON object, including a 12-character
-id of the new device and a preprovisioned state as admin state (it also saved the
-json blob in a olt.json file):
+This will return a complete Device JSON object, including a
+12-character id of the new device and a preprovisioned state as admin
+state (it also saves the JSON blob in an olt.json file):
 
 ```
 {
@@ -94,12 +117,14 @@
 curl -s -X POST http://localhost:8881/api/v1/local/devices/$OLT_ID/activate
 ```
 
-After this, if you retrieve the state of the OLT device, it should be enabled
-and in the 'ACTIVATING' operational status:
+After this, if you retrieve the state of the OLT device, it should be
+enabled and in the 'ACTIVE' operational status.  If it is not in the
+'ACTIVE' operational status, it is likely that the handshake with the
+OLT device was not successful.
 
 ```
 curl http://localhost:8881/api/v1/local/devices/$OLT_ID | jq '.oper_status,.admin_state'
-"ACTIVATING"
+"ACTIVE"
 "ENABLED"
 ```
 When the device is ACTIVE, the logical devices and logical ports should be created.  To check
@@ -111,24 +136,14 @@
 curl -s http://localhost:8881/api/v1/local/logical_devices/47d2bb42a2c6/ports | jq '.'
 ```
 
-
-
-
-
-# OLD stuff
-
-[This will be moved to some other place soon.]
+## Running the ONOS olt-test
 
 To get the EOAM stack to work with the ONOS olt-test, the following
 command was used in the shell to launch the olt-test.
 
-NOTE: This command should soon be eliminated as the adapter should
-be started by VOLTHA. By running the commands as listed below, then
-the olt-test can take advantage of the virtual environment.
-
 ```
 $ cd <LOCATION_OF_VOLTHA>
 $ sudo -s
 # . ./env.sh
-(venv-linux) # PYTHONPATH=$HOME/dev/voltha/voltha/adapters/tibit ./oftest/oft --test-dir=olt-oftest/ -i 1@enp1s0f0 -i 2@enp1s0f1 --port 6633 -V 1.3 -t "olt_port=1;onu_port=2;in_out_port=1;device_type='tibit'" olt-complex.TestScenario1SingleOnu
+(venv-linux) # PYTHONPATH=$HOME/cord/incubator/voltha/voltha/extensions/eoam ./oftest/oft --test-dir=olt-oftest/ -i 1@enp1s0f0 -i 2@enp1s0f1 --port 6633 -V 1.3 -t "olt_port=1;onu_port=2;in_out_port=1;device_type='tibit'" olt-complex.TestScenario1SingleOnu
 ```
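The adapter changes below translate OpenFlow flow priorities into single-byte Tibit rule precedences via `255 - min(flow.priority / 256, 255)`. A standalone sketch of that mapping (Python 3, using integer division to reproduce the Python 2 `/`):

```python
def precedence_from_priority(priority):
    """Map an OpenFlow priority (0-65535) onto a one-byte Tibit rule
    precedence.  Integer division matches the Python 2 '/' used in the
    adapter; min() clamps the quotient into the 0-255 range."""
    return 255 - min(priority // 256, 255)

# Higher OpenFlow priority yields a numerically lower precedence value.
print(precedence_from_priority(0))      # -> 255
print(precedence_from_priority(1000))   # -> 252
print(precedence_from_priority(65535))  # -> 0
```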
diff --git a/voltha/adapters/tibit_olt/tibit_olt.py b/voltha/adapters/tibit_olt/tibit_olt.py
index ec58dfe..564674d 100644
--- a/voltha/adapters/tibit_olt/tibit_olt.py
+++ b/voltha/adapters/tibit_olt/tibit_olt.py
@@ -390,6 +390,7 @@
         raise NotImplementedError()
 
     def update_flows_bulk(self, device, flows, groups):
+        log.info('########################################')
         log.info('bulk-flow-update', device_id=device.id,
                  flows=flows, groups=groups)
 
@@ -402,11 +403,79 @@
             precedence = 255 - min(flow.priority / 256, 255)
 
             if in_port == 2:
-                # Downstream rule
-                pass  # TODO still ignores
+                log.info('#### Downstream Rule ####')
+                # Initialize the rule object before any clauses are
+                # appended; without this, 'req' below is stale or
+                # undefined.  (Mirrors the upstream branch; verify the
+                # correct port object for downstream rules.)
+                req = PonPortObject()
+                req /= PortIngressRuleHeader(precedence=precedence)
+
+                for field in get_ofb_fields(flow):
+
+                    if field.type == ETH_TYPE:
+                        log.info('#### field.type == ETH_TYPE ####')
+                        _type = field.eth_type
+                        req /= PortIngressRuleClauseMatchLength02(
+                            fieldcode=3,
+                            operator=1,
+                            match0=(_type >> 8) & 0xff,
+                            match1=_type & 0xff)
+
+                    elif field.type == IP_PROTO:
+                        _proto = field.ip_proto
+                        log.info('#### field.type == IP_PROTO ####')
+                        pass  # construct ip_proto based condition here
+
+                    elif field.type == IN_PORT:
+                        _port = field.port
+                        log.info('#### field.type == IN_PORT ####')
+                        pass  # construct in_port based condition here
+
+                    elif field.type == VLAN_VID:
+                        _vlan_vid = field.vlan_vid
+                        log.info('#### field.type == VLAN_VID ####')
+                        pass  # construct VLAN ID based filter condition here
+
+                    elif field.type == VLAN_PCP:
+                        _vlan_pcp = field.vlan_pcp
+                        log.info('#### field.type == VLAN_PCP ####')
+                        pass  # construct VLAN PCP based filter condition here
+
+                    elif field.type == UDP_DST:
+                        _udp_dst = field.udp_dst
+                        log.info('#### field.type == UDP_DST ####')
+                        pass  # construct UDP DST based filter here
+
+                    else:
+                        raise NotImplementedError('field.type={}'.format(
+                            field.type))
+
+                for action in get_actions(flow):
+
+                    if action.type == OUTPUT:
+                        req /= PortIngressRuleResultForward()
+
+                    elif action.type == POP_VLAN:
+                        pass  # construct vlan pop command here
+
+                    elif action.type == PUSH_VLAN:
+                        if action.push.ethertype != 0x8100:
+                            log.error('unhandled-ether-type',
+                                      ethertype=action.push.ethertype)
+                        req /= PortIngressRuleResultInsert(fieldcode=7)
+
+                    elif action.type == SET_FIELD:
+                        assert (action.set_field.field.oxm_class ==
+                                ofp.OFPXMC_OPENFLOW_BASIC)
+                        field = action.set_field.field.ofb_field
+                        if field.type == VLAN_VID:
+                            req /= PortIngressRuleResultSet(
+                                fieldcode=7, value=field.vlan_vid & 0xfff)
+                        else:
+                            log.error('unsupported-action-set-field-type',
+                                      field_type=field.type)
+
+                    else:
+                        log.error('unsupported-action-type',
+                                  action_type=action.type)
 
             elif in_port == 1:
                 # Upstream rule
+                log.info('#### Upstream Rule ####')
                 req = PonPortObject()
                 req /= PortIngressRuleHeader(precedence=precedence)
 
@@ -422,22 +491,27 @@
 
                     elif field.type == IP_PROTO:
                         _proto = field.ip_proto
+                        log.info('#### field.type == IP_PROTO ####')
                         pass  # construct ip_proto based condition here
 
                     elif field.type == IN_PORT:
                         _port = field.port
+                        log.info('#### field.type == IN_PORT ####')
                         pass  # construct in_port based condition here
 
                     elif field.type == VLAN_VID:
                         _vlan_vid = field.vlan_vid
+                        log.info('#### field.type == VLAN_VID ####')
                         pass  # construct VLAN ID based filter condition here
 
                     elif field.type == VLAN_PCP:
                         _vlan_pcp = field.vlan_pcp
+                        log.info('#### field.type == VLAN_PCP ####')
                         pass  # construct VLAN PCP based filter condition here
 
                     elif field.type == UDP_DST:
                         _udp_dst = field.udp_dst
+                        log.info('#### field.type == UDP_DST ####')
                         pass  # construct UDP SDT based filter here
 
                     else:
@@ -493,9 +567,8 @@
 
     def send_proxied_message(self, proxy_address, msg):
         log.info('send-proxied-message', proxy_address=proxy_address)
-        # TODO build device_id -> mac_address cache
         device = self.adapter_agent.get_device(proxy_address.device_id)
-        frame = Ether(dst='00:0c:e2:22:29:00') / \
+        frame = Ether(dst=device.mac_address) / \
                 Dot1Q(vlan=TIBIT_MGMT_VLAN, prio=TIBIT_MGMT_PRIORITY) / \
                 Dot1Q(vlan=proxy_address.channel_id, prio=TIBIT_MGMT_PRIORITY) / \
                 msg
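The `send_proxied_message` change above addresses the double-tagged management frame to the device's own MAC address instead of a hard-coded one. A stdlib-only sketch of the resulting Ethernet + double 802.1Q header layout, as produced by stacking `Ether()/Dot1Q()/Dot1Q()` in scapy (the VLAN/priority constants and the sample ethertype here are illustrative assumptions, not the adapter's actual values):

```python
import struct

TIBIT_MGMT_VLAN = 4090      # assumption: illustrative values only; the
TIBIT_MGMT_PRIORITY = 7     # real constants are defined in tibit_olt.py

def mgmt_frame_header(dst_mac, src_mac, channel_id, ethertype):
    """Ethernet header followed by two 802.1Q tags: the outer tag
    carries the management VLAN, the inner tag the ONU's channel_id
    from the proxy address."""
    def mac(s):
        return bytes(int(octet, 16) for octet in s.split(':'))
    def dot1q(pcp, vid):
        # TPID 0x8100, then a 16-bit TCI: 3-bit PCP, 1-bit DEI, 12-bit VID
        return struct.pack('!HH', 0x8100, (pcp << 13) | (vid & 0xfff))
    return (mac(dst_mac) + mac(src_mac) +
            dot1q(TIBIT_MGMT_PRIORITY, TIBIT_MGMT_VLAN) +   # outer tag
            dot1q(TIBIT_MGMT_PRIORITY, channel_id) +        # inner tag
            struct.pack('!H', ethertype))

hdr = mgmt_frame_header('00:0c:e2:22:29:00', '02:00:00:00:00:01',
                        channel_id=1, ethertype=0x9001)
print(len(hdr))  # -> 22 (12 MAC bytes + two 4-byte tags + 2-byte ethertype)
```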
diff --git a/voltha/adapters/tibit_olt/tmp.py b/voltha/adapters/tibit_olt/tmp.py
deleted file mode 100644
index 18a7587..0000000
--- a/voltha/adapters/tibit_olt/tmp.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from scapy.fields import ShortEnumField, XShortField, ShortField
-from scapy.layers.inet import IP
-from scapy.layers.l2 import Ether, Dot1Q
-from scapy.packet import Packet, bind_layers
-
-
-class EoamPayload(Packet):
-    name = "EOAM Payload"
-    fields_desc = [
-        ShortField("junk1", 12),
-        XShortField("junk2", None),
-    ]
-
-bind_layers(Ether, EoamPayload, type=0xbeef)
-
-
-
-
-
-f1 = Ether() / EoamPayload()
-print '0x%X' % f1.type
-
-
-f2 = Ether() / EoamPayload()
-print '0x%X' % f2.type
-
-f3 = Ether() / Dot1Q() / EoamPayload()
-
-print '0x%X' % f3.type
-print '0x%X' % f3.payload.type
-
-f4 = Ether() / Dot1Q() / Dot1Q() / EoamPayload()
-
-print '0x%X' % f4.type
-print '0x%X' % f4.payload.type
-print '0x%X' % f4.payload.payload.type