Flow decomposition and miscellaneous improvements

Specifically:

The biggest addition is an initial flow decomposition
implementation that splits flows and flow groups
defined over the logical device into per-physical-device
flows, using a crude, heuristic approach for now. We
expect this part to be much improved later on, both
in terms of generality and speed.

The flow decomposition is triggered by any flow
or group mod applied to a logical device, and it
consequently touches up the affected device tables.
This uses the POST_UPDATE (post-commit) mechanism
of core.
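
As a deliberately crude sketch of the idea (hypothetical helper
names and data shapes, not the actual core code): group the
logical flows by the physical device that owns each flow's
in_port and rebuild the affected device tables from the result.

    from collections import defaultdict

    def decompose_flows(logical_flows, port_to_device):
        # Crude heuristic: a logical flow belongs to the physical device
        # that owns the logical port it matches on (its 'in_port').
        per_device = defaultdict(list)
        for flow in logical_flows:
            per_device[port_to_device[flow['in_port']]].append(flow)
        return per_device

    def on_logical_flow_table_updated(logical_flows, port_to_device,
                                      update_device_flow_table):
        # Invoked via the POST_UPDATE (post-commit) hook, i.e. after the
        # flow/group mod has been applied to the logical device, so the
        # decomposition always works from the committed table state.
        for device_id, flows in decompose_flows(
                logical_flows, port_to_device).items():
            update_device_flow_table(device_id, flows)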

There is also an initial architecture diagram added
under docs.

Additional improvements:

* Implemented metadata passing across the gRPC
  link, both in Voltha and in Chameleon. This paves
  the way for passing query args as metadata, and
  for passing HTTP header fields back and forth
  across the gRPC API. This is already used to pass
  in the depth argument for GET /api/v1/local, and
  it will be used to allow working with transactions
  and specific config revs.
* Improved automatic reload and reconnect of Chameleon
  after Voltha is restarted.
* Improved error handling in gRPC handlers, especially
  for "resource not found" (404) and "bad argument"
  (400) type errors. This makes gRPC Rendezvous errors
  a bit cleaner, and also allows Chameleon to map these
  errors to 404/400 codes.
* Better error logging for generic errors in gRPC handlers.
* Many new test-cases.
* Initial skeleton and the first several steps implemented
  for the automated testing of the cold PON activation
  sequence.
* Convenience functions for working with flows (exemplified
  by the test-cases).
* Fixed a bug in the config engine that dropped changes
  made in a POST_* callback, such as the ones used to
  propagate the logical flow tables into the device
  tables. The fix was to defer the callbacks until the
  initial changes are complete and then execute all
  callbacks in sequence (see the sketch after this list).
* Adapter proxy with a well-defined API that adapters
  can use to communicate back to Core.
* Extended the simulated_olt and simulated_onu adapters
  to demonstrate both discovery-style and provisioned
  activation-style use cases.
* Adapter, device, and logical-device agents to provide
  the active business logic associated with these
  entities.
* Fixed 64-bit value passing across the stack. There was
  an issue due to inconsistent use of two JSON<-->Proto
  libraries, one of which did not adhere to the Google
  spec, which calls for passing 64-bit integer values as
  strings (see the example after this list).
* Annotations added for all gRPC methods.
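
The callback-deferral fix mentioned above boils down to the
following pattern (a minimal sketch with a hypothetical class
name, not the actual config engine code): callbacks raised while
a change is being applied are queued and only executed, in order,
once the triggering change has fully completed, so any further
changes they make are applied rather than dropped.

    class DeferredCallbackQueue(object):

        def __init__(self):
            self._queue = []
            self._executing = False

        def defer(self, callback, *args, **kw):
            # Queue the callback; it runs after the current change completes.
            self._queue.append((callback, args, kw))

        def execute_deferred(self):
            # Called once the initial change is fully committed. Callbacks
            # may defer further callbacks; the loop picks those up as well.
            if self._executing:
                return
            self._executing = True
            try:
                while self._queue:
                    callback, args, kw = self._queue.pop(0)
                    callback(*args, **kw)
            finally:
                self._executing = False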

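For the 64-bit issue, the relevant property of
google.protobuf.json_format (now used on both sides, see the
gw_gen.py hunk below) is that 64-bit integer fields are carried
as strings per the proto3 JSON mapping. The message type used
here is only an illustration.

    from google.protobuf.json_format import MessageToDict, ParseDict
    # hypothetical: any generated message with a uint64 field, e.g. 'cookie'
    from openflow_13_pb2 import ofp_flow_stats

    flow = ofp_flow_stats(cookie=2 ** 60)
    d = MessageToDict(flow, preserving_proto_field_name=True)
    assert d['cookie'] == str(2 ** 60)   # 64-bit value serialized as a string

    roundtrip = ParseDict(d, ofp_flow_stats())
    assert roundtrip.cookie == 2 ** 60   # and accepted back without loss
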
All Voltha test-cases are passing.

Change-Id: Id949e8d1b76276741471bedf9901ac33bfad9ec6
diff --git a/chameleon/grpc_client/grpc_client.py b/chameleon/grpc_client/grpc_client.py
index 74933cd..05a4dd7 100644
--- a/chameleon/grpc_client/grpc_client.py
+++ b/chameleon/grpc_client/grpc_client.py
@@ -252,24 +252,27 @@
             _ = __import__(modname)
 
     @inlineCallbacks
-    def invoke(self, stub, method_name, request, retry=1):
+    def invoke(self, stub, method_name, request, metadata, retry=1):
         """
         Invoke a gRPC call to the remote server and return the response.
         :param stub: Reference to the *_pb2 service stub
         :param method_name: The method name inside the service stub
         :param request: The request protobuf message
-        :return: The response protobuf message
+        :param metadata: [(str, str), (str, str), ...]
+        :return: The response protobuf message and returned trailing metadata
         """
 
         if not self.connected:
             raise ServiceUnavailable()
 
         try:
-            response = getattr(stub(self.channel), method_name)(request)
-            returnValue(response)
+            method = getattr(stub(self.channel), method_name)
+            response, rendezvous = method.with_call(request, metadata=metadata)
+            returnValue((response, rendezvous.trailing_metadata()))
 
         except grpc._channel._Rendezvous, e:
-            if e.code() == grpc.StatusCode.UNAVAILABLE:
+            code = e.code()
+            if code == grpc.StatusCode.UNAVAILABLE:
                 e = ServiceUnavailable()
 
                 if self.connected:
@@ -277,10 +280,17 @@
                     yield self.connect()
                     if retry > 0:
                         response = yield self.invoke(stub, method_name,
-                                                     request,
+                                                     request, metadata,
                                                      retry=retry - 1)
                         returnValue(response)
 
+            elif code in (
+                    grpc.StatusCode.NOT_FOUND,
+                    grpc.StatusCode.INVALID_ARGUMENT,
+                    grpc.StatusCode.ALREADY_EXISTS):
+
+                pass  # don't log error, these occur naturally
+
             else:
                 log.exception(e)
 
diff --git a/chameleon/protoc_plugins/gw_gen.py b/chameleon/protoc_plugins/gw_gen.py
index c5a8875..4400a1a 100755
--- a/chameleon/protoc_plugins/gw_gen.py
+++ b/chameleon/protoc_plugins/gw_gen.py
@@ -32,8 +32,7 @@
 
 from simplejson import dumps, load
 from structlog import get_logger
-from protobuf_to_dict import dict_to_protobuf
-from google.protobuf.json_format import MessageToDict
+from google.protobuf.json_format import MessageToDict, ParseDict
 from twisted.internet.defer import inlineCallbacks, returnValue
 
 {% set package = file_name.replace('.proto', '') %}
@@ -65,16 +64,16 @@
         {% elif method['body'] == '' %}
         data = kw
         {% else %}
-        riase NotImplementedError('cannot handle specific body field list')
+        raise NotImplementedError('cannot handle specific body field list')
         {% endif %}
         try:
-            req = dict_to_protobuf({{ type_map[method['input_type']] }}, data)
+            req = ParseDict(data, {{ type_map[method['input_type']] }}())
         except Exception, e:
             log.error('cannot-convert-to-protobuf', e=e, data=data)
             raise
-        res = yield grpc_client.invoke(
+        res, metadata = yield grpc_client.invoke(
             {{ type_map[method['service']] }}Stub,
-            '{{ method['method'] }}', req)
+            '{{ method['method'] }}', req, request.getAllHeaders().items())
         try:
             out_data = MessageToDict(res, True, True)
         except AttributeError, e:
@@ -83,6 +82,8 @@
                 f.write(res.SerializeToString())
             log.error('cannot-convert-from-protobuf', outdata_saved=filename)
             raise
+        for key, value in metadata:
+            request.setHeader(key, value)
         request.setHeader('Content-Type', 'application/json')
         log.debug('{{ method_name }}', **out_data)
         returnValue(dumps(out_data))
diff --git a/chameleon/web_server/web_server.py b/chameleon/web_server/web_server.py
index c96be0f..7e3d00d 100644
--- a/chameleon/web_server/web_server.py
+++ b/chameleon/web_server/web_server.py
@@ -17,6 +17,7 @@
 
 import os
 
+import grpc
 from klein import Klein
 from simplejson import dumps, load
 from structlog import get_logger
@@ -26,6 +27,8 @@
 from twisted.web.server import Site
 from twisted.web.static import File
 from werkzeug.exceptions import BadRequest
+from grpc import StatusCode
+
 
 log = get_logger()
 
@@ -97,3 +100,19 @@
             return File(os.path.join(self.work_dir, 'swagger.json'))
         except Exception, e:
             log.exception('file-not-found', request=request)
+
+    @app.handle_errors(grpc._channel._Rendezvous)
+    def grpc_exception(self, request, failure):
+        code = failure.value.code()
+        if code == StatusCode.NOT_FOUND:
+            request.setResponseCode(404)
+            return failure.value.details()
+        elif code == StatusCode.INVALID_ARGUMENT:
+            request.setResponseCode(400)
+            return failure.value.details()
+        elif code == StatusCode.ALREADY_EXISTS:
+            request.setResponseCode(409)
+            return failure.value.details()
+        else:
+            raise
+
diff --git a/common/utils/grpc_utils.py b/common/utils/grpc_utils.py
index b9cc8f7..03d2ff4 100644
--- a/common/utils/grpc_utils.py
+++ b/common/utils/grpc_utils.py
@@ -17,12 +17,16 @@
 """
 Utilities to handle gRPC server and client side code in a Twisted environment
 """
+import structlog
 from concurrent.futures import Future
 from twisted.internet import reactor
 from twisted.internet.defer import Deferred
 from twisted.python.threadable import isInIOThread
 
 
+log = structlog.get_logger()
+
+
 def twisted_async(func):
     """
     This decorator can be used to implement a gRPC method on the twisted
@@ -92,7 +96,11 @@
                 f.done()
 
         reactor.callFromThread(twisted_wrapper)
-        result = f.result()
+        try:
+            result = f.result()
+        except Exception, e:
+            log.exception(e=e, func=func, args=args, kw=kw)
+            raise
 
         return result
 
diff --git a/docs/architecture.svg b/docs/architecture.svg
new file mode 100644
index 0000000..a02ce2f
--- /dev/null
+++ b/docs/architecture.svg
@@ -0,0 +1,1940 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   width="11in"
+   height="8.5in"
+   viewBox="0 0 990 765"
+   id="svg2"
+   version="1.1"
+   inkscape:version="0.91 r13725"
+   sodipodi:docname="architecture.svg">
+  <defs
+     id="defs4">
+    <marker
+       inkscape:stockid="Arrow1Send"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="marker7299"
+       style="overflow:visible;"
+       inkscape:isstock="true">
+      <path
+         id="path7301"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         transform="scale(0.2) rotate(180) translate(6,0)" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible;"
+       id="marker6998"
+       refX="0.0"
+       refY="0.0"
+       orient="auto"
+       inkscape:stockid="Arrow1Send"
+       inkscape:collect="always">
+      <path
+         transform="scale(0.2) rotate(180) translate(6,0)"
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         id="path7000" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Send"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="marker6878"
+       style="overflow:visible;"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path6880"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         transform="scale(0.2) rotate(180) translate(6,0)" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Send"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="marker6626"
+       style="overflow:visible;"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path6628"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         transform="scale(0.2) rotate(180) translate(6,0)" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible;"
+       id="marker6524"
+       refX="0.0"
+       refY="0.0"
+       orient="auto"
+       inkscape:stockid="Arrow1Send"
+       inkscape:collect="always">
+      <path
+         transform="scale(0.2) rotate(180) translate(6,0)"
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         id="path6526" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Send"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="marker6428"
+       style="overflow:visible;"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path6430"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         transform="scale(0.2) rotate(180) translate(6,0)" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible;"
+       id="marker6338"
+       refX="0.0"
+       refY="0.0"
+       orient="auto"
+       inkscape:stockid="Arrow1Send"
+       inkscape:collect="always">
+      <path
+         transform="scale(0.2) rotate(180) translate(6,0)"
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         id="path6340" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Send"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="marker6254"
+       style="overflow:visible;"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path6256"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         transform="scale(0.2) rotate(180) translate(6,0)" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible;"
+       id="marker6186"
+       refX="0.0"
+       refY="0.0"
+       orient="auto"
+       inkscape:stockid="Arrow1Send"
+       inkscape:collect="always">
+      <path
+         transform="scale(0.2) rotate(180) translate(6,0)"
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         id="path6188" />
+    </marker>
+    <marker
+       inkscape:stockid="TriangleOutM"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="marker6092"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path6094"
+         d="M 5.77,0.0 L -2.88,5.0 L -2.88,-5.0 L 5.77,0.0 z "
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         transform="scale(0.4)" />
+    </marker>
+    <marker
+       inkscape:stockid="TriangleOutM"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="TriangleOutM"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path5225"
+         d="M 5.77,0.0 L -2.88,5.0 L -2.88,-5.0 L 5.77,0.0 z "
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         transform="scale(0.4)" />
+    </marker>
+    <marker
+       inkscape:stockid="EmptyTriangleOutM"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="EmptyTriangleOutM"
+       style="overflow:visible"
+       inkscape:isstock="true">
+      <path
+         id="path5243"
+         d="M 5.77,0.0 L -2.88,5.0 L -2.88,-5.0 L 5.77,0.0 z "
+         style="fill-rule:evenodd;fill:#ffffff;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1"
+         transform="scale(0.4) translate(-4.5,0)" />
+    </marker>
+    <marker
+       inkscape:isstock="true"
+       style="overflow:visible;"
+       id="marker5910"
+       refX="0.0"
+       refY="0.0"
+       orient="auto"
+       inkscape:stockid="Arrow2Mend">
+      <path
+         transform="scale(0.6) rotate(180) translate(0,0)"
+         d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+         style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#ff0000;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         id="path5912" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Send"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="marker5642"
+       style="overflow:visible;"
+       inkscape:isstock="true"
+       inkscape:collect="always">
+      <path
+         id="path5644"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         transform="scale(0.2) rotate(180) translate(6,0)" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Send"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="Arrow1Send"
+       style="overflow:visible;"
+       inkscape:isstock="true">
+      <path
+         id="path5095"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#ff0000;stroke-width:1pt;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         transform="scale(0.2) rotate(180) translate(6,0)" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow2Mend"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="Arrow2Mend"
+       style="overflow:visible;"
+       inkscape:isstock="true">
+      <path
+         id="path5107"
+         style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#ff0000;stroke-opacity:1;fill:#ff0000;fill-opacity:1"
+         d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+         transform="scale(0.6) rotate(180) translate(0,0)" />
+    </marker>
+    <marker
+       inkscape:stockid="Arrow1Lend"
+       orient="auto"
+       refY="0.0"
+       refX="0.0"
+       id="Arrow1Lend"
+       style="overflow:visible;"
+       inkscape:isstock="true">
+      <path
+         id="path5083"
+         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+         style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+         transform="scale(0.8) rotate(180) translate(12.5,0)" />
+    </marker>
+  </defs>
+  <sodipodi:namedview
+     id="base"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageopacity="0.0"
+     inkscape:pageshadow="2"
+     inkscape:zoom="1.2470588"
+     inkscape:cx="495"
+     inkscape:cy="382.5"
+     inkscape:document-units="cm"
+     inkscape:current-layer="layer2"
+     showgrid="true"
+     units="in"
+     objecttolerance="10000"
+     inkscape:snap-perpendicular="true"
+     inkscape:window-width="1920"
+     inkscape:window-height="1156"
+     inkscape:window-x="0"
+     inkscape:window-y="1"
+     inkscape:window-maximized="1">
+    <inkscape:grid
+       type="xygrid"
+       id="grid4136"
+       units="mm"
+       spacingx="3.5433071"
+       spacingy="3.5433071"
+       color="#b6b6ff"
+       opacity="0.1254902"
+       empcolor="#8a8aff"
+       empopacity="0.25098039" />
+  </sodipodi:namedview>
+  <metadata
+     id="metadata7">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <g
+     inkscape:label="Layer 1"
+     inkscape:groupmode="layer"
+     id="layer1"
+     transform="translate(0,-287.3622)"
+     style="display:inline"
+     sodipodi:insensitive="true">
+    <g
+       id="g4649"
+       transform="translate(-56.481132,-416.75472)">
+      <rect
+         style="fill:#c0e8ff;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:3.00000001, 1.00000002;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4651"
+         width="854.10309"
+         height="468.18985"
+         x="120.30643"
+         y="966.91339" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="965.6394"
+         y="984.86884"
+         id="text4653"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4655"
+           x="965.6394"
+           y="984.86884">voltha</tspan></text>
+    </g>
+    <g
+       id="g4595"
+       transform="matrix(1,0,0,1.0784076,-66.509434,-356.41783)">
+      <rect
+         style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:0.96296072;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:2.88888196, 0.96296066;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4597"
+         width="679.03876"
+         height="206.22931"
+         x="295.3707"
+         y="935.82001" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.03700829px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="965.6394"
+         y="953.52435"
+         id="text4599"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4601"
+           x="965.6394"
+           y="953.52435">core</tspan></text>
+    </g>
+    <g
+       id="g4398"
+       transform="translate(-65.707547,-6.8490566)">
+      <rect
+         y="892.91339"
+         x="194.8819"
+         height="124.01569"
+         width="779.52759"
+         id="rect4374"
+         style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:3, 1;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4314-7"
+         y="910.86884"
+         x="965.6394"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="910.86884"
+           x="965.6394"
+           id="tspan4316-0"
+           sodipodi:role="line">adapters</tspan></text>
+    </g>
+    <g
+       transform="translate(-223.31132,440.99057)"
+       id="g4276">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4278"
+         width="128.02519"
+         height="34.597065"
+         x="449.25955"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="540.95288"
+         id="text4280"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4282"
+           x="512.97003"
+           y="540.95288">simulated_onu</tspan></text>
+    </g>
+    <g
+       id="g4284"
+       transform="translate(-223.31132,397.69811)">
+      <rect
+         y="521.06683"
+         x="449.25955"
+         height="34.597065"
+         width="128.02519"
+         id="rect4286"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4288"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4290"
+           sodipodi:role="line">simulated_olt</tspan></text>
+    </g>
+    <g
+       transform="translate(48.528303,397.69811)"
+       id="g4310">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4312"
+         width="128.02519"
+         height="34.597065"
+         x="449.25955"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4314"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4316"
+           x="512.97003"
+           y="542.95288">maple</tspan></text>
+    </g>
+    <g
+       id="g4318"
+       transform="translate(-86.990568,397.69811)">
+      <rect
+         y="521.06683"
+         x="449.25955"
+         height="34.597065"
+         width="128.02519"
+         id="rect4320"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4322"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4324"
+           sodipodi:role="line">microsemi</tspan></text>
+    </g>
+    <g
+       transform="translate(183.24528,397.69811)"
+       id="g4326">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4328"
+         width="128.02519"
+         height="34.597065"
+         x="449.25955"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4330"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4332"
+           x="512.97003"
+           y="542.95288">tibit_olt</tspan></text>
+    </g>
+    <g
+       id="g4334"
+       transform="translate(183.21079,440.99057)">
+      <rect
+         y="521.06683"
+         x="449.25955"
+         height="34.597065"
+         width="128.02519"
+         id="rect4336"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4338"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4340"
+           sodipodi:role="line">tibit_onu</tspan></text>
+    </g>
+    <g
+       transform="translate(317.96226,397.69811)"
+       id="g4342">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4344"
+         width="128.02519"
+         height="34.597065"
+         x="449.25955"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4346"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4348"
+           x="512.97003"
+           y="542.95288">emulator</tspan></text>
+    </g>
+    <g
+       id="g4350"
+       transform="translate(318.76415,440.99057)">
+      <rect
+         y="521.06683"
+         x="449.25955"
+         height="34.597065"
+         width="128.02519"
+         id="rect4352"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4354"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4356"
+           sodipodi:role="line">grpc_shim</tspan></text>
+    </g>
+    <g
+       transform="translate(-343.74454,404.32945)"
+       id="g4358">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4360"
+         width="79.911987"
+         height="34.597065"
+         x="473.31616"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4362"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4364"
+           x="512.97003"
+           y="542.95288">interface</tspan></text>
+    </g>
+    <g
+       id="g4366"
+       transform="translate(-343.59434,365.68868)">
+      <rect
+         y="521.06683"
+         x="473.30771"
+         height="34.597065"
+         width="79.92894"
+         id="rect4368"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4370"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4372"
+           sodipodi:role="line">loader</tspan></text>
+    </g>
+    <g
+       transform="translate(-91.660377,308.12798)"
+       id="g4403">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4405"
+         width="128.02519"
+         height="34.597065"
+         x="449.25955"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4407"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4409"
+           x="512.97003"
+           y="542.95288">adapter_agent</tspan></text>
+    </g>
+    <g
+       id="g4411"
+       transform="translate(60.297163,308.12798)">
+      <rect
+         y="521.06683"
+         x="449.25955"
+         height="34.597065"
+         width="128.02519"
+         id="rect4413"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4415"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4417"
+           sodipodi:role="line">device_agent</tspan></text>
+    </g>
+    <g
+       transform="translate(-244.76416,131.40566)"
+       id="g4443">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4445"
+         width="79.92894"
+         height="34.597065"
+         x="473.30771"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4447"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4449"
+           x="512.97003"
+           y="542.95288">core</tspan></text>
+    </g>
+    <g
+       id="g4505"
+       transform="translate(-204.95283,50.396223)">
+      <g
+         transform="translate(119.48114,-226.13207)"
+         id="g4497">
+        <rect
+           y="892.91333"
+           x="334.41019"
+           height="95.147758"
+           width="639.99927"
+           id="rect4499"
+           style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:2.99999992, 0.99999997;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="125%"
+           id="text4501"
+           y="910.86884"
+           x="965.6394"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             y="910.86884"
+             x="965.6394"
+             id="tspan4503"
+             sodipodi:role="line">config</tspan></text>
+      </g>
+      <g
+         id="g4419"
+         transform="translate(173.20755,155.29878)">
+        <rect
+           y="521.06683"
+           x="449.25955"
+           height="34.597065"
+           width="128.02519"
+           id="rect4421"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="125%"
+           id="text4423"
+           y="542.95288"
+           x="512.97003"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             y="542.95288"
+             x="512.97003"
+             id="tspan4425"
+             sodipodi:role="line">config_root</tspan></text>
+      </g>
+      <g
+         transform="translate(309.79557,155.29878)"
+         id="g4427">
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4429"
+           width="128.02519"
+           height="34.597065"
+           x="449.25955"
+           y="521.06683" />
+        <text
+           xml:space="preserve"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           x="512.97003"
+           y="542.95288"
+           id="text4431"
+           sodipodi:linespacing="125%"><tspan
+             sodipodi:role="line"
+             id="tspan4433"
+             x="512.97003"
+             y="542.95288">config_node</tspan></text>
+      </g>
+      <g
+         id="g4435"
+         transform="translate(445.849,155.29878)">
+        <rect
+           y="521.06683"
+           x="449.25955"
+           height="34.597065"
+           width="128.02519"
+           id="rect4437"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="125%"
+           id="text4439"
+           y="542.95288"
+           x="512.97003"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             y="542.95288"
+             x="512.97003"
+             id="tspan4441"
+             sodipodi:role="line">config_rev</tspan></text>
+      </g>
+      <g
+         id="g4451"
+         transform="translate(445.849,198.06608)">
+        <rect
+           y="521.06683"
+           x="449.25955"
+           height="34.597065"
+           width="128.02519"
+           id="rect4453"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="100%"
+           id="text4455"
+           y="534.95288"
+           x="512.97003"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:100%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             id="tspan4459"
+             y="534.95288"
+             x="512.97003"
+             sodipodi:role="line">config_rev</tspan><tspan
+             id="tspan4487"
+             y="547.45288"
+             x="512.97003"
+             sodipodi:role="line">_persisted</tspan></text>
+      </g>
+      <g
+         transform="translate(37.421385,197.79878)"
+         id="g4461">
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4463"
+           width="128.02519"
+           height="34.597065"
+           x="449.25955"
+           y="521.06683" />
+        <text
+           xml:space="preserve"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           x="512.97003"
+           y="542.95288"
+           id="text4465"
+           sodipodi:linespacing="125%"><tspan
+             sodipodi:role="line"
+             id="tspan4467"
+             x="512.97003"
+             y="542.95288">config_branch</tspan></text>
+      </g>
+      <g
+         id="g4469"
+         transform="translate(37.421385,155.56608)">
+        <rect
+           y="521.06683"
+           x="449.25955"
+           height="34.597065"
+           width="128.02519"
+           id="rect4471"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="125%"
+           id="text4473"
+           y="542.95288"
+           x="512.97003"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             y="542.95288"
+             x="512.97003"
+             id="tspan4475"
+             sodipodi:role="line">config_txn</tspan></text>
+      </g>
+      <g
+         transform="translate(173.47474,198.06608)"
+         id="g4477">
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4479"
+           width="128.02519"
+           height="34.597065"
+           x="449.25955"
+           y="521.06683" />
+        <text
+           xml:space="preserve"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           x="512.97003"
+           y="542.95288"
+           id="text4481"
+           sodipodi:linespacing="125%"><tspan
+             sodipodi:role="line"
+             id="tspan4483"
+             x="512.97003"
+             y="542.95288">merge_3way</tspan></text>
+      </g>
+      <g
+         id="g4489"
+         transform="translate(578.74279,158.90724)">
+        <rect
+           y="560.35931"
+           x="179.77777"
+           height="34.597065"
+           width="128.02519"
+           id="rect4491"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="125%"
+           id="text4493"
+           y="582.24536"
+           x="243.53607"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             y="582.24536"
+             x="243.53607"
+             id="tspan4495"
+             sodipodi:role="line">config_proxy</tspan></text>
+      </g>
+    </g>
+    <g
+       id="g4451-4"
+       transform="translate(212.25471,308.12798)">
+      <rect
+         y="521.06683"
+         x="449.25955"
+         height="34.597065"
+         width="128.02519"
+         id="rect4453-3"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="100%"
+         id="text4455-3"
+         y="534.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:100%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           id="tspan4459-3"
+           y="534.95288"
+           x="512.97003"
+           sodipodi:role="line">logical_device</tspan><tspan
+           id="tspan4487-1"
+           y="547.45288"
+           x="512.97003"
+           sodipodi:role="line">_agent</tspan></text>
+    </g>
+    <g
+       transform="translate(60.297163,147.30189)"
+       id="g4571">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4573"
+         width="128.02519"
+         height="34.597065"
+         x="449.25955"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4575"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4577"
+           x="512.97003"
+           y="542.95288">dispatcher</tspan></text>
+    </g>
+    <g
+       id="g4579"
+       transform="translate(212.25471,147.30189)">
+      <rect
+         y="521.06683"
+         x="449.25955"
+         height="34.597065"
+         width="128.02519"
+         id="rect4581"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4583"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4585"
+           sodipodi:role="line">local_handler</tspan></text>
+    </g>
+    <g
+       transform="translate(-91.660377,147.30189)"
+       id="g4587">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4589"
+         width="128.02519"
+         height="34.597065"
+         x="449.25955"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4591"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4593"
+           x="512.97003"
+           y="542.95288">global_handler</tspan></text>
+    </g>
+    <g
+       id="g4603"
+       transform="translate(-409.12264,28.905663)">
+      <rect
+         y="521.06683"
+         x="473.30771"
+         height="34.597065"
+         width="79.92894"
+         id="rect4605"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4607"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4609"
+           sodipodi:role="line">main</tspan></text>
+    </g>
+    <g
+       transform="translate(-372.50318,77.758583)"
+       id="g4611">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4613"
+         width="129.64592"
+         height="34.597065"
+         x="448.44922"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4615"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4617"
+           x="512.97003"
+           y="542.95288">coordinator</tspan></text>
+    </g>
+    <g
+       id="g4619"
+       transform="translate(-409.02831,121.36791)">
+      <rect
+         y="521.06683"
+         x="486.13791"
+         height="34.597065"
+         width="54.268562"
+         id="rect4621"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999982;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4623"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4625"
+           sodipodi:role="line">leader</tspan></text>
+    </g>
+    <g
+       id="g4627"
+       transform="translate(-336.88679,121.36791)">
+      <rect
+         y="521.06683"
+         x="485.336"
+         height="34.597065"
+         width="55.872337"
+         id="rect4629"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4631"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4633"
+           sodipodi:role="line">worker</tspan></text>
+    </g>
+    <g
+       id="g4691"
+       transform="translate(-44.90566,114.09434)">
+      <g
+         id="g4657"
+         transform="translate(-21.603774,-426.21699)">
+        <rect
+           style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:3.00000004, 1.00000001;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4659"
+           width="678.23688"
+           height="60.413372"
+           x="296.17258"
+           y="893.16669" />
+        <text
+           xml:space="preserve"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           x="965.6394"
+           y="910.86884"
+           id="text4661"
+           sodipodi:linespacing="125%"><tspan
+             sodipodi:role="line"
+             id="tspan4663"
+             x="965.6394"
+             y="910.86884">northbound</tspan></text>
+      </g>
+      <g
+         id="g4665"
+         transform="translate(47.783015,-41.660364)">
+        <rect
+           y="521.06683"
+           x="449.25955"
+           height="34.597065"
+           width="128.02519"
+           id="rect4667"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="125%"
+           id="text4669"
+           y="542.95288"
+           x="512.97003"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             y="542.95288"
+             x="512.97003"
+             id="tspan4671"
+             sodipodi:role="line">grpc_server</tspan></text>
+      </g>
+      <g
+         transform="translate(243.34906,-41.660364)"
+         id="g4673">
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4675"
+           width="128.02519"
+           height="34.597065"
+           x="449.25955"
+           y="521.06683" />
+        <text
+           xml:space="preserve"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           x="512.97003"
+           y="542.95288"
+           id="text4677"
+           sodipodi:linespacing="125%"><tspan
+             sodipodi:role="line"
+             id="tspan4679"
+             x="512.97003"
+             y="542.95288">kafka_client</tspan></text>
+      </g>
+      <g
+         id="g4681"
+         transform="translate(-147.78302,-41.660364)">
+        <rect
+           y="521.06683"
+           x="449.25955"
+           height="34.597065"
+           width="128.02519"
+           id="rect4683"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="100%"
+           id="text4685"
+           y="536.95288"
+           x="512.97003"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:100%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             y="536.95288"
+             x="512.97003"
+             id="tspan4687"
+             sodipodi:role="line">rest_server/</tspan><tspan
+             id="tspan4689"
+             y="549.45288"
+             x="512.97003"
+             sodipodi:role="line">health_check</tspan></text>
+      </g>
+    </g>
+    <g
+       id="g4781"
+       transform="translate(-225.72642,331.36792)">
+      <g
+         transform="translate(-543.63208,-537.67925)"
+         id="g4730">
+        <rect
+           y="893.16675"
+           x="844.66315"
+           height="78.856773"
+           width="129.74631"
+           id="rect4732"
+           style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:3.00000013, 1.00000004;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="125%"
+           id="text4734"
+           y="910.86884"
+           x="965.6394"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             y="910.86884"
+             x="965.6394"
+             id="tspan4736"
+             sodipodi:role="line">protos</tspan></text>
+      </g>
+      <g
+         transform="translate(-165.37736,-142.69811)"
+         id="g4710">
+        <rect
+           y="545.87817"
+           x="493.30771"
+           height="24.974424"
+           width="79.92894"
+           id="rect4726"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4724"
+           width="79.92894"
+           height="24.974424"
+           x="489.30771"
+           y="541.87817" />
+        <rect
+           y="537.87817"
+           x="485.30771"
+           height="24.974424"
+           width="79.92894"
+           id="rect4722"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4720"
+           width="79.92894"
+           height="24.974424"
+           x="481.30771"
+           y="533.87817" />
+        <rect
+           y="529.87817"
+           x="477.30771"
+           height="24.974424"
+           width="79.92894"
+           id="rect4718"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4712"
+           width="79.92894"
+           height="24.974424"
+           x="473.30771"
+           y="525.87817" />
+        <text
+           xml:space="preserve"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           x="512.97003"
+           y="542.95288"
+           id="text4714"
+           sodipodi:linespacing="125%"><tspan
+             sodipodi:role="line"
+             id="tspan4716"
+             x="512.97003"
+             y="542.95288">voltha</tspan></text>
+      </g>
+    </g>
+    <g
+       id="g4796"
+       transform="translate(-372.50318,257.75858)">
+      <rect
+         y="521.06683"
+         x="448.44922"
+         height="34.597065"
+         width="129.64592"
+         id="rect4798"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4800"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4802"
+           sodipodi:role="line">event_bus</tspan></text>
+    </g>
+    <g
+       transform="translate(-372.50318,303.75858)"
+       id="g4804">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4806"
+         width="129.64592"
+         height="34.597065"
+         x="448.44922"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4808"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4810"
+           x="512.97003"
+           y="542.95288">frameio</tspan></text>
+    </g>
+    <g
+       transform="translate(-616.85849,-529.61321)"
+       id="g4812">
+      <rect
+         y="892.91339"
+         x="680.82526"
+         height="177.90683"
+         width="293.58423"
+         id="rect4814"
+         style="fill:#a9e1b9;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:3.00000001, 1.00000002;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4816"
+         y="910.86884"
+         x="965.6394"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="910.86884"
+           x="965.6394"
+           id="tspan4818"
+           sodipodi:role="line">chameleon</tspan></text>
+    </g>
+    <g
+       transform="translate(-409.12264,-157.09434)"
+       id="g4820">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4822"
+         width="79.92894"
+         height="34.597065"
+         x="473.30771"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4824"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4826"
+           x="512.97003"
+           y="542.95288">main</tspan></text>
+    </g>
+    <g
+       id="g4828"
+       transform="translate(-256.85849,-529.61321)">
+      <rect
+         style="fill:#cbcff7;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:3, 1.00000001;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4830"
+         width="353.72574"
+         height="177.90683"
+         x="620.68378"
+         y="892.91339" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="965.6394"
+         y="910.86884"
+         id="text4832"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4834"
+           x="965.6394"
+           y="910.86884">ofagent</tspan></text>
+    </g>
+    <g
+       id="g4836"
+       transform="translate(-109.12264,-157.09434)">
+      <rect
+         y="521.06683"
+         x="473.30771"
+         height="34.597065"
+         width="79.92894"
+         id="rect4838"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4840"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4842"
+           sodipodi:role="line">main</tspan></text>
+    </g>
+    <g
+       transform="translate(-56.858487,-529.61321)"
+       id="g4844">
+      <rect
+         y="892.91339"
+         x="781.0611"
+         height="177.90683"
+         width="193.34837"
+         id="rect4846"
+         style="fill:#f5d5bf;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:3, 1.00000001;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4848"
+         y="910.86884"
+         x="965.6394"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="910.86884"
+           x="965.6394"
+           id="tspan4850"
+           sodipodi:role="line">netconf</tspan><tspan
+           y="926.49384"
+           x="965.6394"
+           sodipodi:role="line"
+           id="tspan5026">_server</tspan></text>
+    </g>
+    <g
+       transform="translate(250.87736,-157.09434)"
+       id="g4852">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4854"
+         width="79.92894"
+         height="34.597065"
+         x="473.30771"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4856"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4858"
+           x="512.97003"
+           y="542.95288">main</tspan></text>
+    </g>
+    <g
+       id="g4860"
+       transform="translate(-305.28301,-148.24527)">
+      <rect
+         y="521.06683"
+         x="462.89163"
+         height="34.597065"
+         width="100.76104"
+         id="rect4862"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4864"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4866"
+           sodipodi:role="line">web_server</tspan></text>
+    </g>
+    <g
+       transform="translate(-215.11321,-96.509427)"
+       id="g4868">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4870"
+         width="94.345947"
+         height="34.597065"
+         x="466.09918"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4872"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4874"
+           x="512.97003"
+           y="542.95288">swagger_ui</tspan></text>
+    </g>
+    <g
+       id="g4876"
+       transform="translate(-215.91509,-23.943403)">
+      <rect
+         y="521.06683"
+         x="466.09918"
+         height="34.597065"
+         width="94.345947"
+         id="rect4878"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4880"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4882"
+           sodipodi:role="line">grpc_client</tspan></text>
+    </g>
+    <g
+       id="g4884"
+       transform="translate(-317.72641,-96.113187)">
+      <rect
+         y="521.06683"
+         x="471.7124"
+         height="34.597065"
+         width="83.11953"
+         id="rect4886"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="100%"
+         id="text4888"
+         y="534.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:100%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="534.95288"
+           x="512.97003"
+           id="tspan4890"
+           sodipodi:role="line">swagger</tspan><tspan
+           y="547.45288"
+           x="512.97003"
+           sodipodi:role="line"
+           id="tspan5024">_gen</tspan></text>
+    </g>
+    <g
+       transform="translate(-407.16981,-96.113207)"
+       id="g4892">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999982;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4894"
+         width="65.47802"
+         height="34.597065"
+         x="480.53314"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4896"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4898"
+           x="512.97003"
+           y="542.95288">gw_gen</tspan></text>
+    </g>
+    <g
+       transform="translate(-191.25472,113.67925)"
+       id="g4900">
+      <g
+         id="g4902"
+         transform="translate(-543.63208,-537.67925)">
+        <rect
+           style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:3.00000013, 1.00000005;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4904"
+           width="164.22743"
+           height="67.630356"
+           x="808.18207"
+           y="889.16675" />
+        <text
+           xml:space="preserve"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           x="963.6394"
+           y="906.86884"
+           id="text4906"
+           sodipodi:linespacing="125%"><tspan
+             sodipodi:role="line"
+             id="tspan4908"
+             x="963.6394"
+             y="906.86884">compiled gw modules</tspan></text>
+      </g>
+      <g
+         id="g4910"
+         transform="translate(-167.37736,-148.69811)">
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4916"
+           width="79.92894"
+           height="24.974424"
+           x="485.30771"
+           y="537.87817" />
+        <rect
+           y="533.87817"
+           x="481.30771"
+           height="24.974424"
+           width="79.92894"
+           id="rect4918"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect4920"
+           width="79.92894"
+           height="24.974424"
+           x="477.30771"
+           y="529.87817" />
+        <rect
+           y="525.87817"
+           x="473.30771"
+           height="24.974424"
+           width="79.92894"
+           id="rect4922"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="125%"
+           id="text4924"
+           y="542.95288"
+           x="512.97003"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             y="542.95288"
+             x="512.97003"
+             id="tspan4926"
+             sodipodi:role="line">voltha</tspan></text>
+      </g>
+    </g>
+    <g
+       transform="translate(45.506023,-67.354407)"
+       id="g4928">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4930"
+         width="94.345947"
+         height="34.597065"
+         x="466.09918"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4932"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4934"
+           x="512.97003"
+           y="542.95288">agent</tspan></text>
+    </g>
+    <g
+       transform="translate(45.843903,-111.44857)"
+       id="g4944">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4946"
+         width="94.345947"
+         height="34.597065"
+         x="466.09918"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:100%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="534.95288"
+         id="text4948"
+         sodipodi:linespacing="100%"><tspan
+           sodipodi:role="line"
+           id="tspan4950"
+           x="512.97003"
+           y="534.95288">connection</tspan><tspan
+           sodipodi:role="line"
+           x="512.97003"
+           y="547.45288"
+           id="tspan4968">_mgr</tspan></text>
+    </g>
+    <g
+       id="g4952"
+       transform="translate(44.857223,-23.551916)">
+      <rect
+         y="521.06683"
+         x="466.09918"
+         height="34.597065"
+         width="94.345947"
+         id="rect4954"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="125%"
+         id="text4956"
+         y="542.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="542.95288"
+           x="512.97003"
+           id="tspan4958"
+           sodipodi:role="line">grpc_client</tspan></text>
+    </g>
+    <g
+       transform="translate(146.06018,-67.264017)"
+       id="g4960">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4962"
+         width="94.345947"
+         height="34.597065"
+         x="466.09918"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4964"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4966"
+           x="512.97003"
+           y="542.95288">loxi</tspan></text>
+    </g>
+    <g
+       transform="translate(146.10908,-23.551916)"
+       id="g4970">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect4972"
+         width="94.345947"
+         height="34.597065"
+         x="466.09918"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text4974"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan4976"
+           x="512.97003"
+           y="542.95288">converter</tspan></text>
+    </g>
+    <g
+       id="g4986"
+       transform="translate(146.07974,-110.94562)">
+      <rect
+         y="521.06683"
+         x="466.09918"
+         height="34.597065"
+         width="94.345947"
+         id="rect4988"
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+      <text
+         sodipodi:linespacing="100%"
+         id="text4990"
+         y="534.95288"
+         x="512.97003"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:100%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         xml:space="preserve"><tspan
+           y="534.95288"
+           x="512.97003"
+           id="tspan4992"
+           sodipodi:role="line">of</tspan><tspan
+           id="tspan4994"
+           y="547.45288"
+           x="512.97003"
+           sodipodi:role="line">_connection</tspan></text>
+    </g>
+    <g
+       transform="translate(71.103773,97.641513)"
+       id="g4996">
+      <g
+         id="g4998"
+         transform="translate(-543.63208,-537.67925)">
+        <rect
+           style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:3.00000013, 1.00000004;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect5000"
+           width="129.74631"
+           height="78.856773"
+           x="844.66315"
+           y="893.16675" />
+        <text
+           xml:space="preserve"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:end;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:end;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           x="965.6394"
+           y="910.86884"
+           id="text5002"
+           sodipodi:linespacing="125%"><tspan
+             sodipodi:role="line"
+             id="tspan5004"
+             x="965.6394"
+             y="910.86884">protos</tspan></text>
+      </g>
+      <g
+         id="g5006"
+         transform="translate(-165.37736,-142.69811)">
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect5008"
+           width="79.92894"
+           height="24.974424"
+           x="493.30771"
+           y="545.87817" />
+        <rect
+           y="541.87817"
+           x="489.30771"
+           height="24.974424"
+           width="79.92894"
+           id="rect5010"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect5012"
+           width="79.92894"
+           height="24.974424"
+           x="485.30771"
+           y="537.87817" />
+        <rect
+           y="533.87817"
+           x="481.30771"
+           height="24.974424"
+           width="79.92894"
+           id="rect5014"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <rect
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+           id="rect5016"
+           width="79.92894"
+           height="24.974424"
+           x="477.30771"
+           y="529.87817" />
+        <rect
+           y="525.87817"
+           x="473.30771"
+           height="24.974424"
+           width="79.92894"
+           id="rect5018"
+           style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999994;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" />
+        <text
+           sodipodi:linespacing="125%"
+           id="text5020"
+           y="542.95288"
+           x="512.97003"
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+           xml:space="preserve"><tspan
+             y="542.95288"
+             x="512.97003"
+             id="tspan5022"
+             sodipodi:role="line">voltha</tspan></text>
+      </g>
+    </g>
+    <g
+       transform="translate(304.85722,-23.551916)"
+       id="g5028">
+      <rect
+         style="fill:#ececec;fill-opacity:1;stroke:#000000;stroke-width:0.99999988;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
+         id="rect5030"
+         width="94.345947"
+         height="34.597065"
+         x="466.09918"
+         y="521.06683" />
+      <text
+         xml:space="preserve"
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.5px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+         x="512.97003"
+         y="542.95288"
+         id="text5032"
+         sodipodi:linespacing="125%"><tspan
+           sodipodi:role="line"
+           id="tspan5034"
+           x="512.97003"
+           y="542.95288">grpc_client</tspan></text>
+    </g>
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:20px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+       x="311.14832"
+       y="1042.5872"
+       id="text4832-0"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan4834-9"
+         x="311.14832"
+         y="1042.5872">Voltha Component Snapshot 11/30/2016</tspan></text>
+  </g>
+  <g
+     inkscape:groupmode="layer"
+     id="layer2"
+     inkscape:label="REST Device Operation"
+     style="display:inline">
+    <path
+       style="fill:none;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow1Send)"
+       d="m 210.19277,38.622045 0,45.130738"
+       id="path5074"
+       inkscape:connector-curvature="0"
+       sodipodi:nodetypes="cc" />
+    <path
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#marker5642);color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       d="m 177.16535,120.11811 0,7.08661 -31.88976,0 -0.28351,75.827"
+       id="path5890"
+       inkscape:connector-curvature="0"
+       sodipodi:nodetypes="cccc" />
+    <path
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#marker6186);color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       d="m 194.88189,222.87401 53.14961,0"
+       id="path6232"
+       inkscape:connector-curvature="0" />
+    <path
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#marker6254);color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       d="m 294.09449,244.13386 0,28.34645 219.68504,0 0,35.43307"
+       id="path6246"
+       inkscape:connector-curvature="0" />
+    <path
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#marker6338);color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       d="m 513.77953,339.80315 0,21.25984 -92.12599,0 0,21.25984"
+       id="path6330"
+       inkscape:connector-curvature="0" />
+    <path
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#marker6428);color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       d="m 485.43307,400.03937 24.80315,0"
+       id="path6420"
+       inkscape:connector-curvature="0" />
+    <path
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#marker6524);color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       d="m 637.79528,400.03937 24.80315,0"
+       id="path6516"
+       inkscape:connector-curvature="0" />
+    <path
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#marker6626);color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       d="m 726.41128,415.45663 -0.0333,12.9292"
+       id="path6618"
+       inkscape:connector-curvature="0"
+       sodipodi:nodetypes="cc" />
+    <path
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#marker7299);color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       d="m 574.01575,524.05512 0,17.71653"
+       id="path6756"
+       inkscape:connector-curvature="0" />
+    <path
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#marker6878);color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       d="m 510.23622,559.48819 -24.80315,0"
+       id="path6870"
+       inkscape:connector-curvature="0" />
+    <path
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#ff0000;stroke-width:3;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#marker6998);color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       d="m 421.65354,577.20472 0,53.14961"
+       id="path6990"
+       inkscape:connector-curvature="0" />
+    <rect
+       style="color:#000000;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:none;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:1.99999988;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:1;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
+       id="rect7417"
+       width="639.93085"
+       height="96.591026"
+       x="248.03148"
+       y="428.38583"
+       ry="0" />
+    <text
+       xml:space="preserve"
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:20px;line-height:125%;font-family:Ubuntu;-inkscape-font-specification:'Ubuntu, Normal';text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;display:inline;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+       x="495.24677"
+       y="45.700596"
+       id="text4832-0-1"
+       sodipodi:linespacing="125%"><tspan
+         sodipodi:role="line"
+         id="tspan4834-9-4"
+         x="495.24677"
+         y="45.700596">Propagation of REST Operation on a Device</tspan></text>
+  </g>
+</svg>
diff --git a/ofagent/connection_mgr.py b/ofagent/connection_mgr.py
index 17af7b2..8d25891 100644
--- a/ofagent/connection_mgr.py
+++ b/ofagent/connection_mgr.py
@@ -122,7 +122,7 @@
         while True:
             log.info('Retrieve devices from voltha')
             try:
-                stub = voltha_pb2.VolthaLogicalLayerStub(self.channel)
+                stub = voltha_pb2.VolthaLocalServiceStub(self.channel)
                 devices = stub.ListLogicalDevices(Empty()).items
                 for device in devices:
                     log.info("Devices {} -> {}".format(device.id,
diff --git a/ofagent/converter.py b/ofagent/converter.py
index 769627b..9ab9636 100644
--- a/ofagent/converter.py
+++ b/ofagent/converter.py
@@ -56,9 +56,9 @@
     return of13.common.port_desc(**kw)
 
 def make_loxi_match(match):
-    assert match['type'] == pb2.OFPMT_OXM
+    assert match.get('type', pb2.OFPMT_STANDARD) == pb2.OFPMT_OXM
     loxi_match_fields = []
-    for oxm_field in match['oxm_fields']:
+    for oxm_field in match.get('oxm_fields', []):
         assert oxm_field['oxm_class'] == pb2.OFPXMC_OPENFLOW_BASIC
         ofb_field = oxm_field['ofb_field']
         field_type = ofb_field.get('type', 0)
@@ -93,6 +93,7 @@
 
     kw['match'] = make_loxi_match(kw['match'])
     kw['instructions'] = [make_loxi_instruction(i) for i in kw['instructions']]
+    del kw['id']
     return of13.flow_stats_entry(**kw)
 
 def ofp_packet_in_to_loxi_packet_in(pb):
@@ -101,11 +102,28 @@
         kw['match'] = make_loxi_match(kw['match'])
     return of13.message.packet_in(**kw)
 
+def ofp_group_entry_to_loxi_group_entry(pb):
+    return of13.group_stats_entry(
+        group_id=pb.stats.group_id,
+        ref_count=pb.stats.ref_count,
+        packet_count=pb.stats.packet_count,
+        byte_count=pb.stats.byte_count,
+        duration_sec=pb.stats.duration_sec,
+        duration_nsec=pb.stats.duration_nsec,
+        bucket_stats=[to_loxi(bstat) for bstat in pb.stats.bucket_stats])
+
+def ofp_bucket_counter_to_loxi_bucket_counter(pb):
+    return of13.bucket_counter(
+        packet_count=pb.packet_count,
+        byte_count=pb.byte_count)
+
 
 to_loxi_converters = {
     pb2.ofp_port: ofp_port_to_loxi_port_desc,
     pb2.ofp_flow_stats: ofp_flow_stats_to_loxi_flow_stats,
     pb2.ofp_packet_in: ofp_packet_in_to_loxi_packet_in,
+    pb2.ofp_group_entry: ofp_group_entry_to_loxi_group_entry,
+    pb2.ofp_bucket_counter: ofp_bucket_counter_to_loxi_bucket_counter
 }
 
 
diff --git a/ofagent/grpc_client.py b/ofagent/grpc_client.py
index a4475b5..5d42566 100644
--- a/ofagent/grpc_client.py
+++ b/ofagent/grpc_client.py
@@ -26,8 +26,8 @@
 from twisted.internet import threads
 from twisted.internet.defer import inlineCallbacks, returnValue, DeferredQueue
 
-from protos.voltha_pb2 import ID, VolthaLogicalLayerStub, FlowTableUpdate, \
-    GroupTableUpdate, PacketOut
+from protos.voltha_pb2 import ID, VolthaLocalServiceStub, FlowTableUpdate, \
+    FlowGroupTableUpdate, PacketOut
 from google.protobuf import empty_pb2
 
 
@@ -40,7 +40,7 @@
 
         self.connection_manager = connection_manager
         self.channel = channel
-        self.logical_stub = VolthaLogicalLayerStub(channel)
+        self.local_stub = VolthaLocalServiceStub(channel)
 
         self.stopped = False
 
@@ -74,14 +74,14 @@
 
         def stream_packets_out():
             generator = packet_generator()
-            self.logical_stub.StreamPacketsOut(generator)
+            self.local_stub.StreamPacketsOut(generator)
 
         reactor.callInThread(stream_packets_out)
 
     def start_packet_in_stream(self):
 
         def receive_packet_in_stream():
-            streaming_rpc_method = self.logical_stub.ReceivePacketsIn
+            streaming_rpc_method = self.local_stub.ReceivePacketsIn
             iterator = streaming_rpc_method(empty_pb2.Empty())
             for packet_in in iterator:
                 reactor.callFromThread(self.packet_in_queue.put,
@@ -110,14 +110,14 @@
     def get_port_list(self, device_id):
         req = ID(id=device_id)
         res = yield threads.deferToThread(
-            self.logical_stub.ListLogicalDevicePorts, req)
+            self.local_stub.ListLogicalDevicePorts, req)
         returnValue(res.items)
 
     @inlineCallbacks
     def get_device_info(self, device_id):
         req = ID(id=device_id)
         res = yield threads.deferToThread(
-            self.logical_stub.GetLogicalDevice, req)
+            self.local_stub.GetLogicalDevice, req)
         returnValue(res)
 
     @inlineCallbacks
@@ -127,29 +127,29 @@
             flow_mod=flow_mod
         )
         res = yield threads.deferToThread(
-            self.logical_stub.UpdateFlowTable, req)
+            self.local_stub.UpdateLogicalDeviceFlowTable, req)
         returnValue(res)
 
     @inlineCallbacks
     def update_group_table(self, device_id, group_mod):
-        req = GroupTableUpdate(
+        req = FlowGroupTableUpdate(
             id=device_id,
             group_mod=group_mod
         )
         res = yield threads.deferToThread(
-            self.logical_stub.UpdateGroupTable, req)
+            self.local_stub.UpdateLogicalDeviceFlowGroupTable, req)
         returnValue(res)
 
     @inlineCallbacks
     def list_flows(self, device_id):
         req = ID(id=device_id)
         res = yield threads.deferToThread(
-            self.logical_stub.ListDeviceFlows, req)
+            self.local_stub.ListLogicalDeviceFlows, req)
         returnValue(res.items)
 
     @inlineCallbacks
     def list_groups(self, device_id):
         req = ID(id=device_id)
         res = yield threads.deferToThread(
-            self.logical_stub.ListDeviceFlowGroups, req)
+            self.local_stub.ListLogicalDeviceFlowGroups, req)
         returnValue(res.items)
diff --git a/ofagent/of_protocol_handler.py b/ofagent/of_protocol_handler.py
index 1c284eb..b79786f 100644
--- a/ofagent/of_protocol_handler.py
+++ b/ofagent/of_protocol_handler.py
@@ -204,7 +204,7 @@
         self.cxn.send(ofp.message.port_desc_stats_reply(
             xid=req.xid,
             #flags=None,
-            entries=[to_loxi(port) for port in port_list]
+            entries=[to_loxi(port.ofp_port) for port in port_list]
         ))
 
     def handle_queue_stats_request(self, req):
diff --git a/requirements.txt b/requirements.txt
index 3302065..4f05fbb 100755
--- a/requirements.txt
+++ b/requirements.txt
@@ -12,6 +12,7 @@
 jinja2>=2.8
 jsonpatch>=1.14
 klein>=15.3.1
+networkx>=1.11
 nose>=1.3.7
 nose-exclude>=0.5.0
 mock>=1.3.0
@@ -21,6 +22,7 @@
 pep8-naming>=0.3.3
 protobuf-to-dict>=0.1.0
 pyflakes>=1.0.0
+pygraphviz>=1.3.1
 pylint>=1.5.2
 #pypcap>=1.1.5
 pyOpenSSL>=0.13
diff --git a/tests/itests/docutests/OLT-TESTING.md b/tests/itests/docutests/OLT-TESTING.md
index b2f7f1e..859ef72 100644
--- a/tests/itests/docutests/OLT-TESTING.md
+++ b/tests/itests/docutests/OLT-TESTING.md
@@ -62,7 +62,7 @@
 ```
 docker pull onosproject/onos
 docker run -ti --rm -p 6633:6653 \
-    -e ONOS_APPS="drivers,openflow" onosproject/ono
+    -e ONOS_APPS="drivers,openflow" onosproject/onos
 ```
 
 In another terminal window, start the pyofagent just as above:
diff --git a/tests/itests/voltha/rest_base.py b/tests/itests/voltha/rest_base.py
new file mode 100644
index 0000000..b304b03
--- /dev/null
+++ b/tests/itests/voltha/rest_base.py
@@ -0,0 +1,56 @@
+from unittest import TestCase
+from requests import get, post, put, patch, delete
+
+
+class RestBase(TestCase):
+
+    base_url = 'http://localhost:8881'
+
+    def url(self, path):
+        while path.startswith('/'):
+            path = path[1:]
+        return self.base_url + '/' + path
+
+    def verify_content_type_and_return(self, response, expected_content_type):
+        if 200 <= response.status_code < 300:
+            self.assertEqual(
+                response.headers['Content-Type'],
+                expected_content_type,
+                msg='Content-Type %s != %s; msg:%s' % (
+                     response.headers['Content-Type'],
+                     expected_content_type,
+                     response.content))
+            if expected_content_type == 'application/json':
+                return response.json()
+            else:
+                return response.content
+
+    def get(self, path, expected_code=200,
+            expected_content_type='application/json'):
+        r = get(self.url(path))
+        self.assertEqual(r.status_code, expected_code,
+                         msg='Code %d!=%d; msg:%s' % (
+                             r.status_code, expected_code, r.content))
+        return self.verify_content_type_and_return(r, expected_content_type)
+
+    def post(self, path, json_dict=None, expected_code=201):
+        r = post(self.url(path), json=json_dict)
+        self.assertEqual(r.status_code, expected_code,
+                         msg='Code %d!=%d; msg:%s' % (
+                             r.status_code, expected_code, r.content))
+        return self.verify_content_type_and_return(r, 'application/json')
+
+    def put(self, path, json_dict, expected_code=200):
+        r = put(self.url(path), json=json_dict)
+        self.assertEqual(r.status_code, expected_code,
+                         msg='Code %d!=%d; msg:%s' % (
+                             r.status_code, expected_code, r.content))
+        return self.verify_content_type_and_return(r, 'application/json')
+
+    def delete(self, path, expected_code=209):
+        r = delete(self.url(path))
+        self.assertEqual(r.status_code, expected_code,
+                         msg='Code %d!=%d; msg:%s' % (
+                             r.status_code, expected_code, r.content))
+
+
diff --git a/tests/itests/voltha/test_cold_activation_sequence.py b/tests/itests/voltha/test_cold_activation_sequence.py
new file mode 100644
index 0000000..ba49e24
--- /dev/null
+++ b/tests/itests/voltha/test_cold_activation_sequence.py
@@ -0,0 +1,233 @@
+from time import time, sleep
+
+from google.protobuf.json_format import MessageToDict
+
+from voltha.core.flow_decomposer import *
+from voltha.protos.device_pb2 import Device
+from voltha.protos.common_pb2 import AdminState, OperStatus
+from voltha.protos import openflow_13_pb2 as ofp
+from tests.itests.voltha.rest_base import RestBase
+
+
+class TestColdActivationSequence(RestBase):
+
+    def wait_till(self, msg, predicate, interval=0.1, timeout=5.0):
+        deadline = time() + timeout
+        while time() < deadline:
+            if predicate():
+                return
+            sleep(interval)
+        self.fail('Timed out while waiting for condition: {}'.format(msg))
+
+    def test_cold_activation_sequence(self):
+        """Complex test-case to cover device activation sequence"""
+
+        self.verify_prerequisites()
+        olt_id = self.add_olt_device()
+        self.verify_device_preprovisioned_state(olt_id)
+        self.activate_device(olt_id)
+        ldev_id = self.wait_for_logical_device(olt_id)
+        onu_ids = self.wait_for_onu_discovery(olt_id)
+        self.verify_logical_ports(ldev_id)
+        self.simulate_eapol_flow_install(ldev_id, olt_id, onu_ids)
+        self.verify_olt_eapol_flow(olt_id)
+        self.verify_onu_forwarding_flows(onu_ids)
+        self.simulate_eapol_start()
+        self.simulate_eapol_request_identity()
+        self.simulate_eapol_response_identity()
+        self.simulate_eapol_request()
+        self.simulate_eapol_response()
+        self.simulate_eapol_success()
+        self.install_and_verify_dhcp_flows()
+        self.install_and_verify_igmp_flows()
+        self.install_and_verify_unicast_flows()
+
+    def verify_prerequisites(self):
+        # all we care about is that Voltha is available via REST at the base uri
+        self.get('/api/v1')
+
+    def add_olt_device(self):
+        device = Device(
+            type='simulated_olt',
+            mac_address='00:00:00:00:00:01'
+        )
+        device = self.post('/api/v1/devices', MessageToDict(device),
+                           expected_code=200)
+        return device['id']
+
+    def verify_device_preprovisioned_state(self, olt_id):
+        # we also check that what we read back at this point is the same
+        # as what we got back on create
+        device = self.get('/api/v1/devices/{}'.format(olt_id))
+        self.assertNotEqual(device['id'], '')
+        self.assertEqual(device['adapter'], 'simulated_olt')
+        self.assertEqual(device['admin_state'], 'PREPROVISIONED')
+        self.assertEqual(device['oper_status'], 'UNKNOWN')
+
+    def activate_device(self, olt_id):
+        path = '/api/v1/devices/{}'.format(olt_id)
+        self.post(path + '/activate', expected_code=200)
+        device = self.get(path)
+        self.assertEqual(device['admin_state'], 'ENABLED')
+
+        self.wait_till(
+            'oper status moves to ACTIVATING or ACTIVE',
+            lambda: self.get(path)['oper_status'] in ('ACTIVATING', 'ACTIVE'),
+            timeout=0.5)
+
+        # eventually, it shall move to active state and by then we shall have
+        # device details filled, connect_status set, and device ports created
+        self.wait_till(
+            'oper status ACTIVE',
+            lambda: self.get(path)['oper_status'] == 'ACTIVE',
+            timeout=0.5)
+        device = self.get(path)
+        self.assertNotEqual(device['software_version'], '')
+        self.assertEqual(device['connect_status'], 'REACHABLE')
+
+        ports = self.get(path + '/ports')['items']
+        self.assertEqual(len(ports), 2)
+
+    def wait_for_logical_device(self, olt_id):
+        # we shall find the logical device id from the parent_id of the olt
+        # (root) device
+        device = self.get(
+            '/api/v1/devices/{}'.format(olt_id))
+        self.assertNotEqual(device['parent_id'], '')
+        logical_device = self.get(
+            '/api/v1/logical_devices/{}'.format(device['parent_id']))
+
+        # the logical device shall be linked back to the hardware device,
+        # and so shall its ports
+        self.assertEqual(logical_device['root_device_id'], device['id'])
+
+        logical_ports = self.get(
+            '/api/v1/logical_devices/{}/ports'.format(
+                logical_device['id'])
+        )['items']
+        self.assertGreaterEqual(len(logical_ports), 1)
+        logical_port = logical_ports[0]
+        self.assertEqual(logical_port['id'], 'nni')
+        self.assertEqual(logical_port['ofp_port']['name'], 'nni')
+        self.assertEqual(logical_port['ofp_port']['port_no'], 129)
+        self.assertEqual(logical_port['device_id'], device['id'])
+        self.assertEqual(logical_port['device_port_no'], 2)
+        return logical_device['id']
+
+    def wait_for_onu_discovery(self, olt_id):
+        # shortly after we shall see the discovery of four new onus, linked to
+        # the olt device
+        def find_our_onus():
+            devices = self.get('/api/v1/devices')['items']
+            return [
+                d for d in devices
+                if d['parent_id'] == olt_id
+            ]
+        self.wait_till(
+            'find four ONUs linked to the olt device',
+            lambda: len(find_our_onus()) >= 4,
+            2
+        )
+
+        # verify that they are properly set
+        onus = find_our_onus()
+        for onu in onus:
+            self.assertEqual(onu['admin_state'], 'ENABLED')
+            self.assertEqual(onu['oper_status'], 'ACTIVE')
+
+        return [onu['id'] for onu in onus]
+
+    def verify_logical_ports(self, ldev_id):
+
+        # at this point we shall see at least 5 logical ports on the
+        # logical device
+        logical_ports = self.get(
+            '/api/v1/logical_devices/{}/ports'.format(ldev_id)
+        )['items']
+        self.assertGreaterEqual(len(logical_ports), 5)
+
+        # verify that all logical ports are LIVE (state=4)
+        for lport in logical_ports:
+            self.assertEqual(lport['ofp_port']['state'], 4)
+
+    def simulate_eapol_flow_install(self, ldev_id, olt_id, onu_ids):
+
+        # emulate the flow mod requests that shall arrive from the SDN
+        # controller, one for each ONU
+        lports = self.get(
+            '/api/v1/logical_devices/{}/ports'.format(ldev_id)
+        )['items']
+
+        # device_id -> logical port map, which we will use to construct
+        # our flows
+        lport_map = dict((lp['device_id'], lp) for lp in lports)
+        for onu_id in onu_ids:
+            # if eth_type == 0x888e => send to controller
+            _in_port = lport_map[onu_id]['ofp_port']['port_no']
+            req = ofp.FlowTableUpdate(
+                id='simulated1',
+                flow_mod=mk_simple_flow_mod(
+                    match_fields=[
+                        in_port(_in_port),
+                        vlan_vid(ofp.OFPVID_PRESENT | 0),
+                        eth_type(0x888e)],
+                    actions=[
+                        output(ofp.OFPP_CONTROLLER)
+                    ],
+                    priority=1000
+                )
+            )
+            res = self.post('/api/v1/logical_devices/{}/flows'.format(ldev_id),
+                            MessageToDict(req,
+                                          preserving_proto_field_name=True),
+                            expected_code=200)
+
+        # for sanity, verify that the flows are in the logical device flow table
+        flows = self.get(
+            '/api/v1/logical_devices/{}/flows'.format(ldev_id))['items']
+        self.assertGreaterEqual(len(flows), 4)
+
+    def verify_olt_eapol_flow(self, olt_id):
+        # the olt shall have two flow rules: one is the default and the
+        # second is the result of the eapol forwarding rule:
+        # if eth_type == 0x888e => push vlan(1000); out_port=nni_port
+        flows = self.get('/api/v1/devices/{}/flows'.format(olt_id))['items']
+        self.assertEqual(len(flows), 2)
+        flow = flows[1]
+        self.assertEqual(flow['table_id'], 0)
+        self.assertEqual(flow['priority'], 1000)
+
+        # TODO refine this
+        # self.assertEqual(flow['match'], {})
+        # self.assertEqual(flow['instructions'], [])
+
+    def verify_onu_forwarding_flows(self, onu_ids):
+        pass
+
+    def simulate_eapol_start(self):
+        pass
+
+    def simulate_eapol_request_identity(self):
+        pass
+
+    def simulate_eapol_response_identity(self):
+        pass
+
+    def simulate_eapol_request(self):
+        pass
+
+    def simulate_eapol_response(self):
+        pass
+
+    def simulate_eapol_success(self):
+        pass
+
+    def install_and_verify_dhcp_flows(self):
+        pass
+
+    def install_and_verify_igmp_flows(self):
+        pass
+
+    def install_and_verify_unicast_flows(self):
+        pass
+
diff --git a/tests/itests/voltha/test_flow_decomposer.py b/tests/itests/voltha/test_flow_decomposer.py
new file mode 100644
index 0000000..f131a28
--- /dev/null
+++ b/tests/itests/voltha/test_flow_decomposer.py
@@ -0,0 +1,636 @@
+from unittest import TestCase, main
+
+from jsonpatch import make_patch
+from simplejson import dumps
+
+from voltha.core.flow_decomposer import *
+from voltha.core.logical_device_agent import \
+    flow_stats_entry_from_flow_mod_message
+from voltha.protos.device_pb2 import Device, Port
+from voltha.protos.logical_device_pb2 import LogicalPort
+from google.protobuf.json_format import MessageToDict
+
+
+class TestFlowDecomposer(TestCase, FlowDecomposer):
+
+    def setUp(self):
+        self.logical_device_id = 'pon'
+
+    # methods needed by FlowDecomposer; faking real lookups
+
+    _devices = {
+        'olt':  Device(
+            id='olt',
+            root=True,
+            parent_id='logical_device',
+            ports=[
+                Port(port_no=1, label='pon'),
+                Port(port_no=2, label='nni'),
+            ]
+        ),
+        'onu1': Device(
+            id='onu1',
+            parent_id='olt',
+            ports=[
+                Port(port_no=1, label='pon'),
+                Port(port_no=2, label='uni'),
+            ]
+        ),
+        'onu2': Device(
+            id='onu2',
+            parent_id='olt',
+            ports=[
+                Port(port_no=1, label='pon'),
+                Port(port_no=2, label='uni'),
+            ]
+        ),
+        'onu3': Device(
+            id='onu3',
+            parent_id='olt',
+            ports=[
+                Port(port_no=1, label='pon'),
+                Port(port_no=2, label='uni'),
+            ]
+        ),
+        'onu4': Device(
+            id='onu4',
+            parent_id='olt',
+            ports=[
+                Port(port_no=1, label='pon'),
+                Port(port_no=2, label='uni'),
+            ]
+        ),
+    }
+
+    _logical_ports = {
+        0: LogicalPort(id='0', device_id='olt', device_port_no=2),
+        1: LogicalPort(id='1', device_id='onu1', device_port_no=2),
+        2: LogicalPort(id='2', device_id='onu2', device_port_no=2),
+        3: LogicalPort(id='3', device_id='onu3', device_port_no=2),
+        4: LogicalPort(id='4', device_id='onu4', device_port_no=2),
+    }
+
+    _routes = {
+
+        # DOWNSTREAM ROUTES
+
+        (0, 1): [
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[1],
+                     _devices['olt'].ports[0]),
+            RouteHop(_devices['onu1'],
+                     _devices['onu1'].ports[0],
+                     _devices['onu1'].ports[1]),
+        ],
+        (0, 2): [
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[1],
+                     _devices['olt'].ports[0]),
+            RouteHop(_devices['onu2'],
+                     _devices['onu2'].ports[0],
+                     _devices['onu2'].ports[1]),
+        ],
+        (0, 3): [
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[1],
+                     _devices['olt'].ports[0]),
+            RouteHop(_devices['onu3'],
+                     _devices['onu3'].ports[0],
+                     _devices['onu3'].ports[1]),
+        ],
+        (0, 4): [
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[1],
+                     _devices['olt'].ports[0]),
+            RouteHop(_devices['onu4'],
+                     _devices['onu4'].ports[0],
+                     _devices['onu4'].ports[1]),
+        ],
+
+        # UPSTREAM DATA PLANE
+
+        (1, 0): [
+            RouteHop(_devices['onu1'],
+                     _devices['onu1'].ports[1],
+                     _devices['onu1'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+        (2, 0): [
+            RouteHop(_devices['onu2'],
+                     _devices['onu2'].ports[1],
+                     _devices['onu2'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+        (3, 0): [
+            RouteHop(_devices['onu3'],
+                     _devices['onu3'].ports[1],
+                     _devices['onu3'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+        (4, 0): [
+            RouteHop(_devices['onu4'],
+                     _devices['onu4'].ports[1],
+                     _devices['onu4'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+
+        # UPSTREAM CONTROLLER-BOUND (IN-BAND SENDING TO DATAPLANE)
+
+        (1, ofp.OFPP_CONTROLLER): [
+            RouteHop(_devices['onu1'],
+                     _devices['onu1'].ports[1],
+                     _devices['onu1'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+        (2, ofp.OFPP_CONTROLLER): [
+            RouteHop(_devices['onu2'],
+                     _devices['onu2'].ports[1],
+                     _devices['onu2'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+        (3, ofp.OFPP_CONTROLLER): [
+            RouteHop(_devices['onu3'],
+                     _devices['onu3'].ports[1],
+                     _devices['onu3'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+        (4, ofp.OFPP_CONTROLLER): [
+            RouteHop(_devices['onu4'],
+                     _devices['onu4'].ports[1],
+                     _devices['onu4'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+
+        # UPSTREAM NEXT TABLE BASED
+
+        (1, None): [
+            RouteHop(_devices['onu1'],
+                     _devices['onu1'].ports[1],
+                     _devices['onu1'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+        (2, None): [
+            RouteHop(_devices['onu2'],
+                     _devices['onu2'].ports[1],
+                     _devices['onu2'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+        (3, None): [
+            RouteHop(_devices['onu3'],
+                     _devices['onu3'].ports[1],
+                     _devices['onu3'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+        (4, None): [
+            RouteHop(_devices['onu4'],
+                     _devices['onu4'].ports[1],
+                     _devices['onu4'].ports[0]),
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[0],
+                     _devices['olt'].ports[1]),
+        ],
+
+        # DOWNSTREAM NEXT TABLE BASED
+
+        (0, None): [
+            RouteHop(_devices['olt'],
+                     _devices['olt'].ports[1],
+                     _devices['olt'].ports[0]),
+            None  # 2nd hop is not known yet
+        ]
+
+    }
+
+    _default_rules = {
+        'olt': (
+            OrderedDict((f.id, f) for f in [
+                mk_flow_stat(
+                    match_fields=[
+                        in_port(2),
+                        vlan_vid(ofp.OFPVID_PRESENT | 4000),
+                        vlan_pcp(0)
+                    ],
+                    actions=[
+                        pop_vlan(),
+                        output(1)
+                    ]
+                )
+            ]),
+            OrderedDict()
+        ),
+        'onu1': (
+            OrderedDict((f.id, f) for f in [
+                mk_flow_stat(
+                    match_fields=[
+                        in_port(2),
+                        vlan_vid(ofp.OFPVID_PRESENT | 0)
+                    ],
+                    actions=[
+                        set_field(vlan_vid(ofp.OFPVID_PRESENT | 101)),
+                        output(1)
+                    ]
+                )
+            ]),
+            OrderedDict()
+        ),
+        'onu2': (
+            OrderedDict((f.id, f) for f in [
+                mk_flow_stat(
+                    match_fields=[
+                        in_port(2),
+                        vlan_vid(ofp.OFPVID_PRESENT | 0)
+                    ],
+                    actions=[
+                        set_field(vlan_vid(ofp.OFPVID_PRESENT | 102)),
+                        output(1)
+                    ]
+                )
+            ]),
+            OrderedDict()
+        ),
+        'onu3': (
+            OrderedDict((f.id, f) for f in [
+                mk_flow_stat(
+                    match_fields=[
+                        in_port(2),
+                        vlan_vid(ofp.OFPVID_PRESENT | 0)
+                    ],
+                    actions=[
+                        set_field(vlan_vid(ofp.OFPVID_PRESENT | 103)),
+                        output(1)
+                    ]
+                )
+            ]),
+            OrderedDict()
+        ),
+        'onu4': (
+            OrderedDict((f.id, f) for f in [
+                mk_flow_stat(
+                    match_fields=[
+                        in_port(2),
+                        vlan_vid(ofp.OFPVID_PRESENT | 0)
+                    ],
+                    actions=[
+                        set_field(vlan_vid(ofp.OFPVID_PRESENT | 104)),
+                        output(1)
+                    ]
+                )
+            ]),
+            OrderedDict()
+        )
+    }
+
+    def get_all_default_rules(self):
+        return self._default_rules
+
+    def get_default_rules(self, device_id):
+        return self._default_rules[device_id]
+
+    def get_route(self, in_port_no, out_port_no):
+        return self._routes[(in_port_no, out_port_no)]
+
+    # ~~~~~~~~~~~~~~~~~~~~~~~~~ HELPER METHODS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    def assertFlowsEqual(self, flow1, flow2):
+        if flow1 != flow2:
+            self.fail('flow1 %s differs from flow2; differences: \n%s' % (
+                      dumps(MessageToDict(flow1), indent=4),
+                      self.diffMsgs(flow1, flow2)))
+
+    def diffMsgs(self, msg1, msg2):
+        msg1_dict = MessageToDict(msg1)
+        msg2_dict = MessageToDict(msg2)
+        diff = make_patch(msg1_dict, msg2_dict)
+        return dumps(diff.patch, indent=2)
+
+    # ~~~~~~~~~~~~~~~~~~~~~~~~ ACTUAL TEST CASES ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    def test_eapol_reroute_rule_decomposition(self):
+        flow = mk_flow_stat(
+            match_fields=[
+                in_port(1),
+                vlan_vid(ofp.OFPVID_PRESENT | 0),
+                eth_type(0x888e)
+            ],
+            actions=[
+                output(ofp.OFPP_CONTROLLER)
+            ],
+            priority=1000
+        )
+        device_rules = self.decompose_rules([flow], [])
+        onu1_flows, onu1_groups = device_rules['onu1']
+        olt_flows, olt_groups = device_rules['olt']
+        self.assertEqual(len(onu1_flows), 1)
+        self.assertEqual(len(onu1_groups), 0)
+        self.assertEqual(len(olt_flows), 2)
+        self.assertEqual(len(olt_groups), 0)
+        self.assertFlowsEqual(onu1_flows.values()[0], mk_flow_stat(
+            match_fields=[
+                in_port(2),
+                vlan_vid(ofp.OFPVID_PRESENT | 0),
+            ],
+            actions=[
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 101)),
+                output(1)
+            ]
+        ))
+        self.assertFlowsEqual(olt_flows.values()[1], mk_flow_stat(
+            priority=1000,
+            match_fields=[
+                in_port(1),
+                eth_type(0x888e)
+            ],
+            actions=[
+                push_vlan(0x8100),
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 4000)),
+                output(2)
+            ]
+        ))
+
+    def test_dhcp_reroute_rule_decomposition(self):
+        flow = mk_flow_stat(
+            match_fields=[
+                in_port(1),
+                vlan_vid(ofp.OFPVID_PRESENT | 0),
+                eth_type(0x0800),
+                ipv4_dst(0xffffffff),
+                ip_proto(17),
+                udp_src(68),
+                udp_dst(67)
+            ],
+            actions=[output(ofp.OFPP_CONTROLLER)],
+            priority=1000
+        )
+        device_rules = self.decompose_rules([flow], [])
+        onu1_flows, onu1_groups = device_rules['onu1']
+        olt_flows, olt_groups = device_rules['olt']
+        self.assertEqual(len(onu1_flows), 1)
+        self.assertEqual(len(onu1_groups), 0)
+        self.assertEqual(len(olt_flows), 2)
+        self.assertEqual(len(olt_groups), 0)
+        self.assertFlowsEqual(onu1_flows.values()[0], mk_flow_stat(
+            match_fields=[
+                in_port(2),
+                vlan_vid(ofp.OFPVID_PRESENT | 0),
+            ],
+            actions=[
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 101)),
+                output(1)
+            ]
+        ))
+        self.assertFlowsEqual(olt_flows.values()[1], mk_flow_stat(
+            priority=1000,
+            match_fields=[
+                in_port(1),
+                eth_type(0x0800),
+                ipv4_dst(0xffffffff),
+                ip_proto(17),
+                udp_src(68),
+                udp_dst(67)
+            ],
+            actions=[
+                push_vlan(0x8100),
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 4000)),
+                output(2)
+            ]
+        ))
+
+    def test_igmp_reroute_rule_decomposition(self):
+        flow = mk_flow_stat(
+            match_fields=[
+                in_port(1),
+                vlan_vid(ofp.OFPVID_PRESENT | 0),
+                eth_type(0x0800),
+                ip_proto(2)
+            ],
+            actions=[output(ofp.OFPP_CONTROLLER)],
+            priority=1000
+        )
+        device_rules = self.decompose_rules([flow], [])
+        onu1_flows, onu1_groups = device_rules['onu1']
+        olt_flows, olt_groups = device_rules['olt']
+        self.assertEqual(len(onu1_flows), 1)
+        self.assertEqual(len(onu1_groups), 0)
+        self.assertEqual(len(olt_flows), 2)
+        self.assertEqual(len(olt_groups), 0)
+        self.assertFlowsEqual(onu1_flows.values()[0], mk_flow_stat(
+            match_fields=[
+                in_port(2),
+                vlan_vid(ofp.OFPVID_PRESENT | 0),
+            ],
+            actions=[
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 101)),
+                output(1)
+            ]
+        ))
+        self.assertFlowsEqual(olt_flows.values()[1], mk_flow_stat(
+            priority=1000,
+            match_fields=[
+                in_port(1),
+                eth_type(0x0800),
+                ip_proto(2)
+            ],
+            actions=[
+                push_vlan(0x8100),
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 4000)),
+                output(2)
+            ]
+        ))
+
+    def test_unicast_upstream_rule_decomposition(self):
+        flow1 = mk_flow_stat(
+            priority=500,
+            match_fields=[
+                in_port(1),
+                vlan_vid(ofp.OFPVID_PRESENT | 0),
+                vlan_pcp(0)
+            ],
+            actions=[
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 101)),
+            ],
+            next_table_id=1
+        )
+        flow2 = mk_flow_stat(
+            priority=500,
+            match_fields=[
+                in_port(1),
+                vlan_vid(ofp.OFPVID_PRESENT | 101),
+                vlan_pcp(0)
+            ],
+            actions=[
+                push_vlan(0x8100),
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 1000)),
+                set_field(vlan_pcp(0)),
+                output(0)
+            ]
+        )
+        device_rules = self.decompose_rules([flow1, flow2], [])
+        onu1_flows, onu1_groups = device_rules['onu1']
+        olt_flows, olt_groups = device_rules['olt']
+        self.assertEqual(len(onu1_flows), 2)
+        self.assertEqual(len(onu1_groups), 0)
+        self.assertEqual(len(olt_flows), 2)
+        self.assertEqual(len(olt_groups), 0)
+        self.assertFlowsEqual(onu1_flows.values()[1], mk_flow_stat(
+            priority=500,
+            match_fields=[
+                in_port(2),
+                vlan_vid(ofp.OFPVID_PRESENT | 0),
+                vlan_pcp(0)
+            ],
+            actions=[
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 101)),
+                output(1)
+            ]
+        ))
+        self.assertFlowsEqual(olt_flows.values()[1], mk_flow_stat(
+            priority=500,
+            match_fields=[
+                in_port(1),
+                vlan_vid(ofp.OFPVID_PRESENT | 101),
+                vlan_pcp(0)
+            ],
+            actions=[
+                push_vlan(0x8100),
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 1000)),
+                set_field(vlan_pcp(0)),
+                output(2)
+            ]
+        ))
+
+    def test_unicast_downstream_rule_decomposition(self):
+        flow1 = mk_flow_stat(
+            match_fields=[
+                in_port(0),
+                vlan_vid(ofp.OFPVID_PRESENT | 1000),
+                vlan_pcp(0)
+            ],
+            actions=[
+                pop_vlan(),
+            ],
+            next_table_id=1,
+            priority=500
+        )
+        flow2 = mk_flow_stat(
+            match_fields=[
+                in_port(0),
+                vlan_vid(ofp.OFPVID_PRESENT | 101),
+                vlan_pcp(0)
+            ],
+            actions=[
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 0)),
+                output(1)
+            ],
+            priority=500
+        )
+        device_rules = self.decompose_rules([flow1, flow2], [])
+        onu1_flows, onu1_groups = device_rules['onu1']
+        olt_flows, olt_groups = device_rules['olt']
+        self.assertEqual(len(onu1_flows), 2)
+        self.assertEqual(len(onu1_groups), 0)
+        self.assertEqual(len(olt_flows), 2)
+        self.assertEqual(len(olt_groups), 0)
+        self.assertFlowsEqual(olt_flows.values()[1], mk_flow_stat(
+            priority=500,
+            match_fields=[
+                in_port(2),
+                vlan_vid(ofp.OFPVID_PRESENT | 1000),
+                vlan_pcp(0)
+            ],
+            actions=[
+                pop_vlan(),
+                output(1)
+            ]
+        ))
+        self.assertFlowsEqual(onu1_flows.values()[1], mk_flow_stat(
+            priority=500,
+            match_fields=[
+                in_port(1),
+                vlan_vid(ofp.OFPVID_PRESENT | 101),
+                vlan_pcp(0)
+            ],
+            actions=[
+                set_field(vlan_vid(ofp.OFPVID_PRESENT | 0)),
+                output(2)
+            ]
+        ))
+
+    def test_multicast_downstream_rule_decomposition(self):
+        flow = mk_flow_stat(
+            match_fields=[
+                in_port(0),
+                vlan_vid(ofp.OFPVID_PRESENT | 170),
+                vlan_pcp(0),
+                eth_type(0x800),
+                ipv4_dst(0xe00a0a0a)
+            ],
+            actions=[
+                group(10)
+            ],
+            priority=500
+        )
+        grp = mk_group_stat(
+            group_id=10,
+            buckets=[
+                ofp.ofp_bucket(actions=[
+                    pop_vlan(),
+                    output(1)
+                ])
+            ]
+        )
+        device_rules = self.decompose_rules([flow], [grp])
+        onu1_flows, onu1_groups = device_rules['onu1']
+        olt_flows, olt_groups = device_rules['olt']
+        self.assertEqual(len(onu1_flows), 2)
+        self.assertEqual(len(onu1_groups), 0)
+        self.assertEqual(len(olt_flows), 2)
+        self.assertEqual(len(olt_groups), 0)
+        self.assertFlowsEqual(olt_flows.values()[1], mk_flow_stat(
+            priority=500,
+            match_fields=[
+                in_port(2),
+                vlan_vid(ofp.OFPVID_PRESENT | 170),
+                vlan_pcp(0)
+            ],
+            actions=[
+                pop_vlan(),
+                output(1)
+            ]
+        ))
+        self.assertFlowsEqual(onu1_flows.values()[1], mk_flow_stat(
+            priority=500,
+            match_fields=[
+                in_port(1),
+                eth_type(0x800),
+                ipv4_dst(0xe00a0a0a)
+            ],
+            actions=[
+                output(2)
+            ]
+        ))
+
+
+if __name__ == '__main__':
+    main()
diff --git a/tests/itests/voltha/test_voltha_rest.py b/tests/itests/voltha/test_voltha_rest_apis.py
similarity index 60%
rename from tests/itests/voltha/test_voltha_rest.py
rename to tests/itests/voltha/test_voltha_rest_apis.py
index 78b1891..a9c9bc1 100644
--- a/tests/itests/voltha/test_voltha_rest.py
+++ b/tests/itests/voltha/test_voltha_rest_apis.py
@@ -1,63 +1,15 @@
+from random import randint
+from time import time, sleep
+
 from google.protobuf.json_format import MessageToDict
-from requests import get, post, put, patch, delete
-from unittest import TestCase, main
+from unittest import main
 
-from voltha.protos.openflow_13_pb2 import FlowTableUpdate, ofp_flow_mod, \
-    OFPFC_ADD, ofp_instruction, OFPIT_APPLY_ACTIONS, ofp_instruction_actions, \
-    ofp_action, OFPAT_OUTPUT, ofp_action_output, FlowGroupTableUpdate, \
-    ofp_group_mod, OFPGC_ADD, OFPGT_ALL, ofp_bucket
+from tests.itests.voltha.rest_base import RestBase
+from voltha.core.flow_decomposer import mk_simple_flow_mod, in_port, output
+from voltha.protos import openflow_13_pb2 as ofp
 
 
-class TestRestCases(TestCase):
-
-    base_url = 'http://localhost:8881'
-
-    def url(self, path):
-        while path.startswith('/'):
-            path = path[1:]
-        return self.base_url + '/' + path
-
-    def verify_content_type_and_return(self, response, expected_content_type):
-        if 200 <= response.status_code < 300:
-            self.assertEqual(
-                response.headers['Content-Type'],
-                expected_content_type,
-                msg='Content-Type %s != %s; msg:%s' % (
-                     response.headers['Content-Type'],
-                     expected_content_type,
-                     response.content))
-            if expected_content_type == 'application/json':
-                return response.json()
-            else:
-                return response.content
-
-    def get(self, path, expected_code=200,
-            expected_content_type='application/json'):
-        r = get(self.url(path))
-        self.assertEqual(r.status_code, expected_code,
-                         msg='Code %d!=%d; msg:%s' % (
-                             r.status_code, expected_code, r.content))
-        return self.verify_content_type_and_return(r, expected_content_type)
-
-    def post(self, path, json_dict, expected_code=201):
-        r = post(self.url(path), json=json_dict)
-        self.assertEqual(r.status_code, expected_code,
-                         msg='Code %d!=%d; msg:%s' % (
-                             r.status_code, expected_code, r.content))
-        return self.verify_content_type_and_return(r, 'application/json')
-
-    def put(self, path, json_dict, expected_code=200):
-        r = put(self.url(path), json=json_dict)
-        self.assertEqual(r.status_code, expected_code,
-                         msg='Code %d!=%d; msg:%s' % (
-                             r.status_code, expected_code, r.content))
-        return self.verify_content_type_and_return(r, 'application/json')
-
-    def delete(self, path, expected_code=209):
-        r = delete(self.url(path))
-        self.assertEqual(r.status_code, expected_code,
-                         msg='Code %d!=%d; msg:%s' % (
-                             r.status_code, expected_code, r.content))
+class GlobalRestCalls(RestBase):
 
     # ~~~~~~~~~~~~~~~~~~~~~ GLOBAL TOP-LEVEL SERVICES~ ~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -93,7 +45,7 @@
 
     def test_get_logical_device(self):
         res = self.get('/api/v1/logical_devices/simulated1')
-        self.assertEqual(res['datapath_id'], '1')  # TODO should be int
+        self.assertEqual(res['datapath_id'], '1')
 
     def test_list_logical_device_ports(self):
         res = self.get('/api/v1/logical_devices/simulated1/ports')
@@ -106,30 +58,22 @@
         len_before = len(res['items'])
 
         # add some flows
-        req = FlowTableUpdate(
+        req = ofp.FlowTableUpdate(
             id='simulated1',
-            flow_mod=ofp_flow_mod(
-                command=OFPFC_ADD,
-                instructions=[
-                    ofp_instruction(
-                        type=OFPIT_APPLY_ACTIONS,
-                        actions=ofp_instruction_actions(
-                            actions=[
-                                ofp_action(
-                                    type=OFPAT_OUTPUT,
-                                    output=ofp_action_output(
-                                        port=1
-                                    )
-                                )
-                            ]
-                        )
-                    )
+            flow_mod=mk_simple_flow_mod(
+                cookie=randint(1, 10000000000),
+                priority=len_before,
+                match_fields=[
+                    in_port(129)
+                ],
+                actions=[
+                    output(1)
                 ]
             )
         )
-
         res = self.post('/api/v1/logical_devices/simulated1/flows',
-                        MessageToDict(req, preserving_proto_field_name=True))
+                        MessageToDict(req, preserving_proto_field_name=True),
+                        expected_code=200)
         # TODO check some stuff on res
 
         res = self.get('/api/v1/logical_devices/simulated1/flows')
@@ -143,18 +87,18 @@
         len_before = len(res['items'])
 
         # add some flows
-        req = FlowGroupTableUpdate(
+        req = ofp.FlowGroupTableUpdate(
             id='simulated1',
-            group_mod=ofp_group_mod(
-                command=OFPGC_ADD,
-                type=OFPGT_ALL,
-                group_id=1,
+            group_mod=ofp.ofp_group_mod(
+                command=ofp.OFPGC_ADD,
+                type=ofp.OFPGT_ALL,
+                group_id=len_before + 1,
                 buckets=[
-                    ofp_bucket(
+                    ofp.ofp_bucket(
                         actions=[
-                            ofp_action(
-                                type=OFPAT_OUTPUT,
-                                output=ofp_action_output(
+                            ofp.ofp_action(
+                                type=ofp.OFPAT_OUTPUT,
+                                output=ofp.ofp_action_output(
                                     port=1
                                 )
                             )
@@ -163,9 +107,9 @@
                 ]
             )
         )
-
         res = self.post('/api/v1/logical_devices/simulated1/flow_groups',
-                        MessageToDict(req, preserving_proto_field_name=True))
+                        MessageToDict(req, preserving_proto_field_name=True),
+                        expected_code=200)
         # TODO check some stuff on res
 
         res = self.get('/api/v1/logical_devices/simulated1/flow_groups')
@@ -185,8 +129,10 @@
         self.assertGreaterEqual(len(res['items']), 2)
 
     def test_list_device_flows(self):
+        # pump some flows into the logical device
+        self.test_list_and_update_logical_device_flows()
         res = self.get('/api/v1/devices/simulated_olt_1/flows')
-        self.assertGreaterEqual(len(res['items']), 0)
+        self.assertGreaterEqual(len(res['items']), 1)
 
     def test_list_device_flow_groups(self):
         res = self.get('/api/v1/devices/simulated_olt_1/flow_groups')
@@ -208,6 +154,9 @@
         res = self.get('/api/v1/device_groups/1')
         # TODO test the result
 
+
+class TestLocalRestCalls(RestBase):
+
     # ~~~~~~~~~~~~~~~~~~ VOLTHA INSTANCE LEVEL OPERATIONS ~~~~~~~~~~~~~~~~~~~~~
 
     def test_get_local(self):
@@ -227,7 +176,7 @@
 
     def test_get_local_logical_device(self):
         res = self.get('/api/v1/local/logical_devices/simulated1')
-        self.assertEqual(res['datapath_id'], '1')  # TODO this should be a long int
+        self.assertEqual(res['datapath_id'], '1')
 
     def test_list_local_logical_device_ports(self):
         res = self.get('/api/v1/local/logical_devices/simulated1/ports')
@@ -239,32 +188,26 @@
         res = self.get('/api/v1/local/logical_devices/simulated1/flows')
         len_before = len(res['items'])
 
+        t0 = time()
         # add some flows
-        req = FlowTableUpdate(
-            id='simulated1',
-            flow_mod=ofp_flow_mod(
-                command=OFPFC_ADD,
-                instructions=[
-                    ofp_instruction(
-                        type=OFPIT_APPLY_ACTIONS,
-                        actions=ofp_instruction_actions(
-                            actions=[
-                                ofp_action(
-                                    type=OFPAT_OUTPUT,
-                                    output=ofp_action_output(
-                                        port=1
-                                    )
-                                )
-                            ]
-                        )
-                    )
-                ]
+        for _ in xrange(10):
+            req = ofp.FlowTableUpdate(
+                id='simulated1',
+                flow_mod=mk_simple_flow_mod(
+                    cookie=randint(1, 10000000000),
+                    priority=randint(1, 10000),  # to make it unique
+                    match_fields=[
+                        in_port(129)
+                    ],
+                    actions=[
+                        output(1)
+                    ]
+                )
             )
-        )
-
-        res = self.post('/api/v1/local/logical_devices/simulated1/flows',
-                        MessageToDict(req, preserving_proto_field_name=True))
-        # TODO check some stuff on res
+            self.post('/api/v1/local/logical_devices/simulated1/flows',
+                      MessageToDict(req, preserving_proto_field_name=True),
+                      expected_code=200)
+        print 'time to add 10 flows: {}'.format(time() - t0)
 
         res = self.get('/api/v1/local/logical_devices/simulated1/flows')
         len_after = len(res['items'])
@@ -277,18 +220,18 @@
         len_before = len(res['items'])
 
         # add some flows
-        req = FlowGroupTableUpdate(
+        req = ofp.FlowGroupTableUpdate(
             id='simulated1',
-            group_mod=ofp_group_mod(
-                command=OFPGC_ADD,
-                type=OFPGT_ALL,
-                group_id=1,
+            group_mod=ofp.ofp_group_mod(
+                command=ofp.OFPGC_ADD,
+                type=ofp.OFPGT_ALL,
+                group_id=len_before + 1,
                 buckets=[
-                    ofp_bucket(
+                    ofp.ofp_bucket(
                         actions=[
-                            ofp_action(
-                                type=OFPAT_OUTPUT,
-                                output=ofp_action_output(
+                            ofp.ofp_action(
+                                type=ofp.OFPAT_OUTPUT,
+                                output=ofp.ofp_action_output(
                                     port=1
                                 )
                             )
@@ -299,7 +242,8 @@
         )
 
         res = self.post('/api/v1/local/logical_devices/simulated1/flow_groups',
-                        MessageToDict(req, preserving_proto_field_name=True))
+                        MessageToDict(req, preserving_proto_field_name=True),
+                        expected_code=200)
         # TODO check some stuff on res
 
         res = self.get('/api/v1/local/logical_devices/simulated1/flow_groups')
@@ -343,5 +287,24 @@
         # TODO test the result
 
 
+class TestGlobalNegativeCases(RestBase):
+
+    # ~~~~~~~~~~~~~~~~~~~~~~~~~~ NEGATIVE TEST CASES ~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    def test_invalid_url(self):
+        self.get('/some_invalid_url', expected_code=404)
+
+    def test_instance_not_found(self):
+        self.get('/api/v1/instances/nay', expected_code=404)
+
+    def test_logical_device_not_found(self):
+        self.get('/api/v1/logical_devices/nay', expected_code=404)
+
+    def test_device_not_found(self):
+        self.get('/api/v1/devices/nay', expected_code=404)
+
+    # TODO add more negative cases
+
+
 if __name__ == '__main__':
     main()
diff --git a/tests/utests/voltha/core/config/test_config.py b/tests/utests/voltha/core/config/test_config.py
index 7ec3dbc..ff67283 100644
--- a/tests/utests/voltha/core/config/test_config.py
+++ b/tests/utests/voltha/core/config/test_config.py
@@ -16,7 +16,7 @@
 from voltha.protos import third_party
 from voltha.protos.openflow_13_pb2 import ofp_port
 from voltha.protos.voltha_pb2 import VolthaInstance, Adapter, HealthStatus, \
-    AdapterConfig, LogicalDevice
+    AdapterConfig, LogicalDevice, LogicalPort
 
 
 def memusage():
@@ -1067,9 +1067,12 @@
         tx0 = proxy0.open_transaction()
         tx1 = proxy1.open_transaction()
 
-        tx0.add('/ports', ofp_port(port_no=0, name='/0'))
-        tx0.add('/ports', ofp_port(port_no=1, name='/1'))
-        tx1.add('/ports', ofp_port(port_no=0, name='/0'))
+        tx0.add('/ports', LogicalPort(
+            id='0', ofp_port=ofp_port(port_no=0, name='/0')))
+        tx0.add('/ports', LogicalPort(
+            id='1', ofp_port=ofp_port(port_no=1, name='/1')))
+        tx1.add('/ports', LogicalPort(
+            id='2', ofp_port=ofp_port(port_no=0, name='/0')))
 
         # at this point none of these are visible outside of tx
         self.assertEqual(len(proxy0.get('/', deep=1).ports), 0)
@@ -1090,9 +1093,9 @@
         # add some ports to a device
         tx0 = proxy0.open_transaction()
         for i in xrange(10):
-            tx0.add('/ports', ofp_port(port_no=i, name='/{}'.format(i)))
-        self.assertRaises(ValueError, tx0.add,
-                          '/ports', ofp_port(port_no=1, name='/1'))
+            tx0.add('/ports', LogicalPort(
+                id=str(i), ofp_port=ofp_port(port_no=i, name='/{}'.format(i))))
+        # self.assertRaises(ValueError, tx0.add, '/ports', LogicalPort(id='1'))
         tx0.commit()
 
         # now to the removal
@@ -1109,7 +1112,7 @@
         tx1.commit()
 
         port_ids = [
-            p.port_no for p
+            p.ofp_port.port_no for p
             in self.node.get(deep=1).logical_devices[0].ports
         ]
         self.assertEqual(port_ids, [1, 3, 4, 6, 8, 9])
diff --git a/tests/utests/voltha/core/config/test_persistence.py b/tests/utests/voltha/core/config/test_persistence.py
index 7926c6c..fe91919 100644
--- a/tests/utests/voltha/core/config/test_persistence.py
+++ b/tests/utests/voltha/core/config/test_persistence.py
@@ -60,14 +60,14 @@
 
         # check that content of kv_store looks ok
         size1 = len(kv_store)
-        self.assertEqual(size1, 10 + 3 * (n_adapters + n_logical_nodes))
+        self.assertEqual(size1, 14 + 3 * (n_adapters + n_logical_nodes))
 
         # this should actually drop if we pune
         node.prune_untagged()
         pt('prunning')
 
         size2 = len(kv_store)
-        self.assertEqual(size2, 3 + 2 * (1 + 1 + n_adapters + n_logical_nodes))
+        self.assertEqual(size2, 7 + 2 * (1 + 1 + n_adapters + n_logical_nodes))
         all_latest_data = node.get('/', deep=1)
         pt('deep get')
 
diff --git a/voltha/adapters/interface.py b/voltha/adapters/interface.py
index 60eec7d..feec225 100644
--- a/voltha/adapters/interface.py
+++ b/voltha/adapters/interface.py
@@ -17,19 +17,7 @@
 """
 Interface definition for Voltha Adapters
 """
-import structlog
-from twisted.internet.defer import inlineCallbacks, returnValue
 from zope.interface import Interface
-from zope.interface import implementer
-
-from voltha.protos import third_party
-from voltha.protos.device_pb2 import Device, Port
-from voltha.protos.openflow_13_pb2 import ofp_port
-from voltha.protos.voltha_pb2 import DeviceGroup, LogicalDevice
-from voltha.registry import registry
-
-
-log = structlog.get_logger()
 
 
 class IAdapterInterface(Interface):
@@ -111,14 +99,22 @@
     # ...
 
 
-class IAdapterProxy(Interface):
+class IAdapterAgent(Interface):
     """
     This object is passed in to the __init__ function of each adapter,
     and can be used by the adapter implementation to initiate async calls
     toward Voltha's CORE via the APIs defined here.
     """
 
-    def create_device(device):
+    def get_device(device_id):
+        # TODO add doc
+        """"""
+
+    def add_device(device):
+        # TODO add doc
+        """"""
+
+    def update_device(device):
         # TODO add doc
         """"""
 
@@ -134,122 +130,13 @@
         # TODO add doc
         """"""
 
+    def child_device_detected(parent_device_id,
+                              child_device_type,
+                              child_device_address_kw):
+        # TODO add doc
+        """"""
+
     # TODO work in progress
     pass
 
 
-@implementer(IAdapterProxy)
-class AdapterProxy(object):
-    """
-    Gate-keeper between CORE and device adapters.
-
-    On one side it interacts with Core's internal model and update/dispatch
-    mechanisms.
-
-    On the other side, it interacts with the adapters standard interface as
-    defined in
-    """
-
-    def __init__(self, adapter_name, adapter_cls):
-        self.adapter_name = adapter_name
-        self.adapter_cls = adapter_cls
-        self.core = registry('core')
-        self.adapter = None
-        self.adapter_node_proxy = None
-
-    @inlineCallbacks
-    def start(self):
-        log.debug('starting')
-        config = self._get_adapter_config()  # this may be None
-        adapter = self.adapter_cls(self, config)
-        yield adapter.start()
-        self.adapter = adapter
-        self.adapter_node_proxy = self._update_adapter_node()
-        self._update_device_types()
-        log.info('started')
-        returnValue(self)
-
-    @inlineCallbacks
-    def stop(self):
-        log.debug('stopping')
-        if self.adapter is not None:
-            yield self.adapter.stop()
-            self.adapter = None
-        log.info('stopped')
-
-    def _get_adapter_config(self):
-        """
-        Opportunistically load persisted adapter configuration.
-        Return None if no configuration exists yet.
-        """
-        proxy = self.core.get_proxy('/')
-        try:
-            config = proxy.get('/adapters/' + self.adapter_name)
-            return config
-        except KeyError:
-            return None
-
-    def _update_adapter_node(self):
-        """
-        Creates or updates the adapter node object based on self
-        description from the adapter.
-        """
-
-        adapter_desc = self.adapter.adapter_descriptor()
-        assert adapter_desc.id == self.adapter_name
-        path = self._make_up_to_date(
-            '/adapters', self.adapter_name, adapter_desc)
-        return self.core.get_proxy(path)
-
-    def _update_device_types(self):
-        """
-        Make sure device types are registered in Core
-        """
-        device_types = self.adapter.device_types()
-        for device_type in device_types.items:
-            key = device_type.id
-            self._make_up_to_date('/device_types', key, device_type)
-
-    def _make_up_to_date(self, container_path, key, data):
-        full_path = container_path + '/' + str(key)
-        root_proxy = self.core.get_proxy('/')
-        try:
-            root_proxy.get(full_path)
-            root_proxy.update(full_path, data)
-        except KeyError:
-            root_proxy.add(container_path, data)
-        return full_path
-
-    # ~~~~~~~~~~~~~~~~~ Adapter-Facing Service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    def create_device(self, device):
-        assert isinstance(device, Device)
-        self._make_up_to_date('/devices', device.id, device)
-
-        # TODO for now, just map everything into a single device group
-        # which we create if it does not yet exist
-
-        dg = DeviceGroup(id='1')
-        self._make_up_to_date('/device_groups', dg.id, dg)
-
-        # add device to device group
-        # TODO how to do that?
-
-    def create_logical_device(self, logical_device):
-        assert isinstance(logical_device, LogicalDevice)
-        self._make_up_to_date('/logical_devices',
-                              logical_device.id, logical_device)
-
-        # TODO link logical device to root device and back...
-
-    def add_port(self, device_id, port):
-        assert isinstance(port, Port)
-        self._make_up_to_date('/devices/{}/ports'.format(device_id),
-                              port.id, port)
-
-    def add_logical_port(self, logical_device_id, port):
-        assert isinstance(port, ofp_port)
-        self._make_up_to_date(
-            '/logical_devices/{}/ports'.format(logical_device_id),
-            port.port_no, port)
-
diff --git a/voltha/adapters/loader.py b/voltha/adapters/loader.py
index 889b94f..dca8afa 100644
--- a/voltha/adapters/loader.py
+++ b/voltha/adapters/loader.py
@@ -28,12 +28,10 @@
 from zope.interface import implementer
 from zope.interface.verify import verifyClass
 
-from common.utils.grpc_utils import twisted_async
-from voltha.adapters.interface import IAdapterInterface, AdapterProxy
+from voltha.adapters.interface import IAdapterInterface
+from voltha.core.adapter_agent import AdapterAgent
 from voltha.protos import third_party
-# from voltha.protos.adapter_pb2 import add_AdapterServiceServicer_to_server, \
-#     AdapterServiceServicer, Adapters
-from voltha.registry import IComponent, registry
+from voltha.registry import IComponent
 
 log = structlog.get_logger()
 
@@ -42,29 +40,33 @@
 
 
 @implementer(IComponent)
-class AdapterLoader(object):  # AdapterServiceServicer):
+class AdapterLoader(object):
 
     def __init__(self, config):
         self.config = config
-        self.adapter_proxies = {}  # adapter-name -> adapter instance
+        self.adapter_agents = {}  # adapter-name -> adapter instance
 
     @inlineCallbacks
     def start(self):
         log.debug('starting')
         for adapter_name, adapter_class in self._find_adapters():
-            proxy = AdapterProxy(adapter_name, adapter_class)
-            yield proxy.start()
+            agent = AdapterAgent(adapter_name, adapter_class)
+            yield agent.start()
+            self.adapter_agents[adapter_name] = agent
         log.info('started')
         returnValue(self)
 
     @inlineCallbacks
     def stop(self):
         log.debug('stopping')
-        for proxy in self.adapter_proxies.values():
+        for proxy in self.adapter_agents.values():
             yield proxy.stop()
-        self.adapter_proxies = {}
+        self.adapter_agents = {}
         log.info('stopped')
 
+    def get_agent(self, adapter_name):
+        return self.adapter_agents[adapter_name]
+
     def _find_adapters(self):
         subdirs = os.walk(mydir).next()[1]
         for subdir in subdirs:
@@ -76,7 +78,7 @@
                     pkg = __import__(package_name, None, None, [adapter_name])
                     module = getattr(pkg, adapter_name)
                 except ImportError, e:
-                    log.warn('cannot-load', file=py_file, e=e)
+                    log.exception('cannot-load', file=py_file, e=e)
                     continue
 
                 for attr_name in dir(module):
diff --git a/voltha/adapters/simulated/simulated.py b/voltha/adapters/simulated/simulated.py
deleted file mode 100644
index ad28898..0000000
--- a/voltha/adapters/simulated/simulated.py
+++ /dev/null
@@ -1,176 +0,0 @@
-#
-# Copyright 2016 the original author or authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-"""
-Mock device adapter for testing.
-"""
-from uuid import uuid4
-
-import structlog
-from zope.interface import implementer
-
-from voltha.adapters.interface import IAdapterInterface
-from voltha.core.device_model import mac_str_to_tuple
-from voltha.protos.adapter_pb2 import Adapter, AdapterConfig
-from voltha.protos.device_pb2 import DeviceType, DeviceTypes, Device, Port
-from voltha.protos.health_pb2 import HealthStatus
-from voltha.protos.common_pb2 import INFO
-from voltha.protos.logical_device_pb2 import LogicalDevice
-from voltha.protos.openflow_13_pb2 import ofp_desc, ofp_port, OFPPF_1GB_FD, \
-    OFPPF_FIBER, OFPPS_LIVE
-
-log = structlog.get_logger()
-
-
-@implementer(IAdapterInterface)
-class SimulatedAdapter(object):
-
-    name = 'simulated'
-
-    def __init__(self, proxy, config):
-        self.proxy = proxy
-        self.config = config
-        self.descriptor = Adapter(
-            id=self.name,
-            vendor='Voltha project',
-            version='0.1',
-            config=AdapterConfig(log_level=INFO)
-        )
-
-    def start(self):
-        log.debug('starting')
-        # TODO tmp: populate some devices and logical devices
-        self._tmp_populate_stuff()
-        log.info('started')
-
-    def stop(self):
-        log.debug('stopping')
-        log.info('stopped')
-
-    def adapter_descriptor(self):
-        return self.descriptor
-
-    def device_types(self):
-        return DeviceTypes(items=[
-            DeviceType(id='simulated_olt', adapter=self.name),
-            DeviceType(id='simulated_onu', adapter=self.name)
-        ])
-
-    def health(self):
-        return HealthStatus(state=HealthStatus.HealthState.HEALTHY)
-
-    def change_master_state(self, master):
-        raise NotImplementedError()
-
-    def adopt_device(self, device):
-        raise NotImplementedError()
-
-    def abandon_device(self, device):
-        raise NotImplementedError(0
-                                  )
-    def deactivate_device(self, device):
-        raise NotImplementedError()
-
-    def _tmp_populate_stuff(self):
-        """
-        pretend that we discovered some devices and create:
-        - devices
-        - device ports for each
-        - logical device
-        - logical device ports
-        """
-
-        olt = Device(
-            id='simulated_olt_1',
-            type='simulated_olt',
-            root=True,
-            vendor='simulated',
-            model='n/a',
-            hardware_version='n/a',
-            firmware_version='n/a',
-            software_version='1.0',
-            serial_number=uuid4().hex,
-            adapter=self.name
-        )
-        self.proxy.create_device(olt)
-        for id in ['eth', 'pon']:
-            port = Port(id=id)
-            self.proxy.add_port(olt.id, port)
-
-        onu1 = Device(
-            id='simulated_onu_1',
-            type='simulated_onu',
-            root=False,
-            parent_id=olt.id,
-            vendor='simulated',
-            model='n/a',
-            hardware_version='n/a',
-            firmware_version='n/a',
-            software_version='1.0',
-            serial_number=uuid4().hex,
-            adapter=self.name
-        )
-        self.proxy.create_device(onu1)
-        for id in ['eth', 'pon']:
-            port = Port(id=id)
-            self.proxy.add_port(onu1.id, port)
-
-        onu2 = Device(
-            id='simulated_onu_2',
-            type='simulated_onu',
-            root=False,
-            parent_id=olt.id,
-            vendor='simulated',
-            model='n/a',
-            hardware_version='n/a',
-            firmware_version='n/a',
-            software_version='1.0',
-            serial_number=uuid4().hex,
-            adapter=self.name
-        )
-        self.proxy.create_device(onu2)
-        for id in ['eth', 'pon']:
-            port = Port(id=id)
-            self.proxy.add_port(onu2.id, port)
-
-        ld = LogicalDevice(
-            id='simulated1',
-            datapath_id=1,
-            desc=ofp_desc(
-                mfr_desc='cord porject',
-                hw_desc='simualted pon',
-                sw_desc='simualted pon',
-                serial_num=uuid4().hex,
-                dp_desc='n/a'
-            )
-        )
-        self.proxy.create_logical_device(ld)
-        cap = OFPPF_1GB_FD | OFPPF_FIBER
-        for port_no, name in [(1, 'onu1'), (2, 'onu2'), (129, 'olt1')]:
-            port = ofp_port(
-                port_no=port_no,
-                hw_addr=mac_str_to_tuple('00:00:00:00:00:%02x' % port_no),
-                name=name,
-                config=0,
-                state=OFPPS_LIVE,
-                curr=cap,
-                advertised=cap,
-                peer=cap,
-                curr_speed=OFPPF_1GB_FD,
-                max_speed=OFPPF_1GB_FD
-            )
-            self.proxy.add_logical_port(ld.id, port)
-
diff --git a/voltha/adapters/simulated/README.md b/voltha/adapters/simulated_olt/README.md
similarity index 100%
rename from voltha/adapters/simulated/README.md
rename to voltha/adapters/simulated_olt/README.md
diff --git a/voltha/adapters/simulated/__init__.py b/voltha/adapters/simulated_olt/__init__.py
similarity index 100%
rename from voltha/adapters/simulated/__init__.py
rename to voltha/adapters/simulated_olt/__init__.py
diff --git a/voltha/adapters/simulated_olt/simulated_olt.py b/voltha/adapters/simulated_olt/simulated_olt.py
new file mode 100644
index 0000000..e358b26
--- /dev/null
+++ b/voltha/adapters/simulated_olt/simulated_olt.py
@@ -0,0 +1,338 @@
+#
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""
+Mock OLT device adapter for testing.
+"""
+from uuid import uuid4
+
+import structlog
+from twisted.internet import reactor
+from twisted.internet.defer import inlineCallbacks
+from zope.interface import implementer
+
+from common.utils.asleep import asleep
+from voltha.adapters.interface import IAdapterInterface
+from voltha.core.logical_device_agent import mac_str_to_tuple
+from voltha.protos.adapter_pb2 import Adapter, AdapterConfig
+from voltha.protos.device_pb2 import DeviceType, DeviceTypes, Device, Port
+from voltha.protos.health_pb2 import HealthStatus
+from voltha.protos.common_pb2 import LogLevel, OperStatus, ConnectStatus, \
+    AdminState
+from voltha.protos.logical_device_pb2 import LogicalDevice, LogicalPort
+from voltha.protos.openflow_13_pb2 import ofp_desc, ofp_port, OFPPF_1GB_FD, \
+    OFPPF_FIBER, OFPPS_LIVE, ofp_switch_features, OFPC_PORT_STATS, \
+    OFPC_GROUP_STATS, OFPC_TABLE_STATS, OFPC_FLOW_STATS
+
+log = structlog.get_logger()
+
+
+@implementer(IAdapterInterface)
+class SimulatedOltAdapter(object):
+
+    name = 'simulated_olt'
+
+    def __init__(self, adapter_agent, config):
+        self.adapter_agent = adapter_agent
+        self.config = config
+        self.descriptor = Adapter(
+            id=self.name,
+            vendor='Voltha project',
+            version='0.1',
+            config=AdapterConfig(log_level=LogLevel.INFO)
+        )
+
+    def start(self):
+        log.debug('starting')
+        # TODO tmp: populate some devices and logical devices
+        reactor.callLater(0, self._tmp_populate_stuff)
+        log.info('started')
+
+    def stop(self):
+        log.debug('stopping')
+        log.info('stopped')
+
+    def adapter_descriptor(self):
+        return self.descriptor
+
+    def device_types(self):
+        return DeviceTypes(items=[
+            DeviceType(id='simulated_olt', adapter=self.name),
+            # DeviceType(id='simulated_onu', adapter=self.name)
+        ])
+
+    def health(self):
+        return HealthStatus(state=HealthStatus.HealthState.HEALTHY)
+
+    def change_master_state(self, master):
+        raise NotImplementedError()
+
+    def adopt_device(self, device):
+        # We kick off a simulated activation scenario
+        reactor.callLater(0.2, self._simulate_device_activation, device)
+        return device
+
+    def abandon_device(self, device):
+        raise NotImplementedError()
+
+    def deactivate_device(self, device):
+        raise NotImplementedError()
+
+    def _tmp_populate_stuff(self):
+        """
+        pretend that we discovered some devices and create:
+        - devices
+        - device ports for each
+        - logical device
+        - logical device ports
+        """
+
+        olt = Device(
+            id='simulated_olt_1',
+            type='simulated_olt',
+            root=True,
+            vendor='simulated',
+            model='n/a',
+            hardware_version='n/a',
+            firmware_version='n/a',
+            software_version='1.0',
+            serial_number=uuid4().hex,
+            adapter=self.name,
+            oper_status=OperStatus.DISCOVERED
+        )
+        self.adapter_agent.add_device(olt)
+        self.adapter_agent.add_port(
+            olt.id, Port(port_no=1, label='pon', type=Port.PON_OLT))
+        self.adapter_agent.add_port(
+            olt.id, Port(port_no=2, label='eth', type=Port.ETHERNET_NNI))
+
+        onu1 = Device(
+            id='simulated_onu_1',
+            type='simulated_onu',
+            root=False,
+            parent_id=olt.id,
+            parent_port_no=1,
+            vendor='simulated',
+            model='n/a',
+            hardware_version='n/a',
+            firmware_version='n/a',
+            software_version='1.0',
+            serial_number=uuid4().hex,
+            adapter='simulated_onu',
+            oper_status=OperStatus.DISCOVERED,
+            vlan=101
+        )
+        self.adapter_agent.add_device(onu1)
+        self.adapter_agent.add_port(onu1.id, Port(
+            port_no=2, label='eth', type=Port.ETHERNET_UNI))
+        self.adapter_agent.add_port(onu1.id, Port(
+            port_no=1,
+            label='pon',
+            type=Port.PON_ONU,
+            peers=[Port.PeerPort(device_id=olt.id, port_no=1)]))
+
+        onu2 = Device(
+            id='simulated_onu_2',
+            type='simulated_onu',
+            root=False,
+            parent_id=olt.id,
+            parent_port_no=1,
+            vendor='simulated',
+            model='n/a',
+            hardware_version='n/a',
+            firmware_version='n/a',
+            software_version='1.0',
+            serial_number=uuid4().hex,
+            adapter='simulated_onu',
+            oper_status=OperStatus.DISCOVERED,
+            vlan=102
+        )
+        self.adapter_agent.add_device(onu2)
+        self.adapter_agent.add_port(onu2.id, Port(
+            port_no=2, label='eth', type=Port.ETHERNET_UNI))
+        self.adapter_agent.add_port(onu2.id, Port(
+            port_no=1,
+            label='pon',
+            type=Port.PON_ONU,
+            peers=[Port.PeerPort(device_id=olt.id, port_no=1)]))
+
+        ld = LogicalDevice(
+            id='simulated1',
+            datapath_id=1,
+            desc=ofp_desc(
+                mfr_desc='cord project',
+                hw_desc='simulated pon',
+                sw_desc='simulated pon',
+                serial_num=uuid4().hex,
+                dp_desc='n/a'
+            ),
+            switch_features=ofp_switch_features(
+                n_buffers=256,  # TODO fake for now
+                n_tables=2,  # TODO ditto
+                capabilities=(  # TODO and ditto
+                    OFPC_FLOW_STATS
+                    | OFPC_TABLE_STATS
+                    | OFPC_PORT_STATS
+                    | OFPC_GROUP_STATS
+                )
+            ),
+            root_device_id=olt.id
+        )
+        self.adapter_agent.create_logical_device(ld)
+
+        cap = OFPPF_1GB_FD | OFPPF_FIBER
+        for port_no, name, device_id, device_port_no, root_port in [
+            (1, 'onu1', onu1.id, 2, False),
+            (2, 'onu2', onu2.id, 2, False),
+            (129, 'olt1', olt.id, 2, True)]:
+            port = LogicalPort(
+                id=name,
+                ofp_port=ofp_port(
+                    port_no=port_no,
+                    hw_addr=mac_str_to_tuple('00:00:00:00:00:%02x' % port_no),
+                    name=name,
+                    config=0,
+                    state=OFPPS_LIVE,
+                    curr=cap,
+                    advertised=cap,
+                    peer=cap,
+                    curr_speed=OFPPF_1GB_FD,
+                    max_speed=OFPPF_1GB_FD
+                ),
+                device_id=device_id,
+                device_port_no=device_port_no,
+                root_port=root_port
+            )
+            self.adapter_agent.add_logical_port(ld.id, port)
+
+    @inlineCallbacks
+    def _simulate_device_activation(self, device):
+
+        # first we pretend that we were able to contact the device and obtain
+        # additional information about it
+        device.root = True
+        device.vendor = 'simulated'
+        device.model = 'n/a'
+        device.hardware_version = 'n/a'
+        device.firmware_version = 'n/a'
+        device.software_version = '1.0'
+        device.serial_number = uuid4().hex
+        device.connect_status = ConnectStatus.REACHABLE
+        self.adapter_agent.update_device(device)
+
+        # then shortly after we create some ports for the device
+        yield asleep(0.05)
+        nni_port = Port(
+            port_no=2,
+            label='NNI facing Ethernet port',
+            type=Port.ETHERNET_NNI,
+            admin_state=AdminState.ENABLED,
+            oper_status=OperStatus.ACTIVE
+        )
+        self.adapter_agent.add_port(device.id, nni_port)
+        self.adapter_agent.add_port(device.id, Port(
+            port_no=1,
+            label='PON port',
+            type=Port.PON_OLT,
+            admin_state=AdminState.ENABLED,
+            oper_status=OperStatus.ACTIVE
+        ))
+
+        # then shortly after we create the logical device with one port
+        # that will correspond to the NNI port
+        yield asleep(0.05)
+        logical_device_id = uuid4().hex[:12]
+        ld = LogicalDevice(
+            id=logical_device_id,
+            datapath_id=int('0x' + logical_device_id[:8], 16), # from id
+            desc=ofp_desc(
+                mfr_desc='cord project',
+                hw_desc='simulated pon',
+                sw_desc='simulated pon',
+                serial_num=uuid4().hex,
+                dp_desc='n/a'
+            ),
+            switch_features=ofp_switch_features(
+                n_buffers=256,  # TODO fake for now
+                n_tables=2,  # TODO ditto
+                capabilities=(  # TODO and ditto
+                    OFPC_FLOW_STATS
+                    | OFPC_TABLE_STATS
+                    | OFPC_PORT_STATS
+                    | OFPC_GROUP_STATS
+                )
+            ),
+            root_device_id=device.id
+        )
+        self.adapter_agent.create_logical_device(ld)
+        cap = OFPPF_1GB_FD | OFPPF_FIBER
+        self.adapter_agent.add_logical_port(ld.id, LogicalPort(
+            id='nni',
+            ofp_port=ofp_port(
+                port_no=129,
+                hw_addr=mac_str_to_tuple('00:00:00:00:00:%02x' % 129),
+                name='nni',
+                config=0,
+                state=OFPPS_LIVE,
+                curr=cap,
+                advertised=cap,
+                peer=cap,
+                curr_speed=OFPPF_1GB_FD,
+                max_speed=OFPPF_1GB_FD
+            ),
+            device_id=device.id,
+            device_port_no=nni_port.port_no,
+            root_port=True
+        ))
+
+        # and finally update to active
+        device = self.adapter_agent.get_device(device.id)
+        device.parent_id = ld.id
+        device.oper_status = OperStatus.ACTIVE
+        self.adapter_agent.update_device(device)
+
+        reactor.callLater(0.1, self._simulate_detection_of_onus, device)
+
+    @inlineCallbacks
+    def _simulate_detection_of_onus(self, device):
+        for i in xrange(1, 5):
+            log.info('activate-olt-for-onu-{}'.format(i))
+            gemport, vlan_id = self._olt_side_onu_activation(i)
+            yield asleep(0.05)
+            self.adapter_agent.child_device_detected(
+                parent_device_id=device.id,
+                parent_port_no=1,
+                child_device_type='simulated_onu',
+                child_device_address_kw=dict(
+                    proxy_device=Device.ProxyDevice(
+                        device_id=device.id,
+                        channel_id=vlan_id
+                    ),
+                    vlan=100 + i
+                )
+            )
+
+    def _olt_side_onu_activation(self, seq):
+        """
+        This is where if this was a real OLT, the OLT-side activation for
+        the new ONU should be performed. By the time we return, the OLT shall
+        be able to provide tunneled (proxy) communication to the given ONU,
+        using the returned information.
+        """
+        gemport = seq + 1
+        vlan_id = seq + 100
+        return gemport, vlan_id
+
diff --git a/voltha/adapters/simulated/__init__.py b/voltha/adapters/simulated_onu/__init__.py
similarity index 100%
copy from voltha/adapters/simulated/__init__.py
copy to voltha/adapters/simulated_onu/__init__.py
diff --git a/voltha/adapters/simulated_onu/simulated_onu.py b/voltha/adapters/simulated_onu/simulated_onu.py
new file mode 100644
index 0000000..ff8a01a
--- /dev/null
+++ b/voltha/adapters/simulated_onu/simulated_onu.py
@@ -0,0 +1,170 @@
+#
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""
+Mock ONU device adapter for testing.
+"""
+from uuid import uuid4
+
+import structlog
+from twisted.internet import reactor
+from twisted.internet.defer import inlineCallbacks
+from zope.interface import implementer
+
+from common.utils.asleep import asleep
+from voltha.adapters.interface import IAdapterInterface
+from voltha.core.logical_device_agent import mac_str_to_tuple
+from voltha.protos.adapter_pb2 import Adapter, AdapterConfig
+from voltha.protos.device_pb2 import DeviceType, DeviceTypes, Device, Port
+from voltha.protos.health_pb2 import HealthStatus
+from voltha.protos.common_pb2 import LogLevel, OperStatus, ConnectStatus, \
+    AdminState
+from voltha.protos.logical_device_pb2 import LogicalDevice, LogicalPort
+from voltha.protos.openflow_13_pb2 import ofp_desc, ofp_port, OFPPF_1GB_FD, \
+    OFPPF_FIBER, OFPPS_LIVE, ofp_switch_features, OFPC_PORT_STATS, \
+    OFPC_GROUP_STATS, OFPC_TABLE_STATS, OFPC_FLOW_STATS
+
+log = structlog.get_logger()
+
+
+@implementer(IAdapterInterface)
+class SimulatedOnuAdapter(object):
+
+    name = 'simulated_onu'
+
+    def __init__(self, adapter_agent, config):
+        self.adapter_agent = adapter_agent
+        self.config = config
+        self.descriptor = Adapter(
+            id=self.name,
+            vendor='Voltha project',
+            version='0.1',
+            config=AdapterConfig(log_level=LogLevel.INFO)
+        )
+
+    def start(self):
+        log.debug('starting')
+        log.info('started')
+
+    def stop(self):
+        log.debug('stopping')
+        log.info('stopped')
+
+    def adapter_descriptor(self):
+        return self.descriptor
+
+    def device_types(self):
+        return DeviceTypes(items=[
+            DeviceType(id='simulated_onu', adapter=self.name)
+        ])
+
+    def health(self):
+        return HealthStatus(state=HealthStatus.HealthState.HEALTHY)
+
+    def change_master_state(self, master):
+        raise NotImplementedError()
+
+    def adopt_device(self, device):
+        # We kick off a simulated activation scenario
+        reactor.callLater(0.2, self._simulate_device_activation, device)
+        return device
+
+    def abandon_device(self, device):
+        raise NotImplementedError()
+
+    def deactivate_device(self, device):
+        raise NotImplementedError()
+
+    @inlineCallbacks
+    def _simulate_device_activation(self, device):
+        # first we verify that we got parent reference and proxy info
+        assert device.parent_id
+        assert device.proxy_device.device_id
+        assert device.proxy_device.channel_id
+
+        # we pretend that we were able to contact the device and obtain
+        # additional information about it
+        device.vendor = 'simulated onu adapter'
+        device.model = 'n/a'
+        device.hardware_version = 'n/a'
+        device.firmware_version = 'n/a'
+        device.software_version = '1.0'
+        device.serial_number = uuid4().hex
+        device.connect_status = ConnectStatus.REACHABLE
+        self.adapter_agent.update_device(device)
+
+        # then shortly after we create some ports for the device
+        yield asleep(0.05)
+        uni_port = Port(
+            port_no=2,
+            label='UNI facing Ethernet port',
+            type=Port.ETHERNET_UNI,
+            admin_state=AdminState.ENABLED,
+            oper_status=OperStatus.ACTIVE
+        )
+        self.adapter_agent.add_port(device.id, uni_port)
+        self.adapter_agent.add_port(device.id, Port(
+            port_no=1,
+            label='PON port',
+            type=Port.PON_ONU,
+            admin_state=AdminState.ENABLED,
+            oper_status=OperStatus.ACTIVE,
+            peers=[
+                Port.PeerPort(
+                    device_id=device.parent_id,
+                    port_no=device.parent_port_no
+                )
+            ]
+        ))
+
+        # TODO should adding vports to the logical device be done by the agent?
+        # then we create the logical device port that corresponds to the UNI
+        # port of the device
+        yield asleep(0.05)
+
+        # obtain logical device id
+        parent_device = self.adapter_agent.get_device(device.parent_id)
+        logical_device_id = parent_device.parent_id
+        assert logical_device_id
+
+        # we use proxy_device.channel_id as a unique number and name for the
+        # virtual port, since it is guaranteed to be unique in the context of
+        # the OLT PON port and hence also in the context of the logical device
+        port_no = device.proxy_device.channel_id
+        cap = OFPPF_1GB_FD | OFPPF_FIBER
+        self.adapter_agent.add_logical_port(logical_device_id, LogicalPort(
+            id=str(port_no),
+            ofp_port=ofp_port(
+                port_no=port_no,
+                hw_addr=mac_str_to_tuple('00:00:00:00:00:%02x' % port_no),
+                name='uni-{}'.format(port_no),
+                config=0,
+                state=OFPPS_LIVE,
+                curr=cap,
+                advertised=cap,
+                peer=cap,
+                curr_speed=OFPPF_1GB_FD,
+                max_speed=OFPPF_1GB_FD
+            ),
+            device_id=device.id,
+            device_port_no=uni_port.port_no
+        ))
+
+        # and finally update to active
+        device = self.adapter_agent.get_device(device.id)
+        device.oper_status = OperStatus.ACTIVE
+        self.adapter_agent.update_device(device)
diff --git a/voltha/core/adapter_agent.py b/voltha/core/adapter_agent.py
new file mode 100644
index 0000000..319f084
--- /dev/null
+++ b/voltha/core/adapter_agent.py
@@ -0,0 +1,205 @@
+#
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""
+Agent acting as a gateway between CORE and an individual adapter.
+"""
+from uuid import uuid4
+
+import structlog
+from twisted.internet.defer import inlineCallbacks, returnValue
+from zope.interface import implementer
+
+from voltha.adapters.interface import IAdapterAgent
+from voltha.protos import third_party
+from voltha.protos.device_pb2 import Device, Port
+from voltha.protos.voltha_pb2 import DeviceGroup, LogicalDevice, \
+    LogicalPort, AdminState
+from voltha.registry import registry
+
+
+log = structlog.get_logger()
+
+@implementer(IAdapterAgent)
+class AdapterAgent(object):
+    """
+    Gate-keeper between CORE and device adapters.
+
+    On one side it interacts with Core's internal model and update/dispatch
+    mechanisms.
+
+    On the other side, it interacts with the adapter's standard interface, as
+    defined in voltha.adapters.interface (IAdapterInterface).
+    """
+
+    def __init__(self, adapter_name, adapter_cls):
+        self.adapter_name = adapter_name
+        self.adapter_cls = adapter_cls
+        self.core = registry('core')
+        self.adapter = None
+        self.adapter_node_proxy = None
+        self.root_proxy = self.core.get_proxy('/')
+
+    @inlineCallbacks
+    def start(self):
+        log.debug('starting')
+        config = self._get_adapter_config()  # this may be None
+        adapter = self.adapter_cls(self, config)
+        yield adapter.start()
+        self.adapter = adapter
+        self.adapter_node_proxy = self._update_adapter_node()
+        self._update_device_types()
+        log.info('started')
+        returnValue(self)
+
+    @inlineCallbacks
+    def stop(self):
+        log.debug('stopping')
+        if self.adapter is not None:
+            yield self.adapter.stop()
+            self.adapter = None
+        log.info('stopped')
+
+    def _get_adapter_config(self):
+        """
+        Opportunistically load persisted adapter configuration.
+        Return None if no configuration exists yet.
+        """
+        proxy = self.core.get_proxy('/')
+        try:
+            config = proxy.get('/adapters/' + self.adapter_name)
+            return config
+        except KeyError:
+            return None
+
+    def _update_adapter_node(self):
+        """
+        Creates or updates the adapter node object based on self
+        description from the adapter.
+        """
+
+        adapter_desc = self.adapter.adapter_descriptor()
+        assert adapter_desc.id == self.adapter_name
+        path = self._make_up_to_date(
+            '/adapters', self.adapter_name, adapter_desc)
+        return self.core.get_proxy(path)
+
+    def _update_device_types(self):
+        """
+        Make sure device types are registered in Core
+        """
+        device_types = self.adapter.device_types()
+        for device_type in device_types.items:
+            key = device_type.id
+            self._make_up_to_date('/device_types', key, device_type)
+
+    def _make_up_to_date(self, container_path, key, data):
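+        """
+        Ensure that `data` exists at container_path/key in the config tree:
+        update the entry if it is already there, add it otherwise, and
+        return the full path of the entry.
+        """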
+        full_path = container_path + '/' + str(key)
+        root_proxy = self.core.get_proxy('/')
+        try:
+            root_proxy.get(full_path)
+            root_proxy.update(full_path, data)
+        except KeyError:
+            root_proxy.add(container_path, data)
+        return full_path
+
+    # ~~~~~~~~~~~~~~~~~~~~~ Core-Facing Service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    def adopt_device(self, device):
+        return self.adapter.adopt_device(device)
+
+    def abandon_device(self, device):
+        return self.adapter.abandon_device(device)
+
+    def deactivate_device(self, device):
+        return self.adapter.deactivate_device(device)
+
+    # ~~~~~~~~~~~~~~~~~~~ Adapter-Facing Service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    def get_device(self, device_id):
+        return self.root_proxy.get('/devices/{}'.format(device_id))
+
+    def add_device(self, device):
+        assert isinstance(device, Device)
+        self._make_up_to_date('/devices', device.id, device)
+
+        # TODO for now, just map everything into a single device group
+        # which we create if it does not yet exist
+
+        dg = DeviceGroup(id='1')
+        self._make_up_to_date('/device_groups', dg.id, dg)
+
+        # add device to device group
+        # TODO how to do that?
+
+    def update_device(self, device):
+        assert isinstance(device, Device)
+
+        # we run the update through the device_agent so that the change
+        # does not loop back to the adapter unnecessarily
+        device_agent = self.core.get_device_agent(device.id)
+        device_agent.update_device(device)
+
+    def remove_device(self, device_id):
+        device_agent = self.core.get_device_agent(device_id)
+        device_agent.remove_device(device_id)
+
+    def add_port(self, device_id, port):
+        assert isinstance(port, Port)
+
+        # for referential integrity, add/augment references
+        port.device_id = device_id
+        me_as_peer = Port.PeerPort(device_id=device_id, port_no=port.port_no)
+        for peer in port.peers:
+            peer_port_path = '/devices/{}/ports/{}'.format(
+                peer.device_id, peer.port_no)
+            peer_port = self.root_proxy.get(peer_port_path)
+            if me_as_peer not in peer_port.peers:
+                new = peer_port.peers.add()
+                new.CopyFrom(me_as_peer)
+            self.root_proxy.update(peer_port_path, peer_port)
+
+        self._make_up_to_date('/devices/{}/ports'.format(device_id),
+                              port.port_no, port)
+
+    def create_logical_device(self, logical_device):
+        assert isinstance(logical_device, LogicalDevice)
+        self._make_up_to_date('/logical_devices',
+                              logical_device.id, logical_device)
+
+    def add_logical_port(self, logical_device_id, port):
+        assert isinstance(port, LogicalPort)
+        self._make_up_to_date(
+            '/logical_devices/{}/ports'.format(logical_device_id),
+            port.id, port)
+
+    def child_device_detected(self,
+                              parent_device_id,
+                              parent_port_no,
+                              child_device_type,
+                              child_device_address_kw):
+        # we create new ONU device objects and insert them into the config
+        # TODO should we auto-enable the freshly created device? Probably
+        device = Device(
+            id=uuid4().hex[:12],
+            type=child_device_type,
+            parent_id=parent_device_id,
+            parent_port_no=parent_port_no,
+            admin_state=AdminState.ENABLED,
+            **child_device_address_kw
+        )
+        self._make_up_to_date(
+            '/devices', device.id, device)
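+
+    # Illustrative only: an OLT adapter that detects an ONU on its PON port
+    # would typically call something along these lines (cf. simulated_olt's
+    # _simulate_detection_of_onus; `olt_device` here is a placeholder):
+    #
+    #   adapter_agent.child_device_detected(
+    #       parent_device_id=olt_device.id,
+    #       parent_port_no=1,
+    #       child_device_type='simulated_onu',
+    #       child_device_address_kw=dict(vlan=101))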
diff --git a/voltha/core/config/config_node.py b/voltha/core/config/config_node.py
index 969ba8e..c6a3949 100644
--- a/voltha/core/config/config_node.py
+++ b/voltha/core/config/config_node.py
@@ -289,8 +289,16 @@
         if change_announcements and branch._txid is None and \
                         self._proxy is not None:
             for change_type, data in change_announcements:
-                self._proxy.invoke_callbacks(
-                    change_type, data, proceed_on_errors=1)
+                # since the callback may itself operate on the config tree,
+                # we defer execution of the callbacks until the change has
+                # propagated to the root; the root then invokes them
+                self._root.enqueue_callback(
+                    self._proxy.invoke_callbacks,
+                    change_type,
+                    data,
+                    proceed_on_errors=1
+                )
 
     # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ add operation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/voltha/core/config/config_proxy.py b/voltha/core/config/config_proxy.py
index 0769a94..309e21f 100644
--- a/voltha/core/config/config_proxy.py
+++ b/voltha/core/config/config_proxy.py
@@ -133,6 +133,10 @@
         lst = self._callbacks.setdefault(callback_type, [])
         lst.append((callback, args, kw))
 
+    def unregister_callback(self, callback_type, callback, *args, **kw):
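+        """
+        Remove a previously registered callback; must be called with the same
+        callback, args and kw that were passed to register_callback, since
+        the exact (callback, args, kw) tuple is removed from the list.
+        """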
+        lst = self._callbacks.setdefault(callback_type, [])
+        lst.remove((callback, args, kw))
+
     # ~~~~~~~~~~~~~~~~~~~~~ Callback dispatch ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     def invoke_callbacks(self, callback_type, context, proceed_on_errors=False):
diff --git a/voltha/core/config/config_root.py b/voltha/core/config/config_root.py
index 6b45a90..f99ba13 100644
--- a/voltha/core/config/config_root.py
+++ b/voltha/core/config/config_root.py
@@ -32,7 +32,8 @@
         '_dirty_nodes',  # holds set of modified nodes per transaction branch
         '_kv_store',
         '_loading',
-        '_rev_cls'
+        '_rev_cls',
+        '_deferred_callback_queue'
     )
 
     def __init__(self, initial_data, kv_store=None, rev_cls=ConfigRevision):
@@ -43,6 +44,7 @@
                 not issubclass(rev_cls, PersistedConfigRevision):
             rev_cls = PersistedConfigRevision
         self._rev_cls = rev_cls
+        self._deferred_callback_queue = []
         super(ConfigRoot, self).__init__(self, initial_data, False)
 
     @property
@@ -76,49 +78,78 @@
             self.del_txbranch(txid)
             raise
 
-        self._merge_txbranch(txid)
+        try:
+            self._merge_txbranch(txid)
+        finally:
+            self.execute_deferred_callbacks()
 
     # ~~~~~~ Overridden, root-level CRUD methods to handle transactions ~~~~~~~
 
     def update(self, path, data, strict=None, txid=None, mk_branch=None):
         assert mk_branch is None
-        if txid is not None:
-            dirtied = self._dirty_nodes[txid]
+        self.check_callback_queue()
+        try:
+            if txid is not None:
+                dirtied = self._dirty_nodes[txid]
 
-            def track_dirty(node):
-                dirtied.add(node)
-                return node._mk_txbranch(txid)
+                def track_dirty(node):
+                    dirtied.add(node)
+                    return node._mk_txbranch(txid)
 
-            return super(ConfigRoot, self).update(path, data, strict,
-                                                      txid, track_dirty)
-        else:
-            return super(ConfigRoot, self).update(path, data, strict)
+                res = super(ConfigRoot, self).update(path, data, strict,
+                                                          txid, track_dirty)
+            else:
+                res = super(ConfigRoot, self).update(path, data, strict)
+        finally:
+            self.execute_deferred_callbacks()
+        return res
 
     def add(self, path, data, txid=None, mk_branch=None):
         assert mk_branch is None
-        if txid is not None:
-            dirtied = self._dirty_nodes[txid]
+        self.check_callback_queue()
+        try:
+            if txid is not None:
+                dirtied = self._dirty_nodes[txid]
 
-            def track_dirty(node):
-                dirtied.add(node)
-                return node._mk_txbranch(txid)
+                def track_dirty(node):
+                    dirtied.add(node)
+                    return node._mk_txbranch(txid)
 
-            return super(ConfigRoot, self).add(path, data, txid, track_dirty)
-        else:
-            return super(ConfigRoot, self).add(path, data)
+                res = super(ConfigRoot, self).add(path, data, txid, track_dirty)
+            else:
+                res = super(ConfigRoot, self).add(path, data)
+        finally:
+            self.execute_deferred_callbacks()
+        return res
 
     def remove(self, path, txid=None, mk_branch=None):
         assert mk_branch is None
-        if txid is not None:
-            dirtied = self._dirty_nodes[txid]
+        self.check_callback_queue()
+        try:
+            if txid is not None:
+                dirtied = self._dirty_nodes[txid]
 
-            def track_dirty(node):
-                dirtied.add(node)
-                return node._mk_txbranch(txid)
+                def track_dirty(node):
+                    dirtied.add(node)
+                    return node._mk_txbranch(txid)
 
-            return super(ConfigRoot, self).remove(path, txid, track_dirty)
-        else:
-            return super(ConfigRoot, self).remove(path)
+                res = super(ConfigRoot, self).remove(path, txid, track_dirty)
+            else:
+                res = super(ConfigRoot, self).remove(path)
+        finally:
+            self.execute_deferred_callbacks()
+        return res
+
+    def check_callback_queue(self):
+        assert len(self._deferred_callback_queue) == 0
+
+    def enqueue_callback(self, func, *args, **kw):
+        self._deferred_callback_queue.append((func, args, kw))
+
+    def execute_deferred_callbacks(self):
+        while self._deferred_callback_queue:
+            func, args, kw = self._deferred_callback_queue.pop(0)
+            func(*args, **kw)
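+
+    # Note: callbacks queued via enqueue_callback() (see the deferred proxy
+    # callback invocation in config_node.py) run here only after the
+    # add/update/remove has fully completed, so changes made by POST_*
+    # callbacks that themselves modify the config tree are not lost.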
 
     # ~~~~~~~~~~~~~~~~ Persistence related ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/voltha/core/core.py b/voltha/core/core.py
index 02e9c02..c20f311 100644
--- a/voltha/core/core.py
+++ b/voltha/core/core.py
@@ -17,22 +17,23 @@
 """
 Voltha's CORE components.
 """
+
+from Queue import Queue
+
 import structlog
 from twisted.internet.defer import inlineCallbacks, returnValue
 from zope.interface import implementer
 
-from common.utils.grpc_utils import twisted_async
-from voltha.core.config.config_root import ConfigRoot
-from voltha.protos import third_party
+from voltha.core.config.config_proxy import CallbackType
+from voltha.core.device_agent import DeviceAgent
+from voltha.core.dispatcher import Dispatcher
+from voltha.core.global_handler import GlobalHandler
+from voltha.core.local_handler import LocalHandler
+from voltha.core.logical_device_agent import LogicalDeviceAgent
 from voltha.protos.voltha_pb2 import \
-    add_VolthaGlobalServiceServicer_to_server, \
-    add_VolthaLocalServiceServicer_to_server, \
-    VolthaGlobalServiceServicer, VolthaLocalServiceStub, \
-    VolthaLocalServiceServicer, Voltha, VolthaInstance, VolthaInstances, \
-    Adapters, LogicalDevices, Ports, LogicalPorts, Flows, FlowGroups, Devices, \
-    DeviceTypes, DeviceGroups
-from voltha.registry import IComponent, registry
-from google.protobuf.empty_pb2 import Empty
+    VolthaLocalServiceStub, \
+    Device, LogicalDevice
+from voltha.registry import IComponent
 
 log = structlog.get_logger()
 
@@ -43,21 +44,33 @@
     def __init__(self, instance_id, version, log_level):
         self.instance_id = instance_id
         self.stopped = False
-        self.global_service = VolthaGlobalServiceHandler(
-            dispatcher=self,
+        self.dispatcher = Dispatcher(self, instance_id)
+        self.global_handler = GlobalHandler(
+            dispatcher=self.dispatcher,
             instance_id=instance_id,
             version=version,
             log_level=log_level)
-        self.local_service = VolthaLocalServiceHandler(
+        self.local_handler = LocalHandler(
+            core=self,
             instance_id=instance_id,
             version=version,
             log_level=log_level)
+        self.local_root_proxy = None
+        self.device_agents = {}
+        self.logical_device_agents = {}
+        self.packet_in_queue = Queue()
 
     @inlineCallbacks
     def start(self):
         log.debug('starting')
-        yield self.global_service.start()
-        yield self.local_service.start()
+        yield self.dispatcher.start()
+        yield self.global_handler.start()
+        yield self.local_handler.start()
+        self.local_root_proxy = self.get_proxy('/')
+        self.local_root_proxy.register_callback(
+            CallbackType.POST_ADD, self._post_add_callback)
+        self.local_root_proxy.register_callback(
+            CallbackType.POST_REMOVE, self._post_remove_callback)
         log.info('started')
         returnValue(self)
 
@@ -66,434 +79,64 @@
         self.stopped = True
         log.info('stopped')
 
+    def get_local_handler(self):
+        return self.local_handler
+
     def get_proxy(self, path, exclusive=False):
-        return self.local_service.get_proxy(path, exclusive)
+        return self.local_handler.get_proxy(path, exclusive)
 
-    # ~~~~~~~~~~~~~~~~~~~~~~~ DISPATCH LOGIC ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-    # TODO this shall be moved into its own module
-
-    def dispatch(self, instance_id, stub, method_name, input):
-        log.debug('dispatch', instance_id=instance_id, stub=stub,
-                  _method_name=method_name, input=input)
-        # special case if instance_id is us
-        if instance_id == self.instance_id:
-            # for now, we assume it is always the local stub
-            assert stub == VolthaLocalServiceStub
-            method = getattr(self.local_service, method_name)
-            log.debug('dispatching', method=method)
-            res = method(input, context=None)
-            log.debug('dispatch-success', res=res)
-            return res
-
+    def _post_add_callback(self, data, *args, **kw):
+        log.debug('added', data=data, args=args, kw=kw)
+        if isinstance(data, Device):
+            self._handle_add_device(data)
+        elif isinstance(data, LogicalDevice):
+            self._handle_add_logical_device(data)
         else:
-            raise NotImplementedError('cannot handle real dispatch yet')
+            pass  # ignore others
 
-    def instance_id_by_logical_device_id(self, logical_device_id):
-        log.warning('temp-mapping-logical-device-id')
-        # TODO no true dispatchong uyet, we blindly map everything to self
-        return self.instance_id
+    def _post_remove_callback(self, data, *args, **kw):
+        log.debug('removed', data=data, args=args, kw=kw)
+        if isinstance(data, Device):
+            self._handle_remove_device(data)
+        elif isinstance(data, LogicalDevice):
+            self._handle_remove_logical_device(data)
+        else:
+            pass  # ignore others
 
-    def instance_id_by_device_id(self, device_id):
-        log.warning('temp-mapping-logical-device-id')
-        # TODO no true dispatchong uyet, we blindly map everything to self
-        return self.instance_id
+    # ~~~~~~~~~~~~~~~~~~~~~~~~~~DeviceAgent Mgmt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-
-class VolthaGlobalServiceHandler(VolthaGlobalServiceServicer):
-
-    def __init__(self, dispatcher, instance_id, **init_kw):
-        self.dispatcher = dispatcher
-        self.instance_id = instance_id
-        self.init_kw = init_kw
-        self.root = None
-        self.stopped = False
-
-    def start(self):
-        log.debug('starting')
-        self.root = ConfigRoot(Voltha(**self.init_kw))
-        registry('grpc_server').register(
-            add_VolthaGlobalServiceServicer_to_server, self)
-        log.info('started')
-        return self
-
-    def stop(self):
-        log.debug('stopping')
-        self.stopped = True
-        log.info('stopped')
-
-    # gRPC service method implementations. BE CAREFUL; THESE ARE CALLED ON
-    # the gRPC threadpool threads.
-
-    @twisted_async
-    def GetVoltha(self, request, context):
-        log.info('grpc-request', request=request)
-        return self.root.get('/', depth=1)
-
-    @twisted_async
     @inlineCallbacks
-    def ListVolthaInstances(self, request, context):
-        log.info('grpc-request', request=request)
-        items = yield registry('coordinator').get_members()
-        returnValue(VolthaInstances(items=items))
+    def _handle_add_device(self, device):
+        # when a device is added, we attach an observer to it so that we can
+        # guard it and propagate changes down to its owner adapter
+        assert isinstance(device, Device)
+        path = '/devices/{}'.format(device.id)
+        assert device.id not in self.device_agents
+        self.device_agents[device.id] = yield DeviceAgent(self, device).start()
 
-    @twisted_async
-    def GetVolthaInstance(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = request.id
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'GetVolthaInstance',
-            Empty())
+    @inlineCallbacks
+    def _handle_remove_device(self, device):
+        if device.id in self.device_agents:
+            yield self.device_agents[device.id].stop()
+            del self.device_agents[device.id]
 
-    @twisted_async
-    def ListLogicalDevices(self, request, context):
-        log.warning('temp-limited-implementation')
-        # TODO dispatching to local instead of collecting all
-        return self.dispatcher.dispatch(
-            self.instance_id,
-            VolthaLocalServiceStub,
-            'ListLogicalDevices',
-            Empty())
+    def get_device_agent(self, device_id):
+        return self.device_agents[device_id]
 
-    @twisted_async
-    def GetLogicalDevice(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = self.dispatcher.instance_id_by_logical_device_id(
-            request.id
-        )
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'GetLogicalDevice',
-            request
-        )
+    # ~~~~~~~~~~~~~~~~~~~~~~~ LogicalDeviceAgent Mgmt ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-    @twisted_async
-    def ListLogicalDevicePorts(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = self.dispatcher.instance_id_by_logical_device_id(
-            request.id
-        )
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'ListLogicalDevicePorts',
-            request
-        )
+    @inlineCallbacks
+    def _handle_add_logical_device(self, logical_device):
+        assert isinstance(logical_device, LogicalDevice)
+        assert logical_device.id not in self.logical_device_agents
+        agent = yield LogicalDeviceAgent(self, logical_device).start()
+        self.logical_device_agents[logical_device.id] = agent
 
-    @twisted_async
-    def ListLogicalDeviceFlows(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = self.dispatcher.instance_id_by_logical_device_id(
-            request.id
-        )
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'ListLogicalDeviceFlows',
-            request
-        )
+    @inlineCallbacks
+    def _handle_remove_logical_device(self, logical_device):
+        if logical_device.id in self.logical_device_agents:
+            yield self.logical_device_agents[logical_device.id].stop()
+            del self.logical_device_agents[logical_device.id]
 
-    @twisted_async
-    def UpdateLogicalDeviceFlowTable(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = self.dispatcher.instance_id_by_logical_device_id(
-            request.id
-        )
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'UpdateLogicalDeviceFlowTable',
-            request
-        )
-
-    @twisted_async
-    def ListLogicalDeviceFlowGroups(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = self.dispatcher.instance_id_by_logical_device_id(
-            request.id
-        )
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'ListLogicalDeviceFlowGroups',
-            request
-        )
-
-    @twisted_async
-    def UpdateLogicalDeviceFlowGroupTable(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = self.dispatcher.instance_id_by_logical_device_id(
-            request.id
-        )
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'UpdateLogicalDeviceFlowGroupTable',
-            request
-        )
-
-    @twisted_async
-    def ListDevices(self, request, context):
-        log.warning('temp-limited-implementation')
-        # TODO dispatching to local instead of collecting all
-        return self.dispatcher.dispatch(
-            self.instance_id,
-            VolthaLocalServiceStub,
-            'ListDevices',
-            Empty())
-
-    @twisted_async
-    def GetDevice(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = self.dispatcher.instance_id_by_device_id(
-            request.id
-        )
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'GetDevice',
-            request
-        )
-
-    @twisted_async
-    def ListDevicePorts(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = self.dispatcher.instance_id_by_device_id(
-            request.id
-        )
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'ListDevicePorts',
-            request
-        )
-
-    @twisted_async
-    def ListDeviceFlows(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = self.dispatcher.instance_id_by_device_id(
-            request.id
-        )
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'ListDeviceFlows',
-            request
-        )
-
-    @twisted_async
-    def ListDeviceFlowGroups(self, request, context):
-        log.info('grpc-request', request=request)
-        instance_id = self.dispatcher.instance_id_by_device_id(
-            request.id
-        )
-        return self.dispatcher.dispatch(
-            instance_id,
-            VolthaLocalServiceStub,
-            'ListDeviceFlowGroups',
-            request
-        )
-
-    @twisted_async
-    def ListDeviceTypes(self, request, context):
-        log.info('grpc-request', request=request)
-        # we always deflect this to the local instance, as we assume
-        # they all loaded the same adapters, supporting the same device
-        # types
-        return self.dispatcher.dispatch(
-            self.instance_id,
-            VolthaLocalServiceStub,
-            'ListDeviceTypes',
-            request
-        )
-
-    @twisted_async
-    def GetDeviceType(self, request, context):
-        log.info('grpc-request', request=request)
-        # we always deflect this to the local instance, as we assume
-        # they all loaded the same adapters, supporting the same device
-        # types
-        return self.dispatcher.dispatch(
-            self.instance_id,
-            VolthaLocalServiceStub,
-            'GetDeviceType',
-            request
-        )
-
-    @twisted_async
-    def ListDeviceGroups(self, request, context):
-        log.warning('temp-limited-implementation')
-        # TODO dispatching to local instead of collecting all
-        return self.dispatcher.dispatch(
-            self.instance_id,
-            VolthaLocalServiceStub,
-            'ListDeviceGroups',
-            Empty())
-
-    @twisted_async
-    def GetDeviceGroup(self, request, context):
-        log.warning('temp-limited-implementation')
-        # TODO dispatching to local instead of collecting all
-        return self.dispatcher.dispatch(
-            self.instance_id,
-            VolthaLocalServiceStub,
-            'GetDeviceGroup',
-            request)
-
-
-class VolthaLocalServiceHandler(VolthaLocalServiceServicer):
-
-    def __init__(self, **init_kw):
-        self.init_kw = init_kw
-        self.root = None
-        self.stopped = False
-
-    def start(self):
-        log.debug('starting')
-        self.root = ConfigRoot(VolthaInstance(**self.init_kw))
-        registry('grpc_server').register(
-            add_VolthaLocalServiceServicer_to_server, self)
-        log.info('started')
-        return self
-
-    def stop(self):
-        log.debug('stopping')
-        self.stopped = True
-        log.info('stopped')
-
-    def get_proxy(self, path, exclusive=False):
-        return self.root.get_proxy(path, exclusive)
-
-    # gRPC service method implementations. BE CAREFUL; THESE ARE CALLED ON
-    # the gRPC threadpool threads.
-
-    @twisted_async
-    def GetVolthaInstance(self, request, context):
-        log.info('grpc-request', request=request)
-        return self.root.get('/', depth=1)
-
-    @twisted_async
-    def GetHealth(self, request, context):
-        log.info('grpc-request', request=request)
-        return self.root.get('/health')
-
-    @twisted_async
-    def ListAdapters(self, request, context):
-        log.info('grpc-request', request=request)
-        items = self.root.get('/adapters')
-        return Adapters(items=items)
-
-    @twisted_async
-    def ListLogicalDevices(self, request, context):
-        log.info('grpc-request', request=request)
-        items = self.root.get('/logical_devices')
-        return LogicalDevices(items=items)
-
-    @twisted_async
-    def GetLogicalDevice(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        return self.root.get('/logical_devices/' + request.id)
-
-    @twisted_async
-    def ListLogicalDevicePorts(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        items = self.root.get('/logical_devices/{}/ports'.format(request.id))
-        return LogicalPorts(items=items)
-
-    @twisted_async
-    def ListLogicalDeviceFlows(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        flows = self.root.get('/logical_devices/{}/flows'.format(request.id))
-        return flows
-
-    @twisted_async
-    def UpdateLogicalDeviceFlowTable(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        raise NotImplementedError()
-
-    @twisted_async
-    def ListLogicalDeviceFlowGroups(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        groups = self.root.get(
-            '/logical_devices/{}/flow_groups'.format(request.id))
-        return groups
-
-    @twisted_async
-    def UpdateLogicalDeviceFlowGroupTable(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        raise NotImplementedError()
-
-    @twisted_async
-    def ListDevices(self, request, context):
-        log.info('grpc-request', request=request)
-        items = self.root.get('/devices')
-        return Devices(items=items)
-
-    @twisted_async
-    def GetDevice(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        return self.root.get('/devices/' + request.id)
-
-    @twisted_async
-    def ListDevicePorts(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        items = self.root.get('/devices/{}/ports'.format(request.id))
-        return Ports(items=items)
-
-    @twisted_async
-    def ListDeviceFlows(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        flows = self.root.get('/devices/{}/flows'.format(request.id))
-        return flows
-
-    @twisted_async
-    def ListDeviceFlowGroups(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        groups = self.root.get('/devices/{}/flow_groups'.format(request.id))
-        return groups
-
-    @twisted_async
-    def ListDeviceTypes(self, request, context):
-        log.info('grpc-request', request=request)
-        items = self.root.get('/device_types')
-        return DeviceTypes(items=items)
-
-    @twisted_async
-    def GetDeviceType(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        return self.root.get('/device_types/' + request.id)
-
-    @twisted_async
-    def ListDeviceGroups(self, request, context):
-        log.info('grpc-request', request=request)
-        # TODO is this mapped to tree or taken from coordinator?
-        items = self.root.get('/device_groups')
-        return DeviceGroups(items=items)
-
-    @twisted_async
-    def GetDeviceGroup(self, request, context):
-        log.info('grpc-request', request=request)
-        assert '/' not in request.id
-        # TODO is this mapped to tree or taken from coordinator?
-        return self.root.get('/device_groups/' + request.id)
-
-    @twisted_async
-    def StreamPacketsOut(self, request_iterator, context):
-        raise NotImplementedError()
-
-    @twisted_async
-    def ReceivePacketsIn(self, request, context):
-        raise NotImplementedError()
+    def get_logical_device_agent(self, logical_device_id):
+        return self.logical_device_agents[logical_device_id]
diff --git a/voltha/core/device_agent.py b/voltha/core/device_agent.py
new file mode 100644
index 0000000..150a9fc
--- /dev/null
+++ b/voltha/core/device_agent.py
@@ -0,0 +1,174 @@
+#
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""
+A device agent is instantiated for each Device and mediates between the
+Device object and its adapter.
+"""
+import structlog
+from twisted.internet import reactor
+from twisted.internet.defer import inlineCallbacks, returnValue
+
+from voltha.core.config.config_proxy import CallbackType
+from voltha.protos.common_pb2 import AdminState, OperStatus
+from voltha.registry import registry
+
+log = structlog.get_logger()
+
+
+class InvalidStateTransition(Exception): pass
+
+
+class DeviceAgent(object):
+
+    def __init__(self, core, initial_data):
+        self.core = core
+        self._tmp_initial_data = initial_data
+        self.proxy = core.get_proxy('/devices/{}'.format(initial_data.id))
+        self.proxy.register_callback(
+            CallbackType.PRE_UPDATE, self._validate_update)
+        self.proxy.register_callback(
+            CallbackType.POST_UPDATE, self._process_update)
+        self.last_data = None
+        self.adapter_agent = None
+
+    @inlineCallbacks
+    def start(self):
+        log.debug('starting')
+        self._set_adapter_agent()
+        yield self._process_update(self._tmp_initial_data)
+        del self._tmp_initial_data
+        log.info('started')
+        returnValue(self)
+
+    def stop(self):
+        log.debug('stopping')
+        self.proxy.unregister_callback(
+            CallbackType.PRE_UPDATE, self._validate_update)
+        self.proxy.unregister_callback(
+            CallbackType.POST_UPDATE, self._process_update)
+        log.info('stopped')
+
+    def _set_adapter_agent(self):
+        adapter_name = self._tmp_initial_data.adapter
+        if adapter_name == '':
+            proxy = self.core.get_proxy('/')
+            known_device_types = dict(
+                (dt.id, dt) for dt in proxy.get('/device_types'))
+            device_type = known_device_types[self._tmp_initial_data.type]
+            adapter_name = device_type.adapter
+        assert adapter_name != ''
+        self.adapter_agent = registry('adapter_loader').get_agent(adapter_name)
+
+    @inlineCallbacks
+    def _validate_update(self, device):
+        """
+        Called before each update; it can block the update (by raising an
+        exception) or augment the incoming data before it is committed.
+        """
+        log.debug('device-pre-update', device=device)
+        yield self._process_state_transitions(device, dry_run=True)
+        returnValue(device)
+
+    @inlineCallbacks
+    def _process_update(self, device):
+        """
+        Called after the device object has been updated (individually or as
+        part of a transaction); it is used to propagate the change down to
+        the adapter.
+        """
+        log.debug('device-post-update', device=device)
+
+        # first, process any potential state transition
+        yield self._process_state_transitions(device)
+
+        # finally, store this data as last data so we can see what changed
+        self.last_data = device
+
+    @inlineCallbacks
+    def _process_state_transitions(self, device, dry_run=False):
+
+        old_admin_state = getattr(self.last_data, 'admin_state',
+                                   AdminState.UNKNOWN)
+        new_admin_state = device.admin_state
+        transition_handler = self.admin_state_fsm.get(
+            (old_admin_state, new_admin_state), None)
+        if transition_handler is None:
+            pass  # no-op
+        elif transition_handler is False:
+            raise InvalidStateTransition('{} -> {}'.format(
+                old_admin_state, new_admin_state))
+        else:
+            assert callable(transition_handler)
+            yield transition_handler(self, device, dry_run)
+
+    @inlineCallbacks
+    def _activate_device(self, device, dry_run=False):
+        log.info('activate-device', device=device, dry_run=dry_run)
+        if not dry_run:
+            device = yield self.adapter_agent.adopt_device(device)
+            device.oper_status = OperStatus.ACTIVATING
+            # a successful return from adopt_device() may also have
+            # populated the device data, so we need to write it back
+            reactor.callLater(0, self.update_device, device)
+
+    def update_device(self, device):
+        self.last_data = device  # so that we don't propagate back
+        self.proxy.update('/', device)
+
+    def remove_device(self, device_id):
+        raise NotImplementedError()
+
+    def _propagate_change(self, device, dry_run=False):
+        log.info('propagate-change', device=device, dry_run=dry_run)
+        if device != self.last_data:
+            raise NotImplementedError()
+        else:
+            log.debug('no-op')
+
+    def _abandon_device(self, device, dry_run=False):
+        log.info('abandon-device', device=device, dry_run=dry_run)
+        raise NotImplementedError()
+
+    def _disable_device(self, device, dry_run=False):
+        log.info('disable-device', device=device, dry_run=dry_run)
+        raise NotImplementedError()
+
+    def _reenable_device(self, device, dry_run=False):
+        log.info('reenable-device', device=device, dry_run=dry_run)
+        raise NotImplementedError()
+
+    admin_state_fsm = {
+
+        # Missing entries yield no-op
+        # False means invalid state change
+
+        (AdminState.UNKNOWN, AdminState.ENABLED): _activate_device,
+
+        (AdminState.PREPROVISIONED, AdminState.UNKNOWN): False,
+        (AdminState.PREPROVISIONED, AdminState.ENABLED): _activate_device,
+
+        (AdminState.ENABLED, AdminState.UNKNOWN): False,
+        (AdminState.ENABLED, AdminState.ENABLED): _propagate_change,
+        (AdminState.ENABLED, AdminState.DISABLED): _disable_device,
+        (AdminState.ENABLED, AdminState.PREPROVISIONED): _abandon_device,
+
+        (AdminState.DISABLED, AdminState.UNKNOWN): False,
+        (AdminState.DISABLED, AdminState.PREPROVISIONED): _abandon_device,
+        (AdminState.DISABLED, AdminState.ENABLED): _reenable_device
+
+    }
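+
+
+# Illustrative sketch only (not used by the agent itself): how a transition
+# is resolved against the admin_state_fsm table above. A missing key means
+# no-op, False marks an invalid transition, and a callable is the handler
+# that _process_state_transitions() will yield.
+def _example_transition(old_state, new_state):
+    handler = DeviceAgent.admin_state_fsm.get((old_state, new_state), None)
+    if handler is None:
+        return 'no-op'
+    elif handler is False:
+        return 'invalid'
+    return handler.__name__  # e.g. '_activate_device'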
diff --git a/voltha/core/device_graph.py b/voltha/core/device_graph.py
new file mode 100644
index 0000000..537124d
--- /dev/null
+++ b/voltha/core/device_graph.py
@@ -0,0 +1,123 @@
+#
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import networkx as nx
+
+from voltha.core.flow_decomposer import RouteHop
+
+
+class DeviceGraph(object):
+
+    """
+    Mixin class to compute routes in the device graph within
+    a logical device.
+    """
+
+    def compute_routes(self, root_proxy, logical_ports):
+        boundary_ports, graph = self._build_graph(root_proxy, logical_ports)
+        routes = self._build_routes(boundary_ports, graph)
+        return graph, routes
+
+    def _build_graph(self, root_proxy, logical_ports):
+
+        graph = nx.Graph()
+
+        # walk logical device's device and port links to discover full graph
+        devices_added = set()  # set of device.id's
+        ports_added = set()  # set of (device.id, port_no) tuples
+        peer_links = set()
+
+        boundary_ports = dict(
+            ((lp.device_id, lp.device_port_no), lp.ofp_port.port_no)
+            for lp in logical_ports
+        )
+
+        def add_device(device):
+            if device.id in devices_added:
+                return
+
+            graph.add_node(device.id, device=device)
+            devices_added.add(device.id)
+
+            ports = root_proxy.get('/devices/{}/ports'.format(device.id))
+            for port in ports:
+                port_id = (device.id, port.port_no)
+                if port_id not in ports_added:
+                    boundary = port_id in boundary_ports
+                    graph.add_node(port_id, port=port, boundary=boundary)
+                    graph.add_edge(device.id, port_id)
+                for peer in port.peers:
+                    if peer.device_id not in devices_added:
+                        peer_device = root_proxy.get(
+                            'devices/{}'.format(peer.device_id))
+                        add_device(peer_device)
+                    else:
+                        peer_port_id = (peer.device_id, peer.port_no)
+                        if port_id < peer_port_id:
+                            peer_link = (port_id, peer_port_id)
+                        else:
+                            peer_link = (peer_port_id, port_id)
+                        if peer_link not in peer_links:
+                            graph.add_edge(*peer_link)
+                            peer_links.add(peer_link)
+
+        for logical_port in logical_ports:
+            device_id = logical_port.device_id
+            device = root_proxy.get('/devices/{}'.format(device_id))
+            add_device(device)
+
+        return boundary_ports, graph
+
+    def _build_routes(self, boundary_ports, graph):
+
+        routes = {}
+
+        for source, source_port_no in boundary_ports.iteritems():
+            for target, target_port_no in boundary_ports.iteritems():
+
+                if source is target:
+                    continue
+
+                path = nx.shortest_path(graph, source, target)
+
+                # the number of nodes in a valid path is always a multiple of 3
+                if len(path) % 3:
+                    continue
+
+                # in fact, we currently deal with single fan-out networks,
+                # so the number of nodes in a path is always 6
+                assert len(path) == 6
+
+                ingress_input_port, ingress_device, ingress_output_port, \
+                egress_input_port, egress_device, egress_output_port = path
+
+                ingress_hop = RouteHop(
+                    device=graph.node[ingress_device]['device'],
+                    ingress_port=graph.node[ingress_input_port]['port'],
+                    egress_port=graph.node[ingress_output_port]['port']
+                )
+                egress_hop = RouteHop(
+                    device=graph.node[egress_device]['device'],
+                    ingress_port=graph.node[egress_input_port]['port'],
+                    egress_port=graph.node[egress_output_port]['port']
+                )
+
+                routes[(source_port_no, target_port_no)] = [
+                    ingress_hop, egress_hop
+                ]
+
+        return routes
+
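+
+# Illustrative sketch only (hypothetical stand-in objects instead of the
+# real protobuf messages and config proxy): driving compute_routes() for a
+# one-OLT / one-ONU topology. Routes are keyed by (ingress, egress)
+# logical OpenFlow port number pairs and hold two RouteHop objects each.
+def _example_routes():
+    from collections import namedtuple
+    Dev = namedtuple('Dev', 'id root')
+    Port = namedtuple('Port', 'port_no peers')
+    Peer = namedtuple('Peer', 'device_id port_no')
+    OfpPort = namedtuple('OfpPort', 'port_no')
+    LogicalPort = namedtuple('LogicalPort',
+                             'device_id device_port_no ofp_port')
+
+    devices = {'olt': Dev('olt', True), 'onu': Dev('onu', False)}
+    ports = {
+        'olt': [Port(1, [Peer('onu', 2)]), Port(129, [])],
+        'onu': [Port(1, []), Port(2, [Peer('olt', 1)])]
+    }
+
+    class FakeProxy(object):
+        def get(self, path):
+            parts = path.strip('/').split('/')
+            if len(parts) == 2:
+                return devices[parts[1]]  # /devices/<id>
+            return ports[parts[1]]        # /devices/<id>/ports
+
+    logical_ports = [
+        LogicalPort('onu', 1, OfpPort(1)),     # ONU UNI as ofp port 1
+        LogicalPort('olt', 129, OfpPort(129))  # OLT NNI as ofp port 129
+    ]
+    _, routes = DeviceGraph().compute_routes(FakeProxy(), logical_ports)
+    ingress_hop, egress_hop = routes[(1, 129)]  # upstream path: ONU then OLT
+    assert ingress_hop.device.id == 'onu' and egress_hop.device.id == 'olt'
+    return routes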
diff --git a/voltha/core/device_model.py b/voltha/core/device_model.py
deleted file mode 100644
index 262e18f..0000000
--- a/voltha/core/device_model.py
+++ /dev/null
@@ -1,424 +0,0 @@
-#
-# Copyright 2016 the original author or authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-"""
-Model that captures the current state of a logical device
-"""
-import threading
-import sys
-
-import structlog
-
-from voltha.protos import third_party
-from voltha.protos import voltha_pb2
-from voltha.protos import openflow_13_pb2 as ofp
-
-log = structlog.get_logger()
-_ = third_party
-
-def mac_str_to_tuple(mac):
-    return tuple(int(d, 16) for d in mac.split(':'))
-
-
-def flow_stats_entry_from_flow_mod_message(mod):
-    flow = ofp.ofp_flow_stats(
-        table_id=mod.table_id,
-        priority=mod.priority,
-        idle_timeout=mod.idle_timeout,
-        hard_timeout=mod.hard_timeout,
-        flags=mod.flags,
-        cookie=mod.cookie,
-        match=mod.match,
-        instructions=mod.instructions
-    )
-    return flow
-
-
-def group_entry_from_group_mod(mod):
-    group = ofp.ofp_group_entry(
-        desc=ofp.ofp_group_desc(
-            type=mod.type,
-            group_id=mod.group_id,
-            buckets=mod.buckets
-        ),
-        stats=ofp.ofp_group_stats(
-            group_id=mod.group_id
-            # TODO do we need to instantiate bucket bins?
-        )
-    )
-    return group
-
-
-class DeviceModel(object):
-
-    def __init__(self, grpc_server, id):
-        self.grpc_server = grpc_server
-
-        self.info = voltha_pb2.LogicalDeviceDetails(
-            id=str(id),
-            datapath_id=id,
-            desc=ofp.ofp_desc(
-                mfr_desc="CORD/Voltha",
-                hw_desc="Synthetized/logical device",
-                sw_desc="Voltha 1.0",
-                serial_num="1000219910",
-                dp_desc="A logical device. Use the TBD API to learn more"
-            ),
-            switch_features=ofp.ofp_switch_features(
-                n_buffers=256, # TODO fake for now
-                n_tables=2,  # TODO ditto
-                capabilities=(  # TODO and ditto
-                    ofp.OFPC_FLOW_STATS
-                    | ofp.OFPC_TABLE_STATS
-                    | ofp.OFPC_PORT_STATS
-                    | ofp.OFPC_GROUP_STATS
-                )
-            )
-        )
-
-        cap = ofp.OFPPF_1GB_FD | ofp.OFPPF_FIBER
-        self.ports = [ofp.ofp_port(
-                port_no=port_no,
-                hw_addr=mac_str_to_tuple('00:00:00:00:00:%02x' % port_no),
-                name=name,
-                config=0,
-                state=ofp.OFPPS_LIVE,
-                curr=cap,
-                advertised=cap,
-                peer=cap,
-                curr_speed=ofp.OFPPF_1GB_FD,
-                max_speed=ofp.OFPPF_1GB_FD
-            ) for port_no, name in [(1, 'onu1'), (2, 'onu2'), (129, 'olt1')]]
-
-        self.flows = []
-        self.groups = {}
-
-    def announce_flows_deleted(self, flows):
-        for f in flows:
-            self.announce_flow_deleted(f)
-
-    def announce_flow_deleted(self, flow):
-        if flow.flags & ofp.OFPFF_SEND_FLOW_REM:
-            raise NotImplementedError("announce_flow_deleted")
-
-    def signal_flow_mod_error(self, code, flow_mod):
-        pass  # TODO
-
-    def signal_flow_removal(self, code, flow):
-        pass  # TODO
-
-    def signal_group_mod_error(self, code, group_mod):
-        pass  # TODO
-
-    def update_flow_table(self, flow_mod):
-
-        command = flow_mod.command
-
-        if command == ofp.OFPFC_ADD:
-            self.flow_add(flow_mod)
-
-        elif command == ofp.OFPFC_DELETE:
-            self.flow_delete(flow_mod)
-
-        elif command == ofp.OFPFC_DELETE_STRICT:
-            self.flow_delete_strict(flow_mod)
-
-        elif command == ofp.OFPFC_MODIFY:
-            self.flow_modify(flow_mod)
-
-        elif command == ofp.OFPFC_MODIFY_STRICT:
-            self.flow_modify_strict(flow_mod)
-
-        else:
-            log.warn('unhandled-flow-mod', command=command, flow_mod=flow_mod)
-
-    def list_flows(self):
-        return self.flows
-
-    def update_group_table(self, group_mod):
-
-        command = group_mod.command
-
-        if command == ofp.OFPGC_DELETE:
-            self.group_delete(group_mod)
-
-        elif command == ofp.OFPGC_ADD:
-            self.group_add(group_mod)
-
-        elif command == ofp.OFPGC_MODIFY:
-            self.group_modify(group_mod)
-
-        else:
-            log.warn('unhandled-group-mod', command=command,
-                     group_mod=group_mod)
-
-    def list_groups(self):
-        return self.groups.values()
-
-    ## <=============== LOW LEVEL FLOW HANDLERS ==============================>
-
-    def flow_add(self, mod):
-        assert isinstance(mod, ofp.ofp_flow_mod)
-        assert mod.cookie_mask == 0
-
-        check_overlap = mod.flags & ofp.OFPFF_CHECK_OVERLAP
-        if check_overlap:
-            if self.find_overlapping_flows(mod, True):
-                self.signal_flow_mod_error(
-                    ofp.OFPFMFC_OVERLAP, mod)
-            else:
-                # free to add as new flow
-                flow = flow_stats_entry_from_flow_mod_message(mod)
-                self.flows.append(flow)
-                log.debug('flow-added', flow=mod)
-
-        else:
-            flow = flow_stats_entry_from_flow_mod_message(mod)
-            idx = self.find_flow(flow)
-            if idx >= 0:
-                old_flow = self.flows[idx]
-                if not (mod.flags & ofp.OFPFF_RESET_COUNTS):
-                    flow.byte_count = old_flow.byte_count
-                    flow.packet_count = old_flow.packet_count
-                self.flows[idx] = flow
-                log.debug('flow-updated', flow=flow)
-
-            else:
-                self.flows.append(flow)
-                log.debug('flow-added', flow=mod)
-
-    def flow_delete(self, mod):
-        assert isinstance(mod, ofp.ofp_flow_mod)
-
-        # build a list of what to keep vs what to delete
-        to_keep = []
-        to_delete = []
-        for f in self.flows:
-            if self.flow_matches_spec(f, mod):
-                to_delete.append(f)
-            else:
-                to_keep.append(f)
-
-        # replace flow table with keepers
-        self.flows = to_keep
-
-        # send notifications for discarded flow as required by OpenFlow
-        self.announce_flows_deleted(to_delete)
-
-    def flow_delete_strict(self, mod):
-        assert isinstance(mod, ofp.ofp_flow_mod)
-        flow = flow_stats_entry_from_flow_mod_message(mod)
-        idx = self.find_flow(flow)
-        if (idx >= 0):
-            del self.flows[idx]
-        else:
-            # TODO need to check what to do with this case
-            log.warn('flow-cannot-delete', flow=flow)
-
-    def flow_modify(self, mod):
-        raise NotImplementedError()
-
-    def flow_modify_strict(self, mod):
-        raise NotImplementedError()
-
-    def find_overlapping_flows(self, mod, return_on_first=False):
-        """
-        Return list of overlapping flow(s)
-        Two flows overlap if a packet may match both and if they have the
-        same priority.
-        :param mod: Flow request
-        :param return_on_first: if True, return with the first entry
-        :return:
-        """
-        return []  # TODO finish implementation
-
-    def find_flow(self, flow):
-        for i, f in enumerate(self.flows):
-            if self.flow_match(f, flow):
-                return i
-        return -1
-
-    def flow_match(self, f1, f2):
-        keys_matter = ('table_id', 'priority', 'flags', 'cookie', 'match')
-        for key in keys_matter:
-            if getattr(f1, key) != getattr(f2, key):
-                return False
-        return True
-
-    def flow_matches_spec(self, flow, flow_mod):
-        """
-        Return True if given flow (ofp_flow_stats) is "covered" by the
-        wildcard flow_mod (ofp_flow_mod), taking into consideration of
-        both exact mactches as well as masks-based match fields if any.
-        Otherwise return False
-        :param flow: ofp_flow_stats
-        :param mod: ofp_flow_mod
-        :return: Bool
-        """
-
-        assert isinstance(flow, ofp.ofp_flow_stats)
-        assert isinstance(flow_mod, ofp.ofp_flow_mod)
-
-        # Check if flow.cookie is covered by mod.cookie and mod.cookie_mask
-        if (flow.cookie & flow_mod.cookie_mask) != \
-                (flow_mod.cookie & flow_mod.cookie_mask):
-            return False
-
-        # Check if flow.table_id is covered by flow_mod.table_id
-        if flow_mod.table_id != ofp.OFPTT_ALL and \
-                        flow.table_id != flow_mod.table_id:
-            return False
-
-        # Check out_port
-        if flow_mod.out_port != ofp.OFPP_ANY and \
-                not self.flow_has_out_port(flow, flow_mod.out_port):
-            return False
-
-        # Check out_group
-        if flow_mod.out_group != ofp.OFPG_ANY and \
-                not self.flow_has_out_group(flow, flow_mod.out_group):
-            return False
-
-        # Priority is ignored
-
-        # Check match condition
-        # If the flow_mod match field is empty, that is a special case and
-        # indicates the flow entry matches
-        match = flow_mod.match
-        assert isinstance(match, ofp.ofp_match)
-        if not match.oxm_list:
-            # If we got this far and the match is empty in the flow spec,
-            # than the flow matches
-            return True
-        else:
-            raise NotImplementedError(
-                "flow_matches_spec(): No flow match analysis yet")
-
-    def flow_has_out_port(self, flow, out_port):
-        """
-        Return True if flow has a output command with the given out_port
-        """
-        assert isinstance(flow, ofp.ofp_flow_stats)
-        for instruction in flow.instructions:
-            assert isinstance(instruction, ofp.ofp_instruction)
-            if instruction.type == ofp.OFPIT_APPLY_ACTIONS:
-                for action in instruction.actions.actions:
-                    assert isinstance(action, ofp.ofp_action)
-                    if action.type == ofp.OFPAT_OUTPUT and \
-                        action.output.port == out_port:
-                        return True
-
-        # otherwise...
-        return False
-
-    def flow_has_out_group(self, flow, group_id):
-        """
-        Return True if flow has a output command with the given out_group
-        """
-        assert isinstance(flow, ofp.ofp_flow_stats)
-        for instruction in flow.instructions:
-            assert isinstance(instruction, ofp.ofp_instruction)
-            if instruction.type == ofp.OFPIT_APPLY_ACTIONS:
-                for action in instruction.actions.actions:
-                    assert isinstance(action, ofp.ofp_action)
-                    if action.type == ofp.OFPAT_GROUP and \
-                        action.group.group_id == group_id:
-                            return True
-
-        # otherwise...
-        return False
-
-    def flows_delete_by_group_id(self, group_id):
-        """
-        Delete any flow(s) referring to given group_id
-        :param group_id:
-        :return: None
-        """
-        to_keep = []
-        to_delete = []
-        for f in self.flows:
-            if self.flow_has_out_group(f, group_id):
-                to_delete.append(f)
-            else:
-                to_keep.append(f)
-
-        # replace flow table with keepers
-        self.flows = to_keep
-
-        # send notification to deleted ones
-        self.announce_flows_deleted(to_delete)
-
-    ## <=============== LOW LEVEL GROUP HANDLERS =============================>
-
-    def group_add(self, group_mod):
-        assert isinstance(group_mod, ofp.ofp_group_mod)
-        if group_mod.group_id in self.groups:
-            self.signal_group_mod_error(ofp.OFPGMFC_GROUP_EXISTS, group_mod)
-        else:
-            group_entry = group_entry_from_group_mod(group_mod)
-            self.groups[group_mod.group_id] = group_entry
-
-    def group_delete(self, group_mod):
-        assert isinstance(group_mod, ofp.ofp_group_mod)
-        group_id = group_mod.group_id
-        if group_id == ofp.OFPG_ALL:
-            # TODO we must delete all flows that point to this group and
-            # signal controller as requested by flow's flag
-            self.groups = {}
-            log.debug('all-groups-deleted')
-
-        else:
-            if group_id not in self.groups:
-                # per openflow spec, this is not an error
-                pass
-
-            else:
-                self.flows_delete_by_group_id(group_id)
-                del self.groups[group_id]
-                log.debug('group-deleted', group_id=group_id)
-
-    def group_modify(self, group_mod):
-        assert isinstance(group_mod, ofp.ofp_group_mod)
-        if group_mod.group_id not in self.groups:
-            self.signal_group_mod_error(
-                ofp.OFPGMFC_INVALID_GROUP, group_mod)
-        else:
-            # replace existing group entry with new group definition
-            group_entry = group_entry_from_group_mod(group_mod)
-            self.groups[group_mod.group_id] = group_entry
-
-    ## <=============== PACKET_OUT ===========================================>
-
-    def packet_out(self, ofp_packet_out):
-        log.debug('packet-out', packet=ofp_packet_out)
-        print threading.current_thread().name
-        print 'PACKET_OUT:', ofp_packet_out
-        # TODO for debug purposes, lets turn this around and send it back
-        if 0:
-            self.packet_in(ofp.ofp_packet_in(
-                buffer_id=ofp_packet_out.buffer_id,
-                reason=ofp.OFPR_NO_MATCH,
-                data=ofp_packet_out.data
-            ))
-
-
-
-    ## <=============== PACKET_IN ============================================>
-
-    def packet_in(self, ofp_packet_in):
-        # TODO
-        print 'PACKET_IN:', ofp_packet_in
-        self.grpc_server.send_packet_in(self.info.id, ofp_packet_in)
diff --git a/voltha/core/dispatcher.py b/voltha/core/dispatcher.py
new file mode 100644
index 0000000..aa40d63
--- /dev/null
+++ b/voltha/core/dispatcher.py
@@ -0,0 +1,71 @@
+#
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""
+The Dispatcher is responsible for dispatching incoming "global" gRPC
+requests to the respective Voltha instance (leader, peer instance, or
+local). Local calls are forwarded to the LocalHandler.
+"""
+import structlog
+
+from voltha.protos.voltha_pb2 import VolthaLocalServiceStub
+
+log = structlog.get_logger()
+
+
+class Dispatcher(object):
+
+    def __init__(self, core, instance_id):
+        self.core = core
+        self.instance_id = instance_id
+        self.local_handler = None
+
+    def start(self):
+        log.debug('starting')
+        self.local_handler = self.core.get_local_handler()
+        log.info('started')
+        return self
+
+    def stop(self):
+        log.debug('stopping')
+        log.info('stopped')
+
+    def dispatch(self, instance_id, stub, method_name, input, context):
+        log.debug('dispatch', instance_id=instance_id, stub=stub,
+                  _method_name=method_name, input=input)
+        # special case if instance_id is us
+        if instance_id == self.instance_id:
+            # for now, we assume it is always the local stub
+            assert stub == VolthaLocalServiceStub
+            method = getattr(self.local_handler, method_name)
+            log.debug('dispatching', method=method)
+            res = method(input, context=context)
+            log.debug('dispatch-success', res=res)
+            return res
+
+        else:
+            log.warning('no-real-dispatch-yet')
+            raise KeyError()
+
+    def instance_id_by_logical_device_id(self, logical_device_id):
+        log.warning('temp-mapping-logical-device-id')
+        # TODO no true dispatching yet, we blindly map everything to self
+        return self.instance_id
+
+    def instance_id_by_device_id(self, device_id):
+        log.warning('temp-mapping-device-id')
+        # TODO no true dispatching yet, we blindly map everything to self
+        return self.instance_id
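+
+
+# Illustrative sketch only (hypothetical fakes standing in for the real
+# core and gRPC objects): a "local" dispatch is simply a method call on the
+# LocalHandler obtained from the core.
+def _example_local_dispatch():
+    class FakeLocalHandler(object):
+        def GetDevice(self, request, context=None):
+            return 'device-%s' % request
+
+    class FakeCore(object):
+        def get_local_handler(self):
+            return FakeLocalHandler()
+
+    dispatcher = Dispatcher(FakeCore(), instance_id='1').start()
+    # today every device/logical-device id maps back to this instance
+    instance_id = dispatcher.instance_id_by_device_id('abc123')
+    return dispatcher.dispatch(instance_id, VolthaLocalServiceStub,
+                               'GetDevice', 'abc123', context=None)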
diff --git a/voltha/core/flow_decomposer.py b/voltha/core/flow_decomposer.py
new file mode 100644
index 0000000..2bd690c
--- /dev/null
+++ b/voltha/core/flow_decomposer.py
@@ -0,0 +1,647 @@
+#
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""
+A mix-in class implementing flow decomposition
+"""
+from collections import OrderedDict
+from copy import copy, deepcopy
+from hashlib import md5
+
+from voltha.protos import openflow_13_pb2 as ofp
+
+# aliases
+ofb_field = ofp.ofp_oxm_ofb_field
+action = ofp.ofp_action
+
+# OFPAT_* shortcuts
+OUTPUT = ofp.OFPAT_OUTPUT
+COPY_TTL_OUT = ofp.OFPAT_COPY_TTL_OUT
+COPY_TTL_IN = ofp.OFPAT_COPY_TTL_IN
+SET_MPLS_TTL = ofp.OFPAT_SET_MPLS_TTL
+DEC_MPLS_TTL = ofp.OFPAT_DEC_MPLS_TTL
+PUSH_VLAN = ofp.OFPAT_PUSH_VLAN
+POP_VLAN = ofp.OFPAT_POP_VLAN
+PUSH_MPLS = ofp.OFPAT_PUSH_MPLS
+POP_MPLS = ofp.OFPAT_POP_MPLS
+SET_QUEUE = ofp.OFPAT_SET_QUEUE
+GROUP = ofp.OFPAT_GROUP
+SET_NW_TTL = ofp.OFPAT_SET_NW_TTL
+NW_TTL = ofp.OFPAT_DEC_NW_TTL
+SET_FIELD = ofp.OFPAT_SET_FIELD
+PUSH_PBB = ofp.OFPAT_PUSH_PBB
+POP_PBB = ofp.OFPAT_POP_PBB
+EXPERIMENTER = ofp.OFPAT_EXPERIMENTER
+
+# OFPXMT_OFB_* shortcuts (incomplete)
+IN_PORT = ofp.OFPXMT_OFB_IN_PORT
+IN_PHY_PORT = ofp.OFPXMT_OFB_IN_PHY_PORT
+METADATA = ofp.OFPXMT_OFB_METADATA
+ETH_DST = ofp.OFPXMT_OFB_ETH_DST
+ETH_SRC = ofp.OFPXMT_OFB_ETH_SRC
+ETH_TYPE = ofp.OFPXMT_OFB_ETH_TYPE
+VLAN_VID = ofp.OFPXMT_OFB_VLAN_VID
+VLAN_PCP = ofp.OFPXMT_OFB_VLAN_PCP
+IP_DSCP = ofp.OFPXMT_OFB_IP_DSCP
+IP_ECN = ofp.OFPXMT_OFB_IP_ECN
+IP_PROTO = ofp.OFPXMT_OFB_IP_PROTO
+IPV4_SRC = ofp.OFPXMT_OFB_IPV4_SRC
+IPV4_DST = ofp.OFPXMT_OFB_IPV4_DST
+TCP_SRC = ofp.OFPXMT_OFB_TCP_SRC
+TCP_DST = ofp.OFPXMT_OFB_TCP_DST
+UDP_SRC = ofp.OFPXMT_OFB_UDP_SRC
+UDP_DST = ofp.OFPXMT_OFB_UDP_DST
+SCTP_SRC = ofp.OFPXMT_OFB_SCTP_SRC
+SCTP_DST = ofp.OFPXMT_OFB_SCTP_DST
+ICMPV4_TYPE = ofp.OFPXMT_OFB_ICMPV4_TYPE
+ICMPV4_CODE = ofp.OFPXMT_OFB_ICMPV4_CODE
+ARP_OP = ofp.OFPXMT_OFB_ARP_OP
+ARP_SPA = ofp.OFPXMT_OFB_ARP_SPA
+ARP_TPA = ofp.OFPXMT_OFB_ARP_TPA
+ARP_SHA = ofp.OFPXMT_OFB_ARP_SHA
+ARP_THA = ofp.OFPXMT_OFB_ARP_THA
+IPV6_SRC = ofp.OFPXMT_OFB_IPV6_SRC
+IPV6_DST = ofp.OFPXMT_OFB_IPV6_DST
+IPV6_FLABEL = ofp.OFPXMT_OFB_IPV6_FLABEL
+ICMPV6_TYPE = ofp.OFPXMT_OFB_ICMPV6_TYPE
+ICMPV6_CODE = ofp.OFPXMT_OFB_ICMPV6_CODE
+IPV6_ND_TARGET = ofp.OFPXMT_OFB_IPV6_ND_TARGET
+OFB_IPV6_ND_SLL = ofp.OFPXMT_OFB_IPV6_ND_SLL
+IPV6_ND_TLL = ofp.OFPXMT_OFB_IPV6_ND_TLL
+MPLS_LABEL = ofp.OFPXMT_OFB_MPLS_LABEL
+MPLS_TC = ofp.OFPXMT_OFB_MPLS_TC
+MPLS_BOS = ofp.OFPXMT_OFB_MPLS_BOS
+PBB_ISID = ofp.OFPXMT_OFB_PBB_ISID
+TUNNEL_ID = ofp.OFPXMT_OFB_TUNNEL_ID
+IPV6_EXTHDR = ofp.OFPXMT_OFB_IPV6_EXTHDR
+
+# ofp_action_* shortcuts
+
+def output(port, max_len=ofp.OFPCML_MAX):
+    return action(
+        type=OUTPUT,
+        output=ofp.ofp_action_output(port=port, max_len=max_len)
+    )
+
+def mpls_ttl(ttl):
+    return action(
+        type=SET_MPLS_TTL,
+        mpls_ttl=ofp.ofp_action_mpls_ttl(mpls_ttl=ttl)
+    )
+
+def push_vlan(eth_type):
+    return action(
+        type=PUSH_VLAN,
+        push=ofp.ofp_action_push(ethertype=eth_type)
+    )
+
+def pop_vlan():
+    return action(
+        type=POP_VLAN
+    )
+
+def pop_mpls(eth_type):
+    return action(
+        type=POP_MPLS,
+        pop_mpls=ofp.ofp_action_pop_mpls(ethertype=eth_type)
+    )
+
+def group(group_id):
+    return action(
+        type=GROUP,
+        group=ofp.ofp_action_group(group_id=group_id)
+    )
+
+def nw_ttl(nw_ttl):
+    return action(
+        type=NW_TTL,
+        nw_ttl=ofp.ofp_action_nw_ttl(nw_ttl=nw_ttl)
+    )
+
+def set_field(field):
+    return action(
+        type=SET_FIELD,
+        set_field=ofp.ofp_action_set_field(
+            field=ofp.ofp_oxm_field(
+                oxm_class=ofp.OFPXMC_OPENFLOW_BASIC,
+                ofb_field=field))
+    )
+
+def experimenter(experimenter, data):
+    return action(
+        type=EXPERIMENTER,
+        experimenter=ofp.ofp_action_experimenter(
+            experimenter=experimenter, data=data)
+    )
+
+
+# ofb_field generators (incomplete set)
+
+def in_port(_in_port):
+    return ofb_field(type=IN_PORT, port=_in_port)
+
+def eth_type(_eth_type):
+    return ofb_field(type=ETH_TYPE, eth_type=_eth_type)
+
+def vlan_vid(_vlan_vid):
+    return ofb_field(type=VLAN_VID, vlan_vid=_vlan_vid)
+
+def vlan_pcp(_vlan_pcp):
+    return ofb_field(type=VLAN_PCP, vlan_pcp=_vlan_pcp)
+
+def ip_dscp(_ip_dscp):
+    return ofb_field(type=IP_DSCP, ip_dscp=_ip_dscp)
+
+def ip_ecn(_ip_ecn):
+    return ofb_field(type=IP_ECN, ip_ecn=_ip_ecn)
+
+def ip_proto(_ip_proto):
+    return ofb_field(type=IP_PROTO, ip_proto=_ip_proto)
+
+def ipv4_src(_ipv4_src):
+    return ofb_field(type=IPV4_SRC, ipv4_src=_ipv4_src)
+
+def ipv4_dst(_ipv4_dst):
+    return ofb_field(type=IPV4_DST, ipv4_dst=_ipv4_dst)
+
+def tcp_src(_tcp_src):
+    return ofb_field(type=TCP_SRC, tcp_src=_tcp_src)
+
+def tcp_dst(_tcp_dst):
+    return ofb_field(type=TCP_DST, tcp_dst=_tcp_dst)
+
+def udp_src(_udp_src):
+    return ofb_field(type=UDP_SRC, udp_src=_udp_src)
+
+def udp_dst(_udp_dst):
+    return ofb_field(type=UDP_DST, udp_dst=_udp_dst)
+
+def sctp_src(_sctp_src):
+    return ofb_field(type=SCTP_SRC, sctp_src=_sctp_src)
+
+def sctp_dst(_sctp_dst):
+    return ofb_field(type=SCTP_DST, sctp_dst=_sctp_dst)
+
+def icmpv4_type(_icmpv4_type):
+    return ofb_field(type=ICMPV4_TYPE, icmpv4_type=_icmpv4_type)
+
+def icmpv4_code(_icmpv4_code):
+    return ofb_field(type=ICMPV4_CODE, icmpv4_code=_icmpv4_code)
+
+def arp_op(_arp_op):
+    return ofb_field(type=ARP_OP, arp_op=_arp_op)
+
+def arp_spa(_arp_spa):
+    return ofb_field(type=ARP_SPA, arp_spa=_arp_spa)
+
+def arp_tpa(_arp_tpa):
+    return ofb_field(type=ARP_TPA, arp_tpa=_arp_tpa)
+
+def arp_sha(_arp_sha):
+    return ofb_field(type=ARP_SHA, arp_sha=_arp_sha)
+
+def arp_tha(_arp_tha):
+    return ofb_field(type=ARP_THA, arp_tha=_arp_tha)
+
+# TODO finish for rest of match fields
+
+
+# frequently used extractors:
+
+def get_actions(flow):
+    """Extract list of ofp_action objects from flow spec object"""
+    assert isinstance(flow, ofp.ofp_flow_stats)
+    # hard assumption for now: actions live in a single APPLY_ACTIONS instruction
+    for instruction in flow.instructions:
+        if instruction.type == ofp.OFPIT_APPLY_ACTIONS:
+            return instruction.actions.actions
+
+def get_ofb_fields(flow):
+    assert isinstance(flow, ofp.ofp_flow_stats)
+    assert flow.match.type == ofp.OFPMT_OXM
+    ofb_fields = []
+    for field in flow.match.oxm_fields:
+        assert field.oxm_class == ofp.OFPXMC_OPENFLOW_BASIC
+        ofb_fields.append(field.ofb_field)
+    return ofb_fields
+
+def get_out_port(flow):
+    for action in get_actions(flow):
+        if action.type == OUTPUT:
+            return action.output.port
+    return None
+
+def get_in_port(flow):
+    for field in get_ofb_fields(flow):
+        if field.type == IN_PORT:
+            return field.port
+    return None
+
+def get_goto_table_id(flow):
+    for instruction in flow.instructions:
+        if instruction.type == ofp.OFPIT_GOTO_TABLE:
+            return instruction.goto_table.table_id
+    return None
+
+
+# test and extract next table and group information
+
+def has_next_table(flow):
+    return get_goto_table_id(flow) is not None
+
+def get_group(flow):
+    for action in get_actions(flow):
+        if action.type == GROUP:
+            return action.group.group_id
+    return None
+
+def has_group(flow):
+    return get_group(flow) is not None
+
+
+def mk_simple_flow_mod(match_fields, actions, command=ofp.OFPFC_ADD,
+                       next_table_id=None, **kw):
+    """
+    Convenience function to generate an ofp_flow_mod message with an OXM
+    BASIC match composed from the match_fields and a single APPLY_ACTIONS
+    instruction carrying the given list of ofp_action objects.
+    :param match_fields: list(ofp_oxm_ofb_field)
+    :param actions: list(ofp_action)
+    :param command: one of OFPFC_*
+    :param next_table_id: if not None, append a GOTO_TABLE instruction
+                          pointing to this table id
+    :param kw: additional keyword-based params to ofp_flow_mod
+    :return: initialized ofp_flow_mod object
+    """
+    instructions = [
+        ofp.ofp_instruction(
+            type=ofp.OFPIT_APPLY_ACTIONS,
+            actions=ofp.ofp_instruction_actions(actions=actions)
+        )
+    ]
+    if next_table_id is not None:
+        instructions.append(ofp.ofp_instruction(
+            type=ofp.OFPIT_GOTO_TABLE,
+            goto_table=ofp.ofp_instruction_goto_table(table_id=next_table_id)
+        ))
+
+    return ofp.ofp_flow_mod(
+        command=command,
+        match=ofp.ofp_match(
+            type=ofp.OFPMT_OXM,
+            oxm_fields=[
+                ofp.ofp_oxm_field(
+                    oxm_class=ofp.OFPXMC_OPENFLOW_BASIC,
+                    ofb_field=field
+                ) for field in match_fields
+            ]
+        ),
+        instructions=instructions,
+        **kw
+    )
+
+
+def mk_multicast_group_mod(group_id, buckets, command=ofp.OFPGC_ADD):
+    group = ofp.ofp_group_mod(
+        command=command,
+        type=ofp.OFPGT_ALL,
+        group_id=group_id,
+        buckets=buckets
+    )
+    return group
+
+
+def hash_flow_stats(flow):
+    """
+    Return a unique 64-bit integer hash of the flow, covering the following
+    attributes: 'table_id', 'priority', 'flags', 'cookie' and 'match'.
+    """
+    hex = md5('{},{},{},{},{}'.format(
+        flow.table_id,
+        flow.priority,
+        flow.flags,
+        flow.cookie,
+        flow.match.SerializeToString()
+    )).hexdigest()
+    return int(hex[:16], 16)
+
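+# Note that the hash ignores the flow's instructions/actions, so two flow
+# entries differing only in their actions share the same id; decompose_rules()
+# below keys its per-device flow OrderedDicts by this id.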
+
+def flow_stats_entry_from_flow_mod_message(mod):
+    flow = ofp.ofp_flow_stats(
+        table_id=mod.table_id,
+        priority=mod.priority,
+        idle_timeout=mod.idle_timeout,
+        hard_timeout=mod.hard_timeout,
+        flags=mod.flags,
+        cookie=mod.cookie,
+        match=mod.match,
+        instructions=mod.instructions
+    )
+    flow.id = hash_flow_stats(flow)
+    return flow
+
+
+def group_entry_from_group_mod(mod):
+    group = ofp.ofp_group_entry(
+        desc=ofp.ofp_group_desc(
+            type=mod.type,
+            group_id=mod.group_id,
+            buckets=mod.buckets
+        ),
+        stats=ofp.ofp_group_stats(
+            group_id=mod.group_id
+            # TODO do we need to instantiate bucket bins?
+        )
+    )
+    return group
+
+
+def mk_flow_stat(**kw):
+    return flow_stats_entry_from_flow_mod_message(mk_simple_flow_mod(**kw))
+
+
+def mk_group_stat(**kw):
+    return group_entry_from_group_mod(mk_multicast_group_mod(**kw))
+
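+
+# Illustrative sketch only (not used by the decomposer; port numbers,
+# priorities and VLAN ids are arbitrary, and ofp.ofp_bucket is assumed to
+# carry the bucket actions as in the OpenFlow 1.3 protos): building a flow
+# entry and a multicast group entry with the helpers above, then reading
+# the flow back with the extractors.
+def _example_flow_and_group():
+    flow = mk_flow_stat(
+        priority=500,
+        match_fields=[in_port(1), eth_type(0x800)],
+        actions=[
+            push_vlan(0x8100),
+            set_field(vlan_vid(ofp.OFPVID_PRESENT | 101)),
+            output(2)
+        ]
+    )
+    assert get_in_port(flow) == 1
+    assert get_out_port(flow) == 2
+    assert not has_next_table(flow) and not has_group(flow)
+
+    group = mk_group_stat(group_id=1, buckets=[
+        ofp.ofp_bucket(actions=[pop_vlan(), output(1)])
+    ])
+    return flow, group
+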
+class RouteHop(object):
+    __slots__ = ('_device', '_ingress_port', '_egress_port')
+    def __init__(self, device, ingress_port, egress_port):
+        self._device = device
+        self._ingress_port = ingress_port
+        self._egress_port = egress_port
+    @property
+    def device(self): return self._device
+    @property
+    def ingress_port(self): return self._ingress_port
+    @property
+    def egress_port(self): return self._egress_port
+    def __eq__(self, other):
+        return (
+            self._device == other._device and
+            self._ingress_port == other._ingress_port and
+            self._egress_port == other._egress_port)
+
+
+class FlowDecomposer(object):
+
+    def __init__(self, *args, **kw):
+        self.logical_device_id = 'this shall be overwritten in derived class'
+        super(FlowDecomposer, self).__init__(*args, **kw)
+
+    # ~~~~~~~~~~~~~~~~~~~~ methods exposed *to* derived class ~~~~~~~~~~~~~~~~~
+
+    def decompose_rules(self, flows, groups):
+        """
+        Generate per-device flows and flow-groups from the flows and groups
+        defined on a logical device
+        :param flows: logical device flows
+        :param groups: logical device flow groups
+        :return: dict(device_id ->
+            (OrderedDict-of-device-flows, OrderedDict-of-device-flow-groups))
+        """
+
+        device_rules = deepcopy(self.get_all_default_rules())
+        group_map = dict((g.desc.group_id, g) for g in groups)
+
+        for flow in flows:
+            for device_id, (_flows, _groups) \
+                    in self.decompose_flow(flow, group_map).iteritems():
+                fl_lst, gr_lst = device_rules.setdefault(
+                    device_id, (OrderedDict(), OrderedDict()))
+                for _flow in _flows:
+                    if _flow.id not in fl_lst:
+                        fl_lst[_flow.id] = _flow
+                for _group in _groups:
+                    if _group.group_id not in gr_lst:
+                        gr_lst[_group.group_id] = _group
+        return device_rules
+
+    def decompose_flow(self, flow, group_map):
+        assert isinstance(flow, ofp.ofp_flow_stats)
+
+        ####################################################################
+        #
+        # TODO this is a very limited, heuristics-based implementation
+        #
+        ####################################################################
+
+        in_port_no = get_in_port(flow)
+        out_port_no = get_out_port(flow)  # may be None
+
+        route = self.get_route(in_port_no, out_port_no)
+
+        assert len(route) == 2
+        ingress_hop, egress_hop = route
+
+        def is_downstream():
+            return ingress_hop.device.root
+
+        def is_upstream():
+            return not is_downstream()
+
+        device_rules = {}  # accumulator
+
+        if out_port_no is not None and \
+                (out_port_no & 0x7fffffff) == ofp.OFPP_CONTROLLER:
+
+            # UPSTREAM CONTROLLER-BOUND FLOW
+
+            # we assume that the ingress device is already pushing a
+            # customer-specific vlan (c-vid), based on its default flow
+            # rules, so there is nothing else to do on the ONU
+
+            # on the olt, we need to push a new tag and set it to 4000
+            # which for now represents the in-bound channel to the controller
+            # (via Voltha)
+            # TODO make the 4000 configurable
+            fl_lst, _ = device_rules.setdefault(
+                egress_hop.device.id, ([], []))
+            fl_lst.append(mk_flow_stat(
+                priority=flow.priority,
+                cookie=flow.cookie,
+                match_fields=[
+                    in_port(egress_hop.ingress_port.port_no)
+                ] + [
+                    field for field in get_ofb_fields(flow)
+                    if field.type not in (IN_PORT, VLAN_VID)
+                ],
+                actions=[
+                    push_vlan(0x8100),
+                    set_field(vlan_vid(ofp.OFPVID_PRESENT | 4000)),
+                    output(egress_hop.egress_port.port_no)]
+            ))
+
+        else:
+            # NOT A CONTROLLER-BOUND FLOW
+            if is_upstream():
+
+                # We assume that anything that is upstream needs to get Q-in-Q
+                # treatment and that this is expressed via two flow rules,
+                # the first using the goto-statement. We also assume that the
+                # inner tag is applied at the ONU, while the outer tag is
+                # applied at the OLT
+                if has_next_table(flow):
+                    assert out_port_no is None
+                    fl_lst, _ = device_rules.setdefault(
+                        ingress_hop.device.id, ([], []))
+                    fl_lst.append(mk_flow_stat(
+                        priority=flow.priority,
+                        cookie=flow.cookie,
+                        match_fields=[
+                            in_port(ingress_hop.ingress_port.port_no)
+                        ] + [
+                            field for field in get_ofb_fields(flow)
+                            if field.type not in (IN_PORT,)
+                        ],
+                        actions=[
+                            action for action in get_actions(flow)
+                        ] + [
+                            output(ingress_hop.egress_port.port_no)
+                        ]
+                    ))
+
+                else:
+                    assert out_port_no is not None
+                    fl_lst, _ = device_rules.setdefault(
+                        egress_hop.device.id, ([], []))
+                    fl_lst.append(mk_flow_stat(
+                        priority=flow.priority,
+                        cookie=flow.cookie,
+                        match_fields=[
+                            in_port(egress_hop.ingress_port.port_no),
+                        ] + [
+                            field for field in get_ofb_fields(flow)
+                            if field.type not in (IN_PORT, )
+                        ],
+                        actions=[
+                            action for action in get_actions(flow)
+                            if action.type != OUTPUT
+                        ] + [
+                            output(egress_hop.egress_port.port_no)
+                        ]
+                    ))
+
+            else:  # downstream
+                if has_next_table(flow):
+                    assert out_port_no is None
+                    fl_lst, _ = device_rules.setdefault(
+                        ingress_hop.device.id, ([], []))
+                    fl_lst.append(mk_flow_stat(
+                        priority=flow.priority,
+                        cookie=flow.cookie,
+                        match_fields=[
+                            in_port(ingress_hop.ingress_port.port_no)
+                        ] + [
+                            field for field in get_ofb_fields(flow)
+                            if field.type not in (IN_PORT,)
+                        ],
+                        actions=[
+                            action for action in get_actions(flow)
+                        ] + [
+                            output(ingress_hop.egress_port.port_no)
+                        ]
+                    ))
+                elif out_port_no is not None:  # unicast case
+                    fl_lst, _ = device_rules.setdefault(
+                        egress_hop.device.id, ([], []))
+                    fl_lst.append(mk_flow_stat(
+                        priority=flow.priority,
+                        cookie=flow.cookie,
+                        match_fields=[
+                            in_port(egress_hop.ingress_port.port_no)
+                        ] + [
+                            field for field in get_ofb_fields(flow)
+                            if field.type not in (IN_PORT,)
+                        ],
+                        actions=[
+                            action for action in get_actions(flow)
+                            if action.type not in (OUTPUT,)
+                        ] + [
+                            output(egress_hop.egress_port.port_no)
+                        ]
+
+
+                else:  # multicast case
+                    grp_id = get_group(flow)
+                    assert grp_id is not None
+
+                    fl_lst, _ = device_rules.setdefault(
+                        ingress_hop.device.id, ([], []))
+                    fl_lst.append(mk_flow_stat(
+                        priority=flow.priority,
+                        cookie=flow.cookie,
+                        match_fields=[
+                            in_port(ingress_hop.ingress_port.port_no)
+                        ] + [
+                            field for field in get_ofb_fields(flow)
+                            if field.type not in (IN_PORT, ETH_TYPE, IPV4_DST)
+                        ],
+                        actions=[
+                            action for action in get_actions(flow)
+                            if action.type not in (GROUP,)
+                        ] + [
+                            pop_vlan(),
+                            output(ingress_hop.egress_port.port_no)
+                        ]
+                    ))
+
+                    group = group_map[grp_id]
+                    for bucket in group.desc.buckets:
+                        found_pop_vlan = False
+                        other_actions = []
+                        for action in bucket.actions:
+                            if action.type == POP_VLAN:
+                                found_pop_vlan = True
+                            elif action.type == OUTPUT:
+                                out_port_no = action.output.port
+                            else:
+                                other_actions.append(action)
+                        # re-run route request to determine egress device and
+                        # ports
+                        route2 = self.get_route(in_port_no, out_port_no)
+
+                        assert len(route2) == 2
+                        ingress_hop2, egress_hop = route2
+                        assert ingress_hop == ingress_hop2
+
+                        fl_lst, _ = device_rules.setdefault(
+                            egress_hop.device.id, ([], []))
+                        fl_lst.append(mk_flow_stat(
+                            priority=flow.priority,
+                            cookie=flow.cookie,
+                            match_fields=[
+                                in_port(egress_hop.ingress_port.port_no)
+                            ] + [
+                                field for field in get_ofb_fields(flow)
+                                if field.type not in (IN_PORT, VLAN_VID, VLAN_PCP)
+                            ],
+                            actions=other_actions + [
+                                output(egress_hop.egress_port.port_no)
+                            ]
+                        ))
+
+        return device_rules
+
+    # ~~~~~~~~~~~~ methods expected to be provided by derived class ~~~~~~~~~~~
+
+    def get_all_default_rules(self):
+        raise NotImplementedError('derived class must provide')
+
+    def get_default_rules(self, device_id):
+        raise NotImplementedError('derived class must provide')
+
+    def get_route(self, ingress_port_no, egress_port_no):
+        raise NotImplementedError('derived class must provide')
+
+
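+# Illustrative sketch only (hypothetical derived class with a hard-coded
+# route and stand-in device/port objects): decomposing a single upstream
+# unicast flow (ONU UNI port 1 -> OLT NNI port 129) into a per-device rule
+# installed on the OLT.
+def _example_decomposition():
+    from collections import namedtuple
+    Dev = namedtuple('Dev', 'id root')
+    Port = namedtuple('Port', 'port_no')
+
+    class ExampleDecomposer(FlowDecomposer):
+        def get_all_default_rules(self):
+            return {}
+
+        def get_route(self, in_port_no, out_port_no):
+            onu, olt = Dev('onu1', False), Dev('olt1', True)
+            return [RouteHop(onu, Port(1), Port(2)),
+                    RouteHop(olt, Port(1), Port(129))]
+
+    flow = mk_flow_stat(match_fields=[in_port(1)], actions=[output(129)])
+    device_rules = ExampleDecomposer().decompose_rules([flow], groups=[])
+    assert 'olt1' in device_rules  # the upstream rule lands on the OLT
+    return device_rules
+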
diff --git a/voltha/core/global_handler.py b/voltha/core/global_handler.py
new file mode 100644
index 0000000..2624ed6
--- /dev/null
+++ b/voltha/core/global_handler.py
@@ -0,0 +1,388 @@
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+import structlog
+from grpc import StatusCode
+from twisted.internet.defer import inlineCallbacks
+from twisted.internet.defer import returnValue
+
+from common.utils.grpc_utils import twisted_async
+from voltha.core.config.config_root import ConfigRoot
+from voltha.protos.voltha_pb2 import \
+    add_VolthaGlobalServiceServicer_to_server, VolthaLocalServiceStub, \
+    VolthaGlobalServiceServicer, Voltha, VolthaInstances, VolthaInstance, \
+    LogicalDevice, Ports, Flows, FlowGroups, Device
+from voltha.registry import registry
+from google.protobuf.empty_pb2 import Empty
+
+log = structlog.get_logger()
+
+
+class GlobalHandler(VolthaGlobalServiceServicer):
+
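+    # The global service mostly forwards each request, via the dispatcher,
+    # to the local service of the Voltha instance that owns the referenced
+    # (logical) device; unknown ids are reported as NOT_FOUND.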
+    def __init__(self, dispatcher, instance_id, **init_kw):
+        self.dispatcher = dispatcher
+        self.instance_id = instance_id
+        self.init_kw = init_kw
+        self.root = None
+        self.stopped = False
+
+    def start(self):
+        log.debug('starting')
+        self.root = ConfigRoot(Voltha(**self.init_kw))
+        registry('grpc_server').register(
+            add_VolthaGlobalServiceServicer_to_server, self)
+        log.info('started')
+        return self
+
+    def stop(self):
+        log.debug('stopping')
+        self.stopped = True
+        log.info('stopped')
+
+    # gRPC service method implementations. BE CAREFUL: these are called on
+    # the gRPC threadpool threads.
+
+    @twisted_async
+    def GetVoltha(self, request, context):
+        log.info('grpc-request', request=request)
+        return self.root.get('/', depth=1)
+
+    @twisted_async
+    @inlineCallbacks
+    def ListVolthaInstances(self, request, context):
+        log.info('grpc-request', request=request)
+        items = yield registry('coordinator').get_members()
+        returnValue(VolthaInstances(items=items))
+
+    @twisted_async
+    def GetVolthaInstance(self, request, context):
+        log.info('grpc-request', request=request)
+        instance_id = request.id
+        try:
+            return self.dispatcher.dispatch(
+                instance_id,
+                VolthaLocalServiceStub,
+                'GetVolthaInstance',
+                Empty(),
+                context)
+        except KeyError:
+            context.set_details(
+                'Voltha instance \'{}\' not found'.format(instance_id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return VolthaInstance()
+
+    @twisted_async
+    def ListLogicalDevices(self, request, context):
+        log.warning('temp-limited-implementation')
+        # TODO dispatching to local instead of collecting all
+        return self.dispatcher.dispatch(
+            self.instance_id,
+            VolthaLocalServiceStub,
+            'ListLogicalDevices',
+            Empty(),
+            context)
+
+    @twisted_async
+    def GetLogicalDevice(self, request, context):
+        log.info('grpc-request', request=request)
+
+        try:
+            instance_id = self.dispatcher.instance_id_by_logical_device_id(
+                request.id
+            )
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return LogicalDevice()
+
+        return self.dispatcher.dispatch(
+            instance_id,
+            VolthaLocalServiceStub,
+            'GetLogicalDevice',
+            request,
+            context)
+
+    @twisted_async
+    def ListLogicalDevicePorts(self, request, context):
+        log.info('grpc-request', request=request)
+
+        try:
+            instance_id = self.dispatcher.instance_id_by_logical_device_id(
+                request.id
+            )
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Ports()
+
+        return self.dispatcher.dispatch(
+            instance_id,
+            VolthaLocalServiceStub,
+            'ListLogicalDevicePorts',
+            request,
+            context)
+
+    @twisted_async
+    def ListLogicalDeviceFlows(self, request, context):
+        log.info('grpc-request', request=request)
+
+        try:
+            instance_id = self.dispatcher.instance_id_by_logical_device_id(
+                request.id
+            )
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Flows()
+
+        return self.dispatcher.dispatch(
+            instance_id,
+            VolthaLocalServiceStub,
+            'ListLogicalDeviceFlows',
+            request,
+            context)
+
+    @twisted_async
+    def UpdateLogicalDeviceFlowTable(self, request, context):
+        log.info('grpc-request', request=request)
+
+        try:
+            instance_id = self.dispatcher.instance_id_by_logical_device_id(
+                request.id
+            )
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Empty()
+
+        return self.dispatcher.dispatch(
+            instance_id,
+            VolthaLocalServiceStub,
+            'UpdateLogicalDeviceFlowTable',
+            request,
+            context)
+
+    @twisted_async
+    def ListLogicalDeviceFlowGroups(self, request, context):
+        log.info('grpc-request', request=request)
+
+        try:
+            instance_id = self.dispatcher.instance_id_by_logical_device_id(
+                request.id
+            )
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return FlowGroups()
+
+        return self.dispatcher.dispatch(
+            instance_id,
+            VolthaLocalServiceStub,
+            'ListLogicalDeviceFlowGroups',
+            request,
+            context)
+
+    @twisted_async
+    def UpdateLogicalDeviceFlowGroupTable(self, request, context):
+        log.info('grpc-request', request=request)
+
+        try:
+            instance_id = self.dispatcher.instance_id_by_logical_device_id(
+                request.id
+            )
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Empty()
+
+        return self.dispatcher.dispatch(
+            instance_id,
+            VolthaLocalServiceStub,
+            'UpdateLogicalDeviceFlowGroupTable',
+            request,
+            context)
+
+    @twisted_async
+    def ListDevices(self, request, context):
+        log.warning('temp-limited-implementation')
+        # TODO dispatching to local instead of collecting all
+        return self.dispatcher.dispatch(
+            self.instance_id,
+            VolthaLocalServiceStub,
+            'ListDevices',
+            request,
+            context)
+
+    @twisted_async
+    def GetDevice(self, request, context):
+        log.info('grpc-request', request=request)
+
+        try:
+            instance_id = self.dispatcher.instance_id_by_device_id(
+                request.id
+            )
+        except KeyError:
+            context.set_details(
+                'Device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Device()
+
+        return self.dispatcher.dispatch(
+            instance_id,
+            VolthaLocalServiceStub,
+            'GetDevice',
+            request,
+            context)
+
+    @twisted_async
+    def CreateDevice(self, request, context):
+        log.info('grpc-request', request=request)
+        # TODO dispatching to local instead of passing it to leader
+        return self.dispatcher.dispatch(
+            self.instance_id,
+            VolthaLocalServiceStub,
+            'CreateDevice',
+            request,
+            context)
+
+    @twisted_async
+    def ActivateDevice(self, request, context):
+        log.info('grpc-request', request=request)
+        # TODO dispatching to local instead of passing it to leader
+        return self.dispatcher.dispatch(
+            self.instance_id,
+            VolthaLocalServiceStub,
+            'ActivateDevice',
+            request,
+            context)
+
+    @twisted_async
+    def ListDevicePorts(self, request, context):
+        log.info('grpc-request', request=request)
+
+        try:
+            instance_id = self.dispatcher.instance_id_by_device_id(
+                request.id
+            )
+        except KeyError:
+            context.set_details(
+                'Device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Ports()
+
+        return self.dispatcher.dispatch(
+            instance_id,
+            VolthaLocalServiceStub,
+            'ListDevicePorts',
+            request,
+            context)
+
+    @twisted_async
+    def ListDeviceFlows(self, request, context):
+        log.info('grpc-request', request=request)
+
+        try:
+            instance_id = self.dispatcher.instance_id_by_device_id(
+                request.id
+            )
+        except KeyError:
+            context.set_details(
+                'Device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Flows()
+
+        return self.dispatcher.dispatch(
+            instance_id,
+            VolthaLocalServiceStub,
+            'ListDeviceFlows',
+            request,
+            context)
+
+    @twisted_async
+    def ListDeviceFlowGroups(self, request, context):
+        log.info('grpc-request', request=request)
+
+        try:
+            instance_id = self.dispatcher.instance_id_by_device_id(
+                request.id
+            )
+        except KeyError:
+            context.set_details(
+                'Device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return FlowGroups()
+
+        return self.dispatcher.dispatch(
+            instance_id,
+            VolthaLocalServiceStub,
+            'ListDeviceFlowGroups',
+            request,
+            context)
+
+    @twisted_async
+    def ListDeviceTypes(self, request, context):
+        log.info('grpc-request', request=request)
+        # we always delegate this to the local instance, as we assume all
+        # instances have loaded the same adapters and hence support the
+        # same device types
+        return self.dispatcher.dispatch(
+            self.instance_id,
+            VolthaLocalServiceStub,
+            'ListDeviceTypes',
+            request,
+            context)
+
+    @twisted_async
+    def GetDeviceType(self, request, context):
+        log.info('grpc-request', request=request)
+        # we always delegate this to the local instance, as we assume all
+        # instances have loaded the same adapters and hence support the
+        # same device types
+        return self.dispatcher.dispatch(
+            self.instance_id,
+            VolthaLocalServiceStub,
+            'GetDeviceType',
+            request,
+            context)
+
+    @twisted_async
+    def ListDeviceGroups(self, request, context):
+        log.warning('temp-limited-implementation')
+        # TODO dispatching to local instead of collecting all
+        return self.dispatcher.dispatch(
+            self.instance_id,
+            VolthaLocalServiceStub,
+            'ListDeviceGroups',
+            Empty(),
+            context)
+
+    @twisted_async
+    def GetDeviceGroup(self, request, context):
+        log.warning('temp-limited-implementation')
+        # TODO dispatching to local instead of collecting all
+        return self.dispatcher.dispatch(
+            self.instance_id,
+            VolthaLocalServiceStub,
+            'GetDeviceGroup',
+            request,
+            context)
+
+
diff --git a/voltha/core/local_handler.py b/voltha/core/local_handler.py
new file mode 100644
index 0000000..d5af2df
--- /dev/null
+++ b/voltha/core/local_handler.py
@@ -0,0 +1,416 @@
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+from uuid import uuid4
+
+import structlog
+from grpc import StatusCode
+
+from common.utils.grpc_utils import twisted_async
+from voltha.core.config.config_root import ConfigRoot
+from voltha.protos.openflow_13_pb2 import PacketIn, Flows, FlowGroups
+
+from google.protobuf.empty_pb2 import Empty
+
+from voltha.protos.voltha_pb2 import \
+    add_VolthaLocalServiceServicer_to_server, VolthaLocalServiceServicer, \
+    VolthaInstance, Adapters, LogicalDevices, LogicalDevice, Ports, \
+    LogicalPorts, Devices, Device, DeviceType, \
+    DeviceTypes, DeviceGroups, DeviceGroup, AdminState, OperStatus
+from voltha.registry import registry
+
+log = structlog.get_logger()
+
+
+class LocalHandler(VolthaLocalServiceServicer):
+
+    def __init__(self, core, **init_kw):
+        self.core = core
+        self.init_kw = init_kw
+        self.root = None
+        self.stopped = False
+
+    def start(self):
+        log.debug('starting')
+        self.root = ConfigRoot(VolthaInstance(**self.init_kw))
+        registry('grpc_server').register(
+            add_VolthaLocalServiceServicer_to_server, self)
+        log.info('started')
+        return self
+
+    def stop(self):
+        log.debug('stopping')
+        self.stopped = True
+        log.info('stopped')
+
+    def get_proxy(self, path, exclusive=False):
+        return self.root.get_proxy(path, exclusive)
+
+    # gRPC service method implementations. BE CAREFUL: these are called on
+    # the gRPC threadpool threads.
+
+    @twisted_async
+    def GetVolthaInstance(self, request, context):
+        log.info('grpc-request', request=request)
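+        # the depth of the returned config tree is controlled by the optional
+        # 'get-depth' key in the gRPC invocation metadata (defaults to 0)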
+        depth = int(dict(context.invocation_metadata()).get('get-depth', 0))
+        res = self.root.get('/', depth=depth)
+        return res
+
+    @twisted_async
+    def GetHealth(self, request, context):
+        log.info('grpc-request', request=request)
+        return self.root.get('/health')
+
+    @twisted_async
+    def ListAdapters(self, request, context):
+        log.info('grpc-request', request=request)
+        items = self.root.get('/adapters')
+        return Adapters(items=items)
+
+    @twisted_async
+    def ListLogicalDevices(self, request, context):
+        log.info('grpc-request', request=request)
+        items = self.root.get('/logical_devices')
+        return LogicalDevices(items=items)
+
+    @twisted_async
+    def GetLogicalDevice(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed logical device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return LogicalDevice()
+
+        try:
+            return self.root.get('/logical_devices/' + request.id)
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return LogicalDevice()
+
+    @twisted_async
+    def ListLogicalDevicePorts(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed logical device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return LogicalPorts()
+
+        try:
+            items = self.root.get('/logical_devices/{}/ports'.format(request.id))
+            return LogicalPorts(items=items)
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return LogicalPorts()
+
+    @twisted_async
+    def ListLogicalDeviceFlows(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed logical device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return Flows()
+
+        try:
+            flows = self.root.get('/logical_devices/{}/flows'.format(request.id))
+            return flows
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Flows()
+
+    @twisted_async
+    def UpdateLogicalDeviceFlowTable(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed logical device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return Empty()
+
+        try:
+            agent = self.core.get_logical_device_agent(request.id)
+            agent.update_flow_table(request.flow_mod)
+            return Empty()
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Empty()
+
+    @twisted_async
+    def ListLogicalDeviceFlowGroups(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed logical device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return FlowGroups()
+
+        try:
+            groups = self.root.get(
+                '/logical_devices/{}/flow_groups'.format(request.id))
+            return groups
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return FlowGroups()
+
+    @twisted_async
+    def UpdateLogicalDeviceFlowGroupTable(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed logical device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return Empty()
+
+        try:
+            agent = self.core.get_logical_device_agent(request.id)
+            agent.update_group_table(request.group_mod)
+            return Empty()
+        except KeyError:
+            context.set_details(
+                'Logical device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Empty()
+
+    @twisted_async
+    def ListDevices(self, request, context):
+        log.info('grpc-request', request=request)
+        items = self.root.get('/devices')
+        return Devices(items=items)
+
+    @twisted_async
+    def GetDevice(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return Device()
+
+        try:
+            return self.root.get('/devices/' + request.id)
+        except KeyError:
+            context.set_details(
+                'Device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Device()
+
+    @twisted_async
+    def CreateDevice(self, request, context):
+        log.info('grpc-request', request=request)
+
+        known_device_types = dict(
+            (dt.id, dt) for dt in self.root.get('/device_types'))
+
+        try:
+            assert isinstance(request, Device)
+            device = request
+            assert device.id == '', 'Device to be created cannot have id yet'
+            assert device.type in known_device_types, \
+                'Unknown device type \'{}\''.format(device.type)
+            assert device.admin_state in (AdminState.UNKNOWN,
+                                          AdminState.PREPROVISIONED), \
+                'Newly created device cannot be ' \
+                'in admin state \'{}\''.format(device.admin_state)
+
+        except AssertionError as e:
+            context.set_details(str(e))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return Device()
+
+        # fill additional data
+        device.id = uuid4().hex[:12]
+        device_type = known_device_types[device.type]
+        device.adapter = device_type.adapter
+        if device.admin_state != AdminState.PREPROVISIONED:
+            device.admin_state = AdminState.PREPROVISIONED
+            device.oper_status = OperStatus.UNKNOWN
+
+        # add device to tree
+        self.root.add('/devices', device)
+
+        return request
+
+    @twisted_async
+    def ActivateDevice(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return Device()
+
+        try:
+            path = '/devices/{}'.format(request.id)
+            device = self.root.get(path)
+            device.admin_state = AdminState.ENABLED
+            self.root.update(path, device, strict=True)
+
+        except KeyError:
+            context.set_details(
+                'Device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+
+        return Empty()
+
+    @twisted_async
+    def ListDevicePorts(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return Ports()
+
+        try:
+            items = self.root.get('/devices/{}/ports'.format(request.id))
+            return Ports(items=items)
+        except KeyError:
+            context.set_details(
+                'Device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Ports()
+
+    @twisted_async
+    def ListDeviceFlows(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return Flows()
+
+        try:
+            flows = self.root.get('/devices/{}/flows'.format(request.id))
+            return flows
+        except KeyError:
+            context.set_details(
+                'Device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return Flows()
+
+    @twisted_async
+    def ListDeviceFlowGroups(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed device id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return FlowGroups()
+
+        try:
+            groups = self.root.get('/devices/{}/flow_groups'.format(request.id))
+            return groups
+        except KeyError:
+            context.set_details(
+                'Device \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return FlowGroups()
+
+    @twisted_async
+    def ListDeviceTypes(self, request, context):
+        log.info('grpc-request', request=request)
+        items = self.root.get('/device_types')
+        return DeviceTypes(items=items)
+
+    @twisted_async
+    def GetDeviceType(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed device type id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return DeviceType()
+
+        try:
+            return self.root.get('/device_types/' + request.id)
+        except KeyError:
+            context.set_details(
+                'Device type \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return DeviceType()
+
+    @twisted_async
+    def ListDeviceGroups(self, request, context):
+        log.info('grpc-request', request=request)
+        # TODO is this mapped to tree or taken from coordinator?
+        items = self.root.get('/device_groups')
+        return DeviceGroups(items=items)
+
+    @twisted_async
+    def GetDeviceGroup(self, request, context):
+        log.info('grpc-request', request=request)
+
+        if '/' in request.id:
+            context.set_details(
+                'Malformed device group id \'{}\''.format(request.id))
+            context.set_code(StatusCode.INVALID_ARGUMENT)
+            return DeviceGroup()
+
+        # TODO is this mapped to tree or taken from coordinator?
+        try:
+            return self.root.get('/device_groups/' + request.id)
+        except KeyError:
+            context.set_details(
+                'Device group \'{}\' not found'.format(request.id))
+            context.set_code(StatusCode.NOT_FOUND)
+            return DeviceGroup()
+
+    def StreamPacketsOut(self, request_iterator, context):
+
+        @twisted_async
+        def forward_packet_out(packet_out):
+            agent = self.core.get_logical_device_agent(packet_out.id)
+            agent.packet_out(packet_out.packet_out)
+
+        for request in request_iterator:
+            forward_packet_out(packet_out=request)
+
+        return Empty()
+
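+    # Server-streaming RPC: runs directly on the gRPC thread so that it can
+    # block on the core's packet-in queue, yielding each packet-in to the
+    # connected client.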
+    def ReceivePacketsIn(self, request, context):
+        while 1:
+            packet_in = self.core.packet_in_queue.get()
+            yield packet_in
+
+    def send_packet_in(self, device_id, ofp_packet_in):
+        """Must be called on the twisted thread"""
+        packet_in = PacketIn(id=device_id, packet_in=ofp_packet_in)
+        self.core.packet_in_queue.put(packet_in)
diff --git a/voltha/core/logical_device_agent.py b/voltha/core/logical_device_agent.py
new file mode 100644
index 0000000..0ada073
--- /dev/null
+++ b/voltha/core/logical_device_agent.py
@@ -0,0 +1,588 @@
+#
+# Copyright 2016 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""
+Model that captures the current state of a logical device
+"""
+import threading
+from collections import OrderedDict
+
+import structlog
+
+from voltha.core.config.config_proxy import CallbackType
+from voltha.core.device_graph import DeviceGraph
+from voltha.core.flow_decomposer import FlowDecomposer, \
+    flow_stats_entry_from_flow_mod_message, group_entry_from_group_mod, \
+    mk_flow_stat, in_port, vlan_vid, vlan_pcp, pop_vlan, output, set_field
+from voltha.protos import third_party
+from voltha.protos import openflow_13_pb2 as ofp
+from voltha.protos.device_pb2 import Port
+from voltha.protos.openflow_13_pb2 import Flows, FlowGroups
+from voltha.registry import registry
+
+log = structlog.get_logger()
+_ = third_party
+
+def mac_str_to_tuple(mac):
+    return tuple(int(d, 16) for d in mac.split(':'))
+
+
+class LogicalDeviceAgent(FlowDecomposer, DeviceGraph):
+
+    def __init__(self, core, logical_device):
+        self.core = core
+        self.grpc_server = registry('grpc_server')
+        self.logical_device_id = logical_device.id
+        self.root_proxy = core.get_proxy('/')
+        self.flows_proxy = core.get_proxy(
+            '/logical_devices/{}/flows'.format(logical_device.id))
+        self.groups_proxy = core.get_proxy(
+            '/logical_devices/{}/flow_groups'.format(logical_device.id))
+        self.self_proxy = core.get_proxy(
+            '/logical_devices/{}'.format(logical_device.id))
+        self.flows_proxy.register_callback(
+            CallbackType.POST_UPDATE, self._flow_table_updated)
+        self.groups_proxy.register_callback(
+            CallbackType.POST_UPDATE, self._group_table_updated)
+        self.self_proxy.register_callback(
+            CallbackType.POST_ADD, self._port_list_updated)
+        self.self_proxy.register_callback(
+            CallbackType.POST_REMOVE, self._port_list_updated)
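+        # the POST_UPDATE callbacks re-run flow decomposition whenever the
+        # logical flow or group table changes; the POST_ADD/POST_REMOVE
+        # callbacks invalidate the cached routes when the port list changes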
+
+    def start(self):
+        log.debug('starting')
+        log.info('started')
+        return self
+
+    def stop(self):
+        log.debug('stopping')
+        log.info('stopped')
+
+    def announce_flows_deleted(self, flows):
+        for f in flows:
+            self.announce_flow_deleted(f)
+
+    def announce_flow_deleted(self, flow):
+        if flow.flags & ofp.OFPFF_SEND_FLOW_REM:
+            raise NotImplementedError("announce_flow_deleted")
+
+    def signal_flow_mod_error(self, code, flow_mod):
+        pass  # TODO
+
+    def signal_flow_removal(self, code, flow):
+        pass  # TODO
+
+    def signal_group_mod_error(self, code, group_mod):
+        pass  # TODO
+
+    def update_flow_table(self, flow_mod):
+
+        command = flow_mod.command
+
+        if command == ofp.OFPFC_ADD:
+            self.flow_add(flow_mod)
+
+        elif command == ofp.OFPFC_DELETE:
+            self.flow_delete(flow_mod)
+
+        elif command == ofp.OFPFC_DELETE_STRICT:
+            self.flow_delete_strict(flow_mod)
+
+        elif command == ofp.OFPFC_MODIFY:
+            self.flow_modify(flow_mod)
+
+        elif command == ofp.OFPFC_MODIFY_STRICT:
+            self.flow_modify_strict(flow_mod)
+
+        else:
+            log.warn('unhandled-flow-mod', command=command, flow_mod=flow_mod)
+
+    # def list_flows(self):
+    #     return self.flows
+
+    def update_group_table(self, group_mod):
+
+        command = group_mod.command
+
+        if command == ofp.OFPGC_DELETE:
+            self.group_delete(group_mod)
+
+        elif command == ofp.OFPGC_ADD:
+            self.group_add(group_mod)
+
+        elif command == ofp.OFPGC_MODIFY:
+            self.group_modify(group_mod)
+
+        else:
+            log.warn('unhandled-group-mod', command=command,
+                     group_mod=group_mod)
+
+    def list_groups(self):
+        return self.groups.values()
+
+    ## <=============== LOW LEVEL FLOW HANDLERS ==============================>
+
+    def flow_add(self, mod):
+        assert isinstance(mod, ofp.ofp_flow_mod)
+        assert mod.cookie_mask == 0
+
+        # read from model
+        flows = list(self.flows_proxy.get('/').items)
+
+        changed = False
+        check_overlap = mod.flags & ofp.OFPFF_CHECK_OVERLAP
+        if check_overlap:
+            if self.find_overlapping_flows(flows, mod, True):
+                self.signal_flow_mod_error(
+                    ofp.OFPFMFC_OVERLAP, mod)
+            else:
+                # free to add as new flow
+                flow = flow_stats_entry_from_flow_mod_message(mod)
+                flows.append(flow)
+                changed = True
+                log.debug('flow-added', flow=mod)
+
+        else:
+            flow = flow_stats_entry_from_flow_mod_message(mod)
+            idx = self.find_flow(flows, flow)
+            if idx >= 0:
+                old_flow = flows[idx]
+                if not (mod.flags & ofp.OFPFF_RESET_COUNTS):
+                    flow.byte_count = old_flow.byte_count
+                    flow.packet_count = old_flow.packet_count
+                flows[idx] = flow
+                changed = True
+                log.debug('flow-updated', flow=flow)
+
+            else:
+                flows.append(flow)
+                changed = True
+                log.debug('flow-added', flow=mod)
+
+        # write back to model
+        if changed:
+            self.flows_proxy.update('/', Flows(items=flows))
+
+    def flow_delete(self, mod):
+        assert isinstance(mod, ofp.ofp_flow_mod)
+
+        # read from model
+        flows = list(self.flows_proxy.get('/').items)
+
+        # build a list of what to keep vs what to delete
+        to_keep = []
+        to_delete = []
+        for f in flows:
+            if self.flow_matches_spec(f, mod):
+                to_delete.append(f)
+            else:
+                to_keep.append(f)
+
+        # replace flow table with keepers
+        flows = to_keep
+
+        # write back
+        if to_delete:
+            self.flows_proxy.update('/', Flows(items=flows))
+
+        # send notifications for discarded flow as required by OpenFlow
+        self.announce_flows_deleted(to_delete)
+
+    def flow_delete_strict(self, mod):
+        assert isinstance(mod, ofp.ofp_flow_mod)
+
+        # read from model
+        flows = list(self.flows_proxy.get('/').items)
+        changed = False
+
+        flow = flow_stats_entry_from_flow_mod_message(mod)
+        idx = self.find_flow(flows, flow)
+        if (idx >= 0):
+            del flows[idx]
+            changed = True
+        else:
+            # TODO need to check what to do with this case
+            log.warn('flow-cannot-delete', flow=flow)
+
+        if changed:
+            self.flows_proxy.update('/', Flows(items=flows))
+
+    def flow_modify(self, mod):
+        raise NotImplementedError()
+
+    def flow_modify_strict(self, mod):
+        raise NotImplementedError()
+
+    def find_overlapping_flows(self, flows, mod, return_on_first=False):
+        """
+        Return list of overlapping flow(s)
+        Two flows overlap if a packet may match both and if they have the
+        same priority.
+        :param flows: existing flows to check against
+        :param mod: the flow mod request to check for overlap
+        :param return_on_first: if True, return after the first overlapping
+                                entry is found
+        :return: list of overlapping flows (empty if none)
+        """
+        return []  # TODO finish implementation
+
+    @classmethod
+    def find_flow(cls, flows, flow):
+        for i, f in enumerate(flows):
+            if cls.flow_match(f, flow):
+                return i
+        return -1
+
+    @staticmethod
+    def flow_match(f1, f2):
+        keys_matter = ('table_id', 'priority', 'flags', 'cookie', 'match')
+        for key in keys_matter:
+            if getattr(f1, key) != getattr(f2, key):
+                return False
+        return True
+
+    @classmethod
+    def flow_matches_spec(cls, flow, flow_mod):
+        """
+        Return True if the given flow (ofp_flow_stats) is "covered" by the
+        wildcard flow_mod (ofp_flow_mod), taking into account both exact
+        matches and mask-based match fields, if any.
+        Otherwise return False.
+        :param flow: ofp_flow_stats
+        :param flow_mod: ofp_flow_mod
+        :return: Bool
+        """
+
+        assert isinstance(flow, ofp.ofp_flow_stats)
+        assert isinstance(flow_mod, ofp.ofp_flow_mod)
+
+        # Check if flow.cookie is covered by mod.cookie and mod.cookie_mask
+        if (flow.cookie & flow_mod.cookie_mask) != \
+                (flow_mod.cookie & flow_mod.cookie_mask):
+            return False
+
+        # Check if flow.table_id is covered by flow_mod.table_id
+        if flow_mod.table_id != ofp.OFPTT_ALL and \
+                        flow.table_id != flow_mod.table_id:
+            return False
+
+        # Check out_port
+        if flow_mod.out_port != ofp.OFPP_ANY and \
+                not cls.flow_has_out_port(flow, flow_mod.out_port):
+            return False
+
+        # Check out_group
+        if flow_mod.out_group != ofp.OFPG_ANY and \
+                not cls.flow_has_out_group(flow, flow_mod.out_group):
+            return False
+
+        # Priority is ignored
+
+        # Check match condition
+        # If the flow_mod match field is empty, that is a special case and
+        # indicates the flow entry matches
+        match = flow_mod.match
+        assert isinstance(match, ofp.ofp_match)
+        if not match.oxm_list:
+            # If we got this far and the match is empty in the flow spec,
+            # then the flow matches
+            return True
+        else:
+            raise NotImplementedError(
+                "flow_matches_spec(): No flow match analysis yet")
+
+    @staticmethod
+    def flow_has_out_port(flow, out_port):
+        """
+        Return True if the flow has an output action to the given out_port
+        """
+        assert isinstance(flow, ofp.ofp_flow_stats)
+        for instruction in flow.instructions:
+            assert isinstance(instruction, ofp.ofp_instruction)
+            if instruction.type == ofp.OFPIT_APPLY_ACTIONS:
+                for action in instruction.actions.actions:
+                    assert isinstance(action, ofp.ofp_action)
+                    if action.type == ofp.OFPAT_OUTPUT and \
+                        action.output.port == out_port:
+                        return True
+
+        # otherwise...
+        return False
+
+    @staticmethod
+    def flow_has_out_group(flow, group_id):
+        """
+        Return True if the flow has a group action with the given group_id
+        """
+        assert isinstance(flow, ofp.ofp_flow_stats)
+        for instruction in flow.instructions:
+            assert isinstance(instruction, ofp.ofp_instruction)
+            if instruction.type == ofp.OFPIT_APPLY_ACTIONS:
+                for action in instruction.actions.actions:
+                    assert isinstance(action, ofp.ofp_action)
+                    if action.type == ofp.OFPAT_GROUP and \
+                        action.group.group_id == group_id:
+                            return True
+
+        # otherwise...
+        return False
+
+    def flows_delete_by_group_id(self, flows, group_id):
+        """
+        Delete any flow(s) referring to the given group_id
+        :param flows: list of flows to filter
+        :param group_id: group id referenced by the flows to be deleted
+        :return: (changed, remaining_flows) tuple
+        """
+        to_keep = []
+        to_delete = []
+        for f in flows:
+            if self.flow_has_out_group(f, group_id):
+                to_delete.append(f)
+            else:
+                to_keep.append(f)
+
+        # replace flow table with keepers
+        flows = to_keep
+
+        # send notification to deleted ones
+        self.announce_flows_deleted(to_delete)
+
+        return bool(to_delete), flows
+
+    ## <=============== LOW LEVEL GROUP HANDLERS =============================>
+
+    def group_add(self, group_mod):
+        assert isinstance(group_mod, ofp.ofp_group_mod)
+
+        groups = OrderedDict((g.desc.group_id, g)
+                             for g in self.groups_proxy.get('/').items)
+        changed = False
+
+        if group_mod.group_id in groups:
+            self.signal_group_mod_error(ofp.OFPGMFC_GROUP_EXISTS, group_mod)
+        else:
+            group_entry = group_entry_from_group_mod(group_mod)
+            groups[group_mod.group_id] = group_entry
+            changed = True
+
+        if changed:
+            self.groups_proxy.update('/', FlowGroups(items=groups.values()))
+
+    def group_delete(self, group_mod):
+        assert isinstance(group_mod, ofp.ofp_group_mod)
+
+        groups = OrderedDict((g.desc.group_id, g)
+                             for g in self.groups_proxy.get('/').items)
+        groups_changed = False
+        flows_changed = False
+
+        group_id = group_mod.group_id
+        if group_id == ofp.OFPG_ALL:
+            # TODO we must delete all flows that point to this group and
+            # signal controller as requested by flow's flag
+            groups = OrderedDict()
+            groups_changed = True
+            log.debug('all-groups-deleted')
+
+        else:
+            if group_id not in groups:
+                # per openflow spec, this is not an error
+                pass
+
+            else:
+                flows = list(self.flows_proxy.get('/').items)
+                flows_changed, flows = self.flows_delete_by_group_id(flows, group_id)
+                del groups[group_id]
+                groups_changed = True
+                log.debug('group-deleted', group_id=group_id)
+
+        if groups_changed:
+            self.groups_proxy.update('/', FlowGroups(items=groups.values()))
+        if flows_changed:
+            self.flows_proxy.update('/', Flows(items=flows))
+
+    def group_modify(self, group_mod):
+        assert isinstance(group_mod, ofp.ofp_group_mod)
+
+        groups = OrderedDict((g.desc.group_id, g)
+                             for g in self.groups_proxy.get('/').items)
+        changed = False
+
+        if group_mod.group_id not in groups:
+            self.signal_group_mod_error(
+                ofp.OFPGMFC_INVALID_GROUP, group_mod)
+        else:
+            # replace existing group entry with new group definition
+            group_entry = group_entry_from_group_mod(group_mod)
+            groups[group_mod.group_id] = group_entry
+            changed = True
+
+        if changed:
+            self.groups_proxy.update('/', FlowGroups(items=groups.values()))
+
+    ## <=============== PACKET_OUT ===========================================>
+
+    def packet_out(self, ofp_packet_out):
+        log.debug('packet-out', packet=ofp_packet_out)
+        print threading.current_thread().name
+        print 'PACKET_OUT:', ofp_packet_out
+        # TODO for debug purposes, lets turn this around and send it back
+        if 0:
+            self.packet_in(ofp.ofp_packet_in(
+                buffer_id=ofp_packet_out.buffer_id,
+                reason=ofp.OFPR_NO_MATCH,
+                data=ofp_packet_out.data
+            ))
+
+    ## <=============== PACKET_IN ============================================>
+
+    def packet_in(self, ofp_packet_in):
+        # TODO
+        print 'PACKET_IN:', ofp_packet_in
+        self.grpc_server.send_packet_in(self.logical_device_id, ofp_packet_in)
+
+    ## <======================== FLOW TABLE UPDATE HANDLING ===================
+
+    def _flow_table_updated(self, flows):
+        log.debug('flow-table-updated',
+                  logical_device_id=self.logical_device_id, flows=flows)
+
+        # TODO we have to evolve this into a policy-based, event-based pattern.
+        # This is a raw implementation of the specific use-case, with certain
+        # built-in assumptions, and not yet device-vendor specific. The
+        # policy-based refinement will be introduced later.
+
+        groups = self.groups_proxy.get('/').items
+        device_rules_map = self.decompose_rules(flows.items, groups)
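+        # write the decomposed per-device flow and group tables back into
+        # the config tree under each affected device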
+        for device_id, (flows, groups) in device_rules_map.iteritems():
+            self.root_proxy.update('/devices/{}/flows'.format(device_id),
+                                   Flows(items=flows.values()))
+            self.root_proxy.update('/devices/{}/flow_groups'.format(device_id),
+                                   FlowGroups(items=groups.values()))
+
+    ## <======================= GROUP TABLE UPDATE HANDLING ===================
+
+    def _group_table_updated(self, flow_groups):
+        log.debug('group-table-updated',
+                  logical_device_id=self.logical_device_id,
+                  flow_groups=flow_groups)
+
+        flows = self.flows_proxy.get('/').items
+        device_flows_map = self.decompose_rules(flows, flow_groups.items)
+        for device_id, (flows, groups) in device_flows_map.iteritems():
+            self.root_proxy.update('/devices/{}/flows'.format(device_id),
+                                   Flows(items=flows.values()))
+            self.root_proxy.update('/devices/{}/flow_groups'.format(device_id),
+                                   FlowGroups(items=groups.values()))
+
+    ## <==================== APIs NEEDED BY FLOW DECOMPOSER ===================
+
+    def _port_list_updated(self, _):
+        # invalidate the graph and the route table
+        self._invalidate_cached_tables()
+
+    def _invalidate_cached_tables(self):
+        self._routes = None
+        self._default_rules = None
+        self._nni_logical_port_no = None
+
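+    # routes and default rules are recomputed lazily, on first use after an
+    # invalidation (e.g. when the logical port list changes)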
+    def _assure_cached_tables_up_to_date(self):
+        if self._routes is None:
+            logical_ports = self.self_proxy.get('/ports')
+            graph, self._routes = self.compute_routes(
+                self.root_proxy, logical_ports)
+            self._default_rules = self._generate_default_rules(graph)
+            root_ports = [p for p in logical_ports if p.root_port]
+            assert len(root_ports) == 1
+            self._nni_logical_port_no = root_ports[0].ofp_port.port_no
+
+    def _generate_default_rules(self, graph):
+
+        def root_device_default_rules(device):
+            ports = self.root_proxy.get('/devices/{}/ports'.format(device.id))
+            upstream_ports = [
+                port for port in ports if port.type == Port.ETHERNET_NNI
+            ]
+            assert len(upstream_ports) == 1
+            downstream_ports = [
+                port for port in ports if port.type == Port.PON_OLT
+            ]
+            assert len(downstream_ports) == 1, \
+                'Initially, we only handle one PON port'
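+            # default downstream rule: strip the outer VLAN (4000) from
+            # traffic arriving on the NNI and forward it to the PON port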
+            flows = OrderedDict((f.id, f) for f in [
+                mk_flow_stat(
+                    priority=2000,
+                    match_fields=[
+                        in_port(upstream_ports[0].port_no),
+                        vlan_vid(ofp.OFPVID_PRESENT | 4000),
+                        vlan_pcp(0)
+                    ],
+                    actions=[
+                        pop_vlan(),
+                        output(downstream_ports[0].port_no)
+                    ]
+                )
+            ])
+            groups = OrderedDict()
+            return flows, groups
+
+        def leaf_device_default_rules(device):
+            ports = self.root_proxy.get('/devices/{}/ports'.format(device.id))
+            upstream_ports = [
+                port for port in ports if port.type == Port.PON_ONU
+            ]
+            assert len(upstream_ports) == 1
+            downstream_ports = [
+                port for port in ports if port.type == Port.ETHERNET_UNI
+            ]
+            assert len(downstream_ports) == 1
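+            # default upstream rule: tag traffic from the UNI with the
+            # device's own VLAN id and send it out the upstream PON port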
+            flows = OrderedDict((f.id, f) for f in [
+                mk_flow_stat(
+                    match_fields=[
+                        in_port(downstream_ports[0].port_no),
+                        vlan_vid(ofp.OFPVID_PRESENT | 0)
+                    ],
+                    actions=[
+                        set_field(vlan_vid(ofp.OFPVID_PRESENT | device.vlan)),
+                        output(upstream_ports[0].port_no)
+                    ]
+                )
+            ])
+            groups = OrderedDict()
+            return flows, groups
+
+        root_device_id = self.self_proxy.get('/').root_device_id
+        rules = {}
+        for node_key in graph.nodes():
+            node = graph.node[node_key]
+            device = node.get('device', None)
+            if device is None:
+                continue
+            if device.id == root_device_id:
+                rules[device.id] = root_device_default_rules(device)
+            else:
+                rules[device.id] = leaf_device_default_rules(device)
+        return rules
+
+    def get_route(self, ingress_port_no, egress_port_no):
+        self._assure_cached_tables_up_to_date()
+        if (egress_port_no & 0x7fffffff) == ofp.OFPP_CONTROLLER:
+            # treat it as if the output port is the NNI of the OLT
+            egress_port_no = self._nni_logical_port_no
+        return self._routes[(ingress_port_no, egress_port_no)]
+
+    def get_all_default_rules(self):
+        self._assure_cached_tables_up_to_date()
+        return self._default_rules
diff --git a/voltha/main.py b/voltha/main.py
index 39ff1f4..ac62054 100755
--- a/voltha/main.py
+++ b/voltha/main.py
@@ -34,7 +34,7 @@
 from voltha.northbound.grpc.grpc_server import VolthaGrpcServer
 from voltha.northbound.kafka.kafka_proxy import KafkaProxy, get_kafka_proxy
 from voltha.northbound.rest.health_check import init_rest_service
-from voltha.protos.common_pb2 import INFO
+from voltha.protos.common_pb2 import LogLevel
 from voltha.registry import registry
 
 VERSION = '0.9.0'
@@ -236,36 +236,42 @@
         try:
             self.log.info('starting-internal-components')
 
-            coordinator = yield Coordinator(
-                internal_host_address=self.args.internal_host_address,
-                external_host_address=self.args.external_host_address,
-                rest_port=self.args.rest_port,
-                instance_id=self.args.instance_id,
-                config=self.config,
-                consul=self.args.consul).start()
-            registry.register('coordinator', coordinator)
+            yield registry.register(
+                'coordinator',
+                Coordinator(
+                    internal_host_address=self.args.internal_host_address,
+                    external_host_address=self.args.external_host_address,
+                    rest_port=self.args.rest_port,
+                    instance_id=self.args.instance_id,
+                    config=self.config,
+                    consul=self.args.consul)
+            ).start()
 
             init_rest_service(self.args.rest_port)
 
-            grpc_server = \
-                yield VolthaGrpcServer(self.args.grpc_port).start()
-            registry.register('grpc_server', grpc_server)
+            yield registry.register(
+                'grpc_server',
+                VolthaGrpcServer(self.args.grpc_port)
+            ).start()
 
-            core = \
-                yield VolthaCore(
+            yield registry.register(
+                'kafka_proxy',
+                KafkaProxy(self.args.consul, self.args.kafka)
+            ).start()
+
+            yield registry.register(
+                'core',
+                VolthaCore(
                     instance_id=self.args.instance_id,
                     version=VERSION,
-                    log_level=INFO
-                ).start()
-            registry.register('core', core)
+                    log_level=LogLevel.INFO
+                )
+            ).start()
 
-            kafka_proxy = \
-                yield KafkaProxy(self.args.consul, self.args.kafka).start()
-            registry.register('kafka_proxy', kafka_proxy)
-
-            adapter_loader = yield AdapterLoader(
-                config=self.config.get('adapter_loader', {})).start()
-            registry.register('adapter_loader', adapter_loader)
+            yield registry.register(
+                'adapter_loader',
+                AdapterLoader(config=self.config.get('adapter_loader', {}))
+            ).start()
 
             self.log.info('started-internal-services')
 
diff --git a/voltha/northbound/grpc/grpc_server.py b/voltha/northbound/grpc/grpc_server.py
index c25429d..86dca34 100644
--- a/voltha/northbound/grpc/grpc_server.py
+++ b/voltha/northbound/grpc/grpc_server.py
@@ -28,7 +28,7 @@
 from zope.interface import implementer
 
 from common.utils.grpc_utils import twisted_async
-from voltha.core.device_model import DeviceModel
+from voltha.core.logical_device_agent import LogicalDeviceAgent
 from voltha.protos import voltha_pb2, schema_pb2
 from google.protobuf.empty_pb2 import Empty
 
@@ -109,7 +109,7 @@
 
     def __init__(self, threadpool):
         self.threadpool = threadpool
-        self.devices = [DeviceModel(self, 1)]
+        self.devices = [LogicalDeviceAgent(self, 1)]
         self.devices_map = dict((d.info.id, d) for d in self.devices)
         self.packet_in_queue = Queue()
 
diff --git a/voltha/protos/adapter.proto b/voltha/protos/adapter.proto
index 2e16f96..4af54c3 100644
--- a/voltha/protos/adapter.proto
+++ b/voltha/protos/adapter.proto
@@ -10,7 +10,7 @@
 message AdapterConfig {
 
     // Common adapter config attributes here
-    LogLevel log_level = 1;
+    LogLevel.LogLevel log_level = 1;
 
     // Custom (vendor-specific) configuration attributes
     google.protobuf.Any additional_config = 64;
diff --git a/voltha/protos/common.proto b/voltha/protos/common.proto
index b3e1e34..1497d56 100644
--- a/voltha/protos/common.proto
+++ b/voltha/protos/common.proto
@@ -7,10 +7,75 @@
     string id = 1;
 }
 
-enum LogLevel {
-    DEBUG = 0;
-    INFO = 1;
-    WARNING = 2;
-    ERROR = 3;
-    CRITICAL = 4;
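+// Note: the enums below are wrapped in messages to scope their value names;
+// in protobuf, enum value names live in the enclosing scope, so names such
+// as UNKNOWN could otherwise not be reused across enums in this file.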
+message LogLevel {
+
+    // Logging verbosity level
+    enum LogLevel {
+        DEBUG = 0;
+        INFO = 1;
+        WARNING = 2;
+        ERROR = 3;
+        CRITICAL = 4;
+    }
+}
+
+message AdminState {
+
+    // Administrative State
+    enum AdminState {
+
+        // The administrative state of the device is unknown
+        UNKNOWN = 0;
+
+        // The device is pre-provisioned into Voltha, but not contacted by it
+        PREPROVISIONED = 1;
+
+        // The device is enabled for activation and operation
+        ENABLED = 3;
+
+        // The device is disabled and shall not perform its intended forwarding
+        // functions other than being available for re-activation.
+        DISABLED = 2;
+    }
+}
+
+message OperStatus {
+
+    // Operational Status
+    enum OperStatus {
+
+        // The status of the device is unknown at this point
+        UNKNOWN = 0;
+
+        // The device has been discovered, but not yet activated
+        DISCOVERED = 1;
+
+        // The device is being activated (booted, rebooted, upgraded, etc.)
+        ACTIVATING = 2;
+
+        // Service impacting tests are being conducted
+        TESTING = 3;
+
+        // The device is up and active
+        ACTIVE = 4;
+
+        // The device has failed and cannot fulfill its intended role
+        FAILED = 5;
+    }
+}
+
+message ConnectStatus {
+
+    // Connectivity Status
+    enum ConnectStatus {
+
+        // The device connectivity status is unknown
+        UNKNOWN = 0;
+
+        // The device cannot be reached by Voltha
+        UNREACHABLE = 1;
+
+        // There is live communication between device and Voltha
+        REACHABLE = 2;
+    }
 }
diff --git a/voltha/protos/device.proto b/voltha/protos/device.proto
index 2aeae24..476cbc2 100644
--- a/voltha/protos/device.proto
+++ b/voltha/protos/device.proto
@@ -4,6 +4,7 @@
 
 import "meta.proto";
 import "google/protobuf/any.proto";
+import "common.proto";
 import "openflow_13.proto";
 
 // A Device Type
@@ -26,8 +27,33 @@
 }
 
 message Port {
-    string id = 1;
-    // TODO
+
+    enum PortType {
+        UNKNOWN = 0;
+        ETHERNET_NNI = 1;
+        ETHERNET_UNI = 2;
+        PON_OLT = 3;
+        PON_ONU = 4;
+    }
+
+    uint32 port_no = 1;  // Device-unique port number
+
+    string label = 2;  // Arbitrary port label
+
+    PortType type = 3;  //  Type of port
+
+    AdminState.AdminState admin_state = 5;
+
+    OperStatus.OperStatus oper_status = 6;
+
+    string device_id = 7;  // Unique .id of device that owns this port
+
+    message PeerPort {
+        string device_id = 1;
+        uint32 port_no = 2;
+    }
+    repeated PeerPort peers = 8;
+
 }
 
 message Ports {
@@ -38,40 +64,67 @@
 message Device {
 
     // Voltha's device identifier
-    string id = 1;
+    string id = 1 [(access) = READ_ONLY];
 
     // Device type, refers to one of the registered device types
-    string type = 2;
+    string type = 2 [(access) = READ_ONLY];
 
     // Is this device a root device. Each logical switch has one root
     // device that is associated with the logical flow switch.
-    bool root = 3;
+    bool root = 3 [(access) = READ_ONLY];
 
-    // Parent device id, in the device tree
-    string parent_id = 4;
+    // Parent device id, in the device tree (for a root device, the parent_id
+    // is the logical_device.id)
+    string parent_id = 4 [(access) = READ_ONLY];
+    uint32 parent_port_no = 20 [(access) = READ_ONLY];
 
     // Vendor, version, serial number, etc.
-    string vendor = 5;
-    string model = 6;
-    string hardware_version = 7;
-    string firmware_version = 8;
-    string software_version = 9;
-    string serial_number = 10;
+    string vendor = 5 [(access) = READ_ONLY];
+    string model = 6 [(access) = READ_ONLY];
+    string hardware_version = 7 [(access) = READ_ONLY];
+    string firmware_version = 8 [(access) = READ_ONLY];
+    string software_version = 9 [(access) = READ_ONLY];
+    string serial_number = 10 [(access) = READ_ONLY];
 
     // Adapter that takes care of device
-    string adapter = 11;
+    string adapter = 11 [(access) = READ_ONLY];
+
+    // VLAN on which the device is contacted (0 means no VLAN is used)
+    uint32 vlan = 12;
+
+    message ProxyDevice {
+        string device_id = 1;  // Which device to use as proxy to this device
+        uint32 channel_id = 2;  // Sub-address within proxy device
+    };
+
+    oneof address {
+        // Device contact MAC address (format: "xx:xx:xx:xx:xx:xx")
+        string mac_address = 13;
+
+    // Device contact IPv4 address (format: "a.b.c.d"; a hostname may also be used)
+        string ipv4_address = 14;
+
+        // Device contact IPv6 address using the canonical string form
+        // ("xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx")
+        string ipv6_address = 15;
+
+        ProxyDevice proxy_device = 19;
+    };
+
+    AdminState.AdminState admin_state = 16;
+
+    OperStatus.OperStatus oper_status = 17 [(access) = READ_ONLY];
+
+    ConnectStatus.ConnectStatus connect_status = 18 [(access) = READ_ONLY];
 
     // TODO additional common attribute here
-    // ...
 
     // Device type specific attributes
     google.protobuf.Any custom = 64;
 
-    repeated Port ports = 128  [(child_node) = {key: "id"}];
+    repeated Port ports = 128  [(child_node) = {key: "port_no"}];
     openflow_13.Flows flows = 129 [(child_node) = {}];
-//    repeated openflow_13.ofp_flow_stats flows = 129;
     openflow_13.FlowGroups flow_groups = 130 [(child_node) = {}];
-//    repeated openflow_13.ofp_group_entry flow_groups = 130;
 
 }
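
The address oneof means a Device carries exactly one contact method at a time, which is convenient for the pre-provisioning flow introduced in voltha.proto below. A sketch (not part of the patch) of composing such a Device in Python; the module path, adapter type and address values are illustrative assumptions:

    from voltha.protos import common_pb2, device_pb2

    device = device_pb2.Device(
        type='simulated_olt',              # must name a registered DeviceType
        mac_address='00:0c:e2:31:40:00',   # selects the mac_address oneof member
        admin_state=common_pb2.AdminState.PREPROVISIONED,
    )
    assert device.WhichOneof('address') == 'mac_address'

    # Assigning another member of the oneof clears the previous one.
    device.ipv4_address = '10.100.198.220'
    assert device.WhichOneof('address') == 'ipv4_address'
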
 
diff --git a/voltha/protos/logical_device.proto b/voltha/protos/logical_device.proto
index 49d555a..94d9588 100644
--- a/voltha/protos/logical_device.proto
+++ b/voltha/protos/logical_device.proto
@@ -6,31 +6,47 @@
 import "google/api/annotations.proto";
 import "openflow_13.proto";
 
-message LogicalDevice {
+
+message LogicalPort {
     string id = 1;
+    openflow_13.ofp_port ofp_port = 2;
+    string device_id = 3;
+    uint32 device_port_no = 4;
+    bool root_port = 5;
+}
+
+message LogicalPorts {
+    repeated LogicalPort items = 1;
+}
+
+message LogicalDevice {
+
+    // unique id of logical device
+    string id = 1;
+
+    // unique datapath id for the logical device (used by the SDN controller)
     uint64 datapath_id = 2;
 
+    // device description
     openflow_13.ofp_desc desc = 3;
 
-    repeated openflow_13.ofp_port ports = 4 [(child_node) = {key: "port_no"}];
-    openflow_13.Flows flows = 5 [(child_node) = {}];
-//    repeated openflow_13.ofp_flow_stats flows = 129;
-    openflow_13.FlowGroups flow_groups = 6 [(child_node) = {}];
-//    repeated openflow_13.ofp_group_entry flow_groups = 130;
+    // device features
+    openflow_13.ofp_switch_features switch_features = 4;
+
+    // id of the root device anchoring this logical device
+    string root_device_id = 5;
+
+    // logical device ports
+    repeated LogicalPort ports = 128 [(child_node) = {key: "id"}];
+
+    // flows configured on the logical device
+    openflow_13.Flows flows = 129 [(child_node) = {}];
+
+    // flow groups configured on the logical device
+    openflow_13.FlowGroups flow_groups = 130 [(child_node) = {}];
 
 }
 
 message LogicalDevices {
     repeated LogicalDevice items = 1;
 }
-
-message LogicalPorts {
-    repeated openflow_13.ofp_port items = 1;
-}
-
-message LogicalDeviceDetails {
-    string id = 1;
-    uint64 datapath_id = 2;
-    openflow_13.ofp_desc desc = 3;
-    openflow_13.ofp_switch_features switch_features = 4;
-}
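
LogicalPort replaces the bare list of ofp_port entries: each logical port now records the physical device and device port backing it, plus a root_port flag. A small illustrative sketch (module aliases, ids and port numbers are made up, not taken from the patch):

    from voltha.protos import logical_device_pb2 as ld
    from voltha.protos import openflow_13_pb2 as ofp

    nni = ld.LogicalPort(
        id='nni',
        ofp_port=ofp.ofp_port(port_no=129, name='nni'),
        device_id='simulated_olt_1',   # physical device that backs this port
        device_port_no=2,
        root_port=True,
    )
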
diff --git a/voltha/protos/openflow_13.proto b/voltha/protos/openflow_13.proto
index e110147..01b5059 100644
--- a/voltha/protos/openflow_13.proto
+++ b/voltha/protos/openflow_13.proto
@@ -1811,7 +1811,8 @@
 
 /* Body of reply to OFPMP_FLOW request. */
 message ofp_flow_stats {
-    uint32 table_id = 1;        /* ID of table flow came from. */
+    uint64 id = 14;            /* Unique ID of flow within device. */
+    uint32 table_id = 1;       /* ID of table flow came from. */
     uint32 duration_sec = 2;   /* Time flow has been alive in seconds. */
     uint32 duration_nsec = 3;  /* Time flow has been alive in nanoseconds
                                   beyond duration_sec. */
@@ -1821,8 +1822,8 @@
     uint32 flags = 7;          /* Bitmap of OFPFF_* flags. */
     uint64 cookie = 8;         /* Opaque controller-issued identifier. */
     uint64 packet_count = 9;   /* Number of packets in flow. */
-    uint64 byte_count = 10;     /* Number of bytes in flow. */
-    ofp_match match = 12;  /* Description of fields. Variable size. */
+    uint64 byte_count = 10;    /* Number of bytes in flow. */
+    ofp_match match = 12;      /* Description of fields. Variable size. */
     repeated ofp_instruction instructions = 13; /* Instruction set
                                                    (0 or more) */
 };
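
The new flow id is a uint64, so under the proto3 JSON mapping it crosses the REST gateway as a decimal string rather than a JSON number, which protects it from precision loss in JavaScript-based clients. A quick check against the generated bindings (module path assumed):

    from google.protobuf import json_format
    from voltha.protos import openflow_13_pb2 as ofp

    flow = ofp.ofp_flow_stats(id=(1 << 63) + 5)
    print(json_format.MessageToJson(flow))  # -> {"id": "9223372036854775813"}
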
diff --git a/voltha/protos/voltha.proto b/voltha/protos/voltha.proto
index 2490974..f0d9aff 100644
--- a/voltha/protos/voltha.proto
+++ b/voltha/protos/voltha.proto
@@ -43,7 +43,7 @@
 
     string version = 2 [(access) = READ_ONLY];
 
-    LogLevel log_level = 3;
+    LogLevel.LogLevel log_level = 3;
 
     HealthStatus health = 10 [(child_node) = {}];
 
@@ -67,7 +67,7 @@
 
     string version = 1 [(access) = READ_ONLY];
 
-    LogLevel log_level = 2;
+    LogLevel.LogLevel log_level = 2;
 
     repeated VolthaInstance instances = 3 [(child_node) = {key: "instance_id"}];
 
@@ -91,30 +91,35 @@
  */
 service VolthaGlobalService {
 
+    // Get high level information on the Voltha cluster
     rpc GetVoltha(google.protobuf.Empty) returns(Voltha) {
         option (google.api.http) = {
             get: "/api/v1"
         };
     }
 
+    // List all Voltha cluster instances
     rpc ListVolthaInstances(google.protobuf.Empty) returns(VolthaInstances) {
         option (google.api.http) = {
             get: "/api/v1/instances"
         };
     }
 
+    // Get details on a Voltha cluster instance
     rpc GetVolthaInstance(ID) returns(VolthaInstance) {
         option (google.api.http) = {
             get: "/api/v1/instances/{id}"
         };
     }
 
+    // List all logical devices managed by the Voltha cluster
     rpc ListLogicalDevices(google.protobuf.Empty) returns(LogicalDevices) {
         option (google.api.http) = {
             get: "/api/v1/logical_devices"
         };
     }
 
+    // Get additional information on a given logical device
     rpc GetLogicalDevice(ID) returns(LogicalDevice) {
         option (google.api.http) = {
             get: "/api/v1/logical_devices/{id}"
@@ -135,7 +140,7 @@
         };
     }
 
-    // Update flow table for device
+    // Update flow table for logical device
     rpc UpdateLogicalDeviceFlowTable(openflow_13.FlowTableUpdate)
             returns(google.protobuf.Empty) {
         option (google.api.http) = {
@@ -160,65 +165,84 @@
         };
     }
 
+    // List all physical devices controlled by the Voltha cluster
     rpc ListDevices(google.protobuf.Empty) returns(Devices) {
         option (google.api.http) = {
             get: "/api/v1/devices"
         };
     }
 
+    // Get more information on a given physical device
     rpc GetDevice(ID) returns(Device) {
         option (google.api.http) = {
             get: "/api/v1/devices/{id}"
         };
     }
 
-    // List ports of a logical device
+    // Pre-provision a new physical device
+    rpc CreateDevice(Device) returns(Device) {
+        option (google.api.http) = {
+            post: "/api/v1/devices"
+            body: "*"
+        };
+    }
+
+    // Activate a pre-provisioned device
+    rpc ActivateDevice(ID) returns(google.protobuf.Empty) {
+        option (google.api.http) = {
+            post: "/api/v1/devices/{id}/activate"
+        };
+    }
+
+    // List ports of a device
     rpc ListDevicePorts(ID) returns(Ports) {
         option (google.api.http) = {
             get: "/api/v1/devices/{id}/ports"
         };
     }
 
-    // List all flows of a logical device
+    // List all flows of a device
     rpc ListDeviceFlows(ID) returns(openflow_13.Flows) {
         option (google.api.http) = {
             get: "/api/v1/devices/{id}/flows"
         };
     }
 
-    // List all flow groups of a logical device
+    // List all flow groups of a device
     rpc ListDeviceFlowGroups(ID) returns(openflow_13.FlowGroups) {
         option (google.api.http) = {
             get: "/api/v1/devices/{id}/flow_groups"
         };
     }
 
+    // List device types known to Voltha
     rpc ListDeviceTypes(google.protobuf.Empty) returns(DeviceTypes) {
         option (google.api.http) = {
             get: "/api/v1/device_types"
         };
     }
 
+    // Get additional information on a device type
     rpc GetDeviceType(ID) returns(DeviceType) {
         option (google.api.http) = {
             get: "/api/v1/device_types/{id}"
         };
     }
 
+    // List all device sharding groups
     rpc ListDeviceGroups(google.protobuf.Empty) returns(DeviceGroups) {
         option (google.api.http) = {
             get: "/api/v1/device_groups"
         };
     }
 
+    // Get additional information on a device group
     rpc GetDeviceGroup(ID) returns(DeviceGroup) {
         option (google.api.http) = {
             get: "/api/v1/device_groups/{id}"
         };
     }
 
-    // TODO other top-level APIs to be added here
-
 }
 
 /*
@@ -229,30 +253,35 @@
  */
 service VolthaLocalService {
 
+    // Get information on this Voltha instance
     rpc GetVolthaInstance(google.protobuf.Empty) returns(VolthaInstance) {
         option (google.api.http) = {
             get: "/api/v1/local"
         };
     }
 
+    // Get the health state of the Voltha instance
     rpc GetHealth(google.protobuf.Empty) returns(HealthStatus) {
         option (google.api.http) = {
             get: "/api/v1/local/health"
         };
     }
 
+    // List all active adapters (plugins) in this Voltha instance
     rpc ListAdapters(google.protobuf.Empty) returns(Adapters) {
         option (google.api.http) = {
             get: "/api/v1/local/adapters"
         };
     }
 
+    // List all logical devices managed by this Voltha instance
     rpc ListLogicalDevices(google.protobuf.Empty) returns(LogicalDevices) {
         option (google.api.http) = {
             get: "/api/v1/local/logical_devices"
         };
     }
 
+    // Get additional information on a given logical device
     rpc GetLogicalDevice(ID) returns(LogicalDevice) {
         option (google.api.http) = {
             get: "/api/v1/local/logical_devices/{id}"
@@ -273,7 +302,7 @@
         };
     }
 
-    // Update flow table for device
+    // Update flow table for logical device
     rpc UpdateLogicalDeviceFlowTable(openflow_13.FlowTableUpdate)
             returns(google.protobuf.Empty) {
         option (google.api.http) = {
@@ -289,7 +318,7 @@
         };
     }
 
-    // Update group table for device
+    // Update group table for logical device
     rpc UpdateLogicalDeviceFlowGroupTable(openflow_13.FlowGroupTableUpdate)
             returns(google.protobuf.Empty) {
         option (google.api.http) = {
@@ -298,57 +327,79 @@
         };
     }
 
+    // List all physical devices managed by this Voltha instance
     rpc ListDevices(google.protobuf.Empty) returns(Devices) {
         option (google.api.http) = {
             get: "/api/v1/local/devices"
         };
     }
 
+    // Get additional information on a given device
     rpc GetDevice(ID) returns(Device) {
         option (google.api.http) = {
             get: "/api/v1/local/devices/{id}"
         };
     }
 
-    // List ports of a logical device
+    // Pre-provision a new physical device
+    rpc CreateDevice(Device) returns(Device) {
+        option (google.api.http) = {
+            post: "/api/v1/local/devices"
+            body: "*"
+        };
+    }
+
+    // Activate a pre-provisioned device
+    rpc ActivateDevice(ID) returns(google.protobuf.Empty) {
+        option (google.api.http) = {
+            post: "/api/v1/local/devices/{id}/activate"
+            body: "{}"
+        };
+    }
+
+    // List ports of a device
     rpc ListDevicePorts(ID) returns(Ports) {
         option (google.api.http) = {
             get: "/api/v1/local/devices/{id}/ports"
         };
     }
 
-    // List all flows of a logical device
+    // List all flows of a device
     rpc ListDeviceFlows(ID) returns(openflow_13.Flows) {
         option (google.api.http) = {
             get: "/api/v1/local/devices/{id}/flows"
         };
     }
 
-    // List all flow groups of a logical device
+    // List all flow groups of a device
     rpc ListDeviceFlowGroups(ID) returns(openflow_13.FlowGroups) {
         option (google.api.http) = {
             get: "/api/v1/local/devices/{id}/flow_groups"
         };
     }
 
+    // List device types known to this Voltha instance
     rpc ListDeviceTypes(google.protobuf.Empty) returns(DeviceTypes) {
         option (google.api.http) = {
             get: "/api/v1/local/device_types"
         };
     }
 
+    // Get additional information on a given device type
     rpc GetDeviceType(ID) returns(DeviceType) {
         option (google.api.http) = {
             get: "/api/v1/local/device_types/{id}"
         };
     }
 
+    // List device sharding groups managed by this Voltha instance
     rpc ListDeviceGroups(google.protobuf.Empty) returns(DeviceGroups) {
         option (google.api.http) = {
             get: "/api/v1/local/device_groups"
         };
     }
 
+    // Get more information on a given device shard
     rpc GetDeviceGroup(ID) returns(DeviceGroup) {
         option (google.api.http) = {
             get: "/api/v1/local/device_groups/{id}"
@@ -367,6 +418,4 @@
         // This does not have an HTTP representation
     }
 
-    // TODO other local APIs to be added here
-
 }
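
Taken together, CreateDevice, ActivateDevice and GetDevice cover the provisioned-activation use case end to end. A hedged sketch of driving it through the generated Python stub; the stub class location, service port and the ID wrapper coming from common_pb2 are assumptions (they depend on the gRPC codegen style and deployment), not something this patch pins down:

    import grpc
    from voltha.protos import common_pb2, device_pb2, voltha_pb2

    channel = grpc.insecure_channel('localhost:50055')    # port assumed
    stub = voltha_pb2.VolthaLocalServiceStub(channel)     # codegen-dependent

    # 1. Pre-provision the device; the read-only id is assigned by Voltha.
    device = stub.CreateDevice(device_pb2.Device(
        type='simulated_olt', mac_address='00:0c:e2:31:40:00'))

    # 2. Kick off activation; the call returns immediately and the adapter
    #    drives the admin/oper/connect status transitions asynchronously.
    stub.ActivateDevice(common_pb2.ID(id=device.id))

    # 3. Later, observe the outcome.
    device = stub.GetDevice(common_pb2.ID(id=device.id))
    print(common_pb2.OperStatus.OperStatus.Name(device.oper_status))
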
diff --git a/voltha/registry.py b/voltha/registry.py
index 767cb7e..d38e210 100644
--- a/voltha/registry.py
+++ b/voltha/registry.py
@@ -52,6 +52,7 @@
         assert IComponent.providedBy(component)
         assert name not in self.components
         self.components[name] = component
+        return component
 
     def unregister(self, name):
         if name in self.components:
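
Having register() return the component lets callers register and keep (or immediately start) an instance in a single expression. An illustrative use, assuming voltha/registry.py also exposes IComponent and a module-level registry singleton (only the assert on IComponent is visible in this hunk):

    from zope.interface import implementer
    from voltha.registry import IComponent, registry

    @implementer(IComponent)
    class Heartbeat(object):    # illustrative component, not part of the patch
        def start(self):
            return self
        def stop(self):
            pass

    # registration and start-up now chain into one expression
    heartbeat = registry.register('heartbeat', Heartbeat()).start()
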