Flow decomposition and miscellaneous improvements

Specifically:

The biggest addition is an initial flow decomposition
implementation that splits flows and flow groups
defined over the logical device into per-physical-device
flows, based on a crude heuristic approach. We expect
this part to be much improved later on, both in terms
of generality and speed.
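As a rough illustration of the kind of split involved (the names and the port map below are hypothetical, not the actual Voltha API):

```python
# Hypothetical sketch: split a flow defined on logical ports into
# per-physical-device flows. PORT_MAP and the flow dicts are
# illustrative only.
PORT_MAP = {
    # logical port -> (physical device id, physical port)
    1: ('olt-1', 129),
    2: ('onu-1', 1),
}

def decompose(logical_flow):
    """Map a flow on logical ports to flows on the devices it touches."""
    in_dev, in_port = PORT_MAP[logical_flow['in_port']]
    out_dev, out_port = PORT_MAP[logical_flow['out_port']]
    device_flows = {}
    # crude heuristic: install a rewritten copy of the rule on each
    # device the flow traverses
    device_flows.setdefault(in_dev, []).append(
        dict(logical_flow, in_port=in_port))
    device_flows.setdefault(out_dev, []).append(
        dict(logical_flow, out_port=out_port))
    return device_flows
```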

The flow decomposition is triggered by any flow
or group mods applied to a logical device, and it
then updates the affected device tables accordingly.
This uses the POST_UPDATE (post-commit) mechanism
of core.
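A minimal sketch of the post-commit callback idea (the class and method names here are illustrative, not the core API): callbacks are queued during a change and run only after it completes, so updates made from inside a callback are not lost.

```python
class Store:
    """Toy config store with POST_UPDATE-style post-commit callbacks."""
    def __init__(self):
        self.data = {}
        self.post_update_cbs = []
        self._queue = []
        self._running = False

    def subscribe(self, cb):
        self.post_update_cbs.append(cb)

    def update(self, key, value):
        self.data[key] = value
        # queue callbacks instead of firing them mid-change, so updates
        # made inside a callback are themselves picked up
        self._queue.extend((cb, key) for cb in self.post_update_cbs)
        if self._running:
            return
        self._running = True
        try:
            while self._queue:
                cb, k = self._queue.pop(0)
                cb(self, k)
        finally:
            self._running = False
```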

There is also an initial architecture diagram added
under docs.

Additional improvements:

* Implemented metadata passing across the gRPC
  link, both in Voltha and in Chameleon. This paves
  the way to pass query args as metadata, and also
  to pass HTTP header fields back and forth across
  the gRPC API. This is already used to pass the
  depth argument for GET /api/v1/local, and it will
  be used to allow working with transactions and
  specific config revs.
* Improved automatic reload and reconnect of Chameleon
  after Voltha is restarted.
* Improved error handling in gRPC handlers, especially
  for the "resource not found" (404) and "bad argument"
  (400) type errors. This makes gRPC Rendezvous errors
  a bit cleaner, and also allows Chameleon to map these
  errors into 404/400 codes.
* Better error logging for generic errors in gRPC handlers.
* Many new test-cases.
* Initial skeleton and first steps implemented for
  the automated testing of the cold PON activation
  sequence.
* Convenience functions for working with flows (exemplified
  by the test-cases).
* Fixed a bug in the config engine that dropped changes
  made in a POST_* callback, such as the ones used
  to propagate the logical flow tables into the device
  tables. The fix was to defer the callbacks until the
  initial changes are complete and then execute all
  callbacks in sequence.
* Adapter proxy with a well-defined API that can be
  used by the adapters to communicate back to Core.
* Extended the simulated_olt and simulated_onu adapters
  to demonstrate both discovery-style and provisioned
  activation use cases.
* Adapter-, device-, and logical device agents to provide
  the active business logic associated with these
  entities.
* Fixed 64-bit value passing across the stack. There was
  an issue due to inconsistent use of two JSON<-->Proto
  libraries, one of which did not adhere to the Google
  spec, which recommends passing 64-bit integer values as
  strings.
* Annotations added for all gRPC methods.
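The metadata plumbing largely boils down to converting HTTP headers into the list-of-tuples form gRPC expects (gRPC requires lowercase keys) and mapping trailing metadata back onto the response; a small sketch, with helper names of our own invention:

```python
def headers_to_metadata(headers):
    """HTTP header dict -> gRPC metadata list; keys must be lowercase."""
    return [(k.lower(), str(v)) for k, v in sorted(headers.items())]

def metadata_to_headers(metadata):
    """Trailing metadata returned by the server -> response headers."""
    return dict(metadata)

# On the client side the metadata travels with the call, roughly:
#   response, call = stub.Method.with_call(request, metadata=md)
#   trailing = call.trailing_metadata()
```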
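The 64-bit issue is easy to reproduce: IEEE-754 doubles (the only number type many JSON consumers use) cannot represent every int64, which is why the proto3 JSON mapping carries 64-bit integers as strings:

```python
big = 2**63 - 1           # INT64_MAX
assert float(big) != big  # a double rounds it up to 2**63
# round-tripping through a string preserves the exact value
assert int(str(big)) == big
```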

All Voltha test-cases are passing.

Change-Id: Id949e8d1b76276741471bedf9901ac33bfad9ec6
diff --git a/grpc_client/grpc_client.py b/grpc_client/grpc_client.py
index 74933cd..05a4dd7 100644
--- a/grpc_client/grpc_client.py
+++ b/grpc_client/grpc_client.py
@@ -252,24 +252,27 @@
             _ = __import__(modname)
 
     @inlineCallbacks
-    def invoke(self, stub, method_name, request, retry=1):
+    def invoke(self, stub, method_name, request, metadata, retry=1):
         """
         Invoke a gRPC call to the remote server and return the response.
         :param stub: Reference to the *_pb2 service stub
         :param method_name: The method name inside the service stub
         :param request: The request protobuf message
-        :return: The response protobuf message
+        :param metadata: [(str, str), (str, str), ...]
+        :return: The response protobuf message and returned trailing metadata
         """
 
         if not self.connected:
             raise ServiceUnavailable()
 
         try:
-            response = getattr(stub(self.channel), method_name)(request)
-            returnValue(response)
+            method = getattr(stub(self.channel), method_name)
+            response, rendezvous = method.with_call(request, metadata=metadata)
+            returnValue((response, rendezvous.trailing_metadata()))
 
         except grpc._channel._Rendezvous, e:
-            if e.code() == grpc.StatusCode.UNAVAILABLE:
+            code = e.code()
+            if code == grpc.StatusCode.UNAVAILABLE:
                 e = ServiceUnavailable()
 
                 if self.connected:
@@ -277,10 +280,17 @@
                     yield self.connect()
                     if retry > 0:
                         response = yield self.invoke(stub, method_name,
-                                                     request,
+                                                     request, metadata,
                                                      retry=retry - 1)
                         returnValue(response)
 
+            elif code in (
+                    grpc.StatusCode.NOT_FOUND,
+                    grpc.StatusCode.INVALID_ARGUMENT,
+                    grpc.StatusCode.ALREADY_EXISTS):
+
+                pass  # don't log error, these occur naturally
+
             else:
                 log.exception(e)
 
diff --git a/protoc_plugins/gw_gen.py b/protoc_plugins/gw_gen.py
index c5a8875..4400a1a 100755
--- a/protoc_plugins/gw_gen.py
+++ b/protoc_plugins/gw_gen.py
@@ -32,8 +32,7 @@
 
 from simplejson import dumps, load
 from structlog import get_logger
-from protobuf_to_dict import dict_to_protobuf
-from google.protobuf.json_format import MessageToDict
+from google.protobuf.json_format import MessageToDict, ParseDict
 from twisted.internet.defer import inlineCallbacks, returnValue
 
 {% set package = file_name.replace('.proto', '') %}
@@ -65,16 +64,16 @@
         {% elif method['body'] == '' %}
         data = kw
         {% else %}
-        riase NotImplementedError('cannot handle specific body field list')
+        raise NotImplementedError('cannot handle specific body field list')
         {% endif %}
         try:
-            req = dict_to_protobuf({{ type_map[method['input_type']] }}, data)
+            req = ParseDict(data, {{ type_map[method['input_type']] }}())
         except Exception, e:
             log.error('cannot-convert-to-protobuf', e=e, data=data)
             raise
-        res = yield grpc_client.invoke(
+        res, metadata = yield grpc_client.invoke(
             {{ type_map[method['service']] }}Stub,
-            '{{ method['method'] }}', req)
+            '{{ method['method'] }}', req, request.getAllHeaders().items())
         try:
             out_data = MessageToDict(res, True, True)
         except AttributeError, e:
@@ -83,6 +82,8 @@
                 f.write(res.SerializeToString())
             log.error('cannot-convert-from-protobuf', outdata_saved=filename)
             raise
+        for key, value in metadata:
+            request.setHeader(key, value)
         request.setHeader('Content-Type', 'application/json')
         log.debug('{{ method_name }}', **out_data)
         returnValue(dumps(out_data))
diff --git a/web_server/web_server.py b/web_server/web_server.py
index c96be0f..7e3d00d 100644
--- a/web_server/web_server.py
+++ b/web_server/web_server.py
@@ -17,6 +17,7 @@
 
 import os
 
+import grpc
 from klein import Klein
 from simplejson import dumps, load
 from structlog import get_logger
@@ -26,6 +27,8 @@
 from twisted.web.server import Site
 from twisted.web.static import File
 from werkzeug.exceptions import BadRequest
+from grpc import StatusCode
+
 
 log = get_logger()
 
@@ -97,3 +100,19 @@
             return File(os.path.join(self.work_dir, 'swagger.json'))
         except Exception, e:
             log.exception('file-not-found', request=request)
+
+    @app.handle_errors(grpc._channel._Rendezvous)
+    def grpc_exception(self, request, failure):
+        code = failure.value.code()
+        if code == StatusCode.NOT_FOUND:
+            request.setResponseCode(404)
+            return failure.value.details()
+        elif code == StatusCode.INVALID_ARGUMENT:
+            request.setResponseCode(400)
+            return failure.value.details()
+        elif code == StatusCode.ALREADY_EXISTS:
+            request.setResponseCode(409)
+            return failure.value.details()
+        else:
+            raise
+