Flow decomposition and miscellaneous improvements
Specifically:
The biggest addition is an initial flow decomposition
implementation that splits flows and flow groups
defined over the logical device into per-physical-
device flows, using a fairly crude heuristic
approach. We expect this part to be much improved
later on, both in terms of generality and speed.
The flow decomposition is triggered by any flow
or group mods applied to a logical device, and it
consequently touches up the affected device tables.
This uses the POST_UPDATE (post-commit) mechanism
of core.
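The decomposition step above can be sketched as follows. This is a
minimal illustration, not Voltha's actual API: the function and
helper names (decompose_flows, route_for_flow) are hypothetical, and
flows are modeled as plain dicts rather than protobuf messages.

```python
def decompose_flows(logical_flows, route_for_flow):
    """Split logical-device flows into per-physical-device flow tables.

    route_for_flow is assumed to return, for a given logical flow, the
    list of (device_id, in_port) hops that flow traverses -- this is
    where the crude heuristic lives.
    """
    device_tables = {}  # device_id -> list of device-level flows
    for flow in logical_flows:
        # Emit one device-level flow per hop along the flow's route,
        # rewriting the ingress port to the device-local port.
        for device_id, in_port in route_for_flow(flow):
            device_flow = dict(flow, in_port=in_port)
            device_tables.setdefault(device_id, []).append(device_flow)
    return device_tables
```

In the real code this would run inside the POST_UPDATE (post-commit)
callback, so the per-device tables are touched up only after the
logical-device change has been committed.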
An initial architecture diagram has also been added
under docs.
Additional improvements:
* Implemented metadata passing across the gRPC
link, both in Voltha and in Chameleon. This paves
the way for passing query args as metadata, and
for passing HTTP header fields back and forth
across the gRPC API. This is already used to pass
the depth argument for GET /api/v1/local, and it
will be used to allow working with transactions
and specific config revs.
* Improved automatic reload and reconnect of chameleon
after Voltha is restarted.
* Improved error handling in gRPC handlers,
especially for "resource not found" (404) and
"bad argument" (400) type errors. This makes gRPC
Rendezvous errors a bit cleaner, and also allows
Chameleon to map these errors to 404/400 HTTP codes.
* Better error logging for generic errors in gRPC handlers.
* Many new test-cases
* Initial skeleton and first several steps implemented
for automated testing of the cold PON activation
sequence.
* Convenience functions for working with flows (exemplified
by the test-cases)
* Fixed a bug in the config engine that dropped changes
made in a POST_* callback, such as the ones used
to propagate the logical flow tables into the device
tables. The fix was to defer the callbacks until the
initial changes are complete and then execute all
callbacks in sequence.
* Adapter proxy with a well-defined API that can be
used by the adapters to communicate back to Core.
* Extended simulated_olt and simulated_onu adapters to
demonstrate both discovery-style and provisioned-
activation-style use cases.
* Adapter-, device-, and logical device agents to provide
the active business logic associated with these
entities.
* Fixed 64-bit value passing across the stack. There was
an issue due to inconsistent use of two JSON<-->Proto
libraries, one of which did not adhere to the Google
spec, which recommends passing 64-bit integer values as
strings.
* Annotations added for all gRPC methods.
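The query-args-as-metadata idea from the first bullet can be sketched
as two small helpers. These are illustrative only: the 'get-' key
prefix and the helper names are assumptions, not the actual keys or
functions used by Voltha or Chameleon. gRPC metadata is just a
sequence of (key, value) string pairs attached to a call, so a query
arg like ?depth=2 can ride along without changing the proto messages.

```python
def query_args_to_metadata(args):
    """Turn HTTP query args, e.g. {'depth': '2'}, into gRPC-style
    metadata pairs (hypothetical 'get-' key prefix)."""
    return tuple(('get-' + key, value)
                 for key, value in sorted(args.items()))

def metadata_to_query_args(metadata):
    """Recover the query args from call metadata on the server side."""
    return {key[len('get-'):]: value
            for key, value in metadata
            if key.startswith('get-')}
```

On the client side, the pairs would be passed via the metadata
argument that gRPC stub methods accept; the same mechanism works in
the other direction for returning HTTP header fields.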
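The deferred-callback fix in the config engine can be illustrated
with a simplified model (class and method names here are hypothetical,
not the actual config-engine API): POST_* callbacks fired while a
change is still being applied are queued, and the queue is drained
only once the initial changes are complete.

```python
class ConfigNode:
    """Toy model of the deferred POST_* callback mechanism."""

    def __init__(self):
        self._deferred_callbacks = []
        self._in_transaction = False

    def _invoke_callback(self, callback, *args):
        if self._in_transaction:
            # Running the callback now could mutate the tree
            # mid-commit, and its changes would be dropped; defer it.
            self._deferred_callbacks.append((callback, args))
        else:
            callback(*args)

    def commit(self, apply_changes):
        self._in_transaction = True
        try:
            apply_changes(self)
        finally:
            self._in_transaction = False
        # Initial changes are complete; execute all deferred
        # callbacks in sequence.
        queue, self._deferred_callbacks = self._deferred_callbacks, []
        for callback, args in queue:
            callback(*args)
```

This preserves the ordering guarantee the fix relies on: changes made
by a POST_* callback (such as propagating logical flow tables into
device tables) land only after the triggering commit.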
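The 64-bit fix above follows from the proto3 JSON mapping, which
requires int64/uint64 fields to be encoded as JSON strings: JSON
consumers that store numbers as doubles (notably JavaScript) keep
only 53 bits of integer precision. A short demonstration using only
the standard json module (the "cookie" field name is illustrative):

```python
import json

cookie = 2**63 - 1  # e.g. a 64-bit OpenFlow cookie

# Spec-compliant encoding: 64-bit integer as a JSON string.
spec_compliant = json.dumps({"cookie": str(cookie)})
# Non-compliant encoding: bare JSON number.
non_compliant = json.dumps({"cookie": cookie})

# A double-based decoder (as in JavaScript) silently loses the
# low bits of the bare-number form:
assert int(float(cookie)) != cookie
# The string form survives any decoder intact:
assert int(json.loads(spec_compliant)["cookie"]) == cookie
```

Using two JSON<-->Proto libraries that disagreed on this encoding is
exactly the kind of inconsistency that corrupted values mid-stack.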
All Voltha test-cases are passing.
Change-Id: Id949e8d1b76276741471bedf9901ac33bfad9ec6
diff --git a/ofagent/grpc_client.py b/ofagent/grpc_client.py
index a4475b5..5d42566 100644
--- a/ofagent/grpc_client.py
+++ b/ofagent/grpc_client.py
@@ -26,8 +26,8 @@
from twisted.internet import threads
from twisted.internet.defer import inlineCallbacks, returnValue, DeferredQueue
-from protos.voltha_pb2 import ID, VolthaLogicalLayerStub, FlowTableUpdate, \
- GroupTableUpdate, PacketOut
+from protos.voltha_pb2 import ID, VolthaLocalServiceStub, FlowTableUpdate, \
+ FlowGroupTableUpdate, PacketOut
from google.protobuf import empty_pb2
@@ -40,7 +40,7 @@
self.connection_manager = connection_manager
self.channel = channel
- self.logical_stub = VolthaLogicalLayerStub(channel)
+ self.local_stub = VolthaLocalServiceStub(channel)
self.stopped = False
@@ -74,14 +74,14 @@
def stream_packets_out():
generator = packet_generator()
- self.logical_stub.StreamPacketsOut(generator)
+ self.local_stub.StreamPacketsOut(generator)
reactor.callInThread(stream_packets_out)
def start_packet_in_stream(self):
def receive_packet_in_stream():
- streaming_rpc_method = self.logical_stub.ReceivePacketsIn
+ streaming_rpc_method = self.local_stub.ReceivePacketsIn
iterator = streaming_rpc_method(empty_pb2.Empty())
for packet_in in iterator:
reactor.callFromThread(self.packet_in_queue.put,
@@ -110,14 +110,14 @@
def get_port_list(self, device_id):
req = ID(id=device_id)
res = yield threads.deferToThread(
- self.logical_stub.ListLogicalDevicePorts, req)
+ self.local_stub.ListLogicalDevicePorts, req)
returnValue(res.items)
@inlineCallbacks
def get_device_info(self, device_id):
req = ID(id=device_id)
res = yield threads.deferToThread(
- self.logical_stub.GetLogicalDevice, req)
+ self.local_stub.GetLogicalDevice, req)
returnValue(res)
@inlineCallbacks
@@ -127,29 +127,29 @@
flow_mod=flow_mod
)
res = yield threads.deferToThread(
- self.logical_stub.UpdateFlowTable, req)
+ self.local_stub.UpdateLogicalDeviceFlowTable, req)
returnValue(res)
@inlineCallbacks
def update_group_table(self, device_id, group_mod):
- req = GroupTableUpdate(
+ req = FlowGroupTableUpdate(
id=device_id,
group_mod=group_mod
)
res = yield threads.deferToThread(
- self.logical_stub.UpdateGroupTable, req)
+ self.local_stub.UpdateLogicalDeviceFlowGroupTable, req)
returnValue(res)
@inlineCallbacks
def list_flows(self, device_id):
req = ID(id=device_id)
res = yield threads.deferToThread(
- self.logical_stub.ListDeviceFlows, req)
+ self.local_stub.ListLogicalDeviceFlows, req)
returnValue(res.items)
@inlineCallbacks
def list_groups(self, device_id):
req = ID(id=device_id)
res = yield threads.deferToThread(
- self.logical_stub.ListDeviceFlowGroups, req)
+ self.local_stub.ListLogicalDeviceFlowGroups, req)
returnValue(res.items)