VOL-1352 - Use serial_number reported by device instead of host:port.

Currently the logical device's serial number is set to the
host:port of the OLT. This was done because the device's
serial number was not reported by the openolt agent.
Now that the openolt agent reports the actual device serial
number, this commit makes it a requirement to use the
device's serial number in the ONOS config.

This change requires the ONOS sadis config to specify
the serial number instead of host:port:

                "entries" : [
                    {
                        "id" : "EC1721000216",
                        "hardwareIdentifier" : "de:ad:be:ef:ba:11",
                        "uplinkPort" : 65536
                    },

This commit requires the corresponding openolt agent change, which
provides the serial number in the device_info:

commit 42bc6ec6af647ebd42e690c0e28e1d5623ab912f
Author: Thiyagarajan Subramani <Thiyagarajan.Subramani@radisys.com>
Date:   Sat Feb 2 03:21:43 2019 -0800

    VOL-1392: OpenOLT driver should send the actual device serial number

    Change-Id: I1c9703568bc85f7e8e3c62313a4a9abaa9d7b1e7

Change-Id: I9a40717baf6ca23d6a1171d4e79f49a0c5175133
README.md

VOLTHA

What is Voltha?

Voltha aims to provide a layer of abstraction on top of legacy and next generation access network equipment for the purpose of control and management. Its initial focus is on PON (GPON, EPON, NG PON 2), but it aims to go beyond to eventually cover other access technologies (xDSL, Docsis, G.FAST, dedicated Ethernet, fixed wireless).

Key concepts of Voltha:

  • Network as a Switch: It makes a set of connected access network devices look like an (abstract) programmable flow device, an L2/L3/L4 switch. Examples:
    • PON as a Switch
    • PON + access backhaul as a Switch
    • xDSL service as a Switch
  • Evolution to virtualization: it can work with a variety of (access) network technologies and devices, including legacy, fully virtualized (in the sense of separation of hardware and software), and in between. Voltha can run on a device, on general-purpose servers in the central office, or in data centers.
  • Unified OAM abstraction: it provides unified, vendor- and technology-agnostic handling of device management tasks, such as service lifecycle, device lifecycle (including discovery, upgrade), system monitoring, alarms, troubleshooting, security, etc.
  • Cloud/DevOps bridge to modernization: it does all of the above while also treating the abstracted network functions as software services that are manageable much like other software components in the cloud, i.e., containers.

Why Voltha?

Control and management in the access network space is a mess. Each access technology brings its own bag of protocols, and on top of that vendors have their own interpretations/extensions of the same standards. Compounding the problem is that these vendor- and technology-specific differences ooze way up into the centralized OSS systems of the service provider, creating a lot of inefficiencies.

Ideally, all vendor equipment for the same access technology should provide an identical interface for control and management. Moreover, there should be much greater synergy across technologies. While we wait for vendors to unite, Voltha provides a step in that direction by confining the differences to the access locality and hiding them from the upper layers of the OSS stack.

How can you work with Voltha?

While we are still in the early phases of development, you can check out the BUILD.md file to see how you can build it, run it, test it, etc.

How can you help?

Contributions, small and large, are welcome. Minor contributions and bug fixes are always welcome in the form of pull requests. For larger work, it is best to check in with the existing developers to see where help is most needed and to make sure your solution is compatible with the general philosophy of Voltha.

Contributing Unit Tests

To begin, make sure you have a development environment installed according to the OpenCord wiki. Next, in a shell environment:

source env.sh;             # Source the environment settings and create a virtual environment
make utest-with-coverage;  # Execute the Unit Test with coverage reporting

Unit-testing the Core

New unit tests for the core can be written with the nosetest framework and can be found under /tests/utest/.
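
As a minimal sketch (the file name and the assertion below are hypothetical placeholders), a core unit test that nosetest would discover might look like:

# tests/utest/test_example.py  (hypothetical file name)
from unittest import TestCase

class ExampleCoreTest(TestCase):

    def test_sanity(self):
        # Replace with assertions against real core components.
        self.assertEqual(1 + 1, 2)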

Unit-testing an Adapter

Each adapter's unit tests are discovered by the presence of a test.mk submake file underneath the adapter's directory. For example:

# voltha/adapters/my_new_adapter/test.mk

.PHONY: test
test:
	@echo "Testing my amazing new adapter"
	@./my_test_harness

Voltha's test framework will execute the FIRST target in the submake file as the unit test function. It may include as many dependencies as needed, such as using a different Python framework for testing (pytest, unittest, tox) or even alternate languages (Go, Rust, PHP).

In order for your adapter's test coverage to be reported, make sure that your test harness creates a coverage report in JUnit XML format. Most test harnesses can easily produce this report format. The Jenkins job will pick up your coverage report file if it is named junit-report.xml, per the Jenkins configuration.
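
As a hedged sketch (the adapter name and test directory are hypothetical), a pytest-based test.mk could produce such a report like this:

# voltha/adapters/my_new_adapter/test.mk  (hypothetical adapter)
# pytest's --junitxml option writes results in the JUnit XML format
# that the Jenkins job expects under the name junit-report.xml.

.PHONY: test
test:
	@pytest --junitxml=junit-report.xml tests/

Invoking this target locally (e.g. make -f test.mk test from the adapter directory) should leave junit-report.xml in place for Jenkins to collect.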