Development Conventions
=======================

ONF has many (over 300) software repositories as of 2022, and to effectively
develop across all of these projects, the following development conventions are
used.

Strategy
--------

The general strategy for ONF software development is as follows:

Make it easy to start contributing to any repo
""""""""""""""""""""""""""""""""""""""""""""""

A convention we've embraced in nearly all repos is to have a ``Makefile`` in
their base directory, which has a few common targets:

* ``make help``: Get a list of ``make`` targets
* ``make build``: Run a build of the software
* ``make test``: Run all tests, as they'd be run in CI by the test runner

Most ``Makefiles`` use the GNU Make syntax variant.

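A minimal sketch of what such a ``Makefile`` might look like for a Python
project; the specific tools (``python -m build``, ``pytest``) are illustrative
assumptions, not a mandated toolchain:

.. code-block:: makefile

   # Hypothetical top-level Makefile sketch; tool choices are examples only.
   .PHONY: help build test

   help: ## Print available targets and their descriptions
   	@grep -E '^[a-zA-Z_-]+:.*## ' $(MAKEFILE_LIST)

   build: ## Build the project artifacts
   	python -m build

   test: ## Run all tests, exactly as CI would
   	pytest

The ``## description`` comments make ``make help`` self-documenting: the
``grep`` simply prints every target line that carries one.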
Automated tests should be identical for the developer and automation
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

The CI test runner (Jenkins or similar) that performs automated testing should
run the same tests in the same way that a developer would on their local
system, by running ``make test``. There should be no (or very rare) cases where
a test will pass locally but fail in CI.

Additionally, this greatly simplifies the configuration of the test runner - it
only needs to run ``make test``. In some cases, it may be necessary for the
Makefile to include commands that generate Jenkins-consumable output, such as
test results (usually in JUnit, xUnit, or TAP formats) or coverage information
(usually in Cobertura XML format).

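In that case, the ``test`` target can produce those artifacts directly. A
hypothetical sketch, assuming ``pytest`` with the ``pytest-cov`` plugin (the
tool choice and output file names are assumptions):

.. code-block:: makefile

   # Hypothetical test target; emits JUnit-format results and
   # Cobertura-format coverage XML that a Jenkins job can collect.
   test:
   	pytest --junitxml=results.xml --cov=. --cov-report=xml:coverage.xml

Because the artifacts come from the same ``make test`` invocation a developer
runs, local and CI results stay directly comparable.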
Style, formatting, linting, license compliance should be automated
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

Code review should be about the structure and design of the code, and should
not be spent on style, formatting, or other issues that a machine can catch.

To that end, automated tools should be used that verify and conform the code to
a specific convention and standard:

* Formatting tools such as ``go fmt``, ``black``, and similar should be used to
  conform code to style guidelines

* Linting tools such as ``pylint``, ``yamllint``, and similar should be used to
  catch common errors and enforce conventions

In the documentation space, spelling and other checks should also be performed.

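These checks are usually wrapped in a single ``lint`` target so that developers
and CI run them identically. A hypothetical sketch; the tool selection and the
``mypackage`` path are illustrative assumptions:

.. code-block:: makefile

   # Hypothetical lint target; each tool exits non-zero on a violation,
   # which fails the make invocation (and therefore the CI job).
   lint:
   	black --check .
   	pylint mypackage
   	yamllint .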
Versioning and Releasing Software
---------------------------------

Versioning software and performing releases are fundamentally for two audiences:

1. Users of the software, providing compatibility claims about each release
2. Developers who use the software and need a stable reference point to build
   against

Two common and recommended versioning schemes are:

* SemVer (Semantic Versioning)
* CalVer (Calendar Versioning)

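With SemVer, for example, versions order numerically by major, then minor, then
patch - so ``1.10.0`` is newer than ``1.9.0``, which a plain string comparison
would get wrong. A minimal sketch of parsing and comparing SemVer-style
versions (the ``SemVer`` class is a hypothetical helper, not an ONF library):

.. code-block:: python

   from dataclasses import dataclass

   @dataclass(frozen=True, order=True)
   class SemVer:
       # Field order (major, minor, patch) defines the comparison order.
       major: int
       minor: int
       patch: int

       @classmethod
       def parse(cls, text: str) -> "SemVer":
           # Accept an optional leading "v", as in git tags like "v1.2.3".
           major, minor, patch = text.lstrip("v").split(".")
           return cls(int(major), int(minor), int(patch))

   # Numeric comparison: 1.10.0 sorts after 1.9.0, unlike a string compare.
   assert SemVer.parse("v1.10.0") > SemVer.parse("1.9.0")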
Division of responsibilities
""""""""""""""""""""""""""""

Automated tools should perform low-level versioning tasks like creating tags on
repos.
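One way to keep that automated tagging idempotent is to compare the declared
version against the tags that already exist and only create what is missing. A
hypothetical sketch (the function name and the ``v``-prefix tag convention are
assumptions):

.. code-block:: python

   def tags_to_create(version: str, existing_tags: list[str]) -> list[str]:
       """Return the release tags the automation still needs to create.

       Re-running the job on an already-tagged version returns an empty
       list, so the release pipeline is safe to retry.
       """
       tag = f"v{version}"
       return [] if tag in existing_tags else [tag]

   assert tags_to_create("1.2.0", ["v1.0.0", "v1.1.0"]) == ["v1.2.0"]
   assert tags_to_create("1.1.0", ["v1.0.0", "v1.1.0"]) == []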