..
   SPDX-FileCopyrightText: © 2020 Open Networking Foundation <support@opennetworking.org>
   SPDX-License-Identifier: Apache-2.0

Static Jenkins Site Generator (SJSG)
====================================

To-Do
-----

scrape.yaml:

- Add more metadata (links, specs, etc.)

templates:

- Organization of results by metadata (ex: list all products by a
  Vendor, all products by Type, etc)

static files:

- Add images of products

buildcollector.py:

- Regex support in filters
- Use correct functions to build file paths, not just string concat

siterender.py:

- ?

Theory of Operation
-------------------

This tool has two parts:

1. ``buildcollector.py``, which reads a configuration file containing metadata
   and describing Jenkins jobs, then retrieves information about those jobs
   from the Jenkins API and stores that metadata, extracted data, and job
   artifacts in a directory structure.

2. ``siterender.py``, which reads the data from that directory structure and
   builds a static HTML site using Jinja2 templates.
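
At a high level, data flows roughly like this (the directory names are the
defaults mentioned elsewhere in this README; the sketch is illustrative, not
prescriptive)::

    scrape.yaml --> buildcollector.py --> jobs/    (raw Jenkins job output)
                                          builds/  (per-product processed data)

    collected data + templates/ + static/ --> siterender.py --> site/ (static HTML)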

Running the scripts
-------------------

Make sure you have a working Python 3.6 or later instance, and ``virtualenv``
installed.

Run ``make site``. Scrape and render will both be run, and the site will be
generated in the ``site/`` subdirectory.

Run ``make help`` for information on other Makefile targets.
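
For example, a typical session looks like this::

    make site    # scrape Jenkins and render the static site into site/
    make serve   # serve the generated site at http://localhost:8000
    make help    # list the other available Makefile targets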

Changing the look and feel of the Site
--------------------------------------

Modify the templates in ``templates/``. These are `Jinja2
<https://jinja.palletsprojects.com/en/2.11.x/templates/>`_ format files.

Static content is kept in ``static/``, and is copied into ``site/`` as a part
of the site render.

To view the site locally, run ``make serve``, then use a web browser to go to
`http://localhost:8000 <http://localhost:8000>`_.

Scrape Files
------------

For each Jenkins job you want to scrape, you need to create a Scrape File. The
default scrape file is at ``scrape.yaml``. All data is required unless
specified as optional.

This file is in YAML format, and contains information about the job(s) to
scrape. You can put multiple YAML documents within one file, separated with the
``---`` document start line.

These keys are required in the Scrape File:

- ``product_name`` (string): Human readable product name

- ``onf_project`` (string): ONF Project name

- ``jenkins_jobs`` (list): list of groups of jobs

  - ``group`` (string): Name of group of jobs. Used mainly for maintaining
    version separation

  - ``jenkins_url`` (string): Bare URL to the Jenkins instance for this group -
    ex: ``https://jenkins.opencord.org``

  - ``credentials`` (string, optional): Path to a credentials YAML file with
    Jenkins API Tokens, if trying to access private jobs on the server. See
    ``credentials.yaml.example`` for examples.

  - ``jobs`` (list of dicts): List of jobs to be pulled for this group

    - ``name`` (string): Job name in Jenkins

    - ``name_override`` (string, optional): Override for the name shown in the
      output, used to keep job names private.

  - ``extract`` (dictionary): Name keys and JSONPath values, which are
    extracted from the individual Job output JSON files and put in the output.

  - ``filter`` (dictionary, optional): Set of name keys and literal values used
    to filter which builds are included for this product. After the ``extract``
    step is run, the extracted values for each filter key are compared with the
    literal value; if they match, the build is retained.

The JSONPath library used is `jsonpath-ng
<https://github.com/h2non/jsonpath-ng>`_, which seems to be the most regularly
maintained Python implementation.
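
As a rough illustration of how ``extract`` and ``filter`` behave, an extract
expression can be evaluated with jsonpath-ng as shown below. This is a
simplified sketch, not the actual ``buildcollector.py`` code; the build JSON
and the all-keys-must-match filter behaviour shown here are assumptions::

    from jsonpath_ng import parse

    # A trimmed-down stand-in for one build's JSON from the Jenkins API.
    build_json = {"number": 42, "result": "SUCCESS"}

    # An "extract" entry maps an output name to a JSONPath expression.
    extract = {"result": "result", "build_number": "number"}

    extracted = {}
    for name, expr in extract.items():
        matches = [match.value for match in parse(expr).find(build_json)]
        if matches:  # an empty result list is ignored, so defaults survive
            extracted[name] = matches[0]

    # A "filter" entry keeps a build only when the extracted value matches it.
    build_filter = {"result": "SUCCESS"}
    retained = all(extracted.get(key) == value for key, value in build_filter.items())

    print(extracted, retained)  # {'result': 'SUCCESS', 'build_number': 42} True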

Arbitrary variables can also be included in the Scrape File, and will be merged
into the output of every JSON file generated by the collector. This can and
should include "marketing friendly" information used to modify the Jinja2
output - for example, links to product pages, names for static images, etc.

The Scrape File can also be used to set default values that are only replaced
when extracted data isn't available - when a JSONPath query returns no results,
it will contain an empty list, which is ignored in the merge. If an extracted
value is found, that value will replace the value given in the Scrape File.
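
Putting this together, a minimal Scrape File might look like the sketch below.
The product, job name, ``product_link`` variable, and default ``result`` value
are invented for illustration, and the exact nesting of ``extract`` and
``filter`` should be checked against the ``scrape.yaml`` shipped with this
repository::

    # Illustrative sketch only - see scrape.yaml for a real, working example.
    product_name: "Example Product"
    onf_project: "Example Project"

    # Arbitrary extra variables, merged into every generated JSON file:
    product_link: "https://example.org/products/example"
    result: "UNKNOWN"   # default, overwritten when the extract below finds a value

    jenkins_jobs:
      - group: "1.0"
        jenkins_url: "https://jenkins.opencord.org"
        # credentials: "credentials.yaml"   # optional, for private jobs
        jobs:
          - name: "example-product_1.0_test"
            name_override: "nightly-test"   # optional

        extract:
          result: "result"         # JSONPath into each build's JSON
          timestamp: "timestamp"

        filter:
          result: "SUCCESS"        # keep only builds whose extracted result matches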

Design Considerations
---------------------

Metadata is added in the Scrape File so that it does not have to be added to
the Jenkins jobs themselves.

Filesystem storage was used because jobs can produce arbitrary artifacts, and
to reduce the number of API calls, especially when filtering the same list of
builds with multiple products. Raw job output is kept in ``jobs/`` by default.
Processed job output is kept in ``builds/`` on a per-product basis.

Jenkins infrastructure is always changing:

- Jobs are added, renamed, or removed
- Naming schemes may not match up with marketing names
- Data should be retained even if the job is deleted
- Fields will differ between products and projects

Tests
-----

``make test`` will check the YAML files, statically check the Python code, and
run tox tests. Currently there are no unit tests, but everything is in place to
add them.