Static Jenkins Site Generator

- Private Jenkins job scraping w/API key
- Added Gilroy font to match main public website
- Link back to ONF website for products
- Add more products

Change-Id: I3ed2dc1e371c564ee483ab83fd110a88d818bca7
diff --git a/README.rst b/README.rst
new file mode 100644
index 0000000..0f74c72
--- /dev/null
+++ b/README.rst
@@ -0,0 +1,148 @@
+..
+  SPDX-FileCopyrightText: © 2020 Open Networking Foundation <support@opennetworking.org>
+  SPDX-License-Identifier: Apache-2.0
+
+Static Jenkins Site Generator (SJSG)
+====================================
+
+To-Do
+-----
+
+scrape.yaml:
+
+- Add more metadata (links, specs, etc.)
+
+templates:
+
+- Organization of results by metadata (ex: list all products by a
+  Vendor, all products by Type, etc)
+
+static files:
+
+- Add images of products
+
+buildcollector.py:
+
+- Regex support in filters
+- Use correct functions to build file paths, not just string concat
+
+siterender.py:
+
+- ?
+
+Theory of Operation
+-------------------
+
+This tool has two parts:
+
+1. ``buildcollector.py``, which reads a configuration file containing metadata
+   and describing Jenkins jobs, then retrieves information about those jobs
+   from the Jenkins API and stores that metadata, extracted data, and job
+   artifacts in a directory structure (see the layout sketch after this list).
+
+2. ``siterender.py``, which reads the data from the directory structure and
+   builds a static HTML site using Jinja2 templates.
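+
+A typical working tree after a scrape and render looks roughly like this (a
+sketch only - the directory names are the defaults mentioned elsewhere in this
+README, and the per-product layout inside them may differ)::
+
+  scrape.yaml        # Scrape File: job descriptions and metadata
+  credentials.yaml   # optional Jenkins API tokens for private jobs
+  jobs/              # raw Jenkins API output for each scraped job
+  builds/            # processed, per-product build data
+  templates/         # Jinja2 templates used by siterender.py
+  static/            # static assets, copied into site/
+  site/              # generated static HTML output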
+
+Running the scripts
+-------------------
+
+Make sure you have a working Python 3.6 (or later) installation, with
+``virtualenv`` installed.
+
+Run ``make site``. This runs both the scrape and render steps, and the site
+will be generated in the ``site/`` subdirectory.
+
+Run ``make help`` for information on other Makefile targets.
+
+Changing the look and feel of the Site
+--------------------------------------
+
+Modify the templates in ``templates/``.  These are `Jinja2
+<https://jinja.palletsprojects.com/en/2.11.x/templates/>`_ format files.
+
+Static content is kept in ``static/``, and is copied into ``site/`` as a part
+of the site render.
+
+To view the site locally, run ``make serve``, then use a web browser to go to
+`http://localhost:8000 <http://localhost:8000>`_.
+
+Scrape Files
+------------
+
+For each Jenkins job you want to scrape, you need to create a Scrape File. The
+default scrape file is at ``scrape.yaml``.  All keys are required unless
+specified as optional.
+
+This file is in YAML format, and contains information about the job(s) to
+scrape. You can put multiple YAML documents within one file, separated with the
+``---`` document start line.
+
+These keys are required in the Scrape File (see the example after this list):
+
+- ``product_name`` (string): Human readable product name
+
+- ``onf_project`` (string): ONF Project name
+
+- ``jenkins_jobs`` (list): list of groups of jobs
+
+  - ``group`` (string): Name of group of jobs. Used mainly for maintaining
+    version separation
+
+  - ``jenkins_url`` (string): Bare URL to the Jenkins instance for this group -
+    ex: ``https://jenkins.opencord.org``
+
+  - ``credentials`` (string, optional): Path to a credentials YAML file with
+    Jenkins API Tokens, if trying to access private jobs on the server.  See
+    ``credentials.yaml.example`` for examples.
+
+  - ``jobs`` (list of dicts): List of jobs to be pulled for this group
+
+    - ``name`` (string): Job name in Jenkins
+
+    - ``name_override`` (string, optional): Override for name shown in the
+      output, used to keep job names private.
+
+    - ``extract`` (dictionary): Keys are output names and values are JSONPath
+      expressions, which are evaluated against the individual job output JSON
+      files and put in the output.
+
+    - ``filter`` (dictionary, optional): Set of name keys and literal values used to filter
+      which builds are included for this product. After the ``extract`` step is
+      run, for each key in ``filter`` the extracted value is compared with the
+      literal value, and the build is retained only if they match.
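+
+Putting the keys above together, a minimal Scrape File document might look like
+this sketch (the product, job name, and JSONPath expressions are placeholder
+values rather than a real job)::
+
+  ---
+  product_name: "Example Product"
+  onf_project: "ExampleProject"
+  jenkins_jobs:
+    - group: "master"
+      jenkins_url: "https://jenkins.opencord.org"
+      jobs:
+        - name: "example_product_build_master"
+          name_override: "Example Product Build"
+          extract:
+            build_result: "result"
+            completed_at: "timestamp"
+          filter:
+            build_result: "SUCCESS"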
+
+The JSONPath library used is `jsonpath-ng
+<https://github.com/h2non/jsonpath-ng>`_, which seems to be the most regularly
+maintained Python implementation.
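+
+For example, an ``extract`` entry can use a nested JSONPath expression to reach
+into a build's action data (illustrative only - the field layout assumed here
+may not exist for every job)::
+
+  extract:
+    failed_tests: "actions[*].failCount"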
+
+Arbitrary variables can also be included in the Scrape File, and will be merged
+into the output of every JSON file generated by the collector.  This can and
+should include "marketing friendly" information used to modify the Jinja2
+output - for example, links to product pages, names for static images, etc.
+
+The Scrape File can also be used to set default values that are only replaced
+when extracted data isn't available - when a JSONPath query returns no results,
+it will contain an empty list, which is ignored in the merge.  If an extracted
+value is found, that value will replace the value given in the Scrape File.
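+
+For example, a Scrape File document might carry both kinds of values alongside
+the required keys (the key names and link below are illustrative, not required
+by the collector)::
+
+  # merged as-is into every build's output JSON
+  product_link: "https://opennetworking.org/"
+  product_image: "example.png"
+
+  # default value; replaced whenever an extract entry named "version"
+  # returns a result for a build
+  version: "unknown"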
+
+Design Considerations
+---------------------
+
+Metadata is added in the Scrape File so that it does not have to be embedded in
+the Jenkins jobs themselves.
+
+Filesystem storage is used because jobs can produce arbitrary artifacts, and to
+reduce the number of API calls, especially when the same list of builds is
+filtered for multiple products.  Raw job output is kept in ``jobs/`` by
+default.  Processed job output is kept in ``builds/`` on a per-product basis.
+
+Jenkins infrastructure is always changing:
+
+- Jobs are added, renamed, or removed
+- Naming schemes may not match up with marketing names
+- Data should be retained even if the job is deleted
+- Fields will differ between products and projects
+
+Tests
+-----
+
+``make test`` will check the YAML files, run static checks on the Python code,
+and run the tox tests.  Currently there are no unit tests, but everything is in
+place to add them.