[CORD-2446]
Scenario creation documentation

[CORD-2710]
`inventory_groups` documentation

Change-Id: I6776c604dd75bf97b1a5882c981699b16e9c8ca2
diff --git a/docs/SUMMARY.md b/docs/SUMMARY.md
index 9a60f8c..53aee2c 100644
--- a/docs/SUMMARY.md
+++ b/docs/SUMMARY.md
@@ -11,7 +11,8 @@
         * [Connecting to Upstream Networks](vrouter.md)
         * [Container Images](appendix_images.md)
         * [vSG Configuration](appendix_vsg.md)
-    * [Troubleshooting and Build Internals](troubleshooting.md)
+    * [Troubleshooting](troubleshooting.md)
+    * [Build Process Internals](build_internals.md)
     * [Building Docker Images](build_images.md)
     * [Build System Config Glossary](build_glossary.md)
     * [Installing CORD in China](cord_in_china.md)
diff --git a/docs/build_images.md b/docs/build_images.md
index 34934aa..993b069 100644
--- a/docs/build_images.md
+++ b/docs/build_images.md
@@ -7,7 +7,8 @@
 }}/scripts/imagebuilder.py), to be developed to perform image rebuilds in a
 consistent and efficient manner.
 
-Imagebuilder is currently used for XOS and ONOS images, but not MaaS images.
+Imagebuilder is currently used for XOS, ONOS, and the `mavenrepo` (source of
+ONOS Apps used in CORD) images, but not MaaS images.
 
 While imagebuilder will pull down required images from DockerHub and build/tag
 images, it does not push those images or delete obsolete ones.  These tasks are
@@ -23,6 +24,51 @@
 If you do need to rebuild images, there is a `make clean-images` target that
 will force imagebuilder to be run again and images to be moved into place.
 
+## Adding a new Docker image to CORD
+
+There are several cases where an image may need to be added to CORD.
+
+### Adding an image developed outside of CORD
+
+There are cases where a 3rd party image developed outside of CORD may be
+needed. This is the case with ONOS, Redis, and a few other pieces of software
+that are already containerized, which we deploy as-is (or with minor
+modifications).
+
+To do this, add the full name of the image, including a version tag, to the
+`pull_only_images` list in the `docker_images.yml` file, and to the
+`docker_image_whitelist` list in the `scenarios/<scenario name>/config.yml`
+file.
+
+These images will be retagged with a `candidate` tag after being pulled.
+
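+For example, entries for a hypothetical 3rd party image might look like this
+(the image name and tag are illustrative only):
+
+```yaml
+# build/docker_images.yml (excerpt)
+pull_only_images:
+  - "redis:3.2"
+
+# scenarios/<scenario name>/config.yml (excerpt)
+docker_image_whitelist:
+  - "redis:3.2"
+```
+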
+### Adding a synchronizer image
+
+Adding a synchronizer image is usually as simple as adding it to the
+`buildable_images` list in the
+[docker_images.yml](https://github.com/opencord/cord/blob/{{ book.branch
+}}/docker_images.yml) file.  The name of the synchronizer container is added to
+the `docker_image_whitelist` dynamically, based on whether it is in the
+`xos_services` list of the profile, so it doesn't need to be added manually to
+`docker_image_whitelist`.
+
+If you are adding a new service that is not in the repo manifest yet, you may
+have to add your service's directory to the `.repo/manifest.xml` file and then
+list it in `build/docker_images.yml`, so that the synchronizer image can be
+built locally.
+
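+For example, a hypothetical synchronizer entry might look like the sketch below
+(the service name is made up, and the key names are an assumption based on
+existing entries - check `docker_images.yml` for the authoritative format):
+
+```yaml
+# build/docker_images.yml (hypothetical excerpt)
+buildable_images:
+  - name: xosproject/myservice-synchronizer   # image name ("myservice" is illustrative)
+    repo: myservice                           # repository checked out by the manifest
+    path: xos/synchronizer                    # build context within that repository
+    dockerfile: Dockerfile.synchronizer       # Dockerfile within the build context
+```
+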
+### Adding other CORD images
+
+If you want imagebuilder to build an image from a Dockerfile somewhere in the
+CORD source tree, you need to add it to the `buildable_images` list in the
+`docker_images.yml` file (see that file for the specific format), then make
+sure the image name is listed in the `docker_image_whitelist` list in the
+`scenarios/<scenario name>/config.yml` file.
+
+Note that you don't need to add external parent images to the
+`pull_only_images` in this manner - those are determined by the `FROM` line
+in the image's `Dockerfile`.
+
 ## Debugging imagebuilder
 
 If you get a different error or  think that imagebuilder isn't working
@@ -30,9 +76,11 @@
 the output carefully, and then post about the issue on the mailing list or
 Slack.
 
-If an image is not found on Dockerhub, you may see a 404 error like the
-following in the logs. If this happens, imagebuilder will attempt to build the
-image from scratch rather than pulling it:
+If an image is not found on Dockerhub (for example, if you have local
+modifications to the `Dockerfile` or context, or have a patchset checked
+out), you may see a 404 error like the following in the logs. If this happens,
+imagebuilder will attempt to build the image from scratch rather than pulling
+it:
 
 ```python
 NotFound: 404 Client Error: Not Found ("{"message":"manifest for xosproject/xos-gui-extension-builder:<hash> not found"}")
@@ -58,7 +106,8 @@
 
 4. Determines which images need to be rebuilt based on:
 
-    * Whether the image exists and is has current tags added to it.
+    * Whether the image exists and has current tags on it. If an image is
+      tagged with the `candidate` tag but shouldn't be, the tag is removed.
     * If the Docker build context is *dirty* or differs (is on a different
       branch) from the git tag specified in the repo manifest
     * If the image's parent (or grandparent, etc.) needs to be rebuilt
@@ -169,49 +218,6 @@
 
 Labels on a built image can be seen by running `docker inspect <image name or id>`
 
-## Adding a new Docker image to CORD
-
-There are a few cases when an image would be needed to be added to CORD during
-the development process.
-
-### Adding an image developed outside of CORD
-
-There are cases where a 3rd party image developed outside of CORD may be
-needed. This is the case with ONOS, Redis, and a few other pieces of software
-that are already containerized, and we deploy as-is (or with minor
-modifications).
-
-To do this, add the full name of the image, including a version tag, to the
-`pull_only_images` list in the `docker_images.yml` file, and to
-`docker_image_whitelist` list in the `scenarios/<scenario name>/config.yml`
-file.
-
-These images will be retagged with a `candidate` tag after being pulled.
-
-### Adding a synchronizer image
-
-Adding a synchronizer image is usually as simple as adding it to the
-`buildable_images` list in the `docker_images.yml` file (see that file for the
-), then making sure the image name is listed in the `docker_image_whitelist`
-list in the `scenarios/<scenario name>/config.yml` file.
-
-If you are adding a new service that is not in the repo manifest yet, you may
-have to your service's directory to the `.repo/manifest.xml` file and then list
-it in `build/docker_images.yml`, so it will then build the  synchronizer image
-locally.
-
-### Adding other CORD images
-
-If you want imagebuilder to build an image from a Dockerfile somewhere in the
-CORD source tree, you need to add it to the `buildable_images` list in the
-`docker_images.yml` file (see that file for the specific format), then making
-sure the image name is listed in the `docker_image_whitelist` list in the
-`scenarios/<scenario name>/config.yml` file.
-
-Note that you don't need to add external parent images to the
-`pull_only_images` in this manner - those are determined by the `FROM` line in
-`Dockerfile`
-
 ## Automating image builds
 
 There is a [Jenkinsfile.imagebuilder](https://github.com/opencord/cord/blob/{{
diff --git a/docs/build_internals.md b/docs/build_internals.md
new file mode 100644
index 0000000..427b54e
--- /dev/null
+++ b/docs/build_internals.md
@@ -0,0 +1,277 @@
+# Build Process Internals
+
+## Config Generation
+
+All configuration in CORD is driven off of YAML files which contain variables
+used by Ansible, make, and Vagrant to build development and production
+environments. A [glossary of build system variables](build_glossary.md) is
+available which describes these variables and where they are used.
+
+When a command to generate config such as `make PODCONFIG=rcord-mock.yml
+config` is run, the following steps happen:
+
+1. The POD Config file is read, in this case
+   [orchestration/profiles/rcord/podconfig/rcord-mock.yml](https://github.com/opencord/rcord/blob/{{
+     book.branch }}/podconfig/rcord-mock.yml), which specifies the scenario and
+     profile.  In virtual cases, frequently no further information is required,
+     but in a physical POD, at least the [inventory](install.md#inventory)
+     configuration must be specified.
+
+2. The Scenario config file is read, in this case
+   [build/scenarios/mock/config.yml](https://github.com/opencord/cord/blob/{{
+   book.branch }}/scenarios/mock/config.yml). The scenario determines which sets
+   of components of CORD are installed, by controlling which make targets are
+   run.  It also contains a default inventory, which is used for development
+   and testing of the scenario with virtual machines.
+
+3. The Profile config file is read, in this case
+   [orchestration/profiles/rcord/rcord.yml](https://github.com/opencord/rcord/blob/{{
+   book.branch }}/rcord.yml).  The profile contains the use-case
+   _Service Graph_ and additional configuration specific to that
+   profile.
+
+4. The contents of these three files are combined into a master config
+   variable. The Scenario overwrites any config set in the Profile, and the POD
+   Config overwrites any config set in the Scenario or Profile.
+
+5. The entire master config is written to `genconfig/config.yml`.
+
+6. The `inventory_groups` variable is used to generate an ansible inventory
+   file, which is written to `genconfig/inventory.ini`.
+
+7. Various variables are used to generate the makefile config file
+   `genconfig/config.mk`. This sets the targets invoked by `make build`, based
+   on the `build_targets` list in the scenario.
+
+> NOTE: The combination of the POD config, Scenario config, and Profile config
+> in step #4 is not a union or merge. If you define an item in the root of the
+> POD Config that is complex and has subkeys, it will overwrite every subkey
+> defined in the Scenario or Profile.
+>
+> This is most noticeable when setting the `inventory_groups` or
+> `docker_image_whitelist` variable. If you are creating a change in a POD
+> Config, you must recreate the entire structure or list.
+>
+> This may seem inconvenient, but other list or tree merging strategies
+> available in Ansible lack a way to remove items from a tree structure, which
+> is an incomplete solution.
+
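+For example (the image names are purely illustrative), if both the Scenario and
+the POD Config set `docker_image_whitelist`, only the POD Config's list is used:
+
+```yaml
+# scenarios/<scenario name>/config.yml (illustrative excerpt)
+docker_image_whitelist:
+  - "redis:3.2"
+  - "nginx:1.13"
+  - "someproject/some-image:1.0"
+---
+# POD Config (illustrative excerpt) - this list completely replaces the one
+# above, so every image that is still needed must be repeated here
+docker_image_whitelist:
+  - "redis:3.2"
+  - "nginx:1.13"
+```
+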
+## Build Process Steps
+
+The build process is driven by running `make`. The two most common makefile
+targets are `config` and `build`, but there are also utility targets that are
+handy to use during development.
+
+### `config` make target
+
+`config` requires a `PODCONFIG` argument, which is the name of a file in
+`orchestration/profiles/<use-case>/podconfig/`. `PODCONFIG` defaults to
+`invalid`, so if you get errors claiming an invalid config, you probably didn't
+run the `make config` step, or set `PODCONFIG` to a filename or `use-case` path
+that doesn't exist.  Additionally, a `PODCONFIG_PATH` variable can be used, which
+takes an arbitrary path, for use when the podconfig is not within the use-case
+directory. `PODCONFIG_PATH` overrides `PODCONFIG` if both are set.
+
+#### Examples: `make config`
+
+`make PODCONFIG=rcord-local.yml config`
+
+`make PODCONFIG=opencloud-mock.yml config`
+
+### `build` make target
+
+`make build` performs the build process, and usually takes no arguments. The
+targets run are specified in the scenario in the `build_targets` list.
+
+Most of the build targets in the Makefile don't leave artifacts behind, so we
+write placeholder files (aka "sentinels" or "empty targets") in the
+`build/milestones` directory.
+
+See [adding targets to the Makefile](#adding-targets-to-the-makefile) for
+Makefile development information.
+
+### Utility make targets
+
+There are various utility targets:
+
+- `printconfig`: Prints the configured scenario and profile.
+
+- `xos-teardown`: Stop and remove a running set of XOS docker containers,
+  removing the database.
+
+- `xos-update-images`: Rebuild the images used by XOS, without tearing down
+  running XOS containers.
+
+- `collect-diag`: Collect detailed diagnostic information from the deployed head
+  and compute nodes, into a `diag-<datestamp>` directory on the head node.
+
+- `compute-node-refresh`: Reload compute nodes brought up by MaaS into XOS,
+  useful in the cord virtual and physical scenarios
+
+- `pod-test`: Run the `platform-install/pod-test-playbook.yml`, testing the
+  virtual/physical cord scenario.
+
+- `vagrant-destroy`: Destroy Vagrant containers (for mock/virtual/physical
+  installs)
+
+- `clean-images`: Have containers rebuild during the next build cycle. Does
+  not actually delete any images, just causes imagebuilder to be run again.
+
+- `clean-genconfig`: Deletes the `make config` generated config files in
+  `genconfig`, useful when switching between POD configs
+
+- `clean-onos`: Stops the ONOS containers on the head node
+
+- `clean-openstack`: Cleans up and deletes all instances and networks created
+  in OpenStack.
+
+- `clean-profile`: Deletes the `cord_profile` directory
+
+- `clean-all`: Runs `vagrant-destroy`, `clean-genconfig`, and `clean-profile`
+  targets, removes all milestones. Good for resetting a dev environment back
+  to an unconfigured state.
+
+- `clean-local`:  `clean-all` but for the `local` scenario - Runs
+  `clean-genconfig` and `clean-profile` targets, removes local milestones.
+
+The `clean-*` utility targets should modify the contents of the milestones
+directory appropriately to cause the steps they clean up after to be rerun on
+the next `make build` cycle.
+
+### Development workflow
+
+#### Updating XOS Container Images on a running POD
+
+To rebuild and update XOS container images, run:
+
+```shell
+make xos-update-images
+make -j4 build
+```
+
+This will build new copies of all the images; when the build runs, the newly
+built containers will be restarted.
+
+If you additionally want to stop all the XOS containers, clear the database,
+and reload the profile, use `xos-teardown`:
+
+```shell
+make xos-teardown
+make -j4 build
+```
+
+This will tear down the XOS container set, tell the build system to rebuild
+images, then perform a build and reload the profile.
+
+## Creating a Scenario
+
+Creating a new scenario requires three items:
+
+1. A scenario config file, in `build/scenarios/<scenario-name>/config.yml`.
+
+2. Any makefile targets necessary for new features.
+
+3. The virtual machine configuration (stored in a `Vagrantfile`) for testing
+   the scenario.
+
+By default, the inventory in a Scenario is specific to the machines created in
+the `Vagrantfile`; if it is needed elsewhere (for example, on a physical POD),
+it must be overridden in the POD config.
+
+### Creating the scenario config.yml
+
+A scenario configuration YAML file must define the following items (a combined
+sketch appears after these lists):
+
+1. `build_targets`, which is a list of milestones to complete. In most cases,
+   there is only one item in this list, and all other milestone targets are
+   invoked via dependencies within the makefile.
+
+2. `docker_image_whitelist`, which is the list of [images used in the
+   scenario](build_images.md#adding-a-new-docker-image-to-cord).
+
+3. `inventory_groups`, which specifies the [inventory](install.md#inventory)
+   used for testing the scenario in a virtual installation.
+
+Most scenarios also define the following:
+
+- `vagrant_vms`, a list of the Vagrant VM's to bring up for testing the
+  scenario in a virtual environment.
+
+- Various Vagrant configuration variables, used to [configure VM's in the
+  Vagrantfile](#creating-a-scenario-vagrantfile).
+
+- `headnode`, and possibly `buildnode`, which specify the hosts that are logged
+  into with SSH when running various build steps that have to be executed
+  locally on those nodes.  These names should match the names or IP addresses
+  given in `inventory_groups`, but in an SSH-compatible format. Some examples
+  of this: `<host>`, `<user>@<host>`, `<user>@<ipaddr>`.
+
+- `physical_node_list`, which is used for DNS, DHCP, and network configuration
+  of compute nodes.  It specifies the last octet of the IP addresses used for
+  each node on the management and fabric networks, and the management network
+  DNS names assigned to the node.  This should match or be a superset of every
+  system listed in `inventory_groups`.
+
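+Putting these together, a minimal sketch of a scenario `config.yml` might look
+like the following (the scenario name, milestone targets, images, and host
+names are all illustrative):
+
+```yaml
+# build/scenarios/myscenario/config.yml (hypothetical sketch)
+build_targets:
+  - onboard-profile          # replace with the milestone(s) this scenario should build
+
+docker_image_whitelist:
+  - "redis:3.2"              # every image the scenario is allowed to use
+
+vagrant_vms:
+  - head1                    # VM(s) defined in the scenario's Vagrantfile
+
+headnode: vagrant@head1      # SSH-compatible name matching the inventory
+
+inventory_groups:
+  config:
+    localhost:
+      ansible_connection: local
+  build:
+    head1:
+  head:
+    head1:
+  compute:
+    head1:
+```
+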
+### Adding targets to the Makefile
+
+If you would like to add functionality to the makefile for inclusion in a
+scenario, the process is:
+
+1. Create an ansible playbook or script for your task.  In most cases this is
+   in the `platform-install` repository if the task applies to the entire
+   platform, or in the `maas` repository if it's specific to the MaaS hardware
+   deployment component.
+
+2. Add a target to the makefile for the task. Many paths to source code
+   locations, names of binaries, and similar are variables in the Makefile, so
+   check the [top of the Makefile](https://github.com/opencord/cord/blob/{{
+   book.branch }}/Makefile) for these.
+
+   The general format of a make target is:
+
+   ```make
+   $(M)/my-target: | $(M)/prereq-target
+     $(ANSIBLE_PB) $(PI)/my-task-playbook.yml $(LOGCMD)
+     touch $@
+   ```
+
+   The `touch $@` creates a file in the `milestones` directory (the make
+   variable for the path to `milestones` is `$(M)`) after the target is run.
+
+3. Create dependencies within the makefile to depend upon your task.  Be aware
+   that adding dependencies could break other scenarios, so do this with
+   care.
+
+To handle the case where a target may be used only in a subset of scenarios,
+`*_prereqs` lists of milestones are added to the scenario, and when `make
+config` is run, these are added to the `genconfig/config.mk` file, in a
+capitalized version.  For example, the `start_xos_prereqs` list adds milestones
+to the `START_XOS_PREREQS` variable (the `$(M)` milestones directory path is
+added to every item in the list).
+
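+For example, a scenario that needs an extra step to complete before XOS is
+started might contain something like this sketch (the milestone name is
+illustrative):
+
+```yaml
+# scenario config.yml (hypothetical excerpt)
+start_xos_prereqs:
+  - my-extra-milestone       # milestone that must complete before XOS is started
+```
+
+When `make config` is run, this would appear in `genconfig/config.mk` roughly
+as `START_XOS_PREREQS = $(M)/my-extra-milestone`.
+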
+If you need to add a new `*_prereqs` variable, see the
+[config.mk.j2](https://github.com/opencord/cord/blob/{{ book.branch
+}}/ansible/roles/genconfig/templates/config.mk.j2) and
+[Makefile](https://github.com/opencord/cord/blob/{{ book.branch }}/Makefile).
+
+### Creating a scenario Vagrantfile
+
+A Vagrantfile should be created for the scenario, for testing and development
+work.
+
+Vagrantfiles can be thought of as a Ruby Domain Specific Language (DSL), and as
+such can take advantage of other Ruby language features.  This is used in CORD
+to read the `genconfig/config.yml` file generated during the `make config` step
+into the `settings` variable, which allows the VMs to have additional
+configuration at runtime - for example, the amount of memory used in a VM is
+frequently set this way.
+
+Some example configuration variables found in scenarios that are used in
+Vagrantfiles:
+
+- `vagrant_box` - The name of the Vagrant Box image used for the VM's.  This is
+  currently either `ubuntu/trusty64` or `bento/ubuntu-16.04` depending on the
+  base OS used.
+
+- `*_vm_mem` - The amount of memory allocated for the VM
+
+- `*_vm_cpu` - The number of CPU's allocated for the VM
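+
+For example, the Vagrant-related portion of a scenario `config.yml` might look
+like this sketch (the `head` prefix and the values are illustrative):
+
+```yaml
+# scenario config.yml (hypothetical excerpt)
+vagrant_box: "bento/ubuntu-16.04"
+
+head_vm_mem: 2048      # read by the Vagrantfile via the `settings` variable
+head_vm_cpu: 4
+```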
diff --git a/docs/install.md b/docs/install.md
index bdfd28f..a360254 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -15,8 +15,11 @@
 If you are interested in developing a new CORD service, working on the XOS GUI,
 or performing other development tasks, see [Developing for CORD](develop.md).
 
-If you've run into trouble or want to know more about the CORD build process,
-please see [Troubleshooting and Build Internals](troubleshooting.md).
+If you want to know more about the CORD build process, please see [Build Process
+Internals](build_internals.md) and [Building Docker Images](build_images.md).
+
+If you're having trouble installing CORD, please see
+[Troubleshooting](troubleshooting.md).
 
 ## Required Tools
 
@@ -33,7 +36,7 @@
 - [Docker](https://www.docker.com/community-edition), for *local* build
   scenarios, *tested with Community Edition version 17.06*
 - [Vagrant](https://www.vagrantup.com/downloads.html), for all other scenarios
-  *tested with version 1.9.3, requires specific plugins and modules if using
+  *tested with version 2.0.1, requires specific plugins and modules if using
   with libvirt, see `cord-bootstrap.sh` for more details *
 
 You can manually install these on your development system - see [Getting the
@@ -60,6 +63,7 @@
   -p <project:change/revision> Download a patch from gerrit. Can be repeated.
   -t <target>                  Run 'make -j4 <target>' in cord/build/. Can be repeated.
   -v                           Install Vagrant for mock/virtual/physical scenarios.
+  -x                           Use Xenial (16.04) in Vagrant VM's.
 ```
 
 Using the `-v` option is required to install Vagrant for running a [Virtual Pod
@@ -118,67 +122,209 @@
 ### POD Config
 
 Each CORD *use-case* (e.g., R-CORD, M-CORD, E-CORD) has its own repository
-containing configuration files for that type of POD.  All of these
-repositories appear in the source tree under `orchestration/profiles/`.
-For example, R-CORD's repository is
-[orchestration/profiles/rcord](https://github.com/opencord/rcord/tree/{{ book.branch }}).
+containing configuration files for that type of POD.  All of these repositories
+appear in the source tree under `orchestration/profiles/`.  For example,
+R-CORD's repository is
+[orchestration/profiles/rcord](https://github.com/opencord/rcord/tree/{{
+  book.branch }}).
 
-The top level configuration for a build is the *POD config* file, a
-YAML file stored in each use-case repository's `podconfig` subdirectory.
-Each Pod config file contains a list of variables that control how
-the build proceeds, and can override the configuration of the rest of the
-build.  A minimal POD config file must define two variables:
+The top level configuration for a build is the *POD config* file, a YAML file
+stored in each use-case repository's `podconfig` subdirectory.  Each Pod config
+file contains a list of variables that control how the build proceeds, and can
+override the configuration of the rest of the build.  A minimal POD config file
+must define two variables:
 
-`cord_scenario` - the name of the *scenario* to use, which is defined in a
-directory under [build/scenarios](https://github.com/opencord/cord/tree/{{ book.branch }}/scenarios).
+1. `cord_profile` - the name of a [profile](#profiles) to use, defined as a
+   YAML file at the top level of the use-case repository - ex:
+   [mcord-ng40.yml](https://github.com/opencord/mcord/blob/{{ book.branch
+   }}/mcord-ng40.yml).
 
-`cord_profile` - the name of a *profile* to use, defined as a YAML file at
-the top level of the use-case repository - ex:
-[mcord-ng40.yml](https://github.com/opencord/mcord/blob/{{ book.branch }}/mcord-ng40.yml).
+2. `cord_scenario` - the name of the [scenario](#scenarios) to use, which is
+   defined in a directory under
+   [build/scenarios](https://github.com/opencord/cord/tree/{{ book.branch
+   }}/scenarios).
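+
+For example, a minimal POD config might contain only these two lines (a sketch;
+substitute the profile and scenario you need):
+
+```yaml
+# rcord-virtual.yml (sketch)
+cord_profile: rcord
+cord_scenario: virtual
+```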
 
-The naming convention for POD configs stored in the use case
-repository is `<profile>-<scenario>.yml` - ex:
-[mcord-ng40-virtual.yml](https://github.com/opencord/mcord/blob/{{ book.branch }}/podconfig/mcord-ng40-virtual.yml) builds the `virtual` scenario using the
-`mcord-ng40` profile.  All such POD configs can be specified during a
-build using the `PODCONFIG` variable:
+The naming convention for POD configs stored in the use case repository is
+`<profile>-<scenario>.yml` - ex:
+[mcord-ng40-virtual.yml](https://github.com/opencord/mcord/blob/{{ book.branch
+}}/podconfig/mcord-ng40-virtual.yml) builds the `virtual` scenario using the
+`mcord-ng40` profile.  All such POD configs can be specified during a build
+using the `PODCONFIG` variable:
 
 ```shell
 make PODCONFIG=rcord-virtual.yml config
 ```
 
-POD configs with arbitrary names can be specified using
+POD configs with arbitrary paths can be specified using
 `PODCONFIG_PATH`.  This will override the `PODCONFIG` variable.
 
 ```shell
 make PODCONFIG_PATH=./podconfig/my-pod-config.yml config
 ```
 
+Additionally, if you are specifying a physical installation, you need to:
+
+- Specify the [inventory](#inventory) in your POD Config in order to tell
+  Ansible and the build system which machines you wish to install a CORD POD
+  on, and which ansible inventory roles map onto them.
+
+- Set `headnode` (and optionally `buildnode`) variables to match the inventory.
+
+- Set the `vagrant_vms` variable to bring up only the VM's you need, which may
+  be none, in which case you should specify the empty list: `vagrant_vms: []`.
+
+- Add items to the [physical_node_list](build_glossary.md#physicalnodelist) for
+  all nodes listed in the inventory.
+
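+A hedged sketch of what these additions might look like in a physical POD
+config (hostname, user, and IP are illustrative, and match the inventory
+example in the [Inventory](#inventory) section below):
+
+```yaml
+# physical POD config (hypothetical excerpt)
+vagrant_vms: []                    # no development VMs on a physical install
+
+headnode: cordadmin@10.1.1.40
+buildnode: cordadmin@10.1.1.40
+
+# inventory_groups must also be specified in full - see the Inventory section
+# below for a complete example
+
+# physical_node_list: one entry per node in the inventory; see the
+# physical-example.yml file referenced below for the exact format
+```
+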
+The ONF internal
+[pod-configs](https://gerrit.opencord.org/gitweb?p=pod-configs.git;a=tree)
+repository gives many examples of physical POD Configs that are used for
+testing CORD.
+
 ### Profiles
 
 The set of services that XOS on-boards into CORD -- the  _Service Graph_, and
-other per-profile configuration for a CORD deployment.  These are located in
-[build/platform-install/profile_manifests](https://github.com/opencord/platform-install/tree/{{
-  book.branch }}/profile_manifests).
+other per-profile configuration for a CORD deployment.  These are checked out
+by repo into the `orchestration/profiles` directory.  The current set of
+profiles is:
+
+- [R-CORD](https://github.com/opencord/rcord)
+- [M-CORD](https://github.com/opencord/mcord)
+- [E-CORD](https://github.com/opencord/ecord)
 
 ### Scenarios
 
-Scenarios define the physical or virtual environment that CORD will be
-installed into, a default mapping of ansible groups to nodes, the set of Docker
-images that can be built, and software and platform features are installed onto
-those nodes. Scenarios are subdirectories of the
+To handle the variety of deployment types CORD supports, including physical
+deployments as well as development and testing in virtual environments, the
+scenario mechanism was developed to allow for a common build system.
+
+A scenario determines:
+
+- Which sets of components of CORD are installed during a build, by controlling
+  which make targets are run, and the dependencies between them.
+
+- A virtual-specific inventory and machine definitions used with
+  [Vagrant](https://vagrantup.com), which is used for development and testing
+  of the scenario with virtual machines.
+
+- A [whitelist of docker
+  images](build_images.md#adding-a-new-docker-image-to-cord) used when building
+  this scenario.
+
+Scenarios are subdirectories of the
 [build/scenarios](https://github.com/opencord/cord/tree/{{ book.branch
-}}/scenarios) directory, and consist of a `config.yaml` file and possibly VM's
-specified in a `Vagrantfile`.
+}}/scenarios) directory, and consist of a `config.yml` configuration file and
+VM's specified in a `Vagrantfile`.
 
 The current set of scenarios:
 
 - `local`: Minimal set of containers running locally on the development host
+
 - `mock`: Creates a single Vagrant VM with containers and DNS set up, without
   synchronizers
-- `single`: Creates a single Vagrant VM with containers and DNS set up, with
-  synchronizers and optional ElasticStack/ONOS
-- `cord`: Physical or virtual multi-node CORD pod, with MaaS and OpenStack
-- `opencloud`: Physical or virtual multi-node OpenCloud pod, with OpenStack
 
-The scenario is specified in the POD config's `cord_scenario` line.
+- `single`: Creates a single Vagrant VM with containers and DNS set up, with
+  synchronizers and optionally ElasticStack/ONOS
+
+- `cord`: Physical or virtual multi-node CORD pod, with MaaS and OpenStack
+
+- `controlpod`: Physical or virtual single-node CORD control-plane only POD, with
+  XOS and ONOS, suitable for some use cases such as `ecord-global`.
+
+- `controlkube`: "Laptop sized" Kubernetes-enabled multi-node pod
+
+- `preppedpod`: Physical or virtual multi-node CORD pod on pre-prepared (OS
+  installed) nodes, with ONOS and OpenStack, but lacking MaaS, for use in
+  environments where there is an existing provisioning system for compute
+  hardware.
+
+- `preppedkube`: "Deployment/Server sized" Kubernetes-enabled multi-node pod
+
+How these scenarios are used for development is covered in the [Example
+Workflows](workflows.md) section.
+
+The primary mechanism that scenarios use to modularize the build process is
+enabling Makefile targets and creating prerequisite targets, which in turn
+control whether ansible playbooks or commands are run.  `make` is dependency
+based, and the targets to create when `make build` is run are specified in the
+`build_targets` list in the scenario.
+
+Creating and extending scenarios is covered in the [Build Process
+Internals](build_internals.md#creating-a-scenario) section.
+
+### Inventory
+
+A node inventory is defined in the `inventory_groups` variable, and must be set
+in the scenario or overridden in the POD config.
+
+The default inventory defined in a scenario has the settings for a virtual
+install which is used in testing and development. It corresponds to the
+machines in the Vagrantfile for that scenario.
+
+The CORD build system defines 4 ansible inventory groups (`config`, `build`,
+`head`, `compute`), 3 of which (all but `compute`) need to have at least one
+physical or virtual node assigned.  The same node can be assigned to multiple
+groups, which is one way that build modularity is achieved.
+
+- `config`: This is where the build steps are run (where `make` and `ansible`
+  execute), and where the configuration of a POD is generated. It is generally
+  set to `localhost`.
+- `build`: Where the build steps are run, for building the Docker containers
+  for CORD.  The node in this group is frequently the same as the `head` node.
+- `head`: The traditional "head node", which hosts services for bootstrapping
+  the POD.
+- `compute`: Nodes for running VNFs.  This may be populated (as in the
+  `preppedpod` and related scenarios) or not (in the `cord` scenario, which
+  bootstraps these compute nodes with MaaS).
+
+For all physical installations of CORD, the POD config must override the
+scenario inventory. The [default physical
+example](https://github.com/opencord/cord/blob/{{ book.branch
+}}/podconfig/physical-example.yml) gives examples of how to do this, and
+additional POD config examples can be found in the CORD QA system's [pod-config
+repo](https://gerrit.opencord.org/gitweb?p=pod-configs.git;a=tree).
+
+The inventory is specified using the `inventory_groups` dictionary in the
+scenario `config.yml` or the podconfig YAML file. The format is similar to the
+[Ansible YAML
+inventory](http://docs.ansible.com/ansible/latest/intro_inventory.html#hosts-and-groups)
+except without the `hosts` key below the group names. Variables can optionally
+be specified on a per-host basis by adding sub-keys to the host name. If the
+host is listed multiple times, the same variables must be specified each time.
+
+Here is an example inventory for a podconfig using the `preppedpod` scenario
+with a physical pod - note that the build node is the same as the head node
+(they have the same IP), and the variables are replicated there. The config
+node is the default of `localhost`:
+
+```yaml
+inventory_groups:
+
+  config:
+    localhost:
+      ansible_connection: local
+
+  build:
+    head1:
+      ansible_host: 10.1.1.40
+      ansible_user: cordadmin
+      ansible_ssh_pass: cordpass
+
+  head:
+    head1:
+      ansible_host: 10.1.1.40
+      ansible_user: cordadmin
+      ansible_ssh_pass: cordpass
+
+  compute:
+    compute1:
+      ansible_host: 10.1.1.41
+      ansible_user: cordcompute
+      ansible_ssh_pass: cordpass
+
+    compute2:
+      ansible_host: 10.1.1.42
+      ansible_user: cordcompute
+      ansible_ssh_pass: cordpass
+```
 
diff --git a/docs/install_physical.md b/docs/install_physical.md
index 34ad577..80a4afd 100644
--- a/docs/install_physical.md
+++ b/docs/install_physical.md
@@ -316,12 +316,12 @@
 the external and the internal networks, what users the system should run during
 the automated installation, and much more.
 
-POD configuration files are YAML files with extension .yml. You can either create a new
-file with your favorite editor or copy-and-edit an existing file. The
-[physical-example.yml](https://github.com/opencord/cord/blob/{{ book.branch }}/podconfig/physical-example.yml)
-configuration file is there for this purpose, and the most commonly set
-parameters are described. Optional lines have been commented out, but can be
-used as needed.
+POD configuration files are YAML files with extension .yml. You can either
+create a new file with your favorite editor or copy-and-edit an existing file.
+The [physical-example.yml](https://github.com/opencord/cord/blob/{{ book.branch
+}}/podconfig/physical-example.yml) configuration file is there for this
+purpose, and the most commonly set parameters are described. Optional lines
+have been commented out, but can be used as needed.
 
 More information about how the network configuration for the POD can be
 customized can be found in [Network Settings](appendix_network_settings.md).
diff --git a/docs/install_virtual.md b/docs/install_virtual.md
index e7d32da..9e58b46 100644
--- a/docs/install_virtual.md
+++ b/docs/install_virtual.md
@@ -427,8 +427,11 @@
 If you need to force `make build` to re-run steps that have already completed,
 remove the appropriate file in the `milestones` directory prior to re-running.
 
-For more information about how the build works, see [Troubleshooting and Build
-Internals](troubleshooting.md).
+More troubleshooting information can be found in the
+[Troubleshooting](troubleshooting.md) section.
+
+For more information about how the build works, see [Build Process
+Internals](build_internals.md).
 
 ### Failed: TASK \[maas-provision : Wait for node to become ready\]
 
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 629311f..089877b 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -1,4 +1,4 @@
-# Troubleshooting and Build System Internals
+# Troubleshooting
 
 > NOTE: Most of the debugging below assumes that you are logged into the
 > `config` node, which is the node where `make` is run to start the build, and
@@ -245,136 +245,3 @@
 This must be done before running the `make config` target - it won't affect an
 already-running POD.
 
-## Config Generation Overview
-
-All configuration in CORD is driven off of YAML files which contain variables
-used by Ansible, make, and Vagrant to build development and production
-environments. A [glossary of build system variables](build_glossary.md) is
-available which describes these variables and where they are used.
-
-When a command to generate config such as `make PODCONFIG=rcord-mock.yml
-config` is run, the following steps happen:
-
-1. The POD Config file is read, in this case
-[orchestration/profiles/rcord/podconfig/rcord-mock.yml](https://github.com/opencord/rcord/blob/{{ book.branch }}/podconfig/rcord-mock.yml),
-which specifies the scenario and profile.
-2. The Scenario config file is read, in this case
-[build/scenarios/mock/config.yml](https://github.com/opencord/cord/blob/{{ book.branch }}/scenarios/mock/config.yml).
-3. The contents of these files are combined into a master config variable, with
-   the POD Config overwriting any config set in the Scenario.
-4. The entire master config is written to `genconfig/config.yml`.
-5. The `inventory_groups` variable is used to generate an ansible inventory
-   file and put in `genconfig/inventory.ini`.
-6. Various variables are used to generate the makefile config file
-   `genconfig/config.mk`. This sets the targets invoked by `make build`
-
-Note that the combination of the POD and Scenaro config in step #3 is not a
-merge. If you define an item in the root of the POD Config that has subkeys,
-it will overwrite every subkey defined in the Scenario.  This is most noticeable
-when setting the `inventory_groups` or `docker_image_whitelist`
-variable. If changing either in a POD Config, you must recreate the
-entire structure or list. This may seem inconvenient, but other list
-or tree merging strategies lack a way to remove items from a tree
-structure.
-
-## Build Process Overview
-
-The build process is driven by running `make`. The two most common makefile
-targets are `config` and `build`, but there are also utility targets that are
-handy to use during development.
-
-### `config` make target
-
-`config` requires a `PODCONFIG` argument, which is the name of a file in
-`orchestration/profiles/<use-case>/podconfig/`. `PODCONFIG` defaults to `invalid`, so if you get errors
-claiming an invalid config, you probably didn't set it, or set it to a filename
-that doesn't exist.
-
-#### Examples: `make config`
-
-`make PODCONFIG=rcord-local.yml config`
-
-`make PODCONFIG=opencloud-mock.yml config`
-
-### `build` make target
-
-`make build` performs the build process, and takes no arguments.  It may run
-different targets specified by the scenario.
-
-Most of the build targets in the Makefile don't leave artifacts behind, so we
-write a placeholder file (aka "sentinels" or "empty targets") in the
-`milestones` directory.
-
-### Utility make targets
-
-There are various utility targets:
-
-- `printconfig`: Prints the configured scenario and profile.
-
-- `xos-teardown`: Stop and remove a running set of XOS docker containers,
-  removing the database.
-
-- `xos-update-images`: Rebuild the images used by XOS, without tearing down
-  running XOS containers.
-
-- `collect-diag`: Collect detailed diagnostic information on a deployed head
-  and compute nodes, into `diag-<datestamp>` directory on the head node.
-
-- `compute-node-refresh`: Reload compute nodes brought up by MaaS into XOS,
-  useful in the cord virtual and physical scenarios
-
-- `pod-test`: Run the `platform-install/pod-test-playbook.yml`, testing the
-  virtual/physical cord scenario.
-
-- `vagrant-destroy`: Destroy Vagrant containers (for mock/virtual/physical
-  installs)
-
-- `clean-images`: Have containers rebuild during the next build cycle. Does
-  not actually delete any images, just causes imagebuilder to be run again.
-
-- `clean-genconfig`: Deletes the `make config` generated config files in
-  `genconfig`, useful when switching between POD configs
-
-- `clean-onos`: Stops the ONOS containers on the head node
-
-- `clean-openstack`: Cleans up and deletes all instances and networks created
-  in OpenStack.
-
-- `clean-profile`: Deletes the `cord_profile` directory
-
-- `clean-all`: Runs `vagrant-destroy`, `clean-genconfig`, and `clean-profile`
-  targets, removes all milestones. Good for resetting a dev environment back
-  to an unconfigured state.
-
-- `clean-local`:  `clean-all` but for the `local` scenario - Runs
-  `clean-genconfig` and `clean-profile` targets, removes local milestones.
-
-The `clean-*` utility targets should modify the contents of the milestones
-directory appropriately to cause the steps they clean up after to be rerun on
-the next `make build` cycle.
-
-### Development workflow
-
-#### Updating XOS Container Images on a running POD
-
-To rebuild and update XOS container images, run:
-
-```shell
-make xos-update-images
-make -j4 build
-```
-
-This will build new copies of all the images, then when build is run the newly
-built containers will be restarted.
-
-If you additionally want to stop all the XOS containers, clear the database,
-and reload the profile, use `xos-teardown`:
-
-```shell
-make xos-teardown
-make -j4 build
-```
-
-This will teardown the XOS container set, tell the build system to rebuild
-images, then perform a build and reload the profile.
-