CORD-2551 update exampleservice tutorial for dynamic load
Change-Id: I46dd38e1cd0d849b6c2f2fd9784160ac144bb1d9
diff --git a/docs/example_service.md b/docs/example_service.md
index e9c3d6c..6d1d07e 100644
--- a/docs/example_service.md
+++ b/docs/example_service.md
@@ -36,8 +36,10 @@
| Component | Source Code (https://github.com/opencord/exampleservice/) |
|----------|-----------------------------------------------------|
-| Data Model | `xos/exampleservice.xproto` |
-| Synchronizer | `xos/synchronizer/steps/sync_exampletenant.py` `xos/synchronizer/steps/exampletenant_playbook.yaml` `xos/synchronizer/Dockerfile.synchronizer` |
+| Data Model | `xos/synchronizer/models/exampleservice.xproto` |
+| Synchronizer Program | `xos/synchronizer/exampleservice-synchronizer.py` `xos/synchronizer/exampleservice_config.yaml` `xos/synchronizer/model-deps` `xos/synchronizer/Dockerfile.synchronizer` |
+| Sync Steps | `xos/synchronizer/steps/sync_exampletenant.py` `xos/synchronizer/steps/exampletenant_playbook.yaml` |
+| Model Policies | `xos/synchronizer/model_policies/model_policy_exampleserviceinstance.py` |
| On-Boarding Spec | `xos/exampleservice-onboard.yaml`
Earlier releases (3.0 and before) required additional files (mostly Python
@@ -51,11 +53,7 @@
(or write a new)
[service profile](https://guide.opencord.org/service-profiles.html).
This tutorial uses the existing R-CORD profile for illustrative
-purposes. These profile definitions currently live in the
-`profile_manifest` directory of
-(https://github.com/opencord/platform-install/), which corresponds to the
-`CORD_ROOT/build/platform-install/profile_manifests` directory
-in your downloaded copy of the CORD source tree.
+purposes. These profile definitions currently live in the
+(https://github.com/opencord/rcord) repository. Additional related playbooks
+reside in the (https://github.com/opencord/platform-install/) repository for
+historical reasons.
## Development Environment
@@ -69,35 +67,52 @@
familiarize yourself with the CiaB environment and the [POD Development
Loop](dev/workflow_pod.md#development-loop).
+## Create the Synchronizer Directory
+
+The synchronizer directory holds the model declarations and the synchronizer
+for the service. Usually this directory is `xos/synchronizer`. This tutorial
+will first walk through creating the models, and then discuss creating the
+synchronizer itself.
+
+Make a new root directory for your service, and within that directory,
+create an `xos` subdirectory. The `xos` subdirectory will hold all
+XOS-related files for your service.
+
+Within the `xos` subdirectory, create a `synchronizer` subdirectory. The
+`synchronizer` subdirectory holds the subset of files that are built into
+the `synchronizer` container image.
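+
+For orientation, here is a sketch of the layout this tutorial builds up,
+using the file names from the table at the top of this document (the name of
+the root directory is up to you; `exampleservice` is used here):
+
+```
+exampleservice/
+└── xos/
+    ├── exampleservice-onboard.yaml
+    └── synchronizer/
+        ├── Dockerfile.synchronizer
+        ├── exampleservice-synchronizer.py
+        ├── exampleservice_config.yaml
+        ├── model-deps
+        ├── models/
+        │   └── exampleservice.xproto
+        ├── model_policies/
+        │   └── model_policy_exampleserviceinstance.py
+        └── steps/
+```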
+
## Define a Model
-The first step is to create a set of models for the service. Before you do
-this, make a new root directory for your service, and within that directory,
-create an `xos` subdirectory. The `xos` subdirectory will hold files that XOS
-needs to on-board your service.
-
-Your models live in a file named `exampleservice.xproto` in your service's `xos` directory. This
+Your models live in a file named `exampleservice.xproto` in your service's
+`xos/synchronizer/models` directory. This
file encodes the models in the service in a format called
[xproto](../xos/dev/xproto.md) which is a combination of Google Protocol
Buffers and some XOS-specific annotations to facilitate the generation of
service components, such as the GRPC and REST APIs, security policies, and
database models among other things. It consists of two parts:
+* The XPROTO Header, which contains options that are global to the rest of the file.
+
* The Service model, which manages the service as a whole.
* The ServiceInstance model, which manages tenant-specific
(per-service-instance) state.
+### XPROTO Header
+
+Some options are typically specified at the top of your xproto file:
+
+```
+option name = "exampleservice";
+option app_label = "exampleservice";
+```
+
+`name` specifies a name for your service. This is used as a default in
+several places; for example, it is used as the default `app_label` if you
+don't specifically choose one. Normally it suffices to set this to the name
+of your service, lower case, with no spaces.
+
+`app_label` configures the internal XOS database application that is attached
+to these models. As with `name`, it suffices to set this to the name of your
+service, lower case, with no spaces.
+
### Service Model (Service-wide state)
A Service model extends (inherits from) the XOS base *Service* model. At its
-head is a set of option declarations: the name of the service as a
-configuration string, and as a human readable one. Then follows a set of field
+head is a set of option declarations such as `verbose_name`, which specifies
+a human-readable name for the service model. Then follows a set of field
definitions.
```
message ExampleService (Service){
- option name = "exampleservice";
option verbose_name = "Example Service";
required string service_message = 1 [help_text = "Service Message to Display", max_length = 254, null = False, db_index = False, blank = False];
}
@@ -110,7 +125,6 @@
```
message ExampleServiceInstance (TenantWithContainer){
- option name = "exampleserviceinstance";
option verbose_name = "Example Service Instance";
required string tenant_message = 1 [help_text = "Tenant Message to Display", max_length = 254, null = False, db_index = False, blank = False];
}
@@ -140,11 +154,9 @@
> Note: Earlier versions included a tool to track model dependencies, but today
> it is sufficient to create a file named `model-deps` with the contents:` {}`.
-The Synchronizer has two parts: A container that runs the synchronizer process,
-and a playbook (typically Ansible) that configures the underlying system. The following
-describes how to construct both.
+The Synchronizer has three parts: the synchronizer Python program, model
+policies that enact changes on the data model, and a playbook (typically
+Ansible) that configures the underlying system. The following describes how
+to construct each of these.
-### Synchronizer Container
+### Synchronizer Python Program
First, create a file named `exampleservice-synchronizer.py`:
@@ -173,7 +185,7 @@
logging options:
```
-name: exampleservice-synchronizer
+name: exampleservice
accessor:
username: xosadmin@opencord.org
password: "@/opt/xos/services/exampleservice/credentials/xosadmin@opencord.org"
@@ -186,7 +198,11 @@
steps_dir: "/opt/xos/synchronizers/exampleservice/steps"
sys_dir: "/opt/xos/synchronizers/exampleservice/sys"
model_policies_dir: "/opt/xos/synchronizers/exampleservice/model_policies"
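+# models_dir tells the synchronizer where to find the xproto models that it
+# dynamically loads into the XOS core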
+models_dir: "/opt/xos/synchronizers/exampleservice/models"
```
+
+Make sure the `name` in your synchronizer config file is the same as the
+`app_label` in your `xproto` file. Otherwise, the models won't be dynamically
+loaded correctly.
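+
+For example, the xproto header earlier in this tutorial sets
+`option app_label = "exampleservice";`, so the config file's `name` field
+(shown above) reads:
+
+```
+name: exampleservice   # must match "option app_label" in exampleservice.xproto
+```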
+
> NOTE: Historically, synchronizers were named “observers”, so
> `s/observer/synchronizer/` when you come upon this term in the XOS code and
> documentation.
@@ -207,7 +223,7 @@
logger = Logger(level=logging.INFO)
```
-Bring in some basic prerequities. Also include the models created earlier, and
+Bring in some basic prerequisites. Also include the models created earlier, and
`SyncInstanceUsingAnsible` which will run the Ansible playbook in the Instance
VM.
@@ -303,10 +319,21 @@
CMD bash -c "cd /opt/xos/synchronizers/exampleservice; ./run-from-api.sh"
```
+### Synchronizer Model Policies
+
+Model policies are used to implement changes within the data model. When an
+`ExampleServiceInstance` object is saved, we want an `Instance` to be
+automatically created to hold the ExampleServiceInstance's web server.
+Fortunately, there's a base class that implements this functionality for us,
+so minimal coding needs to be done at this time. Create the `model_policies`
+subdirectory, and within it create the file
+`model_policy_exampleserviceinstance.py`:
+
+```
+from synchronizers.new_base.modelaccessor import *
+from synchronizers.new_base.model_policies.model_policy_tenantwithcontainer import TenantWithContainerPolicy
+
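+# The TenantWithContainerPolicy base class takes care of creating the
+# Instance that holds each ExampleServiceInstance's web server; all this
+# subclass needs to declare is which model the policy applies to.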
+class ExampleServiceInstancePolicy(TenantWithContainerPolicy):
+ model_name = "ExampleServiceInstance"
+```
### Synchronizer Playbooks
-In the same `steps` directory, create an Ansible playbook named
+In the same `steps` directory where you created
+`sync_exampleserviceinstance.py`, create an Ansible playbook named
`exampleserviceinstance_playbook.yml` which is the “master playbook” for this
set of plays:
@@ -391,8 +418,7 @@
directory of the service.
The on-boarding recipe is a TOSCA specification that lists all of the resources
-for your synchronizer. It's basically a collection of everything that has been
- created above. For example, here is the on-boarding recipe for
+for your synchronizer. For example, here is the on-boarding recipe for
*ExampleService*:
```
@@ -411,21 +437,20 @@
base_url: file:///opt/xos_services/exampleservice/xos/
# The following will concatenate with base_url automatically, if
# base_url is non-null.
- xproto: ./
private_key: file:///opt/xos/key_import/exampleservice_rsa
public_key: file:///opt/xos/key_import/exampleservice_rsa.pub
```
-Note that this recipe (when executed) on-boards *ExampleService* in the
-sense that it registers the service with the system (i.e., loads its
-model into XOS), but it does not provision the service or create
-instances of the service. These latter steps can be done
+This is a legacy recipe that (when executed) on-boards *ExampleService* in the
+sense that it registers the service with the system, but it does not provision
+the service or create instances of the service. These latter steps can be done
through CORD's GUI or REST API, or by submitting yet other TOSCA
workflows to a running CORD POD (all based on end-points that are
auto-generated from these on-boarded models). Additional information
on how to provision and use the service is given in the last section
of this tutorial.
+> NOTE: This file may soon be removed.
+
## Include the Service in a Profile
The final step to on-boarding a service is to include it in one or
@@ -433,14 +458,14 @@
profiles are currently defined as part of the
[CORD build system](https://guide.opencord.org/install.html),
so this involves editing some build-related configuration files.
-These files can be found in the `CORD_ROOT/build/platform-install`
-directory of the checkout out source code.
+These files can be found in the `CORD_ROOT/orchestration/profiles/rcord` and
+`CORD_ROOT/build/platform-install` directories of the checked-out source code.
### Profile Manifests
Inserting *ExampleService* in a service profile requires creating or
modifying one of the `.yml` files in
-`build/platform-install/profile_manifests` of your local repo.
+`orchestration/profiles/rcord` of your local repo.
In the following, we use `rcord.yml` as an illustrative example. There
are potentially three sections of this file that need attention.
@@ -484,12 +509,6 @@
* Add the service's synchronizer image to `build/docker_images.yml`
-* Add the service's synchronizer image to `docker_image_whitelist` for
-the scenarios you want your service to run in. For example,
-*ExampleService* is included in these two files:
- * `build/scenarios/cord/config.yml`
- * `build/scenarios/single/config.yml`
-
* Because the build system is integrated with the `git` and `repo`
tools, if your service is not already checked into
`gerrit.opencord.org`, you will also need to add the service to
@@ -529,6 +548,21 @@
both the `management_network` and the `public_network`.
```
+tosca_definitions_version: tosca_simple_yaml_1_0
+
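+# These imports pull in the TOSCA node type definitions (Site, Flavor, Image,
+# Network, Slice, and the ExampleService types) referenced by the node
+# templates below.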
+imports:
+ - custom_types/slice.yaml
+ - custom_types/site.yaml
+ - custom_types/image.yaml
+ - custom_types/flavor.yaml
+ - custom_types/network.yaml
+ - custom_types/networktemplate.yaml
+ - custom_types/networkslice.yaml
+ - custom_types/exampleservice.yaml
+ - custom_types/exampleserviceinstance.yaml
+
+description: configure exampleservice
+
topology_template:
node_templates:
@@ -537,7 +571,7 @@
type: tosca.nodes.Site
properties:
must-exist: true
- name: {{ site_humanname }}
+ name: {{ site_name }}
m1.small:
type: tosca.nodes.Flavor
@@ -576,18 +610,18 @@
This is followed by the specification of a `private` network used by
*ExampleService* :
```
- exampleservice_network:
- type: tosca.nodes.Network
+ exampleservice_network:
+ type: tosca.nodes.Network
properties:
- name: exampleservice_network
- labels: exampleservice_private_network
+ name: exampleservice_network
+ labels: exampleservice_private_network
requirements:
- template:
- node: private
- relationship: tosca.relationships.BelongsToOne
+ node: private
+ relationship: tosca.relationships.BelongsToOne
- owner:
- node: {{ site_name }}_exampleservice
- relationship: tosca.relationships.BelongsToOne
+ node: {{ site_name }}_exampleservice
+ relationship: tosca.relationships.BelongsToOne
```
The next part of the workflow provisions the `Slice` (and related