updated modeling info

Change-Id: If1a7e18f2f695734a5d57a64c23e7035aaf4306c
diff --git a/docs/README.md b/docs/README.md
index c83d24d..f37af2b 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,11 +1,32 @@
 # Defining Models for CORD
 
-XOS implements a model-based service control plane for CORD.
-This guide describes the how models are expressed in XOS and
-documents the toolchain used to auto-generate various
-elements of CORD from these models.
+CORD adopts a model-based design, which is to say that all aspects
+of operating and managing CORD are mediated by a model-based
+control plane. XOS is the component in CORD that implements
+this control plane.
 
-It also describes the role of Synchronizers in bridging the CORD data
-model with the backend components (e.g., VNFs, micro-services,
-SDN control apps) that implement CORD's service data plane.
+This guide describes XOS and the role it plays in the CORD Controller.
+XOS is not a monolithic component. It is best viewed as having
+three inter-related aspects, and this guide is organized accordingly.
+
+First, XOS defines a [modeling framework](dev/xproto.md), which
+includes both a modeling language (*xproto*) and a generative
+toolchain (*xosgen*). The core abstractions that define CORD's
+behavior are expressed in xproto, with xosgen then used to
+generate code for several elements required to control CORD
+(including an API that serves the set of models that have been
+loaded into XOS).
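+
+To give a feel for what this looks like, the fragment below is a
+simplified, illustrative xproto declaration (the model name and fields
+are made up, and most field options are omitted); the
+[modeling framework](dev/xproto.md) guide documents the actual syntax
+and the xosgen toolchain.
+
+```
+// Illustrative only: a made-up model expressed in xproto.
+message ExampleService (XOSBase) {
+    required string name = 1 [max_length = 200, null = False, blank = False];
+    optional string description = 2 [null = True, blank = True];
+}
+```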
+
+Second, CORD is based on a core set of models. These models are
+expressed and realized using XOS, but they are architecturally
+independent of XOS. These models are central to defining what
+CORD **is**, including its [core abstractions](core_models.md)
+and the [security policies](security_policies.md) that govern how
+various principals can act on those abstractions in a multi-tenant
+environment.
+
+Third, XOS defines a [synchronization framework](dev/synchronizers.md)
+that actuates the CORD data model. This framework is responsible for
+driving the underlying components configured into CORD (for example,
+services, access devices) towards the desired state.
 
diff --git a/docs/core_models.md b/docs/core_models.md
index dd6ff86..7458194 100644
--- a/docs/core_models.md
+++ b/docs/core_models.md
@@ -1,109 +1,161 @@
-#Core Models
+# Core Models
 
-CORD adopts a model-based design. Each service configured into a
-given CORD deployment has a service-specific model, but CORD
-also defines a set of core models. Service-specific models are
-anchored in these core models.
+The XOS modeling framework provides a foundation for building CORD,
+but it is just a means of defining the set of core models that
+effectively specify CORD's architecture.
 
-CORD's core models are defined by a set of [xproto](dev/xproto.md)
-specifications. They are defined in their full detail in the source
-code. See:
-[core.xproto](https://github.com/opencord/xos/blob/master/xos/core/models/core.xproto).
-The following describes these core models -- along with the
-relationships (bindings) among them -- in words.
+## Overview
+
+CORD's core starts with the **Service** model, which represents
+all functionality that can be on-boarded into CORD. The power of the
+Service model is that it is implementation-agnostic, supporting both
+*server-based* services (e.g., legacy VNFs running in VMs and
+micro-services running in containers) and *switch-based* services
+(e.g., SDN control applications that install flow rules into
+white-box switches).
+
+To realize this range of implementation choices, each service is bound
+to a set of **Slice** models, each of which represents a
+combination of virtualized compute resources (both containers and VMs)
+and virtualized network resources (virtual networks).
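+
+In xproto terms this binding is just a relational field on the `Slice`
+model. The fragment below is a simplified sketch of that relationship
+(field numbers and most options elided), not the authoritative
+definition in core.xproto:
+
+```
+// Simplified sketch: each Slice belongs to exactly one Service,
+// and a Service's slices are reachable via the reverse name "slices".
+message Slice (XOSBase) {
+    required string name = 1 [max_length = 80, null = False, blank = False];
+    required manytoone service->Service:slices = 2 [null = False, blank = False];
+}
+```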
+
+Next, a **ServiceDependency** model represents a relationship between
+a pair of services: a *subscriber* and a *provider*. This dependency is
+parameterized by a **connect_method** field that defines how the two
+services are interconnected in the underlying network data plane. The
+approach is general enough to interconnect two server-based services,
+two switch-based services, or a server-based and a switch-based
+service pair. This makes it possible to construct a service graph
+without regard to how the underlying services are implemented.
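+
+The sketch below suggests how such a dependency could be expressed in
+xproto. It is illustrative only (field numbers, options, and choice
+labels are guesses); core.xproto holds the authoritative definition:
+
+```
+// Illustrative sketch of a ServiceDependency between two Services.
+message ServiceDependency (XOSBase) {
+    required manytoone provider_service->Service:provided_dependencies = 1 [null = False, blank = False];
+    required manytoone subscriber_service->Service:subscribed_dependencies = 2 [null = False, blank = False];
+    // How the two services are wired together in the data plane.
+    required string connect_method = 3 [choices = "(('none', 'None'), ('private', 'Private'), ('public', 'Public'))", null = False, blank = False];
+}
+```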
+
+For a service graph defined by a collection of Service and
+ServiceDependency models, every time a subscriber requests service
+(e.g., connects their cell phone or home router to CORD), a
+**ServiceInstance** object is created for each service traversed in
+the service graph on behalf of that subscriber, representing that
+subscriber's virtualized instance of the service. Different
+subscribers may traverse different paths
+through the service graph, based on their customer profile, but the
+end result is a collection of interconnected ServiceInstance objects,
+forming the CORD-equivalent of a service chain. (This "chain" is often
+linear, but because the model allows for a many-to-many relationship
+among service instances, this "chain" can form an arbitrary graph.)
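+
+As an illustration, the per-subscriber chain (or graph) is held
+together by link objects that reference a provider and a subscriber
+instance. The sketch below is simplified (field numbers and options
+elided) and is not the exact core.xproto definition:
+
+```
+// Simplified sketch: links between ServiceInstances form the chain/graph.
+message ServiceInstanceLink (XOSBase) {
+    required manytoone provider_service_instance->ServiceInstance:provided_links = 1 [null = False, blank = False];
+    optional manytoone subscriber_service_instance->ServiceInstance:subscribed_links = 2 [null = True, blank = True];
+}
+```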
+
+Each node in this service chain (graph of ServiceInstance objects)
+represents some combination of virtualized compute and network
+resources; the service chain is not necessarily implemented by a
+sequence of containers or VMs. That would be one possible
+incarnation in the underlying service data plane, but how each
+individual service instance is realized in the underlying resources
+is an implementation detail. Moreover, because the data model
+provides a way to represent this end-to-end service chain, it is
+possible to access and control resources on a per-subscriber basis,
+in addition to controlling them on a per-service basis.
+
+Finally, each model defines a set of fields that are used to either
+configure/control the underlying 
+component (these fields are said to hold *declarative state*) or to 
+record operational data about the underlying component (these 
+fields are said to hold *feedback state*). For more information 
+about declarative and feedback state, and the role they play in 
+synchronizing the data model with the backend components,
+read about the [Synchronizer Architecture](dev/sync_arch.md). 
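+
+As a hypothetical example, a model might pair an operator-set field
+with a field that the synchronizer writes back from the backend, with
+the latter marked as feedback state (the model and field names below
+are made up):
+
+```
+// Hypothetical model mixing declarative and feedback state.
+message ExampleServiceInstance (ServiceInstance) {
+    // Declarative state: set through the API, pushed to the backend.
+    required string target_bandwidth = 1 [null = False, blank = False];
+    // Feedback state: written back by the synchronizer.
+    optional string backend_handle = 2 [feedback_state = True, null = True, blank = True];
+}
+```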
+
+## Model Glossary 
+
+CORD's core models are defined by a set of [xproto](dev/xproto.md) 
+specifications. They are defined in their full detail in the source 
+code (see
+[core.xproto](https://github.com/opencord/xos/blob/master/xos/core/models/core.xproto)).
+The following summarizes these core models -- along with the 
+key relationships (bindings) among them -- in words. 
 
 * **Service:** Represents an elastically scalable, multi-tenant
-program, including the means to instantiate, control, and scale
-functionality.
+program, including the declarative state needed to instantiate,
+control, and scale functionality.
 
-   - Bound to a set of Slices that contains the collection of
+   - Bound to a set of `Slices` that contains the collection of
       virtualized resources (e.g., compute, network) in which the
-      Service runs.
+      `Service` runs.
 
   In many CORD documents you will see mention of each service also
-  having a "controller" but this effectively corresponds to the
-  *Service* model itself, which is used to generate a "control
-  interface" for the service. There is no explicit *Controller* model
-  bound to a service. (There actually is a *Controller* model, but it
-  represents the controller for a backend infrastructure service, such
-  as OpenStack.)
+  having a "controller" which effectively corresponds to the
+  `Service` model itself (i.e., its purpose is to generate a "control
+  interface" for the service). There  is no "Controller" model
+  bound to a service. (Confusingly, CORD does include a `Controller` 
+  model, but it represents information about OpenStack. There is
+  also a `ServiceController` construct in the TOSCA interface for
+  CORD, which provides a means to load the `Service` model for
+  a given service into CORD.)
    
+* **ServiceDependency:** Represents a dependency of a *Subscriber*
+service on a *Provider* service. The set of `ServiceDependency`
+and `Service` models defined in CORD collectively represent the edges
+and vertices of a *Service Graph*, but there is no explicit
+"ServiceGraph" model in CORD. The dependency between a pair of
+services is parameterized by the `connect_method` by which the services
+are interconnected in the data plane. Connect methods include:
+
+   - **None:** The two services are not connected in the data plane. 
+   - **Private:** The two services are connected by a common private network. 
+   - **Public:** The two services are connected by a publicly routable 
+   network. 
+   
+
 * **ServiceInstance:** Represents an instance of a service
   instantiated on behalf of a particular tenant. This is a
   generalization of the idea of a Compute-as-a-Service spinning up
   individual "compute instances," or using another common
-  example, the *ServiceInstance* corresponding to a Storage Service
+  example, the `ServiceInstance` corresponding to a Storage Service
   might be called a "Volume" or a "Bucket." Confusingly, there are
-  also instances of a *Service* model that represent different
+  also instances of a `Service` model that represent different
   services, but this is a consequence of standard modeling
-  terminology, whereas  *ServiceInstance* is a core model in CORD
-  (and yes, there are "instances of the *ServiceInstance* model").
+  terminology, whereas `ServiceInstance` is a core model in CORD
+  (and yes, there are instances of the `ServiceInstance` model).
 
-* **ServiceDependency:** Represents a dependency between a *Subscriber*
-Service on a *Provider*  Service. The set of ServiceDependency 
-and Service models defined in CORD collectively represent the edges
-and verticies of a *Service Graph*. (There is no explicit **ServiceGraph** model.)
-The dependency between a pair of services is parameterized by the method
-by which they are interconnected in the data plane. Connect methods include:
+* **ServiceInstanceLink:** Represents a logical connection between
+`ServiceInstances` of two `Services`. A related model, `ServiceInterface`,
+types the `ServiceInstanceLink` between two `ServiceInstances`. A
+connected sequence of `ServiceInstances` and `ServiceInstanceLinks` form
+what is often called a *Service Chain*, but there is no explicit
+"ServiceChain" model in CORD.
 
-   - **None:** The two services are not connected in the data plane.
-   - **Private:** The two services are connected by a common private network.
-   - **Public:** The two services are connected by a publicly routable
-   network.
-   
+* **Slice:** Represents a distributed resource container that includes
+the compute and network resources that belong to (are used by) some
+`Service`.
 
-* **Slice:** A distributed resource container that includes the compute and 
-network resources that belong to (are used by) some Service.
+   - Bound to a set of `Instances` that provide compute resources for
+      the `Slice`.
 
-   - Bound to a (possibly empty) set of Instances that provide compute
-      resources for the Slice. 
-
-   - Bound to a set of Networks that connect the Slice's Instances to
-      each other, and connect this Slice to the Slices of other Services.
+   - Bound to a set of `Networks` that connect the slice's `Instances` to
+      each other.
   
-   - Bound to a Flavor that defines how the Slice's Instances are 
-      scheduled. 
+   - Bound to a default `Flavor` that represents a bundle of
+      resources (e.g., disk, memory, and cores) allocated to an
+      instance. Current flavors borrow from EC2. 
 
-   - Bound to an Image that boots in each of the Slice's Instances.
+   - Bound to a default `Image` that boots in each of the slice's `Instances`.
+      Each `Image` implies a virtualization layer (e.g., Docker, KVM).
 
 
 * **Instance:** Represents a single compute instance associated
    with a Slice and instantiated on some physical Node. Each Instance
-   is of some isolation type:
+   is of some `isolation` type: `vm` (implemented as a KVM virtual machine),
+   `container` (implemented as a Docker container), or `container_vm`
+   (implemented as a Docker container running inside a KVM virtual machine).
 
-   - **VM:** The instance is implemented as a KVM virtual machine.
-   - **Container:** The instance is implemented as a Docker container.
-   - **Container-in-VM:** The instance is implemented as a Docker
-   container running inside a KVM virtual machine.
-   
+* **Network:** Represents a virtual network associated with a `Slice`. The
+behavior of a given `Network` is defined by a `NetworkTemplate`, which
+specifies a set of parameters, including `visibility` (set to `public` or
+`private`), `access` (set to `direct` or `indirect`), `translation`
+(set to `none` or `nat`), and `topology_kind` (set to `bigswitch`,
+`physical` or `custom`). There is also a `vtn_kind` parameter
+(indicating the `Network` is managed by VTN), with possible settings:
+`PRIVATE`, `PUBLIC`, `MANAGEMENT_LOCAL`, `MANAGEMENT_HOST`,
+`VSG`, or `ACCESS_AGENT`.
 
-* **Network:** A virtual network associated with a Slice. Networks are
-of one of the following types:
+* **Node:** Represents a physical server that can be virtualized and host Instances.
 
-   - **PRIVATE:** Virtual network for the instances in the same service
-   - **PUBLIC:** Externally accessible network
-   - **MANAGEMENT_LOCAL:** Instance management network which does not span
-      compute nodes, only accessible from the host machine
-   - **MANAGEMENT_HOST:** Real management network which spans compute and
- 	  head nodes
-   - **ACCESS_AGENT:** Network for access agent infrastructure service
-	  
-
-* **Image:** A bootable image that runs in a virtual machine. Each 
-  Image implies a virtualization layer (e.g., Docker, KVM), so the latter 
-  need not be a distinct object. 
-
-* **Flavor:** Represents a bundle of resources (e.g., disk, memory,
-   and cores) allocated to an instance. Current flavors borrow from EC2. 
-
-* **Controller:** Represents the binding of an object
-  in the data model to a back-end element (e.g., an OpenStack head
-  node).  Includes the credentials required to invoke the backend
-  resource.
-
-* **Node:** A physical server that can be virtualized and host Instances.
-
-   - Bound to the Site where the Node is physically located.
+   - Bound to the `Site` where the `Node` is physically located.
 
 
 * **User:** Represents an authenticated principal that is granted a set of
@@ -113,14 +165,14 @@
 * **Privilege:** Represents the right to perform a set of read, write,
   or grant operations on a set of models, objects, and fields.
 
-* **Site:** A logical grouping of Nodes that are co-located at the
-  same geographic location, which also typically corresponds to the
-  Nodes' location in the physical network.
-
-  - Bound to a set of Users that are affiliated with the Site.
-
-  - Bound to a set of Nodes located at the Site.
-
+* **Site:** Represents a logical grouping of `Nodes` that are
+  co-located at the same geographic location, which also typically
+  corresponds to the nodes' location in the physical network.
   The typical use case involves one configuration of a CORD POD 
-  deployed at a single location. However, the underlying core includes 
+  deployed at a single location, although the underlying core
   allows for multi-site deployments.
+
+  - Bound to a set of `Nodes` located at the `Site`.
+
+
+
diff --git a/docs/security_policies.md b/docs/security_policies.md
index d900788..f2ff50b 100644
--- a/docs/security_policies.md
+++ b/docs/security_policies.md
@@ -8,16 +8,16 @@
 within that object), and (2) the access type (whether it is a read, a
 write, or a privilege update).
 
-## Summary of Policy Mechanism
-
 The mechanism for expressing these policies is provided by xproto’s
 policy extensions. The policies are enforced at the API boundary. When
 an API call is made, the appropriate policy is executed to determine
 whether or not access should be granted, and an audit trail is left
-behind. (Note: auditing is a TODO). The policy enforcers are
+behind. The policy enforcers are
 auto-generated by the generative toolchain as part of the model
 generation process.
 
+> Note: Auditing is still a TODO.
+
 Policies are generic logic expressions and can operate on any model
 or on the environment, but they frequently use the *Privilege* model.
 Specifically, when a policy cannot be expressed as a general principle
@@ -27,9 +27,6 @@
 object may be created to indicate that a user who is not a slice’s
 creator has admin privileges on it.
 
-Details on how policies are encoded can be found elsewhere.
-This document is about the “what” rather than the “how.”
-
 The set of security policies is being bootstrapped into the following
 state:
 
@@ -47,6 +44,21 @@
 *Grant*. Grant arbitrates access to Privilege objects (e.g., a slice
 admin could grant slice admin privileges to a user).
 
-The current policies are defined as follows:
+The current policies are defined in
+[core.xproto](https://github.com/opencord/xos/blob/master/xos/core/models/core.xproto). For
+example, the following `site_policy` controls access to instances of
+the `Site` model:
 
-**To be included...*
+```
+// Everyone has read access
+// For write access, you have to be a site_admin
+policy site_policy <
+    ctx.user.is_admin
+    | (ctx.write_access -> exists Privilege:
+        Privilege.object_type = "Site" & Privilege.object_id = obj.id
+        & Privilege.accessor_id = ctx.user.id & Privilege.permission
+        = "role:admin") >
+```
+
+For more information about security policy definitions, read about
+[xproto and xosgen](dev/xproto.md).