AETHER-954 Add VPN section to bootstrapping

Also updated the run-time deployment guide to use a Makefile to generate
configs, and the connectivity service update guide to add an HSSDB update step

Change-Id: Ie3a52b5690958ef1987fc136c728d830de134631
diff --git a/dict.txt b/dict.txt
index 8a02b40..4c2e638 100644
--- a/dict.txt
+++ b/dict.txt
@@ -45,3 +45,5 @@
 vpn
 YAML
 yaml
+Ansible
+ansible
diff --git a/pronto_deployment_guide/bootstrapping.rst b/pronto_deployment_guide/bootstrapping.rst
index 9053f7c..b89db55 100644
--- a/pronto_deployment_guide/bootstrapping.rst
+++ b/pronto_deployment_guide/bootstrapping.rst
@@ -6,6 +6,246 @@
 Bootstrapping
 =============
 
+VPN
+===
+This section walks you through setting up a VPN between ACE and Aether Central in GCP.
+We will be using the GitOps-based Aether CD pipeline for this,
+so we just need to create a patch to the **aether-pod-configs** repository.
+Note that some of the steps described here are not directly related to setting up a VPN,
+but are prerequisites for adding a new ACE.
+
+Before you begin
+----------------
+* Make sure the firewall in front of ACE allows UDP port 500, UDP port 4500, and ESP packets
+  from **gcpvpn1.infra.aetherproject.net (35.242.47.15)** and **gcpvpn2.infra.aetherproject.net (34.104.68.78)**;
+  see the example rules after this list
+* Make sure that the external IP on ACE side is owned by or routed to the management node
+
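+If the firewall in front of ACE happens to be Linux-based, the prerequisite
+translates to rules roughly like the following sketch (iptables syntax shown
+for illustration only; use your firewall's native configuration):
+
+.. code-block:: shell
+
+   # Allow IKE, IPsec NAT-T, and ESP from the two GCP VPN gateways.
+   # Use the FORWARD chain instead if the firewall routes to the
+   # management node rather than terminating the traffic itself.
+   for gw in 35.242.47.15 34.104.68.78; do
+       iptables -A INPUT -p udp -s $gw --dport 500  -j ACCEPT
+       iptables -A INPUT -p udp -s $gw --dport 4500 -j ACCEPT
+       iptables -A INPUT -p esp -s $gw -j ACCEPT
+   done
+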
+The following sample ACE environment is used in the rest of this section.
+Make sure to replace the sample values when you actually create a review request.
+
++-----------------------------+----------------------------------+
+| Management node external IP | 128.105.144.189                  |
++-----------------------------+----------------------------------+
+| ASN                         | 65003                            |
++-----------------------------+----------------------------------+
+| GCP BGP IP address          | Tunnel 1: 169.254.0.9/30         |
+|                             +----------------------------------+
+|                             | Tunnel 2: 169.254.1.9/30         |
++-----------------------------+----------------------------------+
+| ACE BGP IP address          | Tunnel 1: 169.254.0.10/30        |
+|                             +----------------------------------+
+|                             | Tunnel 2: 169.254.1.10/30        |
++-----------------------------+----------------------------------+
+| PSK                         | UMAoZA7blv6gd3IaArDqgK2s0sDB8mlI |
++-----------------------------+----------------------------------+
+| Management Subnet           | 10.91.0.0/24                     |
++-----------------------------+----------------------------------+
+| K8S Subnet                  | Pod IP: 10.66.0.0/17             |
+|                             +----------------------------------+
+|                             | Cluster IP: 10.66.128.0/17       |
++-----------------------------+----------------------------------+
+
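+If you need a fresh pre-shared key for your own ACE, one common way to generate
+a 32-character key (any strong random string works) is:
+
+.. code-block:: shell
+
+   $ openssl rand -base64 24
+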
+
+Download aether-pod-configs repository
+--------------------------------------
+.. code-block:: shell
+
+   $ cd $WORKDIR
+   $ git clone "ssh://[username]@gerrit.opencord.org:29418/aether-pod-configs"
+
+Update global resource maps
+---------------------------
+Add the new ACE's information at the end of each of the following global resource maps.
+
+* user_map.tfvars
+* cluster_map.tfvars
+* vpn_map.tfvars
+
+As a note, you can find several other global resource maps under the `production` directory.
+Resource definitions that need to be shared among clusters, or that are better managed in a
+single file to avoid configuration conflicts, are maintained this way.
+
+.. code-block:: diff
+
+   $ cd $WORKDIR/aether-pod-configs/production
+   $ vi user_map.tfvars
+
+   # Add the new cluster admin user at the end of the map
+   $ git diff user_map.tfvars
+   --- a/production/user_map.tfvars
+   +++ b/production/user_map.tfvars
+   @@ user_map = {
+      username      = "menlo"
+      password      = "changeme"
+      global_roles  = ["user-base", "catalogs-use"]
+   +  },
+   +  test_admin = {
+   +    username      = "test"
+   +    password      = "changeme"
+   +    global_roles  = ["user-base", "catalogs-use"]
+      }
+   }
+
+.. code-block:: diff
+
+   $ cd $WORKDIR/aether-pod-configs/production
+   $ vi cluster_map.tfvars
+
+   # Add the new K8S cluster information at the end of the map
+   $ git diff cluster_map.tfvars
+   --- a/production/cluster_map.tfvars
+   +++ b/production/cluster_map.tfvars
+   @@ cluster_map = {
+         kube_dns_cluster_ip     = "10.53.128.10"
+         cluster_domain          = "prd.menlo.aetherproject.net"
+         calico_ip_detect_method = "can-reach=www.google.com"
+   +    },
+   +    ace-test = {
+   +      cluster_name            = "ace-test"
+   +      management_subnets      = ["10.91.0.0/24"]
+   +      k8s_version             = "v1.18.8-rancher1-1"
+   +      k8s_pod_range           = "10.66.0.0/17"
+   +      k8s_cluster_ip_range    = "10.66.128.0/17"
+   +      kube_dns_cluster_ip     = "10.66.128.10"
+   +      cluster_domain          = "prd.test.aetherproject.net"
+   +      calico_ip_detect_method = "can-reach=www.google.com"
+         }
+      }
+   }
+
+.. code-block:: diff
+
+   $ cd $WORKDIR/aether-pod-configs/production
+   $ vi vpn_map.tfvars
+
+   # Add VPN and tunnel information at the end of the map
+   $ git diff vpn_map.tfvars
+   --- a/production/vpn_map.tfvars
+   +++ b/production/vpn_map.tfvars
+   @@ vpn_map = {
+      bgp_peer_ip_address_1    = "169.254.0.6"
+      bgp_peer_ip_range_2      = "169.254.1.5/30"
+      bgp_peer_ip_address_2    = "169.254.1.6"
+   +  },
+   +  ace-test = {
+   +    peer_name                = "production-ace-test"
+   +    peer_vpn_gateway_address = "128.105.144.189"
+   +    tunnel_shared_secret     = "UMAoZA7blv6gd3IaArDqgK2s0sDB8mlI"
+   +    bgp_peer_asn             = "65003"
+   +    bgp_peer_ip_range_1      = "169.254.0.9/30"
+   +    bgp_peer_ip_address_1    = "169.254.0.10"
+   +    bgp_peer_ip_range_2      = "169.254.1.9/30"
+   +    bgp_peer_ip_address_2    = "169.254.1.10"
+      }
+   }
+
+.. note::
+   Unless you have a specific requirement, set the ASN and BGP addresses to the next available values in the map.
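+
+To see which ASNs and BGP link-local ranges are already in use, a quick grep of
+the map is enough:
+
+.. code-block:: shell
+
+   $ grep -E 'asn|ip_range' $WORKDIR/aether-pod-configs/production/vpn_map.tfvars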
+
+
+Create ACE specific configurations
+----------------------------------
+In this step, we will create a directory under `production` with the same name as the ACE,
+and add the Terraform configurations and Ansible inventory needed to configure a VPN connection.
+Throughout the deployment procedure, this directory will contain all ACE specific configurations.
+
+Run the following commands to auto-generate necessary files under the target ACE directory.
+
+.. code-block:: shell
+
+   $ cd $WORKDIR/aether-pod-configs/tools
+   $ vi ace_env
+   # Set environment variables
+
+   $ source ace_env
+   $ make vpn
+   Created ../production/ace-test
+   Created ../production/ace-test/main.tf
+   Created ../production/ace-test/variables.tf
+   Created ../production/ace-test/gcp_fw.tf
+   Created ../production/ace-test/gcp_ha_vpn.tf
+   Created ../production/ace-test/ansible
+   Created ../production/ace-test/backend.tf
+   Created ../production/ace-test/cluster_val.tfvars
+   Created ../production/ace-test/ansible/hosts.ini
+   Created ../production/ace-test/ansible/extra_vars.yml
+
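+The exact contents of `ace_env` are defined by the template in the repository;
+with the sample environment above it would contain values along these lines
+(variable names other than `ACE_NAME` are illustrative):
+
+.. code-block:: shell
+
+   # Values taken from the sample ACE environment table above.
+   # Variable names other than ACE_NAME may differ in the actual template.
+   export ACE_NAME=ace-test
+   export MGMT_SUBNET=10.91.0.0/24
+   export K8S_POD_RANGE=10.66.0.0/17
+   export K8S_CLUSTER_IP_RANGE=10.66.128.0/17
+   export BGP_PEER_ASN=65003
+   export VPN_GW_ADDRESS=128.105.144.189
+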
+.. attention::
+   The predefined templates are tailored to the Pronto BOM. You'll need to adjust `cluster_val.tfvars` and `ansible/extra_vars.yml`
+   when using a different BOM.
+
+Create a review request
+-----------------------
+.. code-block:: shell
+
+   $ cd $WORKDIR/aether-pod-configs/production
+   $ git status
+   On branch tools
+   Changes not staged for commit:
+
+      modified:   cluster_map.tfvars
+      modified:   user_map.tfvars
+      modified:   vpn_map.tfvars
+
+   Untracked files:
+   (use "git add <file>..." to include in what will be committed)
+
+      ace-test/
+
+   $ git add .
+   $ git commit -m "Add test ACE"
+   $ git review
+
+Once the review request is accepted and merged,
+the CD pipeline will create VPN tunnels on both GCP and the management node.
+
+Verify VPN connection
+---------------------
+You can verify the VPN connections after the post-merge job has completed successfully
+by checking the routing table on the management node and pinging one of the central cluster VMs.
+Make sure the two tunnel interfaces, `gcp_tunnel1` and `gcp_tunnel2`, exist,
+and that there are three additional routing entries via one of the tunnel interfaces.
+
+.. code-block:: shell
+
+   $ netstat -rn
+   Kernel IP routing table
+   Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
+   0.0.0.0         128.105.144.1   0.0.0.0         UG        0 0          0 eno1
+   10.45.128.0     169.254.0.9     255.255.128.0   UG        0 0          0 gcp_tunnel1
+   10.52.128.0     169.254.0.9     255.255.128.0   UG        0 0          0 gcp_tunnel1
+   10.66.128.0     10.91.0.8       255.255.128.0   UG        0 0          0 eno1
+   10.91.0.0       0.0.0.0         255.255.255.0   U         0 0          0 eno1
+   10.168.0.0      169.254.0.9     255.255.240.0   UG        0 0          0 gcp_tunnel1
+   128.105.144.0   0.0.0.0         255.255.252.0   U         0 0          0 eno1
+   169.254.0.8     0.0.0.0         255.255.255.252 U         0 0          0 gcp_tunnel1
+   169.254.1.8     0.0.0.0         255.255.255.252 U         0 0          0 gcp_tunnel2
+
+   $ ping 10.168.0.6 -c 3
+   PING 10.168.0.6 (10.168.0.6) 56(84) bytes of data.
+   64 bytes from 35.235.67.169: icmp_seq=1 ttl=56 time=67.9 ms
+   64 bytes from 35.235.67.169: icmp_seq=2 ttl=56 time=67.4 ms
+   64 bytes from 35.235.67.169: icmp_seq=3 ttl=56 time=67.1 ms
+
+   --- 10.168.0.6 ping statistics ---
+   3 packets transmitted, 3 received, 0% packet loss, time 2002ms
+   rtt min/avg/max/mdev = 67.107/67.502/67.989/0.422 ms
+
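+If the tunnel interfaces are missing, you can also inspect the IPsec daemon on
+the management node directly. Assuming the Ansible playbook set up strongSwan
+(adjust if your deployment uses a different IPsec implementation):
+
+.. code-block:: shell
+
+   # Connection names may differ; look for two ESTABLISHED tunnels
+   $ sudo ipsec statusall
+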
+Post VPN setup
+--------------
+Once you have verified the VPN connections, rename the `ansible` directory to `_ansible` to prevent
+the Ansible playbook from running again.
+Re-running the playbook does no harm, but it is not recommended.
+
+.. code-block:: shell
+
+   $ cd $WORKDIR/aether-pod-configs/production/$ACE_NAME
+   $ mv ansible _ansible
+   $ git add .
+   $ git commit -m "Mark ansible done for test ACE"
+   $ git review
+
+
 OS Installation - Switches
 ==========================
 
diff --git a/pronto_deployment_guide/connectivity_service_update.rst b/pronto_deployment_guide/connectivity_service_update.rst
index d787ad4..0926771 100644
--- a/pronto_deployment_guide/connectivity_service_update.rst
+++ b/pronto_deployment_guide/connectivity_service_update.rst
@@ -32,8 +32,8 @@
    $ cd $WORKDIR
    $ git clone "ssh://[username]@gerrit.opencord.org:29418/aether-pod-configs"
 
-Create a patch to update omec-control-plane
-===========================================
+Update OMEC control plane configs
+=================================
 Once you successfully download the `aether-pod-configs` repository to your local development machine
 then move the directory to `aether-pod-configs/production/acc-gcp/app_values`
 and edit `omec-control-plane.yml` file to add new user profile and subscribers for the new ACE.
@@ -116,3 +116,30 @@
    $ git commit -m “Update OMEC control plane for the new ACE”
    $ git review
 
+
+Add subscribers to HSSDB
+========================
+Attach to the **cassandra-0** pod and run the `hss-add-user.sh` script to add the subscribers.
+
+.. code-block:: shell
+
+   $ kubectl exec -it cassandra-0 -n omec -- /bin/bash
+   # hss-add-user.sh arguments
+   # count=${1}
+   # imsi=${2}
+   # msisdn=${3}
+   # apn=${4}
+   # key=${5:-'000102030405060708090a0b0c0d0e0f'}
+   # opc=${6:-'69d5c2eb2e2e624750541d3bbc692ba5'}
+   # sqn=${7:-'135'}
+   # cassandra_ip=${8:-'localhost'}
+   # mmeidentity=${9:-'mme.omec.svc.prd.acc.gcp.aetherproject.net'}
+   # mmerealm=${10:-'omec.svc.prd.acc.gcp.aetherproject.net'}
+
+   root@cassandra-0:/# ./hss-add-user.sh \
+      30 \
+      315010102000001 \
+      9999234455 \
+      internet \
+      ACB9E480B30DC12C6BDD26BE882D2940 \
+      F5929B14A34AD906BC44D205242CD182
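+
+The example above provisions 30 subscribers, with IMSIs 315010102000001 through
+315010102000030. To spot-check that the entries landed in Cassandra, you can
+query it from the same pod (the keyspace and table names below assume the
+default c3po HSS schema; adjust if your deployment differs):
+
+.. code-block:: shell
+
+   root@cassandra-0:/# cqlsh localhost -e \
+      "SELECT imsi FROM vhss.users_imsi WHERE imsi='315010102000001';"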
diff --git a/pronto_deployment_guide/run_time_deployment.rst b/pronto_deployment_guide/run_time_deployment.rst
index e465dac..d4e70a3 100644
--- a/pronto_deployment_guide/run_time_deployment.rst
+++ b/pronto_deployment_guide/run_time_deployment.rst
@@ -5,349 +5,60 @@
 ==========================
 Aether Run-Time Deployment
 ==========================
-This section describes how to install Aether edge runtime and connectivity edge applications.
-Aether provides GitOps based automated deployment,
-so we just need to create a couple of patches to aether-pod-configs repository.
+This section describes how to install the Aether edge runtime and Aether managed applications.
+We will be using the GitOps-based Aether CD pipeline for this,
+so we just need to create a patch to the **aether-pod-configs** repository.
 
 Before you begin
 ================
-Make sure you have the edge pod checklist ready. Specifically, the following information is required in this section.
-
-* Management network subnet
-* K8S pod and service IP ranges
-* List of servers and switches, and their management IP addresses
+Make sure the **Update global resource maps** section of :doc:`Bootstrapping <bootstrapping>` has been completed.
 
 Download aether-pod-configs repository
 ======================================
-First, download the aether-pod-configs repository to your development machine.
+Download the aether-pod-configs repository if you don't already have it on your development machine.
 
 .. code-block:: shell
 
    $ cd $WORKDIR
    $ git clone "ssh://[username]@gerrit.opencord.org:29418/aether-pod-configs"
 
-Create first patch to add ACE admin user
-========================================
-The first patch is to add a new ACE admin with full access to `EdgeApps` project.
-Here is an example review request https://gerrit.opencord.org/c/aether-pod-configs/+/21393 you can refer to with the commands below.
-Please replace "new" keyword with the name of the new ACE.
-
-.. code-block:: diff
-
-   $ cd $WORKDIR/aether-pod-configs/production
-   $ vi user_map.tfvars
-   # Add the new cluster admin user to the end of the list
-
-   $ git diff
-   diff --git a/production/user_map.tfvars b/production/user_map.tfvars
-   index c0ec3a3..6b9ffb4 100644
-   --- a/production/user_map.tfvars
-   +++ b/production/user_map.tfvars
-   @@ -40,5 +40,10 @@ user_map = {
-      username      = "menlo"
-      password      = "changeme"
-      global_roles  = ["user-base", "catalogs-use"]
-   +  },
-   +  new_admin = {
-   +    username      = "new"
-   +    password      = "changeme"
-   +    global_roles  = ["user-base", "catalogs-use"]
-      }
-   }
-
-   $ git add production/user_map.tfvars
-   $ git commit -m "Add admin user for new ACE"
-   $ git review
-
-The second patch has dependency on the first patch, so please make sure the first patch is merged before proceeding.
-
-Create second patch to install edge runtime and apps
-====================================================
-Now create another patch that will eventually install K8S and edge applications
-including monitoring and logging stacks as well as Aether connected edge.
-Unlike the first patch, this patch requires creating and editing multiple files.
-Here is an example of the patch https://gerrit.opencord.org/c/aether-pod-configs/+/21395.
-Please replace cluster names and IP addresses in this example accordingly.
-
-Update cluster_map.tfvars
-^^^^^^^^^^^^^^^^^^^^^^^^^
-The first file to edit is `cluster_map.tfvars`.
-Move the directory to `aether-pod-configs/production`, open `cluster_map.tfvars` file, and add the new ACE cluster information at the end of the map.
-This change is required to register a new K8S cluster to Rancher, and update ACC and AMP clusters for inter-cluster service discovery.
-
-.. code-block:: diff
-
-   $ cd $WORKDIR/aether-pod-configs/production
-   $ vi cluster_map.tfvars
-   # Edit the file and add the new cluster information to the end of the map
-
-   $ git diff cluster_map.tfvars
-   diff --git a/production/cluster_map.tfvars b/production/cluster_map.tfvars
-   index c944352..a6d05a8 100644
-   --- a/production/cluster_map.tfvars
-   +++ b/production/cluster_map.tfvars
-   @@ -89,6 +89,16 @@ cluster_map = {
-         kube_dns_cluster_ip     = "10.53.128.10"
-         cluster_domain          = "prd.menlo.aetherproject.net"
-         calico_ip_detect_method = "can-reach=www.google.com"
-   +    },
-   +    ace-new = {
-   +      cluster_name            = "ace-new"
-   +      management_subnets      = ["10.94.1.0/24"]
-   +      k8s_version             = "v1.18.8-rancher1-1"
-   +      k8s_pod_range           = "10.54.0.0/17"
-   +      k8s_cluster_ip_range    = "10.54.128.0/17"
-   +      kube_dns_cluster_ip     = "10.54.128.10"
-   +      cluster_domain          = "prd.new.aetherproject.net"
-   +      calico_ip_detect_method = "can-reach=www.google.com"
-         }
-      }
-   }
-
-Update vpn_map.tfvars
-^^^^^^^^^^^^^^^^^^^^^
-The second file to edit is `vpn_map.tfvars`.
-Move the directory to `aether-pod-configs/production`, open `vpn_map.tfvars` file, and add VPN tunnel information at the end of the map.
-Unless you have specific preference, set ASN and BGP peer addresses to the next available vales in the map.
-This change is required to add tunnels and router interfaces to Aether central.
-
-.. code-block:: diff
-
-   $ cd $WORKDIR/aether-pod-configs/production
-   $ vi vpn_map.tfvars
-   # Edit the file and add VPN tunnel information to the end of the map
-
-   $ git diff vpn_map.tfvars
-   diff --git a/production/vpn_map.tfvars b/production/vpn_map.tfvars
-   index 3c1f9b9..dd62fce 100644
-   --- a/production/vpn_map.tfvars
-   +++ b/production/vpn_map.tfvars
-   @@ -24,5 +24,15 @@ vpn_map = {
-      bgp_peer_ip_address_1    = "169.254.0.6"
-      bgp_peer_ip_range_2      = "169.254.1.5/30"
-      bgp_peer_ip_address_2    = "169.254.1.6"
-   +  },
-   +  ace-new = {
-   +    peer_name                = "production-ace-new"
-   +    peer_vpn_gateway_address = "111.222.333.444"
-   +    tunnel_shared_secret     = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
-   +    bgp_peer_asn             = "65003"
-   +    bgp_peer_ip_range_1      = "169.254.0.9/30"
-   +    bgp_peer_ip_address_1    = "169.254.0.10"
-   +    bgp_peer_ip_range_2      = "169.254.1.9/30"
-   +    bgp_peer_ip_address_2    = "169.254.1.10"
-      }
-   }
-
-Create ACE specific state directory
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Next step is to create a directory containing Terraform configs
-that define desired state of Rancher and GCP resources for the new ACE cluster,
-and ACE specific configurations such as IP addresses of the ACE cluster nodes.
-
-
-Let's create a new directory under `aether-pod-configs/production` and
-symbolic links to predefined Terraform configs(`*.tf` files) that will add
-cluster, projects and applications in Rancher and VPN tunnels and firewall rules in GCP for the new ACE.
-And note that Aether maintains a separate Terraform state per ACE.
-So we will create a remote Terraform state definition for the new ACE, too.
+Create runtime configurations
+=============================
+In this step, we will add several Terraform configurations and overriding values files for the managed applications.
+Run the following commands to auto-generate necessary files under the target ACE directory.
 
 .. code-block:: shell
 
-   # Create symbolic links to pre-defined Terraform configs
-   $ cd $WORKDIR/aether-pod-configs/production
-   $ mkdir ace-new && cd ace-new
-   $ ln -s ../../common/ace-custom/* .
-   $ ln -s ../../common/alerts/alerts.tf .
+   $ cd $WORKDIR/aether-pod-configs/tools
+   $ vi ace_env
+   # Set environment variables
 
-   $ export CLUSTER_NAME=ace-new
-   $ export CLUSTER_DOMAIN=prd.new.aetherproject.net
-
-   # Create Terraform state definition file
-   $ cat >> backend.tf << EOF
-   # SPDX-FileCopyrightText: 2020-present Open Networking Foundation <info@opennetworking.org>
-
-   terraform {
-     backend "gcs" {
-       bucket  = "aether-terraform-bucket"
-       prefix  = "product/${CLUSTER_NAME}"
-     }
-   }
-   EOF
-
-   # Confirm the changes
-   $ tree .
-   .
-   ├── alerts.tf -> ../../common/ace-custom/alerts.tf
-   ├── backend.tf
-   ├── cluster.tf -> ../../common/ace-custom/cluster.tf
-   ├── gcp_fw.tf -> ../../common/ace-custom/gcp_fw.tf
-   ├── gcp_ha_vpn.tf -> ../../common/ace-custom/gcp_ha_vpn.tf
-   ├── main.tf -> ../../common/ace-custom/main.tf
-   └── variables.tf -> ../../common/ace-custom/variables.tf
-
-
-Now create another file called `cluster_val.tfvars` that defines all cluster nodes including switches and servers.
-ACE can have various number of servers and switches but note that an odd number of *servers* can have `etcd` and `controlplane` roles.
-Also, switches are not allowed to play a K8S master or normal worker role.
-So don’t forget to add `node-role.aetherproject.org=switch` to labels and `node-role.aetherproject.org=switch:NoSchedule` to taints.
-
-
-If the ACE requires any special settings, different set of projects for example,
-please take a closer look at `variables.tf` file and override the default values specified there to `cluster_val.tfvars`, too.
-
-.. code-block:: shell
-
-   $ cd $WORKDIR/aether-pod-configs/production/$CLUSTER_NAME
-   $ vi cluster_val.tfvars
-   # SPDX-FileCopyrightText: 2020-present Open Networking Foundation <info@opennetworking.org>
-
-   cluster_name  = "ace-new"
-   cluster_admin = "new_admin"
-   cluster_nodes = {
-     new-prd-leaf1 = {
-       user        = "root"
-       private_key = "~/.ssh/id_rsa_terraform"
-       host        = "10.94.1.3"
-       roles       = ["worker"]
-       labels      = ["node-role.aetherproject.org=switch"]
-       taints      = ["node-role.aetherproject.org=switch:NoSchedule"]
-     },
-     new-server-1 = {
-       user        = "terraform"
-       private_key = "~/.ssh/id_rsa_terraform"
-       host        = "10.94.1.3"
-       roles       = ["etcd", "controlplane", "worker"]
-       labels      = []
-       taints      = []
-     },
-     new-server-2 = {
-       user        = "terraform"
-       private_key = "~/.ssh/id_rsa_terraform"
-       host        = "10.94.1.4"
-       roles       = ["etcd", "controlplane", "worker"]
-       labels      = []
-       taints      = []
-     },
-     new-server-3 = {
-       user        = "terraform"
-       private_key = "~/.ssh/id_rsa_terraform"
-       host        = "10.94.1.5"
-       roles       = ["etcd", "controlplane", "worker"]
-       labels      = []
-       taints      = []
-     }
-   }
-
-   projects = [
-     "system_apps",
-     "connectivity_edge_up4",
-     "edge_apps"
-   ]
-
-Lastly, we will create a couple of overriding values files for the managed applications,
-one for DNS server for UEs and the other for the connectivity edge application, omec-upf-pfcp-agent.
-
-.. code-block:: shell
-
-   $ cd $WORKDIR/aether-pod-configs/production/$CLUSTER_NAME
-   $ mkdir app_values && cd app_values
-
-   $ export CLUSTER_NAME=ace-new
-   $ export CLUSTER_DOMAIN=prd.new.aetherproject.net
-   $ export K8S_DNS=10.54.128.10 # same address as kube_dns_cluster_ip
-   $ export UE_DNS=10.54.128.11  # next address of kube_dns_cluster_ip
-
-   # Create ace-coredns overriding values file
-   $ cat >> ace-coredns.yml << EOF
-   # SPDX-FileCopyrightText: 2020-present Open Networking Foundation <info@opennetworking.org>
-
-   serviceType: ClusterIP
-   service:
-     clusterIP: ${UE_DNS}
-   servers:
-   - zones:
-     - zone: .
-     port: 53
-     plugins:
-     - name: errors
-     - name: health
-       configBlock: |-
-         lameduck 5s
-     - name: ready
-     - name: prometheus
-       parameters: 0.0.0.0:9153
-     - name: forward
-       parameters: . /etc/resolv.conf
-     - name: cache
-       parameters: 30
-     - name: loop
-     - name: reload
-     - name: loadbalance
-   - zones:
-     - zone: apps.svc.${CLUSTER_DOMAIN}
-     port: 53
-     plugins:
-     - name: errors
-     - name: forward
-       parameters: . ${K8S_DNS}
-     - name: cache
-       parameters: 30
-   EOF
-
-   # Create PFCP agent overriding values file
-   $ cat >> omec-upf-pfcp-agent.yml << EOF
-   # SPDX-FileCopyrightText: 2020-present Open Networking Foundation <info@opennetworking.org>
-
-   config:
-     pfcp:
-       cfgFiles:
-         upf.json:
-           p4rtciface:
-             p4rtc_server: "onos-tost-onos-classic-hs.tost.svc.${CLUSTER_DOMAIN}"
-   EOF
-
-Make sure the ace-new directory has all necessary files and before a review request.
-
-.. code-block:: shell
-
-   $ cd $WORKDIR/aether-pod-configs/production/$CLUSTER_NAME
-   $ tree .
-   .
-   ├── alerts.tf -> ../../common/ace-custom/alerts.tf
-   ├── app_values
-   │   ├── ace-coredns.yml
-   │   └── omec-upf-pfcp-agent.yml
-   ├── backend.tf
-   ├── cluster.tf -> ../../common/ace-custom/cluster.tf
-   ├── cluster_val.tfvars
-   ├── gcp_fw.tf -> ../../common/ace-custom/gcp_fw.tf
-   ├── gcp_ha_vpn.tf -> ../../common/ace-custom/gcp_ha_vpn.tf
-   ├── main.tf -> ../../common/ace-custom/main.tf
-   └── variables.tf -> ../../common/ace-custom/variables.tf
+   $ source ace_env
+   $ make runtime
+   Created ../production/ace-test/main.tf
+   Created ../production/ace-test/variables.tf
+   Created ../production/ace-test/cluster.tf
+   Created ../production/ace-test/alerts.tf
+   Created ../production/ace-test/app_values/ace-coredns.yml
+   Created ../production/ace-test/app_values/omec-upf-pfcp-agent.yml
 
 Create a review request
-^^^^^^^^^^^^^^^^^^^^^^^
-Now the patch is ready to review. The final step is to create a pull request!
-Once the patch is accepted and merged, CD pipeline will install ACE runtime based on the patch.
-
+=======================
 .. code-block:: shell
 
-   $ cd $WORKDIR/aether-pod-configs/production
+   $ cd $WORKDIR/aether-pod-configs
    $ git status
-   On branch ace-new
-   Changes not staged for commit:
-   (use "git add <file>..." to update what will be committed)
-   (use "git checkout -- <file>..." to discard changes in working directory)
-
-      modified:   cluster_map.tfvars
-      modified:   vpn_map.tfvars
 
    Untracked files:
    (use "git add <file>..." to include in what will be committed)
 
-      ace-new/
+      production/ace-test/alerts.tf
+      production/ace-test/app_values/
+      production/ace-test/cluster.tf
 
    $ git add .
-   $ git commit -m "Add new ACE"
+   $ git commit -m "Add test ACE runtime configs"
    $ git review
+
+Once the review request is accepted and merged,
+the CD pipeline will start deploying K8S and the Aether managed applications to the new cluster.
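+
+As a quick post-merge sanity check, you can confirm that the new cluster's nodes
+registered and became ready (assuming you have downloaded the `ace-test`
+kubeconfig from Rancher):
+
+.. code-block:: shell
+
+   $ kubectl --kubeconfig ace-test.yaml get nodes
+   $ kubectl --kubeconfig ace-test.yaml get pods -A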