..
   SPDX-FileCopyrightText: © 2020 Open Networking Foundation <support@opennetworking.org>
   SPDX-License-Identifier: Apache-2.0

VPN Bootstrap
=============

This section walks you through how to set up a VPN between ACE and Aether
Central in GCP. We will be using the GitOps-based Aether CD pipeline for this,
so we just need to create a patch to the **aether-pod-configs** repository.
Note that some of the steps described here are not directly related to setting
up a VPN, but rather are prerequisites for adding a new ACE.

.. attention::

   If you are adding another ACE to an existing VPN connection, go to
   :ref:`Add ACE to an existing VPN connection <add_ace_to_vpn>`.

Before you begin
----------------

* Make sure the firewall in front of ACE allows UDP port 500, UDP port 4500,
  and ESP packets from **gcpvpn1.infra.aetherproject.net (35.242.47.15)** and
  **gcpvpn2.infra.aetherproject.net (34.104.68.78)** (see the example rules
  after this list)
* Make sure that the external IP on the ACE side is owned by, or routed to,
  the management node
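
For example, with an iptables-based firewall in front of the management node,
rules along the following lines would admit the required traffic. This is only
a sketch assuming iptables is in use; adapt it to your actual firewall tooling.

.. code-block:: shell

   # Allow IKE (UDP 500), NAT-T (UDP 4500), and ESP from both GCP VPN gateways
   for gw in 35.242.47.15 34.104.68.78; do
       sudo iptables -A INPUT -p udp -s $gw --dport 500  -j ACCEPT
       sudo iptables -A INPUT -p udp -s $gw --dport 4500 -j ACCEPT
       sudo iptables -A INPUT -p esp -s $gw -j ACCEPT
   done
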
To help your understanding, the following sample ACE environment will be used
in the rest of this section. Make sure to replace the sample values when you
actually create a review request.

+-----------------------------+----------------------------------+
| Management node external IP | 128.105.144.189 |
+-----------------------------+----------------------------------+
| ASN | 65003 |
+-----------------------------+----------------------------------+
| GCP BGP IP address | Tunnel 1: 169.254.0.9/30 |
| +----------------------------------+
| | Tunnel 2: 169.254.1.9/30 |
+-----------------------------+----------------------------------+
| ACE BGP IP address | Tunnel 1: 169.254.0.10/30 |
| +----------------------------------+
| | Tunnel 2: 169.254.1.10/30 |
+-----------------------------+----------------------------------+
| PSK | UMAoZA7blv6gd3IaArDqgK2s0sDB8mlI |
+-----------------------------+----------------------------------+
| Management Subnet | 10.91.0.0/24 |
+-----------------------------+----------------------------------+
| K8S Subnet | Pod IP: 10.66.0.0/17 |
| +----------------------------------+
| | Cluster IP: 10.66.128.0/17 |
+-----------------------------+----------------------------------+

Download aether-pod-configs repository
--------------------------------------

.. code-block:: shell

   $ cd $WORKDIR
   $ git clone "ssh://[username]@gerrit.opencord.org:29418/aether-pod-configs"

.. _update_global_resource:

Update global resource maps
---------------------------

Add the new ACE information at the end of the following global resource maps.

* ``user_map.tfvars``
* ``cluster_map.tfvars``
* ``vpn_map.tfvars``

As a note, you can find several other global resource maps under the
``production`` directory. Resource definitions that need to be shared among
clusters, or that are better managed in a single file to avoid configuration
conflicts, are maintained in this way.

.. code-block:: diff

   $ cd $WORKDIR/aether-pod-configs/production
   $ vi user_map.tfvars

   # Add the new cluster admin user at the end of the map

   $ git diff user_map.tfvars
   --- a/production/user_map.tfvars
   +++ b/production/user_map.tfvars
   @@ user_map = {
        username = "menlo"
        password = "changeme"
        global_roles = ["user-base", "catalogs-use"]
   +  },
   +  test_admin = {
   +    username = "test"
   +    password = "changeme"
   +    global_roles = ["user-base", "catalogs-use"]
      }
    }

.. code-block:: diff

   $ cd $WORKDIR/aether-pod-configs/production
   $ vi cluster_map.tfvars

   # Add the new K8S cluster information at the end of the map

   $ git diff cluster_map.tfvars
   --- a/production/cluster_map.tfvars
   +++ b/production/cluster_map.tfvars
   @@ cluster_map = {
        kube_dns_cluster_ip = "10.53.128.10"
        cluster_domain = "prd.menlo.aetherproject.net"
        calico_ip_detect_method = "can-reach=www.google.com"
   +  },
   +  ace-test = {
   +    cluster_name = "ace-test"
   +    management_subnets = ["10.91.0.0/24"]
   +    k8s_version = "v1.18.8-rancher1-1"
   +    k8s_pod_range = "10.66.0.0/17"
   +    k8s_cluster_ip_range = "10.66.128.0/17"
   +    kube_dns_cluster_ip = "10.66.128.10"
   +    cluster_domain = "prd.test.aetherproject.net"
   +    calico_ip_detect_method = "can-reach=www.google.com"
      }
    }
   }

.. code-block:: diff

   $ cd $WORKDIR/aether-pod-configs/production
   $ vi vpn_map.tfvars

   # Add VPN and tunnel information at the end of the map

   $ git diff vpn_map.tfvars
   --- a/production/vpn_map.tfvars
   +++ b/production/vpn_map.tfvars
   @@ vpn_map = {
        bgp_peer_ip_address_1 = "169.254.0.6"
        bgp_peer_ip_range_2 = "169.254.1.5/30"
        bgp_peer_ip_address_2 = "169.254.1.6"
   +  },
   +  ace-test = {
   +    peer_name = "production-ace-test"
   +    peer_vpn_gateway_address = "128.105.144.189"
   +    tunnel_shared_secret = "UMAoZA7blv6gd3IaArDqgK2s0sDB8mlI"
   +    bgp_peer_asn = "65003"
   +    bgp_peer_ip_range_1 = "169.254.0.9/30"
   +    bgp_peer_ip_address_1 = "169.254.0.10"
   +    bgp_peer_ip_range_2 = "169.254.1.9/30"
   +    bgp_peer_ip_address_2 = "169.254.1.10"
      }
    }

.. note::

   Unless you have a specific requirement, set the ASN and BGP addresses to
   the next available values in the map. As in the sample table above,
   ``bgp_peer_ip_range_*`` is the GCP-side BGP address with its /30 mask,
   and ``bgp_peer_ip_address_*`` is the ACE-side address in the same /30.
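
One way to pick the next available values is to inspect what is already taken
in the map before choosing. A quick sketch:

.. code-block:: shell

   $ cd $WORKDIR/aether-pod-configs/production
   # List the ASNs already in use, then pick the next one
   $ grep bgp_peer_asn vpn_map.tfvars
   # List the 169.254.x.x/30 ranges already in use, then pick the next free pair
   $ grep bgp_peer_ip_range vpn_map.tfvars
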
Create ACE specific configurations
----------------------------------

In this step, we will create a directory under `production` with the same name
as the ACE, and add several Terraform configurations and the Ansible inventory
needed to configure a VPN connection.
Throughout the deployment procedure, this directory will contain all ACE
specific configurations.

Run the following commands to auto-generate the necessary files under the
target ACE directory.

.. code-block:: shell

   $ cd $WORKDIR/aether-pod-configs/tools
   $ cp ace_env /tmp/ace_env
   $ vi /tmp/ace_env

   # Set environment variables

   $ source /tmp/ace_env
   $ make vpn
   Created ../production/ace-test
   Created ../production/ace-test/main.tf
   Created ../production/ace-test/variables.tf
   Created ../production/ace-test/gcp_fw.tf
   Created ../production/ace-test/gcp_ha_vpn.tf
   Created ../production/ace-test/ansible
   Created ../production/ace-test/backend.tf
   Created ../production/ace-test/cluster_val.tfvars
   Created ../production/ace-test/ansible/hosts.ini
   Created ../production/ace-test/ansible/extra_vars.yml
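
For reference, the environment file might look like the following for the
sample ACE. Only ``ACE_NAME`` is used elsewhere in this guide; the other
variable names here are assumptions, so check the ``ace_env`` template for the
exact set it defines.

.. code-block:: shell

   # Hypothetical /tmp/ace_env contents based on the sample values above;
   # verify the variable names against the tools/ace_env template.
   export ACE_NAME=ace-test
   export MGMT_SUBNET=10.91.0.0/24
   export K8S_POD_RANGE=10.66.0.0/17
   export K8S_CLUSTER_IP_RANGE=10.66.128.0/17
   export VPN_PEER_GATEWAY=128.105.144.189
   export VPN_BGP_PEER_ASN=65003
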
.. attention::

   The predefined templates are tailored to the Pronto BOM. You'll need to fix
   `cluster_val.tfvars` and `ansible/extra_vars.yml` when using a different
   BOM.

Create a review request
-----------------------

.. code-block:: shell

   $ cd $WORKDIR/aether-pod-configs/production
   $ git status
   On branch tools
   Changes not staged for commit:

     modified:   cluster_map.tfvars
     modified:   user_map.tfvars
     modified:   vpn_map.tfvars

   Untracked files:
     (use "git add <file>..." to include in what will be committed)

     ace-test/

   $ git add .
   $ git commit -m "Add test ACE"
   $ git review

Once the review request is accepted and merged, the CD pipeline will create
VPN tunnels on both GCP and the management node.

Verify VPN connection
---------------------

You can verify the VPN connections after a successful post-merge job by
checking the routing table on the management node and trying to ping one of
the central cluster VMs.
Make sure that the two tunnel interfaces, `gcp_tunnel1` and `gcp_tunnel2`,
exist, and that three additional routing entries point via one of the tunnel
interfaces.

.. code-block:: shell

   # Verify routings
   $ netstat -rn
   Kernel IP routing table
   Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
   0.0.0.0         128.105.144.1   0.0.0.0         UG        0 0          0 eno1
   10.45.128.0     169.254.0.9     255.255.128.0   UG        0 0          0 gcp_tunnel1
   10.52.128.0     169.254.0.9     255.255.128.0   UG        0 0          0 gcp_tunnel1
   10.66.128.0     10.91.0.8       255.255.128.0   UG        0 0          0 eno1
   10.91.0.0       0.0.0.0         255.255.255.0   U         0 0          0 eno1
   10.168.0.0      169.254.0.9     255.255.240.0   UG        0 0          0 gcp_tunnel1
   128.105.144.0   0.0.0.0         255.255.252.0   U         0 0          0 eno1
   169.254.0.8     0.0.0.0         255.255.255.252 U         0 0          0 gcp_tunnel1
   169.254.1.8     0.0.0.0         255.255.255.252 U         0 0          0 gcp_tunnel2

   # Verify ACC VM access
   $ ping 10.168.0.6

   # Verify ACC K8S cluster access
   $ nslookup kube-dns.kube-system.svc.prd.acc.gcp.aetherproject.net 10.52.128.10

You can further verify that the ACE routes are propagated to GCP by checking
**VPC Network > Routes > Dynamic** in the GCP dashboard.
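
If you prefer the command line, the same information can be inspected with
``gcloud``. This is only a sketch; the router name and region below are
placeholders for your actual Cloud Router values.

.. code-block:: shell

   # Look for the learned routes (bestRoutes) covering the ACE subnets
   $ gcloud compute routers get-status <ROUTER_NAME> --region <REGION>
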
Post VPN setup
--------------

Once you verify the VPN connections, please rename the `ansible` directory to
`_ansible` to prevent the Ansible playbook from running again. Note that
re-running the playbook does no harm, but it is not recommended.

.. code-block:: shell

   $ cd $WORKDIR/aether-pod-configs/production/$ACE_NAME
   $ mv ansible _ansible
   $ git add .
   $ git commit -m "Mark ansible done for test ACE"
   $ git review

.. _add_ace_to_vpn:

Add another ACE to an existing VPN connection
"""""""""""""""""""""""""""""""""""""""""""""

VPN connections can be shared when there are multiple ACE clusters in a site.
In order to add an ACE to an existing VPN connection, you'll have to SSH into
the management node and manually update the BIRD configuration.

.. note::

   This step needs improvements in the future.

.. code-block:: shell

   $ sudo vi /etc/bird/bird.conf
   protocol static {
      ...
      route 10.66.128.0/17 via 10.91.0.10;

      # Add routes for the new ACE's K8S cluster IP range via the cluster nodes
      # TODO: Configure iBGP peering with Calico nodes and dynamically learn these routes
      route <NEW-ACE-CLUSTER-IP-RANGE> via <SERVER1>;
      route <NEW-ACE-CLUSTER-IP-RANGE> via <SERVER2>;
      route <NEW-ACE-CLUSTER-IP-RANGE> via <SERVER3>;
   }

   filter gcp_tunnel_out {
      # Add the new ACE's K8S cluster IP range, and the management subnet if
      # required, to the list
      if (net ~ [ 10.91.0.0/24, 10.66.128.0/17, <NEW-ACE-CLUSTER-IP-RANGE> ]) then accept;
      else reject;
   }
   # Save and exit

   $ sudo birdc configure

   # Confirm the static routes are added
   $ sudo birdc show route