..
   SPDX-FileCopyrightText: © 2020 Open Networking Foundation <support@opennetworking.org>
   SPDX-License-Identifier: Apache-2.0

Server Bootstrap
================

Management Server Bootstrap
"""""""""""""""""""""""""""

The management server is bootstrapped into a customized version of the standard
Ubuntu 18.04 OS installer.

The `iPXE boot firmware <https://ipxe.org/>`_ is used to start this process. It
is built using the steps detailed in the `ipxe-build
<https://gerrit.opencord.org/plugins/gitiles/ipxe-build>`_ repo, which
generates both USB and PXE chainloadable boot images.

Once a system has been started using these images, it downloads a customized
script from an external webserver to continue the boot process. This
iPXE-to-webserver connection is secured with mutual TLS authentication,
enforced by the nginx webserver.
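
As an illustration only (not the actual webserver configuration used in this
deployment), mutual TLS enforcement in nginx is typically expressed with the
``ssl_verify_client`` and ``ssl_client_certificate`` directives; the paths
below are hypothetical::

   server {
       listen 443 ssl;
       root                    /srv/pxe;                             # files served to iPXE
       ssl_certificate         /etc/nginx/certs/server.crt;          # server identity
       ssl_certificate_key     /etc/nginx/certs/server.key;
       ssl_client_certificate  /etc/nginx/certs/ipxe-clients-ca.crt; # CA that signed client certs
       ssl_verify_client       on;   # reject clients that do not present a valid certificate
   }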

The iPXE scripts are created by the `pxeboot
<https://gerrit.opencord.org/plugins/gitiles/ansible/role/pxeboot>`_ role, which
creates a boot menu, downloads the appropriate binaries for bootstrapping an OS
installation, and creates per-node installation preseed files.

The preseed files contain configuration steps to install the OS from the
upstream Ubuntu repos, as well as customizing packages and creating the
``onfadmin`` user.
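
For illustration, the kinds of debian-installer preseed directives involved
look like the following (hypothetical values; the real per-node preseed files
are generated by the pxeboot role)::

   # Install from the upstream Ubuntu mirror
   d-i mirror/http/hostname string archive.ubuntu.com
   d-i mirror/http/directory string /ubuntu

   # Create the onfadmin user (password hash elided)
   d-i passwd/user-fullname string ONF Admin
   d-i passwd/username string onfadmin
   d-i passwd/user-password-crypted password <crypted password hash>

   # Extra packages to install
   d-i pkgsel/include string openssh-server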

Creating a bootable USB drive
'''''''''''''''''''''''''''''

1. Get a USB key. It can be tiny, as the uncompressed image is floppy sized
   (1.4 MB). Download the USB image file (``<date>_onf_ipxe.usb.zip``) on the
   system you're using to write the USB key, and unzip it.

2. Put the USB key in that system, then determine which device file it appears
   as in ``/dev``. You might look at the end of the ``dmesg`` output on
   Linux/Unix or the output of ``diskutil list`` on macOS (example commands are
   shown after this list).

   Be very careful here: if you pick the wrong device you could overwrite
   another disk in your system.

3. Write the image to the device::

      $ dd if=/path/to/20201116_onf_ipxe.usb of=/dev/sdg
      2752+0 records in
      2752+0 records out
      1409024 bytes (1.4 MB, 1.3 MiB) copied, 2.0272 s, 695 kB/s

   You may need to use ``sudo`` for this.
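
For example, finding the right device (device names below are only examples;
yours will differ)::

   $ dmesg | tail                      # Linux: the newest messages name the USB disk (e.g. sdg)
   $ lsblk -o NAME,SIZE,TRAN,MODEL     # Linux: look for the device with the "usb" transport
   $ diskutil list                     # macOS: look for the disk marked "external, physical"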

Boot and Image Management Server
''''''''''''''''''''''''''''''''

1. Connect a USB keyboard and VGA monitor to the management node. Put the USB
   key in one of the management node's USB ports (port 2 or 3):

   .. image:: images/mgmtsrv-000.png
      :alt: Management Server Ports
      :scale: 50%

2. Turn on the management node, and press the F11 key as it starts to get into
   the Boot Menu:

   .. image:: images/mgmtsrv-001.png
      :alt: Management Server Boot Menu
      :scale: 50%

3. Select the USB key (in this case "PNY USB 2.0"; your options may vary) and
   press return. You should see iPXE load:

   .. image:: images/mgmtsrv-002.png
      :alt: iPXE load
      :scale: 50%

4. A menu will appear which displays the system information and DHCP-discovered
   network settings (your network must provide an IP address to the management
   server via DHCP).

   Use the arrow keys to select "Ubuntu 18.04 Installer (fully automatic)":

   .. image:: images/mgmtsrv-003.png
      :alt: iPXE Menu
      :scale: 50%

   The menu has a 10 second timeout; if left untouched it will continue the
   normal system boot process, so restart the system if you miss the window.

5. The Ubuntu 18.04 installer will be downloaded and booted:

   .. image:: images/mgmtsrv-004.png
      :alt: Ubuntu Boot
      :scale: 50%

6. The installer then runs, which takes around 10 minutes depending on your
   connection speed:

   .. image:: images/mgmtsrv-005.png
      :alt: Ubuntu Install
      :scale: 50%

7. At the end of the install, the system will restart and present you with a
   login prompt:

   .. image:: images/mgmtsrv-006.png
      :alt: Ubuntu Install Complete
      :scale: 50%

Management Server Configuration
'''''''''''''''''''''''''''''''

Once the OS is installed, Ansible is used to remotely install software on the
management server.

To check out the ONF ansible repo and enter the virtualenv with the tooling::

   mkdir infra
   cd infra
   repo init -u ssh://<your gerrit username>@gerrit.opencord.org:29418/infra-manifest
   repo sync
   cd ansible
   make galaxy
   source venv_onfansible/bin/activate

Obtain the ``undionly.kpxe`` iPXE artifact for bootstrapping the compute
servers, and put it in the ``playbook/files`` directory.

Next, create an inventory file to access the NetBox API. An example is given
in ``inventory/example-netbox.yml`` - duplicate this file and modify it. Fill
in the ``api_endpoint`` address and ``token`` with an API key obtained from the
NetBox instance. List the IP prefixes used by the site in the ``ip_prefixes``
list.
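
A minimal sketch of what that file contains (values here are hypothetical and
the exact layout should be taken from ``inventory/example-netbox.yml``)::

   # inventory/my-netbox.yml - sketch only; copy and adapt example-netbox.yml
   api_endpoint: "https://netbox.example.com/"
   token: "0123456789abcdef0123456789abcdef01234567"   # API key from the NetBox UI
   ip_prefixes:                                        # IP prefixes used by this site
     - "10.0.0.0/25"
     - "10.0.0.128/25"
     - "10.0.1.0/25"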

Next, run the ``scripts/netbox_edgeconfig.py`` script to generate a host_vars
file for the management server. Assuming that the management server in the edge
is named ``mgmtserver1.stage1.menlo``, you'd run::

   python scripts/netbox_edgeconfig.py inventory/my-netbox.yml > inventory/host_vars/mgmtserver1.stage1.menlo.yml

One manual change needs to be made to this output - edit the
``inventory/host_vars/mgmtserver1.stage1.menlo.yml`` file and add the following
to the bottom of the file, replacing the IP addresses with the management
server's IP address on each segment.

In the case of a fabric with two leaves and two IP ranges, use the management
server IP address on the leaf it is connected to, then add a route to the other
leaf's IP range via the fabric router address within the connected leaf's
range.

This configures `netplan <https://netplan.io>`_ on the management server and
creates an SNAT rule for the UE range route; this step will be automated away
soon::

   # added manually
   netprep_netplan:
     ethernets:
       eno2:
         addresses:
           - 10.0.0.1/25
     vlans:
       mgmt800:
         id: 800
         link: eno2
         addresses:
           - 10.0.0.129/25
       fabr801:
         id: 801
         link: eno2
         addresses:
           - 10.0.1.129/25
         routes:
           - to: 10.0.1.0/25
             via: 10.0.1.254
             metric: 100

   netprep_nftables_nat_postrouting: >
     ip saddr 10.0.1.0/25 ip daddr 10.168.0.0/20 counter snat to 10.0.1.129;
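
Once the management server has been configured by the playbook below, these
settings can be spot-checked on it (assuming the playbook renders them via
netplan and nftables)::

   $ ip -br addr                           # eno2, mgmt800 and fabr801 should carry the addresses above
   $ sudo nft list ruleset | grep snat     # the SNAT rule for the UE range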

Using ``inventory/example-aether.ini`` as a template, create an
:doc:`ansible inventory <ansible:user_guide/intro_inventory>` file for the
site. Change the device names, IP addresses, and ``onfadmin`` password to match
the ones for this site. The management server's configuration is in the
``[aethermgmt]`` and corresponding ``[aethermgmt:vars]`` sections.
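
A hypothetical sketch of that portion of the inventory (hostnames, addresses,
and variable values are placeholders; follow ``inventory/example-aether.ini``
for the exact variables it expects)::

   [aethermgmt]
   mgmtserver1.stage1.menlo ansible_host=<management server IP>

   [aethermgmt:vars]
   ansible_user=onfadmin
   ansible_password=<onfadmin password>
   ansible_become_password=<onfadmin password>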

Then, to configure a management server, run::

   ansible-playbook -i inventory/sitename.ini playbooks/aethermgmt-playbook.yml

This installs software with the following functionality:

- VLANs on the second Ethernet port to provide connectivity to the rest of the pod
- Firewall with NAT for routing traffic
- DHCP and TFTP for bootstrapping servers and switches
- DNS for host naming and identification
- HTTP server for serving files used for bootstrapping switches
- Download of the Tofino switch image
- User accounts for administrative access
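
After the playbook completes, a quick way to confirm that the bootstrap
services above are listening (the port numbers follow the listed protocols -
DNS, DHCP, TFTP, and HTTP; the daemons providing them may vary)::

   $ sudo ss -lntup | grep -E ':(53|67|69|80)\b'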

Compute Server Bootstrap
""""""""""""""""""""""""

Once the management server has finished installation, it will be set to offer
the same iPXE bootstrap file to the compute servers.

Boot each node, and when the iPXE menu loads, select the ``Ubuntu 18.04
Installer (fully automatic)`` option.

The nodes can be controlled remotely via their BMC management interfaces - if
the BMC is at ``10.0.0.3``, a remote user can tunnel to it through the
management server with::

   ssh -L 2443:10.0.0.3:443 onfadmin@<mgmt server ip>

Then use a web browser to access the BMC at::

   https://localhost:2443

The default BMC credentials for the Pronto nodes are::

   login: ADMIN
   password: Admin123
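
If the BMC also has IPMI-over-LAN enabled, the nodes can alternatively be power
cycled from the management server's command line (a hypothetical example;
install ``ipmitool`` there first if it is not already present)::

   onfadmin@mgmtserver1:~$ ipmitool -I lanplus -H 10.0.0.3 -U ADMIN -P Admin123 chassis power status
   onfadmin@mgmtserver1:~$ ipmitool -I lanplus -H 10.0.0.3 -U ADMIN -P Admin123 chassis power cycle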

The BMC will also list all of the MAC addresses for the network interfaces
(including the BMC) that are built into the logic board of the system. Add-in
network cards, like the 40GbE ones used in compute servers, aren't listed.

To prepare the compute nodes, software must be installed on them. As they
can't be accessed directly from your local system, a :ref:`jump host
<ansible:use_ssh_jump_hosts>` configuration is added, so the SSH connection
goes through the management server to the compute systems behind it. Doing this
requires a few steps:

First, configure SSH to use agent forwarding - create or edit your
``~/.ssh/config`` file and add the following lines::

   Host <management server IP>
     ForwardAgent yes

Then try to log in to the management server, and from there to a compute node::

   $ ssh onfadmin@<management server IP>
   Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-54-generic x86_64)
   ...
   onfadmin@mgmtserver1:~$ ssh onfadmin@10.0.0.138
   Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-54-generic x86_64)
   ...
   onfadmin@node2:~$

Being able to log in to the compute nodes from the management node means that
SSH agent forwarding is working correctly.

Verify that your inventory (created earlier from the
``inventory/example-aether.ini`` file) includes an ``[aethercompute]`` section
that has all the names and IP addresses of the compute nodes in it.
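
One common way to express the jump-host arrangement in that section of the
inventory (a sketch only; ``example-aether.ini`` may already provide an
equivalent) is to route the compute nodes' SSH connections through the
management server with ``ProxyJump``::

   [aethercompute]
   node1.stage1.menlo ansible_host=<node1 IP>
   node2.stage1.menlo ansible_host=10.0.0.138

   [aethercompute:vars]
   ansible_user=onfadmin
   ansible_ssh_common_args='-o ProxyJump=onfadmin@<management server IP>'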

Then run a ping test::

   ansible -i inventory/sitename.ini -m ping aethercompute

It may ask you to confirm the SSH host keys - answer ``yes`` for each host to
trust them::

   The authenticity of host '10.0.0.138 (<no hostip for proxy command>)' can't be established.
   ECDSA key fingerprint is SHA256:...
   Are you sure you want to continue connecting (yes/no/[fingerprint])? yes

You should then see a success message for each host::

   node1.stage1.menlo | SUCCESS => {
       "changed": false,
       "ping": "pong"
   }
   node2.stage1.menlo | SUCCESS => {
       "changed": false,
       "ping": "pong"
   }
   ...

Once you've seen this, run the playbook to install the prerequisites (Terraform
user, Docker)::

   ansible-playbook -i inventory/sitename.ini playbooks/aethercompute-playbook.yml

Note that Docker is quite large and may take a few minutes to install,
depending on internet connectivity.
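
Once the playbook finishes, a quick ad-hoc check that Docker is present on
every compute node (add become/privilege options as your inventory requires)::

   ansible -i inventory/sitename.ini -b -m command -a "docker --version" aethercompute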

Now that these compute nodes have been brought up, the rest of the installation
can continue.