..
   SPDX-FileCopyrightText: © 2020 Open Networking Foundation <support@opennetworking.org>
   SPDX-License-Identifier: Apache-2.0

Server Bootstrap
================

Management Server Bootstrap
"""""""""""""""""""""""""""

The management server is bootstrapped into a customized version of the standard
Ubuntu 18.04 OS installer.

The `iPXE boot firmware <https://ipxe.org/>`_ is used to start this process,
and is built using the steps detailed in the `ipxe-build
<https://gerrit.opencord.org/plugins/gitiles/ipxe-build>`_ repo, which
generates both USB and PXE chainloadable boot images.

Once a system has been booted from one of these images, it will download a
customized script from an external webserver to continue the boot process.
This iPXE-to-webserver connection is secured with mutual TLS authentication,
enforced by the nginx webserver.

The iPXE scripts are created by the `pxeboot
<https://gerrit.opencord.org/plugins/gitiles/ansible/role/pxeboot>`_ role,
which creates a boot menu, downloads the appropriate binaries for
bootstrapping an OS installation, and creates per-node installation preseed
files.

The preseed files contain the configuration steps to install the OS from the
upstream Ubuntu repos, as well as steps for customizing packages and creating
the ``onfadmin`` user.

Creating a bootable USB drive
'''''''''''''''''''''''''''''

1. Get a USB key. It can be tiny, as the uncompressed image is floppy-sized
   (1.4MB). Download the USB image file (``<date>_onf_ipxe.usb.zip``) on the
   system you're using to write the USB key, and unzip it.

2. Put the USB key in the system you're using to create the USB key, then
   determine which USB device file it's at in ``/dev``. You might look at the
   end of the ``dmesg`` output on Linux/Unix or the output of ``diskutil
   list`` on macOS.

   Be very careful here: if you accidentally write to some other disk in your
   system, you can destroy its contents.

3. Write the image to the device::

     $ dd if=/path/to/20201116_onf_ipxe.usb of=/dev/sdg
     2752+0 records in
     2752+0 records out
     1409024 bytes (1.4 MB, 1.3 MiB) copied, 2.0272 s, 695 kB/s

   You may need to use ``sudo`` for this.
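
   To double-check that the image was written intact, you can compare the
   image file against the start of the device (a sketch; adjust the image
   path and device name to match your system, and on macOS use ``stat -f%z``
   instead of ``stat -c%s``)::

     # Compare the image with the first bytes of the device;
     # no output and exit status 0 means the copies match.
     $ sudo cmp -n "$(stat -c%s /path/to/20201116_onf_ipxe.usb)" \
         /path/to/20201116_onf_ipxe.usb /dev/sdg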

Boot and Image Management Server
''''''''''''''''''''''''''''''''

1. Connect a USB keyboard and VGA monitor to the management node. Put the USB
   key in one of the management node's USB ports (port 2 or 3):

   .. image:: images/mgmtsrv-000.png
      :alt: Management Server Ports
      :scale: 50%

2. Turn on the management node, and press the F11 key as it starts to get into
   the Boot Menu:

   .. image:: images/mgmtsrv-001.png
      :alt: Management Server Boot Menu
      :scale: 50%

3. Select the USB key (in this case "PNY USB 2.0"; your options may vary) and
   press return. You should see iPXE load:

   .. image:: images/mgmtsrv-002.png
      :alt: iPXE load
      :scale: 50%

4. A menu will appear which displays the system information and the
   DHCP-discovered network settings (your network must provide an IP address
   to the management server via DHCP):

   Use the arrow keys to select "Ubuntu 18.04 Installer (fully automatic)":

   .. image:: images/mgmtsrv-003.png
      :alt: iPXE Menu
      :scale: 50%

   If left untouched, the menu times out after 10 seconds and continues the
   normal system boot process, so restart the system if you miss the window.

5. The Ubuntu 18.04 installer will be downloaded and booted:

   .. image:: images/mgmtsrv-004.png
      :alt: Ubuntu Boot
      :scale: 50%

6. The installer then starts and takes around 10 minutes to run (depending on
   your connection speed):

   .. image:: images/mgmtsrv-005.png
      :alt: Ubuntu Install
      :scale: 50%

7. At the end of the install, the system will restart and present you with a
   login prompt:

   .. image:: images/mgmtsrv-006.png
      :alt: Ubuntu Install Complete
      :scale: 50%

Management Server Configuration
'''''''''''''''''''''''''''''''

Once the OS is installed on the management server, Ansible is used to remotely
install software on it.

To check out the ONF ansible repo and enter the virtualenv with the tooling::

   mkdir infra
   cd infra
   repo init -u ssh://<your gerrit username>@gerrit.opencord.org:29418/infra-manifest
   repo sync
   cd ansible
   make galaxy
   source venv_onfansible/bin/activate

Obtain the ``undionly.kpxe`` iPXE artifact for bootstrapping the compute
servers, and put it in the ``playbook/files`` directory.

Next, create an inventory file to access the NetBox API. An example is given
in ``inventory/example-netbox.yml`` - duplicate this file and modify it. Fill
in the ``api_endpoint`` address and ``token`` with an API key you get out of
the NetBox instance. List the IP prefixes used by the site in the
``ip_prefixes`` list.
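
Such an inventory file might look like the following sketch. The nesting
shown here is purely illustrative - copy ``inventory/example-netbox.yml``
rather than this block, as that file defines the authoritative key names::

   # sitename-netbox.yml (illustrative values only)
   all:
     vars:
       api_endpoint: "https://netbox.example.com"
       token: "0123456789abcdef0123456789abcdef01234567"
       ip_prefixes:
         - "10.0.0.0/25"
         - "10.0.0.128/25"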

Next, run ``scripts/edgeconfig.py`` to generate a host variables file in
``inventory/host_vars/<device name>.yaml`` for the management server and other
compute servers::

   python scripts/edgeconfig.py inventory/staging-netbox.yml

The script uses the **Tenant** as the key to look up data, and writes a
configuration file for each host. These configuration files are only
generated for devices with the roles **Router** and **Server**.

In the case of a Fabric that has two leaves and two IP ranges, add the
Management server IP address used on the leaf it is connected to, and then
add a route to the other leaf's IP range via the Fabric router address within
the connected leaf's range.
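
As a hypothetical example of that routing: if the management server sits on
the connected leaf's range ``10.0.0.0/25`` with the Fabric router at
``10.0.0.1``, and the other leaf uses ``10.0.0.128/25``, the resulting route
is equivalent to::

   ip route add 10.0.0.128/25 via 10.0.0.1

In practice this route is expressed in the generated host variables file
rather than added by hand.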

Using ``inventory/example-aether.ini`` as a template, create an
:doc:`ansible inventory <ansible:user_guide/intro_inventory>` file for the
site. Change the device names, IP addresses, and ``onfadmin`` password to
match the ones for this site. The management server's configuration is in the
``[aethermgmt]`` and corresponding ``[aethermgmt:vars]`` sections.
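
The relevant sections of such an inventory file might look like this sketch
(placeholder names and addresses; ``inventory/example-aether.ini`` is
authoritative for the variable names it uses)::

   [aethermgmt]
   mgmtserver1.sitename ansible_host=<mgmt server ip>

   [aethermgmt:vars]
   ansible_user=onfadmin

   [aethercompute]
   node1.sitename ansible_host=10.0.0.138
   node2.sitename ansible_host=10.0.0.139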

Then, to configure the management server, run::

   ansible-playbook -i inventory/sitename.ini playbooks/aethermgmt-playbook.yml

This installs software with the following functionality:

- VLANs on the second Ethernet port to provide connectivity to the rest of the pod
- Firewall with NAT for routing traffic
- DHCP and TFTP for bootstrapping servers and switches
- DNS for host naming and identification
- HTTP server for serving files used for bootstrapping switches
- Download of the Tofino switch image
- User accounts for administrative access
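
After the playbook finishes, a quick sanity check is to confirm that no
services failed and that the bootstrap services above are listening. The
exact unit names depend on the playbook, so this sketch only uses generic
commands::

   ssh onfadmin@<mgmt server ip>
   systemctl --failed   # should report 0 failed units
   sudo ss -lntu        # look for DHCP (67/udp), TFTP (69/udp), DNS (53), HTTP (80)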

Compute Server Bootstrap
""""""""""""""""""""""""

Once the management server has finished installation, it will be set to offer
the same iPXE bootstrap file to the compute servers.

Each node will be booted, and when iPXE loads select the ``Ubuntu 18.04
Installer (fully automatic)`` option.

The nodes can be controlled remotely via their BMC management interfaces - if
a BMC is at ``10.0.0.3``, a remote user can forward a port to it with::

   ssh -L 2443:10.0.0.3:443 onfadmin@<mgmt server ip>

and then use a web browser to access the BMC at::

   https://localhost:2443

The default BMC credentials for the Pronto nodes are::

   login: ADMIN
   password: Admin123
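
If the BMC supports standard IPMI-over-LAN (an assumption - not all BMC
firmware enables it), the power state can also be queried and controlled with
``ipmitool`` from the management server, which can reach the BMC network
directly::

   ipmitool -I lanplus -H 10.0.0.3 -U ADMIN -P Admin123 chassis power status
   ipmitool -I lanplus -H 10.0.0.3 -U ADMIN -P Admin123 chassis power cycle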

The BMC will also list all of the MAC addresses for the network interfaces
(including the BMC) that are built into the logic board of the system. Add-in
network cards like the 40GbE ones used in compute servers aren't listed.

To prepare the compute nodes, software must be installed on them. As they
can't be accessed directly from your local system, a :ref:`jump host
<ansible:use_ssh_jump_hosts>` configuration is added, so the SSH connection
goes through the management server to the compute systems behind it. Doing
this requires a few steps:

First, configure SSH to use agent forwarding - create or edit your
``~/.ssh/config`` file and add the following lines::

   Host <management server IP>
     ForwardAgent yes
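
Optionally, you can also have SSH hop through the management server
automatically with a ``ProxyJump`` stanza (a sketch; substitute your
management server IP and the compute-node subnet for the hypothetical
``10.0.0.*`` shown here)::

   Host 10.0.0.*
     User onfadmin
     ProxyJump onfadmin@<management server IP>

With this in place, ``ssh 10.0.0.138`` from your local system lands directly
on the compute node.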

Then try to log in to the management server, and from there to a compute
node::

   $ ssh onfadmin@<management server IP>
   Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-54-generic x86_64)
   ...
   onfadmin@mgmtserver1:~$ ssh onfadmin@10.0.0.138
   Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-54-generic x86_64)
   ...
   onfadmin@node2:~$

Being able to log in to the compute nodes from the management node means that
SSH agent forwarding is working correctly.

Verify that your inventory (created earlier from the
``inventory/example-aether.ini`` file) includes an ``[aethercompute]`` section
that has the names and IP addresses of all the compute nodes in it.

Then run a ping test::

   ansible -i inventory/sitename.ini -m ping aethercompute

It may ask you about authorized keys - answer ``yes`` for each host to trust
the keys::

   The authenticity of host '10.0.0.138 (<no hostip for proxy command>)' can't be established.
   ECDSA key fingerprint is SHA256:...
   Are you sure you want to continue connecting (yes/no/[fingerprint])? yes

You should then see a success message for each host::

   node1.stage1.menlo | SUCCESS => {
       "changed": false,
       "ping": "pong"
   }
   node2.stage1.menlo | SUCCESS => {
       "changed": false,
       "ping": "pong"
   }
   ...

Once you've seen this, run the playbook to install the prerequisites (Terraform
user, Docker)::

   ansible-playbook -i inventory/sitename.ini playbooks/aethercompute-playbook.yml

Note that Docker is quite large and may take a few minutes to install,
depending on internet connectivity.

Now that these compute nodes have been brought up, the rest of the
installation can continue.