..
   SPDX-FileCopyrightText: © 2020 Open Networking Foundation <support@opennetworking.org>
   SPDX-License-Identifier: Apache-2.0

Server Bootstrap
================

Management Server Bootstrap
"""""""""""""""""""""""""""

The management server is bootstrapped into a customized version of the standard
Ubuntu 18.04 OS installer.

The `iPXE boot firmware <https://ipxe.org/>`_ is used to start this process,
and is built using the steps detailed in the `ipxe-build
<https://gerrit.opencord.org/plugins/gitiles/ipxe-build>`_ repo, which
generates both USB and PXE chainloadable boot images.

Once a system has been started using these images, it will download a
customized script from an external webserver to continue the boot process.
This iPXE-to-webserver connection is secured with mutual TLS authentication,
enforced by the nginx webserver.
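
The mutual TLS fetch can also be exercised by hand when debugging; this is a
hypothetical sketch (the certificate, key, and URL are placeholders, not the
actual site values)::

  curl --cert client.crt --key client.key https://pxeboot.example.com/boot.ipxe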

The iPXE scripts are created by the `pxeboot
<https://gerrit.opencord.org/plugins/gitiles/ansible/role/pxeboot>`_ role,
which creates a boot menu, downloads the appropriate binaries for
bootstrapping an OS installation, and creates per-node installation preseed
files.

The preseed files contain configuration steps to install the OS from the
upstream Ubuntu repos, as well as customizing packages and creating the
``onfadmin`` user.
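
As a rough illustration, a preseed file is a list of debian-installer
directives like the following excerpt (standard preseed syntax, but with
placeholder values rather than the generated site configuration)::

  # Illustrative excerpt; values are placeholders
  d-i mirror/http/hostname string archive.ubuntu.com
  d-i mirror/http/directory string /ubuntu
  d-i passwd/username string onfadmin
  d-i passwd/user-fullname string ONF Admin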

Creating a bootable USB drive
'''''''''''''''''''''''''''''

1. Get a USB key. It can be tiny, as the uncompressed image is floppy-sized
   (1.4MB). Download the USB image file (``<date>_onf_ipxe.usb.zip``) on the
   system you're using to write the USB key, and unzip it.

2. Put a USB key in the system you're using to create the USB key, then
   determine which device file it appears as in ``/dev``. You might look at
   the end of the ``dmesg`` output on Linux/Unix or the output of ``diskutil
   list`` on macOS, as in the example below.

   Be very careful here: accidentally overwriting some other disk in your
   system would be highly problematic.
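
   For example, on Linux, the newly attached key shows up in the most recent
   kernel messages and in the block device list (a sketch; the device name
   will differ on your system)::

     $ dmesg | tail
     $ lsblk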

3. Write the image to the device::

     $ dd if=/path/to/20201116_onf_ipxe.usb of=/dev/sdg
     2752+0 records in
     2752+0 records out
     1409024 bytes (1.4 MB, 1.3 MiB) copied, 2.0272 s, 695 kB/s

   You may need to use ``sudo`` for this.
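
   To double-check the write, you can compare the device contents against the
   image file (a sketch; the byte count is the image size from the output
   above)::

     $ sudo cmp -n 1409024 /path/to/20201116_onf_ipxe.usb /dev/sdg && echo OK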

Boot and Image Management Server
''''''''''''''''''''''''''''''''

1. Connect a USB keyboard and VGA monitor to the management node. Put the USB
   key in one of the management node's USB ports (port 2 or 3):

   .. image:: images/mgmtsrv-000.png
      :alt: Management Server Ports
      :scale: 50%

2. Turn on the management node, and press the F11 key as it starts to get into
   the Boot Menu:

   .. image:: images/mgmtsrv-001.png
      :alt: Management Server Boot Menu
      :scale: 50%

3. Select the USB key (in this case "PNY USB 2.0"; your options may vary) and
   press return. You should see iPXE load:

   .. image:: images/mgmtsrv-002.png
      :alt: iPXE load
      :scale: 50%

4. A menu will appear, displaying the system information and DHCP-discovered
   network settings (your network must provide the IP address to the
   management server via DHCP):

   Use the arrow keys to select "Ubuntu 18.04 Installer (fully automatic)":

   .. image:: images/mgmtsrv-003.png
      :alt: iPXE Menu
      :scale: 50%

   The menu has a 10 second timeout; if left untouched, it continues the
   normal system boot process, so restart the system if you miss the window.

5. The Ubuntu 18.04 installer will be downloaded and booted:

   .. image:: images/mgmtsrv-004.png
      :alt: Ubuntu Boot
      :scale: 50%

6. Then the installer starts; it takes around 10 minutes to run (depending on
   your connection speed):

   .. image:: images/mgmtsrv-005.png
      :alt: Ubuntu Install
      :scale: 50%


7. At the end of the install, the system will restart and present you with a
   login prompt:

   .. image:: images/mgmtsrv-006.png
      :alt: Ubuntu Install Complete
      :scale: 50%


Management Server Configuration
'''''''''''''''''''''''''''''''

Once the OS is installed on the management server, Ansible is used to remotely
install the rest of its software.

To check out the ONF ansible repo and enter the virtualenv with the tooling::

  mkdir infra
  cd infra
  repo init -u ssh://<your gerrit username>@gerrit.opencord.org:29418/infra-manifest
  repo sync
  cd ansible
  make galaxy
  source venv_onfansible/bin/activate

Obtain the ``undionly.kpxe`` iPXE artifact for bootstrapping the compute
servers, and put it in the ``playbook/files`` directory.
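
For example, if the artifact was downloaded to your home directory (the
source path here is a placeholder)::

  cp ~/Downloads/undionly.kpxe playbook/files/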

Next, create an inventory file to access the NetBox API. An example is given
in ``inventory/example-netbox.yml``; duplicate this file and modify it. Fill
in the ``api_endpoint`` address and ``token`` with an API token obtained from
the NetBox instance. List the IP prefixes used by the site in the
``ip_prefixes`` list.
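
A rough sketch of such a file (the endpoint, token, and prefixes below are
placeholders; follow the structure of ``inventory/example-netbox.yml``)::

  # Placeholder values; copy the real structure from the example file
  api_endpoint: https://netbox.example.com
  token: "0123456789abcdef0123456789abcdef"
  ip_prefixes:
    - 10.0.0.0/25
    - 10.0.0.128/25
    - 10.0.1.0/25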

Next, run ``scripts/netbox_edgeconfig.py`` to generate a host_vars file for
the management server. Assuming that the management server in the edge is
named ``mgmtserver1.stage1.menlo``, you'd run::

  python scripts/netbox_edgeconfig.py inventory/my-netbox.yml > inventory/host_vars/mgmtserver1.stage1.menlo.yml

One manual change needs to be made to this output: edit the
``inventory/host_vars/mgmtserver1.stage1.menlo.yml`` file and add the following
to the bottom of the file, replacing the IP addresses with the management
server's IP address on each segment.

In the case of a fabric with two leaves (and two IP ranges), add the
management server's IP address on the leaf it is connected to, and then add a
route to the other leaf's IP range via the fabric router address within the
connected leaf's range.

This configures `netplan <https://netplan.io>`_ on the management server and
creates an SNAT rule for the UE range route; this step will be automated away
soon::

  # added manually
  netprep_netplan:
    ethernets:
      eno2:
        addresses:
          - 10.0.0.1/25
    vlans:
      mgmt800:
        id: 800
        link: eno2
        addresses:
          - 10.0.0.129/25
      fabr801:
        id: 801
        link: eno2
        addresses:
          - 10.0.1.129/25
        routes:
          - to: 10.0.1.0/25
            via: 10.0.1.254
            metric: 100

  netprep_nftables_nat_postrouting: >
    ip saddr 10.0.1.0/25 ip daddr 10.168.0.0/20 counter snat to 10.0.1.129;


Using ``inventory/example-aether.ini`` as a template, create an
:doc:`ansible inventory <ansible:user_guide/intro_inventory>` file for the
site. Change the device names, IP addresses, and ``onfadmin`` password to
match the ones for this site. The management server's configuration is in the
``[aethermgmt]`` and corresponding ``[aethermgmt:vars]`` sections.
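
A minimal sketch of those sections (the hostname, address, and password are
placeholders; copy the real structure from ``inventory/example-aether.ini``)::

  [aethermgmt]
  mgmtserver1.stage1.menlo ansible_host=<management server IP>

  [aethermgmt:vars]
  ansible_user=onfadmin
  ansible_password=<onfadmin password>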

Then, to configure a management server, run::

  ansible-playbook -i inventory/sitename.ini playbooks/aethermgmt-playbook.yml

This installs software with the following functionality:

- VLANs on the second Ethernet port to provide connectivity to the rest of the pod
- Firewall with NAT for routing traffic
- DHCP and TFTP for bootstrapping servers and switches
- DNS for host naming and identification
- HTTP server for serving files used for bootstrapping switches
- A downloaded copy of the Tofino switch image
- User accounts for administrative access
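
After the playbook has run, the network configuration generated from the
host_vars above can be spot-checked on the management server (a sketch; the
interface names match the netplan configuration shown earlier)::

  # Confirm the VLAN interfaces and their addresses
  ip addr show mgmt800
  ip addr show fabr801
  # Confirm the SNAT rule appears in the active nftables ruleset
  sudo nft list ruleset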

Compute Server Bootstrap
""""""""""""""""""""""""

Once the management server has finished installation, it will be set to offer
the same iPXE bootstrap file to the compute servers.

Each node will be booted, and when iPXE loads, select the ``Ubuntu 18.04
Installer (fully automatic)`` option.

The nodes can be controlled remotely via their BMC management interfaces. If
the BMC is at ``10.0.0.3``, a remote user can tunnel to it through the
management server with::

  ssh -L 2443:10.0.0.3:443 onfadmin@<mgmt server ip>

And then use a web browser to access the BMC at::

  https://localhost:2443

The default BMC credentials for the Pronto nodes are::

  login: ADMIN
  password: Admin123

The BMC will also list all of the MAC addresses for the network interfaces
(including the BMC) that are built into the logic board of the system. Add-in
network cards, like the 40GbE ones used in compute servers, aren't listed.

To prepare the compute nodes, software must be installed on them. As they
can't be accessed directly from your local system, a :ref:`jump host
<ansible:use_ssh_jump_hosts>` configuration is added, so the SSH connection
goes through the management server to the compute systems behind it. Doing
this requires a few steps:

First, configure SSH to use agent forwarding: create or edit your
``~/.ssh/config`` file and add the following lines::

  Host <management server IP>
    ForwardAgent yes
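
Ansible can also make this jump itself using SSH's ``ProxyJump`` option, as
described in the jump host documentation linked above; a sketch of the
inventory variable (add it to the compute group's vars)::

  [aethercompute:vars]
  ansible_ssh_common_args='-o ProxyJump="onfadmin@<management server IP>"'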

Then try to log in to the management server, then the compute node::

  $ ssh onfadmin@<management server IP>
  Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-54-generic x86_64)
  ...
  onfadmin@mgmtserver1:~$ ssh onfadmin@10.0.0.138
  Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-54-generic x86_64)
  ...
  onfadmin@node2:~$

Being able to log in to the compute nodes from the management node means that
SSH agent forwarding is working correctly.

Verify that your inventory (created earlier from the
``inventory/example-aether.ini`` file) includes an ``[aethercompute]`` section
that has all the names and IP addresses of the compute nodes in it.
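
For example (node names follow the conventions used above; the addresses are
placeholders for your site's values)::

  [aethercompute]
  node1.stage1.menlo ansible_host=10.0.0.137
  node2.stage1.menlo ansible_host=10.0.0.138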

Then run a ping test::

  ansible -i inventory/sitename.ini -m ping aethercompute

It may ask you about authorized keys; answer ``yes`` for each host to trust
the keys::

  The authenticity of host '10.0.0.138 (<no hostip for proxy command>)' can't be established.
  ECDSA key fingerprint is SHA256:...
  Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
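
If you'd rather not answer this prompt for every host on the first run, host
key checking can be disabled for a single invocation (use with care, as it
skips SSH host verification)::

  ANSIBLE_HOST_KEY_CHECKING=False ansible -i inventory/sitename.ini -m ping aethercompute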

You should then see a success message for each host::

  node1.stage1.menlo | SUCCESS => {
      "changed": false,
      "ping": "pong"
  }
  node2.stage1.menlo | SUCCESS => {
      "changed": false,
      "ping": "pong"
  }
  ...

Once you've seen this, run the playbook to install the prerequisites (Terraform
user, Docker)::

  ansible-playbook -i inventory/sitename.ini playbooks/aethercompute-playbook.yml

Note that Docker is quite large and may take a few minutes to install,
depending on internet connectivity.

Now that these compute nodes have been brought up, the rest of the installation
can continue.
293can continue.