[CORD-2585]
Lint check documentation with markdownlint

Change-Id: I9c87dad207b8c3b209b320a0a3b6fb16e9d86e75
(cherry picked from commit cb807976a4491e1c391acea4272688f6aad7dbcd)
diff --git a/docs/PLATFORM_INSTALL_INTERNALS.md b/docs/PLATFORM_INSTALL_INTERNALS.md
index 6af2848..f73cdeb 100644
--- a/docs/PLATFORM_INSTALL_INTERNALS.md
+++ b/docs/PLATFORM_INSTALL_INTERNALS.md
@@ -1,36 +1,57 @@
 # Platform-Install Internals
 
-This repository consists of some Ansible playbooks that deploy and configure OpenStack,
-ONOS, and XOS in a CORD POD, as well as some Gradle "glue" to invoke these playbooks
-during the process of building a [single-node POD](https://wiki.opencord.org/display/CORD/Build+CORD-in-a-Box)
-and a [multi-node POD](https://wiki.opencord.org/display/CORD/Build+a+CORD+POD).
+This repository consists of some Ansible playbooks that deploy and configure
+OpenStack, ONOS, and XOS in a CORD POD, as well as some Gradle "glue" to invoke
+these playbooks during the process of building a [single-node
+POD](https://wiki.opencord.org/display/CORD/Build+CORD-in-a-Box) and a
+[multi-node POD](https://wiki.opencord.org/display/CORD/Build+a+CORD+POD).
 
 ## Prerequisites
 
-When platform-install starts, it is assumed that `gradlew fetch` has already been run on the cord repo, fetching the necessary sub-repositories for CORD. This includes fetching the platform-install repository.
+When platform-install starts, it is assumed that `gradlew fetch` has already
+been run on the cord repo, fetching the necessary sub-repositories for CORD.
+This includes fetching the platform-install repository.
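+
+As a minimal sketch, assuming the uber-cord repo is checked out at `/cord`,
+that prerequisite step looks like:
+
+```shell
+cd /cord && ./gradlew fetch
+```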
 
-For the purposes of this document, paths are relative to the root of the platform-install repo unless specified otherwise. When starting from the uber-cord repo, platform-install is usually located at `/cord/components/platform-install`.
+For the purposes of this document, paths are relative to the root of the
+platform-install repo unless specified otherwise. When starting from the
+uber-cord repo, platform-install is usually located at
+`/cord/components/platform-install`.
 
 ## Configuration
 
-Platform-install uses a configuration file, `config/default.yml`, that contains several variables that will be passed to Ansible playbooks. Notable variables include the IP address of the target machine and user account information for SSHing into the target machine. There's also an extra-variable, `on-cloudlab` that will trigger additional cloudlab-specific actions to run.
+Platform-install uses a configuration file, `config/default.yml`, that contains
+several variables that will be passed to Ansible playbooks. Notable variables
+include the IP address of the target machine and user account information for
+SSHing into the target machine. There's also an extra variable, `on-cloudlab`,
+that triggers additional CloudLab-specific actions.
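+
+As an illustration, a stripped-down `config/default.yml` might look like the
+following (apart from `on-cloudlab`, the keys shown are examples, not an
+authoritative schema):
+
+```yaml
+# Hypothetical excerpt of config/default.yml
+seedServer:
+  ip: '10.100.198.201'     # IP address of the target machine
+  user: 'ubuntu'           # account used when SSHing into the target
+  extraVars:
+    - 'on-cloudlab=true'   # run the CloudLab-specific disk setup
+```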
 
-Cloudlab nodes boot with small disk partitions setup, and most of the disk space unallocated. Setting the variable `on-cloudlab` in `config/default.yml` to true will cause actions to be run that will allocate this unallocated space.
+Cloudlab nodes boot with small disk partitions set up and most of the disk
+space unallocated. Setting the variable `on-cloudlab` in `config/default.yml`
+to `true` will run actions that allocate this unallocated space.
 
 ## Gradle Scripts
 
 The main gradle script is located in `build.gradle`.
 
-`build.gradle` includes two notable tasks, `deployPlatform` and `deploySingle`. These are for multi-node and single-node pod installs and end up executing the Ansible playbooks `cord-head-playbook.yml` and `cord-single-playbook.yml` respectively.
+`build.gradle` includes two notable tasks, `deployPlatform` and `deploySingle`.
+These are for multi-node and single-node pod installs and end up executing the
+Ansible playbooks `cord-head-playbook.yml` and `cord-single-playbook.yml`
+respectively.
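+
+For example, from the uber-cord repo a single-node platform install would be
+kicked off with something like (the exact Gradle invocation may vary by
+release):
+
+```shell
+./gradlew deploySingle
+```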
 
 ## Ansible Playbooks
 
-Platform-install makes extensive use of Ansible Roles, and the roles are selected via two playbooks: `cord-head-playbook.yml` and `cord-single-playbook.yml`.
+Platform-install makes extensive use of Ansible Roles, and the roles are
+selected via two playbooks: `cord-head-playbook.yml` and
+`cord-single-playbook.yml`.
 
-They key differences are that:
+The key differences are that:
-* The single-node playbook sets up a simulated fabric, whereas the multi-node install uses a real fabric.
-* The single-node playbook sets up a single compute node running in a VM, whereas the multi-node playbook uses maas to provision compute nodes.
-* The single-node playbook installs a DNS server. The multi-node playbook only installs a DNS Server when maas is not used.
+
+* The single-node playbook sets up a simulated fabric, whereas the multi-node
+  install uses a real fabric.
+* The single-node playbook sets up a single compute node running in a VM,
+  whereas the multi-node playbook uses maas to provision compute nodes.
+* The single-node playbook installs a DNS server. The multi-node playbook only
+  installs a DNS Server when maas is not used.
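+
+Structurally, each playbook is just a list of plays that apply roles to hosts;
+an illustrative (not verbatim) sketch of the shape is:
+
+```yaml
+# Illustrative shape only; the real playbooks apply many more roles
+- hosts: head
+  become: yes
+  roles:
+    - dns-nsd
+    - dns-unbound
+```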
 
 ## Ansible Roles and Variables
 
@@ -40,7 +61,10 @@
 
 ### DNS-server and Apt Cache
 
-The first step in bringing up the platform is to setup a DNS server. This is done for the single-node install, and for the multi-node install if maas is not used. An apt cache is setup to facilitate package installation in the many VMs that will be setup as part of the platform. Roles executed include:
+The first step in bringing up the platform is to set up a DNS server. This is
+done for the single-node install, and for the multi-node install if maas is not
+used. An apt cache is set up to facilitate package installation in the many VMs
+that will be created as part of the platform. Roles executed include:
 
 * dns-nsd
 * dns-unbound
@@ -48,19 +72,23 @@
 
 ### Pointing to the DNS server
 
-Assuming a DNS server was setup in the previous step, then the next step is to point the head node to use that DNS server. Roles executed include:
+Assuming a DNS server was set up in the previous step, the next step is to
+point the head node at that DNS server. Roles executed include:
 
 * dns-configure
 
 ### Prep system
 
-The next step is to prepare the system. This includes such tasks as installing default packages (tmux, vim, etc), configuring editors, etc. Roles executed include:
+The next step is to prepare the system. This includes tasks such as installing
+default packages (tmux, vim, etc.) and configuring editors. Roles executed
+include:
 
 * common-prep
 
 ### Configuring the head node and setting up VMs
 
-Next the head node is configured and VMs are created to host the OpenStack and XOS services. Roles executed include:
+Next, the head node is configured and VMs are created to host the OpenStack
+and XOS services. Roles executed include:
 
 * head-prep
 * config-virt
@@ -68,7 +96,10 @@
 
 ### Set up VMs, juju, simulate fabric
 
-Finally, we install the appropriate software in the VMs. This is a large, time consuming step since it includes launching the OpenStack services (using juju), launching ONOS, and launching XOS (using service-platform). Roles executed include:
+Finally, we install the appropriate software in the VMs. This is a large,
+time-consuming step, since it includes launching the OpenStack services (using
+Juju), launching ONOS, and launching XOS (using service-profile). Roles
+executed include:
 
 * xos-vm-install
 * onos-vm-install
@@ -79,14 +110,23 @@
 * onos-load-apps
 * xos-start
 
-Juju is leveraged to perform the OpenStack portion of the install. Cord specific juju charm changes are documented in [Internals of the CORD Build Process](https://wiki.opencord.org/display/CORD/Internals+of+the+CORD+Build+Process).
+Juju is leveraged to perform the OpenStack portion of the install.
+CORD-specific Juju charm changes are documented in [Internals of the CORD Build
+Process](https://wiki.opencord.org/display/CORD/Internals+of+the+CORD+Build+Process).
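+
+Once this stage is reached, one quick way to check on the Juju-managed
+OpenStack services (run wherever Juju is installed) is:
+
+```shell
+juju status
+```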
 
 ## Starting XOS
 
-The final ansible role executed by platform-install is to start XOS. This uses the XOS `service-profile` repository to bring up a stack of CORD services.
+The final Ansible role executed by platform-install starts XOS. This uses the
+XOS `service-profile` repository to bring up a stack of CORD services.
 
-For a discussion of how the XOS service-profile system works, please see [Dynamic On-boarding System and Service Profiles](https://wiki.opencord.org/display/CORD/Dynamic+On-boarding+System+and+Service+Profiles).
+For a discussion of how the XOS service-profile system works, please see
+[Dynamic On-boarding System and Service
+Profiles](https://wiki.opencord.org/display/CORD/Dynamic+On-boarding+System+and+Service+Profiles).
 
 ## Helpful log files and diagnostic information
 
-The xos-build and xos-onboard steps run ansible playbooks to setup the xos virtual machine. The output of these playbooks is stored (inside the `xos-1` VM) in the files `service-profile/cord-pod/xos-build.out` and `service-profile/cord-pod/xos-onboard.out` respectively.
+The xos-build and xos-onboard steps run Ansible playbooks to set up the XOS
+virtual machine. The output of these playbooks is stored (inside the `xos-1`
+VM) in the files `service-profile/cord-pod/xos-build.out` and
+`service-profile/cord-pod/xos-onboard.out` respectively.
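+
+For example, assuming the standard `xos-1` VM naming and default paths, the
+build log could be inspected from the head node with something like:
+
+```shell
+ssh ubuntu@xos-1 tail service-profile/cord-pod/xos-build.out
+```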
diff --git a/docs/bootstrap_models_in_xos.md b/docs/bootstrap_models_in_xos.md
index 0f73b99..2623c1f 100644
--- a/docs/bootstrap_models_in_xos.md
+++ b/docs/bootstrap_models_in_xos.md
@@ -1,34 +1,34 @@
 # TOSCA Development
 
-TOSCA is typically used to provision the services loaded into
-CORD as part of some profile. This is a two-step process:
-(1) during the build stage, TOSCA templates are rendered using
-variables set in the `podconfig`, `scenario`, and `default` roles
-into fully qualified TOSCA recipes, and (2) as the last step of the
-deploy stage, these TOSCA recipes are onboarded into XOS (which
-provisions the service profile accordingly). These two steps are
-implemented by a pair of Ansible roles:
+TOSCA is typically used to provision the services loaded into CORD as part of
+some profile. This is a two-step process: (1) during the build stage, TOSCA
+templates are rendered using variables set in the `podconfig`, `scenario`, and
+`default` roles into fully qualified TOSCA recipes, and (2) as the last step of
+the deploy stage, these TOSCA recipes are onboarded into XOS (which provisions
+the service profile accordingly). These two steps are implemented by a pair of
+Ansible roles:
 
-- `cord-profile` responsible for generating the TOSCA recipes from templates
-- `xos-config` responsible for onboarding the TOSCA recipes into XOS
+* `cord-profile`, responsible for generating the TOSCA recipes from templates
+* `xos-config`, responsible for onboarding the TOSCA recipes into XOS
 
-The following describes how to create a new TOSCA template and make
-it available to provision CORD. This is done in the context of the profile
-you want to provision, where profiles are defined in
+The following describes how to create a new TOSCA template and make it
+available to provision CORD. This is done in the context of the profile you
+want to provision, where profiles are defined in
 `orchestration/profiles/<profilename>`.
 
 ## Create a New Template
 
-You can create as many templates as needed for your profile in
-directory `orchestration/profiles/<profilename>/templates`.
-There are also some platform-wide TOSCA templates defined
-in `/platform-install/roles/cord-profile/templates` but these
-are typically not modified on a profile-by-profile basis.
+You can create as many templates as needed for your profile in the directory
+`orchestration/profiles/<profilename>/templates`. There are also some
+platform-wide TOSCA templates defined in
+`/platform-install/roles/cord-profile/templates`, but these are typically not
+modified on a profile-by-profile basis.
 
-These templates use the `jinja2` syntax, so for example, a
-basic template might be `site.yml.j2`:
+These templates use the [jinja2
+syntax](http://jinja.pocoo.org/docs/latest/templates/). For example, a basic
+template might be `site.yml.j2`:
 
-```
+```yaml
 tosca_definitions_version: tosca_simple_yaml_1_0
 
 description: created by platform-install, need to add M-CORD services later
@@ -42,18 +42,19 @@
       type: tosca.nodes.Site
 ```
 
-Your templates can use all the variables defined in
-the [build glossary](../build_glossary.md).
+Your templates can use all the variables defined in the [build
+glossary](../build_glossary.md).
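+
+For instance, a template line might interpolate one of those variables (the
+variable name here is illustrative; see the glossary for the real ones):
+
+```yaml
+description: {{ site_humanname }} site, generated by platform-install
+```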
 
 ## Add the Template to your Profile Manifest
 
 Locate and open the profile manifest you want to affect:
 `orchestration/profiles/<profilename>/<profilename>.yml`.
 
-Locate a section called `xos_tosca_config_templates` (if it's missing create it), 
-and add there the list of templates you want to be generated and
+Locate a section called `xos_tosca_config_templates` (if it's missing, create
+it), and add the list of templates you want to be generated and onboarded; for
+example:
-```
+
+```yaml
 xos_tosca_config_templates:
   - site.yml
 ```
diff --git a/docs/install_opencloud_site.md b/docs/install_opencloud_site.md
index daaa23b..cc978d6 100644
--- a/docs/install_opencloud_site.md
+++ b/docs/install_opencloud_site.md
@@ -1,10 +1,10 @@
-## Introduction
+# Introduction
 
-The following steps are required in order to bring up a new OpenCloud sites.
+The following steps are required in order to bring up a new OpenCloud site.
 
 1. Allocate servers
 
-2. Install Uubuntu
+2. Install Ubuntu
 
 3. Install OpenStack controller & compute nodes
 
@@ -12,62 +12,88 @@
 
 ## Allocate Servers
 
-**It may happen that for different reasons that few servers are offline. **Allocating servers involves finding those nodes that are offline and bringing them back online. In most cases just rebooting the nodes will bring them back online. Sometimes they may be offline for hardware malfunctions or maintenance. In that case someone would need to provide help, locally from the facility.
+It may happen that, for different reasons, a few servers are offline.
+Allocating servers involves finding those nodes that are offline and bringing
+them back online. In most cases just rebooting the nodes will bring them back
+online. Sometimes they may be offline due to hardware malfunctions or
+maintenance; in that case someone would need to provide help locally, from the
+facility.
 
-NOffline nodes can be rebooted either manually (accessing through ssh to the node) or remotely, using an via ipmi script , usually called using the ipmi-cmd.sh script and located on some machines(usually found at /root/ipmi-cmd.sh). Reference at the section "Rebooting machines remotely" for more info.
+Offline nodes can be rebooted either manually (by accessing the node through
+ssh) or remotely via an IPMI script, usually called `ipmi-cmd.sh` and located
+on some machines (usually at `/root/ipmi-cmd.sh`). See the section "Rebooting
+machines remotely" for more info.
 
-Note: For example, for the Stanford cluster, the script should be located I’ve installed the ipmi-cmd.sh on node4.stanford.vicci.org. You should be able to reboot nodes from there.
+Note: For example, for the Stanford cluster, `ipmi-cmd.sh` is installed on
+node4.stanford.vicci.org. You should be able to reboot nodes from there.
 
 ## Install Ubuntu
 
 Opencloud nodes are expected to be Ubuntu 14.x.
 
-Please note that  Ubuntu nodes that are already configured for other OopenCcloud environments (i.e. portal) needs to must be re-installed, even if already running Ubuntu with Ubunutu. At Stanford, every node that is not reserved must be re-installed.
+Please note that nodes that are already configured for other OpenCloud
+environments (e.g. the portal) must be re-installed, even if they are already
+running Ubuntu. At Stanford, every node that is not reserved must be
+re-installed.
 
-The provisioning of the nodes and their setup (including installing a fresh Ubuntu 14) is done through the Vicci portal. In order to perform such steps, it’s required to have an administrative account on vicci.org. In case you don’t have it, please register on www.vicci.org and wait for the approval.
+The provisioning of the nodes and their setup (including installing a fresh
+Ubuntu 14) is done through the Vicci portal. Performing these steps requires
+an administrative account on vicci.org. If you don’t have one, please register
+on www.vicci.org and wait for approval.
 
-Below, the main steps needed to install Ubuntu on the cluster machines are reported:
+The main steps needed to install Ubuntu on the cluster machines are:
 
-1. After loggin in, on [www.vicci.org](http://www.vicci.org/)
+1. After logging in on [www.vicci.org](http://www.vicci.org/):
 
-* Change the node’s deployment tag to "ansible_ubuntu_14"
+    * Change the node’s deployment tag to "ansible_ubuntu_14"
 
-* Set the node’s boot_state to ‘reinstall’
+    * Set the node’s boot_state to ‘reinstall’
 
 2. Reboot the node
 
-* Manually logging into the remote node (see "accessing the machines", below)
+    * Manually, by logging into the remote node (see "Accessing the
+      machines", below)
 
-* Through the IPMI script (see "Rebooting machines remotely", below)
+    * Remotely, through the IPMI script (see "Rebooting machines remotely",
+      below)
 
-After reboot, the machine should go through the Ubuntu installation automatically. At the end of the process, the ones registered as administrators should be notified of the successfully installation. If you’re not an official opencloud.us administrator, just try to log into the machines again after 20-30 mins form the reboot.
+After reboot, the machine should go through the Ubuntu installation
+automatically. At the end of the process, those registered as administrators
+should be notified of the successful installation. If you’re not an official
+opencloud.us administrator, just try to log into the machines again 20-30
+minutes after the reboot.
 
 3. Update Ubuntu
 
-```
+```shell
 sudo apt-get update
 sudo apt-get dist-upgrade
 ```
 
+## Install OpenStack
 
-**Install Openstack**
-
-Ansible is a software that enables easy centralized configuration and management of a set of machines.
+Ansible is a tool that enables easy centralized configuration and management
+of a set of machines.
 
-In the context of OpenCloud, it is used to setup the remote clusters machines.
+In the context of OpenCloud, it is used to set up the remote cluster machines.
 
-The following steps are needed in order to install Openstack on the clusters machines. 
+The following steps are needed in order to install OpenStack on the cluster
+machines.
 
-They The following tasks can be performed from whatever node,  able to access the deployment machines. The deployment Vicci root ssh key is required in order to perform the ansible tasks described in this section. 
+The following tasks can be performed from any node able to access the
+deployment machines. The deployment Vicci root ssh key is required in order to
+perform the Ansible tasks described in this section.
 
 1. From a computer able to ssh into the machines:
 
 * Clone the openstack-cluster-setup git repo
 
-*$ git clone **[https://github.com/open-cloud/openstack-cluster-setu*p](https://github.com/open-cloud/openstack-cluster-setup)
+```shell
+git clone https://github.com/open-cloud/openstack-cluster-setup
+```
 
-The format of the file is the following:
+The format of the site-specific hosts file is the following:
 
+```ini
 head ansible_ssh_host=headNodeAddress
 
 [compute]
@@ -75,86 +101,125 @@
 compute01Address
 
 compute02Address
+```
 
-….
+* Edit the site-specific hosts file and specify the controller (head) &
+  compute nodes.
 
-* Edit the site-specific hosts file and specify the controller (head) & compute nodes. 
-
-	*$ cd openstack-cluster-setup && vi SITENAME-hosts*
+```shell
+cd openstack-cluster-setup && vi SITENAME-hosts
+```
 
-* Setup the controller (head) node by executing the site-specific playbook:
+* Set up the controller (head) node by executing the site-specific playbook:
 
-*$ **ansible-playbook -i SITENAME-hosts SITENAME-setup.yml*
+```shell
+ansible-playbook -i SITENAME-hosts SITENAME-setup.yml
+```
 
-*NOTE: The file SITENAME-setup.yml should be created separately or copied over from  *
-
-*another SITENAME-setup.yml file*
-
-**IMPORTANT NOTE:** When the head node is configured by the script, one or more routes are added for each compute node specified in the configuration file. This is needed in order to let the head node and the compute nodes correctly communicate together. Forgetting to insert all the compute nodes, may cause undesired behaviors. If a compute node was forgotten, it’s suggested to repeat the procedure, after correcting the configuration in the config file.
-
-For the same reason, the procedure should be repeated** **whenever we want to add new compute nodes to the cluster. 
+> NOTE: The file SITENAME-setup.yml should be created separately or copied
+> over from another SITENAME-setup.yml file.
+>
+> When the head node is configured by the script, one or more routes are added
+> for each compute node specified in the configuration file. This is needed to
+> let the head node and the compute nodes communicate correctly. Forgetting to
+> list all the compute nodes may cause undesired behavior. If a compute node
+> was forgotten, repeat the procedure after correcting the configuration file.
+>
+> For the same reason, the procedure should be repeated whenever we want to
+> add new compute nodes to the cluster.
 
 2. Log into the head node and for each compute node run
 
-* *$ **juju add-machine ssh:COMPUTE_NODE_ADDRESSnodeXX.stanford.vicci.org*
+```shell
+juju add-machine ssh:COMPUTE_NODE_ADDRESS   # e.g. ssh:nodeXX.stanford.vicci.org
+```
 
-As stated earlier, before you run 'juju add-machine' for any compute nodes, you need to add them to SITENAME-hosts and re-run SITENAME-setup.yml.  If you don't want to wait through the whole thing you can start at the right step as follows:
+As stated earlier, before you run `juju add-machine` for any compute nodes,
+you need to add them to SITENAME-hosts and re-run SITENAME-setup.yml. If you
+don't want to wait through the whole thing, you can start at the right step as
+follows:
 
-    $ ansible-playbook -i SITENAME-hosts SITENAME-setup.yml --start-at-task="Get public key"
+```shell
+ansible-playbook -i SITENAME-hosts SITENAME-setup.yml --start-at-task="Get public key"
+```
 
-5. On your workstation, setup the compute node by executing the site-specific playbook    
+5. On your workstation, set up the compute node by executing the site-specific playbook:
 
-    $ ansible-playbook -i SITENAME-hosts SITENAME-compute.yml
+   ```shell
+   ansible-playbook -i SITENAME-hosts SITENAME-compute.yml
+   ```
 
-**Update XOS**
+## Update XOS
 
-Now that we have a controller and some compute nodes, we need to add the controller’s information to xos so that it can be access by the synchronizer/observer. 
+Now that we have a controller and some compute nodes, we need to add the
+controller’s information to XOS so that it can be accessed by the
+synchronizer/observer.
 
-1. Update the site’s controller record. Stanford’s controller record can be found at:
+1. Update the site’s controller record. Stanford’s controller record can be
+   found at: [http://alpha.opencloud.us/admin/core/controller/18/](http://alpha.opencloud.us/admin/core/controller/18/)
 
-
-[http://alpha.opencloud.us/admin/core/controller/18/](http://alpha.opencloud.us/admin/core/controller/18/)
+   The information that needs to be entered here can be found in
+   `/home/ubuntu/admin-openrc.sh` on the site’s controller (head) node.
 
-The information that needs to be entered here can be found in /home/ubuntu/admin-openrc.sh on the site’s controller (head) node. 
+2. Add the controller to the
+   site: [http://alpha.opencloud.us/admin/core/site/17/#admin-only](http://alpha.opencloud.us/admin/core/site/17/#admin-only)
 
-2. Add the controller to the site:
-[http://alpha.opencloud.us/admin/core/site/17/#admin-only](http://alpha.opencloud.us/admin/core/site/17/#admin-only)
+   (tenant_id is showing up in the form even though it is not required here.
+   Just add any string there for now)
 
-(tenant_id is showing up in the form even though it is not required here. Just add any string there for now)
+3. Add compute nodes to the
+   site: [http://alpha.opencloud.us/admin/core/site/17/#nodes](http://alpha.opencloud.us/admin/core/site/17/#nodes)
 
-3. Add compute nodes to the site:
-[http://alpha.opencloud.us/admin/core/site/17/#nodes](http://alpha.opencloud.us/admin/core/site/17/#nodes)
+4. Add iptables rules in the XOS synchronizer host VM so that the synchronizer
+   can access the site’s management network.
 
-4. Add Iptables rules in xos synchronizer host vm so that the synchronizer can access the site’s management network
+   Princeton VICCI cluster: head is
+   [node70.princeton.vicci.org](http://node70.princeton.vicci.org/)
+   (128.112.171.158)
 
-# Princeton VICCI cluster: head is[ node70.princeton.vicci.org](http://node70.princeton.vicci.org/) (128.112.171.158)
+   `iptables -t nat -A OUTPUT -p tcp -d 192.168.100.0/24 -j DNAT
+   --to-destination 128.112.171.158`
 
-iptables -t nat -A OUTPUT -p tcp -d 192.168.100.0/24 -j DNAT --to-destination 128.112.171.158
+   If running the synchronizer inside of a container:
 
-# if running synchronizer inside of a container
+   `iptables -t nat -A PREROUTING -p tcp -d 192.168.100.0/24 -j DNAT
+   --to-destination 128.112.171.158`
 
-iptables -t nat -A PREROUTING -p tcp -d 192.168.100.0/24 -j DNAT --to-destination 128.112.171.158
+5. Update the firewall rules on the cluster head nodes to accept connections
+   from the XOS synchronizer VM.
 
-5. Update the firewall rules on the cluster head nodes to accept connections from the xos synchronizer vm
+6. Copy the certificates from the cluster head nodes and put them in
+   `/usr/local/share/ca-certificates` on the XOS synchronizer VM. Then re-run
+   `update-ca-certificates` inside the synchronizer container.
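+
+   As a loose sketch of this step (the source path and hostname here are
+   hypothetical; use wherever your cluster stores its CA certificate):
+
+   ```shell
+   # Hypothetical source path on the head node
+   scp ubuntu@head:/path/to/site-ca.crt /usr/local/share/ca-certificates/
+   sudo update-ca-certificates
+   ```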
 
-6. Copy the certificates from the cluster head nodes and put them in `/usr/local/share/ca-certificates` on the xos synchronizer vm.  Then re-run `update-ca-certificates` inside the synchronizer container.
+## Accessing the machines
 
-Accessing the machines
-
-Accessing new Ubuntu machines is pretty straight forward. The default user is ubuntu. No password is required and the key used to authenticate is the official deployment root key, that one of the administrator should have given to you separately.
+Accessing new Ubuntu machines is pretty straightforward. The default user is
+ubuntu. No password is required, and the key used to authenticate is the
+official deployment root key, which one of the administrators should have
+given to you separately.
 
-So, in order to access to a fresh new Ubuntu node, just type:
+So, in order to access a fresh Ubuntu node, just type:
 
-ssh -i /path/to/the/root/key ubuntu@ip_of_the_machine
+```shell
+ssh -i /path/to/the/root/key ubuntu@ip_of_the_machine
+```
 
-Sometime, it may happen that you need to access to already existing nodes. These nodes may either run an Ubuntu or a Fedora. Knowing what node runs what may be tricky and the only way to discover it would be trying to access to it. While the key to get inside still remains the deployment root key (as described above), the username may vary between Ubuntu and Fedora machines. Contrarily to Ubuntu, the default Fedora username is root.
+Sometimes you may need to access already existing nodes. These nodes may run
+either Ubuntu or Fedora. Knowing which node runs what may be tricky, and the
+only way to discover it may be to try to access it. While the key to get in is
+still the deployment root key (as described above), the username varies
+between Ubuntu and Fedora machines. Contrary to Ubuntu, the default Fedora
+username is root.
 
-So, in order to access to a one of the Fedora machines, you would type:
+So, in order to access one of the Fedora machines, you would type:
 
-ssh -i /path/to/the/root/key root@ip_of_the_machine
+```shell
+ssh -i /path/to/the/root/key root@ip_of_the_machine
+```
 
-Rebooting machines remotely
+## Rebooting machines remotely
 
-Machines can be rebooted remotely through an ipmi script, usually located on specific machines of the clusters under /root. The script is named ipmi-cmd.sh.
+Machines can be rebooted remotely through an IPMI script, usually located on
+specific machines of the clusters under `/root`. The script is named
+`ipmi-cmd.sh`.
 
-In the following example, node44.stanford.vicci.org is rebootd:
+In the following example, node44.stanford.vicci.org is rebooted:
 
-$ /root/ipmi-cmd.sh 44 'power cycle'
+```shell
+/root/ipmi-cmd.sh 44 'power cycle'
+```