Centralized Log Analysis
========================

Objective
---------

Operators should be able to view logs from all the VOLTHA components, as
well as from whitebox OLT devices, in a single stream.

Solution Approach For VOLTHA Ecosystem
--------------------------------------

The solution we have chosen for VOLTHA is an EFK (Elasticsearch,
Fluentd, Kibana) setup, which enables the operator to collect logs from
all VOLTHA components in one place.

To deploy VOLTHA with the EFK stack, follow the paragraph
`Support-for-logging-and-tracing-(optional)` in the
`voltha-helm-charts README <../voltha-helm-charts/README.md>`_.

This deploys the EFK stack with a single-node Elasticsearch and a Kibana
instance, plus a fluentd-elasticsearch pod on each node that allows
workloads to be scheduled.

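As a point of reference, a single-node Elasticsearch with the upstream
elastic/elasticsearch helm chart typically comes down to value overrides
like the sketch below. These keys are illustrative only; the VOLTHA helm
charts may already set equivalent values for you.

```yaml
# values-single-node.yaml -- illustrative overrides for the
# elastic/elasticsearch helm chart; not the VOLTHA defaults.
replicas: 1
minimumMasterNodes: 1
```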
The number of deployed pods depends on the deployment type and on the
SCHEDULE\_ON\_CONTROL\_NODES flag, as shown in the table below.

.. figure:: ../_static/fluentd-pods.png
   :width: 6.50000in
   :height: 1.50000in

**To start using Kibana, navigate in your browser to
http://<k8s\_node\_ip>:<exposed\_port>.** You can then search for events
in the *Discover* section.

Solution Approach For Whitebox OLT Device
-----------------------------------------

The approach we have chosen is to install td-agent (a Fluentd variant)
directly on the OLT device to capture logs and transmit them to the
Elasticsearch pod running in the VOLTHA cluster.

A custom td-agent configuration file handles the formats of the log
files involved, using the right input plugins for the openolt process,
the device management daemon, and the system logs, together with the
Elasticsearch output plugin. The custom td-agent configuration file is
available at
`https://github.com/opencord/openolt/tree/master/logConf <https://github.com/opencord/openolt/tree/master/logConf>`__,
and the installation steps are in the
`https://github.com/opencord/openolt/tree/master <https://github.com/opencord/openolt/tree/master>`__
README.

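For orientation, a td-agent source/match pair for one of these log files
might look like the sketch below. The file paths, tag, and parse format
are illustrative placeholders, not the shipped configuration; the
authoritative files live in the logConf directory linked above.

```text
# Illustrative td-agent fragment -- tail one openolt log file and
# forward it to Elasticsearch. Paths and host are placeholders.
<source>
  @type tail
  path /var/log/openolt.log
  pos_file /var/log/td-agent/openolt.log.pos
  tag openolt.agent
  <parse>
    @type none
  </parse>
</source>

<match openolt.**>
  @type elasticsearch
  host <elasticsearch_host>
  port 9200
  logstash_format true
</match>
```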
Log Collection from VOLTHA Ecosystem and Whitebox OLT Device
------------------------------------------------------------

The diagram below depicts log collection from the VOLTHA components and
the whitebox OLT device through EFK. The fluentd pod collects logs from
all the VOLTHA components and pushes them to the Elasticsearch pod. The
td-agent (Fluentd variant) service running on the whitebox OLT device
captures logs from the openolt agent process, the device management
daemon process, and the system logs, and transmits them over TCP to the
Elasticsearch pod running in the VOLTHA cluster.

.. figure:: ../_static/centralize-logging.png
   :width: 6.50000in
   :height: 2.50000in

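To make the transport concrete, here is a minimal Python sketch (not
VOLTHA code; the index name and event fields are invented for
illustration) of the NDJSON bulk payload that an Elasticsearch output
plugin assembles from collected log events before sending them over the
wire:

```python
import json


def build_bulk_payload(events, index="voltha-logs"):
    """Build an Elasticsearch bulk-API body (NDJSON) from log events,
    similar in spirit to what the td-agent/fluentd Elasticsearch
    output plugin does internally."""
    lines = []
    for event in events:
        # Each document is preceded by an action line naming the index.
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(event))
    # The bulk API requires the body to end with a newline.
    return "\n".join(lines) + "\n"


events = [
    {"source": "openolt-agent", "level": "INFO", "message": "OLT activated"},
    {"source": "dev-mgmt-daemon", "level": "WARN", "message": "fan speed high"},
]
payload = build_bulk_payload(events)
print(payload)
```

A real agent would POST this body to the cluster's ``/_bulk`` endpoint;
the sketch only shows the payload shape.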
Secure EFK setup and transport of Logs from OLT device
------------------------------------------------------

The operator can harden the setup by making configuration changes to
match their requirements.

The authentication, authorization, and security features for EFK can be
enabled via the X-Pack plugin and Role Based Access Control (RBAC) in
Elasticsearch. The transmission of logs from the whitebox OLT device can
be secured by enabling TLS/SSL encryption in the EFK setup and
td-agent. Refer to the following link for the security features:
`https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-security.html <https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-security.html>`__

To enable TLS/SSL encryption for the Elasticsearch pod, refer to the
following link:

`https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/security <https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/security>`__

To enable TLS/SSL encryption for the Kibana pod, refer to the following
link:

`https://github.com/elastic/helm-charts/tree/master/kibana/examples/security <https://github.com/elastic/helm-charts/tree/master/kibana/examples/security>`__

To enable TLS/SSL encryption for the fluentd pod and the td-agent
service, refer to the following link:

`https://github.com/kiwigrid/helm-charts/tree/master/charts/fluentd-elasticsearch <https://github.com/kiwigrid/helm-charts/tree/master/charts/fluentd-elasticsearch>`__

Note: create a ``certs`` directory in ``/etc/td-agent`` on the OLT
device and copy the ``elastic-ca.pem`` certificate into it.

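As an illustrative sketch of the OLT side, the td-agent output section
with TLS enabled might look like the fragment below. The host and
credentials are placeholders, and the exact option names depend on the
installed fluent-plugin-elasticsearch version.

```text
# Illustrative td-agent match section with TLS toward Elasticsearch.
<match **>
  @type elasticsearch
  host <elasticsearch_host>
  port 9200
  scheme https
  ssl_verify true
  ca_file /etc/td-agent/certs/elastic-ca.pem
  user <elastic_user>
  password <elastic_password>
</match>
```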
Archive of Logs
---------------

There are various mechanisms available with EFK to save data. For
example, operators can use the **reporting feature** to generate reports
of saved searches as CSV documents, which can be transferred to a
support organization via email. You can save searches time-boxed or
filtered to the required fields, and then generate the report. To use
the reporting features, refer to the following link:
`https://www.elastic.co/guide/en/kibana/current/reporting-getting-started.html <https://www.elastic.co/guide/en/kibana/current/reporting-getting-started.html>`__

Note: by default, a CSV file of up to 10 MB can be generated. To
generate a larger file, enable the X-Pack plugin and RBAC. Generating
larger files also requires a bigger cluster configuration for the
Elasticsearch pod: the Java heap space, CPU, and memory need to be
increased along with the CSV file size.
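The 10 MB default corresponds to Kibana's
``xpack.reporting.csv.maxSizeBytes`` setting; raising it is a
``kibana.yml`` change. The 50 MB value below is only an example:

```yaml
# kibana.yml -- example only; pick a limit matching your cluster sizing.
xpack.reporting.csv.maxSizeBytes: 52428800  # 50 MB (default is 10485760)
```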