Centralized Log Analysis
========================

Objective
---------

Operators should be able to view logs from all the VOLTHA components as
well as from whitebox OLT devices in a single stream.

Solution Approach For Voltha Ecosystem
--------------------------------------

The solution we have chosen is an EFK (Elasticsearch,
Fluentd-Elasticsearch and Kibana) setup for VOLTHA, which enables the
Operator to push logs from all VOLTHA components.

The kind-voltha script enables the Operator to set up EFK with minimal
configuration. With the EFK settings provided in minimal-values.yaml or
full-values.yaml, a single-node Elasticsearch instance and a Kibana
instance are deployed, and a fluentd-elasticsearch pod is deployed on
each node that allows workloads to be scheduled. If you have the
prerequisites installed, just execute

.. code:: bash

   $ DEPLOY_K8S=y WITH_BBSIM=y WITH_EFK=y ./voltha up

and the minimal cluster with the EFK stack should start.

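Once the deployment completes, you can verify that the EFK pods are
running. A minimal check using kubectl; the exact pod names depend on
the Helm release names used by your deployment:

.. code:: bash

   # List the Elasticsearch, Kibana and fluentd pods started by the EFK stack
   $ kubectl get pods --all-namespaces | grep -E 'elasticsearch|kibana|fluentd'
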
The number of deployed pods depends on the deployment type and the
``SCHEDULE_ON_CONTROL_NODES`` flag, as shown in the table below; an
example invocation follows the table.

.. figure:: ../_static/fluentd-pods.png
   :width: 6.50000in
   :height: 1.50000in

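For example, to bring up a deployment with EFK while also allowing
workloads to be scheduled on the control-plane nodes, the flags above
can be combined as follows (a sketch based on the kind-voltha flags
referenced in this section; check the kind-voltha documentation for the
exact options supported by your version):

.. code:: bash

   # Deployment with EFK, also scheduling workloads on control nodes
   $ DEPLOY_K8S=y WITH_EFK=y SCHEDULE_ON_CONTROL_NODES=y ./voltha up
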
To remove VOLTHA together with the EFK stack, use ``./voltha down`` with
``DEPLOY_K8S=y``; to tear down EFK without removing the k8s cluster, use
``DEPLOY_K8S=n WITH_EFK=y``, as shown below.

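The two teardown variants from the paragraph above, written out as
commands:

.. code:: bash

   # Tear down VOLTHA, EFK and the kind k8s cluster
   $ DEPLOY_K8S=y ./voltha down

   # Tear down VOLTHA and EFK but keep the k8s cluster
   $ DEPLOY_K8S=n WITH_EFK=y ./voltha down
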
**To start using Kibana, navigate in your browser to
http://<k8s_node_ip>:<exposed_port>.** Then you can search for events
in the *Discover* section.

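To find the exposed Kibana port, you can inspect the Kibana service; a
hedged example (the exact service name and namespace depend on the Helm
release used by the EFK charts):

.. code:: bash

   # Locate the Kibana service and note its NodePort
   $ kubectl get svc --all-namespaces | grep -i kibana
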
Solution Approach For Whitebox OLT Device
-----------------------------------------

The solution approach we have chosen is to install td-agent (a fluentd
variant) directly on the OLT device to capture logs and transmit them to
the Elasticsearch pod running in the VOLTHA cluster.

A custom td-agent configuration file handles the format of the log files
involved, using the appropriate input plugins for the openolt process,
the device management daemon and the system logs, together with the
Elasticsearch output plugin. The custom td-agent configuration file,
along with installation steps in the accompanying README, can be found
at https://github.com/opencord/openolt/tree/master/logConf.

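After installing td-agent and placing the custom configuration on the
OLT device, the service can be restarted and checked. A minimal sketch,
assuming a standard td-agent installation with its default init script
and log location:

.. code:: bash

   # Restart td-agent so it picks up the custom configuration
   $ sudo /etc/init.d/td-agent restart

   # Confirm td-agent is forwarding logs without errors
   $ tail -f /var/log/td-agent/td-agent.log
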
Log Collection from VOLTHA Ecosystem and Whitebox OLT Device
------------------------------------------------------------

The diagram below depicts log collection from the VOLTHA components and
the whitebox OLT device through EFK. The fluentd pod running in the
kind-voltha setup collects logs from all the VOLTHA components and
pushes them to the Elasticsearch pod. The td-agent (fluentd variant)
service running on the whitebox OLT device captures the logs from the
openolt agent process, the device management daemon process and the
system logs, and transmits them to the Elasticsearch pod running in the
VOLTHA cluster over TCP.

.. figure:: ../_static/centralize-logging.png
   :width: 6.50000in
   :height: 2.50000in

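To confirm that logs from both sources are reaching Elasticsearch, you
can list the indices it has created. A minimal check, assuming
Elasticsearch is reachable on an exposed node port (``<es_exposed_port>``
is a placeholder; the index names depend on the fluentd/td-agent output
configuration):

.. code:: bash

   # List Elasticsearch indices and their document counts
   $ curl -s "http://<k8s_node_ip>:<es_exposed_port>/_cat/indices?v"
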
Secure EFK Setup and Transport of Logs from OLT Device
-------------------------------------------------------

The Operator can further harden the setup by making configuration
changes according to deployment requirements.

Authentication, authorization and security features for EFK can be
enabled via the X-Pack plugin and Role Based Access Control (RBAC) in
Elasticsearch. The transmission of logs from the whitebox OLT device can
be secured by enabling TLS/SSL encryption between the EFK setup and
td-agent. Refer to the following link for the security features:
https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-security.html

To enable TLS/SSL encryption for the Elasticsearch pod, refer to the
following link:

https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/security

To enable TLS/SSL encryption for the Kibana pod, refer to the following
link:

https://github.com/elastic/helm-charts/tree/master/kibana/examples/security

To enable TLS/SSL encryption for the fluentd pod and the td-agent
service, refer to the following link:

https://github.com/kiwigrid/helm-charts/tree/master/charts/fluentd-elasticsearch

Note: create a certs directory under /etc/td-agent on the OLT device and
copy the elastic-ca.pem certificate into it.

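The steps from the note above, written out as commands; this assumes you
have already obtained the CA certificate (elastic-ca.pem) used by your
Elasticsearch deployment and copied it to the OLT device:

.. code:: bash

   # Create the certificate directory referenced by the td-agent configuration
   $ sudo mkdir -p /etc/td-agent/certs

   # Place the Elasticsearch CA certificate where td-agent can read it
   $ sudo cp elastic-ca.pem /etc/td-agent/certs/
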
Archive of Logs
---------------

There are various mechanisms available with EFK to save data. For
example, operators can use the **reporting feature** to generate reports
of a saved search as CSV documents, which can then be transferred to a
support organization via email. You can save searches restricted to a
time window or filtered to the required fields and then generate the
report. To use the reporting features, refer to the following link:
https://www.elastic.co/guide/en/kibana/current/reporting-getting-started.html

Note: By default, a CSV file of up to 10 MB can be generated. To
generate files larger than 10 MB, enable the X-Pack plugin and RBAC.
Generating larger files also requires a bigger cluster configuration for
the Elasticsearch pod: the Java heap space, CPU and memory need to be
increased along with the CSV file size.

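If larger reports are needed, the Elasticsearch deployment can be given
more resources. A hedged sketch, assuming the official Elasticsearch
Helm chart is installed under the release name ``elasticsearch`` and
using its ``esJavaOpts`` and ``resources`` values (adjust the release
name and sizes to your deployment):

.. code:: bash

   # Increase the Elasticsearch Java heap and container memory limits
   $ helm upgrade elasticsearch elastic/elasticsearch --reuse-values \
       --set esJavaOpts="-Xms2g -Xmx2g" \
       --set resources.requests.memory=4Gi \
       --set resources.limits.memory=4Gi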