/*
Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level
API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level
API is insufficient. Usage examples for the high-level APIs are provided inline with their full documentation.

To produce messages, use either the AsyncProducer or the SyncProducer. The AsyncProducer accepts messages on a channel
and produces them asynchronously in the background as efficiently as possible; it is preferred in most cases.
The SyncProducer provides a method which will block until Kafka acknowledges the message as produced. This can be
useful but comes with two caveats: it will generally be less efficient, and the actual durability guarantees
depend on the configured value of `Producer.RequiredAcks`. There are configurations where a message acknowledged by the
SyncProducer can still sometimes be lost.
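
As a minimal sketch of the SyncProducer path (the broker address "localhost:9092" and topic "my_topic" are
placeholders, not defaults):

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	// The SyncProducer requires Return.Successes so acknowledgements are delivered back to the caller.
	config.Producer.Return.Successes = true
	config.Producer.RequiredAcks = sarama.WaitForAll

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatalln(err)
	}
	defer producer.Close()

	msg := &sarama.ProducerMessage{Topic: "my_topic", Value: sarama.StringEncoder("hello")}
	// SendMessage blocks until the broker acknowledges the message (per RequiredAcks).
	partition, offset, err := producer.SendMessage(msg)
	if err != nil {
		log.Fatalln(err)
	}
	log.Printf("stored at partition %d, offset %d", partition, offset)
}
```

Note that RequiredAcks of WaitForAll is chosen here for durability; weaker settings trade durability for latency.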

To consume messages, use either the partition-level Consumer or the ConsumerGroup API.
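
A minimal sketch of the partition-level Consumer (again, the broker address, topic, and partition number are
placeholders):

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatalln(err)
	}
	defer consumer.Close()

	// Consume a single partition starting from the newest available offset.
	pc, err := consumer.ConsumePartition("my_topic", 0, sarama.OffsetNewest)
	if err != nil {
		log.Fatalln(err)
	}
	defer pc.Close()

	for msg := range pc.Messages() {
		log.Printf("offset %d: %s", msg.Offset, string(msg.Value))
	}
}
```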

For lower-level needs, the Broker and Request/Response objects permit precise control over each connection
and message sent on the wire; the Client provides higher-level metadata management that is shared between
the producers and the consumer. The Request/Response objects and properties are mostly undocumented, as they line up
exactly with the protocol fields documented by Kafka at
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol

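For instance, a single protocol request can be issued against one broker directly; this sketch sends a
MetadataRequest (the broker address and topic name are placeholders):

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	broker := sarama.NewBroker("localhost:9092")
	if err := broker.Open(sarama.NewConfig()); err != nil {
		log.Fatalln(err)
	}
	defer broker.Close()

	// MetadataRequest maps directly onto the fields of the Kafka Metadata API.
	resp, err := broker.GetMetadata(&sarama.MetadataRequest{Topics: []string{"my_topic"}})
	if err != nil {
		log.Fatalln(err)
	}
	for _, t := range resp.Topics {
		log.Printf("topic %q has %d partitions", t.Name, len(t.Partitions))
	}
}
```
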
Metrics are exposed through the https://github.com/rcrowley/go-metrics library in a local registry.

Broker related metrics:

    +----------------------------------------------+------------+---------------------------------------------------------------+
    | Name                                         | Type       | Description                                                   |
    +----------------------------------------------+------------+---------------------------------------------------------------+
    | incoming-byte-rate                           | meter      | Bytes/second read off all brokers                             |
    | incoming-byte-rate-for-broker-<broker-id>    | meter      | Bytes/second read off a given broker                          |
    | outgoing-byte-rate                           | meter      | Bytes/second written to all brokers                           |
    | outgoing-byte-rate-for-broker-<broker-id>    | meter      | Bytes/second written to a given broker                        |
    | request-rate                                 | meter      | Requests/second sent to all brokers                           |
    | request-rate-for-broker-<broker-id>          | meter      | Requests/second sent to a given broker                        |
    | request-size                                 | histogram  | Distribution of the request size in bytes for all brokers     |
    | request-size-for-broker-<broker-id>          | histogram  | Distribution of the request size in bytes for a given broker  |
    | request-latency-in-ms                        | histogram  | Distribution of the request latency in ms for all brokers     |
    | request-latency-in-ms-for-broker-<broker-id> | histogram  | Distribution of the request latency in ms for a given broker  |
    | response-rate                                | meter      | Responses/second received from all brokers                    |
    | response-rate-for-broker-<broker-id>         | meter      | Responses/second received from a given broker                 |
    | response-size                                | histogram  | Distribution of the response size in bytes for all brokers    |
    | response-size-for-broker-<broker-id>         | histogram  | Distribution of the response size in bytes for a given broker |
    +----------------------------------------------+------------+---------------------------------------------------------------+

Note that we do not gather specific metrics for seed brokers, but they are part of the "all brokers" metrics.

Producer related metrics:

    +-------------------------------------------+------------+--------------------------------------------------------------------------------------+
    | Name                                      | Type       | Description                                                                          |
    +-------------------------------------------+------------+--------------------------------------------------------------------------------------+
    | batch-size                                | histogram  | Distribution of the number of bytes sent per partition per request for all topics    |
    | batch-size-for-topic-<topic>              | histogram  | Distribution of the number of bytes sent per partition per request for a given topic |
    | record-send-rate                          | meter      | Records/second sent to all topics                                                    |
    | record-send-rate-for-topic-<topic>        | meter      | Records/second sent to a given topic                                                 |
    | records-per-request                       | histogram  | Distribution of the number of records sent per request for all topics                |
    | records-per-request-for-topic-<topic>     | histogram  | Distribution of the number of records sent per request for a given topic             |
    | compression-ratio                         | histogram  | Distribution of the compression ratio times 100 of record batches for all topics     |
    | compression-ratio-for-topic-<topic>       | histogram  | Distribution of the compression ratio times 100 of record batches for a given topic  |
    +-------------------------------------------+------------+--------------------------------------------------------------------------------------+

Consumer related metrics:

    +-------------------------------------------+------------+--------------------------------------------------------------------------------------+
    | Name                                      | Type       | Description                                                                          |
    +-------------------------------------------+------------+--------------------------------------------------------------------------------------+
    | consumer-batch-size                       | histogram  | Distribution of the number of messages in a batch                                    |
    +-------------------------------------------+------------+--------------------------------------------------------------------------------------+
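
These metrics accumulate in the go-metrics registry held by Config.MetricRegistry. One way to observe them is
to dump that registry periodically; a sketch (the 30-second interval is an arbitrary choice):

```go
package main

import (
	"log"
	"os"
	"time"

	metrics "github.com/rcrowley/go-metrics"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()

	// Periodically write every metric in Sarama's registry to stderr.
	go metrics.Log(config.MetricRegistry, 30*time.Second, log.New(os.Stderr, "metrics: ", log.LstdFlags))

	// ... construct producers and consumers with this config; their metrics
	// will appear in config.MetricRegistry under the names listed above.
}
```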

*/
package sarama

import (
	"io/ioutil"
	"log"
)

var (
	// Logger is the instance of a StdLogger interface that Sarama writes connection
	// management events to. By default it is set to discard all log messages via ioutil.Discard,
	// but you can set it to redirect wherever you want.
	Logger StdLogger = log.New(ioutil.Discard, "[Sarama] ", log.LstdFlags)

	// PanicHandler is called for recovering from panics spawned internally to the library (and thus
	// not recoverable by the caller's goroutine). Defaults to nil, which means panics are not recovered.
	PanicHandler func(interface{})

	// MaxRequestSize is the maximum size (in bytes) of any request that Sarama will attempt to send. Trying
	// to send a request larger than this will result in a PacketEncodingError. The default of 100 MiB is aligned
	// with Kafka's default `socket.request.max.bytes`, which is the largest request the broker will attempt
	// to process.
	MaxRequestSize int32 = 100 * 1024 * 1024

	// MaxResponseSize is the maximum size (in bytes) of any response that Sarama will attempt to parse. If
	// a broker returns a response message larger than this value, Sarama will return a PacketDecodingError to
	// protect the client from running out of memory. Please note that brokers do not have any natural limit on
	// the size of responses they send. In particular, they can send arbitrarily large fetch responses to consumers
	// (see https://issues.apache.org/jira/browse/KAFKA-2063).
	MaxResponseSize int32 = 100 * 1024 * 1024
)

// StdLogger is used to log error messages.
type StdLogger interface {
	Print(v ...interface{})
	Printf(format string, v ...interface{})
	Println(v ...interface{})
}