/*
Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level
API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level
API is insufficient. Usage examples for the high-level APIs are provided inline with their full documentation.

To produce messages, use either the AsyncProducer or the SyncProducer. The AsyncProducer accepts messages on a channel
and produces them asynchronously in the background as efficiently as possible; it is preferred in most cases.
The SyncProducer provides a method which will block until Kafka acknowledges the message as produced. This can be
useful but comes with two caveats: it will generally be less efficient, and the actual durability guarantees
depend on the configured value of `Producer.RequiredAcks`. There are configurations where a message acknowledged by the
SyncProducer can still sometimes be lost.
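
As a minimal sketch of the SyncProducer (the broker address and topic name below are placeholders, the standard
log package is assumed to be imported, and error handling is condensed):

	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // required by the SyncProducer

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatalln(err)
	}
	defer producer.Close()

	// SendMessage blocks until the broker acknowledges the message (subject to Producer.RequiredAcks).
	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "my_topic",
		Value: sarama.StringEncoder("hello world"),
	})
	if err != nil {
		log.Fatalln(err)
	}
	log.Printf("delivered to partition %d at offset %d", partition, offset)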

To consume messages, use the Consumer. Note that Sarama's Consumer implementation does not currently support automatic
consumer-group rebalancing and offset tracking. For Zookeeper-based tracking (Kafka 0.8.2 and earlier), the
https://github.com/wvanbergen/kafka library builds on Sarama to add this support. For Kafka-based tracking (Kafka 0.9
and later), the https://github.com/bsm/sarama-cluster library builds on Sarama to add this support.
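
A minimal sketch of consuming a single partition might look like the following (broker address, topic, and
partition are placeholders; error handling is condensed):

	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatalln(err)
	}
	defer consumer.Close()

	// Consume partition 0 of "my_topic", starting from the newest available offset.
	pc, err := consumer.ConsumePartition("my_topic", 0, sarama.OffsetNewest)
	if err != nil {
		log.Fatalln(err)
	}
	defer pc.Close()

	for msg := range pc.Messages() {
		log.Printf("offset %d: %s", msg.Offset, msg.Value)
	}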

For lower-level needs, the Broker and Request/Response objects permit precise control over each connection
and message sent on the wire; the Client provides higher-level metadata management that is shared between
the producers and the consumer. The Request/Response objects and properties are mostly undocumented, as they line up
exactly with the protocol fields documented by Kafka at
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
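
As a rough sketch of the low-level API, a request can be sent to a single broker directly (the broker address
and topic name are placeholders):

	broker := sarama.NewBroker("localhost:9092")
	if err := broker.Open(nil); err != nil { // a nil config uses sarama's defaults
		log.Fatalln(err)
	}
	defer broker.Close()

	response, err := broker.GetMetadata(&sarama.MetadataRequest{Topics: []string{"my_topic"}})
	if err != nil {
		log.Fatalln(err)
	}
	log.Printf("metadata returned for %d topic(s)", len(response.Topics))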

Metrics are exposed through the https://github.com/rcrowley/go-metrics library in a local registry.

Broker-related metrics:

	+----------------------------------------------+------------+---------------------------------------------------------------+
	| Name                                         | Type       | Description                                                   |
	+----------------------------------------------+------------+---------------------------------------------------------------+
	| incoming-byte-rate                           | meter      | Bytes/second read off all brokers                             |
	| incoming-byte-rate-for-broker-<broker-id>    | meter      | Bytes/second read off a given broker                          |
	| outgoing-byte-rate                           | meter      | Bytes/second written off all brokers                          |
	| outgoing-byte-rate-for-broker-<broker-id>    | meter      | Bytes/second written off a given broker                       |
	| request-rate                                 | meter      | Requests/second sent to all brokers                           |
	| request-rate-for-broker-<broker-id>          | meter      | Requests/second sent to a given broker                        |
	| request-size                                 | histogram  | Distribution of the request size in bytes for all brokers     |
	| request-size-for-broker-<broker-id>          | histogram  | Distribution of the request size in bytes for a given broker  |
	| request-latency-in-ms                        | histogram  | Distribution of the request latency in ms for all brokers     |
	| request-latency-in-ms-for-broker-<broker-id> | histogram  | Distribution of the request latency in ms for a given broker  |
	| response-rate                                | meter      | Responses/second received from all brokers                    |
	| response-rate-for-broker-<broker-id>         | meter      | Responses/second received from a given broker                 |
	| response-size                                | histogram  | Distribution of the response size in bytes for all brokers    |
	| response-size-for-broker-<broker-id>         | histogram  | Distribution of the response size in bytes for a given broker |
	+----------------------------------------------+------------+---------------------------------------------------------------+

Note that we do not gather specific metrics for seed brokers, but they are included in the "all brokers" metrics.

Producer-related metrics:

	+---------------------------------------+------------+--------------------------------------------------------------------------------------+
	| Name                                  | Type       | Description                                                                          |
	+---------------------------------------+------------+--------------------------------------------------------------------------------------+
	| batch-size                            | histogram  | Distribution of the number of bytes sent per partition per request for all topics    |
	| batch-size-for-topic-<topic>          | histogram  | Distribution of the number of bytes sent per partition per request for a given topic |
	| record-send-rate                      | meter      | Records/second sent to all topics                                                    |
	| record-send-rate-for-topic-<topic>    | meter      | Records/second sent to a given topic                                                 |
	| records-per-request                   | histogram  | Distribution of the number of records sent per request for all topics                |
	| records-per-request-for-topic-<topic> | histogram  | Distribution of the number of records sent per request for a given topic             |
	| compression-ratio                     | histogram  | Distribution of the compression ratio times 100 of record batches for all topics     |
	| compression-ratio-for-topic-<topic>   | histogram  | Distribution of the compression ratio times 100 of record batches for a given topic  |
	+---------------------------------------+------------+--------------------------------------------------------------------------------------+

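These metrics live in the registry held by Config.MetricRegistry. As a rough sketch (assuming the go-metrics
package is imported under the name metrics), a one-off snapshot of every metric can be written out after the
client has done some work:

	config := sarama.NewConfig()
	// ... run a producer or consumer built from this config ...

	// Dump a snapshot of all metrics in Sarama's local registry.
	metrics.WriteOnce(config.MetricRegistry, os.Stdout)
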
*/
package sarama

import (
	"io/ioutil"
	"log"
)

// Logger is the instance of a StdLogger interface that Sarama writes connection
// management events to. By default it is set to discard all log messages via ioutil.Discard,
// but you can set it to redirect wherever you want.
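// For example, an application that wants Sarama's logs on standard error could set (a minimal
// sketch; choose whatever prefix, flags, and destination suit the application):
//
//	sarama.Logger = log.New(os.Stderr, "[Sarama] ", log.LstdFlags)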
var Logger StdLogger = log.New(ioutil.Discard, "[Sarama] ", log.LstdFlags)

// StdLogger is used to log error messages.
type StdLogger interface {
	Print(v ...interface{})
	Printf(format string, v ...interface{})
	Println(v ...interface{})
}

// PanicHandler is called for recovering from panics spawned internally to the library (and thus
// not recoverable by the caller's goroutine). Defaults to nil, which means panics are not recovered.
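// For example, a caller could log recovered panics instead of leaving them unreported (a minimal
// sketch; what to do with the recovered value is entirely up to the application):
//
//	sarama.PanicHandler = func(err interface{}) {
//		sarama.Logger.Printf("sarama: panic recovered: %v", err)
//	}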
var PanicHandler func(interface{})

// MaxRequestSize is the maximum size (in bytes) of any request that Sarama will attempt to send. Trying
// to send a request larger than this will result in a PacketEncodingError. The default of 100 MiB is aligned
// with Kafka's default `socket.request.max.bytes`, which is the largest request the broker will attempt
// to process.
var MaxRequestSize int32 = 100 * 1024 * 1024

// MaxResponseSize is the maximum size (in bytes) of any response that Sarama will attempt to parse. If
// a broker returns a response message larger than this value, Sarama will return a PacketDecodingError to
// protect the client from running out of memory. Please note that brokers do not have any natural limit on
// the size of responses they send. In particular, they can send arbitrarily large fetch responses to consumers
// (see https://issues.apache.org/jira/browse/KAFKA-2063).
var MaxResponseSize int32 = 100 * 1024 * 1024