Changelog

Version 1.21.0 (2019-02-24)

New Features:

  • Add CreateAclRequest, DescribeAclRequest, DeleteAclRequest (#1236).
  • Add DescribeTopic, DescribeConsumerGroup, ListConsumerGroups, ListConsumerGroupOffsets admin requests (#1178).
  • Implement SASL/OAUTHBEARER (#1240).

Improvements:

  • Add Go mod support (#1282).
  • Add error codes 73-76 (#1239).
  • Add retry backoff function (#1160).
  • Maintain metadata in the producer even when retries are disabled (#1189).
  • Include ReplicaAssignment in ListTopics (#1274).
  • Add producer performance tool (#1222).
  • Add support for LogAppend timestamps (#1258).

Bug Fixes:

  • Fix potential deadlock when a heartbeat request fails (#1286).
  • Fix consuming compacted topic (#1227).
  • Set correct Kafka version for DescribeConfigsRequest v1 (#1277).
  • Update kafka test version (#1273).

Version 1.20.1 (2019-01-10)

New Features:

  • Add optional replica id in offset request (#1100).

Improvements:

  • Implement DescribeConfigs Request + Response v1 & v2 (#1230).
  • Reuse compression objects (#1185).
  • Switch from png to svg for GoDoc link in README (#1243).
  • Fix typo in deprecation notice for FetchResponseBlock.Records (#1242).
  • Fix typos in consumer metadata response file (#1244).

Bug Fixes:

  • Revert to individual message retries for non-idempotent producers (#1203).
  • Respect MaxMessageBytes limit for uncompressed messages (#1141).

Version 1.20.0 (2018-12-10)

New Features:

  • Add support for zstd compression (#1170).
  • Add support for Idempotent Producer (#1152).
  • Add support for Kafka 2.1.0 (#1229).
  • Add support for OffsetCommit request/response pairs versions v1 to v5 (#1201).
  • Add support for OffsetFetch request/response pair up to version v5 (#1198).

Improvements:

  • Export broker's Rack setting (#1173).
  • Always use latest patch version of Go on CI (#1202).
  • Add error codes 61 to 72 (#1195).

Bug Fixes:

  • Fix build without cgo (#1182).
  • Fix go vet suggestion in consumer group file (#1209).
  • Fix typos in code and comments (#1228).

Version 1.19.0 (2018-09-27)

New Features:

  • Implement a higher-level consumer group (#1099).

Improvements:

  • Add support for Go 1.11 (#1176).

Bug Fixes:

  • Fix encoding of MetadataResponse with version 2 and higher (#1174).
  • Fix race condition in mock async producer (#1174).

Version 1.18.0 (2018-09-07)

New Features:

  • Make Partitioner.RequiresConsistency vary per-message (#1112).
  • Add customizable partitioner (#1118).
  • Add ClusterAdmin support for CreateTopic, DeleteTopic, CreatePartitions, DeleteRecords, DescribeConfig, AlterConfig, CreateACL, ListAcls, DeleteACL (#1055).

Improvements:

  • Add support for Kafka 2.0.0 (#1149).
  • Allow setting LocalAddr when dialing an address to support multi-homed hosts (#1123).
  • Simpler offset management (#1127).

Bug Fixes:

  • Fix mutation of ProducerMessage.MetaData when producing to Kafka (#1110).
  • Fix consumer block when response did not contain all the expected topic/partition blocks (#1086).
  • Fix consumer block when response contains only control messages (#1115).
  • Add timeout config for ClusterAdmin requests (#1142).
  • Add version check when producing message with headers (#1117).
  • Fix MetadataRequest for empty list of topics (#1132).
  • Fix producer topic metadata on-demand fetch when topic error happens in metadata response (#1125).

Version 1.17.0 (2018-05-30)

New Features:

  • Add support for gzip compression levels (#1044).
  • Add support for Metadata request/response pairs versions v1 to v5 (#1047, #1069).
  • Add versioning to JoinGroup request/response pairs (#1098).
  • Add support for CreatePartitions, DeleteGroups, DeleteRecords request/response pairs (#1065, #1096, #1027).
  • Add Controller() method to Client interface (#1063).

Improvements:

  • ConsumerMetadataReq/Resp has been migrated to FindCoordinatorReq/Resp (#1010).
  • Expose missing protocol parts: msgSet and recordBatch (#1049).
  • Add support for v1 DeleteTopics Request (#1052).
  • Add support for Go 1.10 (#1064).
  • Claim support for Kafka 1.1.0 (#1073).

Bug Fixes:

  • Fix FindCoordinatorResponse.encode to allow nil Coordinator (#1050, #1051).
  • Clear all metadata when we have the latest topic info (#1033).
  • Make PartitionConsumer.Close idempotent (#1092).

Version 1.16.0 (2018-02-12)

New Features:

  • Add support for the Create/Delete Topics request/response pairs (#1007, #1008).
  • Add support for the Describe/Create/Delete ACL request/response pairs (#1009).
  • Add support for the five transaction-related request/response pairs (#1016).

Improvements:

  • Permit setting version on mock producer responses (#999).
  • Add NewMockBrokerListener helper for testing TLS connections (#1019).
  • Changed the default value for Consumer.Fetch.Default from 32KiB to 1MiB which results in much higher throughput in most cases (#1024).
  • Reuse the time.Ticker across fetch requests in the PartitionConsumer to reduce CPU and memory usage when processing many partitions (#1028).
  • Assign relative offsets to messages in the producer to save the brokers a recompression pass (#1002, #1015).

Bug Fixes:

  • Fix producing uncompressed batches with the new protocol format (#1032).
  • Fix consuming compacted topics with the new protocol format (#1005).
  • Fix consuming topics with a mix of protocol formats (#1021).
  • Fix consuming when the broker includes multiple batches in a single response (#1022).
  • Fix detection of PartialTrailingMessage when the partial message was truncated before the magic value indicating its version (#1030).
  • Fix expectation-checking in the mock of SyncProducer.SendMessages (#1035).

Version 1.15.0 (2017-12-08)

New Features:

  • Claim official support for Kafka 1.0, though it did already work (#984).
  • Helper methods for Kafka version numbers to/from strings (#989).
  • Implement CreatePartitions request/response (#985).

Improvements:

  • Add error codes 45-60 (#986).

Bug Fixes:

  • Fix slow consuming for certain Kafka 0.11/1.0 configurations (#982).
  • Correctly determine when a FetchResponse contains the new message format (#990).
  • Fix producing with multiple headers (#996).
  • Fix handling of truncated record batches (#998).
  • Fix leaking metrics when closing brokers (#991).

Version 1.14.0 (2017-11-13)

New Features:

  • Add support for the new Kafka 0.11 record-batch format, including the wire protocol and the necessary behavioural changes in the producer and consumer. Transactions and idempotency are not yet supported, but producing and consuming should work with all the existing bells and whistles (batching, compression, etc) as well as the new custom headers. Thanks to Vlad Hanciuta of Arista Networks for this work. Part of (#901).

Bug Fixes:

  • Fix encoding of ProduceResponse versions in test (#970).
  • Return partial replicas list when we have it (#975).

Version 1.13.0 (2017-10-04)

New Features:

  • Support for FetchRequest version 3 (#905).
  • Permit setting version on mock FetchResponses (#939).
  • Add a configuration option to support storing only minimal metadata for extremely large clusters (#937).
  • Add PartitionOffsetManager.ResetOffset for backtracking tracked offsets (#932).

Improvements:

  • Provide the block-level timestamp when consuming compressed messages (#885).
  • Client.Replicas and Client.InSyncReplicas now respect the order returned by the broker, which can be meaningful (#930).
  • Use a Ticker to reduce consumer timer overhead at the cost of higher variance in the actual timeout (#933).

Bug Fixes:

  • Gracefully handle messages with negative timestamps (#907).
  • Raise a proper error when encountering an unknown message version (#940).

Version 1.12.0 (2017-05-08)

New Features:

  • Added support for the ApiVersions request and response pair, and Kafka version 0.10.2 (#867). Note that you still need to specify the Kafka version in the Sarama configuration for the time being.
  • Added a Brokers method to the Client which returns the complete set of active brokers (#813).
  • Added an InSyncReplicas method to the Client which returns the set of all in-sync broker IDs for the given partition, now that the Kafka versions for which this was misleading are no longer in our supported set (#872).
  • Added a NewCustomHashPartitioner method which allows constructing a hash partitioner with a custom hash method in case the default (FNV-1a) is not suitable (#837, #841).
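
A minimal sketch of plugging in a custom hash via NewCustomHashPartitioner; crc32.NewIEEE stands in here for any func() hash.Hash32 you might prefer over the default FNV-1a:

```go
package main

import (
	"hash/crc32"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	// Any constructor returning a hash.Hash32 works; CRC-32 (IEEE)
	// is used purely as an illustration of replacing FNV-1a.
	config.Producer.Partitioner = sarama.NewCustomHashPartitioner(crc32.NewIEEE)
	_ = config
}
```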

Improvements:

  • Recognize more Kafka error codes (#859).

Bug Fixes:

  • Fix an issue where decoding a malformed FetchRequest would not return the correct error (#818).
  • Respect ordering of group protocols in JoinGroupRequests. This fix is transparent if you're using the AddGroupProtocol or AddGroupProtocolMetadata helpers; otherwise you will need to switch from the GroupProtocols field (now deprecated) to use OrderedGroupProtocols (#812).
  • Fix an alignment-related issue with atomics on 32-bit architectures (#859).
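
The group-protocol ordering fix above can be sketched as follows; the protocol name and metadata bytes are placeholders, and this assumes a JoinGroupRequest being built by hand:

```go
metadataBytes := []byte{} // placeholder protocol metadata
req := &sarama.JoinGroupRequest{GroupId: "example-group"}

// Transparent path: the helper records protocols in order.
req.AddGroupProtocol("roundrobin", metadataBytes)

// Manual path: populate OrderedGroupProtocols instead of the
// deprecated GroupProtocols map so that ordering is preserved.
req.OrderedGroupProtocols = append(req.OrderedGroupProtocols,
	&sarama.GroupProtocol{Name: "roundrobin", Metadata: metadataBytes})
```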

Version 1.11.0 (2016-12-20)

Important: As of Sarama 1.11 it is necessary to set the config value of Producer.Return.Successes to true in order to use the SyncProducer. Previous versions would silently override this value when instantiating a SyncProducer which led to unexpected values and data races.
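
A minimal sketch of the configuration this note requires (the broker address is illustrative):

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	// As of Sarama 1.11, the SyncProducer requires this to be set
	// explicitly rather than overriding it silently.
	config.Producer.Return.Successes = true

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatalln(err)
	}
	defer producer.Close()
}
```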

New Features:

  • Metrics! Thanks to Sébastien Launay for all his work on this feature (#701, #746, #766).
  • Add support for LZ4 compression (#786).
  • Add support for ListOffsetRequest v1 and Kafka 0.10.1 (#775).
  • Added a HighWaterMarks method to the Consumer which aggregates the HighWaterMarkOffset values of its child topic/partitions (#769).

Bug Fixes:

  • Fixed producing when using timestamps, compression and Kafka 0.10 (#759).
  • Added missing decoder methods to DescribeGroups response (#756).
  • Fix producer shutdown when Return.Errors is disabled (#787).
  • Don't mutate configuration in SyncProducer (#790).
  • Fix crash on SASL initialization failure (#795).

Version 1.10.1 (2016-08-30)

Bug Fixes:

  • Fix the documentation for HashPartitioner which was incorrect (#717).
  • Permit client creation even when it is limited by ACLs (#722).
  • Several fixes to the consumer timer optimization code, regressions introduced in v1.10.0. Go's timers are finicky (#730, #733, #734).
  • Handle consuming compressed relative offsets with Kafka 0.10 (#735).

Version 1.10.0 (2016-08-02)

Important: As of Sarama 1.10 it is necessary to tell Sarama the version of Kafka you are running against (via the config.Version value) in order to use features that may not be compatible with old Kafka versions. If you don't specify this value it will default to 0.8.2 (the minimum supported), and trying to use more recent features (like the offset manager) will fail with an error.

Also: The offset-manager's behaviour has been changed to match the upstream java consumer (see #705 and #713). If you use the offset-manager, please ensure that you are committing one greater than the last consumed message offset or else you may end up consuming duplicate messages.
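
Both notes above can be sketched together; `pom` (a PartitionOffsetManager) and `msg` (a ConsumerMessage) are assumed to already exist:

```go
config := sarama.NewConfig()
// Declare the broker version; without this, Sarama assumes 0.8.2
// and version-gated features (such as the offset manager) fail.
config.Version = sarama.V0_10_0_0

// When committing through the offset manager, mark one greater than
// the last consumed offset, matching the upstream Java consumer:
pom.MarkOffset(msg.Offset+1, "")
```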

New Features:

  • Support for Kafka 0.10 (#672, #678, #681, and others).
  • Support for configuring the target Kafka version (#676).
  • Batch producing support in the SyncProducer (#677).
  • Extend producer mock to allow setting expectations on message contents (#667).

Improvements:

  • Support nil compressed messages for deleting in compacted topics (#634).
  • Pre-allocate decoding errors, greatly reducing heap usage and GC time against misbehaving brokers (#690).
  • Re-use consumer expiry timers, removing one allocation per consumed message (#707).

Bug Fixes:

  • Actually default the client ID to "sarama" like we say we do (#664).
  • Fix a rare issue where Client.Leader could return the wrong error (#685).
  • Fix a possible tight loop in the consumer (#693).
  • Match upstream's offset-tracking behaviour (#705).
  • Report UnknownTopicOrPartition errors from the offset manager (#706).
  • Fix possible negative partition value from the HashPartitioner (#709).

Version 1.9.0 (2016-05-16)

New Features:

  • Add support for custom offset manager retention durations (#602).
  • Publish low-level mocks to enable testing of third-party producer/consumer implementations (#570).
  • Declare support for Golang 1.6 (#611).
  • Support for SASL plain-text auth (#648).

Improvements:

  • Simplified broker locking scheme slightly (#604).
  • Documentation cleanup (#605, #621, #654).

Bug Fixes:

  • Fix race condition shutting down the OffsetManager (#658).

Version 1.8.0 (2016-02-01)

New Features:

  • Full support for Kafka 0.9:
    • All protocol messages and fields (#586, #588, #590).
    • Verified that TLS support works (#581).
    • Fixed the OffsetManager compatibility (#585).

Improvements:

  • Optimize for fewer system calls when reading from the network (#584).
  • Automatically retry InvalidMessage errors to match upstream behaviour (#589).

Version 1.7.0 (2015-12-11)

New Features:

  • Preliminary support for Kafka 0.9 (#572). This comes with several caveats:
    • Protocol-layer support is mostly in place (#577); however, Kafka 0.9 renamed some messages and fields, which we chose not to mirror in order to preserve API compatibility.
    • The producer and consumer work against 0.9, but the offset manager does not (#573).
    • TLS support may or may not work (#581).

Improvements:

  • Don't wait for request timeouts on dead brokers, greatly speeding recovery when the TCP connection is left hanging (#548).
  • Refactored part of the producer. The new version provides a much more elegant solution to #449. It is also slightly more efficient, and much more precise in calculating batch sizes when compression is used (#549, #550, #551).

Bug Fixes:

  • Fix race condition in consumer test mock (#553).

Version 1.6.1 (2015-09-25)

Bug Fixes:

  • Fix panic that could occur if a user-supplied message value failed to encode (#449).

Version 1.6.0 (2015-09-04)

New Features:

  • Implementation of a consumer offset manager using the APIs introduced in Kafka 0.8.2. The API is designed mainly for integration into a future high-level consumer, not for direct use, although it is possible to use it directly. (#461).

Improvements:

  • CRC32 calculation is much faster on machines with SSE4.2 instructions, removing a major hotspot from most profiles (#255).

Bug Fixes:

  • Make protocol decoding more robust against some malformed packets generated by go-fuzz (#523, #525) or found in other ways (#528).
  • Fix a potential race condition panic in the consumer on shutdown (#529).

Version 1.5.0 (2015-08-17)

New Features:

  • TLS-encrypted network connections are now supported. This feature is subject to change when Kafka releases built-in TLS support, but for now this is enough to work with TLS-terminating proxies (#154).
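
A hedged sketch of enabling this via the config; the tls.Config contents depend entirely on your broker or TLS-terminating proxy:

```go
config := sarama.NewConfig()
config.Net.TLS.Enable = true
config.Net.TLS.Config = &tls.Config{
	// Certificates, RootCAs, ServerName, etc. go here as your
	// deployment requires.
}
```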

Improvements:

  • The consumer will not block if a single partition is not drained by the user; all other partitions will continue to consume normally (#485).
  • Formatting of error strings has been much improved (#495).
  • Internal refactoring of the producer for code cleanliness and to enable future work (#300).

Bug Fixes:

  • Fix a potential deadlock in the consumer on shutdown (#475).

Version 1.4.3 (2015-07-21)

Bug Fixes:

  • Don't include the partitioner in the producer's "fetch partitions" circuit-breaker (#466).
  • Don't retry messages until the broker is closed when abandoning a broker in the producer (#468).
  • Update the import path for snappy-go, it has moved again and the API has changed slightly (#486).

Version 1.4.2 (2015-05-27)

Bug Fixes:

  • Update the import path for snappy-go, it has moved from google code to github (#456).

Version 1.4.1 (2015-05-25)

Improvements:

  • Optimizations when decoding snappy messages, thanks to John Potocny (#446).

Bug Fixes:

  • Fix hypothetical race conditions on producer shutdown (#450, #451).

Version 1.4.0 (2015-05-01)

New Features:

  • The consumer now implements Topics() and Partitions() methods to enable users to dynamically choose what topics/partitions to consume without instantiating a full client (#431).
  • The partition-consumer now exposes the high water mark offset value returned by the broker via the HighWaterMarkOffset() method (#339).
  • Added a kafka-console-consumer tool capable of handling multiple partitions, and deprecated the now-obsolete kafka-console-partitionConsumer (#439, #442).

Improvements:

  • The producer's logging during retry scenarios is more consistent, more useful, and slightly less verbose (#429).
  • The client now shuffles its initial list of seed brokers in order to prevent thundering herd on the first broker in the list (#441).

Bug Fixes:

  • The producer now correctly manages its state if retries occur when it is shutting down, fixing several instances of confusing behaviour and at least one potential deadlock (#419).
  • The consumer now handles messages for different partitions asynchronously, making it much more resilient to specific user code ordering (#325).

Version 1.3.0 (2015-04-16)

New Features:

  • The client now tracks consumer group coordinators using ConsumerMetadataRequests similar to how it tracks partition leadership using regular MetadataRequests (#411). This adds two methods to the client API:
    • Coordinator(consumerGroup string) (*Broker, error)
    • RefreshCoordinator(consumerGroup string) error
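
The two methods above might be combined as in this sketch (the group name is a placeholder, and `client` is an existing sarama.Client):

```go
broker, err := client.Coordinator("example-group")
if err != nil {
	// On a stale coordinator, force a refresh and retry once.
	if rerr := client.RefreshCoordinator("example-group"); rerr != nil {
		log.Fatalln(rerr)
	}
	broker, err = client.Coordinator("example-group")
}
```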

Improvements:

  • ConsumerMetadataResponses now automatically create a Broker object out of the ID/address/port combination for the Coordinator; accessing the fields individually has been deprecated (#413).
  • Much improved handling of OffsetOutOfRange errors in the consumer. Consumers will fail to start if the provided offset is out of range (#418) and they will automatically shut down if the offset falls out of range (#424).
  • Small performance improvement in encoding and decoding protocol messages (#427).

Bug Fixes:

  • Fix a rare race condition in the client's background metadata refresher if it happens to be activated while the client is being closed (#422).

Version 1.2.0 (2015-04-07)

Improvements:

  • The producer's behaviour when Flush.Frequency is set is now more intuitive (#389).
  • The producer is now somewhat more memory-efficient during and after retrying messages due to an improved queue implementation (#396).
  • The consumer produces much more useful logging output when leadership changes (#385).
  • The client's GetOffset method will now automatically refresh metadata and retry once in the event of stale information or similar (#394).
  • Broker connections now have support for using TCP keepalives (#407).

Bug Fixes:

  • The OffsetCommitRequest message now correctly implements all three possible API versions (#390, #400).

Version 1.1.0 (2015-03-20)

Improvements:

  • Wrap the producer's partitioner call in a circuit-breaker so that repeatedly broken topics don't choke throughput (#373).

Bug Fixes:

  • Fix the producer's internal reference counting in certain unusual scenarios (#367).
  • Fix the consumer's internal reference counting in certain unusual scenarios (#369).
  • Fix a condition where the producer's internal control messages could have gotten stuck (#368).
  • Fix an issue where invalid partition lists would be cached when asking for metadata for a non-existent topic (#372).

Version 1.0.0 (2015-03-17)

Version 1.0.0 is the first tagged version, and is almost a complete rewrite. The primary differences with previous untagged versions are:

  • The producer has been rewritten; there is now a SyncProducer with a blocking API, and an AsyncProducer that is non-blocking.
  • The consumer has been rewritten to only open one connection per broker instead of one connection per partition.
  • The main types of Sarama are now interfaces to make dependency injection easy; mock implementations for Consumer, SyncProducer and AsyncProducer are provided in the github.com/Shopify/sarama/mocks package.
  • For most use cases, it is no longer necessary to open a Client; this will be done for you.
  • All the configuration values have been unified in the Config struct.
  • Much improved test suite.