kafka package

Submodules

kafka.cluster module

class kafka.cluster.ClusterMetadata(**configs)[source]

Bases: object

A class to manage kafka cluster metadata.

This class does not perform any IO. It simply updates internal state given API responses (MetadataResponse, GroupCoordinatorResponse).

Keyword Arguments:
 
  • retry_backoff_ms (int) – Milliseconds to backoff when retrying on errors. Default: 100.
  • metadata_max_age_ms (int) – The period of time in milliseconds after which we force a refresh of metadata even if we haven’t seen any partition leadership changes to proactively discover any new brokers or partitions. Default: 300000
DEFAULT_CONFIG = {'retry_backoff_ms': 100, 'metadata_max_age_ms': 300000}
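The `**configs` keyword arguments above presumably overlay these defaults. A minimal sketch of that pattern (the `build_config` helper is hypothetical, not part of kafka-python's API):

```python
# Illustrative sketch: how **configs might overlay DEFAULT_CONFIG.
DEFAULT_CONFIG = {'retry_backoff_ms': 100, 'metadata_max_age_ms': 300000}

def build_config(**configs):
    """Copy the defaults, then apply any recognized overrides."""
    config = dict(DEFAULT_CONFIG)
    for key in config:
        if key in configs:
            config[key] = configs.pop(key)
    return config

cfg = build_config(retry_backoff_ms=250)
```

Unrecognized keys are simply left in `configs`; the real class may warn about or ignore them.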
add_group_coordinator(group, response)[source]

Update with metadata for a group coordinator

Parameters:
  • group (str) – name of group from GroupCoordinatorRequest
  • response (GroupCoordinatorResponse) – broker response
Returns:

True if metadata is updated, False on error

Return type:

bool

add_listener(listener)[source]

Add a callback function to be called on each metadata update

available_partitions_for_topic(topic)[source]

Return set of partitions with known leaders

Parameters:topic (str) – topic to check for partitions
Returns:{partition (int), ...}
Return type:set
broker_metadata(broker_id)[source]

Get BrokerMetadata

Parameters:broker_id (int) – node_id for a broker to check
Returns:BrokerMetadata or None if not found
brokers()[source]

Get all BrokerMetadata

Returns:{BrokerMetadata, ...}
Return type:set
coordinator_for_group(group)[source]

Return node_id of group coordinator.

Parameters:group (str) – name of consumer group
Returns:node_id for group coordinator
Return type:int
failed_update(exception)[source]

Update cluster state given a failed MetadataRequest.

leader_for_partition(partition)[source]

Return node_id of leader, -1 unavailable, None if unknown.

partitions_for_broker(broker_id)[source]

Return TopicPartitions for which the broker is a leader.

Parameters:broker_id (int) – node id for a broker
Returns:{TopicPartition, ...}
Return type:set
partitions_for_topic(topic)[source]

Return set of all partitions for topic (whether available or not)

Parameters:topic (str) – topic to check for partitions
Returns:{partition (int), ...}
Return type:set
refresh_backoff()[source]

Return milliseconds to wait before attempting to retry after failure

remove_listener(listener)[source]

Remove a previously added listener callback

request_update()[source]

Flags metadata for update, return Future()

Actual update must be handled separately. This method will only change the reported ttl()

Returns:kafka.future.Future (value will be the cluster object after update)
topics(exclude_internal_topics=True)[source]

Get set of known topics.

Parameters:exclude_internal_topics (bool) – Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to True the only way to receive records from an internal topic is subscribing to it. Default True
Returns:{topic (str), ...}
Return type:set
ttl()[source]

Milliseconds until metadata should be refreshed
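A minimal sketch of the ttl() computation, assuming staleness is measured from the last successful refresh against metadata_max_age_ms (names and details are illustrative, not the library's internals):

```python
import time

def metadata_ttl_ms(last_refresh_ms, metadata_max_age_ms=300000):
    """Milliseconds until the metadata is considered stale (never negative)."""
    now_ms = time.time() * 1000
    return max(0, last_refresh_ms + metadata_max_age_ms - now_ms)

# Metadata refreshed just now -> ttl is close to the full max age.
remaining = metadata_ttl_ms(time.time() * 1000)
```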

update_metadata(metadata)[source]

Update cluster state given a MetadataResponse.

Parameters:metadata (MetadataResponse) – broker response to a metadata request

Returns: None

with_partitions(partitions_to_add)[source]

Returns a copy of cluster metadata with partitions added

kafka.client module

class kafka.client.SimpleClient(hosts, client_id='kafka-python', timeout=120, correlation_id=0)[source]

Bases: object

CLIENT_ID = 'kafka-python'
DEFAULT_SOCKET_TIMEOUT_SECONDS = 120
close()[source]
copy()[source]

Create an inactive copy of the client object, suitable for passing to a separate thread.

Note that the copied connections are not initialized, so reinit() must be called on the returned copy.

ensure_topic_exists(topic, timeout=30)[source]
get_partition_ids_for_topic(topic)[source]
has_metadata_for_topic(topic)[source]
load_metadata_for_topics(*topics, **kwargs)[source]

Fetch broker and topic-partition metadata from the server.

Updates internal data: broker list, topic/partition list, and topic/partition -> broker map. This method should be called after receiving any error.

Note: Exceptions will not be raised in a full refresh (i.e. no topic list). In this case, error codes will be logged as errors. Partition-level errors will also not be raised here (a single partition w/o a leader, for example).

Parameters:
  • *topics (optional) – If a list of topics is provided, the metadata refresh will be limited to the specified topics only.
  • ignore_leadernotavailable (bool) – suppress LeaderNotAvailableError so that metadata is loaded correctly during auto-create. Default: False.
Raises:
  • UnknownTopicOrPartitionError – Raised for topics that do not exist, unless the broker is configured to auto-create topics.
  • LeaderNotAvailableError – Raised for topics that do not exist yet, when the broker is configured to auto-create topics. Retry after a short backoff (topics/partitions are initializing).
reinit()[source]
reset_all_metadata()[source]
reset_topic_metadata(*topics)[source]
send_consumer_metadata_request(payloads=(), fail_on_error=True, callback=None)[source]
send_fetch_request(payloads=(), fail_on_error=True, callback=None, max_wait_time=100, min_bytes=4096)[source]

Encode and send a FetchRequest

Payloads are grouped by topic and partition so they can be pipelined to the same brokers.

send_list_offset_request(payloads=(), fail_on_error=True, callback=None)[source]
send_metadata_request(payloads=(), fail_on_error=True, callback=None)[source]
send_offset_commit_request(group, payloads=(), fail_on_error=True, callback=None)[source]
send_offset_fetch_request(group, payloads=(), fail_on_error=True, callback=None)[source]
send_offset_fetch_request_kafka(group, payloads=(), fail_on_error=True, callback=None)[source]
send_offset_request(payloads=(), fail_on_error=True, callback=None)[source]
send_produce_request(payloads=(), acks=1, timeout=1000, fail_on_error=True, callback=None)[source]

Encode and send some ProduceRequests

ProduceRequests will be grouped by (topic, partition) and then sent to a specific broker. Output is a list of responses in the same order as the list of payloads specified

Parameters:
  • payloads (list of ProduceRequest) – produce requests to send to kafka ProduceRequest payloads must not contain duplicates for any topic-partition.
  • acks (int, optional) – how many acks the servers should receive from replica brokers before responding to the request. If it is 0, the server will not send any response. If it is 1, the server will wait until the data is written to the local log before sending a response. If it is -1, the server will wait until the message is committed by all in-sync replicas before sending a response. For any value > 1, the server will wait for this number of acks to occur (but the server will never wait for more acknowledgements than there are in-sync replicas). defaults to 1.
  • timeout (int, optional) – maximum time in milliseconds the server can await the receipt of the number of acks, defaults to 1000.
  • fail_on_error (bool, optional) – raise exceptions on connection and server response errors, defaults to True.
  • callback (function, optional) – instead of returning the ProduceResponse, first pass it through this function, defaults to None.
Returns:

list of ProduceResponses, or callback results if supplied, in the order of input payloads

topics

kafka.codec module

kafka.codec.gzip_decode(payload)[source]
kafka.codec.gzip_encode(payload, compresslevel=None)[source]
kafka.codec.has_gzip()[source]
kafka.codec.has_lz4()[source]
kafka.codec.has_snappy()[source]
kafka.codec.lz4_decode_old_kafka(payload)[source]
kafka.codec.lz4_encode_old_kafka(payload)[source]

Encode payload for 0.8/0.9 brokers – requires an incorrect header checksum.

kafka.codec.lz4f_decode(payload)[source]

Decode payload using interoperable LZ4 framing. Requires Kafka >= 0.10

kafka.codec.snappy_decode(payload)[source]
kafka.codec.snappy_encode(payload, xerial_compatible=True, xerial_blocksize=32768)[source]

Encodes the given data with snappy compression.

If xerial_compatible is set then the stream is encoded in a fashion compatible with the xerial snappy library.

The block size (xerial_blocksize) controls how frequently blocking occurs; 32k is the default in the xerial library.

The format winds up being:

Header     Block1 len   Block1 data    ...   Blockn len   Blockn data
16 bytes   BE int32     snappy bytes   ...   BE int32     snappy bytes

It is important to note that the blocksize is the amount of uncompressed data presented to snappy at each block, whereas the blocklen is the number of bytes that will be present in the stream; so the length will always be <= blocksize.
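The framing described above can be sketched as follows. The 16-byte header here follows the commonly documented xerial v1 layout (8 magic bytes plus two big-endian int32 version fields); treat the exact constants as an assumption, and note that an identity "compressor" stands in for snappy so the sketch runs without the python-snappy dependency:

```python
import struct

# Assumed xerial v1 header: 8 magic bytes + two BE int32 version fields.
XERIAL_HEADER = b'\x82SNAPPY\x00' + struct.pack('!ii', 1, 1)

def xerial_frame(payload, compress, blocksize=32768):
    """Split payload into <= blocksize chunks; frame each chunk as a
    big-endian int32 length followed by the compressed bytes."""
    out = [XERIAL_HEADER]
    for i in range(0, len(payload), blocksize):
        block = compress(payload[i:i + blocksize])
        out.append(struct.pack('!i', len(block)))
        out.append(block)
    return b''.join(out)

# Identity "compressor" as a stand-in for snappy.compress.
framed = xerial_frame(b'x' * 70000, compress=lambda b: b)
```

With real snappy compression the per-block length field would reflect the compressed size, which is why blocklen can differ from blocksize.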

kafka.common module

kafka.conn module

class kafka.conn.BrokerConnection(host, port, afi, **configs)[source]

Bases: object

Initialize a Kafka broker connection

Keyword Arguments:
 
  • client_id (str) – a name for this client. This string is passed in each request to servers and can be used to identify specific server-side log entries that correspond to this client. Also submitted to GroupCoordinator for logging with respect to consumer group administration. Default: ‘kafka-python-{version}’
  • reconnect_backoff_ms (int) – The amount of time in milliseconds to wait before attempting to reconnect to a given host. Default: 50.
  • reconnect_backoff_max_ms (int) – The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the backoff resulting in a random range between 20% below and 20% above the computed value. Default: 1000.
  • request_timeout_ms (int) – Client request timeout in milliseconds. Default: 40000.
  • max_in_flight_requests_per_connection (int) – Requests are pipelined to kafka brokers up to this number of maximum requests per broker connection. Default: 5.
  • receive_buffer_bytes (int) – The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. Default: None (relies on system defaults). Java client defaults to 32768.
  • send_buffer_bytes (int) – The size of the TCP send buffer (SO_SNDBUF) to use when sending data. Default: None (relies on system defaults). Java client defaults to 131072.
  • socket_options (list) – List of tuple-arguments to socket.setsockopt to apply to broker connection sockets. Default: [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]
  • security_protocol (str) – Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. Default: PLAINTEXT.
  • ssl_context (ssl.SSLContext) – pre-configured SSLContext for wrapping socket connections. If provided, all other ssl_* configurations will be ignored. Default: None.
  • ssl_check_hostname (bool) – flag to configure whether ssl handshake should verify that the certificate matches the brokers hostname. default: True.
  • ssl_cafile (str) – optional filename of ca file to use in certificate verification. default: None.
  • ssl_certfile (str) – optional filename of file in pem format containing the client certificate, as well as any ca certificates needed to establish the certificate’s authenticity. default: None.
  • ssl_keyfile (str) – optional filename containing the client private key. default: None.
  • ssl_password (callable, str, bytes, bytearray) – optional password or callable function that returns a password, for decrypting the client private key. Default: None.
  • ssl_crlfile (str) – optional filename containing the CRL to check for certificate expiration. By default, no CRL check is done. When providing a file, only the leaf certificate will be checked against this CRL. The CRL can only be checked with Python 3.4+ or 2.7.9+. default: None.
  • api_version (tuple) – Specify which Kafka API version to use. Accepted values are: (0, 8, 0), (0, 8, 1), (0, 8, 2), (0, 9), (0, 10). Default: (0, 8, 2)
  • api_version_auto_timeout_ms (int) – number of milliseconds to throw a timeout exception from the constructor when checking the broker api version. Only applies if api_version is None
  • state_change_callback (callable) – function to be called when the connection state changes from CONNECTING to CONNECTED etc.
  • metrics (kafka.metrics.Metrics) – Optionally provide a metrics instance for capturing network IO stats. Default: None.
  • metric_group_prefix (str) – Prefix for metric names. Default: ‘’
  • sasl_mechanism (str) – Authentication mechanism when security_protocol is configured for SASL_PLAINTEXT or SASL_SSL. Valid values are: PLAIN, GSSAPI. Default: PLAIN
  • sasl_plain_username (str) – username for sasl PLAIN authentication. Default: None
  • sasl_plain_password (str) – password for sasl PLAIN authentication. Default: None
  • sasl_kerberos_service_name (str) – Service name to include in GSSAPI sasl mechanism handshake. Default: ‘kafka’
DEFAULT_CONFIG = {'reconnect_backoff_ms': 50, 'ssl_check_hostname': True, 'sasl_mechanism': 'PLAIN', 'receive_buffer_bytes': None, 'sasl_plain_password': None, 'ssl_password': None, 'sasl_plain_username': None, 'ssl_cafile': None, 'request_timeout_ms': 40000, 'ssl_keyfile': None, 'sasl_kerberos_service_name': 'kafka', 'max_in_flight_requests_per_connection': 5, 'api_version': (0, 8, 2), 'ssl_certfile': None, 'reconnect_backoff_max_ms': 1000, 'send_buffer_bytes': None, 'ssl_crlfile': None, 'ssl_context': None, 'metrics': None, 'node_id': 0, 'client_id': 'kafka-python-1.3.6.dev', 'metric_group_prefix': '', 'security_protocol': 'PLAINTEXT', 'socket_options': [(6, 1, 1)], 'state_change_callback': <function <lambda>>}
SASL_MECHANISMS = ('PLAIN', 'GSSAPI')
SECURITY_PROTOCOLS = ('PLAINTEXT', 'SSL', 'SASL_PLAINTEXT', 'SASL_SSL')
blacked_out()[source]

Return true if we are disconnected from the given node and can’t re-establish a connection yet
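The reconnect backoff that governs this blackout period (reconnect_backoff_ms / reconnect_backoff_max_ms above) can be sketched as exponential growth with the stated 20% jitter (illustrative only, not kafka-python's code):

```python
import random

def reconnect_backoff_ms(failures, base_ms=50, max_ms=1000, jitter=0.2):
    """Backoff doubles per consecutive failure, capped at max_ms, then
    randomized to +/- 20% to avoid connection storms."""
    backoff = min(base_ms * 2 ** (failures - 1), max_ms)
    return backoff * random.uniform(1 - jitter, 1 + jitter)
```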

can_send_more()[source]

Return True unless the number of in-flight requests has reached max_in_flight_requests_per_connection.

check_version(timeout=2, strict=False)[source]

Attempt to guess the broker version.

Note: This is a blocking call.

Returns: version tuple, i.e. (0, 10), (0, 9), (0, 8, 2), ...

close(error=None)[source]

Close socket and fail all in-flight-requests.

Parameters:error (Exception, optional) – pending in-flight-requests will be failed with this exception. Default: kafka.errors.ConnectionError.
connect()[source]

Attempt to connect and return ConnectionState

connected()[source]

Return True iff socket is connected.

connecting()[source]

Returns True if still connecting (this may encompass several different states, such as SSL handshake, authorization, etc).

connection_delay()[source]
disconnected()[source]

Return True iff socket is closed

recv()[source]

Non-blocking network receive.

Return response if available

requests_timed_out()[source]
send(request)[source]

send request, return Future()

Can block on network if request is larger than send_buffer_bytes

class kafka.conn.BrokerConnectionMetrics(metrics, metric_group_prefix, node_id)[source]

Bases: object

class kafka.conn.ConnectionStates[source]

Bases: object

AUTHENTICATING = '<authenticating>'
CONNECTED = '<connected>'
CONNECTING = '<connecting>'
DISCONNECTED = '<disconnected>'
DISCONNECTING = '<disconnecting>'
HANDSHAKE = '<handshake>'
kafka.conn.collect_hosts(hosts, randomize=True)[source]

Collects a comma-separated set of hosts (host:port) and optionally randomizes the returned list.

kafka.conn.get_ip_port_afi(host_and_port_str)[source]

Parse the IP and port from a string in the format of:

  • host_or_ip <- Can be either IPv4 address literal or hostname/fqdn
  • host_or_ipv4:port <- Can be either IPv4 address literal or hostname/fqdn
  • [host_or_ip] <- IPv6 address literal
  • [host_or_ip]:port <- IPv6 address literal

Note

IPv6 address literals with ports must be enclosed in brackets

Note

If the port is not specified, the default will be returned.

Returns:tuple (host, port, afi), afi will be socket.AF_INET or socket.AF_INET6 or socket.AF_UNSPEC
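A simplified sketch of the parsing rules above, including the bracketed-IPv6 case (this is illustrative; the real get_ip_port_afi also distinguishes IPv4 literals, returning socket.AF_INET for them):

```python
import socket

def parse_host_port(host_and_port_str, default_port=9092):
    """Parse host[:port] with optional [bracketed] IPv6 literals."""
    s = host_and_port_str.strip()
    if s.startswith('['):                      # [IPv6] or [IPv6]:port
        host, _, rest = s[1:].partition(']')
        port = int(rest[1:]) if rest.startswith(':') else default_port
        return host, port, socket.AF_INET6
    if s.count(':') == 1:                      # host:port
        host, _, port = s.partition(':')
        return host, int(port), socket.AF_UNSPEC
    return s, default_port, socket.AF_UNSPEC   # bare host, default port
```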

kafka.context module

Context manager to commit/rollback consumer offsets.

class kafka.context.OffsetCommitContext(consumer)[source]

Bases: object

Provides commit/rollback semantics around a SimpleConsumer.

Usage assumes that auto_commit is disabled, that messages are consumed in batches, and that the consuming process will record its own successful processing of each message. Both the commit and rollback operations respect a “high-water mark” to ensure that the last unsuccessfully processed message will be retried.

Example:

consumer = SimpleConsumer(client, group, topic, auto_commit=False)
consumer.provide_partition_info()
consumer.fetch_last_known_offsets()

while some_condition:
    with OffsetCommitContext(consumer) as context:
        messages = consumer.get_messages(count, block=False)

        for partition, message in messages:
            if can_process(message):
                context.mark(partition, message.offset)
            else:
                break

        if not context:
            sleep(delay)

These semantics allow for deferred message processing (e.g. if can_process compares message time to clock time) and for repeated processing of the last unsuccessful message (until some external error is resolved).

commit()[source]

Commit this context’s offsets:

  • If the high-water mark has moved, commit up to and position the consumer at the high-water mark.
  • Otherwise, reset the consumer to the initial offsets.
commit_partition_offsets(partition_offsets)[source]

Commit explicit partition/offset pairs.

handle_out_of_range()[source]

Handle out of range condition by seeking to the beginning of valid ranges.

This assumes that an out of range doesn’t happen by seeking past the end of valid ranges – which is far less likely.

mark(partition, offset)[source]

Set the high-water mark in the current context.

In order to know the current partition, it is helpful to initialize the consumer to provide partition info via:

consumer.provide_partition_info()
rollback()[source]

Rollback this context:

  • Position the consumer at the initial offsets.
update_consumer_offsets(partition_offsets)[source]

Update consumer offsets to explicit positions.

kafka.protocol module

kafka.util module

class kafka.util.ReentrantTimer(t, fn, *args, **kwargs)[source]

Bases: object

A timer that can be restarted, unlike threading.Timer (although this uses threading.Timer)

Parameters:
  • t – timer interval in milliseconds
  • fn – a callable to invoke
  • args – tuple of args to be passed to function
  • kwargs – keyword arguments to be passed to function
start()[source]
stop()[source]
class kafka.util.WeakMethod(object_dot_method)[source]

Bases: object

Callable that weakly references a method and the object it is bound to. It is based on http://stackoverflow.com/a/24287465.

Parameters:object_dot_method – A bound instance method (i.e. ‘object.method’).
kafka.util.crc32(data)[source]
kafka.util.group_by_topic_and_partition(tuples)[source]
kafka.util.read_short_string(data, cur)[source]
kafka.util.relative_unpack(fmt, data, cur)[source]
kafka.util.try_method_on_system_exit(obj, method, *args, **kwargs)[source]
kafka.util.write_int_string(s)[source]
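Several send_* methods above group payloads by topic and partition before dispatch; group_by_topic_and_partition plausibly builds a nested mapping like the sketch below (illustrative, with a hypothetical Payload tuple; the real helper may differ in detail):

```python
from collections import defaultdict, namedtuple

# Hypothetical payload shape for illustration.
Payload = namedtuple('Payload', ['topic', 'partition', 'messages'])

def group_payloads(payloads):
    """Index a flat payload list as {topic: {partition: payload}}."""
    grouped = defaultdict(dict)
    for p in payloads:
        grouped[p.topic][p.partition] = p
    return grouped

grouped = group_payloads([
    Payload('events', 0, ['a']),
    Payload('events', 1, ['b']),
    Payload('logs', 0, ['c']),
])
```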

Module contents

class kafka.KafkaConsumer(*topics, **configs)[source]

Bases: kafka.vendor.six.Iterator

Consume records from a Kafka cluster.

The consumer will transparently handle the failure of servers in the Kafka cluster, and adapt as topic-partitions are created or migrate between brokers. It also interacts with the assigned kafka Group Coordinator node to allow multiple consumers to load balance consumption of topics (requires kafka >= 0.9.0.0).

The consumer is not thread safe and should not be shared across threads.

Parameters:

*topics (str) – optional list of topics to subscribe to. If not set, call subscribe() or assign() before consuming records.

Keyword Arguments:
 
  • bootstrap_servers – ‘host[:port]’ string (or list of ‘host[:port]’ strings) that the consumer should contact to bootstrap initial cluster metadata. This does not have to be the full node list. It just needs to have at least one broker that will respond to a Metadata API Request. Default port is 9092. If no servers are specified, will default to localhost:9092.
  • client_id (str) – A name for this client. This string is passed in each request to servers and can be used to identify specific server-side log entries that correspond to this client. Also submitted to GroupCoordinator for logging with respect to consumer group administration. Default: ‘kafka-python-{version}’
  • group_id (str or None) – The name of the consumer group to join for dynamic partition assignment (if enabled), and to use for fetching and committing offsets. If None, auto-partition assignment (via group coordinator) and offset commits are disabled. Default: None
  • key_deserializer (callable) – Any callable that takes a raw message key and returns a deserialized key.
  • value_deserializer (callable) – Any callable that takes a raw message value and returns a deserialized value.
  • fetch_min_bytes (int) – Minimum amount of data the server should return for a fetch request, otherwise wait up to fetch_max_wait_ms for more data to accumulate. Default: 1.
  • fetch_max_wait_ms (int) – The maximum amount of time in milliseconds the server will block before answering the fetch request if there isn’t sufficient data to immediately satisfy the requirement given by fetch_min_bytes. Default: 500.
  • fetch_max_bytes (int) – The maximum amount of data the server should return for a fetch request. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. NOTE: consumer performs fetches to multiple brokers in parallel so memory usage will depend on the number of brokers containing partitions for the topic. Supported Kafka version >= 0.10.1.0. Default: 52428800 (50 Mb).
  • max_partition_fetch_bytes (int) – The maximum amount of data per-partition the server will return. The maximum total memory used for a request = #partitions * max_partition_fetch_bytes. This size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on a certain partition. Default: 1048576.
  • request_timeout_ms (int) – Client request timeout in milliseconds. Default: 40000.
  • retry_backoff_ms (int) – Milliseconds to backoff when retrying on errors. Default: 100.
  • reconnect_backoff_ms (int) – The amount of time in milliseconds to wait before attempting to reconnect to a given host. Default: 50.
  • reconnect_backoff_max_ms (int) – The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the backoff resulting in a random range between 20% below and 20% above the computed value. Default: 1000.
  • max_in_flight_requests_per_connection (int) – Requests are pipelined to kafka brokers up to this number of maximum requests per broker connection. Default: 5.
  • auto_offset_reset (str) – A policy for resetting offsets on OffsetOutOfRange errors: ‘earliest’ will move to the oldest available message, ‘latest’ will move to the most recent. Any other value will raise the exception. Default: ‘latest’.
  • enable_auto_commit (bool) – If True , the consumer’s offset will be periodically committed in the background. Default: True.
  • auto_commit_interval_ms (int) – Number of milliseconds between automatic offset commits, if enable_auto_commit is True. Default: 5000.
  • default_offset_commit_callback (callable) – Called as callback(offsets, response) response will be either an Exception or an OffsetCommitResponse struct. This callback can be used to trigger custom actions when a commit request completes.
  • check_crcs (bool) – Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. Default: True
  • metadata_max_age_ms (int) – The period of time in milliseconds after which we force a refresh of metadata, even if we haven’t seen any partition leadership changes to proactively discover any new brokers or partitions. Default: 300000
  • partition_assignment_strategy (list) – List of objects to use to distribute partition ownership amongst consumer instances when group management is used. Default: [RangePartitionAssignor, RoundRobinPartitionAssignor]
  • heartbeat_interval_ms (int) – The expected time in milliseconds between heartbeats to the consumer coordinator when using Kafka’s group management feature. Heartbeats are used to ensure that the consumer’s session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session_timeout_ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. Default: 3000
  • session_timeout_ms (int) – The timeout used to detect failures when using Kafka’s group management facilities. Default: 30000
  • max_poll_records (int) – The maximum number of records returned in a single call to poll(). Default: 500
  • receive_buffer_bytes (int) – The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. Default: None (relies on system defaults). The java client defaults to 32768.
  • send_buffer_bytes (int) – The size of the TCP send buffer (SO_SNDBUF) to use when sending data. Default: None (relies on system defaults). The java client defaults to 131072.
  • socket_options (list) – List of tuple-arguments to socket.setsockopt to apply to broker connection sockets. Default: [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]
  • consumer_timeout_ms (int) – number of milliseconds to block during message iteration before raising StopIteration (i.e., ending the iterator). Default: block forever (float(‘inf’)).
  • skip_double_compressed_messages (bool) – A bug in KafkaProducer <= 1.2.4 caused some messages to be corrupted via double-compression. By default, the fetcher will return these messages as a compressed blob of bytes with a single offset, i.e. how the message was actually published to the cluster. If you prefer to have the fetcher automatically detect corrupt messages and skip them, set this option to True. Default: False.
  • security_protocol (str) – Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL. Default: PLAINTEXT.
  • ssl_context (ssl.SSLContext) – Pre-configured SSLContext for wrapping socket connections. If provided, all other ssl_* configurations will be ignored. Default: None.
  • ssl_check_hostname (bool) – Flag to configure whether ssl handshake should verify that the certificate matches the brokers hostname. Default: True.
  • ssl_cafile (str) – Optional filename of ca file to use in certificate verification. Default: None.
  • ssl_certfile (str) – Optional filename of file in pem format containing the client certificate, as well as any ca certificates needed to establish the certificate’s authenticity. Default: None.
  • ssl_keyfile (str) – Optional filename containing the client private key. Default: None.
  • ssl_password (str) – Optional password to be used when loading the certificate chain. Default: None.
  • ssl_crlfile (str) – Optional filename containing the CRL to check for certificate expiration. By default, no CRL check is done. When providing a file, only the leaf certificate will be checked against this CRL. The CRL can only be checked with Python 3.4+ or 2.7.9+. Default: None.
  • api_version (tuple) –

    Specify which Kafka API version to use. If set to None, the client will attempt to infer the broker version by probing various APIs. Different versions enable different functionality.

    Examples

      • (0, 9) enables full group coordination features with automatic partition assignment and rebalancing,
      • (0, 8, 2) enables kafka-storage offset commits with manual partition assignment only,
      • (0, 8, 1) enables zookeeper-storage offset commits with manual partition assignment only,
      • (0, 8, 0) enables basic functionality but requires manual partition assignment and offset management.

    For the full list of supported versions, see KafkaClient.API_VERSIONS. Default: None

  • api_version_auto_timeout_ms (int) – number of milliseconds to throw a timeout exception from the constructor when checking the broker api version. Only applies if api_version is None
  • metric_reporters (list) – A list of classes to use as metrics reporters. Implementing the AbstractMetricsReporter interface allows plugging in classes that will be notified of new metric creation. Default: []
  • metrics_num_samples (int) – The number of samples maintained to compute metrics. Default: 2
  • metrics_sample_window_ms (int) – The maximum age in milliseconds of samples used to compute metrics. Default: 30000
  • selector (selectors.BaseSelector) – Provide a specific selector implementation to use for I/O multiplexing. Default: selectors.DefaultSelector
  • exclude_internal_topics (bool) – Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to True the only way to receive records from an internal topic is subscribing to it. Requires 0.10+ Default: True
  • sasl_mechanism (str) – String picking sasl mechanism when security_protocol is SASL_PLAINTEXT or SASL_SSL. Currently only PLAIN is supported. Default: None
  • sasl_plain_username (str) – Username for sasl PLAIN authentication. Default: None
  • sasl_plain_password (str) – Password for sasl PLAIN authentication. Default: None
  • sasl_kerberos_service_name (str) – Service name to include in GSSAPI sasl mechanism handshake. Default: ‘kafka’

Note

Configuration parameters are described in more detail at https://kafka.apache.org/documentation/#newconsumerconfigs

DEFAULT_CONFIG = {'reconnect_backoff_ms': 50, 'ssl_check_hostname': True, 'fetch_max_bytes': 52428800, 'sasl_mechanism': None, 'partition_assignment_strategy': (<class 'kafka.coordinator.assignors.range.RangePartitionAssignor'>, <class 'kafka.coordinator.assignors.roundrobin.RoundRobinPartitionAssignor'>), 'bootstrap_servers': 'localhost', 'sasl_plain_password': None, 'ssl_password': None, 'ssl_cafile': None, 'request_timeout_ms': 40000, 'enable_auto_commit': True, 'heartbeat_interval_ms': 3000, 'ssl_keyfile': None, 'max_in_flight_requests_per_connection': 5, 'max_poll_records': 500, 'selector': <class 'kafka.vendor.selectors34.EpollSelector'>, 'client_id': 'kafka-python-1.3.6.dev', 'connections_max_idle_ms': 540000, 'security_protocol': 'PLAINTEXT', 'group_id': None, 'fetch_min_bytes': 1, 'sasl_kerberos_service_name': 'kafka', 'skip_double_compressed_messages': False, 'receive_buffer_bytes': None, 'auto_offset_reset': 'latest', 'consumer_timeout_ms': inf, 'metadata_max_age_ms': 300000, 'exclude_internal_topics': True, 'sasl_plain_username': None, 'default_offset_commit_callback': <function <lambda>>, 'metric_reporters': [], 'api_version': None, 'ssl_certfile': None, 'api_version_auto_timeout_ms': 2000, 'reconnect_backoff_max_ms': 1000, 'key_deserializer': None, 'send_buffer_bytes': None, 'ssl_crlfile': None, 'max_partition_fetch_bytes': 1048576, 'ssl_context': None, 'check_crcs': True, 'metrics_num_samples': 2, 'metric_group_prefix': 'consumer', 'session_timeout_ms': 30000, 'auto_commit_interval_ms': 5000, 'retry_backoff_ms': 100, 'metrics_sample_window_ms': 30000, 'value_deserializer': None, 'socket_options': [(6, 1, 1)], 'fetch_max_wait_ms': 500}
assign(partitions)[source]

Manually assign a list of TopicPartitions to this consumer.

Parameters:

partitions (list of TopicPartition) – Assignment for this instance.

Raises:
  • IllegalStateError – If consumer has previously called subscribe().

Warning

It is not possible to use both manual partition assignment with assign() and group assignment with subscribe().

Note

This interface does not support incremental assignment and will replace the previous assignment (if there was one).

Note

Manual topic assignment through this method does not use the consumer’s group management functionality. As such, there will be no rebalance operation triggered when group membership or cluster and topic metadata change.
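
A minimal sketch of manual assignment (the broker address, topic name, and helper name are illustrative, not part of this API):

```python
def assign_explicit(consumer, topic, partition_ids):
    # Replaces any previous assignment; do not mix with subscribe().
    from kafka import TopicPartition  # requires kafka-python

    consumer.assign([TopicPartition(topic, p) for p in partition_ids])
    return consumer.assignment()
```

Because no group management is involved, no rebalance will ever fire for these partitions.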

assignment()[source]

Get the TopicPartitions currently assigned to this consumer.

If partitions were directly assigned using assign(), then this will simply return the same partitions that were previously assigned. If topics were subscribed using subscribe(), then this will give the set of topic partitions currently assigned to the consumer (which may be None if the assignment hasn’t happened yet, or if the partitions are in the process of being reassigned).

Returns:{TopicPartition, ...}
Return type:set
beginning_offsets(partitions)[source]

Get the first offset for the given partitions.

This method does not change the current consumer position of the partitions.

Note

This method may block indefinitely if the partition does not exist.

Parameters:

partitions (list) – List of TopicPartition instances to fetch offsets for.

Returns:

The earliest available offsets for the given partitions.

Return type:

``{TopicPartition: int}``

Raises:
  • UnsupportedVersionError – If the broker does not support looking up the offsets by timestamp.
  • KafkaTimeoutError – If fetch failed in request_timeout_ms.
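
A sketch combining this with end_offsets() to get each partition's available offset range (the helper name is hypothetical; requires kafka-python and a reachable broker, and may block if the topic does not exist, per the note above):

```python
def offset_ranges(consumer, topic):
    # Returns ({TopicPartition: earliest}, {TopicPartition: end}) for
    # every partition of the topic.
    from kafka import TopicPartition  # requires kafka-python

    parts = [TopicPartition(topic, p)
             for p in consumer.partitions_for_topic(topic)]
    return consumer.beginning_offsets(parts), consumer.end_offsets(parts)
```
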
close(autocommit=True)[source]

Close the consumer, waiting indefinitely for any needed cleanup.

Keyword Arguments:
 autocommit (bool) – If auto-commit is configured for this consumer, this optional flag causes the consumer to attempt to commit any pending consumed offsets prior to close. Default: True
commit(offsets=None)[source]

Commit offsets to kafka, blocking until success or error.

This commits offsets only to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used. To avoid re-processing the last message read if a consumer is restarted, the committed offset should be the next message your application should consume, i.e.: last_offset + 1.

Blocks until either the commit succeeds or an unrecoverable error is encountered (in which case it is thrown to the caller).

Currently only supports kafka-topic offset storage (not zookeeper).

Parameters:offsets (dict, optional) – {TopicPartition: OffsetAndMetadata} dict to commit with the configured group_id. Defaults to currently consumed offsets for all subscribed partitions.
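
A sketch of the last_offset + 1 rule described above (the helper names are illustrative; the commit itself requires kafka-python and a live broker):

```python
def next_offset_to_commit(last_offset):
    # Commit the offset of the *next* message to consume, so a restarted
    # consumer does not re-process the last record it already handled.
    return last_offset + 1

def commit_after_processing(consumer, partition, last_offset):
    # partition is a TopicPartition; blocks until success or error.
    from kafka.structs import OffsetAndMetadata  # requires kafka-python

    consumer.commit({partition: OffsetAndMetadata(
        next_offset_to_commit(last_offset), '')})
```
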
commit_async(offsets=None, callback=None)[source]

Commit offsets to kafka asynchronously, optionally firing callback.

This commits offsets only to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used. To avoid re-processing the last message read if a consumer is restarted, the committed offset should be the next message your application should consume, i.e.: last_offset + 1.

This is an asynchronous call and will not block. Any errors encountered are either passed to the callback (if provided) or discarded.

Parameters:
  • offsets (dict, optional) – {TopicPartition: OffsetAndMetadata} dict to commit with the configured group_id. Defaults to currently consumed offsets for all subscribed partitions.
  • callback (callable, optional) – Called as callback(offsets, response) with response as either an Exception or an OffsetCommitResponse struct. This callback can be used to trigger custom actions when a commit request completes.
Returns:

kafka.future.Future
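
A sample callback matching the signature described above (hypothetical name; note that the response argument is an Exception on failure):

```python
def log_commit_result(offsets, response):
    # Called by commit_async() when the commit request completes.
    if isinstance(response, Exception):
        print('commit failed for %s: %s' % (offsets, response))
    else:
        print('committed: %s' % (offsets,))

# usage: consumer.commit_async(callback=log_commit_result)
```
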

committed(partition)[source]

Get the last committed offset for the given partition.

This offset will be used as the position for the consumer in the event of a failure.

This call may block to do a remote call if the partition in question isn’t assigned to this consumer or if the consumer hasn’t yet initialized its cache of committed offsets.

Parameters:partition (TopicPartition) – The partition to check.
Returns:The last committed offset, or None if there was no prior commit.
configure(**configs)[source]
end_offsets(partitions)[source]

Get the last offset for the given partitions. The last offset of a partition is the offset of the upcoming message, i.e. the offset of the last available message + 1.

This method does not change the current consumer position of the partitions.

Note

This method may block indefinitely if the partition does not exist.

Parameters:

partitions (list) – List of TopicPartition instances to fetch offsets for.

Returns:

The end offsets for the given partitions.

Return type:

``{TopicPartition: int}``

Raises:
  • UnsupportedVersionError – If the broker does not support looking up the offsets by timestamp.
  • KafkaTimeoutError – If fetch failed in request_timeout_ms
fetch_messages()[source]
get_partition_offsets(topic, partition, request_time_ms, max_num_offsets)[source]
highwater(partition)[source]

Last known highwater offset for a partition.

A highwater offset is the offset that will be assigned to the next message that is produced. It may be useful for calculating lag, by comparing with the reported position. Note that both position and highwater refer to the next offset – i.e., highwater offset is one greater than the newest available message.

Highwater offsets are returned in FetchResponse messages, so will not be available if no FetchRequests have been sent for this partition yet.

Parameters:partition (TopicPartition) – Partition to check
Returns:Offset if available
Return type:int or None
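
The lag calculation mentioned above is plain arithmetic, since position and highwater both refer to the next offset (helper name is illustrative):

```python
def consumer_lag(position, highwater):
    # Number of messages available on the broker but not yet consumed.
    # highwater is None until a FetchResponse has been seen.
    if highwater is None:
        return None
    return highwater - position
```
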
metrics(raw=False)[source]

Get metrics on consumer performance.

This is ported from the Java Consumer, for details see: https://kafka.apache.org/documentation/#new_consumer_monitoring

Warning

This is an unstable interface. It may change in future releases without warning.

offsets(group=None)[source]
offsets_for_times(timestamps)[source]

Look up the offsets for the given partitions by timestamp. The returned offset for each partition is the earliest offset whose timestamp is greater than or equal to the given timestamp in the corresponding partition.

This is a blocking call. The consumer does not have to be assigned the partitions.

If the message format version in a partition is before 0.10.0, i.e. the messages do not have timestamps, None will be returned for that partition. None will also be returned for the partition if there are no messages in it.

Note

This method may block indefinitely if the partition does not exist.

Parameters:

timestamps (dict) – {TopicPartition: int} mapping from partition to the timestamp to look up. Unit should be milliseconds since beginning of the epoch (midnight Jan 1, 1970 (UTC))

Returns:

Mapping from partition to the timestamp and offset of the first message with timestamp greater than or equal to the target timestamp.

Return type:

``{TopicPartition: OffsetAndTimestamp}``

Raises:
  • ValueError – If the target timestamp is negative
  • UnsupportedVersionError – If the broker does not support looking up the offsets by timestamp.
  • KafkaTimeoutError – If fetch failed in request_timeout_ms
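
A sketch of seeking each partition to a wall-clock time (helper names are illustrative; the seek itself requires kafka-python and a live broker, while the millisecond conversion is plain arithmetic):

```python
from datetime import datetime, timezone

def to_epoch_ms(dt):
    # offsets_for_times() expects milliseconds since the Unix epoch (UTC).
    return int(dt.timestamp() * 1000)

def seek_to_time(consumer, partitions, dt):
    # partitions: iterable of TopicPartition instances.
    offsets = consumer.offsets_for_times(
        {tp: to_epoch_ms(dt) for tp in partitions})
    for tp, oat in offsets.items():
        if oat is not None:  # None if no message at/after dt (see above)
            consumer.seek(tp, oat.offset)
```
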
partitions_for_topic(topic)[source]

Get metadata about the partitions for a given topic.

Parameters:topic (str) – Topic to check.
Returns:Partition ids
Return type:set
pause(*partitions)[source]

Suspend fetching from the requested partitions.

Future calls to poll() will not return any records from these partitions until they have been resumed using resume().

Note: This method does not affect partition subscription. In particular, it does not cause a group rebalance when automatic assignment is used.

Parameters:*partitions (TopicPartition) – Partitions to pause.
paused()[source]

Get the partitions that were previously paused using pause().

Returns:{partition (TopicPartition), ...}
Return type:set
poll(timeout_ms=0, max_records=None)[source]

Fetch data from assigned topics / partitions.

Records are fetched and returned in batches by topic-partition. On each poll, consumer will try to use the last consumed offset as the starting offset and fetch sequentially. The last consumed offset can be manually set through seek() or automatically set as the last committed offset for the subscribed list of partitions.

Incompatible with iterator interface – use one or the other, not both.

Parameters:
  • timeout_ms (int, optional) – Milliseconds spent waiting in poll if data is not available in the buffer. If 0, returns immediately with any records that are available currently in the buffer, else returns empty. Must not be negative. Default: 0
  • max_records (int, optional) – The maximum number of records returned in a single call to poll(). Default: Inherit value from max_poll_records.
Returns:

Topic to list of records since the last fetch for the subscribed list of topics and partitions.

Return type:

dict
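
The returned dict is keyed by TopicPartition; a common pattern is to flatten it into a single record stream (sketch with hypothetical helper names; the flattening itself is plain Python):

```python
def flatten_poll_result(batches):
    # poll() returns {TopicPartition: [ConsumerRecord, ...]}.
    return [record for records in batches.values() for record in records]

def poll_loop(consumer, handle):
    # Illustrative loop; remember poll() is incompatible with the
    # iterator interface -- use one or the other.
    while True:
        batches = consumer.poll(timeout_ms=1000, max_records=500)
        for record in flatten_poll_result(batches):
            handle(record)
```
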

position(partition)[source]

Get the offset of the next record that will be fetched

Parameters:partition (TopicPartition) – Partition to check
Returns:Offset
Return type:int
resume(*partitions)[source]

Resume fetching from the specified (paused) partitions.

Parameters:*partitions (TopicPartition) – Partitions to resume.
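
A sketch of halting and later restarting fetches for everything currently assigned (helper names are illustrative; no group rebalance is triggered either way):

```python
def pause_all(consumer):
    # Suspend fetching on every assigned partition; return what was
    # paused so the caller can hand it back to resume_all() later.
    partitions = consumer.assignment() or set()
    if partitions:
        consumer.pause(*partitions)
    return partitions

def resume_all(consumer, partitions):
    if partitions:
        consumer.resume(*partitions)
```
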
seek(partition, offset)[source]

Manually specify the fetch offset for a TopicPartition.

Overrides the fetch offsets that the consumer will use on the next poll(). If this API is invoked for the same partition more than once, the latest offset will be used on the next poll().

Note: You may lose data if this API is arbitrarily used in the middle of consumption to reset the fetch offsets.

Parameters:
  • partition (TopicPartition) – Partition for seek operation
  • offset (int) – Message offset in partition
Raises:

AssertionError – If offset is not an int >= 0; or if partition is not currently assigned.
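
The offset precondition above can be checked up front for a clearer error message (sketch with a hypothetical wrapper name):

```python
def validated_seek(consumer, partition, offset):
    # seek() asserts offset is a non-negative int and that partition is
    # currently assigned; validate the offset before calling through.
    if not (isinstance(offset, int) and offset >= 0):
        raise ValueError('offset must be an int >= 0, got %r' % (offset,))
    consumer.seek(partition, offset)
```
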

seek_to_beginning(*partitions)[source]

Seek to the oldest available offset for partitions.

Parameters:*partitions – Optionally provide specific TopicPartitions, otherwise default to all assigned partitions.
Raises:AssertionError – If any partition is not currently assigned, or if no partitions are assigned.
seek_to_end(*partitions)[source]

Seek to the most recent available offset for partitions.

Parameters:*partitions – Optionally provide specific TopicPartitions, otherwise default to all assigned partitions.
Raises:AssertionError – If any partition is not currently assigned, or if no partitions are assigned.
set_topic_partitions(*topics)[source]
subscribe(topics=(), pattern=None, listener=None)[source]

Subscribe to a list of topics, or a topic regex pattern.

Partitions will be dynamically assigned via a group coordinator. Topic subscriptions are not incremental: this list will replace the current assignment (if there is one).

This method is incompatible with assign().

Parameters:
  • topics (list) – List of topics for subscription.
  • pattern (str) – Pattern to match available topics. You must provide either topics or pattern, but not both.
  • listener (ConsumerRebalanceListener) –

    Optionally include listener callback, which will be called before and after each rebalance operation.

    As part of group management, the consumer will keep track of the list of consumers that belong to a particular group and will trigger a rebalance operation if one of the following events trigger:

    • Number of partitions change for any of the subscribed topics
    • Topic is created or deleted
    • An existing member of the consumer group dies
    • A new member is added to the consumer group

    When any of these events are triggered, the provided listener will be invoked first to indicate that the consumer’s assignment has been revoked, and then again when the new assignment has been received. Note that this listener will immediately override any listener set in a previous call to subscribe. It is guaranteed, however, that the partitions revoked/assigned through this interface are from topics subscribed in this call.

Raises:
  • IllegalStateError – If called after previously calling assign().
  • AssertionError – If neither topics or pattern is provided.
  • TypeError – If listener is not a ConsumerRebalanceListener.
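
A sketch of subscribing with a rebalance listener as described above (the listener and helper names are illustrative; requires kafka-python):

```python
def subscribe_logged(consumer, topics):
    from kafka import ConsumerRebalanceListener  # requires kafka-python

    class LoggingListener(ConsumerRebalanceListener):
        # Invoked before the assignment is revoked ...
        def on_partitions_revoked(self, revoked):
            print('revoked: %s' % (revoked,))

        # ... and again once the new assignment is received.
        def on_partitions_assigned(self, assigned):
            print('assigned: %s' % (assigned,))

    consumer.subscribe(topics=topics, listener=LoggingListener())
```
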
subscription()[source]

Get the current topic subscription.

Returns:{topic, ...}
Return type:set
task_done(message)[source]
topics()[source]

Get all topics the user is authorized to view.

Returns:topics
Return type:set
unsubscribe()[source]

Unsubscribe from all topics and clear all assigned partitions.

class kafka.KafkaProducer(**configs)[source]

Bases: object

A Kafka client that publishes records to the Kafka cluster.

The producer is thread safe and sharing a single producer instance across threads will generally be faster than having multiple instances.

The producer consists of a pool of buffer space that holds records that haven’t yet been transmitted to the server as well as a background I/O thread that is responsible for turning these records into requests and transmitting them to the cluster.

send() is asynchronous. When called it adds the record to a buffer of pending record sends and immediately returns. This allows the producer to batch together individual records for efficiency.

The ‘acks’ config controls the criteria under which requests are considered complete. The “all” setting will result in blocking on the full commit of the record, the slowest but most durable setting.

If the request fails, the producer can automatically retry, unless ‘retries’ is configured to 0. Enabling retries also opens up the possibility of duplicates (see the documentation on message delivery semantics for details: http://kafka.apache.org/documentation.html#semantics ).

The producer maintains buffers of unsent records for each partition. These buffers are of a size specified by the ‘batch_size’ config. Making this larger can result in more batching, but requires more memory (since we will generally have one of these buffers for each active partition).

By default a buffer is available to send immediately even if there is additional unused space in the buffer. However if you want to reduce the number of requests you can set ‘linger_ms’ to something greater than 0. This will instruct the producer to wait up to that number of milliseconds before sending a request in hope that more records will arrive to fill up the same batch. This is analogous to Nagle’s algorithm in TCP. Note that records that arrive close together in time will generally batch together even with linger_ms=0 so under heavy load batching will occur regardless of the linger configuration; however setting this to something larger than 0 can lead to fewer, more efficient requests when not under maximal load at the cost of a small amount of latency.

The buffer_memory controls the total amount of memory available to the producer for buffering. If records are sent faster than they can be transmitted to the server then this buffer space will be exhausted. When the buffer space is exhausted additional send calls will block.

The key_serializer and value_serializer instruct how to turn the key and value objects the user provides into bytes.

Keyword Arguments:
 
  • bootstrap_servers – ‘host[:port]’ string (or list of ‘host[:port]’ strings) that the producer should contact to bootstrap initial cluster metadata. This does not have to be the full node list. It just needs to have at least one broker that will respond to a Metadata API Request. Default port is 9092. If no servers are specified, will default to localhost:9092.
  • client_id (str) – a name for this client. This string is passed in each request to servers and can be used to identify specific server-side log entries that correspond to this client. Default: ‘kafka-python-producer-#’ (appended with a unique number per instance)
  • key_serializer (callable) – used to convert user-supplied keys to bytes If not None, called as f(key), should return bytes. Default: None.
  • value_serializer (callable) – used to convert user-supplied message values to bytes. If not None, called as f(value), should return bytes. Default: None.
  • acks (0, 1, 'all') –

    The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are common:

    • 0: Producer will not wait for any acknowledgment from the server. The message will immediately be added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won’t generally know of any failures). The offset given back for each record will always be set to -1.
    • 1: Wait for the leader to write the record to its local log only. The broker will respond without awaiting full acknowledgement from all followers. In this case, should the leader fail immediately after acknowledging the record but before the followers have replicated it, the record will be lost.
    • all: Wait for the full set of in-sync replicas to write the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee.

    If unset, defaults to acks=1.

  • compression_type (str) – The compression type for all data generated by the producer. Valid values are ‘gzip’, ‘snappy’, ‘lz4’, or None. Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression). Default: None.
  • retries (int) – Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries without setting max_in_flight_requests_per_connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first. Default: 0.
  • batch_size (int) – Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). Default: 16384
  • linger_ms (int) – The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay; that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle’s algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch_size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will ‘linger’ for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger_ms=5 would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. Default: 0.
  • partitioner (callable) – Callable used to determine which partition each message is assigned to. Called (after key serialization): partitioner(key_bytes, all_partitions, available_partitions). The default partitioner implementation hashes each non-None key using the same murmur2 algorithm as the java client so that messages with the same key are assigned to the same partition. When a key is None, the message is delivered to a random partition (filtered to partitions with available leaders only, if possible).
  • buffer_memory (int) – The total bytes of memory the producer should use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block up to max_block_ms, raising an exception on timeout. In the current implementation, this setting is an approximation. Default: 33554432 (32MB)
  • max_block_ms (int) – Number of milliseconds to block during send() and partitions_for(). These methods can be blocked either because the buffer is full or metadata unavailable. Blocking in the user-supplied serializers or partitioner will not be counted against this timeout. Default: 60000.
  • max_request_size (int) – The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. Default: 1048576.
  • metadata_max_age_ms (int) – The period of time in milliseconds after which we force a refresh of metadata even if we haven’t seen any partition leadership changes to proactively discover any new brokers or partitions. Default: 300000
  • retry_backoff_ms (int) – Milliseconds to backoff when retrying on errors. Default: 100.
  • request_timeout_ms (int) – Client request timeout in milliseconds. Default: 30000.
  • receive_buffer_bytes (int) – The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. Default: None (relies on system defaults). Java client defaults to 32768.
  • send_buffer_bytes (int) – The size of the TCP send buffer (SO_SNDBUF) to use when sending data. Default: None (relies on system defaults). Java client defaults to 131072.
  • socket_options (list) – List of tuple-arguments to socket.setsockopt to apply to broker connection sockets. Default: [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]
  • reconnect_backoff_ms (int) – The amount of time in milliseconds to wait before attempting to reconnect to a given host. Default: 50.
  • reconnect_backoff_max_ms (int) – The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the backoff resulting in a random range between 20% below and 20% above the computed value. Default: 1000.
  • max_in_flight_requests_per_connection (int) – Requests are pipelined to kafka brokers up to this number of maximum requests per broker connection. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled). Default: 5.
  • security_protocol (str) – Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. Default: PLAINTEXT.
  • ssl_context (ssl.SSLContext) – pre-configured SSLContext for wrapping socket connections. If provided, all other ssl_* configurations will be ignored. Default: None.
  • ssl_check_hostname (bool) – flag to configure whether ssl handshake should verify that the certificate matches the broker’s hostname. Default: True.
  • ssl_cafile (str) – optional filename of ca file to use in certificate verification. Default: None.
  • ssl_certfile (str) – optional filename of file in pem format containing the client certificate, as well as any ca certificates needed to establish the certificate’s authenticity. Default: None.
  • ssl_keyfile (str) – optional filename containing the client private key. Default: None.
  • ssl_password (str) – optional password to be used when loading the certificate chain. Default: None.
  • ssl_crlfile (str) – optional filename containing the CRL to check for certificate expiration. By default, no CRL check is done. When providing a file, only the leaf certificate will be checked against this CRL. The CRL can only be checked with Python 3.4+ or 2.7.9+. Default: None.
  • api_version (tuple) – Specify which Kafka API version to use. If set to None, the client will attempt to infer the broker version by probing various APIs. For a full list of supported versions, see KafkaClient.API_VERSIONS. Default: None
  • api_version_auto_timeout_ms (int) – number of milliseconds to throw a timeout exception from the constructor when checking the broker api version. Only applies if api_version is None.
  • metric_reporters (list) – A list of classes to use as metrics reporters. Implementing the AbstractMetricsReporter interface allows plugging in classes that will be notified of new metric creation. Default: []
  • metrics_num_samples (int) – The number of samples maintained to compute metrics. Default: 2
  • metrics_sample_window_ms (int) – The maximum age in milliseconds of samples used to compute metrics. Default: 30000
  • selector (selectors.BaseSelector) – Provide a specific selector implementation to use for I/O multiplexing. Default: selectors.DefaultSelector
  • sasl_mechanism (str) – string picking sasl mechanism when security_protocol is SASL_PLAINTEXT or SASL_SSL. Currently only PLAIN is supported. Default: None
  • sasl_plain_username (str) – username for sasl PLAIN authentication. Default: None
  • sasl_plain_password (str) – password for sasl PLAIN authentication. Default: None
  • sasl_kerberos_service_name (str) – Service name to include in GSSAPI sasl mechanism handshake. Default: ‘kafka’

Note

Configuration parameters are described in more detail at https://kafka.apache.org/0100/configuration.html#producerconfigs

DEFAULT_CONFIG = {'reconnect_backoff_ms': 50, 'max_block_ms': 60000, 'metadata_max_age_ms': 300000, 'metrics_sample_window_ms': 30000, 'ssl_certfile': None, 'max_request_size': 1048576, 'send_buffer_bytes': None, 'ssl_crlfile': None, 'ssl_context': None, 'batch_size': 16384, 'selector': <class 'kafka.vendor.selectors34.EpollSelector'>, 'request_timeout_ms': 30000, 'receive_buffer_bytes': None, 'ssl_check_hostname': True, 'client_id': None, 'sasl_plain_username': None, 'bootstrap_servers': 'localhost', 'api_version_auto_timeout_ms': 2000, 'key_serializer': None, 'sasl_plain_password': None, 'metric_reporters': [], 'retries': 0, 'ssl_password': None, 'connections_max_idle_ms': 540000, 'socket_options': [(6, 1, 1)], 'metrics_num_samples': 2, 'retry_backoff_ms': 100, 'sasl_mechanism': None, 'ssl_cafile': None, 'compression_type': None, 'partitioner': <kafka.partitioner.default.DefaultPartitioner object>, 'linger_ms': 0, 'security_protocol': 'PLAINTEXT', 'buffer_memory': 33554432, 'acks': 1, 'ssl_keyfile': None, 'reconnect_backoff_max': 1000, 'sasl_kerberos_service_name': 'kafka', 'value_serializer': None, 'max_in_flight_requests_per_connection': 5, 'api_version': None}
close(timeout=None)[source]

Close this producer.

Parameters:timeout (float, optional) – timeout in seconds to wait for completion.
flush(timeout=None)[source]

Invoking this method makes all buffered records immediately available to send (even if linger_ms is greater than 0) and blocks on the completion of the requests associated with these records. The post-condition of flush() is that any previously sent record will have completed (e.g. Future.is_done() == True). A request is considered completed when either it is successfully acknowledged according to the ‘acks’ configuration for the producer, or it results in an error.

Other threads can continue sending messages while one thread is blocked waiting for a flush call to complete; however, no guarantee is made about the completion of messages sent after the flush call begins.

Parameters:timeout (float, optional) – timeout in seconds to wait for completion.
Raises:KafkaTimeoutError – failure to flush buffered records within the provided timeout
metrics(raw=False)[source]

Get metrics on producer performance.

This is ported from the Java Producer, for details see: https://kafka.apache.org/documentation/#producer_monitoring

Warning

This is an unstable interface. It may change in future releases without warning.

partitions_for(topic)[source]

Returns set of all known partitions for the topic.

send(topic, value=None, key=None, partition=None, timestamp_ms=None)[source]

Publish a message to a topic.

Parameters:
  • topic (str) – topic where the message will be published
  • value (optional) – message value. Must be type bytes, or be serializable to bytes via configured value_serializer. If value is None, key is required and message acts as a ‘delete’. See kafka compaction documentation for more details: http://kafka.apache.org/documentation.html#compaction (compaction requires kafka >= 0.8.1)
  • partition (int, optional) – optionally specify a partition. If not set, the partition will be selected using the configured ‘partitioner’.
  • key (optional) – a key to associate with the message. Can be used to determine which partition to send the message to. If partition is None (and producer’s partitioner config is left as default), then messages with the same key will be delivered to the same partition (but if key is None, partition is chosen randomly). Must be type bytes, or be serializable to bytes via configured key_serializer.
  • timestamp_ms (int, optional) – epoch milliseconds (from Jan 1 1970 UTC) to use as the message timestamp. Defaults to current time.
Returns:

resolves to RecordMetadata

Return type:

FutureRecordMetadata

Raises:

KafkaTimeoutError – if unable to fetch topic metadata, or unable to obtain memory buffer prior to configured max_block_ms
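
A sketch of an end-to-end send (the helper name, broker address, and topic are illustrative; requires kafka-python and a reachable broker):

```python
def produce_one(bootstrap_servers, topic, value):
    # send() is asynchronous; future.get() blocks until the broker
    # acknowledges per the configured 'acks', or raises on failure.
    from kafka import KafkaProducer  # requires kafka-python

    producer = KafkaProducer(bootstrap_servers=bootstrap_servers,
                             value_serializer=lambda v: v.encode('utf-8'))
    try:
        future = producer.send(topic, value=value)
        metadata = future.get(timeout=10)  # RecordMetadata on success
        return metadata.topic, metadata.partition, metadata.offset
    finally:
        producer.flush()
        producer.close()
```
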

class kafka.KafkaClient(*args, **kwargs)[source]

Bases: kafka.client.SimpleClient

class kafka.BrokerConnection(host, port, afi, **configs)[source]

Bases: object

Initialize a Kafka broker connection

Keyword Arguments:
 
  • client_id (str) – a name for this client. This string is passed in each request to servers and can be used to identify specific server-side log entries that correspond to this client. Also submitted to GroupCoordinator for logging with respect to consumer group administration. Default: ‘kafka-python-{version}’
  • reconnect_backoff_ms (int) – The amount of time in milliseconds to wait before attempting to reconnect to a given host. Default: 50.
  • reconnect_backoff_max_ms (int) – The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the backoff resulting in a random range between 20% below and 20% above the computed value. Default: 1000.
  • request_timeout_ms (int) – Client request timeout in milliseconds. Default: 40000.
  • max_in_flight_requests_per_connection (int) – Requests are pipelined to kafka brokers up to this number of maximum requests per broker connection. Default: 5.
  • receive_buffer_bytes (int) – The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. Default: None (relies on system defaults). Java client defaults to 32768.
  • send_buffer_bytes (int) – The size of the TCP send buffer (SO_SNDBUF) to use when sending data. Default: None (relies on system defaults). Java client defaults to 131072.
  • socket_options (list) – List of tuple-arguments to socket.setsockopt to apply to broker connection sockets. Default: [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]
  • security_protocol (str) – Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. Default: PLAINTEXT.
  • ssl_context (ssl.SSLContext) – pre-configured SSLContext for wrapping socket connections. If provided, all other ssl_* configurations will be ignored. Default: None.
  • ssl_check_hostname (bool) – flag to configure whether ssl handshake should verify that the certificate matches the brokers hostname. default: True.
  • ssl_cafile (str) – optional filename of ca file to use in certificate verification. default: None.
  • ssl_certfile (str) – optional filename of file in pem format containing the client certificate, as well as any ca certificates needed to establish the certificate’s authenticity. default: None.
  • ssl_keyfile (str) – optional filename containing the client private key. default: None.
  • ssl_password (callable, str, bytes, bytearray) – optional password or callable function that returns a password, for decrypting the client private key. Default: None.
  • ssl_crlfile (str) – optional filename containing the CRL to check for certificate expiration. By default, no CRL check is done. When providing a file, only the leaf certificate will be checked against this CRL. The CRL can only be checked with Python 3.4+ or 2.7.9+. default: None.
  • api_version (tuple) – Specify which Kafka API version to use. Accepted values are: (0, 8, 0), (0, 8, 1), (0, 8, 2), (0, 9), (0, 10). Default: (0, 8, 2)
  • api_version_auto_timeout_ms (int) – number of milliseconds to wait for the broker api version check before raising a timeout exception from the constructor. Only applies if api_version is None
  • state_change_callback (callable) – function to be called when the connection state changes from CONNECTING to CONNECTED etc.
  • metrics (kafka.metrics.Metrics) – Optionally provide a metrics instance for capturing network IO stats. Default: None.
  • metric_group_prefix (str) – Prefix for metric names. Default: ‘’
  • sasl_mechanism (str) – Authentication mechanism when security_protocol is configured for SASL_PLAINTEXT or SASL_SSL. Valid values are: PLAIN, GSSAPI. Default: PLAIN
  • sasl_plain_username (str) – username for sasl PLAIN authentication. Default: None
  • sasl_plain_password (str) – password for sasl PLAIN authentication. Default: None
  • sasl_kerberos_service_name (str) – Service name to include in GSSAPI sasl mechanism handshake. Default: ‘kafka’
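The reconnect_backoff_ms / reconnect_backoff_max_ms interaction described above can be sketched as follows. This is an illustrative formula, not the library's internal implementation; the helper name is hypothetical.

```python
import random

def reconnect_backoff(failures, base_ms=50, max_ms=1000, jitter=0.2):
    """Illustrative sketch: exponential backoff per consecutive
    connection failure, capped at max_ms, with a +/-20% randomization
    factor applied afterwards to avoid connection storms."""
    backoff = min(base_ms * (2 ** failures), max_ms)
    # jitter yields a value uniformly in [80%, 120%] of the computed backoff
    return backoff * random.uniform(1 - jitter, 1 + jitter)
```

With the defaults above, the first retry waits roughly 40-60 ms and repeated failures converge on roughly 800-1200 ms.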
DEFAULT_CONFIG = {'reconnect_backoff_ms': 50, 'ssl_check_hostname': True, 'sasl_mechanism': 'PLAIN', 'receive_buffer_bytes': None, 'sasl_plain_password': None, 'ssl_password': None, 'sasl_plain_username': None, 'ssl_cafile': None, 'request_timeout_ms': 40000, 'ssl_keyfile': None, 'sasl_kerberos_service_name': 'kafka', 'max_in_flight_requests_per_connection': 5, 'api_version': (0, 8, 2), 'ssl_certfile': None, 'reconnect_backoff_max_ms': 1000, 'send_buffer_bytes': None, 'ssl_crlfile': None, 'ssl_context': None, 'metrics': None, 'node_id': 0, 'client_id': 'kafka-python-1.3.6.dev', 'metric_group_prefix': '', 'security_protocol': 'PLAINTEXT', 'socket_options': [(6, 1, 1)], 'state_change_callback': <function <lambda>>}
SASL_MECHANISMS = ('PLAIN', 'GSSAPI')
SECURITY_PROTOCOLS = ('PLAINTEXT', 'SSL', 'SASL_PLAINTEXT', 'SASL_SSL')
blacked_out()[source]

Return True if we are disconnected from the given node and can’t re-establish a connection yet

can_send_more()[source]

Return True unless the number of in-flight requests has reached max_in_flight_requests_per_connection.

check_version(timeout=2, strict=False)[source]

Attempt to guess the broker version.

Note: This is a blocking call.

Returns: version tuple, e.g. (0, 10), (0, 9), (0, 8, 2), ...

close(error=None)[source]

Close socket and fail all in-flight-requests.

Parameters:error (Exception, optional) – pending in-flight-requests will be failed with this exception. Default: kafka.errors.ConnectionError.
connect()[source]

Attempt to connect and return ConnectionState

connected()[source]

Return True iff socket is connected.

connecting()[source]

Returns True if still connecting (this may encompass several different states, such as SSL handshake, authorization, etc).

connection_delay()[source]
disconnected()[source]

Return True iff socket is closed

recv()[source]

Non-blocking network receive.

Return response if available

requests_timed_out()[source]
send(request)[source]

send request, return Future()

Can block on network if request is larger than send_buffer_bytes

class kafka.SimpleClient(hosts, client_id='kafka-python', timeout=120, correlation_id=0)[source]

Bases: object

CLIENT_ID = 'kafka-python'
DEFAULT_SOCKET_TIMEOUT_SECONDS = 120
close()[source]
copy()[source]

Create an inactive copy of the client object, suitable for passing to a separate thread.

Note that the copied connections are not initialized, so reinit() must be called on the returned copy.

ensure_topic_exists(topic, timeout=30)[source]
get_partition_ids_for_topic(topic)[source]
has_metadata_for_topic(topic)[source]
load_metadata_for_topics(*topics, **kwargs)[source]

Fetch broker and topic-partition metadata from the server.

Updates internal data: broker list, topic/partition list, and topic/partition -> broker map. This method should be called after receiving any error.

Note: Exceptions will not be raised in a full refresh (i.e. no topic list). In this case, error codes will be logged as errors. Partition-level errors will also not be raised here (a single partition w/o a leader, for example).

Parameters:
  • *topics (optional) – If a list of topics is provided, the metadata refresh will be limited to the specified topics only.
  • ignore_leadernotavailable (bool) – suppress LeaderNotAvailableError so that metadata is loaded correctly during auto-create. Default: False.
Raises:
  • UnknownTopicOrPartitionError – Raised for topics that do not exist, unless the broker is configured to auto-create topics.
  • LeaderNotAvailableError – Raised for topics that do not exist yet, when the broker is configured to auto-create topics. Retry after a short backoff (topics/partitions are initializing).
reinit()[source]
reset_all_metadata()[source]
reset_topic_metadata(*topics)[source]
send_consumer_metadata_request(payloads=(), fail_on_error=True, callback=None)[source]
send_fetch_request(payloads=(), fail_on_error=True, callback=None, max_wait_time=100, min_bytes=4096)[source]

Encode and send a FetchRequest

Payloads are grouped by topic and partition so they can be pipelined to the same brokers.

send_list_offset_request(payloads=(), fail_on_error=True, callback=None)[source]
send_metadata_request(payloads=(), fail_on_error=True, callback=None)[source]
send_offset_commit_request(group, payloads=(), fail_on_error=True, callback=None)[source]
send_offset_fetch_request(group, payloads=(), fail_on_error=True, callback=None)[source]
send_offset_fetch_request_kafka(group, payloads=(), fail_on_error=True, callback=None)[source]
send_offset_request(payloads=(), fail_on_error=True, callback=None)[source]
send_produce_request(payloads=(), acks=1, timeout=1000, fail_on_error=True, callback=None)[source]

Encode and send some ProduceRequests

ProduceRequests will be grouped by (topic, partition) and then sent to a specific broker. Output is a list of responses in the same order as the list of payloads specified

Parameters:
  • payloads (list of ProduceRequest) – produce requests to send to kafka. ProduceRequest payloads must not contain duplicates for any topic-partition.
  • acks (int, optional) – how many acks the servers should receive from replica brokers before responding to the request. If it is 0, the server will not send any response. If it is 1, the server will wait until the data is written to the local log before sending a response. If it is -1, the server will wait until the message is committed by all in-sync replicas before sending a response. For any value > 1, the server will wait for this number of acks to occur (but the server will never wait for more acknowledgements than there are in-sync replicas). defaults to 1.
  • timeout (int, optional) – maximum time in milliseconds the server can await the receipt of the number of acks, defaults to 1000.
  • fail_on_error (bool, optional) – raise exceptions on connection and server response errors, defaults to True.
  • callback (function, optional) – instead of returning the ProduceResponse, first pass it through this function, defaults to None.
Returns:

list of ProduceResponses, or callback results if supplied, in the order of input payloads

topics
class kafka.SimpleProducer(*args, **kwargs)[source]

Bases: kafka.producer.base.Producer

A simple, round-robin producer.

See Producer class for Base Arguments

Additional Arguments:
random_start (bool, optional): randomize the initial partition to which
the first message block is published; if False, the first message block always publishes to partition 0 before cycling through each partition. Defaults to True.
send_messages(topic, *msg)[source]
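The round-robin behavior with random_start can be sketched as follows; the helper name is illustrative, not part of the SimpleProducer API.

```python
import itertools
import random

def partition_cycle(partitions, random_start=True):
    """Illustrative sketch of round-robin partition selection: cycle
    through the partition list forever, optionally rotating it so the
    first message block lands on a random partition."""
    partitions = list(partitions)
    if random_start:
        start = random.randrange(len(partitions))
        partitions = partitions[start:] + partitions[:start]
    return itertools.cycle(partitions)
```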
class kafka.KeyedProducer(*args, **kwargs)[source]

Bases: kafka.producer.base.Producer

A producer which distributes messages to partitions based on the key

See Producer class for Arguments

Additional Arguments:
partitioner: A partitioner class that will be used to get the partition
to send the message to. Must be derived from Partitioner. Defaults to HashedPartitioner.
send(topic, key, msg)[source]
send_messages(topic, key, *msg)[source]
class kafka.RoundRobinPartitioner(partitions=None)[source]

Bases: kafka.partitioner.base.Partitioner

partition(key, all_partitions=None, available_partitions=None)[source]
kafka.HashedPartitioner

alias of LegacyPartitioner
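Key-based routing of this kind reduces to hashing the key onto the partition list. A minimal sketch, with the caveat that the real LegacyPartitioner uses its own hash function for stability across processes and versions, whereas Python's built-in hash() here is purely illustrative:

```python
def hashed_partition(key, partitions):
    """Illustrative sketch of hashed partition routing: the same key
    always maps to the same partition within a process."""
    idx = hash(key) % len(partitions)
    return partitions[idx]
```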

kafka.create_message(payload, key=None)[source]

Construct a Message

Parameters:
  • payload – bytes, the payload to send to Kafka
  • key – bytes, a key used for partition routing (optional)
kafka.create_gzip_message(payloads, key=None, compresslevel=None)[source]

Construct a Gzipped Message containing multiple Messages

The given payloads will be encoded, compressed, and sent as a single atomic message to Kafka.

Parameters:
  • payloads – list(bytes), a list of payloads to be sent to Kafka
  • key – bytes, a key used for partition routing (optional)
kafka.create_snappy_message(payloads, key=None)[source]

Construct a Snappy Message containing multiple Messages

The given payloads will be encoded, compressed, and sent as a single atomic message to Kafka.

Parameters:
  • payloads – list(bytes), a list of payloads to be sent to Kafka
  • key – bytes, a key used for partition routing (optional)
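The compression step in these helpers can be sketched with the stdlib gzip module. The length-prefix framing below is illustrative only; the real Kafka message-set encoding is more involved.

```python
import gzip

def gzip_payloads(payloads, compresslevel=9):
    """Illustrative sketch: concatenate the payloads (here with a
    simple 4-byte big-endian length prefix per payload) and compress
    the result into a single gzip blob, to be carried as one atomic
    message."""
    framed = b''.join(len(p).to_bytes(4, 'big') + p for p in payloads)
    return gzip.compress(framed, compresslevel=compresslevel)
```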
class kafka.SimpleConsumer(client, group, topic, auto_commit=True, partitions=None, auto_commit_every_n=100, auto_commit_every_t=5000, fetch_size_bytes=4096, buffer_size=4096, max_buffer_size=32768, iter_timeout=None, auto_offset_reset='largest')[source]

Bases: kafka.consumer.base.Consumer

A simple consumer implementation that consumes all/specified partitions for a topic

Parameters:
  • client – a connected SimpleClient
  • group – a name for this consumer, used for offset storage and must be unique. If you are connecting to a server that does not support offset commit/fetch (any version prior to 0.8.1.1), then you must set this to None
  • topic – the topic to consume
Keyword Arguments:
 
  • partitions – An optional list of partitions to consume the data from
  • auto_commit – default True. Whether or not to auto commit the offsets
  • auto_commit_every_n – default 100. How many messages to consume before a commit
  • auto_commit_every_t – default 5000. How much time (in milliseconds) to wait before commit
  • fetch_size_bytes – number of bytes to request in a FetchRequest
  • buffer_size – default 4K. Initial number of bytes to tell kafka we have available. This will double as needed.
  • max_buffer_size – default 32K. Max number of bytes to tell kafka we have available. None means no limit.
  • iter_timeout – default None. How much time (in seconds) to wait for a message in the iterator before exiting. None means no timeout, so it will wait forever.
  • auto_offset_reset – default largest. Reset partition offsets upon OffsetOutOfRangeError. Valid values are largest and smallest; any other value will leave the offsets unchanged and raise OffsetOutOfRangeError.

Auto commit details: If both auto_commit_every_n and auto_commit_every_t are set, they will reset one another when one is triggered. These triggers simply call the commit method on this class. A manual call to commit will also reset these triggers
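The trigger logic described above can be sketched as a small state object; the class name and exact shape are illustrative, not the consumer's internals.

```python
import time

class AutoCommitTrigger(object):
    """Illustrative sketch of the auto-commit triggers: commit after
    every_n messages or every_t_ms milliseconds, whichever comes first.
    Committing (including a manual commit) resets both triggers."""

    def __init__(self, every_n=100, every_t_ms=5000, clock=time.monotonic):
        self.every_n = every_n
        self.every_t = every_t_ms / 1000.0
        self.clock = clock
        self.count = 0
        self.last_commit = clock()

    def record(self, n=1):
        """Record consumed messages; return True if a commit is due."""
        self.count += n
        return self.should_commit()

    def should_commit(self):
        return (self.count >= self.every_n or
                self.clock() - self.last_commit >= self.every_t)

    def commit(self):
        # a manual commit also resets both triggers
        self.count = 0
        self.last_commit = self.clock()
```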

get_message(block=True, timeout=0.1, get_partition_info=None)[source]
get_messages(count=1, block=True, timeout=0.1)[source]

Fetch the specified number of messages

Keyword Arguments:
 
  • count – Indicates the maximum number of messages to be fetched
  • block – If True, the API will block until all count messages are fetched. If block is a positive integer, the API will block until that many messages are fetched.
  • timeout – When blocking is requested the function will block for the specified time (in seconds) until count messages are fetched. If None, it will block forever.
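The count/block/timeout semantics above amount to a poll loop with a deadline. A minimal sketch, where fetch_one is a hypothetical stand-in for the consumer's internal fetch (returning a message or None):

```python
import time

def collect_messages(fetch_one, count=1, block=True, timeout=0.1):
    """Illustrative sketch: keep polling fetch_one() until `count`
    messages arrive, the deadline passes, or (when not blocking) the
    first empty poll."""
    messages = []
    deadline = None if timeout is None else time.monotonic() + timeout
    while len(messages) < count:
        msg = fetch_one()
        if msg is not None:
            messages.append(msg)
            continue
        if not block:
            break
        if deadline is not None and time.monotonic() >= deadline:
            break
    return messages
```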
reset_partition_offset(partition)[source]

Update offsets using auto_offset_reset policy (smallest|largest)

Parameters:partition (int) – the partition for which offsets should be updated

Returns: Updated offset on success, None on failure

seek(offset, whence=None, partition=None)[source]

Alter the current offset in the consumer, similar to fseek

Parameters:
  • offset – how much to modify the offset
  • whence

    where to modify it from, default is None

    • None is an absolute offset
    • 0 is relative to the earliest available offset (head)
    • 1 is relative to the current offset
    • 2 is relative to the latest known offset (tail)
  • partition – modify which partition, default is None. If partition is None, all partitions will be modified.
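The whence semantics above translate a seek() call into an absolute offset. A sketch, where earliest/current/latest are hypothetical inputs standing in for the partition's known offsets:

```python
def resolve_seek(offset, whence, earliest, current, latest):
    """Illustrative sketch of seek()'s whence handling."""
    if whence is None:
        return offset             # absolute offset
    if whence == 0:
        return earliest + offset  # relative to earliest available (head)
    if whence == 1:
        return current + offset   # relative to current offset
    if whence == 2:
        return latest + offset    # relative to latest known (tail)
    raise ValueError('invalid whence: %r' % (whence,))
```

For example, seeking with offset=-3 and whence=2 positions the consumer three messages before the tail.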
class kafka.MultiProcessConsumer(client, group, topic, partitions=None, auto_commit=True, auto_commit_every_n=100, auto_commit_every_t=5000, num_procs=1, partitions_per_proc=0, **simple_consumer_options)[source]

Bases: kafka.consumer.base.Consumer

A consumer implementation that consumes partitions for a topic in parallel using multiple processes

Parameters:
  • client – a connected SimpleClient
  • group – a name for this consumer, used for offset storage and must be unique. If you are connecting to a server that does not support offset commit/fetch (any version prior to 0.8.1.1), then you must set this to None
  • topic – the topic to consume
Keyword Arguments:
 
  • partitions – An optional list of partitions to consume the data from
  • auto_commit – default True. Whether or not to auto commit the offsets
  • auto_commit_every_n – default 100. How many messages to consume before a commit
  • auto_commit_every_t – default 5000. How much time (in milliseconds) to wait before commit
  • num_procs – Number of processes to start for consuming messages. The available partitions will be divided among these processes
  • partitions_per_proc – Number of partitions to be allocated per process (overrides num_procs)

Auto commit details: If both auto_commit_every_n and auto_commit_every_t are set, they will reset one another when one is triggered. These triggers simply call the commit method on this class. A manual call to commit will also reset these triggers
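Dividing the available partitions among processes, with partitions_per_proc overriding num_procs, can be sketched as follows; the round-robin chunking scheme here is illustrative, not necessarily the library's exact allocation.

```python
def allocate_partitions(partitions, num_procs=1, partitions_per_proc=0):
    """Illustrative sketch: split the partition list into per-process
    groups. When partitions_per_proc is set, derive the process count
    from it instead of num_procs."""
    partitions = list(partitions)
    if partitions_per_proc:
        num_procs = -(-len(partitions) // partitions_per_proc)  # ceil division
    chunks = [[] for _ in range(num_procs)]
    for i, p in enumerate(partitions):
        chunks[i % num_procs].append(p)
    return chunks
```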

get_messages(count=1, block=True, timeout=10)[source]

Fetch the specified number of messages

Keyword Arguments:
 
  • count – Indicates the maximum number of messages to be fetched
  • block – If True, the API will block until all count messages are fetched. If block is a positive integer, the API will block until that many messages are fetched.
  • timeout – When blocking is requested the function will block for the specified time (in seconds) until count messages are fetched. If None, it will block forever.
stop()[source]