kafka package

Submodules

kafka.client module

class kafka.client.SimpleClient(hosts, client_id='kafka-python', timeout=120, correlation_id=0)

Bases: object

CLIENT_ID = 'kafka-python'
close()
copy()

Create an inactive copy of the client object, suitable for passing to a separate thread.

Note that the copied connections are not initialized, so reinit() must be called on the returned copy.

ensure_topic_exists(topic, timeout=30)
get_partition_ids_for_topic(topic)
has_metadata_for_topic(topic)
load_metadata_for_topics(*topics, **kwargs)

Fetch broker and topic-partition metadata from the server.

Updates internal data: broker list, topic/partition list, and topic/partition -> broker map. This method should be called after receiving any error.

Note: Exceptions will not be raised in a full refresh (i.e. no topic list). In this case, error codes will be logged as errors. Partition-level errors will also not be raised here (a single partition w/o a leader, for example).

Parameters:
  • *topics (optional) – If a list of topics is provided, the metadata refresh will be limited to the specified topics only.
  • ignore_leadernotavailable (bool) – suppress LeaderNotAvailableError so that metadata is loaded correctly during auto-create. Default: False.
Raises:
  • UnknownTopicOrPartitionError – Raised for topics that do not exist, unless the broker is configured to auto-create topics.
  • LeaderNotAvailableError – Raised for topics that do not exist yet, when the broker is configured to auto-create topics. Retry after a short backoff (topics/partitions are initializing).
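The retry-after-backoff guidance above can be sketched as a small loop. The exception class below is a local stand-in (not the real kafka.common.LeaderNotAvailableError) so the sketch runs standalone; `load_with_retry` and its linear backoff are illustrative, not the client's actual retry policy.

```python
import time

# Stand-in for kafka.common.LeaderNotAvailableError, so this sketch
# runs without kafka-python installed.
class LeaderNotAvailableError(Exception):
    retriable = True

def load_with_retry(load_fn, retries=5, backoff_s=0.1):
    """Retry a metadata load while auto-created topics/partitions initialize."""
    for attempt in range(retries):
        try:
            return load_fn()
        except LeaderNotAvailableError:
            time.sleep(backoff_s * (attempt + 1))  # short linear backoff
    raise LeaderNotAvailableError('topic still initializing')

# Simulate a topic that becomes available on the third attempt.
calls = {'n': 0}
def fake_load():
    calls['n'] += 1
    if calls['n'] < 3:
        raise LeaderNotAvailableError()
    return 'metadata'
```
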
reinit()
reset_all_metadata()
reset_topic_metadata(*topics)
send_consumer_metadata_request(payloads=[], fail_on_error=True, callback=None)
send_fetch_request(payloads=[], fail_on_error=True, callback=None, max_wait_time=100, min_bytes=4096)

Encode and send a FetchRequest

Payloads are grouped by topic and partition so they can be pipelined to the same brokers.
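This grouping can be sketched with a nested dict keyed by topic and partition; the payload class below is an illustrative stand-in mirroring the field order of kafka.common.FetchRequestPayload.

```python
from collections import defaultdict, namedtuple

# Illustrative payload shape; mirrors kafka.common.FetchRequestPayload.
FetchRequestPayload = namedtuple('FetchRequestPayload',
                                 ['topic', 'partition', 'offset', 'max_bytes'])

def group_by_topic_and_partition(payloads):
    """Group payloads into {topic: {partition: payload}} so requests
    destined for the same broker can be batched and pipelined."""
    grouped = defaultdict(dict)
    for p in payloads:
        grouped[p.topic][p.partition] = p
    return dict(grouped)

payloads = [
    FetchRequestPayload('t1', 0, 0, 4096),
    FetchRequestPayload('t1', 1, 0, 4096),
    FetchRequestPayload('t2', 0, 0, 4096),
]
grouped = group_by_topic_and_partition(payloads)
```
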

send_metadata_request(payloads=[], fail_on_error=True, callback=None)
send_offset_commit_request(group, payloads=[], fail_on_error=True, callback=None)
send_offset_fetch_request(group, payloads=[], fail_on_error=True, callback=None)
send_offset_fetch_request_kafka(group, payloads=[], fail_on_error=True, callback=None)
send_offset_request(payloads=[], fail_on_error=True, callback=None)
send_produce_request(payloads=[], acks=1, timeout=1000, fail_on_error=True, callback=None)

Encode and send some ProduceRequests

ProduceRequests will be grouped by (topic, partition) and then sent to a specific broker. Output is a list of responses in the same order as the list of payloads specified.

Parameters:
  • payloads (list of ProduceRequest) – produce requests to send to kafka ProduceRequest payloads must not contain duplicates for any topic-partition.
  • acks (int, optional) – how many acks the servers should receive from replica brokers before responding to the request. If it is 0, the server will not send any response. If it is 1, the server will wait until the data is written to the local log before sending a response. If it is -1, the server will wait until the message is committed by all in-sync replicas before sending a response. For any value > 1, the server will wait for this number of acks to occur (but the server will never wait for more acknowledgements than there are in-sync replicas). Defaults to 1.
  • timeout (int, optional) – maximum time in milliseconds the server can await the receipt of the number of acks, defaults to 1000.
  • fail_on_error (bool, optional) – raise exceptions on connection and server response errors, defaults to True.
  • callback (function, optional) – instead of returning the ProduceResponse, first pass it through this function, defaults to None.
Returns:

list of ProduceResponses, or callback results if supplied, in the order of input payloads

topics

kafka.codec module

kafka.codec.gzip_decode(payload)
kafka.codec.gzip_encode(payload, compresslevel=None)
kafka.codec.has_gzip()
kafka.codec.has_lz4()
kafka.codec.has_snappy()
kafka.codec.lz4_decode(payload)
kafka.codec.lz4_encode(payload)
kafka.codec.snappy_decode(payload)
kafka.codec.snappy_encode(payload, xerial_compatible=True, xerial_blocksize=32768)

Encodes the given data with snappy compression.

If xerial_compatible is set then the stream is encoded in a fashion compatible with the xerial snappy library.

The block size (xerial_blocksize) controls how frequently blocking occurs; 32k is the default in the xerial library.

The format winds up being:

Header   | Block1 len | Block1 data  | ... | Blockn len | Blockn data
16 bytes | BE int32   | snappy bytes | ... | BE int32   | snappy bytes

It is important to note that the blocksize is the amount of uncompressed data presented to snappy at each block, whereas the blocklen is the number of bytes that will be present in the stream; so the length will always be <= blocksize.
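The framing described above can be sketched as follows. An identity function stands in for the snappy compressor so the framing itself is visible; the 16-byte header layout (magic bytes, then two big-endian int32 version fields) follows the xerial convention, and `xerial_frame` is an illustrative name.

```python
import struct

# 16-byte xerial header: magic bytes, then default version and
# minimum compatible version as BE int32s.
XERIAL_HEADER = struct.pack('>bccccccBii', -126, b'S', b'N', b'A', b'P',
                            b'P', b'Y', 0, 1, 1)

def xerial_frame(payload, blocksize=32768, compress=lambda b: b):
    """Frame payload into length-prefixed blocks; each block holds at most
    `blocksize` bytes of uncompressed input fed to the compressor."""
    out = [XERIAL_HEADER]
    for i in range(0, len(payload), blocksize):
        block = compress(payload[i:i + blocksize])
        out.append(struct.pack('>i', len(block)))  # BE int32 block length
        out.append(block)
    return b''.join(out)
```
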

kafka.common module

exception kafka.common.AsyncProducerQueueFull(failed_msgs, *args)

Bases: kafka.common.KafkaError

class kafka.common.BrokerMetadata(nodeId, host, port)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

host

Alias for field number 1

nodeId

Alias for field number 0

port

Alias for field number 2
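BrokerMetadata, like the other request/response classes in this module, is a namedtuple: each named accessor is an alias for a tuple position, and instances compare and unpack as plain tuples (which is why __getnewargs__ returns the plain tuple for copy and pickle). A minimal stand-in:

```python
from collections import namedtuple

# Stand-in mirroring kafka.common.BrokerMetadata's field order.
BrokerMetadata = namedtuple('BrokerMetadata', ['nodeId', 'host', 'port'])

broker = BrokerMetadata(nodeId=0, host='localhost', port=9092)
```
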

exception kafka.common.BrokerNotAvailableError

Bases: kafka.common.BrokerResponseError

description = 'This is not a client facing error and is used mostly by tools when a broker is not alive.'
errno = 8
message = 'BROKER_NOT_AVAILABLE'
exception kafka.common.BrokerResponseError

Bases: kafka.common.KafkaError

description = None
errno = None
message = None
exception kafka.common.BufferUnderflowError

Bases: kafka.common.KafkaError

exception kafka.common.Cancelled

Bases: kafka.common.KafkaError

retriable = True
exception kafka.common.ChecksumError

Bases: kafka.common.KafkaError

exception kafka.common.ClusterAuthorizationFailedError

Bases: kafka.common.BrokerResponseError

description = 'Returned by the broker when the client is not authorized to use an inter-broker or administrative API.'
errno = 31
message = 'CLUSTER_AUTHORIZATION_FAILED'
exception kafka.common.ConnectionError

Bases: kafka.common.KafkaError

invalid_metadata = True
retriable = True
exception kafka.common.ConsumerFetchSizeTooSmall

Bases: kafka.common.KafkaError

class kafka.common.ConsumerMetadataRequest(groups)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

groups

Alias for field number 0

class kafka.common.ConsumerMetadataResponse(error, nodeId, host, port)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

error

Alias for field number 0

host

Alias for field number 2

nodeId

Alias for field number 1

port

Alias for field number 3

exception kafka.common.ConsumerNoMoreData

Bases: kafka.common.KafkaError

exception kafka.common.ConsumerTimeout

Bases: kafka.common.KafkaError

exception kafka.common.CorrelationIdError

Bases: kafka.common.KafkaError

retriable = True
exception kafka.common.FailedPayloadsError(payload, *args)

Bases: kafka.common.KafkaError

class kafka.common.FetchRequestPayload(topic, partition, offset, max_bytes)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

max_bytes

Alias for field number 3

offset

Alias for field number 2

partition

Alias for field number 1

topic

Alias for field number 0

class kafka.common.FetchResponsePayload(topic, partition, error, highwaterMark, messages)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

error

Alias for field number 2

highwaterMark

Alias for field number 3

messages

Alias for field number 4

partition

Alias for field number 1

topic

Alias for field number 0

exception kafka.common.GroupAuthorizationFailedError

Bases: kafka.common.BrokerResponseError

description = 'Returned by the broker when the client is not authorized to access a particular groupId.'
errno = 30
message = 'GROUP_AUTHORIZATION_FAILED'
exception kafka.common.GroupCoordinatorNotAvailableError

Bases: kafka.common.BrokerResponseError

description = 'The broker returns this error code for group coordinator requests, offset commits, and most group management requests if the offsets topic has not yet been created, or if the group coordinator is not active.'
errno = 15
message = 'CONSUMER_COORDINATOR_NOT_AVAILABLE'
retriable = True
exception kafka.common.GroupLoadInProgressError

Bases: kafka.common.BrokerResponseError

description = 'The broker returns this error code for an offset fetch request if it is still loading offsets (after a leader change for that offsets topic partition), or in response to group membership requests (such as heartbeats) when group metadata is being loaded by the coordinator.'
errno = 14
message = 'OFFSETS_LOAD_IN_PROGRESS'
retriable = True
exception kafka.common.IllegalArgumentError

Bases: kafka.common.KafkaError

exception kafka.common.IllegalGenerationError

Bases: kafka.common.BrokerResponseError

description = 'Returned from group membership requests (such as heartbeats) when the generation id provided in the request is not the current generation.'
errno = 22
message = 'ILLEGAL_GENERATION'
exception kafka.common.IllegalStateError

Bases: kafka.common.KafkaError

exception kafka.common.InconsistentGroupProtocolError

Bases: kafka.common.BrokerResponseError

description = 'Returned in join group when the member provides a protocol type or set of protocols which is not compatible with the current group.'
errno = 23
message = 'INCONSISTENT_GROUP_PROTOCOL'
exception kafka.common.InvalidCommitOffsetSizeError

Bases: kafka.common.BrokerResponseError

description = 'This error indicates that an offset commit was rejected because of oversize metadata.'
errno = 28
message = 'INVALID_COMMIT_OFFSET_SIZE'
exception kafka.common.InvalidFetchRequestError

Bases: kafka.common.BrokerResponseError

description = 'The message has a negative size.'
errno = 4
message = 'INVALID_FETCH_SIZE'
exception kafka.common.InvalidGroupIdError

Bases: kafka.common.BrokerResponseError

description = 'Returned in join group when the groupId is empty or null.'
errno = 24
message = 'INVALID_GROUP_ID'
exception kafka.common.InvalidMessageError

Bases: kafka.common.BrokerResponseError

description = 'This indicates that a message contents does not match its CRC.'
errno = 2
message = 'INVALID_MESSAGE'
exception kafka.common.InvalidRequiredAcksError

Bases: kafka.common.BrokerResponseError

description = 'Returned from a produce request if the requested requiredAcks is invalid (anything other than -1, 1, or 0).'
errno = 21
message = 'INVALID_REQUIRED_ACKS'
exception kafka.common.InvalidSessionTimeoutError

Bases: kafka.common.BrokerResponseError

description = 'Return in join group when the requested session timeout is outside of the allowed range on the broker'
errno = 26
message = 'INVALID_SESSION_TIMEOUT'
exception kafka.common.InvalidTopicError

Bases: kafka.common.BrokerResponseError

description = 'For a request which attempts to access an invalid topic (e.g. one which has an illegal name), or if an attempt is made to write to an internal topic (such as the consumer offsets topic).'
errno = 17
message = 'INVALID_TOPIC'
exception kafka.common.KafkaConfigurationError

Bases: kafka.common.KafkaError

exception kafka.common.KafkaError

Bases: exceptions.RuntimeError

invalid_metadata = False
retriable = False
class kafka.common.KafkaMessage(topic, partition, offset, key, value)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

key

Alias for field number 3

offset

Alias for field number 2

partition

Alias for field number 1

topic

Alias for field number 0

value

Alias for field number 4

exception kafka.common.KafkaTimeoutError

Bases: kafka.common.KafkaError

exception kafka.common.KafkaUnavailableError

Bases: kafka.common.KafkaError

exception kafka.common.LeaderNotAvailableError

Bases: kafka.common.BrokerResponseError

description = 'This error is thrown if we are in the middle of a leadership election and there is currently no leader for this partition and hence it is unavailable for writes.'
errno = 5
invalid_metadata = True
message = 'LEADER_NOT_AVAILABLE'
retriable = True
class kafka.common.Message(magic, attributes, key, value)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

attributes

Alias for field number 1

key

Alias for field number 2

magic

Alias for field number 0

value

Alias for field number 3

exception kafka.common.MessageSizeTooLargeError

Bases: kafka.common.BrokerResponseError

description = 'The server has a configurable maximum message size to avoid unbounded memory allocation. This error is thrown if the client attempt to produce a message larger than this maximum.'
errno = 10
message = 'MESSAGE_SIZE_TOO_LARGE'
class kafka.common.MetadataRequest(topics)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

topics

Alias for field number 0

class kafka.common.MetadataResponse(brokers, topics)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

brokers

Alias for field number 0

topics

Alias for field number 1

exception kafka.common.NoBrokersAvailable

Bases: kafka.common.KafkaError

invalid_metadata = True
retriable = True
exception kafka.common.NoError

Bases: kafka.common.BrokerResponseError

description = 'No error--it worked!'
errno = 0
message = 'NO_ERROR'
exception kafka.common.NodeNotReadyError

Bases: kafka.common.KafkaError

retriable = True
exception kafka.common.NotCoordinatorForGroupError

Bases: kafka.common.BrokerResponseError

description = 'The broker returns this error code if it receives an offset fetch or commit request for a group that it is not a coordinator for.'
errno = 16
message = 'NOT_COORDINATOR_FOR_CONSUMER'
retriable = True
exception kafka.common.NotEnoughReplicasAfterAppendError

Bases: kafka.common.BrokerResponseError

description = 'Returned from a produce request when the message was written to the log, but with fewer in-sync replicas than required.'
errno = 20
message = 'NOT_ENOUGH_REPLICAS_AFTER_APPEND'
exception kafka.common.NotEnoughReplicasError

Bases: kafka.common.BrokerResponseError

description = 'Returned from a produce request when the number of in-sync replicas is lower than the configured minimum and requiredAcks is -1.'
errno = 19
message = 'NOT_ENOUGH_REPLICAS'
exception kafka.common.NotLeaderForPartitionError

Bases: kafka.common.BrokerResponseError

description = 'This error is thrown if the client attempts to send messages to a replica that is not the leader for some partition. It indicates that the clients metadata is out of date.'
errno = 6
invalid_metadata = True
message = 'NOT_LEADER_FOR_PARTITION'
retriable = True
class kafka.common.OffsetAndMessage(offset, message)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

message

Alias for field number 1

offset

Alias for field number 0

class kafka.common.OffsetAndMetadata(offset, metadata)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

metadata

Alias for field number 1

offset

Alias for field number 0

class kafka.common.OffsetCommitRequestPayload(topic, partition, offset, metadata)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

metadata

Alias for field number 3

offset

Alias for field number 2

partition

Alias for field number 1

topic

Alias for field number 0

class kafka.common.OffsetCommitResponsePayload(topic, partition, error)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

error

Alias for field number 2

partition

Alias for field number 1

topic

Alias for field number 0

class kafka.common.OffsetFetchRequestPayload(topic, partition)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

partition

Alias for field number 1

topic

Alias for field number 0

class kafka.common.OffsetFetchResponsePayload(topic, partition, offset, metadata, error)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

error

Alias for field number 4

metadata

Alias for field number 3

offset

Alias for field number 2

partition

Alias for field number 1

topic

Alias for field number 0

exception kafka.common.OffsetMetadataTooLargeError

Bases: kafka.common.BrokerResponseError

description = 'If you specify a string larger than configured maximum for offset metadata.'
errno = 12
message = 'OFFSET_METADATA_TOO_LARGE'
exception kafka.common.OffsetOutOfRangeError

Bases: kafka.common.BrokerResponseError

description = 'The requested offset is outside the range of offsets maintained by the server for the given topic/partition.'
errno = 1
message = 'OFFSET_OUT_OF_RANGE'
class kafka.common.OffsetRequestPayload(topic, partition, time, max_offsets)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

max_offsets

Alias for field number 3

partition

Alias for field number 1

time

Alias for field number 2

topic

Alias for field number 0

class kafka.common.OffsetResponsePayload(topic, partition, error, offsets)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

error

Alias for field number 2

offsets

Alias for field number 3

partition

Alias for field number 1

topic

Alias for field number 0

class kafka.common.PartitionMetadata(topic, partition, leader, replicas, isr, error)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

error

Alias for field number 5

isr

Alias for field number 4

leader

Alias for field number 2

partition

Alias for field number 1

replicas

Alias for field number 3

topic

Alias for field number 0

class kafka.common.ProduceRequestPayload(topic, partition, messages)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

messages

Alias for field number 2

partition

Alias for field number 1

topic

Alias for field number 0

class kafka.common.ProduceResponsePayload(topic, partition, error, offset)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

error

Alias for field number 2

offset

Alias for field number 3

partition

Alias for field number 1

topic

Alias for field number 0

exception kafka.common.ProtocolError

Bases: kafka.common.KafkaError

exception kafka.common.RebalanceInProgressError

Bases: kafka.common.BrokerResponseError

description = 'Returned in heartbeat requests when the coordinator has begun rebalancing the group. This indicates to the client that it should rejoin the group.'
errno = 27
message = 'REBALANCE_IN_PROGRESS'
exception kafka.common.RecordListTooLargeError

Bases: kafka.common.BrokerResponseError

description = 'If a message batch in a produce request exceeds the maximum configured segment size.'
errno = 18
message = 'RECORD_LIST_TOO_LARGE'
exception kafka.common.ReplicaNotAvailableError

Bases: kafka.common.BrokerResponseError

description = 'If replica is expected on a broker, but is not (this can be safely ignored).'
errno = 9
message = 'REPLICA_NOT_AVAILABLE'
exception kafka.common.RequestTimedOutError

Bases: kafka.common.BrokerResponseError

description = 'This error is thrown if the request exceeds the user-specified time limit in the request.'
errno = 7
message = 'REQUEST_TIMED_OUT'
retriable = True
class kafka.common.RetryOptions(limit, backoff_ms, retry_on_timeouts)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

backoff_ms

Alias for field number 1

limit

Alias for field number 0

retry_on_timeouts

Alias for field number 2

exception kafka.common.StaleControllerEpochError

Bases: kafka.common.BrokerResponseError

description = 'Internal error code for broker-to-broker communication.'
errno = 11
message = 'STALE_CONTROLLER_EPOCH'
exception kafka.common.StaleLeaderEpochCodeError

Bases: kafka.common.BrokerResponseError

errno = 13
message = 'STALE_LEADER_EPOCH_CODE'
exception kafka.common.StaleMetadata

Bases: kafka.common.KafkaError

invalid_metadata = True
retriable = True
exception kafka.common.TooManyInFlightRequests

Bases: kafka.common.KafkaError

retriable = True
exception kafka.common.TopicAuthorizationFailedError

Bases: kafka.common.BrokerResponseError

description = 'Returned by the broker when the client is not authorized to access the requested topic.'
errno = 29
message = 'TOPIC_AUTHORIZATION_FAILED'
class kafka.common.TopicPartition(topic, partition)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

partition

Alias for field number 1

topic

Alias for field number 0

exception kafka.common.UnknownError

Bases: kafka.common.BrokerResponseError

description = 'An unexpected server error.'
errno = -1
message = 'UNKNOWN'
exception kafka.common.UnknownMemberIdError

Bases: kafka.common.BrokerResponseError

description = 'Returned from group requests (offset commits/fetches, heartbeats, etc) when the memberId is not in the current generation.'
errno = 25
message = 'UNKNOWN_MEMBER_ID'
exception kafka.common.UnknownTopicOrPartitionError

Bases: kafka.common.BrokerResponseError

description = 'This request is for a topic or partition that does not exist on this broker.'
errno = 3
invalid_metadata = True
message = 'UNKNOWN_TOPIC_OR_PARTITON'
exception kafka.common.UnrecognizedBrokerVersion

Bases: kafka.common.KafkaError

exception kafka.common.UnsupportedCodecError

Bases: kafka.common.KafkaError

kafka.common.check_error(response)
kafka.common.for_code(error_code)
kafka.common.x

alias of UnknownTopicOrPartitionError
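for_code() resolves a broker error code to the matching BrokerResponseError subclass, and check_error() raises that exception when a response carries a nonzero error code. A runnable sketch with stand-in classes (only a subset of codes shown; the real module builds the map from all subclasses):

```python
# Stand-ins for kafka.common error classes, so the lookup logic runs
# without kafka-python installed.
class BrokerResponseError(Exception):
    errno = None

class OffsetOutOfRangeError(BrokerResponseError):
    errno = 1

class UnknownTopicOrPartitionError(BrokerResponseError):
    errno = 3

class UnknownError(BrokerResponseError):
    errno = -1

# errno -> exception class map, consulted by for_code().
kafka_errors = {cls.errno: cls
                for cls in (OffsetOutOfRangeError, UnknownTopicOrPartitionError)}

def for_code(error_code):
    """Return the exception class registered for a broker error code."""
    return kafka_errors.get(error_code, UnknownError)

def check_error(response):
    """Raise the mapped exception if the response has a nonzero error."""
    if response.error:
        raise for_code(response.error)(response)
```
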

kafka.conn module

class kafka.conn.BrokerConnection(host, port, **configs)

Bases: object

DEFAULT_CONFIG = {'reconnect_backoff_ms': 50, 'receive_buffer_bytes': None, 'request_timeout_ms': 40000, 'client_id': 'kafka-python-1.0.2', 'max_in_flight_requests_per_connection': 5, 'send_buffer_bytes': None, 'api_version': (0, 8, 2)}
blacked_out()

Return True if we are disconnected from the given node and can’t re-establish a connection yet.

can_send_more()

Return True unless the number of in-flight requests has reached the configured maximum.

close(error=None)

Close socket and fail all in-flight-requests.

Parameters:error (Exception, optional) – pending in-flight-requests will be failed with this exception. Default: kafka.common.ConnectionError.
connect()

Attempt to connect and return ConnectionState

connected()

Return True iff socket is connected.

recv(timeout=0)

Non-blocking network receive.

Return response if available

send(request, expect_response=True)

send request, return Future()

Can block on network if request is larger than send_buffer_bytes
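Together, can_send_more(), send(), and recv() amount to bounded request pipelining per connection. A minimal sketch with illustrative names (the real class also owns the socket, futures, and timeouts):

```python
from collections import deque

# Minimal sketch of per-connection in-flight bookkeeping.
class PipelinedConnection:
    def __init__(self, max_in_flight=5):
        self.max_in_flight = max_in_flight
        self.in_flight_requests = deque()

    def can_send_more(self):
        return len(self.in_flight_requests) < self.max_in_flight

    def send(self, request):
        if not self.can_send_more():
            raise RuntimeError('too many in-flight requests')
        self.in_flight_requests.append(request)

    def recv(self):
        # On a single connection, responses come back in request order.
        if not self.in_flight_requests:
            return None
        return self.in_flight_requests.popleft()
```
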

class kafka.conn.ConnectionStates

Bases: object

CONNECTED = '<connected>'
CONNECTING = '<connecting>'
DISCONNECTED = '<disconnected>'
class kafka.conn.InFlightRequest(request, response_type, correlation_id, future, timestamp)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

correlation_id

Alias for field number 2

future

Alias for field number 3

request

Alias for field number 0

response_type

Alias for field number 1

timestamp

Alias for field number 4

class kafka.conn.KafkaConnection(host, port, timeout=120)

Bases: thread._local

A socket connection to a single Kafka broker

Parameters:
  • host – the host name or IP address of a kafka broker
  • port – the port number the kafka broker is listening on
  • timeout – default 120. The socket timeout for sending and receiving data in seconds. None means no timeout, so a request can block forever.
close()

Shutdown and close the connection socket

copy()

Create an inactive copy of the connection object, suitable for passing to a background thread.

The returned copy is not connected; you must call reinit() before using.

get_connected_socket()
recv(request_id)

Get a response packet from Kafka

Parameters:request_id – can be any int (only used for debug logging...)
Returns:Encoded kafka packet response from server
Return type:str
reinit()

Re-initialize the socket connection: close the current socket (if open) and start a fresh connection. Raises ConnectionError on failure.

send(request_id, payload)

Send a request to Kafka

Parameters:
  • request_id (int) – can be any int (used only for debug logging...)
  • payload – an encoded kafka packet (see KafkaProtocol)
kafka.conn.collect_hosts(hosts, randomize=True)

Collects a comma-separated set of hosts (host:port) and optionally randomizes the returned list.
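A sketch of that parsing, assuming the package default port of 9092 for entries without an explicit port (illustrative, not the exact implementation):

```python
import random

def collect_hosts(hosts, randomize=True, default_port=9092):
    """Parse 'host1:port1,host2,...' (or a list of such strings) into
    (host, port) pairs; entries without a port get default_port."""
    if isinstance(hosts, str):
        hosts = hosts.split(',')
    result = []
    for entry in hosts:
        host, _, port = entry.strip().partition(':')
        result.append((host, int(port) if port else default_port))
    if randomize:
        random.shuffle(result)
    return result
```
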

kafka.context module

Context manager to commit/rollback consumer offsets.

class kafka.context.OffsetCommitContext(consumer)

Bases: object

Provides commit/rollback semantics around a SimpleConsumer.

Usage assumes that auto_commit is disabled, that messages are consumed in batches, and that the consuming process will record its own successful processing of each message. Both the commit and rollback operations respect a “high-water mark” to ensure that the last unsuccessfully processed message will be retried.

Example:

consumer = SimpleConsumer(client, group, topic, auto_commit=False)
consumer.provide_partition_info()
consumer.fetch_last_known_offsets()

while some_condition:
    with OffsetCommitContext(consumer) as context:
        messages = consumer.get_messages(count, block=False)

        for partition, message in messages:
            if can_process(message):
                context.mark(partition, message.offset)
            else:
                break

        if not context:
            sleep(delay)

These semantics allow for deferred message processing (e.g. if can_process compares message time to clock time) and for repeated processing of the last unsuccessful message (until some external error is resolved).

__enter__()

Start a new context:

  • Record the initial offsets for rollback
  • Reset the high-water mark
__exit__(exc_type, exc_value, traceback)

End a context.

  • If there was no exception, commit up to the current high-water mark.
  • If there was an offset out of range error, attempt to find the correct initial offset.
  • If there was any other error, roll back to the initial offsets.
__nonzero__()

Return whether any operations were marked in the context.

commit()

Commit this context’s offsets:

  • If the high-water mark has moved, commit up to and position the consumer at the high-water mark.
  • Otherwise, reset the consumer to the initial offsets.
commit_partition_offsets(partition_offsets)

Commit explicit partition/offset pairs.

handle_out_of_range()

Handle out of range condition by seeking to the beginning of valid ranges.

This assumes that an out of range doesn’t happen by seeking past the end of valid ranges – which is far less likely.

mark(partition, offset)

Set the high-water mark in the current context.

In order to know the current partition, it is helpful to initialize the consumer to provide partition info via:

consumer.provide_partition_info()
rollback()

Rollback this context:

  • Position the consumer at the initial offsets.
update_consumer_offsets(partition_offsets)

Update consumer offsets to explicit positions.

kafka.protocol module

kafka.util module

class kafka.util.ReentrantTimer(t, fn, *args, **kwargs)

Bases: object

A timer that can be restarted, unlike threading.Timer (although this uses threading.Timer)

Parameters:
  • t – timer interval in milliseconds
  • fn – a callable to invoke
  • args – tuple of args to be passed to function
  • kwargs – keyword arguments to be passed to function
start()
stop()
class kafka.util.WeakMethod(object_dot_method)

Bases: object

Callable that weakly references a method and the object it is bound to. It is based on http://stackoverflow.com/a/24287465.

Parameters:object_dot_method – A bound instance method (i.e. ‘object.method’).
__call__(*args, **kwargs)

Calls the method on target with args and kwargs.

kafka.util.crc32(data)
kafka.util.group_by_topic_and_partition(tuples)
kafka.util.read_int_string(data, cur)
kafka.util.read_short_string(data, cur)
kafka.util.relative_unpack(fmt, data, cur)
kafka.util.write_int_string(s)
kafka.util.write_short_string(s)
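write_short_string() and read_short_string() handle the Kafka wire protocol's length-prefixed "short strings": a big-endian int16 length followed by the raw bytes, with a length of -1 encoding a null string. A sketch of that framing (signatures mirror the helpers above; the real implementations also validate types and buffer bounds):

```python
import struct

def write_short_string(s):
    """Encode bytes as a BE int16 length prefix + payload; None -> length -1."""
    if s is None:
        return struct.pack('>h', -1)
    return struct.pack('>h', len(s)) + s

def read_short_string(data, cur):
    """Decode a short string at offset `cur`; return (value, new_offset)."""
    (strlen,) = struct.unpack('>h', data[cur:cur + 2])
    if strlen == -1:
        return None, cur + 2
    return data[cur + 2:cur + 2 + strlen], cur + 2 + strlen
```
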

Module contents

class kafka.KafkaConsumer(*topics, **configs)

Bases: six.Iterator

Consume records from a Kafka cluster.

The consumer will transparently handle the failure of servers in the Kafka cluster, and adapt as topic-partitions are created or migrate between brokers. It also interacts with the assigned kafka Group Coordinator node to allow multiple consumers to load balance consumption of topics (requires kafka >= 0.9.0.0).

Parameters:

*topics (str) – optional list of topics to subscribe to. If not set, call subscribe() or assign() before consuming records.

Keyword Arguments:
 
  • bootstrap_servers – ‘host[:port]’ string (or list of ‘host[:port]’ strings) that the consumer should contact to bootstrap initial cluster metadata. This does not have to be the full node list. It just needs to have at least one broker that will respond to a Metadata API Request. Default port is 9092. If no servers are specified, will default to localhost:9092.
  • client_id (str) – a name for this client. This string is passed in each request to servers and can be used to identify specific server-side log entries that correspond to this client. Also submitted to GroupCoordinator for logging with respect to consumer group administration. Default: ‘kafka-python-{version}’
  • group_id (str or None) – name of the consumer group to join for dynamic partition assignment (if enabled), and to use for fetching and committing offsets. If None, auto-partition assignment (via group coordinator) and offset commits are disabled. Default: ‘kafka-python-default-group’
  • key_deserializer (callable) – Any callable that takes a raw message key and returns a deserialized key.
  • value_deserializer (callable) – Any callable that takes a raw message value and returns a deserialized value.
  • fetch_min_bytes (int) – Minimum amount of data the server should return for a fetch request, otherwise wait up to fetch_max_wait_ms for more data to accumulate. Default: 1.
  • fetch_max_wait_ms (int) – The maximum amount of time in milliseconds the server will block before answering the fetch request if there isn’t sufficient data to immediately satisfy the requirement given by fetch_min_bytes. Default: 500.
  • max_partition_fetch_bytes (int) – The maximum amount of data per-partition the server will return. The maximum total memory used for a request = #partitions * max_partition_fetch_bytes. This size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on a certain partition. Default: 1048576.
  • request_timeout_ms (int) – Client request timeout in milliseconds. Default: 40000.
  • retry_backoff_ms (int) – Milliseconds to backoff when retrying on errors. Default: 100.
  • reconnect_backoff_ms (int) – The amount of time in milliseconds to wait before attempting to reconnect to a given host. Default: 50.
  • max_in_flight_requests_per_connection (int) – Requests are pipelined to kafka brokers up to this number of maximum requests per broker connection. Default: 5.
  • auto_offset_reset (str) – A policy for resetting offsets on OffsetOutOfRange errors: ‘earliest’ will move to the oldest available message, ‘latest’ will move to the most recent. Any other value will raise an exception. Default: ‘latest’.
  • enable_auto_commit (bool) – If true the consumer’s offset will be periodically committed in the background. Default: True.
  • auto_commit_interval_ms (int) – milliseconds between automatic offset commits, if enable_auto_commit is True. Default: 5000.
  • default_offset_commit_callback (callable) – called as callback(offsets, response) response will be either an Exception or a OffsetCommitResponse struct. This callback can be used to trigger custom actions when a commit request completes.
  • check_crcs (bool) – Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. Default: True
  • metadata_max_age_ms (int) – The period of time in milliseconds after which we force a refresh of metadata even if we haven’t seen any partition leadership changes to proactively discover any new brokers or partitions. Default: 300000
  • partition_assignment_strategy (list) – List of objects to use to distribute partition ownership amongst consumer instances when group management is used. Default: [RangePartitionAssignor, RoundRobinPartitionAssignor]
  • heartbeat_interval_ms (int) – The expected time in milliseconds between heartbeats to the consumer coordinator when using Kafka’s group management feature. Heartbeats are used to ensure that the consumer’s session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session_timeout_ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. Default: 3000
  • session_timeout_ms (int) – The timeout used to detect failures when using Kafka’s group management facilities. Default: 30000
  • send_buffer_bytes (int) – The size of the TCP send buffer (SO_SNDBUF) to use when sending data. Default: None (relies on system defaults). The java client defaults to 131072.
  • receive_buffer_bytes (int) – The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. Default: None (relies on system defaults). The java client defaults to 32768.
  • consumer_timeout_ms (int) – number of milliseconds to wait before raising a timeout exception if no message is available for consumption. Default: -1 (never raise a timeout exception)
  • api_version (str) – specify which kafka API version to use. 0.9 enables full group coordination features; 0.8.2 enables kafka-storage offset commits; 0.8.1 enables zookeeper-storage offset commits; 0.8.0 supports only the remaining basic functionality. If set to ‘auto’, will attempt to infer the broker version by probing various APIs. Default: auto

Note

Configuration parameters are described in more detail at https://kafka.apache.org/090/configuration.html#newconsumerconfigs
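A sketch of a typical KafkaConsumer configuration using the keyword arguments above. The broker address, topic, and group names are placeholders:

```python
# Sketch only: assumes a broker reachable at localhost:9092.
consumer_config = {
    "bootstrap_servers": "localhost:9092",
    "group_id": "my-group",
    "value_deserializer": lambda v: v.decode("utf-8"),  # raw bytes -> str
    "auto_offset_reset": "earliest",  # start from oldest message on first run
    "enable_auto_commit": True,       # background commits every 5000 ms
}

def make_consumer(*topics):
    # Imported lazily so the config above can be inspected without a broker.
    from kafka import KafkaConsumer
    return KafkaConsumer(*topics, **consumer_config)
```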

DEFAULT_CONFIG = {'reconnect_backoff_ms': 50, 'receive_buffer_bytes': None, 'partition_assignment_strategy': (<class 'kafka.coordinator.assignors.range.RangePartitionAssignor'>, <class 'kafka.coordinator.assignors.roundrobin.RoundRobinPartitionAssignor'>), 'auto_offset_reset': 'latest', 'consumer_timeout_ms': -1, 'bootstrap_servers': 'localhost', 'request_timeout_ms': 40000, 'enable_auto_commit': True, 'heartbeat_interval_ms': 3000, 'max_in_flight_requests_per_connection': 5, 'api_version': 'auto', 'metadata_max_age_ms': 300000, 'key_deserializer': None, 'send_buffer_bytes': None, 'max_partition_fetch_bytes': 1048576, 'check_crcs': True, 'client_id': 'kafka-python-1.0.2', 'connections_max_idle_ms': 540000, 'session_timeout_ms': 30000, 'auto_commit_interval_ms': 5000, 'retry_backoff_ms': 100, 'value_deserializer': None, 'group_id': 'kafka-python-default-group', 'fetch_max_wait_ms': 500, 'fetch_min_bytes': 1}
assign(partitions)

Manually assign a list of TopicPartitions to this consumer.

Parameters:partitions (list of TopicPartition) – assignment for this instance.
Raises:IllegalStateError – if consumer has already called subscribe()

Warning

It is not possible to use both manual partition assignment with assign() and group assignment with subscribe().

Note

This interface does not support incremental assignment and will replace the previous assignment (if there was one).

Note

Manual topic assignment through this method does not use the consumer’s group management functionality. As such, there will be no rebalance operation triggered when group membership or cluster and topic metadata change.
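Manual assignment is often paired with an explicit seek(). A small helper sketching that pattern (the `consumer` argument is assumed to be a KafkaConsumer that has never called subscribe()):

```python
def assign_and_rewind(consumer, topic_partitions, offset=0):
    # Manual assignment replaces any previous assignment and bypasses the
    # group coordinator entirely -- no rebalance will ever be triggered.
    consumer.assign(topic_partitions)
    for tp in topic_partitions:
        consumer.seek(tp, offset)  # position each partition at `offset`
    return list(topic_partitions)
```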

assignment()

Get the TopicPartitions currently assigned to this consumer.

If partitions were directly assigned using assign(), then this will simply return the same partitions that were previously assigned. If topics were subscribed using subscribe(), then this will give the set of topic partitions currently assigned to the consumer (which may be none if the assignment hasn’t happened yet, or if the partitions are in the process of being reassigned).

Returns:{TopicPartition, ...}
Return type:set
close()

Close the consumer, waiting indefinitely for any needed cleanup.

commit(offsets=None)

Commit offsets to kafka, blocking until success or error

This commits offsets only to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used. To avoid re-processing the last message read if a consumer is restarted, the committed offset should be the next message your application should consume, i.e.: last_offset + 1.

Blocks until either the commit succeeds or an unrecoverable error is encountered (in which case it is thrown to the caller).

Currently only supports kafka-topic offset storage (not zookeeper)

Parameters:offsets (dict, optional) – {TopicPartition: OffsetAndMetadata} dict to commit with the configured group_id. Defaults to current consumed offsets for all subscribed partitions.
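The last_offset + 1 rule above is easy to get wrong. A pure helper sketching it (in real use the values would still be wrapped in OffsetAndMetadata structs before being passed to commit()):

```python
def offsets_to_commit(last_consumed):
    """Turn {partition: last consumed offset} into {partition: offset to
    commit}. Kafka expects the committed value to be the NEXT offset the
    application should read, i.e. last_offset + 1."""
    return {tp: offset + 1 for tp, offset in last_consumed.items()}
```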
commit_async(offsets=None, callback=None)

Commit offsets to kafka asynchronously, optionally firing callback

This commits offsets only to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used. To avoid re-processing the last message read if a consumer is restarted, the committed offset should be the next message your application should consume, i.e.: last_offset + 1.

This is an asynchronous call and will not block. Any errors encountered are either passed to the callback (if provided) or discarded.

Parameters:
  • offsets (dict, optional) – {TopicPartition: OffsetAndMetadata} dict to commit with the configured group_id. Defaults to current consumed offsets for all subscribed partitions.
  • callback (callable, optional) – called as callback(offsets, response) with response as either an Exception or a OffsetCommitResponse struct. This callback can be used to trigger custom actions when a commit request completes.
Returns:

kafka.future.Future

committed(partition)

Get the last committed offset for the given partition

This offset will be used as the position for the consumer in the event of a failure.

This call may block to do a remote call if the partition in question isn’t assigned to this consumer or if the consumer hasn’t yet initialized its cache of committed offsets.

Parameters:partition (TopicPartition) – the partition to check
Returns:The last committed offset, or None if there was no prior commit.
configure(**configs)
fetch_messages()
get_partition_offsets(topic, partition, request_time_ms, max_num_offsets)
highwater(partition)

Last known highwater offset for a partition

A highwater offset is the offset that will be assigned to the next message that is produced. It may be useful for calculating lag, by comparing with the reported position. Note that both position and highwater refer to the next offset – i.e., highwater offset is one greater than the newest available message.

Highwater offsets are returned in FetchResponse messages, so they will not be available if no FetchRequests have been sent for this partition yet.

Parameters:partition (TopicPartition) – partition to check
Returns:offset if available
Return type:int or None
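Since highwater() and position() both refer to the next offset, consumer lag is a plain difference. A sketch:

```python
def consumer_lag(highwater, position):
    # Both values refer to the NEXT offset, so no off-by-one adjustment is
    # needed. highwater may be None before the first FetchResponse for the
    # partition arrives.
    if highwater is None:
        return None
    return highwater - position
```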
offsets(group=None)
partitions_for_topic(topic)

Get metadata about the partitions for a given topic.

Parameters:topic (str) – topic to check
Returns:partition ids
Return type:set
pause(*partitions)

Suspend fetching from the requested partitions.

Future calls to poll() will not return any records from these partitions until they have been resumed using resume(). Note that this method does not affect partition subscription. In particular, it does not cause a group rebalance when automatic assignment is used.

Parameters:*partitions (TopicPartition) – partitions to pause
poll(timeout_ms=0)

Fetch data from assigned topics / partitions.

Records are fetched and returned in batches by topic-partition. On each poll, consumer will try to use the last consumed offset as the starting offset and fetch sequentially. The last consumed offset can be manually set through seek(partition, offset) or automatically set as the last committed offset for the subscribed list of partitions.

Incompatible with iterator interface – use one or the other, not both.

Parameters:timeout_ms (int, optional) – milliseconds spent waiting in poll if data is not available in the buffer. If 0, returns immediately with any records currently available in the buffer (possibly none). Must not be negative. Default: 0
Returns:topic to list of records since the last fetch for the subscribed list of topics and partitions
Return type:dict
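A minimal loop sketching poll()-based consumption (remember: do not mix this with the iterator interface). The consumer is assumed to already have an assignment:

```python
def drain(consumer, timeout_ms=500, max_polls=10):
    """Collect records with repeated poll() calls."""
    records = []
    for _ in range(max_polls):
        batch = consumer.poll(timeout_ms=timeout_ms)  # {TopicPartition: [msgs]}
        if not batch:
            break  # nothing arrived within timeout_ms
        for partition_records in batch.values():
            records.extend(partition_records)
    return records
```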
position(partition)

Get the offset of the next record that will be fetched

Parameters:partition (TopicPartition) – partition to check
Returns:offset
Return type:int
resume(*partitions)

Resume fetching from the specified (paused) partitions.

Parameters:*partitions (TopicPartition) – partitions to resume
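Because pause() does not affect the subscription, it pairs naturally with resume() around a temporary stretch of work. A sketch:

```python
def with_paused(consumer, partitions, fn):
    """Run fn() while fetching from `partitions` is suspended, then resume.

    Pausing does not change the subscription, so no rebalance is triggered."""
    consumer.pause(*partitions)
    try:
        return fn()
    finally:
        consumer.resume(*partitions)  # resume even if fn() raised
```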
seek(partition, offset)

Manually specify the fetch offset for a TopicPartition.

Overrides the fetch offsets that the consumer will use on the next poll(). If this API is invoked for the same partition more than once, the latest offset will be used on the next poll(). Note that you may lose data if this API is used arbitrarily in the middle of consumption to reset the fetch offsets.

Parameters:
  • partition (TopicPartition) – partition for seek operation
  • offset (int) – message offset in partition
Raises:

AssertionError – if offset is not an int >= 0; or if partition is not currently assigned.

seek_to_beginning(*partitions)

Seek to the oldest available offset for partitions.

Parameters:*partitions – optionally provide specific TopicPartitions, otherwise default to all assigned partitions
Raises:AssertionError – if any partition is not currently assigned, or if no partitions are assigned
seek_to_end(*partitions)

Seek to the most recent available offset for partitions.

Parameters:*partitions – optionally provide specific TopicPartitions, otherwise default to all assigned partitions
Raises:AssertionError – if any partition is not currently assigned, or if no partitions are assigned
set_topic_partitions(*topics)
subscribe(topics=(), pattern=None, listener=None)

Subscribe to a list of topics, or a topic regex pattern

Partitions will be dynamically assigned via a group coordinator. Topic subscriptions are not incremental: this list will replace the current assignment (if there is one).

This method is incompatible with assign()

Parameters:
  • topics (list) – List of topics for subscription.
  • pattern (str) – Pattern to match available topics. You must provide either topics or pattern, but not both.
  • listener (ConsumerRebalanceListener) –

    Optionally include listener callback, which will be called before and after each rebalance operation.

    As part of group management, the consumer will keep track of the list of consumers that belong to a particular group and will trigger a rebalance operation if one of the following events occurs:

    • The number of partitions changes for any of the subscribed topics
    • Topic is created or deleted
    • An existing member of the consumer group dies
    • A new member is added to the consumer group

    When any of these events are triggered, the provided listener will be invoked first to indicate that the consumer’s assignment has been revoked, and then again when the new assignment has been received. Note that this listener will immediately override any listener set in a previous call to subscribe. It is guaranteed, however, that the partitions revoked/assigned through this interface are from topics subscribed in this call.

Raises:
  • IllegalStateError – if called after previously calling assign()
  • AssertionError – if neither topics nor pattern is provided
  • TypeError – if listener is not a ConsumerRebalanceListener
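A sketch of a rebalance listener that records the revoked-then-assigned sequence described above. A real listener must subclass kafka.ConsumerRebalanceListener; the two hook names below follow that interface:

```python
class LoggingRebalanceListener:
    # Illustrative sketch: in real use, subclass ConsumerRebalanceListener
    # and pass an instance as the `listener` argument to subscribe().
    def __init__(self):
        self.events = []

    def on_partitions_revoked(self, revoked):
        # Called first, before the new assignment is received.
        self.events.append(("revoked", sorted(revoked)))

    def on_partitions_assigned(self, assigned):
        # Called once the new assignment has been received.
        self.events.append(("assigned", sorted(assigned)))
```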
subscription()

Get the current topic subscription.

Returns:{topic, ...}
Return type:set
task_done(message)
topics()

Get all topics the user is authorized to view.

Returns:topics
Return type:set
unsubscribe()

Unsubscribe from all topics and clear all assigned partitions.

class kafka.KafkaProducer(**configs)

Bases: object

A Kafka client that publishes records to the Kafka cluster.

The producer is thread safe and sharing a single producer instance across threads will generally be faster than having multiple instances.

The producer consists of a pool of buffer space that holds records that haven’t yet been transmitted to the server as well as a background I/O thread that is responsible for turning these records into requests and transmitting them to the cluster.

The send() method is asynchronous. When called it adds the record to a buffer of pending record sends and immediately returns. This allows the producer to batch together individual records for efficiency.

The ‘acks’ config controls the criteria under which requests are considered complete. The “all” setting will result in blocking on the full commit of the record, the slowest but most durable setting.

If the request fails, the producer can automatically retry, unless ‘retries’ is configured to 0. Enabling retries also opens up the possibility of duplicates (see the documentation on message delivery semantics for details: http://kafka.apache.org/documentation.html#semantics ).

The producer maintains buffers of unsent records for each partition. These buffers are of a size specified by the ‘batch_size’ config. Making this larger can result in more batching, but requires more memory (since we will generally have one of these buffers for each active partition).

By default a buffer is available to send immediately even if there is additional unused space in the buffer. However if you want to reduce the number of requests you can set ‘linger_ms’ to something greater than 0. This will instruct the producer to wait up to that number of milliseconds before sending a request in hope that more records will arrive to fill up the same batch. This is analogous to Nagle’s algorithm in TCP. Note that records that arrive close together in time will generally batch together even with linger_ms=0 so under heavy load batching will occur regardless of the linger configuration; however setting this to something larger than 0 can lead to fewer, more efficient requests when not under maximal load at the cost of a small amount of latency.

The buffer_memory controls the total amount of memory available to the producer for buffering. If records are sent faster than they can be transmitted to the server then this buffer space will be exhausted. When the buffer space is exhausted additional send calls will block.

The key_serializer and value_serializer instruct how to turn the key and value objects the user provides into bytes.

Keyword Arguments:
 
  • bootstrap_servers – ‘host[:port]’ string (or list of ‘host[:port]’ strings) that the producer should contact to bootstrap initial cluster metadata. This does not have to be the full node list. It just needs to have at least one broker that will respond to a Metadata API Request. Default port is 9092. If no servers are specified, will default to localhost:9092.

  • client_id (str) – a name for this client. This string is passed in each request to servers and can be used to identify specific server-side log entries that correspond to this client. Default: ‘kafka-python-producer-#’ (appended with a unique number per instance)

  • key_serializer (callable) – used to convert user-supplied keys to bytes. If not None, called as f(key), should return bytes. Default: None.

  • value_serializer (callable) – used to convert user-supplied message values to bytes. If not None, called as f(value), should return bytes. Default: None.

  • acks (0, 1, ‘all’) – The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are common:

    0: Producer will not wait for any acknowledgment from the server.

    The message will immediately be added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won’t generally know of any failures). The offset given back for each record will always be set to -1.

    1: Wait for leader to write the record to its local log only.

    Broker will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost.

    all: Wait for the full set of in-sync replicas to write the record.

    This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee.

    If unset, defaults to acks=1.

  • compression_type (str) – The compression type for all data generated by the producer. Valid values are ‘gzip’, ‘snappy’, ‘lz4’, or None. Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression). Default: None.

  • retries (int) – Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. Default: 0.

  • batch_size (int) – Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). Default: 16384

  • linger_ms (int) – The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay; that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle’s algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch_size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will ‘linger’ for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger_ms=5 would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. Default: 0.

  • partitioner (callable) – Callable used to determine which partition each message is assigned to. Called (after key serialization): partitioner(key_bytes, all_partitions, available_partitions). The default partitioner implementation hashes each non-None key using the same murmur2 algorithm as the java client so that messages with the same key are assigned to the same partition. When a key is None, the message is delivered to a random partition (filtered to partitions with available leaders only, if possible).

  • buffer_memory (int) – The total bytes of memory the producer should use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block up to max_block_ms, raising an exception on timeout. In the current implementation, this setting is an approximation. Default: 33554432 (32MB)

  • max_block_ms (int) – Number of milliseconds to block during send() when attempting to allocate additional memory before raising an exception. Default: 60000.

  • max_request_size (int) – The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. Default: 1048576.

  • metadata_max_age_ms (int) – The period of time in milliseconds after which we force a refresh of metadata even if we haven’t seen any partition leadership changes to proactively discover any new brokers or partitions. Default: 300000

  • retry_backoff_ms (int) – Milliseconds to backoff when retrying on errors. Default: 100.

  • request_timeout_ms (int) – Client request timeout in milliseconds. Default: 30000.

  • receive_buffer_bytes (int) – The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. Default: None (relies on system defaults). Java client defaults to 32768.

  • send_buffer_bytes (int) – The size of the TCP send buffer (SO_SNDBUF) to use when sending data. Default: None (relies on system defaults). Java client defaults to 131072.

  • reconnect_backoff_ms (int) – The amount of time in milliseconds to wait before attempting to reconnect to a given host. Default: 50.

  • max_in_flight_requests_per_connection (int) – Requests are pipelined to kafka brokers up to this number of maximum requests per broker connection. Default: 5.

  • api_version (str) – specify which kafka API version to use. If set to ‘auto’, will attempt to infer the broker version by probing various APIs. Default: auto

Note

Configuration parameters are described in more detail at https://kafka.apache.org/090/configuration.html#producerconfigs
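A sketch of a durability-leaning KafkaProducer configuration using the keyword arguments above; the broker address is a placeholder:

```python
# Sketch only: assumes a broker reachable at localhost:9092.
producer_config = {
    "bootstrap_servers": "localhost:9092",
    "value_serializer": lambda v: v.encode("utf-8"),  # str -> bytes
    "acks": "all",   # wait for all in-sync replicas (slowest, most durable)
    "retries": 3,    # retry transient errors; may reorder or duplicate
    "linger_ms": 5,  # trade up to 5 ms of latency for larger batches
}

def make_producer():
    # Imported lazily so the config above can be inspected without a broker.
    from kafka import KafkaProducer
    return KafkaProducer(**producer_config)
```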

close(timeout=None)

Close this producer.

flush(timeout=None)

Invoking this method makes all buffered records immediately available to send (even if linger_ms is greater than 0) and blocks on the completion of the requests associated with these records. The post-condition of flush() is that any previously sent record will have completed (e.g. Future.is_done() == True). A request is considered completed when either it is successfully acknowledged according to the ‘acks’ configuration for the producer, or it results in an error.

Other threads can continue sending messages while one thread is blocked waiting for a flush call to complete; however, no guarantee is made about the completion of messages sent after the flush call begins.

partitions_for(topic)

Returns set of all known partitions for the topic.

send(topic, value=None, key=None, partition=None)

Publish a message to a topic.

Parameters:
  • topic (str) – topic where the message will be published
  • value (optional) – message value. Must be type bytes, or be serializable to bytes via configured value_serializer. If value is None, key is required and message acts as a ‘delete’. See kafka compaction documentation for more details: http://kafka.apache.org/documentation.html#compaction (compaction requires kafka >= 0.8.1)
  • partition (int, optional) – optionally specify a partition. If not set, the partition will be selected using the configured ‘partitioner’.
  • key (optional) – a key to associate with the message. Can be used to determine which partition to send the message to. If partition is None (and producer’s partitioner config is left as default), then messages with the same key will be delivered to the same partition (but if key is None, partition is chosen randomly). Must be type bytes, or be serializable to bytes via configured key_serializer.
Returns:

resolves to RecordMetadata

Return type:

FutureRecordMetadata

Raises:

KafkaTimeoutError – if unable to fetch topic metadata, or unable to obtain memory buffer prior to configured max_block_ms
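send() returns a FutureRecordMetadata immediately; callers that need the RecordMetadata (or the error) can block on the future. A sketch:

```python
def send_sync(producer, topic, value, timeout=10):
    """Publish one message and wait for acknowledgement.

    send() is asynchronous; future.get() blocks until the request
    completes, raising the error on failure or timeout."""
    future = producer.send(topic, value=value)
    return future.get(timeout=timeout)
```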

class kafka.KafkaClient(*args, **kwargs)

Bases: kafka.client.SimpleClient

class kafka.BrokerConnection(host, port, **configs)

Bases: object

DEFAULT_CONFIG = {'reconnect_backoff_ms': 50, 'receive_buffer_bytes': None, 'request_timeout_ms': 40000, 'client_id': 'kafka-python-1.0.2', 'max_in_flight_requests_per_connection': 5, 'send_buffer_bytes': None, 'api_version': (0, 8, 2)}
blacked_out()

Return true if we are disconnected from the given node and can’t re-establish a connection yet

can_send_more()

Return True unless the connection already has max_in_flight_requests_per_connection requests in flight.

close(error=None)

Close socket and fail all in-flight-requests.

Parameters:error (Exception, optional) – pending in-flight-requests will be failed with this exception. Default: kafka.common.ConnectionError.
connect()

Attempt to connect and return ConnectionState

connected()

Return True iff socket is connected.

recv(timeout=0)

Non-blocking network receive.

Return response if available

send(request, expect_response=True)

send request, return Future()

Can block on network if request is larger than send_buffer_bytes

class kafka.SimpleClient(hosts, client_id='kafka-python', timeout=120, correlation_id=0)

Bases: object

CLIENT_ID = 'kafka-python'
close()
copy()

Create an inactive copy of the client object, suitable for passing to a separate thread.

Note that the copied connections are not initialized, so reinit() must be called on the returned copy.

ensure_topic_exists(topic, timeout=30)
get_partition_ids_for_topic(topic)
has_metadata_for_topic(topic)
load_metadata_for_topics(*topics, **kwargs)

Fetch broker and topic-partition metadata from the server.

Updates internal data: broker list, topic/partition list, and topic/partition -> broker map. This method should be called after receiving any error.

Note: Exceptions will not be raised in a full refresh (i.e. no topic list). In this case, error codes will be logged as errors. Partition-level errors will also not be raised here (a single partition w/o a leader, for example).

Parameters:
  • *topics (optional) – If a list of topics is provided, the metadata refresh will be limited to the specified topics only.
  • ignore_leadernotavailable (bool) – suppress LeaderNotAvailableError so that metadata is loaded correctly during auto-create. Default: False.
Raises:
  • UnknownTopicOrPartitionError – Raised for topics that do not exist, unless the broker is configured to auto-create topics.
  • LeaderNotAvailableError – Raised for topics that do not exist yet, when the broker is configured to auto-create topics. Retry after a short backoff (topics/partitions are initializing).
reinit()
reset_all_metadata()
reset_topic_metadata(*topics)
send_consumer_metadata_request(payloads=[], fail_on_error=True, callback=None)
send_fetch_request(payloads=[], fail_on_error=True, callback=None, max_wait_time=100, min_bytes=4096)

Encode and send a FetchRequest

Payloads are grouped by topic and partition so they can be pipelined to the same brokers.

send_metadata_request(payloads=[], fail_on_error=True, callback=None)
send_offset_commit_request(group, payloads=[], fail_on_error=True, callback=None)
send_offset_fetch_request(group, payloads=[], fail_on_error=True, callback=None)
send_offset_fetch_request_kafka(group, payloads=[], fail_on_error=True, callback=None)
send_offset_request(payloads=[], fail_on_error=True, callback=None)
send_produce_request(payloads=[], acks=1, timeout=1000, fail_on_error=True, callback=None)

Encode and send some ProduceRequests

ProduceRequests will be grouped by (topic, partition) and then sent to a specific broker. Output is a list of responses in the same order as the list of payloads specified

Parameters:
  • payloads (list of ProduceRequest) – produce requests to send to kafka. ProduceRequest payloads must not contain duplicates for any topic-partition.
  • acks (int, optional) – how many acks the servers should receive from replica brokers before responding to the request. If it is 0, the server will not send any response. If it is 1, the server will wait until the data is written to the local log before sending a response. If it is -1, the server will wait until the message is committed by all in-sync replicas before sending a response. For any value > 1, the server will wait for this number of acks to occur (but the server will never wait for more acknowledgements than there are in-sync replicas). defaults to 1.
  • timeout (int, optional) – maximum time in milliseconds the server can await the receipt of the number of acks, defaults to 1000.
  • fail_on_error (bool, optional) – raise exceptions on connection and server response errors, defaults to True.
  • callback (function, optional) – instead of returning the ProduceResponse, first pass it through this function, defaults to None.
Returns:

list of ProduceResponses, or callback results if supplied, in the order of input payloads
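As a rough illustration of the grouping step described above, the sketch below buckets payloads by leader broker so each broker receives one request covering all of its partitions. The leader map and payload tuples are invented stand-ins for the client's cached topic metadata and real ProduceRequest payloads:

```python
from collections import defaultdict

# Hypothetical leader map: (topic, partition) -> broker id. In the real
# client this comes from the cached topic metadata.
leaders = {("events", 0): 101, ("events", 1): 102, ("logs", 0): 101}

# One payload per topic-partition, as required (no duplicates).
payloads = [("events", 0, [b"a"]), ("events", 1, [b"b"]), ("logs", 0, [b"c"])]

# Bucket payloads by leader broker so requests for partitions led by the
# same broker can be pipelined into a single request.
by_broker = defaultdict(list)
for topic, partition, messages in payloads:
    by_broker[leaders[(topic, partition)]].append((topic, partition))

print(dict(by_broker))
```

Because responses are returned in payload order, callers can zip the input payloads with the result list regardless of how the requests were bucketed.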

topics
class kafka.SimpleProducer(*args, **kwargs)

Bases: kafka.producer.base.Producer

A simple, round-robin producer.

See Producer class for Base Arguments

Additional Arguments:
random_start (bool, optional): randomize the initial partition to which
the first message block is published; if False, the first message block is always published to partition 0 before cycling through each partition. Defaults to True.
send_messages(topic, *msg)
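The round-robin behaviour, including the random_start option, can be sketched in a few lines. The helper below is a hypothetical stand-in for what SimpleProducer does internally, not part of the kafka package:

```python
import itertools
import random

def partition_cycle(partitions, random_start=True):
    """Yield partition ids round-robin, optionally from a random start.

    With random_start=True the first message block goes to a randomly
    chosen partition; with False it always starts at the first
    partition before cycling through the rest.
    """
    start = random.randrange(len(partitions)) if random_start else 0
    rotated = partitions[start:] + partitions[:start]
    return itertools.cycle(rotated)

cycle = partition_cycle([0, 1, 2], random_start=False)
print([next(cycle) for _ in range(5)])  # [0, 1, 2, 0, 1]
```

Randomizing the start point matters when many short-lived producers are created: without it, every producer would hammer partition 0 first.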
class kafka.KeyedProducer(*args, **kwargs)

Bases: kafka.producer.base.Producer

A producer which distributes messages to partitions based on the key

See Producer class for Arguments

Additional Arguments:
partitioner: A partitioner class that will be used to get the partition
to send the message to. Must be derived from Partitioner. Defaults to HashedPartitioner.
send(topic, key, msg)
send_messages(topic, key, *msg)
class kafka.RoundRobinPartitioner(partitions)

Bases: kafka.partitioner.base.Partitioner

Implements a partitioner that distributes data across partitions in a round-robin fashion

partition(key, partitions=None)
kafka.HashedPartitioner

alias of LegacyPartitioner
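The idea behind hash-based partitioning can be sketched as follows. This is a simplified stand-in, not the LegacyPartitioner implementation itself; note that Python's built-in hash() of bytes/str varies between interpreter runs, so hash-based routing is only stable within a single process:

```python
def hashed_partition(key, partitions):
    # Hash-based routing: the same key always maps to the same
    # partition (within one interpreter run), so all messages for a
    # given key preserve their relative order on one partition.
    return partitions[hash(key) % len(partitions)]

parts = [0, 1, 2, 3]
p = hashed_partition(b"user-42", parts)
assert hashed_partition(b"user-42", parts) == p  # stable within a run
```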

kafka.create_message(payload, key=None)

Construct a Message

Parameters:
  • payload – bytes, the payload to send to Kafka
  • key – bytes, a key used for partition routing (optional)
kafka.create_gzip_message(payloads, key=None, compresslevel=None)

Construct a Gzipped Message containing multiple Messages

The given payloads will be encoded, compressed, and sent as a single atomic message to Kafka.

Parameters:
  • payloads – list(bytes), a list of payloads to be sent to Kafka
  • key – bytes, a key used for partition routing (optional)
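The compression idea can be illustrated with the stdlib gzip module. The newline framing below is a hypothetical simplification; the real create_gzip_message serializes the payloads using Kafka's message-set wire format before compressing:

```python
import gzip

payloads = [b"first", b"second", b"third"]

# The payloads are serialized together and gzip-compressed, so the
# broker stores and transfers them as a single atomic message.
blob = gzip.compress(b"\n".join(payloads))  # hypothetical framing

# A consumer-side decode reverses the steps.
restored = gzip.decompress(blob).split(b"\n")
print(restored)
```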
kafka.create_snappy_message(payloads, key=None)

Construct a Snappy Message containing multiple Messages

The given payloads will be encoded, compressed, and sent as a single atomic message to Kafka.

Parameters:
  • payloads – list(bytes), a list of payloads to be sent to Kafka
  • key – bytes, a key used for partition routing (optional)
class kafka.SimpleConsumer(client, group, topic, auto_commit=True, partitions=None, auto_commit_every_n=100, auto_commit_every_t=5000, fetch_size_bytes=4096, buffer_size=4096, max_buffer_size=32768, iter_timeout=None, auto_offset_reset='largest')

Bases: kafka.consumer.base.Consumer

A simple consumer implementation that consumes all/specified partitions for a topic

Parameters:
  • client – a connected SimpleClient
  • group – a name for this consumer, used for offset storage; must be unique. If you are connecting to a server that does not support offset commit/fetch (any version prior to 0.8.1.1), you must set this to None
  • topic – the topic to consume
Keyword Arguments:
 
  • partitions – An optional list of partitions to consume the data from
  • auto_commit – default True. Whether or not to auto commit the offsets
  • auto_commit_every_n – default 100. How many messages to consume before a commit
  • auto_commit_every_t – default 5000. How much time (in milliseconds) to wait before commit
  • fetch_size_bytes – number of bytes to request in a FetchRequest
  • buffer_size – default 4K. Initial number of bytes to tell kafka we have available. This will double as needed.
  • max_buffer_size – default 32K. Max number of bytes to tell kafka we have available. None means no limit.
  • iter_timeout – default None. How much time (in seconds) to wait for a message in the iterator before exiting. None means no timeout, so it will wait forever.
  • auto_offset_reset – default largest. How to reset partition offsets upon OffsetOutOfRangeError. Valid values are largest and smallest. Any other value leaves the offsets unchanged and re-raises OffsetOutOfRangeError.

Auto commit details: If both auto_commit_every_n and auto_commit_every_t are set, each resets the other when it is triggered. These triggers simply call the commit method on this class. A manual call to commit will also reset these triggers.
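The interaction between the two auto-commit triggers can be sketched as follows. The class below is a hypothetical illustration of the trigger logic, not the consumer's actual implementation:

```python
import time

class AutoCommitter:
    """Sketch of the auto-commit triggers: commit after every_n messages
    or after every_t milliseconds, whichever fires first. Any commit,
    automatic or manual, resets both triggers."""

    def __init__(self, every_n=100, every_t_ms=5000):
        self.every_n = every_n
        self.every_t = every_t_ms / 1000.0
        self.count = 0
        self.last_commit = time.monotonic()

    def commit(self):
        # Both triggers reset on any commit, manual or automatic.
        self.count = 0
        self.last_commit = time.monotonic()

    def record_message(self):
        self.count += 1
        if (self.count >= self.every_n
                or time.monotonic() - self.last_commit >= self.every_t):
            self.commit()
            return True   # a commit happened
        return False

ac = AutoCommitter(every_n=3, every_t_ms=60000)
results = [ac.record_message() for _ in range(4)]
print(results)  # the third message triggers a commit and resets the count
```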

get_message(block=True, timeout=0.1, get_partition_info=None)
get_messages(count=1, block=True, timeout=0.1)

Fetch the specified number of messages

Keyword Arguments:
 
  • count – Indicates the maximum number of messages to be fetched
  • block – If True, the API will block until all messages are fetched. If block is a positive integer, the API will block until that many messages are fetched.
  • timeout – When blocking is requested, the function will block for the specified time (in seconds) until count messages are fetched. If None, it will block forever.
reset_partition_offset(partition)

Update offsets using auto_offset_reset policy (smallest|largest)

Parameters: partition (int) – the partition for which offsets should be updated

Returns: Updated offset on success, None on failure

seek(offset, whence=None, partition=None)

Alter the current offset in the consumer, similar to fseek

Parameters:
  • offset – how much to modify the offset
  • whence

    where to modify it from, default is None

    • None is an absolute offset
    • 0 is relative to the earliest available offset (head)
    • 1 is relative to the current offset
    • 2 is relative to the latest known offset (tail)
  • partition – which partition to modify, default is None. If partition is None, all partitions are modified.
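The whence semantics can be summed up in a small helper. This is a hypothetical sketch of how a seek target could be resolved for a single partition; head, current, and tail stand in for the earliest available, current, and latest known offsets:

```python
def resolve_offset(offset, whence, head, current, tail):
    """Map seek()'s (offset, whence) pair to an absolute offset."""
    if whence is None:        # offset is already absolute
        return offset
    if whence == 0:           # relative to earliest available (head)
        return head + offset
    if whence == 1:           # relative to current position
        return current + offset
    if whence == 2:           # relative to latest known (tail)
        return tail + offset
    raise ValueError("whence must be None, 0, 1 or 2")

print(resolve_offset(-5, 2, head=100, current=150, tail=200))  # 195
```

For example, seek(-5, 2) rewinds to five messages before the tail, while seek(0, 0) rewinds to the earliest available message.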
class kafka.MultiProcessConsumer(client, group, topic, partitions=None, auto_commit=True, auto_commit_every_n=100, auto_commit_every_t=5000, num_procs=1, partitions_per_proc=0, **simple_consumer_options)

Bases: kafka.consumer.base.Consumer

A consumer implementation that consumes partitions for a topic in parallel using multiple processes

Parameters:
  • client – a connected SimpleClient
  • group – a name for this consumer, used for offset storage; must be unique. If you are connecting to a server that does not support offset commit/fetch (any version prior to 0.8.1.1), you must set this to None
  • topic – the topic to consume
Keyword Arguments:
 
  • partitions – An optional list of partitions to consume the data from
  • auto_commit – default True. Whether or not to auto commit the offsets
  • auto_commit_every_n – default 100. How many messages to consume before a commit
  • auto_commit_every_t – default 5000. How much time (in milliseconds) to wait before commit
  • num_procs – Number of processes to start for consuming messages. The available partitions will be divided among these processes
  • partitions_per_proc – Number of partitions to be allocated per process (overrides num_procs)
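How partitions might be divided among processes can be sketched as below. The helper is a hypothetical illustration of the allocation rules, not the class's actual internals:

```python
def split_partitions(partitions, num_procs=1, partitions_per_proc=0):
    """Deal partitions out across consumer processes.

    If partitions_per_proc is set, it overrides num_procs by deriving
    the process count from the partition count; otherwise partitions
    are dealt round-robin across num_procs processes.
    """
    if partitions_per_proc:
        # Ceiling division: enough processes for the requested load.
        num_procs = max(1, -(-len(partitions) // partitions_per_proc))
    chunks = [[] for _ in range(num_procs)]
    for i, p in enumerate(partitions):
        chunks[i % num_procs].append(p)
    return chunks

print(split_partitions([0, 1, 2, 3, 4], num_procs=2))  # [[0, 2, 4], [1, 3]]
```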

Auto commit details: If both auto_commit_every_n and auto_commit_every_t are set, they will reset one another when one is triggered. These triggers simply call the commit method on this class. A manual call to commit will also reset these triggers

__iter__()

Iterator to consume the messages available on this consumer

get_messages(count=1, block=True, timeout=10)

Fetch the specified number of messages

Keyword Arguments:
 
  • count – Indicates the maximum number of messages to be fetched
  • block – If True, the API will block until all messages are fetched. If block is a positive integer, the API will block until that many messages are fetched.
  • timeout – When blocking is requested, the function will block for the specified time (in seconds) until count messages are fetched. If None, it will block forever.
stop()