Kinesis I/O: Quickstart
The Samza Kinesis connector allows you to interact with Amazon Kinesis Data Streams,
Amazon’s data streaming service. The
hello-samza project includes an example of processing Kinesis streams using Samza; the complete source code and configs are available in that project.
You can build and run this example using this tutorial.
Like a Kafka topic, a Kinesis stream can have multiple shards with producers and consumers. Each message consumed from the stream is an instance of a Kinesis Record. Samza’s KinesisSystemConsumer wraps the Record into a KinesisIncomingMessageEnvelope.
Consuming from Kinesis
Here is the required configuration for consuming messages from Kinesis:
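A minimal sketch of that configuration is shown below. The system name (kinesis), stream name (my-stream), region, and key values are placeholders, and the property names assume the connector's systems.&lt;system&gt;.streams.&lt;stream&gt;.aws.* convention; verify them against your Samza version.

```properties
# Register Kinesis as a Samza system via its system factory
systems.kinesis.samza.factory=org.apache.samza.system.kinesis.KinesisSystemFactory

# Consume from the stream (system and stream names are placeholders)
task.inputs=kinesis.my-stream

# AWS region and credentials for the stream
systems.kinesis.streams.my-stream.aws.region=us-east-1
systems.kinesis.streams.my-stream.aws.accessKey=YOUR-ACCESS-KEY
sensitive.systems.kinesis.streams.my-stream.aws.secretKey=YOUR-SECRET-KEY
```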
The Kinesis system consumer does not rely on Samza’s coordination mechanism. Instead, it uses the Kinesis Client Library (KCL) to coordinate and distribute the available shards among the available instances. Hence, you should set your grouper configuration to AllSspToSingleTaskGrouperFactory.
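Concretely, that means pinning the SSP grouper in your job config, for example:

```properties
# Required for Kinesis: the KCL handles shard distribution,
# so Samza assigns all SSPs to a single task
job.systemstreampartition.grouper.factory=org.apache.samza.container.grouper.stream.AllSspToSingleTaskGrouperFactory
```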
Each Kinesis stream in a given AWS region can be accessed by providing an access key. An access key consists of two parts: an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and a secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY), which together let you send programmatic requests to AWS.
Kinesis Client Library Configs
As an example, the configuration below is equivalent to invoking
kclClient#withTableName(myTable) on the KCL instance.
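Assuming the connector exposes KCL settings under systems.&lt;system&gt;.streams.&lt;stream&gt;.aws.kcl.* (with the parameter name being the KCL setter minus its "with" prefix), the sketch might look like:

```properties
# Hypothetical system/stream names; maps to
# KinesisClientLibConfiguration#withTableName(myTable)
systems.kinesis.streams.my-stream.aws.kcl.TableName=myTable
```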
AWS Client Configs
Through the KinesisSystemDescriptor, you can also set the proxy host and proxy port to be used by the Kinesis Client:
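A sketch of that descriptor call, assuming withProxyHost/withProxyPort setters on KinesisSystemDescriptor (verify the names against your Samza version):

```java
import org.apache.samza.system.kinesis.descriptors.KinesisSystemDescriptor;

// "kinesis" is a placeholder system name; the proxy values are examples
KinesisSystemDescriptor sd = new KinesisSystemDescriptor("kinesis")
    .withProxyHost("your-proxy-host")
    .withProxyPort(1234);
```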
Unlike other connectors where Samza stores and manages checkpointed offsets, Kinesis checkpoints are stored in a DynamoDB table. These checkpoints are stored and managed by the KCL library internally. You can reset the checkpoints by configuring a different name for the DynamoDB table.
When you reset checkpoints, you can configure your job to start consuming from either the earliest or latest offset in the stream.
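Using Samza's standard offset-default setting, that choice might be expressed as below; the system name is a placeholder, and oldest/upcoming are Samza's values for earliest/latest:

```properties
# Start from the earliest available offset after a checkpoint reset ...
systems.kinesis.samza.offset.default=oldest
# ... or from the latest:
# systems.kinesis.samza.offset.default=upcoming
```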
Alternatively, if you want to start from a particular offset in the Kinesis stream, you can log in to the AWS console and edit the offsets in your DynamoDB table. By default, the table name has the following format: “<job name>-<job id>-<kinesis stream>”.
The following limitations apply to Samza jobs consuming from Kinesis streams:
- Stateful processing (e.g., windows or joins) is not supported on Kinesis streams. However, you can achieve it by chaining two Samza jobs: the first reads from Kinesis and writes to Kafka, and the second performs the stateful processing on the data from Kafka.
- Kinesis streams cannot be configured as bootstrap or broadcast streams.
- Kinesis streams must be used only with the AllSspToSingleTaskGrouperFactory, since the Kinesis consumer manages partitions by itself. No other grouper is currently supported.
- A Samza job that consumes from Kinesis cannot consume from any other input source. However, you can send your results to any destination (e.g., Kafka, EventHubs) and have another Samza job consume them.
Producing to Kinesis
The KinesisSystemProducer for Samza is not yet implemented.