Kafka Push
Kafka push provides a scalable and robust way to receive real-time IoT data from Globe Tracker devices. When events occur, our system publishes messages to your specified Kafka broker and topic(s). We support both standard Kafka servers and managed providers such as Confluent Cloud.
Setup Requirements
To receive Kafka push notifications, you need to provide us with:
- Kafka broker address (hostname/IP and port)
- Topic(s) to publish to
- Authentication credentials (if required by your broker or provider)
- Broker ACLs or permissions that allow our producer to publish to the relevant topic(s)
- Consumers that can handle JSON payloads
Kafka Configuration
Broker and Topic Configuration
You must provide the Kafka broker details and the topic(s) to which events should be published. All object types can be sent to a single topic, or you can specify a distinct topic per object type.
Example topic structure:
- globe-tracker.events (all events)
- globe-tracker.events.location (location updates)
- globe-tracker.events.reeferdata (reefer telematics)
Partitioning
If you require specific partitioning, let us know your partitioning strategy (e.g., by device ID or event type). By default, events are distributed evenly across available partitions. You can also choose any value in the payload as the message key, as illustrated below.
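To make the keying behaviour concrete, here is an illustrative producer-side sketch using kafkajs. Our actual producer is internal, and the broker address and topic below are placeholders; the point is that Kafka's default partitioner hashes the key, so all events sharing a key (here, the asset ID) land on the same partition, preserving per-asset ordering:

```ts
import { Kafka } from "kafkajs";

// Placeholder broker and topic; kafkajs chosen only for illustration.
const kafka = new Kafka({ clientId: "keying-example", brokers: ["kafka.example.com:9092"] });
const producer = kafka.producer();

async function publishKeyed(event: { assetId: string; latitude: number; longitude: number }) {
  await producer.connect();
  await producer.send({
    topic: "globe-tracker.events.location",
    messages: [
      {
        key: event.assetId,           // same key always hashes to the same partition
        value: JSON.stringify(event), // JSON payload, as described above
      },
    ],
  });
  await producer.disconnect();
}
```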
Authentication
We support several authentication methods for Kafka brokers:
- SASL/PLAIN (username and password)
- SASL/SCRAM
- SSL/TLS client certificates
- Anonymous (if your broker allows it)
For Confluent Cloud, provide your API key and secret.
TLS/SSL Encryption
We recommend using TLS/SSL encryption for secure Kafka communication. Provide your broker’s CA certificate if required.
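For reference, this is roughly what a matching client configuration looks like in kafkajs. It can be handy for verifying that the credentials and CA certificate you hand over actually work against your broker; the hostname, username, password, and file path are all placeholders:

```ts
import * as fs from "fs";
import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "gt-connectivity-check",
  brokers: ["kafka.example.com:9093"],
  ssl: {
    // Your broker's CA certificate, if it is not signed by a public CA
    ca: [fs.readFileSync("/path/to/ca.pem", "utf-8")],
  },
  sasl: {
    mechanism: "plain", // or "scram-sha-256" / "scram-sha-512"
    username: "gt-producer",
    password: "********",
  },
});
```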
Object Types
Push services send objects from object-sync services. The following object types are defined in the Object Sync OpenAPI Specification:
Asset
Represents an asset in the system.
Required fields:
- id (string): Unique identifier for the asset (global_id)
- name (string): Asset name/serial number
- ownerId (string): Owner/customer global ID
- whenCreated (string, date-time): ISO 8601 timestamp when the asset was created
- whenUpdated (string, date-time): ISO 8601 timestamp when the asset was last updated
Optional fields:
- boxId (string, nullable): Box global ID
- disabled (boolean): Whether the asset is disabled
- note (string, nullable): Optional note about the asset
- tags (array of strings, nullable): Array of customer tags associated with the asset
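As a quick reference, the Asset shape can be expressed as a TypeScript interface. This is a sketch derived from the field list above, not generated from the official OpenAPI schema:

```ts
interface Asset {
  id: string;           // asset global_id
  name: string;         // asset name / serial number
  ownerId: string;      // owner/customer global ID
  whenCreated: string;  // ISO 8601 date-time
  whenUpdated: string;  // ISO 8601 date-time
  boxId?: string | null;
  disabled?: boolean;
  note?: string | null;
  tags?: string[] | null;
}
```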
Location
Represents a GPS location record.
Required fields:
- id (integer, int64): Unique identifier for the location record
- assetId (string): Asset global ID
- ownerId (string): Owner/customer global ID
- whenCreated (string, date-time): ISO 8601 timestamp when the location was recorded (GPS time)
- latitude (number, double): GPS latitude coordinate
- longitude (number, double): GPS longitude coordinate
Optional fields:
- altitude (number, double, nullable): GPS altitude in meters
- speed (number, double, nullable): Speed in the unit configured for the asset
- heading (number, double, nullable): Heading/bearing in degrees (0-360)
- zoneId (string, nullable): Zone identifier where the location was recorded
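The same shape as a TypeScript interface sketch, derived from the field list above:

```ts
interface Location {
  id: number;                // int64 record ID
  assetId: string;
  ownerId: string;
  whenCreated: string;       // ISO 8601 date-time (GPS time)
  latitude: number;
  longitude: number;
  altitude?: number | null;  // meters
  speed?: number | null;     // unit configured for the asset
  heading?: number | null;   // degrees, 0-360
  zoneId?: string | null;
}
```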
SensorData
Represents sensor data from an asset.
Required fields:
- id (integer, int64): Unique identifier for the sensor data record
- assetId (string): Asset global ID
- ownerId (string): Owner/customer global ID
- sensorId (string): Sensor identifier
- sensorType (string): Type of sensor
- whenCreated (string, date-time): ISO 8601 timestamp when the sensor data was recorded
- value (string): Sensor value (can be a string representation of a number, or text)
- type (string): Data type of the sensor value
Optional fields:
- when_received (integer, int64, nullable): Unix timestamp in milliseconds when the data was received
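As a TypeScript interface sketch (note the snake_case when_received field, kept as-is from the schema):

```ts
interface SensorData {
  id: number;          // int64 record ID
  assetId: string;
  ownerId: string;
  sensorId: string;
  sensorType: string;
  whenCreated: string; // ISO 8601 date-time
  value: string;       // numeric or textual, always serialized as a string
  type: string;        // data type of value
  when_received?: number | null; // Unix timestamp in milliseconds
}
```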
ReeferData
Represents reefer (refrigerated container) data.
Required fields:
- id (integer, int64): Unique identifier for the reefer data record
- assetId (string): Asset global ID
- ownerId (string): Owner/customer global ID
- whenCreated (string, date-time): ISO 8601 timestamp when the reefer data was recorded
- reeferClass (string): Class/type of reefer unit
- data (object): JSON object containing reefer-specific data payload (additionalProperties: true)
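As a TypeScript interface sketch; the open-ended data object maps naturally to Record<string, unknown>:

```ts
interface ReeferData {
  id: number;                    // int64 record ID
  assetId: string;
  ownerId: string;
  whenCreated: string;           // ISO 8601 date-time
  reeferClass: string;           // class/type of reefer unit
  data: Record<string, unknown>; // open-ended payload (additionalProperties: true)
}
```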
GensetData
Represents generator set (genset) data.
Required fields:
- id (integer, int64): Unique identifier for the genset data record
- assetId (string): Asset global ID
- ownerId (string): Owner/customer global ID
- whenCreated (string, date-time): ISO 8601 timestamp when the genset data was recorded
- gensetType (string): Type/model of genset
- data (object): JSON object containing genset-specific data payload (additionalProperties: true)
Optional fields:
- boxId (string, nullable): Box global ID
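As a TypeScript interface sketch:

```ts
interface GensetData {
  id: number;                    // int64 record ID
  assetId: string;
  ownerId: string;
  whenCreated: string;           // ISO 8601 date-time
  gensetType: string;            // type/model of genset
  data: Record<string, unknown>; // open-ended payload (additionalProperties: true)
  boxId?: string | null;
}
```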
GtoemData
Represents GTOEM (Globe Tracker OEM) data.
Required fields:
- id (integer, int64): Unique identifier for the GTOEM data record
- assetId (string): Asset global ID
- ownerId (string): Owner/customer global ID
- whenCreated (string, date-time): ISO 8601 timestamp when the GTOEM data was created
- payloadType (integer, int32): Type identifier for the payload
- data (object): JSON object containing GTOEM-specific data payload (additionalProperties: true)
Optional fields:
- boxId (string, nullable): Box global ID
- when_received (integer, int64, nullable): Unix timestamp in milliseconds when the data was received
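As a TypeScript interface sketch:

```ts
interface GtoemData {
  id: number;                    // int64 record ID
  assetId: string;
  ownerId: string;
  whenCreated: string;           // ISO 8601 date-time
  payloadType: number;           // int32 payload type identifier
  data: Record<string, unknown>; // open-ended payload (additionalProperties: true)
  boxId?: string | null;
  when_received?: number | null; // Unix timestamp in milliseconds
}
```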
For complete schema definitions, you can download the full OpenAPI schema file for use in API documentation tools, code generation, or validation.
Additional Fields
Push services may add additional fields to the object-sync service objects in some cases. These fields are added by the push service infrastructure and are not part of the base object schema.
Kafka Push
Kafka push typically sends objects as-is from the object-sync service without additional fields. However, depending on configuration, additional metadata fields may be added. Contact your Globe Tracker representative for details about any additional fields that may be configured for your Kafka integration.
Message Format
All Kafka messages use JSON payloads. We support only single event delivery mode for Kafka:
Single Event Delivery
Each event is published as a single JSON object. The payload contains the object from the object-sync service directly:
Example: Location object
{ "id": 12345, "assetId": "550e8400-e29b-41d4-a716-446655440000", "ownerId": "660e8400-e29b-41d4-a716-446655440001", "whenCreated": "2024-03-20T14:30:00Z", "latitude": 55.676098, "longitude": 12.568337, "altitude": 10.5, "speed": 0, "heading": 90}Example: ReeferData object
{ "id": 12345, "assetId": "550e8400-e29b-41d4-a716-446655440000", "ownerId": "660e8400-e29b-41d4-a716-446655440001", "whenCreated": "2024-03-20T14:30:00Z", "reeferClass": "CARRIER", "data": { "temperature": 4.5, "humidity": 60, "setpoint": 4.0 }}Acknowledgement and Delivery Semantics
Acknowledgement and Delivery Semantics
Kafka acknowledgements depend on the broker configuration and the required acks setting:
- acks=0: No acknowledgement required
- acks=1: Wait for leader acknowledgement
- acks=all: Wait for all in-sync replicas to acknowledge
We track delivery status based on Kafka protocol acknowledgements. Messages will be retried until acknowledged according to the configured acks level.
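For reference, acks is a producer-side setting; in kafkajs it is passed per send call. This is illustrative only, since the level used for your integration is agreed during onboarding:

```ts
import { Kafka } from "kafkajs";

const producer = new Kafka({ clientId: "acks-example", brokers: ["kafka.example.com:9092"] }).producer();

async function sendWithFullAcks(): Promise<void> {
  await producer.connect();
  await producer.send({
    topic: "globe-tracker.events",
    acks: -1, // 0 = no acknowledgement, 1 = leader only, -1 = all in-sync replicas
    messages: [{ key: "asset-123", value: JSON.stringify({ id: 1 }) }],
  });
}
```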
Error Handling and Retry Logic
- If the broker is unavailable, we will retry the connection and message delivery using exponential backoff (see the sketch after this list).
- Messages are retried until acknowledged by the broker.
- If a message cannot be delivered after multiple attempts, it is logged for monitoring and support follow-up.
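For intuition, exponential backoff doubles the wait between attempts up to a cap, typically with jitter. A generic sketch follows; the parameters are illustrative, and our actual intervals and caps are internal:

```ts
// Generic exponential backoff with jitter; parameters are illustrative.
function backoffDelayMs(attempt: number, baseMs = 1_000, capMs = 60_000): number {
  const exponential = Math.min(capMs, baseMs * 2 ** attempt); // 1s, 2s, 4s, ... capped
  return exponential * (0.5 + Math.random() / 2);             // jitter to avoid thundering herd
}
```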
Monitoring and Delivery History
We store detailed metadata for each Kafka delivery attempt, including:
- Timestamp of the attempt
- Topic name
- Partition
- Delivery status (success/failure)
- Error details (if any)
Accessing Delivery History
You can access Kafka delivery history and logs in several ways:
Using the Flowcore CLI
To extract Kafka push logs for the last day:
```sh
flowcore stream https://flowcore.io/globe-tracker-aps/example-data/kafka-push-results.0/created.0.stream \
  --no-live \
  -s 1d \
  --payload \
  --json
```

Using the data-pump SDK
You can use the data-pump NPM package to retrieve results.
Using the Dashboard
We provide a dashboard where you can:
- View Kafka delivery history
- Check delivery status and error logs
- Inspect payload contents
- Track retry attempts and final delivery status
Contact your Globe Tracker representative to get access to your dashboard.
Best Practices
- Provision Topics: Ensure your topics are created and configured before integration
- Use Appropriate Acks: Choose an acks level that matches your reliability needs
- Idempotency: Implement idempotent processing to handle potential duplicate deliveries (see the sketch after this list)
- Monitor Connection: Reconnect promptly if the connection to the broker is lost
- Secure Your Broker: Use TLS/SSL and strong authentication
- Partitioning: Use partitioning strategies that match your consumption patterns
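On idempotency: because delivery is at-least-once under retries, consumers should treat the record id as a dedup key. A minimal in-memory sketch follows; a real deployment would use a durable store such as a database unique constraint:

```ts
// Skip records whose id has already been processed. The Set is
// illustrative; production code would use durable storage.
const seen = new Set<string>();

function processOnce(event: { id: number | string }, handle: () => void): void {
  const key = String(event.id);
  if (seen.has(key)) return;  // duplicate delivery: already handled
  handle();                   // process the event
  seen.add(key);              // mark as processed only after success
}
```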