Kafka Push

Kafka push provides a scalable and robust way to receive real-time IoT data from Globe Tracker devices. When events occur, our system publishes messages to your specified Kafka broker and topic(s). We support both standard Kafka servers and managed providers such as Confluent Cloud.

To receive Kafka push notifications, you need to provide us with:

  • Kafka broker address (hostname/IP and port)
  • Topic(s) to publish to
  • Authentication credentials (if required by your broker or provider)

In addition, your broker must allow our producer to publish to the relevant topic(s) and must accept JSON payloads.

All object types can be sent to a single topic, or you can specify a distinct topic per object type.

Example topic structure:

  • globe-tracker.events (all events)
  • globe-tracker.events.location (location updates)
  • globe-tracker.events.reeferdata (Reefer telematics)

If you require specific partitioning, let us know your partitioning strategy (e.g., by device ID or event type). By default, events are distributed evenly across the available partitions. Alternatively, any field in the payload can be used as the message key, which guarantees that all events with the same key land on the same partition and are therefore consumed in order.
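As an illustration, a minimal consumer subscribing to the example topics above might look like this (the kafkajs library, broker address, and group ID are our assumptions, not requirements):

import { Kafka } from "kafkajs";

// Placeholder broker address and group ID -- replace with your own.
const kafka = new Kafka({
  clientId: "globe-tracker-consumer",
  brokers: ["kafka.example.com:9092"],
});

const consumer = kafka.consumer({ groupId: "globe-tracker-ingest" });

async function run(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({
    topics: ["globe-tracker.events.location", "globe-tracker.events.reeferdata"],
  });

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      // If a payload field was chosen as the key, all events with the same
      // key arrive on the same partition, in order.
      console.log(topic, partition, message.key?.toString(), message.value?.toString());
    },
  });
}

run().catch(console.error);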

We support several authentication methods for Kafka brokers:

  • SASL/PLAIN (username and password)
  • SASL/SCRAM
  • SSL/TLS client certificates
  • Anonymous (if your broker allows it)

For Confluent Cloud, provide your API key and secret.

We recommend using TLS/SSL encryption for secure Kafka communication. Provide your broker’s CA certificate if required.
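For illustration, a TLS plus SASL/SCRAM connection in kafkajs might be configured like this (paths and credentials are placeholders; for Confluent Cloud, use the "plain" mechanism with your API key as username and the secret as password):

import fs from "node:fs";
import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "globe-tracker-consumer",
  brokers: ["kafka.example.com:9093"],
  // Supply your broker's CA certificate if it is not publicly trusted.
  ssl: { ca: [fs.readFileSync("/path/to/ca.pem", "utf-8")] },
  sasl: {
    mechanism: "scram-sha-512",
    username: "your-username",
    password: "your-password",
  },
});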

Push services send objects from object-sync services. The following object types are defined in the Object Sync OpenAPI Specification:

Asset

Represents an asset in the system.

Required fields:

  • id (string): Unique identifier for the asset (global_id)
  • name (string): Asset name/serial number
  • ownerId (string): Owner/customer global ID
  • whenCreated (string, date-time): ISO 8601 timestamp when the asset was created
  • whenUpdated (string, date-time): ISO 8601 timestamp when the asset was last updated

Optional fields:

  • boxId (string, nullable): Box global ID
  • disabled (boolean): Whether the asset is disabled
  • note (string, nullable): Optional note about the asset
  • tags (array of strings, nullable): Array of customer tags associated with the asset

Location

Represents a GPS location record.

Required fields:

  • id (integer, int64): Unique identifier for the location record
  • assetId (string): Asset global ID
  • ownerId (string): Owner/customer global ID
  • whenCreated (string, date-time): ISO 8601 timestamp when the location was recorded (GPS time)
  • latitude (number, double): GPS latitude coordinate
  • longitude (number, double): GPS longitude coordinate

Optional fields:

  • altitude (number, double, nullable): GPS altitude in meters
  • speed (number, double, nullable): Speed in the unit configured for the asset
  • heading (number, double, nullable): Heading/bearing in degrees (0-360)
  • zoneId (string, nullable): Zone identifier where the location was recorded

SensorData

Represents sensor data from an asset.

Required fields:

  • id (integer, int64): Unique identifier for the sensor data record
  • assetId (string): Asset global ID
  • ownerId (string): Owner/customer global ID
  • sensorId (string): Sensor identifier
  • sensorType (string): Type of sensor
  • whenCreated (string, date-time): ISO 8601 timestamp when the sensor data was recorded
  • value (string): Sensor value (can be string representation of number or text)
  • type (string): Data type of the sensor value

Optional fields:

  • when_received (integer, int64, nullable): Unix timestamp in milliseconds when the data was received

ReeferData

Represents reefer (refrigerated container) data.

Required fields:

  • id (integer, int64): Unique identifier for the reefer data record
  • assetId (string): Asset global ID
  • ownerId (string): Owner/customer global ID
  • whenCreated (string, date-time): ISO 8601 timestamp when the reefer data was recorded
  • reeferClass (string): Class/type of reefer unit
  • data (object): JSON object containing reefer-specific data payload (additionalProperties: true)

GensetData

Represents generator set (genset) data.

Required fields:

  • id (integer, int64): Unique identifier for the genset data record
  • assetId (string): Asset global ID
  • ownerId (string): Owner/customer global ID
  • whenCreated (string, date-time): ISO 8601 timestamp when the genset data was recorded
  • gensetType (string): Type/model of genset
  • data (object): JSON object containing genset-specific data payload (additionalProperties: true)

Optional fields:

  • boxId (string, nullable): Box global ID

GtoemData

Represents GTOEM (Globe Tracker OEM) data.

Required fields:

  • id (integer, int64): Unique identifier for the GTOEM data record
  • assetId (string): Asset global ID
  • ownerId (string): Owner/customer global ID
  • whenCreated (string, date-time): ISO 8601 timestamp when the GTOEM data was created
  • payloadType (integer, int32): Type identifier for the payload
  • data (object): JSON object containing GTOEM-specific data payload (additionalProperties: true)

Optional fields:

  • boxId (string, nullable): Box global ID
  • when_received (integer, int64, nullable): Unix timestamp in milliseconds when the data was received
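As an illustration, the Location and ReeferData objects above could be written as TypeScript interfaces (a sketch derived from the field lists; the interface names are ours, and the OpenAPI schema below is authoritative):

// Sketch of the Location object, derived from the field list above.
interface LocationEvent {
  id: number;                // int64
  assetId: string;           // asset global ID
  ownerId: string;           // owner/customer global ID
  whenCreated: string;       // ISO 8601 timestamp (GPS time)
  latitude: number;
  longitude: number;
  altitude?: number | null;  // meters
  speed?: number | null;     // unit configured for the asset
  heading?: number | null;   // degrees, 0-360
  zoneId?: string | null;
}

// Sketch of the ReeferData object.
interface ReeferDataEvent {
  id: number;                     // int64
  assetId: string;
  ownerId: string;
  whenCreated: string;            // ISO 8601 timestamp
  reeferClass: string;            // class/type of reefer unit
  data: Record<string, unknown>;  // reefer-specific payload (additionalProperties: true)
}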

For complete schema definitions, you can download the full OpenAPI schema file for use in API documentation tools, code generation, or validation.

Push services may add additional fields to the object-sync service objects in some cases. These fields are added by the push service infrastructure and are not part of the base object schema.

Kafka push typically sends objects as-is from the object-sync service without additional fields. However, depending on configuration, additional metadata fields may be added. Contact your Globe Tracker representative for details about any additional fields that may be configured for your Kafka integration.

All Kafka messages use JSON payloads. We support only single-event delivery mode for Kafka:

Each event is published as a single JSON object. The payload contains the object from the object-sync service directly:

Example: Location object

{
  "id": 12345,
  "assetId": "550e8400-e29b-41d4-a716-446655440000",
  "ownerId": "660e8400-e29b-41d4-a716-446655440001",
  "whenCreated": "2024-03-20T14:30:00Z",
  "latitude": 55.676098,
  "longitude": 12.568337,
  "altitude": 10.5,
  "speed": 0,
  "heading": 90
}

Example: ReeferData object

{
  "id": 12345,
  "assetId": "550e8400-e29b-41d4-a716-446655440000",
  "ownerId": "660e8400-e29b-41d4-a716-446655440001",
  "whenCreated": "2024-03-20T14:30:00Z",
  "reeferClass": "CARRIER",
  "data": {
    "temperature": 4.5,
    "humidity": 60,
    "setpoint": 4.0
  }
}
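A minimal sketch of how a consumer might parse and route these single-event payloads, assuming the per-object-type topics from the example topic structure above:

// Hypothetical router for single-event JSON payloads, keyed by topic.
function handleMessage(topic: string, raw: Buffer): void {
  const event = JSON.parse(raw.toString("utf-8"));

  switch (topic) {
    case "globe-tracker.events.location":
      console.log(`location for asset ${event.assetId}:`, event.latitude, event.longitude);
      break;
    case "globe-tracker.events.reeferdata":
      console.log(`reefer (${event.reeferClass}) data:`, event.data);
      break;
    default:
      console.warn(`unhandled topic: ${topic}`);
  }
}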

Kafka acknowledgements depend on the broker configuration and the required acks setting:

  • acks=0: No acknowledgement required
  • acks=1: Wait for leader acknowledgement
  • acks=all: Wait for all in-sync replicas to acknowledge

We track delivery status based on Kafka protocol acknowledgements. Messages will be retried until acknowledged according to the configured acks level.
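For illustration, the acks levels map onto a kafkajs producer send like this (this sketches our side of the integration; the topic name and key are placeholders):

import { Kafka } from "kafkajs";

const producer = new Kafka({ brokers: ["kafka.example.com:9092"] }).producer();

async function publish(event: object, key: string): Promise<void> {
  await producer.connect();
  await producer.send({
    topic: "globe-tracker.events.location",
    // kafkajs encodes acks as: 0 = none, 1 = leader, -1 = all in-sync replicas.
    acks: -1,
    messages: [{ key, value: JSON.stringify(event) }],
  });
}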

  • If the broker is unavailable, we will retry connection and message delivery using exponential backoff.
  • Messages are retried until acknowledged by the broker.
  • If a message cannot be delivered after multiple attempts, it is logged for monitoring and support follow-up.
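As an illustrative sketch of an exponential backoff schedule (the actual base delay, cap, and attempt limit on our side are configuration-dependent):

// Delay before retry attempt n (0-based), doubling each time up to a cap.
function backoffMs(attempt: number, baseMs = 500, maxMs = 60_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// attempts 0..5 -> 500, 1000, 2000, 4000, 8000, 16000 ms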

We store detailed metadata for each Kafka delivery attempt, including:

  • Timestamp of the attempt
  • Topic name
  • Partition
  • Delivery status (success/failure)
  • Error details (if any)

You can access Kafka delivery history and logs in several ways:

To extract Kafka push logs for the last day:

flowcore stream https://flowcore.io/globe-tracker-aps/example-data/kafka-push-results.0/created.0.stream \
--no-live \
-s 1d \
--payload \
--json

You can also use the data-pump NPM package to retrieve results.

We provide a dashboard where you can:

  • View Kafka delivery history
  • Check delivery status and error logs
  • Inspect payload contents
  • Track retry attempts and final delivery status

Contact your Globe Tracker representative to get access to your dashboard.

Best Practices

  1. Provision Topics: Ensure your topics are created and configured before integration
  2. Use Appropriate Acks: Choose an acks level that matches your reliability needs
  3. Idempotency: Implement idempotent processing to handle potential duplicate deliveries (see the sketch after this list)
  4. Monitor Connection: Reconnect promptly if the connection to the broker is lost
  5. Secure Your Broker: Use TLS/SSL and strong authentication
  6. Partitioning: Use partitioning strategies that match your consumption patterns
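A minimal sketch of idempotent processing (item 3), deduplicating on the record id field; a real consumer would persist processed IDs rather than keep them in memory:

// Track processed record IDs (unique per object type) so that
// redelivered messages are ignored.
const processed = new Set<number>();

function processOnce(event: { id: number }, handle: (e: { id: number }) => void): void {
  if (processed.has(event.id)) {
    return; // duplicate delivery -- already handled
  }
  processed.add(event.id);
  handle(event);
}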