Under the hood, the DataHub Cloud Event Source communicates with DataHub Cloud to extract change events in real time.
The state of progress is automatically saved to DataHub Cloud after messages are processed, allowing you to seamlessly pause and restart the consumer, using the provided `name` to uniquely identify the consumer state.
On the initial startup of a new consumer ID, the DataHub Cloud Event Source will automatically begin consuming from the _latest_ events by default. Afterwards, the offsets of processed messages will be continually saved. However, the source can also optionally be configured to "look back" in time
by a certain number of days on initial bootstrap using the `lookback_days` parameter. To reset all previously saved offsets for a consumer,
you can set `reset_offsets` to `True`.
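For example, a minimal pipeline configuration using these options might look like the sketch below. The pipeline `name`, `lookback_days`, and `reset_offsets` fields come from this page; the `datahub-cloud` source type identifier and the surrounding YAML layout are assumptions based on the general DataHub Actions pipeline format.

```yaml
# Sketch of a pipeline config for the DataHub Cloud Event Source (layout assumed).
name: "cloud-events-consumer"   # Uniquely identifies the saved consumer state
source:
  type: "datahub-cloud"         # Assumed type identifier for this event source
  config:
    lookback_days: 7            # On initial bootstrap, replay the last 7 days of events
    reset_offsets: false        # Set to true for one run to discard previously saved offsets
action:
  type: "hello_world"           # Placeholder downstream Action
```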
### Processing Guarantees
This event source implements an "ack" function which is invoked if and only if an event is successfully processed
by the Actions framework, meaning that the event made it through the Transformers and into the Action without
any errors. Under the hood, the "ack" method synchronously commits DataHub Cloud Consumer Offsets on behalf of the Action. This means that by default, the framework provides _at-least-once_ processing semantics. That is, in the unusual case that a failure occurs when attempting to commit offsets back to DataHub Cloud, that event may be replayed on restart of the Action.
If you've configured your Action pipeline `failure_mode` to be `CONTINUE` (the default), then events which
fail to be processed will simply be logged to a `failed_events.log` file (a dead letter queue) for further investigation. The DataHub Cloud Event Source will continue to make progress against the underlying topics and continue to commit offsets even in the case of failed messages.
If you've configured your Action pipeline `failure_mode` to be `THROW`, then events which fail to be processed result in an Action Pipeline error. This in turn terminates the pipeline before committing offsets back to DataHub Cloud. Thus the message will not be marked as "processed" by the Action consumer.
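To make the two modes concrete, a hedged sketch of where `failure_mode` is set is shown below; the `options` nesting and the source type identifier are assumptions, while the `CONTINUE` and `THROW` values are the ones described above.

```yaml
# Sketch of pipeline-level failure handling (nesting assumed).
name: "cloud-events-consumer"
source:
  type: "datahub-cloud"   # Assumed type identifier
action:
  type: "hello_world"
options:
  # "CONTINUE" (default): log failed events to failed_events.log and keep committing offsets.
  # "THROW": raise a pipeline error and do not commit the offset of the failed event.
  failure_mode: "THROW"
```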
- [Metadata Change Log V1](../events/metadata-change-log-event.md) (by setting the `topics` config to include `MetadataChangeLog_Versioned_v1` and `MetadataChangeLog_Timeseries_v1`; see the sketch below)
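For instance, a hedged sketch of a `topics` override that adds the Metadata Change Log topics alongside the default platform event topic is shown below. The topic names come from this page; the source type identifier and layout are assumptions.

```yaml
source:
  type: "datahub-cloud"   # Assumed type identifier
  config:
    # Keep the default platform event topic and add the Metadata Change Log topics.
    topics:
      - "PlatformEvent_v1"
      - "MetadataChangeLog_Versioned_v1"
      - "MetadataChangeLog_Timeseries_v1"
```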
| Field | Required | Default | Description |
| --- | --- | --- | --- |
| `topics` | ❌ | `PlatformEvent_v1` | The topics from which events will be consumed. The default topic carries only `EntityChangeEvent_v1` events. To also receive `MetadataChangeLog_v1` events, set this value to include `MetadataChangeLog_Versioned_v1` and `MetadataChangeLog_Timeseries_v1`. |
| `lookback_days` | ❌ | None | Optional number of days to look back when polling for events. |
| `reset_offsets` | ❌ | `False` | When set to `True`, the consumer ignores any stored offsets and starts fresh. |
| `kill_after_idle_timeout` | ❌ | `False` | If `True`, stops the consumer after it has been idle for the specified timeout duration. |
| `idle_timeout_duration_seconds` | ❌ | `30` | Duration in seconds after which, if no events are received, the consumer is considered idle. |
| `event_processing_time_max_duration_seconds` | ❌ | `30` | Maximum allowed time in seconds for processing an event before timing out. |
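A hedged sketch combining the tuning parameters above is shown below; the field names and defaults come from the table, while the nesting under `source.config` and the source type identifier are assumptions.

```yaml
source:
  type: "datahub-cloud"                             # Assumed type identifier
  config:
    lookback_days: 7                                # Replay the last 7 days on initial bootstrap
    reset_offsets: false                            # Keep previously committed offsets
    kill_after_idle_timeout: true                   # Stop the consumer once it has been idle too long
    idle_timeout_duration_seconds: 60               # Consider the consumer idle after 60 seconds without events
    event_processing_time_max_duration_seconds: 30  # Cap event processing at 30 seconds
```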
1. Is there a way to always start processing from the end of the topics on Actions start?
Yes, simply set `reset_offsets` to `True` for a single run of the action. Remember to disable this for subsequent runs if you don't want to miss any events!
2. What happens if I have multiple actions with the same pipeline `name` running? Can I scale out horizontally?
Today, the behavior of deploying multiple Actions with the same pipeline `name` using the DataHub Cloud Event Source is undefined.
All events must be processed by a single running Action.