---
description: This page provides an overview of working with DataHub Freshness Assertions
---
import FeatureAvailability from '@site/src/components/FeatureAvailability';
# Freshness Assertions
<FeatureAvailability saasOnly />
> ⚠️ The **Freshness Assertions** feature is currently in private beta, part of the **Acryl Observe** module, and may only be available to a
> limited set of design partners.
>
> If you are interested in trying it and providing feedback, please reach out to your Acryl Customer Success
> representative.
## Introduction
Can you remember a time when a Data Warehouse Table that you depended on went days, weeks, or even
months without being updated with fresh data?
Perhaps a bug was introduced into an upstream Airflow DAG, or worse, the person in charge of maintaining the Table departed from your organization entirely.
There are many reasons why an important Table on Snowflake, Redshift, or BigQuery may fail to be updated as often as expected.
What if you could reduce the time to detect these incidents, so that the people responsible for the data were made aware of data
issues _before_ anyone else? What if you could communicate commitments about the freshness or change frequency
of a table? With Acryl DataHub Freshness Assertions, you can.
Acryl DataHub allows users to define expectations about when a particular Table in the warehouse
should change, and then monitor those expectations over time, with the ability to be notified when things go wrong.
In this article, we'll cover the basics of Freshness Assertions - what they are, how to configure them, and more - so that you and your team can
start building trust in your most important data assets.
For example, imagine that we have a Snowflake Table called "clicks" that is expected to receive fresh data every hour. It is important that our clicks Table continues to be updated each hour, because if it stops being updated, our downstream metrics dashboard could become incorrect. And the risk of this situation is obvious: our organization
may make bad decisions based on incomplete information.
In such cases, we can use a **Freshness Assertion** that checks whether the Snowflake "clicks" Table is being updated with
fresh data each and every hour as expected. If an hour goes by without any changes, we can immediately notify our team, to prevent any
negative impacts.
### Anatomy of a Freshness Assertion
At the most basic level, **Freshness Assertions** consist of a few important parts:
1. An **Evaluation Schedule**
2. A **Change Window**
3. A **Change Source**
In this section, we'll give an overview of each.
#### 1. Evaluation Schedule
The **Evaluation Schedule**: This defines how often to check a given warehouse Table for new updates. This should usually
be configured to match the expected change frequency of the Table, although it can also be evaluated more frequently.
If the Table changes daily, it should be daily. If it changes hourly, it should be hourly. You can also specify specific days of the week, hours in the day, or even
minutes in an hour.
#### 2. Change Window
The **Change Window**: This defines the window of time that is used when determining whether a change has been made to a Table.
We can either check for changes to the Table:
- _Since the freshness check was last evaluated_. For example, if the evaluation schedule is set to run every day at
8am PST, we can check whether a change was made between the previous day at 8am and the current day at 8am.
- _Within a specific amount of time of the freshness check being evaluated_ (A fixed interval). For example, if the evaluation schedule is set to run
every day at 8am PST, we can check whether a change was made in the _8 hours before_ the check is evaluated, which would mean
in the time between midnight (12:00am) and 8:00am PST.
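To make this concrete, below is a minimal sketch of how the fixed-interval variant of the 8am example might be created via the GraphQL API. It assumes an `upsertDatasetFreshnessAssertionMonitor` mutation; the exact input fields may differ in your DataHub version, and the `sourceType` field corresponds to the Change Source described in the next section.

```graphql
# Hedged sketch (assumed API shape): evaluate every day at 8am PST, and
# check whether the table changed in the 8 hours before each evaluation.
mutation createFixedIntervalFreshnessAssertion {
  upsertDatasetFreshnessAssertionMonitor(
    input: {
      entityUrn: "<urn of the table to monitor>"
      schedule: {
        type: FIXED_INTERVAL
        fixedInterval: { unit: HOUR, multiple: 8 }
      }
      evaluationSchedule: {
        timezone: "America/Los_Angeles"
        cron: "0 8 * * *" # every day at 8am
      }
      evaluationParameters: { sourceType: AUDIT_LOG }
      mode: ACTIVE
    }
  ) {
    urn
  }
}
```

Using `schedule: { type: SINCE_THE_LAST_CHECK }` instead would compare against the time of the previous evaluation.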
#### 3. Change Source
The **Change Source**: This is the mechanism that Acryl DataHub should use to determine whether the Table has changed. The supported
Change Source types vary by the platform, but generally fall into these categories:
- **Audit Log** (Default): A metadata API or Table that is exposed by the Data Warehouse which captures information about the
operations that have been performed on each Table. It is usually efficient to check, but some useful operations are not
fully supported across all major Warehouse platforms.
- **Information Schema**: A system Table that is exposed by the Data Warehouse which contains live information about the Databases
and Tables stored inside the Data Warehouse. It is usually efficient to check, but lacks detailed information about the _type_
of change that was last made to a specific table (e.g. the operation itself - INSERT, UPDATE, DELETE - or the number of impacted rows).
- **Last Modified Column**: A Date or Timestamp column that represents the last time that a specific _row_ was touched or updated.
Adding a Last Modified Column to each warehouse Table is a pattern often used for existing change management use cases.
If this change source is used, a query will be issued to the Table to search for rows that have been modified within a specific
window of time (based on the Change Window).
- **High Watermark Column**: A column that contains a constantly-incrementing value - a date, a time, or another always-increasing number.
If this change source is used, a query will be issued to the Table to look for rows with a new "high watermark", e.g. a value that
is higher than the previously observed value, in order to determine whether the Table has been changed within a given period of time.
Note that this approach is only supported if the Change Window does not use a fixed interval.
- **DataHub Operation**: A DataHub "Operation" aspect contains timeseries information used to describe changes made to an entity. Using this
option avoids contacting your data platform, and instead uses the DataHub Operation metadata to evaluate Freshness Assertions.
This relies on Operations being reported to DataHub, either via ingestion or via use of the DataHub APIs (see [Report Operation via API](#reporting-operations-via-api)).
Note that if you have not configured an ingestion source through DataHub, this may be the only option available. By default, any operation type found will be considered a valid change. Use the **Operation Types** dropdown when selecting this option to specify which operation types should be considered valid changes. You may choose from one of DataHub's standard Operation Types, or specify a "Custom" Operation Type by typing in its name.
Using either of the column value approaches (**Last Modified Column** or **High Watermark Column**) to determine whether a Table has changed can be useful because it can be customized to determine whether specific types of important changes have been made to a given Table.
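Putting the three parts together for the hourly clicks Table from the introduction, the same (assumed) mutation sketched earlier might look like the following, evaluating at the top of every hour and using the Information Schema to check for changes since the previous evaluation:

```graphql
# Hedged sketch (assumed API shape): evaluate hourly, and verify that the
# clicks Table changed since the last check using the Information Schema.
mutation createClicksFreshnessAssertion {
  upsertDatasetFreshnessAssertionMonitor(
    input: {
      entityUrn: "<urn of the clicks table>"
      schedule: { type: SINCE_THE_LAST_CHECK }
      evaluationSchedule: {
        timezone: "America/Los_Angeles"
        cron: "0 * * * *" # at the top of every hour
      }
      evaluationParameters: { sourceType: INFORMATION_SCHEMA }
      mode: ACTIVE
    }
  ) {
    urn
  }
}
```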
## Creating a Freshness Assertion

To create a Freshness Assertion for a Table:

1. Configure the evaluation **schedule**. This is the frequency at which the table will be checked for changes, and it represents your
   expectation about how often the table should be updated.
2. Configure the evaluation **period**. This defines the window of time that will be considered when looking for changes to the table. Choose between _Since the previous check_ to check whether the table has changed since the past evaluation,
   or _In the past X hours_ to configure a fixed interval that is used when checking the table.
3. Configure the **change source** that will be used to evaluate
   the check. Each Data Platform supports different options, including Audit Log, Information Schema, Last Modified Column, High Watermark Column, and DataHub Operation:
- **Audit Log**: Check the Data Platform operational audit log to determine whether the table changed within the evaluation period.
- **Information Schema**: Check the Data Platform system metadata tables to determine whether the table changed within the evaluation period.
- **Last Modified Column**: Check for the presence of rows using a "Last Modified Time" column, which should reflect the time at which a given row was last changed in the table, to
determine whether the table changed within the evaluation period.
- **High Watermark Column**: Monitor changes to a continuously-increasing "high watermark" column value to determine whether a table
has been changed. This option is particularly useful for tables that grow consistently with time, for example fact or event (e.g. click-stream) tables. It is not available
when the evaluation period is a fixed interval (_In the past X hours_).
- **DataHub Operation**: Use Operations reported to DataHub to determine whether the table changed within the evaluation period.

## Reporting Operations via API

DataHub Operations can be used to capture changes made to entities. This is useful for cases where the underlying data platform does not provide a mechanism
to capture changes, or where the data platform's mechanism is not reliable. To report an operation, you can use the `reportOperation` GraphQL mutation.
### Examples
```graphql
mutation reportOperation {
  reportOperation(
    input: {
      urn: "<urn of the dataset being reported>"
      operationType: INSERT
      sourceType: DATA_PLATFORM
      timestampMillis: 1693252366489
    }
  )
}
```
Use the `timestampMillis` field to specify the time at which the operation occurred. If no value is provided, the current time will be used.
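For example, to report an operation that just occurred, you can omit `timestampMillis` entirely and the current time will be recorded:

```graphql
# Same mutation as above, with timestampMillis omitted so that the
# operation is recorded at the current time.
mutation reportOperation {
  reportOperation(
    input: {
      urn: "<urn of the dataset being reported>"
      operationType: INSERT
      sourceType: DATA_PLATFORM
    }
  )
}
```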