Pipeline Operations

Beyond building and scheduling your transformation pipeline, Sundial provides tools to validate data quality, trace dependencies, monitor freshness, and manage historical reprocessing. This page covers everything you need to keep your pipeline healthy and transparent.

Data quality testing

Sundial lets you attach tests to any Source or SQL View. Tests run automatically whenever the table is refreshed, catching issues before they propagate downstream.

Pre-built tests

Apply common checks without writing SQL:

  • Row count — Set minimum or maximum thresholds for expected row counts. Detect unexpected drops or spikes in volume.
  • Null checks — Set a maximum allowed percentage of null values per column. Identify columns with data quality problems.
  • Value range — Ensure numeric columns stay within expected bounds (e.g. revenue is positive, percentages are between 0 and 100).
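The assertions behind these pre-built tests can be sketched as plain SQL aggregates. This is a minimal illustration, not Sundial's actual implementation; the `orders` table, column names, and thresholds are hypothetical, and in Sundial you would configure these checks rather than write them by hand:

```python
import sqlite3

# Hypothetical table standing in for a refreshed Source or SQL View.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, revenue REAL);
    INSERT INTO orders VALUES (1, 10, 99.5), (2, 11, 12.0), (3, NULL, 5.0);
""")

# Row count: expect between 1 and 1,000,000 rows.
rows = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
assert 1 <= rows <= 1_000_000, f"row count {rows} outside expected bounds"

# Null check: at most 40% of customer_id values may be NULL.
null_pct = conn.execute(
    "SELECT 100.0 * SUM(customer_id IS NULL) / COUNT(*) FROM orders"
).fetchone()[0]
assert null_pct <= 40, f"customer_id is {null_pct:.0f}% NULL"

# Value range: revenue must be positive.
bad = conn.execute("SELECT COUNT(*) FROM orders WHERE revenue <= 0").fetchone()[0]
assert bad == 0, f"{bad} rows with non-positive revenue"
```

Each check reduces to a single aggregate compared against a configured threshold, which is why they can run cheaply on every refresh.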

Custom SQL tests

For more complex validation, write SQL queries that define your own assertions:

  • Validate business rules (e.g. every order has a customer, no duplicate transaction IDs).
  • Compare aggregates across time periods to detect anomalies.
  • Check that derived calculations match expected values.
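A common convention for custom SQL tests (used by most SQL testing tools) is a query that selects the *violating* rows, so the test passes only when the result set is empty. The sketch below checks the "no duplicate transaction IDs" rule against a hypothetical `transactions` table; treat it as an illustration of the pattern rather than Sundial's exact test syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (txn_id TEXT, amount REAL);
    INSERT INTO transactions VALUES ('t1', 10.0), ('t2', 20.0), ('t2', 20.0);
""")

# The test query returns one row per violated assertion:
# here, any txn_id that appears more than once.
violations = conn.execute("""
    SELECT txn_id, COUNT(*) AS n
    FROM transactions
    GROUP BY txn_id
    HAVING COUNT(*) > 1
""").fetchall()

# An empty result means the test passes; any rows mean failure.
passed = len(violations) == 0
```

The same shape covers the other cases: an anti-join finds orders without a customer, and a query comparing this period's aggregate to last period's flags anomalies.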

Handling failures

When a test fails, Sundial:

  1. Emits an alert through configured channels (Slack, email).
  2. Logs the failure details — test name, reason, and affected data — in the UI console.
  3. Lets you investigate and triage: is it data drift, an upstream change, a code regression, or an infrastructure issue?
  4. Supports backfill to reprocess affected data once the root cause is fixed.

For a full list of available tests and configuration options, see the Testing reference.

Visual lineage

Sundial automatically tracks how data flows through your pipeline — from source tables through intermediate views to final metrics and dimensions.

What you can do with lineage

  • Debug metric issues — Trace a metric back through its dependency chain to find where a discrepancy was introduced.
  • Assess impact — Before changing a table's schema or logic, see every downstream table, metric, and dimension that depends on it.
  • Audit data flows — Document how data moves through the organization for governance and compliance.
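Impact assessment is a reachability question over the dependency graph. As a sketch of the idea (Sundial derives this graph for you automatically from your SQL; the table names below are hypothetical), a breadth-first traversal finds everything downstream of a table you plan to change:

```python
from collections import deque

# Hypothetical edges: table -> the tables/metrics built directly from it.
downstream = {
    "raw_orders": ["stg_orders"],
    "stg_orders": ["fct_revenue", "dim_customers"],
    "fct_revenue": ["metric_mrr"],
}

def affected_by(table):
    """Return every node reachable downstream of `table` (BFS)."""
    seen, queue = set(), deque([table])
    while queue:
        for child in downstream.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen
```

For example, `affected_by("raw_orders")` returns the full blast radius of a schema change to `raw_orders`, while `affected_by("fct_revenue")` contains only `metric_mrr`.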

Lineage views

The lineage UI offers several perspectives:

  • Table View — Relationships between tables and their refresh status.
  • Schema View — Column-level schemas and data types.
  • Metric View — How metrics trace back to underlying tables.
  • Focused View — Isolate a specific table or metric and see only its direct upstream and downstream dependencies.

For details, see the Lineage reference.

Freshness and staleness checks

Monitor how current your data is. Sundial tracks when each table was last refreshed and flags tables that are overdue based on their schedule. Use freshness checks to:

  • Quickly spot which tables are stale and need attention.
  • Set expectations with stakeholders about data currency.
  • Trigger alerts when critical tables fall behind.
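Conceptually, a staleness check compares each table's last refresh timestamp against a freshness budget derived from its schedule. The sketch below shows that comparison with hypothetical tables and budgets; Sundial tracks the timestamps and schedules for you:

```python
from datetime import datetime, timedelta, timezone

now = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)

# Hypothetical refresh metadata and per-table freshness budgets.
last_refreshed = {
    "fct_revenue": now - timedelta(hours=2),
    "dim_customers": now - timedelta(days=3),
}
max_age = {"fct_revenue": timedelta(hours=6), "dim_customers": timedelta(days=1)}

def stale_tables(as_of):
    """Tables whose age exceeds their freshness budget as of `as_of`."""
    return [t for t, ts in last_refreshed.items() if as_of - ts > max_age[t]]
```

Here `stale_tables(now)` flags `dim_customers` (3 days old against a 1-day budget) while `fct_revenue` (2 hours old, 6-hour budget) is fine, which is exactly the signal you would route to an alerting channel for critical tables.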

Schema browsing

Explore the full schema of any table directly from the Data Catalog — column names, data types, and descriptions. Schema browsing helps you:

  • Understand the shape of upstream data before writing a SQL View.
  • Verify that a transformation produced the expected output columns.
  • Onboard new team members by letting them explore the data model interactively.
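Most warehouses expose this same information programmatically via their information schema, which is useful when you want to verify output columns in a script rather than the UI. A minimal sketch using SQLite's `PRAGMA table_info` as a stand-in (the `customers` table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT, signup_date TEXT)")

# (name, declared type) per column — the same facts the Data Catalog's
# schema browser surfaces, minus descriptions.
schema = [(row[1], row[2]) for row in conn.execute("PRAGMA table_info(customers)")]
```

Comparing such a list against the columns you expected is a quick way to confirm a transformation produced the right shape.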

Pending change preview

Before running a materialization, review exactly what has changed since the last run. The pending changes view shows:

  • Newly added models and sources.
  • Edited SQL queries or configuration.
  • Renamed or removed columns.

This lets you review and validate changes before they take effect, reducing the risk of unexpected downstream impact.
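Under the hood, "edited SQL" detection amounts to diffing the stored definition against the current one. A toy sketch of that idea (the queries are hypothetical; Sundial's own change detection also covers config, renames, and removals):

```python
import difflib

old_sql = "SELECT id, revenue FROM orders"
new_sql = "SELECT id, revenue, region FROM orders"

# unified_diff yields no lines when the definitions match, so any output
# means the model has pending changes to review before materializing.
diff = list(difflib.unified_diff([old_sql], [new_sql], lineterm=""))
changed = bool(diff)
```

An empty diff means the model can be skipped; a non-empty one is what surfaces in the pending changes view for review.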

Backfill configuration

When you need to reprocess historical data, configure a backfill directly from the view's settings in the Data Catalog. See Backfill for the full guide on modes, configuration, and best practices.
