Compute Optimization

The Compute Optimization surface helps you right-size warehouse compute across your entire pipeline fleet. Instead of manually auditing expensive queries, Zingle identifies over-provisioned pipelines and suggests cheaper alternatives — all backed by pull requests.


The optimization table

Navigate to Compute Optimization in the sidebar. The main view is a governance table showing all pipelines:

| Column | Description |
| --- | --- |
| ID | Pipeline identifier |
| Name | Human-readable pipeline name |
| Current compute engine | e.g., Snowflake Large, Snowflake X-Small, DuckDB |
| Cost | Monthly compute cost for this pipeline |
| Criticality | Priority level (P0 through P3) |
| PII | Whether the pipeline handles personally identifiable information |
| Owner | Responsible team member |
| Compute suggestions | AI-generated recommendation (toggle via column controls) |
| Estimated cost savings | Projected monthly savings (toggle via column controls) |

Column controls

Use the Columns dropdown to toggle visibility of optional columns. Enable Compute suggestions and Estimated cost savings to see AI-powered optimization recommendations.


Applying optimization suggestions

  1. Review the suggestion

    Look at the Compute suggestions column. Common recommendations include:

    • "Switch from Snowflake Large to Snowflake Small"
    • "Switch to managed DuckDB"
    • "Downgrade warehouse size for low-volume pipeline"

    Each suggestion includes estimated cost savings.

  2. Open the suggestion dialog

    Click the suggestion pill in the table row. A dialog opens where you can:

    • Review the full recommendation details
    • Select a GitHub repository for the PR (must have allow_prs enabled)

  3. Create the optimization PR

    Click Create PR. Zingle:

    1. Validates the selected repository and workspace
    2. Creates a branch (e.g., zingle/compute-optimization-{pipeline_id})
    3. Commits the compute engine change
    4. Opens a pull request with the suggestion, pipeline context, and cost impact
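
Under the hood this is an ordinary branch, commit, and pull-request sequence. The sketch below reproduces it with PyGithub so you can see roughly what the generated PR contains. The repository name, config path, and engine key are placeholders, and the assumption that each pipeline's engine lives in a YAML file is ours, not Zingle's; the only detail taken from the steps above is the zingle/compute-optimization-{pipeline_id} branch naming pattern.

```python
# Minimal sketch of the branch -> commit -> PR sequence that Zingle automates.
# Assumes a GitHub token with repo scope and a repo where the pipeline's
# compute engine is set in a YAML config file. All names are placeholders.
from github import Github  # pip install PyGithub

GITHUB_TOKEN = "ghp_..."                      # placeholder
REPO_NAME = "acme/data-pipelines"             # placeholder
PIPELINE_ID = "orders_daily"                  # placeholder
CONFIG_PATH = f"pipelines/{PIPELINE_ID}.yml"  # assumed config location

gh = Github(GITHUB_TOKEN)
repo = gh.get_repo(REPO_NAME)

# 1. Validate: confirm the base branch and the pipeline config exist.
base = repo.get_branch("main")
config = repo.get_contents(CONFIG_PATH, ref="main")

# 2. Create a branch following the zingle/compute-optimization-{pipeline_id} pattern.
branch_name = f"zingle/compute-optimization-{PIPELINE_ID}"
repo.create_git_ref(ref=f"refs/heads/{branch_name}", sha=base.commit.sha)

# 3. Commit the compute engine change (naive string replace, for illustration only).
new_config = config.decoded_content.decode().replace(
    "engine: snowflake-large", "engine: snowflake-small"
)
repo.update_file(
    path=CONFIG_PATH,
    message=f"Downgrade compute engine for {PIPELINE_ID}",
    content=new_config,
    sha=config.sha,
    branch=branch_name,
)

# 4. Open the pull request with the suggestion and cost context in the body.
repo.create_pull(
    title=f"Compute optimization: {PIPELINE_ID}",
    body="Switch from Snowflake Large to Snowflake Small.\n"
         "Estimated savings: see the Compute Optimization table.",
    head=branch_name,
    base="main",
)
```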

The AI assistant

From the table header, click AI Assistant to open the compute optimization chat interface. Use it to:

  • Ask which pipelines are the best candidates for cheaper compute
  • Compare cost/latency trade-offs between engines
  • Get explanations for why a specific engine was recommended
  • Turn conversations into actionable optimization PRs

Compute engine options

| Engine | Best for | Relative cost |
| --- | --- | --- |
| Snowflake X-Small | Low-volume, simple transformations | $ |
| Snowflake Small | Standard workloads | $$ |
| Snowflake Medium | Moderate complexity | $$$ |
| Snowflake Large | Heavy joins, large datasets | $$$$ |
| Snowflake X-Large+ | Extreme workloads | $$$$$ |
| Managed DuckDB | Lightweight, non-production, or dev workloads | $ |
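
The dollar signs track Snowflake's credit consumption, which doubles with each warehouse size (X-Small = 1 credit per hour, Small = 2, Medium = 4, Large = 8, X-Large = 16). Here is a minimal sketch of how a savings estimate falls out of those rates, assuming a $3 credit price and 60 warehouse-hours per month; both are placeholders for your own contract and usage numbers.

```python
# Rough monthly-savings estimate for a warehouse downgrade.
# Credits/hour follow Snowflake's published per-size rates; the credit price
# and monthly runtime hours below are assumptions -- substitute your own.
CREDITS_PER_HOUR = {
    "Snowflake X-Small": 1,
    "Snowflake Small": 2,
    "Snowflake Medium": 4,
    "Snowflake Large": 8,
    "Snowflake X-Large": 16,
}

CREDIT_PRICE_USD = 3.0         # assumption: check your Snowflake contract
RUNTIME_HOURS_PER_MONTH = 60   # assumption: hours the warehouse actually runs

def monthly_cost(engine: str) -> float:
    return CREDITS_PER_HOUR[engine] * CREDIT_PRICE_USD * RUNTIME_HOURS_PER_MONTH

current, suggested = "Snowflake Large", "Snowflake Small"
savings = monthly_cost(current) - monthly_cost(suggested)
print(f"{current} -> {suggested}: ~${savings:,.0f}/month saved")
# Large (8 cr/h) -> Small (2 cr/h) at $3/credit for 60 h/month: ~$1,080 saved
```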

Tips

  • Enable both suggestion columns. You need visibility into both the recommendation and the savings to make informed decisions.
  • Start with P2/P3 pipelines. Lower-criticality pipelines are safer candidates for compute downgrades.
  • Review cost trends over time. A pipeline that was expensive last month may have changed in volume.
  • Use the AI assistant for bulk analysis. Instead of reviewing pipelines one by one, ask the assistant to rank all pipelines by savings potential.
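
If you would rather script that ranking yourself, the sketch below sorts pipelines by criticality and projected savings. It assumes you have exported, or otherwise assembled, the governance table as a CSV with name, criticality, and estimated_cost_savings columns; the file name and column names are hypothetical.

```python
# Rank pipelines by how safe and valuable a downgrade looks: lowest criticality
# (P3 before P0) first, largest estimated savings first within each tier.
# Assumes a CSV with hypothetical columns: name, criticality, estimated_cost_savings.
import csv

with open("pipelines.csv", newline="") as f:
    rows = list(csv.DictReader(f))

rows.sort(
    key=lambda r: (
        -int(r["criticality"].lstrip("P")),   # P3 (safest) before P0
        -float(r["estimated_cost_savings"]),  # biggest savings first
    )
)

for r in rows[:10]:
    print(f"{r['name']:<30} {r['criticality']}  ~${float(r['estimated_cost_savings']):,.0f}/month")
```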