# Compute Optimization
The Compute Optimization surface helps you right-size warehouse compute across your entire pipeline fleet. Instead of manually auditing expensive queries, Zingle identifies over-provisioned pipelines and suggests cheaper alternatives — all backed by pull requests.
## The optimization table
Navigate to Compute Optimization in the sidebar. The main view is a governance table showing all pipelines:
| Column | Description |
|---|---|
| ID | Pipeline identifier |
| Name | Human-readable pipeline name |
| Current compute engine | e.g., Snowflake Large, Snowflake X-Small, DuckDB |
| Cost | Monthly compute cost for this pipeline |
| Criticality | Priority level (P0 through P3) |
| PII | Whether the pipeline handles personally identifiable information |
| Owner | Responsible team member |
| Compute suggestions | AI-generated recommendation (toggle via column controls) |
| Estimated cost savings | Projected monthly savings (toggle via column controls) |
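Conceptually, each row of the governance table is a flat record. A minimal sketch of that shape as a Python dataclass — the field names here are illustrative assumptions, not Zingle's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PipelineRow:
    """One row of the optimization table (field names are illustrative)."""
    id: str
    name: str
    compute_engine: str                  # e.g. "Snowflake Large", "DuckDB"
    monthly_cost_usd: float
    criticality: str                     # "P0" through "P3"
    handles_pii: bool
    owner: str
    # Populated only when the optional columns are enabled:
    compute_suggestion: Optional[str] = None
    estimated_savings_usd: Optional[float] = None

row = PipelineRow(
    id="pl-204",
    name="orders_daily_rollup",
    compute_engine="Snowflake Large",
    monthly_cost_usd=1800.0,
    criticality="P3",
    handles_pii=False,
    owner="data-eng",
    compute_suggestion="Switch from Snowflake Large to Snowflake Small",
    estimated_savings_usd=1350.0,
)
```

The two optional fields mirror the toggleable columns: when Compute suggestions and Estimated cost savings are hidden, those values are simply absent from the view.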
## Column controls
Use the Columns dropdown to toggle visibility of optional columns. Enable Compute suggestions and Estimated cost savings to see AI-powered optimization recommendations.
## Applying optimization suggestions
### Review the suggestion
Look at the Compute suggestions column. Common recommendations include:
- "Switch from Snowflake Large to Snowflake Small"
- "Switch to managed DuckDB"
- "Downgrade warehouse size for low-volume pipeline"
Each suggestion includes estimated cost savings.
### Open the suggestion dialog
Click the suggestion pill in the table row. A dialog opens where you can:
- Review the full recommendation details
- Select a GitHub repository for the PR (must have `allow_prs` enabled)
### Create the optimization PR
Click Create PR. Zingle:
- Validates the selected repository and workspace
- Creates a branch (e.g., `zingle/compute-optimization-{pipeline_id}`)
- Commits the compute engine change
- Opens a pull request with the suggestion, pipeline context, and cost impact
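Zingle performs these steps internally, but the shape of the flow maps onto the standard GitHub REST API (create a ref, commit, open a pull request). A hedged sketch of the payloads such a flow would build — the helper names and the `main` base branch are assumptions for illustration:

```python
# Sketch of the payloads a branch-and-PR flow would send to the GitHub REST
# API. The endpoints are GitHub's; the helper names here are illustrative.

def branch_ref(pipeline_id: str, base_sha: str) -> dict:
    """Payload for POST /repos/{owner}/{repo}/git/refs (create the branch)."""
    return {
        "ref": f"refs/heads/zingle/compute-optimization-{pipeline_id}",
        "sha": base_sha,
    }

def pr_payload(pipeline_id: str, suggestion: str, savings: float) -> dict:
    """Payload for POST /repos/{owner}/{repo}/pulls (open the PR)."""
    return {
        "title": f"Compute optimization: {suggestion}",
        "head": f"zingle/compute-optimization-{pipeline_id}",
        "base": "main",  # assumed default branch
        "body": f"{suggestion}\n\nEstimated monthly savings: ${savings:,.0f}",
    }
```

The branch name follows the `zingle/compute-optimization-{pipeline_id}` convention shown above, so concurrent PRs for different pipelines never collide.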
## The AI assistant
From the table header, click AI Assistant to open the compute optimization chat interface. Use it to:
- Ask which pipelines are the best candidates for cheaper compute
- Compare cost/latency trade-offs between engines
- Get explanations for why a specific engine was recommended
- Turn conversations into actionable optimization PRs
## Compute engine options
| Engine | Best for | Relative cost |
|---|---|---|
| Snowflake X-Small | Low-volume, simple transformations | $ |
| Snowflake Small | Standard workloads | $$ |
| Snowflake Medium | Moderate complexity | $$$ |
| Snowflake Large | Heavy joins, large datasets | $$$$ |
| Snowflake X-Large+ | Extreme workloads | $$$$$ |
| Managed DuckDB | Lightweight, non-production, or dev workloads | $ |
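The relative-cost column orders the engines into tiers, which is enough to sanity-check that a suggestion actually moves a pipeline to a cheaper tier. A minimal sketch — the tier numbers mirror the `$` column above and are not Zingle pricing:

```python
# Illustrative cost tiers matching the table's relative-cost column
# (1 = "$", 5 = "$$$$$"); these are assumptions, not actual prices.
COST_TIER = {
    "Managed DuckDB": 1,
    "Snowflake X-Small": 1,
    "Snowflake Small": 2,
    "Snowflake Medium": 3,
    "Snowflake Large": 4,
    "Snowflake X-Large+": 5,
}

def is_downgrade(current: str, suggested: str) -> bool:
    """True when the suggested engine sits in a strictly cheaper tier."""
    return COST_TIER[suggested] < COST_TIER[current]
```

For example, `is_downgrade("Snowflake Large", "Snowflake Small")` holds, while moving from Snowflake X-Small to Snowflake Medium would not.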
## Tips
- Enable both suggestion columns. You need visibility into both the recommendation and the savings to make informed decisions.
- Start with P2/P3 pipelines. Lower-criticality pipelines are safer candidates for compute downgrades.
- Review cost trends over time. A pipeline that was expensive last month may have changed in volume.
- Use the AI assistant for bulk analysis. Instead of reviewing pipelines one by one, ask the assistant to rank all pipelines by savings potential.
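The last two tips — prefer P2/P3 pipelines and rank by savings potential — amount to a simple filter-and-sort. A minimal sketch, assuming rows carry `criticality` and `estimated_savings` fields (illustrative names):

```python
def rank_candidates(pipelines: list[dict]) -> list[dict]:
    """Keep lower-criticality pipelines and rank them by estimated savings.

    Expects dicts with "name", "criticality", and "estimated_savings" keys
    (field names are illustrative).
    """
    safe = [p for p in pipelines if p["criticality"] in ("P2", "P3")]
    return sorted(safe, key=lambda p: p["estimated_savings"], reverse=True)

pipelines = [
    {"name": "orders_rollup", "criticality": "P3", "estimated_savings": 1350},
    {"name": "billing_core", "criticality": "P0", "estimated_savings": 4000},
    {"name": "dev_sandbox", "criticality": "P2", "estimated_savings": 200},
]
# billing_core is excluded despite the largest savings: P0 pipelines are
# too critical to be safe downgrade candidates.
ranked = rank_candidates(pipelines)
```

This is the same prioritization the AI assistant applies when asked to rank all pipelines by savings potential, expressed as plain code.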