Overview
A high-level look at what SQL Performance Intelligence™ does, what it avoids, and why its read-only approach is safe for production.
SQL Performance Intelligence is a read-only analysis application for SQL Server. It helps you inspect CPU, waits, blocking, indexes, SQL Agent jobs, security posture, and object-level SQL details without applying automatic schema changes.
- Reads performance metadata, Query Store history, and object definitions.
- Builds evidence-backed findings and exportable reports.
- Lets you add either a local or cloud LLM for deeper interpretation.
- Does not create or alter your application schema automatically.
- Does not require read access to business table data for normal usage.
- Does not configure itself on first run; you must complete the first-run settings yourself.
Follow this order. It is the fastest way to reach a working first analysis without guessing which setting matters first.
- Install the desktop app and complete license or trial activation.
- Ask your DBA to prepare a read-only SQL login or approved Windows account.
- Enable Query Store on the database you want to analyze.
- Add one database connection profile in Settings > Database.
- Add one AI provider in Settings > AI / LLM, either local Ollama or a cloud model.
- Connect and start with Dashboard, then move into Query Statistics or Object Explorer.
Ask your DBA for a read-only SQL login or a Windows account that has the minimum monitoring permissions. For most first-time users, the important permissions are VIEW SERVER STATE, VIEW DATABASE STATE, and VIEW DEFINITION.
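As a sketch, the DBA could grant exactly those three permissions with standard T-SQL. The login name `sqlperf_reader` and database name `YourDatabase` are placeholders; substitute your own:

```sql
-- Run as a sysadmin. "sqlperf_reader" is a placeholder name.
USE master;
CREATE LOGIN sqlperf_reader WITH PASSWORD = 'StrongPasswordHere!';
GRANT VIEW SERVER STATE TO sqlperf_reader;     -- server-level DMVs

USE YourDatabase;                              -- the database to analyze
CREATE USER sqlperf_reader FOR LOGIN sqlperf_reader;
GRANT VIEW DATABASE STATE TO sqlperf_reader;   -- database-level DMVs
GRANT VIEW DEFINITION TO sqlperf_reader;       -- object metadata and source
```

Because none of these grants include SELECT on user tables, the account can read metadata and runtime statistics but not business rows.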
Ask for Query Store to be enabled on the database you want to analyze. Modules such as Query Statistics, Index Advisor, and wait-to-plan correlation work better when historical Query Store evidence exists.
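Enabling Query Store is a single `ALTER DATABASE` statement. The option values below are illustrative defaults, not requirements of this application; adjust them to your retention needs:

```sql
-- Replace YourDatabase with the target database name.
ALTER DATABASE YourDatabase
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,     -- capture new data, not read-only
     QUERY_CAPTURE_MODE = AUTO,       -- skip insignificant ad hoc queries
     MAX_STORAGE_SIZE_MB = 1024);     -- cap on Query Store storage
```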
Decide early whether you want a local model such as Ollama or a cloud provider such as OpenAI, Azure OpenAI, Anthropic, or DeepSeek. That choice affects which credentials and fields you need later.
Do not start by tuning thresholds, prompt rules, or advanced caching. First make one connection, one AI provider, and one successful analysis. Optimize later.
Query Store is not required for every screen, but it is strongly recommended for real analysis. Without it, the app falls back to more limited live DMV data and loses historical depth.
- Query Statistics can rank important queries with better historical evidence.
- Index Advisor can estimate dependency and drop risk with stronger workload context.
- Wait Statistics can use richer query-correlation and wait-history scenarios.
- Regression and plan-change analysis become more trustworthy than DMV-only snapshots.
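To confirm Query Store is actually capturing data before you rely on it, you can query the standard catalog view from the target database:

```sql
-- actual_state_desc should read READ_WRITE; if it differs from
-- desired_state_desc, Query Store may have hit its storage cap.
SELECT actual_state_desc,
       desired_state_desc,
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;
```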
Local model (Ollama)
- Best when data should stay on your own machine or inside your own network.
- Works well for offline or controlled environments.
- Requires a running Ollama service and an installed local model.
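A minimal check that the Ollama prerequisite is met, using the standard Ollama CLI (the model name `llama3` is just an example; any installed model works):

```shell
ollama serve &                          # start the local service (default port 11434)
ollama pull llama3                      # download a model if you have none installed
curl http://localhost:11434/api/tags    # list installed models; confirms the service responds
```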
Cloud provider
- Best when you already operate OpenAI, Azure OpenAI, Anthropic, or DeepSeek.
- Usually easier to scale and easier to standardize across many users.
- Requires API credentials and, for some providers, endpoint or deployment fields.
You can configure multiple providers, but keep one clear default provider for daily work. For a new user, one working provider is better than many partially tested providers.
- Open Dashboard to see whether CPU, memory, IO, or TempDB pressure is already obvious.
- Open Query Statistics if the problem looks query-driven or workload-driven.
- Open Object Explorer when you need source code, object stats, dependencies, or AI Tune for one object.
- Open Wait Statistics or Blocking Analysis when contention is the main symptom.