[color=green][size=medium][url]http://stackoverflow.com/questions/18783/favourite-performance-tuning-tricks[/url]
Here is the handy-dandy list of things I always give to someone asking me about optimisation.
We mainly use Sybase, but most of the advice will apply across the board.
SQL Server, for example, comes with a host of performance monitoring / tuning bits, but if you don't have anything like that (and maybe even if you do) then I would consider the following...
99% of problems I have seen are caused by putting too many tables in a join. The fix for this is to do half the join (with some of the tables) and cache the results in a temporary table. Then do the rest of the query joining on that temporary table.
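As a rough sketch of what that looks like (all table and column names here are invented for illustration), on Sybase / SQL Server it might be:

[code]
-- Sketch only: orders, customers, regions etc. are hypothetical tables.
-- First half of the join, cached in a #temp table:
SELECT o.order_id, o.customer_id, c.region_id
INTO   #order_subset
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
WHERE  o.order_date >= '2008-01-01'

-- Then join the remaining tables against the (much smaller) #temp table:
SELECT t.order_id, r.region_name, p.product_name
FROM   #order_subset t
JOIN   regions     r ON r.region_id  = t.region_id
JOIN   order_lines l ON l.order_id   = t.order_id
JOIN   products    p ON p.product_id = l.product_id

DROP TABLE #order_subset
[/code]

The point is that the optimiser now only ever sees small joins, and the intermediate result is usually a fraction of the base tables.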
[b]Query Optimisation Checklist[/b]
[list]
[*]Run UPDATE STATISTICS on the underlying tables
[list]
[*]Many systems run this as a scheduled weekly job
[/list]
[*]Delete records from underlying tables (possibly archive the deleted records)
[list]
[*]Consider doing this automatically once a day or once a week
[/list]
[*]Rebuild indexes
[*]Rebuild tables (bcp data out/in)
[*]Dump / reload the database (drastic, but might fix corruption)
[*]Build a new, more appropriate index
[*]Run DBCC to check for possible corruption in the database
[*]Locks / deadlocks
[list]
[*]Ensure no other processes are running in the database
[list]
[*]Especially DBCC
[/list]
[*]Are you using row or page level locking?
[*]Lock the tables exclusively before starting the query
[*]Check that all processes access tables in the same order
[/list]
[*]Are indices being used appropriately?
[list]
[*]Joins will only use an index if both expressions are exactly the same data type
[*]An index will only be used if the first field(s) of the index are matched in the query
[*]Are clustered indices used where appropriate?
[list]
[*]Range data
[*]WHERE field BETWEEN value1 AND value2
[/list]
[/list]
[*]Small joins are nice joins
[list]
[*]By default the optimiser will only consider tables four at a time
[*]This means that in joins of more than four tables it has a good chance of choosing a non-optimal query plan
[/list]
[*]Break up the join
[list]
[*]Can you break up the join?
[list]
[*]Pre-select foreign keys into a temporary table
[*]Do half the join and put the results in a temporary table
[/list]
[/list]
[*]Are you using the right kind of temporary table?
[list]
[*]#temp tables may perform much better than @table variables with large volumes (thousands of rows)
[/list]
[*]Maintain summary tables
[list]
[*]Build with triggers on the underlying tables
[*]Build daily / hourly / etc.
[*]Build ad hoc
[*]Build incrementally or teardown / rebuild
[/list]
[*]See what the query plan is with SET SHOWPLAN ON
[*]See what's actually happening with SET STATISTICS IO ON
[*]Force an index using the pragma: (index myindex)
[*]Force the table order using SET FORCEPLAN ON
[*]Parameter sniffing
[list]
[*]Break the stored procedure into two
[*]Call proc2 from proc1
[*]This allows the optimiser to choose an index in proc2 when @parameter has been changed by proc1
[/list]
[*]Can you improve your hardware?
[*]What time are you running? Is there a quieter time?
[*]Is Replication Server (or another non-stop process) running? Can you suspend it? Run it e.g. hourly?
[/list]
[/size][/color]
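To see what the server actually does with a query, the diagnostic switches from the checklist look roughly like this in a Sybase session (on SQL Server the equivalents are SET SHOWPLAN_TEXT ON and SET STATISTICS IO ON):

[code]
-- Turn diagnostics on, run the problem query, then turn them off again.
set showplan on
set statistics io on
set statistics time on
go

select count(*) from my_table   -- your problem query here (my_table is a placeholder)
go

set showplan off
set statistics io off
set statistics time off
go
[/code]

Look for table scans where you expected an index, and for logical I/O counts wildly out of proportion to the rows returned.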
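The parameter-sniffing workaround from the checklist, sketched with made-up procedure and table names:

[code]
-- proc2 is compiled against the value it actually receives,
-- not the value proc1 was originally called with.
CREATE PROCEDURE proc2 @start_date datetime AS
BEGIN
    SELECT order_id, order_date      -- hypothetical table and columns
    FROM   orders
    WHERE  order_date >= @start_date
END
go

CREATE PROCEDURE proc1 @start_date datetime AS
BEGIN
    -- proc1 massages the parameter...
    IF @start_date IS NULL
        SELECT @start_date = dateadd(dd, -7, getdate())
    -- ...then hands the final value to proc2
    EXEC proc2 @start_date
END
go
[/code]

Because the plan for proc2 is chosen when proc2 runs, the optimiser sees the modified @start_date rather than the (possibly NULL or atypical) value the caller supplied.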
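And a trigger-maintained summary table, showing the insert case only (table names are invented, and this sketch assumes a summary row already exists for each customer; a full version would also handle deletes, updates and new customers):

[code]
-- Hypothetical example: keep a per-customer order count up to date.
CREATE TABLE order_summary (
    customer_id int PRIMARY KEY,
    order_count int NOT NULL
)
go

CREATE TRIGGER trg_orders_ins ON orders FOR INSERT AS
BEGIN
    -- Remember: "inserted" can hold many rows, so count per customer.
    UPDATE order_summary
    SET    order_count = order_count +
           (SELECT count(*) FROM inserted
            WHERE  inserted.customer_id = order_summary.customer_id)
    WHERE  order_summary.customer_id IN (SELECT customer_id FROM inserted)
END
go
[/code]

Reports then read the small order_summary table instead of aggregating the big orders table every time.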