Not considering service level agreements (SLAs) when designing a database environment, and/or not considering the need for scheduled downtime for service pack/hotfix application,
Defining "disaster" too narrowly and not simulating/practicing a disaster recovery (DR) plan. Having a DR plan is fine, but how do you know it will work (and that several people can follow it) when required?
Designing a storage system from a capacity perspective alone,
Assuming a SAN will meet/exceed performance requirements. Just because SANs are (typically) expensive does not mean the storage design process can be skipped,
Failing to track-align disk partitions, and formatting them with the default Windows allocation unit size (64KB is the commonly recommended size for SQL Server data and log volumes),
Using RAID 5 volumes for write-intensive applications. Each random write on RAID 5 costs four physical I/Os (read data, read parity, write both back), a penalty RAID 10 avoids,
Failing to validate an I/O subsystem for performance and validity before production implementation. The SQLIO and SQLIOSIM tools are free; there's no reason not to use them,
Virtualizing/consolidating SQL Server instances and databases without considering the scalability, licensing, support, administration, and performance profile implications,
Installing service packs, cumulative updates, or hotfixes without reading the release notes and/or without installing them in a test environment first,
Installing all SQL Server features on the off chance they may be needed at some point in the future. Doing so increases the attack surface area and results in unnecessary running services that may reduce performance,
Installing multi-instance clusters without considering the resource implications of failover situations,
Creating logins/jobs with elevated privileges. Implementing least privilege can be tough work, but it is essential in locking down a system for maximum security,
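As a sketch of the least-privilege approach (the login, database, and role names below are hypothetical), a job account can be granted only the permissions it actually needs rather than being added to sysadmin:

```sql
-- Hypothetical names throughout; adjust to your environment.
CREATE LOGIN [DOMAIN\SqlJobAccount] FROM WINDOWS;
GO
USE SalesDB;
GO
CREATE USER [DOMAIN\SqlJobAccount] FOR LOGIN [DOMAIN\SqlJobAccount];
CREATE ROLE JobExecutor;
-- Grant only what the job requires, e.g. execute rights on one schema
GRANT EXECUTE ON SCHEMA::dbo TO JobExecutor;
EXEC sp_addrolemember 'JobExecutor', 'DOMAIN\SqlJobAccount';
```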
Changing configuration values from their default settings without adequate research and/or a detailed change log,
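When a default genuinely does need changing, script the change and keep the script in the change log rather than clicking through the GUI. A minimal sp_configure sketch (the value and the comment are placeholders showing the kind of note a change log entry might carry):

```sql
-- Change log entry: cap max server memory on a dedicated 32GB host
-- to leave headroom for the OS (value appropriate to this server only).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```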
Placing data and transaction logs on the same physical disk(s),
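If data and log files have ended up on the same spindles, the log file can be relocated with ALTER DATABASE; the database must be offline for the physical move (database, file, and path names here are hypothetical):

```sql
ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- Physically copy SalesDB_log.ldf to the L: drive before this step
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_log, FILENAME = N'L:\SQLLogs\SalesDB_log.ldf');
ALTER DATABASE SalesDB SET ONLINE;
```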
Storing backups on the same disk as the database files,
Relying on autogrow for file sizing, and leaving the tempdb database at its default size,
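Rather than relying on autogrow, presize files to their expected working size and keep autogrow only as a safety net. A sketch for tempdb (the sizes are placeholders; base real values on observed usage):

```sql
-- Presize tempdb so it doesn't grow from its tiny default at runtime;
-- fixed-size growth increments avoid runaway percentage-based growth.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 10GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2GB, FILEGROWTH = 512MB);
```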
Not taking backups, not checking their validity, and/or not practicing and documenting various recovery situations. All of these are equally bad,
Leaving the database in the full recovery model without taking transaction log backups. A 2GB database with a 200GB transaction log ... yes, we've all seen that before,
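A minimal sketch of the basic discipline (database name and paths are hypothetical; a real routine would also restore backups periodically to prove they work end to end):

```sql
-- Full backup with checksums, plus a verification pass
BACKUP DATABASE SalesDB TO DISK = N'B:\Backup\SalesDB.bak' WITH INIT, CHECKSUM;
RESTORE VERIFYONLY FROM DISK = N'B:\Backup\SalesDB.bak' WITH CHECKSUM;

-- In the full recovery model, schedule frequent log backups too,
-- or the transaction log will grow unchecked
BACKUP LOG SalesDB TO DISK = N'B:\Backup\SalesDB_log.trn' WITH CHECKSUM;
```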
Implementing database mirroring in high-safety (synchronous) mode without considering network latency and/or transaction log usage from index maintenance,
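The safety mode is a one-line change, which makes it easy to pick synchronous mode without thinking through the cost of hardening every log record on the mirror before commit (database name hypothetical):

```sql
-- High safety (synchronous): commits wait for the mirror; sensitive to network latency
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL;

-- High performance (asynchronous): no commit-time wait, at the risk of some data loss
-- ALTER DATABASE SalesDB SET PARTNER SAFETY OFF;
```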
Not running regular DBCC checks,
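A scheduled check might look like this (NO_INFOMSGS keeps the output to actual errors; ALL_ERRORMSGS reports every error rather than only the first few per object; database name hypothetical):

```sql
DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```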
Running REPAIR_ALLOW_DATA_LOSS as the primary/default recovery response, and not following up corruption events with a root cause analysis,
Not evaluating index usage and/or fragmentation levels as part of an index maintenance routine,
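Rather than blindly rebuilding everything, a maintenance routine can be driven by measured fragmentation via sys.dm_db_index_physical_stats (the database name and thresholds below are illustrative; 5-30% is a commonly cited band for choosing reorganize over rebuild):

```sql
-- Indexes worth maintaining: meaningfully fragmented and large enough to matter
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(N'SalesDB'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;
```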
Updating statistics using the default sampling rate after a full index rebuild. A rebuild already regenerates index statistics with the equivalent of a full scan, so a default-sampled update afterwards actually degrades them,
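If column statistics do need refreshing after a rebuild, the COLUMNS option limits the update to non-index statistics so the freshly rebuilt index statistics are left alone (table name hypothetical):

```sql
-- Rebuild regenerates index statistics with a full scan as a by-product
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- Update only column (non-index) statistics, and do it with a full scan
UPDATE STATISTICS dbo.Orders WITH FULLSCAN, COLUMNS;
```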
Using SQL Profiler in place of server-side traces, and using it as the primary performance analysis/tuning technique,
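A scripted server-side trace writes to a file on the server instead of streaming every event to the Profiler GUI (Profiler can export the trace definition as a script for you), and the results can then be queried in T-SQL (trace file path hypothetical):

```sql
-- List traces currently running; is_rowset = 1 indicates a Profiler-style
-- rowset trace rather than a lighter-weight server-side file trace
SELECT id, path, is_rowset FROM sys.traces;

-- Load a server-side trace file for analysis
SELECT TOP 100 *
FROM fn_trace_gettable(N'T:\Traces\tuning.trc', DEFAULT);
```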
Manual administration with SQL Server Management Studio. For maximum efficiency and minimal errors, tasks should be scripted and automated, along with appropriate monitoring and alerting mechanisms such as MOM or SQL Agent operators and alerts.