Migrating Oracle on UNIX to SQL Server on Windows


Published: April 27, 2005
On This Page

Appendix A: SQL Server for Oracle Professionals
Appendix B: Getting the Best Out of SQL Server 2000 and Windows
Appendix C: Baselining
Appendix D: Installing Common Drivers and Applications
Installing FreeTDS
Installing unixODBC
Installing ActiveState Perl
Appendix E: Reference Resources

Appendix A: SQL Server for Oracle Professionals

One of the assumptions of this guide has been that both Oracle and Microsoft® SQL Server™ database administration experience is available on the project team. The success of the migration depends on how well the requirements of the current environment are translated into the SQL Server environment, so the knowledge and involvement of the custodians of the current environment, the Oracle DBAs, is very important. Using separate sets of Oracle and SQL Server DBAs for the migration carries the disadvantages of higher cost and possible communication problems. Training the Oracle DBAs in SQL Server therefore serves three purposes: performing the migration, retaining DBAs with valuable knowledge of the business and the databases, and preparing them to manage the new SQL Server environment.

The purpose of this appendix is to provide a primer for Oracle DBAs in the workings of SQL Server and its administration. The transition from Oracle to SQL Server is eased by the several similarities that exist between the two RDBMSs. Some of the key commonalities include:

  • Relational engine. Both Oracle and SQL Server use an optimizer to generate an optimal execution plan from alternative solutions using statistics and access paths. The execution plan can be influenced by optimizer hints.

  • Process architecture. Both Oracle and SQL Server have specialized processes for user connections (shared) and dedicated database functions. SQL Server uses threads and provides CPU affinity, features that are found in Oracle on Microsoft Windows®.

  • Memory architecture. In both Oracle and SQL Server, memory is broken up into buffers or caches with separate memory areas for SQL, procedural SQL, data dictionary, and session. Database buffers or caches are manipulated in terms of pages/blocks. Both have similar buffer replacement policies (Least Recently Used policy).

  • Storage architecture. In both Oracle and SQL Server, the physical database is structured as data files, system files, transaction logs, and control files. Logical structures to complement the physical structures are hierarchical in nature.

  • Backup options. Oracle and SQL Server provide various options for backing up databases, such as online backups, full and partial backups, and transaction log backups.

  • Recovery model. Both Oracle and SQL Server use transaction logs (redo) and rollback. Recovery is possible using single file backups, transaction logs, and so on.

  • Tools. Both Oracle and SQL Server provide a graphical Enterprise Manager console as well as command-line SQL clients.


An understanding of SQL Server architecture and how it compares and contrasts with Oracle is fundamental in shaping the migration as well as extracting the optimal performance out of the SQL Server platform.

Oracle and SQL Server are very similar in their architecture and internal workings. However, the same terms can have different meanings in the two environments. For example, in Oracle the term instance refers to the memory and processes that support an Oracle database, whereas in SQL Server the term instance encompasses the memory and processes as well as the user databases. Irrespective of the terminology used, SQL Server uses memory and process components in a manner similar to Oracle. In this discussion, SQL Server is presented to the Oracle DBA using an Oracle-like view.

Database and Instance

A database, by definition, is the repository for data and metadata (data dictionary). This definition is universal in nature. In Oracle, the term database refers specifically to the files that are used to store the database's data and metadata. The term instance refers to the memory structures (the System Global Area is the main component) and processes that are required to access and perform work against the database. In SQL Server, the term instance refers collectively to the data files, memory, and processes. Figure A.1 illustrates the similarities between instances of Oracle and SQL Server with respect to memory and processes. Only the most important components of an instance, the SGA and the processes, are covered in the figure. Details of the SQL Server architecture can be found at http://msdn.microsoft.com/SQL/sqlarchitecture/default.aspx.

Figure A.1 Oracle and SQL Server 2000 – High level comparison of Instance and Database in Oracle and SQL Server to show architectural similarities

Apart from the defined meaning of the terms instance and database in Oracle and SQL Server, the terms are used very loosely to mean an occurrence of the database. The phrase “multiple instances or multiple databases running on a single database server” is a typical usage of this terminology. Database administrators should be able to infer the meaning from the context.

What can be confusing with SQL Server is the presence of several system and user-created "databases" inside a single SQL Server installation. Hence, the term database system is used here to mean an occurrence of Oracle or SQL Server.

Multiple Database Systems (Instances)

In Oracle, multiple database systems can be created on a single server and the creation is independent of the software installation (using the CREATE DATABASE SQL command). The same is not true with SQL Server. The initial database system (default or named) is created as part of the software installation. The software distribution is also needed to create additional database systems. For information on working with named and multiple instances of SQL Server 2000, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/instsql/in_runsetup_2xmb.asp.

The following components are shared between all of the instances running on the same computer:

  • There is only one SQL Server 2000 program group (Microsoft SQL Server) on the computer, and only one copy of the utility represented by each icon in the program group. There is only one copy of SQL Server Books Online.

  • The versions of the utilities in the program group are from the first version of SQL Server installed on the computer.

  • There is only one copy of the MSSearchService that manages full-text searches against all of the instances of SQL Server on the computer.

  • There is only one copy each of the English Query and Microsoft SQL Server 2000 Analysis Services servers.

  • The registry keys associated with the client software are not duplicated between instances.

  • There is only one copy of the SQL Server development libraries (include and .lib files) and sample applications.

Extending out to the client tier, the Oracle clients and SQL Server clients connect to databases in similar, albeit proprietary, protocols: Transparent Network Substrate (TNS) for Oracle and Tabular Data Stream (TDS) for SQL Server. Figure A.2 compares the user-instance-database interaction paths.

Figure A.2 Oracle and SQL Server 2000 — User-instance database interaction compared

Unlike Oracle, SQL Server does not store client configuration information in an operating system file; it uses registry keys to store such information. By default, SQL Server is configured to listen on the TCP/IP network protocol, which should suffice for most installations migrating from Oracle. The server network utility (part of the server installation) can be used to configure SQL Server to listen on named pipes, multiprotocol, NWLink IPX/SPX, Banyan VINES, and AppleTalk protocols.

On the client side, the client network utility (part of the client installation) can be used to set up alias names for SQL Servers. The alias names can be mapped to an IP address or a named pipe address. The naming varies based on the type of installation. Clients can connect to:

  • A default instance by specifying the servername.

  • A named instance by specifying the servername\instancename.

The proper instance naming will have to be used when defining Data Source Names (DSNs) and in the connection strings used by ODBC, ADO, DBI::DBD, Enterprise Manager, Query Analyzer, and the isql utility.

Database Architecture

A database has one or more physical data files that contain all the database data. This fact has not changed from the early days of data repositories, such as the Sequential Access Method (SAM), Indexed Sequential Access Method (ISAM), and Virtual Storage Access Method (VSAM), to the modern-day relational database management systems (RDBMS). Though hardware throughput has improved in the past several years, the improvement in data access rates is not purely hardware-related; it is also due in part to the evolution of database storage architectures.

Physical Storage Architecture

The physical architecture is made up of files that contain the system (or catalog) and application data. As with Oracle, SQL Server also has support for raw devices. The physical architecture is used to provide separation of data based on its type: metadata from user data, heap data from index data, user data from DBMS data (including transaction logs), and permanent data from temporary data.

The Oracle physical architecture is made up of data files, redo log files, and control files. SQL Server database systems have four system databases (master, model, tempdb, and msdb) and one or more user databases. Each of these databases has one or more files. Each database has its own transaction log files, which are separate from the data files.
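This separation of data files from transaction log files can be sketched with the CREATE DATABASE statement. The database name, file paths, and sizes below are illustrative, not taken from this guide:

```sql
-- Hypothetical database with the data file and the transaction log
-- placed on separate drives (analogous to separating Oracle datafiles
-- from redo log files).
CREATE DATABASE SalesDB
ON PRIMARY
    ( NAME = SalesDB_data,
      FILENAME = 'D:\MSSQL\Data\SalesDB_data.mdf',
      SIZE = 500MB,
      FILEGROWTH = 100MB )
LOG ON
    ( NAME = SalesDB_log,
      FILENAME = 'E:\MSSQL\Log\SalesDB_log.ldf',
      SIZE = 200MB,
      FILEGROWTH = 50MB );
```

Unlike Oracle, where redo logs are shared by all tablespaces of the database, each SQL Server database carries its own transaction log.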

Logical Storage Architecture

For the convenience of administration and efficiency of use, the physical storage is broken down into smaller logical structures. By dividing each physical data file into several logical structures, and allocating space to each database object in increments of these smaller logical structures, access to the database objects can be insulated from the physical file storage on the operating system. SQL Server has a storage hierarchy which is similar to Oracle’s block-extent-segment-tablespace implementation.

This enables you to load smaller chunks of data into memory for faster data access and also enables you to move the physical location of a data file in the file system, transparent to the database objects or the applications that access them. Figure A.3 shows the hierarchy of storage structures available in Oracle and SQL Server.

Figure A.3 Physical and Logical storage structure hierarchies in Oracle and SQL Server

In SQL Server, the term page is used instead of block. The data files are formatted into pages of the same size (8 KB). The unit of transfer for data between storage and database memory is a page (in Oracle, a block). The composition of the SQL Server page is similar to that of the Oracle block: it is made up of a page header, data rows, and a row offset array (row directory). Free and used space is also tracked and managed similarly. SQL Server does not allow rows larger than 8,060 bytes. This restriction, however, does not apply to rows containing large data types such as text and image, which can be stored separately.

Although the data is stored in blocks, the block is too small a unit for allocation to the database objects. A bigger unit called extent, which corresponds to a specific number of contiguous data blocks, is used for this purpose. SQL Server only supports fixed size extents of 64 KB (8 pages).

For more details on the two types of extents and how they are used, refer to the “Pages and Extents” article at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da2_4iur.asp.

Also refer to the “Managing Extent Allocations and Free Space” article at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da2_4lgl.asp.

The SQL Server equivalent of the Oracle tablespace is called the filegroup. Each SQL Server database is created with a primary file belonging to the default primary filegroup. Optionally, secondary datafiles can be added to the primary filegroup or additional filegroups can be created. Files and filegroups in SQL Server are implemented along the same lines as datafiles and tablespaces in Oracle.
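The tablespace-to-filegroup analogy can be sketched in T-SQL. The filegroup, file, and table names below are hypothetical:

```sql
-- Add a user filegroup, much as CREATE TABLESPACE would in Oracle
ALTER DATABASE SalesDB ADD FILEGROUP SalesHistoryFG;

-- Add a secondary data file to the new filegroup (analogous to
-- adding a datafile to an Oracle tablespace)
ALTER DATABASE SalesDB
ADD FILE
    ( NAME = SalesHistory_data,
      FILENAME = 'F:\MSSQL\Data\SalesHistory_data.ndf',
      SIZE = 1GB )
TO FILEGROUP SalesHistoryFG;

-- Place a table on the filegroup, as the ON tablespace_name clause
-- does in Oracle
CREATE TABLE SalesHistory
    ( SaleID   int      NOT NULL,
      SaleDate datetime NOT NULL )
ON SalesHistoryFG;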

Instance Architecture

This section covers the two components that make up an Oracle instance, memory and processes, and their SQL Server equivalents.

Memory Architecture

The design of database memory architecture in both Oracle and SQL Server is based on the same objective: to acquire memory from the system and make it available to the RDBMS to perform its work. Because the available memory is a very small percentage of the database size, the configuration of memory is very important to the performance of the database system. Memory use has to be optimized not only for application data, but also for the data dictionary and the needs of the relational engine; for SQL this includes procedures, execution plans, cursors, temporary objects, and sorting.

The SQL Server memory address space is illustrated in Figure A.4:

Figure A.4 SQL Server Address Space

A 32-bit process is normally limited to addressing 2 GB of memory (3 GB if the system was booted using the /3GB switch in boot.ini), even if there is more physical memory available. However, both Oracle and SQL Server can use the Microsoft Windows 2000 Address Windowing Extensions (AWE) API to support up to a maximum of 64 GB of physical memory; the specific amount depends on the hardware and operating system versions. The awe enabled server configuration parameter is available in SQL Server for this purpose. Microsoft Knowledge Base article 274750, "How to configure memory for more than 2 GB in SQL Server," available at http://support.microsoft.com/default.aspx?scid=kb;en-us;274750, provides information on the maximum amount of memory that various Microsoft Windows operating system versions can support and on how to configure the memory options.
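A minimal sketch of enabling AWE with sp_configure follows; the memory value is illustrative and must be chosen for the actual hardware:

```sql
-- awe enabled is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;

-- With AWE on, SQL Server no longer manages memory dynamically,
-- so cap it explicitly (value in MB; illustrative)
EXEC sp_configure 'max server memory', 6144;
RECONFIGURE;

-- The SQL Server service must be restarted for AWE to take effect.
```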

An overview of the internals of memory management facilities of SQL Server 2000 is available at http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnsqldev/html/sqldev_01262004.asp.

The SQL Server 2000 memory manager uses different algorithms to manage memory on different versions of Windows. While memory allocation to the RDBMS is rigidly controlled in Oracle by the configuration (initialization) parameters, the sizes of the components in the SQL Server 2000 address space are auto-tuned dynamically in cooperation between the RDBMS and the operating system. All memory areas within the memory pool are also adjusted dynamically by SQL Server to optimize performance and do not need any administrator input.

The Memory Pool is the SQL Server equivalent of the SGA. The composition of the memory pool can be found at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_sa_1zu4.asp.

Process/Thread Architecture

The process architecture identifies the various database related processes and their functionality. As is true with Oracle on Windows, SQL Server also uses a thread-based architecture. SQL Server does not differ significantly from Oracle in its use of pools of processes (threads) for system functions and user requests. Table A.1 compares SQL Server functionality with respect to Oracle background processes.

Table A.1: Mapping of Oracle and SQL Server Background Processes

Oracle Process (Required Status)*  | Oracle Identifier | Min / Max | SQL Server Process (Required Status)* | Min / Max
Process Monitor (M)                | PMON              | 1 / 1     | Open Data Services (M)                | 1 / 1
System Monitor (M)                 | SMON              | 1 / 1     | Database cleanup / shrinking (M)      | 1 / 1
Database Writers (M)               | DBWn              | 1 / 20    | Lazywriter (M)                        | 1 / 1
Checkpoint Process (M)             | CKPT              | 1 / 1     | Database checkpoint (M)               | 1 / 1
Recoverer (O)                      | RECO              | 0 / 1     | -                                     | 0 / 1
Log Writer (M)                     | LGWR              | 1 / 1     | Logwriter (M)                         | 1 / 1
Archive Processes (O)              | ARCn              | 0 / 10    | -                                     | -
Job Queue Processes (O)            | Jnnn              | 0 / 1000  | SQL Agent (O)                         | 0 / 1
Job Queue Coordinators (O)         | CJQ0              | 0 / 1     | SQL Agent (O)                         | 0 / 1
Queue Monitor Processes (O)        | QMNn              | 0 / 10    | SQL Agent (O)                         | 0 / 1
Parallel Query Slave Processes (O) | Pnnn              | 0 / 3600  | Worker threads (M)                    | 32 / 32767
Dispatcher (O)                     | Dnnn              | 0 / 5     | Network thread (M)                    | 1 / 1
Shared Servers (O)                 | Snnn              | 0 / OS    | Worker threads (M)                    | 32 / 32767

SQL Server employs a sophisticated shared server architecture. On startup, SQL Server creates a User Mode Scheduler (UMS) object for each processor, subject to the affinity mask setting. A pool of worker threads is created by Open Data Services (ODS) to handle user commands, and control of these threads is distributed among the UMS schedulers. This architecture mimics the shared server-dispatcher concept in Oracle. While in Oracle the shared server processes are scheduled by the operating system, SQL Server uses the UMS to schedule worker threads.

The internal workings of the User Mode Scheduler are available at: http://msdn.microsoft.com/SQL/sqlarchitecture/default.aspx?pull=/library/en-us/dnsqldev/html/sqldev_02252004.asp?frame=true.


Relational Engine Architecture

The Relational Engine is the part of the RDBMS that is responsible for parsing, optimizing, and executing the SQL statements received from end users and returning results (also known as fetching) to the end users.

Figure A.5 illustrates the components of SQL Server relational engine.

Figure A.5 Components of SQL Server Relational Engine

Figure A.5 illustrates the main components of the relational engine portion of SQL Server. The illustrated components can be organized into three groupings of subsystems: Compilation, Execution, and SQL Manager. The parser, T-SQL compiler, normalizer, and query optimizer components belong to the compilation subsystem, which processes the SQL statements. These statements typically come in as TDS messages. The SQL Manager, in the middle of the figure, forms the second subsystem, which controls the flow of everything inside SQL Server. Remote Procedure Call (RPC) messages are handled directly by the SQL Manager. T-SQL execution, query execution, and the expression service form the execution subsystem. The query results come out of the expression service and are sent back out by ODS, after formatting into TDS messages. The expression services library performs data conversion, predicate evaluation or filtering, and arithmetic calculations.

The catalog services component handles data definition language (DDL) statements. The UMS is a scheduler internal to SQL Server that handles threads and fibers. The system stored procedures are self-evident.

For a more detailed discussion of the relational engine, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql7/html/sqlquerproc.asp.

The concepts of execution plan, cost based optimization, and hints are common to Oracle and SQL Server.

A white paper on query optimizer and statistics is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql2k/html/statquery.asp.

Transaction Architecture

Users and applications interact with the database using transactions. Both Oracle and SQL Server offer both optimistic and pessimistic concurrency control; pessimistic locking is the default for SQL Server. Microsoft SQL Server supports all four isolation levels: read uncommitted, read committed, repeatable read, and serializable. Read committed is the default isolation level for SQL Server.

Like most RDBMSs, Oracle and SQL Server achieve isolation by using locks to control concurrent access to shared resources (such as schema objects) and their subcomponents (such as data rows), as well as internal database structures. Transactions acquire locks at different levels of granularity. In Oracle, the granularities are row and table, while in SQL Server the granularities are row (RID), key (a row lock within an index), page, extent, table, and database. A major difference between Oracle and SQL Server in their use of locks is that Oracle does not escalate locks, while SQL Server escalates locks to reduce the system overhead of maintaining a large number of finer-grained locks. SQL Server automatically escalates row locks and page locks into table locks when a transaction exceeds its escalation threshold.
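The isolation level is set per session, much like ALTER SESSION SET ISOLATION_LEVEL in Oracle. A minimal sketch, using a hypothetical Inventory table:

```sql
-- Raise the session isolation level above the read committed default
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    -- Shared locks on the rows read are held until commit,
    -- so the quantity cannot change underneath the transaction
    SELECT Quantity FROM Inventory WHERE ItemID = 42;
    UPDATE Inventory SET Quantity = Quantity - 1 WHERE ItemID = 42;
COMMIT TRANSACTION;
```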

Table A.2 compares the types of locking available in Oracle and SQL Server.

Table A.2: Comparison of the Modes of Locking in Oracle and SQL Server

Oracle                    | SQL Server                         | Usage
Share (S)                 | Shared (S)                         | Operations that do not change data, such as SELECT statements
Row Share (RS)            | Update (U)                         | Resources that can be updated, such as SELECT ... FOR UPDATE in Oracle and SELECT statements with the UPDLOCK lock hint in SQL Server
Row Exclusive (RX)        | Exclusive (X)                      | Data modification operations, such as INSERT, UPDATE, DELETE
Share Row Exclusive (SRX) | -                                  | Only allows non-update S and RS locks
Exclusive (X)             | Exclusive (X)                      | Disables all other updates
-                         | Intent Shared (IS)                 | Indicates intention to read some resources lower in the hierarchy
-                         | Intent Exclusive (IX)              | Indicates intention to modify some resources lower in the hierarchy
-                         | Shared with Intent Exclusive (SIX) | Indicates intention to read all resources at a lower level and modify some of them using IX locks
Exclusive DDL             | Schema Modification (Sch-M)        | For performing DDL
Breakable Parse           | Schema Stability (Sch-S)           | For compiling queries
-                         | Bulk Update (BU)                   | For bulk copying data into a table

SQL Server Books Online has very useful information on lock modes, lock hints, lock compatibility, and deadlocks. More information is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_sa2_2sit.asp.
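The UPDLOCK hint mentioned in Table A.2 is the closest equivalent of Oracle's SELECT ... FOR UPDATE. A sketch, using a hypothetical Accounts table:

```sql
-- Oracle:  SELECT balance FROM accounts WHERE account_id = 1001 FOR UPDATE;
-- SQL Server equivalent with lock hints:
BEGIN TRANSACTION;
    SELECT Balance
    FROM Accounts WITH (UPDLOCK, ROWLOCK)   -- update lock held until commit
    WHERE AccountID = 1001;

    UPDATE Accounts
    SET Balance = Balance - 100
    WHERE AccountID = 1001;
COMMIT TRANSACTION;
```

The U lock prevents another transaction from acquiring a second U or X lock on the same rows, avoiding the lost-update problem without blocking plain readers.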

Security Architecture

The database has security mechanisms such as logins, privileges, and roles to provide control over privileges to connect to the database, access schema objects, and manipulate their structure and data. Both Oracle and SQL Server utilize a layered approach to security, from logins to roles to system (statement) and object privileges.


Both Oracle and SQL Server provide logins for authorized users to connect to the database. In Oracle, the login is called user or username, and in SQL Server, it is called login identifier or simply login. Any operation the user can perform is controlled by the privileges granted to the login.


Both Oracle and SQL Server allow authentication by the operating system or by the database (server). In SQL Server, the operating system mode is called Windows Authentication Mode and the database mode is called SQL Server Authentication Mode. SQL Server can operate in either Windows authentication mode or Windows and SQL Server authentication mode.


The features and functionality related to passwords, such as complexity, aging, and lockout, that exist for Oracle logins can be found only with Windows-authenticated logins, not with SQL Server-authenticated logins.


Oracle and SQL Server have a similar model for securing schema objects and application data, as well as system objects and metadata, from unauthorized users. This is achieved by creating two sets of privileges: system (statement) privileges (permissions) and object privileges (permissions). Privileges can be assigned to users and roles using the GRANT statement and removed using the REVOKE statement. Roles, which grant privileges to users indirectly, are discussed next.
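As in Oracle, GRANT and REVOKE cover both categories of permissions. The login and table names below are hypothetical:

```sql
-- Statement (system) permission, comparable to GRANT CREATE TABLE in Oracle
GRANT CREATE TABLE TO app_user;

-- Object permissions on a specific table
GRANT SELECT, INSERT ON Orders TO app_user;

-- Removing a previously granted permission
REVOKE INSERT ON Orders FROM app_user;
```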

For more information on the privileges available in SQL Server and their management, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_security_94dv.asp.


Oracle provides predefined roles, the most familiar being CONNECT, RESOURCE, DBA, and so on. Similarly, SQL Server has several predefined roles with specific permissions. There are two types of predefined roles: fixed server roles and fixed database roles. Both Oracle and SQL Server offer user-defined roles.

For information on creating user-defined roles in SQL Server, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_security_6x5x.asp.
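In SQL Server 2000, user-defined roles are created with system stored procedures rather than a CREATE ROLE statement. A sketch with hypothetical role, table, and user names:

```sql
-- Create a user-defined database role (Oracle: CREATE ROLE sales_reader)
EXEC sp_addrole 'sales_reader';

-- Grant privileges to the role rather than to individual users
GRANT SELECT ON Orders TO sales_reader;

-- Add a database user to the role (Oracle: GRANT sales_reader TO app_user)
EXEC sp_addrolemember 'sales_reader', 'app_user';
```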

Microsoft SQL Server 2000 SP3 Security Features and Best Practices, available at http://www.microsoft.com/sql/techinfo/administration/2000/security/securityWP.asp, provides a detailed account of the SQL Server security model and best practices.

Data Dictionary

The data dictionary, referred to in SQL Server as the system catalog, is broken up into a system-level component in the master database and individual database-level components in each of the other databases. It has the following characteristics:

  • Each of the databases contains tables to maintain its own database objects (tables, indexes, constraints, users, privileges, replication definition) and other system structures (filegroups, files).

  • The centralized system catalog in the master database contains information that is a combination of Oracle control files and data dictionary, such as individual database names and primary file locations, server-level logins, system messages, configuration (initialization parameter) values, remote servers, linked servers, system procedures (comparable to Oracle's DBMS_* supplied packages), and so on.

The features of the data dictionary that an Oracle DBA is familiar with — system tables, views, functions, and procedures — can also be found in SQL Server in the following forms:

  • System tables

    These serve the same function as Oracle’s data dictionary tables. The system tables should be used for information only.

    System tables available under the master database store server-level system information. For example:

    • master..syslogins — Available login accounts

    • master..sysdatabases — Available databases

    The following tables store database-level system information for the respective database:

    • sysusers — Available user accounts with privileges on database

    • sysobjects — Available objects in database

    • sysindexes — Available indexes in database

    For a complete listing of system tables and their use, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sys_00_690z.asp.

  • Information schema views

    SQL Server offers information schema views which are equivalent to the ALL_ views found in Oracle. Information can be retrieved from these views by querying the corresponding INFORMATION_SCHEMA.view_name. For example, the views that are visible to a user can be verified by using INFORMATION_SCHEMA.VIEWS. Below is a list of common information schema views.

    • information_schema.tables — Available tables in a database

    • information_schema.columns — Available columns in a database

    • information_schema.table_privileges — Available privileges on tables in a database

    The information schema views topic is discussed at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_ia-iz_4pbn.asp.

  • System functions

    Equivalents for built-in Oracle SQL functions can be found in SQL Server under the System functions heading. Some commonly used functions include:

    • USER_NAME(id)

    • GETDATE()

    • SYSTEM_USER

    For a list of all available SQL Server system functions, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_fa-fz_79f7.asp.

  • System stored procedures

    The system stored procedures aid the administrator in performing common functions by supplementing the DDL. They also provide information from system tables, prepackaged to save the administrator from writing his or her own queries and views. The system stored procedures can be considered to be the equivalent of Oracle's DBMS and UTL packages. These procedures are designed to be comprehensive and remove the burden of having to remember DDL syntax and system table names.

    For a complete listing of system stored procedures, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sp_00_519s.asp
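The three dictionary mechanisms above can be compared side by side in a few queries. The Orders table name is hypothetical:

```sql
-- System tables (Oracle analogy: querying DBA_/V$ views directly)
SELECT name FROM master..sysdatabases;            -- all databases on the server
SELECT name FROM sysobjects WHERE type = 'U';     -- user tables in the current database

-- Information schema views (Oracle analogy: the ALL_ views)
SELECT table_name FROM INFORMATION_SCHEMA.TABLES;

-- System stored procedures (Oracle analogy: DESCRIBE and the DBMS_ packages)
EXEC sp_help 'Orders';   -- columns, indexes, and constraints of a table
EXEC sp_helpdb;          -- size and file layout of each database
```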


This section provides brief introductions to topics such as export, import, backup, recovery, and monitoring.


In SQL Server, data can be imported and exported using the Data Transformation Services (DTS) tool, Transact-SQL statements (INSERT INTO and BULK INSERT), and Bulk Copy utility (bcp), which provides the same functionality as SQL*Loader.


SQL Server does not have an equivalent of the Oracle export utility for moving data into a binary format file. Individual schema objects, however, can be backed up to text-based flat files in any of several supported file formats, or exported to any of several OLE DB destinations, and restored using tools and utilities.

One of the following methods can be used to extract or spool data into flat files:

  • Using the osql.exe tool, which is similar to SQL*Plus in the way users can run commands at the prompt

  • Using bcp, the bulk copy utility

  • Using the Data Transformation Services (DTS) tool


SQL Server likewise does not have an equivalent of the Oracle import utility, but as mentioned in the previous section, individual schema objects that have been exported to flat files in any of the several supported file formats, or to any OLE DB destination, can be imported into a database using one of the many tools and utilities.

Three ways that data can be imported into SQL Server include:

  • Using the BULK INSERT command, which acts as an interface to the bcp utility. The structure of the BULK INSERT command is similar to the structure of the control file used in SQL*Loader.

  • Using bcp, the bulk copy utility.

  • Using the Data Transformation Services (DTS) tool.

The functionality and use of bcp, BULK INSERT, and DTS for moving data from Oracle into SQL Server have been demonstrated in Chapter 8. DTS has also been discussed in more detail in Appendix B: "Getting the Best out of SQL Server 2000 and Windows."
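The control-file analogy can be seen in a minimal BULK INSERT sketch; the database, table, file path, and delimiters are illustrative:

```sql
-- Roughly equivalent to a SQL*Loader control file specifying
-- INFILE, FIELDS TERMINATED BY, and the target table
BULK INSERT SalesDB.dbo.Orders
FROM 'D:\load\orders.dat'
WITH (
    FIELDTERMINATOR = '|',    -- column delimiter in the flat file
    ROWTERMINATOR   = '\n',   -- record delimiter
    FIRSTROW        = 1       -- no header row to skip in this file
);
```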



In Oracle, backup methods can be categorized at a high level as physical and logical backups. Comparable methods for backing up the database can be found in SQL Server. Table A.3 provides a comparison of the available methods:

Table A.3: Backup Methods in Oracle and SQL Server

Backup Method      | Oracle                    | SQL Server
Logical            | Export / Import utilities | bcp or DTS
Physical (offline) | Cold backup               | Cold backup
Physical (online)  | Hot backup                | Full, file, or filegroup backup
Incremental        | Incremental backup        | Transaction log or Differential
Log                | Archive log               | Transaction log

Logical Backups

The goal of a logical backup is to be able to recover at the individual schema object level. Although Oracle's export and import utilities are designed for moving Oracle data, they can be used as a supplemental method of protecting data in an Oracle database. It is not recommended to use logical backups as the sole method of data backup.

SQL Server does not support logical backups to proprietary binary format files. Individual schema objects, however, can be backed up to flat files in any of the several supported file formats and restored using tools such as the bcp utility and DTS tools.

Physical Backups

In Oracle, a physical backup involves making copies of database files including datafiles, control files, and, if the database is in ARCHIVELOG MODE, archived redo log files. The same is true in SQL Server, though a backup is viewed to be at the database level. Larger databases can utilize filegroup backups to back up sections of a database. The physical backups available are:

  • Cold (offline) backups. A cold backup or a closed backup can be described as a backup of one or more database files taken while the database is closed and is not available for user access. Even though the term cold backup is not mentioned in the documentation, the method can be applied for performing backups in SQL Server.

  • Online backups. A backup is termed an online backup or a hot backup if it is taken while the database is open and is accessible to users. A SQL Server full backup backs up the complete database and includes transaction log entries. File and filegroup backups can be made using the BACKUP DATABASE statement or through Enterprise Manager -> Backup Database -> File and filegroup. Transaction logs can be backed up separately as well.
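The statements below sketch a full (online) database backup followed by a separate transaction log backup; the database name and backup paths are hypothetical:

    -- Hypothetical example: full online backup of a database
    BACKUP DATABASE Sales
        TO DISK = 'E:\backup\Sales_full.bak'
        WITH INIT   -- overwrite existing backup sets on this device

    -- Hypothetical example: separate transaction log backup
    BACKUP LOG Sales
        TO DISK = 'E:\backup\Sales_log.trn'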

Incremental Backups

Physical incremental backups are performed to capture only the changed blocks, thereby reducing the time and space needed for the backups. Incremental backups are performed after an initial complete backup has been performed. In SQL Server, differential backups contain only data that has changed since the last full backup. Differential database backups can be made using the BACKUP DATABASE statement or through Enterprise Manager -> Backup Database -> Database - differential.
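A differential backup uses the same BACKUP DATABASE statement with the DIFFERENTIAL option; the database name and path below are hypothetical:

    -- Hypothetical example: back up only what has changed since the last full backup
    BACKUP DATABASE Sales
        TO DISK = 'E:\backup\Sales_diff.bak'
        WITH DIFFERENTIAL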


Recovery Models

The three recovery models offered by SQL Server (Full, Bulk-logged, and Simple) are discussed below.

  • Full recovery model. This is used when the data is critical and must be recoverable to the point of failure. All recovery options are available in this recovery model. This is equivalent to Oracle's ARCHIVELOG mode (when the NOLOGGING option is not specified at any level), where all transactions are logged and logs are archived for full recoverability.

  • Bulk-logged recovery model. This is the mid-level recovery model available for bulk operations such as bulk copy, SELECT INTO, and text processing. This recovery model does not provide point-in-time recovery past the beginning of any bulk operation. This is similar to setting the NOLOGGING option at the tablespace, object level, or for individual commands to avoid logging of bulk operations.

  • Simple recovery model. This is used when it is not important to completely recover the database or the lost data can be recreated. This recovery model has the lowest logging overhead. This is equivalent to running the database in NOARCHIVELOG mode.
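The recovery model is set per database with ALTER DATABASE; the database name below is hypothetical:

    -- Hypothetical example: choose a recovery model for a database
    ALTER DATABASE Sales SET RECOVERY FULL
    -- ALTER DATABASE Sales SET RECOVERY BULK_LOGGED  -- minimal logging for bulk operations
    -- ALTER DATABASE Sales SET RECOVERY SIMPLE       -- comparable to NOARCHIVELOG mode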

For more information on the backup and recovery architecture of SQL Server, refer to http://msdn.microsoft.com/library/en-us/architec/8_ar_aa_9iw5.asp.

A detailed account of the backup and recovery options and techniques, as well as guidance for performing these administrative tasks, is available at http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops4.mspx.


Monitoring

Monitoring should be performed for availability, errors, and performance.


Availability

Availability should cover the server(s), node(s), database services, and database.

  • Server. Monitoring of the server should also include the network access path from the application or client to the server. The most common method to achieve this is a ping test.

  • Database Services. A common mode of monitoring Oracle databases is to check on the instance-specific background processes such as SMON or PMON. Because SQL Server uses a thread-based architecture rather than separate background processes, only the main process (sqlservr.exe) can be monitored.

  • Database. Even when the services are running, connecting to a database may fail due to errors. A second level of monitoring, involving connecting to the database and performing some simple tasks, should expose such errors.

The command line utility scm (Service Control Manager) can be used to check the health of a SQL Server instance. Scripts can also be executed using SQL Server command line utilities such as isql or osql.
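A second-level availability check of this kind can be scripted with osql; the server name below is hypothetical, and the -E option requests Windows authentication:

    rem Hypothetical example: connect and run a trivial query to verify the database responds
    osql -S myserver -E -Q "SELECT @@SERVERNAME, GETDATE()"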


Errors

In a database environment, errors, failures, or faults can occur in a number of areas. Errors have to be monitored by viewing (or mining) the error logs. The server event logs and the database instance error logs are good sources of errors and violations.

The event log, which contains entries for all database server instances and other applications running on the server, can be accessed through the Microsoft Windows Event Viewer utility. This can be accessed through Start -> Programs -> Administrative Tools -> Event Viewer. The events in the event log can be filtered by type, source, date range, and so on.

SQL Server has predefined error codes and error messages (see master..sysmessages for a complete list) that give information such as unique error number, severity level, error state number, and error message on all errors.
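For example, the predefined message for a given error number can be inspected directly; error 208 ("Invalid object name") is used here purely for illustration:

    -- Look up the severity and text of a predefined error message
    SELECT error, severity, description
    FROM master..sysmessages
    WHERE error = 208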

The SQL Server error logs provide complete information regarding all events, auditing messages, errors, and so on, that have occurred against an instance. The error logs can be viewed using a text editor or through Enterprise Manager. In Enterprise Manager, the logs can be found using SQL Server Group -> Server Instance -> Management -> SQL Server Logs.
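The current error log can also be read from a query tool using the xp_readerrorlog extended stored procedure (undocumented in SQL Server 2000, but widely used):

    -- Read the current SQL Server error log from Transact-SQL
    EXEC master..xp_readerrorlog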


Performance

Performance Monitor and Task Manager are two of the several tools that can be used to monitor resource usage at the server level as well as the SQL Server instance level. The following is a listing of the methods by which various server and database resources can be monitored:

  • CPU. Task Manager: Performance, Performance Monitor: Processor

  • Memory. Task Manager: Performance, Performance Monitor: Memory

  • Process. Task Manager: Processes, Performance Monitor: Process

  • Virtual Memory. Task Manager: Performance, Performance Monitor: Paging File

  • Network. Task Manager: Networking, Performance Monitor: Network Interface

  • I/O. Task Manager: Processes, Performance Monitor: LogicalDisk and Performance Monitor: PhysicalDisk

  • Storage. My Computer, Windows Explorer


Appendix B: Getting the Best Out of SQL Server 2000 and Windows

This appendix provides references on various aspects of a SQL Server 2000 deployment that can be exploited to ensure optimal performance in an enterprise environment. Performance is also critical to scalability.

Several of the links provided here have been referenced elsewhere in this guidance, but they are repeated here for your convenience.


Performance

The performance of a SQL Server installation depends on several factors, including database design, application design, query design, access methods (indexing schemes and views), and hardware and software resources. The references in this area cover the breadth of topics that are critical to getting the best performance out of SQL Server.


Scalability

Scalability is the capability to handle increased data volume, as in Very Large Databases (VLDBs), and increased activity. The size of a VLDB may be due to a few large tables, a large number of smaller tables, or a combination of both. Activity scalability is measured in the number of user connections, response time, throughput, and so on.

Oracle and Microsoft are continually adding features and functionality directed at improving their respective products to meet these demands. While some of these features, such as clustering, replication, and parallelism, are highlighted in specification sheets and documentation, much of the scalability is built in at the lower levels of the architecture, such as the use of bitmaps instead of lists to represent storage, or in-row versus out-of-row data storage. The overall scalability depends not only on the RDBMS, but also on the hardware and the application. Questions about scalability relate to scaling out, scaling up, data partitioning, 64-bit architecture, storage technology, the operating system, and other related topics.

High Availability

Availability refers to the ability to connect to and perform actions against the database server, the database, and the data. The features in SQL Server that contribute to availability are discussed here.

"SQL Server 2000 High Availability Series" provides the most complete set of white papers on planning and deploying a highly available environment containing SQL Server. This series is available at: http://www.microsoft.com/technet/prodtechnol/sql/2000/deploy/harag01.mspx.

Also refer to Microsoft SQL Server 2000 High Availability (Microsoft Press, 2003).

Apart from covering the hardware technologies, the three SQL Server technologies covered in this series are:

  • Clustering

  • Standby database or log shipping

  • Replication

These topics are discussed under the following headings.


Clustering

Both Oracle and SQL Server offer high availability support through the use of clusters of servers. Both DBMSs depend on the underlying hardware and system software to provide cluster management to detect and manage failures. SQL Server can be run in an Active-Passive or Active-Active configuration using the Microsoft Cluster Service (MSCS).

Information on Windows clustering and SQL Server failover clustering is available at http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/failclus.mspx.

Standby Database or Log Shipping

Both Oracle and SQL Server offer the standby database capability where a copy (secondary) of the entire (primary) database is maintained in a separate location to provide recovery from server node and database failures as well as catastrophic disasters. Changes made to the primary are captured in redo or transaction logs, shipped to the secondary site, and applied to the standby database. The application of the logs to the standby database can be controlled to be near-synchronous, or lag behind the primary to suit recovery needs.



Replication

Replication is a set of technologies for copying and distributing data and database objects from one database to another and synchronizing the data between them for consistency. Replication is based on the master-slave technique (called publisher-subscriber in SQL Server) in both Oracle and SQL Server. Replication is popularly used to provide high availability of shared data over a WAN.

SQL Server offers the following forms of replication:

  • Snapshot Replication. The publisher distributes data exactly as it appears at a particular point in time. This is comparable to refreshing a read-only materialized view in Oracle (the closest SQL Server analog of a materialized view is an indexed view).

  • Transactional Replication. This is a progression from the snapshot, with changes sent to the subscriber at the transaction level. This enables data modifications made at the publisher to be propagated to the subscribers and also enables subscribers to make occasional updates back to the publisher.

  • Merge Replication. Merge replication is similar to the multimaster replication technique in Oracle; it allows updatable copies of the same objects to be maintained at several SQL Server sites, with changes synchronized between them.

For an overview, planning, tools, implementation, and other details of replication, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/replsql/replover_694n.asp.

Features and Tools

A brief overview of some of the tools that are very valuable in managing a SQL Server environment is provided under the following headings.

Performance Monitor

Performance Monitor is the most important tool in monitoring SQL Server performance. When SQL Server is installed on a server, it adds a set of measuring and monitoring counters for SQL Server instances.

In addition to visually monitoring performance counters, the performance of SQL Server can be logged using counter logs. All counters available for monitoring are also available for logging; counter log setup is configured separately from interactive monitoring.

Similar to Oracle Enterprise Manager, where events can be set up to track space, resources, availability, performance, and faults, alerts can be set up to track SQL Server activity using performance monitor alerts and SQL Server Agent.

The "Monitoring SQL Server Performance" chapter in SQL Server 2000 Administrator’s Pocket Consultant (Stanek, 2000) is a good reference for using Performance Monitor. It is available at http://www.microsoft.com/technet/prodtechnol/sql/2000/books/c10ppcsq.mspx.


SQL Profiler

The SQL Profiler utility can be used to monitor the performance of instance components, Transact-SQL statements, stored procedures, and auditing activity. The utility provides a rich set of functionality that lets you select which events to capture, which information to report (data columns), and which filters to apply.

The article "How To: Use SQL Profiler" provides information on how to track long running queries and heavily used stored procedures. It is available at http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnpag/html/scalenethowto15.asp.

SQL Server Health and History Tool (SQLH2)

This tool allows collection of data from a running instance of SQL Server which can then be reviewed and reported on. For more information on SQLH2, refer to http://www.microsoft.com/downloads/details.aspx?FamilyID=eedd10d6-75f7-4763-86de-d2347b8b5f89&DisplayLang=en.

The tool is especially useful for measuring service uptime and performance trends.


Data Transformation Services (DTS)

Data Transformation Services (DTS) supports import and export of data by providing a set of tools that let you extract, transform, and consolidate data from disparate sources into single or multiple destinations supported by DTS connectivity. By using DTS tools to graphically build DTS packages, or by programming a package with the DTS object model, you can create custom data movement solutions tailored to specialized data transfer needs.

For more information on DTS, refer to the "Data Transformation Services (DTS) in Microsoft SQL Server" white paper available at http://msdn.microsoft.com/SQL/sqlwarehouse/DTS/default.aspx.

Query Analyzer

The SQL Query Analyzer is a graphical tool that can be used to execute queries directly against the database. It also provides functionality for debugging query performance problems, such as Show Execution Plan, Show Server Trace, Show Client Statistics, and the Index Tuning Wizard.

An overview of Query Analyzer is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/qryanlzr/qryanlzr_1zqq.asp.

For an overview of some of the important features of Query Analyzer, refer to http://www.sql-server-performance.com/query_analyzer_tips.asp.

Index Tuning Wizard

The Index Tuning Wizard allows you to select and create an optimal set of indexes and statistics based on input from Query Analyzer or a SQL Profiler log for a SQL Server database without requiring an expert understanding of the structure of the database, the workload, or the internals of SQL Server.

For information on the features and use of the Index Tuning Wizard, refer to "Index Tuning Wizard for Microsoft SQL Server 2000" at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql2k/html/itwforsql.asp.

Appendix C: Baselining

This appendix is concerned with creating baselines, capturing statistics, and comparing and reporting results.

Creating Baselines

Baselines are a collection of facts and figures that reflect the state of a system. Most commonly, the term baseline is used with respect to performance. In the Stabilizing Phase, this term is used to refer to a snapshot of the entire system specification.

Unlike a new development project where there is no reference point, the state of the current solution can be captured in a migration project. This can be used as a baseline for comparison with the migrated solution being tested.

After every successful cycle of testing, it is recommended that all elements, such as scripts, applications, application configuration, database configuration, and databases, be backed up and baselined with appropriate tags for identification. Baselines from each cycle of testing can be compared with each other to see the progression towards a solution that can be packaged for production.

You should create a baseline during the Stabilizing Phase. These baseline values can be used later for troubleshooting performance problems and for proactive monitoring. Performance Monitor (also known as the System Monitor tool) can be used to monitor key system resources (CPU, memory, disk I/O, and network, as well as SQL Server counters) while you run the Stabilizing Phase tests.

The following tables (A.4 through A.8) list some of the performance monitor counters that you can use for creating a baseline. These counters can also be used for general monitoring of the solution after deployment.

Table A.4: Memory Management

Performance Monitor Counters


\Memory\Available Mbytes

Available MBytes is the amount of physical memory available to processes running on the computer, in Megabytes. Lack of memory puts pressure on the CPU and Disk I/O.


Pages/sec is the rate at which pages are read from or written to disk to resolve hard page faults. Hard page faults occur when a process requires code or data that is not in its working set or elsewhere in physical memory and must be retrieved from disk. Note that some paging will usually be present because of the way the operating system works, but the optimal level for average pages/sec should be close to 0.

Table A.5: Network Analysis

Performance Monitor Counters


\Network Interface (Network card)\Bytes Total/sec

Bytes Total/sec is the rate at which bytes are sent and received over each network adapter, including framing characters. Average value should be less than 50% of NIC capacity.

\Network Segment\% Net Utilization

The percentage of network bandwidth in use on a network segment. Network Monitor Driver can be installed and used to monitor the network utilization. The threshold value varies based on the network configuration.

Table A.6: CPU Monitoring

Performance Monitor Counters


\Processor\% Processor Time

This counter is the primary indicator of processor activity, and it displays the average percentage of busy time observed during the sample interval. It is calculated by monitoring the time that the service is inactive and subtracting that value from 100%.

\Processor\% Privileged Time and \Processor\% User Time

* Privileged Time is the percentage of elapsed time that the process threads spent executing code in privileged mode. Average values above 10% indicate possible CPU pressure.

* User Time is the percentage of elapsed time the processor spends in the user mode.

* Privileged Time should be about 15% less than the total User Time.

\System\Context switches/sec

This counter indicates the combined rate at which all the processors are switched from one thread to another. The pressure on memory can cause page faults, which can cause an increase in context switch/sec value, and a decrease in the overall performance.

\System\Processor Queue Length

This counter indicates the number of threads in the processor queue. There is a single queue for processor time, even on computers with multiple processors. A sustained processor queue length of less than 10 threads per processor is normally acceptable, dependent on the workload. Sustained higher values indicate that there are more threads ready to run than the current number of processors can service in an optimal way.

Table A.7: Disk I/O Monitoring

Performance Monitor Counters


\PhysicalDisk\Avg. Disk Queue Length

\PhysicalDisk\Current Disk Queue Length

* Avg. Disk Queue Length is the average number of requests (both read and write) that were queued for the selected disk during the sample interval.

* Current Disk Queue Length is the number of requests outstanding on the disk at the time the performance data is collected. These numbers should be less than two per usable physical disk in the RAID array, and may be higher on SAN systems.

\PhysicalDisk\Avg. Disk sec/Read

\PhysicalDisk\Avg. Disk sec/Write

Average time in seconds to read/write data to the disk.

Table A.8: SQL Server Monitoring

Performance Monitor Counters


\SQLServer:Access Methods\Free Space Scans/sec

Number of scans initiated to search for free space in which to insert a new record fragment. Free space scans occur on heaps (tables without a clustered index); heavy insert activity on a heap can impact performance, whereas inserts into a table with a clustered index avoid these scans.

\SQLServer:Access Methods\Full Scans/sec

Number of unrestricted full scans of base tables or indexes per second. High values can indicate missing indexes or queries that request too many rows. This baseline counter can also indicate an increase in the use of temporary tables, because they tend not to have indexes.

\SQLServer:Latches\Total Latch Wait Time (ms)

SQL Server uses latches to protect the integrity of internal structures. This counter monitors the total wait time for latch requests that had to wait in the last second.

\SQLServer:Locks(Total)\Lock Timeouts/sec and Lock Wait Time (ms)

Lock Timeouts counter indicates the number of lock requests that timed out. This includes internal requests for NOWAIT locks. Lock Wait Time (ms) indicates the total wait time (in milliseconds) for locks in the last second. Lock Timeouts should be low and Lock Wait Time should be close to zero.

\SQLServer:Locks(Total)\Number of Deadlocks/sec

Number of lock requests that resulted in a deadlock. Should be zero.

\SQL Server:Databases\Transactions/sec

Number of transactions started for the database. It is an excellent indicator of growth related to transaction load.

\SQLServer:Buffer Manager\Buffer cache hit ratio

Percentage of pages that were found in the buffer pool without having to incur a read from disk. The higher the buffer cache hit ratio, the better. In a transaction processing system where a set user base is operating against the database, this value may reach 98-99 percent. If this value is low, adding memory to the SQL Server computer may help. This counter should be examined together with table scan frequency; together they might indicate that indexes or partitioning would improve performance.

\SQLServer:SQL Statistics\SQL Re-Compilations/sec

Recompilation adds overhead on the processor. Consistent high values for this counter should be investigated.

\SQLServer:Batch Requests/sec and \SQLServer:General Statistics\User Connections

These counters, along with Transactions/sec, are a good indicator of load on the server.

\SQLServer:Memory Manager\Memory Grants Pending

Current number of processes waiting for a workspace memory grant. This value should remain around 0.

\SQLServer:Memory Manager\Target Server Memory (KB)

The total amount of dynamic memory the server is willing to consume. On a dedicated server, this value should remain close to the actual physical memory size.

\SQLServer:Memory Manager\Total Server Memory (KB)

The total amount of dynamic memory the server is currently consuming. On a dedicated server, this value should remain close to the actual physical memory size.

You must determine which server you will use to monitor SQL Server. If you run Performance Monitor on the same machine as the SQL Server, it should not add major overhead beyond some disk I/O and the disk space required for the performance log files, depending on the counters being monitored and the sampling interval. A large number of counters with a short sampling interval might require a lot of disk space; however, larger intervals produce less accurate results. If you run Performance Monitor on a different machine, be aware that it might congest traffic on your network.

For the purpose of creating the baseline, if you have disk space available on the SQL Server machine, it is recommended that you run Performance Monitor on the same machine, add as many counters as desired, and use a short sampling interval. Once you have established the baseline, for proactive monitoring you can monitor just the counters that are absolutely required and increase the interval.

To create the baseline chart, follow these steps:

  1. Click Start, then All Programs, then Administrative Tools, and then Performance.

  2. Expand Performance Logs and Alerts node in the tree in the left pane.

  3. Right-click Counter Logs and select New Log Settings.

  4. Provide a name, click OK, add the counters, verify the details on the Log Files and Schedule tab, and click OK.
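On Windows XP and Windows Server 2003, the same counter log can alternatively be created from the command line with the logman utility; the log name, counter selection, sampling interval, and output path below are assumptions for illustration:

    rem Hypothetical example: create and start a baseline counter log from the command line
    logman create counter SQLBaseline -c "\Processor(_Total)\% Processor Time" "\Memory\Pages/sec" -si 15 -o C:\perflogs\SQLBaseline
    logman start SQLBaseline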

Capturing Statistics

Some of the points to be considered for capturing statistics are:

  • Types of data. The type of data being captured — qualitative or quantitative.

  • Characteristic. The statistic must be relevant to the test being performed. It must suit the comparisons being made or analysis to be performed. CPU usage measurement is an example of a statistic that is relevant to testing.

  • Unit of measurement. The unit of measurement depends on the magnitude of the measured characteristic. For example, when measuring and comparing memory, the unit to be used is bytes or megabytes.

  • Frequency. The frequency of measurement of a statistic can be a critical decision. Too much data can be cumbersome to manage and analyze, and too little can be insufficient to draw reliable conclusions.

Some examples of resources on which statistics can be captured are:

  • Memory usage of the system

  • Memory usage of the database

  • CPU usage

  • Disk I/O rate

  • Number of user connections to the database

  • Procedure cache usage

  • Number of transactions executed per second

  • Data cache usage

  • Network bandwidth usage

Comparing and Reporting Results

Test results have only as much value as the inferences that can be drawn from them. Test results can have either a qualitative value or a quantitative value. In a development project, the expected behavior of the solution is based solely on business requirements. For a migration, measurements from the existing solution form the expected result. During the testing, the data from the migrated solution is compared with the expected results. Deviations from expected results cannot automatically be construed as defects. As a result, analyzing and reporting results is a topic of serious consequence. A thorough analysis of the results has to be made and conclusions drawn based on several measurable and non-measurable factors (assumptions). The capabilities of the systems available for performing stabilization tests may be one such factor.

Appendix D: Installing Common Drivers and Applications

This appendix contains installation information for FreeTDS, unixODBC, and ActiveState Perl:

  • FreeTDS provides an implementation of the Tabular Data Stream (TDS) protocol, emulating the different versions of TDS that are used by Sybase and Microsoft SQL Server.

  • unixODBC is an implementation of the ODBC protocol and API for UNIX; the unixODBC library allows applications running under UNIX to communicate with database servers, such as SQL Server, using ODBC. Applications developed using unixODBC can execute under UNIX and SFU.

  • ActiveState Perl is the industry-standard Perl distribution. This Perl distribution can be used from within the Windows environment.

Installing FreeTDS

A precompiled and directly installable version of FreeTDS (currently version 0.62.3) is available at http://interopsystems.com/tools. This package can be installed under SFU 3.5 using the pkg_add command.

Configuring FreeTDS

When FreeTDS is installed, you should configure it to connect to your SQL Server databases using the following procedure:

To configure FreeTDS connectivity, follow these steps:

  1. Move to the /usr/local/etc directory and edit the freetds.conf file. This file contains information about the Sybase and SQL Server database servers it can access and the versions of TDS to use.

  2. Add the following entries to the end of the file. Replace servername with the IP address or DNS name of the computer running SQL Server (a DNS name is recommended), and bbbb with the port that SQL Server is listening on (SQL Server usually listens on port 1433).

    [SQLServer]
    host = servername
    port = bbbb
    tds version = 8.0

    Note The name, SQLServer, does not have to be the same as the name of the computer running SQL Server; it is an identifier used by the FreeTDS library functions to locate the entry in this file. It is a name that uniquely identifies this entry in the configuration file, and is used with the -S option for most of the scripts.

  3. Save the file.

Testing the FreeTDS Configuration

You can test the FreeTDS configuration using tsql, a tool provided with FreeTDS, by following this procedure:

To test the installation of FreeTDS, follow these steps:

  1. At a shell prompt, type the following command. Replace server with the entry in freetds.conf that you want to test (SQLServer), and password with the sa password for the selected server:

    tsql -S server -U sa -P password

    Note Windows NT Authentication, called "integrated security," is not supported by FreeTDS.

  2. At the 1> prompt, type the following commands. The result should be a list of databases available on the selected server:


    SELECT name FROM sysdatabases
    go
  3. At the 1> prompt, type the following commands. The result should be a list of connections currently established to the selected server:


    EXEC sp_who
    go
  4. At the 1> prompt, type the following command to leave tsql:

    exit

Installing unixODBC

unixODBC is available in source form (in the file unixODBC-2.2.6.tar.gz) from the unixODBC Web site at http://www.unixodbc.org. This guide uses unixODBC version 2.2.6. The procedures assume you have already downloaded the file unixODBC-2.2.6.tar.gz.

To install unixODBC in UNIX, follow these steps:

  1. Log in to UNIX as the root user.

  2. At a shell prompt, access the directory where the downloaded unixODBC source code is located (default /usr/local/unixODBC), and then type the following command to decompress the file unixODBC-2.2.6.tar.gz to the file unixODBC-2.2.6.tar:


    gunzip unixODBC-2.2.6.tar.gz
  3. Type the following command to unpack the file unixODBC-2.2.6.tar into the directory unixODBC-2.2.6:


    tar xvf unixODBC-2.2.6.tar
  4. Move to the unixODBC-2.2.6 directory and type the following command to generate the files needed to build unixODBC. This command will generate a number of messages on the screen as it examines your UNIX configuration and generates the appropriate make files.

    ./configure

    Note The configure script uses a number of well-known tricks to ascertain which tools and libraries it can use to compile unixODBC. If the configure script fails, it is usually because the script cannot find a particular tool, file, or library. You can run the script supplying parameters to help it analyze your environment. Execute ./configure -h for more details.
  5. Type the following command to build unixODBC. As before, you will see a large number of messages as the build process progresses:

    make
  6. Type the following command to install unixODBC:


    make install
  7. Type the following command to check that unixODBC was installed successfully:

    isql

    If unixODBC is installed correctly, you will see the following message:


    * unixODBC - isql                            *
    * Syntax                                     *
    *                                            *
    *        isql DSN [UID [PWD]] [options]      *
    *                                            *
    * Options                                    *
    *                                            *
    * -b        batch.(no prompting etc)         *
    * -dx       delimit columns with x           *
    * -x0xXX    delimit columns with XX, where   *
    *           x is in hex, ie 0x09 is tab      *
    * -w        wrap results in an HTML table    *
    * -c        column names on first row.       *
    *           (only used when -d)              *
    * -mn       limit column display width to n  *
    * -v        verbose.                         *
    * --version version                          *
    *                                            *
    * Notes                                      *
    *                                            *
    * isql supports redirection and piping       *
    * for batch processing.                      *
    *                                            *
    * Examples                                   *
    *                                            *
    * cat My.sql | isql WebDB MyID MyPWD -w      *
    *                                            *
    * Each line in My.sql must contain           *
    * exactly 1 SQL command except for the       *
    * last line which must be blank.             *
    *                                            *
    * Please visit;                              *
    *                                            *
    * http://www.unixodbc.org                    *
    * pharvey@codebydesign.com                   *
    * nick@easysoft.com                          *
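Once unixODBC is installed, you can verify end-to-end connectivity by defining a data source and opening it with isql. The sketch below is illustrative only: the DSN name MySQLServer, the FreeTDS driver path, and the server and login details are all placeholder assumptions you must replace with your own values (the driver entry assumes FreeTDS was built as described earlier in this appendix).

```shell
# Hypothetical odbc.ini entry; append to ~/.odbc.ini (per-user) or to the
# system odbc.ini reported by odbcinst -j. Replace every value below.
cat >> ~/.odbc.ini <<'EOF'
[MySQLServer]
Driver      = /usr/local/lib/
Servername  = myserver
Database    = mydatabase
EOF

# Connect using the DSN, a user ID, and a password, and run a query
# in batch mode (-b suppresses the interactive prompt).
echo "SELECT @@VERSION" | isql MySQLServer MyID MyPWD -b
```

If the connection succeeds, isql prints the SQL Server version string, confirming that unixODBC, the driver, and the network path to the server are all working together.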

Installing ActiveState Perl

ActiveState Perl can be downloaded from the ActiveState Web site at http://www.activestate.com. ActiveState Perl is available in prebuilt binary form for Linux, Solaris, and Windows, but the source code is also available and can be used to build ActiveState Perl on other UNIX platforms.

This section describes how to install ActiveState Perl on Windows using the prebuilt Microsoft Installer (MSI) package for ActivePerl 5.6.1 Build 635.
To install ActiveState Perl under Windows, follow these steps:

  1. Execute the ActivePerl MSI package file. Windows Installer starts the installation.

  2. In the ActivePerl 5.6.1 Build 635 Setup screen, click Next.

  3. In the ActivePerl 5.6.1 Build 635 License Agreement screen, select I accept the terms in the License Agreement and click Next.

  4. In the Custom Setup screen, make sure all features are selected to be installed (the default) and click Next.

  5. In the New features in PPM screen, clear the checkbox for Enable PPM3 to send profile info to ASPN and click Next.

  6. In the Choose Setup Options screen, check all four options and click Next.

  7. In the Ready to Install screen, click Install. The Installing ActivePerl 5.6.1 Build 635 screen will appear and indicate the progress of the installation process.

  8. In the Completing the ActivePerl 5.6.1 Build 635 Setup Wizard screen, remove the check mark from Display the release notes and click Finish.
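After installation, you can confirm from a Windows command prompt that Perl is on the path and, if your migrated scripts will talk to SQL Server, install the DBI and DBD::ODBC modules with the Perl Package Manager (PPM) that ships with ActivePerl. The package names below are the ones usually published in the PPM repository, but treat them as assumptions to verify with ppm search:

```shell
rem Check the installed Perl version (should report v5.6.1).
perl -v

rem Install database connectivity modules via the Perl Package Manager.
rem Package names are assumed; confirm them with "ppm search DBD".
ppm install DBI
ppm install DBD-ODBC
```

With DBI and DBD::ODBC in place, Perl scripts can connect to SQL Server through the same ODBC data sources used by other Windows applications.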

Appendix E: Reference Resources

For detailed discussion of the concepts and practices presented in this guide, you should refer to the following resources.
