Abstract: This series of reading notes is about Tom Kyte's masterpiece, Expert Oracle Database Architecture.
Chapter 1: Developing Successful Oracle Applications
1. Bitmap index
In the first session:
Note: Don’t commit or rollback for now.
Session Id is
In the second session:
This session is hanging!
Session Id is
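(The original session screenshots are not reproduced here. A minimal sketch of the kind of two-session demo described above, with assumed table and index names:)

-- setup, run once
create table t ( processed_flag varchar2(1) );
create bitmap index t_idx on t ( processed_flag );

-- session 1: insert an unprocessed record; do not commit or roll back yet
insert into t ( processed_flag ) values ( 'N' );

-- session 2: this insert hangs, because it needs the very same
-- bitmap index key entry that session 1 is holding
insert into t ( processed_flag ) values ( 'N' );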
The Lock view (v$lock) shows..
Description of the lock view (V$LOCK):

Column | Datatype | Description
ADDR | RAW(4|8) | Address of lock state object
KADDR | RAW(4|8) | Address of lock
SID | NUMBER | Identifier for session holding or acquiring the lock
TYPE | VARCHAR2(2) | Type of user or system lock. Locks of the user types are obtained by user applications; any process that is blocking others is likely to be holding one of these. The user types are: TM - DML enqueue, TX - Transaction enqueue, UL - User supplied. Locks of the system types are held for extremely short periods of time.
ID1 | NUMBER | Lock identifier #1 (depends on type)
ID2 | NUMBER | Lock identifier #2 (depends on type)
LMODE | NUMBER | Lock mode in which the session holds the lock: 0 - none, 1 - null (NULL), 2 - row-S (SS), 3 - row-X (SX), 4 - share (S), 5 - S/Row-X (SSX), 6 - exclusive (X)
REQUEST | NUMBER | Lock mode in which the session requests the lock (same codes as LMODE)
CTIME | NUMBER | Time since current mode was granted
BLOCK | NUMBER | Whether this lock is blocking another lock (1 = blocking)
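A query along these lines (a sketch, not part of the original notes) identifies blocker/waiter pairs from V$LOCK:

-- the blocking session holds (BLOCK = 1) the same resource (TYPE, ID1, ID2)
-- that the waiting session is requesting (REQUEST > 0)
select l1.sid blocker, l2.sid waiter, l1.type, l1.id1, l1.id2
  from v$lock l1, v$lock l2
 where l1.block = 1
   and l2.request > 0
   and l1.type = l2.type
   and l1.id1 = l2.id1
   and l1.id2 = l2.id2;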
Explanation on “Bitmap Index”
In a bitmap index, a single key entry points to many rows—hundreds or more of them. If you update a bitmap index key, the hundreds of records to which that key points are effectively locked as well as the single row you are actually updating.
So, someone inserting a new N record would lock an N key in the bitmap index, effectively locking hundreds of other N records as well. Meanwhile, the process trying to read this table and process the records would be prevented from modifying some N record to be a Y (processed) record, because in order for it to update this column from N to Y, it would need to lock that same bitmap index key. In fact, other sessions just trying to insert a new record into this table would be blocked as well, as they would be attempting to lock this same bitmap key entry.
2. Connection difference between Oracle & SQL Server
Use a Single Connection in Oracle
In SQL Server it is a very common practice to open a connection to the database for each concurrent statement you want to execute. If you are going to do five queries, you might well see five connections in SQL Server. SQL Server was designed that way—much like Windows was designed for multithreading, not multiprocessing. In Oracle, whether you want to do five queries or five hundred queries, the maximum number of connections you want to open is one. Oracle was designed that way.
3. Use Bind Variables
From the previous description, it should be fairly obvious that parsing a statement with hard-coded variables (called a hard parse) will take longer and consume many more resources than reusing an already parsed query plan (called a soft parse). What may not be so obvious is the extent to which the former will reduce the number of users your system can support. This is due in part to the increased resource consumption, but an even larger factor arises due to the latching mechanisms for the library cache. When you hard-parse a query, the database will spend more time holding certain low-level serialization devices called latches. These latches protect the data structures in the shared memory of Oracle from concurrent modifications by two sessions (otherwise Oracle would end up with corrupt data structures) and from someone reading a data structure while it is being modified. The longer and more frequently you have to latch these data structures, the longer the queue to get these latches will become.
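A quick sketch of the difference (the table name t is an assumption). Each distinct literal below is a brand-new statement and forces a hard parse, while the bind-variable form is parsed once and then reused:

-- hard parse for every new literal value
select * from t where id = 123;
select * from t where id = 124;

-- soft parse after the first execution: one shared statement, different bind values
variable n number
exec :n := 123
select * from t where id = :n;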
4. Oracle Locking policy
(1) Row-level locks only; locks are never escalated to block-level or table-level locks under normal circumstances.
(2) Reads never acquire locks.
(3) Writes do NOT block reads.
(4) Reads do NOT block writes.
Preventing Lost Updates
In the first session:
Notice the "SELECT … FOR UPDATE" (with WHERE id = 1, it locks only the rows with id = 1).
In another session:
Updating the rows with id = 2 is not blocked, while updating the rows with id = 1 is blocked.
The SQL*Plus session hangs now!
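(The original screenshots are not reproduced here; a sketch of the two sessions, assuming a table t with columns id and val:)

-- session 1: read and lock the rows we intend to update later
select * from t where id = 1 for update;

-- session 2: a row with a different id is not blocked
update t set val = 'x' where id = 2;

-- session 2: this statement hangs until session 1 commits or rolls back
update t set val = 'y' where id = 1;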
5. Multi-Versioning
Bear in mind that Oracle does not “answer” the query. It does not copy the data anywhere when you open a cursor—imagine how long it would take to open a cursor on a 1-billion-row table if it did. The cursor opens instantly and it answers the query as it goes along. In other words, it just reads data from the table as you fetch from it.
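A sketch of the classic demonstration (the table name t is an assumption): the cursor's result set is fixed at the moment it is opened, so rows deleted afterwards are still returned by the fetches.

declare
  cursor c is select * from t;
  l_rec t%rowtype;
begin
  open c;               -- instant; no data is read or copied yet
  delete from t;        -- remove every row
  commit;
  loop
    fetch c into l_rec; -- still returns the rows as of the time the cursor was opened
    exit when c%notfound;
  end loop;
  close c;
end;
/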
6. Multi-Versioning and Flashback
The SCN (System Change Number) is Oracle's internal clock: every time a commit occurs, this clock ticks upward (increments).
If the FLASHBACK TABLE command fails with "ORA-08189: cannot flashback the table because row movement is not enabled," you must issue ALTER TABLE EMP ENABLE ROW MOVEMENT.
This, in effect, gives Oracle the permission to change the rowid assigned to a row. In Oracle, when you insert a row, a rowid is assigned to it and that row will forever have that rowid. The flashback table process will perform a DELETE against EMP and reinsert the rows, hence assigning them a new rowid. You must allow Oracle to do this operation in order to flash back.
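A sketch of the sequence (the five-minute interval is an assumption):

-- give Oracle permission to change rowids during the flashback
alter table emp enable row movement;

-- rewind the table, for example to five minutes ago
flashback table emp to timestamp systimestamp - interval '5' minute;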
7. Read Consistency and Non-Blocking Reads
Oracle uses multi-versioning to get the answer, as it existed at the point in time the query began, and the query will take place without locking a single thing.
Oracle does NOT have a “shared read” lock because it doesn’t need it.
Whenever you modify data, Oracle creates undo entries. These entries are written to undo segments. If your transaction fails and needs to be undone, Oracle will read the “before” image from the rollback segment and restore the data. In addition to using this rollback segment data to undo transactions, Oracle uses it to undo changes to blocks as it is reading them to restore the block to the point in time your query began. This gives you the ability to read right through a lock and to get consistent, correct answers without locking any data yourself.
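A sketch of a two-session demo of reading through a lock (assumed table t):

-- session 1: modify a row but do not commit; the row is now locked
update t set val = 'new' where id = 1;

-- session 2: this query neither blocks nor sees the uncommitted change;
-- Oracle rebuilds the block from undo to the point in time the query began
select val from t where id = 1;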
8. In Oracle…
If you created the tables in a stored procedure in Oracle, you would find that (a sketch follows the list):
• Doing DDL is a scalability inhibitor.
• Doing DDL constantly is not fast.
• Doing DDL commits your transaction.
• You would have to use dynamic SQL in all of your stored procedures to access this table—no static SQL.
• Dynamic SQL in PL/SQL is not as fast or as optimized as static SQL.
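A sketch of what this forces on you (procedure and table names are assumptions). Once the table is created at run time, every reference to it has to be dynamic SQL:

create or replace procedure create_and_use_temp as
begin
  -- DDL inside PL/SQL must itself be dynamic, and it commits the current transaction
  execute immediate 'create table my_temp ( x number )';
  -- static SQL against my_temp would not compile, so all access is dynamic as well
  execute immediate 'insert into my_temp ( x ) values ( :1 )' using 1;
end;
/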
9. Autonomous transaction
http://www.oracle-base.com/articles/misc/AutonomousTransactions.php
Autonomous transactions allow you to leave the context of the calling transaction, perform an independent transaction, and return to the calling transaction without affecting its state. The autonomous transaction has no link to the calling transaction, so only committed data can be shared by both transactions.
The following types of PL/SQL blocks can be defined as autonomous transactions:
· Stored procedures and functions.
· Local procedures and functions defined in a PL/SQL declaration block.
· Packaged procedures and functions.
· Type methods.
· Top-level anonymous blocks.
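(The original example is not reproduced here; a minimal sketch of the kind of demo described below, with an assumed table t1:)

-- calling transaction: insert two rows, do not commit
insert into t1 values ( 1 );
insert into t1 values ( 2 );

-- anonymous block running as an autonomous transaction
declare
  pragma autonomous_transaction;
begin
  insert into t1 values ( 3 );
  insert into t1 values ( 4 );
  commit;  -- commits only the autonomous transaction
end;
/

-- roll back the calling transaction: rows 1 and 2 disappear, rows 3 and 4 remain
rollback;
select * from t1;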
Notice that I issued the “commit” in the anonymous block.
The two rows inserted by our current session (transaction) have been rolled back, while the rows inserted by the autonomous transaction remain. The presence of the PRAGMA AUTONOMOUS_TRANSACTION compiler directive made the anonymous block run in its own transaction, so the internal commit statement did not affect the calling session. As a result, the rollback was still able to affect the DML issued by the calling transaction.
Autonomous transactions are commonly used by error logging routines, where the error messages must be preserved, regardless of the commit/rollback status of the transaction.
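A sketch of such an error-logging routine (table and procedure names are assumptions):

create or replace procedure log_error ( p_msg in varchar2 ) as
  pragma autonomous_transaction;
begin
  -- this insert stays committed regardless of what the caller commits or rolls back
  insert into error_log ( logged_at, msg ) values ( systimestamp, p_msg );
  commit;
end;
/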
Be careful how you use autonomous transactions. If they are used indiscriminately they can lead to deadlocks, and cause confusion when analyzing session trace. To hammer this point home, here's a quote from Tom Kyte posted on my blog:
... in 999 times out of 1000, if you find yourself "forced" to use an autonomous transaction - it likely means you have a serious data integrity issue you haven't thought about.
Where do people try to use them?
· in that trigger that calls a procedure that commits (not an error logging routine). Ouch, that has to hurt when you rollback.
· in that trigger that is getting the mutating table constraint. Ouch, that hurts *even more*
Error logging - OK.
Almost everything else - not OK.
10. Profile
-- To Be Continued --