Trevor -- Thanks for the question regarding "Parse CPU to Parse Elapsed", version 8.1.7
You Asked
Hi Tom,

I was given a statspack report to check out. The Parse CPU to Parse Elapsed ratio seems terrible. In your book Expert One-on-One Oracle, page 481, your example was 87.88% and you say that for every CPU second it spent about 1.13 seconds of wall clock time. How did you work out 1.13 seconds, please?

Regards, Trevor

STATSPACK report for

DB Name         DB Id    Instance     Inst Num Release     OPS Host
------- ------------- ------------ -------- ----------- --- ------------
PRD        1921956436 XPS_PRD             1 8.1.7.4.0   NO  bd01

              Snap Id     Snap Time      Sessions
              ------- ------------------ --------
 Begin Snap:       26 31-May-03 10:58:42       93
   End Snap:       31 31-May-03 16:46:07       93
    Elapsed:               347.42 (mins)

Cache Sizes
~~~~~~~~~~~
           db_block_buffers:     230400          log_buffer:    1048576
              db_block_size:       8192    shared_pool_size: 524288000

Load Profile
~~~~~~~~~~~~
                                   Per Second       Per Transaction
                              ---------------       ---------------
                  Redo size:       146,489.66             29,758.19
              Logical reads:       111,987.73             22,749.40
              Block changes:           353.31                 71.77
             Physical reads:           446.84                 90.77
            Physical writes:           132.29                 26.87
                 User calls:         1,161.46                235.94
                     Parses:           107.85                 21.91
                Hard parses:             1.78                  0.36
                      Sorts:            71.24                 14.47
                     Logons:             0.03                  0.01
                   Executes:           122.73                 24.93
               Transactions:             4.92

  % Blocks changed per Read:    0.32    Recursive Call %:     5.19
 Rollback per transaction %:    0.00       Rows per Sort:    45.56

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:   100.00
            Buffer  Hit   %:   99.60    In-memory Sort %:   100.00
            Library Hit   %:   99.28        Soft Parse %:    98.35
         Execute to Parse %:   12.12         Latch Hit %:    99.85
Parse CPU to Parse Elapsd %:    2.38     % Non-Parse CPU:    98.98

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:   95.29   95.25
    % SQL with executions>1:   36.44   70.67
  % Memory for SQL w/exec>1:   29.17   41.40

Top 5 Wait Events
~~~~~~~~~~~~~~~~~
                                                          Wait     % Total
Event                                      Waits      Time (cs)    Wt Time
------------------------------------ ------------ ------------ -------
latch free                              1,335,053    3,161,609   74.83
db file sequential read                 1,367,706      438,661   10.38
db file scattered read                    697,792      310,837    7.36
log file sync                             105,273      128,159    3.03
log file parallel write                   106,085      108,835    2.58
          -------------------------------------------------------------
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc (idle events last)
and we said...
It is just percentages. If you have 87.88% (0.8788) you just divide:

ops$tkyte@ORA920> select 1/.8788 from dual;

   1/.8788
----------
1.13791534

If Parse CPU to Parse Elapsed was 87.88%, then about 1.13 seconds of wall clock time must elapse for each CPU second in order to produce the 87.88% ratio.

Looking at your latch frees, I would say your system doesn't use bind variables (that's in the book as well; I think almost every page says "use them or you lose").

347 minutes is 330 minutes TOO LONG for statspack. Use a 15 minute window! Use this script:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1163635055580
to find your problem queries. I've got a feeling, however, that many of your queries are going to pop out in that report. The developers MUST use binds. You'll need to get your soft parse % up well over 99%.
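The arithmetic behind that ratio can be sketched in a few lines. This is a hypothetical helper (the class name and method are invented for illustration): the ratio is parse CPU divided by parse elapsed, so its reciprocal gives wall clock seconds per CPU second of parsing.

```java
public class ParseRatio {
    // Parse CPU to Parse Elapsed is cpu/elapsed expressed as a percentage;
    // its reciprocal is the wall clock time spent per CPU second of parsing.
    static double elapsedPerCpuSecond(double ratioPercent) {
        return 1.0 / (ratioPercent / 100.0);
    }

    public static void main(String[] args) {
        // The book's example: 87.88% -> about 1.14 seconds of wall clock
        // time for every CPU second spent parsing.
        System.out.printf("%.2f%n", elapsedPerCpuSecond(87.88));
        // Trevor's report: 2.38% -> about 42 seconds of wall clock time
        // per CPU second of parsing, almost all of it spent waiting
        // (consistent with the huge latch free waits above).
        System.out.printf("%.2f%n", elapsedPerCpuSecond(2.38));
    }
}
```

Applied to this report, a 2.38% ratio says each CPU second of parsing drags along roughly 41 seconds of waiting, which is exactly what heavy latch contention from hard parsing looks like.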
Reviews
Developers sent me a link... AGAINST BINDING! June 5, 2003 - 1am Central time zone
Reviewer:
Alvin from Philippines
http://www.oreilly.com/catalog/jorajdbc/chapter/ch19.html

I have a difficult time convincing them to use prepared statements (PS) in Java. And for that I get the below...

================================
STATSPACK report for

DB Name         DB Id    Instance     Inst Num Release     OPS Host
------------ ----------- ------------ -------- ----------- --- ------------
xxxx         xxxxxxxxxxx                     1 8.1.7.0.0   NO  dbname

              Snap Id     Snap Time      Sessions
              ------- ------------------ --------
 Begin Snap:       31 05-Jun-03 03:02:19      139
   End Snap:       32 05-Jun-03 04:09:33      139
    Elapsed:                67.23 (mins)

Cache Sizes
~~~~~~~~~~~
           db_block_buffers:      87500          log_buffer:     163840
              db_block_size:       8192    shared_pool_size:  512000000

Load Profile
~~~~~~~~~~~~
                                   Per Second       Per Transaction
                              ---------------       ---------------
                  Redo size:         8,970.49              2,589.41
              Logical reads:         6,458.52              1,864.31
              Block changes:            47.08                 13.59
             Physical reads:            68.60                 19.80
            Physical writes:            27.95                  8.07
                 User calls:           301.12                 86.92
                     Parses:            52.53                 15.16
                Hard parses:            10.26                  2.96
                      Sorts:            21.27                  6.14
                     Logons:             0.14                  0.04
                   Executes:            52.37                 15.12
               Transactions:             3.46

  % Blocks changed per Read:    0.73    Recursive Call %:    17.58
 Rollback per transaction %:    0.00       Rows per Sort:     5.01

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:    99.98
            Buffer  Hit   %:   98.94    In-memory Sort %:    99.94
            Library Hit   %:   87.04        Soft Parse %:    80.47
         Execute to Parse %:   -0.30         Latch Hit %:    99.98
Parse CPU to Parse Elapsd %:   83.51     % Non-Parse CPU:    99.99

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:   84.24   86.20
    % SQL with executions>1:   51.80   65.49
  % Memory for SQL w/exec>1:   22.83   42.57
================================

How come I have a high non-parse CPU? Shouldn't the CPU be parsing all the time, based on the stats?
=================================
                                         Get                    Spin &
Latch Name                     Requests      Misses      Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
library cache                   3,310,698       8,945      11,174 3036/1648/3356/905/0
shared pool                     1,556,136       1,570         365 1329/134/95/12/0
redo writing                       77,126       1,018           2 1016/2/0/0/0
row cache objects               1,527,365         488           7 482/5/1/0/0
cache buffers chains           51,607,754         231          30 211/17/1/2/0
session idle bit                2,631,371         129          46 86/41/1/1/0
session allocation                408,936          34           7 27/7/0/0/0
cache buffers lru chain           198,693          10           4 6/4/0/0/0
enqueues                          275,358           8           2 6/2/0/0/0
latch wait list                     6,111           7           2 5/2/0/0/0
-------------------------------------------------------------

It's bad, I know... and I get ORA-04031 errors!! And I'm not a bit surprised!!

1. My question is: why does "exec statspack.snap" take around 10+ minutes to finish?
2. What does this mean: 'library cache kglhdgn: child: 0    10,531    4,082'?

The head Java leader is getting annoyed with me asking them to use PS. I was surprised that they found the above link.

Followup June 5, 2003 - 7am Central time zone:

1) Because your database is a mess and the java guys are KILLING YOU. Literally KILLING YOU.

It is funny, you know. I use that exact example in my next book -- to prove that the java guy in that chapter has got not a single clue about what he is talking about -- not at all. The test is 100% flawed. Here is a short excerpt from my forthcoming book on this topic:

Bind Variables and Java

One of the recent questions on the website asktom.oracle.com was this interesting one. It was about programming in Java with JDBC. This discussion applies equally to Visual Basic programmers using VB with ODBC, as the concept of "Statements" and "PreparedStatements" exists in ODBC in more or less the same fashion. It questioned the use of Statements versus a PreparedStatement. When using Statements in JDBC -- you must use the "string concatenation approach". Using PreparedStatements allows you to use bind variables.
The question was:

Tom -- Please briefly skim this link (link omitted for obvious reasons), which gives an excerpt on JDBC performance. It says to always use Statements (no bind variables allowed) instead of PreparedStatements because they perform better -- without discussing the impact on the database, only in terms of a single app's metrics. Is this accurate, or is this information just extremely short sighted with regards to overall db impact?

Well, that was easy -- I gave them the proof from above -- case closed. PreparedStatements with bind variables are absolutely, without question, the only way to do it. Of course, later on, I got a followup:

For a moment keep aside shared pool, hard parse and soft parse and talk about PreparedStatement and Statement as they are the only way to execute statements from java. I wrote this benchmark code that shows a Statement performs better than a PreparedStatement unless you execute the same statement a whole lot of times. So, I reproduced the findings of the above link and proved that Statements are better than PreparedStatements.

I had some issues with this one -- they missed the point. They start with "for a moment keep aside shared pool, hard parse and soft parse". Well, if we were to ignore those -- we totally miss the boat on this topic, as they are the only things to consider. The facts are:

o Hard parsing incurs many latches
o Latches are serialization devices
o Serialization is not a scalable thing
o Therefore, as you add users, the system that uses Statements instead of PreparedStatements with bind variables will fail

I quite simply could not observe their request to put aside the shared pool and hard/soft parse. They are the relevant topics -- they must be considered. That was my initial response -- but you know, this bothered me so much, I had to explore it further.
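The chain of facts above can be made concrete with a toy model. This is a hypothetical sketch (all names invented, no real JDBC or shared pool involved): the shared pool is a map guarded by one lock standing in for a latch, every parse must take that lock, and only a never-before-seen SQL text pays the hard-parse cost.

```java
import java.util.HashMap;
import java.util.Map;

public class LatchSketch {
    // The "shared pool": cached cursors keyed by their exact SQL text.
    static final Map<String, String> sharedPool = new HashMap<>();
    static int hardParses = 0;

    // Every parse, hard or soft, serializes on this one lock (the "latch")
    // -- which is why heavy parsing stops scaling as you add users.
    static synchronized void parse(String sqlText) {
        if (!sharedPool.containsKey(sqlText)) {
            hardParses++;                       // "compile" and cache the cursor
            sharedPool.put(sqlText, "cursor");
        }
    }

    static int hardParsesFor(String template, boolean bind, int executions) {
        sharedPool.clear();
        hardParses = 0;
        for (int i = 0; i < executions; i++)
            parse(bind ? template + "(?)" : template + "(" + i + ")");
        return hardParses;
    }

    public static void main(String[] args) {
        // Statement style: each execution glues in a new value, producing a
        // brand new SQL text -> a hard parse every single time.
        System.out.println(hardParsesFor("insert into t values ", false, 1000));
        // PreparedStatement style: one shared text -> one hard parse total.
        System.out.println(hardParsesFor("insert into t values ", true, 1000));
    }
}
```

The real shared pool also burns memory managing the thousand never-to-be-reused cursors, which is where the ORA-04031 errors come from.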
So, starting with their benchmark code, which simply inserted into a database table, I made it a multi-user benchmark to demonstrate that if you expand this simple, single user benchmark out to a real world example with multiple users -- you will see clearly what the issue is and why you need to avoid Statements.

But an interesting thing happened. I could not reproduce their finding that a Statement in JDBC without bind variables beats a PreparedStatement using bind variables. When I ran their code -- I could; using my code -- I found that a single statement executed via either a Statement or a PreparedStatement took the same amount of time initially, and if we executed the SQL over and over -- the PreparedStatement was always much faster. This conflicted with their observations totally. So, I set out to find out why. We'll walk through this process here because it does cover two interesting things:

o If for some reason your test isn't meeting your hypothesis -- either your hypothesis is wrong or your test is flawed
o The seemingly simple, convincing test can be the most misleading thing in the world

We'll walk through this simple benchmark now, starting with their original test case and working up to the "real thing". It used a single table TESTXXXPERF, which was created using the script perftest.sql as follows:

scott@ORA920> drop table testxxxperf;
Table dropped.

scott@ORA920> create table testxxxperf
  2  ( id number,
  3    code varchar2(25),
  4    descr varchar2(25),
  5    insert_user varchar2(30),
  6    insert_date date );
Table created.

scott@ORA920> exit

Then, the main java code was supplied. It consisted of three subroutines, basically -- a main that connected to the database, called a routine to insert into that table using a Statement, and then called a routine to do the same with a PreparedStatement.
The code, piece by piece, is:

import java.sql.*;
import oracle.jdbc.OracleDriver;
import java.util.Date;

public class perftest
{
    public static void main (String arr[]) throws Exception
    {
        Connection con = null;
        DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
        con = DriverManager.getConnection
              ("jdbc:oracle:thin:@aria-dev:1521:ora920", "scott", "tiger");
        con.setAutoCommit(false);

        Integer iters = new Integer(arr[0]);
        doStatement (con, iters.intValue() );
        doPreparedStatement(con, iters.intValue() );

        con.commit();
        con.close();
    }

That is the main routine, which simply connects to my Oracle 9iR2 instance as scott/tiger, disables the autocommit JDBC uses by default, and then invokes the subroutine to execute a Statement N times and then a PreparedStatement N times. I set it up to allow us to pass "N" into the Java routine so we can run multiple simulations. Next, we'll look at the doStatement routine:

    static void doStatement(Connection con, int count) throws Exception
    {
        long start = new Date().getTime();
        Statement st = con.createStatement();
        for (int i = 0; i < count; i++)
        {
            st.executeUpdate
            ("insert into testxxxperf " +
             "(id, code, descr, insert_user, insert_date)" +
             " values (" + i + ", 'ST - code" + i + "'" +
             ", 'St - descr" + i + "'" + ", user, sysdate ) ");
        }
        long end = new Date().getTime();
        st.close();
        con.commit();
        System.out.println
        ("statement " + count + " times in " + (end - start) + " milli seconds");
    }

Very straightforward -- it simply creates a Statement object, then loops "count" times, building a unique, never-before-seen INSERT statement and executing it. It is somewhat scaled back from reality in that it is not checking for quotes in strings and fixing them up -- but we'll let that go for now. Also note that it retrieves the time before and after executing the statements and prints out the results.
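That "not checking for quotes" caveat is worth pausing on. A hypothetical sketch (class and values invented for illustration) of what happens when a glued-in value contains a quote -- the very problem bind variables make impossible:

```java
public class QuoteHazard {
    // Glue a value into the SQL text, Statement style, with no escaping --
    // exactly what the doStatement benchmark above does.
    static String glue(String descr) {
        return "insert into testxxxperf (descr) values ('" + descr + "')";
    }

    // Count the single quotes in the finished SQL text.
    static long quoteCount(String sql) {
        return sql.chars().filter(c -> c == '\'').count();
    }

    public static void main(String[] args) {
        // A well-formed single string literal needs exactly 2 quotes.
        System.out.println(quoteCount(glue("plain value")));
        // A value containing quotes unbalances the literal: a syntax error
        // at best, a SQL injection vector at worst.
        System.out.println(quoteCount(glue("O'Brien's widget")));
        // A PreparedStatement with "values (?)" and ps.setString() never
        // splices the value into the SQL text, so no escaping is needed.
    }
}
```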
Next, we look at the PreparedStatement:

    static void doPreparedStatement (Connection con, int count) throws Exception
    {
        long start = new Date().getTime();
        PreparedStatement ps = con.prepareStatement
        ("insert into testxxxperf " +
         "(id, code, descr, insert_user, insert_date)" +
         " values (?,?,?, user, sysdate)");
        for (int i = 0; i < count; i++)
        {
            ps.setInt(1,i);
            ps.setString(2,"PS - code" + i);
            ps.setString(3,"PS - desc" + i);
            ps.executeUpdate();
        }
        long end = new Date().getTime();
        con.commit();
        System.out.println
        ("pstatement " + count + " times in " + (end - start) + " milli seconds");
    }
}

Basically the same code, but this uses a PreparedStatement to insert "count" rows. It accomplishes the same exact task as the doStatement routine -- just using a PreparedStatement. Lastly, I set up a shell script to execute this:

#!/bin/csh -f
sqlplus scott/tiger @perftest
java perftest $1

A CMD file for Windows might look like:

sqlplus scott/tiger @perftest
java perftest %1

Now, I ran this with inputs of 1 (do one statement/prepared statement), 10, 100 and 1,000, and the results were:

Rows to Insert    Statement        PreparedStatement
             1    0.05 seconds      0.92 seconds
            10    0.34 seconds      1.03 seconds
           100    2.69 seconds      2.35 seconds
          1000   26.68 seconds     15.74 seconds

So, at first glance -- it looks like they might have something here, if you were to ignore the database (which I'm not inclined to do, personally). If I just looked at this test -- I might conclude that unless I'm going to execute the same statement over and over -- about 100 times -- I would best be served by using a Statement.

The problem is, there is a FLAW in our test! I discovered this flaw when I rewrote the code a little to go "multi-user". I knew that in a multi-user test, using System.out.println would not be a very "scalable" testing tool. It would be hard to collect and analyze the results. So, I did what I always do when benchmarking and set up a database table to hold the timing results.
The slightly modified Java code had an extra subroutine, saveTimes, to save the timing information into the database. The routine you can add to the test program above is:

    static PreparedStatement saveTimesPs;
    static void saveTimes( Connection con, String which, long elap ) throws Exception
    {
        if ( saveTimesPs == null )
            saveTimesPs = con.prepareStatement
            ("insert into timings " +
             "( which, elap ) values " +
             "( ?, ? )" );
        saveTimesPs.setString(1,which);
        saveTimesPs.setLong(2,elap);
        saveTimesPs.executeUpdate();
    }

Then, I modified the doStatement and doPreparedStatement routines like this:

    static void doStatement (Connection con, int count) throws Exception
    {
        long start = new Date().getTime();
        Statement st = con.createStatement();
        for (int i = 0; i < count; i++)
        {
            st.executeUpdate
            ("insert into testxxxperf " +
             "(id, code, descr, insert_user, insert_date)" +
             " values (" + i + ", 'ST - code" + i + "'" +
             ", 'St - descr" + i + "'" + ", user, sysdate ) ");
        }
        st.close();
        con.commit();
        long end = new Date().getTime();
        //System.out.println( "STMT" + " (" + (end-start) + ")" );
        saveTimes( con, "STMT", end-start );
    }

And I did likewise for the doPreparedStatement routine. This would simply save the times in a database table:

create table timings
( which varchar2(10),
  elap  number );

so we could run a query to get average/min/max timings from multiple users. So, remembering that the only thing I changed was to comment out the System.out.printlns and add a routine to record the time -- I ran this in single user mode to test. I found:

Rows to Insert    Statement        PreparedStatement
             1    0.05 seconds      0.05 seconds
            10    0.30 seconds      0.18 seconds
           100    2.69 seconds      1.44 seconds
          1000   28.25 seconds     15.25 seconds

That's different -- very different. Surprisingly different. All of a sudden -- not only is there no penalty ever for using a PreparedStatement, it quickly benefits us to use one, even in single user mode. What could be the cause? The code being timed was no different.
Not a single byte of the timed code was changed. Sure, we commented out a System.out.println and added a call to saveTimes -- but that code was never timed before. So, what did change, then?

Well, it turns out the saveTimes routine was the culprit here. If you look at that code -- it uses a PreparedStatement. It "warmed up" the PreparedStatement class. It paid the one-time penalty to load that class -- Java dynamically loads classes as you use them. The simple act of connecting did the same for the Statement class (it is used during the connection to Oracle). Once the timing of the initial load of the PreparedStatement class was factored out -- it turns out that a PreparedStatement is no more expensive to execute than a Statement in JDBC. The entire premise that a Statement is "lighter weight", "more efficient" for small numbers of statements was flawed -- wrong. If you used a single PreparedStatement anywhere in your code -- you would have paid this "load" penalty (which is pretty small when you look at it overall) for ALL PreparedStatements. That was the interesting part of this example -- the basic test itself was flawed: we were timing an unrelated "thing". Since most non-trivial Java JDBC programs are going to have to use a PreparedStatement somewhere -- they all pay this "load" penalty. Not only that, but this "load penalty" isn't a penalty at all -- it is simply the price of admission to building a scalable application on Oracle. If you don't use PreparedStatements -- if you insist on using Statements and "gluing the values in" -- opening yourself up to the SQL injection security flaw and buggy code -- your application will not scale as you add users. There is no "maybe" here, there is no "might not scale"; your application will not scale -- period.
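The "warm up" effect described above is ordinary JVM behavior: a class's static initialization runs only on its first active use. A hypothetical, self-contained sketch of the mechanism (no JDBC involved; all names invented):

```java
public class WarmupDemo {
    static boolean heavyInitialized = false;

    // Stands in for the PreparedStatement class: its static initializer
    // runs only on first active use -- the one-time "load" cost the
    // flawed benchmark was accidentally charging to PreparedStatement.
    static class Heavy {
        static { heavyInitialized = true; }
        static void use() { /* no-op; the first call triggers class init */ }
    }

    public static void main(String[] args) {
        // Merely being on the classpath costs nothing; loading is deferred.
        System.out.println(heavyInitialized);
        Heavy.use();   // "warm up": pay the initialization cost once
        // From here on, every use of Heavy is free of that cost.
        System.out.println(heavyInitialized);
    }
}
```

This is why adding the saveTimes routine changed the measurements: it moved the one-time class initialization outside the timed region.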
Amazing! June 5, 2003 - 9am Central time zone
Reviewer:
Christo Kutrovsky from Ottawa, ON Canada
That's a very important piece of code to know. I am not yet involved in JAVA, but I sure will be some day, and I'll need to know how to prove the BIND variables point. Thank you Tom.
Lots of PreparedStatement questions left unanswered March 15, 2004 - 10pm Central time zone
Reviewer:
Old, but young at SQL from Silicon Valley
Thanks for the education about bind variables and the chapter from your book (and many related similar pages). I'm getting ORA-00604 and ORA-04031 errors, but I don't know why. It doesn't happen here, just in the field, with the exact same environment and near-identical usage. Perhaps it's the lack of PreparedStatement use in places, but I don't hold connections that long, yet the errors keep happening -- it seems like closing connections (not pooling, at least in our code...) isn't releasing the memory!

Q) Shouldn't all these resources be released when I close the connection?

I get the multi- vs. single-user issue, which doesn't apply to this project, but not the shared memory issue.

Q) If I do sequential inserts using Statement (not PreparedStatement), does it keep around the memory and the latches after I use the statement object a second time?

Q) How about if I close the statement and then create a new one? The Java docs imply that re-use and closing both release associated Java and database resources.

I also get the parse-time issue -- that's big. But I don't get why more latching occurs with Statements.

Q) Does some latching occur during the parse? Why? I have only 3 processes using the database at a time, and only a few very lightly used tables are possibly read and written at the same time.

Note: I prefer Statement over PreparedStatement because the latter is so hard to debug. If there's a bad value, it doesn't even say which field! The errors are obscure. I guess I should subclass it and all the setType methods and salt away the values and reconstruct them for printing if there's an exception --

Q) Has anyone done this?

About 90% of the time PreparedStatement seems to work. So I went back to Statement for the rest. Statement is so much easier to use/debug that it saves a lot of development time, i.e., money, so I use it for selects, which I do many of, but each only once per hour (and never multi-user).
I understand the lower parsing time, but so many of my statements are infrequent.

Q) Sometimes I have constants in my PreparedStatements. Is this bad? (I put jobs in a table: time,0,0,0,0,0. Later on, when parts complete, they update the column.) What if I put the whole string literal in a PreparedStatement?

Q) Is a PreparedStatement containing no ? fields still more efficient than a Statement? Is it less efficient than putting some/all the fields in later with ps.setInteger()?

I appreciate the help, and will try using PreparedStatements again where they failed before. I'd like some more help at times -- I know you're busy. In the 6 months I've been on this project this site has refused questions...

Q) Where can I go to buy some help, per question or per hour?

(Other pet peeve: you can't query a connection for the number of open statements. If this were possible, I could leave connections open with confidence... I guess I have to take the time to subclass everything and keep track myself, sigh..., and trap exceptions in the class, and when one happens, close the statement and then re-throw...)

Thanks, obyas

Followup March 16, 2004 - 7am Central time zone:

If connections are not being released, you are using a connection pool of some sort -- but you would know it, as you would be setting it up. Yes, if you log off, all resources associated with that connection are released. However, if you are seeing ORA-04031s, and you haven't been using prepared statements everywhere, and you have lots of activity -- well, it is pretty straightforward what is happening (you are doing it to yourself by using Statements).

Prepared statements are *not* just about single user vs. multi-user. They are about "doing it right" vs. "doing it wrong". Plain Statements kill performance in both cases.
Not only that, but the "common" solution is to make the shared pool really large, which just pushes the problem back a couple of feet and makes it BIGGER (the correct approach is to downsize the shared pool -- make it smaller, not bigger -- while you fix your bug).

If you are doing inserts with a Statement -- there IS no second time. If you are doing inserts with a Statement -- each insert is unique, never before seen. It is quite simply the very *worst* thing on the planet you can do. Nothing worse, really.

Latches are quick in/out types of locks. Latches are held for very brief moments while we access a shared data structure (like the shared pool). You use them very briefly. You don't even need to finish the statement to give them up.

Using Statements is bad, bad, bad. I cannot say it any other way.

Why is it hard to understand that Statements result in higher latching than PreparedStatements? With Statements, every execute is a parse; a parse takes many latches. With PreparedStatements, only a parse is a parse -- you can execute over and over and over without parsing.

Tell me -- would you compile your "methods" (aka subroutines) before each invocation and then throw out the code? Would you compile your subroutines thousands of times in a single program execution? No, you wouldn't. So, why are you doing just that??? That is what you are doing by using a Statement.

Show me how a Statement makes "debugging" easier -- do that for me, OK? So, you get an ORA-01401. How the HECK does a Statement help you out there????? How, how, how -- just give me a teeny tiny small example??? I don't get it.

It wastes time.
It is harder to develop with.
It costs $$$$$$$ to use Statements. Look at the money you are wasting right here, right now. This is cheaper?????????????????

(Can you guess what MY pet peeve is by now?)

You do understand that Statements introduce a HUGE security risk, right? (Search Google for "sql injection".)
You do understand that Statements consume HUGE Java resources, right?
(Betcha you just create String temporaries all over the place.)
You do understand that Statements literally can kill the database, right? (You are there; you should know that -- you are doing it and seeing the results.)

A prepared statement with no "?" does not make much sense to me. I cannot believe you do the same exact insert (what happened to the primary key fields, eh?) over and over. But yes, if in fact you have a statement with all literal values that you execute over and over -- a PreparedStatement is more efficient than a Statement -- but I cannot see the use case for such a beast.

In the 6 months you've been on this project this site has taken

ask_tom@ASKUS> select count(*) from WWC_ASK_SUBMITTED_QUESTIONS$
  2  where timestamp > add_months(sysdate,-6);

  COUNT(*)
----------
      1013

new questions and performed:

ask_tom@ASKUS> select count(*) from WWC_ASK_REVIEW_FOLLOWUPS$
  2  where created_on > add_months(sysdate,-6);

  COUNT(*)
----------
      5387

followups like this one....... But what resources are out there? How about:

o groups.google.com -> comp.databases.oracle.* -- the internet Usenet newsgroups. There you'll get lots of people willing to provide you advice.
o otn.oracle.com -> get support -> discussion forums. Ditto.

Sorry, we did not write the JDBC spec, we just implemented it. However, it would be somewhat trivial, I believe, to have a hash table keyed on your connection: every time you parse, you add to the hash table; when you close, you remove. Not very difficult (sigh).

Tell me this -- when you used to process FILES in the olden days -- did you leak file handles? Or did you close them when you were done and open them when you needed them? Statements/prepared statements are no different than a file.
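On the reviewer's debugging complaint -- that a PreparedStatement hides which bound value caused an error -- the subclass-everything approach can be reduced to a small recording helper. This is a hypothetical sketch (the class and its methods are invented, and it naively assumes no '?' appears inside string literals): record binds as you set them, and render the SQL with values only when an error needs reporting.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical debugging aid: keep the bind values next to the SQL text,
// so an exception handler can print a copy/paste-able statement.
public class BindLog {
    private final String sql;
    private final Map<Integer, Object> binds = new TreeMap<>();

    public BindLog(String sql) { this.sql = sql; }

    // Call this alongside each ps.setInt/setString/etc.
    public void bind(int position, Object value) { binds.put(position, value); }

    // Substitute the recorded values into the '?' placeholders, in order.
    public String describe() {
        StringBuilder out = new StringBuilder();
        int position = 1;
        for (char c : sql.toCharArray()) {
            if (c == '?') {
                Object v = binds.get(position++);
                out.append(v instanceof String ? "'" + v + "'" : String.valueOf(v));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        BindLog log = new BindLog("insert into testxxxperf (id, code) values (?, ?)");
        log.bind(1, 42);
        log.bind(2, "PS - code42");
        System.out.println(log.describe());
    }
}
```

The rendered string is only for error messages and SQL*Plus reproduction; the real statement still executes with binds, so the shared pool sees one text.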
Answers to your questions, and a few things are not yet clear March 16, 2004 - 2pm Central time zone
Reviewer:
Old, but young at SQL from Silicon Valley
It seems like, from your answer, if I create a statement, use it, and close it, it keeps using more and more shared memory until I close the connection. I can accept that. (Note that it still doesn't make sense -- why would Oracle keep it around? If people aren't getting it, perhaps you're not sharing a piece of the reason. I can take it on faith, but I can internalize it much more deeply if I know why.)

Maybe the problem is different, though. You say:

> statements/prepared statements are no different than a file.

So which has more memory in use at a single time:

Statement foo = conn.createStatement();
foo.execute("select foo from bar where xxx");
foo.execute("select foo2 from bar where yyy");
foo.close();

Or:

Statement foo;
ResultSet rs;
foo = conn.createStatement();
rs = foo.execute("select foo from bar where xxx");
foo.close();
foo = conn.createStatement();
rs = foo.execute("select foo2 from bar where yyy");
foo.close();

I was assuming they were the same, based on the docs -- that re-using foo closes rs AND any resources associated with the first select. But my results would be explained if that were not true: if Statement.execute() opens a "file" that stays open until the close() (besides the ResultSet), rather than createStatement() opening the "file".

> if connections are not being released, you are using a connection pool

Nope, no pooling:

open a connection, create statements A and B
do 3-10 queries with statement A
each with 1-20000 inserts with statement B
close the statements
close the connection

So I'll change the inserts to all use PreparedStatements. If I can't get them to work, I'll close/reopen the statement every 50 or so inserts.

You asked why PreparedStatement is hard to debug. With a Statement, I can print out the String either before the exception happens or as part of the exception. Then I can copy/paste it into SQL*Plus and get a real error message that points me to some exact place in the query.
With a PreparedStatement, I have to write my own code that assembles the string and prints it out. Sometimes I assembled the statement by hand and it worked fine, but the PreparedStatement still complained about a bind variable (whoever left WHICH variable and its value out of the error msg should be shot). I know it's not the way to run a project, but I don't have much time, the Statement worked, and there's no hint in the JDBC API docs that there was anything wrong with Statement, except that I figured it took a lot longer -- but performance was fine. Until now. Yes, I'll fix it.

Often in development I'd work and re-work queries. Doing this in a string, the SELECT query LOOKS like itself. In my queries, which return lots of rows, the "parameters" configure the query rather than pinpointing one answer, so seeing them in the query helps a lot. In some of them the database table was a parameter. So semantically, the "variables" are different from what Oracle thinks of as the "bind variables". Having it all laid out is clearer. Putting in a ?, it's easier to set things in the wrong order. Say I rework three things in the query and then go fix the order and oops -- I fixed the last two stmt.setString() calls but not the first. Since the data in my queries is meaningful to the programmer, separating the data from the query makes it more error prone. Being called in to learn a new industry and their poorly designed database and undocumented database/application/industry is bad enough; using programming constructs that obscure clarity is a hindrance. (In fact, were I to do it over again, the only change I'd make is to PLAN to change to PreparedStatements at the END.)

Does it really matter if the 1-10 SELECTs are Statements vs. PreparedStatements? Each happens once per connection, so it won't save any time. I can easily change to closing each one after I use it. I don't have time to change all the SELECTs, and it has to ship, so too many changes is destabilizing.
Q) How do I get Oracle to tell me the number of connections and statements that are open, and the amount of shared memory in use?

Again, many thanks for the help, and the pointers to others. I have two days to fix a problem that takes 4 days to manifest...

Thanks, obyas

Followup March 16, 2004 - 2pm Central time zone:

That is not what I said at all -- if you use a statement and close it, it is gone from the client side. But over in your shared pool, you have a bunch of stuff that has to be managed, but that no one is ever going to use again. A waste of energy: we compiled a subroutine that'll never get used again, and now we have to manage it until we figure that out and hopefully age it out of the shared pool subpool it is in over time... We keep it around in the hope that the applications were designed to be efficient ;) Sometimes we are wrong and the applications are not designed that way.

I cannot compare those two snippets, as the first piece of code is not "sensible". You don't just "execute" a SELECT. You have to get and process a result set (else -- why bother??). If you executed that select AND you lost the "result set" (which is not the statement, but yet another object), that would be like opening a file, not closing it, and letting the file handle go out of scope. For all we know, you'll come back in 5 minutes and want that result set -- we cannot close it; only the well formed program can do that.

Maybe what you really want to do (if you want things fast, easy to code, easy to debug, maintainable, BOUND) is to use PL/SQL, actually. Java is a seriously hard language to write code (especially database code) in by hand. If you code your data logic in PL/SQL you'll be coding circles around the other developers.

a) You query the v$ tables -- a wealth of information exists therein.
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96536/ch3.htm#1109131
The end is near, thanks! March 16, 2004 - 8pm Central time zone
Reviewer:
Old, but young at SQL from Silicon Valley
> But over in your shared ... A waste of energy, we compiled a subroutine .. and .. manage it until .. age it out of the shared pool subpool ... > We keep it around in the hopes that the applications were designed to be So if I am running the same query once every hour, that's fine to use as a Statement, because the procedure et al will either have aged out or will be re-used? It just seems that when available memory in the pool gets low, Oracle should jettison these cached procedures, instead of throwing an out-of-memory error... But thanks, that explains most of it. But it still leaves 2 questions: 1. If my PreparedStatement contains literals, like "SELECT foo FROM table WHERE a=0 AND b=?", does the use of '0' cause the same sort of memory pool loss as a Statement? Is it better? Perhaps when this is closed the procedure made for it is completely freed? 2. How can I monitor the shared memory pools in SQLPlus? Thanks much, obyas After this ships, I will register and follow that link and learn about Pl/Sql. Followup March 16, 2004 - 8pm Central time zone: if it is EXACTLY the same query -- yes. a statement would be OK in that particular case. Oracle tries to jettison them but... the shared pool is sub-divided into sub pools - especially as it gets larger and larger (soon it takes longer to age out than to actually do the work you ask us). So, the larger you make it in a system that isn't binding -- the worse it actually gets. 1) a=0 is GOOD when coupled with "and b=?". the a=0 is invariant, the b=? is variant. you bind things that change (like you parameterize inputs to subroutines).... 2) select * from v$sgastat PLSQL rocks, you will like it.
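Tom's `select * from v$sgastat` can also be issued from the application itself. A sketch (the connection details in the comment are hypothetical; the pool and column names are the standard v$sgastat ones) that narrows the query to the shared pool's free memory, which is the number the reviewer wants to watch:

```java
public class Main {
    // The query from Tom's answer, narrowed to just the shared pool's
    // free memory row in v$sgastat.
    static final String SHARED_POOL_FREE =
        "SELECT name, bytes FROM v$sgastat "
      + "WHERE pool = 'shared pool' AND name = 'free memory'";

    public static void main(String[] args) {
        System.out.println(SHARED_POOL_FREE);
        // Against a live instance it would run like this (url/user/pw are
        // hypothetical -- adjust for your environment):
        //
        // try (Connection c = DriverManager.getConnection(url, user, pw);
        //      Statement s = c.createStatement();
        //      ResultSet rs = s.executeQuery(SHARED_POOL_FREE)) {
        //     while (rs.next())
        //         System.out.println(rs.getString(1) + ": " + rs.getLong(2));
        // }
    }
}
```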
One last question (I promise) March 17, 2004 - 12am Central time zone
Reviewer:
Old, but young at SQL from Silicon Valley
If I make a PreparedStatement with no bind variables, but still execute it as a prepared statement, is this exactly the same as a Statement, or will the PreparedStatement's procedures et al be nuked when the PreparedStatement is closed (where the Statement's procedure et al is cached in the shared pool)? Followup March 17, 2004 - 7am Central time zone: The prepared statement, when closed, will be just like a closed statement.
Thanks March 17, 2004 - 12pm Central time zone
Reviewer:
Old, but young at SQL from Silicon Valley
Running out of memory due to inability to flush a cache seems like a serious Oracle bug. I'd submit it if I knew how. Thanks much for the education. It sounds like the way the customer used my program in testing aggravated the problem, and that the cache would have aged out most queries in production. With the fixes, using PreparedStatements for the voluminous queries, there should be no problems. Long term, I'll subclass PreparedStatement to handle errors better and systematically, and track the number of open statements, to support keeping connections open. Or go to PL/SQL (I'll surf this site for a book recommendation.) Thanks MUCH -- your fast responses were a godsend on this one. -obyas
How long does it take for something to age out of the cache? March 21, 2004 - 12am Central time zone
Reviewer:
Old, but young at SQL from Silicon Valley
How long does it take for a query to age out of the cache? (I worked 70 hours last week, and there's a ton of other things to do -- I don't want to take the extra time to change 20 complex queries into PreparedStatements; each is used once per hour (most of the time would be in testing). The changes I already made reduced the number of Statements by a factor of 100 so far, but my boss is afraid there'll be pressure to change all of them.) Followup March 21, 2004 - 9am Central time zone: that is like asking how long it takes ice to melt or paint to dry. It depends. They may never age out. They may age out really fast. They may age out pretty soon. They may age out after a long time.
March 22, 2004 - 2pm Central time zone
Reviewer:
Old, but young at SQL from Silicon Valley
Can you give me any sense of average times, or the algorithm used to determine leaving the queue, or how to use the output of select * from v$sgastat to determine this? The customer wants a guarantee that it won't happen again (but wants it shipped this week, with other tasks as well.) ------ CREATE TABLE tmp_DS AS (SELECT * FROM DS WHERE ts BETWEEN ? AND ?) ORA-01027: bind variables not allowed for data definition operations Interesting. I suppose I could make a temporary table, put the values in there, and use its values for this query... I can see why you recommend PL/SQL. Followup March 22, 2004 - 3pm Central time zone: there is no way to gauge this. It depends on the size of your shared pool, which subpool your statement hashes to, what other statements just happen to get assigned to the same place, etc.... DDL, by the way, is not kept in the shared pool; therefore bind variables are not used for DDL, as it is not shareable.
One more, sigh March 22, 2004 - 7pm Central time zone
Reviewer:
Old, but young at SQL from Silicon Valley
Okay, I wrote 3 subclasses which delegate to Connection, Statement and PreparedStatement. For PreparedStatement I also keep an array of the parameters that come in, so if an exception happens, I can print out the query and paste it into SQL*Plus. Here's what my routine produces:

Error: prep.executeUpdate() with query
INSERT INTO LastDataFeed (t1, s1, t1p, s1p, t2, s2, periodMin, curtime, num) values (?, ?, ?, ?, ?, ?, ?, ?, ?)
INSERT INTO LastDataFeed (t1, s1, t1p, s1p, t2, s2, periodMin, curtime, num) values (1069804800000, '2003-11-26 00:00:00.000', 1069805100000, '2003-11-26 00:05:00.000', 1070496000000, '00:00:00', 5, '23:28:44', 0);
java.sql.SQLException: ORA-01006: bind variable does not exist

And when I paste it into SQL*Plus, it works! Hmmm, if in my code I respond to the exception by calling prep.executeUpdate(failedQueryString); I get the same error. But if I make a Statement and call stmt.executeUpdate(failedQueryString); IT WORKS! For completeness, the table is:

Name                Type
------------------- ------------
T1                  NUMBER
S1                  VARCHAR2(25)
T1P                 NUMBER
S1P                 VARCHAR2(25)
T2                  NUMBER
S2                  VARCHAR2(25)
PERIODMIN           NUMBER
CURTIME             VARCHAR2(20)
NUM                 NUMBER(9)

(I did a search on ORA-01006, but the 3 results were just for PL/SQL... Maybe I should just hack it -- if the PreparedStatement doesn't work, maybe I should submit the manufactured query as a Statement, sigh...) -obyas

Followup March 22, 2004 - 9pm Central time zone:

that means your subclass isn't doing the binding right. java.sql.SQLException: ORA-01006: bind variable does not exist -- you are "binding a variable that doesn't exist". It doesn't matter if you can print a pretty string; you are not calling the bind routine with the proper inputs.
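The bookkeeping half of the wrapper above can be reduced to pure string logic. A sketch (a hypothetical helper class, not a real java.sql.PreparedStatement subclass, which would also have to delegate every call) that records each bound value so a failing statement can be reprinted with literals, the way the reviewer's error dump does:

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Records the bound values for a statement so that, on an exception,
    // the query can be printed with its literals filled in for debugging.
    static class BindRecorder {
        final String sql;
        final List<String> binds = new ArrayList<>();

        BindRecorder(String sql) { this.sql = sql; }

        void setString(int pos, String v) { record(pos, "'" + v + "'"); }
        void setLong(int pos, long v)     { record(pos, Long.toString(v)); }

        // JDBC bind positions are 1-based; pad with "?" for unset slots.
        private void record(int pos, String literal) {
            while (binds.size() < pos) binds.add("?");
            binds.set(pos - 1, literal);
        }

        // Substitute the recorded literals into the '?' placeholders.
        String render() {
            StringBuilder out = new StringBuilder();
            int b = 0;
            for (char ch : sql.toCharArray())
                out.append(ch == '?' && b < binds.size()
                           ? binds.get(b++) : String.valueOf(ch));
            return out.toString();
        }
    }

    public static void main(String[] args) {
        BindRecorder r = new BindRecorder(
            "INSERT INTO LastDataFeed (t1, s1) VALUES (?, ?)");
        r.setLong(1, 1069804800000L);
        r.setString(2, "2003-11-26 00:00:00.000");
        System.out.println(r.render());
    }
}
```

Note this naive substitution would also replace a literal `?` inside a quoted string; a production version would need a small tokenizer, and -- as Tom's followup says -- the rendered string is only for printing, never for re-executing.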
But what does that mean? March 22, 2004 - 11pm Central time zone
Reviewer:
Old, but young at SQL from Silicon Valley
Does this mean one of the field names is wrong? If that were true, the string produced from the query+args would have produced errors as well. I think the problem was a combination of incomplete docs and presumptuousness on my part. I was also using the PreparedStatement as a Statement -- it inherits Statement's executeUpdate(queryString) routine, so I assumed it would execute a query (with no variables) without disturbing the prepared query. I guess the queryString overwrote the earlier one that had the '?'s, and it was complaining that it couldn't bind the vars I gave it to the string with no '?'s... My bad. Thanks, -obyas Followup March 23, 2004 - 6am Central time zone: No, it means you have a bug in the code you layered on top of the code.
New, related problem March 30, 2004 - 5pm Central time zone
Reviewer:
Old, but young at SQL from Silicon Valley
Maybe I didn't explain the previous one well. Executing a statement using a PreparedStatement object was nuking the PreparedStatement:

1 PreparedStatement p = con.prepareStatement("select x from foo where bar=?");
2 ResultSet rs = p.executeQuery("select count(*) from foo");
...
3 ResultSet rs = p.executeQuery();

Line 3 throws an error. Making a separate Statement object for executing line 2 allows line 3 to succeed. ------ The code also makes temporary views that focus on different aspects of an hour of data, then queries use these views. The view name changes, to avoid multithreading problems. My guess is that even PreparedStatement queries won't be reused from hour to hour, because each references a different view name. True? One solution would be to pool and reuse the view names. Thanks, Obyas

Dear Tom, Great answers.... Can you please explain, with regard to the Oracle architecture, how the use of bind variables protects against SQL injection.

Followup May 18, 2004 - 2pm Central time zone: if you have my book Effective Oracle by Design -- I go into this in great detail. Basically: select * from emp where ename = :x; -- no matter what you put into :x, it'll be treated as a string to be compared to ENAME. select * from emp where ename = '&X'; and I supply KING' or 'a' = 'a as &X -- all of a sudden, I dump the entire table instead of a row. Or even: KING' or (select some_function_I_normally_wouldnt_run(x) from dual) = '1 and I run a function I shouldn't, and so on. You are injecting ANY SQL you want into the system without binds.
Strange CPU x Elapsed time May 18, 2004 - 5pm Central time zone
Reviewer:
A reader
Why might CPU time be greater than elapsed time in tkprof? I'm using 9iR2. I'm not using binds -- this is only a test case.

SELECT COUNT(*)
FROM ticket
join historico on ticket.ID_ULTIMO_HISTORICO = historico.id_historico
join problema on problema.id_problema = historico.id_problema
join status on status.id_status = historico.id_status
join RESP_CAUSA on ticket.ID_RESP_CAUSA = RESP_CAUSA.ID_RESP_CAUSA
--join RESP_CAUSA tbl_resp_item on historico.ID_RESP_ITEM = tbl_resp_item.ID_RESP_CAUSA
join causa on causa.id_causa = ticket.id_causa
--join previsao_retorno on historico.id_prev_retorno = previsao_retorno.id_previsao
join qry_clientes on qry_clientes.id = ticket.id_cliente
join qry_operadores on historico.id_pessoa = qry_operadores.id
join qry_empresa on qry_operadores.id_empresa = qry_empresa.id_empresa
where ( qry_clientes.id in ( 575, 2571, 22720, 215, 1390, 1104, 36220, 1917, 26481, 20740, 3824, 170, 349, 33100, 231, 34000, 2128, 21103, 393, 850, 250, 2627, 67, 447, 676, 884, 2173, 1531, 921, 3360, 5873, 352, 30740, 1453, 38104 )
  and ( ( historico.id_status <> 2 and historico.id_status <> 5 )
     or ( historico.id_status in (2,5)
          and historico.data_evento > to_date('2004/05/14 13:50:24', 'YYYY/MM/DD HH24:MI:SS') ) ) )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      1.06       1.03          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        1      0.94       0.91        593       1424          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        3      2.00       1.95        593       1424          0          1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 95

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE
    195   NESTED LOOPS
    195    HASH JOIN
    195     NESTED LOOPS
    195      NESTED LOOPS
    195       HASH JOIN
   7783        NESTED LOOPS
   7783         NESTED LOOPS
   7783          HASH JOIN
     32           NESTED LOOPS
     35            INLIST ITERATOR
     35             INDEX RANGE SCAN PK_EMPRESA (object id 42459)
     32            INDEX UNIQUE SCAN PK_CLIENTE (object id 42596)
   7783           INDEX FAST FULL SCAN IDX_T_UL_H (object id 44783)
   7783          BITMAP CONVERSION TO ROWIDS
   7783           BITMAP INDEX SINGLE VALUE NDX_TBL_RESP_CAUSA (object id 41387)
   7783         INDEX UNIQUE SCAN CAUSA_PK11049574450818 (object id 33567)
  19429        TABLE ACCESS FULL HISTORICO
    195       INDEX UNIQUE SCAN STATUS_PK11049574471311 (object id 33576)
    195      INDEX UNIQUE SCAN PROBLEMA_PK11049484157278 (object id 33573)
   7430     VIEW
   7430      TABLE ACCESS FULL PESSOA
    195    INDEX UNIQUE SCAN PK_EMPRESA (object id 42459)

Followup May 19, 2004 - 7am Central time zone:

it is "measurement error" and the different granularities of the clocks. If you have either "Effective Oracle by Design" or "Optimizing Oracle Performance" by Cary Millsap -- we both go into this; he does more so. Basically, when you measure lots of tiny events you can sometimes get just 0's and 1's back (if they take about as long as your clock ticks), or they could be off by 1 clock tick in general. Add up a couple thousand of them and they tend to average out -- but the total can be off by a little. Basically, it just means in your case that cpu and ela are, in effect, the same.

Thanks Tom for the explanation you have provided. But I would appreciate it if you could give the same example in Java, because I am having a hard time convincing the Java developers of the same thing. I managed to convince them regarding bind variables and shared pool usage, but I am not able to convince them about SQL injection, since they are asking how a Statement object versus a PreparedStatement object matters. I don't have much experience with Java, so I hope you can give me an example which I can show to my Java developers. Regarding your new book, that is the first thing I am going to purchase this weekend.

Followup May 19, 2004 - 9am Central time zone:

get them my book "Effective Oracle by Design" -- there I not only benchmark it in that "language du jour", but explain the SQL injection in detail. They cannot make the leap?
they don't understand the FUNDAMENTAL difference between:

select * from emp where ename = :x

and

"select * from emp where ename = '" & anything_the_end_users_want_to_send_us & "'";

??????? really? They don't get that simple concept -- that the end user filling in a form can send anything they want? Have they googled "sql injection" on the web???? As a database manager, I reject out of hand as buggy any code that isn't making use of bind variables. Developers (of which I happen to be one, actually) can do whatever they want in test, but in production -- they don't make the rules. really.

Here is a snippet from the book:

(3) Without Bind Variables, Your Code Is Less Secure

Another concern when you don't use bind variables revolves around security, specifically the risk of SQL injection. To understand how it works, suppose that we have an application that asks a user for a username and password. We execute this query:

select count(*) from user_table where username = THAT_USER and password = THAT_PASSWORD

This seems innocent enough, right? Well, let's use SQL*Plus to test that theory and see what happens with and without bind variables.

ops$tkyte@ORA920> create table user_table
  2  ( username varchar2(30), password varchar2(30) );
Table created.

ops$tkyte@ORA920> insert into user_table values
  2  ( 'tom', 'top_secret_password' );
1 row created.

ops$tkyte@ORA920> commit;
Commit complete.

ops$tkyte@ORA920> accept Uname prompt "Enter username: "
Enter username: tom
ops$tkyte@ORA920> accept Pword prompt "Enter password: "
Enter password: i_dont_know' or 'x' = 'x

Notice the password we just used. It incorporates a little SQL there, doesn't it? Since we are just gluing strings together, not binding, the end user can actually type in arbitrary SQL and have it executed!
Our application takes this string now and continues on:

ops$tkyte@ORA920> select count(*)
  2    from user_table
  3   where username = '&Uname'
  4     and password = '&Pword'
  5  /
old   3:  where username = '&Uname'
new   3:  where username = 'tom'
old   4:    and password = '&Pword'
new   4:    and password = 'i_dont_know' or 'x' = 'x'

  COUNT(*)
----------
         1

Look at that. Apparently, the password 'i_dont_know' or 'x' = 'x' is our password. But if we use bind variables instead and accept the exact input from the end user, we see this:

ops$tkyte@ORA920> variable uname varchar2(30);
ops$tkyte@ORA920> variable pword varchar2(30);
ops$tkyte@ORA920> exec :uname := 'tom';
ops$tkyte@ORA920> exec :pword := 'i_dont_know'' or ''x'' = ''x';
PL/SQL procedure successfully completed.

ops$tkyte@ORA920> select count(*)
  2    from user_table
  3   where username = :uname
  4     and password = :pword
  5  /

  COUNT(*)
----------
         0

We get the correct answer. Think about this the next time you put an application out on the Internet. How many hidden vulnerabilities might you have lurking in there if you develop your application using string concatenation instead of bind variables? Think of the "neat" side effects someone could have on your data.
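For the reviewer who asked for a Java version: a minimal sketch of the same demonstration (table and column names taken from the book snippet above). No database is needed to see the point, because the damage is done at string-assembly time, before Oracle ever sees the statement:

```java
public class Main {
    // String gluing, as a Statement would be used: the user's input becomes
    // part of the SQL text itself, so the user can rewrite the query.
    static String glued(String uname, String pword) {
        return "select count(*) from user_table where username = '"
             + uname + "' and password = '" + pword + "'";
    }

    public static void main(String[] args) {
        // The classic malicious "password" from the book example.
        String attack = "i_dont_know' or 'x' = 'x";
        System.out.println(glued("tom", attack));
        // Note the predicate is now:  password = 'i_dont_know' or 'x' = 'x'
        // -- always true, so count(*) "authenticates" the attacker.
        //
        // With a PreparedStatement the SQL text never changes; the attack
        // string is just data compared against the PASSWORD column:
        //
        //   PreparedStatement ps = conn.prepareStatement(
        //       "select count(*) from user_table "
        //     + "where username = ? and password = ?");
        //   ps.setString(1, "tom");
        //   ps.setString(2, attack);   // harmless: it is only a value
    }
}
```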