Versions:
Hibernate 2.1
mysql-connector-java-3.1.7
c3p0-0.9.0
MySQL 4.1.11
I am finally running c3p0-0.9.0 in production (q.v. http://forum.hibernate.org/viewtopic.php?t=943426), which solved the original problem (NPEs) described in that link.
Unfortunately, in place of the NPEs, I'm now getting messages such as the following (again only occasionally and unpredictably, and only after running for a while):
Code:
2005-09-06@06:29:12.383 WARN [Timer-0] - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@130b13c -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
2005-09-06@06:29:12.383 WARN [Timer-0] - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@130b13c -- APPARENT DEADLOCK!!! Complete Status: [num_managed_threads: 3, num_active: 3; activeTasks: com.mchange.v2.resourcepool.BasicResourcePool$5@98a2e8 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1), com.mchange.v2.resourcepool.BasicResourcePool$5@ef4331 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0), com.mchange.v2.resourcepool.BasicResourcePool$5@244b92 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2); pendingTasks: com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@dd9094, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@3f03c3, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@11033db, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@bc80d8, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@1af24e2, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@19e900a, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@102b5ab, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@e51c3b, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@1a06d20, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@15c82f9, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@9ac6da, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@17547dd, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@c10304, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@1ad3677, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@d2eead, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@33ab8f, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@4473dc, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@12800a, com.mchange.v2.c3p0.stmt.GooGooStatementCache$2@9420f9]
I found these two threads, but neither seems to apply, as I'm on MySQL as stated above.
http://forum.hibernate.org/viewtopic.php?t=941477
http://forum.hibernate.org/viewtopic.php?t=943141
My hibernate.properties c3p0 lines:
hibernate.c3p0.min_size=2
hibernate.c3p0.max_size=10
hibernate.c3p0.timeout=300
hibernate.c3p0.max_statements=100
hibernate.c3p0.idle_test_period=1800
hibernate.c3p0.acquire_increment=2
Thanks,
Mark
Help me, Steve Waldman, you're my only hope. ^lame droid humor ;)
moonlight
Posted: Tue Sep 06, 2005 4:38 pm
Regular | Joined: Mon Aug 29, 2005 9:46 am | Posts: 102
Change "max_statements" to 0 and try... I'm not sure it will work, but that's what worked for me. The difference is that I was using Firebird, and there was already a related issue.
indgosky
Posted: Tue Sep 06, 2005 4:48 pm
Joined: Mon Jun 06, 2005 9:26 pm | Posts: 8
moonlight wrote:
Change "max_statements" to 0 and try
That would likely get rid of the statement cache messages, but unless I misunderstand the feature, it would be "fixed" at the cost of totally disabling the prepared statement cache. I'd kinda like to retain the performance value gained from reusing statements. But thanks anyway!
indgosky
Post subject: still hoping
Posted: Thu Sep 08, 2005 2:10 pm
Joined: Mon Jun 06, 2005 9:26 pm | Posts: 8
I'm surprised that no other theories or options have come out... Is there really no way to deal with the cached statement warnings other than to completely disable prepared statement caching? If so, that reduces the value of c3p0 to connection pooling only. I'd appreciate any other insights on the subject... Thanks.
swaldman
Posted: Thu Sep 08, 2005 4:14 pm
C3P0 Developer | Joined: Tue Jan 06, 2004 8:58 pm | Posts: 145
Mark,
May the StatementCache be with you.
Regarding your deadlocks, they are interesting. The tasks that are failing to terminate are not the Statement cache tasks, but Connection close() tasks. Nevertheless, past deadlocks have occurred because Statement close() was happening simultaneously with Connection close(), and I'm suspicious that we are seeing more of the same here, since you are clearly churning through a lot of PreparedStatements. (You have a zillion pending statement close tasks, and three Connection close tasks that are failing.)
0.9.0 cleared up the most obvious bad interaction between Statement close() and Connection close(), but it occurs to me that the strategy does not yet guarantee that a PreparedStatement close and a Connection close cannot occur asynchronously and near simultaneously by happenstance, especially when Statements are being close()ed quite frequently. So, my job is to tighten that up. [Even though in theory it should be foreseen and handled cleanly, many drivers don't like Statements to be used simultaneously with Connection ops or after Connection close().]
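To make the failure mode concrete, here is a minimal, self-contained toy (my illustration only, not c3p0 or driver code): two close() tasks take the same pair of monitors in opposite order, both threads hang, and you get exactly the shape of an APPARENT DEADLOCK:
Code:
// A toy model ONLY -- not c3p0 or MySQL driver code. It just shows the
// general hazard described above: a Statement close and a Connection close
// taking the same pair of monitors in opposite order, by happenstance.
public class CloseRaceDemo {
    private final Object connLock = new Object(); // stands in for the Connection's monitor
    private final Object stmtLock = new Object(); // stands in for a cached Statement's monitor

    // helper-thread task: close a cached Statement, then notify its Connection
    void closeStatement() {
        synchronized (stmtLock) {
            pause();                    // widen the race window for the demo
            synchronized (connLock) { } // also needs the Connection's monitor
        }
    }

    // helper-thread task: close the Connection, then destroy its cached Statements
    void closeConnection() {
        synchronized (connLock) {
            pause();
            synchronized (stmtLock) { } // also needs the Statement's monitor
        }
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }

    public static void main(String[] args) {
        final CloseRaceDemo demo = new CloseRaceDemo();
        new Thread(new Runnable() { public void run() { demo.closeStatement(); } }).start();
        new Thread(new Runnable() { public void run() { demo.closeConnection(); } }).start();
        // Both threads now block forever on each other's monitor -- the same
        // shape of hang the pool's deadlock detector complains about.
    }
}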
Your job is this:
1) Take moonlight's suggestion for just long enough to persuade yourself that the statement cache is implicated in the apparent deadlock, and write back to let me know whether the problem goes away when statement caching is turned off.
2) Try making max_statements much, much larger, or set it to 0 and set c3p0.maxStatementsPerConnection in c3p0.properties to as many distinct PreparedStatements as your application ever uses. This will both improve the performance of your application (statement caching doesn't help much if you are always churning through the statements) and make the deadlock you are seeing, presuming it is an artifact of statement close occurring simultaneously with connection close, much less likely. The only likely downside of a large max_statements is the potential memory footprint (see how it works for you), but there may be many upsides. Do give it a shot; a sketch of the settings follows.
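For 2), the settings would look something like the lines below. The numbers are only illustrative, so size them to your application:
Code:
# option (a), in hibernate.properties: a much larger global cache (example value)
hibernate.c3p0.max_statements=500

# option (b): set hibernate.c3p0.max_statements=0, then in c3p0.properties cap
# per Connection instead (example value; use your app's distinct statement count)
c3p0.maxStatementsPerConnection=50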
Good luck!
Steve (c3p0 guy)
indgosky
Posted: Mon Sep 12, 2005 6:58 pm
Joined: Mon Jun 06, 2005 9:26 pm | Posts: 8
swaldman wrote:
Your job is this: 1) Take moonlight's suggestion for just long enough to persuade yourself that the statement cache is implicated in the apparent deadlock, and write back to let me know whether the problem goes away when statement caching is turned off.
I set my production statement cache to zero on Thursday, and as of four days later there hasn't been a single warning or exception (whereas before there would have been tens of them). So this result would seem to support the theory that:
Quote:
past deadlocks have occurred because Statement close() was happening simultaneously with Connection close(), and I'm suspicious that we are seeing more of the same here
As for:
Quote:
2) try making max_statements much, much larger, ... The only likely downside of a large max_statements is the potential memory footprint
That likely won't be workable for me, as I am not running a single program (i.e., Tomcat) that can absorb a bunch of new overhead. My environment is a collection of separate daemons and jobs that each connect to the database to do separate work, and I can't allow all N of those processes to grow a larger memory footprint.
I guess I'll leave caching off for now (which isn't currently a performance issue) and hope that there will be a concrete fix for it before I absolutely need to turn caching back on.
Thanks!
Mark
ramvem
Post subject: c3p0 Apparent Deadlock messages
Posted: Mon Jul 10, 2006 8:05 pm
Joined: Mon Jul 10, 2006 5:52 pm | Posts: 6 | Location: Bay Area, California, USA
I am seeing similar issues with MySQL 4.1 using the JDBC driver version 3.1.11. We did find a bug in the MySQL JDBC driver source code where statement close and connection close could run into a deadlock. Is there any kill mechanism in c3p0 for a hanging close() call? Do we interrupt the thread or something like that?
Thanks, Steve, for your notes on making statement close() less frequent. I haven't tried it, but I'm pretty sure I wouldn't be able to use this approach, because I'm running a highly multi-threaded text indexing application that is somewhat sensitive to memory usage.
swaldman
Posted: Sat Sep 02, 2006 9:15 pm
C3P0 Developer | Joined: Tue Jan 06, 2004 8:58 pm | Posts: 145
ramvem,
c3p0-0.9.1 will have a "maxAdministrativeTaskTime" parameter, which will call interrupt() on c3p0's helper threads when an administrative task [Connection acquisition or test, Connection or cached Statement destruction] is taking too long. APPARENT DEADLOCK warnings will also include stack traces of the deadlocking tasks if you are on a 1.5+ VM (which offers the Thread.getStackTrace() method). Though these APPARENT DEADLOCKs seem to be a lot less common in 0.9.0.x than they used to be, they still do occur for some users, and I'm hoping to get a much better handle on why...
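Enabling it will look something like this in c3p0.properties (the 30-second figure is just an example):
Code:
# c3p0.properties -- interrupt any administrative task running longer than
# this many seconds; 0 (the default) leaves the feature off
c3p0.maxAdministrativeTaskTime=30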
c3p0-0.9.1-pre7 will be available within a few days with this functionality (and quite a few other changes). If you have the luxury of being able to test a fresh prerelease, please do give it a good hard go.
smiles,
Steve
moraleslos
Posted: Thu Dec 07, 2006 12:42 pm
Beginner | Joined: Fri Mar 12, 2004 1:02 pm | Posts: 23
I'm actually running into the same issue using hibernate-3.2.1.ga, c3p0-0.9.1-pre6, and mysql-connector-java-5.0.4 on MySQL 5.0.27. The caveat is that it happens when I run under ServiceMix. The issue thread can be found here:
http://www.nabble.com/issues-with-deadl ... 12049.html
I wanted to try upgrading to c3p0-0.9.1-pre7 but the c3p0 site is currently down. If you have any suggestions for me to troubleshoot, please let me know. Thanks in advance.
-los
swaldman
Posted: Thu Dec 07, 2006 9:36 pm
C3P0 Developer | Joined: Tue Jan 06, 2004 8:58 pm | Posts: 145
hi,
please try c3p0-0.9.1-pre11. i think most of these APPARENT DEADLOCK issues are now resolved, though of course you can prove me wrong...
if you have trouble reaching c3p0's website on mchange.com (we're having some network issues), please go to sourceforge.net and search for c3p0. sourceforge hosts the actual releases.
good luck!
steve
moraleslos
Posted: Thu Dec 07, 2006 10:00 pm
Beginner | Joined: Fri Mar 12, 2004 1:02 pm | Posts: 23
Hi,
I actually switched to pre11, and it's still giving me "too many connections" issues. Can I send you my stack trace directly via email? Thanks.
-los
moraleslos
Posted: Fri Dec 08, 2006 1:40 pm
Beginner | Joined: Fri Mar 12, 2004 1:02 pm | Posts: 23
I've got it working. It's actually an issue with Hibernate 3.2.1.ga. Without changing any of my code, I switched back to 3.2.0.ga (still using c3p0-0.9.1-pre11) and everything worked fine. So it's either a bug in Hibernate, or c3p0 needs updating to reflect the 3.2.1 changes (I think it's the former, though).
-los
hoist2k
Posted: Fri Dec 15, 2006 2:54 pm
Joined: Wed Dec 13, 2006 3:21 pm | Posts: 4
moraleslos wrote:
I've got it working. It's actually an issue with Hibernate 3.2.1.ga. Without changing any of my code, I switched back to 3.2.0.ga (still using c3p0-0.9.1-pre11) and everything worked fine. So it's either a bug in Hibernate, or c3p0 needs updating to reflect the 3.2.1 changes (I think it's the former, though).
I've been on pre11 and 3.2.1 for a little while and have been having these problems, usually when reloading web apps from the Tomcat manager. I just rolled back to 3.2.0 and haven't had any problems yet.
hoist2k
Posted: Tue Jan 16, 2007 12:21 pm
Joined: Wed Dec 13, 2006 3:21 pm | Posts: 4
Uh oh. Just had this problem again after a number of weeks without it. I'm still using hibernate 3.2.0.
I'll paste my stack trace in case it's helpful for anyone else. Otherwise, I'm out of ideas:
Code:
08:15:02,086 INFO ConnectionProviderFactory:72 - Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider
08:15:02,362 INFO AbstractPoolBackedDataSource:444 - Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> jpgfmc7kc4elm0iyc8r5|43a6684f, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> com.mysql.jdbc.Driver, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> jpgfmc7kc4elm0iyc8r5|43a6684f, idleConnectionTestPeriod -> 0, initialPoolSize -> 4, jdbcUrl -> jdbc:mysql://localhost:3306/mysql, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 50, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 4, numHelperThreads -> 3, numThreadsAwaitingCheckoutDefaultUser -> 0, preferredTestQuery -> null, properties -> {user=******, password=******}, propertyCycle -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false ]
08:15:22,164 WARN ThreadPoolAsynchronousRunner:608 - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@4d480ea -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
08:15:22,172 WARN ThreadPoolAsynchronousRunner:624 - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@4d480ea -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 3
Active Threads: 3
Active Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@158105e8 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2)
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@49aacd5f (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1)
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@6c913dc1 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0)
Pending Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@336d8196
Pool thread stack traces:
Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2,5,main]
java.net.PlainSocketImpl.socketConnect(Native Method)
java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
java.net.Socket.connect(Socket.java:516)
java.net.Socket.connect(Socket.java:466)
java.net.Socket.<init>(Socket.java:366)
java.net.Socket.<init>(Socket.java:208)
com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:147)
com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:276)
com.mysql.jdbc.Connection.createNewIO(Connection.java:2641)
com.mysql.jdbc.Connection.<init>(Connection.java:1531)
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:266)
com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:135)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:978)
com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1774)
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1,5,main]
java.net.PlainSocketImpl.socketConnect(Native Method)
java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
java.net.Socket.connect(Socket.java:516)
java.net.Socket.connect(Socket.java:466)
java.net.Socket.<init>(Socket.java:366)
java.net.Socket.<init>(Socket.java:208)
com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:147)
com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:276)
com.mysql.jdbc.Connection.createNewIO(Connection.java:2641)
com.mysql.jdbc.Connection.<init>(Connection.java:1531)
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:266)
com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:135)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:978)
com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1774)
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0,5,main]
java.net.PlainSocketImpl.socketConnect(Native Method)
java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
java.net.Socket.connect(Socket.java:516)
java.net.Socket.connect(Socket.java:466)
java.net.Socket.<init>(Socket.java:366)
java.net.Socket.<init>(Socket.java:208)
com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:147)
com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:276)
com.mysql.jdbc.Connection.createNewIO(Connection.java:2641)
com.mysql.jdbc.Connection.<init>(Connection.java:1531)
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:266)
com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:135)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:978)
com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1774)
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
hoist2k
Posted: Tue Jan 16, 2007 12:38 pm
Joined: Wed Dec 13, 2006 3:21 pm | Posts: 4
hoist2k wrote:
Uh oh. Just had this problem again after a number of weeks without it.
Never mind. It was an iptables rule problem. My bad.
swaldman wrote:
please try c3p0-0.9.1-pre11. i think most of these APPARENT DEADLOCK issues are now resolved, though of course you can prove me wrong...
We're using c3p0-0.9.1.2, and this problem happens consistently under heavy load (especially if the machine itself is loaded to the point where it's swapping).
As far as I can tell, after reading the c3p0 code for a while, this is a fundamental problem with the way the DeadlockDetector code is written. The detection algorithm results in false positives under heavy system load.
The detection algorithm is simply this: if no background task completes and no new tasks are posted between consecutive runs of the detector, then a deadlock is assumed, and the detector interrupts all the pool threads and starts new ones.
Since there is no problem other than a heavily loaded machine, these thread interruptions cause resource leaks: the background tasks do no cleanup when they are interrupted, and the whole cycle is likely to repeat.
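In rough Java, here is the heuristic as I read it (my paraphrase, not the actual c3p0 source):
Code:
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// My paraphrase of the detector's heuristic -- NOT c3p0's actual source.
public class NaiveDeadlockDetector {
    private final List<Runnable> pending = new CopyOnWriteArrayList<Runnable>();
    private volatile int completed = 0;   // bumped by pool threads on task completion
    private volatile int posted = 0;      // bumped when a new task is enqueued
    private int completedAtLastCheck = 0;
    private int postedAtLastCheck = 0;

    // Called periodically from a timer. If work is queued but nothing has
    // completed or arrived since the last check, a deadlock is assumed.
    synchronized void check(Thread[] poolThreads) {
        boolean noProgress = !pending.isEmpty()
                && completed == completedAtLastCheck
                && posted == postedAtLastCheck;
        if (noProgress) {
            // On a merely overloaded, swapping machine this is a false
            // positive: the pool threads are slow, not stuck. Interrupting
            // them mid-task skips their cleanup and leaks resources.
            System.err.println("APPARENT DEADLOCK!!!");
            for (int i = 0; i < poolThreads.length; i++)
                poolThreads[i].interrupt();
            // ...and emergency threads would then be spawned for `pending`.
        }
        completedAtLastCheck = completed;
        postedAtLastCheck = posted;
    }
}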
Am I off in the weeds here, or is this what's going on? We're pretty close to just hacking away the deadlock detector code -- the cure is much worse than the disease, it seems.
There's a second problem we've run into that's somewhat self-inflicted, but it points out another vulnerability in the code. We configure acquireRetryAttempts to zero (meaning retry indefinitely). The problem is that this can cause resource starvation under certain conditions: if all pool threads are blocked acquiring connections, then no threads are available to refurbish released connections, and things grind to a halt.
This is easy to configure around, but it seems like something that should at least be documented as a vulnerability.
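For the record, the workaround is just to bound the retries, e.g. in c3p0.properties (30 attempts and a 1000 ms delay happen to match the values shown in the pool dump earlier in this thread):
Code:
# c3p0.properties -- bounded retries instead of 0 (retry forever), so pool
# threads eventually give up the acquire and stay available for other work
c3p0.acquireRetryAttempts=30
c3p0.acquireRetryDelay=1000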