Miscellaneous MQ Connection Problems

1. java.lang.NoClassDefFoundError: com/ibm/disthub2/spi/ClientTranslate

Problem (Abstract)

You get "java.lang.NoClassDefFoundError: com/ibm/disthub2/spi/ClientTranslate" every time you restart your WebSphere Application Server 6.0.2.25 using the com.ibm.mq.jar and com.ibm.mqjms.jar provided by WebSphere MQ 6.0.2.2 or WebSphere MQ 6.0.2.3.

Cause

The java.lang.NoClassDefFoundError: com/ibm/disthub2/spi/ClientTranslate reports that the JVM cannot find the ClientTranslate class. ClientTranslate is packaged in the com.ibm.disthub2.spi package, which is shipped in dhbcore.jar for the WebSphere MQ V6 client. In WebSphere MQ 5.3 it was part of the com.ibm.disthubmq.spi package and was bundled inside com.ibm.mqjms.jar.

Resolving the problem

For MQ and/or WebSphere Application Server to find the ClientTranslate class, you must include dhbcore.jar in your classpath.


Documentation APAR IY76882 was raised to note that this requirement (to include dhbcore.jar in your classpath) was not specified in the original versions of the MQ Using Java™ manual.

Locate the missing jar, dhbcore.jar, and add it to the project's library:

On my machine, the path is:

<dir>\RAD7\runtimes\base_v61\lib\WMQ\java\lib\dhbcore.jar 

 

Also make sure the other jars MQ needs are added to the project's library:

<dir>\RAD7\runtimes\base_v61\lib\WMQ\java\lib\com.ibm.mqjms.jar

<dir>\RAD7\runtimes\base_v61\lib\WMQ\java\lib\com.ibm.mq.jar
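
With all three jars on the classpath, the MQ classes load outside the IDE as well. As a sketch, a client program could be run with a command like the following, reusing the paths above (MyMqClient is a placeholder class name):

java -cp "<dir>\RAD7\runtimes\base_v61\lib\WMQ\java\lib\com.ibm.mq.jar;<dir>\RAD7\runtimes\base_v61\lib\WMQ\java\lib\com.ibm.mqjms.jar;<dir>\RAD7\runtimes\base_v61\lib\WMQ\java\lib\dhbcore.jar;." MyMqClient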

2. MQException: Completion Code 2, Reason 2009 (MQJE016)

Problem (Abstract)

The IBM® WebSphere® MQ Reason Code 2009 (MQRC_CONNECTION_BROKEN) may occur when an application installed in WebSphere Application Server V5 tries to connect to a WebSphere MQ or Embedded Messaging queue manager. Here are some examples of errors that are caused by Reason Code 2009:

The following exception was logged javax.jms.JMSException:
MQJMS2008: failed to open MQ queue
com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2009

javax.jms.JMSException: MQJMS2005: failed to create MQQueueManager for 'mynode:WAS_mynode_server1'
at com.ibm.mq.jms.services.ConfigEnvironment.newException(ConfigEnvironment.java:556)
at com.ibm.mq.jms.MQConnection.createQM(MQConnection.java:1736)
...
com.ibm.mq.MQException: MQJE001: An MQException occurred: Completion Code 2, Reason 2009
MQJE003: IO error transmitting message buffer
at com.ibm.mq.MQManagedConnectionJ11.<init>(MQManagedConnectionJ11.java:239)
...

WMSG0019E: Unable to start MDB Listener MyMessageDrivenBean, JMSDestination
jms/MyQueue : javax.jms.JMSException: MQJMS2005: failed to create
MQQueueManager for 'mynode:WAS_mynode_server1'
at com.ibm.mq.jms.services.ConfigEnvironment.newException(ConfigEnvironment.java:556)
at com.ibm.mq.jms.MQConnection.createQM(MQConnection.java:1736)
...
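
When an application needs to react to these failures, the MQ reason code travels in the JMSException's linked exception. A minimal sketch, using only the classes already shown in the traces above:

import javax.jms.JMSException;

import com.ibm.mq.MQException;

public class ReasonCodeCheck {
    // Pulls the MQ reason code (for example 2009) out of a JMSException;
    // returns -1 when the linked exception is not an MQException.
    static int mqReasonCode(JMSException je) {
        Exception linked = je.getLinkedException();
        if (linked instanceof MQException) {
            return ((MQException) linked).reasonCode;
        }
        return -1;
    }
}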

Cause

The connection may be broken for a number of different reasons; the 2009 return code indicates that something prevented a successful connection to the Queue Manager. The most common causes for this are the following:


1. A firewall that is terminating the connection.
2. An IOException that causes the socket to be closed.
3. An explicit action to cause the socket to be closed by one end.
4. The queue manager is offline.
5. The maximum number of channels allowed by the queue manager has been reached.
6. A configuration problem in the Queue Connection Factory (QCF).


Resolving the problem

Preventing the firewall from terminating connections
Configure the Connection Pool and Session Pool settings for the QCF that is configured in WebSphere Application Server so that WebSphere can remove connections from the pool before the firewall drops them. Change the value of Min Connections to 0 and set the Unused Timeout to half the firewall timeout, in seconds. For example, if the firewall times out connections after 15 minutes (900 seconds), set the Unused Timeout to 450 seconds.

Configuring to minimize the possibility of an IOException 
On a UNIX® system, configure the TCP stanza of the qm.ini for your queue manager to contain this entry:
KeepAlive=YES
This setting causes TCP/IP to check periodically that the other end of the connection is still available. If it is not, the channel is closed.
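
In qm.ini, that entry sits under the TCP stanza:

TCP:
   KeepAlive=YES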

Also follow the instructions in Tuning operating systems in the WebSphere Application Server Information Center. These instructions have you tune the operating system's TCP/IP configuration so that sockets that are in use are not closed unexpectedly. For example, on Solaris you set TCP_KEEPALIVE_INTERVAL on the WebSphere MQ machine. Set it lower than the firewall timeout; otherwise the keepalive packets will not be sent often enough to keep the connection between WebSphere Application Server and MQ open.
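
On Solaris, for instance, the interval is set with ndd and is expressed in milliseconds; the value below (300000 ms, that is 5 minutes) is only an illustration that stays under a 15-minute firewall timeout:

ndd -set /dev/tcp tcp_keepalive_interval 300000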

NOTE: Make sure the firewall is configured to let keepalive packets pass through; a connection broken error can be caused by the firewall blocking them.

An explicit action can cause this
An explicit action, such as stopping or restarting the queue manager, also causes Reason Code 2009. There are also some MQ defects that could result in unexpected 2009 errors. When this document was written, APARs that addressed these defects included IY59675, IC42636, PQ87316, and PQ93130. It is a good idea to install the latest available Fix Pack for WebSphere MQ or Interim Fix for Embedded Messaging.

The maximum number of channels has been reached
This could be because the number of channels allowed for the JMS provider is not large enough, or because errors are preventing channels from closing, so that they cannot be reused. For additional information, refer to these technotes: "MQ Manager Stops Responding To JMS Requests" and "WebSphere Application Server and MQ do not agree on the number of JMS connections".
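
If the channel limit itself is too low, it can be raised in the CHANNELS stanza of the queue manager's qm.ini; the value below is only an illustration (the documented default is 100):

CHANNELS:
   MaxChannels=200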

A QCF Configuration problem 
This problem can also be caused by a QCF configuration problem. If the Queue Manager, Host, Port, and Channel properties are not set correctly, Reason Code 2009 occurs when an application uses the QCF to try to connect to the queue manager.
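
For reference, the same four properties appear when a QCF is built programmatically with the WebSphere MQ JMS classes. A minimal sketch; every value below is a placeholder and must match the queue manager's actual listener and channel:

import javax.jms.JMSException;

import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQQueueConnectionFactory;

public class QcfSetup {
    // Builds a client-mode QCF; a wrong value in any of these four
    // properties surfaces later as a connection failure such as 2009.
    static MQQueueConnectionFactory buildQcf() throws JMSException {
        MQQueueConnectionFactory qcf = new MQQueueConnectionFactory();
        qcf.setQueueManager("QM1");            // placeholder queue manager name
        qcf.setHostName("mqhost.example.com"); // placeholder host
        qcf.setPort(1414);                     // placeholder listener port
        qcf.setChannel("SYSTEM.DEF.SVRCONN");  // placeholder channel
        qcf.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP); // client over TCP/IP
        return qcf;
    }
}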

Other best practices

  1. Set the Purge Policy of the QCF Connection Pool and Session Pool to EntirePool. The default value is FailingConnectionOnly. When the Purge Policy is set to EntirePool, the WebSphere connection pool manager will flush the entire connection pool when a fatal connection error, such as Reason Code 2009, occurs. This will prevent the application from getting other bad connections from the pool.
  2. If the Reason Code 2009 error occurs when a message-driven bean (MDB) tries to connect to the queue manager, configure the MAX.RECOVERY.RETRIES and RECOVERY.RETRY.INTERVAL properties so that the message listener service retries the connection; example values follow this list. See Message listener service custom properties for more information on these properties.
  3. If you are not using an MDB, but the Reason Code 2009 error occurs for an application that sends messages to a queue, the application should have logic to retry the connection when the error occurs, as sketched after this list. See Developing a J2EE application to use JMS for information on how to program your application to use a JMS connection. Also see Tips for troubleshooting WebSphere Messaging for additional details.
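
For item 2, the listener-service properties are plain name=value custom properties; the values below are illustrative (the interval is in seconds):

MAX.RECOVERY.RETRIES=5
RECOVERY.RETRY.INTERVAL=60

For item 3, a minimal retry sketch; the attempt count and back-off delay are arbitrary illustrative choices:

import javax.jms.JMSException;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;

public class RetryingClient {
    // Retries the connection when the queue manager is temporarily
    // unreachable, for example after a Reason Code 2009 failure.
    static QueueConnection connectWithRetry(QueueConnectionFactory qcf,
                                            int maxAttempts)
            throws JMSException, InterruptedException {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        JMSException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return qcf.createQueueConnection();
            } catch (JMSException e) {
                lastFailure = e;
                Thread.sleep(5000L); // arbitrary back-off before retrying
            }
        }
        throw lastFailure;
    }
}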
