Troubleshooting 'enq: TX - index contention' Waits in a RAC Environment

The system is experiencing 'enq: TX - index contention' wait events.

Reposted from the following MOS article:

https://support.oracle.com/epmos/faces/ui/km/SearchDocDisplay.jspx?id=873243.1&type=DOCUMENT&displayIndex=1&returnToSrId=&srnum=&org.apache.myfaces.trinidadinternal.webapp.AdfacesFilterImpl.IS_RETURNING=true&_adf.ctrl-state=ai02bwdac_4

Applies to:

  Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.2.0.3 - Release: 10.2 to 11.2
Information in this document applies to any platform.

Goal

This document explains how to troubleshoot and resolve 'enq: TX - index contention' waits in a RAC environment.


Solution

When we run OLTP systems in an Oracle RAC environment, it is possible to see high TX enqueue contention on indexes associated with tables that have high concurrency from the application. This usually happens when the application performs a lot of INSERTs and DELETEs concurrently from all the instances.

The reason for this is index block splits that occur while inserting a new row into the index. Transactions have to wait for the TX lock in mode 4 until the session performing the block split completes the operation.

A session initiates an index block split when it cannot find space in the index block where it needs to insert a new key. Before starting the split, it cleans out the deleted keys in the block to check whether there is sufficient space in the block.

The splitting session has to do the following:

   o  Allocate a new block.
   o  Copy a percentage of rows to the new buffer.
   o  Add the new buffer to the index structure and commit the operation.

In RAC environments, this can be an expensive operation due to the global cache operations involved. The impact is greater if the split happens at a branch or root block level.

Causes:

Most probable reasons are:

   o  Indexes on tables which are accessed heavily by the application.
   o  Indexes on table columns whose values are populated by a monotonically increasing sequence.

Identifying the Hot index:

The indexes experiencing contention can be identified from the AWR reports taken during the time of the issue.
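
A quick live check before going to AWR is to query GV$SESSION for current waiters across all RAC instances. This is a minimal sketch; ROW_WAIT_OBJ# usually points at the object being waited on and can be joined to DBA_OBJECTS to name the index:

    -- Sketch: sessions currently waiting on the event, across all instances
    SELECT inst_id, sid, sql_id, row_wait_obj#
      FROM gv$session
     WHERE event = 'enq: TX - index contention';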

Top 5 Timed Events:

Event                       Waits      Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
enq: TX - index contention  89,350     40,991   459           63.3               Concurrency
db file sequential read     1,458,288  12,562   9             19.4               User I/O
CPU time                               5,352                  8.3

Instance Activity Stats:

Statistic               Total   per Second  per Trans
branch node splits      945     0.26        0.00
leaf node 90-10 splits  1,670   0.46        0.00
leaf node splits        35,603  9.85        0.05
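
Outside of AWR, the same split counters can be read directly from V$SYSSTAT. A minimal sketch using the documented statistic names:

    -- Sketch: cumulative index split counts since instance startup
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('branch node splits',
                    'leaf node splits',
                    'leaf node 90-10 splits');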

And the objects can be found either from V$SEGMENT_STATISTICS or from the 'Segments by Row Lock Waits' section of the AWR reports.
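
For example, a query along the following lines ranks segments by that statistic (a sketch; the owner filter is just to cut out dictionary objects):

    -- Sketch: rank segments by 'row lock waits' from V$SEGMENT_STATISTICS
    SELECT owner, object_name, object_type, value AS row_lock_waits
      FROM v$segment_statistics
     WHERE statistic_name = 'row lock waits'
       AND owner NOT IN ('SYS', 'SYSTEM')
     ORDER BY value DESC;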


Segments by Row Lock Waits:

Owner     Tablespace  Object Name           Obj. Type  Row Lock Waits  % of Capture
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_PK  INDEX      3,425           43.62
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_ST  INDEX      883             11.25
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_DT  INDEX      682             8.69

Solutions:

The solution here is to tune the indexes to avoid heavy access on a small set of blocks.

The following are options we could try:

o       Rebuild the indexes listed in the 'Segments by Row Lock Waits' section of the AWR report as reverse key indexes, or hash partition them.

From the Performance Tuning Guide -

Reverse key indexes are designed to eliminate index hot spots in insert-heavy applications. These indexes are excellent for insert performance, but the downside is that they may hurt the performance of index range scans.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/design.htm#sthref112
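
Rebuilding as a reverse key index is a single DDL statement. A sketch using the ACSS_ORDER_HEADER_PK index from the AWR excerpt above purely as an illustration:

    -- Sketch: rebuild an existing index with its key bytes reversed
    ALTER INDEX acssprod.acss_order_header_pk REBUILD REVERSE;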


The hash method can improve the performance of indexes where a small number of leaf blocks have high contention in a multiuser OLTP environment. In some OLTP applications, index insertions happen only at the right edge of the index. This can happen when the index is defined on monotonically increasing columns. In such situations, the right edge of the index becomes a hot spot because of contention for index pages, buffers, and latches for update, plus additional index maintenance activity, which results in performance degradation.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/data_acc.htm#i2678
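
Hash partitioning an existing index means dropping and re-creating it. A sketch with hypothetical table and column names; the partition count (ideally a power of two) should be sized to the concurrency:

    -- Sketch: re-create a hot index as a global hash-partitioned index,
    -- spreading the hot right edge across 16 partitions
    DROP INDEX acssprod.acss_order_header_st;

    CREATE INDEX acssprod.acss_order_header_st
        ON acssprod.acss_order_header (status_dt)   -- hypothetical column
        GLOBAL PARTITION BY HASH (status_dt) PARTITIONS 16;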

It is recommended to test application performance after rebuilding the indexes as reverse key or hash partitioned.

o       Consider increasing the CACHE size of the sequences.

When we use monotonically increasing sequences to populate column values, the leaf block holding the highest sequence key changes with every insert, which makes it a hot block and a potential candidate for a block split.

With a larger CACHE size (and preferably the NOORDER option), each instance starts using sequence keys from a different range, which reduces the number of index keys being inserted into the same set of leaf blocks.
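
For example (a sketch; the sequence name is hypothetical, and the cache size should be sized to the insert rate):

    -- Sketch: enlarge the per-instance sequence cache; NOORDER lets each
    -- RAC instance draw keys from its own cached range
    ALTER SEQUENCE acssprod.order_id_seq CACHE 1000 NOORDER;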
