Oracle database performance on Azure NetApp Files single volumes
09/30/2020
This article addresses the following topics about Oracle in the cloud. These topics might be of particular interest to a database administrator, cloud architect, or storage architect:
When you drive an online transaction processing (OLTP) workload (mostly random I/O) or an online analytical processing (OLAP) workload (mostly sequential I/O), what does performance look like?
What is the difference in performance between the regular Linux kernel NFS (kNFS) client and Oracle's own Direct NFS client?
As far as bandwidth is concerned, is the performance of a single Azure NetApp Files volume enough?
Testing environment and components
The following diagram illustrates the environment used for testing. For consistency and simplicity, Ansible playbooks were used to deploy all elements of the test bed.
Virtual machine configuration
The tests used the following setup for the virtual machine:
Operating system:
RedHat Enterprise Linux 7.8 (wle-ora01)
Instance types:
Two models were used in testing: D32s_v3 and D64s_v3
Network interface count:
One (1) placed in subnet 3
Disks:
Oracle binaries and OS were placed in a single premium disk
Azure NetApp Files configuration
The tests used the following Azure NetApp Files configuration:
Capacity pool size:
Various sizes of the pool were configured: 4 TiB, 8 TiB, 16 TiB, 32 TiB
Service level:
Ultra (128 MiB/s of bandwidth per 1 TiB of allocated volume capacity)
Volumes:
One- and two-volume tests were evaluated
Workload generator
The tests used SLOB 2.5.4.2 as the workload generator. SLOB (Silly Little Oracle Benchmark) is a well-known workload generator in the Oracle space, designed to stress and test the I/O subsystem with an SGA-buffered physical I/O workload.
SLOB 2.5.4.2 does not support the pluggable database (PDB). As such, changes were made to the setup.sh and runit.sh scripts to add PDB support.
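The article does not show the exact script modification. As a rough, hypothetical sketch, adding PDB support mostly comes down to pointing the sqlplus sessions in setup.sh and runit.sh at the PDB service instead of the CDB root; the service name PDB1 and connect string below are assumptions, not the change actually made for these tests:

    # Hypothetical sketch: direct the SLOB scripts' sqlplus sessions at the PDB
    # service instead of the CDB root. "PDB1" is an assumed service name.
    SQLPLUS_CONNECT="user1/user1@//localhost:1521/PDB1"
    # Quick connectivity check; the SLOB SQL then runs unchanged inside the PDB.
    echo "SELECT name, open_mode FROM v\$pdbs;" | sqlplus -S "$SQLPLUS_CONNECT"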
The SLOB variables used in the tests are described in the following sections.
Workload 80% SELECT, 20% UPDATE | Random I/O – slob.conf variables
UPDATE_PCT=20
SCAN_PCT=0
RUN_TIME=600
WORK_LOOP=0
SCALE=75G
SCAN_TABLE_SZ=50G
WORK_UNIT=64
REDO_STRESS=LITE
LOAD_PARALLEL_DEGREE=12
Workload 100% SELECT | Sequential I/O – slob.conf variables
UPDATE_PCT=0
SCAN_PCT=100
RUN_TIME=600
WORK_LOOP=0
SCALE=75G
SCAN_TABLE_SZ=50G
WORK_UNIT=64
REDO_STRESS=LITE
LOAD_PARALLEL_DEGREE=12
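For reference, a typical SLOB 2.5 invocation against a configuration like this looks roughly as follows. The tablespace name and schema count match the test description later in this article, but the exact arguments are an assumption and may differ between SLOB versions:

    # Data-loading phase: create and load four SLOB user schemas into the PERFIO tablespace.
    ./setup.sh PERFIO 4
    # Run the workload against all four schemas for RUN_TIME seconds;
    # per-schema concurrency is controlled by THREADS_PER_SCHEMA in slob.conf.
    ./runit.sh 4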
Database
The Oracle version used for the tests is Oracle Database Enterprise Edition 19.3.0.0.
The Oracle parameters are as follows:
sga_max_size: 4096M
sga_target: 4096
db_writer_processes: 12
awr_pdb_autoflush_enabled: true
filesystemio_options: SETALL
log_buffer: 134217728
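For illustration only, parameters such as these are typically written to the spfile and picked up after an instance restart. The following sketch assumes a local SYSDBA connection and is not taken from the test scripts (the unit for sga_target is assumed to be MB):

    # Illustrative only: apply the instance parameters listed above via the spfile,
    # then restart the instance so the static parameters take effect.
    {
      echo "ALTER SYSTEM SET sga_max_size = 4096M SCOPE = SPFILE;"
      echo "ALTER SYSTEM SET sga_target = 4096M SCOPE = SPFILE;"
      echo "ALTER SYSTEM SET db_writer_processes = 12 SCOPE = SPFILE;"
      echo "ALTER SYSTEM SET awr_pdb_autoflush_enabled = TRUE SCOPE = BOTH;"
      echo "ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;"
      echo "ALTER SYSTEM SET log_buffer = 134217728 SCOPE = SPFILE;"
    } | sqlplus -S / as sysdba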
A PDB was created for the SLOB database.
The following diagram shows the tablespace named PERFIO, 600 GB in size (20 data files, 30 GB each), created to host four SLOB user schemas. Each user schema was 125 GB in size.
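The DDL behind that layout is not included in the article. A minimal sketch, assuming a datafile directory of /u02/oradata/PDB1 on the Azure NetApp Files volume and a PDB named PDB1, might look like this:

    # Illustrative sketch: create the 600 GB PERFIO tablespace as 20 x 30 GB data files.
    # The PDB name and datafile path are assumptions, not taken from the test environment.
    {
      echo "ALTER SESSION SET CONTAINER = PDB1;"
      echo "CREATE TABLESPACE PERFIO DATAFILE '/u02/oradata/PDB1/perfio01.dbf' SIZE 30G;"
      for i in $(seq -w 2 20); do
        echo "ALTER TABLESPACE PERFIO ADD DATAFILE '/u02/oradata/PDB1/perfio${i}.dbf' SIZE 30G;"
      done
    } | sqlplus -S / as sysdba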
Performance metrics
The goal was to report the I/O performance as experienced by the application. Therefore, all diagrams in this article use metrics reported by the Oracle database via its Automatic Workload Repository (AWR) reports. The metrics used in the diagrams are as follows:
Average IO Requests/sec
Corresponds to the sum of average Read IO Requests/sec and average Write IO Requests/sec from the load profile section
Average IO MB/sec
Corresponds to the sum of average Read IO MB/sec and average Write IO MB/sec from the load profile section
Average Read latency
Corresponds to the average latency of the Oracle wait event "db file sequential read" in microseconds
Number of threads/schema
Corresponds to the number of SLOB threads per user schema
Performance measurement results
This section describes the results of the performance measurements.
Linux kNFS client vs. Oracle Direct NFS
This scenario ran on an Azure VM Standard_D32s_v3 (Intel E5-2673 v4 @ 2.30 GHz). The workload is 75% SELECT and 25% UPDATE, mostly random I/O, with a database buffer hit ratio of ~7.5%.
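For reference, Direct NFS ships with the database software and is switched on by relinking the Oracle binary and describing the NFS mounts in /etc/oranfstab. The sketch below shows the general pattern; the server name, IP address, export, and mount point are assumptions, not the exact configuration used in these tests:

    # Enable the Oracle Direct NFS client by relinking the Oracle binary
    # (run as the oracle software owner, then restart the database instance).
    cd "$ORACLE_HOME/rdbms/lib"
    make -f ins_rdbms.mk dnfs_on
    # Describe the NFS server and mount in /etc/oranfstab; values below are placeholders.
    printf '%s\n' \
      'server: anf-ora01' \
      'path: 10.0.3.4' \
      'export: /ora-data01 mount: /u02/oradata' | sudo tee -a /etc/oranfstab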
As shown in the following diagram, the Oracle DNFS client delivered up to 2.8x more throughput than the regular Linux kNFS client:
The following diagram shows the latency curve for the read operations. In this context, the bottleneck for the kNFS client is the single NFS TCP socket connection established between the client and the NFS server (the Azure NetApp Files volume).
The DNFS client was able to push more IO requests/sec because of its ability to create hundreds of TCP socket connections, therefore taking advantage of the parallelism. As described in Azure NetApp Files configuration, each additional TiB of capacity allocated allows for an additional 128 MiB/s of bandwidth. DNFS topped out at 1 GiB/s of throughput, which is the limit imposed by the 8-TiB capacity selection. Given more capacity, more throughput would have been driven.
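As a quick sanity check on those figures, the Ultra service level throughput ceiling scales linearly with the volume quota; the quota sizes below are the ones configured for these tests:

    # Ultra service level: 128 MiB/s of bandwidth per TiB of volume quota.
    for tib in 4 8 16 32; do
      echo "${tib} TiB quota -> $(( tib * 128 )) MiB/s throughput ceiling"
    done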
Throughput is only one of the considerations. Another consideration is latency, which has the primary impact on user experience. As the following diagram shows, latency increases can be expected far more rapidly with kNFS than with DNFS.
Histograms provide excellent insight into database latencies. The following diagram provides a complete view from the perspective of the recorded "db file sequential read" wait event, while using DNFS at the highest concurrency data point (32 threads/schema). As shown in the following diagram, 47% of all read operations completed between 512 microseconds and 1,000 microseconds, while 90% of all read operations were served at a latency below 2 ms.
In conclusion, DNFS is a must-have when it comes to improving the performance of an Oracle database instance on NFS.
Single volume performance limits
This section describes the performance limits of a single volume with random I/O and sequential I/O.
Random I/O
DNFS is capable of consuming far more bandwidth than what is provided by an 8-TiB Azure NetApp Files performance quota. By increasing the Azure NetApp Files volume capacity to 16 TiB, which is an instantaneous change, the volume bandwidth doubled from 1,024 MiB/s to 2,048 MiB/s.
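The resize is an online operation. A hedged sketch with the Azure CLI is shown below; resource names are placeholders, the quota is given in GiB, and the parameter names should be verified against your az netappfiles version (the capacity pool may also need to grow to accommodate the larger volume):

    # Sketch only: grow the volume quota from 8 TiB to 16 TiB (16384 GiB).
    # Resource names are placeholders; verify flags against your az CLI version.
    az netappfiles volume update \
      --resource-group ora-rg \
      --account-name ora-anf-account \
      --pool-name ora-pool \
      --name ora-data01 \
      --usage-threshold 16384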
The following diagram shows a configuration for an 80% SELECT and 20% UPDATE workload, with a database buffer hit ratio of 8%. SLOB was able to drive a single volume to 200,000 NFS I/O requests per second. Considering that each operation is 8 KiB in size, the system under test was able to deliver ~200,000 IO requests/sec or ~1,600 MiB/s.
The following read latency curve diagram shows that, as the read throughput increases, the latency increases smoothly below the 1-ms line, and it hits the knee of the curve at ~165,000 average read IO requests/sec at an average read latency of ~1.3 ms. That is a remarkable latency figure for an I/O rate that is unachievable with almost any other technology in the Azure cloud.
Sequential I/O
As shown in the following diagram, not all I/O is random in nature. Consider an RMAN backup or a full table scan, for example, as workloads that need as much bandwidth as they can get. Using the same configuration as described previously, but with the volume resized to 32 TiB, the following diagram shows that a single Oracle DB instance can drive upwards of 3,900 MB/s of throughput, very close to the Azure NetApp Files volume's performance quota at 32 TiB (128 MB/s * 32 = 4,096 MB/s).
In summary, Azure NetApp Files helps you take your Oracle databases to the cloud. It delivers on performance when the database demands it. You can dynamically and non-disruptively resize your volume quota at any time.
Next steps