Cost-Based Oracle - Fundamentals: Test cases applied against 10.2.0.4 and 11.1.0.6 (Part 2)

Back to part 1

This part of the series covers chapter 2 of the book, "Tablescan".

Running the test cases of chapter 2 against 10.2.0.4 and 11.1.0.6 revealed that, apart from a few oddities, all of the results were consistent with those reported by Jonathan for the previous releases.

In the following I'll attempt to cover the most important findings.

Effects of Block Sizes

One particular issue that the book already covered in detail is the effect of using non-standard block sizes on the optimizer's estimates.

Let's start by reiterating what has already been written by Jonathan. Using traditional I/O-only costing (CPU costing disabled), an 8KB default block size and db_file_multiblock_read_count = 8, the following costs were reported for a full table scan of an 80MB table segment:


Block Size     Cost   Adjusted dbf_mbrc   Number of blocks
----------   ------   -----------------   ----------------
2KB           2,439               16.39             40,000
4KB           1,925               10.40             20,000
8KB           1,519                6.59             10,000
16KB          1,199                4.17              5,000


What can be deduced from these findings? The optimizer is consistent with the runtime engine in that it adjusts the db_file_multiblock_read_count (dbf_mbrc) according to the block size; e.g. the 2KB block size segment is scanned using a dbf_mbrc of 32, because the default block size of 8KB times the dbf_mbrc of 8 results in a multi-block read request of 64KB.

A dbf_mbrc of 32 results in an adjusted dbf_mbrc of 16.39 used for cost calculation, as can be seen in the previous examples of the book, presumably because the optimizer assumes that multi-block reads requesting more blocks have a higher probability of being reduced due to blocks already in the buffer cache. It's this assumption that tempts one to deduce that there is an element of truth to the idea that a larger block size might reduce the number of I/O requests.

Consider that in the case of smaller blocks (e.g. 2KB) some of them are already in the buffer cache due to single row/block accesses. This could actually require splitting a 32-block multi-block read request into multiple smaller ones, increasing the number of I/O requests required to read the data.

On the other hand, in the same scenario using a larger block size (e.g. 16KB), the single rows accessed might be located in fewer blocks, leading to less fragmentation of the 4-block multi-block read requests.

But using the larger block size in this scenario might also lead to a much greater portion of the segment competing for the buffer cache with other concurrent requests, because a single-row request requires the whole (larger) block to be read, possibly lowering the overall effectiveness of the buffer cache. So it probably depends largely on the actual usage pattern, and it's likely that the two effects cancel each other out in most cases.
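
For reference, a minimal sketch of how the I/O-only costing figures above might be reproduced on a test system. The table name T1 and its PADDING column are assumptions, and "_optimizer_cost_model" is an undocumented parameter, so treat this as an illustration for a sandbox only:

alter session set db_file_multiblock_read_count = 8;

-- undocumented, test systems only: force traditional I/O-only costing
alter session set "_optimizer_cost_model" = io;

explain plan for
select max(padding) from t1;     -- t1: the 80MB segment under test

select * from table(dbms_xplan.display);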

Let's move on to CPU costing and the effects of non-standard block sizes. Again the same 80MB table segment is read. The dbf_mbrc is set to 8, and in the case of the WORKLOAD system statistics ("Normal" column below) the values have been set deliberately to mimic the NOWORKLOAD statistics, which means an MBRC of 8, sreadtim = 12 and mreadtim = 26. The results shown can be reproduced using the script "tablescan_04.sql".
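
The WORKLOAD values can be set manually via DBMS_STATS; a sketch of what "tablescan_04.sql" essentially does (the CPU speed setting is omitted here for brevity):

begin
  dbms_stats.set_system_stats('MBRC', 8);
  dbms_stats.set_system_stats('SREADTIM', 12);
  dbms_stats.set_system_stats('MREADTIM', 26);
end;
/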


Block size   Noworkload   Normal   Number of Blocks
----------   ----------   ------   ----------------
2KB               7,729   10,854             40,000
4KB               4,387    5,429             20,000
8KB               2,717    2,717             10,000
16KB              1,881    1,361              5,000


Let's recall the formula used to calculate the I/O cost with CPU costing enabled, reduced to the multi-block read part that matters in this particular case:

Cost = (number of multi-block read requests)*mreadtim/sreadtim

And here is the formula used to calculate mreadtim and sreadtim in the case of NOWORKLOAD statistics, which record only ioseektim (disk random seek time in milliseconds) and iotfrspeed (disk transfer speed in bytes per millisecond) for the I/O calculation:

MBRC = dbf_mbrc
sreadtim = ioseektim + db_block_size / iotfrspeed
mreadtim = ioseektim + dbf_mbrc * db_block_size / iotfrspeed
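
The NOWORKLOAD defaults used below (ioseektim = 10, iotfrspeed = 4,096) can be set explicitly, and the values in place can be checked in the data dictionary; a minimal sketch:

begin
  dbms_stats.delete_system_stats;                  -- fall back to NOWORKLOAD defaults
  dbms_stats.set_system_stats('IOSEEKTIM', 10);    -- milliseconds
  dbms_stats.set_system_stats('IOTFRSPEED', 4096); -- bytes per millisecond
end;
/

select pname, pval1
from   sys.aux_stats$
where  sname = 'SYSSTATS_MAIN';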

NOWORKLOAD system statistics

Slotting in the values, considering that we have non-standard block sizes and an adjusted dbf_mbrc at runtime:

2KB, NOWORKLOAD:

MBRC = 32 (adjusted for 2KB block size)
sreadtim = 10 + 2,048 / 4,096 = 10 + 0.5 = 10.5
mreadtim = 10 + 32 * 2,048 / 4,096 = 26

Cost = (40,000 / 32) * 26 / 10.5 = 3,095 (rounded), actually => 7,729 !

Clearly something is going wrong here; the difference can't be explained by any kind of rounding issue.

After fiddling around a bit, it becomes obvious that the optimizer uses a different set of values in the formula:

2KB, NOWORKLOAD (actual):

MBRC = 32 (adjusted for 2KB block size)
sreadtim = 10 + 8,192 / 4,096 = 10 + 2 = 12
mreadtim = 10 + 32 * 8,192 / 4,096 = 74 (!)

Cost = (40,000 / 32) * 74 / 12 = 7,708 (rounded), actually => 7,729

That's close enough; the remaining difference is explained by the CPU cost. So the optimizer uses an odd mixture of adjusted and unadjusted values, which might be deliberate but seems questionable at the very least, in particular a multi-block read request calculated to take 74ms.

The MBRC is adjusted, but obviously the default block size is used instead of the non-standard block size.

Let's check the results for the 16KB block size, first by looking at what we expect to get when slotting in the obvious values:

16KB, NOWORKLOAD:

MBRC = 4 (adjusted for 16KB block size)
sreadtim = 10 + 16,384 / 4,096 = 10 + 4 = 14
mreadtim = 10 + 4 * 16,384 / 4,096 = 26

Cost = (5,000 / 4) * 26 / 14 = 2,321 (rounded), actually => 1,881 !

Again the difference is significant; let's try the modified formula:

16KB, NOWORKLOAD (actual):

MBRC = 4 (adjusted for 16KB block size)
sreadtim = 10 + 8,192 / 4,096 = 10 + 2 = 12
mreadtim = 10 + 4 * 8,192 / 4,096 = 18

Cost = (5,000 / 4) * 18 / 12 = 1,875 (rounded), actually => 1,881

So it looks like my theory applies, and the obvious question remains why the optimizer uses such an unintuitive and odd set of values for the NOWORKLOAD cost calculation.
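
As a sanity check, the "actual" formula can be recomputed directly for both extreme block sizes (pure arithmetic, ignoring the small CPU cost component):

select round((40000 / 32) * (10 + 32 * 8192 / 4096) / (10 + 8192 / 4096)) as cost_2k,
       round(( 5000 /  4) * (10 +  4 * 8192 / 4096) / (10 + 8192 / 4096)) as cost_16k
from   dual;

-- COST_2K   COST_16K
-- -------   --------
--    7708       1875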

Looking at the results we would get when using the correct non-standard block size in the formula, it's obvious that the variation in cost across block sizes would be much smaller than the variation actually observed. The actual smallest cost encountered was 1,881 for the 16KB block size and the largest 7,729 for 2KB. Using the correct block size these would be 2,321 (+6 CPU cost) for the 16KB block size and 3,095 (+21 CPU cost) for the 2KB block size, which is much closer to the default block size cost of 2,717.

I've added another test case "tablescan_04a.sql" to the code depot which is not part of the official distribution. Its purpose is to check how the optimizer deals with the formula for a query based on multiple objects residing in tablespaces with different block sizes. The results show that the formula above is still used for each individual object, which makes the behaviour even more questionable: why is the optimizer not using the correct block size for each individual calculation?

WORKLOAD system statistics

What about the results for the WORKLOAD system statistics? How do they deal with the non-standard block sizes?

Let's check the formula:

MBRC = 8
mreadtim = 26
sreadtim = 12

as set in "tablescan_04.sql"

2KB, WORKLOAD:

Assuming an adjusted MBRC of 32 for the 2KB block size:

Cost = (40,000/32)*26/12 = 2,708 (rounded), actually 10,854 !

So again, something is clearly not following the assumptions. Let's try with the MBRC left unchanged (unadjusted):

2KB, WORKLOAD (actual):

Cost = (40,000/8)*26/12 = 10,833 (rounded), actually 10,854

Checking the same for the 16KB block size:

16KB, WORKLOAD:

Assuming an adjusted MBRC of 4 for the 16KB block size:

Cost = (5,000/4)*26/12 = 2,708 (rounded), actually 1,361 !

16KB, WORKLOAD (actual):

Cost = (5,000/8)*26/12 = 1,354 (rounded), actually 1,361
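
The same arithmetic check for the WORKLOAD case, using the unadjusted MBRC of 8 (again ignoring the CPU cost component):

select round((40000 / 8) * 26 / 12) as cost_2k,
       round(( 5000 / 8) * 26 / 12) as cost_16k
from   dual;

-- COST_2K   COST_16K
-- -------   --------
--   10833       1354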

So the obvious problem of the WORKLOAD system statistics with non-standard block sizes is that the MBRC is not adjusted, while the number of blocks decreases or increases according to the block size used, resulting in an even larger variation in cost than with the NOWORKLOAD statistics.

So I can only repeat what Jonathan has already written in his book: be very cautious with using different block sizes for different objects. In particular with CPU costing enabled, the cost-based optimizer uses some questionable values to calculate the I/O cost, leading to large variations in cost that very likely don't reflect the actual cost differences encountered at runtime. The result is that objects residing in larger block sizes are going to favor full table scans due to the lower cost, and the opposite applies to objects in smaller non-standard block sizes.

CPU costing and predicate re-ordering

One of the unique features of CPU costing is the ability to cost the predicate evaluation order and possibly perform a re-ordering to lower the cost (which can be prevented by using the ORDERED_PREDICATES hint). Running the test case provided by Jonathan against 11.1.0.6 and 11.1.0.7 shows an oddity regarding the costing of the TO_NUMBER function. Obviously the cost of 100 for a single call to TO_NUMBER no longer applies in 11.1. This might be deliberate but seems questionable; other conversion functions like TO_CHAR or TO_DATE still showed the same cost as in the other versions.
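
For illustration, a statement of the kind the test case uses; the ORDERED_PREDICATES hint forces the predicates to be evaluated in the order written (table and column names as used in "cpu_costing.sql"):

select /*+ ordered_predicates */
       count(*)
from   t1
where  to_number(v1) = 1
and    n2 = 18
and    n1 = 998;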

Here is the output from 10.2.0.4 of the "cpu_costing.sql" script:


Predicted cost (9.2.0.6): 1070604

Filter Predicate CPU cost
------------------------------------------------------------ ------------
TO_NUMBER("V1")=1 AND "N2"=18 AND "N1"=998 1,070,604

Predicted cost (9.2.0.6): 762787

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"N1"=998 AND "N2"=18 AND TO_NUMBER("V1")=1 762,786

Predicted cost (9.2.0.6): 1070232

Filter Predicate CPU cost
------------------------------------------------------------ ------------
TO_NUMBER("V1")=1 AND "N1"=998 AND "N2"=18 1,070,231

Predicted cost (9.2.0.6): 762882

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"N1"=998 AND TO_NUMBER("V1")=1 AND "N2"=18 762,881

Predicted cost (9.2.0.6): 770237

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"N2"=18 AND "N1"=998 AND TO_NUMBER("V1")=1 770,236

Predicted cost (9.2.0.6): 785604

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"N2"=18 AND TO_NUMBER("V1")=1 AND "N1"=998 785,604

Left to its own choice of predicate order

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"N1"=998 AND "N2"=18 AND TO_NUMBER("V1")=1 762,786

And one last option where the coercion on v1 is not needed
Predicted cost (9.2.0.6): 770604

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"V1"='1' AND "N2"=18 AND "N1"=998 770,604


Apart from some minor rounding issues, the results correspond to those from 9.2 and 10.1.

Here's the output running the same against 11.1.0.7:


Predicted cost (9.2.0.6): 1070604

Filter Predicate CPU cost
------------------------------------------------------------ ------------
TO_NUMBER("V1")=1 AND "N2"=18 AND "N1"=998 770,604

Predicted cost (9.2.0.6): 762787

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"N1"=998 AND "N2"=18 AND TO_NUMBER("V1")=1 762,781

Predicted cost (9.2.0.6): 1070232

Filter Predicate CPU cost
------------------------------------------------------------ ------------
TO_NUMBER("V1")=1 AND "N1"=998 AND "N2"=18 770,231

Predicted cost (9.2.0.6): 762882

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"N1"=998 AND TO_NUMBER("V1")=1 AND "N2"=18 762,781

Predicted cost (9.2.0.6): 770237

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"N2"=18 AND "N1"=998 AND TO_NUMBER("V1")=1 770,231

Predicted cost (9.2.0.6): 785604

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"N2"=18 AND TO_NUMBER("V1")=1 AND "N1"=998 770,604

Left to its own choice of predicate order

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"N1"=998 AND TO_NUMBER("V1")=1 AND "N2"=18 762,781

And one last option where the coercion on v1 is not needed
Predicted cost (9.2.0.6): 770604

Filter Predicate CPU cost
------------------------------------------------------------ ------------
"V1"='1' AND "N2"=18 AND "N1"=998 770,604


It's rather obvious that the TO_NUMBER function is no longer costed in the same way, up to the point where 11g comes to a different conclusion when left to its own choice of predicate order, and I doubt that the change is for the better, because the TO_NUMBER function is now evaluated more often than necessary.

I've added some more test cases ("cpu_costing_2.sql" to "cpu_costing_5.sql") showing that other functions don't get this different treatment, so it seems to be a particular issue of the TO_NUMBER function costing.

Single table selectivity, unknown bind variables and range comparisons

This difference showed up as a side effect of the "partition.sql" script. The recent releases (9.2.0.8, 10.2.0.4 and 11.1.0.6/7) seem to have been extended by an additional sanity check when applying the default 5% selectivity for range comparisons against unknown bind variables.

As pointed out by Jonathan, it's quite unrealistic that a range comparison against an unknown bind variable results in an estimated cardinality lower than the cardinality estimated for an individual value. Consider e.g. a table consisting of 1,200 rows, 12 distinct values and a uniform distribution. A single value corresponds to 100 rows, but a range comparison against an unknown bind variable previously resulted in a reported cardinality of 60 (a hard-coded 1/20, i.e. 5%).

This is no longer the case with 9.2.0.8, 10.2.0.4 and 11.1.0.6/7: obviously a lower limit of 1/NUM_DISTINCT applies, which in this particular case gives 100 instead of 60.
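
A hypothetical quick check, not part of the code depot, along the lines of the example above (1,200 rows, 12 distinct values, uniform distribution); on the recent releases the plan should report a cardinality of 100 rather than 60 for the bind range predicate:

create table t_bind as
select mod(rownum, 12) + 1 as n1,
       rpad('x', 100)      as padding
from   all_objects
where  rownum <= 1200;

exec dbms_stats.gather_table_stats(user, 'T_BIND')

explain plan for
select * from t_bind where n1 > :b1;

select * from table(dbms_xplan.display);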

The rule seems to be more complex, and is again different in 11.1.0.6/7 than in previous versions, in particular when dealing with multiple predicates, but that's something to be left for chapter 3, which deals with single table selectivity.

Updated code depot

You can download the updated code depot containing all scripts and the spool results from here: My homepage (SQLTools++, an open source lightweight SQL Oracle GUI for Windows)

The original code depot (still maintained by Jonathan) can be found here.