REST-ler: Automated Intelligent REST API Fuzzing

REST-ler is an automated, intelligent security-testing tool for REST APIs. It analyzes a Swagger specification to generate tests, inferring dependencies among request types through lightweight static analysis and generating new tests from dynamic feedback on observed responses. Testing GitLab with REST-ler uncovered new bugs. The main contributions are the inference of request dependencies, the analysis of dynamic feedback, and a comparison of three search strategies.

Paper abstract:

Cloud services have grown rapidly in recent years with the advent of Amazon Web Services and Microsoft Azure. Most cloud services, whether SaaS, PaaS, or IaaS, are accessed through REST APIs, and Swagger is arguably the most popular description language for REST APIs.

Tools for automatically testing cloud services through their REST APIs and checking whether these services are reliable and secure are still in their infancy. This paper introduces REST-ler, an automated, intelligent REST API security-testing tool that analyzes a Swagger specification and generates tests that exercise the corresponding service through its REST API. Unlike other REST API testing tools, REST-ler performs a lightweight static analysis of the entire Swagger specification, and then generates and executes tests that exercise the corresponding cloud service through its REST API. REST-ler generates tests intelligently in two ways: (1) it infers dependencies among the request types declared in the Swagger specification (for example, inferring that a request B should be executed after a request A because B takes as input a resource id x returned by A); and (2) it analyzes dynamic feedback from responses observed during prior test executions in order to generate new tests (for example, learning that a request C executed after a request A is refused by the service when combined with a request B, and therefore avoiding that combination in later tests). We found that both techniques are necessary for testing a service thoroughly while pruning the search space. Testing GitLab with REST-ler, we also found new bugs.
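The dependency inference described in point (1) above can be sketched in a few lines of Python. This is a hypothetical simplification, not REST-ler's actual code: the toy `spec` dictionary and the `produces`/`consumes` fields stand in for what the tool extracts from a real Swagger specification, namely which resource ids each request's response yields and which ids each request takes as input.

```python
# Toy stand-in for a parsed Swagger spec: each request type lists the
# resource ids its response produces and the ids its inputs consume.
spec = {
    "POST /api/blog/posts": {"produces": ["postId"], "consumes": []},
    "GET /api/blog/posts/{postId}": {"produces": [], "consumes": ["postId"]},
    "DELETE /api/blog/posts/{postId}": {"produces": [], "consumes": ["postId"]},
}

def infer_dependencies(spec):
    """Map each request to the set of requests that must precede it.

    If request B consumes a resource id that only request A produces,
    then A is a dependency of B (A must be executed before B).
    """
    producers = {}
    for req, io in spec.items():
        for resource in io["produces"]:
            producers.setdefault(resource, set()).add(req)
    deps = {}
    for req, io in spec.items():
        deps[req] = set()
        for resource in io["consumes"]:
            deps[req] |= producers.get(resource, set())
    return deps

deps = infer_dependencies(spec)
# Both GET and DELETE depend on the POST that created the postId.
```

With this mapping, a test generator knows that any sequence exercising `GET /api/blog/posts/{postId}` must first execute `POST /api/blog/posts` to obtain a valid `postId`.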

Technical overview:

Fuzzing means the automated generation and execution of tests whose purpose is to find security vulnerabilities. The subjects studied in this paper are cloud services whose REST APIs have Swagger specifications.
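The core loop described above, growing request sequences one request at a time and pruning sequences the service rejects, can be sketched as follows. This is a hypothetical illustration, not REST-ler's implementation: the `execute` function is a stand-in for the real HTTP layer, which would send the sequence to the service and check response status codes.

```python
# Stand-in for executing a request sequence against the service.
# In this toy service, "B" and "C" are only valid after "A" (they
# consume the resource that "A" creates); anything else is rejected.
def execute(sequence):
    valid = {("A",), ("A", "B"), ("A", "C")}
    return tuple(sequence) in valid

def extend(sequences, requests):
    """One breadth-first iteration: append each request type to each
    previously valid sequence, keeping only sequences the service
    accepts (the dynamic-feedback pruning step)."""
    new_sequences = []
    for seq in sequences:
        for req in requests:
            candidate = seq + [req]
            if execute(candidate):  # dynamic feedback: drop rejected combos
                new_sequences.append(candidate)
    return new_sequences

requests = ["A", "B", "C"]
seqs = extend([[]], requests)   # length-1 sequences: [["A"]]
seqs = extend(seqs, requests)   # length-2 sequences: [["A","B"], ["A","C"]]
```

Each iteration only extends sequences that were already accepted, so invalid combinations are never re-explored; this is how dynamic feedback keeps the search space tractable as sequences grow longer.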
