Learning the Open-Source Automated Testing Framework HttpRunner 4.x - Part 2

Usage Tutorial (API Automation)

1. Installation

Taking v4.3.0, the version I am currently using, as an example:

pip install httprunner==4.3.0

After installation, verify it:

hrp -v

If you see version information like the following, the installation succeeded.
(screenshot: version output)

2. Creating the Scaffold

Run hrp startproject demo, where demo is the project name. Avoid names that contain keywords such as httprunner.

(venv) PS D:\autotest\PycharmProjects\test_httprunner> hrp startproject demo
4:02PM INF Set log to color console
4:02PM ??? Set log level
4:02PM INF create file path="demo\\.env"
4:02PM INF create file path="demo\\testcases\\demo.json"
4:02PM INF create file path="demo\\testcases\\requests.json"
4:02PM INF create file path="demo\\testcases\\requests.yml"
4:02PM INF create file path="demo\\testcases\\ref_testcase.yml"
4:02PM INF start to create hashicorp python plugin
4:02PM INF create file path="demo\\debugtalk.py"
4:02PM INF ensure python3 venv packages=["funppy==v0.5.0","httprunner==v4.3.0"] python3="C:\\Users\\Lee\\.hrp\\venv\\Scripts\\python.exe"
4:02PM INF python package is ready name=funppy version=v0.5.0
4:02PM INF python package is ready name=httprunner version=v4.3.0
4:02PM INF set python3 executable path Python3Executable="C:\\Users\\Lee\\.hrp\\venv\\Scripts\\python.exe"
4:02PM INF create scaffold success projectName=demo

With that, the basic skeleton of the project is in place.
(screenshot: project structure)
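Putting the log above together with the directories described in the next section, the generated scaffold looks roughly like this (a rough sketch; the exact contents may differ slightly between versions):

demo
├── .env                 # global environment variables
├── .gitignore           # files to ignore when pushing to git
├── debugtalk.py         # helper functions callable from testcases
├── har/                 # exported .har capture files
├── reports/             # test reports
└── testcases/
    ├── demo.json
    ├── requests.json
    ├── requests.yml
    └── ref_testcase.yml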

3. Project Structure Explained

har: typically used to store .har files. HttpRunner can convert HAR files into YAML, JSON, or pytest (.py) scripts (and also convert between those formats). If you are wondering what a HAR file is, HAR stands for HTTP Archive, a JSON-based format; mainstream capture tools (browser F12 DevTools, Fiddler, Charles, etc.) can export captured requests as .har files. The screenshot below shows an export from Fiddler: select the requests you want and export them locally. Prefer exporting one interface at a time, because later each interface is pulled out on its own and organized in the API -> testcase -> testsuite calling pattern.
(screenshot: exporting a file from Fiddler)
reports: stores test reports.
testcases: stores test cases.
.env: global environment variables.
.gitignore: files to be ignored when pushing to git; for example, add the logs folder here so that it is not uploaded to the repository.
debugtalk.py: holds wrapper/helper functions; any function referenced in a YAML file is written here and invoked from YAML via ${function_name(args)} (see the sketch after this list).
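As a minimal sketch of that mechanism (the helper name get_folder_name below is only an illustration, matching the one used in the upload example later in this post):

# debugtalk.py
import time


def get_folder_name():
    # build a folder name from the current date, e.g. "upload_20240101"
    return "upload_" + time.strftime("%Y%m%d")

In a YAML step it can then be referenced as ${get_folder_name()}, and a helper that takes parameters as ${some_func($var)}.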

4. How to Use It
1) Capture the target requests with a packet-capture tool and export them as a .har file

Taking Fiddler as an example:
select a single interface, export it as a .har file, and save it under the har folder in the project.
(screenshot: Fiddler export example)
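For reference, an exported HAR file is plain JSON; stripped down, its structure looks roughly like this (a simplified sketch of the HAR format, not an actual Fiddler export):

{
  "log": {
    "version": "1.2",
    "creator": {"name": "Fiddler", "version": "..."},
    "entries": [
      {
        "request": {
          "method": "POST",
          "url": "https://xxxxx/index/signIn",
          "headers": [{"name": "Content-Type", "value": "application/json;charset=UTF-8"}],
          "postData": {"mimeType": "application/json", "text": "{...}"}
        },
        "response": {
          "status": 200,
          "content": {"mimeType": "application/json", "text": "{...}"}
        }
      }
    ]
  }
}

hrp convert reads the request entries of such a file and turns them into test steps.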

2) Convert the file via the CLI to generate test cases

The current version supports converting HAR files into four kinds of output: YAML, JSON, pytest (.py), and Go test scripts.

###CLI
(venv) PS D:\autotest\PycharmProjects\test_httprunner\demo> hrp convert -h
  curl        convert curl command to httprunner testcase

Flags:
  -h, --help                help for convert
  -d, --output-dir string   specify output directory, default to the same dir with har file
  -p, --profile string      specify profile path to override headers and cookies
      --to-gotest           convert to gotest scripts (TODO)
      --to-json             convert to JSON scripts (default)
      --to-pytest           convert to pytest scripts
      --to-yaml             convert to YAML scripts

Global Flags:
      --log-json           set log to json format
  -l, --log-level string   set log level (default "INFO")
      --venv string        specify python3 venv path

Use "hrp convert [command] --help" for more information about a command.

Use the --to-yaml flag to convert the HAR file into a YAML file, and -d to specify the output directory. As the log shows, the generated file name is the original file name with _test.yaml appended; the other formats work the same way, only the file extension differs. An example command is shown after the screenshot below.
(screenshot: generated test case)
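For example, assuming the exported capture was saved as har/demo.har (a hypothetical file name), the conversion would look like this:

hrp convert har/demo.har --to-yaml -d testcases
# per the log, the output is named after the original file with _test.yaml appended, e.g. testcases/demo_test.yaml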

3) Configure the pytest.ini file
[pytest]
# -n=auto requires the pytest-xdist plugin; --alluredir/--clean-alluredir require allure-pytest
addopts= -vs -n=auto --alluredir=reports/temp --clean-alluredir
filterwarnings =
    ignore::UserWarning
# timeout requires the pytest-timeout plugin
timeout = 60

The available options are listed in the help output below; if you do not have a pytest background, you can use it as a reference while learning.

(venv) PS D:\autotest\PycharmProjects\test_httprunner> hrun -h
usage: hrun [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
  file_or_dir

general:
  -k EXPRESSION         Only run tests which match the given substring expression. An expression is a Python evaluatable expression where all names are substring-matched against test names and their parent      
                        classes. Example: -k 'test_method or test_other' matches all test functions and classes whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those that   
                        don't contain 'test_method' in their names. -k 'not test_method and not test_other' will eliminate the matches. Additionally keywords are matched to classes and functions containing extra
                        names in their 'extra_keyword_matches' set, as well as functions which have names assigned directly to them. The matching is case-insensitive.
  -m MARKEXPR           Only run tests matching given mark expression. For example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       Exit instantly on first error or failed test
  --fixtures, --funcargs
                        Show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')
  --fixtures-per-test   Show fixtures per test
  --pdb                 Start the interactive Python debugger on errors or KeyboardInterrupt
  --pdbcls=modulename:classname
                        Specify a custom interactive Python debugger for use with --pdb.For example: --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test
  --capture=method      Per-test capturing method: one of fd|sys|no|tee-sys
  -s                    Shortcut for --capture=no
  --runxfail            Report the results of xfail tests as if they were not marked
  --lf, --last-failed   Rerun only the tests that failed at the last run (or all if none failed)
  --ff, --failed-first  Run all tests, but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown.
  --nf, --new-first     Run tests from new files first, then the rest of the tests sorted by file mtime
  --cache-show=[CACHESHOW]
                        Show cache contents, don't perform collection or tests. Optional argument: glob (default: '*').
  --cache-clear         Remove all cache contents at start of test run
  --lfnf={all,none}, --last-failed-no-failures={all,none}
                        Which tests to run with no previously (known) failures
  --sw, --stepwise      Exit on test failure and continue from last failing test next time
  --sw-skip, --stepwise-skip
                        Ignore the first failing test but stop on the next failing test. Implicitly enables --stepwise.

Reporting:
  --durations=N         Show N slowest setup/test durations (N=0 for all)
  --durations-min=N     Minimal duration in seconds for inclusion in slowest list. Default: 0.005.
  -v, --verbose         Increase verbosity
  --no-header           Disable header
  --no-summary          Disable summary
  -q, --quiet           Decrease verbosity
  --verbosity=VERBOSE   Set verbosity. Default: 0.
  -r chars              Show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are  
                        enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE').
  --disable-warnings, --disable-pytest-warnings
                        Disable warnings summary
  -l, --showlocals      Show locals in tracebacks (disabled by default)
  --no-showlocals       Hide locals in tracebacks (negate --showlocals passed through addopts)
  --tb=style            Traceback print mode (auto/long/short/line/native/no)
  --show-capture={no,stdout,stderr,log,all}
                        Controls how captured stdout/stderr/log is shown on failed tests. Default: all.
  --full-trace          Don't cut any tracebacks (default is to cut)
  --color=color         Color terminal output (yes/no/auto)
  --code-highlight={yes,no}
                        Whether code should be highlighted (only if --color is also enabled). Default: yes.
  --pastebin=mode       Send failed|all info to bpaste.net pastebin service
  --junit-xml=path      Create junit-xml style report file at given path
  --junit-prefix=str    Prepend prefix to classnames in junit-xml output
  --html=path           create html report file at given path.
  --self-contained-html
                        create a self-contained html file containing all necessary styles, scripts, and images - this means that the report may not render or function where CSP restrictions are in place (see      
                        https://developer.mozilla.org/docs/Web/Security/CSP)
  --css=path            append given css file content to report style file.

pytest-warnings:
  -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
                        Set which warnings to report, see -W option of Python itself
  --maxfail=num         Exit after first num failures or errors
  --strict-config       Any warnings encountered while parsing the `pytest` section of the configuration file raise errors
  --strict-markers      Markers not registered in the `markers` section of the configuration file raise errors
  --strict              (Deprecated) alias to --strict-markers
  -c file               Load configuration from `file` instead of trying to locate one of the implicit configuration files
  --continue-on-collection-errors
                        Force test execution even if collection errors occur
  --rootdir=ROOTDIR     Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: '$HOME/root_dir'.       

collection:
  --collect-only, --co  Only collect tests, don't execute them
  --pyargs              Try to interpret all arguments as Python packages
  --ignore=path         Ignore path during collection (multi-allowed)
  --ignore-glob=path    Ignore path pattern during collection (multi-allowed)
  --deselect=nodeid_prefix
                        Deselect item (via node id prefix) during collection (multi-allowed)
  --confcutdir=dir      Only load conftest.py's relative to specified dir
  --noconftest          Don't load any conftest.py files
  --keep-duplicates     Keep duplicate tests
  --collect-in-virtualenv
                        Don't ignore tests in a local virtualenv directory
  --import-mode={prepend,append,importlib}
                        Prepend/append to sys.path when importing test modules and conftest files. Default: prepend.
  --doctest-modules     Run doctests in all .py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                        Choose another output format for diffs on doctest failure
  --doctest-glob=pat    Doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors
                        Ignore doctest ImportErrors
  --doctest-continue-on-failure
                        For a given doctest, continue to run after the first failure

test session debugging and configuration:
  --basetemp=dir        Base temporary directory for this test run. (Warning: this directory is removed if it exists.)
  -V, --version         Display pytest version and information about plugins. When given twice, also display information about plugins.
  -h, --help            Show help message and configuration info
  -p name               Early-load given plugin module name or entry point (multi-allowed). To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`.
  --trace-config        Trace considerations of conftest.py files
  --debug=[DEBUG_FILE_NAME]
                        Store internal tracing debug information in this log file. This file is opened with 'w' and truncated as a result, care advised. Default: pytestdebug.log.
  -o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                        Override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.
  --assert=MODE         Control assertion debugging tools.
                        'plain' performs no assertion debugging.
                        'rewrite' (the default) rewrites assert statements in test modules on import to provide assert expression information.
  --setup-only          Only setup fixtures, do not execute tests
  --setup-show          Show setup of fixtures while executing tests
  --setup-plan          Show what fixtures and tests would be executed but don't execute anything

logging:
  --log-level=LEVEL     Level of messages to catch/display. Not set by default, so it depends on the root/parent log handler's effective level, where it is "WARNING" by default.
  --log-format=LOG_FORMAT
                        Log format used by the logging module
  --log-date-format=LOG_DATE_FORMAT
                        Log date format used by the logging module
  --log-cli-level=LOG_CLI_LEVEL
                        CLI logging level
  --log-cli-format=LOG_CLI_FORMAT
                        Log format used by the logging module
  --log-cli-date-format=LOG_CLI_DATE_FORMAT
                        Log date format used by the logging module
  --log-file=LOG_FILE   Path to a file when logging will be written to
  --log-file-level=LOG_FILE_LEVEL
                        Log file logging level
  --log-file-format=LOG_FILE_FORMAT
                        Log format used by the logging module
  --log-file-date-format=LOG_FILE_DATE_FORMAT
                        Log date format used by the logging module
  --log-auto-indent=LOG_AUTO_INDENT
                        Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.
  --log-disable=LOGGER_DISABLE
                        Disable a logger by name. Can be passed multipe times.

Custom options:
  --metadata=key value  additional metadata.
  --metadata-from-json=METADATA_FROM_JSON
                        additional metadata from a json string.
  --metadata-from-json-file=METADATA_FROM_JSON_FILE
                        additional metadata from a json file.

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg|pyproject.toml file found:

  markers (linelist):   Markers for test functions
  empty_parameter_set_mark (string):
                        Default marker for empty parametersets
  norecursedirs (args): Directory patterns to avoid for recursion
  testpaths (args):     Directories to search for tests when no files or directories are given on the command line
  filterwarnings (linelist):
                        Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings.
  usefixtures (args):   List of default fixtures to be used with this project
  python_files (args):  Glob-style file patterns for Python test module discovery
  python_classes (args):
                        Prefixes or glob names for Python test class discovery
  python_functions (args):
                        Prefixes or glob names for Python test function and method discovery
  disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool):
                        Disable string escape non-ASCII characters, might cause unwanted side effects(use at your own risk)
  console_output_style (string):
                        Console output: "classic", or with additional progress information ("progress" (percentage) | "count" | "progress-even-when-capture-no" (forces progress even when capture=no)
  xfail_strict (bool):  Default for the strict parameter of xfail markers when not given explicitly (default: False)
  tmp_path_retention_count (string):
                        How many sessions should we keep the `tmp_path` directories, according to `tmp_path_retention_policy`.
  tmp_path_retention_policy (string):
                        Controls which directories created by the `tmp_path` fixture are kept around, based on test outcome. (all/failed/none)
  enable_assertion_pass_hook (bool):
                        Enables the pytest_assertion_pass hook. Make sure to delete any previously generated pyc cache files.
  junit_suite_name (string):
                        Test suite name for JUnit report
  junit_logging (string):
                        Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all
  junit_log_passing_tests (bool):
                        Capture log information for passing tests to JUnit report:
  junit_duration_report (string):
                        Duration time to report: one of total|call
  junit_family (string):
                        Emit XML for schema: one of legacy|xunit1|xunit2
  doctest_optionflags (args):
                        Option flags for doctests
  doctest_encoding (string):
                        Encoding used for doctest files
  cache_dir (string):   Cache directory path
  log_level (string):   Default value for --log-level
  log_format (string):  Default value for --log-format
  log_date_format (string):
                        Default value for --log-date-format
  log_cli (bool):       Enable log display during test run (also known as "live logging")
  log_cli_level (string):
                        Default value for --log-cli-level
  log_cli_format (string):
                        Default value for --log-cli-format
  log_cli_date_format (string):
                        Default value for --log-cli-date-format
  log_file (string):    Default value for --log-file
  log_file_level (string):
                        Default value for --log-file-level
  log_file_format (string):
                        Default value for --log-file-format
  log_file_date_format (string):
                        Default value for --log-file-date-format
  max_asset_filename_length (string):
                        set the maximum filename length for assets attached to the html report.
  environment_table_redact_list (linelist):
                        A list of regexes corresponding to environment table variables whose values should be redacted from the report

Environment variables:
  PYTEST_ADDOPTS           Extra command line options
  PYTEST_PLUGINS           Comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD Set to disable plugin auto-loading
  PYTEST_DEBUG             Set to enable debug tracing of pytest's internals


to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option

4) Run the test cases

Given the problems encountered previously, I recommend running the API automation test cases with hrun.

### run a whole test suite (directory)
hrun dir
#### run a single test case
hrun dir/xxx.yaml
hrun dir/xxx.py
hrun dir/xxx.json
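Since pytest.ini directs the Allure results to reports/temp, the HTML report can afterwards be rendered with the Allure commandline tool (assuming it is installed separately):

allure serve reports/temp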
5) How to correlate interfaces (pass data between them)

Based on my own exploration and understanding, there are two ways to implement interface correlation.
Method 1: the official approach
The example below defines an upload-file API. Calling the upload interface depends on data from several other interfaces, such as the token and projectId parameters. The framework's built-in way to handle this is, inside a testcase, to call all of the prerequisite interfaces as steps and export the needed parameters as variables that the other interfaces can use.

config:
  name: testcase description
  verify: false
  base_url: https://xxxxx
teststeps:
  - name: "调用uploadOneFile API"
    variables:
      token: $token
      fileBizType: py
      projectId: $projectId
      publicStorage: "false"
      folderName: ${get_folder_name()}
    request:
      upload:
        file: file_data/reward.py
        fileBizType: $fileBizType
        projectId: $projectId
        publicStorage: $publicStorage
        folderName: $folderName
      method: POST
      url: file/uploadOneFile
      headers:
        Content-Type: multipart/form-data;
        Authorization: $token
        RequestId: xxxx
    validate:
      - check: status_code
        assert: equals
        expect: 200
        msg: assert response status code
      - check: headers."Content-Type"
        assert: equals
        expect: application/json;charset=UTF-8
        msg: assert response header Content-Type
      - check: body.code
        assert: equals
        expect: General.Success
        msg: assert response body code
      - check: body.msg
        assert: equals
        expect: 文件上传成功
        msg: assert response body msg

Then define the testcase YAML file, in which every step references a concrete API file:

config:
  name: file_api test case
teststeps:
  - name: 1. get the captcha
    api: api/captcha_api.yaml
  - name: 2. sign in and get the token
    api: api/signIn_api.yaml
  - name: 3. get the project list
    api: api/getProjectList_api.yaml
  - name: 4. upload the file
    api: api/file_api.yaml

Below is the login API file. The extract keyword pulls the token out of the response, and export then exposes it so that the other steps of the testcase can all access this token value.

config:
  name: index
  verify: false
  base_url: https://xxxxx
  variables:
    type: type.password
    capCode: $capCode
    capUuid: $capUuid
    username: username
    password: password
  export:
    - token
teststeps:
  - name: "调用signIn API"
    request:
      method: POST
      url: index/signIn
      headers:
        Accept: application/json, text/plain, */*
        Accept-Encoding: gzip, deflate, br
        Accept-Language: zh-CN,zh;q=0.9
        Connection: keep-alive
        Content-Length: "295"
        Content-Type: application/json;charset=UTF-8
        RequestId: 5093a9dc8fb14c90866af5d851e8598a
        Sec-Fetch-Dest: empty
        Sec-Fetch-Mode: cors
        Sec-Fetch-Site: same-site
        User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36
        sec-ch-ua: '"Google Chrome";v="113", "Chromium";v="113", "Not-A.Brand";v="24"'
        sec-ch-ua-mobile: ?0
        sec-ch-ua-platform: '"Windows"'
      body:
        password: $password
        type: $type
        username: $username
        capCode: $capCode
        capUuid: $capUuid
    extract:
      token: body.data.accessToken
    validate:
      - check: status_code
        assert: equals
        expect: 200
        msg: assert response status code
      - check: headers."Content-Type"
        assert: equals
        expect: application/json;charset=UTF-8
        msg: assert response header Content-Type
      - check: body.code
        assert: equals
        expect: General.Success
        msg: assert response body code
      - check: body.msg
        assert: equals
        expect: 接口调用成功
        msg: assert response body msg

Method 2: my own approach
Idea: use post-processing (teardown) to write these parameters into an intermediate file at the moment each interface is called (I use a YAML file here), and then have every other interface read its values from that file. (Note: this requires adjusting the execution order of extract and teardown_hooks in the original framework.)

import yaml


def clear_keys(key, file='token.yml'):
    # remove an existing key from the intermediate YAML file, if present
    with open(file, encoding='utf-8') as f:
        res = yaml.safe_load(f) or {}
    if key in res:
        del res[key]
        with open(file, encoding='utf-8', mode='w') as stream:
            # if res is now empty, opening in 'w' mode simply truncates the file
            if res:
                yaml.safe_dump(res, stream)


def write_yaml(key, value, file='token.yml'):
    # store a key/value pair, replacing any previously written value for the same key
    clear_keys(key, file)
    if value is not None:
        with open(file, encoding='utf-8', mode='a') as f:
            yaml.safe_dump({key: value}, f)
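The functions above only cover the write side. For the read side, a matching helper in the same debugtalk.py might look like the sketch below (read_yaml is my own hypothetical name), so that other interfaces can fetch stored values from their YAML, for example via ${read_yaml(token)}:

def read_yaml(key, file='token.yml'):
    # fetch a value previously stored by write_yaml(); uses the module-level `import yaml` above
    with open(file, encoding='utf-8') as f:
        res = yaml.safe_load(f) or {}
    return res.get(key)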

Additional knowledge worth learning alongside the HttpRunner framework:
1. JMESPath syntax, which is what the extract and validate expressions use; see the official documentation at https://jmespath.org (a short example follows this list).
2. YAML syntax; see the YAML documentation at https://yaml.org.
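A quick standalone illustration of JMESPath, since expressions such as body.data.accessToken above are essentially JMESPath queries evaluated against the response:

import jmespath

# a response-like structure similar to what the login step receives
resp = {"body": {"code": "General.Success", "data": {"accessToken": "abc123"}}}
print(jmespath.search("body.data.accessToken", resp))  # abc123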
