Which course does this assignment belong to | 2301-MUSE社区-CSDN社区云
---|---|
Where are the requirements for this assignment | Teamwork——Alpha Sprint-CSDN社区
The goal of this assignment | Describe the arrangement of project testing work, the selection and application of testing tools, test case documents, testing experience, and project testing review
Team name | THEMIS
Link to the team's top-pinned essay collection | Themis-Alpha Sprint Essay Collection-CSDN博客
Other references |
I. Division of testing work
1.1 App
1.1.1 Front End
In general, we mainly used black-box testing for the front end, with RIDE for automated interface testing supplemented by manual testing.
- For tester arrangement, all members first carried out black-box testing, and then two front-end module developers focused on testing the page modules.
Student ID | Work content
---|---|
- | Repeated black-box testing by all members
832101121, 832101123, 832101220 | All 19 client modules
1.2 Back End
Since our back end is developed mainly with the Spring Boot framework, most of us could only unit test by configuring test case data locally, and then correct errors through front-end/back-end integration.
- For tester arrangement, each developer is responsible for testing their own module, including its APIs.
Student ID | Work content |
---|---|
832101128,832101125 | Goods_order |
832101109 | person_message |
832101108,832101128 | Product release/evaluation/reporting/collection module |
832101128 | home page |
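The module-level unit testing described above, where each developer configures test case data locally and checks their own module's logic, can be sketched in plain Java. This is only a minimal illustration: `OrderService` and its totaling logic are hypothetical stand-ins we invented for the `Goods_order` module, not the project's actual code.

```java
import java.util.List;
import java.util.Map;

// Hypothetical service under test; a stand-in for the real
// Goods_order module, whose code is not shown in this post.
class OrderService {
    // Sum price * quantity over all line items (price -> quantity pairs).
    static int orderTotal(List<Map.Entry<Integer, Integer>> items) {
        int total = 0;
        for (Map.Entry<Integer, Integer> item : items) {
            total += item.getKey() * item.getValue();
        }
        return total;
    }
}

public class OrderServiceTest {
    public static void main(String[] args) {
        // Locally configured test case data, as described above.
        List<Map.Entry<Integer, Integer>> cart = List.of(
                Map.entry(30, 2),  // price 30, quantity 2
                Map.entry(15, 1)); // price 15, quantity 1
        int total = OrderService.orderTotal(cart);
        if (total != 75) {
            throw new AssertionError("expected 75, got " + total);
        }
        System.out.println("orderTotal test passed: " + total);
    }
}
```

In the real project this kind of check would typically live in a JUnit test class run by Spring Boot's test support rather than in a `main` method.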
II. Test tool selection and application
Test item | Test tool
---|---|
Front end | RIDE, manual testing
Back end | Uniapp, Postman, manual testing
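The manual Postman checks on the back-end APIs can also be mirrored in code. Below is a minimal sketch using only the JDK: it starts a stub HTTP server standing in for the Spring Boot back end and asserts on the response status and body. The `/api/products` path and the JSON payload are our assumptions for illustration, not the project's real API.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiSmokeTest {
    public static void main(String[] args) throws Exception {
        // Stub server standing in for the real back end; port 0 = any free port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/products", exchange -> {
            byte[] body = "[{\"id\":1,\"name\":\"demo\"}]".getBytes("UTF-8");
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        // The same kind of check we otherwise clicked through in Postman.
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(
                        URI.create("http://localhost:" + port + "/api/products"))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());

        if (resp.statusCode() != 200 || !resp.body().contains("demo")) {
            throw new AssertionError("API check failed: " + resp.statusCode());
        }
        System.out.println("GET /api/products -> 200 OK");
        server.stop(0);
    }
}
```

Against the real back end, the stub server would be replaced by the running Spring Boot application and the request pointed at its actual endpoints.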
III. Test case documentation
IV. Project test review/summary
We went through a rough testing phase in software development. Automated testing is not as easy as we had hoped, and manual intervention was still required much of the time. Despite our efforts to write automated test scripts, things did not always go as smoothly as expected. Various issues arose: the environment configuration was complex, some team members struggled with certain testing tools, and the same script produced different results in different hands, which was somewhat confusing. What's more, designing a high-quality script takes quite a bit of time.
In the process of testing we ran into some challenges. Although we learned a great deal about software quality and testing, in the end we found that manual testing and API testing were what we used most widely. Automated testing remains our goal, but in reality we still need to rely on human intervention to ensure the accuracy and quality of testing.
Through this testing experience we have gained a deeper understanding of the value of testing. In the future we will pay more attention to test planning and execution, so that software quality can be more strongly guaranteed.