CHC6072: Software Analysis and Testing
(Semester 1, 2023-2024)
Coursework Specification
Analysis and Testing of a Web Application
1 Assessed Learning Outcomes
This coursework counts for 70% of the total assessment of this module. It is designed to develop and assess your attainment of the following learning outcomes.
1 Create effective software test plans to demonstrate an understanding of the principles and theoretical foundations of software quality assurance processes and systems, and software quality assurance methodologies, models and techniques
2 Evaluate the strengths and weaknesses of different approaches, based on the theoretical foundations of software measurement and metrics and select and apply appropriate metrics in the context of software creation
3 Understand the theoretical foundations of software testing, both static and dynamic, manual and automated; understand the range of applicability of different approaches and techniques, and select and apply appropriate techniques in practical situations
4 Design and conduct systematic experiments, using both quantitative and qualitative methods; collect data from the experiments systematically and analyze the results
2 The problem to be solved
In this coursework, you are required to work individually as a software quality assurance and testing engineer to perform testing and analysis of a given web-based application, and to develop test suites for automated regression testing of the application.
The application is the website of the University of Manchester at the URL https://www.manchester.ac.uk/ . The specific function of this web-based application to be tested is the online facility for finding undergraduate courses provided by the university. Here, we will focus on Computer Science, Accounting, or Biochemistry courses.
Note: The given URL points to an external application beyond the lecturer's control; you must be prepared for possible changes to the website when you write your automated test scripts.
3 Tasks to do
The following is a brief description of the tasks and the marking scheme (in terms of the distribution of their weights in the assessment). A detailed marking scheme is given in a separate file, also available on the module website.
Task 1: Developing a test plan. (25%)
In this task, you are required to:
Explore the website to be tested;
Construct a hyperlink graph model of the application;
Write a user story or a set of user stories in JBehave format based on the hyperlink graph model of the system. Each user story should include:
a) a narrative description of the user story
b) a set of scenarios for using the function
Note that your test plan should achieve node coverage and hyperlink coverage of the hyperlink graph model.
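For illustration, a user story in JBehave's textual story syntax might look like the following sketch. The page names and steps here are hypothetical placeholders, not the actual structure of the site; your stories must be derived from your own hyperlink graph model:

```
Narrative:
In order to choose a degree programme
As a prospective undergraduate student
I want to find undergraduate courses on the university website

Scenario: Search for an undergraduate Computer Science course
Given I am on the university home page
When I navigate to the undergraduate course finder
And I search for 'Computer Science'
Then a list of undergraduate Computer Science courses is displayed
And each course title links to a course detail page
```

A story file usually contains one narrative followed by several such scenarios, each of which becomes a test case in Task 2.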
Task 2: Developing automated test scripts. (25%)
In this task, you are required to follow the steps below to develop a set of automated test scripts based on the result of Task 1.
a) Select a subset of the user stories/scenarios (at least three) in your test plan as your test cases.
b) For each chosen scenario, follow the scenario description to perform a manual test of the web application, and record your manual testing process using Selenium IDE, which you must install on your own computer.
c) Edit and revise the recorded steps of your manual tests into automated test scripts suitable for future regression testing.
d) Group the test scripts for one user story into a test suite.
Note that your chosen test cases should be the most complicated and important ones. Please read the detailed marking scheme for the quality standard that your test cases should have.
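For orientation, Selenium IDE saves a recorded project as a JSON-based .side file of roughly the following shape. The identifiers, URL path, and element locators below are illustrative assumptions only; your own file will contain the commands that Selenium IDE recorded from your manual tests:

```json
{
  "id": "project-uuid",
  "version": "2.0",
  "name": "FindUGCourseSuite",
  "url": "https://www.manchester.ac.uk",
  "tests": [{
    "id": "test-uuid",
    "name": "FindUGCourse",
    "commands": [
      { "id": "cmd-1", "command": "open",  "target": "/",                "value": "" },
      { "id": "cmd-2", "command": "click", "target": "linkText=Undergraduate", "value": "" },
      { "id": "cmd-3", "command": "type",  "target": "id=course-search", "value": "Computer Science" }
    ]
  }],
  "suites": [{ "id": "suite-uuid", "name": "FindUGCourse suite", "tests": ["test-uuid"] }]
}
```

When revising a recorded script for regression testing, it is the target locators and assertion commands in this file that typically need the most attention, since recorded locators are often brittle.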
Task 3: Performing automated testing. (30%)
In this task, you are required to perform the following.
a) Execute the test scripts using Selenium IDE to test the application in the Firefox web browser, and record the test executions in a log file.
b) Translate your Selenese test scripts into Java JUnit test code and execute it with JUnit to test the web application in the Chrome web browser. You are required to install the Selenium WebDriver for Chrome and take screen snapshots to demonstrate the successful execution of your test code.
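As a rough sketch of what the translated JUnit code can look like, consider the following. This assumes the Selenium WebDriver Java bindings and JUnit 4; the link text, element locator, and expected title are hypothetical placeholders that must be replaced with the values from your own recorded scripts, and ChromeDriver must be installed and on the PATH:

```java
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class FindUGCourseTest {
    private WebDriver driver;

    @Before
    public void setUp() {
        // Requires ChromeDriver on the PATH (or the webdriver.chrome.driver property).
        driver = new ChromeDriver();
    }

    @Test
    public void findUndergraduateCourse() {
        driver.get("https://www.manchester.ac.uk/");
        // Hypothetical locators: replace with those from your recorded script.
        driver.findElement(By.linkText("Undergraduate")).click();
        WebElement search = driver.findElement(By.id("course-search"));
        search.sendKeys("Computer Science");
        search.submit();
        Assert.assertTrue(driver.getTitle().contains("Courses"));
    }

    @After
    public void tearDown() {
        driver.quit();  // close the browser even if an assertion fails
    }
}
```

Each Selenese command maps to one or two WebDriver calls (e.g. click → findElement(...).click(), type → sendKeys), and each assert command becomes a JUnit assertion.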
Task 4: Measuring test adequacy. (20%)
In this task, you are required to calculate the adequacy of your testing by measuring your tests' node coverage and hyperlink coverage of the hyperlink graph that you developed in Task 1.
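Both measures can be computed mechanically: node coverage is the fraction of the graph's nodes visited by at least one test, and hyperlink coverage is the fraction of its edges traversed. A minimal sketch in Java (the class name, edge encoding "from->to", and the example pages are illustrative assumptions, not part of the coursework requirements):

```java
import java.util.HashSet;
import java.util.Set;

public class CoverageCalculator {

    // Fraction of the graph's nodes that the tests visited.
    public static double nodeCoverage(Set<String> nodes, Set<String> visited) {
        Set<String> covered = new HashSet<>(visited);
        covered.retainAll(nodes);  // ignore visits to pages outside the model
        return (double) covered.size() / nodes.size();
    }

    // Fraction of the graph's hyperlinks (edges "from->to") that the tests traversed.
    public static double linkCoverage(Set<String> links, Set<String> traversed) {
        Set<String> covered = new HashSet<>(traversed);
        covered.retainAll(links);
        return (double) covered.size() / links.size();
    }

    public static void main(String[] args) {
        // Hypothetical hyperlink graph of the course-finding function.
        Set<String> nodes = Set.of("Home", "UGCourses", "CourseDetail", "Apply");
        Set<String> links = Set.of("Home->UGCourses", "UGCourses->CourseDetail",
                                   "CourseDetail->Apply", "UGCourses->Home");
        // Pages and links actually exercised by the test suite.
        Set<String> visitedNodes = Set.of("Home", "UGCourses", "CourseDetail");
        Set<String> traversedLinks = Set.of("Home->UGCourses", "UGCourses->CourseDetail");

        System.out.printf("Node coverage: %.0f%%%n", 100 * nodeCoverage(nodes, visitedNodes));
        System.out.printf("Link coverage: %.0f%%%n", 100 * linkCoverage(links, traversedLinks));
    }
}
```

With the example data above, 3 of 4 nodes and 2 of 4 links are covered, so the sketch reports 75% node coverage and 50% hyperlink coverage.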
4 Submission of Coursework
4.1 When to submit
The submission deadline is 12:00 noon on Friday of Week 9.
4.2 What to submit
Each student must submit a compressed (zip) file that contains a set of files for the coursework. The file names and their contents must follow the convention given in the table below; the placeholder StdID should be replaced by the student's own ID number.
File name | Format | Example | Content
CHC6072_CW_StdID.zip | Zip file | CHC6072_CW_156789023.zip | All the files of the coursework submission. StdID is the student ID number.
UserStory.docx | MS Word | UserStory.docx | The user stories and scenarios in JBehave format.
TestSuite.side | SIDE | TestSuite.side | The test suite saved as one Selenium test suite file.
TestCaseName.side | SIDE | FindUGCourse.side | The test scripts, one file per test case; you may need multiple test script files.
Testcase_screenshot.docx | MS Word | Testcase_screenshot.docx | Screenshots of your test scripts, saved in one Word file.
LogFile.txt | Text | LogFile.txt | The log file of all test executions in Selenium IDE; one file per student.
TestClassName.java | Java code | FindPGCourse.java | The test code in Java for JUnit testing with WebDriver; one file per JUnit class.
TestResult.jpg | JPEG | TestResult.jpg | The screen snapshots of executing JUnit and your Java code.
LinkGraphAdequacy.docx | MS Word | Adequacy.docx | The hyperlink graph and your test adequacy measurements (node and hyperlink coverage).