Data Analytics ECS784U/P

Coursework 2 specification for 2023-24

1. Important Dates

•   Release date: Week 9, Wednesday 20th March 2024 at 17:00.

•    Submission deadline: Week 13, Friday 19th April 2024 at 10:00AM.

•   Late submission deadline (cumulative penalty applies): Within 7 days after deadline.

General information (same as Coursework 1):

i.      Students will sometimes upload their coursework and not hit the submit button. Make sure you fully complete the submission process.

ii.      A penalty will be applied automatically by the system for late submissions.

a.   Lecturers cannot remove the penalty!

b.   Penalties can only be challenged via submission of an Extenuating Circumstances (EC) form, which can be found on your Student Support page. All the information you need to know is on that page, including how to submit an EC claim along with the deadline dates and full guidelines.

c.   Deadline extensions can only be granted through approval of an EC claim

d.   If you submit an EC form, your case will be reviewed by a panel. When the panel reaches a decision, they will inform both you and the module organiser.

e.   If you miss both the submission deadline and the late submission deadline, you will automatically receive a score of 0.

iii.      Submissions via e-mail are not accepted.

iv.      The School requires that we set the deadline during a weekday at 10:00 AM.

v.      For more details on submission regulations, please refer to your relevant student handbook.

2. Coursework overview

Coursework 2 involves applying causal machine learning to a data set of your choice. You will have to complete a series of tasks, and then answer a set of questions.

•   This coursework is based on the lecture material covered between Weeks 6 and 12, and on the lab material covered between Weeks 9 and 11.

•   The coursework must be completed individually.

•    Submission should be a single file (Word or PDF) containing your answers to each of the questions.

o Ensure you clearly indicate which answer corresponds to which question.

o Data sets and other relevant files are not needed for submission, but do save them in case we ask to have a look at them.

•   To complete the coursework, follow the tasks below and answer ALL questions enumerated in Section 3. It is recommended that you read this document in full before you start completing Task 1.

•   You can start working on your answers as early as you want, but keep in mind that you need to work through the material up to Week 11 to gain the knowledge needed to answer all the questions.

TASK 1: Set up and reading

a)  Visit http://bayesian-ai.eecs.qmul.ac.uk/bayesys/

b)  Download the Bayesys user manual.

c)   Set up the NetBeans project by following the steps in Section 1 of the manual.

d)  Read Sections 2, 3, 4 and 5 of the manual.

e)   Skip Section 6.

f)   Read Section 7 and repeat the example.

i.      Skip subsections 7.3 and 7.4.

g)  Read Section 8 and repeat the example.

h)  Skip Sections 9, 10, 11 and 12.

i)   Read Section 13.

i.      Skip subsection 13.6.

TASK 2: Determine research area and prepare data set

You are free to choose or collate your own data set. As with Coursework 1, we recommend that you address a problem you are interested in or one related to your professional field. If you are motivated by the subject matter, the project will be more fun for you, and you will likely perform better.

Data requirements:

•   Size of data: The data set must contain at least 8 variables (yes, a penalty applies for using fewer than 8 variables). There is no upper-bound restriction on the number of variables. However, we recommend using fewer than 50 variables for the purposes of the coursework, to make it much easier for you to visualise the causal graph and to save computational runtime. While the vast majority of submissions typically rely on relatively small data sets that take a few seconds to ‘learn’, keep in mind that some algorithms might take hours to complete learning when given more than 100 variables!

i.      You do not need to use a special technique for feature selection – it is up to you to decide which variables to keep. We will not be assessing feature selection decisions.

ii.      There is no sample-size restriction and you are free to use a subset of the samples. For example, your data set may contain millions of rows and you may want to use fewer to speed up learning.

•   Re-use data from CW1: You are allowed to reuse the data set you have prepared for Coursework 1, as long as: a) you consider that data set to be suitable for causal structure learning (refer to Q1 in Section 3), and b) it contains at least 8 variables.

•   Bayesys repository: You are not allowed to use any of the data sets available in the Bayesys repository for this coursework.

•   Categorical data: Bayesys assumes the input data are categorical or discrete; e.g., {"low", "medium", "high"}, {"yellow", "blue", "green"}, {"<10", "10-20", "20+"}, etc., rather than a continuous range of numbers. If your data set contains continuous variables, Bayesys will consider each value of a continuous variable as a different category. This will cause problems with model dimensionality, leading to poor accuracy and high runtime (if it is not clear why, refer to the Conditional Probability Tables (CPTs) covered in the lectures).

To address this issue, you should discretise all continuous variables to reduce the number of states to reasonable levels. For example, a variable with continuous values ranging from 1 to 100 (e.g., {"14.34", "78.56", "89.23"}) can be discretised into categories such as {"1to20", "21to40", "41to60", "61to80", "81to100"}. Because Coursework 2 is not concerned with data pre-processing, you are free to follow any approach you wish to discretise continuous variables. You could discretise the variables manually as discussed in the above example, or even use k-means, which we covered in previous lectures, or any other data discretisation approach (a short discretisation sketch is given at the end of this list). We will not be assessing data discretisation decisions.

•   Missing data values: The input data set must not contain missing values/empty cells. If it does, the easiest solution would be to replace ALL empty cells with a new category value called missing (or use a different relevant name). This will force the algorithms to consider missing values as an additional state. Alternatively, you could use any data imputation approach, such as MissForest (the preparation sketch at the end of this task shows the simple replacement option). We will not be assessing data imputation decisions.
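As an illustration of why discretisation matters: a child node with three parent variables that each take 100 distinct values requires a CPT entry for every one of the 100 × 100 × 100 = 1,000,000 parent configurations, whereas three 5-state parents produce only 5 × 5 × 5 = 125. The code below is a minimal sketch of one possible discretisation approach in Python using pandas; the file name, column names and bin edges are purely illustrative and not part of the coursework material, and you could equally use scikit-learn's k-means or any other method.

import pandas as pd

# Load the raw data; the file name and column names below are illustrative only.
df = pd.read_csv("rawData.csv")

# Equal-width binning of a continuous variable into labelled categories.
df["age"] = pd.cut(df["age"],
                   bins=[0, 20, 40, 60, 80, 100],
                   labels=["1to20", "21to40", "41to60", "61to80", "81to100"])

# Quantile-based binning keeps the categories roughly equal in size.
df["income"] = pd.qcut(df["income"], q=4,
                       labels=["low", "lowMid", "highMid", "high"])

# Save the discretised data; the next sketch picks up from this file.
df.to_csv("discretisedData.csv", index=False)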

Once you ensure your data set is consistent with what has been stated above, rename your data set to trainingData.csv and place it in folder Input.
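Continuing the sketch above, the lines below apply the "missing as an extra category" fix described earlier and then save the file under the name Bayesys expects; the intermediate file name and the path to folder Input are assumptions that depend on how you set up your project.

import pandas as pd

# Load the discretised data produced by the previous sketch (file name is illustrative).
df = pd.read_csv("discretisedData.csv", dtype=str)

# Label every empty cell as an extra "missing" state,
# so the structure learning algorithms treat it as just another category.
df = df.fillna("missing")

# Bayesys reads the training data from Input/trainingData.csv;
# adjust the path to wherever your Bayesys project folders live.
df.to_csv("Input/trainingData.csv", index=False)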

TASK 3: Draw out your knowledge-based graph

1.   Use your own knowledge to produce a knowledge-based causal graph based on the variables you decide to keep in your data set. Remember that this graph is based on your knowledge, and it is not necessarily correct or incorrect. You will compare the graphs learnt by the different algorithms with reference to your knowledge graph.

You may find it easier if you start drawing the graph by hand, and then record the directed relationships in the DAGtrue.csv file. In creating your DAGtrue.csv file, we recommend that you edit one of the sample files that come with Bayesys; e.g., create a copy of the DAGtrue_ASIA.csv file available in the directory Sample input files/Structure learning, then rename the file to DAGtrue.csv, and then replace the directed relationships with those present in your knowledge graph.

NOTE: Your knowledge graph should have a maximum node in-degree of 11; i.e., no node in the graph should have more than 11 parents (this is a library/package restriction).

2.   Once you are happy with the graph you have prepared, ensure the file is called DAGtrue.csv and placed in folder Input.

NOTE: If your OS is not showing the file extensions (e.g., .CSV or .PDF), name your file DAGtrue and not DAGtrue.csv; otherwise, the file might end up being called DAGtrue.csv.csv unintentionally (when the file extension is not visible). If this happens, Bayesys will be unable to locate the file.

3.   Make a copy of the DAGtrue.csv file, rename this copy to DAGlearned.csv and place it in folder Output. You can discard the copied file once you complete Task 3.

4.   Ensure that your DAGtrue.csv and trainingData.csv (from Task 2) files are in folder Input, and the DAGlearned.csv file is in folder Output. Run Bayesys in NetBeans. Under tab Main, select Evaluate graph and then click on the first subprocess as shown below. Then hit the Run button found at the bottom of tab Main.

The above process will generate output information in the terminal window of NetBeans. Save the last three lines, as highlighted in the figure below; you will need this information later when answering some of the questions in Section 3.

Additionally, the above process should have generated one PDF file in folder Input, called DAGtrue.pdf. Save this file as you will need it later.

This only concerns Mac/Linux users: The above process might return an error while creating the PDF file, due to compatibility issues. Even if the system completes the process without errors, the PDF files generated may be corrupted and not open on Mac/Linux. If this happens, you should use the online GraphViz editor to produce your graphs, available here: https://edotor.net/, which converts text into a visual drawing. As an example, copy the code shown below into the web editor:

digraph {
  Earthquake -> Alarm
  Burglar -> Alarm
  Alarm -> Call
}

If you are drawing a CPDAG containing undirected edges, then consider:

digraph {
  Earthquake -> Alarm
  Burglar -> Alarm
  Alarm -> Call [arrowhead=none];
}

You can then edit the above code to be consistent with your DAGtrue.csv. You could copy-and-paste the variable relationships (e.g., Earthquake → Alarm) directly from DAGtrue.csv into the code editor, taking care to remove commas and quote any variable names containing spaces.
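If your graph has many edges, converting them by hand can be tedious. Below is a minimal sketch of one way to turn the edge list into DOT text for the online editor; it assumes, purely for illustration, that each row of DAGtrue.csv holds a parent and a child after a header row, so adjust the column indices to match the actual layout described in the manual.

import csv

# Convert parent -> child pairs from DAGtrue.csv into GraphViz DOT text
# that can be pasted into https://edotor.net/.
with open("Input/DAGtrue.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row (assumed layout: parent, child)
    edges = [(row[0].strip(), row[1].strip()) for row in reader if len(row) >= 2]

print("digraph {")
for parent, child in edges:
    # Quote the names so variables containing spaces remain valid in DOT.
    print(f'  "{parent}" -> "{child}"')
print("}")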

TASK 4: Perform structure learning

1.   Run Bayesys. Under tab Main, select Structure learning and algorithm HC (default selection). Select Evaluate graph and then click on the last two (out of four) options so that you also generate the learned DAG and CPDAG in PDF files, in addition to the DAGlearned.csv file which is generated by default. Then, hit the Run button.

2.   Once the above process completes, you should see:

i.      Relevant text generated in the terminal window of NetBeans.

ii.      The files DAGlearned.csv, DAGlearned.pdf and CPDAGlearned.pdf should be generated in folder Output. As stated in Task 3, the PDF files may be corrupted on Mac/Linux, and you will have to use the online GraphViz editor to produce the graph corresponding to DAGlearned.csv (simply copy the relationships from the CSV file into the editor as discussed in Task 3).

3.   Repeat the above process for the other four algorithms; i.e., TABU, SaiyanH, MAHC and GES. Save the same output information and files that each algorithm produces (ensure you first read the NOTE below).

NOTE: As stated in the manual, Bayesys overwrites the output files every time it runs. You need to remember to either rename or move the output files to another folder before running the next algorithm (a small helper sketch is given after these notes).

Similarly, if you happen to have one of the output files open - for example, viewing the DAGlearned.pdf in Adobe Reader while running structure learning - Bayesys will fail to replace the PDF file, and the output file will not reflect the latest iteration. Ensure you close all output files before running structure learning.
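The snippet below is one possible helper for archiving results between runs; the folder names Output and Output_archive are assumptions based on the default Bayesys layout, so adjust the paths to match your own project.

import shutil
from pathlib import Path

def archive_outputs(algorithm_name: str,
                    output_dir: str = "Output",
                    archive_root: str = "Output_archive") -> None:
    """Copy the current Bayesys output files into a per-algorithm folder."""
    destination = Path(archive_root) / algorithm_name
    destination.mkdir(parents=True, exist_ok=True)
    for item in Path(output_dir).glob("*"):
        if item.is_file():
            shutil.copy2(item, destination / item.name)

# Example: archive the results right after the HC run finishes,
# before starting the next algorithm.
archive_outputs("HC")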

3. Questions

This coursework involves applying five different structure learning algorithms to your data set. We do not expect you to have a detailed understanding of how the algorithms operate. None of the questions focuses on the algorithms; hence, your answers should not focus on discussing differences between algorithms.

•   You should answer ALL questions.

•   You should answer the questions in your own words.

•   Do not exceed the maximum number of words specified for each question. If a question restricts the answer to, say, 100 words, only the first 100 words will be considered when marking the answer.

•   Marking is out of 100.

QUESTION 1: Discuss the research area and the data set you have prepared, along with pointers to your data sources. Screen-capture part of the final version of your data set and present it here as a Figure. For example, if your data set contains 15 variables and 1,000 samples, you could present the first 10 columns and a small subset of the rows. Explain why you considered this data set to be suitable for structure learning, and what questions you expect a structure learning algorithm to answer.

Maximum number of words: 150 Marks: 10

QUESTION 2: Present your knowledge-based DAG (i.e., DAGtrue.pdf or the corresponding DAGtrue.csv graph visualised through the web editor), and briefly describe the information you have considered to produce this graph. For example, did you refer to the literature to obtain the necessary knowledge, or did you consider your own knowledge to be sufficient for this problem? If you referred to the literature to obtain additional information, provide references and very briefly describe the knowledge gained from each paper. If you did not refer to the literature, justify why you considered your own knowledge to be sufficient in determining the knowledge-based graph.

NOTE: It is possible to obtain maximum marks without referring to the literature, as long as you clearly justify why you considered your personal knowledge alone to be sufficient. Any references provided will not be counted towards the word limit.

Maximum number of words: 200 Marks: 10

QUESTION 3: Complete Table Q3 below with the results you have obtained by applying each of the algorithms to your data set during Task 4. Compare your CPDAG scores produced by F1, SHD and BSF with the corresponding CPDAG scores shown in Table 3.1 (page 13) in the Bayesys manual.

Specifically, are your scores mostly lower, similar, or higher compared to those shown in Table 3.1 in the manual? Why do you think this is? Is this the result you expected? Explain why.

Table Q3. The scores of the five algorithms when applied to your data set.

| Algorithm | CPDAG BSF | CPDAG SHD | CPDAG F1 | Log-Likelihood (LL) score | BIC score | # free parameters | Structure learning elapsed time |
|-----------|-----------|-----------|----------|---------------------------|-----------|-------------------|---------------------------------|
| HC        |           |           |          |                           |           |                   |                                 |
| TABU      |           |           |          |                           |           |                   |                                 |
| SaiyanH   |           |           |          |                           |           |                   |                                 |
| MAHC      |           |           |          |                           |           |                   |                                 |
| GES       |           |           |          |                           |           |                   |                                 |

Maximum number of words: 250 Marks: 15

QUESTION 4: Present the CPDAG generated by HC (i.e., CPDAGlearned.pdf or the corresponding CPDAGlearned.csv graph visualised through the web editor). Highlight the three causal classes in the CPDAG. You only need to highlight one example for each causal class. If a causal class is not present in the CPDAG, explain why this might be the case.
