[Coursera/Structuring ML Projects/Week 1&2] ML Strategy 1 (Summary & Questions)

Welcome to the third course, which I think contains the most important and helpful videos in this specialization!

Week 1: ML Strategy (1)

1.1 Introduction

Prepare carefully for our projects: know which knob to turn for each kind of problem (orthogonalization).

If the model fits the training set poorly: a bigger network; Adam optimization.

If the model fits the dev set poorly: regularization; a bigger training set.

If the model fits the test set poorly: a bigger dev set.

(Early stopping is a less orthogonal knob: it affects training-set fit and dev-set performance at the same time.)

1.2 Set up your goal (metrics)

Single number evaluation metric (precision & recall & F1 score)

A single-number metric gives a standard way to evaluate and compare models quickly.

Precision and recall are not the only scores we can use; when we care about both, the F1 score combines them into one number.
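As a quick illustration (a minimal sketch; the helper name and the example numbers are my own, not from the course), F1 is the harmonic mean of precision and recall:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: a single-number metric."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Classifier A: high precision, low recall; classifier B: balanced.
print(f1_score(0.95, 0.60))  # ~0.735
print(f1_score(0.80, 0.80))  # 0.800 -> B wins on the single metric
```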

Satisficing and optimizing metrics:

Sometimes we have a constraint (a satisficing metric), e.g. accuracy must be higher than 90%, or runtime must stay under 10 sec; we then maximize a single remaining metric (the optimizing metric).
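A minimal sketch of this selection rule (the candidate models and the threshold below are made up for illustration):

```python
# Each candidate: (name, accuracy, runtime in seconds)
models = [("A", 0.97, 1.0), ("B", 0.99, 13.0), ("C", 0.98, 9.0)]

MAX_RUNTIME = 10.0  # satisficing metric: a hard constraint

# Keep only the models that satisfy the constraint, then pick
# the one with the best optimizing metric (accuracy).
feasible = [m for m in models if m[2] <= MAX_RUNTIME]
best = max(feasible, key=lambda m: m[1])
print(best)  # ('C', 0.98, 9.0): B is more accurate but too slow
```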


Train/dev/test distributions

The dev and test sets should come from the same distribution, and should reflect the data you expect to see in the future and care about doing well on.

Size of the dev and test sets:

In the big-data era, the old 60/20/20 split no longer applies; with millions of examples, something like training set : dev set : test set = 98 : 1 : 1 is common.
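A minimal sketch of such a split (assuming shuffled data from a single distribution; the function name is my own):

```python
import numpy as np

def split_indices(n: int, dev_frac: float = 0.01, test_frac: float = 0.01,
                  seed: int = 0):
    """Shuffle n example indices and split them 98/1/1."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_dev, n_test = int(n * dev_frac), int(n * test_frac)
    dev, test = idx[:n_dev], idx[n_dev:n_dev + n_test]
    train = idx[n_dev + n_test:]
    return train, dev, test

train, dev, test = split_indices(1_000_000)
print(len(train), len(dev), len(test))  # 980000 10000 10000
```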

When to change dev/test sets and metrics

Sometimes an algorithm that scores well on the metric does not perform well in the real world (e.g. a cancer detector, where a false negative is deadly).

So we need to change our metric or even change our dev/test sets.

1.3 Comparing to human-level performance

Bayes optimal error ≤ human-level error (Bayes error is the lowest achievable error, and humans are usually close to it on natural perception tasks).

Once performance surpasses human-level error, progress slows down, because little avoidable bias is left.


Why human-level performance?

Avoidable bias = training error − Bayes optimal error (with human-level error as a proxy for Bayes error)


Understanding human-level performance

human-level error ≈ Bayes error (use the best achievable human performance, e.g. a team of experts, as the proxy)



avoidable bias = training error − human-level error

variance = dev error − training error
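A minimal sketch of this diagnosis rule (the function name, the tie-breaking, and the example numbers are my own):

```python
def diagnose(human_err: float, train_err: float, dev_err: float) -> str:
    """Suggest a focus, using human-level error as a proxy for Bayes error."""
    avoidable_bias = train_err - human_err
    variance = dev_err - train_err
    if avoidable_bias >= variance:
        return (f"avoidable bias {avoidable_bias:.1%}: try a bigger model, "
                "training longer, or a better architecture")
    return (f"variance {variance:.1%}: try more data, regularization, "
            "or data augmentation")

print(diagnose(0.01, 0.08, 0.10))   # bias dominates (7.0% vs 2.0%)
print(diagnose(0.075, 0.08, 0.10))  # variance dominates (0.5% vs 2.0%)
```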


Surpassing human-level performance

This is possible in many areas, especially problems with large amounts of structured data (e.g. online advertising, product recommendation, loan approval).


Improving your model performance (summary for this week)

In short: first reduce avoidable bias (train a bigger model, train longer, try a better architecture or hyperparameters), then reduce variance (get more data, add regularization, try data augmentation).



QUIZ for WEEK 1 (this really interests me):

Bird recognition in the city of Peacetopia (case study)


Setting the right direction while building a successful machine learning project is significant. As the leader of a project, your main task is to make sure your team isn't drifting away from its goals. The key is to adopt appropriate strategies: setting metrics, structuring your data, considering dataset distributions, choosing optimal methods, defining the right human-level performance, speeding up your work, etc., and making decisions among those methods based on actual conditions.

The following content is a case study about recognizing birds in a city.

This case study originally comes from a quiz on Coursera; you can find it in the course Structuring Machine Learning Projects.

1. Problem Statement

This example is adapted from a real production application, but with details disguised to protect confidentiality.

You are a famous researcher in the City of Peacetopia. The people of Peacetopia have a common characteristic: they are afraid of birds. To save them, you have to build an algorithm that will detect any bird flying over Peacetopia and alert the population.

The City Council gives you a dataset of 10,000,000 images of the sky above Peacetopia, taken from the city’s security cameras. They are labelled:


  • y = 0: There is no bird on the image
  • y = 1: There is a bird on the image

Your goal is to build an algorithm able to classify new images taken by security cameras from Peacetopia.

There are a lot of decisions to make:


  • What is the evaluation metric?
  • How do you structure your data into train/dev/test sets?


Metric of success

The City Council tells you that they want an algorithm that:

  • has high accuracy;

  • runs quickly and takes only a short time to classify a new image;

  • can fit in a small amount of memory, so that it can run on a small processor that the city will attach to many different security cameras.

Note: Having three evaluation metrics makes it harder for you to quickly choose between two different algorithms, and will slow down the speed with which your team can iterate. True/False?

True.

Tip: with several goals, keep a single optimizing metric and turn the others into satisficing constraints.

2. Choosing a Model

After further discussions, the city narrows down its criteria to:

“We need an algorithm that can let us know a bird is flying over Peacetopia as accurately as possible.”

“We want the trained model to take no more than 10sec to classify a new image.”

“We want the model to fit in 10MB of memory.”

If you had the following models, which one would you choose? (From the quiz, the candidates are roughly: A - 97% test accuracy, 1 sec runtime, 3 MB memory; B - 99%, 13 sec, 9 MB; C - 97%, 3 sec, 2 MB; D - 98%, 9 sec, 9 MB.)

Correct choice: model D.

Tip: as soon as the runtime is less than 10 seconds and the model fits in 10 MB, you're good. So you may simply maximize the test accuracy after making sure the constraints are met.

Based on the city’s requests, which of the following would you say is true?

Accuracy is an optimizing metric; running time and memory size are satisficing metrics. Correct

Tip: the satisficing metrics make us drop model B; among the remaining options, model D performs best on test accuracy, so D is the better choice.

3. Structuring your data

Before implementing your algorithm, you need to split your data into train/dev/test sets. Which of these do you think is the best choice?

(From the quiz, the correct option is: train 9,500,000 / dev 250,000 / test 250,000.)

Correct

Tip: with big data, especially more than 1,000,000 examples, we should use a large fraction to train the model and leave small parts for dev and test. So C is the better choice.

4. Change training set distribution

After setting up your train/dev/test sets, the City Council comes across another 1,000,000 images, called the “citizens’ data”. Apparently the citizens of Peacetopia are so scared of birds that they volunteered to take pictures of the sky and label them, thus contributing these additional 1,000,000 images. These images are different from the distribution of images the City Council had originally given you, but you think it could help your algorithm.

You should not add the citizens’ data to the training set, because this will cause the training and dev/test set distributions to become different, thus hurting dev and test set performance.

False

Tip: adding this data to the training set will change the training-set distribution. However, it is not a problem for the training and dev distributions to differ; it would, on the contrary, be very problematic for the dev and test distributions to differ.

5. Change testing set distribution

One member of the City Council knows a little about machine learning, and thinks you should add the 1,000,000 citizens’ data images to the test set. You object because:

  • A bigger test set will slow down the speed of iterating because of the computational expense of evaluating models on the test set.
  • The 1,000,000 citizens’ data images do not have a consistent x→y mapping with the rest of the data (similar to the New York City/Detroit housing prices example from lecture).
  • The test set no longer reflects the distribution of data (security cameras) you most care about.
  • This would cause the dev and test set distributions to become different. This is a bad idea because you’re not aiming where you want to hit.

The third and fourth options (C and D) are correct.

6. Next move

You train a system, and its errors are as follows (error = 100% − accuracy):

(From the quiz: training set error 4.0%, dev set error 4.5%.)

This suggests that one good avenue for improving performance is to train a bigger network so as to drive down the 4.0% training error. Do you agree?

No, because there is insufficient information to tell: without an estimate of human-level (Bayes) error, you cannot tell whether the 4.0% training error is mostly avoidable bias.

7. Human-level performance

You ask a few people to label the dataset so as to find out what human-level performance is. You find the following error rates:

(From the quiz: bird-watching expert #1 - 0.3% error; bird-watching expert #2 - 0.5%; normal person #1 - 1.0%; normal person #2 - 1.2%.)

If your goal is to have “human-level performance” be a proxy (or estimate) for Bayes error, how would you define “human-level performance”?

0.3% (the error of expert #1) is your best choice: the fact that someone achieves 0.3% error means the Bayes error is at most 0.3%.

Correct

8. Bayes error

Which of the following statements do you agree with?

 A learning algorithm’s performance can be better than human-level performance but it can never be better than Bayes error.

 A learning algorithm’s performance can never be better than human-level performance but it can be better than Bayes error.

 A learning algorithm’s performance can never be better than human-level performance nor better than Bayes error.

 A learning algorithm’s performance can be better than human-level performance and better than Bayes error.

The first statement is correct.

9. Optimizing strategy for bias

You find that a team of ornithologists debating and discussing an image gets an even better 0.1% performance, so you define that as “human-level performance.” After working further on your algorithm, you end up with the following:

(From the quiz: human-level error 0.1%, training set error 2.0%, dev set error 2.1%.)
Based on the evidence you have, which two of the following four options seem the most promising to try? (Check two options.)

  • Try decreasing regularization.
  • Try increasing regularization.
  • Train a bigger model to try to do better on the training set.
  • Get a bigger training set to reduce variance.

The first and third options are correct: avoidable bias (2.0% − 0.1% = 1.9%) is much larger than the variance (2.1% − 2.0% = 0.1%), so focus on bias by decreasing regularization and training a bigger model.


10. Optimizing strategy for overfitting

You also evaluate your model on the test set, and find the following:

(From the quiz: human-level error 0.1%, training error 2.0%, dev error 2.1%, test error 7.0%.)
What does this mean? (Check the two best options.)


  • You have underfitted to the dev set.
  • You have overfitted to the dev set.
  • You should get a bigger test set.
  • You should try to get a bigger dev set.

The second and fourth options are correct: the test error (7.0%) is far above the dev error (2.1%), so you have overfitted to the dev set and should get a bigger dev set.


11. Surpassing human level

After working on this project for a year, you finally achieve:

(From the quiz: human-level error 0.10%, training error 0.05%, dev error 0.05%.)
What can you conclude? (Check all that apply.)

  • This is a statistical anomaly (or must be the result of statistical noise) since it should not be possible to surpass human-level performance.
  •  With only 0.09% further progress to make, you should quickly be able to close the remaining gap to 0%
  •  It is now harder to measure avoidable bias, thus progress will be slower going forward.
  •  If the test set is big enough for the 0.05% error estimate to be accurate, this implies Bayes error is ≤ 0.05%.

The last two statements are correct: once you surpass human-level performance, avoidable bias is harder to measure, and achieving 0.05% error implies Bayes error is at most 0.05%.


12. Set appropriate metric

It turns out Peacetopia has hired one of your competitors to build a system as well. You and your competitor both deliver systems with about the same running time and memory size. However, your system has higher accuracy! Yet when Peacetopia tries out both systems, they conclude they actually like your competitor’s system better: even though you have higher overall accuracy, you have more false negatives (failing to raise an alarm when a bird is in the air). What should you do?

  • Look at all the models you’ve developed during the development process and find the one with the lowest false negative error rate.
  • Ask your team to take into account both accuracy and false negative rate during development.
  • Rethink the appropriate metric for this task, and ask your team to tune to the new metric.
  • Pick false negative rate as the new metric, and use this new metric to drive all further development.

The third option is correct: rethink the appropriate metric for this task, and ask your team to tune to the new metric (a sketch of what such a metric could look like follows below).
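One way such a new metric could look (a hedged sketch; the function name, the normalization, and the weight of 10 are arbitrary choices of mine, not the course's exact formula):

```python
import numpy as np

def weighted_error(y_true: np.ndarray, y_pred: np.ndarray,
                   fn_weight: float = 10.0) -> float:
    """Error metric that penalizes false negatives (missed birds)
    fn_weight times more heavily than false positives."""
    fn = np.sum((y_true == 1) & (y_pred == 0))  # missed birds
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false alarms
    return (fn_weight * fn + fp) / (fn_weight * np.sum(y_true == 1)
                                    + np.sum(y_true == 0))

y_true = np.array([1, 1, 0, 0, 0])
y_pred = np.array([0, 1, 0, 1, 0])  # one miss, one false alarm
print(weighted_error(y_true, y_pred))  # the miss dominates the score
```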


13. Adding new data

You’ve handily beaten your competitor, and your system is now deployed in Peacetopia and is protecting the citizens from birds! But over the last few months, a new species of bird has been slowly migrating into the area, so the performance of your system slowly degrades because your system is being tested on a new type of data.

You have only 1,000 images of the new species of bird. The city expects a better system from you within the next 3 months. Which of these should you do first?

  • Use the data you have to define a new evaluation metric (using a new dev/test set) taking into account the new species, and use that to drive further progress for your team.
  • Put the 1,000 images into the training set so as to try to do better on these birds.
  • Try data augmentation/data synthesis to get more images of the new type of bird.
  • Add the 1,000 images to your dataset and reshuffle into a new train/dev/test split.

The first option is correct: use the data you have to define a new evaluation metric and a new dev/test set that take the new species into account, and use that to drive further progress.


14. Speed up your work

The City Council thinks that having more cats in the city would help scare off birds. They are so happy with your work on the bird detector that they also hire you to build a cat detector. (Wow, cat detectors are just incredibly useful, aren’t they.) Because of years of working on cat detectors, you have such a huge dataset of 100,000,000 cat images that training on this data takes about two weeks. Which of the statements do you agree with? (Check all that apply.)


  • Buying faster computers could speed up your team’s iteration speed and thus your team’s productivity.
  • If 100,000,000 examples are enough to build a good enough cat detector, you might be better off training with just 10,000,000 examples to gain a ≈10x improvement in how quickly you can run experiments, even if each model performs a bit worse because it’s trained on less data.
  • Having built a good Bird detector, you should be able to take the same model and hyperparameters and just apply it to the Cat dataset, so there is no need to iterate.
  • Needing two weeks to train will limit the speed at which you can iterate.

All but the third statement are correct: faster computers speed up iteration, training on a 10,000,000-example subset gives a ≈10x faster experiment loop, and a two-week training time limits iteration; you cannot simply reuse the bird detector’s model and hyperparameters without iterating.



