https://docs.google.com/forms/d/e/1FAIpQLSd2WrXaGcPcVNaG8SYW_QAQ-hn8CPwthZ-KwmuxuReMm1ZN_Q/viewform?usp=sf_link
https://www.youtube.com/watch?v=aircAruvnKk
Instead of specifying features or giving instructions, we "taught" you who Louie was by designing a game. The game worked much like studying with flashcards: after looking at each image, you checked its label to see how you did. As you saw more images and received more feedback, you became able to distinguish the images that contained Louie from those that didn't. Your success was measured by a decline in loss.
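The same "guess, check the label, adjust" loop can be sketched in a few lines. Below is a toy 1D logistic-regression classifier trained by gradient descent on synthetic data; the dataset, learning rate, and epoch count are illustrative assumptions, not the course's actual model. The point is only that feedback from labels drives the loss down.

```python
import math
import random

# Synthetic "flashcards": 1D points from two labeled groups.
random.seed(0)
data = [(random.gauss(-2, 1), 0) for _ in range(50)] + \
       [(random.gauss(2, 1), 1) for _ in range(50)]

w, b, lr = 0.0, 0.0, 0.1  # weights and learning rate (arbitrary choices)

def avg_loss():
    """Average cross-entropy loss over the dataset."""
    total = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))          # predicted probability
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(data)

history = []
for epoch in range(100):
    history.append(avg_loss())
    gw = gb = 0.0
    for x, y in data:                                  # "check the label"
        p = 1 / (1 + math.exp(-(w * x + b)))
        gw += (p - y) * x                              # gradient wrt w
        gb += (p - y)                                  # gradient wrt b
    w -= lr * gw / len(data)                           # adjust from feedback
    b -= lr * gb / len(data)

print(f"loss fell from {history[0]:.3f} to {history[-1]:.3f}")
```

Each pass over the data nudges the weights against the gradient of the loss, so `history` declines over epochs, which is exactly the success signal described above.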
Studying machine learning and coming away with life lessons!
Open DIGITS:
http://18.219.102.143/digits/login?next=%2Fdigits%2Fmodels%2Fimages%2Fclassification%2Fnew
There are four categories of levers that you can manipulate to improve performance. Time spent learning about each of them will pay off in the performance of your models.
1) Data - A dataset large and diverse enough to represent the environment where our model will operate. Data curation is an art form in itself.
2) Hyperparameters - Making changes to options like learning rate is like changing your training "style." Currently, finding the right hyperparameters is a manual process learned through experimentation. As you build intuition about which types of jobs respond well to which hyperparameters, your performance will increase.
3) Training time - More epochs improve performance up to a point; beyond it, too much training results in overfitting (humans are guilty of this too), so this cannot be the only intervention you apply.
4) Network architecture - We'll begin to experiment with network architecture in the next section. It is listed last to push back against the myth that solving problems with deep learning requires mastery of network architecture. The field is fascinating and powerful, and improving your skills here is largely a study in math.