Mining of Massive Datasets is an interesting textbook, mainly about data mining.
I'm just leaving my footprints here and noting a few remarkable points.
Excerpts:
- The model of the data is simply the answer to a complex query about it.
- A theorem of statistics, known as the Bonferroni correction, gives a statistically sound way to avoid most of these bogus positive responses to a search through the data.
- Bonferroni's principle helps us avoid treating random occurrences as if they were real: calculate the expected number of occurrences of the events you are looking for, on the assumption that the data is random. If this number is significantly larger than the number of real instances you hope to find, then you must expect almost anything you find to be bogus, i.e., a statistical artifact rather than evidence of what you are looking for. (A worked calculation follows this list.)
- The formal measure of how concentrated the occurrences of a given word are in relatively few documents is called TF.IDF (Term Frequency times Inverse Document Frequency). (A sketch of the definition follows below.)
- As for the hash method, the key value should be larger than the number of baskets (tricks like grouping and weighting are useful).
- The power law is similar to the Matthew effect, which is "the rich get richer" in plain words. It can be abstracted into the mathematical form log(y) = b + a*log(x). (A fitting example follows below.)
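
To make Bonferroni's principle concrete, here is a minimal sketch of the back-of-the-envelope calculation from the book's "evil-doers at hotels" example: 10^9 people, each visiting some hotel one day in 100, 10^5 hotels, and 1000 days of records. Even if nobody is actually evil, we still expect about 250,000 coincidental "suspicious" pairs:

```python
from math import comb

# The book's hotel example: flag pairs of people seen at the
# same hotel on two different days.
people, hotels, days = 10**9, 10**5, 1000
p_visit = 0.01  # chance a given person is at some hotel on a given day

# Probability two specific people are at the same hotel on a given day.
p_same_hotel_same_day = p_visit * p_visit / hotels  # 10^-9

# Probability they do this on two specific days.
p_two_days = p_same_hotel_same_day ** 2  # 10^-18

pairs_of_people = comb(people, 2)  # ~5 * 10^17
pairs_of_days = comb(days, 2)      # ~5 * 10^5

expected_bogus = pairs_of_people * pairs_of_days * p_two_days
print(f"{expected_bogus:,.0f}")  # ~250,000 purely random "suspicious" pairs
```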
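For TF.IDF, the book defines TF_ij = f_ij / max_k f_kj (frequency of term i in document j, normalized by the most frequent term in that document) and IDF_i = log2(N / n_i), where n_i of the N documents contain term i. Below is a minimal sketch of that definition; the toy corpus is made up for illustration:

```python
import math
from collections import Counter

def tf_idf(docs: list[list[str]]) -> list[dict[str, float]]:
    """Score each word of each document with TF_ij * IDF_i."""
    n = len(docs)
    # n_i: number of documents containing each word.
    doc_freq = Counter(word for doc in docs for word in set(doc))
    scores = []
    for doc in docs:
        counts = Counter(doc)
        max_f = max(counts.values())  # max_k f_kj for this document
        scores.append({
            w: (f / max_f) * math.log2(n / doc_freq[w])
            for w, f in counts.items()
        })
    return scores

docs = [["the", "cat", "sat"], ["the", "dog", "sat", "sat"], ["the", "mat"]]
print(tf_idf(docs)[1])  # "the" appears everywhere, so its score is 0
```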
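And since log(y) = b + a*log(x) is just a straight line on log-log axes, a power-law exponent can be estimated with a linear fit to the logs. A minimal sketch with synthetic data (the exponent -2.1 and the noise are assumptions for illustration):

```python
import numpy as np

# Synthetic data roughly following y = 1000 * x^(-2.1), with mild noise.
rng = np.random.default_rng(0)
x = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
y = 1000.0 * x ** -2.1 * rng.uniform(0.9, 1.1, x.size)

# A power law is linear on log-log axes: log(y) = b + a*log(x),
# so fit a degree-1 polynomial to the logs.
a, b = np.polyfit(np.log(x), np.log(y), 1)
print(f"slope a ~= {a:.2f}, intercept b ~= {b:.2f}")  # a should be near -2.1
```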