Today I read about three computing models in some books: MapReduce, Trident, and Spark.
I can't really feel the difference between them when writing programs, because the computation itself is transparent to the programmer. Maybe I should go back and study how each system works underneath.
I like to analyse big distributed systems, and I am good at analysing them.
These three models each compute in their own way, but they all follow the same three steps: no matter what, the first thing is to get a source for input; then the source data is analysed; the last step is to persist the results to a database or HDFS.
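To make the three steps concrete, here is a minimal sketch in Spark's RDD API. The HDFS paths and the word-count logic are my own illustration, not taken from any of the books:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ThreeSteps {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("three-steps"))

    // Step 1: get a source for input (path is illustrative).
    val lines = sc.textFile("hdfs:///data/input.txt")

    // Step 2: analyse the source data (here, a simple word count).
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // Step 3: persist the results back to HDFS.
    counts.saveAsTextFile("hdfs:///data/output")

    sc.stop()
  }
}
```

MapReduce and Trident would express the middle step differently (mappers/reducers, or a stream of tuples), but the same input-analyse-persist shape is there in all three.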
When using them, we should think about how to use their computing capacity: behind each of them is a big cluster, so they are suited to problems like these:
1. There is a large amount of repetitive data that needs to be processed.
2. The computation needs to iterate over the same data many times, like machine learning and graph computation (see the sketch after this list).
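For the second case, here is a toy sketch of why an in-memory model like Spark suits iteration: the working set is cached once, and every pass reuses it from memory instead of re-reading it from disk the way a chain of MapReduce jobs would. The dataset path and the gradient-descent toy (which converges to the mean of the points) are assumptions of mine for illustration:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object IterativeSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("iterative-sketch"))

    // Cache the working set once; each iteration reuses it from memory.
    val points = sc.textFile("hdfs:///data/points.txt") // illustrative path
      .map(_.trim.toDouble)
      .cache()

    // Toy iterative computation: gradient descent toward the mean.
    // Real workloads would be k-means, logistic regression, PageRank, etc.
    var w = 0.0
    for (_ <- 1 to 20) {
      val gradient = points.map(x => w - x).mean() // one pass over cached data
      w -= 0.5 * gradient
    }
    println(s"converged estimate: $w")

    sc.stop()
  }
}
```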
I hold the opinion that if you can feel the computing capacity and have a clear concept in your mind of how these systems work, you will know how to use them.