1. Assumptions: (1) data occurs in sequences; (2) each position gets a categorical label; (3) labels at nearby positions are correlated.
2. Classification vs. sequence tagging: an n-way decision per instance vs. an n^T-way decision over whole label sequences (n labels, sequence length T).
3. Avoiding the exponential blowup: (1) the Markov property; (2) dynamic programming.
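The two ideas combine in the Viterbi algorithm: the Markov property lets dynamic programming find the best of the n^T label sequences in O(T·n^2). A minimal sketch with toy labels, words, and probabilities (all names and numbers here are illustrative assumptions, not from the notes):

```python
import math

def viterbi(obs, states, start, trans, emit):
    """Most likely label sequence under a first-order Markov model:
    O(T * n^2) by dynamic programming instead of n^T enumeration.
    start/trans/emit hold log-probabilities."""
    # best[s]: log-prob of the best path ending in state s
    best = {s: start[s] + emit[s][obs[0]] for s in states}
    back = []
    for o in obs[1:]:
        prev, best, ptr = best, {}, {}
        for t in states:
            # Markov property: only the immediately previous label matters
            s = max(states, key=lambda s: prev[s] + trans[s][t])
            best[t] = prev[s] + trans[s][t] + emit[t][o]
            ptr[t] = s
        back.append(ptr)
    # follow back-pointers from the best final state
    path = [max(states, key=lambda s: best[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# toy parameters (assumed for illustration)
logs = lambda d: {k: math.log(v) for k, v in d.items()}
start = logs({"N": 0.6, "V": 0.4})
trans = {"N": logs({"N": 0.3, "V": 0.7}), "V": logs({"N": 0.8, "V": 0.2})}
emit = {"N": logs({"dogs": 0.7, "bark": 0.3}),
        "V": logs({"dogs": 0.2, "bark": 0.8})}
print(viterbi(["dogs", "bark"], ["N", "V"], start, trans, emit))  # ['N', 'V']
```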
4. HMM: (1) fully generative: model P(Data, Labels), then recover P(Labels|Data) = P(Data, Labels)/P(Data); (2) simple (independent) output space, a single symbol per position such as NSF_funded — in MALLET, a FeatureSequence (int[]).
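The generative identity P(Labels|Data) = P(Data, Labels)/P(Data) can be checked numerically on a toy HMM by brute force, summing the joint over every label sequence to get P(Data) (all parameters below are illustrative assumptions):

```python
import itertools

# toy HMM parameters (assumed for illustration)
states = ["N", "V"]
start = {"N": 0.6, "V": 0.4}
trans = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit = {"N": {"dogs": 0.7, "bark": 0.3}, "V": {"dogs": 0.2, "bark": 0.8}}

def joint(labels, words):
    """Fully generative: P(Data, Labels) as a product of HMM factors."""
    p = start[labels[0]] * emit[labels[0]][words[0]]
    for prev, cur, w in zip(labels, labels[1:], words[1:]):
        p *= trans[prev][cur] * emit[cur][w]
    return p

def posterior(labels, words):
    """P(Labels|Data) = P(Data, Labels) / P(Data), where P(Data) is the
    joint summed over every possible label sequence."""
    z = sum(joint(seq, words)
            for seq in itertools.product(states, repeat=len(words)))
    return joint(labels, words) / z
```

Brute-force enumeration is exponential; it is only for checking the identity — real inference uses the dynamic programming of item 3.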
5. CRF: (1) conditional: model P(Labels|Data) directly; (2) arbitrarily complicated, overlapping output features per position, e.g. NSF_funded, CAPITALIZED, ENDS_WITH_ED — in MALLET, a FeatureVectorSequence (FeatureVector[]).
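In code terms, a conditional model's input can carry many overlapping features per position. A minimal sketch of what a feature extractor might look like (the extractor and the WORD= feature are assumptions; CAPITALIZED and ENDS_WITH_ED are the notes' examples):

```python
def token_features(word):
    """One set of active binary features per position -- the analogue of
    a MALLET FeatureVector. Features may overlap freely (word identity,
    capitalization, suffix), unlike an HMM's single symbol per position."""
    feats = {"WORD=" + word.lower()}
    if word[0].isupper():
        feats.add("CAPITALIZED")
    if word.lower().endswith("ed"):
        feats.add("ENDS_WITH_ED")
    return feats

def feature_vector_sequence(tokens):
    """Analogue of FeatureVectorSequence: one feature set per token."""
    return [token_features(t) for t in tokens]

print(feature_vector_sequence(["Funded", "projects"]))
```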
6. Importing data: SimpleTagger format — one word per line (its features first, the label last), with instances delimited by a blank line.
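A minimal reader for that format (the parser and the sample lines are assumptions; the layout — features first, label last, blank line between instances — is as described above):

```python
def read_simpletagger(text):
    """Parse SimpleTagger-format data: one token per line
    ("feat1 feat2 ... label"), instances separated by blank lines.
    Returns a list of instances as lists of (features, label) pairs."""
    instances, current = [], []
    for line in text.splitlines():
        if not line.strip():          # blank line ends an instance
            if current:
                instances.append(current)
                current = []
            continue
        *feats, label = line.split()  # the label is the last field
        current.append((feats, label))
    if current:
        instances.append(current)
    return instances

sample = "Bill CAPITALIZED noun\nslept verb\n\nhere adverb\n"
print(read_simpletagger(sample))  # two instances
```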
7. Sliding-window features, e.g. a@-1&love@1: a conjunction of the word one position back and the word one position ahead.
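One way feature names in that word@offset notation could be generated (the function and its window are assumptions for illustration):

```python
from itertools import combinations

def window_features(tokens, i, offsets=(-1, 1)):
    """Features for position i: each in-window word named word@offset,
    plus &-joined conjunctions such as a@-1&love@1."""
    atoms = [f"{tokens[i + o]}@{o}"
             for o in offsets if 0 <= i + o < len(tokens)]
    # conjunctions pair up the atomic window features
    pairs = ["&".join(c) for c in combinations(atoms, 2)]
    return atoms + pairs

print(window_features(["a", "true", "love"], 1))
# ['a@-1', 'love@1', 'a@-1&love@1']
```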
8. Training a transducer
9.Evaluating a transducer
10. Applying a transducer
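Items 8–10 map onto MALLET's SimpleTagger command line. A sketch assuming mallet.jar is on the classpath; the model and data file names (mymodel, train.txt, test.txt, unlabeled.txt) are placeholders:

```shell
# 8. Training: learn a CRF transducer and save it to a model file
java -cp mallet.jar cc.mallet.fst.SimpleTagger \
    --train true --model-file mymodel train.txt

# 9. Evaluating: report accuracy on held-out labeled data
java -cp mallet.jar cc.mallet.fst.SimpleTagger \
    --model-file mymodel --test lab test.txt

# 10. Applying: print predicted labels for new, unlabeled data
java -cp mallet.jar cc.mallet.fst.SimpleTagger \
    --model-file mymodel unlabeled.txt
```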