
Dynamic Time Warping: theory


http://www.mblondel.org/journal/2009/08/31/dynamic-time-warping-theory/



Recently, I’ve been working on a new handwriting recognition engine for Tegaki based on Dynamic Time Warping and I figured it would be interesting to make a short, informal introduction to it.

Dynamic Time Warping (DTW) is a well-known algorithm which aims at comparing and aligning two sequences of data points (a.k.a. time series). Although it was originally developed for speech recognition (see [1]), it has also been applied to many other fields like bioinformatics, econometrics and, of course, handwriting recognition.

Consider two sequences A and B, composed respectively of n and m feature vectors.

Each feature vector is d-dimensional and can thus be represented as a point in a d-dimensional space. For example, in handwriting recognition, we could directly use the raw (x,y) coordinates of the pen movement, and that would give us sequences of 2-dimensional vectors. In practice however, one would extract more useful features from (x,y) and create vectors of dimension possibly greater than 2. It’s also worth noting that the sequences A and B can be of different length.

Time warping

DTW works by warping (hence the name) the time axis iteratively until an optimal match between the two sequences is found.

In the figure above, which shows an example with two sequences of 1-dimensional data points, the time axis is warped so that each data point in the green sequence is optimally aligned to a point in the blue sequence.

Best path

We can construct an n x m distance matrix. In this matrix, each cell (i,j) represents the distance between the i-th element of sequence A and the j-th element of sequence B. The distance metric used depends on the application, but a common metric is the euclidean distance.
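As a small sketch of this step (plain Python; the toy sequences and the `distance_matrix` helper are illustrative, not part of the actual engine), the matrix could be built with the euclidean distance as the metric:

```python
import math

def distance_matrix(a, b):
    """Pairwise euclidean distances between two sequences of feature vectors."""
    return [[math.dist(ai, bj) for bj in b] for ai in a]

# Two toy sequences of 2-dimensional feature vectors (e.g. raw pen coordinates).
A = [(0, 0), (1, 1), (2, 2)]
B = [(0, 0), (2, 2)]

D = distance_matrix(A, B)
# D is a 3 x 2 matrix; D[i][j] is the distance between A[i] and B[j].
```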

Finding the best alignment between two sequences can be seen as finding the shortest path to go from the bottom-left cell to the top-right cell of that matrix. The length of a path is simply the sum of all the cells that were visited along that path. The further away the optimal path wanders from the diagonal, the more the two sequences need to be warped to match together.

The brute-force approach to finding the shortest path would be to try every path one by one and finally select the shortest one. However, it’s apparent that this would result in an explosion of paths to explore, especially if the two sequences are long. To solve this problem, DTW uses two things: constraints and dynamic programming.
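To see the explosion concretely: the number of monotone paths from one corner of an n x m matrix to the other (moving right, up, or diagonally) grows as the Delannoy numbers. A small sketch counting them (purely illustrative, not part of DTW itself):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_paths(n, m):
    """Number of monotone paths from cell (0, 0) to cell (n, m),
    allowing right, up and diagonal steps (the Delannoy numbers)."""
    if n == 0 or m == 0:
        return 1  # a single way along an edge of the grid
    return num_paths(n - 1, m) + num_paths(n, m - 1) + num_paths(n - 1, m - 1)

# The count explodes quickly as the sequences get longer:
for k in (2, 10, 50):
    print(k, num_paths(k, k))
```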

Constraints

DTW imposes several kinds of reasonable constraints to limit the number of paths to explore.

  • Monotonicity: The alignment path doesn’t go back in time index. This guarantees that features are not repeated in the alignment.
  • Continuity: The alignment doesn’t jump in time index. This guarantees that important features are not omitted.
  • Boundary: The alignment starts at the bottom-left and ends at the top-right. This guarantees that the sequences are not considered only partially.
  • Warping window: A good alignment path is unlikely to wander too far from the diagonal. This guarantees that the alignment doesn’t try to skip different features or get stuck at similar features.
  • Shape: Alignment paths shouldn’t be too steep or too shallow. This prevents short sequences from being aligned with long ones.

These constraints are best visualized in [3].
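As an illustration, the warping-window constraint is often implemented as the Sakoe-Chiba band from [1]: only cells within some width r of the diagonal are admissible. A minimal sketch (the rescaling for sequences of unequal length and the width r = 1 are illustrative choices):

```python
def in_band(i, j, n, m, r):
    """Sakoe-Chiba band: cell (i, j) of an n x m matrix is admissible
    only if it lies within r cells of the (rescaled) diagonal."""
    # Rescale so the band follows the diagonal even when n != m.
    diag = i * (m - 1) / (n - 1) if n > 1 else 0
    return abs(j - diag) <= r

# In a 5 x 5 matrix with r = 1, cell (0, 4) is far from the diagonal:
assert not in_band(0, 4, 5, 5, 1)
assert in_band(2, 2, 5, 5, 1)
```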

Dynamic Programming

Taking advantage of such constraints, DTW uses dynamic programming to find the best alignment in a recursive way. Previously, the cell (i,j) of the distance matrix was defined as “the distance between the i-th element of sequence A and the j-th element of sequence B”. In the dynamic programming way of thinking, this definition is changed: instead, the cell (i,j) is defined as the length of the shortest path up to that cell. Assuming local constraints like the ones below,

it allows us to define the cell (i,j) recursively:

cell(i,j) = local_distance(i,j) + MIN(cell(i-1,j), cell(i-1,j-1), cell(i, j-1))

Here, recursively means that the shortest path up to the cell (i,j) is defined in terms of the shortest path up to the adjacent cells. A lot of different local constraints can be defined (see this table) and thus there are many variations in the way DTW can be implemented.
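Putting the recurrence together, here is a minimal DTW sketch in plain Python, using the three local moves from the recurrence above and the euclidean distance (the function name and toy sequences are illustrative):

```python
import math

def dtw(a, b):
    """Length of the shortest path through the n x m cost matrix, with
    cell(i,j) = local_distance(i,j) + min of the three predecessor cells."""
    n, m = len(a), len(b)
    INF = float("inf")
    cell = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = math.dist(a[i], b[j])  # local distance between the two elements
            if i == 0 and j == 0:
                cell[i][j] = d  # boundary constraint: start at the bottom-left
            else:
                best = min(
                    cell[i - 1][j] if i > 0 else INF,                # vertical move
                    cell[i][j - 1] if j > 0 else INF,                # horizontal move
                    cell[i - 1][j - 1] if i > 0 and j > 0 else INF,  # diagonal move
                )
                cell[i][j] = d + best
    return cell[n - 1][m - 1]  # top-right cell = length of the shortest path

A = [(0,), (1,), (2,)]
B = [(0,), (2,)]
print(dtw(A, B))  # → 1.0
```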

DTW as a distance metric

Once the algorithm has reached the top-right cell, we can use backtracking to retrieve the best alignment. If we’re just interested in comparing the two sequences, however, then the top-right cell of the matrix happens to be the length of the shortest path. We can therefore use the value stored in this cell as the distance between the two sequences. DTW has the nice property of being symmetric, so DTW(a,b) = DTW(b,a). Also, DTW doesn’t fulfill the triangle inequality, but this isn’t a problem in practice.
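The symmetry property is easy to check empirically. A compact, self-contained sketch with the absolute difference as the local metric (a common choice for 1-dimensional sequences; the values are arbitrary):

```python
def dtw(a, b):
    """DTW distance between two 1-dimensional sequences,
    using |a_i - b_j| as the local distance."""
    INF = float("inf")
    n, m = len(a), len(b)
    # Pad with a virtual row/column of infinities to simplify the borders.
    cell = [[INF] * (m + 1) for _ in range(n + 1)]
    cell[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cell[i][j] = d + min(cell[i - 1][j], cell[i][j - 1], cell[i - 1][j - 1])
    return cell[n][m]

a, b = [0, 1, 2, 3], [0, 2, 3]
assert dtw(a, b) == dtw(b, a)  # DTW is symmetric
```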

Related algorithms

DTW looks almost identical to the Levenshtein algorithm, an algorithm to compare strings, and is very similar to the Smith-Waterman algorithm, an algorithm for sequence alignment.

References

[1] Sakoe, H. and Chiba, S., Dynamic programming algorithm optimization for spoken word recognition, IEEE Transactions on Acoustics, Speech and Signal Processing, 26(1), pp. 43-49, 1978.

[2] DTW algorithm @ GenTχWarper

[3] PowerPoint presentation by Elena Tsiporkova

