Proofs of selected theorems in “TENSOR-TRAIN DECOMPOSITION”

This post works through parts of the well-known tensor-train paper “TENSOR-TRAIN DECOMPOSITION”. Some of the proofs in the original paper feel unclear, so they are rewritten here.

1. Theorem 2.1

By the tensor-train decomposition, an $n_1 \times n_2 \times \cdots \times n_d$ tensor $\mathcal{A}$ can be written as
$$\mathcal{A}\left( i_1,\dots,i_d \right) =\sum_{\alpha _0,\dots,\alpha _d}{\mathcal{G}_1\left( \alpha _0,i_1,\alpha _1 \right) \mathcal{G}_2\left( \alpha _1,i_2,\alpha _2 \right) \cdots \mathcal{G}_d\left( \alpha _{d-1},i_d,\alpha _d \right)}, \tag{1.1}$$
where $\alpha_0=\alpha_d=1$. Define the unfolding matrices of $\mathcal{A}$ as follows:
$$A_k=A_k\left( \overline{i_1\cdots i_k},\overline{i_{k+1}\cdots i_d} \right) =\mathcal{A}\left( i_1,\dots,i_k,i_{k+1},\dots,i_d \right), \tag{1.2}$$
where $\overline{i_1\cdots i_k}$ is a multi-index, defined as
$$\overline{i_1\cdots i_k}=i_1+\left( i_2-1 \right) n_1+\cdots +\left( i_k-1 \right) n_1n_2\cdots n_{k-1}.$$
The size of this matrix is $\left( \prod_{s=1}^k{n_s} \right) \times \left( \prod_{s=k+1}^d{n_s} \right)$, and it can be obtained from the tensor $\mathcal{A}$ by a single call to the reshape function in MATLAB:
$$A_k=\operatorname{reshape}\left( \mathcal{A},\left[ \prod_{s=1}^k{n_s},\ \prod_{s=k+1}^d{n_s} \right] \right).$$
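To make the unfolding concrete, here is a minimal numpy sketch. The tensor `A`, its shape `n`, and the helper name `unfolding` are illustrative assumptions of this note, not objects from the paper. The multi-index above is column-major ($i_1$ varies fastest), which matches MATLAB's reshape and numpy's `order='F'`:

```python
import numpy as np

# Illustrative shapes; any n_1, ..., n_d would do.
n = (3, 4, 5, 6)
A = np.random.rand(*n)

def unfolding(A, k):
    """k-th unfolding A_k of eq. (1.2): rows indexed by the multi-index
    (i_1, ..., i_k), columns by (i_{k+1}, ..., i_d). order='F' makes i_1
    the fastest-varying index, matching the column-major multi-index."""
    rows = int(np.prod(A.shape[:k]))
    return A.reshape(rows, -1, order='F')

A2 = unfolding(A, 2)
print(A2.shape)  # (12, 30) = (n_1*n_2, n_3*n_4)
```

With this notation in place, Theorem 2.1 reads as follows.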

**Theorem 2.1** If, for each unfolding matrix $A_k$ of form $(1.2)$ of a $d$-dimensional tensor $\mathcal{A}$,
$$\operatorname{rank}\left( A_k \right) =r_k, \tag{1.3}$$
then there exists a decomposition $(1.1)$ with TT-ranks not higher than $r_k$.

*Proof.* Consider the unfolding matrix $A_1$. Since its rank equals $r_1$, it admits a rank factorization (obtainable, e.g., from the SVD):
$$A_1=UV^{\mathrm{T}}, \tag{1.4}$$
where $U$ is an $(n_1 \times r_1)$-matrix and $V^{\mathrm{T}}$ is an $(r_1 \times \prod_{s=2}^d{n_s})$-matrix. Then $(1.4)$ can be written entrywise as
$$\begin{aligned} A_1\left( i_1,\overline{i_2\cdots i_d} \right) &=\sum_{\alpha _1=1}^{r_1}{U\left( i_1,\alpha _1 \right) V^{\mathrm{T}}\left( \alpha _1,\overline{i_2\cdots i_d} \right)}\\ &=\sum_{\alpha _1=1}^{r_1}{U\left( i_1,\alpha _1 \right) V\left( \overline{i_2\cdots i_d},\alpha _1 \right)}. \end{aligned} \tag{1.5}$$
$V^{\mathrm{T}}\left( \alpha _1,\overline{i_2\cdots i_d} \right)$ can be treated as a tensor $\mathcal{V}$, so $(1.5)$ can be written as
$$A_1\left( i_1,\overline{i_2\cdots i_d} \right) =\sum_{\alpha _1=1}^{r_1}{U\left( i_1,\alpha _1 \right) \mathcal{V}\left( \alpha _1,i_2,\cdots ,i_d \right)}. \tag{1.6}$$
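A factorization such as (1.4) can be obtained from a truncated SVD by absorbing the singular values into the left factor. A minimal sketch, continuing the code above (the tolerance handling and the helper name `rank_factorization` are assumptions of this note):

```python
def rank_factorization(M, tol=1e-12):
    """Return (U, Vt) with M = U @ Vt, where U has shape (m, r) and Vt has
    shape (r, n) for r = numerical rank of M. Built from a truncated SVD,
    with the singular values absorbed into the left factor."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :r] * s[:r], Vt[:r, :]

# First unfolding: A_1 = U @ Vt as in eq. (1.4), i.e. V = Vt.T.
U, Vt = rank_factorization(unfolding(A, 1))
assert np.allclose(unfolding(A, 1), U @ Vt)
```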

The matrix $V$ can be expressed as
$$V=A_{1}^{\mathrm{T}}U\left( U^{\mathrm{T}}U \right) ^{-1}=A_{1}^{\mathrm{T}}W$$
(the inverse exists because $U$ has full column rank $r_1$), which means that
$$\begin{aligned} V\left( \overline{i_2\cdots i_d},\alpha _1 \right) &=\sum_{i_1=1}^{n_1}{A_{1}^{\mathrm{T}}\left( \overline{i_2\cdots i_d},i_1 \right) W\left( i_1,\alpha _1 \right)} \\ &=\sum_{i_1=1}^{n_1}{A_1\left( i_1,\overline{i_2\cdots i_d} \right) W\left( i_1,\alpha _1 \right)} \\ &=\sum_{i_1=1}^{n_1}{\mathcal{A}\left( i_1,\dots,i_d \right) W\left( i_1,\alpha _1 \right)}. \end{aligned} \tag{1.7}$$
Since $V\left( \overline{i_2\cdots i_d},\alpha _1 \right) =V^{\mathrm{T}}\left( \alpha _1,\overline{i_2\cdots i_d} \right) =\mathcal{V}\left( \alpha _1,i_2,\cdots ,i_d \right)$, the tensor $\mathcal{V}$ can be written as
$$\mathcal{V}\left( \alpha _1,i_2,\cdots ,i_d \right) =\sum_{i_1=1}^{n_1}{\mathcal{A}\left( i_1,\dots,i_d \right) W\left( i_1,\alpha _1 \right)}. \tag{1.8}$$
Now $\mathcal{V}$ can be treated as a $(d-1)$-dimensional tensor $\mathbf{V}_1$ with $(\alpha_1 i_2)$ as one long index:
$$\mathbf{V}_1\left( \alpha _1i_2,i_3,\cdots ,i_d \right) =\mathcal{V}\left( \alpha _1,i_2,i_3,\cdots ,i_d \right).$$
Then $\mathbf{V}_1$ is an $(r_1 n_2) \times n_3 \times \cdots \times n_d$ tensor.
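Equations (1.7)–(1.8) and the regrouping into $\mathbf{V}_1$ translate directly into code. Continuing the sketch (note that with $A_1=UV^{\mathrm{T}}$ one gets $V=A_1^{\mathrm{T}}W=$ `Vt.T` exactly, so the pseudoinverse step matters only when $U$ comes from elsewhere):

```python
A1 = unfolding(A, 1)             # (n_1, n_2*...*n_d)
U, Vt = rank_factorization(A1)
r1 = U.shape[1]

W = U @ np.linalg.inv(U.T @ U)   # (n_1, r_1); U has full column rank
V = A1.T @ W                     # eq. (1.7); here simply equal to Vt.T
assert np.allclose(V, Vt.T)

# Tensor V of eq. (1.8), with (alpha_1, i_2) merged into one long index:
V1 = V.T.reshape((r1 * n[1],) + n[2:], order='F')
print(V1.shape)                  # (r_1*n_2, n_3, ..., n_d)
```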

Now consider its unfolding matrices $V_2,\dots,V_d$. We know that $A_k$ is a $\left( \prod_{s=1}^k{n_s} \right) \times \left( \prod_{s=k+1}^d{n_s} \right)$-matrix with $\operatorname{rank}\left( A_k \right) =r_k$, so $A_k$ can be decomposed as $A_k=F_k G_k$, where $F_k$ is a $\left( \prod_{s=1}^k{n_s} \right) \times r_k$-matrix and $G_k$ is an $r_k \times \left( \prod_{s=k+1}^d{n_s} \right)$-matrix. Therefore, $\mathcal{A}$ can be expressed as
$$\begin{aligned} \mathcal{A}\left( i_1,\dots,i_d \right) &=A_k\left( \overline{i_1\cdots i_k},\overline{i_{k+1}\cdots i_d} \right) \\ &=\sum_{\beta =1}^{r_k}{F_k\left( \overline{i_1\cdots i_k},\beta \right) G_k\left( \beta ,\overline{i_{k+1}\cdots i_d} \right)}\\ &=\sum_{\beta =1}^{r_k}{\mathcal{F}\left( i_1,\dots,i_k,\beta \right) \mathcal{G}\left( \beta ,i_{k+1},\dots,i_d \right)}. \end{aligned} \tag{1.9}$$
Using $(1.8)$ and $(1.9)$, we obtain
$$\begin{aligned} V_k\left( \overline{\alpha _1i_2\cdots i_k},\overline{i_{k+1}\cdots i_d} \right) &=\sum_{i_1=1}^{n_1}{\mathcal{A}\left( i_1,\cdots ,i_d \right) W\left( i_1,\alpha _1 \right)}\\ &=\sum_{i_1=1}^{n_1}{\left[ \sum_{\beta =1}^{r_k}{\mathcal{F}\left( i_1,\dots,i_k,\beta \right) \mathcal{G}\left( \beta ,i_{k+1},\dots,i_d \right)} \right] W\left( i_1,\alpha _1 \right)} \\ &=\sum_{\beta =1}^{r_k}{\sum_{i_1=1}^{n_1}{W\left( i_1,\alpha _1 \right) \mathcal{F}\left( i_1,\dots,i_k,\beta \right)}\,\mathcal{G}\left( \beta ,i_{k+1},\dots,i_d \right)} \\ &=\sum_{\beta =1}^{r_k}{\mathcal{H}\left( \alpha _1i_2,\dots,i_k,\beta \right) \mathcal{G}\left( \beta ,i_{k+1},\dots,i_d \right)}, \end{aligned} \tag{1.10}$$
where
$$\mathcal{H}\left( \alpha _1i_2,\dots,i_k,\beta \right) = \sum_{i_1=1}^{n_1}{W\left( i_1,\alpha _1 \right) \mathcal{F}\left( i_1,\dots,i_k,\beta \right)}.$$
Thus $(1.10)$ writes $V_k$ as a product of a matrix with $r_k$ columns and a matrix with $r_k$ rows, so $\operatorname{rank}(V_k) \le r_k$. The process can be continued by induction.
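The bound $\operatorname{rank}(V_k)\le r_k$ can be spot-checked numerically on the unfoldings of $\mathbf{V}_1$, continuing the sketch above (an illustrative check, not part of the proof):

```python
# Unfold V_1 by grouping (alpha_1 i_2 ... i_k) against (i_{k+1} ... i_d)
# and compare its rank with r_k = rank(A_k).
for k in range(2, A.ndim):
    r_k = np.linalg.matrix_rank(unfolding(A, k))
    Vk = unfolding(V1, k - 1)  # V_1 has d-1 modes, so V_k is its (k-1)-th unfolding
    assert np.linalg.matrix_rank(Vk) <= r_k
```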

Following the steps of this proof, we can construct a decomposition of the form $(1.1)$ explicitly. Let $\mathcal{G}_1$ be the factor $U$ from $(1.4)$. Then $\mathcal{A}$ can be represented as
$$\begin{aligned} \mathcal{A}\left( i_1,\cdots ,i_d \right) &=A_1\left( i_1,\overline{i_2\cdots i_d} \right) \\ & =\sum_{\alpha _1=1}^{r_1}{U\left( i_1,\alpha _1 \right) V^{\mathrm{T}}\left( \alpha _1,\overline{i_2\cdots i_d} \right)} \\ & =\sum_{\alpha _1=1}^{r_1}{U\left( i_1,\alpha _1 \right) \mathcal{V}\left( \alpha _1,i_2,\cdots ,i_d \right)} \\ & =\sum_{\alpha _1=1}^{r_1}{\mathcal{G}_1\left( i_1,\alpha _1 \right) \mathbf{V}_1\left( \alpha _1i_2,i_3,\cdots ,i_d \right)} \\ & =\sum_{\alpha _1=1}^{r_1}{\mathcal{G}_1\left( \alpha _0,i_1,\alpha _1 \right) \mathbf{V}_1\left( \alpha _1i_2,i_3,\cdots ,i_d \right)}, \end{aligned} \tag{1.11}$$
where $\alpha_0 = 1$.

Now consider the tensor $\mathbf{V}_1$ and define $V_2$ by
$$V_2\left( \overline{\alpha _1i_2},\overline{i_3\cdots i_d} \right) =\mathbf{V}_1\left( \alpha _1i_2,i_3,\cdots ,i_d \right).$$
Its size is $(r_1 n_2) \times (n_3 \cdots n_d)$; in other words,
$$V_2=\operatorname{reshape}\left( \mathbf{V}_1,\left[ r_1n_2,\ \prod_{s=3}^d{n_s} \right] \right).$$
Set $\operatorname{rank}(V_2) = r_2$; then $V_2$ can be expressed as
$$V_2\left( \overline{\alpha _1i_2},\overline{i_3\cdots i_d} \right) =\sum_{\alpha _2=1}^{r_2}{U'\left( \overline{\alpha _1i_2},\alpha _2 \right) V'\left( \alpha _2,\overline{i_3\cdots i_d} \right)}. \tag{1.12}$$
As with $\mathbf{V}_1$, there exists a tensor $\mathcal{V}'$ satisfying $\mathcal{V}'\left( \alpha _2,i_3,\cdots ,i_d \right) =V'\left( \alpha _2,\overline{i_3\cdots i_d} \right)$, and $\mathcal{V}'$ can be treated as a $(d-2)$-dimensional tensor $\mathbf{V}_2$:
$$\mathbf{V}_2\left( \alpha _2i_3,i_4,\dots,i_d \right) =\mathcal{V}'\left( \alpha _2,i_3,\cdots ,i_d \right). \tag{1.13}$$
The factor $U'$ in $(1.12)$ can also be treated as a tensor, satisfying
$$\mathcal{G}_2\left( \alpha _1,i_2,\alpha _2 \right) =U'\left( \overline{\alpha _1i_2},\alpha _2 \right).$$
Therefore, $\mathcal{A}$ can be expressed as
$$\begin{aligned} \mathcal{A}\left( i_1,\cdots ,i_d \right) &=\sum_{\alpha _1=1}^{r_1}{\mathcal{G}_1\left( \alpha _0,i_1,\alpha _1 \right) \mathbf{V}_1\left( \alpha _1i_2,i_3,\cdots ,i_d \right)} \\ &=\sum_{\alpha _1=1}^{r_1}{\mathcal{G}_1\left( \alpha _0,i_1,\alpha _1 \right) \left[ \sum_{\alpha _2=1}^{r_2}{\mathcal{G}_2\left( \alpha _1,i_2,\alpha _2 \right) \mathbf{V}_2\left( \alpha _2i_3,i_4,\dots,i_d \right)} \right]} \\ &=\sum_{\alpha _2=1}^{r_2}{\sum_{\alpha _1=1}^{r_1}{\mathcal{G}_1\left( \alpha _0,i_1,\alpha _1 \right) \mathcal{G}_2\left( \alpha _1,i_2,\alpha _2 \right) \mathbf{V}_2\left( \alpha _2i_3,i_4,\dots,i_d \right)}}. \end{aligned} \tag{1.14}$$
Repeating this procedure up to $\mathcal{G}_d$, we obtain the TT-representation $(1.1)$, with the $k$-th TT-rank not higher than $r_k$. This completes the proof.
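The induction in this proof is constructive: it is exactly a TT-SVD sweep. Below is a compact sketch under the same assumptions as the snippets above (helper names are illustrative, and this is a sketch of the construction rather than the paper's reference implementation), followed by an entrywise check of $(1.1)$:

```python
def tt_svd(A):
    """Build cores G_1, ..., G_d following the proof: repeatedly reshape the
    remainder into a matrix with (alpha_{k-1}, i_k) as the row index, take a
    rank factorization, and keep the left factor as the next core."""
    n, d = A.shape, A.ndim
    cores, r = [], 1
    C = A.reshape(r * n[0], -1, order='F')           # alpha_0 = 1
    for k in range(d - 1):
        Uk, Vt = rank_factorization(C)
        cores.append(Uk.reshape(r, n[k], Uk.shape[1], order='F'))  # G_{k+1}
        r = Uk.shape[1]
        C = Vt.reshape(r * n[k + 1], -1, order='F')
    cores.append(C.reshape(r, n[-1], 1, order='F'))  # G_d, alpha_d = 1
    return cores

cores = tt_svd(A)
idx = (1, 2, 3, 4)  # an arbitrary test index
val = np.linalg.multi_dot([G[:, i, :] for G, i in zip(cores, idx)])[0, 0]
assert np.isclose(val, A[idx])
```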
