Tensor Contraction (1) : Decompositions, Gauge Freedom, Canonical Forms

# Home: Tensors.net

# Overview (contractions, decompositions, gauge freedom): Frontiers | A Practical Guide to the Numerical Implementation of Tensor Networks I: Contractions, Decompositions, and Gauge Freedom (frontiersin.org)

# Playground (more on SVD pruning in NNs): Kahsolt/nn-prune-svd: Neural network weights prune in a static LoRA-like way (github.com)

* Tensor Decomposition:

    * Diag Tensor:

    * Unitary & Isometric Tensor :

 Unitary: contracts to the identity over equal dimensions.

 U^{\dagger}U = I ; UU^{\dagger} = I

 Unitary contraction of a tensor: U.reshape(d*d,d*d) -> I = np.dot(U.conj().T, U) -> I.reshape(d,d,d,d)

 Isometric: contracting over the larger dimension gives ~ I; contracting the other way gives a projector.

 U^{\dagger}U = I ; UU^{\dagger} = P

 The contraction works as above; isometry is a slightly weaker condition than unitarity.

 The operator is drawn as a wedge pointing toward the smaller dimension. Contracting away the larger dimension gives ~ I; contracting away the smaller dimension gives ~ P.

 A projector (as a Hermitian matrix) has eigenvalues that are only 0 or 1: \lambda^2 - \lambda = 0
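A minimal numerical sketch of the two defining properties, taking the U factor of a thin SVD as the isometry (the dimensions d1, d2 here are illustrative):

import numpy as np

d1, d2 = 20, 10                                # d1 > d2: a "tall" isometry
W, _, _ = np.linalg.svd(np.random.rand(d1,d2), full_matrices=False)

print(np.linalg.norm(W.T @ W - np.eye(d2)))    # contract the larger dimension: ~ I
P = W @ W.T                                    # contract the smaller dimension: a projector P
print(np.linalg.norm(P @ P - P))               # P^2 = P (~0)
print(np.round(np.linalg.eigvalsh(P)))         # eigenvalues are only 0 or 1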

    * SVD Decomposition: reshape the tensor into a matrix, then SVD it

import numpy as np

import numpy.linalg as LA

d1 = 20
d2 = 10
d = 10

A0 = np.random.rand(d1,d2)
A1 = np.random.rand(d,d,d)

U0,S0,V0 = np.linalg.svd(A0,full_matrices = False)
U1,S1,V1 = np.linalg.svd(A1.reshape(d**2,d),full_matrices = False) 
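A quick sanity check (sketch): multiplying the factors back together recovers A0 and A1 up to float error.

print(LA.norm(U0 @ np.diag(S0) @ V0 - A0))
print(LA.norm((U1 @ np.diag(S1) @ V1).reshape(d,d,d) - A1))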

    * Spectral Decomposition :

# spectral decomposition needs a square (Hermitian) operator, so build
# symmetrized tensors of the right shape first
M0 = np.random.rand(d,d)
M1 = np.random.rand(d,d,d,d)
H0 = 0.5*(M0 + M0.T)
H1 = 0.5*(M1 + M1.transpose(2,3,0,1))

# use eigh (not eig) for Hermitian matrices: real eigenvalues, orthonormal eigenvectors
eig_vals,eig_vecs = LA.eigh(H0)
D0 = np.diag(eig_vals)

eig_vals,eig_vecs = LA.eigh(H1.reshape(d**2,d**2))
D1 = np.diag(eig_vals)
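As a check (sketch), the eigen-factors rebuild the matricized H1:

print(LA.norm(eig_vecs @ D1 @ eig_vecs.T - H1.reshape(d**2,d**2)))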

    * SVD & Spectral Decomposition:

A0 = np.random.rand(d1,d2)
U0,S0,V0 = LA.svd(A0,full_matrices=False)
# full_matrices=False: S0 is a vector of the k = min(d1,d2) singular values
s0,u0 = LA.eigh(np.dot(A0,A0.T))
# eigh sorts ascending, so reverse; only the top d2 eigenvalues of A0 @ A0.T are nonzero
print(s0[::-1][:d2] - S0**2)

    * QR Decomposition :

Q0,R0 = np.linalg.qr(A0)
# QR is cheaper than the SVD; both factorizations rebuild the same matrix
print(LA.norm(np.dot(Q0,R0) - A0))
print(LA.norm(np.dot(U0*S0,V0) - A0))

Q1,R1 = np.linalg.qr(A1.reshape(d**2,d))
print(LA.norm(np.dot(Q1,R1) - A1.reshape(d**2,d)))

    * Frobenius norm:

 The Frobenius norm is the standard, basis-independent way to measure the magnitude of a tensor.

  Several ways to compute it:

(0) ncon / einsum: contract conj(A) with A over all matching indices

(1) vectorize: A.reshape(1,-1), then contract; in effect np.sum(abs(A.flatten())**2)

(2) built-in: np.linalg.norm(A)

from ncon import ncon

d = 10
A = np.random.rand(d,d,d,d)
# ncon:
cvec = [k+1 for k in range(np.ndim(A))]
frobA0 = np.sqrt(ncon([A,np.conj(A)],[cvec,cvec]))
# sum + flatten:
frobA1 = np.sqrt(np.sum(abs(A.flatten())**2))
# linalg.norm:
frobA2 = np.linalg.norm(A)
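All three evaluations should agree up to floating-point error:

print(frobA0 - frobA1, frobA1 - frobA2)  # both differences ~0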

    * Tensor Truncation:

import numpy as np
import numpy.linalg as LA
d = 10
A = np.random.rand(d,d,d,d,d)

##### SVD of A.reshape(d**3,d**2); S is returned as a vector of min(d**3,d**2) = d**2 singular values:
##### full_matrices=False: Um is d**3 x d**2, Vhm is d**2 x d**2
##### full_matrices=True : Um is d**3 x d**3, Vhm is d**2 x d**2
Um,S,Vhm = LA.svd(A.reshape(d**3,d**2),full_matrices=False)

##### truncation
chi = 8
Utilda = Um[:,:chi].reshape(d,d,d,chi)
Stilda = np.diag(S[:chi])
Vhtilda = Vhm[:chi,:].reshape(chi,d,d)
B = ncon([Utilda,Stilda,Vhtilda],[[-1,-2,-3,1],[1,2],[2,-4,-5]])
##### compare
epsAB = LA.norm(A-B) / LA.norm(A)
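The relative error is set entirely by the discarded singular values (a sketch of the check):

# ||A - B||_F equals the root-sum-square of the discarded singular values
eps_pred = np.sqrt(np.sum(S[chi:]**2)) / LA.norm(A)
print(epsAB - eps_pred)  # ~0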

* Gauge Freedom:

    * A gauge transformation is an equivalence-preserving rewrite of a network: in TN terms, insert an identity I on an internal link, then split and redistribute it.

    * Take tree tensor networks (acyclic graphs) as the example: they enjoy nice properties that networks with closed loops lack, and are therefore easier to manipulate.

    * Most results about gauge freedom generalize to networks containing closed loops; see arXiv:1801.05390.

    * A useful property: choose any tensor as the center (or root node); the tree network can then be viewed as a set of distinct branches extending from that tensor, and in a loop-free network no connections between different branches are possible.

    * Let T be a tensor network that, with all internal indices contracted, evaluates to some tensor D. Uniqueness of the decomposition: are there different choices of tensors in the network that still evaluate to the same D?

    * Yes. On any internal link insert X X^{-1} = I for an invertible X, then absorb X into the tensor on one side and X^{-1} into the tensor on the other (e.g. A -> \hat{A} = A X and C -> \hat{C} = X^{-1} C); after decomposition + reshape the contracted result is unchanged (see the sketch after this list).

    * We call this ability to re-decompose across internal indices by inserting identities, yielding different representations that contract to the same tensor, the gauge freedom of the network; the set of such choices is infinite. To fix a useful gauge within this freedom, we introduce a center of orthogonality.

    * A center of orthogonality is pinned down by the orientation of isometric branches: if every individual tensor in a branch is made into a (properly oriented) isometry, the whole branch becomes an isometry, and the network acquires a center of orthogonality at tensor A.
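A minimal sketch of the insertion trick on a single link (the matrices A, C, X here are illustrative, not the network tensors below): inserting X X^{-1} between two contracted tensors and absorbing the factors leaves the result unchanged.

import numpy as np

d = 4
A = np.random.rand(d,d); C = np.random.rand(d,d)
X = np.random.rand(d,d) + d*np.eye(d)       # a well-conditioned invertible matrix
Ahat = A @ X                                # absorb X into A
Chat = np.linalg.inv(X) @ C                 # absorb X^{-1} into the neighbor
print(np.linalg.norm(A @ C - Ahat @ Chat))  # ~0: the contraction is unchanged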

    * Creating a center of orthogonality (a): QR

    (1) This is where tree tensor networks shine; being loop-free, the procedure can be visualized directly, works by induction, and returns a globally isometric set of tensors

    (2) Take the current outermost layer of tensors and QR-decompose each one; keep the Q factor and pass the R factor inward, contracting it into the next layer

    (3) Treat that next layer as the new outermost layer and repeat step (2) until the center A is reached

    The cost is in any case not low. To reduce it, truncate: replace the QR with an (economy, full_matrices=False) SVD, discard the smallest singular values, and pass S V^{\dagger} inward to the next layer instead of R. einsum-style coding becomes awkward here; ncon is more convenient.

from ncon import ncon
import numpy as np
import numpy.linalg as LA

d = 3
A = np.random.rand(d,d,d,d); B = np.random.rand(d,d,d) 
C = np.random.rand(d,d,d); D = np.random.rand(d,d,d) 
E = np.random.rand(d,d,d); F = np.random.rand(d,d,d) 
G = np.random.rand(d,d,d)

# outermost layer: QR-decompose the leaf tensors D and E
DQ, DR = LA.qr(D.reshape(d**2,d)); DQ = DQ.reshape(d,d,d)
EQ, ER = LA.qr(E.reshape(d**2,d)); EQ = EQ.reshape(d,d,d)

# absorb the R factors into B, then QR-decompose the next layer (B, F, G):
Btilda = ncon([B,DR,ER],[[1,2,-3],[-1,1],[-2,2]])

BQ, BR = LA.qr(Btilda.reshape(d**2,d)); BQ = BQ.reshape(d,d,d)
FQ, FR = LA.qr(F.reshape(d**2,d)); FQ = FQ.reshape(d,d,d)
GQ, GR = LA.qr(G.reshape(d**2,d)); GQ = GQ.reshape(d,d,d)

# absorb G's R factor into C, QR-decompose, then push all remaining R factors into A:
Ctilda = ncon([C,GR],[[1,-2,-3],[-1,1]])

CQ, CR = LA.qr(Ctilda.reshape(d**2,d)); CQ = CQ.reshape(d,d,d)
Aprime = ncon([A,BR,FR,CR],[[1,-2,2,3],[-1,1],[-3,2],[-4,3]])

# new network is formed from tensors: {Aprime,BQ,CQ,DQ,EQ,FQ,GQ}.

# check both networks evaluate to the same tensor 
# compared with former: {A,B,C,D,E,F,G}

connectlist = [[3,-5,4,5],[1,2,3],[6,-10,5],[-1,-2,1],[-3,-4,2],[-6,-7,4],[-8,-9,6]]
H0 = ncon([A,B,C,D,E,F,G],connectlist)
H1 = ncon([Aprime,BQ,CQ,DQ,EQ,FQ,GQ],connectlist)
dH = LA.norm(H0-H1) / LA.norm(H0)
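A quick check (sketch) that pulling through worked: each Q tensor, and with it each branch, is now an isometry.

# e.g. BQ: contracting with itself over its two outer indices gives the identity
idB = ncon([BQ,BQ],[[1,2,-1],[1,2,-2]])
print(LA.norm(idB - np.eye(d)))  # ~0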

    * Creating a center of orthogonality (b): Direct orthogonalization

(0) QR "pulling through": the outermost tensors successively constrain the inner layers, recursively producing isometries with low floating-point error, and every tensor along the way can itself serve as a center of orthogonality for its leaves; round-off has comparatively little impact.

  Direct orthogonalization: if the desired center is known in advance, transform only the tensors adjacent to it so that each branch becomes isometric as a whole. This lacks pulling-through's node-by-node orthogonality, but the computation is much simpler.

(1) Contract each of the B, F, C branches with its conjugate to get a (d,d) density matrix rho_i = U_i \Lambda_i U_i^{\dagger}, then take principal square roots:

          X_i = U_i \sqrt{\Lambda_i} U_i^{\dagger}

          X_i^{-1} = U_i \Lambda_i^{-1/2} U_i^{\dagger}

(2) Update B, F, C: contract each with its X_i^{-1}, and absorb the corresponding X_i into A.

##### Ex.3.3(c): Creating a center of orthogonality with 'direct orthogonalization' 

# define tensors
d = 3
A = np.random.rand(d,d,d,d); B = np.random.rand(d,d,d) 
C = np.random.rand(d,d,d); D = np.random.rand(d,d,d) 
E = np.random.rand(d,d,d); F = np.random.rand(d,d,d) 
G = np.random.rand(d,d,d)

# compute density matrices and their principal square roots

rho1 = ncon([B,D,E,B,D,E],[[5,6,-2],[1,2,5],[3,4,6],[7,8,-1],[1,2,7],[3,4,8]])
rho2 = ncon([F,F],[[1,2,-2],[1,2,-1]])
rho3 = ncon([C,G,C,G],[[3,5,-2],[1,2,3],[4,5,-1],[1,2,4]])

d1, u1 = LA.eigh(rho1); sq_d1 = np.sqrt(abs(d1))
d2, u2 = LA.eigh(rho2); sq_d2 = np.sqrt(abs(d2))
d3, u3 = LA.eigh(rho3); sq_d3 = np.sqrt(abs(d3))

X1 = u1 @ np.diag(sq_d1) @ u1.T; X1inv = u1 @ np.diag(1/sq_d1) @ u1.T
X2 = u2 @ np.diag(sq_d2) @ u2.T; X2inv = u2 @ np.diag(1/sq_d2) @ u2.T
X3 = u3 @ np.diag(sq_d3) @ u3.T; X3inv = u3 @ np.diag(1/sq_d3) @ u3.T

# execute gauge changes
Aprime = ncon([A,X1,X2,X3],[[1,-2,2,3],[-1,1],[-3,2],[-4,3]])
Bprime = ncon([B,X1inv],[[-1,-2,1],[1,-3]])
Fprime = ncon([F,X2inv],[[-1,-2,1],[1,-3]])
Cprime = ncon([C,X3inv],[[-1,-2,1],[1,-3]])
# new network is formed from tensors: {Aprime,Bprime,Cprime,D,E,Fprime,G}

# check both networks evaluate to the same tensor
connectlist = [[3,-5,4,5],[1,2,3],[6,-10,5],[-1,-2,1],[-3,-4,2],[-6,-7,4],[-8,-9,6]]
H0 = ncon([A,B,C,D,E,F,G],connectlist)
H1 = ncon([Aprime,Bprime,Cprime,D,E,Fprime,G],connectlist)
dH = LA.norm(H0 - H1) / LA.norm(H0)
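A check (sketch): after the gauge change each branch is isometric; e.g. the F branch alone:

idF = ncon([Fprime,Fprime],[[1,2,-1],[1,2,-2]])
print(LA.norm(idF - np.eye(d)))  # ~0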

Direct orthogonalization is convenient, but with a catch: the eigenvalues of the branch density matrices are not always strictly positive. Non-positive-definite cases have to be regularized, e.g. by truncating near-zero eigenvalues before inverting.

    * Tensor decompositions within networks 

What is a center of orthogonality for? Consider the Frobenius norm of the whole network: every tensor other than the center contracts with its conjugate to the identity, leaving only

tr(H H^{\dagger}) = tr(A A^{\dagger}), i.e. ||H|| = ||A||

So when this norm is used to measure the distance between two many-body operators expressed as networks, the same cancellation gives: if the center A is SVD-truncated to A', producing the global tensor H', then

||H - H'|| = ||A - A'||

The center of orthogonality thus compresses the information of all the other components into a single tensor, so optimal decomposition + truncation of the whole network reduce to a local operation on the center (a sketch of the check follows).
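A sketch, continuing from Ex3.3(c): truncate the center Aprime and check that the local relative error equals the global one (chi here is an illustrative truncation rank).

um,sm,vhm = LA.svd(Aprime.reshape(d,d**3),full_matrices=False)
chi = 2
Atrunc = (um[:,:chi] @ np.diag(sm[:chi]) @ vhm[:chi,:]).reshape(d,d,d,d)
errA = LA.norm(Aprime - Atrunc) / LA.norm(Aprime)
Htrunc = ncon([Atrunc,Bprime,Cprime,D,E,Fprime,G],connectlist)
errH = LA.norm(H1 - Htrunc) / LA.norm(H1)
print(errA - errH)  # ~0, since every branch around Aprime is isometric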

* Canonical Forms:

    *  Based on Multi-stage tensor decompositions:

         (Why the transpose? It seems to be just a way to manufacture a specific, non-symmetric test tensor.)

d = 5
chi = 3
## the reversed index order just fixes a particular nontrivial test tensor
H0 = (np.sqrt(1+np.arange(d**7))).reshape(d,d,d,d,d,d,d).transpose(6,5,4,3,2,1,0) 

# First Decomposition:
    
utemp,stemp,vhtemp = LA.svd(H0.reshape(d**2,d**5),full_matrices=False)
U0 = (utemp[:,:chi]).reshape(d,d,chi)
H1 = np.dot(np.diag(stemp[:chi]),vhtemp[:chi,:]).reshape(chi,d,d,d,d,d)

# Second Decomposition:
    
utemp,stemp,vhtemp = LA.svd(H1.transpose(1,2,0,3,4,5).reshape(d*d,chi*d*d*d),full_matrices=False)
U1 = utemp[:,:chi].reshape(d,d,chi)
## why transpose(1,0,2,3,4): the remainder comes back ordered (new bond, old bond, d,d,d);
## swapping the first two indices puts the bond from the first decomposition back in front
H2 = np.dot(np.diag(stemp[:chi]),vhtemp[:chi,:]).reshape(chi,chi,d,d,d).transpose(1,0,2,3,4)

# Third decomposition

utemp,stemp,vhtemp = LA.svd(H2.reshape(chi**2,d**3),full_matrices=False)
U2 = utemp[:,:chi].reshape(chi,chi,chi)
H3 = np.dot(np.diag(stemp[:chi]),vhtemp[:chi,:]).reshape(chi,d,d,d)

# Fourth decomposition

utemp,stemp,vhtemp = LA.svd(H3.reshape(chi*d,d**2),full_matrices=False)
V3 = vhtemp[:chi,:].reshape(chi,d,d)
H4 = np.dot(utemp[:,:chi] , np.diag(stemp[:chi])).reshape(chi,d,chi)
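A check (sketch): contract the pieces back together and compare with H0; the relative truncation error should be small for this smooth test tensor.

# rebuild H0 from the multi-stage factors:
# U0[i0,i1,a], U1[i2,i3,b], U2[a,b,c], H4[c,i4,s], V3[s,i5,i6]
H0recover = ncon([U0,U1,U2,H4,V3],[[-1,-2,1],[-3,-4,2],[1,2,3],[3,-5,4],[4,-6,-7]])
print(LA.norm(H0recover - H0) / LA.norm(H0))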

    * Center of orthogonality (link centered):

Worth noting: this procedure makes explicit that (entanglement spectrum)**2 = eigenvalues of the reduced density matrix.

##### Ex4.2(a): set B-C link as center of orthogonality
d = 5 # index dimension
A = np.random.rand(d,d,d)
B = np.random.rand(d,d,d)
C = np.random.rand(d,d,d)
Sig = np.eye(d); # initial link matrix

# generate gauge change matrices
rho1 = ncon([A,A,B,B],[[1,2,3],[1,2,4],[3,5,-1],[4,5,-2]])
rho2 = ncon([C,C],[[-1,1,2],[-2,1,2]])
d1, u1 = LA.eigh(rho1); sq_d1 = np.sqrt(abs(d1))
d2, u2 = LA.eigh(rho2); sq_d2 = np.sqrt(abs(d2))
X1 = u1 @ np.diag(sq_d1) @ u1.T; X1inv = u1 @ np.diag(1/sq_d1) @ u1.T
X2 = u2 @ np.diag(sq_d2) @ u2.T; X2inv = u2 @ np.diag(1/sq_d2) @ u2.T
# implement gauge change
Bprime = ncon([B,X1inv],[[-1,-2,1],[1,-3]])
Cprime = ncon([X2inv,C],[[-1,1],[1,-2,-3]])
Sig_prime = X1 @ Sig @ X2
# check result
H0 = ncon([A,B,C],[[-1,-2,1],[1,-3,2],[2,-4,-5]])
H1 = ncon([A,Bprime,Sig_prime,Cprime],[[-1,-2,1],[1,-3,2],[2,3],[3,-4,-5]])
totErr = LA.norm(H0 - H1) / LA.norm(H0) 

Diagonalizing Sig_prime: SVD it as \tilde{\sigma}' = U \tilde{\sigma}'' V^{\dagger}, then absorb U into B' on the left and V^{\dagger} into C' on the right (equivalently, insert U U^{\dagger} and V V^{\dagger} on either side of the link); the remaining link matrix \tilde{\sigma}'' = U^{\dagger} \tilde{\sigma}' V is diagonal.

##### Ex4.2(b): setting a link as a center of orthogonality
##### (continues from Ex4.2(a))

# perform unitary gauge change to diagonalize link matrix
utemp, Sig_pp, vhtemp = LA.svd(Sig_prime,full_matrices=False)
Bpp = ncon([Bprime,utemp],[[-1,-2,1],[1,-3]])
Cpp = ncon([Cprime,vhtemp],[[1,-2,-3],[-1,1]])
# check result
H2 = ncon([A,Bpp,np.diag(Sig_pp),Cpp],[[-1,-2,1],[1,-3,2],[2,3],[3,-4,-5]])
totErr = LA.norm(H0 - H2) / LA.norm(H0) 

 Role of the SVD: it effectively places the gauge center at the midpoint of the link, and at the same time reveals that (entanglement spectrum)**2 = eigenvalue spectrum of the reduced density matrix.

##### Ex4.2(c): equivalence of center of orthogonality and SVD
##### (continues from Ex4.2(b))
H = ncon([A,B,C],[[-1,-2,1],[1,-3,2],[2,-4,-5]])
utemp,stemp,vhtemp = LA.svd(H.reshape(d**3,d**2),full_matrices=False)
UH = utemp[:,:d].reshape(d,d,d,d)
SH = stemp[:d]
VH = vhtemp[:d,:].reshape(d,d,d)

# compare with previous tensors from orthonormal form 
ErrU = LA.norm(abs(UH) - abs(ncon([A,Bpp],[[-1,-2,1],[1,-3,-4]])))
ErrS = LA.norm(SH - Sig_pp) 
ErrV = LA.norm(abs(VH) - abs(Cpp))
# all three results should be vanishingly small!!!

    * Canonical forms:

 An essential paper: The minimal canonical form of a tensor network (arxiv.org)

We can easily fix any chosen tensor as a center of orthogonality.

Benefits of canonical forms:

  • Any link can be truncated optimally to a smaller dimension by simply discarding the smallest singular values in the corresponding link matrix, with the truncation error easy to quantify.

  • The uniqueness of the canonical form essentially removes the gauge ambiguity from the description of the network.

  • Certain kinds of information (e.g. expectation values) can be extracted from the network in a simplified way, since networks containing isometric tensors cancel in large part:

  • For example: in canonical form the norm of the full network collapses to the norm of the central link matrix, because every isometric tensor annihilates against its conjugate (see the sketch below).
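A sketch, continuing from Ex4.2: with the link matrix as the center of orthogonality, the global norm equals the norm of the singular-value spectrum alone.

# the branches (A·Bpp and Cpp) are isometric, so they cancel in the norm
print(LA.norm(H0) - LA.norm(Sig_pp))  # ~0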
