Tensor Notes (4): Tensor Networks

Tensor decomposition usually factorizes a high-order tensor into a collection of lower-order tensors, which limits its representational power. A tensor network, by contrast, connects many tensors into a network structure, so it can represent complex high-dimensional data and model intricate relationships more flexibly. This section focuses on the HT and TT networks.
2.5.1 HT decomposition — we first introduce the definitions related to dimension trees, then give the definition of the HT decomposition together with an example and two algorithms for computing it, and finally present a generalization of the HT decomposition: TTNS;
2.5.2 TT decomposition — we give the definition of the TT decomposition and its ALS algorithm (a minimal TT-SVD sketch follows below), then introduce the definition and algorithms of the TR decomposition, and finally give a rough overview of other generalizations of the TT decomposition, such as PEPS, HCL, and MERA.
This section is notation-heavy, so the definitions and notation for tensor unfoldings are appended at the end.
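Before the formal definitions in 2.5.2, here is a minimal NumPy sketch of the standard TT-SVD procedure (sequential truncated SVDs of the unfoldings), which is a different algorithm from the ALS scheme the outline mentions. The function name `tt_svd` and the single rank cap `max_rank` are illustrative choices, not from the notes.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into TT cores G_1, ..., G_d, where core G_k
    has shape (r_{k-1}, n_k, r_k) and r_0 = r_d = 1.  `max_rank` caps all
    TT ranks; it is an illustrative truncation parameter."""
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    unfolding = tensor.reshape(dims[0], -1)        # mode-1 unfolding
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(max_rank, S.size)                  # truncate to the rank cap
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # fold the remainder S @ Vt so its rows index (r, n_{k+1})
        unfolding = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(unfolding.reshape(r_prev, dims[-1], 1))
    return cores
```

A quick check that the cores reconstruct an approximation of the input:

```python
X = np.random.rand(4, 5, 6, 7)
cores = tt_svd(X, max_rank=3)
approx = cores[0]
for G in cores[1:]:                                # contract cores left to right
    approx = np.tensordot(approx, G, axes=(-1, 0))
approx = approx.reshape(X.shape)                   # drop the boundary 1-dims
rel_err = np.linalg.norm(X - approx) / np.linalg.norm(X)
```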

Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning algorithms typically scale exponentially with data volume and complexity of cross-modal couplings - the so-called curse of dimensionality - which is prohibitive to the analysis of large-scale, multi-modal and multi-relational datasets. Given that such data are often efficiently represented as multiway arrays or tensors, it is therefore timely and valuable for the multidisciplinary machine learning and data analytic communities to review low-rank tensor decompositions and tensor networks as emerging tools for dimensionality reduction and large scale optimization problems. Our particular emphasis is on elucidating that, by virtue of the underlying low-rank approximations, tensor networks have the ability to alleviate the curse of dimensionality in a number of applied areas. In Part 1 of this monograph we provide innovative solutions to low-rank tensor network decompositions and easy to interpret graphical representations of the mathematical operations on tensor networks. Such a conceptual insight allows for seamless migration of ideas from the flat-view matrices to tensor network operations and vice versa, and provides a platform for further developments, practical applications, and non-Euclidean extensions. It also permits the introduction of various tensor network operations without an explicit notion of mathematical expressions, which may be beneficial for many research communities that do not directly rely on multilinear algebra. Our focus is on the Tucker and tensor train (TT) decompositions and their extensions, and on demonstrating the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2 of this monograph.
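As a back-of-the-envelope illustration of that scaling claim (my own arithmetic, not from the abstract): a dense order-$d$ tensor with mode sizes $n_k \le n$ stores $n^d$ entries, while a TT representation with ranks $r_k \le r$ stores only

$$\sum_{k=1}^{d} r_{k-1}\, n_k\, r_k \;\le\; d\, n\, r^2$$

numbers. For $n = 2$, $d = 50$, $r = 10$, this is the difference between $2^{50} \approx 10^{15}$ entries and at most $50 \cdot 2 \cdot 10^2 = 10^4$ parameters, i.e., storage logarithmic in the tensor volume.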