Tensor Network Paper Notes 1: Background and Significance of Tensor Networks

This post introduces tensor networks as a tool for tackling quantum many-body problems, with particular attention to their applications in condensed matter physics. Because traditional methods struggle to describe complex systems, tensor networks offer a more flexible way to represent wavefunctions: they make the entanglement structure of a system visually explicit, and they are especially well suited to many-body systems living in high-dimensional Hilbert spaces. Tensor networks have pushed the boundaries of what can be simulated and have become a key route to understanding and simulating quantum phenomena.

Series Background

I stumbled into this field almost by accident. Although I am a physics student, I had long been set on going into computer science; beyond the four core theory courses of the undergraduate physics curriculum and a passing familiarity with names like solid state physics and atomic, molecular, and optical (cold atom) physics, I knew little about the field. With the choice of specialization fast approaching, I realized that I actually understood very little about the directions physics offers, and that they are far more diverse and rich than I had imagined. Tensor networks happen to be my academic advisor's research area; they looked interesting, so I decided to try learning about them.

Since I have started blogging on CSDN anyway, I will post my study notes and thoughts here. I doubt many people will read them, but if you happen to, let's discuss together, and please do correct my mistakes.

Background of Tensor Networks

The Quantum Many-Body Problem

Quantum many-body systems are among the most challenging problems in condensed matter physics; the still-unsolved problem of high-temperature superconductivity is a prominent example. Although many such quantum systems are hard to understand and study, interest in the novel phases they host keeps growing, for instance topological phases and quantum spin liquids.

The traditional, standard approach is to propose a simplified model of the interacting system (for superconductivity, say, Cooper pairing or the t-J model), and then, if the model happens to be solvable under some particular condition, to solve it with whatever mathematical machinery applies. Clearly, though, such an approach cannot describe more complex systems well; in a measure-theoretic sense, it can describe almost no system at all.

We therefore need numerical simulation methods to better characterize the properties of these complex quantum systems. Many such methods already exist, but tensor networks have their own unique advantages (otherwise, why would we be talking about them?).
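To make the difficulty concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the original post): the full Hilbert space of N spin-1/2 sites has dimension 2^N, while a matrix product state (MPS) ansatz with a fixed bond dimension D needs only on the order of N·d·D² parameters.

```python
def full_hilbert_dim(n_sites, d=2):
    """Dimension of the full Hilbert space for n_sites sites with local dimension d."""
    return d ** n_sites

def mps_params(n_sites, d=2, bond_dim=16):
    """Rough parameter count of an MPS: one d x D x D tensor per site.
    (The edge tensors are smaller, so this is an upper bound.)"""
    return n_sites * d * bond_dim ** 2

for n in (10, 50, 100):
    print(n, full_hilbert_dim(n), mps_params(n))
```

Already at 50 sites the full wavefunction has about 10^15 amplitudes, while the MPS count grows only linearly in the system size; this gap is exactly the opening that tensor network methods exploit.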

The Significance of the Tensor Network Method

Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning algorithms typically scale exponentially with data volume and complexity of cross-modal couplings - the so called curse of dimensionality - which is prohibitive to the analysis of large-scale, multi-modal and multi-relational datasets. Given that such data are often efficiently represented as multiway arrays or tensors, it is therefore timely and valuable for the multidisciplinary machine learning and data analytic communities to review low-rank tensor decompositions and tensor networks as emerging tools for dimensionality reduction and large scale optimization problems. Our particular emphasis is on elucidating that, by virtue of the underlying low-rank approximations, tensor networks have the ability to alleviate the curse of dimensionality in a number of applied areas. In Part 1 of this monograph we provide innovative solutions to low-rank tensor network decompositions and easy to interpret graphical representations of the mathematical operations on tensor networks. Such a conceptual insight allows for seamless migration of ideas from the flat-view matrices to tensor network operations and vice versa, and provides a platform for further developments, practical applications, and non-Euclidean extensions. It also permits the introduction of various tensor network operations without an explicit notion of mathematical expressions, which may be beneficial for many research communities that do not directly rely on multilinear algebra. Our focus is on the Tucker and tensor train (TT) decompositions and their extensions, and on demonstrating the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2 of this monograph.
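The tensor train (TT) decomposition highlighted in the abstract above can be sketched in a few lines of NumPy using the standard sequential-SVD construction. This is my own illustrative sketch, not code from the quoted monograph; `tt_decompose` and `tt_reconstruct` are hypothetical helper names.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Decompose a d-way array into tensor-train cores via repeated SVD,
    truncating each internal bond to at most max_rank singular values."""
    dims = tensor.shape
    cores = []
    rank = 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(s))
        # Core k has shape (left_rank, physical_dim, right_rank).
        cores.append(u[:, :r_new].reshape(rank, dims[k], r_new))
        rank = r_new
        # Carry the remaining factor forward and expose the next physical index.
        mat = (np.diag(s[:r_new]) @ vt[:r_new]).reshape(rank * dims[k + 1], -1)
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a dense array (for small examples only)."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])
```

With `max_rank` large enough the reconstruction is exact (up to floating-point error); choosing a smaller `max_rank` yields the low-rank approximation that, as the abstract argues, trades a controlled truncation error for an exponential reduction in storage.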