In-Memory Computing (存内计算 / 存算一体 / 内存内计算)

In-memory computing performs computation where the data already resides, instead of shuttling it back and forth between storage and a separate processor, and can therefore greatly improve computation speed. The technique is being driven by deep learning and by emerging memory devices, and it is appearing in both off-chip and on-chip memory products. Companies such as IBM, GridGain, Syntiant, and Mythic are developing products in this space, working on problems such as limited compute precision and unsuitability for training workloads, and trying to strike a balance between power consumption and performance. The research community is also exploring in-memory computing on photonic chips and other novel computing architectures.

What is In-Memory Computing (存内计算 / 存算一体 / 内存内计算)?

  • In-memory computing replaces the hard disk with RAM as the place where the working data lives, shortening the distance between the data and the CPU and performing all computation directly in RAM. For workloads that would otherwise be disk-bound, this is often claimed to deliver speedups of 5,000x or even 10,000x.

  • In the traditional approach, data is fetched from the hard disk, loaded into RAM, sent to the CPU for computation, and then written back to the hard disk, which wastes a great deal of time. In-memory computing attacks the problem at the RAM level: the working set is loaded once and processed entirely in memory (a toy sketch follows this list).

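To make the contrast above concrete, here is a minimal, hypothetical sketch (not tied to any particular product): the first function re-reads a file from disk for every query, while the second loads the data into RAM once and answers every query from memory. The file name `events.jsonl` and its record format are invented purely for illustration.

```python
# Toy illustration of the disk-bound vs. in-memory pattern described above.
# The file name and record format are hypothetical; the point is only that the
# in-memory variant pays the disk I/O once and does all further work in RAM.
import json

DATA_FILE = "events.jsonl"  # hypothetical newline-delimited JSON file

def total_disk_bound(user_ids):
    """Re-reads the file from disk for every query (disk -> RAM -> CPU each time)."""
    totals = {}
    for uid in user_ids:
        s = 0
        with open(DATA_FILE) as f:          # disk I/O repeated per query
            for line in f:
                rec = json.loads(line)
                if rec["user"] == uid:
                    s += rec["amount"]
        totals[uid] = s
    return totals

def total_in_memory(user_ids):
    """Loads the data into RAM once, then answers every query from memory."""
    by_user = {}
    with open(DATA_FILE) as f:              # single pass over the disk
        for line in f:
            rec = json.loads(line)
            by_user[rec["user"]] = by_user.get(rec["user"], 0) + rec["amount"]
    return {uid: by_user.get(uid, 0) for uid in user_ids}

# Usage (requires events.jsonl to exist): total_in_memory(["alice", "bob"])
```

The two functions return the same result; only the I/O pattern differs, and the advantage of the in-memory variant grows with the number of queries because the disk traffic is paid once instead of once per query.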

What is driving In-Memory Computing?

  • The rise of deep neural networks
    • Deep neural networks are computationally intensive, which exposes the bottleneck of the existing von Neumann architecture: data must constantly shuttle between memory and the processor
    • There is strong demand to bring AI to mobile and embedded devices, where this data movement is especially costly in energy
  • Emerging memory devices
    • Example: ReRAM stores data by modulating the resistance of each cell, so each bit is read out as a current signal rather than the charge signal used in conventional memory. Because currents flowing into a shared bit line sum naturally (Kirchhoff's current law), accumulation comes almost for free inside the array, which maps directly onto the multiply-accumulate operations that dominate neural network inference (see the sketch after this list).
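The following is a minimal numerical sketch of that idea, written as an idealized NumPy model rather than a real device simulation (no conductance variation, read noise, or ADC quantization): weights are stored as cell conductances, inputs are applied as word-line voltages, and each bit line collects a current equal to one column of a matrix-vector product.

```python
# Idealized sketch of multiply-accumulate in a ReRAM crossbar.
# Each bit line j collects I_j = sum_i V_i * G_ij (Ohm's law + Kirchhoff's
# current law), i.e. the analog array computes a matrix-vector product.
import numpy as np

rng = np.random.default_rng(0)

# 4 word lines (inputs) x 3 bit lines (outputs); conductances in siemens
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # programmed cell conductances (the "weights")
V = rng.uniform(0.0, 0.2, size=4)          # input voltages on the word lines

bitline_currents = V @ G                   # the accumulation the array does "for free"
reference = np.array([sum(V[i] * G[i, j] for i in range(4)) for j in range(3)])

assert np.allclose(bitline_currents, reference)
print("bit-line currents (A):", bitline_currents)
```

In a real chip each bit-line current would then be digitized by an ADC, and the limited precision of programmed conductances and of that conversion is exactly the accuracy issue mentioned in the summary above.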