-
With the breakthroughs in deep learning, recent years have witnessed a boom in artificial intelligence (AI) applications and services, spanning from personal assistants to recommendation systems to video/audio surveillance.
-
With the rapid expansion and penetration of Internet of Things (IoT) mobile devices (MDs), an increasing number of deep learning (DL)-driven intelligent IoT applications, such as e-health, smart home, video surveillance, and intelligent personal assistants, are emerging.
-
Driven by the breakthroughs in deep learning, recent years have witnessed a boom in artificial intelligence (AI) applications and services, ranging from face recognition [1] and video analytics [2] to natural language processing [3]. In the meantime, with the proliferation of the mobile Internet and the Internet of Things (IoT), a large number of mobile and IoT devices are deployed at the network edge and generate a huge amount of data.
-
Despite the soaring use of convolutional neural networks (CNNs) in mobile applications, uniformly sustaining high-performance inference on mobile has been elusive due to the excessive computational demands of modern CNNs and the increasing diversity of deployed devices.
-
As a matter of fact, traditional computation offloading schemes of edge computing can hardly deal with DNN-related tasks in real time due to their large data volumes and high computation overhead.
-
DNN right-sizing further reduces computing latency by exiting inference early at an appropriate intermediate DNN layer.
-
Early exiting takes advantage of the fact that easy samples can be accurately classified using the low-level features found in the earlier layers of a CNN. In this manner, each sample would ideally exit the network at the appropriate depth, saving time and computational resources.
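As a rough illustration of the idea (not taken from any of the quoted papers), a PyTorch-style sketch of early-exit inference might look as follows; the architecture, exit threshold, and names are all hypothetical:

```python
# Minimal early-exit sketch: an intermediate classifier lets "easy" inputs
# leave the network before the deeper layers are executed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitCNN(nn.Module):            # hypothetical model, for illustration only
    def __init__(self, num_classes=10, exit_threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                    nn.ReLU(), nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)   # early (intermediate) exit
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1),
                                    nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.exit2 = nn.Linear(32 * 4 * 4, num_classes)   # final exit
        self.exit_threshold = exit_threshold

    def forward(self, x):
        h1 = self.block1(x)
        logits1 = self.exit1(h1.flatten(1))
        confidence = F.softmax(logits1, dim=1).max(dim=1).values
        # Easy samples: the early classifier is already confident, so stop here
        # and skip the remaining layers (saving latency and computation).
        if confidence.item() >= self.exit_threshold:
            return logits1, "exit1"
        h2 = self.block2(h1)
        return self.exit2(h2.flatten(1)), "exit2"

model = EarlyExitCNN().eval()
with torch.no_grad():
    logits, exit_point = model(torch.randn(1, 3, 32, 32))   # single-image inference
```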
-
DNN architectures are promising solutions in achieving remarkable results in a wide range of machine learning applications, including, but not limited to, computer vision, speech recognition, language modeling, and autonomous cars. (promising solutions; "including, but not limited to")
-
The considerable improvements in DNNs are usually achieved by increasing computational complexity, which requires more resources for both training and inference.
-
Deep neural networks (DNN) are the de facto solution behind many intelligent applications of today, ranging from machine translation to autonomous driving. (the de facto solution behind ...)
-
Remarkable achievements have been attained by deep neural networks in various applications. (XXX has attained remarkable achievements)
-
For the sake of simplicity, Eq. (7) does not take the queuing delays into account since we optimize inference for a single image input at a time. Streaming input, in which case queuing delays matter significantly, is left for future work. (for the sake of simplicity)
-
Distributing DNN inference in a fog network aims at minimizing the total execution time of a given DNN by partitioning and offloading it to one or more fog nodes. (states the paper's overall optimization objective up front in the problem formulation)
-
Finally, Eq. (4e) states that parallel execution of tasks at fog nodes should take less time than processing the same tasks locally as a sequence. (parallel execution vs. sequential execution)
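Eq. (4e) itself is not reproduced in this note, so the following is only a hypothetical sketch of what such a constraint could look like: with N the set of fog nodes that receive a partition, t_n the completion time of the partition executed in parallel on node n, and t_n^loc the time to process that same partition locally, one plausible form is

```latex
\max_{n \in N} t_n \;\le\; \sum_{n \in N} t_n^{\mathrm{loc}}
```

i.e., the makespan of the parallel execution (dominated by the slowest fog node) must not exceed the time of processing the same partitions locally one after another.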
-
The example shows the advantage of the latter choice, as storing P2 only requires 52 bytes as opposed to the 120 bytes of a dense representation. (the advantage of the latter; "as opposed to")
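The 52-byte/120-byte figures refer to the paper's own P2 example, which is not reproduced here; as a hedged illustration of the comparison being made, a small Python/scipy sketch with a made-up matrix could look like this:

```python
# Compare the storage of a dense array with its sparse (CSR) representation.
# The matrix below is invented for illustration; it is not the paper's P2.
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[0, 0, 3, 0],
                  [0, 5, 0, 0],
                  [0, 0, 0, 7]], dtype=np.int64)
sparse = csr_matrix(dense)

dense_bytes = dense.nbytes                                   # every entry stored
sparse_bytes = (sparse.data.nbytes + sparse.indices.nbytes   # only non-zeros
                + sparse.indptr.nbytes)                      # plus index arrays
print(f"dense: {dense_bytes} bytes, CSR: {sparse_bytes} bytes")
```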
-
The technological evolution of handheld devices has been followed by a rapid increase of user demand for a variety of mobile applications such as mobile augmented reality, face and object recognition, and real-time voice and video [1], [2]. Yet, the computational capabilities of today's devices are not sufficient to meet the delay and computational requirements of these applications. (growing user demand for applications drives the evolution of mobile devices, followed by a contrast: yet the performance of current devices still cannot meet the applications' requirements)
-
Owing to the proximity of the edge cloud to the end users, MEC can provide significantly lower response times for individual devices than conventional centralized clouds such as Microsoft Azure or Amazon. ("Owing to ..." placed at the beginning of the sentence to highlight the cause, followed by the advantage that results from it)
-
However, edge clouds are not as computationally powerful as centralized clouds, which together with the limited wireless resources may adversely affect the response times when many devices attempt to offload computations simultaneously [5], [6]. Therefore, in order to fully exploit the potential of MEC, wireless and computing resources have to be jointly managed. (edge servers are not as powerful as the cloud, so when many devices offload computation simultaneously, the limited computing and communication resources hurt response times; therefore, to fully exploit the potential of edge computing, wireless and computing resources need to be jointly scheduled. This sentence nicely illustrates the importance of resource allocation in edge computing)
-
Nonetheless, the decisions of the WDs interact with the computing resources and allocation policies of the operator. (the offloading decisions of tasks and the resource allocation decisions influence each other)
-
Traditional techniques to optimize flow-level metrics do not perform well in optimizing such collections, because the network is largely agnostic to application-level requirements. The recently proposed coflow abstraction bridges this gap and creates new opportunities for network scheduling. (infinitive "to do" used as a post-modifier; "bridges this gap")
-
DNN models (e.g., [3, 4]) usually have complicated structures with numerous parameters, hence a high demand for computation and storage. (describes that DNN inference requires large amounts of computing resources)
-
To this end, researchers have spent a great deal of effort to improve the performance of mobile deep vision applications. (the opening sentence of a related-work section can be written this way)
-
However, these techniques often lead to compromised model accuracy due to the fundamental trade-off between model size and model accuracy. (captures the fundamental effect of techniques such as model pruning on model accuracy)
-
Nevertheless, due to the fundamental limits of size and power, mobile devices still fall short of meeting the requirements of target applications. ("fall short" can be used in place of "fail")
-
The complexity of a brute-force search grows exponentially with the meeting size and the number of bitrate options. (brute-force search leads to exponential growth in complexity)
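To see why, consider a back-of-the-envelope sketch (the bitrate options and meeting sizes below are made up): each participant independently picks one of k bitrates, so a brute-force search must enumerate k^n allocations for an n-person meeting.

```python
# Illustration of the exponential blow-up of brute-force search over
# per-participant bitrate choices (all numbers here are hypothetical).
from itertools import product

bitrate_options = [300, 600, 1200, 2400]   # kbps

for meeting_size in (2, 4, 8, 12):
    # Each participant independently picks one option,
    # so the search space has |options| ** meeting_size configurations.
    num_allocations = len(bitrate_options) ** meeting_size
    print(f"{meeting_size:2d} participants -> {num_allocations:,} candidate allocations")

# Enumerating a small instance explicitly is still feasible ...
small_space = list(product(bitrate_options, repeat=2))
print(len(small_space))   # 16
# ... but at 12 participants the same enumeration already has 16,777,216 entries.
```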
A collection of English academic writing expressions (continuously updated)