Machine learning: Trends, perspectives, and prospects


M. I. Jordan1* and T. M. Mitchell2*



  Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.




  Machine learning is a discipline focused on two interrelated questions: How can one construct computer systems that automatically improve through experience? and What are the fundamental statistical-computational-information-theoretic laws that govern all learning systems, including computers, humans, and organizations? The study of machine learning is important both for addressing these fundamental scientific and engineering questions and for the highly practical computer software it has produced and fielded across many applications.
  Machine learning has progressed dramatically over the past two decades, from laboratory curiosity to a practical technology in widespread commercial use. Within artificial intelligence (AI), machine learning has emerged as the method of choice for developing practical software for computer vision, speech recognition, natural language processing, robot control, and other applications. Many developers of AI systems now recognize that, for many applications, it can be far easier to train a system by showing it examples of desired input-output behavior than to program it manually by anticipating the desired response for all possible inputs. The effect of machine learning has also been felt broadly across computer science and across a range of industries concerned with data-intensive issues, such as consumer services, the diagnosis of faults in complex systems, and the control of logistics chains. There has been a similarly broad range of effects across empirical sciences, from biology to cosmology to social science, as machine-learning methods have been developed to analyze high-throughput experimental data in novel ways. See Fig. 1 for a depiction of some recent areas of application of machine learning.



Fig. 1. Applications of machine learning.
  A learning problem can be defined as the problem of improving some measure of performance when executing some task, through some type of training experience. For example, in learning to detect credit-card fraud, the task is to assign a label of “fraud” or “not fraud” to any given credit-card transaction. The performance metric to be improved might be the accuracy of this fraud classifier, and the training experience might consist of a collection of historical credit-card transactions, each labeled in retrospect as fraudulent or not. Alternatively, one might define a different performance metric that assigns a higher penalty when “fraud” is labeled “not fraud” than when “not fraud” is incorrectly labeled “fraud.” One might also define a different type of training experience—for example, by including unlabeled credit-card transactions along with labeled examples.
  A diverse array of machine-learning algorithms has been developed to cover the wide variety of data and problem types exhibited across different machine-learning problems (1, 2). Conceptually, machine-learning algorithms can be viewed as searching through a large space of candidate programs, guided by training experience, to find a program that optimizes the performance metric. Machine-learning algorithms vary greatly, in part by the way in which they represent candidate programs (e.g., decision trees, mathematical functions, and general programming languages) and in part by the way in which they search through this space of programs (e.g., optimization algorithms with well-understood convergence guarantees and evolutionary search methods that evaluate successive generations of randomly mutated programs). Here, we focus on approaches that have been particularly successful to date.
  Many algorithms focus on function approximation problems, where the task is embodied in a function (e.g., given an input transaction, output a “fraud” or “not fraud” label), and the learning problem is to improve the accuracy of that function, with experience consisting of a sample of known input-output pairs of the function. In some cases, the function is represented explicitly as a parameterized functional form; in other cases, the function is implicit and obtained via a search process, a factorization, an optimization procedure, or a simulation-based procedure. Even when implicit, the function generally depends on parameters or other tunable degrees of freedom, and training corresponds to finding values for these parameters that optimize the performance metric.
  Whatever the learning algorithm, a key scientific and practical goal is to theoretically characterize the capabilities of specific learning algorithms and the inherent difficulty of any given learning problem: How accurately can the algorithm learn from a particular type and volume of training data? How robust is the algorithm to errors in its modeling assumptions or to errors in the training data? Given a learning problem with a given volume of training data, is it possible to design a successful algorithm or is this learning problem fundamentally intractable?
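  As a concrete illustration of the asymmetric performance metric in the fraud example above, the following sketch (in Python with NumPy) weights the two error types differently; the label encoding, penalty values, and helper name weighted_cost are arbitrary choices made here for illustration.

```python
import numpy as np

def weighted_cost(y_true, y_pred, fn_penalty=5.0, fp_penalty=1.0):
    """Asymmetric performance metric for fraud detection.

    Labels: 1 = "fraud", 0 = "not fraud". Labeling true fraud as
    "not fraud" (a false negative) is penalized more heavily than
    the reverse error, as described in the text."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    false_negatives = np.sum((y_true == 1) & (y_pred == 0))
    false_positives = np.sum((y_true == 0) & (y_pred == 1))
    return (fn_penalty * false_negatives + fp_penalty * false_positives) / len(y_true)

# Toy historical transactions, each labeled in retrospect.
y_true = [0, 0, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 0]           # one false positive, one false negative
print(weighted_cost(y_true, y_pred))  # the false negative dominates the cost
```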
  Such theoretical characterizations of machine-learning algorithms and problems typically make use of the familiar frameworks of statistical decision theory and computational complexity theory. In fact, attempts to characterize machine-learning algorithms theoretically have led to blends of statistical and computational theory in which the goal is to simultaneously characterize the sample complexity (how much data are required to learn accurately) and the computational complexity (how much computation is required) and to specify how these depend on features of the learning algorithm such as the representation it uses for what it learns (3–6). A specific form of computational analysis that has proved particularly useful in recent years has been that of optimization theory, with upper and lower bounds on rates of convergence of optimization procedures merging well with the formulation of machine-learning problems as the optimization of a performance metric (7, 8).
  As a field of study, machine learning sits at the crossroads of computer science, statistics and a variety of other disciplines concerned with automatic improvement over time, and inference and decision-making under uncertainty. Related disciplines include the psychological study of human learning, the study of evolution, adaptive control theory, the study of educational practices, neuroscience, organizational behavior, and economics. Although the past decade has seen increased crosstalk with these other fields, we are just beginning to tap the potential synergies and the diversity of formalisms and experimental methods used across these multiple fields for studying systems that improve with experience.


Drivers of machine-learning progress

  The past decade has seen rapid growth in the ability of networked and mobile computing systems to gather and transport vast amounts of data, a phenomenon often referred to as “Big Data.” The scientists and engineers who collect such data have often turned to machine learning for solutions to the problem of obtaining useful insights, predictions, and decisions from such data sets. Indeed, the sheer size of the data makes it essential to develop scalable procedures that blend computational and statistical considerations, but the issue is more than the mere size of modern data sets; it is the granular, personalized nature of much of these data. Mobile devices and embedded computing permit large amounts of data to be gathered about individual humans, and machine-learning algorithms can learn from these data to customize their services to the needs and circumstances of each individual. Moreover, these personalized services can be connected, so that an overall service emerges that takes advantage of the wealth and diversity of data from many individuals while still customizing to the needs and circumstances of each. Instances of this trend toward capturing and mining large quantities of data to improve services and productivity can be found across many fields of commerce, science, and government. Historical medical records are used to discover which patients will respond best to which treatments; historical traffic data are used to improve traffic control and reduce congestion; historical crime data are used to help allocate local police to specific locations at specific times; and large experimental data sets are captured and curated to accelerate progress in biology, astronomy, neuroscience, and other data-intensive empirical sciences. We appear to be at the beginning of a decades-long trend toward increasingly data-intensive, evidence-based decision making across many aspects of science, commerce, and government.
  With the increasing prominence of large-scale data in all areas of human endeavor has come a wave of new demands on the underlying machine-learning algorithms. For example, huge data sets require computationally tractable algorithms, highly personal data raise the need for algorithms that minimize privacy effects, and the availability of huge quantities of unlabeled data raises the challenge of designing learning algorithms to take advantage of it. The next sections survey some of the effects of these demands on recent work in machine-learning algorithms, theory, and practice.



Core methods and recent progress

  The most widely used machine-learning methods are supervised learning methods. Supervised learning systems, including spam classifiers of e-mail, face recognizers over images, and medical diagnosis systems for patients, all exemplify the function approximation problem discussed earlier, where the training data take the form of a collection of (x, y) pairs and the goal is to produce a prediction y* in response to a query x*. The inputs x may be classical vectors or they may be more complex objects such as documents, images, DNA sequences, or graphs. Similarly, many different kinds of output y have been studied. Much progress has been made by focusing on the simple binary classification problem in which y takes on one of two values (for example, “spam” or “not spam”), but there has also been abundant research on problems such as multiclass classification (where y takes on one of K labels), multilabel classification (where y is labeled simultaneously by several of the K labels), ranking problems (where y provides a partial order on some set), and general structured prediction problems (where y is a combinatorial object such as a graph, whose components may be required to satisfy some set of constraints). An example of the latter problem is part-of-speech tagging, where the goal is to simultaneously label every word in an input sentence x as being a noun, verb, or some other part of speech. Supervised learning also includes cases in which y has real-valued components or a mixture of discrete and real-valued components.
  Supervised learning systems generally form their predictions via a learned mapping f(x), which produces an output y for each input x (or a probability distribution over y given x). Many different forms of mapping f exist, including decision trees, decision forests, logistic regression, support vector machines, neural networks, kernel machines, and Bayesian classifiers. A variety of learning algorithms has been proposed to estimate these different types of mappings, and there are also generic procedures such as boosting and multiple kernel learning that combine the outputs of multiple learning algorithms. Procedures for learning f from data often make use of ideas from optimization theory or numerical analysis, with the specific form of machine-learning problems (e.g., that the objective function or function to be integrated is often the sum over a large number of terms) driving innovations. This diversity of learning architectures and algorithms reflects the diverse needs of applications, with different architectures capturing different kinds of mathematical structures, offering different levels of amenability to post-hoc visualization and explanation, and providing varying trade-offs between computational complexity, the amount of data, and performance.
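  As a minimal sketch of learning such a parameterized mapping by optimization, the following Python/NumPy example fits a logistic-regression classifier with batch gradient descent on synthetic data; the data-generating rule, learning rate, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training pairs (x, y): 200 two-dimensional inputs with binary labels.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Parameterized mapping f(x) = sigmoid(w.x + b), trained by gradient descent
# on the average logistic (cross-entropy) loss over the training sample.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)            # predicted probability that y = 1
    grad_w = X.T @ (p - y) / len(y)   # gradient of the loss w.r.t. w
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```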
  One high-impact area of progress in supervised learning in recent years involves deep networks, which are multilayer networks of threshold units, each of which computes some simple parameterized function of its inputs (9, 10). Deep learning systems make use of gradient-based optimization algorithms to adjust parameters throughout such a multilayered network based on errors at its output. Exploiting modern parallel computing architectures, such as graphics processing units originally developed for video gaming, it has been possible to build deep learning systems that contain billions of parameters and that can be trained on the very large collections of images, videos, and speech samples available on the Internet. Such large-scale deep learning systems have had a major effect in recent years in computer vision (11) and speech recognition (12), where they have yielded major improvements in performance over previous approaches (see Fig. 2). Deep network methods are being actively pursued in a variety of additional applications from natural language translation to collaborative filtering.
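  The following sketch (Python/NumPy) shows the same mechanism at toy scale: a network with one hidden layer of simple parameterized units, trained by propagating errors at the output backward to adjust every parameter by gradient descent. The data set, layer width, and learning rate are arbitrary assumptions; practical deep learning systems use many layers, far more parameters, and specialized parallel hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear problem (XOR-like quadrants) that a linear model cannot solve.
X = rng.uniform(-1, 1, size=(500, 2))
y = ((X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)

# One hidden layer of units, each computing a simple parameterized function
# (tanh) of its inputs; the output unit is a sigmoid.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(2000):
    # Forward pass through the layers.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Backward pass: propagate output errors to every parameter (backpropagation).
    d_out = (p - y) / len(X)              # gradient at the output pre-activation
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # gradient through the tanh units
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    # Gradient-based update of all parameters.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

h = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
print("training accuracy:", np.mean((p > 0.5) == y))
```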




Fig. 2. Automatic generation of text captions for images with deep networks.

  The internal layers of deep networks can be viewed as providing learned representations of the input data. While much of the practical success in deep learning has come from supervised learning methods for discovering such representations, efforts have also been made to develop deep learning algorithms that discover useful representations of the input without the need for labeled training data (13). The general problem is referred to as unsupervised learning, a second paradigm in machine-learning research (2).
  Broadly, unsupervised learning generally involves the analysis of unlabeled data under assumptions about structural properties of the data (e.g., algebraic, combinatorial, or probabilistic). For example, one can assume that data lie on a low-dimensional manifold and aim to identify that manifold explicitly from data. Dimension reduction methods—including principal components analysis, manifold learning, factor analysis, random projections, and autoencoders (1, 2)—make different specific assumptions regarding the underlying manifold (e.g., that it is a linear subspace, a smooth nonlinear manifold, or a collection of submanifolds). Another example of dimension reduction is the topic modeling framework depicted in Fig. 3. A criterion function is defined that embodies these assumptions—often making use of general statistical principles such as maximum likelihood, the method of moments, or Bayesian integration—and optimization or sampling algorithms are developed to optimize the criterion. As another example, clustering is the problem of finding a partition of the observed data (and a rule for predicting future data) in the absence of explicit labels indicating a desired partition. A wide range of clustering procedures has been developed, all based on specific assumptions regarding the nature of a “cluster.” In both clustering and dimension reduction, the concern with computational complexity is paramount, given that the goal is to exploit the particularly large data sets that are available if one dispenses with supervised labels.
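  As a small illustration of these two unsupervised settings, the sketch below (Python/NumPy) projects synthetic unlabeled data onto its top principal component via the singular value decomposition and then groups the projected points with a few iterations of k-means; the data-generating assumptions and the choice of two clusters are arbitrary for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data assumed to lie near a one-dimensional linear subspace of a
# five-dimensional space, in two loose groups, plus a little noise.
basis = rng.normal(size=(1, 5))
coords = np.concatenate([rng.normal(-3, 1, 100), rng.normal(3, 1, 100)])
X = coords[:, None] @ basis + 0.1 * rng.normal(size=(200, 5))

# Dimension reduction: principal components from the SVD of the centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:1].T               # projection onto the top principal component

# Clustering: a few iterations of k-means with k = 2 on the reduced data.
centers = np.array([[Z.min()], [Z.max()]])
for _ in range(10):
    assign = np.argmin(np.abs(Z - centers.T), axis=1)   # nearest center
    for k in range(2):
        if np.any(assign == k):
            centers[k] = Z[assign == k].mean()

print("cluster sizes:", np.bincount(assign))
```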



  Fig. 3. Topic models. Topic modeling is a method for analyzing documents, in which a document is viewed as a collection of words and the words in the document are viewed as being generated by an underlying set of topics (denoted by the colors in the figure). Topics are probability distributions over words (leftmost column), and each document is characterized by a probability distribution over topics (histogram). These distributions are inferred from the analysis of a collection of documents and can be viewed to classify, index, and summarize the content of documents.

  A third major machine-learning paradigm is reinforcement learning (14, 15). Here, the information available in the training data is intermediate between supervised and unsupervised learning. Instead of training examples that indicate the correct output for a given input, the training data in reinforcement learning are assumed to provide only an indication as to whether an action is correct or not; if an action is incorrect, there remains the problem of finding the correct action. More generally, in the setting of sequences of inputs, it is assumed that reward signals refer to the entire sequence; the assignment of credit or blame to individual actions in the sequence is not directly provided. Indeed, although simplified versions of reinforcement learning known as bandit problems are studied, where it is assumed that rewards are provided after each action, reinforcement learning problems typically involve a general control-theoretic setting in which the learning task is to learn a control strategy (a “policy”) for an agent acting in an unknown dynamical environment, where that learned strategy is trained to choose actions for any given state, with the objective of maximizing its expected reward over time. The ties to research in control theory and operations research have increased over the years, with formulations such as Markov decision processes and partially observed Markov decision processes providing points of contact (15, 16). Reinforcement-learning algorithms generally make use of ideas that are familiar from the control-theory literature, such as policy iteration, value iteration, rollouts, and variance reduction, with innovations arising to address the specific needs of machine learning (e.g., large-scale problems, few assumptions about the unknown dynamical environment, and the use of supervised learning architectures to represent policies). It is also worth noting the strong ties between reinforcement learning and many decades of work on learning in psychology and neuroscience, one notable example being the use of reinforcement learning algorithms to predict the response of dopaminergic neurons in monkeys learning to associate a stimulus light with subsequent sugar reward (17).
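  As a sketch of one of the control-theoretic ideas named above, the following Python/NumPy example runs value iteration on a tiny Markov decision process whose dynamics are assumed known; a genuine reinforcement-learning algorithm would instead have to estimate values or policies from sampled experience in an unknown environment. The chain-shaped environment, reward, and discount factor are arbitrary choices for illustration.

```python
import numpy as np

# Toy Markov decision process: four states in a chain, two actions
# (0 = move left, 1 = move right); entering the last state earns reward 1.
n_states, n_actions, gamma = 4, 2, 0.9
P = np.zeros((n_states, n_actions, n_states))   # transition probabilities
R = np.zeros((n_states, n_actions))              # expected immediate rewards
for s in range(n_states):
    P[s, 0, max(s - 1, 0)] = 1.0
    P[s, 1, min(s + 1, n_states - 1)] = 1.0
    if s == n_states - 2:
        R[s, 1] = 1.0                            # stepping into the goal state

# Value iteration: repeatedly back up the expected long-run reward of each state.
V = np.zeros(n_states)
for _ in range(100):
    Q = R + gamma * P @ V        # Q[s, a] = R[s, a] + gamma * E[V(next state)]
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)        # greedy policy with respect to the values
print("state values:", np.round(V, 3), "policy:", policy)
```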
  Although these three learning paradigms help to organize ideas, much current research involves blends across these categories. For example, semisupervised learning makes use of unlabeled data to augment labeled data in a supervised learning context, and discriminative training blends architectures developed for unsupervised learning with optimization formulations that make use of labels. Model selection is the broad activity of using training data not only to fit a model but also to select from a family of models, and the fact that training data do not directly indicate which model to use leads to the use of algorithms developed for bandit problems and to Bayesian optimization procedures. Active learning arises when the learner is allowed to choose data points and query the trainer to request targeted information, such as the label of an otherwise unlabeled example. Causal modeling is the effort to go beyond simply discovering predictive relations among variables, to distinguish which variables causally influence others (e.g., a high white-blood-cell count can predict the existence of an infection, but it is the infection that causes the high white-cell count). Many issues influence the design of learning algorithms across all of these paradigms, including whether data are available in batches or arrive sequentially over time, how data have been sampled, requirements that learned models be interpretable by users, and robustness issues that arise when data do not fit prior modeling assumptions.


Emerging trends

  The field of machine learning is sufficiently young that it is still rapidly expanding, often by inventing new formalizations of machine-learning problems driven by practical applications. (An example is the development of recommendation systems, as described in Fig. 4.) One major trend driving this expansion is a growing concern with the environment in which a machine-learning algorithm operates. The word “environment” here refers in part to the computing architecture; whereas a classical machine-learning system involved a single program running on a single machine, it is now common for machine-learning systems to be deployed in architectures that include many thousands or tens of thousands of processors, such that communication constraints and issues of parallelism and distributed processing take center stage. Indeed, as depicted in Fig. 5, machine-learning systems are increasingly taking the form of complex collections of software that run on large-scale parallel and distributed computing platforms and provide a range of algorithms and services to data analysts.






  The word “environment” also refers to the source of the data, which ranges from a set of people who may have privacy or ownership concerns, to the analyst or decision-maker who may have certain requirements on a machine-learning system (for example, that its output be visualizable), and to the social, legal, or political framework surrounding the deployment of a system. The environment also may include other machine learning systems or other agents, and the overall collection of systems may be cooperative or adversarial. Broadly speaking, environments provide various resources to a learning algorithm and place constraints on those resources. Increasingly, machine-learning researchers are formalizing these relationships, aiming to design algorithms that are provably effective in various environments and explicitly allow users to express and control trade-offs among resources.
  As an example of resource constraints, let us suppose that the data are provided by a set of individuals who wish to retain a degree of privacy. Privacy can be formalized via the notion of “differential privacy,” which defines a probabilistic channel between the data and the outside world such that an observer of the output of the channel cannot infer reliably whether particular individuals have supplied data or not (18). Classical applications of differential privacy have involved ensuring that queries (e.g., “what is the maximum balance across a set of accounts?”) to a privatized database return an answer that is close to that returned on the nonprivate data. Recent research has brought differential privacy into contact with machine learning, where queries involve predictions or other inferential assertions (e.g., “given the data I’ve seen so far, what is the probability that a new transaction is fraudulent?”) (19, 20). Placing the overall design of a privacy-enhancing machine-learning system within a decision-theoretic framework provides users with a tuning knob whereby they can choose a desired level of privacy that takes into account the kinds of questions that will be asked of the data and their own personal utility for the answers. For example, a person may be willing to reveal most of their genome in the context of research on a disease that runs in their family but may ask for more stringent protection if information about their genome is being used to set insurance rates.
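  As a minimal sketch of such a probabilistic channel, the following Python/NumPy function answers a mean query by adding Laplace noise calibrated to the query’s sensitivity, a standard mechanism for differential privacy; the bounds on individual contributions and the choice of epsilon play the role of the tuning knob described above. The function name private_mean and the toy data are assumptions made for illustration.

```python
import numpy as np

def private_mean(values, epsilon, lower=0.0, upper=1.0, rng=None):
    """Differentially private mean via the Laplace mechanism.

    Each individual contributes one value clipped to [lower, upper]; changing
    one person's value moves the mean by at most (upper - lower) / n (its
    sensitivity), so Laplace noise with scale sensitivity / epsilon makes it
    hard to infer reliably whether any particular individual supplied data."""
    rng = rng or np.random.default_rng()
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(values)
    return values.mean() + rng.laplace(scale=sensitivity / epsilon)

data = np.random.default_rng(0).uniform(size=1000)   # hypothetical per-person values
print(private_mean(data, epsilon=0.5))   # smaller epsilon: more privacy, noisier answer
```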
  Communication is another resource that needs to be managed within the overall context of a distributed learning system. For example, data may be distributed across distinct physical locations because their size does not allow them to be aggregated at a single site or because of administrative boundaries. In such a setting, we may wish to impose a bit-rate communication constraint on the machine-learning algorithm. Solving the design problem under such a constraint will generally show how the performance of the learning system degrades under decrease in communication bandwidth, but it can also reveal how the performance improves as the number of distributed sites (e.g., machines or processors) increases, trading off these quantities against the amount of data (21, 22). Much as in classical information theory, this line of research aims at fundamental lower bounds on achievable performance and specific algorithms that achieve those lower bounds.
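  One simple point in this trade-off space is a single round of communication: each site fits an estimate from its own data and transmits only the fitted parameters, which a central node averages. The following Python/NumPy sketch illustrates this divide-and-average idea for linear regression; the number of sites, per-site sample size, and noise level are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data for a linear model y = x . w_true + noise, partitioned across m sites.
w_true = np.array([2.0, -1.0, 0.5])
m_sites, n_per_site = 10, 200
local_estimates = []
for _ in range(m_sites):
    X = rng.normal(size=(n_per_site, 3))
    y = X @ w_true + 0.5 * rng.normal(size=n_per_site)
    # Each site solves its own least-squares problem locally ...
    w_local, *_ = np.linalg.lstsq(X, y, rcond=None)
    local_estimates.append(w_local)

# ... and communicates only the fitted parameter vector (a few numbers),
# never the raw data; the central node averages the local estimates.
w_avg = np.mean(local_estimates, axis=0)
print("averaged estimate:", np.round(w_avg, 3), "true:", w_true)
```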
  A major goal of this general line of research is to bring the kinds of statistical resources studied in machine learning (e.g., number of data points, dimension of a parameter, and complexity of a hypothesis class) into contact with the classical computational resources of time and space. Such a bridge is present in the “probably approximately correct” (PAC) learning framework, which studies the effect of adding a polynomial-time computation constraint on this relationship among error rates, training data size, and other parameters of the learning algorithm (3). Recent advances in this line of research include various lower bounds that establish fundamental gaps in performance achievable in certain machine-learning problems (e.g., sparse regression and sparse principal components analysis) via polynomial-time and exponential-time algorithms (23). The core of the problem, however, involves time-data tradeoffs that are far from the polynomial/exponential boundary. The large data sets that are increasingly the norm require algorithms whose time and space requirements are linear or sublinear in the problem size (number of data points or number of dimensions). Recent research focuses on methods such as subsampling, random projections, and algorithm weakening to achieve scalability while retaining statistical control (24, 25). The ultimate goal is to be able to supply time and space budgets to machine-learning systems in addition to accuracy requirements, with the system finding an operating point that allows such requirements to be realized.
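  For orientation, in the simplest version of the PAC framework, with a finite hypothesis class H and a learner that returns a hypothesis consistent with its training sample, a standard sample-complexity bound states that with probability at least 1 − δ the learned hypothesis has true error at most ε whenever the number of training examples m satisfies

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```

The computational refinements discussed above ask which such statistical guarantees remain achievable when the learner is further restricted, for example, to polynomial, linear, or sublinear time and space.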


Opportunities and challenges

  Despite its practical and commercial successes, machine learning remains a young field with many underexplored research opportunities. Some of these opportunities can be seen by contrasting current machine-learning approaches to the types of learning we observe in naturally occurring systems such as humans and other animals, organizations, economies, and biological evolution. For example, whereas most machine-learning algorithms are targeted to learn one specific function or data model from one single data source, humans clearly learn many different skills and types of knowledge, from years of diverse training experience, supervised and unsupervised, in a simple-to-more-difficult sequence (e.g., learning to crawl, then walk, then run). This has led some researchers to begin exploring the question of how to construct computer lifelong or never-ending learners that operate nonstop for years, learning thousands of interrelated skills or functions within an overall architecture that allows the system to improve its ability to learn one skill based on having learned another (26–28). Another aspect of the analogy to natural learning systems suggests the idea of team-based, mixed-initiative learning. For example, whereas current machine-learning systems typically operate in isolation to analyze the given data, people often work in teams to collect and analyze data (e.g., biologists have worked as teams to collect and analyze genomic data, bringing together diverse experiments and perspectives to make progress on this difficult problem). New machine-learning methods capable of working collaboratively with humans to jointly analyze complex data sets might bring together the abilities of machines to tease out subtle statistical regularities from massive data sets with the abilities of humans to draw on diverse background knowledge to generate plausible explanations and suggest new hypotheses. Many theoretical results in machine learning apply to all learning systems, whether they are computer algorithms, animals, organizations, or natural evolution. As the field progresses, we may see machine-learning theory and algorithms increasingly providing models for understanding learning in neural systems, organizations, and biological evolution and see machine learning benefit from ongoing studies of these other types of learning systems.
  As with any powerful technology, machine learning raises questions about which of its potential uses society should encourage and discourage. The push in recent years to collect new kinds of personal data, motivated by its economic value, leads to obvious privacy issues, as mentioned above. The increasing value of data also raises a second ethical issue: Who will have access to, and ownership of, online data, and who will reap its benefits? Currently, much data are collected by corporations for specific uses leading to improved profits, with little or no motive for data sharing. However, the potential benefits that society could realize, even from existing online data, would be considerable if those data were to be made available for public good.
  To illustrate, consider one simple example of how society could benefit from data that is already online today by using this data to decrease the risk of global pandemic spread from infectious diseases. By combining location data from online sources (e.g., location data from cell phones, from credit-card transactions at retail outlets, and from security cameras in public places and private buildings) with online medical data (e.g., emergency room admissions), it would be feasible today to implement a simple system to telephone individuals immediately if a person they were in close contact with yesterday was just admitted to the emergency room with an infectious disease, alerting them to the symptoms they should watch for and precautions they should take. Here, there is clearly a tension and trade-off between personal privacy and public health, and society at large needs to make the decision on how to make this trade-off. The larger point of this example, however, is that, although the data are already online, we do not currently have the laws, customs, culture, or mechanisms to enable society to benefit from them, if it wishes to do so. In fact, much of these data are privately held and owned, even though they are data about each of us. Considerations such as these suggest that machine learning is likely to be one of the most transformative technologies of the 21st century. Although it is impossible to predict the future, it appears essential that society begin now to consider how to maximize its benefits.



