This is the atmosphere in which new computational thinking departments and academic computational thinking were formed.
The founders worried about curriculum and industry demand in the context of a set of consensus-seeking departments fiercely guarding their prerogatives, always concerned with public image and identities.
The new departments proposed by the founders were split off from their existing departments.
Their home departments often did not support the split because they would lose students, budget, and identity.
The founders encountered a lot of resistance from other departments that did not deem a new department focused on computer technology to be legitimately science or engineering, or did not see that it would provide a unique intellectual perspective.
Forging a consensus favoring the formation of a new department was a challenge.
These snapshots show how the concerted efforts of computing pioneers to articulate a unique identity for computer science led them to recognize computational thinking as a distinguishing aspect from the beginning.
In hindsight, we can discern four eras in how universities viewed computing and in how views of computational thinking changed:
Phenomena surrounding computers (1950s-1970s)
Programming as art and science (1970s)
Computing as automation (1980s)
Computing as pervasive information processes (1990s to present)
But computer science was not always the receiver of resistance.
There were two important instances when computer science was the giver.
One was the computational science movement in the 1980s, which was eschewed by many computer scientists.
A common reaction to an announcement by the physics or biology department that they were setting up a computational science branch would be a howl of protest that those departments were impinging on the territory of computing.
Some computer scientists felt that physics and biology, having now recognized the importance of computers, were trying to hijack a field they had once strongly resisted.
Eventually computer scientists got over this and now work collaboratively with computational sciences.
We will discuss the computational sciences in chapter 7.
A similar process played out in software engineering.
The computing departments that viewed themselves as science were not receptive to the practices of teaching and doing projects common in engineering.
In the early 1950s, the ACM and the IEEE founded journals for the young field.
The Moore School, home of the ENIAC project, was an early starter of computing education in 1946 with a two-month intensive course on "theory and techniques for design of electronic digital computer."
In the 1950s the Moore School offered a multi-discipline degree in computing that included numerical analysis, programming, and programming language design.
Other schools started their own programs as well.
These early efforts to establish computing as an academic discipline were slow to gain traction.
The impediment was more than a cautionary hesitancy to see if computers were here to stay; it was a deep doubt about whether computing had academic substance beyond mathematics, electrical engineering, and physics.
Outsiders typically saw the computing field of the 1950s as an impenetrable and anarchistic thicket of idiosyncratic technology tricks.
What is more, the different perspectives on thinking about computing were disunited: those who designed computing machines were mostly unaware of important developments in the theory of computing such as Turing on computable numbers, Church on lambda calculus, Post on string manipulation, Kleene on regular expressions, Rabin and Scott on nondeterministic machines, and Chomsky on the relation between grammars and classes of automata.
Academics who proposed full-fledged computer science departments or programs in research universities met stiff resistance.
Many critics did not believe in the value of computing's new ways: common objections included lack of unique intellectual content and lack of adequate theoretical basis.
Purists argued that computers were human-made artifacts and not natural occurrences, and thus their study could not be counted among the noble natural sciences.
On top of all that, many doubted whether computing would last.
Until there was a consensus among many departments, no one could found a computer science department.
This tide began to change in 1962, when Purdue established the first computer science department and Stanford followed soon thereafter.
Newell, Perlis, and Simon defended the new field in a letter to Science. They wrote: "Wherever there are phenomena, there can be a science to describe and explain those phenomena. Thus, ... botany is the study of plants, ... zoology is the study of animals, astronomy the study of stars, and so on. Phenomena breed sciences. ... There are computers. The phenomena surrounding computers are varied, complex, rich."
From this basis they quickly dismissed six objections, including the one that computers are human-made and are therefore not legitimate objects of a science.
Herb Simon, a Nobel laureate in economics, so objected to the notion that there could be no science surrounding human-made objects that he wrote an entire book to refute this idea.
He gave an example from time-sharing systems (computers that allow many simultaneous users): the early development of time-sharing systems could not have been guided by theory as there was none, and most predictions about how time-sharing systems would behave were astonishingly inaccurate.
It was not possible to develop a theory of time-sharing systems without actually building those systems; after they were built, empirical research on their behavior led to a rich theoretical base about them.
In other words, computational thinking could not approach problems from one direction only-the engineering aspects and scientific-mathematical aspects of computing evolved in a synergistic way to yield a science that was not purely a natural science.
The notion of computing as the study of phenomena surrounding computers quickly gained traction, and by the end of the 1960s was taken as the definition of computing.
A view of the field's uniqueness started to form around that notion.
The term "algorithmic thinking" was used to describe the most obvious aspect of the new kind of thinking.
The field's unique aims, typical problems, methods of solving those problems, and kinds of solutions were the basis of computational thinking.
The computing pioneers expanded computational thinking beyond what they inherited from the long history of computation.
They focused on the construction principles of programs, computing machines, and operating systems.
The hardware flavor was followed by computer engineers in the engineering school; the software flavor by software designers and computing theorists in the science school.
Programming as Art and Science
The 1960s were a period of maturation for computing, during which computer scientists developed a rather rich account of their ways of thinking.
The subfield of operating systems was born in the early 1960s to bring cheap, interactive computing to large user communities---computational thinking acquired a systems attitude.
The subfield of software engineering was born in the late 1960s from a concern that existing models of programming were incapable of developing reliable and dependable production software---computational thinking acquired an engineering attitude.
The subfield of networking was born in 1967 when the ARPANET project was started --- computational thinking acquired a networking attitude.
With a solid, reliable technology base in place, the field's attention shifted to programs and programming.
With the emergence of standard programming methods, many programming languages came into being.
A huge interest in formal verification of programs welled up, seeking a theory-based way to demonstrate that programs were reliable and correct.
A similar interest in computational complexity also welled up, seeking analytical ways to assess just how much computational work the different algorithms required.
Computer programs are expressions of algorithms in a formal language that, when compiled to machine-executable form, control the actions of a machine.
The first widely adopted programming languages introduced a plethora of new computational thinking concepts that had few or no counterparts in other intellectual traditions.
Donald Knuth, in his major works The Art of Computer Programming and Literate Programming, and Edsger Dijkstra, in his work on structured programming, epitomized the idea that computing is about algorithms in this sense.
By 1980, most computer scientists would say that computational thinking was a set of skills and knowledge concerned with algorithms and software development.
But things got tricky when the proponents of algorithmic thinking had to describe what algorithmic thinking was and how it differed from other kinds of thinking.
Knuth compared the patterns of reasoning in mathematics textbooks with those in computing textbooks, identifying the patterns typical of each.
He concluded that algorithmic thinking differed from mathematical thinking in several aspects: by the ways in which it reduces complex problems to interconnected simple ones, emphasizes information structures, pays attention to how actions alter the states of data, and formulates symbolic representations of reality.
In his own studies, Dijkstra differentiated computer scientists from mathematicians by their capacity for expressing algorithms in natural as well as formal language, for devising notations that simplified the computations, for mastering complexity, for shifting between abstraction levels, and for inventing concepts, objects, notations, and theories when necessary.
Today's descriptions of the mental tools of computational thinking are typically much less mathematical in their orientation than were many early descriptions of algorithmic thinking.
Over time, many have argued that programming and algorithmic thinking are as important as reading, writing, and arithmetic---the traditional three Rs of education---but the proposal to add them (as a new combined "R") to that list has yet to be accepted.
Computing's leaders have a long history of disagreement on this point.
Some computing pioneers considered computing's ways of thinking to be a generic tool for everyone, on a par with mathematics and language.
Others considered algorithmic thinking to be a rather rare, innate ability---present in about one person in fifty.
The former view has more support among educators because it embraces the idea that everyone can learn computational thinking: computational thinking is a skill to be learned and not an ability that one is born with.
The programming and algorithms view of computing spawned new additions to the computational thinking toolbox.
The engineering-technology side provided compilers (for converting human-readable programs to executable machine codes), parsing methods (for breaking programming language statements into components), code optimization, operating systems, and empirical testing and debugging methods (for finding errors in programs).
The math-science side provided a host of methods for algorithm analysis, such as O-notation for estimating the efficiency of algorithms, different models of computation, and proofs of program correctness.
By the late 1970s, it was clear that computing had moved onto an intellectual trajectory with concepts, concerns, and skills very different from those of other academic disciplines.
Computing as Automation
Despite all its richness, the view of computing as the study and design of algorithms was seen as too narrow.
By the late 1970s, there were many other questions under investigation.
How do you design a new programming language?
How do you increase programmer productivity?
How do you design a secure operating system?
How do you design fault-tolerant software systems and machines?
How do you transmit data reliably over a packet network?
How do you protect systems against data theft by intruders or malware?
How do you find the bottlenecks of a computer system or network?
How do you find the response time of a system?
How do you get a system to do work previously done by human operators?
The study of algorithms focused on individual algorithms but rarely on their interactions with humans or the effects of their computations on other users of systems and networks.
It could hardly provide complete answers to these questions.
The idea emerged that the common factor in all these questions, and the soul of computational thinking, was that computing enabled automation in many fields.
Automation generally meant one of two things: the control of processes by mechanical means with minimal human intervention, or the carrying out of a process by a machine.
Many wanted to return to the 1960s notion that automation was the ultimate purpose of computers and among the most intriguing questions of the modern age.
Automation seemed to be the common factor in all of computer science, and computational thinking seemed aimed at making automation efficient.
In 1978 the US National Science Foundation launched a comprehensive project to map what is essential in computing.
It was called the "Computer Science and Engineering Research Study" (COSERS).
In 1980 they released What Can Be Automated?, a thousand-page tome that examined numerous aspects of computing and its applications from the standpoint of efficient automation.
That study answered many of the questions above, and for many years, the COSERS report offered the most complete picture of computing and the era's computational thinking.
It is still a very relevant resource for anyone who wants an overview, written by famous computing pioneers, of many central themes, problems, and questions in computing.
Well into the 1990s, the computing-as-automation idea was adopted in books, research reports, and influential policy documents as the "fundamental question underlying computing."
This idea resonated well with the history of computational thinking: as we discussed in the previous chapters, automatic computing realized the dream of performing calculations automatically and correctly without relying on human intuition and judgment.
Theoreticians such as Alan Turing were fascinated by the idea of mechanizing computing.
Practitioners saw their programs as automations of tasks.
By 1990, "What can be automated?" had become a popular slogan in explanations of computing to outsiders and a carrying theme of computational thinking.
Ironically, the question of "what can be automated" led to the undoing of the automation interpretation because the boundary between what can and cannot be automated is ambiguous.
Things that previously could not be automated might become automatable thanks to new algorithms or faster hardware.
By the 1970s, computer scientists had developed a rich theory of computational complexity, which classified problems according to how many computational steps algorithms solving them needed.
For example, searching an unordered list of N items for a specific item takes time proportional to N steps. Sorting the list into ascending order takes time proportional to N^2 steps. Printing a list of all subsets of N items takes time proportional to 2^N steps.
The search problem is of "linear difficulty," the sorting problem is of "quadratic difficulty," and the printing problem is of "exponential difficulty."
Search is fast, enumeration is slow; computational complexity theorists call the former "easy" and the latter "hard."
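The easy/hard contrast can be made concrete by counting operations. A minimal sketch in Python (the function names and the choice of 20 items are our own illustration, not from the text):

```python
from itertools import combinations

def search_steps(items, target):
    """Linear search: count comparisons until the target is found."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            return steps
    return steps  # target absent: N comparisons

def all_subsets(items):
    """Enumerate every subset of the items; there are 2^N of them."""
    subsets = []
    for k in range(len(items) + 1):
        subsets.extend(combinations(items, k))
    return subsets

items = list(range(20))
print(search_steps(items, 19))   # 20 comparisons: proportional to N
print(len(all_subsets(items)))   # 1048576 subsets: 2^20
```

Doubling N at most doubles the search work but squares the number of subsets to enumerate, which is why complexity theorists call the one easy and the other hard.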
To see how vast the difference is between easy and hard problems, imagine that we have a computer that can do 1 billion (10^9) instructions per second.
To search a list of 100 items would take 100 instructions, or 0.1 microseconds.
To enumerate and print all the subsets of 100 items would take 2^100 instructions, a process that would take around 10^14 years.
That is 10,000 times longer than the age of the universe, which is very roughly around 10^10 years old.
Even though we can write an algorithm to do that, there is no computer that could complete the job in a reasonable amount of time.
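The back-of-the-envelope arithmetic above can be checked directly; a quick sketch in Python (the constant names are our own):

```python
SPEED = 10**9                      # instructions per second
SECONDS_PER_YEAR = 365 * 24 * 3600

# Searching a list of 100 items: at most 100 instructions.
search_seconds = 100 / SPEED       # 1e-07 s, i.e., 0.1 microseconds

# Enumerating all 2^100 subsets of 100 items:
enum_years = 2**100 / SPEED / SECONDS_PER_YEAR

print(search_seconds)              # 1e-07
print(f"{enum_years:.1e} years")   # about 4.0e+13, on the order of 10^14
```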
Translating this to automation, an algorithm to automate something might take an impossibly long time.
Not everything for which we have an algorithm is automatable in practice.
Over time, new generations of more powerful machines enable the automation of previously intractable tasks.
Heuristic algorithms make the question of computational hardness even trickier. Consider the knapsack problem, which asks us to pack a subset of items into a weight-limited knapsack to maximize the value of the items packed.
The algorithm for doing this is similar to the enumeration problem and would take an impossibly long time for most knapsacks.
But we have a rule of thumb (a "heuristic") that says "rate each item by its value-to-weight ratio, and pack items in decreasing order of that ratio until the knapsack is full."
This rule of thumb packs very good knapsacks fast, but not necessarily the best.
Many hard problems are like this.
There are fast heuristic algorithms that do a good job but not necessarily the best.
We can automate them only if we find a good heuristic algorithm.
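One common form of such a rule of thumb ranks items by value-to-weight ratio and packs greedily. A minimal sketch in Python (the function name and item data are invented for the example):

```python
def greedy_knapsack(items, capacity):
    """Greedy heuristic: pack items in decreasing value-to-weight
    ratio until no more fit. Fast, but not guaranteed optimal.

    items: list of (value, weight) pairs.
    Returns (total_value, list_of_packed_items).
    """
    ranked = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value, remaining, packed = 0, capacity, []
    for value, weight in ranked:
        if weight <= remaining:
            packed.append((value, weight))
            total_value += value
            remaining -= weight
    return total_value, packed

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight)
print(greedy_knapsack(items, 50))          # (160, [(60, 10), (100, 20)])
```

On this tiny instance the greedy pack is worth 160, while exhaustive enumeration would find a pack worth 220 (the second and third items)---very good, but not necessarily the best.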
The early findings about what things cannot be done in computing, either because they are impossible or simply take too long, led to pessimism about whether computing could help with most practical problems.
Today the mood is much more optimistic.
A skilled computational thinker uses a sophisticated understanding of computational complexity, logic, and optimization methods to design good heuristic algorithms.
Although all parts of computing contribute to automation, the field of artificial intelligence (AI) has emerged as a focal point in computing for automating human cognitive tasks and other human work.
The computational thinking toolbox accumulated heuristic methods for searching the solution spaces of games, for deducing conclusions from given information, and machine-learning methods that find problem solutions by generalizing from examples.
In past technological revolutions, automation created new and often better kinds of work. The automation revolution, however, could break that pattern, and all work would be automated. --- Fortune
The spread of computing into many fields in the 1990s was another factor in the disintegration of the automation consensus of computational thinking in the academic world.
Scientists who ran simulations or evaluated mathematical models were clearly thinking computationally but their interest was not about automating human tasks.
A computational interpretation of the universe started to gain a foothold in sciences(see the next section, "The Universe as a Computer").
The nail went into the automation coffin when scientists from other fields started saying around 2000 that they worked with naturally occurring information processes.
Biologists, for example, said that the natural process of DNA transcription was computational.
There was nothing to automate; instead they wanted to understand and then modify the process.
Biology is not alone.
Cognitive scientists see many brain processes as computational. Materials scientists have designed new materials by computing the reactions that yield them.
Drug companies use simulations and search, instead of tedious lab experiments, to find new compounds to treat diseases.
Physicists see quantum mechanics as a way to explain all particles and forces as information processes.
The list goes on.
What is more, many new technologies such as blogging, image recognition, encryption, machine learning, natural language processing, and blockchains are innovations made possible by computing.
But none of the above was an automation of any existing process---each created an altogether new process.
What a radical change from the days of Newell, Perlis, and Simon!
Then the very idea of computer science was attacked because it did not study natural processes.
Today much of computing is directly relevant to understanding natural processes.
The Universe as a Computer
Some researchers say there is another stage of evolution beyond this: the idea that the universe is itself a computer.
Everything we think we see, and everything we think, is computed by a natural process.
Instead of using computation to understand nature, they say, we will eventually accept that everything in nature is computation.
In that case, computational thinking is not just another skill to be learned, it is the natural behavior of the brain.
Hollywood screenwriters love this story line.
They have taken it into popular science-fiction movies based on the notion that everything we think we see is produced for us by a computer simulation, and indeed every thought we think we have is an illusion given by a computation.
It might be an engaging story, but there is little evidence to support it.
This claim is a generalization of a distinction familiar in artificial intelligence.
Strong AI refers to the belief that suitably programmed machines can be literally intelligent.
Weak AI refers to the belief that, through smart programming, machines can simulate mental activities so well they appear intelligent without being intelligent.
For example, virtual assistants like Siri and Alexa are weak AI because they do a good job at recognizing common commands and acting on them without "understanding" them.
The pursuit of strong AI dominated the AI agenda from the founding of the AI field in 1950 until the late 1990s.
It produced very little insight into intelligence and no machines came close to anything that could be considered intelligent in the same way humans are intelligent.
The pursuit of specialized, weak AI applications rose to ascendance beginning in the 1990s and is responsible for the amazing innovations with neural networks and biodata analysis.
Similar to the weak-strong distinction in AI, the "strong" computational view of the universe holds that the universe itself, along with every living being, is a digital computer.
Every dimension of space and time is discrete and every movement of matter or energy is a computation.
In contrast, the "weak" computational view of the universe does not claim that the world computes, but only that computational interpretations of the world are very useful for studying phenomena: we can model, simulate, and study the world using computation.
The strong computational view is highly speculative, and while it has some passionate proponents, it faces numerous problems both empirical and philosophical.
Its rise is understandable as a continuation of the ongoing quest to understand the world through the latest available technology.
For instance, in the Age of Enlightenment, the world was compared to clockwork.
The brain has successively been compared to the mill, the telegraph system, hydraulic systems, electromagnetic systems, and the computer.
The newest stage in this progression is to interpret the world not as a classical computer but as a quantum computer.