
In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can perform automated deductions (referred to as automated reasoning) and use mathematical and logical tests to divert the code execution through various routes (referred to as automated decision-making). Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as “memory”, “search” and “stimulus”.

In contrast, a heuristic is an approach to problem solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result.

As an effective method, an algorithm can be expressed within a finite amount of space and time, and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing “output” and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
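
As a small illustration of that last point, the sketch below shows a randomized algorithm whose state transitions depend on random input: a Monte Carlo estimate of π. The example and its names are illustrative assumptions, not drawn from this article:

```python
# Sketch of a randomized algorithm: successive states depend on
# random input, so two runs can follow different state sequences,
# yet each run terminates and produces (approximate) output.
import random

def estimate_pi(samples: int) -> float:
    """Monte Carlo estimate of pi from `samples` random points."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()   # the random input
        if x * x + y * y <= 1.0:                  # point in quarter disc
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))   # ~3.14, varying slightly per run
```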

[Figure: Flowchart of an algorithm (Euclid’s algorithm) for calculating the greatest common divisor (g.c.d.) of two numbers a and b in locations named A and B. The algorithm proceeds by successive subtractions in two loops: IF the test B ≥ A yields “yes” or “true” (more accurately, the number b in location B is greater than or equal to the number a in location A) THEN the algorithm specifies B ← B − A (meaning the number b − a replaces the old b). Similarly, IF A > B, THEN A ← A − B. The process terminates when (the contents of) B is 0, yielding the g.c.d. in A. (Algorithm derived from Scott 2009:13; symbols and drawing style from Tausworthe 1977.)]
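
A direct transcription of this flowchart into Python (a sketch; the variable names A and B mirror the locations in the caption, and the inputs are assumed to be positive integers):

```python
def gcd_by_subtraction(a: int, b: int) -> int:
    """Euclid's algorithm by successive subtraction, following the
    two-loop structure of the flowchart; assumes a, b > 0."""
    A, B = a, b                # locations A and B
    while B != 0:              # terminate when (the contents of) B is 0
        if B >= A:
            B = B - A          # B <- B - A
        else:                  # A > B
            A = A - B          # A <- A - B
    return A                   # the g.c.d. remains in location A

print(gcd_by_subtraction(1599, 650))   # 13
```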

[Figure: Ada Lovelace’s diagram from “note G”, the first published computer algorithm.]

1 History

The concept of algorithm has existed since antiquity. Arithmetic algorithms, such as a division algorithm, were used by ancient Babylonian mathematicians c. 2500 BC and Egyptian mathematicians c. 1550 BC. Greek mathematicians later used algorithms in 240 BC in the sieve of Eratosthenes for finding prime numbers, and the Euclidean algorithm for finding the greatest common divisor of two numbers. Arabic mathematicians such as al-Kindi in the 9th century used cryptographic algorithms for code-breaking, based on frequency analysis.

The word algorithm is derived from the name of the 9th-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī (Arabized Persian الخوارزمی, c. 780–850), whose nisba (identifying him as from Khwarazm) was Latinized as Algoritmi. Muḥammad ibn Mūsā al-Khwārizmī was a mathematician, astronomer, geographer, and scholar in the House of Wisdom in Baghdad, whose name means ‘the native of Khwarazm’, a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century. The manuscript starts with the phrase Dixit Algorizmi (‘Thus spake Al-Khwarizmi’), where “Algorizmi” was the translator’s Latinization of Al-Khwarizmi’s name. Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, algorismus (English ‘algorism’), a corruption of his name, simply meant the “decimal number system”. In the 15th century, under the influence of the Greek word ἀριθμός (arithmos), ‘number’ (cf. ‘arithmetic’), the Latin word was altered to algorithmus, and the corresponding English term ‘algorithm’ is first attested in the 17th century; the modern sense was introduced in the 19th century.

Indian mathematics was predominantly algorithmic. Algorithms that are representative of the Indian mathematical tradition range from the ancient Śulbasūtrās to the medieval texts of the Kerala School.

In English, the word algorithm was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it was not until the late 19th century that “algorithm” took on the meaning that it has in modern English.

Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins with:

Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris.

which translates to:

Algorism is the art by which at present we use those Indian figures, which number two times five.

The poem is a few hundred lines long and summarizes the art of calculating with the new styled Indian dice (Tali Indorum), or Hindu numerals.

A partial formalization of the modern concept of algorithm began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert in 1928. Later formalizations were framed as attempts to define “effective calculability” or “effective method”. Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church’s lambda calculus of 1936, Emil Post’s Formulation 1 of 1936, and Alan Turing’s Turing machines of 1936–37 and 1939.

2 Informal definition

For a detailed presentation of the various points of view on the definition of “algorithm”, see Algorithm characterizations.

An informal definition could be “a set of rules that precisely defines a sequence of operations”, which would include all computer programs (including programs that do not perform numeric calculations), and (for example) any prescribed bureaucratic procedure or cook-book recipe.

In general, a program is only an algorithm if it stops eventually—even though infinite loops may sometimes prove desirable.

A prototypical example of an algorithm is the Euclidean algorithm, which is used to determine the greatest common divisor of two integers; one version (there are others) is described by the flowchart above and is worked through in a later section.

Boolos and Jeffrey (1974, 1999) offer an informal meaning of the word “algorithm” in the following quotation:

No human being can write fast enough, or long enough, or small enough† (†“smaller and smaller without limit … you’d be trying to write on molecules, on atoms, on electrons”) to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary operations on symbols.

An “enumerably infinite set” is one whose elements can be put into one-to-one correspondence with the integers. Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that “creates” output integers from an arbitrary “input” integer or integers that, in theory, can be arbitrarily large. For example, an algorithm can be an algebraic equation such as y = m + n (i.e., two arbitrary “input variables” m and n that produce an output y), but various authors’ attempts to define the notion indicate that the word implies much more than this, something on the order of (for the addition example):

Precise instructions (in a language understood by “the computer”) for a fast, efficient, “good” process that specifies the “moves” of “the computer” (machine or human, equipped with the necessary internally contained information and capabilities) to find, decode, and then process arbitrary input integers/symbols m and n, symbols + and = … and “effectively” produce, in a “reasonable” time, output-integer y at a specified place and in a specified format.
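
As a hedged sketch of what such “explicit instructions” can look like, the two functions below pick the squares as an illustrative enumerably infinite set and spell out the addition example; the names and the choice of set are assumptions, not from Boolos and Jeffrey:

```python
def nth_member(n: int) -> int:
    """Explicit, mechanical instructions for the nth member of an
    enumerably infinite set -- here, illustratively, the squares
    0, 1, 4, 9, ...; the rule works for arbitrary finite n."""
    return n * n

def add(m: int, n: int) -> int:
    """The addition example from the text: arbitrary input integers
    m and n produce the output y = m + n."""
    y = m + n                  # "effectively" produce output y
    return y

print(nth_member(10), add(3, 4))   # 100 7
```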

The concept of algorithm is also used to define the notion of decidability—a notion that is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to the customary physical dimension. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term.

Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.

3 Formalization

Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail the specific instructions a computer should perform—in a specific order—to carry out a specified task, such as calculating employees’ paychecks or printing students’ report cards. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include Minsky (1967), Savage (1987), and Gurevich (2000):

Minsky: “But we will also maintain, with Turing … that any procedure which could ‘naturally’ be called effective, can, in fact, be realized by a (simple) machine. Although this may seem extreme, the arguments … in its favor are hard to refute.”

Gurevich: “… Turing’s informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine … according to Savage [1987], an algorithm is a computational process defined by a Turing machine.”

Turing machines can define computational processes that do not terminate. The informal definitions of algorithms generally require that the algorithm always terminates. This requirement renders the task of deciding whether a formal procedure is an algorithm impossible in the general case—due to a major theorem of computability theory known as the halting problem.
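
The standard argument behind that theorem can be sketched in code. Everything below is hypothetical: `halts` is assumed, only to derive a contradiction, to be a total, always-correct termination tester, and no such function can exist:

```python
def halts(program, data) -> bool:
    """HYPOTHETICAL: True iff program(data) terminates.
    Assumed only for the sake of contradiction; it cannot exist."""
    raise NotImplementedError

def paradox(program):
    # Loop forever exactly when `halts` predicts termination.
    if halts(program, program):
        while True:
            pass

# Now consider paradox(paradox). If halts(paradox, paradox) is True,
# paradox loops forever, contradicting halts; if it is False, paradox
# returns, again contradicting halts. So no general procedure can
# decide whether an arbitrary formal procedure always terminates.
```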

Typically, when an algorithm is associated with processing information, data can be read from an input source, written to an output device and stored for further processing. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures.

For some of these computational processes, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. This means that any conditional steps must be systematically dealt with, case by case; the criteria for each case must be clear (and computable).

Because an algorithm is a precise list of precise steps, the order of computation is always crucial to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting “from the top” and going “down to the bottom”—an idea that is described more formally by flow of control.

So far, the discussion on the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception—one which attempts to describe a task in discrete, “mechanical” means. Unique to this conception of formalized algorithms is the assignment operation, which sets the value of a variable. It derives from the intuition of “memory” as a scratchpad. An example of such an assignment can be found below.
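
For instance, in the following minimal sketch each assignment overwrites the value held in a named memory cell:

```python
# Assignment as the imperative primitive: each statement replaces
# the old contents of a named cell of the "scratchpad" memory.
total = 0                  # initialize location `total`
for x in (3, 1, 4):
    total = total + x      # destructive update: new value replaces old
print(total)               # 8; the overwritten values are gone
```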

For some alternate conceptions of what constitutes an algorithm, see functional programming and logic programming.

4 Expressing algorithms

Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts and control tables are structured ways to express algorithms that avoid many of the ambiguities common in the statements based on natural language. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are also often used as a way to define or document algorithms.

There is a wide variety of representations possible and one can express a given Turing machine program as a sequence of machine tables (see finite-state machine, state transition table and control table for more), as flowcharts and drakon-charts (see state diagram for more), or as a form of rudimentary machine code or assembly code called “sets of quadruples” (see Turing machine for more).

Representations of algorithms can be classed into three accepted levels of Turing machine description, as follows:

1 High-level description

“…prose to describe an algorithm, ignoring the implementation details. At this level, we do not need to mention how the machine manages its tape or head.”

2 Implementation description

“…prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level, we do not give details of states or transition function.”

3 Formal description

Most detailed, “lowest level”, gives the Turing machine’s “state table”.

For an example of the simple algorithm “Add m+n” described in all three levels, see Examples.
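
Since the Examples section is not reproduced here, the following sketch illustrates the “formal description” level under an assumed encoding: a complete state table for a tiny Turing machine that computes m + n in unary (m ones, a ‘+’, then n ones). The machine is an illustrative assumption, not the article’s own example:

```python
# Formal ("lowest level") description: the full state table of a
# Turing machine adding two unary numbers, plus a small simulator.
TABLE = {
    # (state, scanned): (write, head move, next state)
    ("scan",  "1"): ("1", +1, "scan"),    # run right across the ones
    ("scan",  "+"): ("1", +1, "scan"),    # merge the blocks: '+' -> '1'
    ("scan",  "_"): ("_", -1, "erase"),   # blank past the end: turn back
    ("erase", "1"): ("_", -1, "halt"),    # erase the one surplus '1'
}

def run(tape: str) -> str:
    """Simulate the machine on a well-formed tape like '111+11'."""
    cells = list(tape) + ["_"]            # '_' is the blank symbol
    state, head = "scan", 0
    while state != "halt":
        write, move, state = TABLE[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).strip("_")

print(run("111+11"))   # '11111', i.e. 3 + 2 = 5 in unary
```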

5 Design

See also: Algorithm § By design paradigm

Algorithm design refers to a method or a mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operations research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern.
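
As a sketch of one paradigm named above, dynamic programming stores the answers to overlapping subproblems so each is solved only once; the Fibonacci example is illustrative:

```python
def fib(n: int) -> int:
    """Fibonacci via dynamic programming: fill a table bottom-up so
    each subproblem fib(k) is computed exactly once."""
    table = [0, 1] + [0] * max(0, n - 1)     # table[k] will hold fib(k)
    for k in range(2, n + 1):
        table[k] = table[k - 1] + table[k - 2]   # reuse stored answers
    return table[n]

print(fib(50))   # 12586269025; each fib(k) computed once,
                 # unlike naive recursion's exponential recomputation
```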

One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; big O notation is used to describe, for example, an algorithm’s run-time growth as the size of its input increases.
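
For instance, two correct search routines can have very different growth rates: linear search is O(n) in the input size, while binary search over sorted data is O(log n). A sketch using Python’s standard bisect module:

```python
from bisect import bisect_left

def linear_search(items, target):
    """O(n): may inspect every item."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halves the search range at every step."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(1_000_000))
assert linear_search(data, 999_999) == binary_search(data, 999_999) == 999_999
```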

Typical steps in the development of algorithms:

  1. Problem definition
  2. Development of a model
  3. Specification of the algorithm
  4. Designing an algorithm
  5. Checking the correctness of the algorithm
  6. Analysis of algorithm
  7. Implementation of algorithm
  8. Program testing
  9. Documentation preparation

6 Computer algorithms

7 Examples

7.1 Algorithm example

7.2 Euclid’s algorithm

7.2.1 Computer language for Euclid’s algorithm

7.2.2 An inelegant program for Euclid’s algorithm

7.2.3 An elegant program for Euclid’s algorithm

7.3 Testing the Euclid algorithms

7.4 Measuring and improving the Euclid algorithms

8 Algorithmic analysis

8.1 Formal versus empirical

8.2 Execution efficiency

9 Classification

9.1 By implementation

9.2 By design paradigm

9.3 Optimization problems

9.4 By field of study

9.5 By complexity

9.6 Continuous algorithms

10 Legal issues

11 History: Development of the notion of “algorithm”

11.1 Ancient Near East

11.2 Discrete and distinguishable symbols

11.3 Manipulation of symbols as “place holders” for numbers: algebra

11.4 Cryptographic algorithms

11.5 Mechanical contrivances with discrete states

11.6 Mathematics during the 19th century up to the mid-20th century

11.7 Emil Post (1936) and Alan Turing (1936–37, 1939)

11.8 J.B. Rosser (1939) and S.C. Kleene (1943)

11.9 History after 1950

12 See also
