A coffee-break introduction to time complexity of algorithms

Just like writing your very first for loop, understanding time complexity is an integral milestone to learning how to write efficient complex programs. Think of it as having a superpower that allows you to know exactly what type of program might be the most efficient in a particular situation — before even running a single line of code.

The fundamental concepts of complexity analysis are well worth studying. You’ll be able to better understand how the code you’re writing will interact with the program’s input, and as a result, you’ll spend a lot less wasted time writing slow and problematic code.

It won’t take long to go over all you need to know in order to start writing more efficient programs — in fact, we can do it in about fifteen minutes. You can go grab a coffee right now (or tea, if that’s your thing) and I’ll take you through it before your coffee break is over. Go ahead, I’ll wait.

All set? Let’s do it!

What is “time complexity” anyway?

The time complexity of an algorithm is an approximation of how long that algorithm will take to process some input. It describes the efficiency of the algorithm by the magnitude of its operations. This is different than the number of times an operation repeats. I’ll expand on that later. Generally, the fewer operations the algorithm has, the faster it will be.

We write about time complexity using Big O notation, which looks something like O(n). There’s rather a lot of math involved in its formal definition, but informally we can say that Big O notation gives us our algorithm’s approximate run time in the worst case, or in other words, its upper bound. It is inherently relative and comparative.

We’re describing the algorithm’s efficiency relative to the increasing size of its input data, n. If the input is a string, then n is the length of the string. If it’s a list of integers, n is the length of the list.

It’s easiest to picture what Big O notation represents with a graph:

[graph comparing the growth of the common complexity classes]

Here are the main important points to remember as you read the rest of this article:

  • Time complexity is an approximation

  • An algorithm’s time complexity approximates its worst case run time

Determining time complexity

There are different classes of complexity that we can use to quickly understand an algorithm. I’ll illustrate some of these classes using nested loops and other examples.

Polynomial time complexity

A polynomial, from the Greek poly meaning “many,” and Latin nomen meaning “name,” describes an expression made up of constants and variables, combined using addition, multiplication, and exponentiation to a non-negative integer power. That’s a super math-y way to say that it contains variables usually denoted by letters, and symbols that look like these:

The below classes describe polynomial algorithms. Some have food examples.

Constant

A constant time algorithm doesn’t change its running time in response to the input data. No matter the size of the data it receives, the algorithm takes the same amount of time to run. We denote this as a time complexity of O(1).

Here’s one example of a constant algorithm that takes the first item in a slice.

func takeCupcake(cupcakes []int) int {
	return cupcakes[0]
}

With this constant-time algorithm, no matter how many cupcakes are on offer, you just get the first one. Oh well. Flavours are overrated anyway.

Linear

A linear algorithm’s running time grows in direct proportion to the size of its input: it processes the input in n operations. This is often the best possible (most efficient) case for time complexity where all the data must be examined.

Here’s an example of code with time complexity of O(n):

func eatChips(bowlOfChips int) {
	for chip := 0; chip <= bowlOfChips; chip++ {
		// dip chip
	}
}

Here’s another example of code with time complexity of O(n):

func eatChips(bowlOfChips int) {
	for chip := 0; chip <= bowlOfChips; chip++ {
		// double dip chip
	}
}

It doesn’t matter whether the code inside the loop executes once, twice, or any number of times. Both these loops do a constant amount of work for each of the n input items, and thus can be described as linear.

Quadratic

Now here’s an example of code with time complexity of O(n²):

func pizzaDelivery(pizzas int) {
	for pizza := 0; pizza <= pizzas; pizza++ {
		// slice pizza
		for slice := 0; slice <= pizza; slice++ {
			// eat slice of pizza
		}
	}
}

Because there are two nested loops, or nested linear operations, the algorithm processes the input n² times.

Cubic

Extending on the previous example, this code with three nested loops has time complexity of O(n³):

func pizzaDelivery(boxesDelivered int) {
	for pizzaBox := 0; pizzaBox <= boxesDelivered; pizzaBox++ {
		// open box
		for pizza := 0; pizza <= pizzaBox; pizza++ {
			// slice pizza
			for slice := 0; slice <= pizza; slice++ {
				// eat slice of pizza
			}
		}
	}
}

Logarithmic

A logarithmic algorithm is one that reduces the size of the input at every step. We denote this time complexity as O(log n), where log, the logarithm function, is this shape:

[graph of the logarithm function]

One example of this is a binary search algorithm that finds the position of an element within a sorted array. Here’s how it would work, assuming we’re trying to find the element x:

  1. If x matches the middle element m of the array, return the position of m.

  2. If x doesn’t match m, see if m is larger or smaller than x. If larger, discard all array items greater than m. If smaller, discard all array items smaller than m.

  3. Continue by repeating steps 1 and 2 on the remaining array until x is found.

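The steps above can be sketched in Go. This function is my own illustration of the halving behaviour, not code from the original article:

```go
package main

import "fmt"

// binarySearch returns the index of x in the sorted slice s, or -1 if
// x isn't present. Each pass through the loop halves the remaining
// range, so the running time is O(log n).
func binarySearch(s []int, x int) int {
	lo, hi := 0, len(s)-1
	for lo <= hi {
		m := (lo + hi) / 2
		switch {
		case s[m] == x:
			return m
		case s[m] < x:
			lo = m + 1 // discard everything at or below m
		default:
			hi = m - 1 // discard everything at or above m
		}
	}
	return -1
}

func main() {
	shelf := []int{2, 5, 8, 12, 16, 23, 38, 56, 72, 91}
	fmt.Println(binarySearch(shelf, 23)) // prints 5
}
```

With ten elements the loop runs at most four times; doubling the input adds only one more step.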

I find the clearest analogy for understanding binary search is imagining the process of locating a book in a bookstore aisle. If the books are organized by author’s last name and you want to find “Terry Pratchett,” you know you need to look for the “P” section.

You can approach the shelf at any point along the aisle and look at the author’s last name there. If you’re looking at a book by Neil Gaiman, you know you can ignore all the rest of the books to your left, since no letters that come before “G” in the alphabet happen to be “P.” You would then move down the aisle to the right any amount, and repeat this process until you’ve found the Terry Pratchett section, which should be rather sizable if you’re at any decent bookstore, because wow did he write a lot of books.

Quasilinear

Often seen with sorting algorithms, the time complexity O(n log n) can describe a data structure where each operation takes O(log n) time. One example of this is quick sort, a divide-and-conquer algorithm.

Quick sort works by dividing up an unsorted array into smaller chunks that are easier to process. It sorts the sub-arrays, and thus the whole array. Think about it like trying to put a deck of cards in order. It’s faster if you split up the cards and get five friends to help you.


Non-polynomial time complexity

The below classes of algorithms are non-polynomial.

Factorial

An algorithm with time complexity O(n!) often iterates through all permutations of the input elements. One common example is a brute-force search, seen in the traveling salesman problem. It tries to find the least costly path between a number of points by enumerating all possible permutations and finding the ones with the lowest cost.

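To make the factorial growth concrete, here’s a minimal brute-force sketch (my own illustration, not from the article) that tries every ordering of the cities; the distance matrix in main is made up for the example:

```go
package main

import "fmt"

// bruteForceTSP returns the cost of the cheapest round trip that
// starts and ends at city 0, by trying every ordering of the other
// cities: (n-1)! permutations in total.
func bruteForceTSP(dist [][]int) int {
	n := len(dist)
	rest := make([]int, n-1)
	for i := range rest {
		rest[i] = i + 1
	}
	best := -1
	var permute func(k int)
	permute = func(k int) {
		if k == len(rest) {
			// rest now holds one full ordering; total up the tour cost
			cost, prev := 0, 0
			for _, c := range rest {
				cost += dist[prev][c]
				prev = c
			}
			cost += dist[prev][0] // return home
			if best < 0 || cost < best {
				best = cost
			}
			return
		}
		for i := k; i < len(rest); i++ {
			rest[k], rest[i] = rest[i], rest[k]
			permute(k + 1)
			rest[k], rest[i] = rest[i], rest[k]
		}
	}
	permute(0)
	return best
}

func main() {
	// made-up symmetric distances between four cities
	dist := [][]int{
		{0, 1, 4, 6},
		{1, 0, 2, 5},
		{4, 2, 0, 3},
		{6, 5, 3, 0},
	}
	fmt.Println(bruteForceTSP(dist)) // prints 12
}
```

Four cities mean only 3! = 6 tours, but fifteen cities already mean over 87 billion.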

Exponential

An exponential algorithm often also iterates through all subsets of the input elements. It is denoted O(2ⁿ) and is often seen in brute-force algorithms. It is similar to factorial time except in its rate of growth, which, as you may not be surprised to hear, is exponential. The larger the data set, the steeper the curve becomes.

In cryptography, a brute-force attack may systematically check all possible elements of a password by iterating through subsets. Using an exponential algorithm to do this, it becomes incredibly resource-expensive to brute-force crack a long password versus a shorter one. This is one reason that a long password is considered more secure than a shorter one.

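To see where the 2ⁿ comes from, here’s a tiny sketch (not from the article) that enumerates every subset of a slice using a bitmask; a real brute-force search would test each subset instead of just counting them:

```go
package main

import "fmt"

// countSubsets enumerates every subset of items by treating each bit
// of mask as an include/exclude flag, doing one unit of work per
// subset: 2^n operations in total.
func countSubsets(items []string) int {
	n := len(items)
	count := 0
	for mask := 0; mask < 1<<n; mask++ {
		// a real brute-force search would examine this subset here
		count++
	}
	return count
}

func main() {
	fmt.Println(countSubsets([]string{"a", "b", "c"})) // prints 8
}
```

Adding one more element doubles the work: every extra character in a password doubles (at least) the attacker’s search space.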

There are further time complexity classes less commonly seen that I won’t cover here, but you can read about these and find examples in this handy table.

Recursion time complexity

As I described in my article explaining recursion using apple pie, a recursive function calls itself under specified conditions. Its time complexity depends on how many times the function is called and the time complexity of a single function call. In other words, it’s the product of the number of times the function runs and a single execution’s time complexity.

Here’s a recursive function that eats pies until no pies are left:

func eatPies(pies int) int {
	if pies == 0 {
		return pies
	}
	return eatPies(pies - 1)
}

The time complexity of a single execution is constant. No matter how many pies are input, the program will do the same thing: check to see if the input is 0. If so, return, and if not, call itself with one fewer pie.

The initial number of pies could be any number, and we need to process all of them, so we can describe the input as n. Thus, the time complexity of this recursive function is the product: n calls times O(1) per call, or O(n).

Worst case time complexity

So far, we’ve talked about the time complexity of a few nested loops and some code examples. Most algorithms, however, are built from many combinations of these. How do we determine the time complexity of an algorithm containing many of these elements strung together?

Easy. We can describe the total time complexity of the algorithm by finding the largest complexity among all of its parts. This is because the slowest part of the code is the bottleneck, and time complexity is concerned with describing the worst case for the algorithm’s run time.

Say we have a program for an office party. If our program looks like this:

package main

import "fmt"

func takeCupcake(cupcakes []int) int {
	fmt.Println("Have cupcake number",cupcakes[0])
	return cupcakes[0]
}

func eatChips(bowlOfChips int) {
	fmt.Println("Have some chips!")
	for chip := 0; chip <= bowlOfChips; chip++ {
		// dip chip
	}
	fmt.Println("No more chips.")
}

func pizzaDelivery(boxesDelivered int) {
	fmt.Println("Pizza is here!")
	for pizzaBox := 0; pizzaBox <= boxesDelivered; pizzaBox++ {
		// open box
		for pizza := 0; pizza <= pizzaBox; pizza++ {
			// slice pizza
			for slice := 0; slice <= pizza; slice++ {
				// eat slice of pizza
			}
		}
	}
	fmt.Println("Pizza is gone.")
}

func eatPies(pies int) int {
	if pies == 0 {
		fmt.Println("Someone ate all the pies!")
		return pies
	}
	fmt.Println("Eating pie...")
	return eatPies(pies - 1)
}

func main() {
	takeCupcake([]int{1, 2, 3})
	eatChips(23)
	pizzaDelivery(3)
	eatPies(3)
	fmt.Println("Food gone. Back to work!")
}

We can describe the time complexity of all the code by the complexity of its most complex part. This program is made up of functions we’ve already seen, with the following time complexity classes:

  • takeCupcake: O(1)
  • eatChips: O(n)
  • pizzaDelivery: O(n³)
  • eatPies: O(n)

To describe the time complexity of the entire office party program, we choose the worst case. This program would have the time complexity O(n³).

Here’s the office party soundtrack, just for fun.

Have cupcake number 1
Have some chips!
No more chips.
Pizza is here!
Pizza is gone.
Eating pie...
Eating pie...
Eating pie...
Someone ate all the pies!
Food gone. Back to work!

P vs NP, NP-complete, and NP-hard

You may come across these terms in your explorations of time complexity. Informally, P (for Polynomial time), is a class of problems that is quick to solve. NP, for Nondeterministic Polynomial time, is a class of problems where the answer can be quickly verified in polynomial time. NP encompasses P, but also another class of problems called NP-complete, for which no fast solution is known. Outside of NP, but still including NP-complete, is yet another class called NP-hard, which includes problems that no one has been able to verifiably solve with polynomial algorithms.

P versus NP is an unsolved, open question in computer science.

Anyway, you don’t generally need to know about NP and NP-hard problems to begin taking advantage of understanding time complexity. They’re a whole other Pandora’s box.

Approximate the efficiency of an algorithm before you write the code

So far, we’ve identified some different time complexity classes and how we might determine which one an algorithm falls into. So how does this help us before we’ve written any code to evaluate?

By combining a little knowledge of time complexity with an awareness of the size of our input data, we can take a guess at an efficient algorithm for processing our data within a given time constraint. We can base our estimation on the fact that a modern computer can perform some hundreds of millions of operations in a second. The following estimates, from the Competitive Programmer’s Handbook, suggest the required time complexity to process the respective input size in a time limit of one second:

  • n ≤ 10: O(n!)
  • n ≤ 20: O(2ⁿ)
  • n ≤ 500: O(n³)
  • n ≤ 5000: O(n²)
  • n ≤ 10⁶: O(n log n) or O(n)
  • n is large: O(1) or O(log n)

Keep in mind that time complexity is an approximation, and not a guarantee. We can save a lot of time and effort by immediately ruling out algorithm designs that are unlikely to suit our constraints, but we must also consider that Big O notation doesn’t account for constant factors. Here’s some code to illustrate.

The following two algorithms both have O(n) time complexity.

func makeCoffee(scoops int) {
	for scoop := 0; scoop <= scoops; scoop++ {
		// add instant coffee
	}
}

func makeStrongCoffee(scoops int) {
	for scoop := 0; scoop <= 3*scoops; scoop++ {
		// add instant coffee
	}
}

The first function makes a cup of coffee with the number of scoops we ask for. The second function also makes a cup of coffee, but it triples the number of scoops we ask for. To see an illustrative example, let’s ask both these functions for a cup of coffee with a million scoops.

Here’s the output of the Go test:

Benchmark_makeCoffee-4          1000000000             0.29 ns/op
Benchmark_makeStrongCoffee-4    1000000000             0.86 ns/op

Our first function, makeCoffee, completed in an average 0.29 nanoseconds. Our second function, makeStrongCoffee, completed in an average of 0.86 nanoseconds. While those may both seem like pretty small numbers, consider that the stronger coffee took nearly three times longer to make. This should make sense intuitively, since we asked it to triple the scoops. Big O notation alone wouldn’t tell you this, since the constant factor of the tripled scoops isn’t accounted for.

Improve time complexity of existing code

Becoming familiar with time complexity gives us the opportunity to write code, or refactor code, to be more efficient. To illustrate, I’ll give a concrete example of one way we can refactor a bit of code to improve its time complexity.

Let’s say a bunch of people at the office want some pie. Some people want pie more than others. The amount that everyone wants some pie is represented by an int > 0:

diners := []int{2, 88, 87, 16, 42, 10, 34, 1, 43, 56}

Unfortunately, we’re bootstrapped and there are only three forks to go around. Since we’re a cooperative bunch, the three people who want pie the most will receive the forks to eat it with. Even though they’ve all agreed on this, no one seems to want to sort themselves out and line up in an orderly fashion, so we’ll have to make do with everybody jumbled about.

Without sorting the list of diners, return the three largest integers in the slice.

Here’s a function that solves this problem and has O(n²) time complexity:

func giveForks(diners []int) []int {
	// make a slice to store diners who will receive forks
	var withForks []int
	// loop over three forks
	for i := 1; i <= 3; i++ {
		// variables to keep track of the highest integer and where it is
		var max, maxIndex int
		// loop over the diners slice
		for n := range diners {
			// if this integer is higher than max, update max and maxIndex
			if diners[n] > max {
				max = diners[n]
				maxIndex = n
			}
		}
		// remove the highest integer from the diners slice for the next loop
		diners = append(diners[:maxIndex], diners[maxIndex+1:]...)
		// keep track of who gets a fork
		withForks = append(withForks, max)
	}
	return withForks
}

This program works, and eventually returns diners [88 87 56]. Everyone gets a little impatient while it’s running though, since it takes rather a long time (about 120 nanoseconds) just to hand out three forks, and the pie’s getting cold. How could we improve it?

By thinking about our approach in a slightly different way, we can refactor this program to have O(n) time complexity:

func giveForks(diners []int) []int {
	// make a slice to store diners who will receive forks
	var withForks []int
	// create variables for each fork
	var first, second, third int
	// loop over the diners
	for i := range diners {
		// assign the forks
		if diners[i] > first {
			third = second
			second = first
			first = diners[i]
		} else if diners[i] > second {
			third = second
			second = diners[i]
		} else if diners[i] > third {
			third = diners[i]
		}
	}
	// list the final result of who gets a fork
	withForks = append(withForks, first, second, third)
	return withForks
}

Here’s how the new program works:

Initially, diner 2 (the first in the list) is assigned the first fork. The other forks remain unassigned.

Then, diner 88 is assigned the first fork instead. Diner 2 gets the second one.

Diner 87 isn’t greater than first which is currently 88, but it is greater than 2 who has the second fork. So, the second fork goes to 87. Diner 2 gets the third fork.

Continuing in this violent and rapid fork exchange, diner 16 is then assigned the third fork instead of 2, and so on.

We can add a print statement in the loop to see how the fork assignments play out:

0 0 0
2 0 0
88 2 0
88 87 2
88 87 16
88 87 42
88 87 42
88 87 42
88 87 42
88 87 43
[88 87 56]

This program is much faster, and the whole epic struggle for fork domination is over in 47 nanoseconds.

As you can see, with a little change in perspective and some refactoring, we’ve made this simple bit of code faster and more efficient.

Well, it looks like our fifteen minute coffee break is up! I hope I’ve given you a comprehensive introduction to calculating time complexity. Time to get back to work, hopefully applying your new knowledge to write more effective code! Or maybe just sound smart at your next office party. :)

Sources

“If I have seen further it is by standing on the shoulders of Giants.” –Isaac Newton, 1675
  1. Antti Laaksonen. Competitive Programmer’s Handbook (pdf), 2017

  2. Wikipedia: Big O notation

  3. StackOverflow: What is a plain English explanation of “Big O” notation?

  4. Wikipedia: Polynomial

  5. Wikipedia: NP-completeness

  6. Wikipedia: NP-hardness

  7. Desmos graph calculator

Thanks for reading! If you found this post useful, please share it with someone else who might benefit from it too!

Translated from: https://www.freecodecamp.org/news/a-coffee-break-introduction-to-time-complexity-of-algorithms-64df7dd8338e/
