What would cause an algorithm to have O(log log n) complexity? [tags: complexity, algorithm, log]

2023-09-11 00:22:20 Author: 一吃就胖星人

This earlier question addresses some of the factors that might cause an algorithm to have O(log n) complexity.

What would cause an algorithm to have time complexity O(log log n)?

Recommended Answer

O(log log n) terms can show up in a variety of different places, but there are typically two main routes that will arrive at this runtime.

As mentioned in the answer to the linked question, a common way for an algorithm to have time complexity O(log n) is for that algorithm to work by repeatedly cutting the size of the input down by some constant factor on each iteration. If this is the case, the algorithm must terminate after O(log n) iterations, because after doing O(log n) divisions by a constant, the algorithm must shrink the problem size down to 0 or 1. This is why, for example, binary search has complexity O(log n).
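The iteration count is easy to check empirically. Below is a minimal Python sketch (not part of the original answer) of binary search instrumented to count its probes; on 65,536 sorted elements it needs at most log2(65,536) + 1 = 17 of them:

```python
def binary_search(arr, target):
    """Standard binary search on a sorted list.

    Returns (index or None, number of loop iterations used).
    """
    lo, hi = 0, len(arr) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2       # halve the search range each iteration
        if arr[mid] == target:
            return mid, steps
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, steps

idx, steps = binary_search(list(range(65536)), 12345)
print(idx, steps)  # idx == 12345, and steps is at most 17
```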

Interestingly, there is a similar way of shrinking down the size of a problem that yields runtimes of the form O(log log n). Instead of dividing the input in half at each layer, what happens if we take the square root of the size at each layer?

For example, let's take the number 65,536. How many times do we have to divide this by 2 until we get down to 1? If we do this, we get

65,536 / 2 = 32,768
32,768 / 2 = 16,384
16,384 / 2 = 8,192
8,192 / 2 = 4,096
4,096 / 2 = 2,048
2,048 / 2 = 1,024
1,024 / 2 = 512
512 / 2 = 256
256 / 2 = 128
128 / 2 = 64
64 / 2 = 32
32 / 2 = 16
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1

This process takes 16 steps, and it's also the case that 65,536 = 2^16.
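The halving count above is easy to verify with a few lines of Python (this snippet is just an illustration, not part of the original answer):

```python
# Count how many halvings it takes to reduce 65,536 down to 1.
n = 65536
steps = 0
while n > 1:
    n //= 2       # divide by 2 each iteration, as in the chain above
    steps += 1
print(steps)  # 16, matching log2(65536)
```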

But, if we take the square root at each level, we get

√65,536 = 256
√256 = 16
√16 = 4
√4 = 2

Notice that it only takes four steps to get all the way down to 2. Why is this? Well, let's rewrite this sequence in terms of powers of two:

√65,536 = √(2^16) = (2^16)^(1/2) = 2^8 = 256
√256 = √(2^8) = (2^8)^(1/2) = 2^4 = 16
√16 = √(2^4) = (2^4)^(1/2) = 2^2 = 4
√4 = √(2^2) = (2^2)^(1/2) = 2^1 = 2

Notice that we followed the sequence 2^16 → 2^8 → 2^4 → 2^2 → 2^1. On each iteration, we cut the exponent of the power of two in half. That's interesting, because this connects back to what we already know - you can only divide the number k in half O(log k) times before it drops to zero.

So take any number n and write it as n = 2^k. Each time you take the square root of n, you halve the exponent in this equation. Therefore, there can be only O(log k) square roots applied before k drops to 1 or lower (in which case n drops to 2 or lower). Since n = 2^k, this means that k = log2 n, and therefore the number of square roots taken is O(log k) = O(log log n). Therefore, if there is an algorithm that works by repeatedly reducing the problem to a subproblem whose size is the square root of the original problem size, that algorithm will terminate after O(log log n) steps.
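To see the O(log log n) bound concretely, here is a small Python sketch (the function name `sqrt_steps` is my own, not from the answer) that counts repeated integer square roots until the value drops to 2:

```python
import math

def sqrt_steps(n):
    """Count how many square roots it takes to bring n down to 2 or less."""
    steps = 0
    while n > 2:
        n = math.isqrt(n)  # integer square root avoids float rounding drift
        steps += 1
    return steps

# 65,536 = 2^16, so this should take ~log2(16) = 4 steps:
print(sqrt_steps(65536))  # 4
```

For n = 2^k the count is about log2(k) = log2(log2(n)), matching the analysis above.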

One real-world example of this is the van Emde Boas tree (vEB-tree) data structure. A vEB-tree is a specialized data structure for storing integers in the range 0 ... N - 1. It works as follows: the root node of the tree has √N pointers in it, splitting the range 0 ... N - 1 into √N buckets each holding a range of roughly √N integers. These buckets are then each internally subdivided into √(√ N) buckets, each of which holds roughly √(√ N) elements. To traverse the tree, you start at the root, determine which bucket you belong to, then recursively continue in the appropriate subtree. Due to the way the vEB-tree is structured, you can determine in O(1) time which subtree to descend into, and so after O(log log N) steps you will reach the bottom of the tree. Accordingly, lookups in a vEB-tree take time only O(log log N).

Another example is the Hopcroft-Fortune closest pair of points algorithm. This algorithm attempts to find the two closest points in a collection of 2D points. It works by creating a grid of buckets and distributing the points into those buckets. If at any point in the algorithm a bucket is found that has more than √N points in it, the algorithm recursively processes that bucket. The maximum depth of the recursion is therefore O(log log n), and using an analysis of the recursion tree it can be shown that each layer in the tree does O(n) work. Therefore, the total runtime of the algorithm is O(n log log n).

There are some other algorithms that achieve O(log log n) runtimes by using algorithms like binary search on objects of size O(log n). For example, the x-fast trie data structure performs a binary search over the layers of a tree of height O(log U), so the runtimes of some of its operations are O(log log U). The related y-fast trie gets some of its O(log log U) runtimes by maintaining balanced BSTs of O(log U) nodes each, allowing searches in those trees to run in time O(log log U). The tango tree and related multisplay tree data structures end up with an O(log log n) term in their analyses because they maintain trees that contain O(log n) items each.
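As a simplified illustration of the "binary search over a structure of size O(log U)" idea - an analogue I wrote for this explanation, not an actual x-fast trie implementation - the following Python sketch finds floor(log2(n)) by binary searching over the bit positions of a word, using O(log log U) comparisons for a U-bit universe:

```python
def floor_log2(n, word_bits=64):
    """Find floor(log2(n)) for 1 <= n < 2**word_bits by binary searching
    over the word's bit positions: log2(word_bits) comparisons total."""
    lo, hi = 0, word_bits - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if n >> mid:        # some bit is set at position mid or above
            lo = mid
        else:
            hi = mid - 1
    return lo

print(floor_log2(1000))  # 9, since 2^9 = 512 <= 1000 < 1024
```

For a 64-bit universe this makes only log2(64) = 6 comparisons, the same "search over O(log U) layers" pattern the x-fast trie exploits.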

Other algorithms achieve runtime O(log log n) in other ways. Interpolation search has expected runtime O(log log n) to find a number in a sorted array, but the analysis is fairly involved. Ultimately, the analysis works by showing that the number of iterations is equal to the number k such that n^(2^-k) ≤ 2, for which log log n is the correct solution. Some algorithms, like the Cheriton-Tarjan MST algorithm, arrive at a runtime involving O(log log n) by solving a complex constrained optimization problem.
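For reference, here is a standard interpolation search sketch in Python (my own illustrative code); the O(log log n) expected bound holds for uniformly distributed keys, which the example array satisfies:

```python
def interpolation_search(arr, target):
    """Interpolation search on a sorted list; returns an index or -1.

    Instead of probing the midpoint, probe where the target "should"
    sit if the keys were uniformly distributed between arr[lo] and arr[hi].
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= target <= arr[hi]:
        if arr[hi] == arr[lo]:
            break  # all remaining keys are equal; avoid division by zero
        pos = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == target:
            return pos
        elif arr[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return lo if lo <= hi and arr[lo] == target else -1

arr = list(range(0, 1000, 7))  # uniformly spaced keys: 0, 7, 14, ...
print(interpolation_search(arr, 700))  # 100
```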

Hope this helps!