Is this an optimal prime generator?

2023-09-11 06:35:49 Author: 喃颩知莪噫


Is this in any way an optimal solution for finding primes? I am not trying to add every optimization under the sun, but is the principle good?

def primesUpto(self, x):
    primes = [2]
    sieve = [2]
    i = 3
    while i <= x:
        composite = False
        j = 0
        while j < len(sieve):
            sieve[j] = sieve[j] - 1
            if sieve[j] == 0:
                composite = True
                sieve[j] = primes[j]
            j += 1
        if not composite:
            primes.append(i)
            sieve.append(i*i-i)
        i += 1
    return primes

Solution

Hmm, very interesting. Your code is an actual, honest-to-goodness, genuine sieve of Eratosthenes, IMHO: it counts its way along the ascending natural numbers, decrementing by 1 on each step every counter it has set up for each prime encountered.

And it is very inefficient. Tested on Ideone, it runs at the same empirical order of growth, ~ n^2.2 (at the tested range of a few thousand primes produced), as the famously inefficient Turner's trial-division sieve (in Haskell).

Why? Several reasons. First, there is no early bailout in your test: when you detect that the current number is a composite, you continue processing the whole array of counters, sieve. You have to, because of the second reason: you count the distance by decrementing each counter by 1 on each step, with 0 representing your current position. This is the most faithful expression of the original sieve, IMHO, and it is very inefficient: today our CPUs know how to add numbers in O(1) time (provided the numbers belong to a certain range, 0 .. 2^32 or 0 .. 2^64, of course).
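To make the contrast concrete, here is a hypothetical variant of the posted code (the name `primes_upto_direct` and the details are mine, not from the question) in which each countdown counter is replaced by the stored value of the prime's next multiple, so advancing it costs a single O(1) addition instead of one decrement per step:

```python
def primes_upto_direct(x):
    # Variant of the question's code: nxt[j] holds the NEXT MULTIPLE of
    # primes[j] directly, instead of a countdown to it.
    primes = [2]
    nxt = [4]                          # next multiple of 2
    for i in range(3, x + 1):
        composite = False
        for j, m in enumerate(nxt):
            if m == i:
                composite = True
                nxt[j] += primes[j]    # one O(1) addition, not i decrements
        if not composite:
            primes.append(i)
            nxt.append(i * i)          # still added prematurely, at i itself
        # note: still no early break -- several primes can hit the same
        # composite (15 is hit by both 3 and 5), so every match must advance
    return primes
```

This fixes only the counting; the scan over all counters on every step, and the premature entry of each prime's multiples, remain.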

Moreover, our computers now have random-access memory, so having calculated a far-off number we can mark it directly in a random-access array. This is the foundation of the efficiency of the sieve of Eratosthenes on modern computers: both the direct calculation and the direct marking of multiples.
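As a point of reference (this sketch is mine, not part of the original post), a minimal array-based sieve exploits exactly this: the multiples of each prime are computed and marked directly in a random-access list, O(1) per mark:

```python
def sieve_of_eratosthenes(x):
    # Classic bounded sieve: mark each composite directly in a boolean
    # array, rather than counting up to it one step at a time.
    is_composite = [False] * (x + 1)
    primes = []
    for i in range(2, x + 1):
        if not is_composite[i]:
            primes.append(i)
            for m in range(i * i, x + 1, i):   # mark multiples, from i*i
                is_composite[m] = True
    return primes
```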

And third, perhaps the most immediate cause of the inefficiency, is the premature handling of multiples: when you encounter 5 as a prime, you add its first not-yet-encountered multiple, i.e. 25, right away into the array of counters, sieve (as the distance between the current point and that multiple, i*i - i). That is much too soon. The addition of 25 must be postponed until ... well, until we encounter 25 among the ascending natural numbers. Starting to handle the multiples of each prime prematurely (at p instead of p*p) leads to far too many counters to maintain - O(n) of them (where n is the number of primes produced), instead of just O(π(sqrt(n log n))) = O(sqrt(n / log n)).

The postponement optimization, when applied to a similar "counting" sieve in Haskell, brought its empirical orders of growth down from ~ n^2.3 .. 2.6 for n = 1000 .. 6000 primes to just above ~ n^1.5 (with obviously enormous gains in speed). When counting was further replaced by direct addition, the resulting measured empirical orders of growth were ~ n^1.2 .. 1.3 in producing up to half a million primes (although in all probability it would gain on ~ n^1.5 for bigger ranges).