Unique random numbers in an integer array in the C programming language

2023-09-10 22:26:28 · Author: 不爱我就别来感动我


Possible Duplicate: Unique random numbers in O(1)?

How do I fill an integer array with unique values (no duplicates) in C?

int vektor[10];   

for (i = 0; i < 10; i++) {
    vektor[i] = rand() % 100 + 1;
}

//No uniqueness here

Solution

There are several ways to solve your problem, each has its own advantages and disadvantages.

First I'd like to note that you already got quite a few responses that do the following: they generate a random number, then somehow check whether it was already used in the array, and if it was, they just generate another number until they find an unused one. This is a naive and, truth be told, seriously flawed approach. The problem is the cyclic trial-and-error nature of the number generation ("if already used, try again"). If the numeric range (say, [1..N]) is close to the length of the desired array (say, M), then towards the end the algorithm might spend a huge amount of time trying to find the next number. If the random number generator is even a little bit broken (say, never generates some number, or does so very rarely), then with N == M the algorithm is guaranteed to loop forever (or for a very long time). Generally this trial-and-error approach is a useless one, or a flawed one at best.

Another approach already presented here is generating a random permutation in an array of size N. The idea of random permutation is a promising one, but doing it on an array of size N (when M << N) will certainly generate more heat than light, speaking figuratively.

Good solutions to this problem can be found, for example, in Bentley's "Programming Pearls" (and some of them are taken from Knuth).

The Knuth algorithm. This is a very simple algorithm with O(N) complexity (i.e. proportional to the numeric range), meaning that it is most usable when M is close to N. However, this algorithm doesn't require any extra memory beyond your vektor array, as opposed to the already offered permutation-based variants (meaning that it takes O(M) memory, not the O(N) of the other permutation-based algorithms suggested here). The latter makes it a viable algorithm even for M << N cases.

The algorithm works as follows: iterate through all numbers from 1 to N and select the current number with probability rm / rn, where rm is how many numbers we still need to find, and rn is how many numbers we still need to iterate through. Here's a possible implementation for your case:

#define M 10
#define N 100

int in, im;

im = 0;

for (in = 0; in < N && im < M; ++in) {
  int rn = N - in;
  int rm = M - im;
  if (rand() % rn < rm)    
    /* Take it */
    vektor[im++] = in + 1; /* +1 since your range begins from 1 */
}

assert(im == M);

After this cycle we get an array vektor filled with randomly chosen numbers, in ascending order. The "ascending order" bit is what we don't need here. So, in order to "fix" that, we just make a random permutation of the elements of vektor and we are done. Note that this is an O(M) permutation requiring no extra memory. (I leave out the implementation of the permutation algorithm; plenty of links were given here already.)

If you look carefully at the permutation-based algorithms proposed here that operate on an array of length N, you'll see that most of them are pretty much this very same Knuth algorithm, but re-formulated for M == N. In that case the above selection cycle will choose each and every number in the [1..N] range with probability 1, effectively turning into the initialization of an N-array with numbers 1 to N. Taking this into account, I think it becomes rather obvious that running this algorithm for M == N and then truncating the result (possibly discarding most of it) makes much less sense than just running it in its original form for the original value of M and getting the result right away, without any truncation.

The Floyd algorithm (see here). This approach has a complexity of about O(M) (depending on the search structure used), so it is better suited when M << N. This approach keeps track of already generated random numbers, so it requires extra memory. However, the beauty of it is that it does not make any of those abominable trial-and-error iterations trying to find an unused random number. This algorithm is guaranteed to generate one unique random number after each call to the random number generator.

Here's a possible implementation of it for your case. (There are different ways to keep track of already used numbers. I'll just use an array of flags, assuming that N is not prohibitively large.)

#define M 10
#define N 100    

unsigned char is_used[N] = { 0 }; /* flags */
int in, im;

im = 0;

for (in = N - M; in < N && im < M; ++in) {
  int r = rand() % (in + 1); /* generate a random number 'r' */

  if (is_used[r])
    /* we already have 'r' */
    r = in; /* use 'in' instead of the generated number */

  assert(!is_used[r]);
  vektor[im++] = r + 1; /* +1 since your range begins from 1 */
  is_used[r] = 1;
}

assert(im == M);

Why the above works is not immediately obvious, but it works: exactly M numbers from the [1..N] range will be picked, with uniform distribution. (Intuitively, each iteration adds exactly one new number to the result: either the freshly generated r or, on a collision, the current in, which cannot have been used in any earlier iteration.)

Note that for large N you can use a search-based structure to store the "already used" numbers, thus getting a nice O(M log M) algorithm with an O(M) memory requirement.

(There's one thing about this algorithm though: while the resultant array will not be ordered, a certain "influence" of the original 1..N ordering will still be present in the result. For example, it is obvious that number N, if selected, can only be the very last member of the resultant array. If this "contamination" of the result by the unintended ordering is not acceptable, the resultant vektor array can be random-shuffled, just like in the Knuth algorithm.)

Note the very critical point observed in the design of these two algorithms: they never loop trying to find a new unused random number. Any algorithm that makes trial-and-error iterations with random numbers is flawed from a practical point of view. Also, the memory consumption of these algorithms is tied to M, not to N.

To the OP I would recommend Floyd's algorithm, since in his application M seems to be considerably less than N, and it doesn't (or may not) require an extra pass for the permutation. However, for such small values of N the difference might be negligible.