How do I detect overflow when multiplying two 2's complement integers?

2023-09-11 03:38:46 Author: 凉酒

I want to multiply two numbers and detect if there was an overflow. What is the simplest way to do that?

Recommended Answer

Multiplying two 32-bit numbers produces a 64-bit answer, two 8-bit numbers produce a 16-bit answer, and so on; binary multiplication is simply shifting and adding. So if you had, say, two 32-bit operands with bit 17 set in operand A and any bit above 15 or 16 set in operand B, you will overflow a 32-bit result: bit 17 shifted left by 16 places is bit 33, one past what a 32-bit result can hold.

So the question, again, is: what are the sizes of your inputs and the size of your result? If the result is the same size as the inputs, then you have to find the most significant 1 of both operands and add those bit positions; if that sum is bigger than your result space, you will overflow.
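
As a concrete illustration, here is a minimal C sketch of that bit-counting check for unsigned 32-bit operands (the helper names are mine, not from the answer):

#include <stdint.h>

/* Number of significant bits in x, i.e. the 1-based position of its
   most significant set bit; 0 if x is zero. */
static int sig_bits_u32(uint32_t x)
{
    int n = 0;
    while (x) { n++; x >>= 1; }
    return n;
}

/* Add the operands' significant-bit counts.  A sum of 32 or less cannot
   overflow a 32-bit result; 34 or more always overflows; exactly 33 can
   go either way, depending on a carry (see the EDIT below). */
static int u32_product_may_overflow(uint32_t a, uint32_t b)
{
    return sig_bits_u32(a) + sig_bits_u32(b) > 32;
}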

EDIT

Yes, multiplying two 3-bit numbers will result in either a 5-bit number, or a 6-bit number if there is a carry in the add. Likewise a 2-bit and a 5-bit number can result in 6 or 7 bits, etc. If the reason for the poster's question is to see whether you have room in your result variable for the answer, then this solution will work and is relatively fast for most languages on most processors. It can be significantly faster on some and significantly slower on others. It is generically fast (depending on how it is implemented, of course) to just look at the number of significant bits in the operands. Doubling the size of the largest operand is a safe bet, if you can do it in your language or on your processor. Divides are downright expensive (slow), and most processors don't have one, much less at an arbitrary doubling of operand sizes.

The fastest approach, of course, is to drop to assembler, do the multiply, and look at the overflow flag (or compare one of the result registers with zero). If your processor can't do the multiply in hardware, then it is going to be slow no matter what you do. I am guessing that asm is not the right answer to this post, despite being by far the fastest and having the most accurate overflow status.
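
For example, assuming C and an available 64-bit type, "doubling the size of the largest operand" can be sketched like this (the function name is mine, not from the answer):

#include <stdint.h>

/* Do the multiply at double width and check whether anything landed
   above the low 32 bits of the result. */
static int u32_mul_overflows(uint32_t a, uint32_t b)
{
    uint64_t p = (uint64_t)a * (uint64_t)b;
    return (p >> 32) != 0;
}

If no wider type is available, the divide-and-compare fallback mentioned later in the answer can be used instead.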

Binary makes multiplication trivial compared to decimal. For example, take the binary numbers:


0b100 *
0b100 

Just like decimal math in school, you (can) start with the least significant bit of the lower operand and multiply it against all the digits of the upper operand. Except with binary there are only two choices: you multiply by zero, meaning you don't add anything to the result, or you multiply by one, which means you just shift and add; no actual multiplication is necessary like you would have in decimal.


  000 : 0 * 100
 000  : 0 * 100
100   : 1 * 100

Add up the columns and the answer is 0b10000.

Same as decimal math, a 1 in the hundreds column means copy the top number and add two zeros; it works the same in any other base as well. So 0b100 times 0b110 is 0b1000 (a one in the second column over, so copy and add one zero) plus 0b10000 (a one in the third column over, so copy and add two zeros) = 0b11000.
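
To make the shift-and-add description concrete, here is a small C sketch of that schoolbook binary multiply (illustrative only; the hardware multiplier or compiler normally does this for you):

#include <stdint.h>

/* Schoolbook binary multiply: for every 1 bit in b, add a copy of a
   shifted left to that bit's column. */
static uint64_t shift_add_multiply(uint32_t a, uint32_t b)
{
    uint64_t result = 0;
    uint64_t partial = a;      /* a, shifted left to the current column */
    while (b) {
        if (b & 1)             /* a 1 in this column: copy and add */
            result += partial;
        partial <<= 1;         /* move one column to the left */
        b >>= 1;
    }
    return result;             /* e.g. 0b100 * 0b110 == 0b11000 */
}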

This leads to looking at the most significant bits in both numbers. 0b1xx * 0b1xx guarantees that a 1xxxx is added into the answer, and that is the largest bit position in the add; no other single input to the final add has that column, or a more significant column, populated. From there you need only one more bit, in case the other bits being added up cause a carry.

The worst case happens with all ones times all ones, 0b111 * 0b111:

 
0b00111 +
0b01110 +
0b11100 

This causes a carry in the addition, resulting in 0b110001: 6 bits. A 3-bit operand times a 3-bit operand, 3 + 3 = 6, so 6 bits in the worst case.
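
If you want to convince yourself of the 3 + 3 = 6 bit worst case, a quick brute-force check (my own, not from the answer) covers every pair of 3-bit operands:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* Every product of two 3-bit values fits in 6 bits, i.e. is < 2^6. */
    for (uint32_t a = 0; a < 8; a++)
        for (uint32_t b = 0; b < 8; b++)
            assert(a * b < (1u << 6));
    return 0;
}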

So the size of the operands measured by their most significant bit (not the size of the registers holding the values) determines the worst-case storage requirement.
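
In symbols (my restatement of the rule, for unsigned values): if a fits in n significant bits and b fits in m, then

a < 2^{n},\quad b < 2^{m} \;\Longrightarrow\; a \cdot b < 2^{n} \cdot 2^{m} = 2^{n+m},

so n + m bits always suffice, and the all-ones example above shows that the worst case can need all of them.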

Well, that is true assuming positive operands. If you consider some of these numbers to be negative it changes things, but not by much.

Minus 4 times 5: 0b1111...111100 * 0b0000....000101 = -20, or 0b1111..11101100

It takes 4 bits to represent a minus 4 and 4 bits to represent a positive 5 (don't forget your sign bit). Our result required 6 bits once you strip off all the repeated sign bits.

Let's look at the 4-bit corner cases:


-8 * 7 = -56
0b1000 * 0b0111 = 0b1001000 
-1 * 7 = -7 = 0b1001
-8 * -8 = 64 = 0b01000000
-1 * -1 = 1 = 0b01
-1 * -8 = 8 = 0b01000
7 * 7 = 49 = 0b0110001

Let's say we count positive numbers as the most significant 1 plus one, and negative numbers as the most significant 0 plus one.


-8 * 7  is 4+4=8 bits, actual 7 bits
-1 * 7  is 1+4=5 bits, actual 4 bits
-8 * -8 is 4+4=8 bits, actual 8 bits
-1 * -1 is 1+1=2 bits, actual 2 bits
-1 * -8 is 1+4=5 bits, actual 5 bits
 7 * 7  is 4+4=8 bits, actual 7 bits

So this rule works. You can see that I called a minus one a single bit; for a negative number the "plus one" part means finding the most significant zero and adding one, and -1 has no zero. Anyway, I argue that if this were a 4-bit * 4-bit machine as defined, you would have at least 4 bits of result, and I interpret the question as: how many more than 4 bits do I need to safely store the answer? So this rule serves to answer that question for 2's complement math.
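
A C sketch of that signed counting convention (the helper name is mine; it counts -1 as a single bit, just as the answer does):

#include <stdint.h>

/* Significant bits of a two's complement value: for x >= 0, the position of
   the most significant 1 plus one (the sign bit); for x < 0, the position of
   the most significant 0 plus one.  -1 has no 0 bits and counts as 1 bit. */
static int sig_bits_s32(int32_t x)
{
    uint32_t u = (x < 0) ? ~(uint32_t)x : (uint32_t)x;
    int n = 0;
    while (u) { n++; u >>= 1; }
    return n + 1;              /* + 1 for the sign bit */
}

/* sig_bits_s32(7) == 4, sig_bits_s32(-8) == 4, sig_bits_s32(-1) == 1;
   sig_bits_s32(a) + sig_bits_s32(b) bounds the number of bits the product
   needs, matching the table above. */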

If your question was to accurately determine overflow, and speed is secondary, then, well, it is going to be really, really slow on some systems, for every multiply you do. If this is the question you are asking, then to get some of the speed back you need to tune it a little better for the language and/or processor. Double up the biggest operand, if you can, and check for non-zero bits above the result size, or use a divide and compare. If you can't double the operand sizes, divide and compare. Check for zero before the divide.
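
For unsigned C types the divide-and-compare might look like this (a sketch; zero is checked before the divide, as suggested). For signed types the multiply itself is undefined behaviour on overflow in C, so a pre-multiplication range check, or a compiler builtin such as GCC/Clang's __builtin_mul_overflow, is the safer route there.

#include <stdint.h>

/* Multiply, then divide back and compare.  Unsigned wrap-around is well
   defined in C, so the wrapped product can be inspected safely. */
static int u32_mul_overflows_divcheck(uint32_t a, uint32_t b)
{
    uint32_t p;
    if (a == 0 || b == 0)
        return 0;              /* check for zero before the divide */
    p = a * b;                 /* wraps modulo 2^32 on overflow */
    return p / a != b;         /* a mismatch means the product wrapped */
}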

Actually, your question doesn't specify what size of overflow you are talking about either. Good old 8086: 16 bit times 16 bit gives a 32-bit result (in hardware); it can never overflow. What about some of the ARMs that have a multiply: 32 bit times 32 bit, 32-bit result, easy to overflow. What are the sizes of your operands for this question, and is the result the same size or double the input size? Are you willing to perform multiplies that the hardware cannot do (without overflowing)? Are you writing a compiler library and trying to determine whether you can feed the operands to the hardware for speed, or whether you have to perform the math without a hardware multiply? That is the kind of thing you get into if you cast up the operands: the compiler library will try to cast the operands back down before doing the multiply, depending on the compiler and its library of course, and it will use the count-the-bits trick to decide whether to use the hardware multiply or a software one.

My goal here was to show how binary multiplication works, in a digestible form, so you can see how much storage you need at most by finding the location of a single bit in each operand. How fast you can find that bit in each operand is now the trick. If you were looking for the minimum storage requirement rather than the maximum, that is a different story, because it involves every one of the significant bits in both operands, not just one bit per operand; you have to do the multiply to determine the minimum storage. If you don't care about maximum or minimum storage, you just do the multiply and look for non-zero bits above your defined overflow limit, or use a divide, if you have the time or hardware.

Your tags imply you are not interested in floating point. Floating point is a completely different beast; you cannot apply any of these fixed-point rules to floating point, they DO NOT work.