I have a small test application that executes two threads simultaneously. One increments a static long `_value`, the other decrements it. Using ProcessThread.ProcessorAffinity, I've ensured that the threads are pinned to different physical (non-HT) cores to force inter-core communication, and I've ensured that they overlap in execution time for a significant amount of time.
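A minimal sketch of such a harness, using the Interlocked variant from below so the result is deterministic (the ProcessThread.ProcessorAffinity pinning from the original setup is omitted for brevity, since mapping a managed Thread to its ProcessThread takes extra plumbing):

```csharp
using System;
using System.Threading;

class DriftTest
{
    static long _value;

    static void Run(long offset)
    {
        // Each thread applies its offset ten million times.
        for (long i = 0; i < 10000000; i++)
        {
            Interlocked.Add(ref _value, offset);
        }
    }

    static void Main()
    {
        // One thread adds +1, the other adds -1; with atomic updates
        // the increments and decrements cancel exactly.
        Thread inc = new Thread(() => Run(+1));
        Thread dec = new Thread(() => Run(-1));
        inc.Start();
        dec.Start();
        inc.Join();
        dec.Join();
        Console.WriteLine(_value); // prints 0
    }
}
```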
Of course, the following does not lead to zero:
for (long i = 0; i < 10000000; i++)
{
_value += offset;
}
So, the logical conclusion would be to:
for (long i = 0; i < 10000000; i++)
{
Interlocked.Add(ref _value, offset);
}
Which of course leads to zero.
However, the following also leads to zero:
for (long i = 0; i < 10000000; i++)
{
lock (_syncRoot)
{
_value += offset;
}
}
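For reference, the `lock` statement is syntactic sugar for Monitor.Enter/Exit wrapped in try/finally. A sketch of what the C# compiler (4.0 and later) emits for the loop body above:

```csharp
using System;
using System.Threading;

class LockExpansion
{
    static long _value;
    static readonly object _syncRoot = new object();

    static void AddLocked(long offset)
    {
        // Roughly what `lock (_syncRoot) { _value += offset; }` expands to:
        bool lockTaken = false;
        try
        {
            Monitor.Enter(_syncRoot, ref lockTaken);
            _value += offset;
        }
        finally
        {
            if (lockTaken) Monitor.Exit(_syncRoot);
        }
    }

    static void Main()
    {
        AddLocked(+5);
        AddLocked(-5);
        Console.WriteLine(_value); // prints 0
    }
}
```

Monitor.Enter and Monitor.Exit are where the memory-fence semantics discussed below come from.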
Of course, the `lock` statement ensures that the reads and writes are not reordered, because it employs a full fence. However, I cannot find any information about synchronization of the processor caches. If there were no cache synchronization, I'd expect to see a deviation from 0 after both threads have finished.
Can someone explain to me how `lock` / Monitor.Enter/Exit ensures that the processor caches (L1/L2 caches) are synchronized?
Cache coherence in this case does not depend on `lock`. The `lock` statement ensures that your assembler instructions are not interleaved. `a += b` is not atomic to the processor; it looks like:
load data into a register from memory
increment the data in the register
store the data back to memory

and without a lock, the two threads could interleave like:

load data into register X from memory (thread 1)
load data into register Y from memory (thread 2)
increment the data (X)
decrement the data (Y)
store the data back (from X)
store the data back (from Y) // in this case the increment is lost
But this is not about cache coherence; it's a more high-level concern.
So, `lock` does not ensure that the caches are synchronized. Cache synchronization is a processor-internal feature which does not depend on your code. You can read about it here.
When one core writes a value to memory, the corresponding cache entry on the second core is invalidated; so when the second core then tries to read that value, it no longer has a valid copy in its cache and a cache miss occurs. That cache miss forces the cache entry to be refilled with the actual value.