Large Object Heap fragmentation and object arrays

2023-09-02 01:20:17 Author: 逢仒三分笑

The C#/.NET application I am working on is suffering from a slow memory leak. I have used CDB with SOS to try to determine what is happening but the data does not seem to make any sense so I was hoping one of you may have experienced this before.

The application is running on the 64 bit framework. It is continuously calculating and serialising data to a remote host and is hitting the Large Object Heap (LOH) a fair bit. However, most of the LOH objects I expect to be transient: once the calculation is complete and has been sent to the remote host, the memory should be freed. What I am seeing, however, is a large number of (live) object arrays interleaved with free blocks of memory, e.g., taking a random segment from the LOH:

0:000> !DumpHeap 000000005b5b1000  000000006351da10
         Address               MT     Size
...
000000005d4f92e0 0000064280c7c970 16147872
000000005e45f880 00000000001661d0  1901752 Free
000000005e62fd38 00000642788d8ba8     1056       <--
000000005e630158 00000000001661d0  5988848 Free
000000005ebe6348 00000642788d8ba8     1056
000000005ebe6768 00000000001661d0  6481336 Free
000000005f214d20 00000642788d8ba8     1056
000000005f215140 00000000001661d0  7346016 Free
000000005f9168a0 00000642788d8ba8     1056
000000005f916cc0 00000000001661d0  7611648 Free
00000000600591c0 00000642788d8ba8     1056
00000000600595e0 00000000001661d0   264808 Free
...

Obviously I would expect this to be the case if my application were creating long-lived, large objects during each calculation. (It does do this and I accept there will be a degree of LOH fragmentation but that is not the problem here.) The problem is the very small (1056 byte) object arrays you can see in the above dump which I cannot see in code being created and which are remaining rooted somehow.

Also note that CDB is not reporting the type when the heap segment is dumped: I am not sure if this is related or not. If I dump the marked (<--) object, CDB/SOS reports it fine:

0:015> !DumpObj 000000005e62fd38
Name: System.Object[]
MethodTable: 00000642788d8ba8
EEClass: 00000642789d7660
Size: 1056(0x420) bytes
Array: Rank 1, Number of elements 128, Type CLASS
Element Type: System.Object
Fields:
None

The elements of the object array are all strings and the strings are recognisable as from our application code.

Also, I am unable to find their GC roots as the !GCRoot command hangs and never comes back (I have even tried leaving it overnight).

So, I would very much appreciate it if anyone could shed any light as to why these small (<85k) object arrays are ending up on the LOH: what situations will .NET put a small object array in there? Also, does anyone happen to know of an alternative way of ascertaining the roots of these objects?
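One manual fallback when !GCRoot hangs — a sketch, not a guaranteed recipe: the object address is the marked array from the dump above, and the segment range must come from !EEHeap -gc in your own process — is to search each GC segment for pointer-sized values equal to the object's address, then inspect the object containing each hit to walk towards the root by hand:

```
0:015> s -q 000000005b5b1000 000000006351da10 000000005e62fd38
... each hit is a slot holding a reference to the array; locate the
... object containing the hit with !DumpHeap around that address, then
0:015> !DumpObj <address of containing object>
```

Repeating this for each referencing object eventually reaches a root (or, as it turned out here, a runtime-owned structure), at the cost of doing by hand what !GCRoot would normally automate.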

Update 1

Another theory I came up with late yesterday is that these object arrays started out large but have been shrunk leaving the blocks of free memory that are evident in the memory dumps. What makes me suspicious is that the object arrays always appear to be 1056 bytes long (128 elements), 128 * 8 for the references and 32 bytes of overhead.

The idea is that perhaps some unsafe code in a library or in the CLR is corrupting the number of elements field in the array header. Bit of a long shot I know...

Update 2

Thanks to Brian Rasmussen (see accepted answer) the problem has been identified as fragmentation of the LOH caused by the string intern table! I wrote a quick test application to confirm this:

static void Main()
{
    const int ITERATIONS = 100000;

    for (int index = 0; index < ITERATIONS; ++index)
    {
        string str = "NonInterned" + index;
        Console.Out.WriteLine(str);
    }

    Console.Out.WriteLine("Continue.");
    Console.In.ReadLine();

    for (int index = 0; index < ITERATIONS; ++index)
    {
        string str = string.Intern("Interned" + index);
        Console.Out.WriteLine(str);
    }

    Console.Out.WriteLine("Continue?");
    Console.In.ReadLine();
}

The application first creates and dereferences unique strings in a loop. This is just to prove that the memory does not leak in this scenario. Obviously it should not and it does not.

In the second loop, unique strings are created and interned. This action roots them in the intern table. What I did not realise is how the intern table is represented. It appears it consists of a set of pages -- object arrays of 128 string elements -- that are created in the LOH. This is more evident in CDB/SOS:
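The rooting effect of interning can also be observed directly from code. A minimal sketch (class and variable names are illustrative; the behaviour is that documented for string.Intern and string.IsInterned):

```csharp
using System;

class InternRooting
{
    static void Main()
    {
        // Built at runtime, so the compile-time interning of literals does
        // not apply: the string is not in the intern table yet.
        string built = string.Concat("Interned", 42.ToString());
        Console.WriteLine(string.IsInterned(built) == null);  // True

        // Interning roots the instance in the table for the life of the
        // process; any later lookup with equal content yields that same
        // single reference.
        string interned = string.Intern(built);
        string other = string.Intern(string.Concat("Interned", 42.ToString()));
        Console.WriteLine(ReferenceEquals(interned, other));  // True
    }
}
```

Every distinct string passed through string.Intern stays reachable this way, which is what fills the 128-element page arrays below.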

0:000> .loadby sos mscorwks
0:000> !EEHeap -gc
Number of GC Heaps: 1
generation 0 starts at 0x00f7a9b0
generation 1 starts at 0x00e79c3c
generation 2 starts at 0x00b21000
ephemeral segment allocation context: none
 segment    begin allocated     size
00b20000 00b21000  010029bc 0x004e19bc(5118396)
Large object heap starts at 0x01b21000
 segment    begin allocated     size
01b20000 01b21000  01b8ade0 0x00069de0(433632)
Total Size  0x54b79c(5552028)
------------------------------
GC Heap Size  0x54b79c(5552028)

Taking a dump of the LOH segment reveals the pattern I saw in the leaking application:

0:000> !DumpHeap 01b21000 01b8ade0
...
01b8a120 793040bc      528
01b8a330 00175e88       16 Free
01b8a340 793040bc      528
01b8a550 00175e88       16 Free
01b8a560 793040bc      528
01b8a770 00175e88       16 Free
01b8a780 793040bc      528
01b8a990 00175e88       16 Free
01b8a9a0 793040bc      528
01b8abb0 00175e88       16 Free
01b8abc0 793040bc      528
01b8add0 00175e88       16 Free    total 1568 objects
Statistics:
      MT    Count    TotalSize Class Name
00175e88      784        12544      Free
793040bc      784       421088 System.Object[]
Total 1568 objects

Note that the object array size is 528 (rather than 1056) because my workstation is 32 bit and the application server is 64 bit. The object arrays are still 128 elements long.
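The two sizes are consistent with 128 reference slots plus a fixed overhead of four pointer widths. A quick back-of-the-envelope check (the four-pointer overhead figure is inferred from the sizes in the two dumps, not taken from CLR documentation):

```csharp
using System;

class ArraySizeCheck
{
    // Size of an object[] as reported by SOS: one reference slot per
    // element, plus overhead that in both dumps equals 4 pointer widths.
    static int ObjectArrayBytes(int elements, int pointerSize)
    {
        return 4 * pointerSize + elements * pointerSize;
    }

    static void Main()
    {
        Console.WriteLine(ObjectArrayBytes(128, 8)); // 1056: the 64-bit server
        Console.WriteLine(ObjectArrayBytes(128, 4)); // 528: the 32-bit workstation
    }
}
```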

So the moral of this story is to be very careful when interning. If the strings you are interning are not known to be members of a finite set, your application will leak due to fragmentation of the LOH, at least in version 2 of the CLR.

In our application's case, there is general code in the deserialisation code path that interns entity identifiers during unmarshalling: I now strongly suspect this is the culprit. However, the developer's intentions were obviously good as they wanted to make sure that if the same entity is deserialised multiple times then only one instance of the identifier string will be maintained in memory.
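One possible replacement for string.Intern in such a code path — a sketch, not our actual fix; LocalStringPool is a hypothetical name — is to de-duplicate through an ordinary dictionary. The pooled strings then live only as long as the pool itself (which can be dropped or evicted), rather than being rooted in the process-wide intern table:

```csharp
using System;
using System.Collections.Generic;

// De-duplicates equal strings like string.Intern, but without rooting
// them for the life of the process: discard the pool and the strings
// become collectable again.
class LocalStringPool
{
    private readonly Dictionary<string, string> _pool =
        new Dictionary<string, string>(StringComparer.Ordinal);

    public string GetOrAdd(string value)
    {
        string existing;
        if (_pool.TryGetValue(value, out existing))
            return existing;      // reuse the single stored instance
        _pool.Add(value, value);
        return value;
    }
}

class Demo
{
    static void Main()
    {
        var pool = new LocalStringPool();
        // Two runtime-built strings with equal content...
        string a = pool.GetOrAdd(new string('x', 3));
        string b = pool.GetOrAdd(new string('x', 3));
        // ...resolve to one instance, as with interning.
        Console.WriteLine(ReferenceEquals(a, b)); // True
    }
}
```

This keeps the developer's original goal (one identifier instance per entity) while bounding the lifetime of the de-duplicated strings.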

Accepted Answer

The CLR uses the LOH to preallocate a few objects (such as the array used for interned strings). Some of these are less than 85000 bytes and thus would not normally be allocated on the LOH.

It is an implementation detail, but I assume the reason for this is to avoid unnecessary garbage collection of instances that are supposed to survive as long as the process itself.

Also, due to a somewhat esoteric optimization, any double[] of 1000 or more elements is allocated on the LOH.
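Both rules are easy to observe from code, because LOH objects report as the oldest generation immediately after allocation. A small sketch (note the double[] threshold of 1000 elements applies to the 32-bit CLR; on a 64-bit runtime only the 85,000-byte rule shows, since doubles are already naturally aligned there):

```csharp
using System;

class LohCheck
{
    static void Main()
    {
        // Over ~85,000 bytes: allocated on the LOH, so it reports as the
        // oldest generation straight away, on any CLR.
        Console.WriteLine(GC.GetGeneration(new byte[200000])); // 2

        // Well under the threshold: ordinary gen-0 allocation.
        Console.WriteLine(GC.GetGeneration(new byte[1000]));   // 0

        // Only 8,000 bytes, yet on a 32-bit CLR this also prints 2 because
        // of the double[] alignment optimization described above.
        Console.WriteLine(GC.GetGeneration(new double[1000]));
    }
}
```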