What causes the jobb tool to throw a "FAT Full" IOException?

2023-09-07 12:35:01 Author: 我爱的少年他如歌—EXO

I am trying to use the Android jobb tool to create a large OBB file for my application, but I am plagued by the "FAT Full" IOException:

java.io.IOException: FAT Full (XXXX, YYYY)
    at de.waldheinz.fs.fat.Fat.allocNew(Fat.java:298)
    at de.waldheinz.fs.fat.Fat.allocAppend(Fat.java:376)
    at de.waldheinz.fs.fat.ClusterChain.setChainLength(ClusterChain.java:175)
    at de.waldheinz.fs.fat.ClusterChain.setSize(ClusterChain.java:132)
    at de.waldheinz.fs.fat.FatFile.setLength(FatFile.java:91)
    at de.waldheinz.fs.fat.FatFile.write(FatFile.java:154)
    at com.android.jobb.Main$1.processFile(Main.java:495)
    at com.android.jobb.Main.processAllFiles(Main.java:604)
    at com.android.jobb.Main.processAllFiles(Main.java:600)
    at com.android.jobb.Main.main(Main.java:417)
Exception in thread "main" java.lang.RuntimeException: Error getting/writing file with name: LAST_PROCESSED_FILE
    at com.android.jobb.Main$1.processFile(Main.java:501)
    at com.android.jobb.Main.processAllFiles(Main.java:604)
    at com.android.jobb.Main.processAllFiles(Main.java:600)
    at com.android.jobb.Main.main(Main.java:417)

In the above error message, XXXX is always printed as exactly one integral value lower than YYYY and represents the number of usable "clusters" (I'm not versed enough in storage jargon to know exactly what this means). YYYY represents the last successfully allocated cluster index, which in my experience is always the same as the last usable cluster index (the array is sized at XXXX + 2, so XXXX + 1, which is the same as YYYY, is the last usable index).
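
If I understand the FAT layout correctly, the "+2" comes from entries 0 and 1 of the allocation table being reserved, so a volume with N usable data clusters has a table of N + 2 entries whose last valid index is N + 1. A minimal sketch of that arithmetic (my own illustration, not the library's code):

    // Illustrative FAT index arithmetic; not the fat32-lib source.
    // Entries 0 and 1 of a FAT are reserved, so a volume with N usable
    // data clusters has a table of N + 2 entries.
    public class FatIndexMath {
        public static void main(String[] args) {
            int usableClusters = 16_363;           // hypothetical XXXX
            int fatEntries = usableClusters + 2;   // indices 0 .. XXXX + 1
            int lastUsableIndex = fatEntries - 1;  // XXXX + 1, i.e. YYYY
            System.out.println("FAT entries:       " + fatEntries);
            System.out.println("Last usable index: " + lastUsableIndex);
            // "FAT Full (XXXX, YYYY)" would then mean the allocator scanned
            // up to lastUsableIndex without finding a free cluster.
        }
    }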

The crash seems to occur at the point where the total file size exceeds 511 MB (the actual limit is 536,193,820 bytes; a single byte more causes the overflow!), so LAST_PROCESSED_FILE is rather arbitrary, but it lists the file being processed when the crash occurred. Given that the storage format is FAT16 (from what I've been told), shouldn't the maximum file size then be 2 GB?
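
A back-of-the-envelope check (assuming 32 KB clusters, the largest size FAT16 commonly allows) shows both why I expected 2 GB and why the observed cap looks like a 512 MiB volume minus filesystem overhead:

    // Back-of-the-envelope FAT16 capacity check; assumes 32 KB clusters.
    public class Fat16Capacity {
        public static void main(String[] args) {
            final long CLUSTER = 32 * 1024L;       // assumed cluster size
            final long MAX_CLUSTERS = 65_524L;     // FAT16 data-cluster ceiling

            long fat16Max = MAX_CLUSTERS * CLUSTER;  // ~2 GB, the expected limit
            long cap512 = 512L * 1024 * 1024;        // 536,870,912 bytes
            long observed = 536_193_820L;            // the limit hit in practice

            System.out.printf("FAT16 theoretical max: %,d bytes%n", fat16Max);
            System.out.printf("512 MiB volume:        %,d bytes%n", cap512);
            // The ~661 KB shortfall is plausibly the FAT tables, root
            // directory, and reserved sectors of a 512 MiB volume.
            System.out.printf("Overhead at the cap:   %,d bytes%n", cap512 - observed);
        }
    }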

I have read in various sources that empty or small directories or files, a small total file size, or individual files over 500 MB within the directory can cause this crash (though I have not been able to determine why). None of these causes applies to my case (which, again, is based on total file size).

My own review of the jobb tool source has not provided any insight. Can anyone please shed any light on this issue?

Recommended answer

It turns out that a lot of the issues with the jobb tool are related to the FAT filesystem library it uses, which incorrectly determines the maximum size of a FAT16 storage unit to be < 512 MB (while in reality it should be 2 GB).
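
To illustrate the kind of ceiling check involved (the class and method names here are hypothetical, not the actual fat32-lib source), the fix boils down to raising the FAT16 limit from 512 MB to the true ~2 GB:

    // Hypothetical sketch of the threshold fix described above;
    // names are illustrative, not the actual fat32-lib source.
    public class Fat16Threshold {
        // The buggy check effectively capped FAT16 volumes at 512 MB:
        //   if (volumeBytes >= 512L * 1024 * 1024) throw ...
        // The corrected ceiling is 65,524 clusters of 32 KB, i.e. ~2 GB.
        static final long FAT16_MAX = 65_524L * 32 * 1024;

        static void checkFat16Size(long volumeBytes) {
            if (volumeBytes > FAT16_MAX) {
                throw new IllegalArgumentException(
                        "volume too large for FAT16: " + volumeBytes);
            }
        }

        public static void main(String[] args) {
            checkFat16Size(600L * 1024 * 1024);  // rejected by the old cap, fine now
            System.out.println("600 MB volume accepted as FAT16");
        }
    }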

By modifying the FAT library, I was able to successfully build OBB files over 512 MB with the jobb tool. This is also related to why OBB files under 4 MB are invalid. The jobb tool source should be updated as well, since the expected file system should always be FAT16: small volumes should be fine, and it should only run into trouble with more than 2 GB worth of data.
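
The 4 MB floor also falls out of FAT arithmetic: below 4,085 data clusters a volume is FAT12, not FAT16. Assuming 1 KB clusters (my guess, since it lands the floor right around jobb's 4 MB minimum):

    // FAT12/FAT16 boundary arithmetic; the 1 KB cluster size is an
    // assumption, chosen because it puts the floor near jobb's 4 MB.
    public class Fat16Minimum {
        public static void main(String[] args) {
            final long MIN_FAT16_CLUSTERS = 4_085;  // below this, it's FAT12
            final long CLUSTER = 1024;              // assumed 1 KB clusters
            long floor = MIN_FAT16_CLUSTERS * CLUSTER;
            System.out.printf("Smallest FAT16 volume: %,d bytes (~4 MB)%n", floor);
        }
    }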

I will be reporting this as a bug in the FAT library and an issue in the jobb tool.