Knapsack with selections from different groups

2023-09-11 23:18:44 Author: 落花浮王杯

I have a variation on the Knapsack Problem that I'm struggling to find an efficient solution for.

Let's say you have multiple groups of items. Each group can have an arbitrary number of items, each with a value and weight. The problem is to find the set of items with maximum total value, weight < some limit, and (the tricky part) only sets that include one item from EVERY group are valid.

That is, imagine you have hundreds of items to pick from, but you must take one sandwich, one beverage, one snack, one flashlight, etc. It's not just that you can't take more than one item from any group: if there are g groups, you must end the day with exactly g items in total.

It seems like this should be faster to solve than the basic problem, because so many combinations are invalid, but I'm struggling to find a solution.

Recommended answer

For integer weights and a not-too-large limit, you can apply the usual dynamic programming approach (slightly modified).

Use a pair of arrays that map every possible weight to a value. One of these arrays (A) holds the result for the groups that have already been processed. The other array (B) receives sums of a value from the first array and a value from an item of the group currently being processed. When moving from one group to the next, swap these arrays and clear array B. At the end (as usual) you get the answer as the largest value in array B.
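
Here is a minimal sketch of that idea in Python (my own illustration, not code from the answer). It assumes integer weights, items given as (weight, value) pairs, and uses -inf to mark weights that no valid partial selection can reach, which enforces the "exactly one item from every group" constraint automatically:

```python
def grouped_knapsack(groups, weight_limit):
    """Max total value taking exactly one item from EVERY group,
    with total weight <= weight_limit.

    groups: list of lists of (weight, value) pairs, integer weights.
    Returns the best achievable value, or None if no valid set exists.
    """
    NEG = float("-inf")
    # A[w] = best value using exactly one item from each processed group,
    #        at total weight exactly w; -inf marks unreachable weights.
    A = [NEG] * (weight_limit + 1)
    A[0] = 0
    for group in groups:
        B = [NEG] * (weight_limit + 1)  # cleared array for this group
        for w, base in enumerate(A):
            if base == NEG:
                continue  # no valid selection reaches this weight
            for item_w, item_v in group:
                nw = w + item_w
                if nw <= weight_limit and base + item_v > B[nw]:
                    B[nw] = base + item_v
        A = B  # "swap": A now covers this group as well
    best = max(A)
    return best if best != NEG else None


groups = [
    [(3, 5), (4, 6)],   # e.g. sandwiches: (weight, value)
    [(1, 2), (2, 8)],   # e.g. beverages
]
print(grouped_knapsack(groups, 6))   # -> 14, from (4, 6) + (2, 8)
```

(In Python, reassigning A = B and allocating a fresh B on the next iteration is equivalent to the swap-and-clear described above.)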

The asymptotic complexity is the same as for the usual dynamic programming algorithm. But your intuition that this should be faster to solve than the basic problem is somewhat true: each item within a group can be processed independently of the others, so this modified algorithm parallelizes better.
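
To illustrate that claim (again just a sketch under the same assumptions), the contribution of one item to B depends only on A, never on the other items of its group, so each item's candidate array can be computed independently and the results merged with an element-wise max:

```python
from concurrent.futures import ProcessPoolExecutor

NEG = float("-inf")

def item_candidates(args):
    """B-contribution of a single item: shift A by the item's weight
    and add its value. Depends only on A, not on other items."""
    A, (item_w, item_v), weight_limit = args
    out = [NEG] * (weight_limit + 1)
    for w in range(weight_limit + 1 - item_w):
        if A[w] != NEG:
            out[w + item_w] = A[w] + item_v
    return out

def process_group_parallel(A, group, weight_limit):
    """One A -> B step, computing per-item arrays in parallel
    (assumes a non-empty group)."""
    with ProcessPoolExecutor() as pool:
        parts = list(pool.map(item_candidates,
                              [(A, item, weight_limit) for item in group]))
    # Element-wise max merges the independent per-item results into B.
    return [max(col) for col in zip(*parts)]
```

In CPython the process overhead only pays off for large groups; the point is the dependency structure: nothing inside a group depends on the group's other items, and only the A-to-B step between groups is inherently sequential.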