Paper Title

Contextual Reserve Price Optimization in Auctions via Mixed-Integer Programming

Authors

Joey Huchette, Haihao Lu, Hossein Esfandiari, Vahab Mirrokni

Abstract

We study the problem of learning a linear model to set the reserve price in an auction, given contextual information, in order to maximize expected revenue from the seller side. First, we show that it is not possible to solve this problem in polynomial time unless the \emph{Exponential Time Hypothesis} fails. Second, we present a strong mixed-integer programming (MIP) formulation for this problem, which is capable of exactly modeling the nonconvex and discontinuous expected reward function. Moreover, we show that this MIP formulation is ideal (i.e. the strongest possible formulation) for the revenue function of a single impression. Since it can be computationally expensive to exactly solve the MIP formulation in practice, we also study the performance of its linear programming (LP) relaxation. Though it may work well in practice, we show that, unfortunately, in the worst case the optimal objective of the LP relaxation can be O(number of samples) times larger than the optimal objective of the true problem. Finally, we present computational results, showcasing that the MIP formulation, along with its LP relaxation, are able to achieve superior in- and out-of-sample performance, as compared to state-of-the-art algorithms on both real and synthetic datasets. More broadly, we believe this work offers an indication of the strength of optimization methodologies like MIP to exactly model intrinsic discontinuities in machine learning problems.
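To make the discontinuity mentioned in the abstract concrete, below is a minimal Python sketch of the per-impression revenue function and the empirical objective of fitting a linear reserve-price rule. It assumes the standard second-price-auction-with-reserve setting this line of work studies, and all function names and the tiny synthetic data are hypothetical; this is an illustration of the reward function being modeled, not the paper's MIP formulation.

```python
import numpy as np

def impression_revenue(reserve, highest_bid, second_bid):
    """Seller revenue for one impression in a second-price auction with a
    reserve price: the second-highest bid if the reserve is below it, the
    reserve itself if it lands between the two bids, and zero (no sale)
    if it exceeds the highest bid. The downward jump at reserve == highest_bid
    makes the objective discontinuous and nonconvex in the reserve."""
    if reserve <= second_bid:
        return second_bid
    if reserve <= highest_bid:
        return reserve
    return 0.0

def empirical_revenue(w, X, highest_bids, second_bids):
    """Average revenue of the linear reserve rule r_i = <w, x_i> over a
    logged sample of impressions (X is an n x d feature matrix)."""
    reserves = X @ w
    return np.mean([
        impression_revenue(r, h, s)
        for r, h, s in zip(reserves, highest_bids, second_bids)
    ])

# Hypothetical data: two impressions with a single contextual feature.
X = np.array([[1.0], [2.0]])
highest_bids = np.array([5.0, 8.0])
second_bids = np.array([2.0, 3.0])
print(empirical_revenue(np.array([2.5]), X, highest_bids, second_bids))
# Reserves are 2.5 and 5.0, both between the two bids, so revenue is (2.5 + 5.0) / 2 = 3.75.
```

The jump from `highest_bid` down to zero once the reserve overshoots the highest bid is exactly the kind of intrinsic discontinuity the paper's MIP formulation encodes exactly with binary variables, and which its LP relaxation can only approximate.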
