Paper Title
Evaluating Stochastic Rankings with Expected Exposure
Paper Authors
Paper Abstract
We introduce the concept of \emph{expected exposure} as the average attention ranked items receive from users over repeated samples of the same query. Furthermore, we advocate for the adoption of the principle of equal expected exposure: given a fixed information need, no item should receive more or less expected exposure than any other item of the same relevance grade. We argue that this principle is desirable for many retrieval objectives and scenarios, including topical diversity and fair ranking. Leveraging user models from existing retrieval metrics, we propose a general evaluation methodology based on expected exposure and draw connections to related metrics in information retrieval evaluation. Importantly, this methodology relaxes classic information retrieval assumptions, allowing a system, in response to a query, to produce a \emph{distribution over rankings} instead of a single fixed ranking. We study the behavior of the expected exposure metric and stochastic rankers across a variety of information access conditions, including \emph{ad hoc} retrieval and recommendation. We believe that measuring and optimizing expected exposure metrics using randomization opens a new area for retrieval algorithm development and progress.
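The core idea above, that expected exposure averages position-based attention over a distribution of rankings, can be sketched in a few lines. The geometric (RBP-style) browsing model and the parameter `gamma` below are illustrative assumptions, not the paper's exact formulation:

```python
def position_exposure(pos, gamma=0.5):
    # Assumed RBP-style browsing model: attention decays
    # geometrically with 0-indexed rank position.
    return gamma ** pos

def expected_exposure(ranking_distribution, items, gamma=0.5):
    """Average exposure each item receives under a distribution over rankings.

    ranking_distribution: list of (probability, ranking) pairs,
    where each ranking is a list of item ids.
    """
    exposure = {item: 0.0 for item in items}
    for prob, ranking in ranking_distribution:
        for pos, item in enumerate(ranking):
            exposure[item] += prob * position_exposure(pos, gamma)
    return exposure

# Two equally relevant items: a deterministic ranker always puts "a" first,
# while a stochastic ranker flips a fair coin between the two orderings.
deterministic = [(1.0, ["a", "b"])]
stochastic = [(0.5, ["a", "b"]), (0.5, ["b", "a"])]

print(expected_exposure(deterministic, ["a", "b"]))  # {'a': 1.0, 'b': 0.5}
print(expected_exposure(stochastic, ["a", "b"]))     # {'a': 0.75, 'b': 0.75}
```

The stochastic ranker satisfies the equal-expected-exposure principle for these two equally relevant items (both receive 0.75), while the deterministic ranker systematically favors "a".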