Title
Sometimes size does not matter
Authors
Abstract
Cosmological fine-tuning has traditionally been associated with the narrowness of the intervals in which the parameters of physical models must be located to make life possible. A more thorough approach focuses on the probability of the interval, not on its size. Most attempts to measure the probability of the life-permitting interval for a given parameter rely on a Bayesian statistical approach in which the prior distribution of the parameter is uniform. However, the parameters in these models often take values in spaces of infinite size, so a uniformity assumption is not possible. This is known as the normalization problem. This paper presents a framework for measuring tuning that, among other things, deals with normalization by assuming that the prior distribution belongs to a class of maximum entropy (maxent) distributions. By analyzing an upper bound of the tuning probability for this class of distributions, the method addresses the so-called weak anthropic principle and offers a solution, at least in this context, to the well-known lack of invariance of maxent distributions. The implication of this approach is that, since all mathematical models need parameters, tuning is not only a question of natural science but also a problem of mathematical modeling. Cosmological tuning is thus a particular instantiation of a more general scenario. Therefore, whenever a mathematical model is used to describe nature, not only in physics but in all of science, tuning is present. And whether the tuning is fine or coarse for a given parameter -- that is, whether the interval in which the parameter is located has low or high probability, respectively -- depends crucially not only on the interval but also on the assumed class of prior distributions. Novel upper bounds for tuning probabilities are presented.
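The abstract's central point, that an interval's probability depends on the assumed prior class rather than on its width alone, can be sketched with a minimal, hypothetical example. Assuming a parameter on [0, ∞) with a known mean, the maxent prior is the exponential distribution; the numbers below (mean 1.0, intervals [0, 1] and [9, 10]) are illustrative choices, not values from the paper.

```python
import math

def exp_interval_prob(a, b, mean):
    """P(a <= X <= b) for X ~ Exponential(1/mean), the maxent
    distribution on [0, inf) with a fixed mean (an assumed prior,
    chosen here only for illustration)."""
    lam = 1.0 / mean
    return math.exp(-lam * a) - math.exp(-lam * b)

# Two intervals of identical width 1.0 under the same maxent prior:
p_near = exp_interval_prob(0.0, 1.0, mean=1.0)   # near the origin
p_far = exp_interval_prob(9.0, 10.0, mean=1.0)   # far in the tail

# Equal size, yet vastly different probabilities: whether tuning is
# "fine" or "coarse" depends on the prior class, not just interval width.
print(p_near, p_far)
```

Under this prior the near interval carries most of the probability mass while the equally wide tail interval is nearly negligible, which is the sense in which "sometimes size does not matter."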