arXiv Journal 2021-01-17 with focus on Stefano Carrazza

  • hep-ph: 1 paper
  • quant-ph: 1 paper

Let us also focus on the studies of Stefano Carrazza, whose papers can be found on his arXiv author page. This productive physicist contributes a great deal to the codes and programs used in QCD global analyses. I will give a list of his works, especially those I did not know about yet.



hep-ph: 1 paper

Title: Test of the 4-th quark generation from the Cabibbo-Kobayashi-Maskawa matrix [arXiv:2101.05386]

Abstract: The structure of the mixing matrix in the electroweak quark sector with four generations of quarks is investigated. We conclude that the area of the unitarity quadrangle is not a good choice as a possible measure of the CP violation. We analyze how the existence of the 4-th quark family may influence the values of the Cabibbo-Kobayashi-Maskawa matrix of the known quarks, and we propose a test of the existence of the 4-th generation.

Comments: Interesting phenomenological study. Let us follow their logic and see what we can learn.
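
For my own reference (standard three-generation facts, not taken from the paper): in the Standard Model every unitarity triangle has the same area, fixed by the Jarlskog invariant,

```latex
J = \mathrm{Im}\left( V_{us} V_{cb} V_{ub}^{*} V_{cs}^{*} \right),
\qquad
\text{Area of any unitarity triangle} = \frac{J}{2},
```

so a single area is a sensible measure of CP violation. With four generations the 4x4 mixing matrix carries six angles and three independent CP phases, so one quadrangle area cannot capture CP violation by itself, which is presumably the point behind the authors' conclusion.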



quant-ph: 1 paper

Title: Wilson Loops and Area Laws in Lattice Gauge Theory Tensor Networks [arXiv:2101.05289]

Abstract: Tensor network states have been a very prominent tool for the study of quantum many-body physics, thanks to their physically relevant entanglement properties and their ability to encode symmetries. In the last few years, the formalism has been extended and applied to theories with local symmetries - lattice gauge theories. In the contraction of tensor network states as well as correlation functions of physical observables with respect to them, one uses the so-called transfer operator, whose local properties dictate the long-range behaviour of the state. In this work we study transfer operators of tensor network states (in particular, PEPS - projected entangled pair states) in the context of lattice gauge theories, and consider the implications of the local symmetry on their structure and properties. We focus on the Wilson loop - a nonlocal, gauge-invariant observable which is central to pure gauge theories, whose long-range decay behaviour probes the confinement or deconfinement of static charges. Using the symmetry, we show how to handle its contraction, and formulate conditions relating local properties to its decay fashion.

Comments: Interesting.
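
As a reminder of why the Wilson loop matters here (textbook lattice gauge theory, not specific to this paper): its decay with the size of the loop C distinguishes the phases,

```latex
\langle W(C) \rangle \sim e^{-\sigma A(C)} \quad \text{(area law: confinement of static charges)},
\qquad
\langle W(C) \rangle \sim e^{-\mu P(C)} \quad \text{(perimeter law: deconfinement)},
```

with A(C) the minimal area spanned by the loop and P(C) its perimeter. The paper relates this decay behaviour to local properties of the PEPS transfer operator.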



Stefano Carrazza

Title: Determining the proton content with a quantum computer [arXiv:2011.13934]

Abstract: We present a first attempt to design a quantum circuit for the determination of the parton content of the proton through the estimation of parton distribution functions (PDFs), in the context of high energy physics (HEP). The growing interest in quantum computing and the recent developments of new algorithms and quantum hardware devices motivate the study of methodologies applied to HEP. In this work we identify architectures of variational quantum circuits suitable for PDFs representation (qPDFs). We show experiments about the deployment of qPDFs on real quantum devices, taking into consideration current experimental limitations. Finally, we perform a global qPDF determination from LHC data using quantum computer simulation on classical hardware and we compare the obtained partons and related phenomenological predictions involving hadronic processes to modern PDFs.

Comments:

  1. What is a quantum circuit? Why is it necessary (or even useful) to apply a quantum computer to determining PDFs, and how is the job done? See the toy sketch below.
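
As a warm-up to question 1, here is a minimal numpy toy of the idea "parametrize a function of x with a variational circuit and fit the angles": a single qubit is rotated by angles that mix x with trainable parameters, and the expectation value of Z is mapped to a positive, PDF-like value. This is my own illustrative sketch, not the ansatz or the code of arXiv:2011.13934.

```python
import numpy as np

# Single-qubit gates: Pauli-Z and rotations about the y and x axes.
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]], dtype=complex)

def expval_z(x, theta):
    """<Z> after a two-layer variational circuit whose angles mix the
    trainable parameters theta with the kinematic input x."""
    a, b, c, d = theta
    state = np.array([1.0, 0.0], dtype=complex)   # start in |0>
    state = ry(a * np.log(x) + b) @ state          # x-dependent layer
    state = rx(c * x + d) @ state                  # second layer
    return float(np.real(state.conj() @ Z @ state))

def toy_qpdf(x, theta):
    """Map <Z> in [-1, 1] to a positive, PDF-like value (toy choice)."""
    z = expval_z(x, theta)
    return (1.0 - z) / (1.0 + z + 1e-9)

# Evaluate the toy parametrization on a few x values.
theta = np.array([0.7, -0.3, 1.1, 0.2])
for x in (1e-3, 1e-2, 1e-1, 0.5):
    print(f"x = {x:8.3g}   toy_qpdf = {toy_qpdf(x, theta):.4f}")
```

Fitting theta so that the circuit reproduces a known parton distribution, and eventually running the circuit on hardware, is then an optimization problem over the angles, which is roughly the program of the paper.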

Title: VegasFlow: accelerating Monte Carlo simulation across platforms [arXiv:2010.09341]

Abstract: In this work we demonstrate the usage of the VegasFlow library in multi-device situations: multi-GPU in one single node and multi-node in a cluster. VegasFlow is a new software for the fast evaluation of highly parallelizable integrals based on Monte Carlo integration. It is inspired by the Vegas algorithm, very often used as the driver of cross section integrations, and is based on Google's powerful TensorFlow library. In these proceedings we consider a typical multi-GPU configuration to benchmark how different batch sizes can increase (or decrease) the performance on a Leading Order example integration.

Comments:
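
A note to my future self on what "batch size" means here. Even a plain (non-adaptive) Monte Carlo integrator exposes the same knob: evaluate the integrand on batches of random points and accumulate. The sketch below is generic numpy, not the VegasFlow API; VegasFlow additionally adapts the sampling grid Vegas-style and runs the batches on GPUs via TensorFlow.

```python
import numpy as np

def mc_integrate(integrand, dim, n_calls, batch_size, rng=None):
    """Plain Monte Carlo estimate of an integral over the unit hypercube,
    evaluated in batches (the knob benchmarked in arXiv:2010.09341)."""
    rng = rng or np.random.default_rng(0)
    total, total_sq, n_done = 0.0, 0.0, 0
    while n_done < n_calls:
        n = min(batch_size, n_calls - n_done)
        x = rng.random((n, dim))          # one batch of points in [0,1]^dim
        f = integrand(x)                  # vectorized evaluation, shape (n,)
        total += f.sum()
        total_sq += (f ** 2).sum()
        n_done += n
    mean = total / n_done
    err = np.sqrt((total_sq / n_done - mean ** 2) / n_done)
    return mean, err

# Toy "cross section": a peaked integrand in 4 dimensions.
def integrand(x):
    return np.exp(-50.0 * ((x - 0.5) ** 2).sum(axis=1))

for batch_size in (10_000, 100_000, 1_000_000):
    val, err = mc_integrate(integrand, dim=4, n_calls=2_000_000,
                            batch_size=batch_size)
    print(f"batch {batch_size:>9,d}: I = {val:.6f} +- {err:.6f}")
```

With the number of calls fixed, the estimate itself does not depend on the batch size; what changes on a GPU is the throughput, which is what the proceedings benchmark.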


Title: PineAPPL: combining EW and QCD corrections for fast evaluation of LHC processes [arXiv:2008.12789]

Abstract: We introduce PineAPPL, a library that produces fast-interpolation grids of physical cross sections, computed with a general-purpose Monte Carlo generator, accurate to fixed order in the strong, electroweak, and combined strong-electroweak couplings. We demonstrate this unique ability, that distinguishes PineAPPL from similar software available in the literature, by interfacing it to MadGraph5_aMC@NLO. We compute fast-interpolation grids, accurate to next-to-leading order in the strong and electroweak couplings, for a representative set of LHC processes for which EW corrections may have a sizeable effect on the accuracy of the corresponding theoretical predictions. We formulate a recommendation on the format of the experimental deliverables in order to consistently compare them with computations that incorporate EW corrections, and specifically to determine parton distribution functions to the same accuracy.

Comments:
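
A rough picture of what an interpolation grid buys you (my reading, a schematic toy, not the PineAPPL file format or API): the generator fills a grid of weights indexed by momentum-fraction nodes and perturbative order, and a prediction for any PDF set and coupling values is then a cheap sum, without rerunning the Monte Carlo.

```python
import numpy as np

# Toy interpolation grid: weights indexed by (order, x1 node, x2 node).
# The order labels schematically stand for coupling powers, e.g.
# 0 -> LO QCD, 1 -> NLO QCD, 2 -> NLO EW (labels are illustrative only).
x_nodes = np.geomspace(1e-4, 1.0, 30)
rng = np.random.default_rng(1)
grid = rng.random((3, x_nodes.size, x_nodes.size)) * 1e-3

def toy_pdf(x):
    """Stand-in parton density (not a real PDF set)."""
    return x ** -0.2 * (1.0 - x) ** 3

def convolute(grid, pdf, couplings):
    """Prediction = sum over orders and grid nodes of
       weight * coupling factor * f(x1) * f(x2)."""
    f = pdf(x_nodes)
    lumi = np.outer(f, f)                 # f(x1) * f(x2) on the grid nodes
    return sum(c * (g * lumi).sum() for c, g in zip(couplings, grid))

alpha_s, alpha_ew = 0.118, 0.0078
couplings = [alpha_s ** 2, alpha_s ** 3, alpha_s ** 2 * alpha_ew]
print("toy cross section:", convolute(grid, toy_pdf, couplings))
```

As I understand it, the real library stores many such grids (per partonic channel, per order, per observable bin) in one file, so QCD and EW corrections can be convolved with any PDF set a posteriori.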


Title: The Prime state and its quantum relatives [arXiv:2005.02422]

Abstract: The Prime state of n qubits, |P_n⟩, is defined as the uniform superposition of all the computational-basis states corresponding to prime numbers smaller than 2^n. This state encodes, quantum mechanically, arithmetic properties of the primes. We first show that the Quantum Fourier Transform of the Prime state provides a direct access to Chebyshev-like biases in the distribution of prime numbers. We next study the entanglement entropy of |P_n⟩ up to n = 30 qubits, and find a relation between its scaling and the Shannon entropy of the density of square-free integers. This relation also holds when the Prime state is constructed using a qudit basis, showing that this property is intrinsic to the distribution of primes. The same feature is found when considering states built from the superposition of primes in arithmetic progressions. Finally, we explore the properties of other number-theoretical quantum states, such as those defined from odd composite numbers, square-free integers and starry primes. For this study, we have developed an open-source library that diagonalizes matrices using floats of arbitrary precision.

Comments: What is this paper all about? I want to know more; a small construction sketch follows below.
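
To get a feel for the object, here is a self-contained sketch that builds |P_n⟩ for modest n and computes the entanglement entropy of a half-chain bipartition (my own toy with standard linear algebra, not the arbitrary-precision library of the paper).

```python
import numpy as np

def primes_below(limit):
    """Simple sieve of Eratosthenes."""
    sieve = np.ones(limit, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return np.flatnonzero(sieve)

def prime_state(n):
    """Uniform superposition of the computational-basis states labelled
    by primes smaller than 2**n."""
    amps = np.zeros(2 ** n)
    amps[primes_below(2 ** n)] = 1.0
    return amps / np.linalg.norm(amps)

def entanglement_entropy(state, n, n_a):
    """Von Neumann entropy of the reduced state of the n_a most
    significant qubits of the integer label."""
    psi = state.reshape(2 ** n_a, 2 ** (n - n_a))
    s = np.linalg.svd(psi, compute_uv=False)     # Schmidt coefficients
    p = s ** 2
    p = p[p > 1e-15]
    return float(-(p * np.log2(p)).sum())

n = 12
psi = prime_state(n)
print(f"|P_{n}> built from {np.count_nonzero(psi)} primes,"
      f" half-chain entropy = {entanglement_entropy(psi, n, n // 2):.4f} bits")
```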


Title: Can New Physics hide inside the proton? [arXiv:1905.05215]

Abstract: Modern global analyses of the structure of the proton include collider measurements which probe energies well above the electroweak scale. While these provide powerful constraints on the parton distribution functions (PDFs), they are also sensitive to beyond the Standard Model (BSM) dynamics if these affect the fitted distributions. Here we present a first simultaneous determination of the PDFs and BSM effects from deep-inelastic structure function data by means of the NNPDF framework. We consider representative four-fermion operators from the SM Effective Field Theory (SMEFT), quantify to which extent their effects modify the fitted PDFs, and assess how the resulting bounds on the SMEFT degrees of freedom are modified. Our results demonstrate how BSM effects that might otherwise be reabsorbed into the PDFs can be systematically disentangled.

Comments: Given the title, this paper should be interesting.

  1. Let us learn how the study is done. The abstract says they perform a simultaneous NNPDF-style global fit of the PDFs and the new-physics (SMEFT) effects; see the toy fit sketched below.
  2. The striking punchline is the last sentence of the abstract.
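
A toy version of point 1, just to fix the logic of a simultaneous fit (entirely my own schematic, nothing like the NNPDF/SMEFT machinery): fit a flexible "PDF-like" shape together with one BSM coefficient to pseudo-data, and compare with a fit where the coefficient is frozen to zero, to see how the extra freedom gets reabsorbed.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
x = np.linspace(0.05, 0.9, 40)

# Pseudo-data: a smooth "structure function" distorted by a small BSM term
# growing with x (mimicking a contact-interaction correction at high energy).
def truth(x, a, p, c_bsm):
    return a * x ** -0.3 * (1 - x) ** p * (1 + c_bsm * x ** 2)

y = truth(x, 1.0, 3.0, 0.15) * (1 + 0.02 * rng.standard_normal(x.size))
sigma = 0.02 * y

# Fit 1: shape parameters and the BSM coefficient together.
def model(x, a, p, c_bsm):
    return a * x ** -0.3 * (1 - x) ** p * (1 + c_bsm * x ** 2)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 3.0, 0.0], sigma=sigma)
print("simultaneous fit:  c_bsm = %.3f +- %.3f" % (popt[2], pcov[2, 2] ** 0.5))

# Fit 2: freeze c_bsm = 0; the shape parameter p tries to absorb the effect.
popt0, _ = curve_fit(lambda x, a, p: model(x, a, p, 0.0), x, y,
                     p0=[1.0, 3.0], sigma=sigma)
print("SM-only fit:       (1-x) exponent shifts from %.3f to %.3f"
      % (popt[1], popt0[1]))
```

In the toy, freezing the BSM coefficient shifts the shape parameter instead, which is the "reabsorbed into the PDFs" effect the abstract warns about.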

Title: Parton Distributions with Theory Uncertainties: General Formalism and First Phenomenological Studies [arXiv:1906.10698]

Abstract: We formulate a general approach to the inclusion of theoretical uncertainties, specifically those related to the missing higher order uncertainty (MHOU), in the determination of parton distribution functions (PDFs). We demonstrate how, under quite generic assumptions, theory uncertainties can be included as an extra contribution to the covariance matrix when determining PDFs from data. We then review, clarify, and systematize the use of renormalization and factorization scale variations as a means to estimate MHOUs consistently in deep inelastic and hadronic processes. We define a set of prescriptions for constructing a theory covariance matrix using scale variations, which can be used in global fits of data from a wide range of different processes, based on choosing a set of independent scale variations suitably correlated within and across processes. We set up an algebraic framework for the choice and validation of an optimal prescription by comparing the estimate of MHOU encoded in the next-to-leading order (NLO) theory covariance matrix to the observed shifts between NLO and NNLO predictions. We perform a NLO PDF determination which includes the MHOU, assess the impact of the inclusion of MHOUs on the PDF central values and uncertainties, and validate the results by comparison to the known shift between NLO and NNLO PDFs. We finally study the impact of the inclusion of MHOUs in a global PDF determination on LHC cross-sections, and provide guidelines for their use in precision phenomenology. In addition, we also compare the results based on the theory covariance matrix formalism to those obtained by performing PDF determinations based on different scale choices.

Comments: Let us first guess what they are doing here. They somehow estimate the MHOUs and then propagate those estimates into the PDF fit. The key points are how they estimate the MHOUs and how they use them; a sketch of the covariance-matrix construction follows below.
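
My guess can be made concrete: the central object is a theory covariance matrix built from shifts of the predictions under scale variation, added to the experimental covariance in the fit. A minimal numpy sketch of that construction (schematic normalization, not the paper's specific 3-, 7- or 9-point prescriptions):

```python
import numpy as np

def theory_covmat(central, variations):
    """Covariance estimate from scale-varied predictions.

    central    : array (n_data,) of central-scale predictions
    variations : array (n_var, n_data), one row per scale choice
    S_ij = (1/n_var) * sum_k delta_ki * delta_kj, with delta = varied - central
    """
    deltas = variations - central
    return deltas.T @ deltas / deltas.shape[0]

# Toy predictions for 5 data points with 8 scale variations around them.
rng = np.random.default_rng(3)
central = np.array([1.0, 2.0, 1.5, 0.8, 2.4])
variations = central * (1 + 0.05 * rng.standard_normal((8, central.size)))

S = theory_covmat(central, variations)
C_exp = np.diag((0.03 * central) ** 2)   # toy experimental covariance
C_total = C_exp + S                      # enters the chi2 of the PDF fit
print("theory uncertainty per point:", np.sqrt(np.diag(S)))
```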


Title: Machine Learning in High Energy Physics Community White Paper [arXiv:1807.02876]

Abstract: Machine learning has been applied to several problems in particle physics research, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in particle and event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas for machine learning in particle physics. We detail a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training the particle physics community in data science. The main objective of the document is to connect and motivate these areas of research and development with the physics drivers of the High-Luminosity Large Hadron Collider and future neutrino experiments and identify the resource needs for their implementation. Additionally we identify areas where collaboration with external communities will be of great benefit.

Comments: I am interested in the current status of machine-learning applications in particle physics.


Title: Sampling the Riemann-Theta Boltzmann Machine [arXiv:1804.07768]

Abstract: We show that the visible sector probability density function of the Riemann-Theta Boltzmann machine corresponds to a Gaussian mixture model consisting of an infinite number of component multi-variate Gaussians. The weights of the mixture are given by a discrete multi-variate Gaussian over the hidden state space. This allows us to sample the visible sector density function in a straight-forward manner. Furthermore, we show that the visible sector probability density function possesses an affine transform property, similar to the multi-variate Gaussian density.

Comments: Interesting! I want to know more; a generic mixture-sampling sketch follows below.
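
The sampling recipe described in the abstract is essentially that of any mixture model, so a generic sketch helps me remember it (my own toy with a finite mixture of ordinary Gaussians, not the Riemann-Theta machinery): draw the hidden component from its discrete distribution, then draw the visible vector from the corresponding multivariate Gaussian.

```python
import numpy as np

rng = np.random.default_rng(4)

# A small 2-component mixture in 2 visible dimensions, standing in for the
# infinite discrete-Gaussian-weighted mixture of the RTBM paper.
weights = np.array([0.3, 0.7])
means = np.array([[0.0, 0.0], [3.0, 1.0]])
covs = np.array([np.eye(2), [[1.0, 0.6], [0.6, 1.5]]])

def sample_mixture(n):
    """Two-step (ancestral) sampling: hidden component first, visible second."""
    comps = rng.choice(len(weights), size=n, p=weights)   # hidden sector
    samples = np.empty((n, 2))
    for k in np.unique(comps):
        idx = np.flatnonzero(comps == k)
        samples[idx] = rng.multivariate_normal(means[k], covs[k], size=idx.size)
    return samples

x = sample_mixture(10_000)
print("sample mean:", x.mean(axis=0))
print("expected   :", weights @ means)
```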


Title: Precision determination of the strong coupling constant within a global PDF analysis [arXiv:1802.03398]

Abstract: We present a determination of the strong coupling constant αs(mZ) based on the NNPDF3.1 determination of parton distributions, which for the first time includes constraints from jet production, top-quark pair differential distributions, and the Z pT distributions using exact NNLO theory. Our result is based on a novel extension of the NNPDF methodology - the correlated replica method - which allows for a simultaneous determination of αs and the PDFs with all correlations between them fully taken into account. We study in detail all relevant sources of experimental, methodological and theoretical uncertainty. At NNLO we find αs(mZ) = 0.1185 ± 0.0005 (exp) ± 0.0001 (meth), showing that methodological uncertainties are negligible. We conservatively estimate the theoretical uncertainty due to missing higher order QCD corrections (N3LO and beyond) from half the shift between the NLO and NNLO αs values, finding Δαs^th = 0.0011.

Comments: I have to know more about this topic; a quick note on the quoted error budget follows below.
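
As a quick sanity check on the quoted numbers (my own arithmetic; the theory error is defined in the abstract as half the NLO-to-NNLO shift, while the quadrature sum below is a naive combination, not necessarily the one the paper quotes):

```latex
\Delta\alpha_s^{\mathrm{th}} = \tfrac{1}{2}\left|\alpha_s^{\mathrm{NNLO}} - \alpha_s^{\mathrm{NLO}}\right| = 0.0011,
\qquad
\sqrt{0.0005^2 + 0.0001^2 + 0.0011^2} \approx 0.0012 .
```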


Title: Minimisation strategies for the determination of parton density functions [arXiv:1711.09991]

Abstract: We discuss the current minimisation strategies adopted by research projects involving the determination of parton distribution functions (PDFs) and fragmentation functions (FFs) through the training of neural networks. We present a short overview of a proton PDF determination obtained using the covariance matrix adaptation evolution strategy (CMA-ES) optimisation algorithm. We perform comparisons between the CMA-ES and the standard nodal genetic algorithm (NGA) adopted by the NNPDF collaboration.

Comments: I have to know more about this topic; it is important. A stripped-down evolution-strategy sketch follows below.
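
To remember what a CMA-ES-style optimizer actually does, here is a stripped-down (mu, lambda) evolution strategy in plain numpy applied to a toy chi2. It omits the covariance adaptation, so it is a simplification of the real CMA-ES, not the algorithm used in the paper.

```python
import numpy as np

def toy_chi2(theta):
    """Stand-in for a PDF-fit figure of merit (Rosenbrock-like valley)."""
    a, b = theta
    return (1 - a) ** 2 + 100 * (b - a ** 2) ** 2

def simple_es(loss, x0, sigma=0.3, pop=32, parents=8, iters=300, seed=5):
    """(mu, lambda) evolution strategy: sample a population around the mean,
    keep the best 'parents', recombine, and slowly shrink the step size."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    for _ in range(iters):
        points = mean + sigma * rng.standard_normal((pop, mean.size))
        losses = np.array([loss(p) for p in points])
        best = points[np.argsort(losses)[:parents]]
        mean = best.mean(axis=0)       # recombination of the best samples
        sigma *= 0.99                  # naive step-size schedule
    return mean, loss(mean)

theta, final = simple_es(toy_chi2, x0=[-1.0, 1.0])
print("minimum found at", np.round(theta, 3), "chi2 =", f"{final:.2e}")
```

The real CMA-ES also adapts the full covariance matrix of the search distribution, which is what makes it competitive on ill-conditioned landscapes such as a neural-network PDF fit; Hansen's `cma` package provides a reference implementation.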
