- hep-ph: 1 paper
- quant-ph: 1 paper
Let us also focus on the studies of Stefano Carrazza, whose papers can be found under his name on arXiv. This productive physicist contributes a lot of code and software to the field of QCD global analysis. I will give a list of his works, especially those I did not know yet.
hep-ph: 1 paper
Title: Test of the 4-th quark generation from the Cabibbo-Kobayashi-Maskawa matrix [arXiv:2101.05386]
Abstract: The structure of the mixing matrix in the electroweak quark sector with four generations of quarks is investigated. We conclude that the area of the unitarity quadrangle is not a good choice as a possible measure of the CP violation. We analyze how the existence of the 4-th quark family may influence the values of the Cabibbo-Kobayashi-Maskawa matrix of the known quarks and we propose a test of the existence of the 4-th generation.
Comments: Interesting phenomenological study. Let us follow their logic and see what we can learn.
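For my own reference (standard textbook material, not taken from the paper): unitarity of the 3x3 CKM matrix gives triangle relations whose area measures CP violation; with a fourth quark family (t', b') the d-b column relation picks up one extra term, and the triangle becomes the quadrangle the abstract refers to:

```latex
% Three generations: a unitarity triangle in the complex plane
V_{ud}V_{ub}^* + V_{cd}V_{cb}^* + V_{td}V_{tb}^* = 0
% With a fourth family (t', b') the same column product gains a term
% and the triangle becomes a quadrangle:
V_{ud}V_{ub}^* + V_{cd}V_{cb}^* + V_{td}V_{tb}^* + V_{t'd}V_{t'b}^* = 0
```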
quant-ph: 1 paper
Title: Wilson Loops and Area Laws in Lattice Gauge Theory Tensor Networks [arXiv:2101.05289]
Abstract: Tensor network states have been a very prominent tool for the study of quantum many-body physics, thanks to their physically relevant entanglement properties and their ability to encode symmetries. In the last few years, the formalism has been extended and applied to theories with local symmetries: lattice gauge theories. In the contraction of tensor network states as well as correlation functions of physical observables with respect to them, one uses the so-called transfer operator, whose local properties dictate the long-range behaviour of the state. In this work we study transfer operators of tensor network states (in particular, PEPS - projected entangled pair states) in the context of lattice gauge theories, and consider the implications of the local symmetry on their structure and properties. We focus on the Wilson loop - a nonlocal, gauge-invariant observable which is central to pure gauge theories, whose long range decay behaviour probes the confinement or deconfinement of static charges. Using the symmetry, we show how to handle its contraction, and formulate conditions relating local properties to its decay fashion.
Comments: Interesting.
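To remind myself how a transfer operator controls long-range decay, a generic 1D toy (not the paper's gauge-invariant PEPS construction; the bond dimension and the matrix itself are made up): correlators fall off as powers of the ratio of the two leading eigenvalues.

```python
import numpy as np

# Toy illustration: in a 1D tensor network, two-point correlators decay
# as (lambda_2 / lambda_1)^L, where lambda_i are the leading eigenvalues
# of the transfer operator T. Here T is just a random symmetric
# positive matrix standing in for a real transfer operator.
rng = np.random.default_rng(0)
D = 8                                    # hypothetical bond dimension
A = rng.normal(size=(D, D))
T = A @ A.T                              # symmetric, positive semi-definite
evals = np.sort(np.linalg.eigvalsh(T))[::-1]
xi = 1.0 / np.log(evals[0] / evals[1])   # correlation length
ratios = [(evals[1] / evals[0]) ** L for L in (1, 5, 10)]
print(xi, ratios)                        # decay governed by the spectral gap
```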
Stefano Carrazza
Title: Determining the proton content with a quantum computer [arXiv:2011.13934]
Abstract: We present a first attempt to design a quantum circuit for the determination of the parton content of the proton through the estimation of parton distribution functions (PDFs), in the context of high energy physics (HEP). The growing interest in quantum computing and the recent developments of new algorithms and quantum hardware devices motivates the study of methodologies applied to HEP. In this work we identify architectures of variational quantum circuits suitable for PDFs representation (qPDFs). We show experiments about the deployment of qPDFs on real quantum devices, taking into consideration current experimental limitations. Finally, we perform a global qPDF determination from LHC data using quantum computer simulation on classical hardware and we compare the obtained partons and related phenomenological predictions involving hadronic processes to modern PDFs.
Comments:
- What is a quantum circuit, why is it necessary to apply a quantum computer to the determination of PDFs, and how is this done?
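My own single-qubit caricature, to fix intuition (the paper uses multi-qubit variational ansätze on real hardware; the angle parameterization `w * log(x) + b` and the output map below are invented for illustration): encode the momentum fraction x into rotation angles, measure an expectation value, and map it to a positive, PDF-like number whose parameters are then fitted to data.

```python
import numpy as np

# Minimal caricature of a variational "qPDF" circuit (my own toy, not
# the paper's ansatz): x enters through a rotation angle, the circuit
# output is the <Z> expectation value, and a simple map makes it positive.
def ry(theta):
    """Single-qubit Ry rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def qpdf_value(x, w, b):
    """Hypothetical circuit output for momentum fraction x in (0, 1)."""
    psi = ry(w * np.log(x) + b) @ np.array([1.0, 0.0])  # act on |0>
    z_expect = abs(psi[0])**2 - abs(psi[1])**2          # <Z> measurement
    return (1.0 - z_expect) / (1.0 + z_expect + 1e-12)  # positive output

vals = [qpdf_value(x, w=0.8, b=0.3) for x in (1e-3, 1e-2, 1e-1)]
print(vals)
```

In a real fit, `w` and `b` would be the variational parameters tuned so the circuit output reproduces the measured PDF.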
Title: VegasFlow: accelerating Monte Carlo simulation across platforms [arXiv:2010.09341]
Abstract: In this work we demonstrate the usage of the VegasFlow library in multi-device situations: multi-GPU in one single node and multi-node in a cluster. VegasFlow is a new software for fast evaluation of highly parallelizable integrals based on Monte Carlo integration. It is inspired by the Vegas algorithm, very often used as the driver of cross section integrations, and based on Google's powerful TensorFlow library. In these proceedings we consider a typical multi-GPU configuration to benchmark how different batch sizes can increase (or decrease) the performance on a Leading Order example integration.
Comments:
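As a baseline for what Vegas-style importance sampling improves on, plain Monte Carlo integration in numpy (this does not use VegasFlow itself; the integrand is my own example): sample the hypercube uniformly, average the integrand, and read the statistical error off the sample variance.

```python
import numpy as np

# Plain (non-adaptive) Monte Carlo integration over the unit hypercube.
# Vegas improves on this by adapting the sampling density to the
# integrand; VegasFlow additionally parallelizes the sampling on GPUs.
rng = np.random.default_rng(42)

def mc_integrate(f, dim, n_samples):
    """Estimate integral of f over [0, 1]^dim with its statistical error."""
    x = rng.uniform(size=(n_samples, dim))
    fx = f(x)
    return fx.mean(), fx.std() / np.sqrt(n_samples)

# Example integrand: prod_i sin(pi x_i); exact value is (2/pi)^dim.
f = lambda x: np.prod(np.sin(np.pi * x), axis=1)
est, err = mc_integrate(f, dim=4, n_samples=100_000)
print(est, err, (2 / np.pi) ** 4)
```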
Title: PineAPPL: combining EW and QCD corrections for fast evaluation of LHC processes [arXiv:2008.12789]
Abstract: We introduce PineAPPL, a library that produces fast-interpolation grids of physical cross sections, computed with a general-purpose Monte Carlo generator, accurate to fixed order in the strong, electroweak, and combined strong-electroweak couplings. We demonstrate this unique ability, that distinguishes PineAPPL from similar software available in the literature, by interfacing it to MadGraph5_aMC@NLO. We compute fast-interpolation grids, accurate to next-to-leading order in the strong and electroweak couplings, for a representative set of LHC processes for which EW corrections may have a sizeable effect on the accuracy of the corresponding theoretical predictions. We formulate a recommendation on the format of the experimental deliverables in order to consistently compare them with computations that incorporate EW corrections, and specifically to determine parton distribution functions to the same accuracy.
Comments:
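My schematic picture of what an interpolation grid buys (not PineAPPL's actual format or API; the grid nodes, weights and couplings below are invented): the expensive Monte Carlo fills per-order weights once, and a prediction for any PDF is then a cheap sum over grid nodes, with the coupling powers kept separate so QCD and EW pieces can be recombined after the fact.

```python
import numpy as np

# Toy interpolation grid: weights are filled once by the generator,
# separately per coupling order, and predictions become fast sums.
rng = np.random.default_rng(1)
x_nodes = np.geomspace(1e-4, 1.0, 20)       # momentum-fraction grid
weights = {                                  # hypothetical per-order weights
    "as2": rng.normal(size=20) ** 2,         # O(alpha_s^2) contribution
    "as2_aew": rng.normal(size=20) ** 2,     # O(alpha_s^2 alpha_ew) EW part
}
pdf = lambda x: x ** -0.2 * (1 - x) ** 3     # hypothetical PDF shape
alpha_s, alpha_ew = 0.118, 0.0078

def prediction(pdf):
    """Cross section as a sum over nodes; no Monte Carlo rerun needed."""
    fx = pdf(x_nodes)
    return (alpha_s**2 * weights["as2"] @ fx
            + alpha_s**2 * alpha_ew * weights["as2_aew"] @ fx)

sigma = prediction(pdf)
print(sigma)
```

The point is that `prediction` never re-runs the generator: refitting a PDF only re-evaluates the sums, which is what makes grids usable inside a global fit.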
Title: The Prime state and its quantum relatives [arXiv:2005.02422]
Abstract: The Prime state of n qubits, |Pn〉, is defined as the uniform superposition of all the computational-basis states corresponding to prime numbers smaller than 2^n. This state encodes, quantum mechanically, arithmetic properties of the primes. We first show that the Quantum Fourier Transform of the Prime state provides a direct access to Chebyshev-like biases in the distribution of prime numbers. We next study the entanglement entropy of |Pn〉 up to n = 30 qubits, and find a relation between its scaling and the Shannon entropy of the density of square-free integers. This relation also holds when the Prime state is constructed using a qudit basis, showing that this property is intrinsic to the distribution of primes. The same feature is found when considering states built from the superposition of primes in arithmetic progressions. Finally, we explore the properties of other number-theoretical quantum states, such as those defined from odd composite numbers, square-free integers and starry primes. For this study, we have developed an open-source library that diagonalizes matrices using floats of arbitrary precision.
Comments: What is this paper all about? I want to know more.
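A small-n sketch of the construction (the paper pushes this to n = 30 with arbitrary-precision arithmetic; here plain numpy suffices): build the uniform superposition over primes below 2^n, then take its QFT, which is just the discrete Fourier transform of the amplitude vector.

```python
import numpy as np

# Build the Prime state |P_n>: uniform superposition over basis states
# whose index is a prime number below 2^n.
def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k**0.5) + 1))

n = 8
primes = [k for k in range(2**n) if is_prime(k)]
state = np.zeros(2**n)
state[primes] = 1.0 / np.sqrt(len(primes))   # normalised amplitudes

# The QFT of |P_n> is the unitary DFT of its amplitude vector; its
# structure encodes Chebyshev-like biases in the prime distribution.
qft = np.fft.fft(state) / np.sqrt(2**n)
print(len(primes), np.linalg.norm(state), np.linalg.norm(qft))
```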
Title: Can New Physics hide inside the proton? [arXiv:1905.05215]
Abstract: Modern global analyses of the structure of the proton include collider measurements which probe energies well above the electroweak scale. While these provide powerful constraints on the parton distribution functions (PDFs), they are also sensitive to beyond the Standard Model (BSM) dynamics if these affect the fitted distributions. Here we present a first simultaneous determination of the PDFs and BSM effects from deep-inelastic structure function data by means of the NNPDF framework. We consider representative four-fermion operators from the SM Effective Field Theory (SMEFT), quantify to which extent their effects modify the fitted PDFs, and assess how the resulting bounds on the SMEFT degrees of freedom are modified. Our results demonstrate how BSM effects that might otherwise be reabsorbed into the PDFs can be systematically disentangled.
Comments: Given the title, this paper should be interesting.
- Let us learn how the study is done. The abstract says they perform an NNPDF global fit of the PDFs and the new-physics effects simultaneously.
- The striking punchline is the last sentence of the abstract.
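A toy of the "simultaneous determination" logic as I understand it (nothing like the real NNPDF machinery; the energy dependence and the coefficient c below are invented): pseudo-data depends on a PDF-like normalisation N and a hypothetical four-fermion coefficient c whose effect grows with energy, so fitting both at once lets the data separate them.

```python
import numpy as np

# Toy simultaneous fit: data ~ N * exp(-E) * (1 + c * E^2), where the
# E^2 growth mimics a four-fermion SMEFT operator. Linearise in (N, N*c)
# and solve by least squares.
rng = np.random.default_rng(3)
energy = np.linspace(0.1, 2.0, 40)           # hypothetical scale, in TeV
true_N, true_c = 1.0, 0.05
data = true_N * np.exp(-energy) * (1 + true_c * energy**2)
data = data + rng.normal(scale=0.01, size=energy.size)   # pseudo-noise

design = np.stack([np.exp(-energy), np.exp(-energy) * energy**2], axis=1)
(N_fit, Nc_fit), *_ = np.linalg.lstsq(design, data, rcond=None)
c_fit = Nc_fit / N_fit
print(N_fit, c_fit)
```

The distinct energy shapes of the two columns are what makes the disentangling possible; if the BSM term had the same shape as the PDF piece, it would be reabsorbed, which is exactly the danger the paper addresses.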
Title: Parton Distributions with Theory Uncertainties: General Formalism and First Phenomenological Studies [arXiv:1906.10698]
Abstract: We formulate a general approach to the inclusion of theoretical uncertainties, specifically those related to the missing higher order uncertainty (MHOU), in the determination of parton distribution functions (PDFs). We demonstrate how, under quite generic assumptions, theory uncertainties can be included as an extra contribution to the covariance matrix when determining PDFs from data. We then review, clarify, and systematize the use of renormalization and factorization scale variations as a means to estimate MHOUs consistently in deep inelastic and hadronic processes. We define a set of prescriptions for constructing a theory covariance matrix using scale variations, which can be used in global fits of data from a wide range of different processes, based on choosing a set of independent scale variations suitably correlated within and across processes. We set up an algebraic framework for the choice and validation of an optimal prescription by comparing the estimate of MHOU encoded in the next-to-leading order (NLO) theory covariance matrix to the observed shifts between NLO and NNLO predictions. We perform a NLO PDF determination which includes the MHOU, assess the impact of the inclusion of MHOUs on the PDF central values and uncertainties, and validate the results by comparison to the known shift between NLO and NNLO PDFs. We finally study the impact of the inclusion of MHOUs in a global PDF determination on LHC cross-sections, and provide guidelines for their use in precision phenomenology. In addition, we also compare the results based on the theory covariance matrix formalism to those obtained by performing PDF determinations based on different scale choices.
Comments: Let us first guess what they are doing here. They somehow estimate the MHOUs and use the results in the PDF determination. The key points are how they estimate the MHOUs and how they use them.
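My reading of the construction, as a sketch (simplified; the real prescriptions carefully correlate scale variations within and across processes): collect the shifts of the predictions under a set of scale variations, build a theory covariance matrix S from their outer products, and add it to the experimental covariance in the chi^2.

```python
import numpy as np

# Sketch of the scale-variation theory covariance: shifts[:, k] is the
# change of the n_data predictions under the k-th scale variation.
rng = np.random.default_rng(7)
n_data, n_scales = 5, 4
central = rng.uniform(1.0, 2.0, size=n_data)          # toy predictions
shifts = 0.05 * central[:, None] * rng.normal(size=(n_data, n_scales))

S = sum(np.outer(shifts[:, k], shifts[:, k]) for k in range(n_scales))
S = S / n_scales                                       # theory covariance
C_exp = np.diag((0.02 * central) ** 2)                 # toy experimental errors
C_total = C_exp + S                                    # enters the chi^2
print(np.linalg.eigvalsh(C_total).min())
```

By construction S is positive semi-definite, so adding it can only inflate (never shrink) the uncertainties, which matches the intended conservative role of the MHOU.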
Title: Machine Learning in High Energy Physics Community White Paper [arXiv:1807.02876]
Abstract: Machine learning has been applied to several problems in particle physics research, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in particle and event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas for machine learning in particle physics. We detail a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training the particle physics community in data science. The main objective of the document is to connect and motivate these areas of research and development with the physics drivers of the High-Luminosity Large Hadron Collider and future neutrino experiments and identify the resource needs for their implementation. Additionally we identify areas where collaboration with external communities will be of great benefit.
Comments: I am interested in the current status of the application of machine learning.
Title: Sampling the Riemann-Theta Boltzmann Machine [arXiv:1804.07768]
Abstract: We show that the visible sector probability density function of the Riemann-Theta Boltzmann machine corresponds to a Gaussian mixture model consisting of an infinite number of component multi-variate Gaussians. The weights of the mixture are given by a discrete multi-variate Gaussian over the hidden state space. This allows us to sample the visible sector density function in a straight-forward manner. Furthermore, we show that the visible sector probability density function possesses an affine transform property, similar to the multi-variate Gaussian density.
Comments: Interesting! I want to know more.
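The two-step sampling the abstract describes, shown for a finite Gaussian mixture (the RTBM's mixture is infinite, with discrete-Gaussian weights over the hidden states; this toy only shows the mechanism): first draw a component from the discrete weights, then draw from that component's multivariate Gaussian.

```python
import numpy as np

# Two-step sampling from a Gaussian mixture: the "hidden" draw picks a
# component, the "visible" draw is the corresponding Gaussian.
rng = np.random.default_rng(11)
means = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, 3.0]])   # component means
cov = np.array([[1.0, 0.3], [0.3, 0.5]])                  # shared covariance
weights = np.array([0.5, 0.3, 0.2])                        # mixture weights

def sample(n):
    idx = rng.choice(len(weights), size=n, p=weights)      # discrete draw
    return means[idx] + rng.multivariate_normal(np.zeros(2), cov, size=n)

x = sample(50_000)
print(x.mean(axis=0))   # approaches the weighted mean of the component means
```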
Title: Precision determination of the strong coupling constant within a global PDF analysis [arXiv:1802.03398]
Abstract: We present a determination of the strong coupling constant αs(mZ) based on the NNPDF3.1 determination of parton distributions, which for the first time includes constraints from jet production, top-quark pair differential distributions, and the Z pT distributions using exact NNLO theory. Our result is based on a novel extension of the NNPDF methodology - the correlated replica method - which allows for a simultaneous determination of αs and the PDFs with all correlations between them fully taken into account. We study in detail all relevant sources of experimental, methodological and theoretical uncertainty. At NNLO we find αs(mZ) = 0.1185 ± 0.0005 (exp) ± 0.0001 (meth), showing that methodological uncertainties are negligible. We conservatively estimate the theoretical uncertainty due to missing higher order QCD corrections (N3LO and beyond) from half the shift between the NLO and NNLO αs values, finding ∆αs^th = 0.0011.
Comments: I have to learn more about this topic.
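A cartoon of a replica-based extraction, heavily simplified from what I gather of the correlated replica method (the numbers and noise model below are invented): each pseudo-data replica yields its own chi^2 profile in αs, the parabola minimum of every profile gives one αs value, and the distribution of minima gives the central value and its uncertainty.

```python
import numpy as np

# For each pseudo-replica: build a noisy chi^2 profile on a grid of
# alpha_s values, fit a parabola, record the location of its minimum.
rng = np.random.default_rng(5)
alphas_grid = np.linspace(0.114, 0.122, 9)
t = alphas_grid - 0.118                   # centred variable for a stable fit
true_alphas = 0.1185

best = []
for _ in range(100):                      # 100 pseudo-replicas
    chi2 = 1e6 * (alphas_grid - true_alphas) ** 2 + rng.normal(scale=0.05, size=9)
    a, b, c = np.polyfit(t, chi2, 2)      # parabolic chi^2 profile
    best.append(0.118 - b / (2 * a))      # minimum of the parabola
best = np.array(best)
print(best.mean(), best.std())
```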
Title: Minimisation strategies for the determination of parton density functions [arXiv:1711.09991]
Abstract: We discuss the current minimisation strategies adopted by research projects involving the determination of parton distribution functions (PDFs) and fragmentation functions (FFs) through the training of neural networks. We present a short overview of a proton PDF determination obtained using the covariance matrix adaptation evolution strategy (CMA-ES) optimisation algorithm. We perform comparisons between the CMA-ES and the standard nodal genetic algorithm (NGA) adopted by the NNPDF collaboration.
Comments: I have to learn more about this topic. It is important.
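To recall how an evolution strategy works, a minimal (mu, lambda) ES sketch (my own; real CMA-ES additionally adapts the full covariance matrix of the sampling distribution, which is omitted here): sample a population around the current mean, keep the best candidates, recentre, and shrink the step size.

```python
import numpy as np

# Minimal (mu, lambda) evolution strategy: sample pop candidates,
# keep the best `keep`, average them into the new mean, decay sigma.
rng = np.random.default_rng(0)

def es_minimise(f, x0, sigma=1.0, pop=20, keep=5, iters=80):
    mean = np.asarray(x0, dtype=float)
    for _ in range(iters):
        cand = mean + sigma * rng.normal(size=(pop, mean.size))
        order = np.argsort([f(c) for c in cand])
        mean = cand[order[:keep]].mean(axis=0)   # recombination
        sigma *= 0.95                            # fixed step-size decay
    return mean

# Toy objective standing in for a chi^2: distance to a hidden optimum.
target = np.array([1.0, 2.0, 3.0])
chi2 = lambda x: float(np.sum((x - target) ** 2))
x_best = es_minimise(chi2, x0=np.zeros(3))
print(x_best, chi2(x_best))
```

CMA-ES replaces the fixed step-size decay with a learned, fully correlated sampling covariance, which is what makes it robust on ill-conditioned chi^2 landscapes like a PDF fit.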