A detailed map of Higgs boson interactions by the ATLAS experiment ten years after the discovery


Experimental set-up

The ATLAS detector[12] consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadron calorimeters, and a muon spectrometer incorporating three large superconducting air-core toroidal magnets.

ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point in the centre of the detector and the z axis along the beam pipe. The x axis points from the interaction point to the centre of the LHC ring, and the y axis points upwards. Cylindrical coordinates (r, ϕ) are used in the transverse plane, ϕ being the azimuthal angle around the z axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln(tan(θ/2)).

The inner-detector (ID) system is immersed in a 2-T axial magnetic field and provides charged-particle tracking in the range |η| < 2.5. The high-granularity silicon pixel detector covers the vertex region and typically provides four measurements per track, the first hit normally being in the insertable B-layer (IBL) installed before Run 2[60,61]. It is followed by the silicon microstrip tracker (SCT), which usually provides eight measurements per track. These silicon detectors are complemented by the transition radiation tracker (TRT), which enables radially extended track reconstruction up to |η| < 2.0. The TRT also provides electron identification information based on the fraction of hits (typically 30 in total) above a higher energy-deposit threshold corresponding to transition radiation.

The calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters. Hadron calorimetry is provided by the steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadron endcap calorimeters. The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimized for electromagnetic and hadronic energy measurements, respectively.

The muon spectrometer (MS) comprises separate trigger and high-precision tracking chambers measuring the deflection of muons in a magnetic field generated by the superconducting air-core toroidal magnets. The field integral of the toroids ranges between 2.0 and 6.0 Tm across most of the detector. Three layers of precision chambers, each consisting of layers of monitored drift tubes, cover the region |η| < 2.7, complemented by cathode-strip chambers in the forward region, where the background is highest. The muon trigger system covers the range |η| < 2.4 with resistive-plate chambers in the barrel, and thin-gap chambers in the endcap regions.

The performance of the vertex and track reconstruction in the inner detector, the calorimeter resolution in the electromagnetic and hadronic calorimeters and the muon momentum resolution provided by the muon spectrometer are given previously[12].

Interesting events are selected by the first-level trigger system implemented in custom hardware, followed by selections made by algorithms implemented in software in the high-level trigger[62]. The first-level trigger accepts events from the 40-MHz bunch crossings at a rate below 100 kHz, which the high-level trigger further reduces in order to record events to disk at about 1 kHz.

Statistical framework

The results of the combination presented in this paper are obtained from a likelihood function defined as the product of the likelihoods of each input measurement. The observed yield in each category of reconstructed events follows a Poisson distribution whose parameter is the sum of the expected signal and background contributions. The number of signal events in any category k is split into the different production and decay modes:

$$n_{k}^{\mathrm{signal}}=\mathcal{L}_{k}\sum_{i}\sum_{f}(\sigma_{i}B_{f})\,(A\epsilon)_{if}^{k},$$

where the sum indexed by i runs either over the production processes (ggF, VBF, WH, ZH, $t\bar{t}H$, tH) or over the set of the measured production kinematic regions, and the sum indexed by f runs over the decay final states (ZZ, WW, γγ, Zγ, $b\bar{b}$, $c\bar{c}$, τ⁺τ⁻, μ⁺μ⁻). The quantity $\mathcal{L}_{k}$ is the integrated luminosity of the dataset used in category k, and $(A\epsilon)_{if}^{k}$ is the acceptance times selection efficiency factor for production process i and decay mode f in category k. Acceptances and efficiencies are obtained from the simulation (corrected by calibration measurements in control data for the efficiencies). Their values are subject to variations due to experimental and theoretical systematic uncertainties. The cross-sections $\sigma_{i}$ and branching fractions $B_{f}$ are the parameters of interest of the model. Depending on the model being tested, they are either free parameters, set to their standard model prediction or parameterized as functions of other parameters. All cross-sections are defined in the Higgs boson rapidity range $|y_{H}| < 2.5$, which is related to the polar angle of the Higgs boson's momentum in the detector and corresponds approximately to the region of experimental sensitivity.
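
As a concrete illustration of this decomposition, the following short Python sketch evaluates $n_{k}^{\mathrm{signal}}$ for a single hypothetical category; all numerical values (luminosity, cross-sections, branching fractions and acceptance times efficiency factors) are illustrative placeholders, not ATLAS inputs.

```python
# Minimal sketch of the signal-yield decomposition for one category k.
# All numerical values below are illustrative placeholders, not ATLAS inputs.

lumi_k = 139.0  # integrated luminosity L_k of the dataset in category k (fb^-1)

# Cross-sections sigma_i (fb) and branching fractions B_f (illustrative values)
xsec = {"ggF": 48.6e3, "VBF": 3.78e3}
br = {"gamgam": 2.27e-3, "ZZ": 2.62e-2}

# Acceptance x efficiency factors (A*eps)_{if}^{k}, illustrative values
acc_eff = {
    ("ggF", "gamgam"): 0.40, ("ggF", "ZZ"): 0.25,
    ("VBF", "gamgam"): 0.35, ("VBF", "ZZ"): 0.20,
}

# n_k^signal = L_k * sum_i sum_f (sigma_i * B_f) * (A*eps)_{if}^{k}
n_signal_k = lumi_k * sum(
    xsec[i] * br[f] * acc_eff[(i, f)] for (i, f) in acc_eff
)
print(f"expected signal yield in category k: {n_signal_k:.1f} events")
```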

The impact of experimental and theoretical systematic uncertainties on the predicted signal and background yields is taken into account by nuisance parameters included in the likelihood function. The predicted signal yields from each production process, the branching fractions and the signal acceptance in each analysis category are affected by theory uncertainties. The combined likelihood function is therefore expressed as:

$$L(\boldsymbol{\alpha},\boldsymbol{\theta},\mathrm{data})=\prod_{k\in \mathrm{cat}}\,\prod_{b\in \mathrm{bins}}P\left(n_{k,b}\,|\,n_{k,b}^{\mathrm{signal}}(\boldsymbol{\alpha},\boldsymbol{\theta})+n_{k,b}^{\mathrm{bkg}}(\boldsymbol{\theta})\right)\prod_{\theta \in \boldsymbol{\theta}}G(\theta),$$

where $n_{k,b}$, $n_{k,b}^{\mathrm{signal}}$ and $n_{k,b}^{\mathrm{bkg}}$ stand for the number of observed events, the number of expected signal events and the number of expected background events in bin b of analysis category k, respectively. The parameters of interest are denoted α, the nuisance parameters are θ, P represents the Poisson distribution, and G stands for Gaussian constraint terms assigned to the nuisance parameters. Some nuisance parameters are meant to be determined by data alone and do not have any associated constraint term. This is, for instance, the case for background normalization factors that are fitted in control categories. The effects of nuisance parameters affecting the normalizations of signal and backgrounds in a given category are generally implemented using the multiplicative expression:

$$n(\theta)=n^{0}\,(1+\sigma)^{\theta},$$

where $n^{0}$ is the nominal expected yield of either signal or background and σ the value of the uncertainty. This ensures that n(θ) > 0 even for negative values of θ. For the majority of nuisance parameters, including all those affecting the shapes of the distributions, a linear expression is used instead in each bin of the distributions:

$$n(\theta)=n^{0}\,(1+\sigma \theta).$$
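
The behaviour of the two response functions can be compared directly. The following sketch (with an invented nominal yield and a 10% uncertainty, purely for illustration) shows that the multiplicative form remains positive for arbitrarily negative θ, whereas the linear form does not:

```python
import numpy as np

def yield_multiplicative(n0: float, sigma: float, theta: float) -> float:
    """Multiplicative response n(theta) = n0 * (1 + sigma)^theta.
    Stays positive for any theta, as used for normalization uncertainties."""
    return n0 * (1.0 + sigma) ** theta

def yield_linear(n0: float, sigma: float, theta: float) -> float:
    """Linear per-bin response n(theta) = n0 * (1 + sigma * theta),
    as used for the majority of (shape) nuisance parameters."""
    return n0 * (1.0 + sigma * theta)

n0, sigma = 100.0, 0.10  # illustrative nominal yield and 10% uncertainty
for theta in np.linspace(-3, 3, 7):
    print(f"theta={theta:+.1f}  mult={yield_multiplicative(n0, sigma, theta):7.2f}"
          f"  linear={yield_linear(n0, sigma, theta):7.2f}")
```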

The systematic uncertainties are broken down into independent underlying sources, so that when a source affects several or all analyses, the associated nuisance parameter can be fully correlated across the corresponding terms in the likelihood by using common nuisance parameters. This is the case for the systematic uncertainties in the luminosity measurement[63], in the reconstruction and selection efficiencies[64–70] and in the calibrations of the energy measurements[71–74]. Their effects are propagated coherently by using common nuisance parameters wherever applicable. Only some components of the systematic uncertainties are correlated between the analyses performed using the full Run 2 data and those using only the 2015 and 2016 data, owing to differences in their evaluation, in the reconstruction algorithms and in software releases. Systematic uncertainties associated with the modelling of background processes, as well as uncertainties due to the limited number of simulated events used to estimate the expected signal and background yields, are treated as uncorrelated between analyses.

Uncertainties in the parton distribution functions are implemented coherently in all input measurements and all analysis categories[75]. Uncertainties in modelling the parton showering into jets of particles affect the signal acceptances and efficiencies, and are common to all input measurements within a given production process. Similarly, uncertainties due to missing higher-order quantum chromodynamics (QCD) corrections are common to a given production process. Their implementation in the kinematic regions of the simplified template cross-sections framework results in a total of 66 uncertainty sources, where overall acceptance effects are separated from migrations between the various bins (for example, between jet multiplicity regions or between dijet invariant mass regions)[76]. Both the acceptance and signal yield uncertainties affect the signal strength modifier and coupling strength modifier results, which rely on comparisons of measured and expected yields. Only acceptance uncertainties affect the cross-section and branching fraction results. The uncertainties in the Higgs boson branching fractions due to dependencies on standard model parameter values (such as b and c quark masses) and missing higher-order effects are implemented using the correlation model described previously[44].

In total, over 2,600 sources of systematic uncertainty are included in the combined likelihood. For most of the presented measurements, the systematic uncertainty is expected to be of similar size or significantly smaller than the corresponding statistical uncertainty. The systematic uncertainties are dominant for the parameters that are measured most precisely, that is, the global signal strength and the production cross-sections of the ggF and VBF processes. The expected systematic uncertainty in the global signal strength measurement (about 5%) is larger than the statistical uncertainty (3%), with similar contributions from the theory uncertainties in signal (4%) and background modelling (1.7%), and from the experimental systematic uncertainty (3%). The latter is predominantly composed of the uncertainty in the luminosity measurement (1.7%), followed by the uncertainties in electron, jet and b-jet reconstruction, in data-driven background modelling, as well as from the limited number of simulated events (about 1% each). All other sources of experimental uncertainty combined contribute an additional 1%. The systematic uncertainty in the production cross-section of the ggF process is dominated by experimental uncertainties (3.5%), followed by signal theory uncertainties (3%), compared with a statistical uncertainty of 4%. For the VBF process, where the statistical uncertainty is 8%, the experimental uncertainties are estimated to be 5%, and the signal theory uncertainties add up to 7%. Systematic uncertainties are also dominant over the statistical uncertainties in the measurements of the branching fractions into W boson pairs and τ lepton pairs.

Measurements of the parameters of interest use a statistical test based on the profile likelihood ratio[52]:

$$\varLambda(\boldsymbol{\alpha})=\frac{L(\boldsymbol{\alpha},\hat{\hat{\boldsymbol{\theta}}}(\boldsymbol{\alpha}))}{L(\hat{\boldsymbol{\alpha}},\hat{\boldsymbol{\theta}})},$$

where α are the parameters of interest and θ are the nuisance parameters. The $\hat{\hat{\boldsymbol{\theta}}}(\boldsymbol{\alpha})$ notation indicates that the nuisance parameter values are those that maximize the likelihood for given values of the parameters of interest. In the denominator, both the parameters of interest and the nuisance parameters are set to the values ($\hat{\boldsymbol{\alpha}}$, $\hat{\boldsymbol{\theta}}$) that unconditionally maximize the likelihood. The estimates of the parameters α are the values $\hat{\boldsymbol{\alpha}}$ that maximize the likelihood ratio.

Owing to the usually large number of events selected in the measurements, all results presented in this paper are obtained in the asymptotic regime where the likelihood approximately follows a Gaussian distribution. It was checked in earlier iterations of the individual input measurements, for instance in ref. 77, that this assumption also holds in cases with low event counts by comparing the results of the asymptotic formulae with those of pseudo-experiments. This confirmed the results of a previous work[52] that the Gaussian approximation becomes valid for as few as five background events. In the asymptotic regime, twice the negative logarithm of the profile likelihood ratio, λ(α) = −2ln(Λ(α)), follows a χ² distribution with a number of degrees of freedom equal to the number of parameters of interest. Confidence intervals for a given confidence level (CL), usually 68%, are then defined as the regions fulfilling $\lambda(\boldsymbol{\alpha}) < F_{n}^{-1}(\mathrm{CL})$, where $F_{n}^{-1}$ is the quantile function of the χ² distribution with n degrees of freedom, so that $F_{1}^{-1} = 1\,(4)$ for a 1σ (2σ) CL with one degree of freedom. The values of the parameters α corresponding to these confidence intervals are obtained by scanning the profile likelihood. Similarly, the p value $p_{\mathrm{SM}} = 1 - F_{n}(\lambda(\boldsymbol{\alpha}_{\mathrm{SM}}))$ is used to assess the compatibility of the measurement with the standard model prediction. The correlations between the parameters are estimated by inverting the matrix of the second derivatives of the likelihood.
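
As a schematic illustration of such a scan, the following sketch profiles a toy single-bin counting model with one signal strength parameter and one Gaussian-constrained background nuisance parameter, then reads off a 68% CL interval from the χ² threshold. It is a minimal toy under invented yields, not the ATLAS likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

# Toy single-bin model: n_obs ~ Poisson(mu*s + b*(1 + sigma_b*theta)),
# with a unit Gaussian constraint on theta. All values are illustrative.
n_obs, s, b, sigma_b = 130.0, 30.0, 100.0, 0.10

def nll(mu, theta):
    """Negative log-likelihood: Poisson term (constants dropped) plus constraint."""
    lam = mu * s + b * (1.0 + sigma_b * theta)
    return lam - n_obs * np.log(lam) + 0.5 * theta**2

def profiled_nll(mu):
    """Minimize over theta at fixed mu: the conditional fit giving theta-hat-hat(mu)."""
    return minimize_scalar(lambda th: nll(mu, th), bounds=(-5.0, 5.0),
                           method="bounded").fun

# lambda(mu) = -2 ln Lambda(mu) = 2 * [ NLL(mu) - NLL_min ]
mus = np.linspace(0.0, 2.5, 251)
scan = np.array([profiled_nll(mu) for mu in mus])
lam = 2.0 * (scan - scan.min())

# 68% CL interval: the region where lambda(mu) is below the chi2 (1 dof) quantile
threshold = chi2.ppf(0.68, df=1)
inside = mus[lam < threshold]
print(f"mu_hat ~ {mus[np.argmin(lam)]:.2f}, 68% CL interval ~ "
      f"[{inside.min():.2f}, {inside.max():.2f}]")
```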

The expected significances and limits are determined using 'Asimov' datasets[52], which are obtained by setting the observed yields to their expected values when the nuisance parameters are set to the values $\hat{\boldsymbol{\theta}}$ that maximize the likelihood.
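
For a single counting bin, the Asimov construction and the resulting median expected discovery significance reduce to a closed-form asymptotic expression from ref. 52; a minimal sketch with invented yields:

```python
import math

def asimov_significance(s: float, b: float) -> float:
    """Median expected discovery significance from an Asimov dataset
    (the observed count is replaced by the expectation s + b), using the
    asymptotic formula of ref. 52: Z = sqrt(2*[(s + b)*ln(1 + s/b) - s])."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Illustrative signal and background yields, not ATLAS inputs
s, b = 30.0, 100.0
print(f"expected significance: {asimov_significance(s, b):.2f} sigma")
```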

Parameterization within the κ framework

Within the κ framework, the cross-section for an individual measurement is parameterized as

$$\sigma(i\to H\to f)=\sigma_{i}B_{f}=\frac{\sigma_{i}(\boldsymbol{\kappa})\,\varGamma_{f}(\boldsymbol{\kappa})}{\varGamma_{H}(\boldsymbol{\kappa},B_{\mathrm{inv.}},B_{\mathrm{u.}})},$$

where $\varGamma_{f}$ is the partial width for a Higgs boson decay to the final state f and $\varGamma_{H}$ is the total decay width of the Higgs boson. The total width is given by the sum of the partial widths of all the decay modes included. Contributions to the total Higgs boson decay width from phenomena beyond the standard model may manifest themselves as a value of a coupling strength modifier $\kappa_{p}$ differing from one, or a value of $B_{\mathrm{inv.}}$ or $B_{\mathrm{u.}}$ differing from zero. The Higgs boson total width is then expressed as $\varGamma_{H}(\boldsymbol{\kappa},B_{\mathrm{inv.}},B_{\mathrm{u.}})=\kappa_{H}^{2}(\boldsymbol{\kappa},B_{\mathrm{inv.}},B_{\mathrm{u.}})\,\varGamma_{H}^{\mathrm{SM}}$ with

$$\kappa_{H}^{2}(\boldsymbol{\kappa},B_{\mathrm{inv.}},B_{\mathrm{u.}})=\frac{\sum_{p}B_{p}^{\mathrm{SM}}\kappa_{p}^{2}}{1-B_{\mathrm{inv.}}-B_{\mathrm{u.}}}.$$

Higgs boson production cross-sections and partial and total decay widths are parameterized in terms of the coupling strength modifiers as shown in table 9 of ref. 22. An improved parameterization including additional sub-leading contributions is used in this paper to match the increased precision of the measurements.
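
A minimal sketch of this total-width parameterization, using rounded standard-model branching fractions purely for illustration:

```python
# Sketch of the kappa-framework total-width modifier
#   kappa_H^2 = sum_p B_p^SM * kappa_p^2 / (1 - B_inv - B_u).
# Branching fractions are approximate SM values for illustration only.

BR_SM = {"bb": 0.58, "WW": 0.214, "gg": 0.082, "tautau": 0.063,
         "cc": 0.029, "ZZ": 0.026, "gamgam": 0.0023, "other": 0.004}

def kappa_H_squared(kappas: dict, B_inv: float = 0.0, B_u: float = 0.0) -> float:
    """Total-width modifier: each partial width scales as kappa_p^2, and
    invisible/undetected decays reduce the visible fraction of the width."""
    numerator = sum(BR_SM[p] * kappas.get(p, 1.0) ** 2 for p in BR_SM)
    return numerator / (1.0 - B_inv - B_u)

# Example: a 10% enhancement of the coupling to b quarks, no BSM decays
kappas = {"bb": 1.1}
print(f"Gamma_H / Gamma_H^SM = {kappa_H_squared(kappas):.3f}")
```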

Kinematic regions probing Higgs boson production

The definitions of the kinematic regions for the precision study of Higgs boson production in the framework of simplified template cross-sections[44,56–58] are based on the predicted properties of particles generated in a given production process. The partitioning follows the so-called Stage-1.2 scheme, which features a slightly finer granularity than the Stage-1.1 scheme[57] and introduces Higgs boson transverse momentum categories for the $t\bar{t}H$ production process. Higgs bosons are required to be produced with rapidity $|y_{H}| < 2.5$. Associated jets of particles are constructed from all stable particles with a lifetime greater than 10 ps, excluding the decay products of the Higgs boson and leptons from W and Z boson decays, using the anti-$k_{t}$ algorithm[78] with a jet radius parameter R = 0.4, and must have a transverse momentum $p_{\mathrm{T,jet}} > 30$ GeV. Standard model predictions are assumed for the kinematic properties of Higgs boson decays. Phenomena beyond the standard model can significantly modify these properties, and thus the acceptance of the signal, especially for the WW or ZZ decay modes, and this has to be considered when using these measurements for the relevant interpretations.
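
As an illustration of this particle-level jet definition, the following sketch (assuming the scikit-hep fastjet Python bindings are available; the input four-momenta are invented placeholders) clusters truth-level particles with the anti-$k_{t}$ algorithm at R = 0.4 and applies the 30-GeV threshold:

```python
import fastjet  # scikit-hep bindings to FastJet (assumed available)

# Hypothetical truth-level four-momenta (px, py, pz, E) in GeV, after
# excluding Higgs boson decay products and leptons from W/Z decays, and
# keeping only stable particles (lifetime > 10 ps).
stable_particles = [
    (40.0, 10.0, 25.0, 50.0),
    (35.0, 12.0, 30.0, 48.0),
    (-20.0, -45.0, 5.0, 50.0),
]

pseudojets = [fastjet.PseudoJet(px, py, pz, e)
              for (px, py, pz, e) in stable_particles]
jet_def = fastjet.JetDefinition(fastjet.antikt_algorithm, 0.4)  # anti-kt, R = 0.4
cluster = fastjet.ClusterSequence(pseudojets, jet_def)

# Apply the p_T,jet > 30 GeV requirement of the STXS jet definition
jets = cluster.inclusive_jets(30.0)
for jet in jets:
    print(f"jet pT = {jet.pt():.1f} GeV, rapidity = {jet.rap():.2f}")
```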

Higgs boson production is first classified according to the nature of the initial state and the associated particles, the latter including the decay products of W and Z bosons if they are present. These classes are: $t\bar{t}H$ and tH processes; qq′ → Hqq′ processes, with contributions from both VBF and quark-initiated VH (where V = W, Z) production with a hadronic decay of the vector boson; pp → VH production with a leptonic decay of the vector boson (V(ℓℓ, ℓν)H), including gg → ZH → ℓℓH production; and finally the ggF process combined with $gg\to ZH\to q\bar{q}H$ production to form a single gg → H process. The contribution of the $b\bar{b}H$ production process is taken into account as a 1%[44] increase of the gg → H yield in each kinematic region, because the acceptances for both processes are similar for all input analyses[44].

The input measurements in individual decay modes provide only limited sensitivity to the cross-section in some of the regions of the Stage-1.2 scheme, mainly because of the small number of events in some of these regions. In other cases, they only provide sensitivity to a combination of these regions, leading to strongly correlated measurements. To mitigate these effects, some of the Stage-1.2 kinematic regions were merged for the combined measurement.

Compared with the individual input measurements, the systematic theory uncertainties associated with the signal predictions have been updated for the combination to closely follow the granularity of the Stage-1.2 scheme. The QCD scale uncertainties in ggF production were updated for all input channels that are sensitive to this production process. Out of 18 uncertainty sources in total, two account for overall fixed-order and resummation effects, two cover the migrations between different jet multiplicity bins, seven are associated with the modelling of the Higgs boson transverse momentum ($p_{\mathrm{T}}^{H}$) in different phase-space regions, four account for the uncertainty in the distribution of the dijet invariant mass ($m_{jj}$) variable, one covers the modelling of the transverse momentum of the Higgs boson plus two leading jets ($p_{\mathrm{T}}^{Hjj}$) in the ≥2-jet region, one relates to the modelling of the distribution of the transverse momentum of the Higgs boson plus one jet ($p_{\mathrm{T}}^{Hj}$) divided by $p_{\mathrm{T}}^{H}$ in the high-$p_{\mathrm{T}}^{H}$ region, and finally, the last takes into account the uncertainty from the choice of the top quark mass scheme. Theory uncertainties for the qq′ → Hqq′ and $t\bar{t}H$ processes are defined previously[28], and those of the V(ℓℓ, ℓν)H kinematic region follow the scheme described in an earlier work[76]. For the kinematic regions defined by the merging of several Stage-1.2 regions, the signal acceptance factors are determined assuming that the relative fractions in each Stage-1.2 region are given by their standard model values, and the uncertainties predicted by the standard model in these fractions are taken into account.
