Accelerator physics code web repository
Table 1: Number of codes in each category (bold) and sub-category.

  strong-strong: 4              weak-strong: 3
  electron cloud: 8
  build up: 2                   multi-bunch instability: 1
  multipacting: 1               self-consistent: 2
  single-bunch instability: 1   incoherent: 2
  synchrotron radiation: 1
  impedances: 4                 instabilities: 5
  ion effects: 2                luminosity: 1
  nonlinear dynamics: 8         optics: 5
  space charge: 4
input and output, (10) documentation or manual, (11) list of special model features, (12) accelerators for which this code was or is used, (13) benchmarking exercises against other codes, (14) benchmarking against experiments, (15) special programming features, (16) comments, (17) references, and (18) associated categories. For several codes, supplementary web pages with extended links and documentation were created. The above information was collected via a standard questionnaire sent to about 60 authors and prospective contact persons. About 75% of the contacted colleagues responded positively. As a first spin-off, several home pages were newly created by the code authors, e.g., those of ABCI and MOSES, to the benefit of the users. These complement already existing home pages (e.g., ). For a few codes, however, even basic information from the authors is still missing.
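The 18-item questionnaire above essentially defines one record per code in the repository. As a rough sketch only, such a record could be represented as follows; the field names are illustrative and are not the repository's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring a few of the 18 questionnaire items
# described in the text; names and defaults are purely illustrative.
@dataclass
class CodeEntry:
    name: str
    accelerators: list = field(default_factory=list)        # item (12)
    benchmarked_vs_codes: list = field(default_factory=list)    # item (13)
    benchmarked_vs_experiments: list = field(default_factory=list)  # (14)
    references: list = field(default_factory=list)          # item (17)
    categories: list = field(default_factory=list)          # item (18)

# Example entry using codes and machines named in this paper.
entry = CodeEntry(name="ECLOUD",
                  accelerators=["LHC", "SPS", "RHIC"],
                  categories=["electron cloud", "build up"])
print(entry.name, entry.categories)
```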
The notion of 'benchmarking' may have four different meanings, namely debugging: the code should calculate what it is supposed to calculate; validation: results should agree with established analytic results for specific cases; comparison: two codes should agree if the model is the same; and verification: the code should agree with measurements. The need for debugging is obvious, but validation is often difficult for complex simulations of nonlinear processes. The HHH benchmarking focuses on the last two areas, code comparison and experimental verification. Below we give some benchmarking examples.
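The 'validation' step can be illustrated with a toy example: a minimal tracking loop (a leapfrog integrator for a linear oscillator, standing in for a real accelerator code) is checked against the exact analytic solution x(t) = cos(omega t). This is a sketch of the idea only, not any of the codes discussed here:

```python
import math

# 'Validation' in miniature: compare a toy leapfrog tracker for a
# linear oscillator against the known analytic solution.
def track(omega, dt, n_steps):
    x, v = 1.0, 0.0                     # start at x=1, v=0
    for _ in range(n_steps):
        v -= 0.5 * dt * omega**2 * x    # half kick
        x += dt * v                     # drift
        v -= 0.5 * dt * omega**2 * x    # half kick
    return x

omega, dt, n = 1.0, 1e-3, 1000
x_sim = track(omega, dt, n)
x_exact = math.cos(omega * dt * n)      # analytic result at t = n*dt
assert abs(x_sim - x_exact) < 1e-5, "validation failed"
print(f"simulated {x_sim:.8f} vs analytic {x_exact:.8f}")
```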
Code vs. Code
Numerous space-charge codes have been compared with each other. There is now good agreement for 2-dimensional simulations over 10^3 turns. A comparison of MICROMAP and SIMPSONS in longer-term simulations has also been performed in great detail, with the aim of predicting halo densities. Excellent agreement was demonstrated for both the scattering and trapping regimes, except for a factor-of-two discrepancy in the emittance growth, possibly related to differences in the longitudinal dynamics model. A parallel benchmarking of MICROMAP against HEADTAIL showed excellent agreement, at the 1% level, even for the emittance growth, which in this case is due to resonance crossing or scattering driven by an electron cloud of constant size and linearly increasing density.
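A 'factor-of-two discrepancy in the emittance growth' can be quantified simply as the ratio of the relative growth predicted by the two codes. The numbers below are synthetic, not actual MICROMAP or SIMPSONS output:

```python
# Code-vs-code comparison metric: ratio of relative emittance growth
# from two codes.  Values are illustrative only.
def emittance_growth(eps_initial, eps_final):
    """Relative emittance growth, e.g. 0.3 means 30% growth."""
    return eps_final / eps_initial - 1.0

growth_a = emittance_growth(1.0, 1.30)  # code A: 30% growth
growth_b = emittance_growth(1.0, 1.60)  # code B: 60% growth
ratio = growth_b / growth_a             # a 'factor two' discrepancy
print(f"growth ratio B/A = {ratio:.2f}")
```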
Electron build-up simulation codes have also been benchmarked against each other on several occasions; see, e.g., [13, 14, 15]. They generally agree within a factor of two or better if the same or a similar model for the secondary emission yield is used. Figure 1 shows a recent comparison of POSINST and ECLOUD simulations for an LHC arc dipole. The agreement of the two codes' simulations without re-diffused electrons is considered satisfactory. The biggest uncertainty seems to be insufficient knowledge of the in-situ surface properties.
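Since agreement between build-up codes hinges on the secondary emission yield (SEY) model, it is worth recalling one widely used parameterization of the true-secondary yield, delta(E) = delta_max * s*x / (s - 1 + x^s) with x = E/E_max (a Furman-Pivi-type form), which peaks at delta_max for E = E_max. The parameter values below are illustrative, not those used in the Figure 1 comparison:

```python
# A common parameterization of the true-secondary emission yield:
#   delta(E) = delta_max * s*x / (s - 1 + x**s),  x = E / E_max,
# which reaches delta_max exactly at E = E_max.
def sey(energy_eV, delta_max=1.3, e_max=330.0, s=1.35):
    x = energy_eV / e_max
    return delta_max * s * x / (s - 1.0 + x**s)

for E in (50.0, 330.0, 1000.0):
    print(f"E = {E:6.1f} eV  ->  delta = {sey(E):.3f}")
```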
Figure 1: Simulated electron-cloud heat load in an LHC dipole as a function of bunch population for two different values of δmax. R: POSINST code with full SEY model; NR: POSINST code with no-rediffused model; LTC40: result from ECLOUD code without re-diffused electrons. The available cooling capacity (ACC) under two different assumptions is also indicated.
Code vs. Experiment
Agreement between space-charge codes and experiments is good in some cases and poor in others, especially for larger numbers of turns and dynamic situations. For resonance trapping and scattering, acceptable agreement between the beam losses simulated by MICROMAP and those observed in experiments at the CERN-PS has been achieved by including chromaticity and extending the number of turns simulated to 2.5 x 10^6.
Electron-cloud build-up simulations with ECLOUD are in good agreement with measurements at the CERN SPS after fitting two important input parameters, namely the maximum secondary emission yield δmax and the reflection probability of low-energy electrons, R. Similarly, POSINST simulations reproduce well the observations at the ANL APS and the LANL PSR after fitting δmax and R. In the same way, RHIC data on peak electron flux and electron decay times have been benchmarked with two different build-up codes, CSEC and ECLOUD, yielding somewhat different values for δmax and R.
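The fitting procedure described here, scanning δmax and R until the simulation matches the measured electron signal, can be sketched as a simple grid search. The 'model' below is a stand-in toy function, not a real build-up code, and all numbers are invented for illustration:

```python
# Toy illustration of fitting delta_max and R to a measurement:
# scan a parameter grid and keep the pair minimizing the squared
# error against a 'measured' value.
def model_flux(delta_max, reflectivity):
    # Purely illustrative surrogate for a build-up simulation's output;
    # a real fit would run ECLOUD/POSINST/CSEC at each grid point.
    return delta_max**3 * (1.0 + 4.0 * reflectivity)

measured = 3.0  # invented 'measured' electron flux (arbitrary units)
best = min(
    ((d / 100.0, r / 100.0)
     for d in range(100, 201, 5)      # delta_max in [1.00, 2.00]
     for r in range(0, 101, 5)),      # R in [0.00, 1.00]
    key=lambda p: (model_flux(*p) - measured) ** 2,
)
print(f"best-fit delta_max = {best[0]:.2f}, R = {best[1]:.2f}")
```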
In the experimental benchmarking of simulation codes
[Figure 1 legend: curves for δmax = 1.5 (R, NR, LTC40) and δmax = 1.3 (R, NR, LTC40); ACC at high luminosity with 25% contingency; ACC at low luminosity without contingency; tb = 25 ns.]
Zimmermann, F.; Basset, R.; Bellodi, G.; Benedetto, E.; Dorda, U.; Giovannozzi, M. et al. Accelerator physics code web repository, article, June 1, 2006; Batavia, Illinois. (digital.library.unt.edu/ark:/67531/metadc881284/m1/2/: accessed November 13, 2018), University of North Texas Libraries, Digital Library, digital.library.unt.edu; crediting UNT Libraries Government Documents Department.