Parallel Beam Dynamics Simulation Tools for Future Light Source Linac Modeling
vantage of the position-dependent modeling of external elements and speeds up the computation by using transfer maps. The space-charge effects in this regime are small, and the commonly used ballistic approximation to go from a position-dependent distribution to a time-dependent distribution is valid. To model the electron beam in an rf linac we have added new capabilities to the code. These capabilities include: modeling forward and backward traveling-wave rf structures, modeling 3D space-charge effects using an integrated Green function solver from the IMPACT-T code, modeling short-range wakefields, and modeling coherent synchrotron radiation (CSR) effects in bending magnets.
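As a rough illustration of the ballistic approximation, a fixed-time snapshot of longitudinal positions can be mapped to arrival times at a downstream plane by letting each particle drift freely at its own velocity. This is a minimal Python sketch with toy numbers; the function name and values are illustrative and not taken from the IMPACT code:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def ballistic_snapshot_to_arrival_times(z, v, z_plane):
    """Convert a fixed-time snapshot of particle positions z (m), with
    longitudinal velocities v (m/s), into arrival times at z_plane,
    assuming each particle drifts ballistically (no forces).  This is
    valid when space charge is weak and the relative longitudinal
    motion among particles is small.
    """
    return (z_plane - z) / v

# toy bunch: 5 particles within a short slice, all moving near c
z = np.array([0.0, 1e-4, 2e-4, 3e-4, 4e-4])
v = np.full(5, 0.999 * C)
t = ballistic_snapshot_to_arrival_times(z, v, z_plane=1.0)
# particles farther ahead in z arrive at the plane earlier
```

Because every particle is assumed to keep its velocity over the drift, the mapping between the position-dependent and time-dependent distributions is one-to-one and cheap to evaluate.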
For a beam out of the injector, the electrons move inside the linac at a speed close to the speed of light. The relative longitudinal motion among electrons is small except inside the chicane. This fact can be used to speed up the simulation by lumping the space-charge and wakefield calculations in most sections of the linac. Inside the chicane, where the relative motion is strong, multiple space-charge and CSR wakefield kicks are used. By lumping the space-charge/wakefield calculation, we can reduce the computational time by a factor of four to ten, although this results in some loss of information in the transverse plane. Fig. 2 shows a
comparison of the final sliced rms energy spread from the lumped simulation and the standard distributed simulation.

Figure 2: Uncorrelated energy spread at the end of a linac using distributed and lumped space-charge/wakefield kicks.
The level of energy spread shows reasonably good agreement between the two simulations.
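The idea behind lumping can be shown with a toy 1D model: instead of applying a small space-charge/wakefield momentum kick at every step, several steps are accumulated and one integrated kick is applied, so the expensive field solve runs far fewer times. This is a hypothetical sketch, not the IMPACT implementation; for a kick that is constant over the lump the final momentum is identical:

```python
import numpy as np

def transport(dp, n_steps, kick_per_step, lump=1):
    """Toy 1D transport: accumulate `lump` steps and apply one
    integrated momentum kick per lump.  `solves` counts how many
    (expensive) field evaluations were needed.
    """
    p = dp.copy()
    solves = 0
    for step in range(0, n_steps, lump):
        n = min(lump, n_steps - step)
        solves += 1                  # one field solve per lump
        p += n * kick_per_step       # integrated kick over n steps
    return p, solves

p0 = np.zeros(4)
p_dist, s_dist = transport(p0, 100, 1e-6, lump=1)   # distributed kicks
p_lump, s_lump = transport(p0, 100, 1e-6, lump=10)  # lumped kicks
```

In this idealized case both runs give the same final momenta while the lumped run does a tenth of the field solves; in a real simulation the kick varies within a lump, which is the source of the transverse-plane information loss noted above.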
The IMPACT code was developed as a parallel particle-in-cell code using a domain decomposition method. In this method, the spatial computational domain is divided and assigned to individual processors. Macroparticles with physical positions in a given computational domain are assigned to that processor. During each time step, when a particle moves out of its original computational domain, it is sent to the processor owning the spatial domain containing the particle's new position. This method has the advantage of avoiding global communication during the charge deposition, which obtains the charge density distribution on the grid, and during the field interpolation, which obtains the space-charge fields from the grid. Only neighbor-to-neighbor communication is needed in these stages and in the particle moving stage. However, this method may suffer from unbalanced work, since each processor may have a different number of macroparticles for a non-uniform particle distribution. In order to maintain a roughly equal number of particles on each processor, the local computational domain boundary of each processor has to be adjusted frequently. This introduces extra computational cost and may not even work well for some types of distributions. To overcome this problem, we also implemented another parallel
algorithm, a particle-field decomposition method, into the IMPACT code. This algorithm is based on a uniform distribution of macroparticles and of the computational domain among individual processors, a technique which has been used in our previous simulation of colliding beams [9]. In this algorithm, there is no particle movement among processors, and each processor keeps about the same amount of work. However, during the charge-deposition and field-interpolation stages, information has to be exchanged globally among all processors. For some applications, this could significantly slow down the code. In the simulation of beam transport through a linac for light source design, a large number of macroparticles (e.g., one billion) is used to reduce numerical shot noise in order to accurately predict the microbunching instability. In this case, the number of computational grid points may be much smaller than the number of macroparticles. Using the particle-field decomposition method helps to reduce the memory usage on a single processor while keeping a good load balance, and also improves the computational speed.
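The charge-deposition step of the particle-field decomposition can be sketched as follows: particles are split evenly among processors regardless of position, each rank deposits its own particles on a full local copy of the grid, and the global charge density is the sum of the local grids (an MPI Allreduce in a real code). This is an illustrative Python sketch, not the IMPACT code itself; ranks are simulated with a loop and nearest-grid-point deposition is used for brevity:

```python
import numpy as np

def deposit_particle_field(all_z, n_grid, n_procs, z_min, z_max):
    """Deposit charge under a particle-field decomposition: even
    particle split across ranks, full grid copy per rank, global sum
    of the local density arrays at the end.
    """
    chunks = np.array_split(all_z, n_procs)   # even split, ignores position
    global_rho = np.zeros(n_grid)
    dz = (z_max - z_min) / n_grid
    for rank_z in chunks:                     # one loop body per "rank"
        local_rho = np.zeros(n_grid)
        idx = np.clip(((rank_z - z_min) / dz).astype(int), 0, n_grid - 1)
        np.add.at(local_rho, idx, 1.0)        # local deposition
        global_rho += local_rho               # stands in for MPI Allreduce
    return global_rho

rho = deposit_particle_field(
    np.random.default_rng(0).uniform(0.0, 1.0, 1000),
    n_grid=16, n_procs=4, z_min=0.0, z_max=1.0)
```

Because every rank holds the same number of particles, the deposition work is balanced by construction; the price is the global reduction, which is cheap when the grid is much smaller than the particle count.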
With these recent enhancements, we have used the IMPACT code to simulate electron beam transport through the designed FEL linac for the Fermi@Elettra project at Trieste. In these simulations, we have used an initial distribution out of the injector generated by Fermi researchers. This particle distribution contains only 200,000 particles. We have repopulated this distribution by resampling a transverse four-dimensional uniform distribution centered on each original particle. The size of the uniform distribution in each dimension is chosen to smooth the distribution while keeping the rms size and emittance close to those of the original distribution. In the longitudinal phase space, we cut the distribution into a number of slices along the z direction. Within each slice, we use a linear function to fit the particle distribution. Each particle is resampled in longitudinal phase space centered on its original position. The energy deviation is calculated from the fitted correlated energy distribution plus a uniform sampling with the desired initial energy spread to emulate the effects of a laser heater. Fig. 3 shows the uncorrelated rms energy spread at the end of the linac for different numbers of slices and one billion macroparticles. Here, a larger slice number corresponds to a smaller longitudinal box size and less initial smoothing. It can be seen that when the initial box size is chosen too small (2000 slices), there is significant energy-spread growth from numerical noise due to the microbunching instability, even using one billion macroparticles. As the initial box size gets larger, the initial numerical shot noise is significantly reduced by using one billion particles. Be-
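The longitudinal repopulation described earlier can be sketched as follows: cut the bunch into slices along z, fit the correlated energy deviation linearly within each slice, then re-emit each particle uniformly in a small box around its original position with an energy deviation equal to the fitted correlated value plus a uniform uncorrelated term emulating the laser-heater spread. This is a minimal Python sketch; the function signature, parameter names, and toy numbers are illustrative and not from the actual IMPACT input:

```python
import numpy as np

def resample_longitudinal(z, delta, n_slices, box_z, sigma_uncorr, rng):
    """Repopulate (z, delta) longitudinal phase space: per-slice linear
    fit of delta(z), uniform box of width box_z around each original z,
    plus a uniform uncorrelated spread of half-width sigma_uncorr.
    """
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    z_new = z + rng.uniform(-box_z / 2, box_z / 2, size=z.size)
    d_new = np.empty_like(delta)
    which = np.clip(np.searchsorted(edges, z, side="right") - 1,
                    0, n_slices - 1)
    for s in range(n_slices):
        m = which == s
        if m.sum() < 2:
            d_new[m] = delta[m]        # too few particles to fit
            continue
        a, b = np.polyfit(z[m], delta[m], 1)   # linear fit per slice
        d_new[m] = (a * z_new[m] + b
                    + rng.uniform(-sigma_uncorr, sigma_uncorr, m.sum()))
    return z_new, d_new

rng = np.random.default_rng(1)
z = rng.normal(0.0, 1e-3, 5000)
delta = 2.0 * z + rng.normal(0.0, 1e-5, 5000)  # toy correlated chirp
z2, d2 = resample_longitudinal(z, delta, 20, 1e-5, 1e-6, rng)
```

The box width plays the role of the longitudinal box size discussed above: a wider box smooths away shot noise in the resampled distribution, while too narrow a box preserves the noise that seeds the microbunching instability.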
Qiang, Ji; Pogorelov, Ilya V. & Ryne, Robert D. Parallel Beam Dynamics Simulation Tools for Future Light Source Linac Modeling, article, June 25, 2007; Berkeley, California. (https://digital.library.unt.edu/ark:/67531/metadc901163/m1/2/), University of North Texas Libraries, UNT Digital Library, https://digital.library.unt.edu; crediting UNT Libraries Government Documents Department.