ing, the Collector attempts to find it from other sources
(Gram, VOMRS, etc.)
OSG is currently deploying the Gratia Collector and Reporting Services on a single central server, located at Fermilab. In future releases, the collector and reporter will also be installed at each OSG Site.
To date, Gratia Probes have been developed for Condor, PBS, LSF, Sun Grid Engine, glexec, and dCache. These probes have been deployed both via RPMs distributed from the OSG web site and via the Virtual Data Toolkit (VDT) cache.
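To illustrate the role a probe plays, the following Python sketch parses one (hypothetical) batch-system accounting line into a usage record and forwards it to a collector endpoint. The log format, the record field names, the COLLECTOR_URL, and the report_usage helper are illustrative assumptions and do not reflect the actual Gratia probe API or record schema.

    import json
    import urllib.request

    # Hypothetical collector endpoint; the real Gratia collector and its
    # record format are not shown here.
    COLLECTOR_URL = "https://gratia-collector.example.org/record"

    def parse_accounting_line(line):
        """Turn one assumed 'user=... vo=... wall=... cpu=...' log line
        into a usage-record dictionary."""
        fields = dict(item.split("=", 1) for item in line.split())
        return {
            "LocalUserId": fields["user"],
            "VOName": fields["vo"],
            "WallDuration": float(fields["wall"]),
            "CpuDuration": float(fields["cpu"]),
        }

    def report_usage(record):
        """POST the record to the collector; fields the probe cannot supply
        would be filled in from other sources, as described above."""
        data = json.dumps(record).encode("utf-8")
        req = urllib.request.Request(
            COLLECTOR_URL, data=data,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return resp.status

    if __name__ == "__main__":
        sample = "user=alice vo=cdf wall=3600 cpu=3400"
        print(report_usage(parse_accounting_line(sample)))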
6 USER DISK SPACE AND MASS STORAGE
FermiGrid supplies shared data storage and access services to user jobs, providing two pools of data storage for general usage. The first is 14TB of disk space, served by an NFS server appliance, for grid users to install software, stage input data, and write output data. This space is mounted on most of the worker nodes and uses quotas to restrict the amount of storage accessible to a user. In addition to this NFS disk, there is 6TB of disk spread across 5 storage servers. This disk is managed by the dCache system; it does not use quotas but instead implements a least-recently-used cache management policy. The external interface to this storage resource is either direct, via GridFTP, or through the Storage Resource Manager (SRM) [16] interface. SRM provides dynamic space allocation and file management functionality on local and remote data storage systems, using grid credentials for data access.
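The least-recently-used eviction behavior of the dCache pool can be pictured with a small sketch. The class below is a conceptual illustration of the policy described above, not dCache's actual implementation; the capacity and file sizes are arbitrary.

    from collections import OrderedDict

    class LRUPool:
        """Toy model of a quota-less cache pool that evicts the least
        recently used files when capacity is exceeded (illustrative only)."""

        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.files = OrderedDict()  # path -> size, oldest access first

        def access(self, path):
            """Reading a file marks it as most recently used."""
            if path in self.files:
                self.files.move_to_end(path)

        def write(self, path, size):
            """Writing a file makes it most recently used and may evict
            the files that have gone longest without access."""
            self.used += size - self.files.get(path, 0)
            self.files[path] = size
            self.files.move_to_end(path)
            while self.used > self.capacity and len(self.files) > 1:
                old_path, old_size = self.files.popitem(last=False)
                self.used -= old_size  # evict least recently used file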
The FermiGrid dCache uses gPlazma [17] to interface to the GUMS service, which maps user DNs to local UIDs for file ownership. The same DN-to-UID mapping schemes described earlier are available here: many-to-one, one-to-one, and one-to-self. As noted earlier, in the many-to-one mapping all members of a VO have access to data written by any other member, so data can be easily shared within the VO. The same is true of the one-to-one mapping scheme if the owner of the file enables group read permissions.
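A rough sketch of the three mapping schemes follows. The account names, the pool-account table, and the CN-based derivation of a personal account are invented purely for illustration and are not the actual GUMS/gPlazma configuration.

    # Illustrative-only mapping of a grid DN to a local account under the
    # three schemes described above.

    # Hypothetical pre-provisioned pool accounts for the one-to-one scheme.
    POOL_ACCOUNTS = {
        "/DC=org/DC=doegrids/OU=People/CN=Alice": "cdf001",
        "/DC=org/DC=doegrids/OU=People/CN=Bob": "cdf002",
    }

    def map_dn(dn, vo, scheme):
        if scheme == "many-to-one":
            # Every member of the VO shares a single group account, so files
            # written by one member are readable by all of them.
            return vo
        if scheme == "one-to-one":
            # Each DN gets its own pool account; sharing requires the owner
            # to enable group read permissions.
            return POOL_ACCOUNTS[dn]
        if scheme == "one-to-self":
            # The DN maps to the user's own local account (assumed here to
            # be derivable from the CN, purely for illustration).
            return dn.rsplit("CN=", 1)[1].lower()
        raise ValueError("unknown mapping scheme: " + scheme)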
Authorized users may have access from their jobs to the Fermilab tape storage system. In general, VOs not directly associated with Fermilab do not have access to the tape-backed storage areas; however, this can be arranged upon special request.
FermiGrid provides a set of standard data locations, referenced by environment variables, so that users can access their data and applications in a consistent manner. The same is true for all OSG sites.
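For example, a job script might locate its application and data areas through site-provided variables. The variable names used below ($OSG_APP, $OSG_DATA, $OSG_WN_TMP) follow common OSG conventions and are shown as an assumption, not a FermiGrid-specific guarantee; the VO and file paths are invented.

    import os

    # Resolve the standard data locations from the environment (variable
    # names assumed to follow the usual OSG conventions).
    app_dir = os.environ["OSG_APP"]      # shared software installation area
    data_dir = os.environ["OSG_DATA"]    # shared area for staged input/output
    scratch = os.environ.get("OSG_WN_TMP", "/tmp")  # local worker-node scratch

    input_file = os.path.join(data_dir, "myvo", "run42", "input.dat")
    print("Reading", input_file, "with software from", app_dir)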
FermiGrid does not supply any tools to aid users in managing data collections.
7 WORKER NODES
As of February 2007, there are ~4,450 CPUs at Fermilab that are available for use by OSG members. The breakdown by Fermilab experiment is shown in Figure 3.

Experiment | Gatekeeper          | CPUs                 | RAM | Disk
CMS        | cmsosgce            | 700 dual & dual core | 4GB | 250GB
CDF        | fcdfosg1, fcdfosg2  | 520 dual core, dual  | 4GB | 250GB
D-Zero     | docabosg2           | 200 dual             | 2GB | 250GB
GP Farm    | fngp-osg            | 220 various          | 2GB | 250GB

Figure 3: Fermilab CPUs Available to the OSG
[Figure 4: Weekly Job Submissions to FermiGrid Gatekeepers (Unknown: submitted with vanilla grid proxies). The figure tabulates the number of jobs submitted by each VO during the week of Feb 5-12, 2007, broken down by gatekeeper: FNAL_CDFOSG_1, FNAL_CDFOSG_2, FNAL_DZEROOSG_2, FNAL_FERMIGRID, FNAL_GPFARM, and USCMS-FNAL-WC1-CE.]
8 OPERATIONAL EXPERIENCE
Operational experience with FermiGrid has been very good. The FermiGrid services are being used by several experiment clusters and allow experiment support personnel to direct their effort to supporting their cluster rather than the middleware services. FermiGrid has also fostered the opportunistic use of idle cycles on the experiment clusters by multiple VOs (Figure 4).
The personnel required to integrate and operate the