Computing Division at Fermilab

Grid Projects at Fermilab

Fermilab is actively participating in the development and deployment of grid technology for high energy physics research. We are involved in a variety of grid projects: some support CDF and D0 Run II data handling and other current research at the lab; others prepare for the physics that will come from the LHC at CERN in a few years. These grid projects are collaborations of scientific and computing professionals from participating labs, universities and other organizations throughout the U.S., Europe and Asia.

For some introductory information on grid technology, see Overview of Grid Computing.

On this page: dCache | FermiGrid | Grid2003 | GriPhyN | interactions.org | iVDGL | OSG | PPDG | SAMGrid | SDSS-GriPhyN | SRM | USCMS S&C | VOX | VO Privilege | WAWG

1/27/05: Condor tutorial slides are on the web at http://www.cs.wisc.edu/condor/tutorials/fermi-2005/ .

dCache
The goal of this project is to provide a system for storing and retrieving huge amounts of data, distributed among a large number of heterogeneous server nodes, under a single virtual filesystem tree with a variety of standard access methods.
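The single-namespace idea can be sketched in toy form: a catalog maps each logical path in one virtual tree onto the physical node and file that actually hold the data. The class and method names below are hypothetical illustrations, not dCache's real API (which exposes the tree through standard protocols such as dcap).

```python
# Toy sketch of a dCache-style virtual namespace (illustrative only;
# real dCache presents the tree through access protocols, not this API).

class VirtualNamespace:
    """Maps logical paths in a single tree onto files scattered across nodes."""

    def __init__(self):
        self._catalog = {}  # logical path -> (node, physical path)

    def register(self, logical, node, physical):
        """Record where the data behind a logical path actually lives."""
        self._catalog[logical] = (node, physical)

    def locate(self, logical):
        """Resolve a logical path to the (node, physical path) holding it."""
        if logical not in self._catalog:
            raise FileNotFoundError(logical)
        return self._catalog[logical]

ns = VirtualNamespace()
ns.register("/pnfs/fnal.gov/run2/raw/evt001.dat", "pool-node-07", "/data3/a1b2c3")
node, path = ns.locate("/pnfs/fnal.gov/run2/raw/evt001.dat")
```

Users see only the left-hand logical tree; which server node serves the bytes is resolved behind the scenes.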
FermiGrid
The goal of FermiGrid is to make all Fermilab computing facilities able to interoperate and run all types of grid jobs. FermiGrid will also provide a unified grid gateway to the outside grid world.
Grid2003
Grid2003 has deployed an international Data Grid with dozens of sites and thousands of processors. The facility is operated jointly by the U.S. Grid projects iVDGL, GriPhyN and PPDG, and by the U.S. participants in the LHC experiments ATLAS and CMS.
Grid Physics Network (GriPhyN)
The GriPhyN Project is developing Grid technologies for scientific and engineering projects that must collect and analyze distributed, petabyte-scale datasets. GriPhyN research will enable the development of Petascale Virtual Data Grids (PVDGs) through its Virtual Data Toolkit (VDT).
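The "virtual data" concept at the heart of GriPhyN can be sketched in miniature: a derived dataset is described by a recipe and its inputs, and is materialized only when requested, then cached. The function names and tiny datasets below are hypothetical, not part of the Virtual Data Toolkit.

```python
# Toy sketch of virtual data: derived datasets are recreated on demand
# from recipes (illustrative only, not the VDT's actual interface).

cache = {}    # materialized datasets
recipes = {}  # dataset name -> (derivation function, dependency names)

def define(name, func, *deps):
    """Register how a dataset is derived from its dependencies."""
    recipes[name] = (func, deps)

def materialize(name):
    """Return the dataset, deriving it (and its inputs) only if not cached."""
    if name in cache:
        return cache[name]
    func, deps = recipes[name]
    cache[name] = func(*[materialize(d) for d in deps])
    return cache[name]

define("raw", lambda: [3, 1, 2])
define("sorted", sorted, "raw")
define("largest", lambda xs: xs[-1], "sorted")

print(materialize("largest"))  # derives "raw" -> "sorted" -> "largest", prints 3
```

The point is that "largest" need never be stored: as long as its recipe and provenance are recorded, it can be recreated anywhere on the grid.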
interactions.org
This website is a communication resource from the world's particle physics laboratories. The Data Grid can currently be found under its Hot Topics link.
International Virtual Data Grid Laboratory (iVDGL)
iVDGL is an effort to achieve interoperability between the U.S. and the European physics Grid projects. Its computing, storage and networking resources provide a unique laboratory that will test and validate Grid technologies at international and global scales.
Open Science Grid (OSG)
OSG is a consortium of U.S. laboratory and university researchers seeking to establish a U.S. national grid infrastructure for science. The goal of OSG is to iteratively build on and extend existing grids, such as Grid2003, to enable the use of common grid infrastructure and shared resources for the benefit of scientific applications.
Particle Physics Data Grid (PPDG)
PPDG is a collaboratory project committed to developing, acquiring and delivering Grid-enabled tools for the data-intensive requirements of particle and nuclear physics.

Sequential data Access via Meta-data on a Grid (SAMGrid)

SAMGrid is a general data handling system designed to be performant for experiments with large (petabyte-sized) datasets and widely distributed production and analysis facilities.  The components now in production provide a versatile set of services for data transfer, data storage, and process bookkeeping on distributed systems.  Components now in testing add the capability of job submission to a Grid, built around standard Grid middleware from Condor and Globus.
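A core SAM idea, selecting files by their metadata rather than by their location, can be sketched as follows. The catalog schema and query form here are hypothetical illustrations, not SAMGrid's actual interface.

```python
# Toy sketch of metadata-driven file selection, the idea behind SAM
# (schema and query style are illustrative, not SAMGrid's real interface).

file_catalog = [
    {"name": "d0_run167xx_raw.dat", "experiment": "D0", "tier": "raw", "run": 16701},
    {"name": "d0_run167xx_tmb.dat", "experiment": "D0", "tier": "thumbnail", "run": 16701},
    {"name": "cdf_jpsi_strp.dat", "experiment": "CDF", "tier": "stripped", "run": 151000},
]

def dataset(**constraints):
    """Return the names of all files whose metadata match every constraint."""
    return [f["name"] for f in file_catalog
            if all(f.get(k) == v for k, v in constraints.items())]

print(dataset(experiment="D0", tier="raw"))  # -> ['d0_run167xx_raw.dat']
```

A dataset defined this way is a query, not a fixed file list, so the data handling system can fetch whichever replicas are closest when the dataset is consumed.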
Sloan Digital Sky Survey
SDSS-GriPhyN Work Space

GriPhyN is a project to develop technologies around the concept of "virtual data," in which derived datasets can be recreated on demand in a grid computing environment. SDSS is applying these technologies to various analyses of the SDSS dataset, creating derived datasets such as galaxy cluster catalogs for use in studying phenomena such as dark energy.
Storage Resource Manager (SRM) is a Grid middleware layer designed to provide uniform access to mass storage systems (MSS), together with a grid view of site storage resources. It provides a staging disk outside of the MSS that can be shared dynamically by users. SRM enables the dynamic coordination of compute and storage resources, supports storage management for long-lasting simulation and analysis tasks in a grid environment, and manages job recovery from storage system and network failures, facilitating uninterrupted operation. SRM is an international collaboration among EDG WP2, EDG WP5, FNAL, JLab, and LBNL.
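The shared staging disk can be pictured as a small cache in front of the mass storage system: files are staged in on request and evicted when space runs out. The class below is a toy least-recently-used policy chosen for illustration; the sizes, names, and eviction rule are assumptions, not SRM's specification.

```python
# Toy sketch of an SRM-style staging disk: files are staged out of mass
# storage into a limited shared disk and evicted least-recently-used when
# space runs out (policy and sizes are illustrative, not SRM's spec).

from collections import OrderedDict

class StagingDisk:
    def __init__(self, capacity, mss):
        self.capacity = capacity     # total staging space available
        self.mss = mss               # file name -> size, the tape backend
        self.staged = OrderedDict()  # file name -> size, in LRU order

    def stage(self, name):
        """Bring a file onto the staging disk, evicting LRU files if needed."""
        if name in self.staged:
            self.staged.move_to_end(name)    # mark as recently used
            return
        size = self.mss[name]
        while sum(self.staged.values()) + size > self.capacity:
            self.staged.popitem(last=False)  # evict least recently used
        self.staged[name] = size

disk = StagingDisk(capacity=100, mss={"a": 60, "b": 50, "c": 30})
disk.stage("a")
disk.stage("b")  # 60 + 50 exceeds capacity, so "a" is evicted
disk.stage("c")  # fits beside "b" (50 + 30 <= 100)
```

Because the staging disk is managed dynamically rather than allocated per user, many grid jobs can share one pool of fast disk in front of the tape system.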
USCMS Software and Computing
Fermilab is the host lab for the U.S.-based participants in the CMS experiment at CERN's LHC. US CMS has simulated over 14 million grid-enabled, production-quality proton-proton collision events inside the CMS detector using CMS-MOP. The CMS-MOP scripts use grid tools and services developed by GriPhyN, the Globus Alliance, and others.
Virtual Organization Membership Service eXtension Project (VOX), sponsored by US CMS, SDSS and iVDGL and conducted at Fermilab, is a project to investigate and implement the requirements for admitting collaborators into a VO, and for facilitating and monitoring their authorization to access grid resources. VOX is an extension to EDG-VOMS, an authorization system for Virtual Organizations (VOs). VOX provides a VO user registration service (VOMRS), a Site AuthoriZation service (SAZ) and a Local Resource Authorization Service (LRAS).
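The registration-and-authorization flow can be sketched as a two-step mapping: a user's grid certificate subject (DN) is looked up in the VO's member registry, and the member's role determines which local account the site maps them to. All names, DNs, and roles below are hypothetical illustrations, not the VOMRS/SAZ/LRAS interfaces.

```python
# Toy sketch of VO-based authorization: a certificate DN is checked against
# the VO registry, and the member's role selects a local account
# (all registries, DNs, and roles here are hypothetical).

vo_registry = {  # DN -> role within the VO (maintained via registration)
    "/DC=org/DC=doegrids/OU=People/CN=A. Physicist": "cms-production",
    "/DC=org/DC=doegrids/OU=People/CN=B. Student": "cms-user",
}

role_to_account = {  # role -> local account at the site
    "cms-production": "cmsprod",
    "cms-user": "cmsuser",
}

def authorize(dn):
    """Admit a DN only if it is a registered VO member with a mapped role."""
    role = vo_registry.get(dn)
    if role is None:
        raise PermissionError("not a registered VO member: " + dn)
    return role_to_account[role]

print(authorize("/DC=org/DC=doegrids/OU=People/CN=B. Student"))  # -> cmsuser
```

Keeping the DN-to-role registry at the VO and the role-to-account mapping at the site is what lets each site retain control of its own accounts while the VO manages its own membership.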

VO Privilege Project
US CMS is sponsoring this effort at Fermilab to develop and implement fine-grained authorization for access to grid-enabled resources and services in order to improve user account assignment and management at grid sites, and reduce the associated administrative overhead. It is a two-stage project that involves building, implementing and integrating elements within the grid authorization architecture developed by the Grid2003 team.
Wide Area Working Group (WAWG) is a working group in CD/CCF tasked with studying methods for inter-lab data communication given the data handling and grid-enabled systems being developed and implemented at Fermilab.

For assistance contact helpdesk@fnal.gov.
Information compiled and maintained by AH of the CD web group; last modified on 1/17/06.
(Address comments about page to cdweb@fnal.gov.)
Security, Privacy, Legal | Fermi National Accelerator Laboratory