Abstract: Web archiving is the process of storing and maintaining Internet resources to preserve them as a historical, informational, legal, or evidential record. In recent years, it has become an increasingly common practice in organizations around the world. Many state and federal archives, agencies, and universities in the United States have begun archiving the web, usually to create subject-specific collections of websites that complement their existing collections.

The concept of quality is central to creating a web archive. Ideally, an archived website would be identical in every way to the original, but many factors make this impossible in practice. Instead, web archivists focus on comparing the archived site to the live site (if still available), and on answering specific questions such as:

1) Is content missing from the archived site? For example, are there pages or entire subdomains that should have been captured?
2) Does the archived site's appearance resemble the original?
3) Can media content such as audio and video be played back?
4) How deeply can a user navigate within the archived site?
5) Do scripts in the archived site function correctly?

When checking quality, web archivists look to see whether an archived resource is "good enough." In this session, Brenda will cover how a web archivist can answer the above questions about an archived site. Several examples of archived sites that can be deemed "low," "medium," and "high" quality will be presented. Different institutions also have different processes in place for ensuring that their web archives are of high quality. Brenda will compare and contrast three different approaches to the Quality Assurance (QA) process: the one taken by Archive-It (currently the most popular web archiving service), the one in place at the University of North Texas Libraries, and the one used by the Internet Archive during its mass crawls of national domains.
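As a concrete illustration of the first question above (is content missing?), here is a minimal sketch, not part of the talk itself, of how counts of content-bearing tags on a live page might be compared against an archived capture. It assumes the capture is reachable through the Internet Archive's Wayback Machine; the example URL is hypothetical, and the Wayback replay banner injects extra scripts into archived pages, so small differences in script counts are expected.

```python
"""
Rough "is content missing?" check: compare counts of links, images,
scripts, and media tags on a live page against an archived capture.

Assumptions (not from the talk): the archived copy is reachable via a
Wayback-style URL, and tag counts are a usable proxy for missing
content. Real QA workflows are considerably more involved.
"""
import urllib.request
from collections import Counter
from html.parser import HTMLParser


class TagCounter(HTMLParser):
    """Counts occurrences of tags that typically embed or link content."""

    TRACKED = {"a", "img", "script", "audio", "video", "iframe"}

    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        if tag in self.TRACKED:
            self.counts[tag] += 1


def tag_counts(url):
    """Fetch a page and return counts of content-bearing tags."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = TagCounter()
    parser.feed(html)
    return parser.counts


def compare(live_url, archived_url):
    """Print a side-by-side comparison of tag counts for a quick QA scan."""
    live = tag_counts(live_url)
    archived = tag_counts(archived_url)
    for tag in sorted(TagCounter.TRACKED):
        print(f"{tag:>8}: live={live[tag]:4d}  archived={archived[tag]:4d}")


if __name__ == "__main__":
    # Hypothetical example: the Wayback Machine redirects a partial
    # timestamp ("2024") to the nearest capture of the requested URL.
    compare(
        "https://example.com/",
        "https://web.archive.org/web/2024/https://example.com/",
    )
```

A large deficit in `img` or `iframe` counts on the archived side is the kind of signal that would prompt an archivist to inspect the capture manually; it cannot by itself answer the appearance, playback, or script-functionality questions above.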