Efficient Energy Saving Scheme for On-Chip Caches
Abstract- With the reduction in feature size, the static power component, i.e., leakage power, has come to dominate the dynamic power consumption of on-chip caches. It has been observed that not all cache lines need to be kept active at all times: during a given window of time, only a few lines of the footprint need to be actively powered, i.e., those accessed during that time. Earlier research has addressed the issue of how to
determine the set of active lines and how long to keep them active
(powered). Circuit techniques have also been developed to keep a cache line in a low-leakage state, i.e., the drowsy state, when the line is not being accessed. Such a cache is called a drowsy cache.
These circuit techniques try to achieve maximum reduction in the
leakage power without losing the information content and with
minimal performance penalty associated with power transitions.
These techniques, when combined with an optimal switching scheme that decides when and which lines to drowse, result in the maximum reduction in energy consumed. In this paper, we study cache access patterns to evaluate switching schemes and arrive at an optimal scheme to implement the drowsy cache. We achieve an energy reduction that is, on average, 88% of the maximum gain achievable through the underlying circuit technique. We also compare the performance of our scheme with earlier proposed schemes and show that we can achieve up to 6% higher savings in cache energy for the benchmarks studied (with an average of 4% across all benchmarks, equally weighted) without any additional performance penalty.
I. INTRODUCTION

Reducing power, and hence energy, is one of the important goals of today's embedded processor designers. Even for processors targeting desktops and servers, cache power reduction is important in terms of reducing the heating problem. Until recently, dynamic power, i.e., the power spent switching transistors, was the major contributor to power consumption in caches. As cache sizes increase and feature sizes decrease, the static power, i.e., the leakage component, starts to dominate the energy equation. Hence, by reducing the static power, considerable improvement in energy usage can be achieved.
Various circuit techniques have been researched to achieve
static power reduction: gated-Vdd, ABB-MTCMOS, and dynamic Vdd scaling (DVS), to name a few. Among these techniques, DVS achieves the twin targets of not losing the cache state and reducing the leakage power. Moreover, the penalty to transition from the low-power (drowsy) state to the normal state is minimal, both in terms of time and energy. Hence, the DVS scheme is claimed to achieve the maximum static power reduction among these techniques. Since cache memory
occupies a significant portion of a processor chip and consumes
almost 20% or more power, several suggestions have been
made to save power in cache memory operation. A cache line
or block is the smallest replaceable unit in the cache. In one suggested scheme, the power to individual cache lines is controlled: the power to a cache line is turned on when the line is accessed. A cache line can thus be in one of two states: active, where power to the line is already on and no performance penalty is incurred when the line is accessed; and drowsy, where power to the line is off and a performance penalty, whose value is determined by the underlying circuit technique, is incurred when a line in this state is accessed. The instant at which a line must transition from the drowsy to the active state is clear, namely, when the line is accessed. However, it is not clear when the reverse transition should occur, i.e., how long the power must be kept on. In one proposed scheme, power to all cache lines is turned off periodically. An interval of 2000 cycles was shown to be effective if such a scheme
were to be used. In this paper, we study cache access patterns to evaluate switching schemes and arrive at an optimal scheme to implement the drowsy cache. We compare the performance of our scheme with earlier proposed schemes and show that we can achieve up to 6% higher savings in cache energy without any additional performance penalty.
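As an illustration of the periodic switching policy just described, the following sketch simulates a cache whose lines are all put into the drowsy state at every window boundary, with an access to a drowsy line paying a fixed wake-up penalty. The function name, the trace format, and the one-cycle penalty are our own illustrative assumptions, not the paper's implementation.

```python
WINDOW = 2000      # drowse interval in cycles (the value reported effective)
WAKE_PENALTY = 1   # assumed cycles to wake a drowsy line (circuit-dependent)

def simulate(accesses, num_lines):
    """accesses: iterable of (cycle, line) pairs in increasing cycle order.
    Returns (wakeups, total_penalty_cycles) under the periodic policy."""
    drowsy = [True] * num_lines        # all lines start in the drowsy state
    next_drowse = WINDOW
    wakeups = 0
    for cycle, line in accesses:
        while cycle >= next_drowse:    # window boundary reached:
            drowsy = [True] * num_lines  # drowse every line, regardless of use
            next_drowse += WINDOW
        if drowsy[line]:               # access to a drowsy line pays the
            wakeups += 1               # drowsy-to-active transition penalty
            drowsy[line] = False       # the line is now active
    return wakeups, wakeups * WAKE_PENALTY
```

A line accessed twice within one window pays the penalty only once; the same line touched again in the next window pays it again, which is the cost the choice of window length trades against leakage savings.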
A. Energy vs. Time Tradeoff Model
Any scheme that uses one of these techniques trades off
performance for power reduction. Reduction in performance
increases the execution time of an application and thus the
whole system runs for a longer duration. This performance
reduction may lead to increased energy consumption if the
tradeoff is not studied carefully. Let us consider a hypothetical
system consisting of two components X and Y to understand
the tradeoff and the real gain achieved by any energy reduction
scheme. Suppose performance is traded off to reduce the
power consumption of component Y. The power consumption
of component X is not changed. In other words, component X represents the part of the chip whose power consumption remains unchanged when a particular scheme is deployed, and component Y is the part whose power consumption is reduced. We use the following notation to derive the tradeoff:
E ≜ Total energy of the system
P ≜ Total power consumed (= Px + Py)
Py ≜ Power consumption of component Y (= y · P)
Px ≜ Power consumption of component X (= (1 − y) · P)
T ≜ Time of execution without performance reduction
q ≜ Additive power reduction factor
p ≜ Additive performance reduction factor
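With this notation, the net effect of a scheme can be checked with a short calculation. The sketch below assumes, purely for illustration, that a scheme scales component Y's power by (1 − q) and stretches execution time by (1 + p); exactly how the paper composes its additive factors may differ, so treat this as a hypothetical reading of the model.

```python
def energy(P, T, y=0.0, q=0.0, p=0.0):
    """Total energy of the two-component system under a scheme that
    reduces Y's power by factor q and stretches execution time by p.
    P: total power, T: baseline execution time, y: Y's share of P."""
    Px = (1.0 - y) * P            # component X: power unchanged
    Py = y * P * (1.0 - q)        # component Y: power reduced by q
    return (Px + Py) * T * (1.0 + p)  # longer run time multiplies both

# Example: Y draws 30% of total power, its power drops 90%,
# and the run time grows by 2%.
E_base = energy(100.0, 1.0)
E_new = energy(100.0, 1.0, y=0.3, q=0.9, p=0.02)
saving = 1.0 - E_new / E_base
```

The example makes the tradeoff concrete: the performance penalty p inflates the energy of component X as well, so a scheme only pays off when the reduction y · q in Y's share outweighs the (1 + p) stretch applied to the whole system.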
Gomathisankaran, Mahadevan & Somani, Arun. Efficient Energy Saving Scheme for On-Chip Caches, paper, 2002; University of North Texas Libraries, Digital Library (digital.library.unt.edu/ark:/67531/metadc94293/m1/1/); crediting UNT College of Engineering.