U.S. Department of Energy - Energy Efficiency and Renewable Energy
Advanced Manufacturing Office
Issue Focus: Meeting Data Center Energy Challenges
Energy Matters was a quarterly newsletter for DOE's Advanced Manufacturing Office (AMO). It provided in-depth technical articles to help industry professionals save energy, reduce costs, and increase productivity. These archived issues may contain broken links or information that is no longer accessible. Some of the following documents are available as Adobe Acrobat PDFs.
This issue of Energy Matters focuses on energy challenges arising from the growth in data centers, and opportunities to curb energy use of servers, power, and cooling infrastructure. Discover how ITP aims to reduce data center energy use by 10% by 2011. Learn about five ways to tackle data center efficiency at the server level, and read about the results of the 2008 Data Center Demonstration project. Our article about the Industrial Assessment Centers tells the story of this long-standing, university-based program. And, our featured Energy Expert fields questions to help improve efficiency in industrial steam systems.
In This Issue
Data center operators are implementing energy-efficient measures in the face of rapidly escalating energy consumption.
Saving energy in data centers is critical in the face of rapidly growing information technology demands. Through Save Energy Now, DOE's Industrial Technologies Program is developing resources to help data center operators identify opportunities to increase capacity and reliability, save energy and costs, and reduce environmental impacts.
With the advent of digital information technology, who could have predicted the explosive growth of computers in our society? Servers and data centers are now central to the operation of our economy, processing vast volumes of digital information to meet global business demands.
But managing escalating information technology (IT) demand comes at a cost—rapidly increasing energy use for power, storage, and cooling needs. In 2006, data centers used 61 billion kWh of electricity, or 1.5% of all U.S. electricity consumption—double the amount used in 2000. The U.S. Environmental Protection Agency (EPA) predicts that national energy consumption by servers and data centers could double again by 2011 to more than 100 billion kWh, or an annual electricity cost of $7.4 billion. This surge in electricity use increases costs, emissions, burden on the power grid, and capital costs for construction of new data centers. However, by taking steps to measure data center energy use and apply best energy management and design practices, operators can slow this growth in energy consumption.
I think there is a world market for maybe five computers.
Thomas Watson, chairman of IBM, 1943.
Turning Challenges into Energy-Saving Opportunities
The unprecedented growth of data centers presents an enormous opportunity to examine energy use at both the system and individual component level, which could lead to changes in the way we power and cool data centers. This includes R&D and application of energy-efficient technologies for servers, storage, network equipment, and site infrastructure. In addition, public and private sector collaboration is essential in assessing current performance, demonstrating best practices, developing new approaches and technologies, implementing incentive programs, and developing voluntary industry certification programs to influence energy use in data centers.
640K (of memory) ought to be enough for anybody.
Bill Gates, 1981
Increasingly, companies, utilities, and government agencies are focusing efforts to improve data center performance. To assist with these efforts, DOE's Save Energy Now has assembled a comprehensive program of energy assessments, metrics, software tools, training and partnerships.
Developing Energy-Efficient Solutions for Data Centers
Save Energy Now aims to reduce energy use in U.S. data centers by 10% by 2011, building on the program's success in the manufacturing sector. To meet this ambitious goal, Save Energy Now's strategy focuses on the following activities to help companies reduce energy use:
- Energy assessment protocols and methodologies for data centers to pinpoint energy and cost savings opportunities. Save Energy Now is assisting industry with techniques and software tools that use uniform metrics and a systems approach to identify near- and long-term savings opportunities. Read a new assessment summary (PDF 303 KB) on the assessment at Lucasfilm, which identified annual energy savings opportunities of more than 3 million kWh and cost savings of more than $343,000 per year.
- Metrics to benchmark and track overall data center energy intensity, including IT and infrastructure energy use, cost, Btu, and carbon emissions.
- Data Center Energy Profiler (DC Pro) software is a key energy assessment tool. DC Pro measures how energy is being used in the data center, identifies potential energy and cost savings, and provides a comparison to other data centers' energy use. The tool provides an overview of a company's energy purchases, data center energy use, savings potential, and a list of specific actions you can investigate to realize these savings.
- Qualified Specialists program to certify data center efficiency experts to assist data centers with energy assessments. Learn more about DOE's Industrial Technologies Program (ITP) Qualified Specialists.
- Training curriculum for data center personnel. DOE will work with industry to develop an awareness training curriculum to help data center professionals assess performance, identify improvement opportunities and implement best energy management and design practices.
"The DOE Save Energy Now program is working with industry to drive a 10% improvement in data center energy efficiency by 2011," says Paul Scheihing, ITP Technology Manager. "We are developing a comprehensive tool suite that will be backed by training so that the market can more effectively reduce the total cost of ownership for data centers using a systems and facility-wide energy management approach."
Sign up to receive updates and information about ITP's efforts to address energy efficiency in data centers.
Working Together to Meet Challenges
Developing public/private partnerships is vital to accelerating industry adoption of technologies and best energy management practices in data centers. In 2007, DOE teamed up with The Green Grid™, a nonprofit group of IT companies, equipment manufacturers, data center operators, and designers. This collaboration aims to increase efficiency of data centers by implementing energy management programs, adopting clean energy technologies, and promoting continual efficiency improvements. Learn about the partnership.
Furthermore, DOE and the Environmental Protection Agency (EPA) joined forces in the National Data Center Energy Efficiency Information Program (PDF 225 KB), which integrates activities from the Save Energy Now initiative, Federal Energy Management Program (FEMP), and the ENERGY STAR program. This voluntary program will also engage industry stakeholders to develop and disseminate technical and information tools to industry.
To help launch this partnership, DOE and EPA held the National Data Center Energy Efficiency Strategy workshop on July 8, 2008, in Redmond, Washington. More than 150 industry and government leaders met to identify next steps for collaborating to improve efficiency in data centers. The resulting report, Energy Efficiency in Data Centers: Recommendations for Government-Industry Coordination (PDF 320 KB), details the discussions and recommendations for leveraging government, industry, utility, and other stakeholder activities. It also includes a series of papers highlighting current trends in data center energy efficiency. Learn more and view presentations from the workshop.
Strategies to Reduce Energy Demand
Read the related article, "Applying Solutions to Real-World Data Centers", which describes the 2008 Data Center Demonstration project. In this endeavor, a team of technology partners and operators tested energy-saving technologies and best practices in real-world commercial data centers. Learn about the technology initiatives and results.
Growing energy demand to power U.S. data centers means rising costs for businesses, strain on the nation's power grid, and increased environmental emissions. Many organizations are taking steps to reduce energy use, increase capacity, and enhance reliability by adopting energy-efficient technologies, practices, and standards. Through a strategy of partnerships, assessment tools, metrics development, and training, DOE's Save Energy Now can help data centers meet these challenges.
The 2008 Data Center Demonstration project showcased energy-saving technologies in real-world data centers.
Practical application of technologies can demonstrate concrete solutions for saving energy in U.S. data centers. The Silicon Valley Leadership Group did just that, by assembling a team of technology partners and data center operators to showcase energy-saving technologies in real-world commercial data centers.
The 2008 Data Center Demonstration project is the first of its kind to apply actual technologies and best practices in many areas at once to real-world commercial data center operations, IT equipment, and site infrastructure. Facilitated by the Silicon Valley Leadership Group, the project's goal was to demonstrate emerging and available technologies for data center operations; to determine if the improvements correlate with the energy projections in the Environmental Protection Agency's (EPA) report to Congress; and to encourage adoption of these technologies. The resulting Data Center Energy Forecast report shows that data centers can save up to 55% of energy costs by implementing technologies available today.
Evaluating Energy-Saving Scenarios
In 2007, the EPA released a report assessing energy efficiency opportunities for U.S. data centers. The Report to Congress on Server and Data Center Energy Efficiency (PDF 2.5 MB) outlines the energy use of U.S. data centers, describes opportunities and challenges in making data centers more energy efficient, and makes recommendations for near- and long-term improvements.
The report presents three scenarios for potential electricity savings based on different levels of technology and best practices implementation:
- "Improved operation" includes improvements that are no or low cost, such as eliminating unused servers, enabling power management on servers, and improving airflow. Potential electricity savings = more than 20%
- "Best practice" includes adopting energy-efficient servers, moderately consolidating servers and storage, and boosting site infrastructure improvements by 70% (transformers, UPS, chillers, fans, pumps). Potential electricity savings = up to 45%
- "State of the art" includes aggressively consolidating servers and storage; enabling power management at the data center level of applications, servers, and equipment; and improving site infrastructure efficiency by up to 80%, including the use of liquid cooling and combined heat and power. Potential electricity savings = up to 55%.
The chart below compares projected electricity use under current trends against the different energy efficiency scenarios from 2007 through 2011.
Source: The Report to Congress on Server and Data Center Energy Efficiency, 2007
Testing in Real-World Data Centers
The Data Center Demonstration project was initiated by the Silicon Valley Leadership Group, with participation by Lawrence Berkeley National Laboratory (LBNL) and support from the California Energy Commission. The project implemented 11 technology initiatives suggested in the EPA report, focusing on operations such as cooling, power distribution, consolidation, and optimization, and yielded 17 case studies. Read about the technologies and resulting energy, cost, and emissions savings in the Data Center Energy Forecast report.
A select group of data center operators hosted the technology initiatives within their facilities, and documented the energy savings and performance results. Partners and sponsors supplied equipment, technical expertise, and funding for the demonstrations. LBNL, DOE's principal research laboratory on data center efficiency, provided technical expertise and evaluation in one of the major demonstrations involving assessment of modular cooling systems, termed the "chill-off".
William Tschudi, LBNL program manager for data center energy efficiency, emphasizes the value of the project: "This series of demonstration projects illustrates very well that the opportunity for efficiency improvement lies in all areas of the computing and infrastructure chain. It also represents an unprecedented commitment from the companies that participated since they essentially self-selected the technologies to demonstrate, funded the implementation of the projects, and were very open in sharing results. The results show that projections in the EPA report to Congress are attainable and this is giving the industry added incentive to make changes."
Visit LBNL's Data Center Energy Management Web site.
Knowledge is Power
The project presents a wide range of technology case studies to help companies identify solutions to reduce energy use in their data centers. The studies conclude that best practice levels defined in the EPA report are achievable in all types of data centers. Future demonstration projects will implement novel IT and advanced cooling and power distribution technologies. DOE's Save Energy Now will help inform data center operators about the results of the project so that the solutions demonstrated in the Silicon Valley Leadership Group's project can be more widely deployed.
Reprinted from The Green Grid
It is not necessary to invest in large-scale hardware refresh programs or consolidation exercises to improve energy efficiency of data center servers. Identifying energy wasters, employing power saving features, right-sizing, powering down underutilized servers, and decommissioning legacy servers all help to reduce energy use. This article presents five ways to cut server power use by adjusting the way servers are running.
The Green Grid™ is a global consortium dedicated to advancing energy efficiency in data centers and business computing ecosystems. In furtherance of its mission, The Green Grid is focused on:
- Defining meaningful, user-centric models and metrics
- Developing standards, measurement methods, processes and new technologies to improve data center performance against the defined metrics
- Promoting the adoption of energy efficient standards, processes, measurements and technologies.
The Green Grid comprises an interactive body of members who share and improve current best practices for data center efficiency. Its scope includes collaboration with end users and government organizations worldwide to ensure that organizational goals are aligned between developers and users of data center technology. All interested parties are encouraged to join and become active participants.
The Green Grid Board of Directors comprises the following member companies: AMD, APC, Dell, HP, IBM, Intel, Microsoft, Rackable Systems, Sun Microsystems and VMware.
Reducing energy use at the point of consumption (the server) provides benefits at all other levels by reducing load on power and cooling facilities which in turn reduces their own energy use.
Installed servers in data centers today are mainly x86 commodity servers. These servers consume much of the power allocated to IT server equipment, and present the largest opportunity for saving power in the data center. A significant reduction in energy usage can be realized if data center professionals reconsider the mindset that all servers need to be powered on at all times.
The conventional wisdom is that servers must be kept running 24/7/52 because restarting them poses a potential downtime risk. However, research suggests that this perception is false.
In a series of laboratory tests over a 5-month period, 123 servers were restarted several times daily by disconnecting and reconnecting power using an automated power strip outlet. Out of 18,826 restarts, not a single component failure occurred.
By utilizing scripting and systems management tools such as Wake-on-LAN capabilities, many organizations can implement key energy-saving processes without impacting operations, capital budgets, or system reliability.
Listed below are five recommendations that will help data center professionals reduce their overall data center energy consumption by making changes at the server level.
1. Identify the Culprits
To understand how applying new practices will affect energy consumption, it is necessary to first identify and document all running servers within the data center, determine their business purpose, and measure their power consumption. Organizations do not typically measure power consumption on a per-server basis; however, it is possible to generate estimates without much difficulty.
The latest generation of servers features built-in power monitoring via out-of-band management capabilities, but the vast majority of currently installed (older) servers lack this ability. Therefore, other measurement methods must be used.
It is possible to instrument the power delivery infrastructure (e.g., 'smart' power strips) to monitor power usage for each server in real time and provide accurate power usage statistics. Be aware, however, that this requires investment in new hardware, impacts operations during installation, and adds overhead to implement and monitor the solution.
A low-cost, low-disruption method bases power usage calculations on a server's CPU utilization. A study that compared power consumption across thousands of servers with differing workloads concluded that power consumption tracks very closely with utilization. Therefore, this single metric can serve as a relatively accurate estimate of power consumption.
Internal disks spin and draw power all the time; the only additional power they draw when being accessed is to move the read/write head. The dynamic differential between idle and fully utilized is only around 30% of the disks' power draw, and as a fraction of overall system power that is negligible.
Memory is constantly being refreshed and drawing power regardless of it being read or written to. The change in memory power draw with use is also not significant when taken as a fraction of overall system power use.
Most I/O and memory use also comes with some CPU activity, since the CPU is used to manage and monitor the progress of the task, and as such, disk and memory use correlates to CPU utilization.
The CPU varies dramatically in its power draw, since the architecture has been optimized to shut down when in idle states. As such, it is unique in being the only component of the system that has a marked effect on system level power draw based on its utilization. The figure below illustrates a model where server power consumption scales linearly with CPU utilization.
Figure 1. CPU Utilization to Power Consumption
Most servers are already collecting CPU utilization information via systems management software; however, few organizations make use of this data other than for capacity planning. By taking average CPU utilization over a defined period of time, it is possible to calculate an estimate of the power consumed for that period.
Since our model scales linearly from idle to maximum utilization, once we know the power draw of a server at peak usage and at idle it is simple to estimate power usage at any utilization rate.
Until recently, the only power figure published for servers was the power supply rating, which is typically much higher than the power actually consumed. However, server manufacturers are now publishing actual power utilization figures for current models at idle and at maximum utilization. This is being driven by adoption of the ASHRAE Thermal Guidelines, under which manufacturers report power ratings for minimum, typical, and full configurations.
Most organizations standardize server specifications, so a limited number of differing server models is likely to exist at any particular site. Measuring the power consumption of a single server of each type at full load and at idle is therefore neither complicated nor time consuming, and will provide sufficient accuracy to make informed decisions.
Once these figures are available, an estimate of power consumption (P) at any specific CPU utilization (n%) can be calculated using the following formula:
Pn = (Pmax – Pidle) * n/100 + Pidle
If a server has a maximum power draw of 300 Watts (W) and an idle power draw of 200W, then at 5% utilization the power draw would approximate to:
Power Utilization at 5%
= (300 - 200) * 5/100 + 200
= 100 * 0.05 + 200
= 5 + 200 = 205W
If the server was running at that average utilization for a 24-hour period, then the energy usage would equate to the following:
205W * 24 hours = 4,920 watt-hours (Wh) = 4.92 kilowatt-hours (kWh)
Through empirical measurement of various servers using a power analyzer, this approximation has proven to be accurate to within ±5% across all CPU utilization rates.
A baseline of current power usage throughout the data center can be created by adding up the power usage for all the servers in the data center. This data can then inform later decisions regarding which changes will have the most positive impact on overall server power usage.
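The linear model and worked example above can be sketched in a short script. This is an illustrative sketch only; the server names and wattage figures below are hypothetical, not drawn from the article:

```python
def estimate_power_w(p_idle_w, p_max_w, cpu_util_pct):
    """Linear power model: Pn = (Pmax - Pidle) * n/100 + Pidle.

    Per the empirical measurements described above, this is accurate
    to roughly +/-5% across CPU utilization rates.
    """
    return (p_max_w - p_idle_w) * cpu_util_pct / 100.0 + p_idle_w

# Hypothetical inventory: (name, idle watts, max watts, avg CPU %)
servers = [
    ("app-01", 200, 300, 5),
    ("db-01", 250, 400, 40),
    ("batch-01", 180, 280, 12),
]

# Sum per-server estimates to build the data center baseline
baseline_w = sum(estimate_power_w(idle, mx, util)
                 for _, idle, mx, util in servers)

# Daily energy for the group, in kWh (watts * 24 h / 1000)
daily_kwh = baseline_w * 24 / 1000
print(f"Baseline draw: {baseline_w:.0f} W, {daily_kwh:.2f} kWh/day")
```

The first server in the inventory reproduces the 205W worked example above (200W idle, 300W maximum, 5% utilization).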
2. Enable Server Processor Power-Saving Features
In recent years, x86 server processors have begun to incorporate the power-saving architectures that are common in both desktop and laptop computers. Enabling this feature can result in overall system power savings of up to 20%.
The power savings are achieved by reducing the frequency multiplier (frequency identifier, or FID) and the voltage (voltage identifier, or VID) of the CPU. Intel's version of this technology is known as either Enhanced Intel SpeedStep Technology (EIST) or Demand Based Switching (DBS); AMD's version is Cool'n'Quiet or PowerNow!. The combination of a specific CPU frequency and voltage is known as a performance state (p-state), and "p-state control" refers to the capability to reduce frequency and voltage.
Altering the p-state can reduce a server's power consumption when at low utilization but can still provide the same peak level of performance when required. The switch between p-states is dynamically controlled by the operating system and occurs in microseconds, causing no perceptible performance degradation.
Figure 2: Impact of p-state on Power Consumption (AMDData7)
Although a processor may be p-state capable, both the system Basic Input-Output System (BIOS) and the operating system must be capable of enabling the feature to make use of it. Check the BIOS of a representative sample of each server model to find out if the server supports the relevant version of the p-state technology.
Instructions on how to implement p-states on the three main x86 commodity server operating systems can be found in Appendix A of the full article (PDF 220 KB).
3. Right-Size Server Farms
In recent years, Web services have driven the growth of server farms in data centers, which are in many cases overprovisioned. Analysis of server farm usage patterns will reveal the potential for 'right-sizing': unneeded capacity can be turned off while the farm still provides sufficient resiliency for agreed-upon service levels.
Data center owners should perform analysis of server utilization data (CPU, disk and network) across all servers in a server farm. Average utilization across the farm is likely to follow a daily, weekly and/or monthly pattern.
Collection of utilization data over time can inform decisions regarding how many actual servers are required to provide peak service levels plus resilience. It is likely that this number is lower than the actual number of servers in the farm, meaning that the surplus capacity can be powered down to conserve energy.
For example, if a server farm of 10 servers has a maximum utilization (the maximum of CPU, disk, or network utilization per server) of 50% across the farm, that is an aggregate utilization of 500%, equivalent to five servers running at 100%. To provide sufficient headroom while still allowing for resilience, the farm could easily run with seven servers (peak utilization of 500/7 = 71%). Under this scenario, if one active server failed, sufficient capacity would still exist (with six servers, peak utilization would be 83%), and three warm standby servers would be available to rapidly restore availability levels. In this example, a power saving equivalent to up to three servers is possible for this farm.
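The right-sizing arithmetic in this example can be generalized with a small helper. This is a sketch under stated assumptions: the 75% normal and 90% failure-mode utilization limits are illustrative thresholds chosen to match the example, not figures from the article:

```python
def peak_util_pct(total_servers, farm_max_util_pct, active):
    """Per-server peak utilization if the farm's aggregate load
    (total servers * farm-wide max %) is served by `active` machines."""
    return total_servers * farm_max_util_pct / active

def choose_active(total_servers, farm_max_util_pct,
                  normal_limit=75, failure_limit=90):
    """Smallest active-server count that keeps peak utilization under
    normal_limit, and under failure_limit with one active server down."""
    for active in range(2, total_servers + 1):
        ok_normal = peak_util_pct(
            total_servers, farm_max_util_pct, active) <= normal_limit
        ok_one_down = peak_util_pct(
            total_servers, farm_max_util_pct, active - 1) <= failure_limit
        if ok_normal and ok_one_down:
            return active
    return total_servers  # fallback: keep the whole farm active

# The article's example: 10 servers, 50% max utilization across the farm
active = choose_active(10, 50)   # 7 active servers (peak ~71%)
standby = 10 - active            # 3 warm standbys can be powered down
```

The surplus servers identified this way are candidates for powering down, with standbys restarted as described next.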
It is possible to automate the restart of servers by using either built-in out-of-band power management capabilities or Wake-on-LAN tools. Out-of-band management capabilities can be controlled via vendor-specific software or through standard Simple Network Management Protocol (SNMP) methods.
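As one illustration of the Wake-on-LAN mechanism mentioned above, the standard "magic packet" (six bytes of 0xFF followed by the target MAC address repeated 16 times, sent as a UDP broadcast) can be built and sent in a few lines of Python. This is a sketch only; the MAC address shown is a placeholder, and WoL must also be enabled in the target server's BIOS and NIC settings:

```python
import socket

def magic_packet(mac_address):
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the
    6-byte MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac_address.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake_on_lan(mac_address, broadcast_ip="255.255.255.255", port=9):
    """Broadcast the magic packet over UDP on the local subnet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(magic_packet(mac_address), (broadcast_ip, port))

# Placeholder MAC; in practice this comes from your server inventory:
# wake_on_lan("00:11:22:33:44:55")
```

A job scheduler or operations management tool can call such a sender to power servers back up ahead of their scheduled workloads.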
4. Power Down Servers When Not in Use
Not all servers need to be operational 24/7/52. Individual servers may be powered down for certain periods of the day. Typical examples are servers found in test and development environments. The test team will know when a test run has finished, and particular test machines could then be powered down until they are needed. In addition, development build systems should be powered down until a build run is required.
Certain types of servers will regularly go unused for lengthy periods of time. These should be targeted for powering down. CPU statistics will show that certain machines have a consistently low (typically an idle server will run at <1%) CPU utilization for long periods of time. Analysis of server utilization will reveal a pattern when the servers are busy; machines could be scheduled to power down for the periods of time that they are idle and then powered up in time to perform their useful work.
For example, a server executing backup software which is only busy from 10 PM until 6 AM could be scheduled to power itself down at 8 AM every day and then be powered up by operations management tools or a job scheduling system at 9 PM ready to perform the next night's backups. If the server were required for a restore during the day, the operator could run a script that would power the machine back up, run the restore and then power the machine back down.
5. Decommission Old Systems that Provide No Useful Work
Evidence suggests that a significant number of installed servers are not used at all by anyone, such as older servers that have fallen out of use but have not been decommissioned.
Servers of this type will usually have very low utilization rates at all times, with only occasional spikes when standard housekeeping tasks run (backups, virus scans, etc.). These machines serve no useful purpose, yet they consume power and heat the data center.
Once a machine has been identified as "unused", confirm this status by analyzing network statistics. This exercise will ensure that all connections to the machine in question are from management systems and not from other business systems or from end users. If end users are indeed linked to the server in question, they should be contacted to determine how the server is providing useful work. It is highly likely that the connections are merely legacy in nature and can be terminated.
Once the server has been categorically confirmed as unused it can either be decommissioned, or turned off and put aside as stock ready for deployment should users develop a relevant requirement.
Keeping a legacy server around simply because it is available may be poor efficiency practice. New servers available today offer better performance with significantly reduced energy demands. If the decision is made to retire legacy servers, they should be processed for recycling and/or repurposing. Most server manufacturers offer global recycling programs.
The Data Center Energy Profiler (DC Pro) online software tool Version 1.0 is an important first step toward measuring and identifying areas for potential energy savings in your data center. Developed by DOE's Industrial Technologies Program (ITP), DC Pro is a scoping tool that gives you a general idea of where the energy is being used in your data center, identifies the best opportunities for energy savings, and compares the energy use with other data centers.
Operating on the premise that "you can't manage what you don't measure," DC Pro first assesses the energy use of the data center through input supplied by the user, and then generates a report of potential energy savings based on that input.
The DC Pro tool requires input of basic information about the data center, such as a description of the facility, utility costs, and system information on IT, cooling, power, and on-site generation. View the DC Pro checklist (PDF 129 KB) for a list of the information you will need to provide. DC Pro is designed so that the user can complete this profile in about an hour. The DC Pro tool then generates a customized report that details:
- Average amount of energy that is purchased or generated on-site, and the cost of the energy.
- Annual energy consumption, broken down by each major energy use system.
- Potential annual energy and cost savings, categorized by each major energy use system; and how this energy use compares to that of other data centers.
- Suggested next steps that could be implemented to save energy and costs.
ITP invites you to test DC Pro at your data center facility, and get started on the path to energy savings today! Learn more and download DC Pro Version 1.0 free of charge from the Save Energy Now Data Center Web site. Here, you can also find out more about DOE's partnership efforts to improve the efficiency of data centers, and sign up to receive updates and information about this initiative.
For more information, contact DOE's Energy Efficiency and Renewable Energy Information Center, or call 1-877-337-3463.
The data center energy assessment at Lucasfilm identified 15 ways the company could save energy and costs.
DOE's Industrial Technologies Program (ITP) has expanded its Save Energy Now initiative to include U.S. data centers. In an effort to help industry explore this new area of potential savings, DOE has conducted an assessment at the energy-intensive data center of Lucasfilm. The results reveal steps companies can take to reduce data center energy consumption and improve efficiency.
Movie Downtime Equals Energy Savings for Lucasfilm
The data center at Lucasfilm Ltd. in San Francisco, California, is crucial to delivering large volumes of data and high-resolution images to the desktops of graphic artists, game developers, and motion picture directors. Known for creating such award-winning movies as "Star Wars" and "Indiana Jones," Lucasfilm also operates the largest computer network in the entertainment industry.
The 13,500-square-foot facility houses a render farm (a cluster of computers that works around the clock to process digital images), file servers, storage systems, and more than 3,000 AMD processors. After evaluating a variety of energy use data collected over several weeks, the assessment team identified 15 ways Lucasfilm could save energy, which were narrowed down to seven measures deemed practical based on estimated implementation costs and payback periods. These included:
- Remove redundant uninterruptible power supply (UPS) systems
- Turn servers off during downtime (in between major movie projects)
- Stage chillers to maintain high load factor
- Operate UPS in switched by-pass mode
- Improve air flow
- Implement water-side economizer
- Install lighting controls.
These recommendations could save Lucasfilm approximately $343,000 and more than 3.1 million kWh annually. With implementation costs of $429,500, the company would achieve a simple payback of 1.2 years. Read the full assessment summary (PDF 303 KB). Download Adobe Reader.
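The simple-payback figure above is just implementation cost divided by annual cost savings. A minimal sketch of the arithmetic, using the totals quoted above:

```python
# Simple payback = implementation cost / annual cost savings
implementation_cost = 429_500  # dollars, total for the seven measures
annual_savings = 343_000       # dollars per year in identified savings

payback_years = implementation_cost / annual_savings
print(f"Simple payback: {payback_years:.2f} years")  # prints 1.25
```

The kWh savings matter for emissions and grid load, but simple payback is usually the number that drives the go/no-go decision on each measure.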
IAC students get hands-on training by helping companies identify energy savings opportunities.
DOE's university-based Industrial Assessment Center program recently marked the completion of its 14,000th energy assessment, demonstrating sustained, widespread impact in identifying energy savings opportunities for industry today and molding energy engineers for our nation's future.
The Industrial Assessment Center (IAC) program was launched in 1976 during the energy crisis to help identify energy-saving opportunities in the industrial sector. Thirty-two years later, the role of the IACs in meeting U.S. energy and environmental demands is even more critical. This dedicated program continues to meet these challenges by conducting industrial energy assessments, training the energy workforce, and creating technical resources that inform companies about energy efficiency.
DOE's Industrial Technologies Program (ITP) sponsors the IAC program as part of its efforts to transfer energy-efficient and environmentally sound practices and technologies to U.S. industry and increase industrial competitiveness. The fledgling IAC program that originated in four engineering schools in 1976 has grown into a successful program operating at 26 universities throughout the United States, a testament to its positive and long-term impact.
14,000 and Still Counting
Recently, the program celebrated an important milestone: the completion of its 14,000th energy assessment. Over the life of the program, this translates into energy savings of more than 400 trillion Btu of natural gas for U.S. industry, equivalent to powering 3.1 million U.S. homes. In addition, CO2 emissions were reduced by 22 million metric tons, equivalent to removing 4 million cars from the road.
The landmark 14,000th assessment was conducted at Mid-South Metallurgical, a quintessential small U.S. company of 15 employees that provides precision and specialty heat-treating services to industry in the manufacturing-rich Middle Tennessee area. The major energy-consuming equipment in the plant includes eleven furnaces and two barium chloride salt baths. During the assessment, the Tennessee Technological University IAC team identified seven potential areas for energy savings that could save the company 2,989 MMBtu (875,941 kWh) per year, or $74,685 in energy costs, an annual savings of 18%.
The IAC team is providing technical assistance to implement the energy efficiency recommendations and will follow up in 6 months to quantify actual energy savings.
Mid-South Metallurgical was very pleased with the energy assessment. "The team did an excellent job of providing a summary of recommendations, and then followed up with a detailed written report. These recommendations are at the top of our priority list for implementation," said Clif Coleman, owner of Mid-South Metallurgical. "This program is a great opportunity for students to gain exposure to industry and apply their skills in a practical way."
Immediate and Long-Term Benefits
Universities Operating IACs
- University of Alabama
Satellite: Tuskegee University
- Bradley University
- Colorado State University
- Georgia Institute of Technology
- Iowa State University
- Lehigh University
- Mississippi State University
- North Carolina State University
Satellite: North Carolina A&T State University
- Oklahoma State University
Satellites: University of Arkansas, Fayetteville; Wichita State University
- Oregon State University
- San Diego State University
Satellite: Loyola Marymount University
- San Francisco State University
- Syracuse University
- Tennessee Tech University
Satellite: University of Memphis
- Texas A&M University at College Station
- University of Dayton
- University of Delaware
- University of Florida
- University of Illinois at Chicago
- University of Louisiana at Lafayette
- University of Massachusetts
- University of Miami
- University of Michigan
- University of Missouri
- University of Washington
- West Virginia University
The IAC program has proven to be a "win-win" for industry, engineering students, faculty, and employers by providing benefits such as:
- No-cost energy assessments for local small- and medium-sized manufacturers
- Hands-on training that molds students into energy-savvy engineers who can "hit the ground running" in their professional careers
- A public database resource of more than 14,000 assessment results
- Implementation of new energy-saving technologies into manufacturing processes
- Strong relationships between universities and local industrial manufacturers.
The IAC program assists small- and medium-sized manufacturers that sometimes lack the in-house expertise or resources to make energy efficiency improvements. This assistance can often make the difference between a company remaining competitive in the market and closing its doors.
Energy Assessments That Save Energy Now
The IACs work with personnel at eligible small- to medium-sized manufacturing plants to perform no-cost energy assessments that identify ways to reduce energy use and waste and improve productivity. The IAC team, which includes faculty and students, typically conducts the assessment during a one-day site visit. During this assessment, the team analyzes the plant's energy-consuming systems, focusing on areas of special concern to the company. After the assessment, the team provides recommendations to the plant manager, and follows up in 6 to 9 months to obtain implementation results. Learn more about the assessment process.
The average energy cost savings from recommended measures is $135,000 per year. Waste and productivity recommendations add an average of $85,000, bringing the total cost savings for an average IAC assessment to more than $220,000 per year. This often translates into energy cost savings of 5%-10%, significant to a company's bottom line.
When the Oklahoma State University IAC evaluated the Southwest United Industries metal coating manufacturing plant in Tulsa, Oklahoma, they identified nine energy-saving opportunities. These implemented improvements are now saving the company more than $60,000 annually in energy costs. "OSU's team of industrial engineers was very thorough in assessing potential energy savings in our plant. I would highly recommend that every company take advantage of these services provided through the DOE," said Jon Barrows of Southwest United Industries. Read the case studies about other major energy and cost savings the IACs have identified.
Building the Future Energy Workforce
Perhaps the greatest long-term benefit of the IAC program comes from transforming students into a trained workforce of outstanding engineers who can continue to address the nation's energy challenges throughout their careers. To date, more than 2,600 engineering students have participated in the IAC program, with an estimated 60% pursuing careers in energy-related fields across industrial, commercial, institutional, and residential sectors. IAC alumni can be found across the spectrum of professional engineering roles, including R&D, product development, project engineering, project management, and construction management. Currently, about 250 students are trained each year through the IAC program. This network of IAC students and alumni stays connected through a dedicated Web site and newsletter, annual meeting, and Internet-based professional group.
Again and again, IAC alumni express their appreciation for the knowledge and experience gained through the program. This experience not only enriches their education, but gives students an edge with prospective employers.
Sieglinde Kinne, a Colorado State University (CSU) IAC alumna now with Siemens Building Technologies' Energy and Environmental Solutions, recounts: "My years working with Industrial Assessment Center faculty and students at CSU were pivotal to my career development. I imagined that my mechanical engineering degree would push forward my career in renewable energy. When I found out about the IAC, I instantly knew that I wanted to work in the field of energy efficiency. I believe that energy efficiency IS one form of renewable energy, and is typically the most cost-effective way to reduce fossil fuel consumption. Providing these benefits to industrial clients was very rewarding; reducing energy costs at these plants may very well help to retain American jobs by making companies more competitive."
According to Satyen Moray, his experience through the IAC program was critical in preparing him for his position at Energy and Resource Solutions, Inc. (ERS), a leading energy engineering firm based in Boston. "The two years I spent at my IAC taught me to interact with customers, understand systems, develop new tools, and write and present clear reports. The IAC also gave me the necessary motivation to hit the ground running at ERS from day one."
Marcus Wilcox is an Oregon State University alumnus who credits his IAC experience with helping him to launch his own consulting firm, Cascade Energy Engineering. "As the first student in the Oregon State University program, I had no idea how valuable the training and experience would be," he says. "Participating in site visits, monitoring and assessing energy-using systems, and preparing reports for customers provided a broad education that has supported my career. We founded Cascade Energy Engineering 15 years ago, and now have 19 engineers providing industrial energy efficiency consulting. I can honestly say that our company and engineers would not be in existence without the original opportunity provided by the IAC program."
Technical Resources and Outreach
And the benefits don't stop there. Results from the assessments are compiled into the IAC database, a free resource available to the public and recognized as one of the most comprehensive industrial databases in the world. Users can search by assessment (size, industry, energy usage) and by recommendation (type, energy and cost savings) to find energy, waste, and productivity improvements that may apply to their plants. Currently, the IAC database houses information on 14,000 assessments and 104,000 recommendations.
In 2007, the IACs and the National Institute of Standards and Technology's Manufacturing Extension Partnership (MEP) joined forces to leverage the MEP's vast network of small- and medium-sized manufacturing plants and the IACs' energy expertise. Through this relationship, MEP customers receive information about ITP tools and resources, and are referred to IACs for assessments. The IACs have also conducted 90 joint assessments with MEPs nationwide, and sponsored 13 student internships at local MEPs to increase awareness of energy efficiency.
The IAC Adopt-a-Technology program introduces individual centers to commercially available technologies developed by the Industrial Technologies Program. Centers select a technology to research based on applicability in their region, and become the expert resource for that technology and its application. The students also research vendors, distributors, and installations of the technology to examine adoption levels and effectiveness.
Resources for Energy and Economy
In this time of economic and energy uncertainty, DOE's Industrial Assessment Centers provide a positive and useful resource for U.S. industry. As energy use and environmental concerns escalate, the IACs continue to boost U.S. industrial energy efficiency by conducting energy assessments, growing an energy workforce, and creating technical resources that increase public awareness about energy efficiency. The completion of its 14,000th assessment signifies a major achievement for this unique and dedicated program, and highlights one avenue to a brighter national energy future.
Riyaz Papar, our featured DOE Energy Expert, regularly conducts Save Energy Now energy assessments to improve efficiency of steam systems at industrial plants. In this issue of Energy Matters, Riyaz addresses some common questions that arise during assessments.
How do I identify steam energy saving opportunities in my plant?
This is a very simple question, but the answer can be difficult to grasp and implement unless it is approached in a systematic manner.
First and foremost, one must take a systems approach, rather than a component-based approach, to evaluate any steam system. DOE's Industrial Technologies Program (ITP) divides a steam system into four areas: generation, distribution, end use, and recovery. Every industrial steam system includes one or more of these four areas. All four areas work together as a system, and modifying operations in any one area will have an impact both downstream and upstream of that area. Hence, it is very important to use a systems approach when identifying energy-saving opportunities in steam systems.
When I visit a plant, I use DOE's Steam System Tool Suite to evaluate and identify steam system improvements. The Steam System Scoping Tool (SSST) quickly gives me an overall understanding of the level of best practices at the plant. This identifies potential areas of energy-saving opportunities on a qualitative basis. Next, I use the Steam System Assessment Tool (SSAT) to model the steam system at the plant. This detailed model allows me to perform "what-if" system-level analyses for different projects that I select in the SSAT. The SSAT provides me with quantified energy-saving opportunities for projects. This first-level due diligence can then lead to further project development of select energy-saving opportunities. Lastly, I also use the 3EPlus® insulation tool to get more specific on energy-saving insulation-related opportunities in the steam system. Learn more about how these steam software tools work (PDF 1.3 MB). Download Adobe Reader.
Currently, DOE is working together with industry experts, the American Society of Mechanical Engineers, and the American National Standards Institute to develop energy assessment standards. Watch the ITP Web site for news on their progress, reviews, and release.
How do I calculate my cost of steam production ($/klb)?
This question comes up very often during energy assessments. Calculating the cost of steam production can be a very challenging task in a plant. Nevertheless, it is the first thing an energy engineer should do.
There are several factors that directly impact steam production cost, including:
- Energy (fuel) cost
- Water cost (includes chemical treatment cost)
- Electric utility cost for fans, controls, etc.
- Equipment maintenance cost
- Emissions control / mitigation cost
- Labor cost
- Equipment capital cost
- Insurance cost
In almost all circumstances, we should be concerned with the marginal or variable cost of steam rather than the fixed cost. The marginal cost truly represents the actual energy cost savings opportunities for projects that may be undertaken at a plant. For our discussion here, we will use the fuel and water (including chemical treatment) costs to calculate the steam production cost. This logic can be extended to include the other cost factors as listed above.
Let us start with the data required to calculate our cost of steam production:
- Time period: The period over which the average cost of steam production is required; it can be an hour, a day, a month, or a year. (Example: 8,760 hours)
- Average steam production: Typically from a steam flowmeter profile over the time period. (Example: 80,000 lb/hr)
- Average fuel consumed: Typically from a fuel flowmeter profile over the time period. (Example: 100 Mcf/hr)
- Fuel cost: Average cost of fuel over the time period. (Example: $10 per Mcf)
- Average water usage: Typically from a water flowmeter profile over the time period. (Example: 50 gpm)
- Water cost: Average cost of water (including chemical treatment) over the time period. (Example: $2.50 per kgal)
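Putting these example numbers together gives the steam production cost directly: hourly fuel cost plus hourly water cost, divided by thousands of pounds of steam produced per hour. A minimal sketch of the arithmetic, using the example values listed above:

```python
# Example values from the data list above
steam_rate_lb_hr = 80_000    # average steam production, lb/hr
fuel_rate_mcf_hr = 100       # average fuel consumption, Mcf/hr
fuel_cost_per_mcf = 10.00    # $/Mcf
water_rate_gpm = 50          # average water usage, gal/min
water_cost_per_kgal = 2.50   # $/kgal, including chemical treatment

fuel_cost_hr = fuel_rate_mcf_hr * fuel_cost_per_mcf                 # $1,000/hr
water_cost_hr = water_rate_gpm * 60 / 1000 * water_cost_per_kgal    # $7.50/hr

steam_cost_per_klb = (fuel_cost_hr + water_cost_hr) / (steam_rate_lb_hr / 1000)
print(f"Steam production cost: ${steam_cost_per_klb:.2f}/klb")  # ≈ $12.59/klb
```

Note that fuel dominates the marginal cost here; water and chemical treatment add well under 1%, which is typical when condensate return rates are reasonable.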
This is the most direct method of calculating the steam cost. If certain parameters are not available, other methods (for example, using boiler efficiency) can be used to estimate steam production costs. We will discuss those in a future column on steam system efficiency.
Is power generated from a steam back pressure turbine free, since I am using the exhaust steam in my process anyway?
This is a very interesting question and I will explain the premise using the figures below.
In a typical steam system, a Pressure Reducing Valve (PRV) reduces the steam pressure from P1 to P2. This pressure reduction happens in such a manner that the total energy content (enthalpy – Btu/lb) does not change (H1 = H2) and no shaft work is done.
On the other hand, when steam goes through a steam turbine, it expands and the steam pressure reduces from P1 to P2. The steam turbine produces shaft horsepower and as a result, the steam exit energy content (enthalpy – Btu/lb) is lower when compared to the PRV case.
Steam is used for heating purposes in the plant. The process heat duty (Btu/hr) is fixed by the plant demand. Since the steam supplied to the process has a lower enthalpy, an additional amount of steam (lb/hr) is required to ensure the same available heat duty. This additional amount of steam has an associated cost. Hence, power generated from a backpressure steam turbine is not free.
Nevertheless, using a backpressure steam turbine can improve the overall plant and global energy efficiency and more importantly, it can reduce total operating costs. Hence, it can be a very good energy-saving opportunity. In a future column on steam systems, we shall investigate the factors that impact steam turbine cost effectiveness.
For more information on how to improve your plant's steam system efficiency, please see the following ITP resources.
Questions/comments about this column? Contact Riyaz Papar at firstname.lastname@example.org.
Mr. Papar is Director of Energy Assets and Optimization at Hudson Technologies, and also serves as an energy consultant, steam end-user training instructor, and Qualified Specialist for the U.S. Department of Energy. He is a Registered Professional Engineer with more than 15 years of experience in industrial energy infrastructure and energy asset management of process and utility systems in refineries, chemical plants, and manufacturing facilities. He specializes in performance monitoring and optimization of energy systems. Mr. Papar received his Bachelors degree in Mechanical Engineering from the Indian Institute of Technology, and his Masters degree from the University of Maryland.
NOTICE: This online publication was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or any agency thereof.