U.S. Department of Energy - Energy Efficiency and Renewable Energy

Building Technologies Office – Information Resources

Text-Alternative Version: SSL Luminaire Performance in the Lab: Just How Well Do They Perform?

Below is the text-alternative version of the "SSL Luminaire Performance in the Lab: Just How Well Do They Perform?" webcast.

Interviewer: Hi, I'm Mia Paget and I'm really happy to be here to speak to you today about the CALiPER testing program. We're going to talk about solid-state lighting luminaire performance—how they really perform when we test them.

Here we have the first slide—that's just an overview of our testing program. "CALiPER" stands for Commercially Available LED Product Evaluation and Reporting. So we're going to talk about our testing program. We go into a little bit of background on the context of the program, why the DOE is supporting this program, what the scope of the program is, what we cover in the program, how we organize it, the process for testing luminaires, what we've done to date, and what the results have been from our first few rounds of testing. And then we'll give you some links and some other information about where you can find more information and reports from the testing program. And at the end there will be question-and-answer time.

So first of all, what's the context of this program? The DOE is supporting a lot of work in solid-state lighting, all the way from very core research—background research toward product development—and then spanning all the way over to the market. The part that spans all the way to the market we call commercialization support: guiding technology advances from the laboratory to the marketplace. And that's where our program sits, in commercialization support.

Within this area, commercialization support, as you can see on this next slide, there are several DOE activities underway. We have our CALiPER testing program, we have people working on technology demonstrations, technology procurement, on ENERGY STAR® for solid-state lighting, on technical support for standards development, and on the technical information network, which is the context for this particular webcast. So we're doing a lot of different work to try to help these technologies reach the market as smoothly as possible.

So why do we do the CALiPER testing; why have we put in place this program? First of all, the bottom line is we want to provide objective, high-quality performance information about these products. We want to make it so that we know, and people in the marketplace know, what is [audio glitch] performance of the products that are available on the market today. This supports DOE research and development, it supports planning for research and development, it supports the ENERGY STAR program, and it also supports a lot of other efforts that are going on.

We also need this information in order to provide input to the test procedure committees and to standards development for solid-state lighting, so that they have actual, real, credible information they can look at about the products and how the testing works. And then, more on the market side, we want to discourage low-quality products, and reduce the risk of having [audio glitch] products that aren't performing as expected. If you look at these little graphics over here on the right, these are things that we actually pulled off of Web sites—one said, "You may never change another light bulb." While solid-state lighting does last longer in general than some other light sources, this could be misinterpreted by some buyers, and then they could be disappointed. So really what we're trying to do is make it so [audio glitch] and just reduce any buyer dissatisfaction in the long run. We'll come back to that in a minute.

One of the reasons that we need to do this is because of the speed of change with these technologies. We're seeing LED chips and their performance improving very very fast. If you look at this graphic here on the right, these two lines that are going up are basically the rate of change of the performance of LED chips. And in reality those slopes are even steeper than what they are on this graph. What we're discovering is they really are improving very very fast.

They're also getting better thermally. Newer chips are able to support much higher operating temperatures than earlier chips, and we're seeing chip manufacturers announcing new performance records almost every month. It's amazing the values we're seeing coming out.

So because of the chip performance, we're also seeing a steady stream of new products: the LED devices themselves and all the sub-components, and then everything all the way up to luminaires and other products that end-users can buy. All of this is happening in the context of very immature markets, where buyers don't really know very much about the products, and have limited understanding of how they really compare to what we're used to seeing. So that's one of the reasons that we really need to do this testing program and increase the information available.

Another reason that we're doing it—here on this next slide—many of you have probably heard about the report that came out last year on compact fluorescent lighting, and the lessons that we learned from the whole history of how CFLs came to market. If you haven't read this report, I recommend it highly. It's available on the DOE Web site, and we'll give you the link to that later on in this talk. But really, this report listed and talked about a lot of the mistakes that were made that made it so that CFL technology really had trouble penetrating: when it came to market, there were products that did not perform as expected, and even when the products did start to perform well, buyers had a general dissatisfaction with them and avoided them. It's much more complex than that, but there were many, many mistakes that happened in the saga of CFL technology coming to market, and we're trying to avoid those mistakes with solid-state lighting.

Another reason why we need this testing program, as opposed to one for other technologies, is that solid-state lighting testing is different than testing for other light sources. Solid-state lighting luminaires have to be tested at the luminaire level, not at the device level, for a number of reasons. I'll show you a little graphic in the next slide to help you understand that.

Basically the luminaire has to be tested as a complete system—we can't test a lamp separately from the driver or separately from the luminaire. So this requires using absolute photometry, rather than relative photometry. Relative photometry involves testing a standard lamp and then comparing other light sources to that standard lamp. In this case we can't do that—we have to test the entire luminaire, and so there's no standard that it's being compared against. We're actually measuring the absolute output of a given luminaire and how it's performing.

Now the testing methods that we're using here are based on a new draft standard. It's going to be published in just a couple of months, and it's been put out by the IESNA. It's called Lighting Measurement 79, "LM-79," and this is a measurement procedure that tells you how to do photometric testing for solid-state lighting luminaires—not for the chips, for the luminaires—in the integrating sphere and on the goniophotometer, using absolute photometry. Many stakeholders are just not familiar with this concept: "How do we test a whole luminaire, why do we test a whole luminaire, what's the difference between this testing and the testing [audio glitch]" we're used to seeing done on incandescent or compact fluorescent luminaires or lamps.

So the DOE CALiPER testing program is working really closely with the standards groups that are developing this standard and other standards, so that we can help them develop and improve those standards, and so we can increase confidence in and understanding of SSL testing among the general public and among the stakeholders who really need to use that testing. And we really want to jump-start knowledge across the industry about this testing: what are the results? What's normal? What isn't? What does it look like? So we have a general pool of testing knowledge—somewhere to start from, not just starting from scratch as these products enter the market.

Now let's come back a little bit more to that question of just why we're doing luminaire-level testing. Well, at the core of an SSL luminaire there's some sort of LED device, and it has a particular efficacy that's been measured by the manufacturer—but not following any standard measurement, because there's no standard available yet for measuring LED device efficacy.

And then that device has to be mounted on some sort of heat sink. It doesn't have to be as big as the one you see in the picture, but it has to be mounted on some sort of heat sink, otherwise it will burn up when it's operating. Then it gets driven by a power supply and a driver at a particular current and voltage, depending on what the output needs are and what the LED device needs are. And then it gets integrated into a luminaire.

Well, each of the parts—the device itself, how well the thermal management is done, how good the heat sink is, what current and power level it's operating at, and how it's all integrated into the luminaire (because the luminaire itself acts as a heat sink and does thermal management, and also has optics and other aspects to it)—all of those are pieces of the puzzle that determine how well this LED product performs. We cannot test the device separately, because if we test it separately we don't know what the thermal management is, what the heat sink is, what the thermal effect of the luminaire is, what the drive current is and what the effect of that drive current is, or what the heat effects of the driver are on the LED device. So basically it all has to be tested as a system.

Just to get another picture of what we mean by luminaire efficacy and system efficacy, look at these two examples: on the top we have an example of how a CFL system is measured, and on the bottom how LED system efficacy is measured. For the CFL lamp, traditionally, measurements would be taken at the lamp-and-ballast level, and you would get a system efficacy. In this example they give 50 lumens per watt, measured by a testing standard called LM-9. So that would be your lamp-and-ballast system efficacy at the CFL level.

And then you would take that CFL lamp and put it in a fixture, and there will be fixture losses, which could be 50%—they could actually be more than that; we're seeing in some benchmark testing that they can be much more than 50% in some cases. So then, based on that system efficacy, discounted by a percentage for fixture losses, you get an approximate luminaire efficacy, which might be 25 lumens per watt (just an example, for a system efficacy of 50 lumens per watt).

Now with the LED system, as we said, we can't measure system efficacy, because the system is so sensitive to thermal conditions, drive current, and other factors. So we don't measure system efficacy—we know that the LED gets put in the fixture and there are losses in that fixture, but we don't have to measure those. We measure at the luminaire level, and we get an actual, absolute measurement of the output and the power use at the luminaire level, which gives us a direct measurement of the luminaire efficacy. So this is the preferred way of measuring solid-state lighting luminaires. And here you get, for example, an efficacy of 35 lumens per watt, while the CFL system may end up with lower efficacy, even though the lamp efficacy was higher to begin with.
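
To make the arithmetic concrete, here is a minimal sketch in Python using the example numbers from the slide (the variable names and the 700-lumen/20-watt LED measurement are mine, chosen only so the result matches the slide's 35 lumens per watt):

```python
# CFL path: lamp-and-ballast system efficacy is measured first,
# then derated by an estimated fixture loss to approximate luminaire efficacy.
cfl_system_efficacy = 50.0   # lm/W, lamp + ballast (example from the slide)
fixture_loss = 0.50          # 50% of the lamp's lumens lost inside the fixture
cfl_luminaire_efficacy = cfl_system_efficacy * (1.0 - fixture_loss)
print(cfl_luminaire_efficacy)   # 25.0 lm/W, an estimate rather than a measurement

# LED path: no system-level measurement; the whole luminaire is measured with
# absolute photometry, so luminaire efficacy is a direct measurement.
measured_lumens = 700.0   # hypothetical measured total flux of the luminaire
measured_watts = 20.0     # hypothetical measured input power
led_luminaire_efficacy = measured_lumens / measured_watts
print(led_luminaire_efficacy)   # 35.0 lm/W, the slide's example value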

So moving on a little bit more now to what the testing program covers: we're looking at commercially available SSL products for the general illumination market. Now what do we mean by that? Most of the products that we test are products you can purchase from a distributor; they're fully available, we pay money for them, they're not prototypes, they're not pre-production articles—they're actually commercially available luminaires. In some cases we do test products that are a little bit earlier in the cycle, especially when we're testing them for some of the other DOE commercialization support programs that work with products earlier along in their cycle. But for the most part, the CALiPER program only tests commercially available products.

And it only tests products for what we call the general illumination market. What do we mean by that? We mean products that are "white light," as we say here. So we're not testing red LED products, blue LED products, or anything like that. We're testing white-light luminaires and replacement lamps—products that can be plugged into a 110- or 220-volt wall socket, or hard-wired into your normal AC supply. In some cases we're also testing replacement lamps or other lamps that are made to work on a DC or low-voltage circuit, such as MR16s, but basically we consider all of these to be products for the general illumination market.

We're testing products indoor and outdoor. If you look at the pictures over here, you can see there's some here that are actually outdoor area lights, outdoor wall lights, and then there's plenty of indoor products here, lamps that are used in downlights or other applications, desk lights, the recessed wall lights that can be used indoor or outdoor. And we're testing products that are both for the residential and commercial markets—we're not differentiating and testing only one market or the other—we're testing all sorts of products for the general illumination market.

What type of testing are we doing? Well, most of the CALiPER testing is what we're calling basic photometry—that's just looking at the performance of these luminaires in an integrating sphere or a goniophotometer. Almost every single product that we get in, we test in both the sphere and the goniophotometer, so that we can get light output and efficacy, color quantities, beam characteristics, electrical measurements, and thermal characteristics. Both the sphere and the goniophotometer can give us luminaire light output and efficacy values, and we actually look at both and make sure they're consistent across these two different measuring techniques. The sphere is the one that gives us the color quantities, and the goniophotometer is the one that gives us beam characteristics—intensity distribution—which tells us what the pattern of light coming out of these luminaires looks like, which I presume a lot of you are familiar with.
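
As a rough illustration of that cross-check between the two instruments (the flux numbers and the 5% threshold are mine for illustration; the talk doesn't state CALiPER's actual acceptance criterion):

```python
def relative_difference(a, b):
    """Relative difference between two measurements of the same quantity."""
    return abs(a - b) / ((a + b) / 2.0)

sphere_lumens = 412.0  # hypothetical total flux from the integrating sphere
gonio_lumens = 405.0   # hypothetical total flux integrated from the goniophotometer

# Flag the pair for review if the two methods disagree by more than, say, 5%
# (an illustrative threshold, not a stated CALiPER rule).
if relative_difference(sphere_lumens, gonio_lumens) > 0.05:
    print("Sphere and goniophotometer disagree; investigate before reporting.")
else:
    print("Sphere and goniophotometer results are consistent.")
```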

Now the other thing that we do with the basic photometry, which is really essential, is benchmarking. We do the same type of testing—absolute photometry at the luminaire level—on luminaires that use other light sources, so that we have tests that are directly comparable to the tests we're doing on the solid-state lighting products. That way we can put them side-by-side and look at the output of two luminaires: one that might be solid-state lighting and one that might be using a compact fluorescent, a fluorescent tube, or an incandescent source. So that's the basic photometry that we're doing, covering almost every single product that we get in.

But we're also doing other types of testing, which we're calling "nonstandardized" testing here, because no standards exist for it. For the sphere and goniophotometer testing we're following a test procedure put out by IESNA—it's not fully published yet, but it's very close to publication, so it's pretty far along right now. But for these other forms of testing there's literally no procedure right now—no testing standard out there. So we've had to work with different stakeholder groups and experts to develop methods. One of the things we're doing is in situ testing; we're looking at how some of these products perform in situ. Here's a picture of a refrigerated display case light being tested in an environmental chamber, so that we could bring it down to temperatures similar to those where it will actually be operating in a refrigerated display case. And in this case we discovered that the product performs 5% better at the colder temperatures. That's what we suspected, but it was nice to confirm it through the testing.

We've done similar testing on insulated ceiling environments—recessed can environments. We're looking at how some of the replacement lamps perform in those environments: whether there is a reduction in output with the increased heat in an insulated ceiling environment, and how much of a reduction. And then we're also doing what we're calling lumen depreciation testing. That's looking at how the output degrades over time, 'cause these products are supposed to last for many, many years, and we want to make sure that what we're seeing is actually what they're reporting.
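
Just as a sketch of what lumen depreciation data looks like in practice (the readings below are hypothetical; the actual protocol follows the draft device standard mentioned next, which isn't reproduced here):

```python
# Hypothetical periodic readings from a lumen depreciation test on one
# luminaire: (hours of operation, measured lumens).
readings = [(0, 500.0), (1000, 492.0), (3000, 481.0), (6000, 466.0)]

initial_lumens = readings[0][1]
for hours, lumens in readings:
    maintenance = 100.0 * lumens / initial_lumens  # percent of initial output
    print(f"{hours:>5} h: {maintenance:5.1f}% lumen maintenance")
```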

Now obviously there are no testing standards for doing lumen depreciation testing on solid-state lighting luminaires, but there is a standard in draft for doing this type of testing on the LED devices. So our methods draw from that draft standard, adapted to the needs of testing whole luminaires.

And then we also do other forms of testing as needed to try to understand different aspects of the technology. We've done some thermal imaging studies to look at how the heat is managed in these devices—how it's actually moved through the devices over time. And we're going to be doing some testing on dimming and how dimmable these products really are, because it's one thing to see that a product is said to be dimmable—it's another thing to make sure that it really is dimmable, and understand what dimming looks like. So those are the types of tests we do, and we'll be doing many other types of tests as we move forward, as well.

So how does the testing program work? Well, basically every three months we go through a selection process, following some very detailed criteria, to select products from across the market. We want to get a range of products covering different applications, and we want products that look very good or are expected to be very good, as well as products that might be pretty poor. So we have a product tracking mechanism, where we're watching products that are available on the market as they come to market, and as they come to our attention we gather information about them. Those are all listed, looked at, and weighed against the criteria that we've developed, and the selection is then reviewed.

And once it's OK'd, we have to acquire these products, which doesn't sound like much, but it's sometimes actually quite a challenge with these new technologies. In some cases products are readily available, and in some cases we find that we have to wait nine months or even longer to get a product. We always try to acquire products through a normal distributor, or in some cases we have to acquire them through a manufacturer, but in all cases we try to do it anonymously, so that we're getting the same product anybody else would get if they purchased it. Every once in a while we're not able to do that—for example, when a product is sold in a way that the manufacturer knows who the buyers are—and in those cases we have to work directly with the manufacturer.

Now once we receive the products, have looked at them, and have taken some initial notes and photographs, we send them to an independent testing lab. We use independent testing labs that have been pre-qualified for this testing program. There isn't currently any sort of certification process for solid-state lighting testing, because there are no standards actually available out there aside from the drafts. So we've had to work with the folks that do accreditation, and that do this type of testing, to develop criteria for selecting the laboratories and determining which ones are really qualified to do this type of testing. Right now we have about four or five testing laboratories working with us, although only a couple of them are doing most of the testing, since only a couple have the appropriate equipment. Hopefully that's going to increase over the next six months or a year, and we'll have more laboratories that are qualified, experienced, and able to do the testing.

The labs have also undergone some comparison testing, so that we are confident that their output is comparable between the different testing labs—that they're doing the same type of work and getting the same results as each other.

Then once a laboratory comes back with the results, we have to look at them, analyze them, and make sure that they make sense. We need to put them in a format that everybody can understand, because different laboratories report results in different formats, and we want everything to be coherent, so that almost any stakeholder can look at these results and understand what they mean and how they compare to other results.

So we analyze the results, put them in a database, put them in reports, and then we share them with the manufacturer, of course, to let them know that we've done this testing and how their product performed. In some cases the manufacturer may want a retest—that's happened once so far, out of all the tests we've done—and we have a process for requesting a retest. It's very easy for a manufacturer to do: they simply have to pay for the retest themselves. Otherwise we have no problem with doing a retest if it's needed; in the one case we've had, the manufacturer had come out with a new version of the product right after our testing was done, and they wanted us to test the newer version as well. And that's possible.

Once we've done that, we publish the results. We put summary reports online—those summary reports are de-identified; they don't have any information about the product names or the manufacturer, because we're not trying to point fingers or put anybody on the spot. We want this industry, this technology, to succeed, and we really want everybody across the industry to feel like we're trying to help them, not hinder them. So in the summary reports we don't put any names, but we do draw insights—we put in the basic information, and those reports are just chock-full of information. In case you haven't looked at them yet, I really recommend you do.

We also put out detailed test reports about each product that's been tested. Those reports are not available for download, but they are available on request. There's a form online that anybody can fill out to request the test reports they want. And we also do other analyses and studies, where we look at the results from different angles—in some cases looking at variability in the testing, in some cases looking at issues such as thermal behavior or other aspects like that.

So we basically select and acquire the products, we send them to testing labs, we analyze the results, and we compile those results and publish them for everybody. We do all of this in the context of a fundamental policy that we call a "no commercial use" policy: these reports and results cannot be used in any way to put a competitor's product at a disadvantage, or even to put your own product at an advantage in any blatant way. So we're very clear in saying that these results are for improving and advancing the industry, and they're not out there to be used for any commercial benefit or disadvantage. I'll put that policy up at the end for everybody's benefit.

So what have we done so far? We started at the very end of 2006—toward the end of 2006 we did some pilot testing to figure out how this program would work, and we started our first round of testing. As I said, we're doing quarterly rounds, so every three months we select products, acquire them, test them, analyze the results, and produce reports, and we've done three complete rounds. Round Three was completed back in September or October; Round Four testing is pretty much completed, and we're now in the phase of analyzing that information and producing reports; and the selection process for Round Five is starting right now. So we're moving along at a rapid pace. We did about 10 products per round in the first two rounds, and we're aiming for about 20 products per round henceforth.

So right now, if we look at Rounds One through Three, we have about 50 products that have been tested already, and then another 20 products from Round Four that are just being analyzed right now.

The focus, as I mentioned earlier, of all of these tests has been on overall luminaire performance. We have not looked at device-level performance, and the results that have come out so far are not from reliability studies like lumen depreciation, or from in situ testing, aside from that one test of the refrigerated case light. So all of these tests are on luminaire performance, although some of the products have gone on to lumen depreciation testing as well—we won't have those results for many months.

There's a huge range of different applications and different types of products that we're testing here, and there's a huge range of results—I'll show you some of those results in just a couple of minutes. And even though 50 tested products might sound like a lot on the one hand, on the other hand it's a very small sample, and we're doing more testing with each round. We just want to remind people that with the technology moving so fast, and with the limits on how much we can test at any given time, the sample size is small, and we don't want to draw too much out of it statistically—that would be a false use of statistics. The sample is small, and the range across different applications is so large, that it's hard to see trends yet.

These pictures over here just give you a little example of what our test reports look like, and I really hope you get a chance to look at this.

So let's move on to look at the results. This first one is not meant to scare you—I don't want you to look at all the details on this slide; you can do that later if you want, and you'd be better off going to the summary reports for the details. I'm just showing you this table so that you get a feel for the types of information you're going to find in our summary reports. You're going to find things like the manufacturer-published values—what does the manufacturer say about his or her product? What output and efficacy did they publish? What did we measure for output? What did we measure for efficacy? What did we measure for color temperature and CRI? If we have them, we'll also put out there the manufacturer's figures for color temperature and CRI. And then we'll draw conclusions. This particular table is about downlights, and you can see that we've tested a certain number of replacement lamps that can be used in downlights—they're being sold as replacements for R-30s or PAR-30s, for example—and we've also tested a certain number of downlight luminaires, complete products: not just a replacement lamp, but an entire luminaire. Our summary reports go into detail on the outputs and efficacies we're seeing, and on the differences between what the manufacturer is saying and what the test results are actually showing. In some cases—look at these first two products up here—our testing actually showed output that was higher than what the manufacturer was publishing. In other cases—for example, down here is a recessed downlight where the manufacturer was publishing a value up around 40 lumens per watt—our test came out at 16. Those are the types of analysis you're going to find in our reports.

Now what else are you going to find regarding the downlights? We summarize them so that you can get an overview of how things are progressing and what we've seen. In our Round Three report we published information about 13 different downlight products—seven replacement lamps and six downlight fixtures. The lowest in efficacy was at about 12 lumens per watt; that's not much better than incandescent or halogen efficacy. But the highest was over 60 lumens per watt, which was really amazing. The average of all 13 products was around 30 lumens per watt, which is really not bad for a luminaire efficacy. Again, remember what I said about this being a test of luminaire efficacy. Even with the lamps it's the whole lamp; in this case the lamps are derated by a fixture loss factor, but otherwise the output is for the whole product, not just the light source.

The output figures give you a little bit of information, but they need to be taken with a grain of salt, because some of these are smaller products aiming to replace a lower-wattage luminaire, and some are aiming to replace something up to a 65- or 70-watt incandescent-type product. Here we have one product that's putting out 100 lumens, and another that's putting out over 700 lumens. So there's really a huge range of products here.

The other point that we like to make here is that out of these products, almost half of the manufacturers provided relatively accurate information—what I call relatively accurate is within about ±15% of the measured value. So that's actually pretty good. The rest—just a little more than half—provided inaccurate information, and those ones, on average, overstated the performance of their products by about 70%, which is significant. So you're seeing both very consistent, very accurate product literature, and also very inaccurate literature—and you never know ahead of time which one you've got.
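
In code, that ±15% screen might look like the sketch below (the helper functions are mine, not CALiPER's actual tooling; the example numbers are the claimed-40, measured-16 recessed downlight from the table a moment ago):

```python
def claim_is_relatively_accurate(claimed, measured, tolerance=0.15):
    """True if the manufacturer's claim is within +/-15% of the measured value."""
    return abs(claimed - measured) <= tolerance * measured

def overstatement(claimed, measured):
    """Fractional overstatement of the claim relative to the measurement."""
    return (claimed - measured) / measured

# The recessed downlight example: claimed ~40 lm/W, measured 16 lm/W.
print(claim_is_relatively_accurate(40.0, 16.0))       # False
print(f"{overstatement(40.0, 16.0):.0%} overstated")  # 150% overstated
```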

Now what else are we doing with these results? As I said, we're doing benchmarking, where we're doing comparable tests on other products. In this graph we have outputs and efficacies for incandescent downlights and for CFL downlights. Some of these results are from tests that we actually did; some are outputs calculated from information in product catalogs—when you apply a fixture loss rating to the catalog figures, you get estimated output and efficacy for those products.

So a couple of things we can say from this benchmarking. We know that downlight products, like any products, cover a huge range. Take the incandescents we looked at here—one of these is actually a test value, and the others are all from product catalogs. The output ranges from about 300 lumens all the way up to 700 lumens, with most of them operating in the same efficacy range, between about five and ten lumens per watt. For the CFLs, a couple of these are ones we tested in this program or in earlier programs, and a couple are from product literature. They also cover a huge range—from about 300 lumens all the way up to about 800 lumens, and in efficacy at the luminaire level from about 25 lumens per watt for the lowest CFL downlight all the way up to a little over 40. Now, we've also plotted here the results from the tests we've done on solid-state lighting luminaires. As you can see, the products that we tested at the very beginning—which I'm calling 2006 products, although some of the other ones may also have been developed in 2006; we don't know that information, but these were obviously 2006 or prior, because that's when we tested them—really didn't have the output you're seeing in the incandescent and CFL products. They did have slightly better efficacy than the incandescents, but they weren't meeting the CFL range.

Now look at the products we've tested more recently—products that have probably been developed more recently, using newer chips, and benefiting from increased knowledge in the industry about how to implement those chips in luminaires. We're seeing a number of products here that are fully capable of reaching the output levels of the incandescent and CFL products, and of reaching the efficacy levels of the CFL products—or even, with this product here, exceeding those efficacy levels at the luminaire level.

So this is really promising for where things are going, especially if you put it in the context of the LED devices that are improving from month to month. So this is very, very promising.

As I said, we're looking at lots of different types of products, and here's some information about task lighting. We tested quite a few undercabinet lights and desk lamps—five solid-state lighting undercabinet lights and eight solid-state lighting desk lamps, as well as two fluorescent-tube undercabinet lamps, a CFL desk lamp, and a halogen desk lamp; those were there for benchmarking purposes. And basically about five of these solid-state lighting products show performance comparable to or better than the fluorescent products that we benchmark-tested.

And almost all of them did better than halogen, except that there's a problem of off-state power, which I'll explain on the next slide. But basically there's a range of products here. The solid-state lighting task lights that we tested run from very low-wattage products at four watts all the way up to 21 watts, and the output we're seeing goes from as low as 75 lumens to over 400 lumens. Similarly with efficacy—the lowest one here is much better than you might see in a halogen desk lamp, and the highest is quite good at 43 lumens per watt, which is actually better than any of the fluorescent benchmarks that we tested.

And the color temperatures—there's quite a range, and I guess that depends on what the manufacturer is designing for. Some of the products are very warm, and some are very cool color temperatures. And the CRIs are similar to the CRIs we're seeing throughout: the average CRI across all the solid-state lighting products we've tested is about 76. So pretty similar in that range, although not in the task lights but in the downlights we've seen a product with a CRI all the way up to 95.

So getting back to what I said a minute ago about off-state power problems: what we're seeing is that because it's a lot easier for manufacturers to get UL listing if they use plug-in power supplies—what some folks call "wall warts"—these products can have considerable off-state power draw. The highest we've seen so far draws more than 2-1/2 watts when it's not on, which is quite extreme, especially considering that these are task lights that on average might be used, let's say, three hours per day or even less. I don't know about your own use of your desk light or task light—it might be less than three hours a day, maybe on average one hour.

So basically, if you look at the efficacy of these products and calculate an effective efficacy—taking into account both the power draw while they're on and the power draw while they're off—you can see that their effective efficacy, for the most part, really goes down. Now, any of these with a line that goes straight across is a product that has managed to be designed without any off-state power: when you turn them off, they actually are off. But even for the CFL and halogen products here, you can see that some of them are drawing off-state power. Even this halogen one—the yellow line here is the halogen desk lamp that we tested—was drawing a little bit of off-state power. Not much, and there's really no need for it, but it was doing that.
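
A minimal sketch of that "effective efficacy" calculation (the three-hours-a-day duty cycle follows the talk; the wattages and lumen figure are hypothetical, and this is my reading of the calculation, not CALiPER's published formula):

```python
def effective_efficacy(lumens, on_watts, off_watts, on_hours_per_day=3.0):
    """Useful light per average watt, with off-state draw folded in."""
    off_hours = 24.0 - on_hours_per_day
    daily_lumen_hours = lumens * on_hours_per_day
    daily_watt_hours = on_watts * on_hours_per_day + off_watts * off_hours
    return daily_lumen_hours / daily_watt_hours

# Hypothetical task light: 300 lm at 8 W, drawing 2.5 W when switched "off".
print(effective_efficacy(300.0, 8.0, 0.0))  # 37.5 lm/W with no off-state draw
print(effective_efficacy(300.0, 8.0, 2.5))  # ~11.8 lm/W once off-state draw counts
```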

Now if you look at these solid-state lighting products, all except for this lowest one down here—all of the other ones would have higher efficacy than this halogen desk lamp if they didn't have off-state power draw. But if you account for that off-state power use, all of these products here are performing worse than that halogen product, even though they're using technology that should be able to do better than that.

So this really is a concern in the design of these products, and we're hoping that manufacturers are going to find ways to design around it—to design products more like the ones we see up here that have no off-state power use, or like this highest one here, which, even though it has a little bit of off-state power use, is still performing better than the CFL undercabinet light we tested, which comes in at about 36 lumens per watt at the luminaire level.

I mentioned that we're doing direct comparisons—in two cases we've been able to get exactly the same product in a solid-state lighting version and in another source [audio glitch]. The manufacturer sold the same product in a CFL version and in an LED version. It's a product from about a year ago, so hopefully you'd see better performance in products coming out today, but this one's probably still on the market. The output of the two products was basically the same. The efficacy of the CFL product was a little bit better—the LED product is at about three-quarters of the efficacy of the CFL product. The color temperature of the LED product was much cooler than the CFL product. And the color rendering was basically the same—a difference of 5 in CRI is really not statistically significant at this level. But the power factor of the LED product was much better than the power factor of the CFL product. So that's a direct comparison between two products that are exactly the same except for the source—same manufacturer, same luminaire, just a different source in them.

Another example here: different manufacturer, different desk lamp, but in this case we were able to get it in both a halogen version and an LED version. In this case the LED version isn't even putting out half the output of the halogen version that they sell, and it's only a little bit more efficacious than the halogen version—not much. The color temperatures are pretty good—they're both fairly warm—but the color rendering of the LED one is only down at about 75, compared to close to 100 for a halogen lamp. And the power factor of the LED desk lamp here is a little bit lower, of course, than the halogen one. So in this case we see a product being offered where, if a buyer purchases this LED luminaire—unless they purchased it for some other reason, like its expected life—they might be a little bit disappointed in that output.

We've also tested products from lots of other categories. Among the outdoor luminaires, we've seen some that are just very high-performance, offering over 50 lumens per watt and a very respectable level of output, suitable for these applications. And then we also tested one outdoor path light that tested at only six lumens per watt, which was really disappointing. So we're seeing quite a range in the quality and output of all the different [audio glitch].

Just a couple more examples: the refrigerated case light that we tested performed very well—exactly as the manufacturer had indicated in their product literature. We tested a surface-mount luminaire, which did fairly well, but not at the level the manufacturer was indicating; some of that may have been because of the optics on it, and that manufacturer has since come out with clearer information about how their luminaire performs with [?], which was very useful.

We tested a step/wall light, which had an efficacy of only four lumens per watt. Now, the output of this is only 25 lumens. With the solid-state lighting chips that are coming out today, they should be able to produce that output level with only half a watt, or a watt, you know? So this is really a case where the design wasn't suitable—it wasn't taking into account the strengths of the solid-state lighting source. And we've also tested a number of what I'm calling "nondirectional" replacement lamps. These are lamps designed to replace the standard Edison-base bulbs that we've been seeing since we were kids. And these really performed very disappointingly on the output side, putting out less than 50 lumens—really not enough to be called a replacement for any standard A-lamp you would be buying. One of these lamps here was actually the retest that we did on a newer version of an earlier product. Its efficacy is quite good, but unless the purchaser is going to be using it for an application that really needs very, very low output, that purchaser might be very disappointed in this product when they get it, because of the low output. So those are the kinds of issues we're identifying.

With our results, aside from just listing results in different product categories, we're also looking at some general qualities and general aspects of this technology. One of these is power factor. We look at the power factor of every product that we test, and we've seen a range all the way from as low as 0.3 up to almost one—0.99. So there's really quite a range. We're happy to see that in the products we've been testing more recently, the power factors seem to be going up. You'll see a dotted red line on this chart at 0.7—that is the limit that the ENERGY STAR criteria for solid-state lighting have put in place for residential applications: a power factor of at least 0.7. Commercial products need a power factor of 0.9. And you can see that many of these products are meeting that requirement on the power factor side, so we're very happy to see that. Unfortunately, the ones performing at very low power factors are often some of the low-output replacement lamp products, and we hope there won't be a whole proliferation of that type of product out there, because it won't look very good to the power industry if that happens. That's why we're pushing these higher power factors, and that's one of the reasons the ENERGY STAR criteria include these limits.
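
For reference, power factor is just real power divided by apparent power (RMS volts times RMS amps). A tiny sketch with hypothetical bench readings chosen to land near the two limits just mentioned:

```python
def power_factor(real_watts, volts_rms, amps_rms):
    """Power factor = real power / apparent power (V_rms * I_rms)."""
    return real_watts / (volts_rms * amps_rms)

# Hypothetical readings for two products on a 120 V circuit:
print(round(power_factor(8.0, 120.0, 0.095), 2))  # ~0.70, at the residential limit
print(round(power_factor(8.0, 120.0, 0.222), 2))  # ~0.30, like the worst products seen
```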

We also look at color qualities, and we put out these pretty little graphs—you can see them in our detailed reports—that show what the spectral distributions look like for these different luminaires. They can be very interesting, and they can help some readers understand why the color rendering might be different. Here you see this spiky one, which is a fluorescent product, and these ones that usually have a couple of big peaks—those are solid-state lighting products. And then you have your typical incandescent product that goes way up and puts out a lot of infrared as well.

In the products we've tested to date, we've seen color temperatures as low as 2,600 K and as high as 36,000 K. Now what does that mean? Any product up in that range, even though it's being sold as white light, isn't really white. It's off the end of the scale—usually toward blue—and the numbers just start going up very high at that point, into the multiple tens of thousands in the CCT range. Those really are not white lights, and they shouldn't be sold as white lights. That may be because of the choice of LED devices, or it may be because of the way they've been implemented and driven into a range that's not white light.

A word on the phosphor-conversion LEDs here, as opposed to the RGB LED sources: RGB luminaires create white light using a combination of red, green, and blue devices, while the phosphor-conversion ones create white light in a different way—they use a phosphor in the encapsulant, which changes the color to make it white. For these phosphor-conversion products, CRI is maybe not the best way of describing the color rendering, but it's what we have available to us today; there are new types of indices being worked on.

But if we use CRI, which is what we have available today, we're seeing CRIs that range from about 51 up to 95. The 51 probably doesn't have great color rendering; the 95 is really excellent for this type of product—especially given that the CRI rating is not really applicable to LEDs in some cases. So these are pretty good. The average CRI we've seen, as I said earlier, across all the products we've tested to date is about 76. For the RGB luminaires that we tested, although we do publish the CRI values in those reports because they've been measured, we want folks to understand and recognize that those values really are meaningless—any CRI [audio glitch] underneath 50 is basically meaningless. That doesn't mean that such a luminaire won't have decent color rendering for a given application—it may be very good. In those cases, people are going to have to take the luminaire and actually look at it and see whether the white light and the color rendering it has are suitable for the applications they want to use it in.

We also look in a general sense at how the energy use and light output is trending, and I like to use the big vague cloud to remind folks that there's not a single value that represents what the output and efficacy is in any one of these types of light sources. So you have quite a range of output and efficacy for incandescent lamps, quite a range for a compact fluorescent or a fluorescent lamp, quite a range for solid-state lighting [audio glitch].

But in general what we're seeing is that the output level of solid-state lighting products can now be comparable to the output you're going to see in an incandescent or CFL product. Whereas a couple of years ago the output was looking like it was not quite up there, now we're testing products and seeing that their output actually is on par with many incandescent and CFL products—maybe not with the products at the high end, those built for particularly high wattages and high output; some of the solid-state lighting products aren't quite up there, although they can be designed that way now.

For the most part, they're at least meeting the basic products we're seeing in these other categories, and that's especially so for applications suited to directional lighting, such as undercabinet lights, desk lights, and downlights. And we're seeing efficacies that are definitely surpassing incandescent in almost every case—there's no question that solid-state lighting products now can have efficacies that surpass the incandescents. Compared to CFLs, we're seeing products that might be at half the efficacy of CFLs, or even better than the efficacy of CFLs—so there's quite a range right now in the efficacy of the solid-state lighting products we're seeing.

And I'd really like to point out that one of the most important things here is that there are wide differences in the products. It's really important not to generalize about them. The technology is evolving really fast, and the knowledge and experience that different manufacturers have can be really different. So it's evolving fast, and there's a huge range. We're also doing more and more benchmarking all the time, so that we can better compare these luminaire testing results to luminaire testing results for other products, because what we're finding is that if you take a lamp rating and just apply some sort of guesstimate for the fixture efficiency—or even, sometimes, the value that's been published for the fixture losses—the resulting output is not actually what we see when we measure that lamp in that fixture. That's for incandescents and CFLs. So we're doing benchmarking so we can really see what the luminaire output is for incandescent and CFL fixtures, and we just need to continue doing more and more [audio glitch] testing.

So I'm just going to be wrapping up now with the next few slides. A key point that I keep making is: be careful not to generalize. Our testing has included a huge range of products. The smallest drew only half a watt of power, and the largest was drawing almost 200 watts. The lowest output of any product we've seen was ten lumens, and the highest was over 6,000 lumens. So we're looking at a huge range of products, and we don't want to make sweeping generalizations about that range. We're seeing some that really are not performing at high efficacies, and some that are really excellent. And the same goes for color temperature and CRI. So there's a wide range of products being tested, and a wide range of performance across those products.

So what does this tell you, as far as what you can take away? One key point is that product literature is not always consistent, and is not always reliable. So it's very important for buyers to request the luminaire testing results, and to look at the product literature carefully to see whether the published results really are luminaire testing results—because if the manufacturer is publishing efficacy or output values based on the LED device efficacy or output, that is not going to tell you what the luminaire output will be. There's no direct correlation at this time between the two.

With the CALiPER program doing all this testing, we're already seeing some positive influences. We're getting very good input and very good involvement from the manufacturers and from the industry; we can see an increase and improvement in market awareness and industry awareness of these different issues, and of the need for luminaire testing. And we're also seeing our input going into the testing standards—LM-79 is getting validated and refined, so it's going to be coming out soon in a solid version, on a very, very fast track for a standard like that.

And basically—two points that I've mentioned a couple of times: the results we've presented here are based on products that were probably designed from 2005 to early 2007. And these products already include some that can rival—and one that actually surpasses—CFL products in output and efficacy, which is pretty amazing; although some aren't so good, some are really good. And given what I've said about how fast this technology is improving, and how the manufacturers' understanding of it is moving forward in leaps and bounds, this shows really great promise for upcoming generations of the products.

[Audio glitch 0:55:35 to 0:55:50] not able to get those luminaire testing results, it's nice to know that the ENERGY STAR criteria for solid-state lighting are coming out shortly. The effective date is September 30th, 2008, and those products will be tested for total luminous flux of the luminaire—the light output of the luminaire—and luminaire efficacy: all the things I've been talking about here, plus color temperature, CRI, and intensity distributions, with minimum zonal distribution requirements for different applications.

And there will also be testing done to ensure the reliability of those products. I haven't talked much about that today, because our lumen depreciation and reliability testing is just starting now, but we'll be getting more test results on that later this year, and the ENERGY STAR criteria include requirements that address the lifetime of the product. So that's a very positive sign coming up.

But in the interim, it really is useful to be informed about what you're buying. Look at our test results, look at our summary reports to understand what you're looking at, and that should help you to [audio glitch].

For more information about our testing program, the CALiPER testing program, you can visit the DOE Web site, where you'll find summary reports that you can download directly for the first few rounds of testing, and you can also request the detailed reports, as I mentioned, and they'll be sent to you. Requesting the detailed reports is contingent on agreeing to abide by our no commercial use policy, which basically says that the information we're putting out cannot be used in advertising, cannot be used to promote a company's product or service, and cannot be used to characterize a competitor's product or service. But you can use it for your own education, for making decisions, and for understanding the technology better.

So that's all I have today, and I'm just going to show you the link to the DOE Web site one more time where you can go and download the report on CFL testing, CFL lessons learned, and our CALiPER reports and a lot of other information.