Blog

Meter-Based Methods from Coast to Coast

Demand Side Analytics (DSA) recently conducted two similar studies on the accuracy of using smart meter data for evaluation and settlement of energy efficiency, also known as meter-based methods. Different jurisdictions refer to these meter-based methods with distinctive terminology. Across our two studies, California (for Pacific Gas & Electric) refers to these methods as normalized metered energy consumption (NMEC) while Vermont (for the Vermont Department of Public Service) refers to them as Advanced Measurement & Verification (M&V).

While these studies were distinct, they were both concerned, to varying degrees, with:

  1. Estimating energy efficiency (EE) program impacts.
  2. Assessing the accuracy of meter-based methods.
  3. Providing recommendations for the types of populations, interventions, and locations where meter-based methods should be applied.

What follows is a review of the benefits of using meter-based methods for estimating EE program impacts, a brief overview of each study’s goals, and our findings.

What are Meter-Based Methods?

The primary challenge of estimating energy savings is the need to accurately detect changes in energy consumption due to the energy efficiency intervention, while systematically eliminating plausible alternative explanations for those changes. Did the introduction of energy efficiency measures cause a change in energy use? Or can the differences be explained by other factors (such as the effects of the COVID-19 pandemic)? To evaluate energy savings, it is necessary to estimate what energy consumption would have been in the absence of program intervention—the counterfactual or baseline.

Meter-based methods rely on whole-building, site-specific electric and/or gas consumption data, either at the hourly or daily level, to construct the baseline. This data is then used to estimate energy savings associated with the installation of individual or multiple energy efficiency measures (EEMs) at the site.
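The arithmetic behind a meter-based estimate is simple once the baseline exists: avoided energy is the modeled counterfactual minus observed consumption. A minimal sketch, with made-up numbers rather than data from either study:

```python
# Minimal sketch of the meter-based savings idea: avoided energy is the
# difference between a modeled baseline (counterfactual) and observed use.
# All numbers here are illustrative, not from either study.

def avoided_energy_kwh(baseline, observed):
    """Sum hourly (baseline - observed) over the post-installation period."""
    assert len(baseline) == len(observed)
    return round(sum(b - o for b, o in zip(baseline, observed)), 2)

# Four illustrative post-period hours for one site:
baseline_kwh = [2.1, 2.4, 2.2, 2.0]   # what the model predicts absent the EEM
observed_kwh = [1.8, 2.0, 1.9, 1.7]   # what the meter actually recorded
print(avoided_energy_kwh(baseline_kwh, observed_kwh))  # → 1.3
```

Everything else in a meter-based method is about constructing that baseline well.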

Why rely on Meter-Based Methods?

Many methods exist to estimate savings associated with EEMs, all with varying degrees of modeling complexity, data requirements, accuracy, and precision. The benefits of using meter-based methods include:

  • Eliminating the need for sampling because data is available for nearly all participants.
  • Reducing the burden on participants because technicians don’t need to visit the home or business to install metering equipment.
  • Producing faster feedback on energy-saving performance.
  • Enabling program administrators to look beyond the average customer and explore how savings vary across segments of interest.
  • Opening new opportunities for program design and delivery (i.e., pay-for-performance programs).
  • Producing granular savings estimates that are useful for a wide range of planning and valuation functions.

California

Pacific Gas and Electric Company (PG&E) currently uses the CalTRACK Version 2.0 method (CalTRACK) to estimate avoided energy use for its energy efficiency programs based on the Population-Level NMEC methodology. A notable feature of the population NMEC method has been the lack of comparison groups, which are used to adjust the energy savings baseline and normalize the savings estimate for factors beyond weather. The pre-post method without a comparison group relies almost exclusively on weather normalization and effectively assumes that the only difference between the pre- and post-intervention periods is weather and the installation of EEMs. The COVID-19 pandemic laid bare the limitations of the adopted method. The pandemic led to changes in our commutes, business operations, and home use patterns. Not surprisingly, it has also changed how, when, and how much electricity and gas we use. Moreover, the impact on energy use differs for residential customers and various types of businesses.

Given the changes in energy consumption that have occurred over the course of the COVID-19 pandemic, the need for alternative approaches to CalTRACK and similar, simple pre-post regression methods for estimating EE impacts is paramount. While adding comparison groups typically improves the accuracy of these energy saving estimates, there are three main logistical challenges:

  • Privacy of non-participant customer data. Current California laws and regulation exist to protect the privacy of advanced metering infrastructure (AMI) or smart meter data for individual customers.
  • Transparency Challenges. Many evaluation methods that rely on a comparison group require extensive calculation in order to construct the group. This complexity can hinder independent review and/or replication of the findings.
  • Complexity and frequency. PG&E and third-party EE program implementers target a wide range of customer segments and geographic areas, each of which requires regular and specifically targeted non-participant data for evaluation. This consideration adds complexity to existing program administration processes.

To determine if there are viable alternative models that can accommodate the effects of the COVID-19 pandemic or other wide-scale non-routine events, DSA conducted an accuracy assessment of the existing Population NMEC methods as well as a variety of other methods with and without comparison groups.

What did we do?

Accurate and unbiased estimates of energy efficiency impacts are critical for utility program staff, third-party program implementers, and regulators. In evaluating the accuracy of the existing Population NMEC methods used in the PG&E territory, we tested a variety of other methods, with and without comparison groups, to simulate a competition and identify the methods that are unbiased and accurate (Figure 1).

The accuracy of these methods is assessed by applying a placebo treatment to customers who did not participate in EE programs during the period analyzed. The impact of a program (or in this case, a pseudo-program) is calculated by estimating a counterfactual and comparing it to the observed consumption during the post-treatment period. Because no EEMs were installed in this simulation, any deviation between the counterfactual and actual loads is due to error. The process is repeated hundreds of times – a procedure known as bootstrapping – to construct the distribution of errors.

Figure 1: General Approach for Accuracy Assessment

What did we find?

  1. Population NMEC methods without comparison groups cannot account for the effects of the COVID-19 pandemic.
  2. The existing population NMEC methods without comparison groups show upward bias even prior to the effects of the pandemic.
  3. Comparison groups improve accuracy of the CalTRACK method.
  4. When constructing a matched control group, the choice of segmentation and matching characteristics matter more than the method of matching customers.
  5. Synthetic controls may perform well but are highly sensitive to the choice of segmentation used.
  6. Using aggregated granular profiles instead of individual matched controls in Difference-in-Differences methods yields comparable results to using individual customer matched controls.
  7. Accuracy and precision are dependent upon the number of sites aggregated together (Figure 2).
  8. No method is completely free of error.

Figure 2: Distribution of Error across Comparison Groups

Given these findings, rather than try to produce a single prescriptive method for NMEC analyses of energy efficiency programs, we instead recommend a framework by which proposed NMEC methods can be tested, certified, and used to estimate savings.

Vermont

The primary objective of the Hourly Impact of Energy Efficiency Evaluation Pilot was to better understand the time-value of energy efficiency measure savings and the implications for program design, delivery, and evaluation. Because energy efficiency in the Northeast qualifies for capacity value, accurate estimates of the contribution of energy efficiency to peak hours is critical. Using high-frequency 15-minute consumption data from Green Mountain Power’s AMI and program tracking data from Efficiency Vermont, the study team modeled energy consumption of participating homes and businesses separately in the pre-installation and the post-installation periods. These two periods were compared to understand how consumption changed following installation of an energy efficiency or beneficial electrification measure. A secondary objective of the study was to compare Advanced M&V methods, or regression-based modeling of utility meter data, with the approaches traditionally used in Vermont. This comparison helped to determine where Advanced M&V could offer cost savings, improve the accuracy and granularity of savings estimates, and identify lessons for program operations.

What did we do?

To generate savings for the 21 prescriptive measures and the 124 custom projects in Vermont, we implemented Advanced M&V procedures that build upon the International Performance Measurement and Verification Protocol (IPMVP) Option C Whole Facility approach to energy savings estimation. We do this through a regression model that follows Lawrence Berkeley National Laboratory’s (LBNL) Time-of-Week Temperature (TOWT) Model, where the dependent variable is hourly electric consumption from the meter and the independent variables contain information about the weather, day of week, and time of day.
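A stripped-down illustration of the TOWT structure follows. The real LBNL model uses a piecewise-linear temperature spline and 168 hour-of-week bins with occupancy handling; this sketch keeps only hour-of-week indicators and a single linear temperature term, fit to simulated data:

```python
import numpy as np

# Hedged sketch of a TOWT-style model: hourly kWh regressed on time-of-week
# indicators plus outdoor temperature. Simplified relative to the real LBNL
# model (no temperature spline); data are simulated to show the structure.

rng = np.random.default_rng(0)
n = 24 * 7 * 8                       # eight weeks of hourly data
hour_of_week = np.arange(n) % 168
temp = 60 + 15 * np.sin(np.arange(n) * 2 * np.pi / 24) + rng.normal(0, 2, n)

# Design matrix: one dummy per hour-of-week, plus a temperature column.
X = np.zeros((n, 169))
X[np.arange(n), hour_of_week] = 1.0  # time-of-week indicators
X[:, 168] = temp                     # temperature regressor

true_temp_slope = 0.05               # kWh per degree, used to simulate load
y = 1.5 + 0.3 * (hour_of_week < 120) + true_temp_slope * temp + rng.normal(0, 0.1, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated temperature slope: {beta[168]:.3f} (true {true_temp_slope})")
```

Fitting this model separately to the pre- and post-installation periods, then predicting both over the same weather year, yields the hour-by-hour savings estimates described above.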

This methodology estimates efficiency impacts in each hour of the year. Granular results provide insight into the distribution of energy savings across a year. For example, Figure 3 shows a heat map of the average energy savings from installing a variable speed heat pump. This measure’s model estimates a large load increase during the winter months (blue regions). Negative savings are a good thing in this case because it means Vermont homes are using the heat pump for heating and displacing delivered fuel consumption. There is also a pocket of denser load increase during midday in the summer months, presumably because homes that previously lacked air conditioning are now using the heat pump for cooling.

Figure 3: Variable Speed Heat Pump Heat Map

What did we find?

  1. Modeling success for prescriptive measures is a function of effect size and the number of participants.
  2. There are challenges when using Advanced M&V for “market opportunity” measures, where the baseline is a hypothetical new piece of equipment with code-minimum efficiency. This assumption creates issues because the pre-installation meter data reflects the replaced equipment at the end of its useful life.
  3. For custom projects, Advanced M&V methods work best for sites with predictable load patterns and large savings as a percent of total consumption (Figure 4).
  4. With the level of noise present, we caution against using site-specific results to determine incentive levels in Vermont and suggest Advanced M&V is more useful as a program evaluation tool.
  5. Advanced M&V is a powerful tool, but it is not the right tool for every job.

Figure 4: Example of a Well-Behaved Custom Project

Given these findings, accurately and precisely estimating savings from efficiency measures requires taking the guidance below into consideration.

 

Does residential battery storage help the grid?

Do residential behind-the-meter batteries help the grid? The answer is: unfortunately, not as much as one would hope. The below plot shows my solar unit, two Tesla batteries, my whole home use, and my grid use on four select days.

The first plot is a very good outcome. It shows no grid usage. The home (blue) is exclusively being powered by battery storage (green) and solar (orange). This pattern happens fairly often in the spring when household energy consumption is low and solar production is high. It also means no grid exports, even if the grid needs additional resources. The second case – contributing to the ramp – is disturbingly common. The battery starts charging as soon as the sun is up and is fully charged around mid-day, at which point all of the solar output flows to the grid at once. From a grid standpoint, the system is contributing to the ramps rather than helping absorb surplus solar. The last two scenarios are less common. The third plot shows some use during off-peak hours (I was charging an electric car), my intentional draw from the grid immediately before the 4-9 pm period, and use of the battery throughout that peak window (with a small amount of exports). It was also a very hot day; you can see my AC unit cycling on and off. The last plot shows the full capability of the battery, close to 7 kW, which is rarely seen. The battery went into storm mode and drew power from the grid rather than charging only from the rooftop solar. When operated in default mode, the battery will almost never charge or discharge at its full capability. In short, behind-the-meter batteries are an under-utilized, untapped resource both during the periods when the grid needs resources most and during periods with excess generation on the grid.

If left to operate on their own, the batteries typically charge as soon as the sun comes up (the wrong time from a grid perspective), often don’t absorb surplus generation, and rarely, if ever, export to the grid when resources are needed most. By design, they operate with the customer in mind, which is an excellent objective. However, it is possible to lower customer bills, provide backup power, and also improve operations for the grid. As the saying goes, “we can walk and chew gum at the same time.”

Why does this matter? Behind-the-meter battery storage is a growing, untapped resource, and the need for flexible, predictable resources is growing. The plot below shows the growth in residential behind-the-meter battery storage in California. There are currently about 400 MW, and that figure is growing quickly. Roughly 8-10% of new solar installations also include battery storage, and the share of solar sites electing battery storage is growing. What can be done to tap into this under-utilized resource? Clearly, it is not enough to have the batteries installed. It is necessary to operate them at the right times and to provide customers incentives to do so.

 

DSA is involved in several efforts to better use battery storage, including:
  • A virtual power plant study with over 1,000 residential batteries. The batteries are providing grid response based on day-ahead market prices  (after a strike price is hit) and in response to system operator alerts, warnings, and emergencies.
  • A battery storage pilot. Perhaps the most exciting part of the pilot is that we are using a randomized control trial to explicitly test how different incentive levels and incentive structures affect customer willingness to allow utilities to operate the battery for grid needs. In addition, we are testing daily operations with day-ahead market prices and time-of-use rates, and testing how to modify dispatch algorithms so behind-the-meter batteries can deliver a predictable, incremental resource. The pilot includes two tracks: one for customers with existing battery storage and one for customers who are in the process of installing solar and/or battery storage (new sites). DSA is in charge of all aspects of the turn-key pilot including design, recruitment, event operations, communication with the batteries (or more accurately, the battery API), setting up data tracking and collection databases, and evaluation. (Click the link for a presentation of the battery pilot design: Battery Storage Pilot Design)
  • Programming a utility-scale battery to maximize load relief and reduce demand charges for co-ops.
  • Identifying high-value locations for distribution connected battery storage.
  • Assessing the economic feasibility of utility-owned battery storage operated in response to market prices and T&D needs.

Is Electric Demand Rebounding? An Interactive Dashboard

The last few months have been unusual. COVID-19 has led to changes in our commutes, business operations, and home use patterns. It has also changed how, when, and how much electricity we use. As long as we have social distancing, efforts to re-open, and concerns about a second wave, forecasting electric demand will be more difficult. There are several fundamental questions:

  • By how much have electric demand and sales dropped?
  • Are different parts of the U.S. affected differently?
  • Is electricity demand rebounding to pre-COVID-19 levels? If so, where and how quickly?
  • How will demand rebound over the remainder of the summer, when most electric systems peak?

We were curious ourselves and decided to answer those questions. In the process, we built a live dashboard to track the effect of COVID on electric demand across the U.S. <link>. The plot below shows how electric demand changed over time compared to pre-COVID levels. The dots are individual days; the line shows the 7-day trend. In the U.S., electricity peak demand and sales dropped by roughly 7.5%. The drop is most noticeable in mid-March when the first stay-home orders started going into effect. Starting in May and June (depending on location), electricity use has begun to inch back up.
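The trend line in the plot is a simple trailing average. A sketch of the calculation, with hypothetical daily demand figures rather than actual data:

```python
# Sketch of the dashboard's 7-day trend line: percent change in daily demand
# vs. a pre-COVID baseline, smoothed with a trailing 7-day average.
# The daily figures below are made up for illustration.

def pct_change_vs_baseline(daily_gwh, baseline_gwh):
    return [100 * (d - baseline_gwh) / baseline_gwh for d in daily_gwh]

def rolling_mean(values, window=7):
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

daily = [95, 93, 90, 88, 87, 89, 91, 92, 94, 96]   # GWh, hypothetical
changes = pct_change_vs_baseline(daily, baseline_gwh=100)
trend = rolling_mean(changes)          # one smoothed point per day from day 7 on
print([round(t, 1) for t in trend])    # → [-9.6, -10.0, -9.9, -9.0]
```

Smoothing over seven days removes the strong weekday/weekend cycle in electric demand, which is why the dots scatter around the line.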

Electric demand dropped off throughout most of the U.S.  The plot below shows snapshots of the change in electrical demand for the first Monday of March, April, May, and June. For some areas, such as Nevada and Texas, efforts to re-open the economy (and casinos) are clearly evident in the energy patterns.

So, where do we go from here? Our dashboard will be updated daily to track how electricity demand and use changes as we attempt to return to normal. The dashboard provides the ability to view specific balancing authorities or the U.S. as a whole. Monitoring the effect of COVID on electric demand is the first step. COVID-19 has made forecasting energy sales and peak demand more difficult, both in the near and long term. The next step is developing short and long term forecasts that adjust as social distancing and economic re-openings evolve. We have some ideas about how to do that well.

 

California Electric Vehicle Penetration – Granular Maps

Earlier this week, the California Department of Motor Vehicles posted a granular data set with vehicle registration details by zip code, vehicle type (gas, electric, plug-in electric, hybrid, etc.), model year, and make (e.g., Tesla, Toyota). The data runs through October 2018. Because it includes all vehicles, it provides insight into the penetration of electrified vehicles overall and by model year. We decided to visualize it, make it interactive, and share it. Enjoy.

We’ll be visualizing vehicle penetration data from Massachusetts and New York relatively soon. If you know of other states that provide granular vehicle data, drop us a note and we’ll add them to our list.

Click Image to go to Interactive Map

Summer Demand Response Changes at PJM

PJM recently released an updated 2019 Peak Load Forecast, the primary change being the inclusion of approved Peak Shaving Load Adjustments for summer-only demand response programs (report and supporting data available at: http://www.pjm.com/library/reports-notices.aspx).

Demand Side Analytics has prepared a report for the Consumer Advocate of PJM States which provides a high-level overview of the PJM change and explores implications for program administrators. We focus on three primary areas: 1) understanding load forecast adjustments and the implications for participation and timing, 2) offer strategy and considerations, and 3) price suppression effects. The full report is available here.

Peak Shaving Adjustments

Historically, demand resources such as demand response and energy efficiency have entered the market as supply and been eligible to compete alongside traditional supply-side resources (power plants) in a competitive auction to fulfill the resource requirements for the region. Demand response resources such as utility direct load control of central air conditioners have recently encountered difficulty participating in the market due to PJM’s “capacity performance” definition of generation capacity. A Peak Shaving Adjustment (PSA) offers a fundamentally different means for demand response to participate in the Reliability Pricing Model (RPM). Instead of being treated as supply that is capable of fulfilling resource requirements, a Peak Shaving Adjustment enters the market on the demand side. The characteristics of the shaving are used to create modified peak load forecasts.
In the report we discuss the factors that affect how a Peak Shaving resource will affect the Variable Resource Requirement, key design components as adopted by PJM, and the implications of barring dual participation.

Offer Strategy

The peak shaving “pledge” happens before the auction, so there is some uncertainty about the value of a commitment at the time it is made. If you have a state program/resource, or are contemplating developing one, how do you balance maximizing the load forecast adjustments while maintaining cost-effectiveness? For example, would it be better to shave 100 MW for three hours on all days hotter than 95 degrees, or shave 50 MW for 5 hours on all days hotter than 90 degrees? In the report we explore the effects of:

  1. System Load characteristics – how the amount of summer vs. winter peaking risk affects compensation, and considerations of event frequency vs. duration.
  2. Weather – weather varies from year to year, but commitments are based on THI thresholds. If performance is predicted based on median weather, what is the risk in extreme weather years, and what is the cost/benefit calculus of underperforming?
  3. Customer rotation – how frequently can customers reasonably be called without fatigue?
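To make the 100 MW vs. 50 MW question concrete, here is a toy calculation with assumed event-day counts (the real analysis in the report is far more involved):

```python
# Worked toy comparison of the two pledge shapes posed above. Event-day
# counts are hypothetical; the point is that the shapes can deliver very
# different expected energy, event counts, and customer fatigue.

days_over_95 = 8          # assumed days/year above 95 degrees
days_over_90 = 25         # assumed days/year above 90 degrees

deep = 100 * 3 * days_over_95      # 100 MW x 3 h on only the hottest days
shallow = 50 * 5 * days_over_90    # 50 MW x 5 h on a wider set of days

print(f"deep: {deep} MWh/yr over {days_over_95} events")
print(f"shallow: {shallow} MWh/yr over {days_over_90} events")
```

Under these assumed day counts the shallow pledge shaves more energy but calls customers three times as often, which is exactly the tension between forecast impact, cost-effectiveness, and rotation explored in the report.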

Price Suppression

The resource clearing prices in the PJM BRA are a function of zonal demand and the cost of resources available to meet those demands. Reducing peak capacity requirements generates value both by avoiding the costs associated with the load being shaved, and potentially by lowering the price for the remaining capacity that still must be procured. This second component is the price suppression effect. In reality, the supply curve is not smooth, but a staircase with tread widths the size of power plants. Thus, there is no guarantee that reducing peak will reduce the clearing price. Using PJM BRA sensitivity analyses from prior years, we estimate the slope of the supply curve for different market segments and provide bounds on the potential price suppression effect.
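The staircase point can be illustrated with a toy clearing mechanism. The offer blocks below are hypothetical and far coarser than a real BRA supply stack:

```python
# Sketch of the "staircase" point: with lumpy supply, shaving peak load only
# lowers the clearing price if demand crosses a step boundary.
# Offers (MW block, $/MW-day) are hypothetical.

def clearing_price(demand_mw, offers):
    """Walk the price-sorted offer stack until cumulative MW covers demand."""
    cumulative = 0
    for block_mw, price in sorted(offers, key=lambda o: o[1]):
        cumulative += block_mw
        if cumulative >= demand_mw:
            return price
    raise ValueError("demand exceeds available supply")

offers = [(500, 80), (400, 120), (300, 200)]   # three "stair treads"
print(clearing_price(1000, offers))  # → 200: demand reaches the top block
print(clearing_price(900, offers))   # → 120: a 100 MW shave steps down a tread
print(clearing_price(850, offers))   # → 120: a further shave changes nothing
```

The first shave saves every remaining megawatt $80/MW-day; the second saves nothing on price, which is why we report bounds rather than a single suppression estimate.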

Persistence of Home Energy Report Impacts

Demand Side Analytics has done multiple HER evaluations for many utilities, across a range of geographies, fuels, and cohort sizes. In this post, we review the results of a persistence study conducted in Pennsylvania after the conclusion of report delivery at four of FirstEnergy’s Pennsylvania Electric Distribution Companies (EDCs). The full report can be found here. The effect of treatment is typically measured through comparison of the group of customers receiving the HER, known as the treatment group, to a statistically identical control group. The comparison is done both in the period prior to receiving the reports (to assess treatment and control equivalence), and after treatment (to measure the impact of treatment on consumption). The goal of the study was to identify how long energy savings persisted, even after reports were discontinued.

What is it?

Behavioral conservation programs, such as residential Home Energy Reports (HERs), are well understood to provide small, yet measurable reductions in energy use when appropriately deployed. These programs are relatively inexpensive, geographically widespread, and effective at reducing consumption for most residential customer segments. The treatment effect is related to behavior changes brought about by providing customers information about their energy consumption relative to their peers. By showing how much energy the customer is using compared to similar households, the HER induces behavior changes using the power of social norms. This effect is facilitated by having the HER provide energy efficiency and conservation tips to the customer, which induces temporary and permanent behavior changes. Because of this, the conservation effect can persist in treated groups even after customers stop receiving reports.

Why is it important?

The persistence of HER impacts means that even after the discontinuation of HER delivery, treated customers continue to provide energy savings relative to customers who never received a report. Accurately quantifying how long savings persist in the previously-treated group is important as it helps determine program cost-effectiveness and assessments of the effective useful life of any HER program.

How did we do the analysis?

The HER program in question was implemented as a randomized control trial for each of the EDCs. A randomized control trial is an evaluation technique that provides very precise and unbiased estimates of the effect of treatment – that is, the receipt of HER bill comparisons. If properly implemented, randomized control trials (RCTs) are a very effective framework for estimating HER impacts for two key reasons, related to how HER programs are designed:

  1. Expected effect size: Because the HER effect is generally small – on the order of 1-3% – the experimental design must be precise enough to detect the effect and must be able to account for any other factors that could bias energy consumption in the treatment group. By comparing consumption in the treatment group to the control group, external influences that are experienced by both the treatment and control groups are netted out of the treatment effect, reducing the amount of noise around the treatment’s impact.
  2. Treatment duration: HER programs can run for many years; some Pennsylvania households have been receiving them for over five consecutive years. Over such a long period, many things can change at an individual home that would affect energy consumption (e.g., occupancy changes, renovations, or weather pattern changes). These factors are not all directly observed or measured, so they cannot be modeled and therefore may be misattributed to the effect of treatment in a regression. However, because these changes will equally affect the control and the treatment group, they will be netted out of an RCT impact estimate.

To isolate the impact of treatment while controlling for other factors that may influence energy use, DSA applied a lagged dependent regression approach. This model works particularly well at providing precise savings estimates when there is good pretreatment equivalence between the treatment and control groups. The model uses information about individual household seasonal consumption patterns collected through billing data analysis to estimate the impact of treatment in each month after the start of report delivery, including after reports were stopped for the persistence test.
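A minimal sketch of a lagged-dependent-variable estimator on simulated RCT data follows. It uses one pre-period usage term and a treatment dummy, whereas the actual model used monthly seasonal consumption terms:

```python
import numpy as np

# Hedged sketch of a lagged-dependent-variable model: post-period monthly use
# regressed on the same household's pre-period use plus a treatment dummy.
# With random assignment, the dummy's coefficient is the treatment effect.
# Data are simulated; the true effect is a 20 kWh/month (about 2%) reduction.

rng = np.random.default_rng(1)
n = 5000
pre_use = rng.normal(1000, 150, n)                 # pretreatment monthly kWh
# First half "treated"; assignment is independent of usage, as in an RCT.
treated = (np.arange(n) < n // 2).astype(float)
post_use = 50 + 0.95 * pre_use - 20 * treated + rng.normal(0, 60, n)

X = np.column_stack([np.ones(n), pre_use, treated])
beta, *_ = np.linalg.lstsq(X, post_use, rcond=None)
print(f"estimated treatment effect: {beta[2]:.1f} kWh/month (true -20)")
```

Conditioning on pre-period use absorbs the large, stable household-to-household differences, which is what makes the estimate so much more precise than a simple post-period comparison of means.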

To model the effect of persistence, a simple regression specification was used to determine the decay of impacts as a function of the number of months since the cohort received their last report. Because impacts can be seasonal and have uncertainty around them, a weighted average of the prior year’s monthly impacts was used to create an average pre-cessation savings level.

The key metric used to quantify the effect of persistence is how many months it takes for impacts to reach zero. Once the regression is performed, DSA used the intercept and slope from the regression output to calculate the number of months it would take for the trend in impacts to go to zero. This is shown graphically below, where it takes approximately 37 months for the orange trend line to decline to zero. The intercept for the persistence regression line is set equal to the average savings in the prior 12 months (shown in blue circles and the grey squares at month = 0). The underlying assumption of this model is that the HER savings will continue to decay at the same rate observed in months 1-24 until reaching zero.
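The extrapolation itself is one line of algebra: set the fitted decay line to zero and solve for the number of months. The inputs below are rounded illustrations, not the exact regression output behind the published tables:

```python
# The months-to-zero calculation described above: extrapolate the fitted
# decay line (savings = intercept + slope * months) until savings hit zero.
# Example values are rounded illustrations, not actual regression output.

def months_to_no_impact(intercept_pct, slope_pct_per_month):
    """Solve intercept + slope * m = 0 for m (slope is negative)."""
    return -intercept_pct / slope_pct_per_month

# e.g., a cohort starting at 1.8% savings, decaying 0.05 points per month:
print(round(months_to_no_impact(1.8, -0.05), 1))   # → 36.0
```

Because the published intercepts and slopes are rounded, recomputing months-to-zero from the table values will differ slightly from the reported figures.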

Figure 1: Persistence Modeling Example

Were there any project-specific considerations?

Each EDC studied in this project had multiple cohorts of customers that were included in the HER program and persistence study. Not all of these cohorts showed robust pretreatment equivalence. Because of this, it is best to carefully consider which cohort’s impacts should be included in an analysis of HER persistence. The criteria that DSA used to categorize cohort quality were threefold:

  1. Pretreatment equivalence must be established: Without this condition, the lagged seasonal regression model cannot provide unbiased estimates of the savings associated with a HER program.
  2. The cohort must be large enough in the persistence period to provide a precise impact: Cohorts with 10,000 or more unique – and active – customers after June 2016 provided enough information to ensure that impact estimates during the persistence period could be estimated precisely.
  3. Enough of the original cohort must remain active through the persistence period to feel confident in the internal validity of the impact: It is possible that there were systematic reasons for customer account churn in the persistent cohorts, which could create a biased estimate of the cohort’s savings. In other words, if customers who left the group responded to the HERs differently than customers who remained active, the overall cohort’s result would reflect only customers who remained active if enough other customers left. We focused our efforts on cohorts that had at least 50% of their original size still left by the persistence period.

These criteria are illustrated graphically in Figure 2 for one of the EDCs. The x-axis plots the average number of customers still active in the period between June 2016 and May 2018 for each cohort, while the y-axis shows the percentage of the original cohort size that is still active during this period. The markers for each cohort are also color-coded to highlight whether the cohort was used in the final analysis, or what the reason was for its exclusion.

Figure 2: Cohort Characterization

What are the results?

The cohort characterization resulted in five cohorts analyzed in the persistence study: two from Met-Ed and three from Penelec. The five cohorts that qualified were then fed into a second-stage model that sought to determine the monthly decay rate of the savings estimates. Since there is noise in each savings estimate and seasonal variation in the savings estimates, DSA thought it most appropriate to set the intercept of each cohort’s regression to equal the average savings percentage over the twelve months immediately prior to the persistence test. That is, the starting point of this regression was not simply what the customers saved in May of 2016 but a weighted average of the full year prior to the test. Figure 3 shows the raw data used to construct this analysis. The five cohorts that were identified as having good equivalence and the appropriate cohort size are shown in the figure below. The trend line of persistent savings is shown in blue. This figure displays the trend for FirstEnergy cohorts only, and approaches zero nearly 30 months after the HER reports stop being sent to customers. This estimate is combined with other Pennsylvania studies, below, to provide an overall decay rate estimate.

Figure 3: Persistence Trends by Cohort

To estimate the HER effect duration more precisely, DSA fit a simple linear model that related the percent savings estimates – again weighted by the aggregate reference load – to the number of months it had been since the cohort received a HER. The weighting of the percent savings is necessary in this case because we are using percent savings as our variable of interest. Weighting ensures that larger cohorts have more influence than smaller ones, and that a 2% savings in a high-consumption month counts more than a 2% savings in a low-consumption month, while still creating a percentage metric that can be directly compared to other studies.
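A sketch of that weighted fit, with invented cohort-month data points (the real model pools five cohorts and uses aggregate reference load as the weight):

```python
import numpy as np

# Sketch of the weighted fit described above: percent-savings points are
# regressed on months-since-last-report, weighted by aggregate reference
# load so big cohorts and high-use months count more. Inputs are invented.

months = np.array([1, 4, 8, 12, 16, 20, 24], dtype=float)
pct_savings = np.array([1.6, 1.5, 1.2, 1.1, 0.8, 0.6, 0.4])
ref_load_mwh = np.array([900, 700, 950, 880, 720, 930, 860])  # weights

# Weighted least squares via sqrt-weight transformation of y = a + b*m.
w = np.sqrt(ref_load_mwh)
X = np.column_stack([np.ones_like(months), months])
(a, b), *_ = np.linalg.lstsq(X * w[:, None], pct_savings * w, rcond=None)
print(f"intercept {a:.2f}%, slope {b:.3f}%/month, zero at {-a / b:.1f} months")
```

The intercept and slope from this fit are what feed the months-to-no-impact figures reported in the tables.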

Table 1: Persistence Trends by Cohort

Utility     | Cohort                | Population | Intercept | Slope   | Months to No Impact
Met-Ed      | July 2012 Market Rate | 17,828     | 1.753%    | -0.051% | 34.7
Met-Ed      | Jan 2014 Market Rate  | 12,688     | 1.039%    | -0.041% | 25.1
Penelec     | July 2012 Market Rate | 17,335     | 2.387%    | -0.069% | 34.5
Penelec     | Jan 2014 Market Rate  | 18,828     | 1.190%    | -0.058% | 20.5
Penelec     | Nov 2014 Remediation  | 15,068     | 1.384%    | -0.051% | 27.4
FirstEnergy | All                   | 81,746     | 1.613%    | -0.054% | 29.7
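The "Months to No Impact" column is just the point where the fitted linear trend hits zero, i.e. intercept divided by the absolute slope. Recomputing it from the rounded coefficients published in Table 1 reproduces the column to within rounding error:

```python
# (intercept %, slope %/month) pairs from Table 1.
cohorts = {
    "Met-Ed Jul 2012":  (1.753, -0.051),
    "Met-Ed Jan 2014":  (1.039, -0.041),
    "Penelec Jul 2012": (2.387, -0.069),
    "Penelec Jan 2014": (1.190, -0.058),
    "Penelec Nov 2014": (1.384, -0.051),
    "FirstEnergy All":  (1.613, -0.054),
}

for name, (intercept, slope) in cohorts.items():
    # Savings reach zero when intercept + slope * months = 0.
    months_to_zero = intercept / -slope
    print(f"{name}: {months_to_zero:.1f} months")
```

Small differences from the published column (e.g. 29.9 versus 29.7 for FirstEnergy) reflect rounding of the reported coefficients.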

How do these results compare to a larger set of recent HER persistence studies?

In 2015, the Pennsylvania evaluation team conducted a similar analysis of residential HER persistence for cohorts from PPL and Duquesne Energy that stopped receiving HERs. Three cohorts across these two EDCs went between 16 and 24 months without report delivery, with HERs resuming after that period had passed. Prior to the start of the persistence test, the two PPL cohorts had been receiving reports since 2010 (Legacy) and since 2011 (Expansion). Duquesne's HER program began in PY4 (June 2012 through May 2013), so customers received at most 11 months of HER treatment before reports were discontinued.

Table 2: Persistence Trends for Other Pennsylvania HER Studies

Utility     | Cohort    | Pop.   | Persistence Test Start | Persistence Test End | Months of Test | Intercept | Slope   | Months to No Impact
PPL         | Legacy    | 48,700 | May-13                 | Oct-14               | 16             | 2.350%    | -0.060% | 39.2
PPL         | Expansion | 52,900 | May-13                 | Oct-14               | 16             | 2.040%    | -0.040% | 51.0
Duquesne    | All       | 52,200 | May-13                 | Mar-15               | 21             | 1.210%    | -0.001% | 1,210.0
FirstEnergy | All       | 81,746 | Jun-16                 | May-18               | 24             | 1.613%    | -0.054% | 29.7

In general, the FirstEnergy results are quite similar to those of the two PPL cohorts, with expected impact decay times of between 29.7 and 51 months. The PPL customers had been receiving reports for longer than most FirstEnergy customers, but had generally similar savings rates prior to the start of the persistence test. This corresponds to the common understanding of HERs; namely, that they can deliver relatively consistent savings after a maturation period of one to two years once customers first start receiving reports. The decay rates, or slopes of percent savings decay, in the PPL study are quite similar to FirstEnergy's, with between a 0.04% and 0.06% drop in savings per month (roughly a 0.5% to 0.75% annual decay).
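The monthly-to-annual conversion quoted above is a simple scaling; because the decay is modeled as a linear trend, twelve months of decay is just twelve times the monthly slope:

```python
# Convert the monthly decay range cited above to an annual decay rate.
for monthly_decay_pct in (0.04, 0.06):
    annual_decay_pct = monthly_decay_pct * 12
    print(f"{monthly_decay_pct}%/month -> {annual_decay_pct:.2f}%/year")
```

This gives 0.48% to 0.72% per year, consistent with the "roughly 0.5% to 0.75%" figure in the text.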

Electric Vehicle Penetration in New York

Electric vehicle penetration and electrification have been the subject of much debate and discussion recently. Like many other states, New York has been grappling with setting policy regarding electric vehicles. Three key drivers for planning and policy decisions are the rate of adoption, the speed of turnover in the vehicle stock, and whether adopters are concentrated in specific areas.

We have been working with a utility in New York to automate monitoring of electric vehicle adoption, develop customer-specific adoption propensity scores, and estimate the impact of electric vehicles on the hourly loads (8760) of individual circuits, substations, and sub-transmission areas. In this blog, however, we discuss vehicle adoption in New York as a whole using publicly available data sources.

New York is at the forefront of the open data movement. The state publicly posts registration data for each of its 11.7 million vehicles, including VINs, zip codes, registration dates, and a host of other fields (https://data.ny.gov/Transportation/Vehicle-Snowmobile-and-Boat-Registrations/w4pv-hbkt/data). Once we remove boats, motorcycles, ATVs, and 2018 models (since the data for that year is partial) from the dataset, roughly 9.4 million vehicles remain. We supplemented the New York vehicle registrations with detailed information about the make, model, trim, engine type (gas, hybrid, etc.), model year, and other characteristics that can be extracted from VINs. This underlying data provides rich insights into electric vehicle adoption.
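The filtering step described above can be sketched in a few lines of pandas. The column names and codes here ("record_type", "model_year", "VEH") are hypothetical placeholders; verify them against the actual data.ny.gov extract before running against the real file.

```python
import pandas as pd

# Illustrative stand-in for the downloaded registration extract.
regs = pd.DataFrame({
    "record_type": ["VEH", "BOAT", "VEH", "SNOW", "VEH", "VEH"],
    "model_year":  [2015, 2016, 2018, 2014, 2012, 2017],
})

# Keep only road vehicles (drop boats, snowmobiles, etc.) ...
cars = regs[regs["record_type"] == "VEH"]
# ... and drop 2018 models, since that model year is only
# partially represented in the snapshot.
cars = cars[cars["model_year"] != 2018]

print(len(cars))  # 3 of the 6 illustrative rows survive
```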

What Do We Know About Electric Vehicle and Green Car Penetration So Far?

The figure below shows green vehicle adoption by model year, as a percentage of the total registered vehicles in each model year. The year-by-year penetration of electric vehicles and plug-in hybrid electric vehicles has been growing quickly but remains a small share of total vehicles. The penetration of hybrids in New York appears to have already peaked at between 2% and 3% of all vehicles. A key question is whether electric vehicles will drive up the overall share of green cars or whether we will see a shift from hybrids to PHEVs and all-electric vehicles. It is also instructive to understand the mix of vehicles, shown in the second figure below. While hybrids were dominated by Toyota, the EV and PHEV market is far more open, with a wider mix of car manufacturers.
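The penetration-by-model-year calculation behind the first figure is a straightforward group-and-share computation. The frame and its "fuel_type" labels below are made up for illustration; in the actual study, fuel type was derived by decoding each VIN.

```python
import pandas as pd

# Illustrative vehicle records (the real data has ~9.4 million rows).
vehicles = pd.DataFrame({
    "model_year": [2014, 2014, 2014, 2015, 2015, 2015, 2015],
    "fuel_type":  ["gas", "hybrid", "gas", "gas", "ev", "gas", "hybrid"],
})

green = {"hybrid", "phev", "ev"}

# Share of each model year's registrations that are green vehicles.
share = (vehicles.assign(is_green=vehicles["fuel_type"].isin(green))
                 .groupby("model_year")["is_green"]
                 .mean() * 100)
print(share)  # percent of each model year that is a green vehicle
```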

Of the 9.4 million cars in New York, roughly a million are new each year. As vehicles age, their count declines, either because they are retired or resold outside of New York. The pattern below is critical for understanding how the vehicle stock will change over time. First, for electric vehicle penetration to matter, the new-car market share of electric vehicles must grow. Second, the penetration of electric vehicles won't change overnight, simply because only a relatively small share of individuals purchase and drive new vehicles.

Is Electric Vehicle and Green Car Penetration Deeper in Specific Locations?

Below, we show two heat maps. The first shows electric vehicle penetration by zip code; the second shows the current penetration of green cars by zip code.

New York EV Penetration Interactive Map (Click here)

NY Green Vehicle Penetration Interactive Map (Click here)

The chart below compares the penetration of electric vehicles to the penetration of green cars. The size of each bubble indicates the total number of vehicles registered in the zip code. Not surprisingly, adoption of electric vehicles is closely related to adoption of green cars in general: we can expect higher penetration of electric cars in areas that already have a higher penetration of hybrids.
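The per-zip-code metrics behind the bubble chart can be sketched as below. The zip codes, fuel-type labels, and counts are invented for illustration; the output frame has exactly the three fields the chart needs (x = EV penetration, y = green penetration, bubble size = total registrations).

```python
import pandas as pd

# Illustrative registrations keyed by zip code.
vehicles = pd.DataFrame({
    "zip":       ["10001", "10001", "10001", "14850", "14850"],
    "fuel_type": ["ev", "gas", "hybrid", "gas", "gas"],
})

is_ev = vehicles["fuel_type"].eq("ev")
is_green = vehicles["fuel_type"].isin({"ev", "phev", "hybrid"})

# One row per zip: total vehicles plus EV and green penetration (%).
by_zip = pd.DataFrame({
    "total":     vehicles.groupby("zip").size(),
    "ev_pct":    is_ev.groupby(vehicles["zip"]).mean() * 100,
    "green_pct": is_green.groupby(vehicles["zip"]).mean() * 100,
})
print(by_zip)
# by_zip can then feed a scatter plot: x=ev_pct, y=green_pct, size=total.
```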

What Conclusions Can We Draw?

The analysis here is not about prognosticating the future of electric vehicles. There may be truly disruptive policies and technologies in the future. But what we know so far is the following:

  • Electric vehicle penetration as a percentage of all vehicles is small but growing.
  • Green vehicle penetration in New York appears to be limited to between 2% and 3% of each model year.
  • Some locations have higher electric vehicle adoption rates.
  • Electric vehicle adoption is higher where hybrid and PHEV adoption was already high.
  • The data to closely monitor and understand electric vehicle penetration is available, at least in New York.


Battery Storage! The future of the grid.

Today, two of us toured the largest lithium-ion battery energy storage facility in the U.S., at SDG&E's Escondido Substation. There are 24 pods, with enough power for 20,000 homes for 4 hours: 30 MW and 120 MWh. For anyone into energy, it's quite impressive and a big part of the future. Each pod holds 800 batteries, for a total of 19,200 battery units. The batteries are the same as those that go into the BMW i3, and there are enough of them to build over 3,600 i3s. The most fascinating part was peering inside the pods and at the controls (you always see the pod photos, but almost never the inside). The storage is mostly used to balance the power grid at 4-second intervals and has a ridiculous ramp rate capability of 200 MW/minute (that's fast acceleration). A big thanks to Leslie Willoughby and Ted Reguly from SDG&E for setting it up.
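The figures quoted above are internally consistent, which is easy to check:

```python
# Sanity check on the Escondido facility figures quoted above.
power_mw, energy_mwh = 30, 120
homes, pods, batteries_per_pod = 20_000, 24, 800

assert energy_mwh / power_mw == 4          # 4 hours at full output
assert pods * batteries_per_pod == 19_200  # total battery units

# Implied output per home served while discharging at full power.
print(power_mw * 1000 / homes, "kW per home")
```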

Washington State Distributed Energy Resource Planning

There has been rapidly growing interest in distribution planning and how to integrate distributed energy resources (DERs). The growth of DERs is fundamentally changing the nature of transmission and distribution system forecasting, planning, and operations. However, the current state of transmission and distribution planning, and of DER integration into planning, varies widely from utility to utility. For this project, our team conducted an inventory of current utility distribution planning practices and capabilities in Washington. The results were presented at a workshop on November 20th to a broad range of stakeholders.

WA DER Planning Workshop – Current utility capabilities

Price Elasticity of Demand Analysis for LED Lighting

Demand Side Analytics recently designed and analyzed an LED pricing trial for Efficiency Maine Trust. The study involved the two largest retailers in the state and provided valuable program design information on managing free-ridership, setting incentive levels, and capturing off-shelf product placement. The full report can be found at the link below:

LED Lighting Pricing Trial Results