The rapid growth of Distributed Energy Resources (DERs) on the power grid brings many opportunities and challenges to energy utilities. A strong DER portfolio can help manage costs, improve power quality and make the grid more resilient. But higher DER penetration also has complex business process implications. Traditional distribution management systems are not DER-aware, making it difficult for utilities to consider the full impact of DERs in short-term planning and operations.
Adopting a structured approach to designing, testing and evaluating DER optimization in Distributed Energy Resource Management System (DERMS) software is essential not only to enhance the efficiency, reliability and resiliency of the distribution grid but also to streamline and improve internal operating practices. Following a step-by-step framework and formal methodology can help ensure that the software is delivering quality results.
DER Optimization: Quantify Value
DER Optimization is a critical DERMS function that typically works by running multi-variable Optimal Power Flow (OPF) formulations against the distribution electrical network with the DERs modeled appropriately. Optimization aims to securely meet a forecasted load utilizing DERs while minimizing generation costs and improving power quality. Optimization may also leverage machine learning algorithms or advanced smart search techniques to provide optimal solutions.
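To make the cost-minimization objective concrete, here is a deliberately simplified merit-order dispatch sketch: meet a forecasted load at minimum generation cost by dispatching the cheapest resources first. A real OPF formulation also enforces network constraints (power flow, voltage and thermal limits), which this sketch omits; the resource names, capacities and prices are illustrative assumptions, not vendor data.

```python
def dispatch(load_mw, resources):
    """Dispatch resources in ascending cost order until load is met.

    resources: list of (name, capacity_mw, cost_per_mwh) tuples.
    Returns ({name: dispatched_mw}, total_cost_per_hour).
    """
    plan, cost, remaining = {}, 0.0, load_mw
    for name, cap, price in sorted(resources, key=lambda r: r[2]):
        mw = min(cap, remaining)       # dispatch up to capacity or need
        plan[name] = mw
        cost += mw * price
        remaining -= mw
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError(f"Insufficient capacity: {remaining} MW unserved")
    return plan, cost

# Example: 12 MW feeder load served by solar, a battery, and a diesel peaker.
plan, cost = dispatch(12.0, [
    ("solar", 5.0, 0.0),       # $/MWh marginal cost (assumed)
    ("battery", 4.0, 30.0),
    ("diesel", 10.0, 120.0),
])
# Cheapest resources are exhausted first; the peaker only covers the residual.
```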
Utilities need to determine the value of distributed energy resources for themselves and their customers hosting behind-the-meter DERs. Accurately pricing DER services in conjunction with wholesale and distribution LMPs (Locational Marginal Prices) is beneficial in offsetting periods of high LMPs and generating revenue streams for DER-owning customers. In addition to major drivers such as cost savings and efficiency, utilities may want to assess how renewable resources on the network are supporting capital expenditure deferrals and greenhouse gas/carbon footprint reduction targets.
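As a rough illustration of offsetting high-LMP periods, the sketch below values a behind-the-meter battery dispatch during hours when the LMP exceeds a trigger price. The hourly prices, battery size and threshold are all assumed numbers for illustration only.

```python
def dispatch_value(hourly_lmp, battery_mw, lmp_threshold):
    """Sum the avoided energy cost ($) for hours where LMP exceeds the threshold."""
    return sum(lmp * battery_mw for lmp in hourly_lmp if lmp > lmp_threshold)

# Four afternoon hours with a price spike; 2 MW battery, $100/MWh trigger.
lmps = [45.0, 180.0, 250.0, 95.0]   # $/MWh, hypothetical
value = dispatch_value(lmps, battery_mw=2.0, lmp_threshold=100.0)
```

A real valuation would also account for battery state of charge, round-trip efficiency and any distribution-level price components.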
Current Challenges in Evaluating Performance
Proprietary vendor algorithms make it difficult for utilities to test why a particular optimization solution state was calculated by the software. Even when the optimization software successfully solves OPF formulations, the effectiveness of those solutions depends on several external factors: the availability and accuracy of as-operated network models; forecast, real-time and historical load and generation data; weather data; and other inputs can all influence results.
From a data modeling perspective, utilities need the ability to realistically model regulatory requirements, contractual constraints and consumer behavior such as electric vehicle charging patterns so that test scenarios reflect real-world use cases. Special tooling is required to model and simulate advanced scenarios, especially when modeling a future grid state with scaled-up DER penetration.
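A toy example of such consumer-behavior modeling is the evening-peaked EV charging profile below, which could feed a scenario's load shape. The plug-in-time distribution, charger rating and charge duration are assumptions for illustration, not drawn from any particular utility's data.

```python
import random

def ev_charging_profile(n_vehicles, charger_kw=7.2, seed=42):
    """Return a 24-slot kW profile; most vehicles plug in between 17:00 and 21:00."""
    rng = random.Random(seed)
    profile = [0.0] * 24
    for _ in range(n_vehicles):
        start = rng.choice([17, 18, 18, 19, 19, 20, 21])  # evening-weighted
        for h in range(start, min(start + 4, 24)):         # ~4-hour charge window
            profile[h] += charger_kw
    return profile

profile = ev_charging_profile(200)  # aggregate feeder-level EV load, kW per hour
```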
A Structured Approach to Testing DER Optimization Software
OPF solutions need to be systematically analyzed to make sure results are sensible and meet objectives. The following framework provides the test structure and methodology to ensure that the software is delivering quality results.
1. Develop High-level Operational Goals and Use Cases
Having a well-defined understanding of operational goals is important in ensuring that organizational groups are working to achieve the same end goals. Similarly, use cases that involve DER optimization need to be identified and documented. Some examples:
- Operational Goals: Energy Efficiency, Reliability, Carbon Footprint Reduction
- Use Cases: Day-Ahead Planning, Real-Time Corrections, Offline Analyses
2. Define Optimization Objectives
When evaluating vendors, it is important to ensure that required objectives are supported by the vendor solution. Minimizing costs, losses and violations are examples of optimization objectives. Often, a use case may need to be optimized for multiple objectives, in which case the software should support weighted multi-objective optimization.
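One common form of weighted multi-objective optimization is a weighted-sum score over normalized objectives, sketched below. The objective names, weights and candidate metrics are illustrative; an actual DERMS product would expose its own objective set and weighting scheme.

```python
def weighted_score(metrics, weights):
    """Lower is better. metrics and weights are {objective: value} dicts;
    each metric is assumed pre-normalized to a comparable 0-1 scale."""
    return sum(weights[k] * metrics[k] for k in weights)

# Assumed weighting: cost matters most, then losses, then violations.
weights = {"cost": 0.5, "losses": 0.3, "violations": 0.2}
candidate_a = {"cost": 0.40, "losses": 0.20, "violations": 0.00}
candidate_b = {"cost": 0.30, "losses": 0.25, "violations": 0.50}
best = min([candidate_a, candidate_b], key=lambda m: weighted_score(m, weights))
```

Normalization matters here: without it, an objective measured in dollars would dominate one measured in counts regardless of the chosen weights.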
3. Develop Performance Benchmarks
Solve speed becomes a critical criterion if the use case involves solving for real-time corrections to planned operations in a matter of seconds. Benchmarks may be defined around expected time to solve for different network sizes and expected performance for use cases such as real-time operations versus day-ahead planning.
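A minimal benchmark harness along these lines times a solver call against a per-use-case budget. The budget values and use-case names below are assumptions; a real benchmark suite would sweep multiple network sizes and average repeated runs.

```python
import time

# Assumed solve-time budgets per use case, in seconds.
BUDGETS_SECONDS = {"real_time": 5.0, "day_ahead": 300.0}

def benchmark(solver, use_case, *args):
    """Run solver(*args) once; return (result, elapsed_s, within_budget)."""
    start = time.perf_counter()
    result = solver(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= BUDGETS_SECONDS[use_case]

# Usage with a stand-in solver in place of a real OPF engine:
result, elapsed, ok = benchmark(lambda n: sum(range(n)), "real_time", 10_000)
```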
4. Identify Data Requirements; Collect and Organize Test Data
Collaboration across functional groups within the organization is essential in making relevant data available for tests. This includes historical, real-time and forecasted data from various sources. Data models may vary across utility systems, which can necessitate additional data integration, transformation and cleanup efforts. Distribution system simulation software may be used in cases where required data is not available.
5. Define Test Scenarios
A scenario is specific to a use case and objective. A power systems analyst may define the test scenario and create the input datasets needed to support testing of the scenario. For example:
- Scenario: A high load afternoon causing voltage violations.
- Optimization Objective: Minimize violations.
- Input Data: Network model, feeder loads, asset schedules, LMP price signals, violation cost data.
- Expected Result: Resource dispatch levels and other network settings to reduce the total violation count below threshold.
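A scenario like the one above might be captured as a structured test record so it can be versioned and re-run. The field names, input file names and threshold below are illustrative assumptions about how such a record could look, not a vendor schema.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    objective: str
    inputs: dict = field(default_factory=dict)   # references to input datasets
    violation_threshold: int = 0

    def passed(self, result_violations: int) -> bool:
        """Expected result: total violation count at or below the threshold."""
        return result_violations <= self.violation_threshold

scenario = Scenario(
    name="high-load afternoon voltage violations",
    objective="minimize violations",
    inputs={"network": "feeder_12kv.xml", "lmp": "lmp_day.csv"},  # hypothetical files
    violation_threshold=2,
)
```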
6. Build a Simulation Platform and Test Harness
Creating a simulation platform and test harness framework improves the testing process by enabling quick setup of scenarios and regression testing when changes are made. Specialized tooling can also help provision test data quickly to support testing of scaled-up scenarios. Automated comparison against previously verified base cases can serve as the initial quality check on analysis results.
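The automated base-case comparison can be sketched as a tolerance check on key result quantities. The metric names and the 2% relative tolerance are assumptions; in practice each quantity would carry its own tolerance.

```python
def regression_diff(base, new, rel_tol=0.02):
    """Return the set of quantities deviating more than rel_tol from the base case."""
    drift = set()
    for key, base_val in base.items():
        if base_val == 0:
            if abs(new[key]) > rel_tol:       # absolute check when base is zero
                drift.add(key)
        elif abs(new[key] - base_val) / abs(base_val) > rel_tol:
            drift.add(key)
    return drift

# Hypothetical verified base case vs. a new optimization run:
base = {"total_cost": 480.0, "losses_mwh": 1.8, "violations": 0}
new = {"total_cost": 485.0, "losses_mwh": 2.1, "violations": 0}
drift = regression_diff(base, new)  # quantities flagged for manual review
```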
7. Analyze and Present Results
Manual review by subject matter experts may be required especially when a scenario is being executed for the first time. Iterative refinement of scenarios is performed based on findings. Automation tools play a vital role in performing this iterative process efficiently. Summarizing, visualizing, and reporting results to the larger team keeps relevant stakeholders informed. It is advisable to leverage a collaborative project and task management tool with dashboarding capabilities for this purpose.
DER optimization features offered by DERMS vendors vary in terms of their capabilities and underlying architectures. Evaluating and selecting the right DERMS product is key in ensuring that the integrated distribution management solution delivers value when deployed in the real world.
Nonfunctional requirements play a key role in guaranteeing that the final solution will deliver value to the enterprise. From the perspective of assembling a complete Advanced Distribution Management System (ADMS) + DERMS solution, the integration capabilities of the optimization component are critical in enabling integrated real-world operations with other utility systems. System components need to be able to scale horizontally to accommodate future growth in DER assets, both in number and in type.
Trends over the past decade in energy markets, regulation, public policy and consumer attitudes towards renewables and technology indicate that we are at a transition point in the electric utility industry. Keeping pace with a changing market and operational dynamics requires more robust processes and automation. It is important for utilities to partner with experts offering business and technology consulting services around DERMS and ADMS to guide them through this transitional journey.
At TRC, we work closely with utilities to analyze their specific DER use cases and recommend integrated ADMS + DERMS solutions that fit their needs. Contact us to discuss your unique DER optimization and integration challenges or visit https://www.trccompanies.com/markets/power-and-utilities/digital-grid-solutions/distributed-energy-resources/ to learn more.