
Overview

A Distribution Management System (DMS) is a collection of applications designed to monitor and control the entire distribution network efficiently and reliably. It acts as a decision support system that assists control room and field operating personnel with the monitoring and control of the electric distribution system. The key deliverables of a DMS are improved reliability and quality of service: fewer outages, shorter outage durations, and the maintenance of acceptable frequency and voltage levels.

Most distribution utilities have made comprehensive use of IT solutions through their Outage Management System (OMS), which draws on other systems such as the Customer Information System (CIS), Geographical Information System (GIS) and Interactive Voice Response System (IVRS). An outage management system has a network component/connectivity model of the distribution system. By combining the locations of outage calls from customers with knowledge of the locations of the protection devices (such as circuit breakers) on the network, a rule engine is used to predict the locations of outages. Based on this, restoration activities are charted out and crews are dispatched accordingly. In parallel with this, distribution utilities began to roll out Supervisory Control and Data Acquisition (SCADA) systems, initially only at their higher voltage substations. Over time, use of SCADA has progressively extended downwards to sites at lower voltage levels. A DMS accesses real-time data and presents all information on a single console at the control centre in an integrated manner.

The development of DMSs varied across geographic territories. In the USA, for example, DMSs typically grew by taking Outage Management Systems to the next level, automating complete switching sequences and providing an end-to-end, integrated view of the entire distribution network. In the UK, by contrast, the much denser and more meshed network topologies, combined with stronger Health & Safety regulation, led to early centralisation of high-voltage switching operations, initially using paper records and schematic diagrams printed onto large wallboards which were 'dressed' with magnetic symbols to show the current running states. There, DMSs grew initially from SCADA systems as these were expanded to allow the centralised control and safety management procedures to be managed electronically. These DMSs required even more detailed component/connectivity models and schematics than those needed by early OMSs, as every possible isolation and earthing point on the networks had to be included. In territories such as the UK, therefore, the network component/connectivity models were usually developed in the DMS first, whereas in the USA they were generally built in the GIS. The typical data flow in a DMS involves the SCADA system, the Information Storage & Retrieval (ISR) system, Communication (COM) servers, Front-End Processors (FEPs) and Field Remote Terminal Units (FRTUs).

Why DMS?

Reduce the duration of outages
- Improve the speed and accuracy of outage predictions.
- Reduce crew patrol and drive times through improved outage locating.

Improve operational efficiency
- Determine the crew resources necessary to achieve restoration objectives.
- Effectively utilize resources between operating regions.
- Determine when best to schedule mutual aid crews.

Increase customer satisfaction
- A DMS incorporates IVR and other mobile technologies, through which outage communication for customer calls is improved.
- Provide customers with more accurate estimated restoration times.
- Improve service reliability by tracking all customers affected by an outage, determining the electrical configuration of every device on every feeder, and compiling details about each restoration process.

DMS Functions

In order to support proper decision making and O&M activities, a DMS solution has to support the following functions:

- Network visualization & support tools
- Applications for analytical & remedial action
- Utility planning tools
- System protection schemes

The various sub-functions carried out by the DMS are described below.

Network Connectivity Analysis (NCA)

A distribution network usually covers a large area, catering power to different customers at different voltage levels, so locating a required source or load on a large GIS/operator interface is often very difficult. The panning and zooming provided by a normal SCADA system GUI does not cover this operational requirement exactly. Network connectivity analysis is an operator-specific function that helps the operator identify or locate the preferred network or component easily. NCA performs the required analyses and displays the feed point of the various network loads. Based on the status of all the switching devices, such as circuit breakers (CBs), Ring Main Units (RMUs) and/or isolators, that affect the topology of the modelled network, the prevailing network topology is determined. NCA further helps the operator to know the operating state of the distribution network, indicating radial mode, loops and parallels in the network.
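As a minimal sketch of the underlying idea (not the implementation of any particular DMS product), the following Python fragment traces energisation from source nodes through closed switching devices to determine which loads are fed and from where. The node names, device names and statuses are invented for the example.

```python
# Minimal sketch of network connectivity analysis: trace energisation from
# source nodes through closed switching devices (all names are illustrative).
from collections import deque

# Each edge is (node_a, node_b, device, status); only 'closed' devices conduct.
edges = [
    ("GRID", "BUS1", "CB-01", "closed"),
    ("BUS1", "BUS2", "RMU-07", "closed"),
    ("BUS2", "LOAD-A", "ISO-12", "open"),
    ("BUS1", "LOAD-B", "ISO-15", "closed"),
]
sources = {"GRID"}

def trace_energised(edges, sources):
    """Breadth-first trace over closed devices; returns {node: feeding source}."""
    adjacency = {}
    for a, b, _device, status in edges:
        if status == "closed":
            adjacency.setdefault(a, []).append(b)
            adjacency.setdefault(b, []).append(a)
    fed_from = {}
    for src in sources:
        queue = deque([src])
        fed_from[src] = src
        while queue:
            node = queue.popleft()
            for neighbour in adjacency.get(node, []):
                if neighbour not in fed_from:
                    fed_from[neighbour] = src
                    queue.append(neighbour)
    return fed_from

print(trace_energised(edges, sources))
# LOAD-B is traced back to GRID; LOAD-A does not appear because ISO-12 is open,
# i.e. it is reported as de-energised.
```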

Switching Schedule & Safety Management

In territories such as the UK, a core function of a DMS has always been to support safe switching and work on the networks. Control engineers prepare switching schedules to isolate and make safe a section of network before work is carried out, and the DMS validates these schedules using its network model. Switching schedules can combine telecontrolled and manual (on-site) switching operations. When the required section has been made safe, the DMS allows a Permit To Work (PTW) document to be issued. When the work has been finished and the permit cancelled, the switching schedule then facilitates restoration of the normal running arrangements. Switching components can also be tagged to reflect any operational restrictions that are in force. The network component/connectivity model, and the associated diagrams, must always be kept absolutely up to date. The switching schedule facility therefore also allows 'patches' to the network model to be applied to the live version at the appropriate stage(s) of a job. The term 'patch' is derived from the method previously used to maintain the wallboard diagrams.

State Estimation (SE)

The state estimator is an integral part of the overall monitoring and control systems for transmission networks. It is mainly aimed at providing a reliable estimate of the system voltages. Information from the state estimator flows to control centres and database servers across the network. The variables of interest are indicative of parameters such as margins to operating limits, the health of equipment and required operator actions. State estimators allow these variables to be calculated with high confidence despite the fact that the measurements may be corrupted by noise, or may be missing or inaccurate. Even though the state may not be directly observable, it can be inferred from a scan of measurements which are assumed to be synchronised. The algorithms need to allow for the fact that the presence of noise might skew the measurements. In a typical power system, the state is quasi-static: the time constants are sufficiently fast that the system dynamics decay away quickly relative to the measurement frequency, so the system appears to progress through a sequence of static states driven by parameters such as changes in the load profile. The outputs of the state estimator can be fed to other applications such as Load Flow Analysis and Contingency Analysis.
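The classical formulation behind most state estimators is weighted least squares (WLS). The Python sketch below shows the WLS calculation on a toy linear measurement model with made-up numbers; a real power-system estimator uses a nonlinear measurement model solved iteratively, but the way redundant, noisy measurements are weighted and combined is the same in spirit.

```python
# Weighted least squares (WLS) estimation sketch: x_hat = (H^T W H)^-1 H^T W z.
# H, W and z below are illustrative; a real estimator builds H from the network
# model and linearises around the current operating point at each iteration.
import numpy as np

H = np.array([[1.0,  0.0],    # each row maps the 2-element state vector
              [0.0,  1.0],    # to one telemetered measurement
              [1.0, -1.0]])
z = np.array([1.02, 0.97, 0.06])                       # noisy measurements
W = np.diag([1 / 0.01**2, 1 / 0.01**2, 1 / 0.02**2])   # weights = 1 / sigma^2

G = H.T @ W @ H                          # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)  # estimated state
residuals = z - H @ x_hat                # basis for bad-data detection

print("estimated state:", x_hat)
print("residuals:", residuals)
```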

Load Flow Applications (LFA)

A load flow study is an important tool involving numerical analysis applied to a power system. The load flow study usually uses simplified notation such as a single-line diagram, focuses on various forms of AC power rather than voltage and current, and analyses the power system in normal steady-state operation. The goal of a power flow study is to obtain complete voltage angle and magnitude information for each bus in a power system, for specified load and generator real power and voltage conditions. Once this information is known, the real and reactive power flow on each branch, as well as the generator reactive power output, can be determined analytically. Due to the nonlinear nature of the problem, numerical methods are employed to obtain a solution within an acceptable tolerance. The load model needs to calculate loads automatically to match telemetered or forecast feeder currents; it utilises customer type, load profiles and other information to distribute the load properly to each individual distribution transformer. Load flow (or power flow) studies are important for planning the future expansion of power systems as well as for determining the best operation of existing systems.
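To illustrate the numerical solution process, the Python sketch below runs a Gauss-Seidel power flow on an invented two-bus system (one slack bus and one PQ load bus connected by a single line). Production load-flow engines work on full network models and typically use faster methods such as Newton-Raphson; all values here are assumed per-unit figures chosen for the example.

```python
# Gauss-Seidel load flow for a two-bus system (all quantities in per unit).
# Bus 1: slack, V1 = 1.0 at 0 degrees.  Bus 2: PQ load fed through one line.
import numpy as np

V1 = 1.0 + 0.0j                   # slack bus voltage
P2, Q2 = 0.8, 0.4                 # load drawn at bus 2
S2 = -(P2 + 1j * Q2)              # net injected power at bus 2 (load => negative)
y12 = 1.0 / (0.02 + 0.08j)        # series admittance of the single line
Y21, Y22 = -y12, y12              # relevant entries of the bus admittance matrix

V2 = 1.0 + 0.0j                   # flat start
for _ in range(100):
    # Standard Gauss-Seidel update for a PQ bus.
    V2_new = (np.conj(S2) / np.conj(V2) - Y21 * V1) / Y22
    if abs(V2_new - V2) < 1e-8:
        V2 = V2_new
        break
    V2 = V2_new

S_slack = V1 * np.conj((V1 - V2) * y12)   # complex power leaving the slack bus
print(f"V2 = {abs(V2):.4f} pu at {np.degrees(np.angle(V2)):.2f} deg")
print(f"slack injection = {S_slack.real:.3f} + j{S_slack.imag:.3f} pu")
```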

Volt-VAR Control (VVC)

Volt-VAR Control (VVC) refers to the process of managing voltage levels and reactive power (VAR) throughout the power distribution system. Loads may contain reactive components such as capacitors and inductors (for example electric motors) that put additional strain on the grid, because the reactive portion of these loads causes them to draw more current than an otherwise comparable resistive load would. The extra current results in over-voltage or under-voltage violations as well as heating of equipment such as transformers and conductors, which might even need resizing to carry the total current. A power system needs to control this by managing the production, absorption and flow of reactive power at all levels in the system. A VVC application helps the operator to mitigate such conditions by suggesting the required action plans. The plan gives the required tap positions and capacitor switching to keep voltages within their limits and thus optimise the Volt-VAR control function for the utility.

Load Shedding Application (LSA)

Electric distribution systems have long stretches of line, multiple injection points and fluctuating consumer demand. These features make them inherently vulnerable to instabilities or unpredicted system conditions that may lead to critical failure. Instability usually arises from power system oscillations due to faults, peak deficits or protection failures. Distribution load shedding and restoration schemes therefore play a vital role in emergency operation and control in any utility. An automated load shedding application detects predetermined trigger conditions in the distribution network and performs predefined sets of control actions, such as opening or closing non-critical feeders, reconfiguring downstream distribution or sources of injection, or performing tap control at a transformer. When a distribution network is complex and covers a large area, emergency actions taken downstream may reduce the burden on upstream portions of the network. In a non-automated system, awareness and manual operator intervention play a key role in trouble mitigation; if troubles are not addressed quickly enough, they can cascade and cause a major catastrophic failure. A DMS needs to provide a modular, automated load shedding and restoration application which automates the emergency operation and control requirements of the utility. The application should cover activities such as Under Frequency Load Shedding (UFLS), limit-violation-based and time-of-day-based load shedding schemes, which are usually performed by the operator.
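As a simplified illustration of how predetermined trigger conditions can be mapped to predefined control actions, the Python sketch below evaluates a small under-frequency load shedding table. The stage thresholds, time points and feeder identifiers are invented for the example; a real scheme also applies time delays and restoration logic.

```python
# Under-frequency load shedding (UFLS) sketch: each stage sheds a predefined
# group of non-critical feeders once the frequency falls below its threshold.
# Thresholds and feeder identifiers are illustrative only.
UFLS_STAGES = [
    {"stage": 1, "threshold_hz": 49.2, "feeders": ["FDR-11", "FDR-14"]},
    {"stage": 2, "threshold_hz": 48.8, "feeders": ["FDR-21"]},
    {"stage": 3, "threshold_hz": 48.4, "feeders": ["FDR-31", "FDR-32"]},
]

def evaluate_ufls(frequency_hz, already_shed):
    """Return the list of feeders to open for the measured system frequency."""
    to_open = []
    for stage in UFLS_STAGES:
        if frequency_hz <= stage["threshold_hz"]:
            for feeder in stage["feeders"]:
                if feeder not in already_shed:
                    to_open.append(feeder)
    return to_open

shed = set()
for f in (49.6, 49.1, 48.7):          # a falling frequency trajectory
    actions = evaluate_ufls(f, shed)
    shed.update(actions)
    print(f"f = {f:.1f} Hz -> open {actions or 'nothing'}")
```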

Fault Management & System Restoration (FMSR)

Reliability and quality of power supply are key parameters which need to be ensured by any utility. Reducing the duration of outages to customers improves the overall utility reliability indices, so FMSR, or automated switching applications, play an important role. The two main features required of an FMSR application are switching management and a suggested switching plan. The DMS receives fault information from the SCADA system and processes it to identify the fault; on running the switching management application, the results are converted into action plans. An action plan includes switching ON/OFF the automatic load break switches, RMUs and sectionalizers. The action plan can be verified in a study mode provided by the function, and the switching management can be manual or automatic depending on the configuration.

Load Balancing via Feeder Reconfiguration (LBFR)

Load balancing via feeder reconfiguration is an essential application for utilities that have multiple feeders feeding a load-congested area. To balance the loads on a network, the operator reroutes loads to other parts of the network. A Feeder Load Management (FLM) function allows the operator to manage energy delivery in the electric distribution system and identify problem areas. It monitors the vital signs of the distribution system and identifies areas of concern so that the distribution operator is forewarned and can efficiently focus attention where it is most needed. This allows more rapid correction of existing problems and enables problem avoidance, leading to both improved reliability and better energy delivery performance. On a similar note, feeder reconfiguration is also used for loss minimisation. Due to several network and operational constraints, a utility network may be operated close to its maximum capability without the consequences in terms of losses being known; the resulting energy and revenue losses should be minimised for effective operation. The DMS uses the switching management application for this: the loss-minimisation problem is solved by an optimal power flow algorithm and switching plans are created in a manner similar to the function above.

Distribution Load Forecasting (DLF)

Distribution Load Forecasting (DLF) provides a structured interface for creating, managing and analysing load forecasts. Accurate models for electric power load forecasting are essential to the operation and planning of a utility company. DLF helps an electric utility to make important decisions, including decisions on purchasing electric power, load switching and infrastructure development.

Load forecasting is classified in terms of the planning horizon: short-term load forecasting or STLF (up to 1 day), medium-term load forecasting or MTLF (1 day to 1 year), and long-term load forecasting or LTLF (1-10 years). To forecast load precisely throughout a year, various external factors including weather, solar radiation, population, per capita gross domestic product, seasons and holidays need to be considered. For example, in the winter season an average wind chill factor could be added as an explanatory variable in addition to those used in the summer model. In transitional seasons such as spring and fall, a transformation technique can be used. For holidays, a holiday-effect load can be deducted from the normal load to estimate the actual holiday load better. Various predictive models have been developed for load forecasting based on techniques such as multiple regression, exponential smoothing, iteratively reweighted least squares, adaptive load forecasting, stochastic time series, fuzzy logic, neural networks and knowledge-based expert systems. Amongst these, the most popular STLF approaches have been stochastic time series models such as the autoregressive (AR), autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models, together with models using fuzzy logic and neural networks. DLF provides data aggregation and forecasting capabilities that are configured to address today's requirements, can adapt to address future requirements, and should be capable of producing repeatable and accurate forecasts.
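As a concrete, simplified illustration of the stochastic time series approach, the Python sketch below fits an autoregressive (AR) model to a synthetic hourly load series by ordinary least squares and rolls it forward to produce a one-day-ahead forecast. The data, lag order and the absence of weather and calendar variables are simplifications assumed for the example.

```python
# Short-term load forecast sketch: fit an AR(p) model
#   y_t = c + a_1*y_{t-p} + ... + a_p*y_{t-1}
# by least squares on a synthetic series, then roll it forward 24 hours.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(24 * 14)                               # two weeks of hourly points
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

p = 24                                               # look one day back
X = np.column_stack([load[i:len(load) - p + i] for i in range(p)])
X = np.column_stack([np.ones(X.shape[0]), X])        # add intercept column
y = load[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)         # fitted AR coefficients

history = list(load)
forecast = []
for _ in range(24):                                  # forecast the next day
    lags = history[-p:]                              # oldest-first, matching X
    y_hat = coef[0] + float(np.dot(coef[1:], lags))
    forecast.append(y_hat)
    history.append(y_hat)                            # feed the forecast back in

print([round(v, 1) for v in forecast[:6]])
```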

Standards Based Integration

In any integrated energy delivery utility operating model, there are different functional modules, such as GIS, billing & metering solutions, ERP and asset management systems, that operate in parallel and support routine operations. Quite often, each of these functional modules needs to exchange periodic or real-time data with the others for assessing the present operating condition of the network, workflows and resources (such as crews and assets). Unlike other power system segments, the distribution system changes or grows every day, whether through the addition of a new consumer, a new line or the replacement of equipment. If the different functional modules operate in a non-standard environment and use custom APIs and database interfaces, the engineering effort required to manage them becomes too large. It soon becomes difficult to manage the growing changes and additions, which can render the system integrations non-functional. The utility then cannot gain the full benefit of the functional modules, and in some cases the systems may even need to be migrated to suitable environments at very high cost. As these problems came to light, various standardisation processes for inter-application data exchange were initiated. It was understood that standards-based integration eases integration with other functional modules and also improves operational performance. It ensures that the utility remains in a vendor-neutral environment for future expansion, which in turn means that the utility can easily add new functional modules on top of existing functionality and push or pull data effectively without needing new interface adapters.

IEC 61968 Standards Based Integration

IEC 61968 is a standard developed by Working Group 14 of Technical Committee 57 of the IEC, and it defines standards for information exchange between electrical distribution system applications. It is intended to support the inter-application integration of a utility enterprise that needs to collect data from different applications, whether new or legacy. As per IEC 61968, a DMS encapsulates capabilities such as monitoring and control of equipment for power delivery, management processes to ensure system reliability, voltage management, demand-side management, outage management, work management, automated mapping and facilities management. The crux of the IEC 61968 standards is the Interface Reference Model (IRM), which defines standard interfaces for each class of applications. Abstract (logical) components are listed to represent concrete (physical) applications. For example, a business function such as Network Operation (NO) may be represented by business sub-functions such as Network Operation Monitoring (NMON), which in turn is represented by abstract components such as substation state supervision, network state supervision and alarm supervision. IEC 61968 recommends that the system interfaces of a compliant utility inter-application infrastructure be defined using the Unified Modelling Language (UML). UML includes a set of graphic notation techniques that can be used to create visual models of object-oriented, software-intensive systems. The IEC 61968 series of standards extends the Common Information Model (CIM), which is currently maintained as a UML model, to meet the needs of electrical distribution. For structured document interchange, particularly on the Internet, the data format used can be the Extensible Markup Language (XML). One of its primary uses is information exchange between different and potentially incompatible computer systems, which makes XML well suited to the domain of system interfaces for distribution management. XML is used to format the message payloads so that they can be carried over various messaging transports such as SOAP (Simple Object Access Protocol).
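To give a flavour of what such an XML payload can look like, the short Python sketch below assembles and serialises a simplified message with a header (verb, noun, timestamp) and a payload section. The element names, attribute names and identifiers are illustrative only and are not taken verbatim from the IEC 61968 schemas.

```python
# Sketch of building an XML message payload with a CIM-style envelope.
# Element and attribute names are illustrative, not normative IEC 61968 tags.
import xml.etree.ElementTree as ET

message = ET.Element("Message")
header = ET.SubElement(message, "Header")
ET.SubElement(header, "Verb").text = "created"
ET.SubElement(header, "Noun").text = "OutageRecord"
ET.SubElement(header, "Timestamp").text = "2013-06-01T10:15:00Z"

payload = ET.SubElement(message, "Payload")
outage = ET.SubElement(payload, "OutageRecord", attrib={"mRID": "OUT-000123"})
ET.SubElement(outage, "AffectedFeeder").text = "FDR-11"
ET.SubElement(outage, "CustomersAffected").text = "154"

# Serialise to a string that can be handed to a SOAP client or message broker.
xml_text = ET.tostring(message, encoding="unicode")
print(xml_text)
```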
