
Monitoring Power Quality Beyond EN50160 and IEC 61000-4-30

posted Apr 26, 2010, 11:33 AM by Power Quality Doctor

Abstract — Power Quality monitoring has become a standard task in electrical network management.

The standards currently in place provide minimum requirements, since their aim is to create a level playing field that allows analyzers from different manufacturers to give the same results. This is a good idea in concept, but it is also a double-edged sword. Manufacturers design their products to comply with these standards, but they typically do not provide data and measurements that would allow power quality analysis to go beyond current capabilities. To follow the guidelines set out by the various standards and record faults or disturbances, today's meters rely solely on event-based triggers. While this method provides engineers with some information regarding an event, it does not allow full analysis of all power parameters leading up to an event, during the event, or of how the overall network recovers afterwards. Further, due to limitations in memory storage, it is likely that such recording methods will miss some of the 'true' power and energy parameters. In a majority of cases, these limitations prevent power quality phenomena from being truly solved and prevent solutions that would eliminate future recurrence.

This paper highlights genuine case studies of Power Quality troubleshooting in which measurements taken simply to comply with the standards were not capable of solving the Power Quality problem. It further shows that by providing engineers with data beyond the standards, an unprecedented number of Power Quality events can not only be captured but also definitively solved.

I. Introduction

The main objectives for power quality monitoring are as follows:

Power Quality Statistics

Measuring power quality conditions in general, mainly to analyze the overall power quality performance of an electrical system. In many cases this is monitored for facility distribution networks, large regions, or a utility as a whole.

Power Quality Contracts

Customers who are sensitive to power quality may have a specific electrical power contract that outlines the minimum acceptable power quality level to be supplied by the utility.

Power Quality Troubleshooting

Analysis of power quality events, usually close to a problematic load or customer. The analysis may be driven by a power quality failure, but it is preferable for it to be driven by continuous monitoring that can detect potential problems.

It is relatively obvious that power quality troubleshooting is the first stage, hopefully followed by some kind of corrective action – something that can or should be done in the network to improve the situation and prevent recurrence of the failure. However, power quality statistics and contracts may also be followed by corrective action if the minimum power quality level is not achieved.

While there can never be too much information available for troubleshooting, many papers on this topic discuss what additional information should be added to the existing guidelines for power quality statistics and power quality contracts.

II. Existing Standards and Trends

The two most common power quality standards in use today are IEC 61000-4-30 [1] and EN 50160 [2].

IEC 61000-4-30 provides measurement methods, describes measurement formulas, sets accuracy levels, and defines aggregation periods. The main motivation for this standard is to provide common requirements for measurement devices, ensuring that analyzers from different manufacturers give the same results.

EN 50160 provides recommended levels for different power quality parameters, including the percentage of time during which the levels should be kept (e.g., keeping voltage flicker within limits for 95% of each week).
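The percentage-of-time criterion can be made concrete with a short sketch. The function name and the synthetic data below are illustrative assumptions, not part of any standard tooling; a real EN 50160 assessment uses a full week of valid 10-minute intervals:

```python
# Sketch: EN 50160-style compliance check on 10-minute mean RMS voltages.
# Illustrative only; a real assessment covers a full week of valid intervals.

def en50160_voltage_compliance(averages, nominal=230.0, tol=0.10):
    """Return the fraction of 10-minute averages within nominal +/- tol."""
    lo, hi = nominal * (1 - tol), nominal * (1 + tol)
    within = sum(1 for v in averages if lo <= v <= hi)
    return within / len(averages)

# A week of 10-minute intervals is 1008 values; here a short synthetic run.
week = [230.0] * 95 + [200.0] * 5             # 5% of intervals out of band
share = en50160_voltage_compliance(week)
print(f"{share:.0%} of intervals compliant")  # -> 95% of intervals compliant
```

At 95% this synthetic site would just meet the EN 50160 95%-of-time requirement, which is exactly the kind of result this paper argues can mask real problems.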

Various papers have discussed the limitations of these current standards, such as KEMA and Leonardo Energy's "Regulation of Voltage Quality" [3] or the ERGEG (European Regulators' Group for Electricity and Gas) "Towards Voltage Quality Regulation in Europe" [4].

The main concerns about the existing standards are:

  • Time aggregations that hide some of the power quality issues
  • Limiting the values for only a portion of the time
  • Limiting the power quality variables to voltage quality only
  • No means of identifying the contribution of each side (source and user) to the power quality

To combat these limitations, several countries are modifying the IEC and other standards in an attempt to tighten Power Quality requirements and improve network power quality. For example, NVE (Norwegian Water Resources and Energy Directorate) has begun to enforce stricter Power Quality standards in Norway [5]. The NVE standard reduces the averaging period from 10 minutes to 1 minute and requires 100% compliance with the maximum levels (compared to 95% in EN 50160). Other regulatory agencies, such as in Hungary, have reduced the averaging period to 3 seconds. From ERGEG (p. 13): "Using a 10-minute average may give satisfactory protection for thermal phenomena, but are not sufficient to protect against equipment damage or failure." Whilst these changes are significant steps that assist statistical data analysis and raise the requirements for information, they still do not require measurement equipment to provide the full picture needed to completely understand and solve power quality and fault phenomena.

III. New Analysis Concepts

A.    Introduction

Standards reflect existing technology capabilities. On the one hand, they do not specify unreachable requirements; on the other, they try to urge the development of new technologies that will drive and necessitate improvement.

There are 4 generations of power meters:

1st Generation – pure online meters, either analog or digital, which provide the current readings without any logging

2nd Generation – data loggers, either paper-based or paper‑less, which provide periodic data recording

3rd Generation – power quality analyzers – provide logging of selective data based on events

4th Generation – endless logging power quality analyzers – allow continuous logging of all raw data

The only way to achieve full comprehension of power quality and fault phenomena, along with their impact throughout an electrical network, is to record all power and energy parameters continuously, without relying on triggers or event-based recording protocols. A compression technology for the raw voltage and current waveform data has been developed that achieves a typical 1000:1 compression ratio, reducing the disk space required both in the analyzer/meter and in the computer, and easing communication requirements. This allows continuous logging of all power quality and energy information without specifying thresholds or selecting parameters to be measured. Since the compression technology stores raw waveform data, all power quality and energy parameters are calculated in post-processing. This concept is anticipated in IEC 61000-4-30 (p. 78): "Raw un-aggregated samples are the most useful for trouble-shooting, as they permit any type of post-processing that may be desired".
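The paper does not disclose the compression algorithm itself, but the intuition behind such high ratios on AC waveforms is that consecutive cycles are nearly identical, so most of the raw data is redundant. The toy sketch below – explicitly not the vendor's method, just an illustration of the underlying idea – stores one reference cycle and keeps only the residuals of later cycles:

```python
import math

# Toy illustration of why periodic waveforms compress well: keep one
# reference cycle, then store only each later cycle's residual against it.
# For steady-state cycles the residuals are ~zero and cost almost nothing.
# This is NOT the proprietary algorithm described in the paper.

SAMPLES_PER_CYCLE = 64

def make_cycle(amplitude=325.0):
    """One cycle of a 230 V RMS (325 V peak) sine, sampled 64 times."""
    return [amplitude * math.sin(2 * math.pi * n / SAMPLES_PER_CYCLE)
            for n in range(SAMPLES_PER_CYCLE)]

reference = make_cycle()
cycles = [make_cycle() for _ in range(10)]   # steady state: identical cycles

residuals = [[s - r for s, r in zip(c, reference)] for c in cycles]
max_residual = max(abs(x) for res in residuals for x in res)
print(f"largest residual: {max_residual:.6f} V")   # 0 for unchanged cycles
```

Only when a cycle deviates (a dip, a transient) does its residual carry information, which is why raw-waveform storage can remain compact while still capturing every event.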

The following examples are taken from different sites throughout the world utilizing the compression technology described above. All the figures (except Figure 12 and Figure 14) show data from real sites equipped with continuous-logging power quality analyzers. Each example represents a separate benefit of using such continuous-logging technology.

B.    EN 50160 Compliance

Figure 1 shows compliance with the EN 50160 standard at the main service of an industrial customer. The supply is 22 kV fed through two transformers that serve a large number of motors on the site. The customer complained that poor power quality supplied by the local utility caused significant monetary damage to their equipment. As shown, the utility power is in compliance with EN 50160, with no interruptions, variations, unbalances, etc. The only parameter that is not 100% compliant is voltage dips, but its 98.1% compliance still exceeds the required 95%. Whilst the power remained 'in compliance', meters and recorders that simply record the minimal parameters required by the standard(s) cannot provide the information needed to definitively solve power quality and fault phenomena. Plant equipment and electrical distribution networks still suffer production and delivery interruptions and failures even when in full compliance with existing standards. The key is to provide Power Quality engineers with full information that enables them to see faults and disturbances that are seemingly outside the current guidelines, yet cause significant failures and cost to all parties. Moreover, measurements taken to comply with the standards do not make it clear who is responsible for the dips. Full compliance with EN 50160 gave no indication of power anomalies within the site; while experiencing unexplained production interruptions and equipment failures, the customer received a misleading status according to the standards.

Figure 1: Compliance with EN 50160 at Industrial Customer's Main Service

C.    All parameters

One of the problems of EN 50160 is that it requires measurement of voltages only. IEC 61000-4-30 recommends adding currents as well ("having current signatures as well greatly increases the range and the precision of statements that can be made about a power quality event", p. 81). When all of the raw waveform data is logged continuously, all power quality and energy parameters are calculated in post-processing, so every parameter can be examined in order to understand events. On Delta connections the measurements are typically limited to the line-to-line voltages only, as required by EN 50160 and others. However, this hides some phenomena. The event shown in Figure 2 is a short circuit between the blue phase and ground. On the line-to-line voltage profile (the upper graph) it is barely noticeable – well below the standard 10% threshold required to record it as an event. The outcome is that a potentially damaging event would not even be recorded, let alone analyzed. Such an event could damage any piece of electrical equipment connected to this network, since it suffers overvoltage from phase to ground (in Delta networks, the analyzer's neutral input channel should be connected to the protective ground).

 

Figure 2: Line-to-Ground Event

Another example of the importance of using line-to-ground measurement in Delta networks is explained in Figure 3 through Figure 6.

Figure 3 shows a line-to-line event. This is useful information; however, the essence of power quality analysis is identifying the source(s) of failures. Figure 4 zooms out from this event to a total of 1 second (showing more time-based information than many analyzers can record for all logged events, even with enhanced memory). This view reveals that something was wrong both before and after the event. Figure 5 adds the line-to-neutral voltages and reveals the source: a short circuit on the red phase raised the potential between each of the other two phases and ground, which resulted in a breakdown on the blue phase. The result is shown in Figure 3 as a sag on L3-L1, but the source of the problem is a ground fault on phase L1, likely caused by a defective insulator or foreign material. Adding the current (Figure 6) explains the aftershock event – a voltage drop resulting from the simultaneous reconnection of many loads that were disconnected during the main event.

These examples show the additional benefit of adding line-to-ground voltages on Delta networks. Other parameters that help analysis include harmonics (for example, voltage dips caused by resonance) and frequency (see the example in the next section).

Figure 3: Line-to-Line Voltages

Figure 4: Line-to-Line Voltage Zoom Out Showing Two Collateral Events

Figure 5: Line-to-Line plus Line-to-Neutral Voltages

Figure 6: Adding Currents

D.    Continuous logging

The common practice is to use event-based logging as the foundation for any power quality analysis. IEC 61000-4-30 even specifies that pre-trigger information of typically 1/4 of the graph should be included in the event. Figure 7 shows a voltage dip recorded, together with the local utility, at the main service of a large refrigeration factory. Based on the event-logging concept, it shows 16 cycles (a common default recording length). In addition to the standard voltage logging, it also shows the currents during the event. Since there is a current increase during the voltage drop, the rule of thumb for analysis is that the event was caused by the downstream user.
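That rule of thumb can be stated as a tiny classifier. The helper below is hypothetical, for illustration only – and, as the rest of this section shows, the heuristic can be wrong without wider context:

```python
# Sketch of the rule of thumb described above: during a voltage dip,
# rising current at the metering point suggests a downstream cause,
# while falling current suggests the dip came from upstream.
# Illustrative helper only; the paper shows this heuristic can mislead.

def dip_source_heuristic(v_before, v_during, i_before, i_during):
    assert v_during < v_before, "expects a voltage dip"
    return "downstream" if i_during > i_before else "upstream"

print(dip_source_heuristic(230.0, 180.0, 100.0, 160.0))  # -> downstream
print(dip_source_heuristic(230.0, 180.0, 100.0, 60.0))   # -> upstream
```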

Using the data compression technology, it is possible to continuously store all electrical information. Figure 8 shows a larger view of the same event (approx. 7 seconds – more than 300 continuous cycles). In addition, it shows the frequency during the event.

Figure 7: Voltage Dip Event – 16 Cycles

Figure 8: Voltage Dip Event – Zoom Out

Frequency is the result of the balance between generation and demand, and it is one of the most important parameters for controlling generation power. When generation exceeds demand the frequency increases, and when generation falls short of demand it decreases. As shown on the graph, 1 second after the event the frequency started to increase, indicating that generation was higher than demand. There are two possible reasons for this: (1) a problem in the generation caused it to increase generation power, or (2) the demand was reduced significantly and almost instantaneously, creating over-generation. What apparently happened is that the dip covered a large geographical area and caused many loads to stop, and consequently the demand to drop. Contrary to the previous conclusion, this proves that the source of the dip was regional, placing responsibility for the event with the utility.
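The generation/demand reasoning above can be sketched as a simple frequency-trend check. The function name, the 0.01 Hz dead band, and the sample values are all illustrative assumptions:

```python
# Sketch: infer the generation/demand balance from the frequency trend,
# as in the analysis above. The dead band and values are illustrative.

def balance_from_frequency(freq_hz, nominal=50.0, band=0.01):
    """Classify the balance from the mean frequency deviation (in Hz)."""
    dev = sum(freq_hz) / len(freq_hz) - nominal
    if dev > band:
        return "generation exceeds demand"   # frequency drifting upward
    if dev < -band:
        return "demand exceeds generation"   # frequency drifting downward
    return "balanced"

# After the dip, many loads tripped and the frequency drifted upward:
post_event = [50.00, 50.02, 50.05, 50.08, 50.10]
print(balance_from_frequency(post_event))    # -> generation exceeds demand
```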

What would be seen if we looked at this event on an even larger scale? Figure 9 shows a quarter of an hour of data. The frequency change can be clearly seen, along with other current peaks that occurred before the dip. One might assume that the current peaks caused the problem, followed by a regional collapse of the grid. Figure 10 shows approximately one and a half hours of continuous data (the displayed RMS values are calculated from data stored at 512 samples per cycle, for a total of more than 100 million samples used in the analysis of this single event). The current peaks appear before, during, and after the event, and they are typical of this site. It was mere coincidence that a current peak occurred at the same time as the voltage dip. Moreover, the drop in voltage caused that current peak to be smaller than the others.

Figure 9: Voltage Dip Event – 2nd Zoom Out

Figure 10: Voltage Dip Event – 3rd Zoom Out

Figure 11 shows time-synchronized data of the voltage, current, and frequency at two other locations, 106 km (66 miles) from each other and 62 km / 54 km (38 / 34 miles) from the original site. The voltage and frequency graphs, together with the distances, confirm that this was indeed a large-scale event.

The nature of rules of thumb is that they are right in most cases, but not in all. One of the problems in analysis is the certainty of the conclusion: if one cannot be absolutely sure about a conclusion, it may not be sufficient for damage claims or for investment in preventative measures.

Figure 11: Same Event – Other Locations

E.    Rapid Parameter Monitoring

In order to overcome data storage capacity and processing power limitations, the standards recommend averaging periods for different parameters. While averaging requires fewer resources from the analyzer manufacturer and less storage space on the host computer, it hides a large amount of vital power quality information. The advantages of more stable data and of having similar results on different analyzers are given priority over the ability to understand the network and the propagation of events.

An example of the advantages of faster measurement is shown in Figure 12. This example is taken from a paper by SINTEF Energy Research, Norway, which discusses the advantages of rapid parameter monitoring. Using 10-minute averages, the voltage is below 207 V (nominal 230 V minus 10%) 3.5% of the time, while using 1-minute averages it is below 207 V 28% of the time.

Figure 12: 1-Minute vs. 10-Minute Averaging

The above example graphically characterizes the stark differences in results obtained with different averaging periods. However, in order to fully understand the cause of such a problem and provide a solution, it is essential to monitor the cycle-by-cycle RMS voltage. The new data compression technology allows storage of every cycle for all parameters – from voltages and currents to power and harmonics. The example in Figure 13 is from a spot-welding factory in Germany. It depicts the different results obtained when monitoring RMS values cycle-by-cycle versus the averaging technique per IEC 61000-4-30. When monitoring cycle-by-cycle there are 5 distinct voltage dips, while with 10-cycle averaging the result is only one longer dip. Moreover, when calculating according to the standard's fixed 10-cycle window rather than a sliding window, the value changes during the dip are smaller and, more importantly, the peak values of both the voltage and the current are smaller. The results changed from 5 dips of 12 cycles each with drops of more than 20 volts, to a single 60-cycle dip of only 13 volts.

Figure 13: Cycle-by-Cycle Measurements
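The way fixed-window averaging merges several short, deep dips into one long, shallow one can be reproduced numerically. The sketch below uses illustrative numbers, not the actual site data behind Figure 13:

```python
# Sketch: how fixed 10-cycle averaging can merge several short, deep dips
# into a single long, shallow one. Numbers are illustrative, not site data.

NOMINAL = 230.0
THRESHOLD = 0.9 * NOMINAL                   # standard 10% dip threshold (207 V)

def count_dips(rms):
    """Count separate runs of consecutive values below the dip threshold."""
    dips, in_dip = 0, False
    for v in rms:
        if v < THRESHOLD and not in_dip:
            dips, in_dip = dips + 1, True
        elif v >= THRESHOLD:
            in_dip = False
    return dips

# Five 4-cycle dips to 150 V, separated by 6 normal cycles (50 cycles total).
cycle_rms = ([150.0] * 4 + [230.0] * 6) * 5

# Fixed 10-cycle windows: every window averages to 198 V, one continuous dip.
agg_rms = [sum(cycle_rms[i:i + 10]) / 10 for i in range(0, len(cycle_rms), 10)]

print(count_dips(cycle_rms), "dips cycle-by-cycle")        # -> 5
print(count_dips(agg_rms), "dip(s) after 10-cycle averaging")  # -> 1
```

Five distinct deep dips become one shallow 50-cycle dip, mirroring the 5-dips-versus-1 result described above.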

Voltage flicker is another important power quality parameter that is characterized by slow measurement. IEC 61000-4-15 defines two periods for monitoring flicker – 10 minutes (PST, Short Term) and 2 hours (PLT, Long Term). In real life, many processes vary within the 10-minute period, which makes it difficult to check the flicker level in real time and to accurately determine the true nature and cause of the flicker.

A newly developed extension of the standard flicker algorithm allows analysis of flicker levels at 2-second resolution. The values are displayed on the same scale as the standard PST/PLT, which means that if the flicker level is constant, the values for 2 seconds, 10 minutes, and 2 hours are the same. Other time periods, such as 10-second and 1-minute flicker measurements, can be provided as well for further Power Quality investigation.

F.    High Sampling Rate

Some power quality phenomena are very fast by nature, which requires rapid sampling and logging rates. IEC 61000-4-30 does not specify what sampling rate to use; it discusses sampling rates only in general terms (p. 19): "To ensure that matching results are produced, class A performance instrument requires a bandwidth characteristic and a sampling rate sufficient for the specified uncertainty of each parameter."

When the sampling rate is not sufficient, a Power Quality event may not be visible, or may mistakenly be classified as another type. Figure 14 shows the same event at 64 (top) and 1024 (bottom) samples per cycle. In the top graph the event would be classified as a voltage sag; however, at 1024 samples per cycle it is clear that the sag is actually transient-induced.

Figure 14: Effects of Sampling and Recording Rate
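The effect of decimation can be demonstrated with a toy waveform: a short spike that is obvious at 1024 samples per cycle can fall entirely between the samples retained at 64. All values below are illustrative:

```python
import math

# Toy illustration: a short transient spike visible at a high sampling
# rate can vanish entirely when only every 16th sample is kept.
# Amplitudes, spike position, and duration are illustrative assumptions.

HIGH_RATE = 1024                 # samples per cycle
spike_start, spike_len = 500, 4  # 4 high-rate samples, well under 0.1 ms

high = [325.0 * math.sin(2 * math.pi * n / HIGH_RATE) for n in range(HIGH_RATE)]
for n in range(spike_start, spike_start + spike_len):
    high[n] += 400.0             # superimposed transient

low = high[::16]                 # decimate to 64 samples per cycle

peak = lambda wave: max(abs(s) for s in wave)
print(f"peak at 1024 s/c: {peak(high):.0f} V")   # transient clearly visible
print(f"peak at   64 s/c: {peak(low):.0f} V")    # transient missed entirely
```

At 64 samples per cycle the retained samples (every 16th) skip the spike completely, so the low-rate record shows only the normal 325 V peak.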

Although the standard does not enforce a minimum sampling rate, many class A analyzers perform their measurements at 256 or more samples per cycle. However, due to memory and capacity limitations, they log the data at lower sampling rates (sometimes as low as 16 samples per cycle). Some analyzers also limit the number of channels that are logged at the highest sampling rate(s), dramatically reducing accuracy and the reliability of power quality investigation.

G.    Multi-point Time-Synchronized Analysis

Typical Power Quality events start from a single point/source and propagate throughout the network to different locations, impacting different elements of an electrical system in various ways. Some events are in fact a combination of two or more anomalies that occur during the same time period. Monitoring at a single point (typically at interconnect locations) shows the effect at that location only. Usually it is not possible to determine the source of the event and, more importantly, the root cause of the problem. It becomes even more difficult when there is more than one source for what may seem like a single event; in that case, any conclusion may be undermined if only one source is isolated and the event continues to appear.

Figure 15 shows the voltage levels at an industrial customer (the same one from section D above) who complained about equipment failures. Small dips were observed at the main service, simultaneously with transients. When more than one analyzer was installed, it became clear that there were at least two sources for the voltage drop events. According to the voltage levels (shown as percentages to allow comparison of different voltage levels), the event on the left started downstream of the right-hand MCC, propagated upstream to the main service, and then downstream to the other transformer. The event on the right side of the graph propagated in exactly the opposite direction. However, both appear similar when monitoring the main service only.

Figure 15: Voltage Dip from Different Locations

Analyzing event propagation based on RMS values is good practice. More advanced propagation analysis can be done by examining the time differences between RMS values, or even the phase shift of waveforms. The IEC 61000-4-30 requirement is very moderate, allowing a maximum time uncertainty of plus/minus one network cycle (16.7 / 20 ms), which means two samples from two analyzers can differ by as much as 40 milliseconds. As transient propagation is much faster, more accurate time synchronization must be achieved to allow proper analysis. The most common technique for time synchronization is the Global Positioning System (GPS). However, different analyzers achieve different time accuracies with GPS, some varying by more than the single-cycle minimum required by the IEC standard. Another technique is Local Area Network (LAN) synchronization, which is much easier to implement (GPS requires a sky view to operate). Using sophisticated algorithms it is possible to achieve even single-sample accuracy (i.e., tens of microseconds), depending on the LAN topology and traffic. Figure 16 shows an expanded view of the left event in Figure 15. The analyzers are synchronized over the LAN, and the event propagation is easily followed from the MCC up to the main service and down to the other transformer.

Figure 16: Voltage Dip from Different Locations – Zoom
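With sufficiently tight time synchronization, determining propagation direction reduces to ordering the dip onset times seen at each analyzer. The locations and microsecond timestamps below are hypothetical:

```python
# Sketch: ordering time-synchronized dip onsets from several analyzers to
# infer propagation direction. Locations and timestamps are hypothetical;
# this only works when sync error is well below the inter-onset gaps.

def propagation_order(onsets):
    """Sort locations by the time the dip first appeared at each one."""
    return [loc for loc, t in sorted(onsets.items(), key=lambda kv: kv[1])]

# Onset times in microseconds since a shared reference (LAN-synchronized).
onsets = {
    "MCC (right)":   1_000,   # dip seen here first -> likely origin
    "main service":  1_350,
    "transformer 2": 1_700,
}
print(" -> ".join(propagation_order(onsets)))
```

With the ±20 ms per-analyzer uncertainty the standard permits, these sub-millisecond gaps would be indistinguishable, which is why tighter synchronization is needed for this kind of analysis.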

IV. Conclusion

Standards were created to provide an equal starting point for power quality analysis and to allow analyzers from different manufacturers to yield the same (or at least similar) results. However, continuous measurement of raw electrical data at high sampling rate and accuracy clearly exposes the shortcomings of monitoring methods based solely on existing standards and regulations.

In many cases, limiting the information to only that which is ‘required’ by a certain standard actually prevents the troubleshooting engineer from monitoring and analyzing anomalies - not to mention identifying the source and preventing the same event in the future.

Data compression technology that allows continuous measurement and logging of data at high sampling rates (up to 1024 samples/cycle) for extended periods of time provides engineers with the information they need to effectively analyze power events and take appropriate action to prevent future ones. Further, providing both cycle-by-cycle and standards-based measurements simultaneously guarantees a true picture of the electrical parameters and anomalies. Lastly, time-synchronized, continuous capture of all parameters, without the need for thresholds or advanced filtering, ensures that all the information is stored and complete analysis is possible – before, during, and after an event.

V. References

[1]     IEC 61000-4-30:2003, “Testing and measurement techniques – Power quality measurement methods”, 2003, pp. 19, 78, 81.

[2]     EN 50160:1999, “Voltage characteristics of electricity supplied by public distribution systems”

[3]     V. Ajodhia and B. Franken, “Regulation of Voltage Quality”, February 2007.
 (http://www.leonardo-energy.org/drupal/files/2007/07-0356%20ECI%20Regulation%20of%20Voltage%20Quality.pdf?download)

[4]     European Regulators' Group for Electricity and Gas (ERGEG), “Towards Voltage Quality Regulation in Europe”, December 2006, p. 13. (http://www.ergeg.org/portal/page/portal/ERGEG_HOME/ERGEG_PC/ARCHIVE1/Voltage%20Quality/E06-EQS-09-03_VoltageQuality_PC.pdf)

[5]     Norwegian Water Resources and Energy Directorate, “Regulations relating to the quality of supply in the Norwegian power system” November 2004. (http://www.nve.no/FileArchive/190/Regulations%20relating%20to%20the%20quality%20of%20supply.pdf)

[6]     Elspec G4400 Datasheet, Elspec Ltd. (http://www.elspec-ltd.com/category/G4000_BLACKBOX_Fixed_Power_Quality_Analyzer)
