
Monitoring Power Quality Beyond EN50160 and IEC 61000-4-30

posted Apr 26, 2010, 11:33 AM by Power Quality Doctor

Abstract – Power Quality monitoring has become a standard task in electrical network management.

The standards currently in place provide minimum requirements, since their aim is to create a level playing field that allows analyzers from different manufacturers to give the same results. This is a good idea in concept, but it also acts as a double-edged sword. Manufacturers design their products to comply with these standards but typically do not provide data and measurements that would allow power quality analysis to go beyond current capabilities. To follow the guidelines set out by the various standards and record faults or disturbances, today's meters rely solely on event-based triggers. While this method provides engineers with some information regarding an event, it does not allow for full analysis of all power parameters leading up to an event, during an event, or during the network's recovery after an event. Further, due to limitations in memory storage, it is likely that the data captured by such recording methods will not reflect all of the 'true' power and energy parameters. In a majority of cases, these limitations prevent power quality phenomena from being truly solved and prevent solutions that would eliminate future recurrence.

This paper will highlight genuine case studies of Power Quality troubleshooting in which the problem could not be solved with measurements taken simply to comply with the standards. It will further show that by providing engineers with data beyond the standards, an unprecedented number of Power Quality events can not only be captured but also definitively solved.

I. Introduction

The main objectives for power quality monitoring are as follows:

Power Quality Statistics

Measuring power quality conditions in general, mainly to analyze the overall power quality performance of an electrical system. In many cases this is monitored for facility distribution networks, large regions, or as a total value for a utility.

Power Quality Contracts

Customers who are sensitive to power quality may have a specific electrical power contract that outlines the minimum acceptable power quality level to be supplied by the utility.

Power Quality Troubleshooting

Analysis of power quality events, usually close to a problematic load or customer. The analysis may be driven by a power quality failure, but it is preferable for it to be driven by continuous monitoring that can detect potential problems.

Power quality troubleshooting is, rather obviously, only the first stage; it should be followed by some kind of corrective action: something that can or should be done in the network to improve the situation and prevent recurrence of the failure. However, power quality statistics and contracts may also be followed by corrective action if the minimum power quality level is not achieved.

While there can never be too much information available for troubleshooting, many papers on this topic discuss what additional information should be added to the existing guidelines for power quality statistics and power quality contracts.

II. Existing Standards and Trends

The two most common power quality standards in use today are IEC 61000-4-30 [1] and EN 50160 [2].

IEC 61000-4-30 provides measurement methods, describes measurement formulas, sets accuracy levels, and defines aggregation periods. The main motivation for this standard is to provide common requirements for measurement devices so that analyzers from different manufacturers give the same results.

EN 50160 provides recommended levels for different power quality parameters, including the percentage of time during which the levels should be kept (e.g., keeping voltage flicker within limits for 95% of each week).
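As an illustration of this time-based compliance concept, the weekly percentage for a single parameter can be sketched as follows (the 10-minute aggregate counts and the ±10% limits are illustrative, not the normative EN 50160 tables):

```python
# Sketch: weekly EN 50160-style compliance check over 10-minute aggregates.
# Limit values here are illustrative, not the normative EN 50160 tables.

def compliance_percentage(values, low, high):
    """Return the percentage of aggregate values inside [low, high]."""
    inside = sum(1 for v in values if low <= v <= high)
    return 100.0 * inside / len(values)

# One week of 10-minute RMS voltage aggregates: 6 per hour * 24 * 7 = 1008.
week = [230.0] * 1008
week[:30] = [200.0] * 30          # 30 intervals below the -10% limit

pct = compliance_percentage(week, 230 * 0.9, 230 * 1.1)
print(f"{pct:.1f}% of intervals compliant")   # 97.0% -> passes a 95% criterion
```

The same counting logic applies to any parameter with a "percentage of the observation period" requirement.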

Various papers have discussed the limitations of these current standards, such as KEMA and Leonardo Energy's "Regulation of Voltage Quality" [3] or the ERGEG (European Regulators' Group for Electricity and Gas) report "Towards Voltage Quality Regulation in Europe" [4].

The main concerns about the existing standards are:

  • Time aggregations that hide some of the power quality issues
  • Limiting the values for only a portion of the time
  • Limiting the overall power quality variables to voltage quality only
  • No identification of each side's (source & user) contribution to power quality

To address these limitations, several countries are modifying the IEC and other standards in an attempt to tighten Power Quality requirements and improve network power quality. For example, NVE (the Norwegian Water Resources and Energy Directorate) has begun to enforce stricter Power Quality standards in Norway [5]. The NVE standard reduces the averaging period from 10 minutes to 1 minute and forces maximum levels of compliance, requiring 100% (compared to 95% in EN 50160). Other regulatory agencies, such as Hungary's, have reduced the averaging period to 3 seconds. From ERGEG (p. 13): "Using a 10-minute average may give satisfactory protection for thermal phenomena, but are not sufficient to protect against equipment damage or failure." While these changes are significant steps toward better statistical data analysis and stronger information requirements, they still do not require measurement equipment to provide the full picture needed to completely understand and solve power quality and fault phenomena.

III. New Analysis Concepts

A.    Introduction

Standards reflect existing technology capabilities. On the one hand, they do not specify unreachable requirements; on the other, they try to urge the development of new technologies that will drive and necessitate further improvements.

There are four generations of power meters:

1st Generation – pure online meters, either analog or digital, which provide current information without any logging

2nd Generation – data loggers, either paper-based or paperless, which provide periodic data recording

3rd Generation – power quality analyzers – provide logging of selective data based on events

4th Generation – endless logging power quality analyzers – allow continuous logging of all raw data

The only way to achieve full comprehension of power quality and fault phenomena, along with their impact throughout an electrical network, is to record all power and energy parameters on a continual basis, without relying on triggers or event-based recording protocols. A compression technology that compresses the raw voltage and current waveform data has been developed for this purpose. It compresses data at a typical 1000:1 ratio, reducing the disk space required both in the analyzer/meter and on the computer, and easing communication requirements. This allows continuous logging of all the power quality and energy information without specifying thresholds or selecting parameters to be measured. Because the compression technology stores raw waveform data, all power quality and energy parameters are calculated in post-processing. This concept is noted in IEC 61000-4-30 (p. 78): "Raw un-aggregated samples are the most useful for trouble-shooting, as they permit any type of post-processing that may be desired".
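Because raw samples are retained, any parameter can be recomputed later. A minimal sketch of one such post-processing step, per-cycle RMS from stored waveform samples (the sampling rate and the signal are illustrative):

```python
import math

def cycle_rms(samples, samples_per_cycle):
    """Compute RMS per cycle from stored raw waveform samples."""
    out = []
    for i in range(0, len(samples) - samples_per_cycle + 1, samples_per_cycle):
        cyc = samples[i:i + samples_per_cycle]
        out.append(math.sqrt(sum(s * s for s in cyc) / samples_per_cycle))
    return out

# Two cycles of a 230 V RMS sine, 64 samples per cycle (illustrative rate).
n = 64
wave = [230 * math.sqrt(2) * math.sin(2 * math.pi * k / n) for k in range(2 * n)]
print([round(r, 1) for r in cycle_rms(wave, n)])   # -> [230.0, 230.0]
```

Harmonics, power, unbalance, and so on can be derived from the same stored samples in the same after-the-fact manner.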

The following examples are taken from different sites throughout the world using the compression technology described above. All the figures (except Figure 12 and Figure 14) show data from real sites equipped with continuous-logging power quality analyzers. Each example represents a separate benefit of using such continuous logging technology.

B.    EN 50160 Compliance

Figure 1 shows compliance with the EN 50160 standard at the main service of an industrial customer. The supply is 22 kV, fed through two transformers that serve a large number of motors on the site. The customer complained that poor power quality supplied by the local utility caused significant monetary damage to their equipment. As shown, the utility power is in compliance with EN 50160, with no interruptions, variations, unbalances, etc. The only parameter that is not compliant 100% of the time is voltage dips, but at 98.1% it still exceeds the required 95%. Although the power remained 'in compliance', meters and recorders that simply record the minimal parameters required by the standard(s) cannot provide the information needed to definitively solve power quality and fault phenomena. Plant equipment and electrical distribution networks still suffer production and delivery interruptions and failures even when in full compliance with existing standards. The key is to provide Power Quality engineers with full information, enabling them to see faults and disturbances that are seemingly within the current guidelines yet cause significant failures and cost to all parties. Moreover, measurements taken to comply with the standards do not make it clear who is responsible for the dips. Full compliance with EN 50160 was not sufficient to provide any indication of power anomalies within the site. While experiencing unexplained production interruptions and equipment failures, the customer received a misleading status according to the standards.

Figure 1: Compliance with EN 50160 at Industrial Customer's Main Service

C.    All parameters

One of the problems of EN 50160 is that it requires measurement of voltages only. IEC 61000-4-30 recommends adding currents as well ("having current signatures as well greatly increases the range and the precision of statements that can be made about a power quality event", p. 81). When all of the raw waveform data is logged continuously, every power quality and energy parameter can be calculated in post-processing, so all parameters can be examined in order to understand events. On Delta connections, measurements are typically limited to the line-to-line voltages only, as required by EN 50160 and others. However, this hides some phenomena. The event shown in Figure 2 highlights a short circuit between the blue phase and ground. On the line-to-line voltage profile (the upper graph) it is barely noticeable, far below the standard 10% threshold required to record it as an event. The outcome is that a potentially damaging event would not even be recorded, let alone analyzed. Such an event could damage any piece of electrical equipment connected to this network, since the equipment suffers phase-to-ground overvoltage (in Delta networks, the analyzer's neutral input channel should be connected to the protective ground).


Figure 2: Line-to-Ground Event

Another example of the importance of using line-to-ground measurement in Delta networks is explained in Figure 3 through Figure 6.

Figure 3 shows a line-to-line event. This is useful information to have; however, the central purpose of power quality analysis is identifying the source(s) of failures. Figure 4 zooms out to a total of 1 second (showing more time-based information than many analyzers can record for all logged events, even with enhanced memory). This view reveals that something was wrong both before and after the event. Figure 5 adds the line-to-neutral voltages and reveals the source: a short circuit on the red phase, which created a higher potential between each of the other two phases and ground, resulting in a breakdown on the blue phase. The result appears in Figure 3 as a sag on L3-L1, but the source of the problem is a ground fault between phase L1 and ground, likely caused by a defective insulator or foreign material. Adding the current (Figure 6) explains the aftershock event: a voltage drop resulting from the simultaneous reconnection of many loads that were disconnected during the main event.

This example (Figures 3 through 6) shows the additional benefit of adding line-to-ground voltages on Delta networks. Other parameters that help analysis include harmonics (for example, voltage dips caused by resonance) and frequency (see the example in the next section).

Figure 3: Line-to-Line Voltages


Figure 4: Line-to-Line Voltage Zoom Out shows 2 collateral events


Figure 5: Line-to-Line plus Line-to-Neutral Voltages


Figure 6: Adding Currents

D.    Continuous logging

The common practice is to use event-based logging as the foundation for any power quality analysis. IEC 61000-4-30 even specifies that pre-trigger information of typically 1/4 of the graph should be included in the event. Figure 7 shows a voltage dip recorded, together with the local utility, at the main service of a large refrigeration factory. Following the event-logging concept, it shows 16 cycles (a common default recording length). In addition to the standard voltage logging, it also shows the currents during the event. Since there is a current increase during the voltage drop, the rule of thumb says this event was caused by the downstream user.
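The event-based recording scheme described above can be sketched as a threshold trigger with a pre-trigger ring buffer; note how everything outside the 16-cycle window is discarded (the threshold and window sizes are illustrative defaults, not a specific analyzer's behavior):

```python
from collections import deque

def capture_event(rms_stream, nominal=230.0, dip_threshold=0.9,
                  pre_cycles=4, total_cycles=16):
    """Return a 16-cycle capture: 1/4 pre-trigger history + post-trigger cycles.

    Mimics event-based recording: nothing outside the capture window is kept.
    """
    history = deque(maxlen=pre_cycles)          # ring buffer of recent cycles
    stream = iter(rms_stream)
    for rms in stream:
        if rms < nominal * dip_threshold:       # trigger: RMS dips below 90%
            capture = list(history) + [rms]
            for _ in range(total_cycles - len(capture)):
                capture.append(next(stream, None))
            return capture
        history.append(rms)
    return None

cycles = [230.0] * 100 + [180.0] * 6 + [230.0] * 100   # a 6-cycle dip
event = capture_event(cycles)
print(len(event), event[:6])   # 16 cycles kept; the other 190 are lost
```

The 190 discarded cycles are exactly the "before and after" context that the continuous-logging approach preserves.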

Using the data compression technology, it is possible to continuously store all electrical information. Figure 8 shows a larger view of the same event (approximately 7 seconds, more than 300 continuous cycles). In addition, it shows the frequency during the event.


Figure 7: Voltage DIP event – 16 cycles


Figure 8: Voltage DIP event – Zoom Out

The frequency is the result of the balance between generation and demand, and is one of the most important parameters for controlling generation power. When generation exceeds demand the frequency increases, and when generation is less than demand it decreases. As shown on the graph, 1 second after the event the frequency started to increase, indicating that generation was higher than demand. There are two possible reasons for this: (1) a problem in the generation caused its output to increase, or (2) the demand was reduced significantly and almost instantaneously, creating over-generation. What apparently happened is that the dip covered a large geographical area and caused many loads to stop and, subsequently, the demand to drop. Unlike the previous conclusion, this proves that the source of the dip was a large geographical area. This conclusion places the responsibility for the event with the utility.

What would be seen if we looked at this event on a larger scale? Figure 9 shows a quarter of an hour of data. The frequency change can be clearly seen, along with other current peaks that happened before the dip. One might assume that the current peaks caused the problem, followed by a regional collapse of the grid. Figure 10 shows approximately one and a half hours of continuous data (the displayed RMS values are calculated from data stored at 512 samples per cycle, for a total of more than 100 million samples used in the analysis of this single event). The current peaks appear before, during, and after the event, and they are typical of this site. It was merely a coincidence that a current peak occurred at the same time as the voltage dip. Moreover, the drop in voltage caused that current peak to be smaller than the others.


Figure 9: Voltage DIP event – 2nd Zoom Out


Figure 10: Voltage DIP event – 3rd Zoom Out

Figure 11 shows time-synchronized data of the voltage, current, and frequency at two other locations, 106 km (66 miles) from each other and 62 km / 54 km (38 / 34 miles) from the original site. The voltage and frequency graphs, together with the distances, confirm that the event was indeed a large-scale event.

By their nature, rules of thumb are right in most cases, but not in all. One of the problems in analysis is the certainty of the conclusion: if one cannot be absolutely sure about a conclusion, it may not be sufficient for damage claims or for investment in preventive measures.


Figure 11: Same Event – Other Locations

E.    Rapid Parameter Monitoring

In order to overcome data storage capacity and processing power limitations, the standards recommend averaging periods for different parameters. While averaging requires fewer resources from the analyzer manufacturer and less storage space on the host computer, it hides a large amount of vital power quality information. The advantages of more stable data and of getting similar results from different analyzers are given more weight than the ability to understand the network and the propagation of events.

An example of the advantage of faster measurement is shown in Figure 12. This example is taken from a paper by SINTEF Energy Research, Norway, which discusses the advantages of rapid parameter monitoring. In this example, using 10-minute averages the voltage is below 207 V (nominal 230 V minus 10%) for 3.5% of the time, while using 1-minute averaging it is below 207 V for 28% of the time.


Figure 12: 1 minute vs. 10 minute averaging
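The masking effect of longer averaging periods can be reproduced with a small sketch (the synthetic voltage trace below is illustrative; it is not the SINTEF data):

```python
def averages(values, window):
    """Non-overlapping averages over fixed windows."""
    return [sum(values[i:i + window]) / window
            for i in range(0, len(values) - window + 1, window)]

def pct_below(vals, limit=207.0):
    return 100.0 * sum(v < limit for v in vals) / len(vals)

# Synthetic 1-minute mean voltages: short repeated dips below 207 V that
# disappear entirely once smeared into 10-minute averages.
minutes = ([230.0] * 8 + [200.0] * 2) * 6      # one hour: 2 low minutes per 10

print(f"1-min:  {pct_below(minutes):.0f}% below 207 V")          # 20%
print(f"10-min: {pct_below(averages(minutes, 10)):.0f}% below")  # 0%
```

Here every 10-minute average lands at 224 V, so the undervoltage minutes vanish from the statistics entirely, an even starker version of the 3.5% vs. 28% gap in the figure.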

The above example graphically characterizes the stark differences in results obtained with different averaging periods. However, in order to fully understand the cause of this problem and provide a solution, it is essential to monitor the cycle-by-cycle RMS voltage. The new data compression technology allows storage of every cycle for all parameters, from voltages and currents to power and harmonics. The example in Figure 13 is from a spot welding factory in Germany. It depicts the different results obtained when monitoring RMS values cycle by cycle versus the averaging technique per IEC 61000-4-30. When monitoring cycle by cycle there are 5 distinct voltage dips, while with 10-cycle averaging the result is only one longer dip. Moreover, when comparing the standard fixed 10-cycle window to a sliding window, the value changes during the dip are smaller and, more importantly, the peak values of both voltage and current are smaller. The results changed from 5 dips of 12 cycles each with more than a 20-volt drop, to 1 dip of 60 cycles with a drop of only 13 volts.


Figure 13: Cycle-by-Cycle Measurements
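The difference between cycle-by-cycle dip counting and a fixed 10-cycle aggregated view can be sketched as follows (the dip pattern and the 90% threshold are illustrative, not the measured data from the welding factory):

```python
def count_dips(rms, threshold):
    """Count separate dips: maximal runs of consecutive values below threshold."""
    dips, in_dip = 0, False
    for v in rms:
        below = v < threshold
        if below and not in_dip:
            dips += 1
        in_dip = below
    return dips

def fixed_window_avg(rms, window=10):
    """Fixed (non-sliding) 10-cycle aggregation, as in IEC 61000-4-30."""
    return [sum(rms[i:i + window]) / window
            for i in range(0, len(rms) - window + 1, window)]

# Five short dips separated by brief recoveries (cycle-by-cycle RMS, volts).
rms = [230.0] * 10 + ([200.0] * 12 + [230.0] * 2) * 5 + [230.0] * 10

print(count_dips(rms, 0.9 * 230))                      # 5 separate dips
print(count_dips(fixed_window_avg(rms), 0.9 * 230))    # merged into 1 dip
```

The two-cycle recoveries are averaged away, so five distinct dips are reported as a single long one, mirroring the 5-versus-1 result in the figure.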

Voltage flicker is another important power quality parameter that is characterized by slow measurement. IEC 61000-4-15 defines two periods for monitoring flicker: 10 minutes (PST, short term) and 2 hours (PLT, long term). In real life, many processes vary within the 10-minute period, which makes it difficult to check the flicker level in real time and to accurately determine the true nature and cause of the flicker.

A newly developed extension of the standard flicker algorithm allows analysis of flicker levels at 2-second resolution. The values are displayed on the same scale as standard PST/PLT, which means that if the flicker level is constant, the values for 2 seconds, 10 minutes, and 2 hours are the same. Other time periods, such as 10-second and 1-minute flicker measurements, can be provided as well for further Power Quality investigation.

F.    High Sampling Rate

Some power quality phenomena are by nature very fast, which requires rapid sampling and logging rates. IEC 61000-4-30 does not specify what sampling rate to use; it discusses sampling rates only in general terms (p. 19): "To ensure that matching results are produced, class A performance instrument requires a bandwidth characteristic and a sampling rate sufficient for the specified uncertainty of each parameter."

When the sampling rate is not sufficient, a Power Quality event may not be visible, or may mistakenly be classified as another type of event. Figure 14 shows the same event at 64 (top) and 1024 (bottom) samples per cycle. In the top graph, the event would be classified as a voltage sag/drop. At 1024 samples per cycle, however, it is clear that the sag is actually transient-induced.


Figure 14: Effects of Sampling and Recording Rate
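The effect of the logging rate can be illustrated with a sketch: a one-sample transient captured at 1024 samples per cycle simply disappears when the same waveform is decimated to 64 samples per cycle (the amplitudes and spike position are illustrative):

```python
import math

n_high = 1024                                  # samples per cycle at high rate
wave = [325 * math.sin(2 * math.pi * k / n_high) for k in range(n_high)]
wave[100] += 600                               # fast transient: one-sample spike

decimated = wave[::16]                         # same cycle at 64 samples/cycle

def peak(w):
    return max(abs(s) for s in w)

print(f"peak at 1024 s/c: {peak(wave):.0f} V")       # spike dominates (~787 V)
print(f"peak at 64 s/c:   {peak(decimated):.0f} V")  # spike gone (~325 V)
```

At the lower logging rate the record contains no trace of the transient at all, which is exactly how a transient-induced event ends up classified as an ordinary sag.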

Although the standard does not mandate a minimum sampling rate, many Class A analyzers perform their measurements at 256 or more samples per cycle. However, due to memory and capacity limitations, they log the data at lower sampling rates (sometimes even as low as only 16 samples per cycle). Some analyzers also limit the number of channels that are logged at the highest sampling rate(s), dramatically reducing the accuracy and reliability of power quality investigation.

G.    Multi-point Time-Synchronized Analysis

Typical Power Quality events start from a single point/source and propagate throughout the network to different locations, impacting different elements of an electrical system in various ways. Some events are actually a combination of two or more anomalies occurring during the same time period. Monitoring at a single point (typically at interconnect locations) shows the effect at that location only. Usually it is not possible to determine the source of the event and, more importantly, the root cause of the problem. It becomes even more difficult when there is more than one source for what may seem like a single event; in that case, any conclusion may be contradicted if only one source is isolated and the event continues to appear.

Figure 15 shows the voltage levels at an industrial customer (the same one from section D above) who complained about equipment failures. Small dips were observed at the main service, simultaneously with transients. When more than one analyzer was installed, it became clear that there were at least two sources for the voltage drop events. According to the voltage levels (shown in percentages to allow comparison of different voltage levels), the event on the left started downstream of the right-hand MCC, propagated upstream to the main service, and then downstream to the other transformer. The event on the right side of the graph propagated in exactly the opposite direction. However, both appear similar when monitoring the main service only.


Figure 15: Voltage DIP from different locations

Analyzing event propagation based on RMS values is good practice. More advanced propagation analysis can be done by analyzing the time differences between RMS values, or even the phase shift of waveforms. The IEC 61000-4-30 requirement is very moderate, requiring a maximum time uncertainty of only plus/minus one network cycle (16.7/20 ms), which means two samples from two analyzers can differ by as much as 40 milliseconds. Since transient propagation is much faster, more accurate time synchronization must be achieved to allow proper analysis. The most common technique for time synchronization is the Global Positioning System (GPS). However, different analyzers achieve different time accuracies with GPS, some varying by more than the single-cycle minimum required by the IEC standard. Another technique is Local Area Network (LAN) synchronization, which is much easier to implement (GPS requires a sky view to operate). Using sophisticated algorithms it is possible to achieve even single-sample accuracy (i.e., tens of microseconds), depending on the LAN topology and traffic. Figure 16 shows an expanded view of the left event in Figure 15. The analyzers are synchronized over the LAN, and the event propagation is easily tracked from the MCC up to the main service and down to the other transformer.


Figure 16: Voltage DIP from different locations – Zoom
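One simple way to think about aligning two analyzers' records after the fact is lag estimation over their per-cycle RMS traces; a hedged sketch follows (the traces and the brute-force search are illustrative, not the actual LAN synchronization algorithm):

```python
def best_lag(ref, other, max_lag):
    """Estimate the sample lag aligning `other` to `ref` by minimizing
    the mean squared difference over the overlapping region."""
    def cost(lag):
        pairs = [(ref[i], other[i + lag]) for i in range(len(ref))
                 if 0 <= i + lag < len(other)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(-max_lag, max_lag + 1), key=cost)

# Per-cycle RMS traces of the same dip seen by two analyzers; the second
# record carries a residual 3-cycle clock offset.
site_a = [230.0] * 20 + [190.0] * 5 + [230.0] * 20
site_b = [230.0] * 3 + site_a[:-3]      # same event, shifted by 3 cycles

print(best_lag(site_a, site_b, max_lag=10))   # -> 3
```

A cycle-level offset like this is exactly what the ±1-cycle IEC tolerance permits; sub-cycle propagation analysis needs the tighter GPS or LAN synchronization discussed above.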

IV. Conclusion

Standards were created to provide an equal starting point for power quality analysis and to allow analyzers from different manufacturers to yield the same (or at least similar) results. However, continuous measurement of raw electrical data at high sampling rate and accuracy clearly exposes the shortcomings of monitoring methods based solely on existing standards and regulations.

In many cases, limiting the information to only that which is 'required' by a certain standard actually prevents the troubleshooting engineer from monitoring and analyzing anomalies, not to mention identifying the source and preventing the same event in the future.

Data compression technology that allows continuous measurement and logging of data at high sampling rates (up to 1024 samples/cycle) for extended periods of time provides engineers with the information they need to effectively analyze power events and take appropriate action to prevent future ones. Further, providing both cycle-by-cycle and standards-based measurements simultaneously guarantees a true picture of the electrical parameters and anomalies. Lastly, time-synchronized, continuous capture of all parameters, without the need for thresholds or advance filtering of the data, ensures that all the information is stored and that complete analysis is possible before, during, and after an event.

V. References

[1]     IEC 61000-4-30:2003, "Testing and measurement techniques – Power quality measurement methods", 2003, pp. 19, 78, 81.

[2]     EN 50160:1999, “Voltage characteristics of electricity supplied by public distribution systems”

[3]     V. Ajodhia and B. Franken, "Regulation of Voltage Quality", KEMA / Leonardo Energy, February 2007.

[4]     European Regulators' Group for Electricity and Gas (ERGEG), "Towards Voltage Quality Regulation in Europe", December 2006, p. 13.

[5]     Norwegian Water Resources and Energy Directorate, "Regulations relating to the quality of supply in the Norwegian power system", November 2004.

[6]     Elspec G4400 Datasheet, Elspec Ltd.

Data Compression Technology Speeds Power Quality Analysis

posted Apr 1, 2010, 8:04 AM by Power Quality Doctor

An engineer’s main objective when troubleshooting a power quality event is to identify the source of the disturbance in order to determine the required corrective action. To identify the source, the engineer depends on recorded data captured by monitoring equipment.

Management demands a cost-effective solution to the problem be implemented as quickly as possible. The electrical engineer speaks of installing instrumentation, collecting data, analyzing data, re-installing, and re-analyzing. It is not uncommon for months to pass until the problem is isolated and a solution is implemented.

Power quality analysis has traditionally posed a unique challenge to the engineer, demanding an accurate assumption as to the dimensions of the disturbance in order to capture the event to memory for examination. The correct balance between memory size and the deviation of the disturbance from the norm is often elusive: thresholds set too low capture too many events of little or no consequence, filling the memory before the sought-after damaging event occurs, while thresholds set too high can miss the event entirely.

Data Compression Technology

Revolutionary data compression technology takes the guesswork out of isolating the source of power quality problems by eliminating the need for devising set points and calculating threshold values.

The ability to capture all the waveform data in high resolution in its entirety over an extended period of time is the only way to ensure that the event will be recorded, allowing the engineer to analyze the data and define a solution.

Until now, monitoring and analyzing system electrical trends have presented a true challenge because certain data compromises were required to counteract capacity, processing, and physical limitations. Data compression technology provides unlimited capacity for power quality data storage. This eliminates the requirement to set constraints on system data, rendering the risk in data selection based on set thresholds and triggers obsolete.
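A back-of-the-envelope sketch shows why compression matters for continuous raw logging (the channel count, sampling rate, and sample width are illustrative assumptions; the ~1000:1 ratio is the figure cited for the compression technology):

```python
# Back-of-the-envelope storage estimate for continuous raw waveform logging.
# All figures illustrative: 1024 samples/cycle, 50 Hz, 8 channels, 2 bytes.

samples_per_sec = 1024 * 50 * 8            # samples per second, all channels
raw_bytes_per_year = samples_per_sec * 2 * 3600 * 24 * 365
compressed = raw_bytes_per_year / 1000     # assuming the ~1000:1 ratio

print(f"raw:        {raw_bytes_per_year / 1e12:.1f} TB/year")  # ~25.8 TB
print(f"compressed: {compressed / 1e9:.1f} GB/year")           # ~25.8 GB
```

Tens of terabytes per analyzer per year is impractical to store and move; tens of gigabytes is routine, which is what makes "unlimited" continuous logging feasible.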

Operators of electrical networks are constantly faced with power events and transient occurrences that affect power quality and heighten energy costs.

In the past, to determine whether such events reflect system trends or isolated incidents, electrical engineers relied on partial information indicating what events occurred and when; not all events were recorded due to data capacity limitations and missed thresholds. Now, engineers analyzing multi-point, time-synchronized real-time power quality data can actually see why all power events occur and what causes them. In short, data compression technology pushes power quality analysis capabilities into the next generation.

Data compression technology allows for both immediate power quality problem solving as well as for true proactive energy management. The ability to analyze all data at any time enables energy managers to call up and analyze historic time-based energy consumption trends in order to make supply side decisions. Data compression technology allows control over both the consumption and quality of the supplied energy.

Optimal system functionality in diverse network topologies depends on the ability of energy suppliers, service providers, and industrial and commercial energy consumers to deliver power quality over time and to successfully analyze, predict, and prevent energy events using multi-point, historic, and real-time logged data.

Achieving Benefits

The U.S. Department of Energy estimates that about $80 billion a year is lost to power quality issues. To reduce these losses, operators must identify the sources of power events and prevent their recurrence.

Problem sources are many and often reflect the need for predictive and preventative maintenance measures. Utility operators face problem sources such as capacity, weather conditions, and equipment failures. Consumers suffer from equipment failures, faulty installations, and incompatible equipment usage creating destructive resonant situations.

When effective monitoring is installed, power providers will strive to avoid the negative impacts of diminished quality and service capabilities, so as not to incur damages from the following factors:

In industrial sectors:
  • downtime
  • product quality
  • maintenance costs
  • hidden costs (reputation, recall)

In commercial and service sectors:

  • service stoppage
  • service quality
  • maintenance costs
  • hidden costs (reputation, low customer satisfaction level)

Once a power quality event is fully characterized by accessing compressed power quality data, a solution can be implemented successfully.

Analysis Resources and Capabilities

Implementing data compression technology in an electrical installation means:

  • Needed information is stored; there are no more data compromises to counter recording resolution and capacity issues
  • Years of data for every network cycle are available with no data gaps
  • Thresholds and triggers are no longer needed; missing events becomes a thing of the past
  • All data parameters are recorded; there is no need to select measurement parameters
  • Comprehensive power quality reporting and statistics for data analysis and report generation are accessible and organized
  • Multi-point time-synchronized recording provides a true snapshot for any period in the entire network

Over the years, various technologies have evolved for monitoring and logging power quality data. Surely, throughout this period, developers addressed the same challenges regarding potential power quality, data capacity, and system trends. Ultimately, the analysis of sampled data serves to manage, maintain, and optimize system operations and costs.

Four Technological Generations

It is possible to delineate four distinct generations in the development of power quality technology:

  • 1st generation, power meter/monitor: First-generation technologies provide display capabilities only. Whether analog or digital, the logged information is used solely for monitoring the system.
  • 2nd generation, data logger: Second-generation technologies use periodic logging mechanisms and present data in paper or paperless form. Still, the information is used for system monitoring only.
  • 3rd generation, event recorder/power quality analyzer: Third-generation technologies require setting thresholds and triggers, which are always difficult to choose correctly because memory capacity is finite and quickly filled. When values are set too low, the capacity fills instantly; when values are set too high, very few events are recorded.
  • 4th generation, power quality data center: Fourth-generation technology provides limitless, continuous logging and storage of power quality data using data compression technology. Setting parameter values, thresholds, triggers, and other constraints on the data is no longer required.
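
The trade-off described for third-generation recorders can be sketched in a few lines. The following is a hypothetical illustration (the voltage values, trigger level, and variable names are invented, not taken from any real analyzer): a shallow sag that never crosses the trigger threshold is simply never recorded, while continuous logging keeps every cycle.

```python
# Hypothetical sketch: a 3rd-generation threshold-triggered recorder
# versus 4th-generation continuous logging. All values are invented
# for illustration.

NOMINAL_V = 230.0
SAG_TRIGGER = 0.90 * NOMINAL_V   # record only cycles below 90% of nominal

# One RMS voltage reading per cycle; a shallow ~8% sag sits at indexes 3-5.
cycle_rms = [230.1, 229.8, 230.0, 212.0, 211.5, 213.0, 229.9, 230.2]

# 3rd generation: keep only the cycles that cross the trigger.
triggered = [v for v in cycle_rms if v < SAG_TRIGGER]

# 4th generation: every cycle is kept (compression makes this affordable).
continuous = list(cycle_rms)

print(len(triggered), len(continuous))   # the shallow sag never triggers
```

Here the 212 V sag stays above the 207 V trigger, so the triggered recorder stores nothing at all, while the continuous log retains all eight cycles for later analysis.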

Additionally, a troubleshooter can determine where power quality events originate anywhere in the electrical network and identify their causes, no matter in which cycle they occur. This measurement and analysis technology enables the engineer to optimize electrical network efficiency and reduce power quality losses by relying on the analysis of gap-free data.

Data Analysis Advantages

Data compression can help optimize analysis activities by introducing multi-point time synchronization to the process (see figure 1 on right). Troubleshooters can trace energy flows across the network during power events to determine their causes. It is equally important to log network energy flows when no events are occurring, and to log all other points while an event occurs at a specific point, so the event can be analyzed correctly.

During power quality events, impedances change. Using fourth-generation technology, it is possible to calculate impedances and perform accurate network simulations for comprehensive analysis.
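
As a rough illustration of such an impedance calculation, a Thevenin-style estimate can be made from two time-synchronized phasor measurements, one cycle before the sag and one during it: the extra current drawn during the event flows through the source impedance and produces the observed voltage drop. The phasor values below are invented, and this is a textbook estimate, not the analyzer's actual algorithm.

```python
# Hypothetical sketch: estimating source impedance from two cycles of
# time-synchronized voltage/current phasors (pre-event and in-event).
# All phasor values are invented for illustration.

V_pre, I_pre = complex(230.0, 0.0), complex(10.0, -2.0)    # before the sag
V_sag, I_sag = complex(180.0, -5.0), complex(95.0, -40.0)  # during the sag

# Thevenin estimate: voltage change across the source impedance
# divided by the change in current drawn through it.
Z_source = (V_pre - V_sag) / (I_sag - I_pre)

print(f"|Z_source| = {abs(Z_source):.3f} ohm")
```

With gap-free, multi-point data, the same estimate can be repeated for every cycle of an event, showing how the apparent impedance evolves before, during, and after the sag.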


Question: What was the source of the voltage sag?

Answer: The source could be any one, or a combination, of the illustrated events; other factors could also be involved. Monitoring multiple sites simultaneously and continuously allows the engineer to see the whole picture, all the time. Power quality events can be examined at the time of the event and in the context of the timeline before and after it, comparing the impedances at the site at different times.

Question: What causes data bottlenecks when logging data with non-compression technologies (see figure 2 on right)?

Answer: Data bottlenecks with non-compression technologies are caused by limitations in recording speed capabilities, storage space, communication throughput, and computer processing capacity. 

Question: How are data bottlenecks eliminated by using data compression technology?

Answer: Data bottlenecks in the logging process are eliminated by data compression technology (see figure 3 below): the CPU compresses the data in real time; compact flash storage is used instead of a hard disk; because the data are compressed, capacity is no longer a limiting factor; and block-oriented processing is implemented.

Implementing Data Compression

Patent-pending PQZip data compression technology is employed by the Elspec G4400 Power Quality Data Center and implements:
  • A compression algorithm with a typical 1000:1 compression ratio; this real-time compression, performed independently of the sampling, prevents data gaps
  • Multi-point deployment of time-synchronized devices over the entire grid, showing how the values recorded at different points in the network interact at any point in time
  • Continuous, unlimited logging and storage of data for total network analysis
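
Ratios of this magnitude are plausible for steady-state waveform data because successive cycles are nearly identical. The sketch below is emphatically not the patent-pending PQZip algorithm, only a generic demonstration of the underlying principle: delta-encoding each cycle against the previous one leaves mostly zeros, which a general-purpose compressor (zlib here) collapses. Sample and cycle counts are assumed for illustration.

```python
import math
import struct
import zlib

# Generic sketch of why per-cycle waveform data compresses so well.
# NOT the PQZip algorithm; sample counts and cycle counts are invented.

SAMPLES = 128   # samples per cycle (assumed)
CYCLES = 1000   # steady-state cycles

cycle = [math.sin(2 * math.pi * n / SAMPLES) for n in range(SAMPLES)]
samples = cycle * CYCLES                     # a perfectly steady recording

raw = struct.pack(f"{len(samples)}d", *samples)

# Delta-encode each cycle against the previous one, then compress.
deltas = samples[:SAMPLES] + [
    samples[i] - samples[i - SAMPLES] for i in range(SAMPLES, len(samples))
]
packed = zlib.compress(struct.pack(f"{len(deltas)}d", *deltas), 9)

print(f"compression ratio ~ {len(raw) / len(packed):.0f}:1")
```

On real measurements the cycles are only nearly identical, so the deltas are small but non-zero; practical ratios depend on how stable the network is and on how the residuals are quantized.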

Summarizing Benefits

Of the four generations of technological evolution in storing power quality data for analysis, only fourth-generation data compression technology affords the unprecedented advantage of unlimited, continuous logging and storage of high-resolution data. Because this technology avoids capacity issues, the data it yields are entirely uncompromised. This is a clear advantage when analyzing system power trends and events. The natural and desired outcome of in-depth system analysis is the prediction and prevention of power events, reduced power costs, and a constant supply of enhanced power quality.

This article was printed in the December 2006 issue of Energy & Power Management magazine (pp. 18-20).

Electrical Data Compression Patent

posted Mar 27, 2010, 11:31 AM by Power Quality Doctor   [ updated Apr 27, 2010, 2:09 AM ]

A successful power quality analysis requires logging large quantities of electrical parameters. In many situations, during the logging session the operator must select which parameters to log, at what resolution, and with what thresholds. A wrong selection may make the problem impossible to analyze and force a costly repeat of the logging session.
This patent describes an electrical data compression technology that allows logging all the parameters all the time, guaranteeing successful analysis and reducing the overall cost of power quality analysis.
To read the patent click here
To view power quality analyzers based on this patent click here

Power quality cost: a practical approach

posted Oct 14, 2009, 11:11 PM by Sample User   [ updated Apr 27, 2010, 2:11 AM by Power Quality Doctor ]

Power Quality cost analysis is always subject to differing views within the same site or company. Amir Broshi explains why such differences in points of view exist and how they stem from each stakeholder's partial view of a global problem. In conclusion, he offers a tool: a simple checklist that gives a complete overview of the costs of poor power quality for electricity professionals as well as decision makers, financial managers, and site managers.
