What is a standard and which standard is right for you?
In this month’s edition of the Powermetrix Fundamentals of Metering series, we look at a word that is often misused: “standard.” You will often hear a phrase like “This is the standard for the industry,” or “the standard for meter testing.” Such phrasing is meant to create an association in the reader’s mind. Many understand the word “standard” to mean something normal or default, such as standard accessories on a car, or a standard time of day to get up.
However, in metering, and in the electrical industry as a whole, that is not how the word is used. When it pertains to metering, “standard” has two separate meanings and definitions.
First, there is the written standard, such as the ANSI or IEEE standards: guidelines defining the “what” and “how” for a given industry. These standards are developed by a committee of organizations from that industry who come together to codify its common practice into a set of documents. Most common in the meter testing world are the ANSI C12 standards, a family of documents that answer questions such as:
- What is a meter in terms of shape, size, and functionality?
- What are the different meter “forms” and how do I use them to measure power?
- How do I test a meter to make sure it is accurate?
A second definition of a standard refers to the equipment being used, and how it is traced for accuracy and precision. Remember last month’s article on Accuracy? We discussed how test equipment accuracy and precision are measured. We briefly mentioned the word standard, and noted that test equipment must be compared against a standard. That led us to the burning question: What is a standard?
Definition of Standard
A standard is an artifact or calculation that bears a defined relationship to a fundamental unit of measurement, such as length or mass – or watts of electric power. When quantifying the “reliability” of some measured value, we “refer” to the standard to see how closely we agree. The point of having a reference standard is that if our measurement differs from it, then we are “wrong” and need to correct our reading.
For example, the PowerMaster 3 Series has an internal 3-phase reference standard that verifies its specified accuracy and precision. For auditing and legal purposes, the accuracy of the field standard should be “traceable” to a recognized metrology lab such as the National Institute of Standards and Technology (NIST).
This means that any lab test equipment used to verify the field standard has itself been verified against other traceable equipment, with each comparison in the sequence certified as having a guaranteed maximum error. This “chain of traceability” ultimately originates with a certified comparison against a fundamental unit of measurement maintained by an international standards body, and it is what gives the field test equipment’s readings their “validity” in use.
Assume, for example, that we have a very accurate current source providing a 1A RMS AC current reference signal. The source becomes the ideal value, or “standard,” against which all other measurements are compared.
If the stated meter accuracy is 0.2%, then we would expect 95% of its measurements to fall between 0.998A and 1.002A – that is, within a band of ±0.2% of the “standard” value.
Similarly, if the measurement standard has a stated accuracy of 0.05% (4x better than the meter accuracy) then we would expect 95% of its measurements will fall between 0.9995A and 1.0005A – within a band of ±0.05% of the “standard” value.
Now assume the measurement standard measures the 1A reference and gives a reading of 0.9997A. If the standard’s precision is stated as 25 ppm (0.000025), and we were to take 100 measurements, we would expect 95% of those measurements to fall between 0.999675A and 0.999725A – within a band of ±25 ppm around the initial measured value.
Now suppose that the measurement standard is disconnected from the reference signal and then reconnected. While it is possible, we would not expect to get the same initial reading of 0.9997A – but we would expect the new reading to fall between 0.9995A and 1.0005A, within the band of the stated accuracy.
This is extremely important to understand: a traceable internal standard is what gives field test equipment the ability to accurately test meters.
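The bands in this worked example are simple arithmetic. As a minimal sketch, the following Python snippet reproduces them; the numbers come from the example above, not from any particular instrument’s datasheet:

```python
def band(center, fraction):
    """Return the (low, high) limits of center +/- fraction."""
    delta = center * fraction
    return center - delta, center + delta

reference = 1.0  # 1 A RMS reference current

# Meter with 0.2% stated accuracy: 95% of readings fall in this band
meter_low, meter_high = band(reference, 0.002)
print(f"Meter band:     {meter_low:.4f} A to {meter_high:.4f} A")

# Measurement standard with 0.05% accuracy (4x better than the meter)
std_low, std_high = band(reference, 0.0005)
print(f"Standard band:  {std_low:.4f} A to {std_high:.4f} A")

# Precision of 25 ppm around an initial reading of 0.9997 A
prec_low, prec_high = band(0.9997, 25e-6)
print(f"Precision band: {prec_low:.6f} A to {prec_high:.6f} A")
```

Running this prints the meter band 0.9980–1.0020 A, the standard band 0.9995–1.0005 A, and the precision band 0.999675–0.999725 A, matching the figures in the text. Note that the precision band is scaled by the measured value (0.9997 A), which is why it sits around the reading rather than around the 1 A reference.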
Simplified Definition of a Standard
We’ve just delivered a large amount of information for you to absorb, so let’s review.
- Field test equipment is taken into the field with the assurance that it has been verified/calibrated against a transfer standard.
- The lab verification standard is itself verified via a NIST-traceable process. For utility companies, this means that if a customer complains about overbilling, the utility can use its verified field test equipment to determine the validity of the complaint.
- Having a standard gives utility companies what is required to resolve customer issues.
How do I choose a standard?
Knowing a little more about standards helps utility professionals make informed decisions about test equipment. Knowing which questions to ask before making a decision is imperative. Some common things to remember and to look for:
- There are many levels of standards. One should choose a standard based on the accuracy required for the job.
- No ONE manufacturer is the industry standard. There are multiple standard manufacturers, and you should choose the best one based on your needs.
- Having an internal reference standard provides the most accuracy and precision.
- Having NIST traceable equipment helps with customer and utility confidence.
Never be afraid to ask questions, especially before purchasing equipment that could be used to resolve customer complaints or to maintain internal requirements.