Temperature coefficient of sensitivity as expressed in the Amphenol NPA series datasheet
Temperature coefficient of measurement span
The magnitude of the sensor full-scale output is affected by temperature. This is called Temperature Error – Span in the TE datasheet sample, and may also be referred to as the temperature coefficient of span (TCS). It is calculated in a similar way to the TCZ. The full-scale output at the upper and lower CTR limits is compared with the full-scale at the standard temperature. The larger of the two differences is expressed as a ratio in percent per degree (%/°C).
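The TCS calculation described above can be sketched in a few lines of Python. The function name and sample values here are invented for illustration, not taken from the TE or Amphenol datasheets:

```python
def tcs_percent_per_degC(fs_ref, fs_low, fs_high, t_ref, t_low, t_high):
    """Temperature coefficient of span (TCS).

    Compares the full-scale output at the lower and upper CTR limits
    against the full-scale output at the reference temperature, then
    returns the larger of the two deviations as a percentage per
    degree Celsius.
    """
    dev_low = abs(fs_low - fs_ref) / fs_ref * 100.0 / abs(t_ref - t_low)
    dev_high = abs(fs_high - fs_ref) / fs_ref * 100.0 / abs(t_high - t_ref)
    return max(dev_low, dev_high)

# Hypothetical sensor: 100 mV span at 25 degC reference,
# 99.2 mV at -20 degC, 100.5 mV at 85 degC
print(round(tcs_percent_per_degC(100.0, 99.2, 100.5, 25, -20, 85), 4))
```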
Pressure hysteresis and temperature hysteresis
A sensor may give different readings for the same measured pressure, depending on whether the pressure has increased or decreased to reach the measured value. Key factors that cause pressure hysteresis include the characteristics of the diaphragm or strain-gauge material.
Pressure sensors can also exhibit temperature hysteresis, which results in a different pressure reading being produced at a given pressure and temperature depending on whether the temperature has increased or decreased to the value at which the measurement is taken. Temperature hysteresis is influenced by measurement conditions such as dwell time and temperature range, and is expressed as a percentage of full scale over the CTR.
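A pressure-hysteresis figure can be derived from calibration data by comparing readings taken at the same applied pressures on the rising and falling sweeps. The readings and function name below are hypothetical, purely to show the arithmetic:

```python
def hysteresis_percent_fs(readings_up, readings_down, full_scale):
    """Worst-case difference between rising-sweep and falling-sweep
    readings at the same applied pressures, expressed as a
    percentage of full scale."""
    worst = max(abs(u - d) for u, d in zip(readings_up, readings_down))
    return worst / full_scale * 100.0

# Hypothetical readings at 0/25/50/75/100% of range, rising then falling
up = [0.0, 24.9, 50.1, 75.0, 100.0]
down = [0.2, 25.3, 50.4, 75.2, 100.0]
print(hysteresis_percent_fs(up, down, 100.0))
```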
Non-linearity
Figure: A graphical representation of non-linearity using the Best Fit Straight Line method
Non-linearity expresses the difference between the actual output of the sensor and the predicted response according to its typical performance. Non-linear responses can be affected by factors such as temperature, humidity, and vibration or other disturbances. Non-linearity can be expressed mathematically, as a percentage:
Non-linearity (%) = ( Din(max) / INf.s. ) × 100
where:
- Din(max) is the maximum input deviation
- INf.s. is the maximum, full-scale input
Non-linearity can also be shown graphically (see the figure above), which illustrates how the output voltage can deviate across the full-scale range. In this context, linearity can be quantified using the Best Fit Straight Line (BFSL) method, which uses mathematical regression to plot the BFSL that gives equal weighting to points above and below the line.
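A minimal sketch of the BFSL calculation, using an ordinary least-squares fit so that points above and below the line are weighted equally. The function name and calibration data are illustrative, not from any datasheet:

```python
def bfsl_nonlinearity_percent_span(pressures, outputs):
    """Best Fit Straight Line (BFSL) non-linearity.

    Fits a least-squares line through the calibration points, then
    returns the worst deviation from that line as a percentage of
    the output span.
    """
    n = len(pressures)
    mx = sum(pressures) / n
    my = sum(outputs) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(pressures, outputs))
             / sum((x - mx) ** 2 for x in pressures))
    intercept = my - slope * mx
    span = max(outputs) - min(outputs)
    worst = max(abs(y - (slope * x + intercept))
                for x, y in zip(pressures, outputs))
    return worst / span * 100.0

# Hypothetical 0-100 unit sensor with a slight bow in its response
p = [0, 25, 50, 75, 100]
v = [0.0, 1.3, 2.5, 3.7, 5.0]
print(round(bfsl_nonlinearity_percent_span(p, v), 3))
```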
Figure: Terminal line method for calculating non-linearity
Alternative methods may be used, such as the terminal line technique, which expresses non-linearity as the maximum deviation from a straight line joining the zero and full-scale points (see the figure above). The terminal line method eliminates zero-point and full-span errors, which simplifies recalibration if a sensor is replaced in the field.
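The terminal line technique can be sketched in the same way: the reference line is simply drawn through the first and last calibration points, so the deviation at those two points is zero by construction. Again, the function name and data are illustrative:

```python
def terminal_line_nonlinearity_percent_span(pressures, outputs):
    """Terminal-line non-linearity: worst deviation from the straight
    line joining the zero and full-scale calibration points, as a
    percentage of the output span."""
    x0, y0 = pressures[0], outputs[0]
    x1, y1 = pressures[-1], outputs[-1]
    slope = (y1 - y0) / (x1 - x0)
    span = y1 - y0
    worst = max(abs(y - (y0 + slope * (x - x0)))
                for x, y in zip(pressures, outputs))
    return worst / span * 100.0

# Same hypothetical calibration data as before
p = [0, 25, 50, 75, 100]
v = [0.0, 1.3, 2.5, 3.7, 5.0]
print(round(terminal_line_nonlinearity_percent_span(p, v), 3))
```

Note that the terminal-line figure for the same data is typically larger than the BFSL figure, since the line is fixed at the endpoints rather than fitted for minimum overall deviation.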
The datasheet should state which method has been used. A note in the TE datasheet above tells the reader that the BFSL method was used to calculate the 1210’s typical non-linearity to be ±0.05 %span.
High-linearity pressure sensors can be produced by optimising the construction of the sensor, such as the diaphragm mounting, building the sensor using high-quality materials, and applying electronic compensation.
Several other parameters can affect the sensor accuracy, and should be considered when choosing the right sensor for a given application. These include resolution, dynamic characteristics, and long-term stability, as we’ll now explore.
Resolution
Resolution is the smallest incremental change in pressure that can be displayed at the output. It may be expressed as a proportion of the reading or the full-scale range, or as an absolute figure. Depending on the application, the pressure resolution may be easily related to real-world performance: a pressure sensor with 3 mbar resolution, used in a depth gauge, will allow depth-measurement resolution of about 3 cm in water. Note that a sensor's accuracy can be no better than its resolution.
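The depth-gauge example can be checked with the hydrostatic relation p = ρ·g·h, which is why 1 mbar corresponds to roughly 1 cm of water. This sketch assumes fresh water at 1000 kg/m³:

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water (assumed)
G = 9.81            # m/s^2, standard gravity

def depth_resolution_cm(pressure_resolution_mbar, rho=RHO_WATER):
    """Smallest resolvable depth change for a given pressure
    resolution, from the hydrostatic relation p = rho * g * h."""
    dp_pa = pressure_resolution_mbar * 100.0   # 1 mbar = 100 Pa
    return dp_pa / (rho * G) * 100.0           # metres -> centimetres

# 3 mbar of resolution gives roughly 3 cm of depth resolution
print(round(depth_resolution_cm(3.0), 2))
```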
Response time and dynamic performance
Response time is an expression of the sensor’s ability to change and stabilise at the new value, within the specified tolerance, in response to a change in the applied pressure. The response time may be different depending on whether the change is positive- or negative-going.
The datasheet may quote response time as a time constant, which is the time for the sensor signal to change from zero to 63.2% of full-scale range when an instantaneous full-scale change in pressure is applied.
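For a sensor modelled as a simple first-order system, the fraction of a full-scale step reached after time t follows 1 − e^(−t/τ), which is where the 63.2% figure at t = τ comes from. The time-constant value below is hypothetical:

```python
import math

def first_order_response(t, tau):
    """Fraction of a full-scale pressure step reached after time t,
    for a sensor modelled as a first-order system with time
    constant tau."""
    return 1.0 - math.exp(-t / tau)

tau = 0.005  # hypothetical 5 ms time constant
# At t = tau the output has reached ~63.2% of the step
print(round(first_order_response(tau, tau) * 100, 1))
```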
Faster-acting sensors may be described in terms of their frequency response, or flat frequency, which is the maximum pressure-change frequency that can be converted into an output signal without distortion.
Dynamic linearity is an important parameter in applications that must monitor rapidly changing pressure. It can be influenced not only by the response time, but also by other characteristics such as amplitude and phase distortion.
Long-term stability or natural drift
Sensor accuracy tends to drift over time, due to ageing, environmental exposure, and other application-related influences. Such drift is not predictable, and may have a positive or negative change coefficient. Referring to the datasheet sample above, TE expresses the long-term stability as a percentage of the full-scale range, over a period of one year and assuming the current and temperature are constant. Hence stability as quoted in the datasheet can only be used as a guide and not as a guarantee of performance in the target application.
Other operational factors to consider
In this article, we’ve described key factors that affect the accuracy of a pressure sensor. Depending on the application, some aspects such as dynamic performance or resolution may be less important than others like linearity or temperature-related drift.
Once the optimum sensor has been selected on paper, it's important to remember that other factors, such as the equipment design and day-to-day use, can also influence pressure-sensing accuracy at setup and over the longer term.
Improper installation, for example, is often the underlying cause if a system fails to deliver the expected accuracy when deployed. This could be prevented by design, or by ensuring the equipment is shipped with clear installation instructions.
Application-related variables such as temperature, specific gravity of monitored fluids, dielectric characteristics, turbulence, changes in atmospheric pressure, or unexpected obstructions, blockages or vapour locks may also impair accuracy. Taking any likely effects into account when designing the equipment, and where possible selecting sensors that are immune to them or that benefit from suitable compensation, can help to mitigate or avoid unacceptable inaccuracy.
And, of course, ensuring initial calibration, with regular recalibration at suitable intervals, is essential to safeguard long-term accuracy.
Looking for more on pressure sensor technology? Check out the further chapters of this guide below, or if you're pressed for time you can download it in a PDF format here.