In my last post I described how to start reading data from an ADXL335 accelerometer with an Arduino and convert those voltage readings into standard units. I even showed some real data coming out of my device and when you looked at it, it was pretty clear that:
- The data coming out were stable and repeatable
- The conversion I used wasn’t right: the measured magnitude of gravity changed depending on the tilt of the accelerometer, when it should have stayed a constant 1 g.
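For reference, a datasheet-based conversion looks roughly like the sketch below. The constants are the ADXL335 datasheet's *typical* figures at a 3 V supply (300 mV/g sensitivity, 1.5 V zero-g bias); they are assumptions standing in for whatever values I used in the last post, and the readings are assumed to already be in volts.

```python
# Faith-based conversion: volts -> g using typical datasheet values.
# These constants are assumptions; your actual part will differ,
# which is exactly why calibration matters.
ZERO_G_V = 1.5       # typical zero-g bias at a 3 V supply
SENS_V_PER_G = 0.3   # typical sensitivity: 300 mV/g at a 3 V supply

def volts_to_g(v):
    """Convert one axis reading (in volts) to acceleration in g."""
    return (v - ZERO_G_V) / SENS_V_PER_G
```

Any per-device offset or sensitivity error feeds straight through this formula, which is why the "gravity" I measured wobbled with tilt.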
The problem is that I wasn’t really calibrating. I never took measurements from my device and compared them with known values. I just trusted the datasheet (which doesn’t promise anything precise, by the way). And even if the datasheet promised an exact sensitivity, it can’t know how the sensor will be placed in the circuit. For example:
- Is the soldering dodgy on any of the pins?
- Is the sensor tilted in the device? (yep, it is off by a few degrees thanks to that dodgy soldering.)
- Is everything in the circuit downstream of the sensor pins exactly what the conversion assumes?
Because the sensor readings are stable and repeatable, real calibration gives much better results. It just takes a bit more work. In this series of posts I’ll survey a few calibration methods: starting with faith-based methods like reading datasheets, moving on to naïve but effective calibration, and finishing off with nonlinear least-squares approaches that require a bit more math.
Evaluating calibration methods
Before jumping into the specific techniques, I want to say a few words about how I evaluate them. I have two basic metrics:
- How accurate is the calibration?
- How easy is it to perform the calibration?
To evaluate accuracy, I produced a standard data set with the accelerometer held fixed in 14 different known positions (approximately corresponding to the faces of a truncated cube). I use methods I’ll describe in another post to determine what the “true” accelerometer reading should be at each of these positions, then compare that to the reading I get from whatever calibration method I am testing.
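The comparison can be sketched as follows. The post doesn't pin down the exact error statistic, so I'm assuming a root-mean-square error over the 14 positions; the vectors below are made-up placeholders (in g), not the real data set.

```python
import math

# Hypothetical data: "true" gravity vectors at each fixed position,
# and the corresponding calibrated readings being evaluated.
true_vectors = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]
measured     = [(0.01, -0.02, 0.98), (1.03, 0.01, 0.02)]

def rms_error(truth, readings):
    """RMS length of the error vector, taken across all positions."""
    sq = [sum((m - t) ** 2 for m, t in zip(mv, tv))
          for tv, mv in zip(truth, readings)]
    return math.sqrt(sum(sq) / len(sq))
```

A single number per calibration method makes the accuracy grades easy to compare side by side.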
If I have such a great calibration method and can get the “true” values out of my device, why do I bother with other techniques? Well, for one, I am curious about how good they are. But the real problem is that the best calibration method is not easy to perform: it involves 30 minutes of careful device placements, measurements, button pushing, and data recording. If it were possible to get an error on the order of 1% with less than 10 seconds of work, wouldn’t you consider making that trade-off?
That is why I also evaluate how easy each calibration method is to perform. When I talk about ease of calibration, I am not talking about how easy it is to write the code. I’m giving you all the code anyway. All I care about is this:
How easy is it to get a device calibrated after it has been programmed?
If calibration takes thirty minutes of futzing and special equipment, I’ll say the easiness is bad. If it just takes a button press and 10 seconds of moving the device around I’ll say it is pretty good. If it doesn’t require anything at all, then it is great.
Sure, you can automate a tedious and error-prone calibration process if the results are so much better it makes it worthwhile, but that is not much help for DIYers or prototyping. Also, for the skiing devices I’m building I want to have a way to recalibrate them in the field when I reconfigure them or even just because the temperature changed.
So every calibration method I describe will get two grades: one for accuracy and one for easiness. In the end, we will see that there are some methods that get good marks in both categories.