“Double precision floating point number. On the Uno and other ATMEGA based boards, this occupies 4 bytes. That is, the double implementation is exactly the same as the float, with no gain in precision. On the Arduino Due, doubles have 8-byte (64 bit) precision.”

I had totally missed this before, so thank you for pointing it out.

Watching the 100-sample video that Jim shared, I realized that with this code I could greatly increase my smoothing window and my sampling rate (originally 10 samples once every 6 seconds) and get a much smoother output from my sensors. I am working on a gardenbot/weather station, and the sensitivity of some of the components makes the displayed values jump around constantly, but with this code I will be able to make the changes much calmer.

plp:49: error: ‘Statistics’ does not name a type

plp.ino: In function ‘void loop()’:

plp:59: error: ‘stats’ was not declared in this scope

Should I copy the Statistics folder to my libraries folder in Arduino? I did that, but it still does not work.

Here is my (your) code:

#include <Statistics.h>
#include

Statistics stats(10);

void setup()
{
  Serial.begin(9600);
}

void loop()
{
  int data = analogRead(A0);
  stats.addData(data);
  Serial.print("Mean: ");
  Serial.print(stats.mean());
  Serial.print(" Std Dev: ");
  Serial.println(stats.stdDeviation());
}

Sample size also affects the impact of bad data. A small sample size reflects the bad data more readily, but also clears it out more quickly. A larger sample size keeps the bad data points around longer, but each one has less of an effect.
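A quick way to see that trade-off is a plain moving average hit by one bad reading. This is an illustrative sketch, not the library's code (the library uses a decayed estimator rather than a true window), and `meanAfterSpike` is a name I made up:

```cpp
#include <cstddef>
#include <deque>
#include <numeric>
#include <vector>

// Feed ten good readings of 100 followed by one bad reading of 500
// through a simple moving average, and return the mean reported
// immediately after the bad reading arrives.
double meanAfterSpike(std::size_t window) {
    std::deque<double> buf;
    std::vector<double> stream(10, 100.0);
    stream.push_back(500.0);  // one bad reading

    double mean = 0.0;
    for (double x : stream) {
        buf.push_back(x);
        if (buf.size() > window) buf.pop_front();  // drop oldest sample
        mean = std::accumulate(buf.begin(), buf.end(), 0.0) / buf.size();
    }
    return mean;
}
```

With a window of 3 the reported mean jumps to about 233 (but the spike falls out after three more samples); with a window of 10 it only rises to 140 (but stays elevated for ten samples).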

Here’s a video of one of my experiments that is using your Statistics library to efficiently smooth out the readings from a spectrum analyzer IC to make a nice, flicker-free light show on a series of LEDs: http://youtu.be/t4MN1q9X0Us

Thanks again!

Jim

Note that m = m*((n-1)/n) will not actually discard old data. Instead, it scales it so that the value you expect to store remains approximately bounded by N*mean for the mean, and N*(mean^2) for the sum of squares used for the variance. If you set a window length of two and try the sequence {10, 10, 10, 20, 20, 20}, you will see that you get a final number of 15, not the 2-observation mean of 20. If you used two pairs of accumulators, however, you could implement a window constrained between N and 2N by switching accumulators whenever the count reaches N, clearing the “new” accumulator, and always using both together for output. This would give you the long-term memoryless property that you desire.
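A minimal sketch of that two-accumulator scheme in plain C++ (the name `TwoAccumulatorMean` is illustrative, not from the library, and only the mean is shown; the variance would carry a second pair of sum-of-squares accumulators):

```cpp
#include <cstddef>

// Two sums are filled alternately. Whenever the active one reaches N
// samples, the older one is cleared and filling switches to it, so the
// reported mean always covers between N and 2N of the most recent
// samples and old data is genuinely discarded.
struct TwoAccumulatorMean {
    std::size_t N;                  // nominal window length
    double sum[2] = {0.0, 0.0};
    std::size_t count[2] = {0, 0};
    int active = 0;                 // index of the accumulator being filled

    explicit TwoAccumulatorMean(std::size_t n) : N(n) {}

    void addData(double x) {
        if (count[active] == N) {
            active = 1 - active;    // switch accumulators
            sum[active] = 0.0;      // clear the "new" one
            count[active] = 0;
        }
        sum[active] += x;
        count[active] += 1;
    }

    double mean() const {           // always use both accumulators together
        std::size_t n = count[0] + count[1];
        return n ? (sum[0] + sum[1]) / n : 0.0;
    }
};
```

Running the same {10, 10, 10, 20, 20, 20} sequence with N = 2 through this sketch reports 17.5 at the end (the mean of the last four samples {10, 20, 20, 20}), much closer to 20 than the decayed estimator's 15.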

The “well behaved” warning is because of large N*value – particularly on readings that can vary widely. Overflow errors would wipe out the accuracy of all subsequent readings, and are reasonably likely to happen if there is respectable variance and auto-correlated errors.
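The hazard is easy to reproduce on a desktop, since on ATMEGA boards `double` is the same 32-bit float. This is an illustration of the precision loss, not the library's code, and `addReading` is a made-up name:

```cpp
// A 32-bit IEEE 754 float has a 24-bit significand, so it represents
// integers exactly only up to 2^24 = 16777216. Once a running sum of
// N*value grows past that point, adding a small reading is silently
// rounded away and the accumulator stops tracking new data.
float addReading(float runningSum, float reading) {
    return runningSum + reading;  // result rounds to the nearest float
}
```

For a small sum the addition behaves as expected, but `addReading(16777216.0f, 1.0f)` returns 16777216.0f unchanged: the 1.0 falls below the significand's resolution at that magnitude.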

Why would I notice all this? Why, by burning myself while making a rate-adaptive Morse decoder, of course.
