I’ve been doing embedded work for so long now that it’s rare that I come across a need that I haven’t run into before. Well, it happened the other day, so I thought I’d share it with you.
Here’s the situation. I have a transducer whose role is to determine a binary condition (essentially X is present / absent). The transducer is AC coupled and band-pass filtered such that what I’m looking for is the presence of a noisy AC signal. My first inclination was to take the band-pass filtered signal, rectify it and average it over time. While this worked reasonably well, the delta between X being present / absent was not as large as I’d have liked. Accordingly I investigated other metrics. Somewhat to my surprise I found that the peak signal values (i.e. the ratio of peak X present to peak X absent) gave a far better discriminator. Thus I found myself needing to find the peak (i.e. maximum value) of a moving time series of data values.
To prove the concept, I used a brute force approach. That is, every time I received a new reading I’d store it in a buffer, the size of which was determined by the amount of time I wished to look back over. I’d then search the entire buffer looking for the maximum value. This approach of course worked fine – but it was hopelessly inefficient. Given that I am receiving a new value to process every few milliseconds, this inefficiency is unacceptable in the real product. Accordingly, I needed a better algorithm.
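That brute force approach might look something like this (a reconstruction for illustration; the names and window size are placeholders, not the production code):

```c
#include <stdint.h>

#define WINDOW_SIZE 31

/* Store the new reading in a circular buffer, then scan the whole
   buffer for the maximum. O(WINDOW_SIZE) work on every call. */
uint16_t brute_force_peak_detector(uint16_t value)
{
    static uint16_t buffer[WINDOW_SIZE] = {0U};
    static uint16_t wr_idx = 0U;
    uint16_t peak = 0U;
    uint16_t i;

    buffer[wr_idx] = value;
    if (++wr_idx >= WINDOW_SIZE)
    {
        wr_idx = 0U;
    }

    for (i = 0U; i < WINDOW_SIZE; ++i)
    {
        if (buffer[i] > peak)
        {
            peak = buffer[i];
        }
    }
    return peak;
}
```

Correct, but the full scan on every sample is what makes it unacceptable at millisecond sample rates.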
My first port of call was of course the internet. If you do a search for peak detector algorithms then you’ll find a plethora of algorithms. However they are all way too sophisticated for what I need, as they are aimed at finding *all* the local maxima within an N-Dimensional array (apparently an important problem in physics, image processing etc). I just want to know what’s the maximum value in a buffer of values – and I want to know it as efficiently as possible.
After pondering the problem for a while, my thoughts first turned to an efficient median filtering algorithm, which I discussed here. My rationale was quite simple: the efficient median filtering algorithm inherently keeps the time series data sorted, so I should be able to easily adapt it to return a maximum value rather than a median value. It turned out to be quite trivial. Indeed it’s so trivial that I wonder why I haven’t modified it before. Here’s the code (note: as explained in my original posting on median filtering, this algorithm and code are mostly courtesy of Phil Ekstrom):
#define STOPPER 0                /* Smaller than any datum */
#define MEDIAN_FILTER_SIZE (31)

void median_filter(uint16_t datum, uint16_t *calc_median, uint16_t *calc_peak)
{
    struct pair
    {
        struct pair *point;      /* Pointers forming list linked in sorted order */
        uint16_t value;          /* Values to sort */
    };
    static struct pair buffer[MEDIAN_FILTER_SIZE] = {0};  /* Buffer of nwidth pairs */
    static struct pair *datpoint = buffer;                /* Pointer into circular buffer of data */
    static struct pair small = {NULL, STOPPER};           /* Chain stopper */
    static struct pair big = {&small, 0};                 /* Pointer to head (largest) of linked list */

    struct pair *successor;      /* Pointer to successor of replaced data item */
    struct pair *scan;           /* Pointer used to scan down the sorted list */
    struct pair *scanold;        /* Previous value of scan */
    struct pair *median;         /* Pointer to median */
    uint16_t i;

    if (datum == STOPPER)
    {
        datum = STOPPER + 1;     /* No stoppers allowed */
    }

    if ((++datpoint - buffer) >= MEDIAN_FILTER_SIZE)
    {
        datpoint = buffer;       /* Increment and wrap data in pointer */
    }

    datpoint->value = datum;     /* Copy in new datum */
    successor = datpoint->point; /* Save pointer to old value's successor */
    median = &big;               /* Median initially to first in chain */
    scanold = NULL;              /* Scanold initially null */
    scan = &big;                 /* Points to pointer to first (largest) datum in chain */

    /* Handle chain-out of first item in chain as special case */
    if (scan->point == datpoint)
    {
        scan->point = successor;
    }
    scanold = scan;              /* Save this pointer and */
    scan = scan->point;          /* step down chain */

    /* Loop through the chain, normal loop exit via break */
    for (i = 0; i < MEDIAN_FILTER_SIZE; ++i)
    {
        /* Handle odd-numbered item in chain */
        if (scan->point == datpoint)
        {
            scan->point = successor;            /* Chain out the old datum */
        }
        if (scan->value < datum)                /* If datum is larger than scanned value, */
        {
            datpoint->point = scanold->point;   /* chain it in here */
            scanold->point = datpoint;          /* and mark it chained in */
            datum = STOPPER;
        }

        /* Step median pointer down chain after doing odd-numbered element */
        median = median->point;
        if (scan == &small)
        {
            break;               /* Break at end of chain */
        }
        scanold = scan;          /* Save this pointer and */
        scan = scan->point;      /* step down chain */

        /* Handle even-numbered item in chain */
        if (scan->point == datpoint)
        {
            scan->point = successor;
        }
        if (scan->value < datum)
        {
            datpoint->point = scanold->point;
            scanold->point = datpoint;
            datum = STOPPER;
        }
        if (scan == &small)
        {
            break;
        }
        scanold = scan;
        scan = scan->point;
    }

    *calc_median = median->value;
    *calc_peak = big.point->value;
}
To use this code, simply set MEDIAN_FILTER_SIZE to your desired value, and then call median_filter every time you have a new value to process. The advantage of this code is that it gives you two results: the median and the maximum.
However, when I benchmarked it against the brute force approach I discovered that the brute force algorithm was actually faster, by a reasonable amount. After reflecting upon this for a while I realized that the median filtering approach was of course keeping all of the data sorted, which was way more than I needed for a simple peak detector. Thus it was time for another approach.
After thinking about it for a while, it was clear that the only datum I cared about in my buffer was the current maximum value. Thus upon receipt of a new value, I essentially had to decide:
- Is this new value >= current maximum value? If it is then it’s the new maximum value. Note that I really do mean >= and not >. The reason is explained below.
- Otherwise, is this new value about to overwrite the existing maximum value? If so, overwrite it and use a brute force search to find the new maximum value.
In order to make the algorithm as efficient as possible we need to minimize the number of times a brute force search is required. If your buffer contains only one instance of the maximum value, then you’ll simply have to do the brute force search when you replace the maximum value. However, if your buffer contains multiple copies of the maximum value, then you’ll minimize the number of brute force searches by declaring the operative maximum value to be the one furthest away from where we will next write in the buffer. It’s for this reason that we use a >= comparison in step 1.
In a similar vein, if we have to do a brute force search then if there are multiple copies of the maximum value then once again we want to choose the maximum value furthest away from where we will write to in the buffer next.
Anyway, here’s the code:
#define WINDOW_SIZE 31

uint16_t efficient_peak_detector(uint16_t value)
{
    static uint16_t wbuffer[WINDOW_SIZE] = {0U};
    static uint16_t wr_idx = 0U;
    static uint16_t peak_value = 0U;
    static uint16_t peak_value_idx = 0U;

    if (value >= peak_value)
    {
        peak_value = value;              /* New peak value, so record it */
        peak_value_idx = wr_idx;
        wbuffer[wr_idx] = value;
    }
    else
    {
        wbuffer[wr_idx] = value;         /* Not a new peak value, so just store it */
        if (wr_idx == peak_value_idx)    /* Have we overwritten the peak value? */
        {
            /* Yes, so we need to do a brute force search to find the new
               maximum. Note that for efficiency reasons, if there are
               multiple copies of the new peak value, then we want to choose
               the one whose index is as far away as possible from the
               current index. */
            uint16_t idx;
            uint16_t cnt;

            for (cnt = 0U, idx = wr_idx, peak_value = 0U; cnt < WINDOW_SIZE; ++cnt)
            {
                if (wbuffer[idx] >= peak_value)
                {
                    peak_value = wbuffer[idx];   /* Record new peak */
                    peak_value_idx = idx;
                }
                if (++idx >= WINDOW_SIZE)
                {
                    idx = 0;
                }
            }
        }
    }

    if (++wr_idx >= WINDOW_SIZE)
    {
        wr_idx = 0;
    }

    return peak_value;
}
Obviously, if you make your buffer size a power of two then you can optimize the buffer wrapping code. There are a couple of other minor optimizations that can be made. For example on eight bit processors, making the various indices and counters (wr_idx, peak_value_idx, idx, cnt) 8 bits will speed things up a lot. Similarly on a 32 bit processor you will likely gain a bit by using 32 bit values.
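For example, with a power-of-two window (note this changes WINDOW_SIZE from the article's 31 to 32) the wrap collapses to a single AND:

```c
#include <stdint.h>

#define WINDOW_SIZE 32U   /* Must be a power of two for the mask trick */

/* Advance and wrap the write index with a mask instead of a
   compare-and-branch. */
static inline uint16_t next_index(uint16_t idx)
{
    return (uint16_t)((idx + 1U) & (WINDOW_SIZE - 1U));
}
```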
Notwithstanding the aforementioned optimizations, this code is on average considerably faster than the brute force approach and not much larger. Its main limitation is that its run time is highly variable depending upon whether we have to determine a new maximum value using the brute force approach or not.
Clearly it’s trivial to modify this code to perform a ‘trough detector’ (aka minimum detector), or indeed have it do both functions.
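For example, a trough detector is the mirror image, with the comparisons reversed and the buffer primed with UINT16_MAX so that empty slots don't masquerade as troughs (a sketch along the lines of the peak detector above):

```c
#include <stdint.h>
#include <stdbool.h>

#define WINDOW_SIZE 31

/* Mirror image of the peak detector: track the minimum. Note <=
   (rather than <) for the same "furthest from the write index"
   reason as the peak detector's >=. */
uint16_t efficient_trough_detector(uint16_t value)
{
    static uint16_t wbuffer[WINDOW_SIZE];
    static uint16_t wr_idx = 0U;
    static uint16_t trough_value = UINT16_MAX;
    static uint16_t trough_value_idx = 0U;
    static bool initialized = false;

    if (!initialized)
    {
        uint16_t i;
        for (i = 0U; i < WINDOW_SIZE; ++i)
        {
            wbuffer[i] = UINT16_MAX;   /* Empty slots must not look like troughs */
        }
        initialized = true;
    }

    if (value <= trough_value)
    {
        trough_value = value;          /* New trough value, so record it */
        trough_value_idx = wr_idx;
        wbuffer[wr_idx] = value;
    }
    else
    {
        wbuffer[wr_idx] = value;
        if (wr_idx == trough_value_idx)   /* Have we overwritten the trough? */
        {
            uint16_t idx;
            uint16_t cnt;

            for (cnt = 0U, idx = wr_idx, trough_value = UINT16_MAX; cnt < WINDOW_SIZE; ++cnt)
            {
                if (wbuffer[idx] <= trough_value)
                {
                    trough_value = wbuffer[idx];
                    trough_value_idx = idx;
                }
                if (++idx >= WINDOW_SIZE)
                {
                    idx = 0U;
                }
            }
        }
    }

    if (++wr_idx >= WINDOW_SIZE)
    {
        wr_idx = 0U;
    }

    return trough_value;
}
```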
Median filtering
Saturday, October 2nd, 2010 by Nigel Jones
NOTE: I have heavily edited this blog post since it was originally published, based on some recent testing
If your engineering education was anything like mine then I’m sure that you learned all about different types of linear filters whose essential objective was to pass signals within a certain frequency band and to reject as far as possible all others. These filters are of course indispensable for many types of ‘noise’. However in the real world of embedded systems it doesn’t take one too long to realize that these classical linear filters are useless against burst noise. This kind of noise typically arises from a quasi-random event. For example a 2-way radio may be keyed next to your product or an ESD event may occur close to your signal. Whenever this happens your input signal may transiently go to a ridiculous value. For example I have often seen A2D readings that look something like this: 385, 389, 388, 388, 912, 388, 387. The 912 value is presumably anomalous and as such should be rejected. If you try and use a classical linear filter then you will almost certainly find that the 912 reading actually ends up having a significant impact on the output. The ‘obvious’ answer in this case is to use a median filter. Despite the supposed obviousness of this, it’s my experience that median filters are used remarkably infrequently in embedded systems. I don’t know why this is, but my guess is that it is a combination of a lack of knowledge of their existence, coupled with difficulty of implementation. Hopefully this post will go some way to rectifying both issues.
As its name suggests, a median filter is one which takes the middle of a group of readings. It’s normal for the group to have an odd number of members such that there is no ambiguity about the middle value. Thus the general idea is that one buffers a certain number of readings and takes the middle reading.
Until recently I recognized three classes of median filter, based purely on the size of the filter. They were:
- Filter size of 3 (i.e. the smallest possible).
- Filter size of 5, 7 or 9 (the most common).
- Filter size of 11 or more.
However, I now espouse a simple dichotomy:
- Filter size of 3
- Filter size > 3
Filter size of 3
The filter size of three is of course the smallest possible filter. It’s possible to find the middle value simply via a few if statements. The code below is based on an algorithm described here. Clearly this is small and fast code.
uint16_t middle_of_3(uint16_t a, uint16_t b, uint16_t c)
{
    uint16_t middle;

    if ((a <= b) && (a <= c))
    {
        middle = (b <= c) ? b : c;
    }
    else if ((b <= a) && (b <= c))
    {
        middle = (a <= c) ? a : c;
    }
    else
    {
        middle = (a <= b) ? a : b;
    }

    return middle;
}
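For example, middle_of_3 can be wrapped into a streaming size-3 filter; the windowing wrapper below is my own sketch (middle_of_3 is reproduced so the example is self-contained):

```c
#include <stdint.h>

/* The middle_of_3 function from the article. */
static uint16_t middle_of_3(uint16_t a, uint16_t b, uint16_t c)
{
    uint16_t middle;

    if ((a <= b) && (a <= c))
    {
        middle = (b <= c) ? b : c;
    }
    else if ((b <= a) && (b <= c))
    {
        middle = (a <= c) ? a : c;
    }
    else
    {
        middle = (a <= b) ? a : b;
    }
    return middle;
}

/* Size-3 median filter over a stream: remembers the two previous
   readings and returns the median of the last three. */
uint16_t median3_filter(uint16_t datum)
{
    static uint16_t prev1 = 0U;   /* datum[n-1] */
    static uint16_t prev2 = 0U;   /* datum[n-2] */
    uint16_t result = middle_of_3(datum, prev1, prev2);

    prev2 = prev1;
    prev1 = datum;
    return result;
}
```

A single 912-count burst in the middle of readings around 388 is rejected outright, which is exactly the behaviour a linear filter cannot give you.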
Filter size > 3
For filter sizes greater than 3 I suggest you turn to an algorithm described by Phil Ekstrom in the November 2000 edition of Embedded Systems Programming magazine. With the recent hatchet job on embedded.com I can’t find the original article. However there is a copy here. Ekstrom’s approach is to use a linked list. The approach works essentially by observing that once an array is sorted, the act of removing the oldest value and inserting the newest value doesn’t result in the array being significantly unsorted. As a result his approach works well – particularly for large filter sizes.
Be warned that there are some bugs in the originally published code (which Ekstrom corrected). However given the difficulty of finding anything on embedded.com nowadays I have opted to publish my implementation of his code. Be warned that the code below was originally written in Dynamic C and has been ported to standard C for this blog posting. It is believed to work. However it would behoove you to check it thoroughly before use!
#define STOPPER 0                /* Smaller than any datum */
#define MEDIAN_FILTER_SIZE (13)

uint16_t median_filter(uint16_t datum)
{
    struct pair
    {
        struct pair *point;      /* Pointers forming list linked in sorted order */
        uint16_t value;          /* Values to sort */
    };
    static struct pair buffer[MEDIAN_FILTER_SIZE] = {0};  /* Buffer of nwidth pairs */
    static struct pair *datpoint = buffer;                /* Pointer into circular buffer of data */
    static struct pair small = {NULL, STOPPER};           /* Chain stopper */
    static struct pair big = {&small, 0};                 /* Pointer to head (largest) of linked list */

    struct pair *successor;      /* Pointer to successor of replaced data item */
    struct pair *scan;           /* Pointer used to scan down the sorted list */
    struct pair *scanold;        /* Previous value of scan */
    struct pair *median;         /* Pointer to median */
    uint16_t i;

    if (datum == STOPPER)
    {
        datum = STOPPER + 1;     /* No stoppers allowed */
    }

    if ((++datpoint - buffer) >= MEDIAN_FILTER_SIZE)
    {
        datpoint = buffer;       /* Increment and wrap data in pointer */
    }

    datpoint->value = datum;     /* Copy in new datum */
    successor = datpoint->point; /* Save pointer to old value's successor */
    median = &big;               /* Median initially to first in chain */
    scanold = NULL;              /* Scanold initially null */
    scan = &big;                 /* Points to pointer to first (largest) datum in chain */

    /* Handle chain-out of first item in chain as special case */
    if (scan->point == datpoint)
    {
        scan->point = successor;
    }
    scanold = scan;              /* Save this pointer and */
    scan = scan->point;          /* step down chain */

    /* Loop through the chain, normal loop exit via break */
    for (i = 0; i < MEDIAN_FILTER_SIZE; ++i)
    {
        /* Handle odd-numbered item in chain */
        if (scan->point == datpoint)
        {
            scan->point = successor;            /* Chain out the old datum */
        }
        if (scan->value < datum)                /* If datum is larger than scanned value, */
        {
            datpoint->point = scanold->point;   /* chain it in here */
            scanold->point = datpoint;          /* and mark it chained in */
            datum = STOPPER;
        }

        /* Step median pointer down chain after doing odd-numbered element */
        median = median->point;
        if (scan == &small)
        {
            break;               /* Break at end of chain */
        }
        scanold = scan;          /* Save this pointer and */
        scan = scan->point;      /* step down chain */

        /* Handle even-numbered item in chain */
        if (scan->point == datpoint)
        {
            scan->point = successor;
        }
        if (scan->value < datum)
        {
            datpoint->point = scanold->point;
            scanold->point = datpoint;
            datum = STOPPER;
        }
        if (scan == &small)
        {
            break;
        }
        scanold = scan;
        scan = scan->point;
    }

    return median->value;
}
To use this code, simply call the function every time you have a new input value. It will return the median of the last MEDIAN_FILTER_SIZE readings. This approach can consume a fair amount of RAM as one has to store both the values and the pointers. However if this isn’t a problem for you then it really is a nice algorithm that deserves to be in your tool box as it is dramatically faster than algorithms based upon sorting.
Median filtering based on sorting
In the original version of this article I espoused using a sorting based approach to median filtering when the filter size was 5, 7 or 9. I no longer subscribe to this belief. However for those of you that want to do it, here’s the basic outline:
if (ADC_Buffer_Full)
{
    uint_fast16_t adc_copy[MEDIAN_FILTER_SIZE];
    uint_fast16_t filtered_cnts;

    /* Copy the data */
    memcpy(adc_copy, ADC_Counts, sizeof(adc_copy));

    /* Sort it */
    shell_sort(adc_copy, MEDIAN_FILTER_SIZE);

    /* Take the middle value */
    filtered_cnts = adc_copy[(MEDIAN_FILTER_SIZE - 1U) / 2U];

    /* Convert to engineering units */
    ...
}
Final Thoughts
Like most things in embedded systems, median filters have certain costs associated with them. Clearly median filters introduce a delay to a step change in value which can be problematic at times. In addition median filters can completely clobber frequency information in the signal. Of course if you are only interested in DC values then this is not a problem. With these caveats I strongly recommend that you consider incorporating median filters in your next embedded design.
38 Responses to “Median filtering”
I use a technique called ‘easing’ sometimes. It does not require storing previous values in an array and therefore does not require a lot of resources. It basically works using this formula:
filtered += (adc_value - filtered) * easing; // the value of easing is something like 0.05
A nice interactive example can be seen here: http://processing.org/learning/basics/easing.html
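In C, Markus's formula might be implemented like this (a sketch; floating point is used for clarity, though a fixed-point shift-based version is more usual on small MCUs):

```c
#include <stdint.h>

/* Simple exponential "easing" (first-order IIR low-pass) filter.
   EASING of 0.05 means each new reading nudges the output 5% of
   the way toward itself. */
#define EASING 0.05f

float ease(float filtered, uint16_t adc_value)
{
    filtered += ((float)adc_value - filtered) * EASING;
    return filtered;
}
```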
Markus: What you are describing is effectively a low pass IIR filter. I also use these all the time. While these types of filters are good at rejecting additive white Gaussian noise they are ineffective at rejecting burst noise.
The linked list is quite interesting, indeed.
I usually find it useful, especially in the analog input capture module (HW access layer), to use a mixed approach: calculating the mean value of the center samples in the sorted circular buffer, instead of using the single middle value.
Given a certain FILTER_SIZE, FILTER_TAIL and FILTER_HEAD are defined (values lower than FILTER_SIZE/2), which set the amount of the circular buffer to be discarded for the mean value calculation; obviously heads and tails are usually set to the same value.
That’s an interesting approach. If you have a FILTER_SIZE of 5, and you compute the mean of the middle 3 values then your median filter will only effectively reject a single burst value. I think you could get the same result by having a FILTER_SIZE of 3, taking the middle value and then feeding it into a standard low pass filter.
In fact, I tend to favour the use of two filters: a first one to reject the impulsive noise in the lower ADC capture layer, and a second one adapted to the device that the functional module will control/monitor (e.g. an LPF IIR).
But for many inputs a single “median-mean” filter in the ADC capture layer often suffices (typically using a seven-position buffer and calculating the mean of the three middle values) as a combined burst and low pass filter.
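A sketch of that median-mean idea (seven-sample window, mean of the three middle values; the insertion sort and the names here are my own choices, not from the comment):

```c
#include <stdint.h>
#include <string.h>

#define FILTER_SIZE 7U
#define FILTER_TAIL 2U   /* Samples discarded at each end after sorting */

/* Sort a copy of the window, then average the middle
   FILTER_SIZE - 2*FILTER_TAIL values. */
uint16_t median_mean(const uint16_t *window)
{
    uint16_t sorted[FILTER_SIZE];
    uint32_t sum = 0U;
    uint16_t i;

    memcpy(sorted, window, sizeof(sorted));

    /* Insertion sort: fine for such a small array */
    for (i = 1U; i < FILTER_SIZE; ++i)
    {
        uint16_t key = sorted[i];
        uint16_t j = i;
        while ((j > 0U) && (sorted[j - 1U] > key))
        {
            sorted[j] = sorted[j - 1U];
            --j;
        }
        sorted[j] = key;
    }

    /* Mean of the middle three values */
    for (i = FILTER_TAIL; i < (FILTER_SIZE - FILTER_TAIL); ++i)
    {
        sum += sorted[i];
    }
    return (uint16_t)(sum / (FILTER_SIZE - 2U * FILTER_TAIL));
}
```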
Here we have another interesting post that has developed into an insightful thread.
The thread has certainly been interesting. I woke up at 5 am today thinking about comparing the various approaches …
I couldn’t understand why ADC_Counts[] must be copied to the adc_copy[] array in the shell sort case.
After acquiring MEDIAN_FILTER_SIZE values, the ADC_Counts[] array may be sorted in place without any problem. The next value will start filling ADC_Counts[] again from the first position, overwriting old values.
Where am I wrong?
I don’t think you are wrong at all. I like to do the copy because it allows me to easily identify problems in both the input signal and also in the filtering algorithm. I should have made that clear. I will update the article, so thanks for pointing this out.
If you sort-in-place, then overwrite, the value being overwritten by the next sample won’t necessarily be the oldest one in the buffer.
Yes – I’d thought of that. However in practice it’s not clear to me that it’s a problem. Mind you it’s been a very long day …
As the algorithm is written, the sorting and filtering process is executed every MEDIAN_FILTER_SIZE samples.
if (ADC_Buffer_Full)
{
/* Compute the median of the filter */
}
Even if the very first sample after sorting/filtering overwrites an old (but not the oldest) sample, it’s not a problem. You have to acquire a full MEDIAN_FILTER_SIZE samples before executing the sorting/filtering again.
If you wait for another MEDIAN_FILTER_SIZE samples before sorting/filtering, then your effective sample rate has been reduced by a factor of MEDIAN_FILTER_SIZE. For some applications, this may well be fine. If you don’t sort-in-place, you can keep producing filtered results at the same rate as the fundamental sampling process (once the initial fill-the-ring-buffer stage has finished, of course).
The design problem as originally posed was to make a median filter — not a median filter with rate down-conversion.
In fact, for the median we do not need to sort the whole array, and when used with an appropriate sorting algorithm we can add a little optimization.
For my designs I often use a simple self-made sorting function (bubble sort) and stop sorting after half of the array is sorted.
I have often wondered about this as an approach. I would be very interested in comparing your self made bubble sort that sorts half the array to say an insertion sort that sorts the entire array. If you’d care to post the code, then I will do a comparison similar to what I did in my sorting algorithm post.
At entry, sort_buf contains a copy of the samples.
At exit, the “upper” half of sort_buf is sorted and (what is important) contains values bigger than those in the “lower” half.
Index_T idx;
Index_T top_idx;
Index_T sort_loop;

top_idx = (Index_T)(MEDIAN_LEN - 1);

/* Bubble sort of the (upper) half of the data array (to obtain the center/median data) */
for (sort_loop = 0; sort_loop <= (MEDIAN_LEN / 2); sort_loop++)
{
    for (idx = 0; idx < top_idx; idx++)
    {
        if (sort_buf[idx] > sort_buf[idx + 1])
        {
            DATA_SWAP(sort_buf[idx], sort_buf[idx + 1]);
        }
    }
    top_idx--;
}

Thanks Artur. I will benchmark it and post my results.
Thanks Artur. I will bench mark it and post my results.
I did some benchmarks with the bubble “half” sort, shell sort and insertion sort for the median search. For sizes of 5, 7 and 9 elements, the best algorithm in terms of performance is insertion sort. Bubble “half” sort gives me pretty much the same result as insertion sort for a size of 5 elements, but when dealing with more than 5 elements, insertion sort and shell sort are always faster.
For the big filter, why can’t you simply add all adc readings to a binary tree?
Pseudo:
– Add “adc reading” to binary tree. The first reading becomes the root.
– If the node is added to the left of the root (lesser), increment “left counter”.
– If the node is added to the right of the root (greater or equal), increment “right counter”.
– If “left counter” and “right counter” differ more than 1, balance the tree so that one side has n nodes and the other has either n or n+1 nodes.
– There is no need for recursive balancing, we just want to make sure that the root is in the middle.
– Once there are no readings left, the root == the median. No further calculations needed.
Since there is a limited number of nodes, this method should be much more efficient and straight-forward than linked lists.
An interesting idea. Care to code it up and compare its performance to the linked list approach?
I did fiddle around a bit with this after I posted, and what I came up with would more or less just be another flavour to the example posted. Plus you get additional RAM consumption for extra pointers, one left and one right pointer for each leaf in the tree.
The binary tree only performs better when the number of ADC samples are vast, and I can’t see any reason to have so many samples.
I’m always confused when people implement full pointer-based linked lists when they are statically sized. IMO, there’s no reason not to use 8-bit array indices instead of pointers in that case. Especially when you’re likely to be multiplying this filter by N sensor inputs.
I think that’s a valid point. I think Ekstrom originally wrote for a large CPU where pointers are no more expensive than array indices. On a small 8 bit system this is of course not the case. I’m really enjoying the suggestions made in the comments. Keep them coming and I will endeavor to collate them and then run some comparisons to see which algorithms give the best results.
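For illustration, Ekstrom's pair structure could be re-expressed with one-byte indices; this is a sketch of the data layout only, not a full port of the filter, and INVALID_IDX is my own name for the chain terminator:

```c
#include <stdint.h>

#define MEDIAN_FILTER_SIZE 13U
#define INVALID_IDX 0xFFU   /* Marks the end of the chain */

/* Index-linked version of Ekstrom's pair structure: on an 8-bit CPU
   a one-byte index is cheaper than a two-byte (or wider) pointer,
   and the per-filter RAM cost drops accordingly, which matters when
   you instantiate the filter once per sensor input. */
struct idx_pair
{
    uint8_t  next;    /* Index of successor in sorted order */
    uint16_t value;   /* Value to sort */
};

static struct idx_pair buffer[MEDIAN_FILTER_SIZE];
```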
“With the recent hatchet job on embedded.com…”
What is even worse is what I get from one of Embedded’s sister sites when I try to go there:
# http://www.eebeat.com is in the following categories: Spyware, Business.
# Hosts in the Phishing, Spyware, and Conficker Worm categories are blocked by default if you do not have a defense plan.
That message comes up instead of the actual site.
I do wonder if all this article is about is that someone left the capacitor off the input to the AtoD
One of my designs uses an MSP430, and I simply run the AtoD round all of its channels under DMA, then add the whole dratted buffer to a “total” array… it’s even possible to DMA into and out of the adder! By the time it’s done 16 of those, the right-justified 12-bit values have overflowed nicely into the top 12 bits, which is where I wanted them anyway for conversion to an “AtoD resolution-independent” left-justified format!
Now it may be that while this gives me nice hardware-moderated averaging, I remain vulnerable to “splashes”, but maybe some sensible h/w filtering at the AToD input, reflecting the bandwidth the end-user is actually able to make use of, removes the splashes anyway…
D
D, When you referred to –> “it’s even possible to DMA into and out of the adder!”,
Did you mean the hardware multiplier in multiply accumulate mode?
If not, I’m curious as to how you piped DMA to just an adder.
Thanks
R
yes I think that’s the idea… it was a long time ago but if I recall I didn’t actually end up using this idea but it did look possible to do.
d
I’m a novice in C programming. I tried to use your program, but don’t understand how to call it. I have a sequence of 9 values.
m_array[9] ={4,5,5,4,3,7,8,0,10};
and my MEDIAN_FILTER_SIZE is 5. Could you add an example of how to call your function? Should I replace my array with the buffer[] variable in your code?
thanks for any insight
I will post some example code tonight. I have discovered that trying to write example code on an iPad is an exercise in futility, so I’m afraid you will have to wait for me to get back home.
Thanks. By the way, my variable data types are 16-bit SHORTs, and I just changed uint16_t in this algorithm to SHORT.
I changed it to short because I had negative values as inputs too, but it doesn’t work.
I got home way too late to do anything last night, so I apologize for the delay.
First off, this algorithm is really designed for processing a continuous stream of data which you wish to median filter. For example, if you have an analog-digital converter that is constantly outputting new data that you wish to median filter, then this is the algorithm for you. If you just have a one off static array to be median filtered, then you would be better off just sorting it and taking the middle value. That being said, if you really want to use the code to do what you want, then here’s what it would look like.
1. Change MEDIAN_FILTER_SIZE to the desired size of your filter (i.e. 5).
2. Your code would now look like this:
SHORT m_array[9] ={4,5,5,4,3,7,8,0,10};
void example_use(void)
{
SHORT idx;
SHORT median_value;
for (idx = 0; idx < 9; ++idx)
{
median_value = median_filter(m_array[idx]);
}
}
Now there are some subtleties here to watch out for. The algorithm as written only works on unsigned data (which is the sort of data you usually, but not always, get off an ADC). To use it with signed data, I think (but have not tested) that you need to do the following:
1. Change the first few lines of code so that they look like this:
SHORT median_filter(SHORT datum)
{
struct pair
{
struct pair *point; /* Pointers forming list linked in sorted order */
SHORT value; /* Values to sort */
};
2. Make this change to STOPPER
#define STOPPER (-32768) /* Smaller than any datum */
As an aside, the code above has been written to make it as easy to understand as possible. If I was doing this for real, I'd use the C99 data types and the N_ELEMENTS macro. Please search the blog if you aren't familiar with these as I think you'll be glad you did.
Thanks for your quick reply. Now my program works perfectly. I will look at the C99 data types and the N_ELEMENTS macro. When I profiled my code, the version which sorted my 5-element window every time took more time than this median filter. I’m sure this one is faster.
Thank you so much
Unless you have a particularly noisy system, then using median filters like this wastes quite a lot of valid data. Outliers do happen, but rather than taking just the median value of a group of samples, I usually prefer to combine the outlier rejection and filtering.
Most often, I keep track of the last six samples in a circular buffer, along with the sum of the values. It’s easy to update the total count when you take the oldest sample out and put a new sample in. Then to read the “current” value, you scan through the buffer to find the highest and the lowest values (it’s a linear scan, and can always start at address 0 – you don’t need to follow the logical order or track the index numbers), and subtract them from your total count. This gives you an average (mean) of your samples, scaled by 4 (easy to divide again if you want), with outliers removed. The code is simple, small and fast, and using an average here will increase your resolution (for Gaussian noise).
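David's scheme could be sketched like this (my own rendering of his description: six samples, a running sum, and the min/max rejected on read):

```c
#include <stdint.h>

#define BUF_SIZE 6U

static uint16_t samples[BUF_SIZE];
static uint32_t total = 0U;
static uint8_t  oldest = 0U;

/* Replace the oldest sample and keep the running sum up to date. */
void olympic_add(uint16_t sample)
{
    total -= samples[oldest];
    samples[oldest] = sample;
    total += sample;
    if (++oldest >= BUF_SIZE)
    {
        oldest = 0U;
    }
}

/* Mean of the four middle samples, scaled by 4: scan for the
   extremes and subtract them from the running total. */
uint32_t olympic_read(void)
{
    uint16_t lo = samples[0];
    uint16_t hi = samples[0];
    uint8_t  i;

    for (i = 1U; i < BUF_SIZE; ++i)
    {
        if (samples[i] < lo) { lo = samples[i]; }
        if (samples[i] > hi) { hi = samples[i]; }
    }
    return total - lo - hi;   /* Sum of 4 samples == mean * 4 */
}
```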
If you really want to keep a list of samples in a sorted order, then a bubble sort is usually the best choice of sorting algorithm. While bubble sort is very inefficient for mixed lists, it is very good for almost-sorted lists – and being simple it can be written clearly and fast. When you have a small list that is kept sorted, and you are removing one sample and adding another before a re-sort, then bubble-sort only needs a single linear pass through the list and is thus highly efficient. (If the list is long enough, however, it would be more efficient to find the new spot using a binary search and “open the gap” using a block move – a sort of single-pass insertion sort.)
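A sketch of that single-pass re-sort (my own illustration: pos is the slot where the new sample has just replaced the old one in an otherwise sorted array):

```c
#include <stdint.h>

/* One bubble pass is enough to re-sort a list in which exactly one
   element (at position pos) has been replaced: bubble it left or
   right until it sits between its neighbours. */
void resort_one(uint16_t *list, uint16_t len, uint16_t pos)
{
    uint16_t tmp;

    while ((pos > 0U) && (list[pos] < list[pos - 1U]))
    {
        tmp = list[pos];
        list[pos] = list[pos - 1U];
        list[pos - 1U] = tmp;
        --pos;
    }
    while (((pos + 1U) < len) && (list[pos] > list[pos + 1U]))
    {
        tmp = list[pos];
        list[pos] = list[pos + 1U];
        list[pos + 1U] = tmp;
        ++pos;
    }
}
```

For longer lists, the binary search plus block move David mentions would replace the swap loop.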
Thanks for an interesting comment David. I actually tackled the issue of sorting small arrays in a separate post. See http://embeddedgurus.com/stack-overflow/2009/03/sorting-in-embedded-systems/
Yes, I read that post too (it was also interesting). But the post on sorting small arrays has different criteria than you have here (at least as far as I understand it). Bubble sort quickly loses out to other algorithms when you have random data, or when you need to minimise worst-case times – as you show in that article. But where it wins – again, look at the numbers in your article – is for sorting small lists that are already very close to sorted. So if you have a list that you keep sorted, and only want to add or remove one item at a time, then a bubble sort will often be good enough (and a single pass of an insertion sort will be the choice when the added complexity is justified).
As always in this field, there is no single “right” answer. It would be boring if there were!
It most certainly would be! I wasn’t sure if you’d seen the post, which was why I mentioned it. Although I didn’t mention it in the sorting article (and should have), the fact that a bubble sort is optimal when you have a near sorted array is a very useful tidbit that we all should keep in mind.
Thanks for this article. I found this link from embedded.com. I think it is the article that has the code you mean:
http://www.embedded.com/design/programming-languages-and-tools/4399504/Better-Than-Average
Nice code, Nigel Jones!
When I reach this level, I stop.