The following anecdote describes an early experience of mine while working in the engineering trenches of a major oscilloscope manufacturer. This happened three decades ago, which explains some of the limitations of the algorithm and its implementation.
The scope in question was an early digitizing model that featured eight-bit A-D converters on the input channels. One of the newer features was the ability to do digital averaging of the signal, which, for a repetitive trace, offered an improvement in both the displayed trace and the signal-to-noise ratio (SNR) of the waveform captured and processed in memory.
The computation was done by the scope's resident "waveform processor", which basically featured eight-bit arithmetic. The decision was made to restrict the number of averages to powers of 2 (2, 4, 8, and so on up to 256). This allowed the division in the algorithm to be performed by a simple right-shift and saved a great number of cycles in the computation.
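As a quick illustration of why that restriction paid off (in Python rather than the waveform processor's own instruction set), dividing by a power of two collapses to a single right shift:

```python
# Dividing a non-negative sum by 2**k is a right shift by k bits --
# one cheap shift instruction instead of a full divide.
total = 1000          # running sum over 8 sweeps (illustrative value)
k = 3                 # number of averages n = 2**k = 8
assert total >> k == total // 8

# A caveat that matters for signed values: an arithmetic right shift
# rounds toward minus infinity, not toward zero.
assert (-10) >> k == -2   # floor(-10/8), not -1
```

That signed-shift behavior is worth keeping in mind for what follows.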
The issue reared its head very late in the scope's design cycle. I was summoned to the lab, where the prototype code had been running for a couple of days and a curious phenomenon had been observed. In the new averaging mode, as the user increased the number of averages of a noisy waveform, one would expect a progressively cleaner signal, all the way up to 256 averages. What was observed instead was that the waveform did get visually cleaner up to a point, after which flat portions started appearing in it, making it look distorted.
Suffice it to say, the results did not look very pleasant as the number of averaged sweeps increased, and the lab engineers were sure there was no correlated noise in the system that would explain what they were seeing.
As I was quite familiar with digital filters even in those early days, it immediately looked like some sort of limit-cycle phenomenon caused by the finite word length of the arithmetic. Sure enough, investigation of the algorithm and its implementation pointed to the causes of the problem; there were at least two effects at play.
Firstly, the implementation kept a running sum for each point to implement the average and, as each new sweep arrived, used the difference in values to update that sum. This meant that at each point in the waveform you had a first-order recursive digital filter -- a single-pole IIR filter -- which is exactly the structure that is prone to limit cycles and dead bands. Secondly, the divide was implemented as a shift-and-truncate operation in eight-bit arithmetic, so once the divisor (the power of 2) became large, any difference smaller than the divisor truncated to an update of zero; the filter simply stopped responding, and the resulting dead bands showed up as flat areas on the waveform.
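The mechanism is easy to reproduce. Here is a minimal sketch, assuming the update had the classic exponential-averaging form `avg += (x - avg) >> k` -- the actual firmware details differed, but the arithmetic behaves the same way:

```python
def average_sweeps(samples, k):
    """Truncating first-order recursive average with divisor 2**k."""
    avg = samples[0]
    for x in samples[1:]:
        # Arithmetic right shift rounds toward minus infinity, so a
        # small positive difference updates by 0 while a small negative
        # one still updates by -1: the average drifts to the bottom of
        # the noise band and then freezes there -- a dead band.
        avg += (x - avg) >> k
    return avg

# A noisy repetitive "signal": true level 100, +/-10 of noise.
sweeps = [100] + [110, 90] * 200

print(average_sweeps(sweeps, 3))  # small divisor: stays near the signal
print(average_sweeps(sweeps, 8))  # divisor 256: frozen at the noise floor
```

With the divisor at 256, every sample within 256 counts of the stored value (which, with eight-bit data, is every sample) produces a zero or biased update, so each point locks to the bottom of its local noise band -- exactly the flat, distorted regions seen on the screen.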
Since double-precision arithmetic was not an option, we came up with some simpler solutions involving a mix of rounding operations and some dithering, and we were able to keep the core of the firmware intact. Subsequent generations of scopes, of course, had much more powerful DSP hardware, so this particular issue did not crop up again -- to my knowledge.
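One way to picture the kind of remedy involved -- this is not the actual firmware fix, just the same idea in sketch form -- is to note that rounding to nearest (adding half the divisor before the shift) removes the downward drift but still leaves a symmetric dead band, while adding a random dither word before the shift turns the truncation into unbiased stochastic rounding, so the average tracks the true level over many sweeps:

```python
import random

def dithered_average(samples, k, rng):
    """First-order recursive average with dither applied before the
    truncating shift, so the expected update equals (x - avg) / 2**k."""
    avg = samples[0]
    for x in samples[1:]:
        # A uniform dither in [0, 2**k) makes the shift round up with
        # probability proportional to the truncated remainder:
        # stochastic rounding in place of a hard dead band.
        avg += ((x - avg) + rng.randrange(1 << k)) >> k
    return avg

rng = random.Random(0)
sweeps = [100] + [110, 90] * 5000
print(dithered_average(sweeps, 8, rng))  # hovers near the true level
```

The price is a little residual jitter in the output, but for a display average that is far less objectionable than flat spots in the trace.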