BUT it comes at a cost.
Overall UI responsiveness suffers quite a lot. I could happily live with the signal trace itself updating less frequently, but why is everything else slowed down as well? Isn’t it already bad enough that the rotary knobs are rather slow and easily miss input if turned too fast? Now with averaging enabled… how do I know whether I turned the knob too fast for the scope to pick it up, or whether the input got lost because the scope was busy calculating the average of the signal?
If I had to write code for a DSO I’d strictly separate the UI part from the data acquisition and processing part. Incoming samples go into buffers, data to be processed goes into buffers, processed results go into buffers. The UI can pull whatever it needs to display from a buffer as fast as it can and never has to wait for other stuff to finish. Once the pending jobs are done, the buffers are flipped, synchronized with the display updates of course. Double buffering isn’t anything new. Everything shown on screen should come from a buffer; cursors in manual mode don’t have to wait for any data handling at all, they should be ‘live’ and react instantly.
If the processing power of the thing is not high enough for signal acquisition and processing AND zippy user interaction at the same time, then by all means PAUSE the processing and measurement until the user is done. Fast screen updates and instant response to user input are a must. As long as the user is turning knobs or pushing buttons, just pull the data from the buffer!