If you have tons of datapoints, one cool trick is to do intensity modulation of the graph instead of a simple "binary" display. Basically, for each pixel you'd count how many datapoints it covers and map that value to the color/brightness of that pixel. That way you can visually make out much more detail in the data.
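Rough sketch of the idea in TypeScript (CPU-side, names made up; a GPU version would do the same counting per bin):

```ts
// Count how many datapoints land on each pixel, then map the count to brightness.
// Assumes the points are already in pixel coordinates; the count->brightness
// mapping is a free choice (linear here, log or sqrt often looks better).
function densityImage(
  xs: Float32Array,
  ys: Float32Array,
  width: number,
  height: number
): Uint8ClampedArray {
  const counts = new Uint32Array(width * height);
  for (let i = 0; i < xs.length; i++) {
    const x = xs[i] | 0;
    const y = ys[i] | 0;
    if (x >= 0 && x < width && y >= 0 && y < height) {
      counts[y * width + x]++;
    }
  }
  let max = 1;
  for (let i = 0; i < counts.length; i++) if (counts[i] > max) max = counts[i];
  const pixels = new Uint8ClampedArray(width * height);
  for (let i = 0; i < counts.length; i++) {
    pixels[i] = Math.round((counts[i] / max) * 255); // 0..255 grayscale intensity
  }
  return pixels;
}
```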
In the electronics world this is what "digital phosphor" displays do in oscilloscopes, which started out as just emulating analog scopes. Some examples are visible here: https://www.hit.bme.hu/~papay/edu/DSOdisp/gradient.htm
Great suggestion - density mapping is a really effective technique for overplotted data. Instead of drawing 1M points where most overlap, you're essentially rendering a heatmap of point concentration. WebGPU compute shaders would be perfect for this - bin the points into a grid, count per cell, then render intensity. Could even do it in a single pass. I've been thinking about this for scatter plots especially, where you might have clusters that just look like solid blobs at full zoom-out. A density mode would reveal the structure. Added to the ideas list - thanks for the suggestion!
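The binning pass could look something like this (untested sketch; the WGSL kernel is embedded as a string, and the buffer layout plus the assumption that points are pre-normalized to [0, 1) are mine):

```ts
// WGSL compute kernel: one invocation per point, atomically increment the bin it lands in.
// A second pass (or a render pass) would then map the per-bin counts to intensity.
const binPointsWGSL = /* wgsl */ `
struct Params {
  gridWidth  : u32,
  gridHeight : u32,
  pointCount : u32,
  _pad       : u32,
};

@group(0) @binding(0) var<uniform> params : Params;
@group(0) @binding(1) var<storage, read> points : array<vec2<f32>>;
@group(0) @binding(2) var<storage, read_write> bins : array<atomic<u32>>;

@compute @workgroup_size(256)
fn main(@builtin(global_invocation_id) gid : vec3<u32>) {
  if (gid.x >= params.pointCount) { return; }
  let p = points[gid.x]; // assumed normalized to [0, 1)
  let x = min(u32(p.x * f32(params.gridWidth)),  params.gridWidth  - 1u);
  let y = min(u32(p.y * f32(params.gridHeight)), params.gridHeight - 1u);
  atomicAdd(&bins[y * params.gridWidth + x], 1u);
}
`;
```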
Drawing lots of single pixels with alpha blending is probably one of the least efficient ways to use the rasterizer though. A good compute shader implementation would be substantially faster.
At 1M points it hardly makes a difference. Besides, a 1 point -> 1 pixel mapping is good enough for a demo, but in practice it will produce nasty aliasing artifacts because real datasets aren't aligned with pixel coordinates. So you have to draw each point as at least a 2x2 square with precise shading, and we are back to the rasterizer pipeline. Edit: what actually needs to be computed is the integral of the point dataset over each square pixel, and that depends on the shape of each point, even if it's smaller than a pixel.
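For illustration, the kind of 2x2 splat I mean could look like this (a sketch with bilinear weights, i.e. approximating the integral of a pixel-sized box point over each pixel; the exact kernel is a free choice):

```ts
// Splat one point with fractional pixel coordinates into a float accumulation buffer.
// Each point contributes a total weight of 1.0, spread over a 2x2 pixel footprint.
function splatPoint(
  acc: Float32Array, width: number, height: number,
  px: number, py: number
): void {
  const x0 = Math.floor(px - 0.5);
  const y0 = Math.floor(py - 0.5);
  const fx = px - 0.5 - x0; // fractional offsets in [0, 1)
  const fy = py - 0.5 - y0;
  const w = [
    (1 - fx) * (1 - fy), fx * (1 - fy),
    (1 - fx) * fy,       fx * fy,
  ];
  for (let dy = 0; dy < 2; dy++) {
    for (let dx = 0; dx < 2; dx++) {
      const x = x0 + dx, y = y0 + dy;
      if (x >= 0 && x < width && y >= 0 && y < height) {
        acc[y * width + x] += w[dy * 2 + dx];
      }
    }
  }
}
```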
Aren't we at petaflops now with GPUs? 1M or even 1G points should be no issue if it renders to a framebuffer and doesn't go through mountains of JS framework rubbish followed by mountains of GTK/Qt/.NET rubbish.
Not true. Fill rate and memory speed are still a huge bottleneck. The issue is not “rubbish” but memory speed: it is almost always memory speed, whether cache, RAM, disk, etc.
There is this misconception that if one uses JS or C# to tell a GPU what to do, it is somehow slower than Rust. It only is if you're crunching the data yourself; moving memory to the GPU and telling the GPU to crunch it is virtually identical.
Most consumers don't have that, and at 60 fps you are already maxing it out (and more), assuming the OS is doing nothing else. Bandwidth, even on GPUs, is still the bottleneck.
Even then, when you write to a framebuffer directly on the GPU, if the locations of the points are not contiguous you are thrashing. Rendering points very fast is still very much about reducing the data set to bypass all the layers of memory walls.
That works if more overdraw = more intensity is all you care about, and may very well be good enough for many kinds of charts. But with heat map plots one usually wants a proper mapping of some intensity domain to a color map, plus a legend with a color gradient that tells you which color represents which value, which requires binning, counting per bin, and determining the min and max values.
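Something along these lines, for example (sketch; the color ramp and the bin source are placeholders):

```ts
// Map per-bin counts to colors via a gradient, using the observed min/max,
// so the same mapping can drive both the plot and its legend.
type RGB = [number, number, number];

function colorize(bins: Uint32Array, ramp: RGB[]): RGB[] {
  let min = Infinity, max = -Infinity;
  for (const c of bins) {
    if (c < min) min = c;
    if (c > max) max = c;
  }
  const span = Math.max(max - min, 1);
  return Array.from(bins, (c) => {
    const t = (c - min) / span; // 0..1 across the intensity domain
    const i = Math.min(ramp.length - 1, Math.floor(t * (ramp.length - 1)));
    return ramp[i];
  });
}

// The legend is just the same ramp drawn next to the min/max labels.
```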
Wait, does additive blending let you draw to temp framebuffers with high precision and without clamping? Even so, you'd still need to know the maximum value in the temp framebuffer.
That's what EXT_float_blend does. It's true, though, that you can't find the global min/max in WebGL2. This could be done, theoretically, with mipmaps, if only mipmap generation supported a max filter.
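For reference, the accumulation setup looks roughly like this in WebGL2 (sketch from memory, error handling omitted; both extensions are needed to render and blend into an R32F target):

```ts
// Accumulate point counts into a float texture with additive blending (WebGL2).
// EXT_color_buffer_float makes R32F renderable; EXT_float_blend allows blending into it.
function createAccumulationTarget(gl: WebGL2RenderingContext, w: number, h: number) {
  if (!gl.getExtension('EXT_color_buffer_float') || !gl.getExtension('EXT_float_blend')) {
    throw new Error('float render targets / float blending not supported');
  }
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, w, h, 0, gl.RED, gl.FLOAT, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

  const fbo = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);

  // Each point drawn adds its fragment value to the texel, with no clamping to [0, 1].
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.ONE, gl.ONE);
  return { tex, fbo };
}
```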
Couldn't you do that manually with a simple downscaling filter? I'd be very shocked if fragment shaders did not have a min or max function.
Repeatedly shrinking by a factor of two means log2(max(width, height)) passes; each pass is a quarter of the pixels of the previous pass, so that's a total of 4/3 times the pixels of the original image (the geometric series 1 + 1/4 + 1/16 + ...). Should be low enough overhead, right?
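In code, the reduction is just this loop, where each iteration would be one half-resolution fragment pass on the GPU (CPU sketch for clarity):

```ts
// Reduce a width x height float buffer to its maximum by repeated 2x2 max-downsampling.
// log2(max(width, height)) passes; total work ~ 4/3 of the original pixel count.
function maxReduce(values: Float32Array, width: number, height: number): number {
  let src = values, w = width, h = height;
  while (w > 1 || h > 1) {
    const w2 = Math.max(1, Math.ceil(w / 2));
    const h2 = Math.max(1, Math.ceil(h / 2));
    const dst = new Float32Array(w2 * h2);
    for (let y = 0; y < h2; y++) {
      for (let x = 0; x < w2; x++) {
        let m = -Infinity;
        for (let dy = 0; dy < 2; dy++) {
          for (let dx = 0; dx < 2; dx++) {
            const sx = Math.min(2 * x + dx, w - 1); // clamp at odd edges
            const sy = Math.min(2 * y + dy, h - 1);
            m = Math.max(m, src[sy * w + sx]);
          }
        }
        dst[y * w2 + x] = m;
      }
    }
    src = dst; w = w2; h = h2;
  }
  return src[0];
}
```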
Sure, that will work, but it's log2 passes + temp framebuffers. As for overhead, I'm afraid it will eat a couple fps if you run it on every frame. In practice, though, I'm not sure that finding the exact maximum is that valuable for rendering: a good guess based on the dataset type will do. For example, if you need to render N points that tend to congregate in the center, using sqrt(N) as the heuristic for the maximum works very well.
That digital phosphor effect is fascinating! As someone who works frequently with DSP and occasionally with analogue signals, I find it incredible how you can pull out the carrier/modulation just by looking at (effectively) a moving average. It's also interesting to see just how much they have to do behind the scenes to emulate a fairly simple physical effect.
Agreed, heatmaps with logarithmic cell intensity are the way to go for massive datasets in things like 10,000-series line charts and scatter plots. You can generally drill down from these as needed.
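e.g. something like (sketch):

```ts
// Log-scaled intensity: keeps sparse points visible next to dense clusters.
const intensity = (count: number, maxCount: number) =>
  Math.log1p(count) / Math.log1p(maxCount); // 0..1, then feed into the color ramp
```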