A rising tide lifts all boats / making GEGL/GIMP previews faster
Øyvind Kolås | 18 Mar 10:33
On Tue, Mar 17, 2015 at 10:20 PM, Elle Stone wrote:
> Everything works faster on a smaller image.
A good observation, and the rationale for plans that are already partially implemented.
> Unlike simpler edits like Curves, currently the transform tools don't save a record as a recallable transform.
> Let's say you open a smaller version of the image and make the cage transform. Could the transform steps be recorded and then "replayed" on the full-size image? Or would coding something like this be just as complicated as making the transform faster in the first place?
Since the Cage transform is a GEGL op with parameters, serializing and replaying the parameters shouldn't be very hard - it might even be possible to add generic code, within GIMP's current destructive way of doing editing, that previews a list of specific ops scaled down close to the zoom level. In the rest of this mail I am not referring to the cage tool but to previews in general.
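As a rough illustration - a minimal sketch, not existing GIMP code - GEGL's XML serialization API could be used to record and replay an op's parameters. Here gegl:rotate stands in for the cage transform and the file names are made up:

  #include <gegl.h>

  int
  main (int argc, char **argv)
  {
    gegl_init (&argc, &argv);

    /* Build a graph against a scaled-down proxy and apply the op. */
    GeglNode *graph = gegl_node_new ();
    GeglNode *load  = gegl_node_new_child (graph,
                                           "operation", "gegl:load",
                                           "path",      "proxy-small.png",
                                           NULL);
    GeglNode *op    = gegl_node_new_child (graph,
                                           "operation", "gegl:rotate",
                                           "degrees",   30.0,
                                           NULL);
    gegl_node_link_many (load, op, NULL);

    /* Serialize the chain - including the op's parameters - to XML;
     * rebuilding from that XML later, with the loader pointed at the
     * full-size image, "replays" the same edit. */
    gchar    *xml    = gegl_node_to_xml (op, NULL);
    GeglNode *replay = gegl_node_new_from_xml (xml, NULL);

    g_free (xml);
    g_object_unref (replay);
    g_object_unref (graph);
    gegl_exit ();
    return 0;
  }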
In GEGL, work is under way to make this happen generically for the rendering/preview of operations (which includes curves and the warp tool). Thus, code added to GIMP to do it specifically per op might be code that later, hopefully, would need to be removed.
The ball for improvements on this is currently in the GEGL side of the GIMP/GEGL court. The latest improvement on GIMP's side is that it requests the rectangle of the viewport first, and then the parts of the canvas outside the active window, as well as actually using the rendered preview instead of regenerating the merged-down render in the layer.
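To illustrate that ordering (a hedged sketch, not GIMP's actual code; render_rect() is a hypothetical stand-in for whatever issues the render request, and node is a GeglNode * obtained elsewhere):

  GeglRectangle canvas   = { 0, 0, 4000, 3000 };
  GeglRectangle viewport = { 1200, 900, 800, 600 };
  GeglRectangle rest[4];
  gint          n, i;

  /* The visible rectangle is requested first ... */
  render_rect (node, &viewport);

  /* ... then the (up to four) rectangles making up the remainder of
   * the canvas, computed with GEGL's rectangle arithmetic. */
  n = gegl_rectangle_subtract (rest, &canvas, &viewport);
  for (i = 0; i < n; i++)
    render_rect (node, &rest[i]);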
There is code in GEGL that works for some graphs, but not yet for all combinations of ops. The approach taken is roughly as you describe: working on scaled-down data. But it happens behind the scenes - code using GEGL asks for a rectangular result at a specific scale to be generated from a node. Currently all data is processed at 100% scale/zoom and scaled down as a final step; without changing the API used, GEGL can do various tricks when satisfying this request. The pixel data in GEGL is stored in GeglBuffers, which are sparse, tiled, on-demand mipmap pyramids - similar to the collections of satellite imagery files at different scale levels used by Google Maps/Earth: 50%, 25%, 12.5%, 6.25% etc. (NOTE: a majority of linear temporary data allows for efficient and accurate implementation of on-demand mipmap tile generation; other pixel representations don't.)
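The request itself looks roughly like this - the caller asks a node for a rectangle at a scale via gegl_node_blit(), and GEGL is free to satisfy it from mipmap levels behind the scenes (a sketch; the rectangle, format and 25% zoom are arbitrary, and node comes from elsewhere):

  GeglRectangle roi   = { 0, 0, 800, 600 }; /* viewport, in output pixels */
  gdouble       scale = 0.25;               /* rendering at 25% zoom      */
  guchar       *buf   = g_malloc ((gsize) roi.width * roi.height * 4);

  gegl_node_blit (node, scale, &roi,
                  babl_format ("R'G'B'A u8"),
                  buf, GEGL_AUTO_ROWSTRIDE,
                  GEGL_BLIT_DEFAULT);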
In GEGL, quite a few operations currently work with mipmaps without much change (point operations and composers, as well as blur operations). Transform operations need 100%-level data but can render to a smaller level. GEGL should, however, cope with operations that don't fully support mipmapped rendering, and generate blurred/fuzzy data from the 25%/50% levels for the steps not supporting it.
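Conceptually, picking a pyramid level for a requested scale is cheap; a hypothetical helper (level_for_scale() is not a real GEGL function) might look like:

  /* Level 0 is 100%, level 1 is 50%, level 2 is 25%, and so on.
   * An op that can't work on mipmaps would be pinned to level 0. */
  static gint
  level_for_scale (gdouble scale)
  {
    gint level = 0;

    g_return_val_if_fail (scale > 0.0, 0);

    while (scale <= 0.5)
      {
        scale *= 2.0;
        level++;
      }
    return level;
  }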
Once the above is working reliably - and probably after some work in GIMP - some processing wouldn't need to happen until GIMP is idle, having fully updated what is on screen, or until you invoke export, zoom in/out or pan. (For some exports one might even get away without rendering the full 100% size, depending on whether all operations used are accurate when rendered at a different scale or not.)
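A hedged sketch of such deferral, using GLib's idle source (which GIMP's main loop is built on); render_full_res_cb() is illustrative, not code that exists in GIMP:

  static gboolean
  render_full_res_cb (gpointer data)
  {
    GeglNode     *node  = data;
    GeglRectangle whole = gegl_node_get_bounding_box (node);

    /* Interaction has stopped; render at 100% into the node's cache
     * (a NULL destination with GEGL_BLIT_CACHE only fills the cache). */
    gegl_node_blit (node, 1.0, &whole, NULL, NULL,
                    GEGL_AUTO_ROWSTRIDE, GEGL_BLIT_CACHE);
    return G_SOURCE_REMOVE;
  }

  /* Scheduled once the screen is fully up to date: */
  g_idle_add_full (G_PRIORITY_LOW, render_full_res_cb, node, NULL);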
At LGM in Toronto at the end of April I am giving a talk on a new UI framework. One of my experiments and examples for this system is a test program for the GEGL mipmap rendering, which I hopefully will have time to brush up a bit for demonstration purposes. Here is an old screenshot of it: http://pippin.gimp.org/mrg/mrg-gegl-00.jpg - only the white operations are active; the noise reduction only works at 100%, but the panorama projection takes an 8000px wide image and passes it on to exposure, levels and vignette as a buffer only 675px wide. This means that tweaking the settings of any of these operations is fast, as long as it isn't the noise reduction that is tweaked - but that should be tweaked zoomed in anyway. (And plain averaging when scaling down to 50% can be seen as a somewhat valid noise-reduction preview.)
/pippin