Some Questions about GEGL library
This discussion is connected to the gegl-developer-list.gnome.org mailing list which is provided by the GIMP developers and not related to gimpusers.com.
Some Questions about GEGL library | Rahmati Fateme | 26 Mar 20:37 |
Some Questions about GEGL library | Øyvind Kolås | 28 Mar 16:51 |
Some Questions about GEGL library | Øyvind Kolås | 30 Mar 18:11 |
Some Questions about GEGL library
Hello,
I'm a master's student in computer science at Strasbourg university (France).
I started working on a project related to the GEGL library; my goal is to study how to optimize the operator graph to reduce execution time. It would be nice if I could have some information about the optimizations which are already implemented in GEGL.
I would be grateful if you could give me some information about these questions:
1. Are there any optimizations done on the graph in the latest version of GEGL?
For instance, one could think of several optimizations:
· Precomputation of node compositions: if you apply several nodes, perhaps the composition of the nodes can be precomputed and is cheaper to evaluate.
Let's say that we have two nodes, "increase lightness by 10" followed by "increase lightness by -20"; the graph could be automatically simplified to "increase lightness by -10". Is this already the case in GEGL?
Of course, one could assume that the application using GEGL should not generate such graphs, but for more complicated compositions it might be easier to do it in GEGL. For instance, is the composition of two Gaussian blurs another "blur" operation? Another example would be to combine Lightness and Curves operations into a single Curves operation with different parameters.
· Precomputation of operator graphs for several pictures: if you want to apply the same graph to several pictures, it would be nice if some precomputation were done on the graph before it is evaluated on all the pictures.
For instance, given a point operator f which performs the same operation on the different channels (r, g, b) of the image (lightness, contrast, ...), if one is working with 8-bit pictures, one could precompute an array containing f(x) for x in 0..255, and then use this same array when evaluating the graph on all the pictures.
Is this already the case in GEGL?
· Evaluation in parallel of several nodes of the graph using OpenMP
· Evaluation of the nodes using the GPU: I have read that this was the subject of a Google Summer of Code project; what is the current status of this?
2. How is the operator graph maintained, and how are the nodes saved?
Can you also give me some references where I could find more related information?
Thank you for your attention.
Some Questions about GEGL library
On Fri, Mar 26, 2010 at 7:37 PM, Rahmati Fateme wrote:
Hello,
I'm a master's student in computer science at Strasbourg university (France). I started working on a project related to the GEGL library; my goal is to study how to optimize the operator graph to reduce execution time. It would be nice if I could have some information about the optimizations which are already implemented in GEGL.
I would be grateful if you could give me some information about these questions:
1. Are there any optimizations done on the graph in the latest version of GEGL?
For instance, one could think of several optimizations:
· Precomputation of node compositions: if you apply several nodes, perhaps the composition of the nodes can be precomputed and is cheaper to evaluate.
Let's say that we have two nodes, "increase lightness by 10" followed by "increase lightness by -20"; the graph could be automatically simplified to "increase lightness by -10". Is this already the case in GEGL?
Of course, one could assume that the application using GEGL should not generate such graphs, but for more complicated compositions it might be easier to do it in GEGL. For instance, is the composition of two Gaussian blurs another "blur" operation? Another example would be to combine Lightness and Curves operations into a single Curves operation with different parameters.
Currently GEGL does not do any extensive optimizations at the graph level. Some operations make sure that when the given parameters amount to a no-op the pixels are not touched and the original buffer is passed through (blurs with radius == 0.0, and rotations, scales or translates with no change). The Porter-Duff "over" op (normal layer compositing) also optimizes to a passthrough when only one input buffer is provided.
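For illustration, the kind of graph-level folding asked about above (merging two consecutive lightness adjustments into one) could be sketched as follows. This is a hypothetical plain-Python sketch, not GEGL API; the names `make_lightness_op` and `fold_lightness_ops` are invented for the example.

```python
# Hypothetical sketch (not GEGL API): folding two consecutive
# "add to lightness" point operations into a single equivalent one
# before evaluating the graph.

def make_lightness_op(delta):
    """Return a point op that shifts a value by `delta`."""
    op = lambda x: x + delta
    op.delta = delta  # remember the parameter so ops can be merged
    return op

def fold_lightness_ops(ops):
    """Collapse consecutive lightness ops into one equivalent op."""
    total = sum(op.delta for op in ops)
    return make_lightness_op(total)

chain = [make_lightness_op(10), make_lightness_op(-20)]
folded = fold_lightness_ops(chain)
# The folded op gives the same result as running the chain:
assert folded(50) == chain[1](chain[0](50))  # both yield 40
```

The folded graph touches each pixel once instead of twice, which is the point of the optimization the question describes.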
Consecutive affine operations (a scale and then a translate) are also collapsed into a single affine operation (avoiding the inherent loss of multiple resamplings). This is handled within the affine ops by making them all subclasses of a common operation. Ideally this would be extended in such a manner that GEGL would be able to re-arrange all affine operations to happen only once, immediately after loading the pixel data; this would reduce the number of resamplings as well as total processing time.
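The reason consecutive affine ops can be collapsed is that each affine transform is a 3x3 matrix in homogeneous coordinates, and applying a scale followed by a translate is equivalent to applying their matrix product once. A plain-Python sketch of the idea (illustrative only; GEGL implements this inside its common affine op class, in C):

```python
# Each affine transform is a 3x3 matrix in homogeneous coordinates;
# composing transforms is matrix multiplication, so a whole chain
# collapses to one matrix and hence one resampling pass.

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

# A scale followed by a translate, collapsed into a single matrix:
combined = matmul(translate(5, 7), scale(2, 2))
assert apply(combined, 3, 4) == apply(translate(5, 7), *apply(scale(2, 2), 3, 4))
```

Resampling once with `combined` avoids the quality loss of resampling the pixels twice.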
· Precomputation of operator graphs for several pictures: if you want to apply the same graph to several pictures, it would be nice if some precomputation were done on the graph before it is evaluated on all the pictures.
For instance, given a point operator f which performs the same operation on the different channels (r, g, b) of the image (lightness, contrast, ...), if one is working with 8-bit pictures, one could precompute an array containing f(x) for x in 0..255, and then use this same array when evaluating the graph on all the pictures.
Is this already the case in GEGL?
GEGL has lazily initialized lookup tables that even work for floating point; doing processing on 8-bit image data is best avoided.
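As a sketch of the lookup-table idea (shown here for an 8-bit point op, in plain Python; GEGL's own LUTs are in C and also cover floating point, as noted above):

```python
# Sketch of a lazily initialized lookup table for an 8-bit point op.
# The table is built on first use, then every pixel becomes one
# array lookup instead of a function evaluation.

class LutOp:
    def __init__(self, fn):
        self.fn = fn
        self.lut = None  # built lazily, on first process() call

    def process(self, pixels):
        if self.lut is None:
            # Precompute f(x) for x in 0..255, clamped to the 8-bit range.
            self.lut = [min(255, max(0, round(self.fn(v)))) for v in range(256)]
        return [self.lut[p] for p in pixels]

brighten = LutOp(lambda v: v * 1.2)
assert brighten.process([0, 100, 250]) == [0, 120, 255]
```

Once built, the same table serves every subsequent picture pushed through the graph, which is exactly the amortization the question asks about.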
· Evaluation in parallel of several nodes of the graph using OpenMP
There is an experimental multi-processing configure option; the parallelization is done by having separate threads compute separate parts of the final render. There is no need to use OpenMP to achieve this.
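The "separate threads compute separate parts of the final render" scheme can be sketched by splitting the image into row bands, one per thread. This is an illustrative Python sketch, not GEGL's C implementation (which operates on its own tile structures):

```python
# Sketch: each thread renders its own subset of rows, so no locking
# is needed on the output beyond the final join.

import threading

def render_rows(src, dst, row_range, fn):
    for r in row_range:
        dst[r] = [fn(p) for p in src[r]]

def parallel_render(src, fn, n_threads=4):
    dst = [None] * len(src)
    # Interleave rows across threads: thread i gets rows i, i+n, i+2n, ...
    bands = [range(i, len(src), n_threads) for i in range(n_threads)]
    threads = [threading.Thread(target=render_rows, args=(src, dst, band, fn))
               for band in bands]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return dst

image = [[1, 2], [3, 4], [5, 6]]
out = parallel_render(image, lambda p: p * 2)
assert out == [[2, 4], [6, 8], [10, 12]]
```

Because every row is written by exactly one thread, the workers never contend for the same output region, which is what makes this style of render-splitting simple to parallelize.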
· Evaluation of the nodes using the GPU: I have read that this was the subject of a Google Summer of Code project; what is the current status of this?
The result of last year's Summer of Code was proof-of-concept code allowing automatic migration of tiles in GeglBuffers between system memory and GPU memory. Large gains cannot be made until many ops exist in GPU versions, thus avoiding excessive migrations back and forth.
2. How is the operator graph maintained, and how are the nodes saved?
Can you also give me some references where I could find more related information?
Please study the source code; for further questions, please join the IRC channel #gegl on GIMPnet (irc.gimp.org).
/Øyvind K.
Some Questions about GEGL library
On Mon, Mar 29, 2010 at 4:58 PM, Nicolas Robidoux wrote:
Øyvind (and all):
"Long term" development question:
If one were to try to optimize GEGL operations which involve resampling (image rotation, affine and perspective transformations), do you think this should be done on the GPU, or outside the GPU by exploiting SSE-type operations? Any other ideas?
Note that multiple other ops will eventually also use the resampler infrastructure, at least when using C code; among these are displacement maps, mirrors/kaleidoscope (already in git), ripples/waves ported from GIMP, mapping to and from polar coordinates, and more.
It is likely that such transformations would, at least initially, reuse the texture resampling units found in the GPU and thus probably only provide "standard" samplers.
It is not obvious to me that the goal should be to port "everything" to the GPU. Is it obvious to you?
To get the benefits offered by the GPU, at least in a scenario where texture data has to be migrated between the systems, any operation that does not exist on the GPU will slow down GPU-based acceleration, potentially to the degree where the benefits of using the GPU are lost.
The approach I think makes sense for GEGL is to allow ops to opt in to GPU processing. With the GSoC GPU branch, GPU and CPU usage of the same GeglBuffer is transparent, and tiles are automatically migrated from RAM to texture memory and back as needed.
What parts of GEGL are more likely to benefit from having a GPU version? Least likely?
Image processing algorithms that are embarrassingly parallel will benefit from having GPU versions. This at least includes compositing; with good caching in place it might be possible to do fast GPU recomposition of cached subgraph results from slow or non-GPU-able renderings.
/Øyvind K.