Gegl questions
This discussion is connected to the gegl-developer-list.gnome.org mailing list, which is provided by the GIMP developers and is not related to gimpusers.com.
This is a read-only list on gimpusers.com, so this discussion thread is read-only, too.
Gegl questions | Patrik Östman | 12 Dec 08:50 |
Gegl questions | Øyvind Kolås | 12 Dec 13:34 |
Gegl questions | Patrik Östman | 06 Jan 19:43 |
Gegl questions | Øyvind Kolås | 06 Jan 19:55 |
Gegl questions
Hi.
I am new to this forum and have some technical and performance-related questions that I have not been able to fully answer from the documentation on the home page.
- The main idea of GEGL is that you build up graphs of nodes and decide yourself when to process the graph. You can also specify a region, scale and output format. To get the best performance, I guess that cropping and scaling are done early in the processing stage. Correct? And is this true for all operations? Cropping should not be a problem, but I can imagine that scaling may cause problems for operations like filters, where you have to adapt the filter to get the same effect as on an unscaled image.
- If a graph has been processed and an operation is then added to it, I guess that when reprocessing the graph the cache of the previous rendering is reused, so that only the new operation is processed. But what if you have rendered the graph using different scales and regions: do you keep separate caches for different render options?
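For reference, a minimal sketch of the kind of graph building and region-limited processing described above, using GEGL's public C API; the operation names ("gegl:load", "gegl:brightness-contrast") and the exact gegl_node_blit() signature shown here are assumptions based on later GEGL releases and may differ from the version discussed in this thread:

#include <gegl.h>

int
main (int argc, char **argv)
{
  gegl_init (&argc, &argv);

  /* Build a small graph: load an image, then adjust its contrast. */
  GeglNode *graph    = gegl_node_new ();
  GeglNode *load     = gegl_node_new_child (graph,
                                            "operation", "gegl:load",
                                            "path", "input.png",
                                            NULL);
  GeglNode *contrast = gegl_node_new_child (graph,
                                            "operation", "gegl:brightness-contrast",
                                            "contrast", 0.2,
                                            NULL);
  gegl_node_link_many (load, contrast, NULL);

  /* Pull only a 256x256 region, at half scale, into an 8-bit RGBA
     buffer; nothing outside this request needs to be computed. */
  GeglRectangle roi    = { 0, 0, 256, 256 };
  guchar       *pixels = g_malloc (roi.width * roi.height * 4);
  gegl_node_blit (contrast, 0.5, &roi,
                  babl_format ("R'G'B'A u8"),
                  pixels, GEGL_AUTO_ROWSTRIDE, GEGL_BLIT_DEFAULT);

  g_free (pixels);
  g_object_unref (graph);
  gegl_exit ();
  return 0;
}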
Thanks for your answers.
Best regards, Patrik Östman
Gegl questions
On Dec 12, 2007 7:50 AM, Patrik Östman wrote:
- The main idea of GEGL is that you build up graphs of nodes and decide yourself when to process the graph. You can also specify a region, scale and output format. To get the best performance, I guess that cropping and scaling are done early in the processing stage. Correct? And is this true for all operations? Cropping should not be a problem, but I can imagine that scaling may cause problems for operations like filters, where you have to adapt the filter to get the same effect as on an unscaled image.
When a subregion is requested for rendering, the minimal region required from every node is first determined in an initial pass (expanded as needed to provide context for operations like blurs, and restricted by crop operations present in the graph). Moving scaling to be performed as early as possible has been considered, but it is currently not done since it would conflict with the per-node caches.
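As an illustration of that first pass (not GEGL's internal code, just the arithmetic it implies): the region a blur needs from its source is the requested region grown by the blur's support, and an upstream crop clips it again. A hypothetical helper built on GeglRectangle:

#include <gegl.h>

/* Hypothetical illustration of the region negotiation described above:
   grow the requested ROI by the context a blur needs, then restrict it
   to what an upstream crop lets through. */
static GeglRectangle
region_needed_from_source (GeglRectangle roi,         /* region requested downstream */
                           gint          blur_radius, /* context needed by the blur  */
                           GeglRectangle crop)        /* crop present in the graph   */
{
  GeglRectangle needed = roi;

  needed.x      -= blur_radius;
  needed.y      -= blur_radius;
  needed.width  += 2 * blur_radius;
  needed.height += 2 * blur_radius;

  gegl_rectangle_intersect (&needed, &needed, &crop);
  return needed;
}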
- If a graph has been processed and an operation is then added to it, I guess that when reprocessing the graph the cache of the previous rendering is reused, so that only the new operation is processed. But what if you have rendered the graph using different scales and regions: do you keep separate caches for different render options?
As mentioned, there is only a single cache, but the buffers these caches are built on have built-in capabilities for mipmap scaling. These things are not described in detail on the website because they may change for the better in the future, independently of the public API.
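For illustration, a hedged sketch of reading a reduced-scale view of such a buffer through the GeglBuffer API; the gegl_buffer_get() signature with a scale argument is an assumption based on later GEGL releases, and whether the call is served from prescaled (mipmap-like) data is an internal detail:

#include <gegl.h>

/* Read a quarter-scale view of a buffer region; whether this comes
   from prescaled data internally is up to GeglBuffer. */
static void
read_preview (GeglBuffer *buffer)
{
  GeglRectangle roi    = { 0, 0, 128, 128 };
  guchar       *pixels = g_malloc (roi.width * roi.height * 4);

  gegl_buffer_get (buffer, &roi, 0.25,
                   babl_format ("R'G'B'A u8"),
                   pixels, GEGL_AUTO_ROWSTRIDE,
                   GEGL_ABYSS_NONE);

  /* ... use the preview pixels ... */
  g_free (pixels);
}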
Another key element you didn't touch upon is parallelization of the processing, which is also related to such future internal developments; please see the GEGL Bugzilla for some of the ideas in this regard.
/Øyvind K.
Gegl questions
Thank you for your answers.
2007/12/12, Øyvind Kolås:
When a subregion is requested for rendering, the minimal region required from every node is first determined in an initial pass (expanded as needed to provide context for operations like blurs, and restricted by crop operations present in the graph). Moving scaling to be performed as early as possible has been considered, but it is currently not done since it would conflict with the per-node caches.
As mentioned, there is only a single cache, but the buffers these caches are built on have built-in capabilities for mipmap scaling. These things are not described in detail on the website because they may change for the better in the future, independently of the public API.
When you say that the buffers have built-in capabilities for mipmap scaling, what do you mean by that? To me, mipmaps are prescaled versions of an image that make arbitrary scaling much faster. When executing an operation, do you only use the mipmap level that is nearest the desired scale, so that you don't need to operate on the full-scale image?
To give an example: if you want to make a fast preview by applying an operation to a scaled-down version of an image, do you have to use separate graph representations for the scaled-down version and the full-size image (meaning that you need a scale operator in one of the graph paths), or will this be handled by the mipmap buffers together with the scale parameter of the blit function, so that the operation is only performed on the mipmap level closest to the chosen scale?
/Patrik Ö
Gegl questions
On Jan 6, 2008 6:43 PM, Patrik Östman wrote:
To give an example: if you want to make a fast preview by applying an operation to a scaled-down version of an image, do you have to use separate graph representations for the scaled-down version and the full-size image (meaning that you need a scale operator in one of the graph paths), or will this be handled by the mipmap buffers together with the scale parameter of the blit function, so that the operation is only performed on the mipmap level closest to the chosen scale?
GEGL doesn't yet do this in an efficient manner, but some of the infrastructure to do so is present, and the public API has been designed with such usage in mind.
Some of the things that haven't received proper attention yet are how to deal with spatial properties for the ops: blur radii need to be scaled, traditional convolution operations are only defined for a 1:1 pixel grid, etc.
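A sketch of how an application could compensate for this today: build a preview branch that downscales first and then blurs with a std-dev reduced by the preview factor. The operation and property names ("gegl:scale-ratio", "gegl:gaussian-blur", "std-dev-x"/"std-dev-y") are assumptions, and GEGL itself does not do this rescaling automatically:

#include <gegl.h>

/* Hypothetical preview branch: downscale first, then blur with spatial
   parameters compensated by hand, since the blur now runs on
   scaled-down pixels and GEGL does not (yet) rescale them itself. */
static GeglNode *
make_preview_branch (GeglNode *graph,
                     GeglNode *source,
                     gdouble   preview_scale, /* e.g. 0.25 for a quick preview */
                     gdouble   full_std_dev)  /* blur std-dev chosen at 1:1    */
{
  GeglNode *scale = gegl_node_new_child (graph,
                                         "operation", "gegl:scale-ratio",
                                         "x", preview_scale,
                                         "y", preview_scale,
                                         NULL);
  GeglNode *blur  = gegl_node_new_child (graph,
                                         "operation", "gegl:gaussian-blur",
                                         "std-dev-x", full_std_dev * preview_scale,
                                         "std-dev-y", full_std_dev * preview_scale,
                                         NULL);

  gegl_node_link_many (source, scale, blur, NULL);
  return blur; /* blit this node at scale 1.0 for the preview */
}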
Do note that GEGL doesn't do such optimizations yet; it is still in a state where the public API is very close to being declared ready, while the internals will probably see quite a bit of refactoring to enable optimizations like the one you describe here, as well as others.
/Øyvind K.