Scanline processing in a GeglOperation
This discussion is archived from the gegl-developer-list.gnome.org mailing list, which is provided by the GIMP developers.
Scanline processing in a GeglOperation | Hans Petter Jansson | 21 Apr 08:41 |
Scanline processing in a GeglOperation | Øyvind Kolås | 21 Apr 12:18 |
Scanline processing in a GeglOperation | Øyvind Kolås | 21 Apr 17:43 |
Scanline processing in a GeglOperation | Hans Petter Jansson | 21 Apr 20:43 |
Scanline processing in a GeglOperation | Hans Petter Jansson | 21 Apr 20:51 |
Scanline processing in a GeglOperation | Øyvind Kolås | 21 Apr 21:36 |
Scanline processing in a GeglOperation | Hans Petter Jansson | 22 Apr 08:59 |
Scanline processing in a GeglOperation | Øyvind Kolås | 23 Apr 15:41 |
Scanline processing in a GeglOperation | Hans Petter Jansson | 24 Apr 22:47 |
Scanline processing in a GeglOperation
[Sorry if this gets posted twice; I managed to send the first one with the wrong From: address].
Hi. I've looked through the docs and (briefly) the list archives, but I can't figure out if it's possible to efficiently process the input to a GeglOperation in a scanline fashion -- i.e. with access to, and outputting, only the pixels of a single, complete row at a time.
I'm trying to write a Floyd-Steinberg dithering implementation, and the way the algorithm is specified, the order in which pixels are processed is significant. It "pushes" the quantization error ahead of it to the next pixel and to pixels on the next scanline. For more background see [1].
Is there an efficient way of doing this in a GEGL op?
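[For reference, the error-diffusion step described above can be sketched as below. This is a minimal single-channel version quantizing to 0/255 with the classic 7/16, 3/16, 5/16, 1/16 weights; the function name is illustrative and not GEGL API.]

```c
/* Minimal single-channel Floyd-Steinberg sketch: quantize each pixel
 * to 0 or 255 and push the quantization error to the pixel on the
 * right and to the three neighbours on the next scanline. */
static void
fs_dither (float *pixels, int width, int height)
{
  for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
      {
        float old       = pixels[y * width + x];
        float quantized = old < 128.0f ? 0.0f : 255.0f;
        float err       = old - quantized;

        pixels[y * width + x] = quantized;

        /* classic 7/16, 3/16, 5/16, 1/16 weights */
        if (x + 1 < width)
          pixels[y * width + x + 1] += err * 7 / 16;
        if (y + 1 < height && x > 0)
          pixels[(y + 1) * width + x - 1] += err * 3 / 16;
        if (y + 1 < height)
          pixels[(y + 1) * width + x] += err * 5 / 16;
        if (y + 1 < height && x + 1 < width)
          pixels[(y + 1) * width + x + 1] += err * 1 / 16;
      }
}
```

The strict left-to-right, top-to-bottom traversal is exactly what makes the output of every pixel depend on all pixels above and to the left of it.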
Scanline processing in a GeglOperation
On Mon, Apr 21, 2008 at 7:41 AM, Hans Petter Jansson wrote:
> [Sorry if this gets posted twice; I managed to send the first one with the wrong From: address].
> Hi. I've looked through the docs and (briefly) the list archives, but I can't figure out if it's possible to efficiently process the input to a GeglOperation in a scanline fashion -- i.e. with access to, and outputting, only the pixels of a single, complete row at a time.
> I'm trying to write a Floyd-Steinberg dithering implementation, and the way the algorithm is specified, the order in which pixels are processed is significant. It "pushes" the quantization error ahead of it to the next pixel and to pixels on the next scanline. For more background see [1].
> Is there an efficient way of doing this in a GEGL op?
Floyd-Steinberg-style error diffusion runs somewhat contrary to the way GEGL is designed, since the bottom-right-most pixel effectively depends on the contents of the entire image. Thus if the upper-left pixel changes, the entire image needs to change as well. To implement Floyd-Steinberg correctly you would have to request processing of the entire image rather than piece by piece, losing the ability to handle larger-than-RAM images. The stretch-contrast operation is another operation where single pixels depend on the entire image.
There are other digital halftoning methods that are probably a much better match for GEGL's processing design, where the value of any given pixel depends only on its neighbourhood and not on the entire image. (Do also note that GEGL currently does not support [...]
/Øyvind K.
Scanline processing in a GeglOperation
On Mon, Apr 21, 2008 at 11:18 AM, Øyvind Kolås wrote:
> Thus to correctly implement floyd steinberg you would actually have to request the processing of the entire image and not piece by piece, thus losing the ability to handle larger than RAM images.
This isn't entirely true, as you can fetch and store individual scanlines when reading from and writing to the involved GeglBuffers while the op processes the entire image; nevertheless, such an op would need the entire image as source data to update the bottom-right-most pixel, and it would make future scheduling and parallelization of graphs involving such ops much harder than needed. I imagine either green-noise or blue-noise based halftoning/dithering might have properties that align better with the GEGL architecture, not to mention that those approaches do not suffer from the repeating-pattern artifacts that Floyd-Steinberg dithering suffers from.
/Øyvind K.
Scanline processing in a GeglOperation
On Mon, 2008-04-21 at 11:18 +0100, Øyvind Kolås wrote:
> The floyd steinberg type of error diffusion runs somewhat contrary to the way GEGL is designed, since the bottom-right-most pixel effectively depends on the contents of the entire image. Thus if the upper-left pixel changes, the entire image needs to change as well. To implement floyd steinberg correctly you would have to request processing of the entire image rather than piece by piece, losing the ability to handle larger-than-RAM images. The stretch-contrast operation is another operation where single pixels depend on the entire image.
The way I did this in a GIMP 2.4 plugin is to load a row at a time from the backing store and keep two rows' worth of quantization-error data in memory at all times -- one for the current row and another for the next. When a row is complete, I swap the error buffers and load the next row of image data.
Would an approach like this be possible with GEGL, or is it outside its scope?
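[The two-row scheme described above can be sketched as below, with a plain in-memory image standing in for the backing store; the helper name and in-place writes are illustrative, not GIMP or GEGL API. Only two (width + 2)-sized error rows are ever held, regardless of image height.]

```c
#include <stdlib.h>
#include <string.h>

/* Floyd-Steinberg with bounded memory: keep only the quantization
 * error for the current and next scanline, swapping them after each
 * row.  The error rows are padded by one entry on each side so the
 * x-1 and x+1 taps need no bounds checks. */
static void
fs_dither_rows (float *image, int width, int height)
{
  float *err_cur  = calloc (width + 2, sizeof (float));
  float *err_next = calloc (width + 2, sizeof (float));

  for (int y = 0; y < height; y++)
    {
      float *row = image + y * width;   /* "load a row from the backing store" */

      for (int x = 0; x < width; x++)
        {
          float old       = row[x] + err_cur[x + 1];
          float quantized = old < 128.0f ? 0.0f : 255.0f;
          float err       = old - quantized;

          row[x] = quantized;           /* "store the row" happens in place */

          err_cur[x + 2]  += err * 7 / 16;   /* right        */
          err_next[x]     += err * 3 / 16;   /* below-left   */
          err_next[x + 1] += err * 5 / 16;   /* below        */
          err_next[x + 2] += err * 1 / 16;   /* below-right  */
        }

      /* swap the error buffers; clear the one that becomes "next" */
      float *tmp = err_cur;
      err_cur  = err_next;
      err_next = tmp;
      memset (err_next, 0, (width + 2) * sizeof (float));
    }

  free (err_cur);
  free (err_next);
}
```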
> There are other digital halftoning methods that are probably a much better match for GEGL's processing design, where the value of any given pixel depends only on its neighbourhood and not the entire image.
Could you recommend one? I've already tried random dithering, which looks terrible, especially at low resolutions, and Bayer, which is very poor in details (but good on gradients).
Most of the halfway decent algorithms I'm familiar with are based on the Floyd-Steinberg method of error diffusion.
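[For comparison, ordered (Bayer) dithering -- one of the methods where every pixel is independent of every other, and thus a good fit for tile-based processing -- can be sketched as below. The 4x4 matrix is the standard one; the function name is illustrative.]

```c
/* Standard 4x4 Bayer threshold matrix (values 0..15). */
static const int bayer4[4][4] = {
  {  0,  8,  2, 10 },
  { 12,  4, 14,  6 },
  {  3, 11,  1,  9 },
  { 15,  7, 13,  5 },
};

/* Quantize a value in [0,255] to 0 or 255 using a position-dependent
 * threshold; no neighbouring pixels are consulted. */
static int
bayer_dither (int value, int x, int y)
{
  /* scale the matrix entry into the 0..255 range */
  int threshold = (bayer4[y & 3][x & 3] * 255 + 8) / 16;

  return value > threshold ? 255 : 0;
}
```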
> (Do also note that GEGL currently does not support [...]
That's fine -- I'm doing color reduction with dithering on images to be used in an OS installer, which will run at low resolutions and 16-bit (565) color. How they're stored is an implementation detail, but how they look is important.
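[As an aside, the 16-bit (565) target mentioned above means 5 bits of red and blue and 6 of green; a sketch of the packing and expansion follows. The helper names are hypothetical, not part of the installer code discussed here.]

```c
#include <stdint.h>

/* Pack 8-bit RGB into a 16-bit RGB565 pixel by truncating the low bits. */
static uint16_t
pack_565 (uint8_t r, uint8_t g, uint8_t b)
{
  return (uint16_t) (((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Expand RGB565 back to 8-bit channels, rescaling each component so
 * the full 0..255 range is covered again. */
static void
unpack_565 (uint16_t p, uint8_t *r, uint8_t *g, uint8_t *b)
{
  *r = (uint8_t) (((p >> 11) & 0x1f) * 255 / 31);
  *g = (uint8_t) (((p >> 5)  & 0x3f) * 255 / 63);
  *b = (uint8_t) ((p         & 0x1f) * 255 / 31);
}
```

The truncation in pack_565 is exactly the banding that the dithering step is meant to hide.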
Scanline processing in a GeglOperation
On Mon, 2008-04-21 at 16:43 +0100, Øyvind Kolås wrote:
> On Mon, Apr 21, 2008 at 11:18 AM, Øyvind Kolås wrote:
> > Thus to correctly implement floyd steinberg you would actually have to request the processing of the entire image and not piece by piece, thus losing the ability to handle larger than RAM images.
> This isn't entirely true, as you can fetch and store individual scanlines when reading from and writing to the involved GeglBuffers while the op processes the entire image; nevertheless, such an op would need the entire image as source data to update the bottom-right-most pixel. And it would make future scheduling and parallelization of graphs involving such ops much harder than needed.
Yeah, that's what I thought. But insofar as my implementation is concerned, I don't care about speed, as long as it can be done.
So I guess the question is: How do I request processing of the entire image, on a row by row basis?
> I imagine either green noise or blue noise based halftoning/dither might have properties that align better with the GEGL architecture, not to mention that those approaches do not suffer from similar artifacts of repeating patterns that floyd steinberg dithering suffers from.
Well, I think the aesthetic merits of the Floyd-Steinberg class of error diffusion algorithms are outside the scope of this discussion :)
Scanline processing in a GeglOperation
On Mon, Apr 21, 2008 at 7:51 PM, Hans Petter Jansson wrote:
> On Mon, 2008-04-21 at 16:43 +0100, Øyvind Kolås wrote:
> > On Mon, Apr 21, 2008 at 11:18 AM, Øyvind Kolås wrote:
> > > Thus to correctly implement floyd steinberg you would actually have to request the processing of the entire image and not piece by piece, thus losing the ability to handle larger than RAM images.
> > This isn't entirely true, as you can fetch and store individual scanlines when reading from and writing to the involved GeglBuffers while the op processes the entire image; nevertheless, such an op would need the entire image as source data to update the bottom-right-most pixel. And it would make future scheduling and parallelization of graphs involving such ops much harder than needed.
> Yeah, that's what I thought. But insofar as my implementation is concerned, I don't care about speed, as long as it can be done.
You need to do the following:
static GeglRectangle
get_required_for_output (GeglOperation       *operation,
                         const gchar         *input_pad,
                         const GeglRectangle *roi)
{
  /* request that we have the entire bounding box to operate on */
  return *gegl_operation_source_get_bounding_box (operation, "input");
}

static GeglRectangle
get_cached_region (GeglOperation       *operation,
                   const GeglRectangle *roi)
{
  /* request that all of the op's output should be cached for this request */
  return *gegl_operation_source_get_bounding_box (operation, "input");
}
You also need to set the corresponding methods on the GeglOperationClass; see other operations for examples.
At this stage:
The input and output regions requested for processing will then be the same, and you can read linear rectangular subregions from the input buffer and write results to the output buffer. You could choose to do the entire buffer in one chunk, but this is where an intelligent implementation can still work on larger-than-RAM images, while a naive one temporarily needs the entire image in memory as a linear buffer.
> So I guess the question is: How do I request processing of the entire image, on a row by row basis?
If you look at the stretch-contrast operation, it operates on a line-by-line basis when doing the actual stretching (it is slow, though, and allocates a huge linear buffer for the min/max detection instead of doing it in chunks).
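[The line-by-line pattern can be sketched outside GEGL as below: two passes over the image, each touching one row at a time, in the spirit of stretch-contrast. A plain in-memory array stands in for the GeglBuffer, and the function name is illustrative.]

```c
/* Stretch-contrast-style whole-image op done row by row: pass 1 scans
 * the rows to find the global min/max, pass 2 rescales each row into
 * [0,1].  Only one scanline ever needs to be resident at a time if the
 * rows come from a tiled backing store. */
static void
stretch_rows (float *image, int width, int height)
{
  float min = image[0], max = image[0];

  /* pass 1: find the global range, one row at a time */
  for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
      {
        float v = image[y * width + x];
        if (v < min) min = v;
        if (v > max) max = v;
      }

  if (max <= min)
    return;   /* flat image: nothing to stretch */

  /* pass 2: rescale each row */
  for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
      image[y * width + x] = (image[y * width + x] - min) / (max - min);
}
```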
/Øyvind K.
Scanline processing in a GeglOperation
On Mon, 2008-04-21 at 20:36 +0100, Øyvind Kolås wrote:
> You need to do the following:
> [...]
Thanks for all the help! I wrote an operation to do color reduction to a specified number of bits per channel, employing one out of a couple of potential color compensation strategies.
I'm attaching it in case you want to use it for something, or re-use parts of the code in a better operation.
I put a test image up at
http://hpjansson.org/temp/meadow-dithered.png
The picture has one bit per channel for a total of 8 colors, making it a true retro experience. From left to right -- original?, thresholding, Bayer, F-S, covariant random and random dithering.
Scanline processing in a GeglOperation
On Tue, Apr 22, 2008 at 7:59 AM, Hans Petter Jansson wrote:
> On Mon, 2008-04-21 at 20:36 +0100, Øyvind Kolås wrote:
> http://hpjansson.org/temp/meadow-dithered.png
> The picture has one bit per channel for a total of 8 colors, making it a true retro experience. From left to right -- original?, thresholding, Bayer, F-S, covariant random and random dithering.
This might indeed be useful at some point, but right now I do not think the GEGL architecture is flexible enough to warrant compiling and installing it by default, so I will drop this .c file into the workshop directory where various works in progress reside. It is possible to write specialized ops that would read the RGBA u16 data and output the required file. Using R'G'B'A u16 would probably give slightly better results, assuming the displays and data involved are roughly sRGB.
A plan for supporting palettized images is forming somewhere on the horizon for GEGL. It would probably involve a specialized babl format and the ability to attach a floating point RGBA palette to the format. Another, similar issue is that babl currently isn't capable of generating pixels for formats where the components are not a multiple of 8 bits. It is due to these shortcomings in GEGL that I've placed color-reduction in the workshop for now.
For some ramblings about indexed/palettized images take a look at: http://codecave.org/?weblog_id=indexed_metamers
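[As a sketch of the lookup such a palettized format would need, a nearest-colour search against a floating point RGBA palette might look like the code below. Plain squared distance is used here for simplicity; a real implementation would likely weight the channels perceptually, and none of these names are babl or GEGL API.]

```c
#include <float.h>

/* Return the index of the palette entry closest to the given RGBA
 * pixel.  The palette is n_colors entries of 4 floats each. */
static int
nearest_palette_index (const float *palette, int n_colors, const float *rgba)
{
  int   best      = 0;
  float best_dist = FLT_MAX;

  for (int i = 0; i < n_colors; i++)
    {
      float dist = 0.0f;

      for (int c = 0; c < 4; c++)
        {
          float d = palette[i * 4 + c] - rgba[c];
          dist += d * d;
        }

      if (dist < best_dist)
        {
          best_dist = dist;
          best      = i;
        }
    }

  return best;
}
```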
/Øyvind K.
Scanline processing in a GeglOperation
On Wed, 2008-04-23 at 14:41 +0100, Øyvind Kolås wrote:
> On Tue, Apr 22, 2008 at 7:59 AM, Hans Petter Jansson wrote:
> > On Mon, 2008-04-21 at 20:36 +0100, Øyvind Kolås wrote:
> > http://hpjansson.org/temp/meadow-dithered.png
> > The picture has one bit per channel for a total of 8 colors, making it a true retro experience. From left to right -- original?, thresholding, Bayer, F-S, covariant random and random dithering.
> This might indeed be useful at some point, but right now I do not think the GEGL architecture is flexible enough to warrant compiling and installing it by default, so I will drop this .c file into the workshop directory where various works in progress reside. It is possible to write specialized ops that would read the RGBA u16 data and output the required file. Using R'G'B'A u16 would probably give slightly better results, assuming the displays and data involved are roughly sRGB.
Thanks for taking it in! I'm thinking a more GEGL-y approach might be to do the quantization in an input -> output filter and the dithering in an optional input+aux -> output filter that takes the quantized image as aux and dithers according to the differences from the original image. That would make sense for error-diffusion dithering, at least. And since there are lots of ways to do the quantization step too, it makes sense to do it separately.