patch for scale-region.c
This discussion is connected to the gimp-developer-list.gnome.org mailing list, which is provided by the GIMP developers and is not related to gimpusers.com.
The list is mirrored read-only on gimpusers.com, so this discussion thread is read-only, too.
patch for scale-region.c | Sven Neumann | 28 Aug 09:54 |
patch for scale-region.c | Geert Jordaens | 28 Aug 22:38 |
patch for scale-region.c | Sven Neumann | 29 Aug 09:36 |
patch for scale-region.c | Sven Neumann | 29 Aug 14:15 |
patch for scale-region.c | Sven Neumann | 29 Aug 21:33 |
patch for scale-region.c | gg@catking.net | 30 Aug 09:49 |
patch for scale-region.c | Alastair M. Robinson | 30 Aug 19:35 |
patch for scale-region.c | gg@catking.net | 31 Aug 21:35 |
patch for scale-region.c | Sven Neumann | 06 Sep 00:22 |
patch for scale-region.c | gg@catking.net | 08 Sep 10:04 |
patch for scale-region.c | Sven Neumann | 08 Sep 20:40 |
patch for scale-region.c | Alastair M. Robinson | 29 Aug 15:04 |
patch for scale-region.c | Geert Jordaens | 29 Aug 17:44 |
patch for scale-region.c | Alastair M. Robinson | 30 Aug 18:36 |
patch for scale-region.c | Geert Jordaens | 29 Aug 17:56 |
patch for scale-region.c | Nicolas Robidoux | 06 Sep 02:30 |
patch for scale-region.c
Hi,
after spending quite some time with the new scale code, I have some doubts about the way that it is scaling down. So here's a patch that I am playing with at the moment:
http://sven.gimp.org/misc/gimp-decimate.diff
This patch applies against SVN trunk. It only changes the current behavior when scaling the image down. The strategy it applies is that for scale ratios < 0.5, the image is decimated using a simple box filter until it has a size that is only up to two times larger than the desired size. At this point the standard interpolation algorithm is used to scale the image to the final size. Of course for interpolation-none, no box filter is used, but the nearest pixel is chosen.
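As a rough sketch of that strategy (illustrative only; this is not the code from gimp-decimate.diff, and the image sizes are made up), the driver logic amounts to the following:

#include <stdio.h>

/* Rough illustration of the strategy described above: repeatedly halve
 * with a 2x2 box filter while the intermediate image is still at least
 * twice as large as the target in both directions, then let the normal
 * interpolation do the final (factor > 0.5) step. */
int
main (void)
{
  int src_width = 1600, src_height = 1200;  /* example source size */
  int dst_width = 123,  dst_height = 92;    /* example target size */

  int w = src_width, h = src_height;
  int passes = 0;

  while (w >= 2 * dst_width && h >= 2 * dst_height)
    {
      w = (w + 1) / 2;   /* one 2x2 box-filter decimation pass */
      h = (h + 1) / 2;
      passes++;
    }

  printf ("%d box-filter pass(es); hand %dx%d to the interpolator "
          "to reach %dx%d\n", passes, w, h, dst_width, dst_height);
  return 0;
}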
I haven't done much testing with this patch yet, but overall it seems to give better results than the code that is currently in trunk (and it is a little bit faster). However, there are some corner cases like the Moire rings in http://svenfoo.org/scaletest/12-6.html where the results are worse.
If someone is interested, please apply the patch and try some of the problematic images in the previous scaletest. Your feedback is very much appreciated.
Sven
patch for scale-region.c
Sven Neumann wrote:
Hi,
after spending quite some time with the new scale code, I have some doubts about the way that it is scaling down. So here's a patch that I am playing with at the moment:
http://sven.gimp.org/misc/gimp-decimate.diff
This patch applies against SVN trunk. It only changes the current behavior when scaling the image down. The strategy it applies is that for scale ratios < 0.5, the image is decimated using a simple box filter until it has a size that is only up to two times larger than the desired size. At this point the standard interpolation algorithm is used to scale the image to the final size. Of course for interpolation-none, no box filter is used, but the nearest pixel is chosen.
I haven't done much testing with this patch yet, but overall it seems to give better results than the code that is currently in trunk (and it is a little bit faster). However, there are some corner cases like the Moire rings in http://svenfoo.org/scaletest/12-6.html where the results are worse.
If someone is interested, please apply the patch and try some of the problematic images in the previous scaletest. Your feedback is very much appreciated.
Sven
What are your doubts with the new code? Why would a simple box-filter be better for decimating?
Geert
patch for scale-region.c
Hi,
On Thu, 2008-08-28 at 22:38 +0200, Geert Jordaens wrote:
What are your doubts with the new code? Why would a simple box-filter be better for decimating?
My doubts with the current approach are manifold:
The current code has decimation routines, but they are only suited for the special case of scaling down by a factor of 0.5. There are two problems with this:
(1) The code actually uses the special decimation routine not only if scaling down by 0.5 in both directions, but also if the scale factor is 0.5 in just one direction (and different in the other).
(2) If scaling down by 50%, a special algorithm is used (the said decimation routines), but a very different algorithm is used when scaling down by other scale factors (say 51%). This introduces a discontinuity and gives quite unexpected results.
The other gripe I have with the current code is the way the decimation routines are implemented. Let's have a look at the cubic interpolation code; the lanczos code has the same issue. What the code does is calculate four weighted pixel sums for the neighborhoods of the four source pixels that surround the center of the destination pixel, and then average these four sums. This is quite inefficient, as exactly the same operation could be done faster and with fewer errors by calculating one larger pixel sum using an appropriate weighting matrix.
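The 'one larger pixel sum' point is just linearity; here is a small illustration (the 4-tap kernel and the four offsets are examples, not the actual weights used in scale-region.c):

#include <stdio.h>

/* Averaging four weighted sums taken over 4x4 neighbourhoods that are
 * shifted by one pixel is identical to a single weighted sum over the
 * 5x5 union of those neighbourhoods, using the average of the four
 * shifted kernels as the weighting matrix.  The 4-tap kernel below is
 * only an example. */
int
main (void)
{
  double k[4] = { -0.0625, 0.5625, 0.5625, -0.0625 };  /* example taps */
  double combined[5][5] = { { 0.0 } };

  for (int oy = 0; oy < 2; oy++)        /* the four neighbourhoods sit  */
    for (int ox = 0; ox < 2; ox++)      /* at offsets (0,0) .. (1,1)    */
      for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++)
          combined[y + oy][x + ox] += 0.25 * k[y] * k[x];

  /* one pass with 'combined' now replaces four weighted sums plus an
     average; the weights still sum to 1 */
  double sum = 0.0;
  for (int y = 0; y < 5; y++)
    for (int x = 0; x < 5; x++)
      sum += combined[y][x];
  printf ("5x5 combined kernel, weight sum = %.3f\n", sum);
  return 0;
}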
My patch addresses these issues the following way:
The decimation routines are replaced by a straight-forward 2x2 box filter. It also introduces special decimation routines to decimate in X or Y direction only.
As already explained in my previous mail, the decimation routines are only used for the pre-scaling steps. As soon as the image is close enough to the final size, the chosen interpolation routine is used. This gives continuous results for all scale factors as there is no longer any special casing for scaling down by 50%.
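For reference, one 2x2 box decimation pass amounts to something like the sketch below (single channel and even dimensions assumed for brevity; this is not the code from gimp-decimate.diff):

/* One 2x2 box-filter decimation pass: every destination pixel is the
 * rounded average of a 2x2 block of source pixels.  Sketch only: single
 * channel, even dimensions, no tile handling. */
static void
box_decimate_2x2 (const unsigned char *src, int src_width, int src_height,
                  unsigned char *dest)
{
  int dest_width  = src_width  / 2;
  int dest_height = src_height / 2;

  for (int y = 0; y < dest_height; y++)
    for (int x = 0; x < dest_width; x++)
      {
        const unsigned char *p = src + (2 * y) * src_width + 2 * x;
        int sum = p[0] + p[1] + p[src_width] + p[src_width + 1];

        dest[y * dest_width + x] = (unsigned char) ((sum + 2) / 4);
      }
}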
My main problem with the code in trunk, though, is that I think its results are too blurry. Please have a look at the tests that I published at http://svenfoo.org/scalepatch/. And please try the patch and do your own tests.
I will be away over the weekend. Hopefully a few people can do some tests in that time so that we can decide how to proceed early next week.
Sven
patch for scale-region.c
Hi,
I also started to play with some enhancements on top of the patch we are discussing here. Using a steep gaussian filter instead of the plain box filter seems to be a good compromise. It's better at suppressing Moire patterns, at the cost of introducing a little blur. I have not yet finished this patch though and it has some issues. It works for the most common cases though and if someone is interested, it can be downloaded here: http://sven.gimp.org/misc/gimp-decimate-2.diff
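To make 'a steep gaussian instead of the plain box' concrete, the decimation weights could be derived along these lines; the tap count (4) and sigma (0.85) are illustrative guesses, not the values used in gimp-decimate-2.diff:

#include <math.h>
#include <stdio.h>

/* Build a short, steep Gaussian kernel as a replacement for the flat
 * { 0.25, 0.25, 0.25, 0.25 } weights of a 2x box decimation.  Four taps
 * and sigma = 0.85 are illustrative choices only. */
int
main (void)
{
  const double sigma     = 0.85;
  const double offset[4] = { -1.5, -0.5, 0.5, 1.5 };   /* tap centres in
                                                           source pixels  */
  double w[4], sum = 0.0;

  for (int i = 0; i < 4; i++)
    {
      w[i] = exp (-0.5 * (offset[i] * offset[i]) / (sigma * sigma));
      sum += w[i];
    }

  for (int i = 0; i < 4; i++)
    printf ("tap %d: %.4f\n", i, w[i] / sum);   /* normalised weights */

  return 0;
}

The wider, smoothly decaying taps are what buys the better Moire suppression, at the price of the little extra blur mentioned above.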
Sven
patch for scale-region.c
Hi
Sven Neumann wrote:
As already explained in my previous mail, the decimation routines are only used for the pre-scaling steps. As soon as the image is close enough to the final size, the chosen interpolation routine is used. This gives continuous results for all scale factors as there is no longer any special casing for scaling down by 50%.
What I don't understand is why there's a need to interpolate at all in the case of scaling an image down. When scaling up, interpolation is used to estimate missing information, but when scaling down there is no missing information to be estimated - the problem is instead finding the best strategy for *discarding* information.
What I do in PhotoPrint is just use a simple sub-pixel-capable box filter - which is what your current approach (scale-by-nearest-power-of-two, then interpolate) is approximating.
The routine looks like this:
// We accumulate pixel values from a potentially
// large number of pixels and process all the samples
// in a pixel at one time.
double tmp[IS_MAX_SAMPLESPERPIXEL];
for(int i=0;i /* [...] */ GetRow(row);
// We use a Bresenham-esque method of calculating the
// pixel boundaries for scaling - add the smaller value
// to an accumulator until it exceeds the larger value,
// then subtract the larger value, leaving the remainder
// in place for the next round.
int a=0;
int src=0;
int dst=0;
while(dst<width)
{
    if(src>=source->width)
        src=source->width-1;
    for(int i=0;i /* [...] */ width-(a-width);
    p/=width;
    // p now contains the proportion of the next pixel
    // to be counted towards the output pixel.
    a-=source->width;  // And a now contains the remainder,
                       // ready for the next round.
    // So we add p * the new source pixel
    // to the current output pixel...
    if(src>=source->width)
        src=source->width-1;
    for(int i=0;i /* [...] */ width;
    }
    ++dst;
    // And start off the next output pixel with
    // (1-p) * the source pixel.
    for(int i=0;i /* [...] */
}
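(The list archive has eaten parts of the routine above wherever a '<' appeared; the remaining gaps are marked /* [...] */. Below is a self-contained sketch of the approach the comments describe, a Bresenham-style, sub-pixel-capable box filter applied to one row when downscaling. It is a reconstruction, not PhotoPrint's verbatim code; the names box_scale_row, src_row, dst_row and samples_per_pixel are assumptions.)

/* Sub-pixel box filter for one row, for dst_width <= src_width.
 * 'a' gains dst_width once per source pixel; each time it reaches
 * src_width an output pixel is complete, and the straddling source pixel
 * is split between the two output pixels in the ratio p : (1 - p). */
static void
box_scale_row (const double *src_row, int src_width,
               double       *dst_row, int dst_width,
               int           samples_per_pixel)
{
  double tmp[16];               /* assumes samples_per_pixel <= 16 */
  int    a = 0, src = 0, dst = 0;

  for (int i = 0; i < samples_per_pixel; i++)
    tmp[i] = 0.0;

  while (dst < dst_width)
    {
      int s = (src < src_width) ? src : src_width - 1;   /* clamp */

      a += dst_width;

      if (a < src_width)
        {
          /* this source pixel lies entirely inside the current output
             pixel, so accumulate all of it */
          for (int i = 0; i < samples_per_pixel; i++)
            tmp[i] += src_row[s * samples_per_pixel + i];
          src++;
        }
      else
        {
          /* this source pixel straddles an output-pixel boundary; the
             proportion p of it belongs to the current output pixel */
          double p = (double) (src_width - (a - dst_width)) / dst_width;

          a -= src_width;       /* remainder, ready for the next round */

          for (int i = 0; i < samples_per_pixel; i++)
            dst_row[dst * samples_per_pixel + i] =
              (tmp[i] + p * src_row[s * samples_per_pixel + i])
              * dst_width / src_width;         /* normalise by box area */

          dst++;

          /* ...and the next output pixel starts with (1 - p) of it */
          for (int i = 0; i < samples_per_pixel; i++)
            tmp[i] = (1.0 - p) * src_row[s * samples_per_pixel + i];
          src++;
        }
    }
}

A full 2-D reduction would apply this once per row and then once per column (or the other way around).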
My main problem with the code in trunk, though, is that I think its results are too blurry. Please have a look at the tests that I published at http://svenfoo.org/scalepatch/. And please try the patch and do your own tests.
The slight blurriness comes, I think, from performing the scaling in two distinct stages. Just for kicks, since I had a rare spare hour to play with such things, here are versions of the 3% and 23% test from your page, for comparison, scaled using the downsample filter whose core is posted above:
http://www.blackfiveservices.co.uk/3Percent.png http://www.blackfiveservices.co.uk/23Percent.png
Hope this is some help
All the best,
--
Alastair M. Robinson
patch for scale-region.c
Alastair M. Robinson wrote:
Hi
Sven Neumann wrote:
As already explained in my previous mail, the decimation routines are only used for the pre-scaling steps. As soon as the image is close enough to the final size, the chosen interpolation routine is used. This gives continuous results for all scale factors as there is no longer any special casing for scaling down by 50%.
What I don't understand is why there's a need to interpolate at all in the case of scaling an image down. When scaling up, interpolation is used to estimate missing information, but when scaling down there is no missing information to be estimated - the problem is instead finding the best strategy for *discarding* information.
What I do in PhotoPrint is just use a simple sub-pixel-capable box filter - which is what your current approach (scale-by-nearest-power-of-two, then interpolate) is approximating.
The routine looks like this:
[sub-pixel box-filter routine snipped - see Alastair's message above]
My main problem with the code in trunk, though, is that I think its results are too blurry. Please have a look at the tests that I published at http://svenfoo.org/scalepatch/. And please try the patch and do your own tests.
The slight blurriness comes, I think, from performing the scaling in two distinct stages. Just for kicks, since I had a rare spare hour to play with such things, here are versions of the 3% and 23% test from your page, for comparison, scaled using the downsample filter whose core is posted above:
http://www.blackfiveservices.co.uk/3Percent.png http://www.blackfiveservices.co.uk/23Percent.png
Hope this is some help
All the best,
--
Alastair M. Robinson
The code is not interpolating rather resampling (supersampling in case of lanczos and bicubic) in the case of scaling down.
Geert
patch for scale-region.c
Sven Neumann wrote:
Hi,
On Thu, 2008-08-28 at 22:38 +0200, Geert Jordaens wrote:
What are your doubts with the new code? Why would a simple box-filter be better for decimating?
My doubts with the current approach are manifold:
The current code has decimation routines, but they are only suited for the special case of scaling down by a factor of 0.5. There are two problems with this:
(1) The code actually uses the special decimation routine not only if scaling down by 0.5 in both directions, but also if the scale factor is 0.5 in just one direction (and different in the other).
(2) If scaling down by 50%, a special algorithm is used (the said decimation routines), but a very different algorithm is used when scaling down by other scale factors (say 51%). This introduces a discontinuity and gives quite unexpected results.
The other gripe I have with the current code is the way the decimation routines are implemented. Let's have a look at the cubic interpolation code; the lanczos code has the same issue. What the code does is calculate four weighted pixel sums for the neighborhoods of the four source pixels that surround the center of the destination pixel, and then average these four sums. This is quite inefficient, as exactly the same operation could be done faster and with fewer errors by calculating one larger pixel sum using an appropriate weighting matrix.
My patch addresses these issues the following way:
The decimation routines are replaced by a straight-forward 2x2 box filter. It also introduces special decimation routines to decimate in X or Y direction only.
As already explained in my previous mail, the decimation routines are only used for the pre-scaling steps. As soon as the image is close enough to the final size, the chosen interpolation routine is used. This gives continuous results for all scale factors as there is no longer any special casing for scaling down by 50%.
My main problem with the code in trunk, though, is that I think its results are too blurry. Please have a look at the tests that I published at http://svenfoo.org/scalepatch/. And please try the patch and do your own tests.
I will be away over the weekend. Hopefully a few people can do some tests in that time so that we can decide how to proceed early next week.
Sven
Thanks for the answers to my questions. I'd like to comment on them.
1. I agree that there is some optimisation to be done. I had already started a 1D version of the scale-region code, though I did not find the time to test it completely.
2. The whole point of doing the scale by 2 was that, in the case of decimating, a standard (integer) precomputed filter can be applied. This is an advantage for filters like bicubic and lanczos. The final scale step had to use the interpolation code.
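To illustrate the point about precomputed filters: with a fixed 2:1 factor the destination sample always sits midway between two source samples, so a lanczos2 kernel always sees taps at distances 0.5 and 1.5, and its weights can be computed once, including as integers for fixed-point code. (The tap placement and the fixed-point scale of 256 below are illustrative choices, not necessarily those used in GIMP's code.)

#include <math.h>
#include <stdio.h>

/* Precomputed lanczos2 weights for a fixed 2:1 decimation.  With the
 * destination sample centred between two source samples, the taps fall
 * at distances 0.5 and 1.5 on either side. */
static double
lanczos2 (double x)
{
  if (x == 0.0)
    return 1.0;
  if (fabs (x) >= 2.0)
    return 0.0;
  return 2.0 * sin (M_PI * x) * sin (M_PI * x / 2.0) / (M_PI * M_PI * x * x);
}

int
main (void)
{
  const double dist[4] = { 1.5, 0.5, 0.5, 1.5 };   /* tap distances */
  double w[4], sum = 0.0;
  int    iw[4];

  for (int i = 0; i < 4; i++)
    {
      w[i] = lanczos2 (dist[i]);
      sum += w[i];
    }

  for (int i = 0; i < 4; i++)
    {
      iw[i] = (int) floor (256.0 * w[i] / sum + 0.5);   /* fixed point */
      printf ("tap %d: %+.4f -> %+d/256\n", i, w[i] / sum, iw[i]);
    }
  return 0;
}

For any other scale factor the tap positions change from output pixel to output pixel, which is why the final step falls back to the interpolation code.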
As for testing the patch, no luck here - I'm renovating my house (partly) this weekend.
Geert
patch for scale-region.c
Hi,
On Fri, 2008-08-29 at 14:15 +0200, Sven Neumann wrote:
I also started to play with some enhancements on top of the patch we are discussing here. Using a steep gaussian filter instead of the plain box filter seems to be a good compromise. It's better at suppressing Moire patterns, at the cost of introducing a little blur. I have not yet finished this patch though and it has some issues. It works for the most common cases though and if someone is interested, it can be downloaded here: http://sven.gimp.org/misc/gimp-decimate-2.diff
In the meantime this patch has evolved and now also includes decimation routines that only scale in one direction. But currently I am still in favor of the first patch (gimp-decimate.diff).
Long-term we should try to get rid of the multi-pass scaling approach and instead implement the scaling properly. But for 2.6, I'd like to suggest that we apply gimp-decimate.diff.
Sven
patch for scale-region.c
On Fri, 29 Aug 2008 21:33:45 +0200, Sven Neumann wrote:
Hi,
On Fri, 2008-08-29 at 14:15 +0200, Sven Neumann wrote:
I also started to play with some enhancements on top of the patch we are discussing here. Using a steep gaussian filter instead of the plain box filter seems to be a good compromise. It's better at suppressing Moire patterns, at the cost of introducing a little blur. I have not yet finished this patch though and it has some issues. It works for the most common cases though and if someone is interested, it can be downloaded here: http://sven.gimp.org/misc/gimp-decimate-2.diff
In the meantime this patch has evolved and now also includes decimation routines that only scale in one direction. But currently I am still in favor of the first patch (gimp-decimate.diff).
Long-term we should try to get rid of the multi-pass scaling approach and instead implement the scaling properly. But for 2.6, I'd like to suggest that we apply gimp-decimate.diff.
Sven
Hi,
I have not looked at this code in a while, so I can't comment on how it does what it currently does; I will only comment on the results you posted.
I have compared the results by opening the images in separate tabs in Opera at 200%, which allows them to be viewed in exactly the same position and switched back and forth with one click. This allows a direct comparison without moving the eye; any differences become instantly obvious. This seems to be the most effective way for the brain to react to differences.
Having looked at the 3% reductions, which are probably the most critical (and make the most use of the binary division in your patch), I am not sure the results can be seen as superior.
Comparing Lanczos 3% old vs patched: lefthand building roof has bad moire effects that totally obscure underlying detail. Both sets of trees have much less obvious staircasing in the current code. There is an overall impression of sharpness in the new code but this seems really to be just high contrast artifacts with a lack of intermediate tones.
Observations are generally the same for the 3% cubic.
Similarly, a quick check on 50% cubic, old and patched, again at 200%: just looking at the top left corner, there are two dots. The current code renders them nice and round, whereas the patch shows fairly ugly artifacts. Admittedly, these artifacts are high contrast, but I see that as a defect, not a feature. Surely creating the grey tones necessary to smooth out the pixelisation is the aim of decimation code.
Interestingly, the blackfive code (thanks for sending that algo, Alastair) seems even harsher but does give some impression of sharpness by apparently accentuating edges.
If this is considered from an analytical, data-processing perspective, I can't imagine what the frequency response of this multipass approach must look like. It's hard to see how multipass binary division plus a final decimation filter can preserve more information than one well-designed filter.
Just a final observation to throw into the mix: I took the 3% lanczos and gave it a 25% sharpen. The result is almost identical in overall appearance and contrast to the patched lanczos 3%, but without the artifacts. I'm not sure what conclusions to draw from that, but it's an interesting result.
My weekend's scheduled for finishing a P.V. panel heliostat, so back to work now.
regards.
patch for scale-region.c
Hi :)
Geert Jordaens wrote:
The code is not interpolating rather resampling (supersampling in case of lanczos and bicubic) in the case of scaling down.
OK - perhaps I'm misunderstanding the approach taken here, then.
As with interpolation, the Lanczos/Bicubic functions are being used to fit a parametric function to the original discrete samples, yes?
The question is how is this function then used? Is a single sample taken from it for each sample of the scaled-down image, or is the function "integrated" over an appropriate interval for each sample?
I'd assumed, perhaps incorrectly, that the former approach was being used, and if so, then a simple box-filter should be more accurate; if the latter, however, then I'd expect the results to be better than a box filter.
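For what it's worth, here is a small sketch of the difference being asked about, using Catmull-Rom as the cubic (an assumption; GIMP's cubic may use different weights) and a 1-D signal at the Nyquist limit. It compares taking a single sample of the fitted curve per output pixel with averaging the curve over the output pixel's footprint:

#include <stdio.h>

/* Catmull-Rom cubic through p0..p3 at parameter t in [0,1], i.e. between
 * p1 and p2.  Chosen only as "a" cubic for illustration. */
static double
cubic (double p0, double p1, double p2, double p3, double t)
{
  return 0.5 * ((2.0 * p1) +
                (-p0 + p2) * t +
                (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t +
                (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t);
}

static int
clamp_idx (int i, int n)
{
  return i < 0 ? 0 : (i >= n ? n - 1 : i);
}

/* evaluate the fitted curve at an arbitrary source position x */
static double
eval (const double *s, int n, double x)
{
  int    k = (int) x;
  double t = x - k;

  return cubic (s[clamp_idx (k - 1, n)], s[clamp_idx (k, n)],
                s[clamp_idx (k + 1, n)], s[clamp_idx (k + 2, n)], t);
}

int
main (void)
{
  double s[16];
  for (int i = 0; i < 16; i++)
    s[i] = i & 1;                 /* alternating 0/1 pixels */

  /* downscale 2:1; output sample j is taken at source position 2*j
     (one simple alignment convention, assumed for illustration) */
  for (int j = 2; j < 6; j++)
    {
      double point = eval (s, 16, 2.0 * j);

      double area = 0.0;
      int    sub  = 16;           /* sub-samples across the 2-pixel footprint */
      for (int i = 0; i < sub; i++)
        area += eval (s, 16, 2.0 * j - 1.0 + 2.0 * (i + 0.5) / sub);
      area /= sub;

      printf ("output %d: single sample = %.2f, integrated = %.2f\n",
              j, point, area);
    }
  return 0;
}

With this alignment the single-sample version collapses the alternating pattern to a constant (aliasing), while integrating over the footprint returns the mid-grey one would expect.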
All the best,
--
Alastair M. Robinson
patch for scale-region.c
Hi :)
gg@catking.net wrote:
Comparing Lanczos 3% old vs patched: lefthand building roof has bad moire effects that totally obscure underlying detail. Both sets of trees have much less obvious staircasing in the current code. There is an overall impression of sharpness in the new code but this seems really to be just high contrast artifacts with a lack of intermediate tones.
I think these are aliasing artifacts caused by high-frequency components in the original image - unless you take steps to remove frequencies above the Nyquist limit of the target sample rate before resampling a signal, aliasing will result. And as you noted, it affects my code too.
Reducing that effect requires some form of low-pass filtering before scaling - to remove the high-frequency components which can't be represented in the lower-resolution image.
Here's another version of the 3% reduction image, with a 33-pixel-radius (100% / 3%) gaussian blur applied before the reduction:
http://www.blackfiveservices.co.uk/3PercentPreBlur.png
I also note that my original 3% version was one pixel narrower than
Sven's, so here it is again:
http://www.blackfiveservices.co.uk/3Percent.png
Interestingly, the blackfive code (thanks for sending that algo, Alastair) seems even harsher but does give some impression of sharpness by apparently accentuating edges.
I suspect that's just the result of a "cleaner", single-stage reduction with the aliasing artifacts on top.
If this is considered from an analytical, data-processing perspective, I can't imagine what the frequency response of this multipass approach must look like.
Chances are it would be low-pass to some degree, so arguably a beneficial side-effect, even if a designed filter would be an improvement!
All the best,
--
Alastair M. Robinson
patch for scale-region.c
On Sat, 30 Aug 2008 19:35:20 +0200, Alastair M. Robinson wrote:
Comparing Lanczos 3% old vs patched: lefthand building roof has bad moire effects that totally obscure underlying detail. Both sets of trees have much less obvious staircasing in the current code. There is an overall impression of sharpness in the new code but this seems really to be just high contrast artifacts with a lack of intermediate tones.
I think these are aliasing artifacts caused by high-frequency components in the original image - unless you take steps to remove frequencies above the Nyquist limit of the target sample rate before resampling a signal, aliasing will result. And as you noted, it affects my code too.
That's exactly what lanczos does, which is why you don't see any aliasing and you get smooth transitions. Whether the overall softness of the image is a correct rendition and a necessary feature, or due to some minor errors in calculating the kernel, may be worth looking at.
If the code is close but not quite right, some softening would be likely.
/gg
patch for scale-region.c
Hi,
while I see your points and I appreciate your comparisons of the results, fact is that the current code has bugs that are fixed by my patch. The most apparent problem is that the current code is using the 2-dimensional decimation routines even when downscaling only in one direction. To see this, create a new image, apply a standard grid on it using Filter->Render->Patterns->Grid and scale it down in one direction by a scale factor smaller than 0.5. The one pixel wide grid lines will become blurry. I don't think this is acceptable and so far the only choice we have is to apply either gimp-decimate.diff or gimp-decimate-2.diff as found on http://sven.gimp.org/misc/. So far I am in favor of applying gimp-decimate.diff. Unless someone objects and provides an alternative, I will commit this change this weekend.
Sven
patch for scale-region.c
Hello Sven:
... To see this, create a new image, apply a standard grid on it using Filter->Render->Patterns->Grid and scale it down in one direction by a scale factor smaller than 0.5. The one pixel wide grid lines will become blurry. I don't think this is acceptable and so far the only choice we have is to apply either gimp-decimate.diff or gimp-decimate-2.diff...
A cartoon of what I understand about downsampling, which may be useful when people debate the various merits of downsampling methods:
There are two extremes in what most people expect from downsampling. What they expect basically depends on what the downsampling is applied to:
--- Old school CG type images (e.g., Super Mario or the old Wilbur, the Gimp mascot). Then, nearest neighbour (and analogous methods) will, in most situations, do better than box filtering (and most LINEAR interpolatory methods). The reason is that the picture is made up of flat colour areas with sharp boundaries, and anything (linear) which deviates a lot from nearest neighbour will not preserve the property of being made up of flat colour areas separated by sharp lines. For most people, blur, in this context, is more annoying than aliasing.
--- Digital photographs, in which the image is usually made up of smooth colour areas with blurry boundaries, and in addition, there is noise and demosaicing artifacts. Then, in general, nearest neighbour is not acceptable, because it amplifies the noise (which is not present in CG images) and aliasing is more visually jarring than blur. In this situation, box filtering (especially its exact area variant) and analogous methods will, in most situations, do better than nearest neighbour.
Consequence:
Linear methods cannot make both groups of people happy.
Making most people happy will require TWO (linear) downsampling methods.
Alternatively, it will require having a parameter (called blur?) which, when equal to 0, gives a method which is close to nearest neighbour, and when equal to 1, gives a method which is close to box filtering.
I can help with this.
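One way to read the 'blur parameter' suggestion, as a sketch (1-D, single channel, integer factor; the parameterisation is only an illustration, not Nicolas's formula):

/* "blur" parameter for 1-D downsampling by an integer factor:
 * blur = 0 keeps only the nearest source sample (sharp, can alias),
 * blur = 1 averages the whole footprint (box filter, softer),
 * values in between blend the two results. */
static void
downsample_with_blur (const double *src, int src_len,
                      double *dst, int factor, double blur)
{
  int dst_len = src_len / factor;

  for (int j = 0; j < dst_len; j++)
    {
      /* nearest-neighbour pick from the centre of the footprint */
      double nearest = src[j * factor + factor / 2];

      /* box average over the footprint */
      double box = 0.0;
      for (int i = 0; i < factor; i++)
        box += src[j * factor + i];
      box /= factor;

      dst[j] = (1.0 - blur) * nearest + blur * box;
    }
}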
-------
Another important point which was raised is that if the image is enlarged in one direction and reduced in the other, one single method is unlikely to do well.
Within the GEGL approach, it may be that such situations are better handled by upsampling first (using a good upsample method) in the upsampling direction, then feeding the result to the downsampler in the downsampling direction.
That is: Don't expect one single method/"plug-in" to do both.
In summary:
To stretch in one direction and shrink in the other, first do a one direction stretch, followed by a one direction shrink.
-------
Also, in previous emails about this, I saw the following valid point being made:
Suppose that the following strategy is followed for downsampling.
To make things more explicit, I'll use specific numbers.
Suppose that we want to downsample an image from dimensions 128x64 to 15x9 (original pixel dimensions are powers of two for the sake of simplicity).
First, box filter down (by powers of two, a different number of times in each direction) to 16x16, then use a standard resampling method (bilinear, say) to downsample to 15x9.
The point that was made was that doing things this way is not continuous, meaning that scaling factors which are almost the same will not give images which are almost the same.
For example, if one followed this strategy to downsample to 17x9 instead of 15x9, one would first box filter down to 32x16, then apply bilinear. It should surprise no one that this may produce a fairly different picture.
The point I want to make about this is that it is possible to fix this "discontinuous" behavior, as follows.
Produce TWO box filtered down images.
In the case of downsampling from 128x64 to 15x9, the two downsamples would be of dimensions 16x16 and 8x8.
Then, downsample the 16x16 to 15x9 using, say, bilinear, and upsample the 8x8 to 15x9 using, again, bilinear, making sure that the sampling keeps the alignment of the images (I know how to do this: it is not hard).
Then, blend the two images as follows:
Let Theta = ((15-8)/(16-8)+(9-8)/(16-8))/2.
Final image = Theta * downsample + (1-Theta) * upsample.
If you think about what this does, you will realize that this satisfies the criterion that nearby downsampling factors give nearby images.
(WARNING: nearest neighbour is discontinuous, so the nearby images can actually be quite different. But they will be less different than with standard nearest neighbour.)
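A sketch of the bookkeeping for this scheme (the resampling and blending steps are only indicated in comments; the helper choice, e.g. bilinear, is as in the description above):

#include <stdio.h>

/* Find the two power-of-two box reductions that bracket the target size
 * (halving each axis independently, as described above) and compute the
 * blend factor Theta. */
int
main (void)
{
  int src_w = 128, src_h = 64;   /* the example from this mail */
  int dst_w = 15,  dst_h = 9;

  int hi_w = src_w;
  while (hi_w / 2 >= dst_w)
    hi_w /= 2;

  int hi_h = src_h;
  while (hi_h / 2 >= dst_h)
    hi_h /= 2;

  int lo_w = hi_w / 2;
  int lo_h = hi_h / 2;

  double theta = ((double) (dst_w - lo_w) / (hi_w - lo_w) +
                  (double) (dst_h - lo_h) / (hi_h - lo_h)) / 2.0;

  printf ("finer level %dx%d, coarser level %dx%d, Theta = %.3f\n",
          hi_w, hi_h, lo_w, lo_h, theta);

  /* final image = Theta * (finer level resampled down to dst)
                 + (1 - Theta) * (coarser level resampled up to dst),
     both resampled with the same method (e.g. bilinear) and kept aligned. */
  return 0;
}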
If someone wants to implement the above, I can help.
-----
I hope someone finds the above useful when thinking about the downsampling issue.
With regards,
Nicolas Robidoux
Laurentian University / Université Laurentienne
patch for scale-region.c
On Sat, 06 Sep 2008 00:22:58 +0200, Sven Neumann wrote:
Hi,
while I see your points and I appreciate your comparisons of the results, fact is that the current code has bugs that are fixed by my patch. The most apparent problem is that the current code is using the 2-dimensional decimation routines even when downscaling only in one direction. To see this, create a new image, apply a standard grid on it using Filter->Render->Patterns->Grid and scale it down in one direction by a scale factor smaller than 0.5. The one pixel wide grid lines will become blurry. I don't think this is acceptable and so far the only choice we have is to apply either gimp-decimate.diff or gimp-decimate-2.diff as found on http://sven.gimp.org/misc/. So far I am in favor of applying gimp-decimate.diff. Unless someone objects and provides an alternative, I will commit this change this weekend.
Sven
Clearly, trapping the special case of only scaling in one dimension is a worthwhile optimisation. That would seem to be a separate issue from fundamentally changing the scaling algo. Could I suggest you break these two changes into separate patches?
I had a quick look at the current code during the week and found it hard to recognise the "lanczos" in decimateLanczos2(). There seems to be some sort of gaussian filter being applied, which probably accounts for the softening. I'm not sure this is necessary if the reduction is indeed using lanczos, since it is in itself a frequency filter. It may, however, be needed with the current code, which appears to use fixed coeffs and therefore presumably cannot be a filter tuned to the specific scaling.
In any case the results are quite good and the code seems stable in the limited testing I've been able to do.
I doubt I'll have time to get into coding any changes in the immediate future, but if you can split the patch it would make developing a more rigorous filter easier in the future.
regards.
--
patch for scale-region.c
Hi,
On Mon, 2008-09-08 at 10:04 +0200, gg@catking.net wrote:
Clearly, trapping the special case of only scaling in one dimension is a worthwhile optimisation. That would seem to be a separate issue from fundamentally changing the scaling algo. Could I suggest you break these two changes into separate patches?
You misunderstood my patch then. The patch is not an optimization, not at all. It just fixes a bug in what used to be the current code ("used to be" because the patch has been committed to SVN trunk in the meantime). The code simply treated this case wrongly: it wasn't slow, it created an obviously wrong result.
We will have time again in the next development cycle to improve this code. Or we might decide to use GEGL for scaling. For now, I believe that the code in trunk handles all cases reasonably well and we should stick to it for the upcoming 2.6 release. Of course, if someone finds a corner-case that the code handles incorrectly, please let me know about it.
Sven