unsharp mask
This discussion is connected to the gimp-developer-list.gnome.org mailing list which is provided by the GIMP developers and not related to gimpusers.com.
This is a read-only list on gimpusers.com so this discussion thread is read-only, too.
unsharp mask | bioster | 19 Nov 21:50 |
unsharp mask | photocomix | 22 Nov 00:29 |
unsharp mask | bioster | 22 Nov 13:24 |
unsharp mask | Ofnuts | 22 Nov 13:59 |
unsharp mask | Patrick Horgan | 22 Nov 20:59 |
unsharp mask | Ofnuts | 22 Nov 21:08 |
unsharp mask | Patrick Horgan | 22 Nov 21:18 |
unsharp mask | Ofnuts | 22 Nov 21:26 |
unsharp mask | Patrick Horgan | 23 Nov 07:24 |
unsharp mask | gg@catking.net | 23 Nov 11:50 |
unsharp mask | bioster | 23 Nov 15:11 |
unsharp mask | Torsten Neuer | 23 Nov 17:03 |
unsharp mask | gg@catking.net | 23 Nov 21:30 |
unsharp mask | Patrick Horgan | 24 Nov 01:22 |
unsharp mask | Patrick Horgan | 24 Nov 01:04 |
unsharp mask | gg@catking.net | 24 Nov 02:17 |
unsharp mask | Patrick Horgan | 24 Nov 02:47 |
unsharp mask | bioster | 24 Nov 16:03 |
unsharp mask | bioster | 24 Nov 18:08 |
unsharp mask | Bill Skaggs | 24 Nov 16:25 |
unsharp mask | " | 24 Nov 16:48 |
unsharp mask | gg@catking.net | 24 Nov 21:47 |
unsharp mask | saulgoode at flashingtwelve.brickfilms.com | 08 Dec 16:53 |
unsharp mask | Rob Antonishen | 08 Dec 18:22 |
unsharp mask | Sven Neumann | 08 Dec 18:28 |
unsharp mask | Alexia Death | 08 Dec 18:35 |
unsharp mask | Michael Natterer | 08 Dec 20:46 |
unsharp mask | Patrick Horgan | 08 Dec 21:37 |
unsharp mask | gg at catking.net | 08 Dec 20:49 |
unsharp mask | Sven Neumann | 08 Dec 20:57 |
unsharp mask | Alexia Death | 08 Dec 21:15 |
unsharp mask | Patrick Horgan | 08 Dec 21:39 |
unsharp mask | Sven Neumann | 08 Dec 22:39 |
unsharp mask | peter sikking | 09 Dec 16:18 |
unsharp mask | gg at catking.net | 08 Dec 22:10 |
unsharp mask | Rob Antonishen | 08 Dec 19:40 |
unsharp mask | Sven Neumann | 08 Dec 20:44 |
unsharp mask
Hi there, I'm new to this forum and was hoping someone would be able to answer a question about the unsharp mask filter.
I'm doing some image processing for work, and was going through various filters and found that the unsharp mask has some qualities that I like. So I read up on unsharp masking and implemented it in my own program. However, my results aren't quite the same.
Put simply: gimp's output has more contrast, and appears somewhat smoother.
After playing with my unsharp mask a bit, I downloaded the source to 2.6.11 (possibly a mistake, since I am using the 2.6.10 binary) and found the code that does the unsharp masking (unsharp-mask.c). Now, looking through the code I don't see gimp doing anything really different from me outside of the convolution matrix code. Frankly, I don't understand the convolution matrix code.
Why does it take 50 samples and then average them? Is this significant?
I tried running some numbers through the equation e^-(x^2/2s^2) in the comments, and ran that through a calculator. The numbers largely came out negative, but when I used those numbers they gave obviously incorrect output (hugely overexposed appearance).
Are there any other non-standard tweaks to this algorithm that I haven't spotted yet?
I'm currently using a simple convolution of: .1, .2, .4, .2, .1. For a similarly sized convolution matrix, what would gimp be using? My development environment isn't set up to compile gimp.
Thanks for any assistance!
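[For reference, here is a sketch of the kind of simple convolution described above, using the poster's own kernel. This is an illustration only, assuming gray-scale samples and edge clamping; it is not code from either implementation.]

```c
/* One horizontal pass of the poster's 5-tap kernel over a row of
 * gray-scale samples, clamping at the image edges.  A vertical pass
 * with the same kernel would complete the separable blur. */
static void
blur_row (const double *src, double *dst, int n)
{
  static const double kernel[5] = { 0.1, 0.2, 0.4, 0.2, 0.1 };

  for (int i = 0; i < n; i++)
    {
      double sum = 0.0;
      for (int k = -2; k <= 2; k++)
        {
          int j = i + k;
          if (j < 0)  j = 0;        /* clamp to edge */
          if (j >= n) j = n - 1;
          sum += kernel[k + 2] * src[j];
        }
      dst[i] = sum;
    }
}
```

On a step edge such as 100,100,100,200,200,200 this smears the transition (130 and 170 at the two pixels beside the step), which is the blurred version the unsharp mask later subtracts from the original.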
unsharp mask
Now, looking through the code I don't see gimp doing anything really different from me outside of the convolution matrix code. Frankly, I don't understand the convolution matrix code.
Why does it take 50 samples and then average them? Is this significant?
Even if I am not a developer, I can assure you that considering the neighboring pixels (that is what convolution matrices are for) is VERY significant, and that may well explain why the result looks simultaneously more contrasted and smoother.
I think you should really have a look at what a convolution matrix can do.
unsharp mask
Now, looking through the code I don't see gimp doing anything really different from me outside of the convolution matrix code. Frankly, I don't understand the convolution matrix code.
Why does it take 50 samples and then average them? Is this significant?
Even if I am not a developer, I can assure you that considering the neighboring pixels (that is what convolution matrices are for) is VERY significant, and that may well explain why the result looks simultaneously more contrasted and smoother.
I think you should really have a look at what a convolution matrix can do.
Er, I think you misunderstood me a bit. I believe I understand what the convolution matrix is, and I also understand why it's important. Yes, the entire purpose of the convolution matrix is to look at neighbouring pixels to create the pixel you're looking for, but that is not what I was talking about when I said '50 samples'.
In the code which *creates* the convolution matrix it does not look at the pixels at all. It creates the convolution matrix using an equation. I would think that it would simply plug in the variables for each convolution matrix value it's looking for, but what it actually does is take 50 numbers, plug those into the equation, and average them.
Just to repeat, this is in the part that creates the convolution matrix, not the part that uses the convolution matrix to generate pixel values. I was wondering about the purpose of all those samples along the equation that generates the matrix.
unsharp mask
On 11/22/2010 02:24 PM, bioster wrote:
Now, looking through the code I don't see gimp doing anything really different from me outside of the convolution matrix code. Frankly, I don't understand the convolution matrix code.
Why does it take 50 samples and then average them? Is this significant?
Even if I am not a developer, I can assure you that considering the neighboring pixels (that is what convolution matrices are for) is VERY significant, and that may well explain why the result looks simultaneously more contrasted and smoother.
I think you should really have a look at what a convolution matrix can do.
Er, I think you misunderstood me a bit. I believe I understand what the convolution matrix is, and I also understand why it's important. Yes, the entire purpose of the convolution matrix is to look at neighbouring pixels to create the pixel you're looking for, but that is not what I was talking about when I said '50 samples'.
In the code which *creates* the convolution matrix it does not look at the pixels at all. It creates the convolution matrix using an equation. I would think that it would simply plug in the variables for each convolution matrix value it's looking for, but what it actually does is take 50 numbers, plug those into the equation, and average them.
Just to repeat, this is in the part that creates the convolution matrix, not the part that uses the convolution matrix to generate pixel values. I was wondering about the purpose of all those samples along the equation that generates the matrix.
From the comments in the code, it looks like it's computing the convolution matrix values by integrating the gaussian bell curve (using 100 points total).
unsharp mask
On 11/22/2010 05:59 AM, Ofnuts wrote:
On 11/22/2010 02:24 PM, bioster wrote:
... elisions by patrick ...
Er, I think you misunderstood me a bit. I believe I understand what the convolution matrix is, and I also understand why it's important. Yes, the entire purpose of the convolution matrix is to look at neighbouring pixels to create the pixel you're looking for, but that is not what I was talking about when I said '50 samples'.
In the code which *creates* the convolution matrix it does not look at the pixels at all. It creates the convolution matrix using an equation. I would think that it would simply plug in the variables for each convolution matrix value it's looking for, but what it is actually doing is taking 50 numbers and plugs those into the equation, then averages them.
Just to repeat, this is in the part that creates the convolution matrix, not the part that uses the convolution matrix to generate pixel values. I was wondering about the purpose of all those samples along the equation that generates the matrix.
From the comments in the code, it looks like it's computing the convolution matrix values by integrating the gaussian bell curve (using 100 points total).
Now I'm interested. Where can I look in the code for this? They really do this every time with the same results instead of just having an array of the numbers so generated?
Patrick
unsharp mask
On 11/22/2010 09:59 PM, Patrick Horgan wrote:
From the comments in the code, it looks like it's computing the convolution matrix values by integrating the gaussian bell curve (using 100 points total).
Now I'm interested. Where can I look in the code for this? They really do this every time with the same results instead of just having an array of the numbers so generated?
plugins/unsharp-mask.c, line 768 and up. It's not always the same results, because the blur radius is used as the standard deviation of the integrated gaussian.
unsharp mask
On 11/22/2010 12:59 PM, Patrick Horgan wrote:
Now I'm interested. Where can I look in the code for this? They really do this every time with the same results instead of just having an array of the numbers so generated?
Never mind. I saw it. It depends on the radius, which is a double. Even though doubles aren't reals, so it wouldn't be an infinite number of matrices required, doing it for each possible value less than 10 would still be an impossibly large number.
Reading this code was a pleasure. Clearly written by someone who wants the code to be maintainable/comprehensible by others.
Patrick
unsharp mask
On 11/22/2010 10:18 PM, Patrick Horgan wrote:
On 11/22/2010 12:59 PM, Patrick Horgan wrote:
Now I'm interested. Where can I look in the code for this? They really do this every time with the same results instead of just having an array of the numbers so generated?
Never mind. I saw it. It depends on the radius, which is a double. Even though doubles aren't reals, so it wouldn't be an infinite number of matrices required, doing it for each possible value less than 10 would still be an impossibly large number. Reading this code was a pleasure. Clearly written by someone who wants the code to be maintainable/comprehensible by others.
Well, I wondered too: the radius slider goes from 0 to 120 in .1 steps so that would be only 1200 values to pre-calculate.
unsharp mask
On 11/22/2010 01:26 PM, Ofnuts wrote:
Well, I wondered too: the radius slider goes from 0 to 120 in .1 steps so that would be only 1200 values to pre-calculate.
They only do it for numbers less than 10 and the length varies with the radius (from a low of 5 to a high of 45) as well. The total number of entries would be 2520 * 8 bytes for a gdouble so it would only need 20,160 bytes of storage, ideally but since C doesn't have variable length arrays you'd have to declare the array as something like:
static gdouble global_cmatrix[100][45];
which would make the length 4500*8=36000 bytes but of course you'd have to store the length of each as well. If you wanted to you could store them as unsigned chars, so it would be another 100 bytes, or a total of 36100 bytes. In a trial it really added 36831 bytes. How would that affect the usability of the plugin? I've attached an include file that has the matrices you would need. Then you could use:
static gint
gen_convolve_matrix (gdouble radius,
                     gdouble **cmatrix_p)
{
  *cmatrix_p = global_cmatrix[(int)((radius * 10) - .5)];
  return cmatrix_lens[(int)((radius * 10) - .5)];
}
I don't know if it would be a good idea or not.
Is it true though that the user interface only _allows_ values in tenths or is it that it only displays values in tenths but returns values in between?
Patrick
_______________________________________________ Gimp-developer mailing list
Gimp-developer@lists.XCF.Berkeley.EDU https://lists.XCF.Berkeley.EDU/mailman/listinfo/gimp-developer
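[As a side note on the snippet above: the index arithmetic only behaves if the radius really arrives in exact tenth steps, which is the question Patrick raises. A standalone sketch of just that mapping (`global_cmatrix` and `cmatrix_lens` come from the attached include file, which isn't reproduced here, so only the index function is shown):]

```c
/* The table-index mapping from the message above, isolated so it can
 * be checked on its own: radii 0.1 .. 10.0 in tenth steps map to
 * indices 0 .. 99.  If the UI ever returns an in-between value such
 * as 0.25, the truncation silently maps it to a neighboring entry. */
static int
radius_to_index (double radius)
{
  return (int) ((radius * 10) - .5);
}
```

So precomputation only works if the slider's granularity is a hard guarantee, which is exactly the maintainability concern raised later in the thread.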
unsharp mask
On 11/23/10 08:24, Patrick Horgan wrote:
On 11/22/2010 01:26 PM, Ofnuts wrote:
Well, I wondered too: the radius slider goes from 0 to 120 in .1 steps so that would be only 1200 values to pre-calculate.
They only do it for numbers less than 10 and the length varies with the radius (from a low of 5 to a high of 45) as well. The total number of entries would be 2520 * 8 bytes for a gdouble so it would only need 20,160 bytes of storage, ideally but since C doesn't have variable length arrays you'd have to declare the array as something like:
static gdouble global_cmatrix[100][45];
which would make the length 4500*8=36000 bytes but of course you'd have to store the length of each as well. If you wanted to you could store them as unsigned chars, so it would be another 100 bytes, or a total of 36100 bytes. In a trial it really added 36831 bytes. How would that affect the usability of the plugin? I've attached an include file that has the matrices you would need. Then you could use:
static gint
gen_convolve_matrix (gdouble radius,
                     gdouble **cmatrix_p)
{
  *cmatrix_p = global_cmatrix[(int)((radius * 10) - .5)];
  return cmatrix_lens[(int)((radius * 10) - .5)];
}

I don't know if it would be a good idea or not.
Is it true though that the user interface only _allows_ values in tenths or is it that it only displays values in tenths but returns values in between?
Patrick
What is the aim here? Unless I've missed the point, it seems like a trade-off between computation time for the matrix and memory footprint of storing all the possible values that matrix could hold.
That's a fair bit of memory to grab if there's not a real need. How long does it take to calculate the matrix each time? Is this even a noticeable proportion of the time required to apply the filter to an image?
If I've got the wrong end of the stick, what is the problem with the current code?
regards.
unsharp mask
On 11/23/10 08:24, Patrick Horgan wrote:
What is the aim here? Unless I've missed the point, it seems like a trade-off between computation time for the matrix and memory footprint of storing all the possible values that matrix could hold.
That's a fair bit of memory to grab if there's not a real need. How long does it take to calculate the matrix each time? Is this even a noticeable proportion of the time required to apply the filter to an image?
If I've got the wrong end of the stick, what is the problem with the current code?
regards.
I think they're talking about trading off between spending CPU time computing the matrix and spending memory storing it precomputed.
I think I disagree with you about it being a "fair bit of memory". 30-40k of memory isn't a big deal by today's standards, and when you put that beside a modestly sized image which can easily run a few megabytes, I would consider it entirely reasonable. That's assuming you release that memory after your unsharp tool is done. The computation time is probably in the same class... my computer is fairly brisk and I don't think the time spent computing the matrix is noticeable.
So it's not that there's a "problem" with the current method, they're just pondering whether they can squeeze better performance out of it by making the tradeoff.
unsharp mask
Am 23.11.2010 16:11, schrieb bioster:
On 11/23/10 08:24, Patrick Horgan wrote:
What is the aim here? Unless I've missed the point, it seems like a trade-off between computation time for the matrix and memory footprint of storing all the possible values that matrix could hold.
That's a fair bit of memory to grab if there's not a real need. How long does it take to calculate the matrix each time? Is this even a noticeable proportion of the time required to apply the filter to an image?
If I've got the wrong end of the stick, what is the problem with the current code?
regards.
I think they're talking about trading off between spending CPU time computing the matrix and spending memory storing it precomputed.
I think I disagree with you about it being a "fair bit of memory". 30-40k of memory isn't a big deal by today's standards, and when you put that beside a modestly sized image which can easily run a few megabytes, I would consider it entirely reasonable. That's assuming you release that memory after your unsharp tool is done. The computation time is probably in the same class... my computer is fairly brisk and I don't think the time spent computing the matrix is noticeable.
Don't forget about code size and the size of memory allocated for the matrix - this also takes up a certain amount of memory. Maybe not 30-40k, but the difference between run-time computed and pre-computed values would not be that big, but...
So it's not that there's a "problem" with the current method, they're just pondering whether they can squeeze better performance out of it by making the tradeoff.
And this is the real trade-off: trading maintainability of code - just imagine someone wanted to increase the precision of the "radius" parameter - for a minimum amount of speed.
This function IS CALLED ONLY ONCE for the whole filtering process!
Which means that this is clearly the least important place for code optimization in this plug-in, since the speed gained from making the matrix pre-computed will dissolve into nothingness with growing images!
How many microseconds does the computation of the matrix take ? Did anyone evaluate this yet ?
I mean, before starting to speculate about methods of optimization for a certain part of code, one should first have a look at what can be gained there - in this case: near zero.
Torsten
unsharp mask
On 11/23/10 18:03, Torsten Neuer wrote:
Am 23.11.2010 16:11, schrieb bioster:
On 11/23/10 08:24, Patrick Horgan wrote:
What is the aim here? Unless I've missed the point, it seems like a trade-off between computation time for the matrix and memory footprint of storing all the possible values that matrix could hold.
That's a fair bit of memory to grab if there's not a real need. How long does it take to calculate the matrix each time? Is this even a noticeable proportion of the time required to apply the filter to an image?
If I've got the wrong end of the stick, what is the problem with the current code?
regards.
I think they're talking about trading off between spending CPU time computing the matrix and spending memory storing it precomputed.
I think I disagree with you about it being a "fair bit of memory". 30-40k of memory isn't a big deal by today's standards, and when you put that beside a modestly sized image which can easily run a few megabytes, I would consider it entirely reasonable. That's assuming you release that memory after your unsharp tool is done. The computation time is probably in the same class... my computer is fairly brisk and I don't think the time spent computing the matrix is noticeable.
Don't forget about code size and the size of memory allocated for the matrix - this also takes up a certain amount of memory. Maybe not 30-40k, but the difference between run-time computed and pre-computed values would not be that big, but...
So it's not that there's a "problem" with the current method, they're just pondering whether they can squeeze better performance out of it by making the tradeoff.
And this is the real trade-off: trading maintainability of code - just imagine someone wanted to increase the precision of the "radius" parameter - for a minimum amount of speed.
This function IS CALLED ONLY ONCE for the whole filtering process!
Which means that this is clearly the least important place for code optimization in this plug-in, since the speed gained from making the matrix pre-computed will dissolve into nothingness with growing images!
How many microseconds does the computation of the matrix take ? Did anyone evaluate this yet ?
I mean, before starting to speculate about methods of optimization for a certain part of code, one should first have a look at what can be gained there - in this case: near zero.
Torsten
I agree, I think there's not much point unless someone benchmarks this and shows there is a real problem.
However , in reading up on this I found this article suggesting that it is at least 2x faster to do this operation on luminosity in YUV rather than RGB and it often looks better by avoiding odd looking inverse colours.
http://www.codeguru.com/cpp/g-m/gdi/gdi/article.php/c3675
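[A sketch of the luminance-only sharpening idea the linked article describes: sharpen only the luma and add the same delta back to all three channels, leaving chroma mostly untouched. This is the editor's own illustration of the technique, not the article's code, and the function names are hypothetical.]

```c
/* BT.601 luma weights; other standards use slightly different ones. */
static double
luma (double r, double g, double b)
{
  return 0.299 * r + 0.587 * g + 0.114 * b;
}

/* Sharpen a pixel's luminance only: compute the unsharp delta on Y
 * and apply it equally to R, G and B, so hue is barely disturbed. */
static void
sharpen_luma (double rgb[3], double blurred_luma, double amount)
{
  double y     = luma (rgb[0], rgb[1], rgb[2]);
  double delta = amount * (y - blurred_luma);

  for (int i = 0; i < 3; i++)
    {
      double v = rgb[i] + delta;
      if (v < 0.0)   v = 0.0;
      if (v > 255.0) v = 255.0;
      rgb[i] = v;
    }
}
```

Besides avoiding the odd inverse-colour fringes, this blurs and differences one channel instead of three, which is where the claimed speedup would come from.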
I also note that the preview could be streamlined in some areas.
1/ Clicking preview on and off clearly redoes the whole operation; this should probably be buffered if the params are the same as the last "on" state. Clicking on/off is a valid requirement to visually compare before and after. With a large radius this can be quite a long wait (2-3s on a 2.4GHz Athlon), which impedes a clear comparison.
The preview could be saved in a tile and simply restored if none of the parameters change and the window is not moved.
2/ Dragging the preview box could possibly benefit from storing the matrix, but again this could be micro-seconds.
3/ A small drag of the preview reverts the whole thing to unprocessed data and starts again. This is a waste of time if the drag is less than the whole preview window size. The drag could copy the already-sharpened area and just work on the two rectangles that got dragged in. Keeping the already-processed rect would also be good visual feedback, especially when using the NESW crosshairs to drag the window.
Some quick experimenting seems to indicate that this is common behaviour on ALL filter previews. While many are fast enough (on fastish processors), convolution-based filters can be rather slow.
Since unsharp is the most useful sharpen filter it may merit some work, especially if it would benefit across all filters.
/gg
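[A sketch of the buffering suggested in point 1/: remember the parameters of the last rendered preview and hand the cached pixels back when nothing has changed. The structure and names are hypothetical; this is not GIMP's preview code, and the render step is a stand-in.]

```c
#include <stdlib.h>
#include <string.h>

/* Cache of the last rendered preview, keyed on the filter parameters
 * and the preview window position. */
typedef struct
{
  double radius, amount;
  int    threshold;
  int    x, y;              /* preview window position */
  unsigned char *pixels;    /* cached rendered preview */
  int    valid;
} PreviewCache;

static int render_count = 0;   /* just to observe cache hits */

static const unsigned char *
get_preview (PreviewCache *c, double radius, double amount,
             int threshold, int x, int y, int npixels)
{
  if (c->valid && c->radius == radius && c->amount == amount &&
      c->threshold == threshold && c->x == x && c->y == y)
    return c->pixels;                 /* cache hit: no recompute */

  free (c->pixels);
  c->pixels = malloc (npixels);
  memset (c->pixels, 128, npixels);   /* stand-in for the real filter */
  render_count++;

  c->radius = radius; c->amount = amount; c->threshold = threshold;
  c->x = x; c->y = y; c->valid = 1;
  return c->pixels;
}
```

Toggling the preview on and off with unchanged parameters would then cost a pointer return instead of a 2-3 second re-filter.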
unsharp mask
On 11/23/2010 03:50 AM, gg@catking.net wrote:
... elision by patrick ...
What is the aim here? Unless I've missed the point, it seems like a trade-off between computation time for the matrix and memory footprint of storing all the possible values that matrix could hold. That's a fair bit of memory to grab if there's not a real need. How long does it take to calculate the matrix each time? Is this even a noticeable proportion of the time required to apply the filter to an image?
You're right. When the image of the plugin is only 7k or so, and the dynamically allocated memory for the matrix is 1/99th of the size of the static array, in comparison it's a fair bit. The only place we'd care about it would be at initial load of the plugin. It's still kb vs mb or gb of an image, so it's not a lot in real terms. The time to calculate the matrix each time is not noticeable on any modern computer. Shoot, running it 100 times to make the include file was only a few milliseconds.
The only reason I looked at it was because someone was talking about it and I wondered whether it was wasted time to calculate it each time the radius was less than 10. It filled in time while my brain was figuring out a templating problem in the background. It was a fairly meaningless exercise to avoid being annoyed at something I _couldn't_ figure out just then. My brain is really content to stay busy solving meaningless problems for fun though;)
To really know, you'd have to do tests each way to see if it really made any difference either way, and I suspect it doesn't. It's certainly not what holds anything up in the user's experience of the plugin. For a few, it might make a difference in comprehensibility, but I doubt it. A comment that says what the method calculates, vs a similar comment that the table is the result of such calculations doesn't seem much of a difference.
If I've got the wrong end of the stick, what is the problem with the current code?
I like that expression. Clearly means, "If I'm wrong" only longer and more colorful. I wonder where it came from? I don't think I've heard it before. Is it from a particular region of the world? Growing up in Texas we had a LOT of cool expressions. What kind of stick is it? What would you do with the right end of the stick? What happens to you if you have the wrong end of the stick? Just what the heck IS the metaphor? If someone is getting beaten the wrong end of the stick would be the one hitting you and the right end would be the end being held by the person using it, but clearly, that's not the metaphor! If it was, then "If I've got the wrong end of the stick" would mean, "If something bad is happening to me". My brain never stops! lol!
best regards,
Patrick
unsharp mask
On 11/23/2010 09:03 AM, Torsten Neuer wrote:
... elision by patrick ...
And this is the real trade-off: trading maintainability of code - just imagine someone wanted to increase the precision of the "radius" parameter - for a minimum amount of speed.
That's true.
This function IS CALLED ONLY ONCE for the whole filtering process!
Which means that this is clearly the least important place for code optimization in this plug-in, since the speed gained from making the matrix pre-computed will dissolve into nothingness with growing images!
How many microseconds does the computation of the matrix take ? Did anyone evaluate this yet ?
It's minimal, you wouldn't have to evaluate it to tell that. Inspection of the algorithm is enough. Sorry, didn't mean to cause an uproar! I was just being playful with code when someone was asking about the convolution matrix, and my brain took off down a tangent. No one in the conversations suggested that there was a problem, or that we make any changes to the code. We were just discussing one of the details about the algorithm and alternate ways of doing it. It's fun to talk about algorithms and the tradeoffs between calculation vs tables. No one said that there was a problem that had to be fixed. The tradeoffs are minimal, although your point about the maintainability of the code changing if someone wanted to change (not just increase, any change) the precision of the radius is the only point I've seen that could really differentiate between the choices. That's always an important consideration in design of stuff like this, "Would the precision of the input parameters ever change?" Thanks for bringing that into the discussion and thanks for joining the discussion.
I mean, before starting to speculate about methods of optimization for a certain part of code, one should first have a look at what can be gained there - in this case: near zero.
No one _was_ talking about optimizing. It was just a sort of fun water cooler discussion about code. It's true, that in this case there's not much to differentiate between the choices, but it's important to think about choices like this because we're not always maintaining code, sometimes we're writing new code. One of my favorite things about this plugin is the clarity of the code and the documentation. There's even one routine written twice, once in a clear way, disabled, and then again in an optimized way with a comment that it's the same algorithm as the clear one with loop invariants hoisted out of the loop. What a great way for a developer to communicate. Much better than the usual code with no comments.
Patrick
unsharp mask
On 11/24/10 02:04, Patrick Horgan wrote:
If it was, then "If I've got the wrong end of the stick" would mean, "If something bad is happening to me". My brain never stops! lol!
best regards,
Patrick
I think the basic idea is one of making a mistake by picking up a stick by the wrong end , which is covered in shit.
I don't know the exact origin. It's a bit like "when the shit hits the fan". I don't think you're supposed to ask whose fan and where the shit came from and how come it gets near the fan anyway?
It's just a colourful expression which is a bit more fun than saying "if I'm mistaken".
We're all agreed that avoiding calculating the matrix is pretty pointless. Any thoughts on what I said about the previews?
/gg/
unsharp mask
On 11/23/2010 06:17 PM, gg@catking.net wrote:
On 11/24/10 02:04, Patrick Horgan wrote:
If it was, then "If I've got the wrong end of the stick" would mean, "If something bad is happening to me". My brain never stops! lol!
best regards,
Patrick
I think the basic idea is one of making a mistake by picking up a stick by the wrong end , which is covered in shit.
Really! My word. That's graphic. That's ok. This IS a list having to do with graphics.
I don't know the exact origin. It's a bit like "when the shit hits the fan". I don't think you're supposed to ask whose fan and where the shit came from and how come it gets near the fan anyway?
Well at least the image of the metaphor is clear. You know that if shit hits a fan it's bad, but wrong end of a stick just isn't as obvious.
It's just a colourful expression which is a bit more fun than saying "if I'm mistaken".
We're all agreed that avoiding calculating the matrix is pretty pointless. Any thoughts on what I said about the previews?
Sounded important and made me think that if no one else looks into it by the time I finish all the other things I'm doing for various projects it would be fun and worthwhile looking into. Really, why wouldn't you cache the result of all that work? I'm getting to a point in my life where it's about time to get back into graphics and catch up with OpenGL (just got the new SuperBible), and learn gegl. Need to get a paying job soon too. Can't work for equity forever. Anyone need a good linux C/C++ systems programmer? lol!
Patrick
unsharp mask
On 11/23/2010 06:17 PM, gg@catking.net wrote:
Well at least the image of the metaphor is clear. You know that if shit hits a fan it's bad, but wrong end of a stick just isn't as obvious.
Well, I think the person who used the expression was being a bit circumspect in order to use cleaner language. Afaik, the "real" expression is more along the lines of: getting the shitty end of the stick.
If you want to speed up the code a little, I don't see any point to any of the 3 lines that include the variable 'ctable_p' in blur_line() in 'unsharp-mask.c'. It looks to me like just deleting those would eliminate some pointless arithmetic.
unsharp mask
Maybe it would help to have a general explanation of the principle behind the filter.
The basic idea of the Unsharp Mask is to enhance the difference between the original image and a blurred version of it. The algorithm first blurs the image, then calculates the difference between the original image and the blurred version, and then alters the original image by moving each pixel farther away from its blurred value.
The convolution is simply a way of blurring the image. There are countless ways of doing a blur -- the filter is using a rather crude approximation of a Gaussian blur, which is the most commonly used blurring algorithm.
-- Bill
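[Those steps reduce to one line per pixel once the blurred value is available. A sketch of that step only, with hypothetical names; the real plug-in also has a threshold parameter not covered here.]

```c
/* Given a pixel's original value and its blurred value, move the
 * original farther away from the blurred one by 'amount' and clamp
 * to the displayable range.  A sketch, not the plug-in's code. */
static int
unsharp_pixel (int orig, int blurred, double amount)
{
  double v = orig + amount * (orig - blurred);
  if (v < 0.0)   v = 0.0;
  if (v > 255.0) v = 255.0;
  return (int) (v + 0.5);
}
```

In flat areas the original equals its blurred value and nothing changes; at edges the dark side is pushed darker and the bright side brighter, which is the extra contrast the original poster observed.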
unsharp mask
On Wed, Nov 24, 2010 at 4:25 PM, Bill Skaggs wrote:
Maybe it would help to have a general explanation of the principle behind the filter.
The basic idea of the Unsharp Mask is to enhance the difference between the original image
and a blurred version of it.
unsharp mask
Now that I look at it, you can probably take out everything related to the ctable variable as well. Basically it looks like it generates a table of values in advance in order to avoid repeat math later, but it never uses it. So this should probably be either finished or removed.
The math to generate the values doesn't look more complex than the math to look up the value in a table, so if there's a performance increase it's probably minor. Might be worth checking out just in case though, since the code is basically already written.
unsharp mask
On 11/24/10 17:48, Øyvind Kolås wrote:
On Wed, Nov 24, 2010 at 4:25 PM, Bill Skaggs wrote:
Maybe it would help to have a general explanation of the principle behind the filter.
The basic idea of the Unsharp Mask is to enhance the difference between the original image
and a blurred version of it. The algorithm first blurs the image, then calculates the difference
between the original image and the blurred version, and then alters the original image by
moving each pixel farther away from its blurred value.
The convolution is simply a way of blurring the image. There are countless ways of doing a blur -- the filter is using a rather crude approximation of a Gaussian blur, which is the most commonly used blurring algorithm.
In GEGL, unsharp mask is implemented at a higher level of abstraction than the low-level filters and reuses gaussian blur directly, so speed improvements to gaussian blur in GEGL will also benefit unsharp mask directly. See http://git.gnome.org/browse/gegl/tree/operations/common/unsharp-mask.c. This version of unsharp mask can already be used in GIMP through the GEGL tool; it would probably be good if the unsharp mask in GIMP was properly replaced with the GEGL one.
/Øyvind Kolås
Hi Øyvind,
Congratulations on this solid body of work in designing and building the gegl library. I'm sure it will have a lot of benefit as gimp is slowly migrated to use it. It gives a great opportunity to restructure a code base that has, to a large extent, grown more by evolution than design. There always comes a point where things need a good shake-down. I'm sure gegl migration will provide that opportunity.
For the unsharp filter, yes, it does seem considerably quicker at applying the filter to the whole image than the existing gimp filter, but unfortunately it has to do the whole image all the time for the preview.
Wouldn't it be better to have a preview window similar to the existing gimp preview, which vastly reduces the amount of work needed for a preview and also lets the user focus on a particular zone of interest in the image?
Most images have a focal point or some other critical area where any effect needs to be optimised. In a shot of a person this will nearly always be the face. How the filter affects the grass behind the subject is often of much less interest. Being able to work on a close-up of a critical part is very valuable.
I think the preview is doing far more work than is needed, and with high-quality images being large, this can be quite a drag when previewing different parameter values to adjust the effect.
The comment I made earlier about the existing preview and the benefit of caching when flipping the preview on and off applies here also. Even more so, really, since it is processing the whole image. On a larger image it is simply not possible to flip the preview on/off to directly compare the two. It can take 10 seconds to process the image, but it needs to be fast.
The eye (or the brain) is very good at picking up the slightest difference when an image is flipped like that, and it is a useful technique to be able to compare the two directly.
A couple of quick points in passing:
1/ Why is there no threshold slider? How is the threshold determined? This can be a useful control.
2/ I find the slider titles and hint texts very unclear. "Std deviation" is too specific to the coding implementation and is no help to someone needing to process an image. The hint does not really help clarify.
"Std dev" is a scale factor, yet "scale" is strength?
The old "radius" seems clearer in terms of what it does to the image (that there is not a radius in the code is irrelevant).
The old "amount" was a pretty unhelpful, muddy term, but maybe simply "sharpness" would be better than "scale". After all, this is all about sharpening the image.
Could I suggest :
radius : range of the effect
sharpness: intensity of effect
threshold: sensitivity to detail
It looks like the underlying code has now reached a maturity where it could be used full-time, but I feel there is a slight regression in the level of control of the parameters and in the preview, which is, of course, very important in terms of usability.
regards, gg/
Gimp-developer mailing list Gimp-developer@lists.XCF.Berkeley.EDU https://lists.XCF.Berkeley.EDU/mailman/listinfo/gimp-developer
unsharp mask
A bit off-topic, but in one of the upcoming releases would it be possible to increase the maximum radius allowable in the Unsharp Mask dialog to "500" or so? With increased image sizes being much more prevalent than in the past, the original maximum ("120") can be somewhat limiting.
unsharp mask
Looks simple enough - it is only a GUI restriction (I was able to apply larger radii using the script-fu console)
Attached is a diff
-Rob A>
On Wed, Dec 8, 2010 at 11:53 AM, wrote:
A bit off-topic, but in one of the upcoming releases would it be possible to increase the maximum radius allowable in the Unsharp Mask dialog to "500" or so. With increased image sizes being much more prevalent than in the past, the original maximum ("120") can be somewhat limiting.
[Attachment scrubbed: diff, application/octet-stream, 622 bytes, archived at /lists/gimp-developer/attachments/20101208/fb3801bd/attachment.obj]
unsharp mask
On Wed, 2010-12-08 at 13:22 -0500, Rob Antonishen wrote:
Looks simple enough - it is only a GUI restriction (I was able to apply larger radii using the script-fu console)
Attached is a diff
We prefer patches created from git commits using 'git format-patch'. If you had submitted such a patch, I would have pushed your commit by now. Can you resend your diff with a commit log please?
Sven
unsharp mask
While we are on the subject already, I have a suggestion. Let's save countless noobs some time and change the menu label of "Unsharp Mask" to "Unsharp Mask Sharpen"?
--Alexia
unsharp mask
We prefer patches created from git commits using 'git format-patch'. If you had submitted such a patch, I would have pushed your commit by now. Can you resend your diff with a commit log please?
Sven
Sorry. I've never worked with git before.
Would the attached be correct?
-Rob A>
[Attachment scrubbed: 0001-Increased-maximum-radius-to-500-in-unsharp-mask-plug.patch, text/x-patch, 959 bytes, archived at /lists/gimp-developer/attachments/20101208/432e20e2/attachment.bin]
unsharp mask
On Wed, 2010-12-08 at 14:40 -0500, Rob Antonishen wrote:
We prefer patches created from git commits using 'git format-patch'. If you had submitted such a patch, I would have pushed your commit by now. Can you resend your diff with a commit log please?
Sven
Sorry. I've never worked with git before.
Would the attached be correct?
Doesn't totally follow the rules for the git commit log message (taken from the git commit manual page):
Though not required, it's a good idea to begin the commit message with a single short (less than 50 character) line summarizing the change, followed by a blank line and then a more thorough description. Tools that turn commits into email, for example, use the first line on the Subject: line and the rest of the commit in the body.
But yes, it does the trick and I'll push it right away.
Thanks, Sven
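[Editorial aside: for anyone else sending a first patch, the convention Sven quotes boils down to a short summary line, a blank line, a longer body, and then letting git produce the mail-ready file. A sketch of the workflow; the file path and wording are made up for illustration:]

```shell
# stage the edited file (path is hypothetical)
git add plug-ins/common/unsharp-mask.c

# summary line under 50 chars, blank line, then the details
git commit -m "Increase maximum unsharp-mask radius to 500

The old GUI limit of 120 is restrictive for today's
large images; the algorithm already handles larger radii."

# emit the latest commit as a mail-ready patch file (0001-*.patch)
git format-patch -1 HEAD
```

`git format-patch` derives the patch filename from the summary line, which is why a concise first line matters.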
unsharp mask
On Wed, 2010-12-08 at 20:35 +0200, Alexia Death wrote:
When we are on the subject already, I have a suggestion. Lets save the countless noobs time and change the menu label of unsharp mask to Unsharp Mask Sharpen?
Which would make the noobs think what exactly? That GIMP developers suffer from some semi-inverse tautological redundancy disorder?
Unsharp Mask Sharpen, ts ts ts :)
ciao, --mitch
unsharp mask
On 12/08/10 19:35, Alexia Death wrote:
When we are on the subject already, I have a suggestion. Lets save the countless noobs time and change the menu label of unsharp mask to Unsharp Mask Sharpen?
--Alexia
Good point, but is that any clearer than 'unblur mask blur'?
Until this stupid, cryptic, confusing name gets dumped, everyone who has not already met it will be confused.
Let's call it something else that makes sense, and maybe just put "unsharp" in the hint in case anyone knows it from elsewhere.
/gg
unsharp mask
On Wed, 2010-12-08 at 20:35 +0200, Alexia Death wrote:
When we are on the subject already, I have a suggestion. Lets save the countless noobs time and change the menu label of unsharp mask to Unsharp Mask Sharpen?
Seriously, Unsharp Mask is the correct term, and it is widely known and mentioned in pretty much any book/tutorial that covers image manipulation and sharpening algorithms. It doesn't really help to disguise the filter by giving it a different name. GIMP is not really meant to be the application that a noob should use for their casual image manipulation needs. There are other applications that fill this need, and we should leave it to them to present the user with just a single sharpening algorithm and to call it "Sharpen" (even though it would most probably be "Unsharp Mask").
Sven
unsharp mask
On Wed, Dec 8, 2010 at 10:57 PM, Sven Neumann wrote:
On Wed, 2010-12-08 at 20:35 +0200, Alexia Death wrote:
When we are on the subject already, I have a suggestion. Lets save the countless noobs time and change the menu label of unsharp mask to Unsharp Mask Sharpen?
Seriously, Unsharp Mask is the correct term and it is widely known and mentioned in pretty much any book/tutorial that covers image manipulation and sharpening algorithms.
Well, I personally had been using gimp for several years, and had at least one image-processing-related university class behind me, before I ran into a sharpening tutorial that mentioned it as one of the options.
If photo processing isn't your main track, you probably don't know what it's for. And it's a serious usability issue if there are features that would be very useful but are impossible to discover. Being pro-oriented is fine, but there are several kinds of pros, and stomping on gimp's education of its users like that is really not helpful IMHO.
I'm not suggesting changing its name; I'm proposing suffixing the name of the algorithm with its purpose, because the algorithm name itself is outright misleading. Putting it in the menu under a menu item named Sharpen would serve the same purpose, but I don't think we have any other sharpening algorithms in the default distribution to justify that.
--
--Alexia
p.s Sorry for the double mail again, Sven...
unsharp mask
On 12/08/2010 12:46 PM, Michael Natterer wrote:
On Wed, 2010-12-08 at 20:35 +0200, Alexia Death wrote:
When we are on the subject already, I have a suggestion. Lets save the countless noobs time and change the menu label of unsharp mask to Unsharp Mask Sharpen?
Which would make the noobs think what exactly? That GIMP developers suffer from some semi-inverse tautologigal redundancy disorder?
Unsharp Mask Sharpen, ts ts ts :)
That is funny. The noobs, of course, don't know about the unsharp mask algorithm and don't realize that it will sharpen. They think it will unsharpen.
Patrick
unsharp mask
On 12/08/2010 01:15 PM, Alexia Death wrote:
On Wed, Dec 8, 2010 at 10:57 PM, Sven Neumann wrote:
On Wed, 2010-12-08 at 20:35 +0200, Alexia Death wrote:
When we are on the subject already, I have a suggestion. Lets save the countless noobs time and change the menu label of unsharp mask to Unsharp Mask Sharpen?
Seriously, Unsharp Mask is the correct term and it is widely known and mentioned in pretty much any book/tutorial that covers image manipulation and sharpening algorithms.
Well, I personally had been using gimp for several years, and had at least one image-processing-related university class behind me, before I ran into a sharpening tutorial that mentioned it as one of the options.
If photo processing isn't your main track, you probably don't know what it's for. And it's a serious usability issue if there are features that would be very useful but are impossible to discover. Being pro-oriented is fine, but there are several kinds of pros, and stomping on gimp's education of its users like that is really not helpful IMHO.
I'm not suggesting changing its name; I'm proposing suffixing the name of the algorithm with its purpose, because the algorithm name itself is outright misleading. Putting it in the menu under a menu item named Sharpen would serve the same purpose, but I don't think we have any other sharpening algorithms in the default distribution to justify that.
How about, "Sharpen (Unsharp Mask)"
Patrick
unsharp mask
On 12/08/10 21:57, Sven Neumann wrote:
On Wed, 2010-12-08 at 20:35 +0200, Alexia Death wrote:
When we are on the subject already, I have a suggestion. Lets save the countless noobs time and change the menu label of unsharp mask to Unsharp Mask Sharpen?
Seriously, Unsharp Mask is the correct term, and it is widely known and mentioned in pretty much any book/tutorial that covers image manipulation and sharpening algorithms. It doesn't really help to disguise the filter by giving it a different name. GIMP is not really meant to be the application that a noob should use for their casual image manipulation needs. There are other applications that fill this need, and we should leave it to them to present the user with just a single sharpening algorithm and to call it "Sharpen" (even though it would most probably be "Unsharp Mask").
Sven
Looking at the present way this is presented:
Sharpen ... Unsharp mask
Then, because everyone who wants to sharpen an image is likely to look at the option called "sharpen", this has a hint saying it's not as powerful as unsharp mask.
The unsharp mask hint explains that this is the most commonly used way to sharpen an image.
Thus it is recognised that this is confusing and counterintuitive, to the point where both comments are saying: you don't want to use sharpen, try unsharp.
While unsharp mask may be the "correct term", it only makes sense in the photographic lab of a generation ago; it is now jargon. However interesting it may be to know the origins, the term is now a hindrance to usage, and a clearer presentation would be better IMHO.
While the vision of gimp is to be "top-end", this should not require all users to have 10 years' professional experience or a degree in image processing. It should be top-end in function without being esoteric.
I would suggest renaming sharpen to "simple sharpen" or "basic sharpen" and unsharp to "sharpen". That would instantly guide people to the tool they probably need.
Then use the hints to let those seeking more detail know which they require, rather than, as now, using this space to say the names are confusing and you probably don't want this one, use the other one.
/gg
unsharp mask
On Wed, 2010-12-08 at 23:15 +0200, Alexia Death wrote:
I'm not suggesting changing it's name, I'm proposing suffixing the name of the algorithm with its purpose because the algorithm name itself is outright misleading. Putting it in the menu in a menu item named Sharpen would serve the same purpose, but I dont think we have any other sharpening algorithms in default distribution to justify that.
Well, you are suggesting changing its name. Suffixing it is a name change, and it has the potential to confuse users who are actually looking for "Unsharp Mask" and expect to find a standard filter under its usual name.
About the menu, that's a good point. IMO it would be useful to have a "Sharpen" group in the "Filters->Enhance" menu, separated from the other filters. All sharpen filters could register there, and having "Unsharp Mask" in a group with "Sharpen" should be a good hint as to what it actually does.
Sven
unsharp mask
OK, now the interaction design input:
yes, someone made a terrible mistake more than 20 years ago by taking a metaphor from the physical photo lab and using it in the digital software realm without taking the change of medium into account. this is a very common mistake, when metaphors are taken from one medium to another.
so it should have never been called unsharp mask.
I have nothing against correcting mistakes made in the past, but we are in this case stuck with the name.
and as Sven pointed out, our product vision implies high-end use with thousands of hours of (self-)training included. mastering any professional tool takes that amount of effort. no shortcuts.
so we have to live with the pro' (in-crowd) name for this and not make ourselves ridiculous trying to reinvent this wheel.
the best suggestion I have ever read (and this issue has been chewed on again and again since the first UI review I did with Kamila ages ago) is:
On 8 Dec 2010, at 23:39, Sven Neumann wrote:
About the menu, that's a good point. IMO it would be useful to have a "Sharpen" group in the "Filters->Enhance" menu, separated from the other
filters. Here all sharpen filters can register and having "Unsharp Mask"
in a group with "Sharpen" should be a good hint on what it actually does.
clarification by context. good.
--ps
founder + principal interaction architect man + machine interface works
http://blog.mmiworks.net: on interaction architecture