Hints on image reduction algorithm
Hints on image reduction algorithm | Vincent Cadet | 11 Feb 16:15
Hints on image reduction algorithm | Daniel Hornung | 11 Feb 20:40
Hints on image reduction algorithm | Vincent Cadet | 12 Feb 12:49
Hints on image reduction algorithm
Hello people.
I am looking for an algorithm to reduce a screen image to a very small size, e.g. 3x2 pixels (width x height), so that each pixel is the average colour (RGB) of the corresponding screen area. For the given size 3x2, the screen would be divided into 3 columns and 2 rows, which means a 1920x1080 display would be split into 6 areas of 640x540 pixels each, like this:
    1   2   3
  +---+---+---+
4 | o | o | o | 5
  +---+---+---+
6 | o |   | o | 7
  +---+---+---+
Stripes 1 to 7 correspond to areas with an o sign.
I am planning to build an ambilight system with LED stripes I bought at IKEA: 3 stripes on top of the monitor and 2 at its sides. Each stripe would reflect the average colour of its own area of the screen, hence the algorithm.
There would be around 10-15 computations per second. The project is built around an external USB module that drives the LED stripes; it should receive only an array of values (exactly 5 RGB values). The image reduction should take place either in a screen-grabbing daemon or in a modified VNC server; I haven't evaluated yet which one is best for that. In short, the code will run on a Linux machine; mine, specifically: a 64-bit Gentoo with kernel 3.4 running Xorg.
There are Arduino kits for the hardware part, so I'm focusing on the software part right now. Most algorithms I have seen grab the screen and loop over every pixel with nested loops, and I'm sure there's a better way (e.g. MMX instructions?). I thought of JPEG reduction, since it corresponds to what I need if you keep only the average component of the spectrum. So my question is: can the JPEG compression algorithm be effectively used and optimized to do this? Or if you have better ideas or hints, I'm open.
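To make it concrete, something like the following is what I have in mind; this is only a rough sketch, it assumes the grabbed frame comes as a packed 32-bit XRGB buffer (one 0x00RRGGBB word per pixel), and all the names are made up:

#include <stddef.h>
#include <stdint.h>

/* Average the pixels of one rectangular cell of a packed 32-bit XRGB
 * frame (4 bytes per pixel, blue in the lowest byte; adjust the shifts
 * if the buffer is laid out differently). */
static void cell_average(const uint32_t *frame, int frame_w,
                         int x0, int y0, int w, int h,
                         uint8_t rgb_out[3])
{
    /* 64-bit accumulators: even summing a full 1920x1080 frame of
     * 255-valued components cannot overflow. */
    uint64_t r = 0, g = 0, b = 0;

    for (int y = y0; y < y0 + h; y++) {
        const uint32_t *row = frame + (size_t)y * frame_w;
        for (int x = x0; x < x0 + w; x++) {
            uint32_t px = row[x];
            r += (px >> 16) & 0xff;
            g += (px >>  8) & 0xff;
            b +=  px        & 0xff;
        }
    }

    uint64_t n = (uint64_t)w * h;
    rgb_out[0] = (uint8_t)(r / n);
    rgb_out[1] = (uint8_t)(g / n);
    rgb_out[2] = (uint8_t)(b / n);
}

/* Reduce a frame to a cols x rows grid of average colours,
 * e.g. cols = 3, rows = 2 for the layout above. */
static void reduce_to_grid(const uint32_t *frame, int frame_w, int frame_h,
                           int cols, int rows, uint8_t out[][3])
{
    int cw = frame_w / cols, ch = frame_h / rows;
    for (int row = 0; row < rows; row++)
        for (int col = 0; col < cols; col++)
            cell_average(frame, frame_w, col * cw, row * ch, cw, ch,
                         out[row * cols + col]);
}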
Thanks a lot in advance.
Hints on image reduction algorithm
On Monday, 11. February 2013 16:15:06 Vincent Cadet wrote:
Hello people.
I am looking for an algorithm to reduce a screen image to a very small size, e.g. 3x2 pixels (width x height), so that each pixel is the average colour (RGB) of the corresponding screen area.
...
There are Arduino kits for the hardware part, so I'm focusing on the software part right now. Most algorithms I have seen grab the screen and loop over every pixel with nested loops, and I'm sure there's a better way (e.g. MMX instructions?). I thought of JPEG reduction, since it corresponds to what I need if you keep only the average component of the spectrum. So my question is: can the JPEG compression algorithm be effectively used and optimized to do this? Or if you have better ideas or hints, I'm open.
Hello Vincent,
first of all, you probably don't want to use GIMP for this task; I assume you're just writing here because you figure that people here can answer anyway. :-)
What do you mean by nested loops? Averaging over a (very) large number of elements is not trivial, since you don't want to run into numerical precision or overflow problems.
But all in all you probably don't want JPEG or other fancy and time-consuming stuff, just average the R, G and B components. Your compiler should do all the processor-specific optimization for you anyway, I think. In a few little tests I did with XGetImage(...), most of the time was spent copying the screenshot into memory; I ended up at around 30 ms per image.
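For illustration, a rough sketch of the kind of test I mean (not the exact code I used; plain Xlib, link with -lX11, error handling kept minimal):

#include <stdio.h>
#include <time.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    Window root = DefaultRootWindow(dpy);
    int w = DisplayWidth(dpy, DefaultScreen(dpy));
    int h = DisplayHeight(dpy, DefaultScreen(dpy));

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* Grab the whole screen as a ZPixmap; this copy is what takes
     * most of the time. */
    XImage *img = XGetImage(dpy, root, 0, 0, w, h, AllPlanes, ZPixmap);

    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (img) {
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("grabbed %dx%d (%d bits/pixel) in %.1f ms\n",
               img->width, img->height, img->bits_per_pixel, ms);
        XDestroyImage(img);
    }

    XCloseDisplay(dpy);
    return 0;
}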
If you are looking for optimization: reduce the resolution you work with (if that's possible with your preferred screen-grabber library); you won't see the details in your 6 LEDs anyway ;-) And reduce your framerate and simply interpolate the color in between; your eyes probably won't see a difference.
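For the interpolation part, something along these lines would do; just a sketch, the stripe count and all names are placeholders:

#include <stdint.h>

#define NUM_STRIPES 5

/* The last two sampled colours per stripe, filled in by the
 * screen-grabbing code at the low sampling rate. */
static uint8_t prev_rgb[NUM_STRIPES][3];
static uint8_t next_rgb[NUM_STRIPES][3];

/* Linearly blend between the previous and the next sample.
 * t runs from 0.0 (previous sample) to 1.0 (next sample); e.g. if you
 * grab 10 times per second but refresh the LEDs 30 times per second,
 * t advances in steps of 1/3 between grabs. */
static void interpolate(double t, uint8_t out[NUM_STRIPES][3])
{
    for (int s = 0; s < NUM_STRIPES; s++)
        for (int c = 0; c < 3; c++)
            out[s][c] = (uint8_t)((1.0 - t) * prev_rgb[s][c] +
                                  t * next_rgb[s][c] + 0.5);
}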
Have fun with this project and show some photos later on!
Daniel
Mein öffentlicher Schlüssel / My public key: 4096R/600ACB3B 2012-04-01 Fingerabdruck / Fingerprint: 9902 575B B9A0 C339 CFDF 250B 9267 CA6B 600A CB3B Runterladen z.B. bei/ Get it e.g. from: pgp.mit.edu, subkeys.pgp.net, pgp.uni-mainz.de, pool.sks-keyservers.net, ...
Hints on image reduction algorithm
Thanks for your hints, Daniel.
--- On Mon 11.2.13, Daniel Hornung wrote:
On Monday, 11. February 2013 16:15:06 Vincent Cadet wrote:
Hello people.
I am looking for an algorithm to reduce a screen image to a very small size, e.g. 3x2 pixels (width x height), so that each pixel is the average colour (RGB) of the corresponding screen area.
...
There are Arduino kits for the hardware part, so I'm focusing on the software part right now. Most algorithms I have seen grab the screen and loop over every pixel with nested loops, and I'm sure there's a better way (e.g. MMX instructions?). I thought of JPEG reduction, since it corresponds to what I need if you keep only the average component of the spectrum. So my question is: can the JPEG compression algorithm be effectively used and optimized to do this? Or if you have better ideas or hints, I'm open.
Hello Vincent,
first of all, you probably don't want to use GIMP for this task; I assume you're just writing here because you figure that people here can answer anyway. :-)
Exactly. I came to ask here because some of you have far more skills than I have (none, actually). My guess is that the devs behind GIMP are the ones who would know best which algorithm fits and what considerations to take into account, especially in this particular case: the smallest possible image size.
What do you mean by nested loops? Averaging over a (very) large number of elements is not trivial, since you don't want to run into numerical precision or overflow problems.
It is not trivial, indeed. That's what I thought. The algorithm I had in mind when posting is like this:
. loop through each row, skipping every other line; inside it, loop through each column, skipping every other column, and add the current pixel's RGB values to running sums
. when the nested loops are done, divide each sum by the number of pixels visited
This is of course just the definition of "average" or "mean", but I felt there had to be a better way. My take is, for instance, that when you implement low-pass filters it's far more efficient (IIRC) to use shift registers than to loop, sum and average as one would naively do. I'm referring to my courses in signal theory (taken about 20 years ago, you get the picture :D) on "mapping" continuous functions onto discrete algorithms. So I assumed there are similar formulas for image computations.
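Transferred to the time axis, for instance, the low-pass idea would simply be a first-order recursive filter (an exponential moving average) on each stripe's colour; a rough sketch of what I mean, with an assumed stripe count and made-up names:

#include <stdint.h>

#define NUM_STRIPES 5

/* Filter state: the smoothed colour of each stripe, kept as floats so
 * that repeated rounding does not accumulate. */
static float smoothed[NUM_STRIPES][3];

/* First-order low-pass (exponential moving average):
 *     y[n] = y[n-1] + alpha * (x[n] - y[n-1])
 * with alpha in (0, 1]; smaller values give smoother, slower changes. */
static void smooth_update(const uint8_t sample[NUM_STRIPES][3], float alpha)
{
    for (int s = 0; s < NUM_STRIPES; s++)
        for (int c = 0; c < 3; c++)
            smoothed[s][c] += alpha * ((float)sample[s][c] - smoothed[s][c]);
}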
But all in all you probably don't want JPEG or other fancy and time-consuming stuff, just average the R, G and B components.
That I didn't know. Thanks for the hint.
Your compiler should do all the processor-specific optimization for you anyway, I think. In a few little tests I did with XGetImage(...), most of the time was spent copying the screenshot into memory; I ended up at around 30 ms per image.
I know the compiler is smart enough to optimize if I tell it to. But I also know a compiler is blind to wishes: you have to lay out your code in as optimization-friendly a way as possible so that it "guesses" right how to optimize it efficiently (that's what a couple of years of development have taught me ;-).
Note that I have read about GCC and it looks like it's now able to parallelize loops. That would probably be useful in this very case, since the screen regions are predictable and so are the memory portions the nested loops inspect. I just wonder if that's enough of an optimization, though.
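From what I read, the explicit route would be OpenMP (compile with -fopenmp); a rough sketch of the per-region summation with a parallel reduction, again assuming a packed 32-bit XRGB buffer and made-up names:

#include <stddef.h>
#include <stdint.h>

/* Sum the R, G and B components of one screen region, splitting the
 * rows across threads. Build with: gcc -O2 -fopenmp ... */
static void region_sum(const uint32_t *frame, int frame_w,
                       int x0, int y0, int w, int h,
                       uint64_t sums[3])
{
    uint64_t r = 0, g = 0, b = 0;

    #pragma omp parallel for reduction(+ : r, g, b)
    for (int y = y0; y < y0 + h; y++) {
        const uint32_t *row = frame + (size_t)y * frame_w;
        for (int x = x0; x < x0 + w; x++) {
            uint32_t px = row[x];
            r += (px >> 16) & 0xff;
            g += (px >>  8) & 0xff;
            b +=  px        & 0xff;
        }
    }

    sums[0] = r;
    sums[1] = g;
    sums[2] = b;
}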
Also, 30 ms, even in this worst case, leaves me with another 70 ms to do the analysis.
If you are looking for optimization: reduce the resolution you work with (if that's possible with your preferred screen-grabber library); you won't see the details in your 6 LEDs anyway ;-) And reduce your framerate and simply interpolate the color in between; your eyes probably won't see a difference.
Talking about reducing the resolution: are there any wise paths to follow (in addition to those you gave)? Like taking one pixel out of 4, or 16, or... is it that simple? From your experience, what would the best ratio be? One pixel out of an 8x8 square, for instance (referring somehow to JPEG image "slicing")? I can also use smaller areas, like stripes near the borders of the image, I guess...
Or can I, for instance, sample random small regions of contiguous pixels from the grabbed image and compute the average from those? Do you know if there's an API that can do that (this might not be the right place to ask, I admit)?
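To make the "one pixel out of N" idea concrete, here is a rough sketch that averages only every step-th pixel of a region (the function name, buffer layout and step value are all just assumptions); random sample points would work the same way, only the choice of positions changes:

#include <stddef.h>
#include <stdint.h>

/* Average one region using only every step-th pixel in x and y;
 * e.g. step = 8 looks at roughly 1 pixel out of 64. */
static void sparse_average(const uint32_t *frame, int frame_w,
                           int x0, int y0, int w, int h, int step,
                           uint8_t rgb_out[3])
{
    uint64_t r = 0, g = 0, b = 0, n = 0;

    for (int y = y0; y < y0 + h; y += step) {
        const uint32_t *row = frame + (size_t)y * frame_w;
        for (int x = x0; x < x0 + w; x += step) {
            uint32_t px = row[x];
            r += (px >> 16) & 0xff;
            g += (px >>  8) & 0xff;
            b +=  px        & 0xff;
            n++;
        }
    }

    if (n > 0) {
        rgb_out[0] = (uint8_t)(r / n);
        rgb_out[1] = (uint8_t)(g / n);
        rgb_out[2] = (uint8_t)(b / n);
    }
}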
As for the screen-grabber library... well, this is my first attempt at Linux development :D. Quite a nice challenge, you'll say. So I basically have no idea what grabbing functions exist. I can Google that of course, but do you by chance have an idea which ones are the best candidates for the job? I would also like one that works even with full-screen OpenGL games.
Have fun with this project and show some photos later on!
Hehe, thanks for your interest, I'm flattered, no kidding. But I'd rather show the code first ;-).
Vincent