GSoC idea
This discussion is connected to the gegl-developer-list.gnome.org mailing list which is provided by the GIMP developers and not related to gimpusers.com.
GSoC idea | Henrik Akesson | 21 Mar 14:12 |
GSoC idea | Martin Nordholts | 21 Mar 14:14 |
GSoC idea | Martin Nordholts | 21 Mar 14:24 |
GSoC idea | Sven Neumann | 21 Mar 16:29 |
GSoC idea | Martin Nordholts | 21 Mar 16:54 |
GSoC idea | Sven Neumann | 21 Mar 17:16 |
GSoC idea | Henrik Akesson | 24 Mar 16:45 |
GSoC idea
Has anyone ever done a performance study of GEGL?
What do you think of a GSoC project of:
"Performance study and optimisation of GEGL."
- Creating a multi-platform performance tool-set for automatically
extracting performance data from the gegl library using performance counters
- Creating a set of typical scenarios for gegl, which could double as
integration/regression tests
- Reporting on current status of gegl performance.
- Identification of main bottlenecks
- Prototyping or implementing solution for above bottlenecks.
- Documenting above tools.
In any case, this project would interest me, and I happen to finish my master's in June, so I'd be available. I could potentially be interested in the GPU one as well.
/Henrik
GSoC idea
Henrik Akesson wrote:
Has anyone ever done a performance study of GEGL?
What do you think of a GSoC project of:
"Performance study and optimisation of GEGL."
I think this is an excellent idea and support it 100%.
- Martin
GSoC idea
Martin Nordholts wrote:
Henrik Akesson wrote:
Has anyone ever done a performance study of GEGL?
What do you think of a GSoC project of:
"Performance study and optimisation of GEGL."
There was an optimization round on GEGL a couple of months back with regard to improving GIMP performance when e.g. using GEGL for the projection. I used sysprof for profiling back then, and an awful lot of time during processing is spent on constructing GObjects. Reusing objects or getting rid of the constant need to create new objects should give a significant performance improvement. There are certainly other areas to optimize as well, but GObject construction stood out.
- Martin
GSoC idea
Hi,
There was an optimization round on GEGL a couple of months back with regard to improving GIMP performance when e.g. using GEGL for the projection. I used sysprof for profiling back then, and an awful lot of time during processing is spent on constructing GObjects. Reusing objects or getting rid of the constant need to create new objects should give a significant performance improvement. There are certainly other areas to optimize as well, but GObject construction stood out.
Was it really the creation of objects that stood out or the code that is run in the init() and constructor() methods of these GEGL objects?
Sven
GSoC idea
Sven Neumann wrote:
Hi,
There was an optimization round on GEGL a couple of months back with regard to improving GIMP performance when e.g. using GEGL for the projection. I used sysprof for profiling back then, and an awful lot of time during processing is spent on constructing GObjects. Reusing objects or getting rid of the constant need to create new objects should give a significant performance improvement. There are certainly other areas to optimize as well, but GObject construction stood out.
Was it really the creation of objects that stood out or the code that is run in the init() and constructor() methods of these GEGL objects?
If I recall correctly, the actual creation of objects also took a significant amount of time, not only the init() and constructor() methods.
- Martin
GSoC idea
Hi,
On Sat, 2009-03-21 at 14:12 +0100, Henrik Akesson wrote:
Has anyone ever done a performance study of GEGL?
What do you think of a GSoC project of:
"Performance study and optimisation of GEGL."
- Creating a multi-platform performance tool-set for automatically extracting performance data from the gegl library using performance counters
- Creating a set of typical scenarios for gegl, which could double as integration/regression tests
- Reporting on current status of gegl performance.
- Identification of main bottlenecks
- Prototyping or implementing solution for above bottlenecks.
- Documenting above tools.
That's a nice proposal as it starts exactly where optimization should start, by getting profound profiling data.
It might also be interesting to add a framework to GEGL that allows registering optimized operations and comparing them against the reference implementation. A similar approach is taken in babl. Doing this for GEGL is admittedly going to be more complex, but it would provide an interesting framework for improving GEGL performance. Based on this framework, people could contribute optimized code and still be certain that it produces correct results. Such code could be optimized for a particular color format (legacy 8bit for example) and/or for particular CPUs (MMX, SSE, ...) or a GPU.
Sven
GSoC idea
I've been doing some research.
I propose using:
1 - OProfile (as it is better documented than sysprof and runs on
more platforms/processors) for system-wide profiling, i.e. finding
_where_ the time is spent.
2 - PAPI instrumentation of the code for finding out _what_ it is
doing that takes time.
3 - A Ruby script for generating an HTML report that is reasonably
easy to understand (performance counters can be quite difficult to
interpret). The script would use graphviz, gnuplot etc. for
generating suitable output.
Step 2 could possibly be automated using the output from step 1, but I might be aiming a bit too high here.
I further propose 3 scenarios:
1 - automatic performance report (make profile?) using a set of
performance cases
2 - targeted performance report on operations.
3 - memory management profiling - for reporting on RAM usage, disk
usage and swapping (this might require other tools than OProfile/PAPI
and it might again be aiming too high).
The tool would only work with command-line applications that do not use any interactive input.
As a matter of discipline, all profiling runs should be written as tests and would therefore also guarantee the correct output of optimised operations.
I think that covers your use-case, Sven.
What do you think?
/Henrik
2009/3/21 Sven Neumann
Hi,
On Sat, 2009-03-21 at 14:12 +0100, Henrik Akesson wrote:
Has anyone ever done a performance study of GEGL?
What do you think of a GSoC project of:
"Performance study and optimisation of GEGL."
- Creating a multi-platform performance tool-set for automatically extracting performance data from the gegl library using performance counters
- Creating a set of typical scenarios for gegl, which could double as integration/regression tests
- Reporting on current status of gegl performance.
- Identification of main bottlenecks
- Prototyping or implementing solution for above bottlenecks.
- Documenting above tools.

That's a nice proposal as it starts exactly where optimization should start, by getting profound profiling data.
It might also be interesting to add a framework to GEGL that allows registering optimized operations and comparing them against the reference implementation. A similar approach is taken in babl. Doing this for GEGL is admittedly going to be more complex, but it would provide an interesting framework for improving GEGL performance. Based on this framework, people could contribute optimized code and still be certain that it produces correct results. Such code could be optimized for a particular color format (legacy 8bit for example) and/or for particular CPUs (MMX, SSE, ...) or a GPU.
Sven