Project Management and Free Software
I have been reading about project management in this nice book:
[http://www.amazon.de/exec/obidos/ASIN/3455094732/028-8254887-6504516] Project Management für Einzelkämpfer.
It describes how to avoid feature bloat and reduce the scope of your project to the most important things.
This is great advice, and I just wanted to cover some of the issues that come up when you use free software.
Let's assume for the moment that you have a task to do, and you have decided to use Free (open source) software. You don't really want to spend time working on the software itself if you don't need to. But let's assume that you have the resources in your team to do this.
First of all, just getting the software to work is an exercise in distraction. Configuring, compiling and testing the software is just one task. But what about selecting the right package from the available ones? Or having to use functions from many incompatible parts?
These tasks are in themselves distractions from the project goal.
Now the real issue is the loss of control. The number of dependencies that a software package has is not always obvious from the beginning. Just getting the latest version and compiling the software brings many new variables into the equation. How can this be planned and measured?
So, really you get a field full of landmines that have to be defused.
Now look at the number of file formats, and the cost of hooking up the programs to each other.
At last, when you want to publish your results, you will need to produce nice, easy-to-consume reports with tables.
So, what I propose is a simple introspector framework that collects all the input and output formats of all the software by intercepting the IO calls and the stacks around them. Then we can mark the memory that is the source of the outside data, and follow the control graph of the assembly, marking all the nodes that the data travels through. This graph contains test data extracted from profiling the test cases and benchmarks. So we need a real-time profiling tool that is capable of memory profiling and of associating the profile paths with the data traces.
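To make the interception idea concrete, here is a minimal sketch in Python. Everything in it is my own illustration (the names TaintTracker and traced_read, and the id()-based marking are all assumptions); a real tool would hook read(2)/write(2) at the libc or syscall level and use shadow memory, not a Python dictionary.

```python
class TaintTracker:
    """Remember where each piece of outside data came from."""

    def __init__(self):
        # Maps the identity of a data object to a description of its source.
        self.labels = {}

    def mark(self, data, source):
        """Tag data as having come from the given source (e.g. a file path)."""
        self.labels[id(data)] = source
        return data

    def origin(self, data):
        """Look up where a piece of data came from, if we saw it enter."""
        return self.labels.get(id(data), "unknown")


tracker = TaintTracker()


def traced_read(path):
    """Stand-in for an intercepted read call: return the bytes, tagged
    with the file they came from, so later output calls can report it."""
    with open(path, "rb") as f:
        return tracker.mark(f.read(), path)
```

The id()-based bookkeeping is only valid while the data object is alive; the point is just to show the shape of the idea: every intercepted input call attaches a source label, and that label is what the control-graph marking would propagate.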
This will finally lead to a point where the data is emitted. There we collect the calls to output and note where the marked memory came from. I want to summarise the metadata with an added integer or long that represents an index into a table of paths.
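The path-table idea above can be sketched like this (PathTable and intern are hypothetical names of my own; the point is only that each distinct path through the code is stored once, and the emitted metadata carries just a small integer index instead of the whole path):

```python
class PathTable:
    """Intern control-flow paths so each one is stored exactly once."""

    def __init__(self):
        self._index = {}   # path (as a tuple of node names) -> integer index
        self._paths = []   # index -> path, for reverse lookup when reporting

    def intern(self, path):
        """Return the integer index for a path, registering it if new."""
        key = tuple(path)
        if key not in self._index:
            self._index[key] = len(self._paths)
            self._paths.append(key)
        return self._index[key]

    def lookup(self, idx):
        """Recover the full path from its compact index."""
        return self._paths[idx]


table = PathTable()
i = table.intern(["main", "parse", "emit"])
j = table.intern(["main", "parse", "emit"])  # same path, so same index
```

So each output record only needs to carry one extra integer, and the report generator can expand it back into the full path afterwards.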
more to come