introspection as a mental process
Let us look at the human mind as the most expensive processor imaginable.
Its IO is very slow and error-prone.
It is, however, the best pattern-matching server we can afford at this time.
So, the process of pattern matching needs to be augmented.
The introspector will need to collect data from all types of data sources,
and it will need to do so quickly. Therefore it is important that data samples can be collected and classified. Imagine a firm grasp of the gcc toolchain, using its metadata to collect data that way. The metadata is then published for all to use. This would include all byte ranges (Programs(Functions(Blocks(...(Token(Chars(Bytes(Bits(Meaning))))))))) of source code with all semantic data attached to define the meaning of the source code. Each statement of meaning is a signed declaration from a sender; it only becomes accepted knowledge once that statement has been evaluated and its contents accepted by a different reader, you the reader that is, or even an indexing system.
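To make the byte-range idea concrete, here is a minimal sketch in Python of that nested model. All names here (Span, Statement, the signature string) are hypothetical illustrations, not part of any existing introspector code: each span covers a byte range of the source, nests smaller spans, and carries signed statements of meaning that a separate reader may accept.

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    sender: str        # who asserts this meaning
    predicate: str     # e.g. "is-a", "binds-to"
    value: str
    signature: str     # stand-in for a real cryptographic signature
    accepted_by: set = field(default_factory=set)  # readers/indexers that accepted it

@dataclass
class Span:
    kind: str          # "program", "function", "block", "token", ...
    start: int         # byte offset into the source file
    end: int
    children: list = field(default_factory=list)
    statements: list = field(default_factory=list)

# Build a tiny tree: a function span containing one token span.
tok = Span("token", 10, 14, statements=[
    Statement("gcc-frontend", "is-a", "identifier", "sig:deadbeef")])
fn = Span("function", 0, 42, children=[tok])

# A statement only counts once a *different* reader has accepted it.
tok.statements[0].accepted_by.add("indexer-1")
```

The point of the sketch is the separation of roles: the sender signs the claim, and acceptance is recorded per reader, so the same declaration can be trusted by one indexer and ignored by another.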
Such an indexing system is one of my major goals for the introspector. The idea is simple: given a model that is completely understood, i.e. the source code of the compiler, we can match any data expressed in that language that we find on the internet (via Google et al.) against our model of the language. This will produce a semantic subset of the introspector system: the current set of knowledge that we have about the subject program that we are introspecting.
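A toy version of that matching step might look like this. The identifier list is an invented stand-in for the real model extracted from the compiler's source; the sketch just filters a found page down to the terms our model knows about:

```python
import re

# Assumed model: identifiers we would really extract from the compiler's
# own source code (these three are real GCC names, used as examples).
known_model = {"tree_node", "TREE_CODE", "fold_const"}

def semantic_subset(page_text, model=known_model):
    """Return the model terms actually mentioned on the page."""
    words = set(re.findall(r"[A-Za-z_]\w*", page_text))
    return words & model

page = "The macro TREE_CODE inspects a tree_node in GCC."
print(semantic_subset(page))  # the page's overlap with our model
```

Everything on the page that is not in the model falls away, leaving exactly the "current set of knowledge" about the subject program.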
Thus, a full introspection could be viewed as one mega search, spanning local files, portable archives, and network metadata (cvs tgz google(mail) mbox), that queries each resource in each context and builds pages of data for workers to receive, process, and analyse: layout, presentation, review, search, graphing, diagramming.
All of these applications are available under Linux. If we can introspect them via the gcc and gather semantic information about them, then we can parse those pages and align them with the introspector resources presented. Included in the available introspection data will be audited samples of the various output files, traced against the metadata and expanded with metadata in an RDF format. Basically, each bit, byte, or token of data that has any atomic value is treated as an RDF resource in terms of a gcc data model. This is available from gdb as well.
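As a sketch of "each token as an RDF resource", here is what emitting plain N-Triples for one token byte-range could look like. The namespace URI and property names are assumptions for illustration, not an existing gcc vocabulary:

```python
# Assumed namespace for a hypothetical gcc data-model vocabulary.
GCC = "http://introspector.example/gcc#"

def token_triples(file_uri, start, end, kind, text):
    """Describe one token byte-range of a source file as an RDF resource."""
    node = f"<{file_uri}#bytes-{start}-{end}>"
    return [
        f"{node} <{GCC}kind> \"{kind}\" .",
        f"{node} <{GCC}text> \"{text}\" .",
        f"{node} <{GCC}startByte> \"{start}\" .",
        f"{node} <{GCC}endByte> \"{end}\" .",
    ]

triples = token_triples("file:///src/main.c", 10, 14, "identifier", "main")
print("\n".join(triples))
```

Because every triple names the byte range in its subject URI, any consumer can trace a statement straight back to the exact bytes of source it describes.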
An introspector interface into the gdb would be of great value.
Listening to Erick Sermon Marvin Gaye - Just Like Music
So the idea is to collect these samples of data intended for the human mind and index them via the gcc. Each and every relevant resource, or configuration of resources, described in a program serves as the source of a query into a gigantic database (Google et al.). The results are used to find metadata about the program. By joining the searches, or the results of them, we look for common pages and the relationships between them.
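That joining step is essentially a set intersection over result pages. A minimal sketch, with made-up query strings and page names standing in for a real search backend:

```python
def join_searches(results_by_query):
    """Intersect per-query result sets to find pages common to every query."""
    sets = [set(urls) for urls in results_by_query.values()]
    return set.intersection(*sets) if sets else set()

# Hypothetical results: each program resource became a query, each query
# returned a set of pages.
results = {
    "gcc tree_node":  {"a.html", "b.html", "c.html"},
    "gcc TREE_CODE":  {"b.html", "c.html", "d.html"},
    "gcc fold_const": {"c.html", "e.html"},
}
print(join_searches(results))  # pages that every query agrees on
```

Pages that survive the intersection mention several of the program's resources at once, which makes them the strongest candidates for metadata about the program as a whole.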
Also, here is an important point:
The testing of this data, and the statements of predicates about those tests, can be widely automated, but the final driving force is the human mind; therefore we need to build the best human interface so that the user can drive the introspection process comfortably.