Wednesday, June 26, 2002 (Richard Kenner) writes:
> In article Ronald Cole writes:
> >Oh, bullshit, Richard... The GPL plainly says "To protect your
> >rights, we need to make restrictions that forbid anyone to deny you
> >these rights or to ask you to surrender the rights."
> >
> >It seems clear that "discouraging free distribution" is equivalent
> >in effect to asking you to surrender the right to distribute.
> That's correct, though I'd use the word "waive" rather than
> "surrender".

Dewar posted that he feels that he is within the "letter *and the
spirit*" of the GPL when he *asks* his "wavefront" customers not to
redistribute that which he distributes. I, however, feel that by
doing so, he has violated the "spirit" of the GPL (since the quoted
clause is found in the preamble and doesn't appear to be present in
the enumerated sections).

> But the key point is that this is *asking*, not *requiring*.

Still, the GPL says "To protect your rights, we need to make
restrictions that forbid anyone to ... *ask* you to surrender the
rights" and then fails to actually enumerate such a restriction (a
loophole which apparently both Stallman and Dewar use to discourage
"runaway snapshots").

Metaprogramming and
Free Availability of Sources
Two Challenges for Computing Today
François-René Rideau

CNET DTL/ASR (France Telecom)
38--40 rue du general Leclerc
92794 Issy Moulineaux Cedex 9, FRANCE

We introduce metaprogramming in a completely informal way and sketch out a theory of it. We explain why it is of major importance for computing today, by considering the processes underlying software development. We show, from the same perspective, how metaprogramming is related to another challenge of computing, the free availability of the sources of software, and how these two phenomena naturally complement each other.

2.2 What is Metaprogramming? What is Reflection? Why are they so important?
Metaprogramming is the activity of manipulating programs that in turn manipulate programs. It is the most general technique whereby the programming activity can be automated, enhanced, and made to go where no programming has gone before.
Reflection is the ability of systems to know enough about themselves to dynamically metaprogram their own behavior: to adapt themselves to changing circumstances, and to relieve programmers and administrators of the many tasks that currently need to be done manually.
These notions are explained in my article, Metaprogramming and Free Availability of Sources, that also explains why a reflective system must be Free Software. You may also consult the Reflection section of the TUNES Review subproject.
Reflection is important because it is the essential feature that allows for dynamic extension of system infrastructure. Without reflection, you have to recompile a new system and reboot every time you change your infrastructure, you must manually convert data when you extend the infrastructure, you cannot mix and match programs developed using different infrastructures, and you cannot communicate and collaborate with people of different backgrounds. At the technical level, all this means interruption of service, unreliability of service, denial of service, and unawareness of progress; but at the psycho-social level, lack of reflection also means that people will have to
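As an informal illustration (mine, not the article's), here is a minimal Python sketch of the dynamic extension that reflection permits: the running system inspects and rewrites one of its own methods in place, with no recompile and no reboot. The class and function names are invented for the example.

```python
class Service:
    """A tiny 'system' whose infrastructure can be extended at runtime."""
    def handle(self, request):
        return f"handled {request}"

def add_logging(cls, method_name):
    """Metaprogram the class in place: wrap an existing method with logging."""
    original = getattr(cls, method_name)           # reflect on the live class
    def wrapped(self, *args, **kwargs):
        result = original(self, *args, **kwargs)
        print(f"[log] {method_name}{args} -> {result}")
        return result
    setattr(cls, method_name, wrapped)             # rewrite behavior dynamically

service = Service()
add_logging(Service, "handle")      # no recompile, no reboot
print(service.handle("ping"))       # the live object picks up the new behavior
```

Because method lookup goes through the class at call time, even objects created before the extension see the new behavior — the "interruption of service" the article describes simply never happens.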

Re: gcc front-/backend (Was: Re: Binary archive issues)

On Fri, 8 Aug 1997, Steffen Opel wrote:

> > No. It was just one of the ideas during a "brainstorm", and like many
> > other ideas, it might die silently, particularly since an influential
> > individual whose name I won't tell and who can veto anybody else's
> > decisions, seems to be opposed to this idea.
> Could you please at least summarize the arguments of this well unknown
> person?

Sigh... I shouldn't have brought this subject to this list :-).

OK, the person was Richard M. Stallman, the president of FSF.

His arguments were along these lines:

1. GCC is not a school example, GCC is developed to be useful.

2. The proposal to split the backend from the frontends serves little
purpose in terms of improving quality; it mostly makes GCC look somewhat

1 & 2 clash, and 1 is more important, therefore forget about 2 and focus
on releasing 2.8.0.

My comment:

PLEASE don't comment on this subject. The people who are making decisions
are NOT on this list, so discussing this subject serves no purpose apart
from occupying network resources.

Project Info - RedShift

An Illustration of Perl Objects with C APIs
To illustrate the use of C APIs with Perl objects, a toy PPM-format image class is created and then extended. The child class runs just as fast as if it were part of a big C-language extension library, instead of being small and independent, depending only on the provided C API and on Inline::C to make use of it.
The image data is maintained in the PPM string itself, as a sequence of (height x width x 3) one-byte integer color samples. These samples are exposed as a seemingly normal Perl array, implemented by a Tie::Array/perltie-based helper subclass, created from a simple specification by a generator of array-like classes. This sample array is then "folded" into an apparently normal 3D array of arrays of arrays, to provide a very natural interface to the image data (though quite a slow one). The class also provides a C API, namely a method which returns a string of C code defining macros that obtain a pointer to the string's memory and provide direct and rapid access to the image bytes. This method was created by the same generator, also from a simple specification.
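The "folding" idea — one flat buffer of (height x width x 3) samples presented through a nested [row][col][channel] interface — can be sketched in Python rather than Perl. The class and method names below are my own, not the article's API:

```python
class PPMImage:
    """Toy image: all pixel data lives in one flat byte buffer."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        # height * width * 3 one-byte colour samples, all zero initially
        self.data = bytearray(height * width * 3)

    def _index(self, row, col, channel):
        # Fold three coordinates into the flat sample array, exactly as
        # the tied Perl array folds offsets into the PPM string.
        return (row * self.width + col) * 3 + channel

    def get(self, row, col, channel):
        return self.data[self._index(row, col, channel)]

    def set(self, row, col, channel, value):
        self.data[self._index(row, col, channel)] = value

img = PPMImage(4, 2)
img.set(1, 3, 0, 255)        # red channel of the last pixel of row 1
print(img.get(1, 3, 0))      # -> 255
```

The point the article makes still holds in this sketch: because there is only one underlying buffer, a C extension handed a pointer to `data` sees exactly the same bytes as the "folded" high-level accessors.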

Berlin -- Home Berlin is a windowing system derived from Fresco, a powerful structured graphics toolkit originally based on InterViews. Berlin extends Fresco to the status of a full windowing system, in command of the video hardware (via GGI, SDL, DirectFB or GLUT) and processing user input directly rather than peering with a host windowing system. Additionally, Berlin's extensions include a rich drawing interface with multiple backends, an upgrade to modern CORBA standards, a new Unicode-capable text system, dynamic module loading, and many communication abstractions for connecting other processes to the server. It is developed entirely by volunteers on the Internet, using free software, and released under the GNU Library General Public License.

Pietrzak.Org update: 3/14/2001 I've found some projects working very nearly along the same lines as mine. The Software Development Foundation (SDS) is a system supporting analysis and manipulation of source code in multiple languages, based on an XML format for storage and inter-communication between SDS-compliant tools. Very much like my own design! Also, the Synopsis project describes itself as a code documentation tool with support for multiple languages; perhaps not as ambitious as SDS or my DCT, but more achievable... I may need to think about alliances with or support of these efforts. Along similar lines, the Introspector project, rather than create a new code analysis tool, seeks to augment the existing GCC compiler system to store the information needed for analysis tasks -- potentially leveraging the depth of GCC to create a system more powerful than any individual tool.
update: 7/1/2001 Well, it's been a while. I've had my brain stuck inside the guts of the C Macro preprocessor for some time now, but I've finally been able to get some things working. As such, I now have a web-page up that exercises some elements of the DCT. (Please be patient when viewing that page; it is running a hacked-up version of an experimental piece of software on an old x86 box with inadequate resources...)

Take an object and:

Display Object in window
Display as HTML, XML, Graph, Table, Tree
Print to Text
Print to XML
Print as Source Code
Print Attribute List

Brent Fulgham [] wrote:

I've been playing with Ciao Prolog a bit, and so I built a Debian
package for it (so that it would co-exist with the rest of the
system in a more pleasing manner.)

Interested parties can get it from:

There is a binary i386 build, and the necessary *diff.gz file
for building on other architectures.

XML Schema Test Results -- Microsoft contributions, full report :)


Welcome to Schema Mania
Schema Mania is a place for people who like (or need, or are just good at) database designs. It's completely non-profit, dependent on the enthusiasm of its visitors and the talent of its contributors.

Schema Mania was conceived as a repository of database designs. You'd be able to come here and browse for a database design in your "problem space". With luck, you'd find something at least similar to what you had in mind. You'd download it, adapt it to your needs, and be happy. It would be a web of database designs, if you will.
But, a funny thing happened on the way to the forum. Much of the technology that Schema Mania needs is not ready for general use. What's available is nascent; the rest is missing. However valuable the concept might be, Schema Mania lacks both software and standards. It thus became part of Schema Mania's goal to bring together people of various disciplines, to help them find each other and create better tools.

RDF Interest Group IRC Scratchpad, last cranked at 2002-06-25 21:35
DAML mode for Xemacs and GNU emacs
posted by jhendler at 2002-06-25 21:35

jhendler: installation instructions provided

Introspector Project collects Semantic Graphs from the GCC compiler via an XML interface and stores them in a Postgres Repository

Seth: Gcc bootstrap and Postgres interface underway
mdupont: The introspector project could produce RDF to feed into a DAML index
mdupont: The Postgres database could be used for storing any old DAML
mdupont: Possible targeted compilers include GCC C++, Java, the DotGNU C# compiler, and others including Perl, Python and Ruby
mdupont: The introspector patches the GCC to dump the semantic network of nodes from a given input program
mdupont: It uses a simple XML graph and attribute syntax and pipes this information to a perl program that parses it on the fly.
mdupont: Currently it does not support RDF, but with the help of the motivated and helpful team at RDFIG we will be able to use DAML real soon now!!!
mdupont: A compiler that does not support advanced code introspection (as perl and ruby do) needs to be patched to support the Introspector interface.
mdupont: There is currently no DTD or Schema available.
mdupont: But an example output can be found at
mdupont: That represents the XML dump of a function called tree_size
mdupont: Which returns the size of a tree object, which is an atomic semantic node of the compiler
mdupont: The purpose of the introspector is to create an interface into the compiler to extract meta-data about a program
Seth: Kewl ... i want it to work for python
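The pipeline described in the chat above — a patched compiler dumping an XML graph of nodes and attributes, piped into a parser — could be sketched roughly as follows in Python rather than Perl. Since no DTD or Schema exists, the tag and attribute names here are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Invented sample of the kind of node/attribute graph dump the chat
# describes; the real Introspector element names are not specified here.
dump = """
<graph>
  <node id="1" kind="function_decl" name="tree_size"/>
  <node id="2" kind="return_type" name="size_t"/>
  <edge from="1" to="2" label="type"/>
</graph>
"""

root = ET.fromstring(dump)
# Index every node by id, and collect the labelled edges between them.
nodes = {n.get("id"): dict(n.attrib) for n in root.iter("node")}
edges = [(e.get("from"), e.get("to"), e.get("label")) for e in root.iter("edge")]

print(nodes["1"]["name"])   # -> tree_size
print(edges)                # -> [('1', '2', 'type')]
```

A streaming parser (e.g. `ET.iterparse` over a pipe) would match the "parses it on the fly" behavior described for the Perl program; the node/edge dictionaries are then straightforward to serialize into a Postgres repository or an RDF graph.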

Relax NG Compact Syntax

Question about patents of the Intentional Programming tools from Microsoft, relates to the Introspector Project

Microsoft's intentional programming project was cancelled. But the idea was to store all the semantic information in a repository and use transformations to create the various representations of the program on the screen.

RDF Interest Group IRC Scratchpad, last cranked at 2002-06-25 21:35
posted by mdupont at 2002-06-25 20:10
mdupont: I would like to use DIA as UML editor for the introspector

W3C Semantic Web: Resource Description Framework (RDF) Interest Group Collaboration: IRC Tools and Logs
The RDF Interest Group IRC Scratchpad, a Web-based recommendation and annotation system, has been created by Edd Dumbill of XMLhack. The Scratchpad selectively logs comments made on the IRC channel, and is operated by an IRC bot 'dc_rdfig' (see chump instructions and source code release for more details). A full text search facility and RSS feed are available, as are the source XML files generated by the tool. Libby Miller has made an RDF-based search facility for the RSS blog data. ILRT's RDF at a Glance page shows one use of the RSS data.
Complete public logs of the discussions on the #rdfig channel are also available (in text, html and rdf flavours), thanks to Dave Beckett of ILRT. These logs are created by the 'logger' bot, which also offers search facilities (use: /msg logger help for more info).

W3C Semantic Web: Resource Description Framework (RDF) Interest Group #RDFIG - Internet Relay Chat (IRC) for Semantic Web Developers
Members of the RDF developer community can also augment mailing-list discussions with real-time Internet chat tools. IRC is one such tool. There are a number of public IRC servers in existence: while W3C doesn't itself run such a server for general developer discussions, members of the Interest Group can often be found on the Open Projects IRC Network (channel #rdfig for general RDF IG discussion).

XML developer news from XMLhack: by and for the XML community Welcome to xmlhack, a news site for XML developers. Our aim is to distill essential news, opinions, tips and issues concerning XML development.

Resource Description Framework (RDF) / W3C Semantic Web Activity The Resource Description Framework (RDF) integrates a variety of applications from library catalogs and world-wide directories to syndication and aggregation of news, software, and content to personal collections of music, photos, and events using XML as an interchange syntax. The RDF specifications provide a lightweight ontology system to support the exchange of knowledge on the Web.

Active Server Pages: see ASP

Invocation:
The process of performing a method call on a CORBA object, which can be done without knowledge of the object's location on the network. Static invocation, which uses a client stub for the invocation and a server skeleton for the service being invoked, is used when the interface of the object is known at compile time. If the interface is not known at compile time, dynamic invocation must be used.
For those who are queasy about the idea of enforced naming conventions, explicit information about a class can be provided using the BeanInfo class. When a RAD tool wants to find out about a JavaBean, it asks the Introspector class, which searches for a BeanInfo by name; if a matching BeanInfo is found, the tool uses the names of the properties, events and methods defined inside that pre-packaged class.
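The convention-based discovery that the Java Introspector performs when no BeanInfo is supplied can be mimicked in a few lines of Python. This is an illustrative analogue only, not the JavaBeans API; the class and helper names are invented:

```python
class Bean:
    """A toy 'bean' whose properties follow a get_*/set_* naming convention."""
    def get_color(self): return "red"
    def set_color(self, v): pass
    def get_size(self): return 42
    def helper(self): pass          # not a property accessor

def discover_properties(obj):
    """Discover 'properties' purely from method names, with no metadata class."""
    getters = [m for m in dir(obj) if m.startswith("get_")]
    return sorted(name[len("get_"):] for name in getters)

print(discover_properties(Bean()))   # -> ['color', 'size']
```

The trade-off is the same one the glossary entry notes: naming conventions cost nothing to declare but force a rigid style, while an explicit metadata class (BeanInfo in Java) lets the author override what introspection would guess.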


In this paper, we describe an approach to high performance computing which makes extensive use of commodity technologies. In particular, we exploit new Web technologies such as XML, CORBA- and COM-based distributed objects, and Java. The use of commodity hardware (workstation- and PC-based MPPs) and operating systems (UNIX, Linux and Windows NT) is relatively well established. We propose extending this strategy to the programming and runtime environments supporting developers and users of both parallel computers and large scale distributed systems. We suggest that this will allow one to build systems that combine the functionality and attractive user environments of modern enterprise systems with delivery of high performance in those application components that need it. Critical to our strategy is the observation that HPCC applications are very complex but typically only require high performance in parts of the problem. These parts are dominant when measured in terms of compute cycles or data points, but often a modest part of the problem when measured in terms of lines of code or other measures of implementation effort. Thus, rather than building such systems heroically from scratch, we suggest starting with a modest-performance but user-friendly system and then selectively enhancing performance where needed. In particular, we view the emergent generation of distributed object and component technologies as crucial for encapsulating performance-critical software in the form of reusable plug-and-play modules. We review here commodity approaches to distributed objects by four major stakeholders: Java by Sun Microsystems, CORBA by the Object Management Group, COM by Microsoft, and XML by the World Wide Web Consortium.
Next, we formulate our suggested integration framework called Pragmatic Object Web in which we try to mix-and-match the best of Java, CORBA, COM and XML and to build a practical commodity based middleware and front-ends for today's high performance computing backends. Finally, we illustrate our approach on a few selected application domains such as WebHLA for Modeling and Simulation and Java Grande for Scientific and Engineering Computing.

2.4.3 Visual Metacomputing

The growing heterogeneous collection of components developed by the Web / Commodity computing community already offers a powerful and continuously growing computational infrastructure of what we called DcciS - Distributed commodity computing and information System. However, due to the vast volume and multi-language, multi-platform heterogeneity of such a repository, it is also becoming increasingly difficult to make full use of the available power of this software. In our POW approach, we provide an efficient integration framework for several major software trends, but programmatic access at the POW middleware level is still complex, as it requires programming skills in several languages (C++, Java, XML) and distributed computing models (CORBA, RMI, DCOM). For end users, integrators and rapid prototype developers, a more efficient approach can be offered via visual programming techniques. Visual authoring frameworks such as Visual Basic for Windows GUI development, AVS/Khoros for scientific visualization, or the UML-based Rational Rose for Object Oriented Analysis and Design have been successfully tested and enjoy growing popularity in their respective developer communities. Several visual authoring products have also appeared recently on the Java developers' market, including Visual Studio, Visual Age for Java, JBuilder and J++.

The HPC community has also explored visual programming in terms of custom prototypes such as HeNCE or CODE, or adaptation of commodity systems such as AVS. At NPAC, we are developing a Web based visual programming environment called WebFlow. Our current prototype, summarized below and discussed in detail in Section 5.7, follows the 100% Java model and is currently being extended towards other POW components (CORBA, COM, WOM) as discussed in Sections 5.8 and 5.9.


Thanks for your tips, and thanks to all the people on this list.
I am very excited about the resonance I get from the Prolog community; the
gcc compiler community proper is not that interested in this project or any
project like it.
It seems that many people here have sympathy for the idea of extracting
metadata from C/C++ programs.

To answer your question,
>>Is there any redundant information?
yes, there is much redundant information:
for example, I have one output file per input C file from the compiler,
plus one file per function that is compiled in each module, each time it is
declared in a C file (inlined functions appear all over the place).

>>Could the information be put into a CDB file and Ciao's memory be
>>used as a cache?
I was hoping that that would work.

The set of all global information for a C program is not that large
(types and functions); these should compress down well.

The files that I have are around 10-20 MB per source file for the
translations of the gcc sources themselves.

My original memory limitations were with GNU Prolog; I must admit I have not
tried with Ciao yet :(.

>>Is there information which is seldom needed, so it could be loaded on
>>demand?
The bodies of the functions can be loaded on demand;
the usage information of data types is not always needed.

I have switched the processing to Perl for a while, but I really did like
working with Prolog, partly because of its querying abilities.

This weekend I will send out an update on the project with all newer source
code and example XML files

to the project page at

I will be working from my account this weekend.

-----Original Message-----
From: Richard A. O'Keefe []
Sent: Donnerstag, 13. Dezember 2001 17:18
Subject: Re: Database and memory limitations

Manuel Carro <> wrote:
I find it of interest that you are transforming XML datasets
with XSL... specifically, the reason your snippet caught my eye is that I'm
about to try out some previous work on Topic Navigation Maps with Prolog
(which is new to me), well basically to see what fits well and what

I note that SWI Prolog comes with an SGML parser which supports XML,
including XML namespaces. This package has particular support for RDF.
I don't know whether Ciao's and SWI's licences are compatible, but it
might be worth looking into. I'm told that SWI Prolog is being used
to process 90MB RDF files.

I also note that Prolog is vastly more convenient for XML processing
than XSLT is. Prolog "Document Value Model" data structures for
representing XML are pretty much bound to be much cheaper than the
"Document Object Model" data structures used by most XSLT processors,
if you have a reasonably compact representation for text. (SWI Prolog
uses garbage-collected atoms for this.)
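To make the contrast concrete, here is a hypothetical Python sketch of a "document value model": the XML document becomes plain immutable nested values rather than a graph of mutable DOM node objects. The tuple layout is my own invention for illustration, not SWI Prolog's actual representation:

```python
import xml.etree.ElementTree as ET

def to_value_model(elem):
    """Represent an element as (tag, attrs, children); text stays a plain str.

    Values are immutable tuples, so they can be shared and compared cheaply,
    unlike DOM nodes, which carry parent pointers and mutable state.
    """
    children = tuple(
        [elem.text] if elem.text and elem.text.strip() else []
    ) + tuple(to_value_model(c) for c in elem)
    return (elem.tag, tuple(sorted(elem.attrib.items())), children)

doc = ET.fromstring('<book lang="en"><title>Prolog</title></book>')
print(to_value_model(doc))
# -> ('book', (('lang', 'en'),), (('title', (), ('Prolog',)),))
```

Pattern matching over such terms is exactly what Prolog unification gives for free, which is one way to read O'Keefe's claim that Prolog term structures are much cheaper to traverse than a full DOM.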

My own experience is that, having Prolog, Scheme, and Haskell available,
it'll take a gun pointed at my head or an extremely large bribe to make
me use XSLT for anything.

I suspect that the fundamental problem is with the representation that
is being generated as the output of the XSLT processing step.

Is there any redundant information?
Is there information which is seldom needed, so it could be loaded on demand?
Could the information be put into a CDB file and Ciao's memory be
used as a cache?