Older blog entries for mdupont (starting at number 38)

I have released a new version of the introspector, a proof of concept, something you can look at and learn from. It is a self-contained demo program that lets you graphically explore the structure of almost any program that you can compile with gcc!

It features the introspector ice cube. The ice cube contains a superfast, compressed extract of the semantic data of the program that can be compiled in as a library and loaded into memory in milliseconds.

The graph algorithms are also very fast on constant-size arrays of objects!

Hopefully it will become a new way to embed static semantic resources into your programs.

We then slice the ice cube by property into nice thin C arrays.

It has a gcc tree extracted out of the DotGNU pnet idlasm code emit function. That means I have reverse engineered a free software component.

The results of the reverse engineering are stored in an RDF repository. There are cwm, Perl, and shell scripts doing semantic processing of the data. A Redland RDF repository is used to interface into the guts of the gcc compiler.

The ASTs are serialized by a patched, experimental gcc 3.4 -fdump-translation-unit; you can find the source code in the CVS.

That is emitted into RDF and converted by a Perl script into an ice cube.

That is served up as slices of data: each attribute gets its own vector whose length is the number of nodes carrying the selected RDF property. There is in fact a matrix of all the objects and the relationships between them stored in the array.

This release contains just the Linux binary of the program, with all this data compiled into an ICE Cube:

That is emitted into an inline C array for compiling into the target program.
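To make the slicing concrete, here is a minimal C sketch of what a couple of emitted slices could look like; the identifiers (ICE_NODE_COUNT, ice_name, ice_type) are made up for illustration and are not the names the generator actually produces:

/* Hypothetical ice-cube slices: each RDF property becomes one
 * constant C array indexed by node id, so graph lookups are
 * plain array accesses. */
#include <stdio.h>

#define ICE_NODE_COUNT 4

/* slice for the "name" property */
static const char *ice_name[ICE_NODE_COUNT] = {
    "main", "emit_code", "node_id", "dump_tree"
};

/* slice for the "type" property: each entry is another node id,
 * or -1 when the node has no type */
static const int ice_type[ICE_NODE_COUNT] = { 2, 2, -1, 2 };

int main(void)
{
    for (int i = 0; i < ICE_NODE_COUNT; i++)
        printf("node %d: name=%s type=%d\n",
               i, ice_name[i], ice_type[i]);
    return 0;
}

Because every slice is a fixed-size array compiled straight into the binary, loading the data is just a matter of mapping the executable, which is where the millisecond load times come from.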

Please join the list, come to the #introspector channel on freenode.net, and jabber me at mdupont@nureality.ca

Just had an idea:

What about a simple signed RDF description of the things that you want to sell? Like UDDI, but just an RDF file.

Then you can post all of that to the web, register yourself with a search engine, and buyers can grab the stuff from you.

mike

I find it very strange that, while reading an article about Microsoft's anti-Linux FUD, I see that it is paid for by Microsoft.

Here is the article: NEWSFORGE

Here are the ads that appear on my screen: MS AD MSBANNER

In fact, I feel sick to my stomach.

2 Sep 2003 (updated 2 Sep 2003 at 16:58 UTC) »

So after reading about Slashdot RSS feeds and after reading berend's diary [edited grammar]

It hit me:

I have been thinking about unifying blogging, wikis, email, bug reports, and all types of data about your software into one format.

It should be possible to just create a set of RDF files that contain all the relevant information; these files would then be interpreted by various agents for display.

You can post the data that is directed at a specific person into a directory. For example, if you want to send a mail to someone, you would sign it in an encrypted RDF file and post that on your webserver. They would just pull it from your server.

In any case, this is making sense to me: it would be a central way to manage publication on all the needed mediums. By using RDF you can mark up your messages with all the metadata needed.

The rendering can be done by a set of filters.

Imagine a wordnetization or ispellification of your text, identifying the words, etc.

That would allow grammar engines to process your text more easily... crazy ideas are exploding in my mind...

I got your point, madhatter.

I will constrain my postings to Advogato for a while; maybe I have been posting too much uninteresting stuff.

peace,

mike

I tried to post this to my article, but something is wrong.

exa: I have talked many times about this to rms and others. In fact, I mail with him often.

My issue is not with rms, but with the way that the gcc developers are on one hand supporting an export of their own, but on the other hand...

raph: If you want to delete this article, please feel free.

It is not that the free software philosophy is the problem, but those who seek to gain more power and control than is provided for by the GPL.

Omnifarious: you say "Get over it, and build something." I have been building something for years now; you can download it, run it, and use it.

you say "Or, get someone else who understands the issue and can write without making it sound like paranoid ranting to write the article for you. "

OK, what parts don't you like? I can post a follow-up here that is an edited version.

DotGNU X11 control ideas:

1. VCG graph layout control

2. RDF editor control

3. IP address control

26 Aug 2003 (updated 26 Aug 2003 at 17:38 UTC) »

Here you go, DeepNorth, now you can certify bytesplit as a Troll. Too bad Advogato ignores it.

Troll Certification

<form method="post"

action="http://www.advogato.org/acct/certify.html">

Certify Who? as:

<select name="level" value="level"> <option> Master </option>

<option> Journeyer </option>

<option> Apprentice </option>

<option selected="selected"> Troll </option>

<option> Observer </option>

</select>

<input type="submit" value="Certify">

<input name="subject"

value="bytesplit">

</form>

RFC: SPAMCENTER

The spamcenter is a place to deliver spam to.

By collecting spam there, instead of bouncing it or delivering it to the user, you will be able to reach the following goals:

1. See who is spamming.

2. See what is being spammed.

3. See when the spammers are spamming.

4. Allow users to retrieve non-spam.

5. Allow patterns to be set up to compress the spam as just differences to other spam.

6. Allow testers of antispam software to run it on the data files.

7. Allow people to route their mail via the SPAMCENTER for screening.

8. Use certifications and webs of trust to allow certified users to report new spam.

9. Allow certified users to create patterns to catch spam.

10. Store all the data under the GFDL and GPL.

26 Aug 2003 (updated 26 Aug 2003 at 08:09 UTC) »
treecc-info.owl ontology: the treecc introspector ontology main file

This ontology is extracted out of the C header of the treecc program.

It will allow the RDF markup of the instances of the compiler objects defined using the treecc language.

In that sense, it is a meta-meta-model.

The treecc grammar for a language is a form of meta-model of that language.

The supporting files are also here: input contains many enum values that will be used to mark various nodes; they will become owl:Classes in their own right.

The Parser file contains supporting properties about the results of the parser.

e.g. treecc-parse:TreeCCParse will have the owl:Domain of a TreeCCDocument and the owl:Range of a TreeCCIntrospected file.

mike

