Older blog entries for tetron (starting at number 3)

Strictly speaking, I didn't get jack shite done today. Turned in the last lab for CS201 and got really far in Zelda II but other than that... Where did all the time go? Damn.

Anyway. Was gonna write a new chat client that used a "mario" style interface, e.g. you could jump around on platforms like all those old NES games. It could be a really l33t way to chat, mixing talking and interactivity in some amusing ways :-) Then I realized it would make more sense to hack gfxchat to do the platform stuff rather than writing a whole new program. THEN realized that it was 11:00pm and I didn't feel like writing any code. So I wrote a poem instead :-)

A tail around a corner
A toy rolls by
Memory of a purr
Black and white spots
A flash of fur

Streaking through the house
Death to ping-pong balls!
Stop!
Time for food
Time for grooming
Time for sleep

Noble, poised
This is he
Living with us
But not below us
One of us
The feline, the king.
(other poetry on my website)

nixnut - XML is just a syntax. Yes, it is a metalanguage in the sense that it defines how to specify real, useful languages, but it is in the design of these languages where we need more meta-level markup. One thing that has been rattling around in my head recently has been how to design a language that actually coerces users into using content-based markup, so that programs can reason about a page's contents more intelligently than the modern complex web page that's just a million nested tables. The greatest boon to intelligent agents and the Internet would be a web that computers could extract meaning from as well as people.

Okay, I'm ranging everywhere from hypermedia and virtual reality to intelligent agents and artificial intelligence here, but I think this is the future, and the future is going to be cool :)

I HATE COMPUTER SCIENCE 201!

"Architecture and Assembly Language" **growl** A little from column (b), a LOT from column (a). Electrical engineering for computer science majors. Lovely. Did I mention that this class _FAILS_ half the students in it every semester? And that half of those are FAILING IT FOR THE SECOND TIME?! Well, I went and talked to a professor about it at least. Maybe the department will get a clue... some day.

jmason, nixnut and jschauma responded to my little rant on hypermedia. Well, the superficial part about why HTML sucks, not the important bits :-)

Someone mentioned that it could be done server side. Of course you're right, the server can do ANYTHING - my example of slashboxes or netcenter channels is just a very sophisticated sort of server-side include. The idea here is doing it client-side. How can we get more intelligence into the client?

Frames are evil, layers are proprietary, and I don't think javascript could actually accomplish this sort of multilevel document nesting. What I'm thinking of is beyond just sticking one piece of a web site in another (as banner ads do); rather, what would be really cool would be an interaction between the inner and outer documents on various levels - visually (layout) and conceptually (outer documents are more general, inner documents more specific, or other sorts of information relationships.)

What I think would be really cool is if ON YOUR HOME PC (none of this centralized "portal" crap) you could combine many inflowing information sources (CNN, Slashdot, Memepool, Sluggy Freelance) into a single page that is best laid out for YOU - personalized, managed by an intelligent agent, most useful to you, and best of all you can screen out the advertising :-)

I don't think the web is up to this. If HTML were a stronger metalanguage, one whose tags described the MEANING of the data they wrapped, then we'd be able to analyze so much more from the average HTML document. Modern HTML is a bunch of layout information with a few vestigial tags from the times people tried to make it mean something. <em> vs <i> anyone? Or <address>?
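To make the <em>-vs-<i> point concrete, here's a toy sketch (my own illustration, using nothing but Python's standard-library HTML parser; the example addresses and the AddressExtractor class are invented for the demo): a program can pull a structured fact out of a semantic tag like <address>, but an <i> that renders the same pixels tells it nothing.

```python
# Toy demo: semantic markup lets a program extract meaning;
# presentational markup does not. Everything here is illustrative.
from html.parser import HTMLParser

class AddressExtractor(HTMLParser):
    """Collects the text inside <address> elements."""
    def __init__(self):
        super().__init__()
        self.in_address = False
        self.addresses = []

    def handle_starttag(self, tag, attrs):
        if tag == "address":
            self.in_address = True
            self.addresses.append("")

    def handle_endtag(self, tag):
        if tag == "address":
            self.in_address = False

    def handle_data(self, data):
        if self.in_address:
            self.addresses[-1] += data

semantic = "<address>tetron@example.org</address>"
layout = "<i>tetron@example.org</i>"  # same pixels on screen, zero meaning

p1 = AddressExtractor()
p1.feed(semantic)
print(p1.addresses)  # the agent now knows this is contact info

p2 = AddressExtractor()
p2.feed(layout)
print(p2.addresses)  # nothing recoverable: []
```

The italic version looks identical in a browser, but only the semantic version gives an agent something to reason about.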

Whoops. Got a meeting to go to. More rants on this subject tomorrow :-)

Dude. Objects actually go away now when they're supposed to. This is cool. Finally I won't have to restart the server every time something screws up :)

Talked to my friend Reed for a long time last night about the future of hypermedia. What the web could have really used would be an <include> tag, that let you include bits of HTML from other documents into your own, just like you can inline offsite images. If you could nest documents in documents in documents, you could build an incredibly dynamic, even sort of organic, thing - more of a complex multilevel expansive document space than a mere web page. Here's an example: take, oh, slashdot slashboxes, or netscape netcenter channels. These are headline services. They get a feed from these other sites, and reformat it and integrate it as part of their own page. Now think of this: imagine if they simply linked back to the originating site, and that site could then paint inside the box whatever it liked? You still maintain a single cohesive web page, but the browser is actually compositing many pieces of other sites together into a single sort of digest.
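To sketch what a browser doing this might look like: here's a toy Python mockup of that hypothetical <include> tag. The tag name, the regex, and the stand-in fetch table (a dict instead of real HTTP requests, with made-up example URLs) are all my own invention for illustration - the point is just the client-side recursive compositing.

```python
# Toy sketch of a hypothetical <include href="..."/> tag: the *client*
# splices fragments from other sites into the outer page, recursively.
import re

# Stand-in for real HTTP fetches of other sites' fragments.
FRAGMENTS = {
    "http://slashdot.example/headlines": "<ul><li>News for nerds</li></ul>",
    "http://cnn.example/topstory": "<p>Top story here</p>",
}

def composite(page, fetch=FRAGMENTS.get, depth=3):
    """Expand every <include href="..."/> tag, nesting up to `depth` levels."""
    if depth == 0:
        return page  # guard against include loops
    def expand(match):
        body = fetch(match.group(1)) or ""
        return composite(body, fetch, depth - 1)  # includes can nest
    return re.sub(r'<include href="([^"]+)"\s*/>', expand, page)

outer = ('<h1>My digest</h1>'
         '<include href="http://slashdot.example/headlines"/>'
         '<include href="http://cnn.example/topstory"/>')
print(composite(outer))
```

The outer page stays a single cohesive document, but each box is painted by its originating site - and because expansion recurses, an included fragment could include fragments of its own.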

By better laying out our hypermedia information, in hierarchical/hyperlinked structures, the browser can become more of an intelligent agent. Power to the end user! The internet right now is racing in two directions - one towards more centralization, with portals and ASPs and backbones, the other towards more decentralization, with gnutella and freenet and their kin.

The problem, however, is how to balance the order (and control) gained/afforded by centralization, vs the chaos that is the current rather primitive crop of distributed systems? The web does this surprisingly well, but only by having multiple, redundant, _centralized_ indices of the web. Distributed centralization. Gotta love it :) But the web is also incredibly noninteractive, and that's really bad. When people say they use the web "as an applications platform" they really mean they're using HTTP to download some ActiveX or Java controls. The idea of a web page as an application only makes sense in the context of forms and javascript - but these are not real-time technologies. A dynamic hypermedia system needs to be able to respond to changes in the system as they happen, and the web is unsuited to this.

In case you hadn't guessed, the project I'm working on (ADR) has a lot to do with this :)

Birthday yesterday (July 29th). I'm twenty now. No longer a hotshot teenager. Damnit. Now I really have to do something cool to stay ahead of the next generation :)

Left sputnik (my laptop, my desktop is named mir :-) plugged into the wrong wall socket (it's on a switch, meant for a lamp, which means you can turn it off) all night. Woke up to find the battery completely flat. WHOOPS!

Might as well give this a try.

Caught the illustrious graydon on IRC, had a nice chat about CORBA. I probably should have looked into it more before implementing MOS (Meta Object System, see Amherst Distributed Reality for the specifics.) Not that I think CORBA would have been a perfect fit for our application, since it has some specific feature requirements, but because it's good to have another point of view, especially when the two systems are conceptually similar.

Finally rounding out the lifecycle of objects though. It was sort of bad that you could acquire and manipulate objects, but not get rid of them :) But things ought to get GC'd now... I hope.

Arrrrg! I wish Java3D 1.2 for linux would come out. The interesting parts of our project (working virtual reality) are at a complete standstill. It's very frustrating.

I'll keep y'all posted, 'cause when this finally comes together it's gonna be mad cool :)
