Older blog entries for tampe (starting at number 98)

2 Jan 2010 (updated 16 Jan 2011 at 07:04 UTC) »
Types

I ended the last sequence of blog posts about exploring looping, then basically stopped and started to learn about type theory and Prolog using Qi.

Right now I am working with this engine to write a type system that works pretty much like the Lisp type system, e.g. if we can deduce a type, then use it! This type engine will be used to compile Qi to Lisp/Clojure/Scheme/Go? or whatever Lisp-like environment you have got.
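
As a rough illustration of "if we can deduce a type, then use it" (all names here are mine, not the actual Qi compiler): emit a specialized primitive when the argument types are known, otherwise fall back to a generic call.

def deduce(expr, env):
    """Very small type deducer: numeric literals and typed variables only."""
    if isinstance(expr, (int, float)):
        return "number"
    if isinstance(expr, str):
        return env.get(expr)           # None means "unknown"
    return None

def compile_add(a, b, env):
    if deduce(a, env) == "number" and deduce(b, env) == "number":
        return f"(fixnum-add {a} {b})"     # specialized, no runtime dispatch
    return f"(generic-add {a} {b})"        # generic, checks types at runtime

env = {"x": "number", "y": None}
print(compile_add("x", 3, env))    # (fixnum-add x 3)
print(compile_add("x", "y", env))  # (generic-add x y)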

Cheers!

1 Jan 2010 (updated 16 Jan 2011 at 07:06 UTC) »

A new year.

Well, every Christmas I spend some time thinking about space and some ideas form; it's a fun and entertaining game. Maybe not correct thoughts, but entertaining. Now the new year starts and it is back to business with computer quizzes instead of trying to find the dream of Einstein. Anyhow, I made a small document describing my view (well, you never know, people tend to independently walk the same paths) of how the world is constructed.

I promise, no more of this, until next Christmas.

----------------------------------------

We have the right to think, correct or not; in a world of only correctness, you would simply drown in mathematics and never learn to swim in it.

/Stefan

29 Dec 2009 (updated 16 Jan 2011 at 07:17 UTC) »
Christmas days, thinking differently

I spent this Christmas reading Simon Singh's Big Bang. And as Simon pretty much says, I say as well: what a wonderful world.

I'm affected by my education in such a way that I constantly ask myself whether the things presented in a popularizing book are really true. Did he mean that? Sure, people turn things into a better light when asked about them afterwards, and so on. There is a constant flow in my mental left margin. Anyhow, I'm really impressed by the puzzle that scientists assembled to achieve such solid ground for the Big Bang theory.

There are always weak spots in an argument, but the core argument is really solid in my view.

I like the formulation that uses the potentials under the Lorenz gauge, if I remember everything correctly. Then all components follow a wave equation, and there is one linear first-order constraint that looks close to a simple continuity equation. Now I wanted to understand what this actually meant and searched for some example that would reveal that to me. And there is one telling partial solution. You could have a solution where you make the constraint a continuity equation. You will have a sort of "magnitude of disturbance" field in the scalar potential, and the vector potential will be a sort of momentum potential, e.g. the scalar potential times a velocity field of constant speed c. It's a really trivial and very particular solution. But you can modify it. You can assume that if along a direction you have the same disturbance, then you can choose any velocity you like.
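
To make this concrete, here is the standard formulation I believe I am referring to (electromagnetic potentials in the Lorenz gauge, SI units, with $\Box = \frac{1}{c^2}\partial_t^2 - \nabla^2$):

$$\Box \varphi = \frac{\rho}{\varepsilon_0}, \qquad \Box \mathbf{A} = \mu_0 \mathbf{J}, \qquad \nabla \cdot \mathbf{A} + \frac{1}{c^2}\frac{\partial \varphi}{\partial t} = 0.$$

The gauge condition has the same shape as the continuity equation $\nabla \cdot \mathbf{J} + \partial_t \rho = 0$: if one writes $\mathbf{A} = (\varphi / c^2)\,\mathbf{v}$ for some velocity field $\mathbf{v}$, the gauge condition becomes a continuity equation for $\varphi$, which is the identification of the scalar potential with a "magnitude of disturbance" and the vector potential with its momentum described above.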

Now, in my view this captures some essential features of electromagnetism. A constant stream of light does not depend on the speed of the stream; it is the information that is constrained to the speed of light, not necessarily the actual physical transport of the disturbance.

Note that if we have just one stream, the transversal direction has to be transported at the speed of light, which indicates plane waves.

Even if this is a simple particular solution, one would probably be able to deduce Maxwell's equations after closing the space of solutions under Lorentz transforms.

OK, this is just mathematical play, but from my position of knowledge it poses a very interesting question. It's just some speculation from a guy who is not an expert. But I still hope that I've teased your imagination, so please have fun, play with the mathematics, enjoy the stars and have a very happy hacking new year.

22 Nov 2009 (updated 16 Jan 2011 at 07:20 UTC) »

Exploring, Fighting and Enjoying

I'm coding on a library called BoopCore right now. The reason is that I was thinking about how to make the code for the new version of Qi, called Shen.

Oh, I did a small test with the Einstein riddle. Got basically the same speed as gprolog so the "prolog" compiling part is not too bad.

My idea, though, is that it should be used by designers of PL tools and by people with specific needs. As an example, backtracking and unification can be heavily customized and still fast.
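
To give a feeling for what "customizable unification" means, here is a minimal unification sketch in Python; the names walk and unify are mine and not the BoopCore API:

# Substitutions are plain dicts mapping variable names to terms;
# variables are strings starting with an uppercase letter,
# compound terms are tuples like ("point", "X", 3).

def walk(term, subst):
    """Follow variable bindings until a non-variable or an unbound variable."""
    while isinstance(term, str) and term[:1].isupper() and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a[:1].isupper():
        return {**subst, a: b}
    if isinstance(b, str) and b[:1].isupper():
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(("point", "X", 3), ("point", 7, "Y"), {}))  # {'X': 7, 'Y': 3}

The customization point is exactly here: swap in your own walk, your own variable convention, or an occurs check, without paying for a general-purpose Prolog.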

The program itself is pure magic, e.g. it has some really poor design. But this is OK for a first version. That version has to have all the features, which are not decided from the beginning but grow as bugs are found and new features have to be implemented. This means I am Exploring ideas, Fighting the beast to get them implemented, but throughout the whole process, enjoying it as if it were the best wine in the world.

Cheers!

Type Type and Type And this is what I get

I have been quiet for some time here; work is calling, family is calling, and a new version of Qi, ready to be explored, was released. In the meantime I have been studying the sequent calculus and the Qi source code to learn how to mod it to my taste.

So perhaps my coolest hack is a type-driven macro framework. So, a little code:


(def-tp-macro ex1
   X                 : A -> (? Usage number)       where (ask? Usage)
   [X : num Y : num] : A -> (simple-num-num  X Y)  where (element? simple  Usage)
   [X : num Y : str] : A -> (simple-num-str  X Y)  where (element? simple  Usage)
   [X : num Y : num] : A -> (complex-num-num X Y)  where (element? complex Usage)
   [X : num Y : str] : A -> (complex-num-str X Y)  where (element? complex Usage))

The first clause says that for any kind of arguments and any type of evaluation context, first ask for the usage; the answer will be stored in Usage. This sends the type system out to track usages according to some mechanism; this is done the first time. The next time, if not inhibited, (ask? Usage) will be negative and the system goes on to expand according to the function signature and the properties of the value of Usage. In (? Usage T), T is the type that is returned from the function in the non-ask context, e.g. (+ (ex1 X1 Y1) (ex1 X2 Y2)) should type-check!

It works naively, by type-checking repeatedly: whenever something is asked for, a retry is done. This process can be made much more efficient by remembering the type deduction and using this memory at nonvolatile sites in the deduction tree, a property that somehow has to be passed up the deduction tree. Anyway, (ask? Usage) will be true if someone later in the deduction has inhibited it. So if an ex1 value is used in an argument of ex2, which also asks for information, then in this case ex2 inhibits ex1 when it asks for information. (To speed up this deduction process, ex1 should be marked as nonvolatile.)
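
Reading my own description back, the driver is roughly the following loop; a minimal Python sketch, where every name is hypothetical scaffolding and not the real Qi/Shen code:

def expand_with_usage(form, type_check, final_expand, max_passes=10):
    """Re-run type checking; each pass may answer pending usage queries."""
    usage = {}                                  # macro site -> collected facts
    for _ in range(max_passes):
        pending = type_check(form, usage)       # records facts into `usage`,
        if not pending:                         # returns the sites still asking
            break
    return final_expand(form, usage)            # expand with full knowledge

# Toy stand-ins: one macro site that asks once, then is satisfied.
def toy_type_check(form, usage):
    if "ex1" not in usage:
        usage["ex1"] = {"result-used-as": "number", "complexity": "simple"}
        return ["ex1"]                          # we asked, so retry
    return []

def toy_expand(form, usage):
    kind = usage["ex1"]["complexity"]
    return f"({kind}-num-num X Y)"

print(expand_with_usage("(ex1 X Y)", toy_type_check, toy_expand))
# -> (simple-num-num X Y)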

Actually, macros can work on code fragments like this:


(def-tr-macro ex3
  [adv [X : number Y : str] Z : symbol] : A -> [Z X]
  ...)
This is a quite general construct, and of course the process of macro expansion, usage-information exchange and so on can be repeated recursively.

So the macro can use information about how the result of the form is used later on in the code, and under what signature the type system thinks this form will be evaluated. So there is a message system that works in both directions in the code graph (what signals do I get from the arguments, and in what context, or which features of what I provide, are used).

There are weak points to this construct, but I think I now have one big chunk that can be put to use when I go deep into optimization of loop macros. At least I now have a powerful tool to do optimizations by using the computer as much as possible and my coding fingers as little as possible.

1 Jan 2009 (updated 16 Jan 2011 at 07:36 UTC) »

The number of dimensions is, well, infinite

I have a standard prediction of a science breakthrough that comes after having studied the standard model.

In the standard model of physics, all natural forces but gravity are combined in one formulation. The basic building blocks there seem to be fields whose properties come from Yang-Mills and Pauli theory. To understand the standard model it is good to think about what Yang-Mills and Pauli try to describe. I played around with it a little and got an understanding. It feels like these fields describe disturbances in a kind of ray fluid.

Let's move some atoms

11 Dec 2008 (updated 16 Jan 2011 at 07:39 UTC) »

Let's type unit tests

Maybe this is plain stupidity, but hey it's different from the usual path.

If we have full control of the type system, what can we do?

Well, types are a flow of information, so basically here you have the possibility to do meta-tracking.

But let's consider another idea. Assume that we make an abstract class, with no implementation.


(class stack A         Abstract
       (clear )->NIL
       (push A)->NIL
       (pop   )-> A)

This follows sort of the standard specification used in Java, C++, C#, etc.

Now consider another "concrete" class


(class mycl  A         Impl
       (clear )->NIL
       (push A)->NIL
       (pop   )-> A)

Then typically you subclass mycl from stack and off you go. The idea now is that in order to be able to subclass mycl from the abstract class, you would have to pass a unit test, e.g.


(class mycl) => (class stack) if (stack-unit-test mycl)
Customizing the type theory, adding these tests with caching, would be .... different.
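
A rough Python analogue of the idea, with hypothetical helper names (the point is only that the subtype relation is granted after a behavioural test passes, here via abc's virtual subclassing):

import abc

class Stack(abc.ABC):
    @abc.abstractmethod
    def clear(self): ...
    @abc.abstractmethod
    def push(self, x): ...
    @abc.abstractmethod
    def pop(self): ...

def stack_unit_test(cls):
    """Behavioural test a candidate must pass before being registered."""
    s = cls()
    s.push(1); s.push(2)
    assert s.pop() == 2 and s.pop() == 1
    s.push(3); s.clear()
    return True

def register_if_tested(abstract, impl, unit_test):
    """Grant the subtype relation only when the unit test passes."""
    if unit_test(impl):
        abstract.register(impl)      # virtual subclassing via abc.ABCMeta
    return impl

class MyCl:
    def __init__(self): self._xs = []
    def clear(self): self._xs = []
    def push(self, x): self._xs.append(x)
    def pop(self): return self._xs.pop()

register_if_tested(Stack, MyCl, stack_unit_test)
print(issubclass(MyCl, Stack))   # True, but only because the test passed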

As a side note, this is how I view object orientation abstractly. You have a set of code snippets in the form of different function implementations, and object orientation just puts names on subsets and makes arrows between the snippets; the process of making subclasses is just rules that make arrows, and rules for how to select which code to evaluate, or the context in which to evaluate a function, when using one or several name references.

If we all looked nice and same, the ugliest person would become attractive

/Snorgersen

Words can dance

My day job is calling for serious attention, and I had the usual period of being tired. This is sort of strange, but from time to time I need to sleep more. (This public effort of mine is usually done after 21:00, when the rest of the family relaxes in front of the TV or is sleeping.)

Anyway, this means that I do not pay attention to detail right now but spend my spare time gathering input, reading and just thinking.

I have been playing around with TeXmacs, mainly trying to see what code looks like when you allow for a little more typesetting than your favorite editor. I concluded from this effort how badly needed monospace is. The reason is that I tend to align stuff, because things are easier on the brain when you have alignment. This means that the brain will learn to use the difference between two rows in its visual parsing. Letters not being aligned will just make this activity a pain (to notice this, make sure you're tired; you then notice bad style). On the other hand, one feature that makes for better reading is subscript and superscript under monospace. The thing is that it's OK to change the spacing in a few blocks. Personally I would also like to have a few extra symbols to use while programming beyond what's common. But this is old stuff; Emacs has OK support for this and I've used it for some of my Lisp or Qi programming. I don't think that big integral and sum operators, square-root constructs and fractions are so badly needed for programming that I would suggest implementing them in a programming editor. On the other hand, the ability to use different decorators on your symbols is attractive if they don't disturb the spacing too much.

So I concluded that decorators and sub/superscripts that disturb the spacing as little as possible, together with Unicode symbols, would be my favorite typesetting extensions to the common craft of programming.

Let's sleep on it

16 Nov 2008 (updated 16 Nov 2008 at 21:24 UTC) »

You can do everything in assembler

I first want to say that layout managers and such really are old technology and already included in all modern GUIs, but it is always good to just focus on a thing and think a little to see if things can be improved.

When drawing diagrams in PowerPoint I always get very frustrated. The reason is that I believe that something like a good implementation of the tools found in the TeX community for drawing diagrams would be so much easier (for me). It would be so cool to be able to use the TeX tools to lay out stuff in diagramming and GUI construction. If you are like me, I suggest reading and playing with TeX tools like xy-pic; great fun.

Let me also point out that QP is probably not used because simpler solutions than what I described are probably already implemented. The point of using QP is that you can specify not only that X is close to Y, but also allow the distance to vary if there are conflicting constraints. It is also a quite general framework if you stick to linear constructs. What I miss is a framework and tools to handle and develop layout managers. There should simply be a tool for me and others who know the math (perhaps do it using scipy) to develop some great markup framework to do layout of different things. I could imagine it being based on QP and also, as a special case, linear programming. One could think of including other mathematical tools as well, like convex programming.

As an example, let's discuss implementing a tool to make sure that the layout has straight left and right margins. We stack a sequence of objects X1,X2,X3,..., with a smallest distance d1,d2,d3,... between X1,X2; X2,X3; ..., and we also say that we punish a deviation from di by coupling constants C1,C2,... Basically you could automatically select Ci,di according to the sizes of the objects (xy-pic in the TeX world has such ideas in its layout strategy for diagrams). Now, to get a right margin, specify a conflicting constraint that the total length of the sequence should be constant. You never get an exact right margin, but by adjusting the spaces between words or objects you may make it happen. The good thing about using this is that you will always get an answer, and depending on measures of how well each constraint is satisfied you can decide to move objects to the next line or include new objects. Also, you could fire up GUIs and let users who want to tweak the coupling constants do so, to adjust the layout.
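
A minimal sketch of that right-margin example, assuming scipy as the solver; the objects, gaps and stiffness values are made up for illustration:

import numpy as np
from scipy.optimize import minimize

widths = np.array([30.0, 55.0, 42.0, 38.0])   # object widths
d      = np.array([10.0, 10.0, 10.0])         # preferred gaps between them
C      = np.array([ 1.0,  4.0,  1.0])         # how much we dislike deviating
line   = 200.0                                # line width, left to right margin

slack = line - widths.sum()                   # total space the gaps must fill

res = minimize(
    lambda g: np.sum(C * (g - d) ** 2),       # quadratic penalty on deviations
    x0=d,
    constraints=[{"type": "eq", "fun": lambda g: g.sum() - slack}],
)
gaps = res.x
print(gaps, gaps.sum())   # gaps stretch/shrink; stiff gaps (large C) move least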

Finally, here is a trick I use to deduce the components in a quadratic criterion x'Sx + v'x + d from other representations without thinking. When I first tried to do this I went to the whiteboard and started to deduce things by hand. Well, that is not needed. Just define a function f(x) that is your criterion, in whatever form is easiest to write down. Then use your computer to evaluate f(0), f(ei), i=1,..,n, and f(ei+ej), i,j=1,..,n. Using this information you will be able to deduce S and v. I find this a very neat tool. It is written once and will save you tons of time.
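
Here is a small sketch of that probing trick; the helper name and the test criterion are my own, and S is assumed symmetric:

import numpy as np

def recover_quadratic(f, n):
    """Recover S, v, d with f(x) = x' S x + v' x + d, S symmetric."""
    d = f(np.zeros(n))
    E = np.eye(n)
    S = np.empty((n, n))
    v = np.empty(n)
    for i in range(n):
        # f(2 e_i) - 2 f(e_i) + f(0) = 2 S_ii   (second difference along e_i)
        S[i, i] = (f(2 * E[i]) - 2 * f(E[i]) + d) / 2
        v[i] = f(E[i]) - S[i, i] - d
    for i in range(n):
        for j in range(i + 1, n):
            # f(e_i + e_j) - f(e_i) - f(e_j) + f(0) = 2 S_ij
            S[i, j] = S[j, i] = (f(E[i] + E[j]) - f(E[i]) - f(E[j]) + d) / 2
    return S, v, d

# Quick self-check against a criterion written in a "convenient" form:
def crit(x):
    return (x[0] - x[1]) ** 2 + 3 * (x[1] - 2) ** 2 + x[2] ** 2

S, v, d = recover_quadratic(crit, 3)
x = np.array([0.3, -1.2, 2.0])
print(np.isclose(x @ S @ x + v @ x + d, crit(x)))   # True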

What I fear is that we mostly develop to make things simple for the less advanced users, because most users are on that level, which means that this is OK. Being an advanced user means that you don't get the benefit of most development, and that means that you are not 10 times as effective but perhaps only 2 times, and you lose.

Cheers

In the beginning there was a Layout

I think that a great layout engine should be the base of a great GUI. So here are some thoughts about layout.

I don't know about all the quirks of TeX, but basically, it seems, it's the art of stacking rectangles, where the rectangles themselves can be built from stacked rectangles and so on. It is a simplified description, but you can see one fundamental point in this argument: you can use an object-oriented description of the rectangles. Actually, you should be able to use the TeX engine to do automatic layout of a GUI. Does anyone know anything about how to do this in a nice way?
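
To make the stacking-rectangles picture concrete, here is a tiny box-model sketch in Python; it is my own toy and has nothing to do with TeX's actual internals:

class Box:
    """A leaf rectangle, or a horizontal/vertical stack of boxes."""
    def __init__(self, w=0, h=0, children=(), horizontal=True):
        self.children, self.horizontal = list(children), horizontal
        if self.children:
            ws = [c.w for c in self.children]
            hs = [c.h for c in self.children]
            self.w = sum(ws) if horizontal else max(ws)   # sizes bubble up
            self.h = max(hs) if horizontal else sum(hs)
        else:
            self.w, self.h = w, h

    def place(self, x=0, y=0):
        """Assign absolute coordinates; positions trickle down the tree."""
        out = [(self, x, y)]
        cx, cy = x, y
        for c in self.children:
            out += c.place(cx, cy)
            if self.horizontal:
                cx += c.w
            else:
                cy += c.h
        return out

# a row of two buttons stacked on top of a text area
ui = Box(children=[Box(children=[Box(40, 20), Box(40, 20)], horizontal=True),
                   Box(120, 80)],
         horizontal=False)
for box, x, y in ui.place():
    print(x, y, box.w, box.h)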

Automatic layout of graphs is cool, and I once made a huge call graph of a Python program on A2 paper with the help of the graphviz package; really nice. In graphviz you can have nodes that are rectangles. You should then be able to use the graphviz engine to find the coordinates of your GUI elements as well, to get a certain kind of layout. Is there such a link out in the wild?

Constraint programming for layout is cool, and of course not a new idea. You basically give constraints for how the coordinates of each rectangle can vary in relation to coordinates in other rectangles. Something like x>y, x-y>d (x is after y and at minimum d units apart). Then you might add a penalty of magnitude, say, C((x-y) - d)^2, and include all these in a quadratic criterion telling how much you dislike a deviation from distance d, and play with that. This is a more fuzzy and general way of implementing stacking. Say that we implement stacking according to this. We might also want to align things in different ways, e.g. we want one coordinate to be close to another coordinate in some way; "close" can again be expressed as a quadratic criterion. You can express that a linear transformation of a set of coordinates should be close to another set of coordinates, and so on. It is a powerful concept. You will end up with quadratic programming (QP).
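
For reference, the general shape of the problem all of this lands in is a standard QP: each soft penalty C((x-y) - d)^2 contributes to Q and c, while hard relations like x - y >= d become linear inequality rows (after a sign flip):

$$\min_{x} \; \tfrac{1}{2}\, x^{\top} Q\, x + c^{\top} x \quad \text{subject to} \quad A x \le b.$$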

There are some special cases. For example, if you always stack in an exact way when you build your rectangles, you may skip the inequalities and just solve the fuzzy constraints that are left in the quadratic criterion. Then you basically need to solve linear equation systems. Can we do this fast? Well, in order to have an exact and general solution and keep it simple, you may start with linear algebra routines for full (dense) matrices. The cost of such a solve is O(n^3) if I'm not mistaken, so a layout with 1000 coordinate values takes on the order of a billion floating-point operations, and the full 1000x1000 matrix of doubles takes about 8 MB. (The transfer to the computing unit may actually be much less due to sparseness of the matrix.) Now consider that GPGPU solutions can deliver 400 GFlops (see one of the examples at the CUDA site, with a Tesla card). You may now understand that in a few years, when GPU technology has been standardized and matured, you will be able to do some cool things with layouts. To the reader: for cases with sparse matrices you will find other mathematical tools, and GPGPU may not be suited.

Solving quadratic programming problems with constraints is, if I remember correctly, heavily dependent on solving equation systems as well, so GPGPU technology is probably welcome for this case too.

Can we dream up more layout tricks? Well, some kind of morphing. Say that I have a set of symbols and want them to be more square-like. How can we implement this? Maybe one way is to make the best fit of a square to some set of coordinates from a symbol. Then you can calculate the closest point on the square for each point on the symbol. A new symbol with the same coordinate identities is then searched for, such that you weight the variance to the square against the variance to the old symbol, and hence morph between them. I don't know how well this works, but there are plenty of tricks like this that have the potential to implement more fuzzy notions about a layout.
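
A sketch of how such a morph could look, assuming a crude axis-aligned square fit; the projection helper and the closed-form blend are my own simplifications:

import numpy as np

def project_to_square(p, center, half):
    """Closest point on the boundary of an axis-aligned square."""
    q = np.clip(p - center, -half, half)      # clamp into the square
    i = np.argmax(np.abs(q))                  # push the dominant axis...
    q[i] = np.sign(q[i] or 1) * half          # ...out to the boundary
    return center + q

def morph(points, w_square, w_old):
    center = points.mean(axis=0)
    half = np.abs(points - center).max()      # crude best-fit square size
    targets = np.array([project_to_square(p, center, half) for p in points])
    # minimizing w_square*|p - target|^2 + w_old*|p - old|^2 has this closed form:
    return (w_square * targets + w_old * points) / (w_square + w_old)

pts = np.array([[0.0, 1.2], [1.0, 0.1], [0.1, -1.0], [-1.1, 0.0]])
print(morph(pts, w_square=1.0, w_old=1.0))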

And then the content began
