Older blog entries for tampe (starting at number 101)

27 Aug 2010 (updated 27 Aug 2010 at 10:29 UTC) »

Compress that for me, please

So, if you are in the habit of analyzing lists and tree structures such as code, and you like pattern matching, you may end up writing tons of code like:


(def sum-of-leaves
   ((X . L)      (+ (sum-of-leaves X) (sum-of-leaves L)))
   (()           0)
   ( X           X))

That is, if you have a list, sum the first element and the sum of the rest; the empty list yields 0; and all non-list elements - the leaves, assumed to be numbers - just count as the value they represent.
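
For comparison, the same function written directly with Guile's (ice-9 match) looks like this:


(use-modules (ice-9 match))

(define (sum-of-leaves t)
  (match t
    ((x . l) (+ (sum-of-leaves x) (sum-of-leaves l)))  ; a pair: sum both parts
    (()      0)                                        ; the empty list
    (x       x)))                                      ; a leaf, assumed to be a number

;; (sum-of-leaves '(1 (2 3) 4)) => 10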

So how should we code this destructuring in a primitive language? One idea I used is to work with the guile VM directly and add a few extra primitives there.

So consider destructuring


(define (f A) (match A ((X 'a (X . L) . U) (list X L U)) ...))

we could do this with a stack machine like,


(push A)         ;; Stack -> (A)
(explode-cons)   ;; Stack -> ((car A) (cdr A))
(set X)          ;; Stack -> ((cdr A))
(explode-cons)   ;; Stack -> ((cadr A) (cddr A))
(match-eq? 'a)   ;; Stack -> ((cddr A))   
(explode-cons)   ;; Stack -> ((caddr A)  (cdddr  A))
(explode-cons)   ;; Stack -> ((caaddr A) (cdaddr A) (cdddr A))
(match-eq? X)    ;; Stack -> ((cdaddr A) (cdddr A))
(set L)          ;; Stack -> ((cdddr A))
(set U)          ;; Stack -> ()
(list X L U)
;;; note if an explode-cons or eq? fails it will 
;;; reset and the next pattern will be tried.

And what you note is that this is a more compact reduction of pattern matching than doing it with the standard VM of guile. So the end result is that code executed on this VM is both faster and more compact than using the standard setup. But of course, if we would like to compile this to the native platform, then the standard compilation to pure scheme probably has an edge.

Interestingly though (I'm implementing a prolog on top of guile), this pattern can be generalized to the case where A - the input - is a unifying variable. The destructuring will look almost the same, but we need to tell the VM that we are in the mode of destructuring a unifying variable. Meaning that if the variable being exploded is not bound, we set it to a fresh cons and push two new unbound variables - its car and cdr - onto the stack.
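
A minimal sketch of what explode-cons could do on the Scheme level, in both modes; the logic-variable representation here (make-unbound, bind!, deref) is invented for illustration and is not the actual VM primitive or guile-unify's real layout:


;; Logic variables represented as one-slot vectors (hypothetical).
(define (make-unbound)  (vector 'unbound))
(define (var? x)        (vector? x))
(define (unbound? x)    (and (var? x) (eq? (vector-ref x 0) 'unbound)))
(define (bind! v val)   (vector-set! v 0 val))
(define (deref x)       (if (and (var? x) (not (unbound? x)))
                            (deref (vector-ref x 0))
                            x))

;; explode-cons in unify mode: returns the new stack, or #f on failure
;; (failure means: reset and try the next pattern).
(define (explode-cons stack)
  (let ((x    (deref (car stack)))
        (rest (cdr stack)))
    (cond
     ((pair? x)                       ; ordinary destructuring
      (cons (car x) (cons (cdr x) rest)))
     ((unbound? x)                    ; unifying-variable mode
      (let ((a (make-unbound)) (d (make-unbound)))
        (bind! x (cons a d))          ; bind the variable to a fresh cons
        (cons a (cons d rest))))
     (else #f))))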

Cheers!

24 Aug 2010 (updated 16 Jan 2011 at 06:54 UTC) »
What's growing on your head old moose

So, a couple of weeks of work and this was the result:


prun() :- L = [1,[3,[7,[15],8,[16]],
               4,[9,[17],10,[18]]],
               2,[5,[11,[19],12,[20]]],
               6,[13,[21],14,[22]]],
          postpone_frame(30,0.8),
          f(L).


f([H|L]) :- !, sub_tree_max([H|L],X), postpone(X), g(H,L).

f(H) :- postpone(H),pk(H), fail.

g(H,L)     :- f(H).
g(_,[H|L]) :- g(H,L).
g(_,[])    :- fail.

where sub_tree_max represents the maximum value in a specific subtree. We get the result:

 
(18) (19) (20) (21)
(22) (15) (16) (17)
(14) (11) (12) (13)
(8)  (9)  (10) (7)
(6)  (4)  (5)  (3)
(2)  (1)

Note, the intention is to make an addition to a prolog engine to accomplish the above.

postpone_frame(Level,Fraction) will start by executing all postpones above Level*Fraction. Then it will execute all postpones above Level*Fraction*Fraction, and so on. You may want to change the algorithm to your taste, but the main idea is not to do a full sort, just to take tasty chunks of code that basically flow from most interesting to least interesting. I will also add a lower level, below which a direct cut is taken.

Actually the current working implementation is very costly: every postpone is visited, and if the criterion is not met (the state is still recalculated!!!) it will postpone again (stupid, yeah I know, but I wanted a simple working starting point).
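
To make the scheduling idea concrete, here is a small Scheme sketch of it; postpones are just (priority . thunk) pairs here, while the real engine stores continuations, and all the names are made up for illustration:


(use-modules (srfi srfi-1))   ; for partition

;; postpones: a list of (priority . thunk) pairs
(define (run-postpone-frame level fraction postpones)
  (let loop ((threshold (* level fraction))
             (pending   postpones))
    (cond
     ((null? pending) #t)
     ((< threshold 1)
      ;; floor reached: just run whatever is left, in order
      (for-each (lambda (p) ((cdr p))) pending))
     (else
      ;; run the chunk above the current threshold, keep the rest,
      ;; then lower the threshold by another factor of Fraction
      (call-with-values
          (lambda () (partition (lambda (p) (> (car p) threshold)) pending))
        (lambda (now later)
          (for-each (lambda (p) ((cdr p))) now)
          (loop (* threshold fraction) later)))))))

;; e.g. (run-postpone-frame 30 0.8
;;        (list (cons 22 (lambda () (display "(22) ")))
;;              (cons  7 (lambda () (display "(7) ")))))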

Have fun

And so we unify the universes and continue just as before

How cool it is to have fun and do something useful. Well, at least on paper. My main focus for some time has been to learn scheme and help out in that community. It's a really nice experience - thanks for that.

Anyhow, I just entered a new project on Advogato - guile unify - which is my latest contribution. I've been hacking on it for some months and feel that it has some interesting twists. So what is it? Well, it's exploring the combination of scheme and prolog.

One of the unique features of scheme that has etched its pattern in my brain is continuations - mainly because it was so hard to grok that call/cc stuff when I first encountered it. And this is a really interesting thing to breed in this marriage between a schemer and a prologer.

prolog is a combination of a variable stack, tree search, and unifying matches. To introduce continuations one probably ends up needing redo trees. A path from the root of the tree out to a leaf passes variable "set commands" in such a way that the state of the variable stack is restored, from a blank base setup, at the leaf, where a closure is found ready to take a continuation action. By making a tree one can save some memory and perhaps also some time. Getting this to work is what prompted me to give the project some light.
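
A tiny sketch of the redo-tree idea; the node layout and names are invented for illustration, not the actual guile-unify internals:


;; Each node carries the "set commands" (variable . value pairs) added
;; on its edge.  The state at a leaf is rebuilt from a blank base by
;; replaying the sets along the path from the root, so common prefixes
;; are shared between leaves.
(define (make-node sets children) (cons sets children))
(define (node-sets node)          (car node))
(define (node-children node)      (cdr node))

(define (state-at-leaf path)
  ;; path: the list of nodes from the root down to one leaf;
  ;; returns the accumulated bindings as an association list
  (let loop ((path path) (bindings '()))
    (if (null? path)
        bindings
        (loop (cdr path)
              (append bindings (node-sets (car path)))))))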

Now, actually I'm lying. Pure continuations will come, but I have targeted the method at a limited kind of continuation, e.g. a postpone command.

Consider writing a chess solver. You build up state information in various structures as you go and would like to use that information to draw strategic conclusions for a certain move. You may want to cut unstrategic moves, right! Well, actually this may prompt the developer to do a lot of tuning to get it correct. So an interesting idea is to store the state of the game, save that continuation on a list, and continue with that list if resources are available. This approach is more adaptive and probably leads to less tuning for the developer. Note that storing this state for millions of continuations can put a severe strain on memory and also raise the complexity of the continuation. That is why compressing all the information into a redo tree, very much like storing a word lexicon efficiently, may be interesting - actually, I don't know about this, I just made it work!! And it was great fun and a challenge to do it.

Consider trying to logically prove a statement. It could be cool to search for a small and elegant solution before trying those infinite paths. So just monitor stack usage and postpone at the points where the stack reaches above a level, and continue on to the next level if needed. Just write the usual prolog stuff, insert a few commands, and you will have it. Nah, not implemented yet, but hey.

An example in prolog comes here,


prun() :- L = [[[5],2],1,[3,4]],
          postpone_frame,
          X is 0,
          pk(X),
          f(L,X).


f([H|L],N) :- !, postpone, N is N + 1, g(H,L,N).
f(H,N)     :- pk([H,N]), fail.

g(H,L,N)     :- f(H,N).
g(_,[H|L],N) :- g(H,L,N).
g(_,[],_)    :- fail.

And entering prun leads to the sequence


(1 1),(2 2),(3 2),(4 2),(5 3)

Yea, trivial, but it shows how simple it is to code. Now it's actually trivial to make a stack-size criterion, which will be a nice hack to try just to see if it improves things.

Oh, the speed is not too bad, and it takes advantage of tail calls, prompt logic and CPS. It turns out CPS is not bad for speed - but it does affect it.

Now, getting to true continuations is just a matter of hard work. I will need to write a special GC for this structure, because the data structure is tuned to be good at the scenario above; true continuations will be a cool creature, but a second-class citizen in my view, one that needs some caring.

I will try to find time to talk a little more about the internals of all this. Maybe it's all a nice sand castle, maybe it is severely cool. Time will tell. I will continue to hack on it. If you have any questions or would like to learn or help, join the guile-devel community and ask about guile unify.

Cheers

Stefan

2 Jan 2010 (updated 16 Jan 2011 at 07:04 UTC) »
Types

I ended the last sequence of blogs about exploring looping, then basically stopped and started to learn about type theory and prolog using Qi.

Right now I'm working with this engine to write a type system that works pretty much like the lisp type system, e.g. if we can deduce a type, then use it! This type engine will be used to compile Qi to lisp/clojure/scheme/go? or whatever lisp-like environment you've got.

Cheers!

1 Jan 2010 (updated 16 Jan 2011 at 07:06 UTC) »

A new year.

Well, every Christmas I spend some time thinking about space, and some ideas form; it's a fun and entertaining game. Maybe not correct thoughts, but entertaining. Now the new year starts and it is back to business with computer quizzes instead of trying to find the dream of Einstein. Anyhow, I made a small document describing my (well, you never know, people tend to independently walk the same paths) view of how the world is constructed.

I promise, no more of this, until next Christmas.

----------------------------------------

We have the right to think, correct or not, and in a world of only correctness you would simply drown in mathematics and never learn to swim in it.

/Stefan

29 Dec 2009 (updated 16 Jan 2011 at 07:17 UTC) »
Christmas days, thinking differently

I spent this Christmas reading Simon Singh's Big Bang. And as Simon pretty much says, I say as well: what a wonderful world.

I'm affected by my education in such a way that I constantly ask myself whether the things presented in a popularizing book are really true. Did he mean that? Sure, people cast things in a better light when asked about them afterwards, and so on. There is a constant flow in my mental left margin. Anyhow, I'm now really impressed by the puzzle that scientists assembled to achieve such solid ground for the Big Bang theory.

There are always weak spots in an argument, but the core argument is really solid in my view.

I like the formulation that uses the potentials under the Lorenz gauge, if I remember everything correctly. Then all components follow a wave equation and there is one linear first-order constraint that looks close to a simple continuity equation. Now I wanted to understand what this actually meant, and searched for some example that would reveal it to me. And there is one telling partial solution: you can have a solution where you make the constraint into an actual continuity equation. You will have a sort of "magnitude of disturbance" field in the scalar potential, and the vector potential will be a sort of momentum potential, e.g. the scalar potential times a velocity field of constant speed c. It's a really trivial and very particular solution. But you can modify it: you can assume that if along a direction you have the same disturbance, then you can choose any velocity you like.
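
To spell out what I mean, these are just the textbook equations: in the Lorenz gauge the potentials satisfy

\nabla \cdot \mathbf{A} + \frac{1}{c^2}\,\frac{\partial \phi}{\partial t} = 0,
\qquad
\frac{1}{c^2}\frac{\partial^2 \phi}{\partial t^2} - \nabla^2 \phi = \frac{\rho}{\varepsilon_0},
\qquad
\frac{1}{c^2}\frac{\partial^2 \mathbf{A}}{\partial t^2} - \nabla^2 \mathbf{A} = \mu_0 \mathbf{J},

and the particular solution sketched above, \mathbf{A} = \phi\,\mathbf{v}/c^2 with a constant velocity field \mathbf{v}, turns the gauge condition into exactly a continuity equation,

\frac{\partial \phi}{\partial t} + \nabla \cdot (\phi\,\mathbf{v}) = 0.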

Now, in my view this captures some essential features of electromagnetism. A constant stream of light does not depend on the speed of the stream, and it is the information that is constrained to the speed of light, not necessarily the actual physical or disturbance transport.

Note that if we have just one stream, the transversal direction has to be transported at the speed of light, which indicates plane waves.

Even if this is a simple particular solution, one would probably be able to deduce Maxwell's equations after closing the space of solutions under Lorentz transforms.

OK, this is just mathematical play, but from my position of knowledge it poses a very interesting question. It's just some speculation from a guy who is not an expert. But I still hope that I've teased your imagination, so please have fun, play with the mathematics, enjoy the stars, and have a very happy hacking new year.

22 Nov 2009 (updated 16 Jan 2011 at 07:20 UTC) »

Exploring, Fighting and Enjoying

I'm coding on a library called BoopCore right now. The reason is that I was thinking about how to write the code for the new version of Qi, called Shen.

Oh, I did a small test with the Einstein riddle. I got basically the same speed as gprolog, so the "prolog" compiling part is not too bad.

My idea, though, is that it should be used by designers of PL tools and by people with specific needs. As an example, backtracking and unification can be really customized and fast.

The program itself is pure magic, e.g. it has some really poor design. But this is OK for a first version. That version has to have all the features, which are not decided from the beginning but grow as bugs are found and new features have to be implemented. This means I am Exploring ideas, Fighting the beast to get them implemented, and, throughout the whole process, Enjoying it as if it were the best wine in the world.

Cheers!

Type, Type and Type, and this is what I get

I have been quiet for some time here; work is calling, family is calling, and a new version of Qi ready to be explored was released. In the meantime I have been studying the sequent calculus and the Qi source code, to learn how to mod it to my taste.

So perhaps my coolest hack is a type-driven macro framework. So, a little code:


(def-tp-macro ex1
   X                 : A -> (? Usage number)      where (ask? Usage)
   [X : num Y : num] : A -> (simple-num-num X Y)  where (element? simple Usage)
   [X : num Y : str] : A -> (simple-num-str X Y)  where (element? simple Usage)
   [X : num Y : num] : A -> (complex-num-num X Y) where (element? complex Usage)
   [X : num Y : str] : A -> (complex-num-str X Y) where (element? complex Usage))

The first line says that, for any kind of arguments and any type of evaluation context, first ask for its usage; the return values will be stored in Usage. This sends the type system out to track usages according to some mechanism; this is done the first time. The next time, if not inhibited, (ask? Usage) will be negative and the system goes on to expand according to the function signature and the properties of the value of Usage. In (? Usage T), T is the type returned from the function in the non-ask context, e.g. (+ (ex1 X1 Y1) (ex1 X2 Y2)) should type-check!!

It works stupidly by type-checking repeatedly; whenever something is asked for, a retry is done. This process can be made much more efficient by remembering the type deduction and using this memory at nonvolatile sites in the deduction tree, a property that somehow has to be passed up the deduction tree. Anyway, (ask? Usage) will be true if someone later in the deduction has inhibited it. For example, if an ex1 value is used in an argument of ex2, which also asks for information, then ex2 inhibits ex1 when it asks for information. (To speed up this deduction process, ex1 should be marked as nonvolatile.)
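
To show the control flow, here is a plain-Scheme caricature of the ex1 dispatch; the Usage set, the ask marker and the helper names are all made up for illustration, and the real mechanism of course lives inside Qi's type deduction:


(define (expand-ex1 arg-types usage)
  ;; usage = #f on the first pass: expand to a request for usage info.
  ;; On later passes, expand according to argument types and Usage.
  (cond
   ((not usage)                        '(ask-usage ex1))
   ((and (equal? arg-types '(num num))
         (memq 'simple usage))         '(simple-num-num X Y))
   ((and (equal? arg-types '(num str))
         (memq 'simple usage))         '(simple-num-str X Y))
   ((and (equal? arg-types '(num num))
         (memq 'complex usage))        '(complex-num-num X Y))
   ((and (equal? arg-types '(num str))
         (memq 'complex usage))        '(complex-num-str X Y))
   (else (error "ex1: no matching expansion" arg-types usage))))

;; e.g. (expand-ex1 '(num str) '(complex)) => (complex-num-str X Y)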

Actually macros can work on code fragments like this


(def-tr-macro ex3
  [adv [X : number Y : str] Z : symbol] : A -> [Z X]
  ...)

This is a quite general construct, and of course the process of macro expansion, usage-information exchange, and so on can be repeated recursively.

So the macro can use information about how the result of the form is used later on in the code, and under what signature the type system thinks this form will be evaluated. So there is a message system that works in both directions in the code graph (what signals do I get from the arguments, and in what context, or through which features, is what I provide used).

There are weak points to this construct, but I think I now have one big chunk that can be put to use when I go deep into optimizations of loop macros. At least I now have a powerful tool to do optimizations by using the computer as much as possible and my coding fingers as little as possible.

1 Jan 2009 (updated 16 Jan 2011 at 07:36 UTC) »

The number of dimensions is, well, infinite

I have a standing prediction of a science breakthrough, one that came after having studied the standard model.

In the standard model of physics they combine all natural forces but gravity in one formulation. The basic building blocks there seem to be fields with properties that come from Yang-Mills and Pauli theory. To understand the standard model it is good to think about what Yang-Mills and Pauli try to describe. I played around with it a little and got an understanding. It feels like these fields describe disturbances in a kind of ray fluid.

Let's move some atoms

11 Dec 2008 (updated 16 Jan 2011 at 07:39 UTC) »

Let's type unit tests

Maybe this is plain stupidity, but hey it's different from the usual path.

If we have full control of the type system, what can we do?

Well, types are a flow of information, so basically here you have the possibility to do meta-tracking.

But let's consider another idea. Assume that we make an abstract class, with no implementation:


(class stack A         Abstract
       (clear )->NIL
       (push A)->NIL
       (pop   )-> A)

This follows, sort of, the standard specification style used in Java, C++, C#, etc.

Now consider another "concrete" class


(class mycl  A         Impl
       (clear )->NIL
       (push A)->NIL
       (pop   )-> A)

Then typically you subclass mycl from stack and off you go. The idea now is that, in order to be able to subclass mycl from the abstract class, you would have to pass a unit test, e.g.


(class mycl) => (class stack) if (stack-unit-test mycl)

Customizing the type theory by adding these tests, with caching, would be ... different.
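
To make that concrete, here is a small Scheme sketch of the idea; the registry, the test, and the little message-passing object are all invented for illustration, not an existing API:


(define *implements* '())   ; list of (subclass . abstract-class) pairs

(define (make-mycl)
  ;; a toy message-passing stack, standing in for the mycl class
  (let ((items '()))
    (lambda (msg . args)
      (case msg
        ((clear) (set! items '()))
        ((push)  (set! items (cons (car args) items)))
        ((pop)   (let ((x (car items)))
                   (set! items (cdr items))
                   x))))))

(define (stack-unit-test make-instance)
  ;; a tiny behavioural test of the abstract stack contract
  (let ((s (make-instance)))
    (s 'clear)
    (s 'push 1)
    (s 'push 2)
    (and (equal? (s 'pop) 2)
         (equal? (s 'pop) 1))))

(define (declare-subclass! name abstract-name test make-instance)
  ;; record the subclass arrow only if the unit test passes
  (if (test make-instance)
      (set! *implements* (cons (cons name abstract-name) *implements*))
      (error "unit test failed, subclass relation refused" name abstract-name)))

;; (declare-subclass! 'mycl 'stack stack-unit-test make-mycl)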

As a side note, this is how I view object orientation abstractly. You have a set of code snippets in the form of different function implementations, and object orientation just puts names on subsets and makes arrows between the snippets; the process of making subclasses is then just rules that make arrows, plus rules for how to select which code to evaluate, or the context in which to evaluate a function, when using one or several name references.

If we all looked nice and the same, the ugliest person would become attractive

/Snorgersen
