# A Relational Model of Programming

## Posted 19 Nov 2003 at 21:12 UTC by cactus

I can't help but write a brief introduction to the basics of a relational model of programming, in the hope of sparking some interesting discussion. It is a model developed by Ákos Fóthi, and it has fascinated me since the first couple of his lectures.

It all starts out very simply, with four definitions describing the simplest case.

A problem is a relation on a space: F is a subset of A x A, where the problem space A = A_1 x A_2 x ... x A_n and each set A_i is at most countably infinite. So a problem can be thought of as a nondeterministic relation that maps some points of the problem space to one or more other points of the same space.
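To make the definition concrete, here is a minimal Python sketch (my own illustration, not from the textbook): a finite slice of a problem space, with a nondeterministic problem on it written out literally as a set of ordered pairs. The definitions allow countably infinite component sets; a finite subset is used here only so the relation can be enumerated.

```python
# Problem space A = A_1 x A_2: points are pairs (n, r).
# A_1 = {0..4} (inputs), A_2 = {-2..2} (candidate results).
A = [(n, r) for n in range(5) for r in range(-2, 3)]

# The problem F: starting from any point (n, r0), end at some point
# (n, r) with r*r == n. It is nondeterministic: for n = 4 both
# r = 2 and r = -2 are acceptable endpoints.
F = {(a, b) for a in A for b in A
     if a[0] == b[0] and b[1] ** 2 == b[0]}
```

Note that F need not map from every point of A: no pair in F starts at (3, r), since 3 has no integer square root in this space. Problems, unlike programs, are not required to be total.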

A program is also a relation: S is a subset of A x A**, where A* is the set of all finite sequences of elements of A, Ainf is the set of all infinite such sequences, and A** is the union of A* and Ainf. However, not all such relations are to be called programs: we further restrict the definition by requiring every program to fulfill three properties:

• A program must map every member of A to at least one sequence.
• No mapped sequence contains two or more equal consecutive elements, unless it is an infinite repetition.
• The first element of every sequence in S(a) is a itself, the point it is mapped from.
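The three properties can be checked mechanically over a finite toy space. The sketch below is again my own Python illustration (all names invented here); infinite sequences are not modelled, so the second property reduces to forbidding equal consecutive elements.

```python
def is_program(S, A):
    """Check the three program properties over a finite space A.

    S maps points of A to sets of runs (finite tuples of points).
    Nonterminating runs are left out of this sketch.
    """
    for a in A:
        runs = S.get(a, set())
        if not runs:                        # 1. total: at least one run per point
            return False
        for run in runs:
            if run[0] != a:                 # 3. every run starts at its own point
                return False
            if any(x == y for x, y in zip(run, run[1:])):
                return False                # 2. no equal consecutive elements
    return True

# A toy program on A = {0, 1, 2}: from any point, count down to 0.
A = [0, 1, 2]
S = {a: {tuple(range(a, -1, -1))} for a in A}
```

Here `is_program(S, A)` holds, while a relation that assigns no run to some point, a run that starts elsewhere, or a run that repeats a step would all fail the check.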

Already you can see how this notion of problems and programs completely eliminates the need for a hypothetical computer.

To make it easier to relate programs to problems, we shouldn't care about the intermediate steps of a program. We therefore define the program function p(S) of a program S, again a subset of A x A: its domain is the set of those points of A to which S maps only finite sequences, and it maps each such point to the endpoints of all those sequences.
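Continuing the same toy sketch (my own Python, not the textbook's notation), p(S) is computed by keeping only the points whose runs all terminate and collecting the endpoints. A nonterminating run is represented here by a sentinel value, which is an assumption of this sketch rather than part of the model.

```python
INFINITE = None  # sentinel standing in for a nonterminating run

def program_function(S):
    """p(S): restrict S to points all of whose runs are finite, and map
    each such point to the set of endpoints of its runs."""
    p = {}
    for a, runs in S.items():
        if INFINITE in runs:   # some run never terminates: a is outside D(p(S))
            continue
        p[a] = {run[-1] for run in runs}
    return p

# The countdown program from before, plus a point with a nonterminating run:
S = {0: {(0,)}, 1: {(1, 0)}, 2: {(2, 1, 0), INFINITE}}
```

For this S, the point 2 drops out of the domain of p(S) entirely, even though it also has a terminating run: one possible nontermination is enough to exclude it.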

We can now define what it means for a program S to solve problem F: S solves F iff:

• For every point that F maps from, p(S) also maps from that point
• For every such point a, p(S)(a) is a subset of F(a)

This definition fits well with the intuitive picture of what it means to solve a problem: from every starting point, the program should always arrive at a valid solution, but not necessarily be able to reach all of them.
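The solution check translates almost literally into the toy Python sketch used above (all names are mine, and this block is self-contained); the subset operator `<=` on sets mirrors p(S)(a) being a subset of F(a).

```python
def program_function(S):
    # p(S) for finite runs only: map each point to its runs' endpoints
    return {a: {run[-1] for run in runs} for a, runs in S.items()}

def solves(S, F):
    """S solves F iff p(S) is defined wherever F is, and every result
    p(S) can produce at a point of D(F) is acceptable to F there."""
    p = program_function(S)
    return all(a in p and p[a] <= F[a] for a in F)

# Countdown program on A = {0, 1, 2}: from any point, count down to 0.
S = {a: {tuple(range(a, -1, -1))} for a in range(3)}

# Problem "end at 0" is solved; problem "end at your starting value" is not.
F_zero = {a: {0} for a in range(3)}
F_id   = {a: {a} for a in range(3)}
```

`solves(S, F_zero)` holds because every endpoint is 0, an acceptable answer everywhere; `solves(S, F_id)` fails because from 1 the program ends at 0, which F_id does not accept.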

Below I give these four definitions in LaTeX format (using the definition of the domain D of a relation R as the set of points that R maps from), with links to a PNG rendition of them:
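The PNG renditions linked in the original post have not survived; the following LaTeX is a reconstruction of the four definitions from the textual descriptions above, so any notational details beyond those descriptions are my own choices.

```latex
% 1. Problem
F \subseteq A \times A, \qquad A = A_1 \times A_2 \times \dots \times A_n

% 2. Program
S \subseteq A \times A^{**}, \qquad A^{**} = A^{*} \cup A^{\infty}, \qquad
\mathcal{D}(S) = A, \qquad
\forall a \in A,\ \forall \alpha \in S(a) : \alpha_1 = a

% 3. Program function
\mathcal{D}(p(S)) = \{\, a \in A \mid S(a) \subseteq A^{*} \,\}, \qquad
p(S)(a) = \{\, \alpha_{|\alpha|} \mid \alpha \in S(a) \,\}

% 4. Solution: S solves F iff
\mathcal{D}(F) \subseteq \mathcal{D}(p(S))
\quad \text{and} \quad
\forall a \in \mathcal{D}(F) : p(S)(a) \subseteq F(a)
```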

So we now have the formal means to define problems, and to test and prove whether a program (also given formally) solves them. Again, all without the need to introduce a hypothetical machine or to trace the steps of a program.

These four simple definitions open a door to a whole new world. What happens if we add or remove dimensions to or from A, creating sub- or super-spaces? How do you refine the definition of a solution when a problem and a program do not operate on the same space, but there is some transformation between the two spaces? What happens to p(S) if you build S from smaller programs, using the three fundamental construction methods: sequencing, branching and loops? What kinds of steps may the sequences of S contain for S to be implementable on a Turing machine (and, consequently, on a real computer)?

I wish I could just give a hyperlink to the textbook containing this and a lot more, but unfortunately, it's only available in Hungarian. In fact, this relational model of programming that I've briefly described above is mostly unknown and unused outside ELTE. Of course, if you're feeling particularly adventurous, you can skip the textual descriptions and just look at the mathematical formulas here.

#### Second order arithmetic, posted 24 Nov 2003 at 21:16 UTC by chalst»(Master)

It's worth noting that it is very easy and natural to formalise the above definitions in Z_2, the standard formulation of second-order arithmetic. The notion of solvability can then be related to a rich theory relating subsystems of second-order arithmetic to recursion theory; see, for example, Stephen Simpson's lecture notes on Mass Problems. I'm not sure that cactus can really mean:

> Already you can see how this notion of problems and programs completely eliminates the need for a hypothetical computer.

Surely "programs" that are not at least recursive are not programs at all?

#### Um...., posted 1 Dec 2003 at 07:18 UTC by diablod3»(Apprentice)

Does anyone actually understand what he is saying? I'm lost; excuse me while I go watch my dreams of working in software development go up in smoke. ;)

#### Nope, posted 4 Dec 2003 at 16:52 UTC by slamb»(Journeyer)

Afraid I don't understand it, either.

Maybe a more concrete example would help. What sort of program would it be worthwhile to formalize in this way? How would you set about doing so? What's the goal? A proof of correctness? Some sort of optimization?

#### Discrete Math, posted 5 Dec 2003 at 00:40 UTC by nymia»(Master)

This is a tough nut alright! Using Discrete Math to describe a programming model is really something, though.

#### Does recursion exist?, posted 5 Dec 2003 at 14:21 UTC by cactus»(Master)

chalst: Does recursion exist? If you look at the actual sequence of steps a program does while running, it's going to be just that -- a sequence.

If you're interested, I can transcribe the definition of the 'loop' programming construct for you to see how that kind of recursion is handled (it will be quite a job, however, since I'll have to translate all the necessary groundwork as well).

#### Enough programs? Or too many?, posted 9 Dec 2003 at 03:59 UTC by chalst»(Master)

cactus: I have no doubt that the above definitions can capture all recursive functions. It's just that they look to me like they also capture uncomputable functions, such as the functions in the first Turing jump degree (i.e. what you can compute if you have an oracle that tells you whether any TM/value pair will halt). So what you call programs are in fact more like what most people in the formal methods field call specifications.

Just to elaborate on what I said before: it's impossible to tell whether what you have written is a practical approach to solving programming tasks. I agree that the definitions are elegant, but there are other elegant formalisms (e.g. Hoare logic, Squiggol, Bird & De Moor's relational algebra), and you can't tell from the elegance of the fundamental definitions how a formalism will pan out in practice. Maybe a worked example showing how some code synthesis is done in the formalism would be a good idea?

#### Related work, posted 20 Dec 2003 at 16:54 UTC by BitchKapoor»(Apprentice)

"Teaching Design of Objects and Classes through a Relational Model of Programming" http://citeseer.nj.nec.com/473768.html

"On Composing Problems and Parallel Programs" http://citeseer.nj.nec.com/81112.html (the notation is very heavy and unstructured in the above paper; maybe the authors were trying to meet page count limitations?)

"Proving the Temporal Properties of the Unique World" http://citeseer.nj.nec.com/ath99proving.html

#### Examples, posted 5 Mar 2004 at 10:04 UTC by cactus»(Master)

If you've read the papers posted by BitchKapoor, hopefully you'll be able to parse these problems (with solutions included).