On VPRI STEPS

NOTE HXA7241 2012-04-29T10:19Z

They seem to be producing a ‘second-draft’ of current practice, and making some good abstractions, not a revolutionary general technique. Perhaps because no revolutionary general technique seems possible for programming as such.

What Alan Kay does is likely interesting and good, and the VPRI STEPS project appears to be doing something. But let us instead be sceptical! That is a more demanding exercise.

Is it possible to place what VPRI is doing within a broader understanding of software engineering: to gauge how significant it could possibly be, and in what way?

———

People regularly say they translated their system from language A to language B and got a ten-fold code-size reduction. This seems to be largely what STEPS is doing (leaving aside the numbers): when you look at something again, you can see how to improve it and do a much better job. But the improvement comes not mainly from the general aspect – the language B – but from understanding the particular project better, because you already had a first-draft.

This can still be valuable work, though, since it is about fairly general things we all use. Second-drafts are valuable, and ones of important things even more so.

But we should probably not infer that they are coming up with a revolutionary general way to write new software.

Substantially because large-scale software cannot be designed: it must be evolved. We only know what to focus a second-draft around because we muddled together the first-draft and it evolved and survived. You cannot write a second-draft first.

———

The fundamental perspective from which to examine this comes from what Brooks said a while ago.

All software works by the same mechanism: abstraction. All languages are fundamentally the same (Turing-equivalent). If you want to ‘compress’ code, you write a library and call a subroutine: now 100 lines are replaced by one line. The only difference left between languages is the lexical form, the look of the code.
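
A minimal, contrived sketch of that mechanism, in Python – the function and names are purely illustrative, nothing from STEPS itself:

   # Before: the same few lines of scaling logic copied out at every use site.
   # After: the logic is captured once as a subroutine, and each use site is one call.

   def normalise(values):
       """Scale a list of numbers so they sum to 1 (the captured 'library' code)."""
       total = sum(values)
       if total == 0:
           return [0.0 for _ in values]
       return [v / total for v in values]

   # Each of these lines stands in for what was previously a copied-out block:
   weights_a = normalise([3, 1, 1])        # [0.6, 0.2, 0.2]
   weights_b = normalise([0.2, 0.3, 0.5])  # [0.2, 0.3, 0.5]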

So we cannot compress code better by finding a better general mechanism (software already is that mechanism). We compress code (or rather, work) by finding better particular abstractions for what we happen to need.

The value is in the scale of reuse, not the neatness of lexical syntax. If 1000 people reuse some software, that is the gain: 1000 people get the benefit of that hard thinking and work for practically nothing. If the lexical form of code is improved, it helps the programmer manipulate it more easily, but that is probably insignificant compared with thinking through the problem to be solved. And the factor of improvement is much smaller than that from making software that is useful in more cases.

This is partly another view of the ‘no-silver-bullet’ law: the cost of software is in essential complexity, not accidental, ‘clerical’ complexity.

———

Or, looking at and saying this in a different way:

VPRI's specialised language approach is like a multi-grid approach. Digital-Wittgenstein: a programming language places a grid of computationality over the world of actions.

So what is the difference between grids? Marginal, surely. They are all the same asymptotically, in ‘developmental complexity’: all O(n) – that is, amount of code with respect to software functionality requirements. They just have different constant factors.
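
Stated as a rough formula – only a restatement of the claim above, with invented symbols, not a derivation:

   code_size(L, R) ≈ c_L × |R|

where |R| stands for the amount of functionality required, and c_L is the constant factor belonging to language L.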

But a large difference in constant factor would be very valuable (or at least would appear to be, since our perspective is mostly on individual projects, not software globally).

But could that factor be large? How much difference can a language make? Surely not much. Ultimately they are all Turing-equivalent. All languages can wrap functionality up into functions/procedures/subroutines, and that is the only source of ‘leverage’ anyway – abstraction is the basic force-multiplier of information.

All the gain from software comes from abstraction: capturing a functionality in one piece of code and reusing it in multiple ways/places. Thinking – i.e. working – once, using – i.e. getting the value of – multiple times.

But all (normal) languages can do this; they all have some subroutine-abstraction feature. That means the only differences are in things that are not the key matter – so things like the lexical/presentational form cannot be important.

———

This leads to asking what is ‘general’. All abstraction, all software, is general to some degree – that is half of its essence. And an innovation need not be absolutely fundamental to be rather revolutionary. The Fourier transform and the FFT are not fundamental revolutions of mathematics, but they are certainly exceedingly useful.

And there is also a way out for languages: even though they seem overall, ‘economically’, of very limited value, they give us stepping-stones for thinking: and if a different language helps us find our way to otherwise unreachable ideas, that has unlimited positive potential.

———

References:

Loosely related: