Duct Tape In Space

ObRandom: A rather detailed view of how duct tape was used during the Apollo moon landings.

Structure and Interpretation of Computer Programs

Over the last year, I’ve become a lot more picky about how I spend my ‘free’ time – I think all new fathers encounter this! I used to read reddit and follow all the latest k3wl happenings in the programming world. Occasionally I’d read something that was really insightful, but mostly it was just here-today-gone-tomorrow fluff .. “my way is more awesomer than yrs” kinda stuff. So I decided to try to ignore the fluff by asking “will this be interesting a year from now?” before I start reading. And pretty soon afterwards, I decided to commit time to watching the videos of the SICP lecture series.

SICP (see amazon.com) was the basis of the introductory CS course at MIT for many years. That makes it sound like it’ll be a noddy course for noobs. It really, absolutely is not. It covers a massive range of material – stuff which I’ve been learning gradually since leaving uni – in a really pretty accessible way. I’m interested in programming languages in general, and I now wish I’d watched these years ago. Perhaps it wouldn’t have made so much sense back then, but it would’ve equipped me with a useful (albeit non-mainstream) way of looking at the world.

The course explores “programming languages” as a concept, not “programming as an activity”. It’s a lesson in how to make tools, not just how to use those tools. So, you won’t learn how to build a big object oriented application. But you will learn different ways to implement support for “objects” in a language – eg. different ways to do method dispatch. If you know C++ or Java, this kind of thing is baked into the language. SICP will show you ways to mix the cookie dough, rather than just describing the cookies. And it’ll also tell you how to make a mixing bowl out of cookie dough too!
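To make that concrete: the lectures build message-passing “objects” out of closures in Scheme. As a rough flavour of the same dispatch idea in Haskell (my own sketch, not an example from the course), an “object” is just a record of functions closing over hidden state:

    import Data.IORef

    -- A "counter object": method dispatch is just record field selection,
    -- and the instance state is a closed-over IORef.
    data Counter = Counter { increment :: IO (), current :: IO Int }

    newCounter :: IO Counter
    newCounter = do
      ref <- newIORef 0  -- private state, invisible outside the closure
      pure Counter { increment = modifyIORef' ref (+ 1)
                   , current   = readIORef ref }

    main :: IO ()
    main = do
      c <- newCounter
      increment c
      increment c
      current c >>= print  -- prints 2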

The course uses the Scheme language. The choice of language isn’t shockingly important – it’s just a vehicle for making the concepts discussed in the course concrete. The real concepts – abstraction, combination and naming – are the fundamental ideas, and they transcend languages. Scheme is just a convenient way of talking about them. The Scheme language (it’s pretty simple) is explained as you go along anyway.

The videos were made in 1986, so they provide an interesting snapshot in time. The vast majority of the content is still 100% relevant today. The lectures on logic programming and streams (lazy evaluation) are interesting because in 1986 it wasn’t so clear how these ideas would turn out – eg. Miranda is mentioned, but Haskell was still to come. For example, the lecturer motivates the introduction of mutable state by showing how threading state between pure functions breaks encapsulation – whereas today you could use monads to make this more manageable.
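As a quick flavour of that monadic alternative (my sketch, assuming the mtl package – emphatically not something from the 1986 lectures): the State monad threads a counter through pure code so that callers never pass it around by hand:

    import Control.Monad.State  -- from the 'mtl' package

    -- hand out the current counter value and bump it; the monad threads
    -- the state between calls so callers never see it
    fresh :: State Int Int
    fresh = do
      n <- get
      put (n + 1)
      pure n

    labelTwo :: State Int (Int, Int)
    labelTwo = do
      a <- fresh
      b <- fresh
      pure (a, b)

    main :: IO ()
    main = print (evalState labelTwo 0)  -- prints (0,1)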

With the benefit of 20 years of hindsight, there are a few things that come across as clunky. There’s a lot of interesting discussion about the relationship between car/cdr/cons and how you could realize those concepts. However, when we get on to more implementation-heavy stuff, car/cadr/etc. are used purely as the way to pluck data out of lists. Coming from an ML/Haskell background, I was crying out for pattern matching. Ironically, pattern matching is covered in lecture 4, but it’s not added into the standard vocabulary. Instead the lecturer keeps saying “to find the foo, we need to take the car of the cadr of the list”. It made me miss pattern matching a lot!

The video series is also a reflection of the MIT research interests of the time. If you’ve only ever used one language before, it’s difficult to understand how that language affects the way you express and understand things. In the early days, it wasn’t clear what the ‘best way to program computers’ was. It probably still isn’t clear today. But in the ’70s and ’80s MIT was busy churning out language after language to try to better solve the “difficult AI problems” of the day. People were trying lots of different ideas – procedural style, rule-based systems, logic systems, objects, lazy evaluation, etc. – and consequently there was a rich ecosystem of programming languages which acted as vehicles to try these ideas out. This is the culture in which Scheme was developed, and the choice of topics covered in SICP reflects this period.

So what are the good bits of the lecture series? Higher-order functions are introduced early as a basic building block and thereafter taken for granted. I think that somewhat sets the tone as far as this ‘introductory’ course goes! The benefits of decoupling interfaces/contracts from concrete implementations are covered repeatedly via examples. And the lectures are peppered with “shocking” examples – eg. implementing car/cdr/cons using nothing but lambda – which serve to illustrate the current topic but also tie together the deeper overall themes of the course.
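The lectures do the cons-from-lambda trick in Scheme; here’s my rough Haskell rendering of the same idea (it needs the RankNTypes extension): a pair is nothing but a function waiting to be handed a selector:

    {-# LANGUAGE RankNTypes #-}

    -- a pair is a function that applies a selector to its two halves
    type Pair a b = forall r. (a -> b -> r) -> r

    cons :: a -> b -> Pair a b
    cons x y = \select -> select x y

    car :: Pair a b -> a
    car p = p (\x _ -> x)

    cdr :: Pair a b -> b
    cdr p = p (\_ y -> y)

    main :: IO ()
    main = do
      print (car (cons (1 :: Int) "two"))     -- prints 1
      putStrLn (cdr (cons (1 :: Int) "two"))  -- prints "two"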

I enjoyed the wide range of topics covered, and the way that the main themes (abstraction and combination) continually recur throughout the lectures. I feel that this was a well-constructed course, carefully designed to lead students on a journey by showing them many facets of programming, whilst continually explaining how they interrelate. Additionally, it all feels very genuine – it’s a reflection of what Sussman and Abelson were working on at the time. They often stress that the examples they use aren’t just toy examples. The metacircular evaluator isn’t there just for teaching purposes – it’s how these guys experimented with possible language features (say, call-by-name vs call-by-value).

The questions at the end of the lectures are often the best bit. They’re usually of the “uhh, I didn’t understand this bit” variety, but the answers often shed new light on an area, so I’m glad the questions were asked!

Any downsides? The final lectures on interpretation and compilation get somewhat bogged down in prolonged hand-cranked examples. In an introductory course, it’s good to illustrate how abstract language concepts can be realized on real hardware. But even allowing for that, I think they overstretched somewhat.

I was sad when I finished watching the last video. These videos may be twenty years old, but I enjoyed watching them and learned a lot. Mr Sussman and Mr Abelson: thank you for sharing your knowledge and understanding so effectively! 🙂

Decentralized monitoring

Occasionally, one of the three computers in my flat will crash or hang in some way. When this happens, it’d be nice to be notified. For example, if the MythTV host crashes it’ll stop recording any TV programs until I notice and kick it. So I could just install one of the many network monitoring packages out there. But as far as I can see, they all want me to choose one ‘master’ host and have the other machines report their statistics into the master. That sounds awfully oldskool. Which of my three machines should I pick as ‘master’? They’re all equally likely to go down, and when the master goes down, I won’t get any notifications.

What I really want is a “neighbourhood watch” scheme, where each host watches the others. Or, in posher language, I want a decentralized monitoring solution. Here’s how it’d work. Each host would announce ‘I’m still running’ or ‘My load is 50%’ every few seconds, and the other hosts would listen and record the message to disk. If any one machine bursts into flames, the other two should still have a complete record (including the fact that the failed host stopped talking at 15:30).

Ideally, hosts should be able to add themselves to the party without much hassle. Announcing the messages via IP broadcast would do the trick, but it’d only work on one LAN segment. Also, the hosts might end up fairly busy if they have to handle status updates from all the other hosts all the time.

Sounds like an ideal application of gossip protocols. When a node first joins the monitoring party, it needs to be told about at least one other participating node. The nodes can subsequently exchange occasional ‘gossip’ messages with a few of their neighbours, telling them about the state of the world. The gossip message can also contain information about other nearby participants. In this way, information gradually spreads out around the network. Network partitions are a minor pain, but so long as there’s at least one other host still around to notice any explosions, they’ll still be noted.
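Here’s a toy, pure Haskell simulation of the membership-spreading part (entirely my own sketch – the host names and exchange pattern are made up, and a real implementation would need actual networking, timers and failure detection):

    import qualified Data.Map.Strict as Map
    import qualified Data.Set as Set

    type Host = String
    type Knowledge = Map.Map Host (Set.Set Host)  -- who each host knows about

    -- one gossip exchange: 'from' tells 'to' every participant it knows of
    gossip :: Host -> Host -> Knowledge -> Knowledge
    gossip from to k =
      Map.adjust (Set.union (Map.findWithDefault Set.empty from k)) to k

    main :: IO ()
    main = do
      let start = Map.fromList
            [ ("mythtv",  Set.fromList ["mythtv", "desktop"])
            , ("desktop", Set.fromList ["desktop", "laptop"])
            , ("laptop",  Set.fromList ["laptop"]) ]
          -- two exchanges and the knowledge has spread in both directions
          after = gossip "desktop" "mythtv" (gossip "mythtv" "desktop" start)
      mapM_ print (Map.toList after)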

This seems to fit the bill nicely: decentralized, unlikely to lose data, and robust to hosts appearing and disappearing.

So that’s the monitoring and data recording side dealt with (ha, no lines of code written!). But what about the alerting? If one host goes down, I don’t really want N other hosts all to start spamming me with duplicate notifications. I guess neighbourhood watch schemes suffer from this problem – everyone in the street hears a window smash and the police get twenty phone calls! But can a decentralized mob temporarily appoint one of their number as a leader in order to send out a single notification? Yes, that’s exactly what distributed ‘election’ algorithms are designed to do. If all the hosts can talk to each other, they’ll manage to agree on a single leader. If there’s been a network partition, at worst we’ll get one leader in each partition and so get a few notifications (ie. the right hand says ‘I can’t see the left hand!’ whilst the left hand says ‘I can’t see the right hand!’).
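The simplest election rule I can think of (a sketch of the general idea, not any particular published algorithm): every surviving host applies the same deterministic rule – say, ‘lowest name wins’ – to its view of who’s alive, and only the winner sends the alert:

    import qualified Data.Set as Set

    type Host = String

    -- each host applies the same rule to its view of the living hosts; if
    -- the views agree, exactly one host concludes "I send the alert"
    shouldAlert :: Host -> Set.Set Host -> Bool
    shouldAlert me alive = Set.lookupMin alive == Just me

    main :: IO ()
    main = do
      let alive = Set.fromList ["desktop", "laptop"]  -- mythtv went quiet
      mapM_ (\h -> putStrLn (h ++ ": " ++ show (shouldAlert h alive)))
            (Set.toList alive)
      -- desktop: True, laptop: False -> only one notification goes out

If the network is partitioned, each partition elects its own winner – which is exactly the ‘few notifications at worst’ behaviour described above.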

There’s maybe a bit of tension here. You could write a gossip protocol where a node only keeps track of a few neighbours at any given time. But, for an election, you need to know about everyone in the network. And even in a gossip system where nodes do eventually learn about the whole network, there’s going to be a startup period where a new node doesn’t know about everyone. Hmm, but I guess the election process might be able to handle this if it can gather together all the partial knowledge about who’s in the election. Ah, more detailed reading required!

Surely this system already exists? Well, the closest I could find was GEMS, which doesn’t appear to handle the single-notification problem. All very interesting stuff though!

Why do they call it: Referentially transparent (II)

In the previous post, I explored the idea that some sentences change meaning if you switch between different names for the same object. Some other sentences don’t change meaning – these ones are referentially transparent, which in non-posh-speak means “you don’t notice if I switch to a different name”.

I’ve now asked some questions on haskell-cafe, and consulted various gurus at my work, so I’m a little less confused than I was yesterday. 🙂

How does this concept apply to programming languages?

In Haskell, you could say that the “names” are expressions like “2+2” and “sqrt(9)”, and the underlying ‘things’ being named are the values which those expressions evaluate to. So the integer “two” has many names – “1+1” and “sqrt(4)” and many others.

That gives us a nice connection back to the Quine definition of “referentially transparent”. Essentially, when we claim that Haskell is RT, it’s just a shorthand way of saying “you can replace an expression with any other expression that evaluates to the same result, and it won’t affect the overall meaning of the program”.
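A tiny concrete demonstration (my own example): “1 + 1” and “sqrt 4” are two names for the same value, so swapping one for the other inside a bigger expression can’t change what the program means:

    -- two co-referring "names" for the same Double
    twoA, twoB :: Double
    twoA = 1 + 1
    twoB = sqrt 4

    area :: Double -> Double
    area r = pi * r * r

    main :: IO ()
    main = print (area twoA == area twoB)  -- True: the choice of name
                                           -- made no difference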

However, I feel that I’ve just handwaved over a few interesting points.

It’s reasonable to think of “sqrt(4)” as being a name for “2” in Haskell because, well, functions always return the same value in Haskell. But in Java or C, what kind of name could “rand()” be? The function evaluates to various values, and it does this because it uses some kind of additional runtime context/state. So the basic idea that you can always have “names” and underlying “things” is a bit shaky – and we can’t really use Quine’s definition without them. What does it mean to say that C++ is “referentially opaque” if the very idea of names/references is shaky?
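For contrast, Haskell’s own answer to the rand() puzzle (a sketch assuming the random package): a random-number expression doesn’t name an Int at all – it names an action, and the action is the thing that stays the same:

    import System.Random (randomRIO)  -- from the 'random' package

    main :: IO ()
    main = do
      a <- randomRIO (1, 6 :: Int)  -- perform the die-roll action once
      b <- randomRIO (1, 6 :: Int)  -- ...and again
      print (a == b)  -- often False: 'randomRIO (1, 6)' names one action,
                      -- but performing it twice can yield different Ints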

The same problem applies to the English-language examples from yesterday. I claimed that “Scotland’s capital” was another name for Edinburgh-the-city. That’s certainly true today. But it might not be true in 100 years. And perhaps someone who lives in Glasgow might even disagree that it’s true today! So names which are co-referent today might not always be co-referent.

There’s a famous-ish example that’s similar too. In the sentence “The King of France is bald”, what kind of thing is being named by ‘The King of France’, given that France is a republic? That’s the kind of thing which keeps philosophers awake at night.

And if you’re into books, you’ll know that the author “Iain M Banks” writes sci-fi but “Iain Banks” writes mainstream fiction. There’s only one human being, but he writes under two “personas”. This makes me think that you have to be pretty careful about the THINGS that you think you are NAMING. In this example, “Iain M Banks” and “Iain Banks” are co-referent if you are discussing the human being, but they are not co-referent if you are discussing his “writing personas”.

There was one more point that I handwaved over. Why do I believe that Haskell really is referentially transparent? I might not trust Simon Peyton Jones when he claims that it is! But to prove some property of a language, you need a formal semantics for the language. And Haskell 98 doesn’t have one (unlike Standard ML). So there’s not much hope of doing a bottom-up inductive proof over the structure of the language – not just yet anyway. That’s a pity.

I’ll finish off with a couple of links: one about what RT means, and one about RT and Haskell.

Now I’m going to return to finish watching the SICP video which triggered this whole day-long distraction. 🙂

Why do they call it: Referentially transparent

I like understanding why things have the names that they do. Two stories immediately spring to mind: the first is from Michael Faraday, which I wrote about a while ago, whilst the second is from Richard Feynman (at 6 mins).

But back to the topic at hand! I know pretty well what the phrase “referentially transparent” means, at least in relation to computer programs. It’s used to describe languages like Haskell where it’s always safe to replace a function call like “foo()” with its return value (eg. 4) without changing the meaning of the program. This is not something you can do in general for languages like Java or C++, because functions such as rand() can potentially return different values each time you call them. Also, if foo() contains a printf statement, then the expressions “foo()” and “4” are different beasts, even if foo() does always return 4.
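In Haskell terms (an illustrative sketch of my own – ‘four’ and ‘noisyFour’ are made-up names): replacing a pure function by its value changes nothing, whereas an action that prints is not interchangeable with its result:

    -- a pure function: every occurrence can be replaced by the literal 4
    four :: Int
    four = 2 + 2

    -- an action: it returns 4, but it is *not* the same beast as 4
    noisyFour :: IO Int
    noisyFour = do
      putStrLn "side effect!"
      pure 4

    main :: IO ()
    main = do
      print (four + four)  -- identical in meaning to: print (4 + 4)
      a <- noisyFour       -- prints "side effect!"
      b <- noisyFour       -- prints again: two performances, not one value
      print (a + b)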

So that’s the concept in question. But why give it a name like “referentially transparent”? What’s “transparent” about it? And what “references” are we talking about? (and how many metajokes could one make here?)

The term “referentially transparent” seems to come from a certain Mr Quine philosophising about natural languages. Let’s do an example, but first we need to look at two things.

Firstly, take a sentence like “Andrew is bigger than a mouse” and erase the word “Andrew” to leave a hole – ie. “??? is bigger than a mouse”. Philosophers call this a CONTEXT. By filling in the blank in the CONTEXT you get back to a sentence.

Secondly, think about the city of Edinburgh. It’s a thing. There are many ways to refer to it: “Edinburgh”, “Scotland’s capital”, “the city at 56N 3W”.

Now we’re ready to understand where the phrase “referentially transparent” comes from.

The first word (“referentially”) is talking about the fact that “the Edinburgh thing” has many names. There are many ways to REFERENCE that thing.

You should read “transparent” to mean “doesn’t make a difference”. If Microsoft rewrite Excel so that it runs faster but the UI is identical, then you’d say the change was “transparent” to the users. It’s “transparent” in the sense of “you can’t see it”. It DOESN’T mean “you can see all the innards”, like a transparent PC case.

Mr Quine says that a CONTEXT is “referentially transparent” if we can fill the hole with different names for the same underlying thing without affecting the meaning of the sentence.

An example should make this clearer:

  • “[Edinburgh] is bigger than a mouse” is true.
  • “[Scotland’s capital] is bigger than a mouse” is still true.
  • “[The city at 56N 3W] is bigger than a mouse” is also true.

So, the truth/meaning of the sentence WASN’T AFFECTED by our choice of which of those three REFERENCES we used. If we want to appear posher, we say that the context “(hole) is bigger than a mouse” is REFERENTIALLY TRANSPARENT.

The opposite case is when your choice of name does matter. Quine describes those contexts as being REFERENTIALLY OPAQUE – ie. the changing names are important to the meaning of the sentence. An example would be the context “??? has 9 letters”:

  • “[Edinburgh] has 9 letters” is true.
  • “[Scotland’s capital] has 9 letters” is false.
  • “[The city at 56N 3W] has 9 letters” is false.

Therefore the context “??? has 9 letters” does change meaning depending on which name/reference we use. Why? It’s because the sentence is now discussing the name itself rather than the thing.

Actually, that’s not a great example because it’s not very clear what I meant by it – ie. in what ‘sense’ I used the words. If I was twisted, I could claim I intended to say that there was a sculpture exhibit in town which contained nine depictions of alphabet letters – in which case all three sentences would mean the same thing. English is too ambiguous! But the example is sufficient for the purpose of this blog posting.
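One way to make the transparent/opaque distinction vivid (entirely my own sketch, with a made-up population figure): model the two contexts as Haskell functions. The transparent one consumes the thing itself, so any name for Edinburgh gives the same answer; the opaque one consumes the name:

    -- the thing: a city, however you choose to name it
    data City = City { cityName :: String, population :: Int }

    edinburgh :: City
    edinburgh = City "Edinburgh" 500000  -- rough, made-up population

    -- transparent context: it only ever sees the thing
    biggerThanAMouse :: City -> Bool
    biggerThanAMouse c = population c > 1

    -- opaque context: it operates on the name itself
    hasNineLetters :: String -> Bool
    hasNineLetters name = length name == 9

    main :: IO ()
    main = do
      print (biggerThanAMouse edinburgh)           -- True under any name
      print (hasNineLetters "Edinburgh")           -- True
      print (hasNineLetters "Scotland's capital")  -- False: the name matters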

Now let’s get back to programming languages and see if we can make some connection between what “referentially transparent” means in Haskell and what we’ve just seen it means in natural language.

Errm, actually, maybe that’ll wait for another post. I need to get a clearer understanding of why I believe Haskell is referentially transparent in the above sense without putting the cart before the horse. Eek, what if Simon Peyton Jones has been lying to us all along!? I mean, I can’t think of any way to break the RT rules, but that’s very different from a ground-up proof.