One Day I'll Write A Coherent Continuation Explainer
Continuations are a really cool and powerful abstraction! Like, brain-rewiringly powerful. Like, seriously, crazy, fundamentally-reworking-the-way-you-approach-software-ly powerful.
They’re also explained in the worst way possible. Take this excerpt from the Wikipedia page (with apologies to Luke Palmer on perl.perl6.language):
Say you’re in the kitchen in front of the refrigerator, thinking about a sandwich. You take a continuation right there and stick it in your pocket. Then you get some turkey and bread out of the refrigerator and make yourself a sandwich, which is now sitting on the counter. You invoke the continuation in your pocket, and you find yourself standing in front of the refrigerator again, thinking about a sandwich. But fortunately, there’s a sandwich on the counter, and all the materials used to make it are gone. So you eat it. :-)
What! What!! What!!!! What does a sandwich have to do with it? Where did you get a fucking continuation! How did the sandwich go back in time? Did anything else go back in time? Why am I back in front of the refrigerator? Why did I not just eat the damn sandwich? This is the worst explanation ever.
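If you squint, the story does map onto code, sort of. Here's a tiny sketch of it in continuation-passing style; to be clear, Python has no real call/cc, so the "continuation" here is just a plain function passed around by hand, and every name is invented for the sketch:

```python
# A sketch of the sandwich story in continuation-passing style.
# Python has no real call/cc; here a "continuation" is just a function
# standing for "everything that happens after this point".

def take_continuation(rest):
    # "Take a continuation right there and stick it in your pocket":
    # we grab `rest`, the rest of the program, and keep a reference to it.
    pocket = rest
    return rest(pocket)  # carry on with the program, pocket in hand

def story():
    counter = {"sandwiches": 0}  # state that does NOT travel back in time

    def at_the_fridge(pocket):
        # Each time through, you're "standing in front of the refrigerator".
        if counter["sandwiches"] == 0:
            counter["sandwiches"] += 1  # make the sandwich...
            return pocket(pocket)       # ...then invoke the pocketed continuation
        # Second time through: the sandwich already exists, so:
        return "eat the sandwich on the counter"

    return take_continuation(at_the_fridge)

print(story())  # -> eat the sandwich on the counter
```

Invoking the continuation "rewinds" control back to the fridge, but the mutable state (the sandwich count) survives the jump, which is the only reason the second pass differs from the first.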
Also, the treatment of continuations in the literature seems to generally be that they are “contexts” with “holes” that terms can be plugged into, which is a very term-rewriting-interpreter-focused framing and not really applicable to the execution models that live in mere mortals’ heads. Unhelpful!!
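For the record, what they mean can at least be made concrete: in an expression like 1 + (2 * 3), the continuation of the subterm 2 * 3 is the surrounding context 1 + □, which you can model as a function waiting for the hole's value. A hypothetical rendering (names mine):

```python
# The continuation of the subterm (2 * 3) inside the expression 1 + (2 * 3)
# is the surrounding context "1 + □": a function waiting for the hole's value.
context = lambda hole: 1 + hole

# Evaluating the whole expression = evaluating the subterm,
# then feeding its value into the context.
print(context(2 * 3))  # -> 7
```

Which is fine as far as it goes, but it tells you nothing about what capturing or invoking one *feels* like at runtime.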
Anyway I think we can do better! I’d like to take a potshot at explaining them. Eventually. I’m hosed as hell at the moment.
Actually, I’ve started people out on the Learn Continuations Agenda already! Hexcasting, of some moderate infamy, has a “take continuation” opcode called Iris’ Gambit, masquerading as an “early return to label” feature. The execution model is simplified compared to most languages, and notably, invoking the continuation doesn’t involve sending any data explicitly “through” it (instead, data lives on an operand stack that survives the jump anyway).
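That “early return to label” flavor is what the literature calls an escape (one-shot, upward) continuation, and you can fake one in Python with an exception under the hood. This is a sketch of the general technique, not Hexcasting's actual semantics, and every name in it is made up:

```python
# Sketch of an escape ("one-shot, upward") continuation built on exceptions.
# Invoking the continuation jumps straight back out to where it was taken,
# which is exactly the "early return to label" flavor. Names are invented.

class _Escape(Exception):
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def call_with_escape(body):
    tag = object()                 # unique label for this capture point
    def escape(value=None):
        raise _Escape(tag, value)  # "invoke the continuation"
    try:
        return body(escape)        # run the rest of the program normally
    except _Escape as e:
        if e.tag is not tag:       # someone else's escape: keep unwinding
            raise
        return e.value             # back at the capture point, with a value

def search(items, wanted):
    def body(found):
        for x in items:
            if x == wanted:
                found(x)           # jump straight out of the loop
        return None
    return call_with_escape(body)

print(search([1, 5, 9], 5))  # -> 5
```

Note this sketch passes a value through the escape; in the Hexcasting-style model you wouldn't even need that, since the shared operand stack survives the jump.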
This kind of simplified model is cool and helpful for intuition, if you’re willing to learn a bytecode format with punny instruction formats and no mnemonics. …Hm.
Anyway I wanna/should really write a paper about this.