This article is about a certain aspect missing in today’s mainstream programming languages and systems. I stumbled upon this idea while reading Alan Kay’s writing about making the difference between mutable and immutable data “moot” in the context of FP vs. OOP by bringing in the concept of managed time. Since then I have been on the lookout for material that helps develop my understanding of this subject. It is a fertile area with many open problems for research, and bringing the fruits of this labour back as an interactive system would unlock new pathways in computing. Here I present a collection of some of the resources that have helped me in charting my journey.
If you know of any others relevant to the subject matter, please tweet them to me on Twitter. Also, if you are a fan like me, you are welcome to join the Alan Kay Telegram group where we share the best of his wisdom.
As I understand it, the beginnings can be traced back to the work of John McCarthy, one of the computing pioneers and the inventor of LISP. As Alan Kay writes on Quora:
A conflict was between at(robot, philadelphia) and at(robot, new york), which could not happen simultaneously, but could happen “over time”. This was like the problem of contemporary programming where variables would be overridden (and sometimes even files) — basically, letting the CPU of the computer determine “time”.
This destructive processing both allows race conditions and also makes reasoning difficult. John started thinking about modal logics, but then realized that simply keeping histories of changes and indexing them with a “pseudo-time” when a “fact” was asserted to hold, could allow functional and logical reasoning and processing. He termed “situations” all the “facts” that held at a particular time — a kind of a “layer” that cuts through the world lines of the histories.
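To make McCarthy’s idea concrete, here is a minimal sketch in Python (the names `FactStore` and `situation` are mine, not McCarthy’s): facts are never overwritten; each assertion is appended to a history indexed by a pseudo-time, and a query over the histories at a given time recovers the “situation” that held then.

```python
class FactStore:
    """Facts are never overwritten; each assertion is kept in a history
    indexed by the pseudo-time at which it was asserted to hold."""

    def __init__(self):
        self.history = {}  # subject -> list of (pseudo_time, value)

    def assert_fact(self, subject, value, t):
        # Append to the subject's world line and keep it ordered by time.
        self.history.setdefault(subject, []).append((t, value))
        self.history[subject].sort(key=lambda entry: entry[0])

    def situation(self, subject, t):
        """The value that held at pseudo-time t: the latest assertion <= t."""
        value = None
        for time, v in self.history.get(subject, []):
            if time <= t:
                value = v
        return value

store = FactStore()
store.assert_fact("robot", "philadelphia", t=1)
store.assert_fact("robot", "new york", t=5)
store.situation("robot", 3)  # "philadelphia"
store.situation("robot", 7)  # "new york"
```

Nothing is destroyed by the second assertion; both facts coexist in the history, and the apparent conflict dissolves once each is indexed by its own pseudo-time.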
The idea did not make its way into mainstream computing; instead, computers relied on locks and semaphores to manage data, and continue to do so. But these ideas made their way into lesser known systems. I will detail some of the pertinent works on the concept worth reading up on here.
David P. Reed’s doctoral dissertation, Naming and synchronization in a decentralized computer system, built on ideas similar to the worldlines in John McCarthy’s paper.
I was pleasantly surprised to find a fan club working to make NAMOS popular.
The dissertation is involved and takes a good while to read. A short introduction to his ideas can be found in the paper Implementing Atomic Actions on Decentralized Data.
I will include a visual review to this paper once I thoroughly understand the constructs of this system.
Leslie Lamport’s work in this area is relatively well known and has a long line of academic research behind it. So much so that his paper is among the most cited papers in all of computer science.
Lamport brought articulation and clarity to the field and devised the idea of logical clocks to track time. The different intellectual traditions his canon has spawned warrant a separate article, so I will just link the original paper and his later development of Paxos.
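Lamport’s logical clock itself is small enough to sketch directly. The rules are: increment the counter on every local event, stamp outgoing messages with the current count, and on receipt jump past the sender’s timestamp. A minimal illustration, not production code:

```python
class LamportClock:
    """Logical clock per Lamport's 'Time, Clocks, and the Ordering of Events'."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event: advance the logical counter.
        self.time += 1
        return self.time

    def send(self):
        # Stamp an outgoing message with the current logical time.
        return self.tick()

    def receive(self, msg_time):
        # On receipt, jump past the sender's timestamp to preserve causality.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()   # a.time is now 1
b.receive(t)   # b.time is now 2: the receive is ordered after the send
```

The key property is that if event x causally precedes event y, then x’s timestamp is smaller than y’s; the clock imposes a consistent ordering without any shared physical time.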
Raft is an algorithm that was developed as a response to Paxos: many people found Paxos tough to implement, so Raft’s authors chose a different set of tradeoffs for tracking state and time.
Another person who worked on incorporating time into computing systems was David R. Jefferson. His virtual time is a way of keeping track of time using the Time Warp mechanism. What is really interesting is that it uses a logical clock to timestamp entities and has a nifty built-in mechanism to roll back from errors when the data is not consistent. Adrian Colyer has done a review of the paper here.
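A toy sketch of that rollback idea (this captures only the flavour of Time Warp, omitting antimessages and re-execution of undone events; the class name is mine):

```python
class TimeWarpProcess:
    """Toy sketch of Time Warp: execute messages optimistically, snapshot
    state, and roll back when a 'straggler' arrives from the virtual past."""

    def __init__(self):
        self.state = 0
        self.vtime = 0
        self.snapshots = [(0, 0)]  # history of (virtual_time, state)

    def handle(self, timestamp, delta):
        if timestamp < self.vtime:
            # Straggler message: undo speculative work before applying it.
            self.rollback(timestamp)
        self.state += delta
        self.vtime = timestamp
        self.snapshots.append((timestamp, self.state))

    def rollback(self, timestamp):
        # Discard snapshots taken at virtual times >= the straggler's time.
        while self.snapshots[-1][0] >= timestamp:
            self.snapshots.pop()
        self.vtime, self.state = self.snapshots[-1]

p = TimeWarpProcess()
p.handle(5, +2)   # optimistic execution at virtual time 5
p.handle(3, +1)   # straggler from the past: roll back, then apply
```

After the straggler, the process is back at virtual time 3 with only the earlier message applied; the real protocol would then re-execute the undone t=5 message and cancel any messages it had sent, via antimessages.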
Lucid is one of the first languages to treat the tracking of time as a first class citizen. This is what Alan Kay has to say about it:
To pick just one of these, Strachey in the early 60s realized that tail recursion in Lisp was tantamount to “a loop with single simultaneous ‘functional assignment’ ”. And that writing it this way would be much clearer by bringing the computation of the *next* values for the variables together.
There are no race conditions possible because the right hand side of the assignments are all computed using old values of the variables, and the assignment itself is done to furnish new values for the variables all at once. (Looping and assignment can be clean if separate “time zones” are maintained, etc.)
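Python’s tuple assignment gives a small taste of this idea: every right hand side is evaluated with the old values before any variable is rebound, so the *next* values are computed together, exactly as in the quote.

```python
def fib(n):
    # Each iteration computes the *next* values of a and b from the old ones,
    # all at once: a single simultaneous assignment, so no ordering hazards.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # RHS uses the old a and b; both rebind together
    return a

[fib(i) for i in range(7)]  # → [0, 1, 1, 2, 3, 5, 8]
```

Written with two sequential assignments instead (`a = b` then `b = a + b`), the second line would see the already-updated `a` and compute the wrong value; the simultaneous form is what keeps the old and new “time zones” separate.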
The manual of this language is written as a readable document that introduces its key ideas and the authors’ motivations in coming up with it. I think the name of the language is apposite, given the clarity of its manual.
The paper is a 200 page read, so if you want a short introduction to its mathematical properties, a shorter paper is available.
There are also languages like CPL and Simula, and Christopher Strachey’s work on tail recursion, worth mentioning here. But I am relegating that work to a time when I have traced these lines of inquiry and have better clarity on them.
David Reed’s work found its application again in 2005, roughly 30 years later, when the TeaTime architecture was implemented for the Croquet project.
Even at this age, the verve Jonathan Edwards puts into creating new avenues for programming is inspiring. And Sean McDirmid creates some of the craziest visualizations of programming I have ever seen. Together, they have written a paper on the concept of managed time, taming the complexity that arises when making time tractable in programming languages.
The next ideas are ones I have explored very little but would love to dig into deeper. I wouldn't be the best person to explain how they connect to the overarching theme of managed time. I am postponing patching these details together into a coherent story, but to make this a comprehensive representation of my research, and hoping it will be useful to a fellow enthusiast, I'm listing these resources here.
- ERights by Mark Miller
E is a language with promise pipelining that enables high-performance, deadlock-free, distributed pure-object computing.
- CRDTs are data structures that allow for coordination among multiple parties, devised in 2011 by Marc Shapiro, Nuno Preguiça, Carlos Baquero, and Marek Zawirski in their paper. If you are looking to get into CRDTs, this is an accessible blog series on the order theory that underpins these kinds of data structures.
- PonyLang is a programming language in this space that I want to explore further.
- Version Control: Exploring ideas like logical clocks makes you draw parallels with how version control hashes and creates a logical ordering of your codebase. I would like to dabble in building a version control system; the rudimentary ideas behind this are documented by Tony Garnock-Jones.
- Datalog is a significant design in this space, and a modern reincarnation of the idea is Datomic, a database with first class support for time that is finding good success in the industry. This is a stream that warrants a deep look.
- Bloom/Dedalus by Peter Alvaro, Neil Conway, Joe Hellerstein, Bill Marczak, Sriram Srinivasan is a temporal logic programming language. Temporal Logic is a very fertile area that has overlap with this line of enquiry.
- Datafun by Michael Arntzenius and Neel Krishnaswami is a very intriguing language. The idea of defining computation as fixed points on semilattices looks well worth digging deeper into.
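To make the CRDT entry above concrete, here is a minimal grow-only counter (the G-Counter, one of the simplest designs from the Shapiro et al. paper, sketched from memory): each replica increments only its own slot, and merging takes the pointwise maximum. Because max is commutative, associative, and idempotent, replicas converge no matter how merges are ordered or repeated.

```python
class GCounter:
    """Grow-only counter CRDT: per-replica counts; merge is pointwise max."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count contributed by that replica

    def increment(self, n=1):
        # A replica only ever increments its own slot.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Join in the semilattice: take the max count per replica.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()
b.increment()
a.merge(b); b.merge(a)
a.value() == b.value() == 3  # replicas converge without coordination
```

Note how the time-management theme recurs: rather than overwriting a shared counter, each replica keeps its own monotonically growing history, and the merge reconciles them after the fact.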
People in this space
Here are some people in this space who are active in sharing their viewpoints and ideas on managed time in programming.
Alan Kay is a pioneer in the field and is still active on Quora, answering questions about deep ideas in computing.
Shalabh Chaturvedi is one of the best persons to follow in this space. He brings much needed clarity and articulation to the conceptual space of computing and blogs about these ideas.
If you would like to read more posts like this, sign up to our Newsletter!