I just hopped in on the Unison office hours (by accident) and Paul & Rebecca were very helpful! I asked about laziness, GUI libraries, and identity-over-versions of smaller sub-expressions. The latter is something I think is sorely needed for composing components which cross many concerns, but I don’t have a coherent write-up of why yet.

Rebecca linked me to a few GUI libraries that I can try out; @dfreeman/tv looks especially interesting.

This little chat motivated me to continue working on my distributed FRP paper, and to try out Unison. Their page at https://www.unison.cloud/our-approach/ also looks like an interesting read.

Bugs in my “Imperative FRP” TodoMVC: when an item is deleted it should stop contributing to the “completed count”. This wouldn’t have been a problem with Reflex’ Adjustable, I think. It just goes to show that writing these apps without a declarative style, or at least without a list data type, is really hard. Typically the list-data-type approach doesn’t do incremental bookkeeping of total/completed to-do items, though.

I’m in the process of cleaning up my TodoMVC version. It’s currently around 110 lines of domain-specific code. https://github.com/widok/todomvc/blob/master/src/main/scala/org/widok/todomvc/Application.scala is about 100 lines of domain-specific code, so hopefully I can get under that.

I am missing the persistence (“LocalStorage”) part of TodoMVC, but doing that in a higher-order FRP style is something I still need to explore. The Widok version doesn’t seem to handle it either, AFAICT.

Experiment: Imperative FRP

I just managed to get TodoMVC working with Threepenny-GUI, in an “imperative FRP” style. This means that I used functions with signatures like

newTellEvent :: (CommutativeSemigroup a) => Moment (Event a -> Moment (), Event a)

This gives you a “tellEvent”/“callback adder” to which you can add events, whose occurrences will all be combined into the result event.

It allows you to send event occurrences “up” from deep within the code without having to pass the events around, just like with Reflex’ EventWriter. EventWriter is much more cumbersome to use, I feel, but on the other hand newTellEvent would need some way to ensure that you cannot pass the “tellEvent” outside of its scope, which might lead to implementation problems. I think figuring out the semantics for newTellEvent might help me make progress on that front.
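
As a first stab at those semantics, a minimal sketch (my own names, assuming the usual map-from-times-to-occurrences denotation for events): the result event combines, with (<>), whatever every told event does at each time, which is exactly why the CommutativeSemigroup constraint is there, since the order of tells should not matter.

import qualified Data.Map.Strict as Map

-- Denotation sketch only, not an implementation. An event is modeled as a
-- map from times to occurrences; Time is a stand-in type.
type Time = Integer
type EventSem a = Map.Map Time a

-- The result event of newTellEvent: at every time, combine the occurrences
-- of all told events. Commutativity of (<>) makes this independent of the
-- order in which the events were told.
combineTold :: Semigroup a => [EventSem a] -> EventSem a
combineTold = Map.unionsWith (<>)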

Another thing I did is implement things like the “x items left” feature of TodoMVC by having

do (tellAdjustCompletedCount, adjustCompletedCountE :: Event (Sum Int)) <- newTellEvent
   (tellAdjustTodoCount, adjustTodoCount :: Event (Sum Int)) <- newTellEvent
   ...

Then on every todo add, completion, delete, toggle, etc. you tell Sum 1 or Sum (-1). This is quite error-prone (you might miss something which should influence the count), but still much less error-prone than doing it with IORefs, because you have the trustworthy FRP semantics backing you up, making sure that whatever happens at time t stays true at time t.
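
For reference, a hedged sketch of how the told deltas could be folded into the displayed count. The accumB fold-into-a-behavior primitive is assumed here (as are a Functor instance for Event and a CommutativeSemigroup instance for Sum Int); none of this is tied to a particular library.

import Data.Monoid (Sum (..))

-- assumed: accumB :: a -> Event (a -> a) -> Moment (Behavior a)
-- Every part of the UI tells Sum 1 / Sum (-1); a single fold turns those
-- deltas into the "items left" count.
itemsLeft :: Moment (Event (Sum Int) -> Moment (), Behavior Int)
itemsLeft = do
  (tellAdjustTodoCount, adjustTodoCountE) <- newTellEvent
  countB <- accumB 0 ((\(Sum d) n -> n + d) <$> adjustTodoCountE)
  pure (tellAdjustTodoCount, countB)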

Why do this?

I think what you get is like a manual version of what you’d get writing a fully declarative TodoMVC with all the optimizations, e.g. using Reflex’ Incremental data type.

A better interface?

Because the tell function and the result event always come in pairs, you might as well keep them in a single value. That value could have a Profunctor instance.
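
A minimal sketch of what that single value could look like, assuming the Event and Moment types from the signatures above, a Functor instance for Event, and the Profunctor class from the profunctors package:

import Data.Profunctor (Profunctor (..))

-- Hypothetical pairing of the tell side and the result side.
data Channel i o = Channel
  { tellInto :: Event i -> Moment ()  -- contravariant: what you may tell
  , result   :: Event o               -- covariant: what you read back
  }

instance Profunctor Channel where
  dimap f g (Channel tell res) = Channel (tell . fmap f) (fmap g res)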

Future of programming potential

I feel like having to use newTellEvent to do this is just a limitation of the current textual programming model. In an editor you might just be able to click at all the event producing locations deep in the code and combine them in a certain way, after which you can use the result anywhere else.

The hard part would be to decide what “context” to use, because if a button is used all over a program you almost certainly want only a subset of buttons to count for your event.

Thought: IDE should “quick expand” short definitions

A common complaint about code reuse is something like “I prefer things to be written out so that I can see what it does”. And, while I’m firmly in the “always abstract when you see a pattern” camp, it is annoying to have to learn what every little definition does. Maybe a development environment could solve this problem for small definitions by showing the short code (like a function name) along with the expanded definition, or by doing the expansion on cursor hover.

I just realized that one reason to use EventWriter for performing IO actions on event occurrences is garbage collection: you need to somehow keep a reference to the event.

Today I worked on translating my Threepenny-GUI TodoMVC to my FRP library. Reflex-DOM seems to do GUI by using the PerformEvent transformer which I think is an EventWriter for IO actions, but looking at Threepenny-GUI it seems they just fire off actions whenever? I’m not entirely clear on the details of how to best implement this or what the reasons are for doing it a certain way.

Switching perspectives in an FRP app (functional/imperative? visible/hidden state?)

I’ve also had many thoughts, or rather vague vibes, come to me about how you should be able to switch the view around for an app like TodoMVC (or any interactive app, I guess?) from “functional-functional” to “functional-imperative”. I’m not even sure what that means, now that the vibes have faded :')

Maybe: you can write FRP in a much more imperative way than usual by creating events out of nothing and adding occurrences to them with a trigger or tellEvent. This is just as “purely functional” as using a state transformer, for example. On the other hand, you can always write the same thing by manually bubbling up the events.

I’m just remembering another aspect of this: suppose you want to know/keep some state like “the current number of to-do items”. You can do this explicitly incrementally with Reflex’ Incremental + PatchMap for example. But, you could also manually write out the way the counter changes with a more imperative feel, i.e. “when a new task is added, increase the counter”.

Another thing is that you can write a to-do app with an explicit list of tasks in a behavior, and manually add identity to each task with an integer etc. Or you can hide the state and only expose how the things you see on screen are changed by plumbing events.
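
A tiny sketch of the explicit-state perspective, with made-up names (Behavior assumed to be a Functor): the task list lives in one visible behavior keyed by a hand-assigned integer identity, and the count is derived from it instead of being maintained by plumbed delta events.

import qualified Data.Map.Strict as Map

type TaskId = Int
data Task = Task { taskText :: String, taskDone :: Bool }

-- Derived, not maintained: recomputed from the visible task list.
itemsLeftFrom :: Behavior (Map.Map TaskId Task) -> Behavior Int
itemsLeftFrom = fmap (Map.size . Map.filter (not . taskDone))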

It’s all very vague and I’d have to work out some examples to get a grasp on it.

Example

Consider the entry field for the todo list. When you press enter you want to clear it, so in a graphical environment you could select “on enter key” and then “do action”, click on the field, and select “set contents”, setting that to the empty string. This is obviously a very imperative way. But when you examine the field you should also be able to see the “functional perspective”: what influences the state of the field?

Similarly, the list of tasks can be seen as the collection of “all initial tasks entered from the entry field + any modifications after the fact”, or the entry field could say “insert a task in the task list”.

FRP Smalltalk/tangible functional programming

I’ve also been thinking about the FRP-based Smalltalk/IDE. Would it be a natural solution to composable, incremental program definitions? Where you’d write a let-binding, each part would actually be a behavior. A behavior with identity. A denotative notion of identity is very important to make progress, I think.

Then I’m not exactly sure what it means to “run” the programs. Like I’ve written about before, you want to be able to look at and explore the state of a program.

One interesting thing is that the state is fully determined by input + time.

Reflecting on where I’m going

The past half year I’ve mostly worked on understanding Reflex and coding little FRP libraries. I’m much happier with my understanding of the nuances now and I’ve published a little FRP library and an accompanying blog post. This library should be able to serve as a platform to explore FRP related work.

I feel like now is the time to continue with actual novel work and pick up my ideas again. I’m finding it hard to pick, though. I don’t have much internal motivation to work on generic “purely distributed programming”, but at the same time that is where I have the most concrete material. Having some external motivation would definitely help here. Potentially also writing down small bits of it as blog posts as I go.

My other ideas are mostly around what right now I can only summarize as “purely functional Smalltalk meets Unison and Hazel”. I don’t know if I can do something concretely with that in Haskell without writing a language from scratch, because I think I’d need something like DrRacket to make progress with this in an existing system.

Thought: comments need to stay synchronized with the actual code

You should be able to write comments which refer to identifiers in your code. Maybe they could even refer to actual code fragments/expressions. When these identifiers or expressions change or are removed, your editor should warn you.

How Distributed FRP improves over processes-and-messages programming

When I submitted my (incomplete and understandably rejected) DFRP paper one of the reviewers commented that I should make clear what advantages DFRP has over the regular way of doing distributed programming.

Some that I can think of:

No need to ever explicitly send information

This eliminates problems like deadlocks in which a process is blocked waiting for a message which will never come. There are not necessarily any processes, so they can’t be blocked, but you could still write programs which assume something is going to happen and that that fact is going to be known at some point, even though either the thing doesn’t happen or knowledge about it is never transferred. So it’s still possible to write incorrect programs, but maybe a lower level notion of accidental deadlock is eliminated. The choreographic programming research says this is one of the main advantages of their paradigm. I know about this due to the HasChor library and paper from ICFP, which implements choreographic programming in Haskell. I found the paper easy to read and enjoyable to understand.

Composability

Or: the main advantage of DFRP is the same as that of all denotative/purely functional programming. You can split up distributed programs in a way which is not possible in other languages without introducing concurrency bugs. The only problem is that I’m having a hard time thinking of concrete examples, while at the same time being convinced that this is a huge cause of the lack of code reuse and easy-to-use abstractions in distributed programming (typical me).

Maybe something that comes up in Paxos is a feasible example. When you read the algorithm, it says you want to get a quorum of “promises”, which is one concern. Then later the description says: oh, and you also want to continue with “the highest accepted proposal you got back”, which is another concern. Normally you’d go back and change your quorum code to include that, because if you launch a separate process to collect those, the results might not be correct? (Launching processes is the only way to achieve composability in message-based distributed programming, AFAIK.)

Composing programs also forces you to write coordination code between components.

Higher-level abstractions like batching of received messages are very natural

You can very easily write an “aggregated commutative semigroup” function or have it as a primitive in your library. With this comes ease of optimizations etc. I guess the main argument here is that automating these kinds of things in regular programming doesn’t come as naturally?

The model is compatible with continuous transmission

I don’t have any examples for this, but you should be able to write programs which are about continuous signals being transmitted, rather than discrete messages.

Maybe things like “average light intensity coming from the south”??

I gave up on trying for an even shorter basic FRP implementation. It feels like there should be some abstraction that I’m missing but it’s fine to continue with what I have now!

I’m currently playing around with Threepenny GUI because I’d like to know an accessible GUI library that I can use for experiments. In parallel I’ll finish commenting my FRP code.

After adding more of Reflex’ tests all three of yesterday’s branches still pass. Time to look at my implementation using the original subscribe definition and see if I can make it look more succinct like the other types.

I now have an implementation in three ways and I’m not sure which is best. All this was because fixIO is not lazy enough to do:

fixIO $ \unsubscribe -> do
  subscribe e $ \propagate -> do
    ...
    unsubscribe

When originally implementing subscribe as:

subscribe :: Event a -> (Maybe a -> IO ()) -> IO (Unsubscribe, Maybe (Maybe a))

This fixIO problem didn’t exist because processing the initial value happened after the subscribe had run. Now I was trying to implement the FRP interface without having to return this Maybe (Maybe a) (whether an occurrence is known at subscribe time, and if so, its value). Maybe with my new insights I’ll give the original version a shot again and see if I can make a shorter implementation using it.

The three options I came up with are:

branch no-unsubscribe-queue-via-passing-in-unsubscribe-to-subscribe
Here I implemented event subscription without returning the unsubscribe; instead it’s passed in to the subscriber.
       type Subscriber a = Unsubscribe -> Maybe a -> IO ()
       type Unsubscribe = IO ()
       newtype Event Impl a = Event { subscribe :: Subscriber a -> IO () }

I had to use unsafePerformIO to save the unsubscribe action in some places. The advantage of this one is that I didn’t have to use a delaying queue for unsubscribing in coincidence, but everything else became a little harder.

branch no-unsubscribe-queue-via-passing-in-unsubscribe-to-subscribe
Here I have subscribe as:
       type Subscriber a = Maybe a -> IO ()
       newtype Event Impl a = Event { subscribe :: Subscriber a -> IO Unsubscribe }

Here coincidence looks like:

       coincidence :: Event Impl (Event Impl a) -> Event Impl a
       coincidence coincidenceParent = cacheEvent $ Event $ \propagate -> do
         subscribe coincidenceParent $ do
           maybe (propagate Nothing) $ \e -> do
             innerEUnsubscribeRef :: IORef (IO ()) <- newIORef $ pure ()
             occWasKnownRef <- newIORef False
             unsubscribe <- subscribe e (\occ -> do
                                            writeIORef occWasKnownRef True
                                            join (readIORef innerEUnsubscribeRef)
                                            propagate occ)
             writeIORef innerEUnsubscribeRef unsubscribe
             occKnown <- readIORef occWasKnownRef
             when occKnown unsubscribe

Which mimics the original subscribe.

Every other function is simpler to implement.

branch coincidence-using-toClearQueue
         type Subscriber a = Maybe a -> IO ()
         newtype Event Impl a = Event { subscribe :: Subscriber a -> IO Unsubscribe }
       coincidence :: Event Impl (Event Impl a) -> Event Impl a
       coincidence e = cacheEvent $ Event $ \propagate -> do
         subscribe e $ maybe (propagate Nothing) (addToQueue toClearQueueRef <=< (`subscribe` propagate))
This one uses the event occurrence clearing queue to unsubscribe from the inner coincidence event.

I’m going to add the rest of the Reflex semantics test suite (skipped fan/Dynamic stuff) for some added confidence and then go back to the original subscribe function definition to see if I can make things a little shorter with it.

The original subscribe function as found in Reflex is here https://github.com/parenthetical/frp-journey/tree/original-subscribe-definition.

I figured I’d try event subscription without returning the current known occurrence, just relying on the propagate callback for everything. This made the code a lot shorter, but now my original way to handle the immediate unsubscribe of the inner event in coincidence no longer works without a fixIO cycle. Adding an unsubscribe queue works to make that lazy, but I’d rather avoid it, so I’m spending some time figuring out if that’s possible.

I’ve published a repo implementing FRP from scratch and I’m in the process of writing an accompanying blog post. This implementation doesn’t do any graph traversal optimizations, but it is as expressive as, and lazier than, Reflex. I’m now hoping to add GC, graph traversal optimizations, and a GUI library to see how those work.

Concretely the next steps I’m interested in are:

  • Cleaning up my implementation some more, solving all TODOs, and fully commenting the code.
  • Writing a TodoMVC version using my FRP implementation and threepenny-gui. I’m planning to do this by first keeping the FRP and GUI separate and then combining the two in a nice API.

I’ve also just realized that instead of Reflex’ “event subscription takes a callback and returns the current known occurrence (if any)” I might be able to have event subscription call the callback immediately? I’m not sure this works out, but I’ll give it a try now.

Niggles while writing Haskell:

  • I have to add dependencies manually to the Cabal file and then make sure I reload direnv.
  • Module names have to match file hierarchy but Cabal doesn’t suggest an automated fix. HLS does though. Why can’t the module name be derived from the file name?
  • Having to list all modules in my Cabal file.
  • Having to add imports manually to my Haskell modules.

Thought: I should be able to run any expression in my code, being prompted for inputs. This makes it so you don’t have to define intermediate expressions if you want to test part of a bigger thing.

Long time no write! I’ve been continuing to explore Reflex’ implementation. I’ve got it quite short now and I think I understand all parts, except why the extra laziness of the buildIncremental/buildDynamic functions is needed. Hopefully I can figure that out by working through a failing test.

My plan now is to delete the last of the optimizations I can find (which might only be the use of “height” to save on graph traversal), and then write about how to go from zero to Reflex Spider.

Thought while simplifying and getting to understand Reflex’ Spider implementation:

  • When doing this work I focused on ignoring things which are performance optimizations, e.g. ‘FastWeakBag’ over ‘WeakBag’. This allows seeing more patterns. When you’ve gotten to the essence of things you can re-introduce optimizations.

I’m still hard at work refactoring Reflex’ Spider implementation, greatly improving my understanding of what’s going on, and seeing hints of how I can make a more grokkable version.

I’ve been doing this by constantly repeating the following:

  • Inlining any function that’s only used once. In my experience splitting up imperative code into named procedures without their context, especially if they don’t capture a repeating pattern, is bad for grokking.
  • Looking for repeating patterns. This is the slowest and most intuitive work, since code which does the same thing can be written in subtly different ways. And in imperative code it’s hard to deduce whether the order of statements matters.
  • Bringing functions into the context in which they’re used. This makes patterns and dependencies more obvious.

Sometimes doing these steps results in more lines of code, because I’m using newlines to keep code readable. Big procedures and functions can sometimes make you miss the forest for the trees, but I still feel it’s worth it. Re-reading, setting your font size really small, and refactoring to bring function definitions as close to their use site as possible using lexical scope all helped to find patterns which reduce the code size with meaningful, reusable procedures.

This has been successful so far, but at a certain point something else comes into play: thinking about the meaning of code:

  • Is a certain function doing something conceptually similar to another?
  • Could you express the shared meaning in a simple denotative semantics, using composition of semantic functions?
  • Can you restructure the code similarly to how you wrote it compositionally?

Thought on dependencies & causality

I’m using Emacs’ LSP mode to find references to identifiers. These are listed in an “xref” buffer. I tried to refresh the buffer the usual way, but that’s not supported.

What would it take for something like a buffer to support “recreate me with the current state of the world”?

I’ve continued working on understanding how Reflex works and trying to implement FRP using an occurs :: Event a -> m (Maybe a) implementation primitive. Such a primitive might help with making a reasonably efficient implementation which doesn’t rely on many event merge primitives such as mergeIncrementalG and mergeIncrementalWithMoveG from Reflex (mergeCheapWithMove, mergeCheap, and mergeIntCheap in the implementation). It might also let coincidence and functions like push in Reflex share code.

I’m doing this on two fronts: one is my simple FRP implementation, for which I copied part of Reflex’ test suite, and the other is working on Reflex’ Spider implementation. While looking at the latter I discovered some duplicated code between coincidence and switch which I factored out. Now I’m looking at the various merge functions and trying to see what they share.

For the simple implementation I’ve been copying code from Reflex.Spider but I haven’t yet made my tests run with this implementation (before I had a different one which passed all the tests but wasn’t lazy enough to express some definitions without looping). Getting the tests to run is my priority now.

I’ll also grab a local copy of Reflex and apply my refactorings there so I can run the official test suite and be more confident that I haven’t messed up.

For future reference, here’s a pure implementation of my simple FRP library using occurs where I could:

instance (Enum t, Ord t) => Frp (Pure t) where
  newtype Event (Pure t) a = Event { unEvent :: t -> Maybe a }
  newtype Behavior (Pure t) a = Behavior { unBehavior :: t -> a }
    deriving (Functor, Applicative, Monad) via (->) t
  type Now (Pure t) = (->) t
  sample = unBehavior
  never = toEvent $ pure Nothing
  sampleAtMaybe f = toEvent . runMaybeT . (MaybeT . f <=< MaybeT . occurs)
  coincidence = Frp.sampleAtMaybe occurs
  hold a e'@(Event e) fromT = Behavior $ \sampleT -> do
    if sampleT <= fromT
      then a
      else case e (pred sampleT) of
             Nothing -> unBehavior (hold a e' fromT) (pred sampleT)
             Just a' -> a'
  now = Event . (\t' -> guard . (t' ==))
  mergeE a b = toEvent $ align <$> occurs a <*> occurs b
  switch = toEvent . (occurs <=< Frp.sample)

occurs :: Event (Pure t) a -> Now (Pure t) (Maybe a)
occurs = unEvent

toEvent :: Now (Pure t) (Maybe a) -> Event (Pure t) a
toEvent = Event

nonEmpty :: Foldable t => t a -> Maybe (t a)
nonEmpty xs = if null xs then Nothing else Just xs

mergeIntMap :: (Witherable f, Foldable f) => f (Event (Pure t) a) -> Event (Pure t) (f a)
mergeIntMap =
   toEvent . fmap nonEmpty . wither occurs

It’s been a long while since I wrote here! I attended the GHC Contributors Workshop and ZuriHac in the meantime, and I’ve been working on implementing a Reflex-style FRP from scratch.

The GHC Contributors Workshop was a great event and made me feel more comfortable to dive into the GHC code. ZuriHac as always was lovely and chilled out, and even had a Conal talk to keep the denotative programming spirits up <3.

I met Ken Micklas during ZuriHac who, as it turned out, had been doing the same thing and was able to give me useful tips. He had some interesting ideas about how to improve the FRP interface, for example by avoiding

merge :: Event a -> Event b -> Event (These a b)

You no longer have to wait to know whether an event occurs simultaneously with another, and you can start working with events which may hide multiple simultaneous occurrences. Only when folding over them do you have to decide what to do with these. He wrote about this idea and others in his “A trilemma in functional reactive programming models” post.

My WIP FRP library status

I now understand in general terms how to implement the graph traversal algorithm for events with a priority queue, and how to use hold to create behaviors. My current implementation isn’t entirely correct, I believe, even though I’ve copied Reflex’ test suite and it passes. So, there is still some work to do.

As a next step I should probably implement a GUI to-do list app to make sure I have enough primitives to make that work and to check if my implementation is correct. It’s looking like Monomer has one as an example so maybe I’ll go with that library.

A simple and concise implementation

I started out with a simple interface which lets me implement many functions in terms of two primitives, occurs and nowToEvent:

occurs :: Event a -> Now (Maybe a)
nowToEvent :: Now (Maybe a) -> Event a

For example:

never = nowToEvent (pure Nothing)
coincidence = nowToEvent . runMaybeT . (MaybeT . occurs <=< MaybeT . occurs)
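
A couple more definitions fall out the same way. These are sketches against the same interface, additionally assuming sample :: Behavior a -> Now a, Applicative/Monad instances for Now, and align from the semialign package:

import Data.Align (align)
import Data.These (These)

mergeE :: Event a -> Event b -> Event (These a b)
mergeE a b = nowToEvent (align <$> occurs a <*> occurs b)

-- Tag each occurrence of an event with the current value of a behavior.
tagB :: Behavior b -> Event a -> Event b
tagB b e = nowToEvent (traverse (const (sample b)) =<< occurs e)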

This is easy to implement if you use a pull-based system (polling the list of all events to make sure behaviors are updated), but it will be interesting to see if I can make a similar interface which does the push-based graph traversal optimizations.

I wrote down the implementation of some primitives using occurs and hope to reverse engineer an implementation for them from the efficient one I’m writing.

I’m wondering if it’s possible to go from an FRP implementation in which accidental space leaks are possible by keeping a reference to an Event, to one in which that is not possible. In the former an Event is really just a list of occurrences/futures. In the latter you have a Now monad which ensures programs can only observe occurrences after “now”.

My first thought was that you’d wrap every Event in a Behavior to make a new Event type (this behavior being equivalent to the Now data type). This behavior would automatically keep popping occurrences off the list-based event.

Then… the runtime has to unwrap the Now mechanically? I’m not sure any of this is making sense.

I’ve been reading Sylvain Henry, John Ericson, and Jeffrey M. Young’s “Modularizing GHC” paper which discusses the current state of the GHC code base and how it might be improved. As of now the code has become inflexible due to new work (necessarily?) choosing the path of lowest friction to add features. I imagine doing thorough refactorings when you’re on a PhD thesis time budget and a lot of others are also trying to add features isn’t easy.

The authors propose that the principles of Domain Driven Design will be helpful to define what good improvements to the current state would look like:

  • Ubiquitous language: use consistent and precise terminology in code and documentation, and make it apparent at type-level.
  • Domain isolation with layered architecture: use layering to avoid spaghetti or lasagna code.
  • Supple design: make code “a pleasure to work with, inviting to change”.

These all sound like great principles, and having this as a check-list that cannot be circumvented for any newly checked in code would be a major help. I’ve tried to use the GHC API while playing around with writing HLS plugins and the documentation and names used were extremely confusing. It might be impossible to start something from scratch without looking at existing examples and without external help. Not something that ought to be the case in a code base related to functional programming.

I’m back from walking the Cape Wrath Trail! Three weeks of wonderful Highlands nature, many ticks, and a few midges.

I did some note taking about programming with objects/identity/user intention/causality while on the trail and I’ve been experimenting with FRP implementation. I’m not entirely sure where I’m going with that. I’d need good test cases to see that what I’m doing is correct, and I’d love to write interactive programs but that is annoying without a good GUI library. Maybe I should stick to “My first C programming book” style CLI programs.

Next week I’ll be attending the GHC Contributors Workshop as well as ZuriHac. This year ZuriHac runs many talks, I’m looking forward to that.

Back from hiatus for a few days.

I just watched Gilad Bracha’s presentation on Ampleforth, a live-coding/literate programming/notebook/GUI environment. I loved how you could go from GUI element to definition, compose definitions and document any way you like, and how editing definitions is instantly reflected in the environment. What I think is missing: in the markup, elements are referred to by strings, so you have to think about naming objects if you want to refer to them. A graphical environment is the perfect place to avoid this extra mental burden.

Otherwise I’ve been working on understanding how FRP is implemented. I wasn’t able to make sense of all the things going on in Reflex’ implementation, so instead I’m starting from Conal Elliott’s Push-pull functional reactive programming paper. I’m hoping that implementing the Future abstraction using techniques more similar to those of Reflex (graph traversals) is going to work. The reactive library which came out of the push-pull paper never worked properly, and I’m hoping that’s because of the very experimental implementation techniques.

In any case, it would be nice if everything can be defined in terms of Futures; I want to experiment with different advanced uses of FRP and the simpler my implementation is the better. If it’s slower than Reflex that’s not a big deal, anything is better than the semantic functions I’ve been using up until now to experiment.

Some thoughts on Objects in FRP.

Denotative Identity and Objects, Denotative Object-Oriented Programming

I see identity (in the sense of an object having an identity) as a problem in our current functional programming practice. It seems that we invent it over and over again whenever we need it, manually generating identifiers and tracking whatever values are supposed to be associated with that identifier. Or, we go the impure route and use mutable references and revert back to impure imperative programming.

I think we can do better than that with FRP-like semantics.

Identity

I’ve worked on generating unique identifiers in FRP before. Semantically these are something like a list of (Time, Index), which notes the causality of the identifier, i.e. “what are the moments in time that led to this particular identifier being generated?”

type Identifier = [(Time,Natural)]

The Natural here is “the n-th identifier generated at a particular moment with this chain of causality”. I’m not very satisfied with that; it feels like the definition of the program should somehow be included there as well? Semantically, at least. When you implement identifiers the number index works fine.
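
To make that reading concrete, a small illustrative helper (my own, not from any existing code): a fresh identifier extends its cause’s chain with the current time and its index among the identifiers generated at that moment.

-- The n-th identifier generated at time t, caused by `parent`
-- (Time, Natural, and Identifier as defined above).
freshFrom :: Identifier -> Time -> Natural -> Identifier
freshFrom parent t n = parent ++ [(t, n)]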

Object definition

I’m considering an object as something with an identity and a state which can change over time. This can be modeled as a pair of Identifier and Behavior.

newtype Object a = Object (Identifier, Behavior a)

Interface

instance Eq Identifier

getId :: Now Identifier

new :: Behavior a -> Now (Object a)

instance Eq (Object a) where
  Object (i, _) == Object (j, _) = i == j

Objects can be compared for equality by comparing their identity. There is no aliasing problem because the definition of their state cannot be modified.
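
A hedged usage sketch (assuming Now is a monad and Behavior has an Applicative instance, so that pure gives a constant behavior): two objects built from the same state definition are still distinct, because new mints a fresh identifier for each.

distinctDespiteEqualState :: Now Bool
distinctDespiteEqualState = do
  a <- new (pure (0 :: Int))
  b <- new (pure (0 :: Int))
  pure (a == b)  -- False: same state definition, different identities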

Future work

  • Could I use these objects to solve problems related to first-class timelines/causality? Like, if I could use this to track which objects some action uses, then I could tell that deleting an action from the timeline is going to make future actions x, y, z nonsensical, and propose solutions.

The power of a denotation for causality?

I’m having some brain sparks related to causality. I was thinking about how, when defining GUIs, you want to be able to go from the GUI element to the code that made it exist, or to the history behind what you’re seeing on the screen. If you have a good model for causality, i.e. “what led to this happening/that existing”, you might be able to include “because this code was written, these buttons were clicked, etc.” in there.

New FOC Ideas entries

I added entries for Thoughts on comfortable GUI development and A DSL should have its own debugger.

I asked on the Future of Coding Slack whether anyone had ideas relating to the program state vs definition idea and some interesting links came up.

The entangled strands of time in software development

Nick Smith linked his Twitter thread, which shares many of the same thoughts I’m having and links to a paper: “The entangled strands of time in software development”, Matthias Hauswirth and Mohammad Reza Azadmanesh, 2017.

This paper looks at what programmers need from their environment. I took some notes below.

Time travel & altering the past

On undo/redo resulting in a tree of history:

[…] keeping a mental model of the tree, and one’s place in that tree when undoing and redoing, is too taxing. After all, life is a sequence, not a tree. But then, life doesn’t even have an undo.

This quote makes it clear that making this tree user-friendly is a high priority if you want to build a system in which exploration is cost-free. Achieving this will make collaboration cost-free as well. Git has a hard time with this because of its lowest-common-denominator text support.

Reading this makes me remember my interest in non-linear undo work. I’ve seen some older papers on this but I don’t know how much it’s supported in actual applications these days. I wonder what a modern denotative approach looks like.

Source & documentation navigation

This reminds me of how, even with functional programming, you want identity and relations to be able to address things like looking up which functions refer to which, etc.

Program execution/testing/debugging

I’ve written about this here.

Source editing & version control

VC and edit history are currently not integrated. Instead, they could present a unified interface, with classic VC features such as commits being an extra layer on top of regular work.

Source editing & source navigation

Here they argue that you could unify navigation with editing. Navigation usually has a separate web-browser-like interface (back/forward). This means that “looking at something” should be in the history?

Same for documentation.

My thoughts

I love all the concerns put forward in this paper. It seems hard to implement without deeply integrating with the actual programming language being used. For example, to rebase history well, you have to know whether the history you’re copying makes sense in the current context.

Rather than immediately attempt my own implementation of FRP, I’ve been working on understanding Reflex’ Spider implementation. I started out with the original code, randomly deleting anything that didn’t seem essential, like Incremental and Dynamic (because they can be expressed in terms of Event and Behavior), but it seems that Dynamic might be a basic building block of the implementation? Maybe behaviors actually work via something Dynamic-like?

Anyway, I’ve re-copied the whole Reflex implementation and I’m now simplifying with the help of Git. First I’m deleting all debugging stuff behind the DEBUG* flags, next I’ll see about deleting some things like fan which are just optimizations.

I think understanding what “height” is and what the goals of all the “deferral queues” are would be a major step.

I added the first topic to my “ideas for the future of coding” section on the site (current title: Un-separating program state and definition).

Logging this thought I’ve been having for a long time (if I’m not repeating an earlier entry):

An action and its intent should be first class things

I think that programming with a history of first-class, meaningful user/programmer actions sounds very valuable. Actions are things like clicking a button to do something, editing the definition of a program,… You need to capture the intent of the action and the causality, not just the lower-level effects of the action. Causality is which actions in the past led to the current one being able to exist, e.g. drawing a square leads to the square being resizable later.

This is related to why commit messages are so important in development: the individual edits to the code are only there because we can’t easily capture our true intent in the code.

The more of the actual intent is logged, even in a CRUD-style program, the better you can do things like changing the underlying state model and migrating the old to the new state by replaying actions.
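
A tiny sketch of that last point, with made-up intent constructors: if the log stores intents rather than low-level edits, migrating to a new state model is just a fold of the same log with a new step function.

import Data.List (foldl')

-- Intent-level action log for a CRUD-style to-do app (names are illustrative).
data Intent = AddTodo String | CompleteTodo Int | DeleteTodo Int

-- Replaying the log against a new step function migrates old data into a new
-- state model without a hand-written migration.
replay :: s -> (s -> Intent -> s) -> [Intent] -> s
replay initial step = foldl' step initial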

I guess this all kind of sounds obvious, like, “can’t you just name functions appropriately” or something, but I think there’s more to it.

I just remembered another useful FRP feature which currently doesn’t exist. It would solve the “I wrote my program as a function of Event a, but I want it to accept input looking something like Event [a]” problem, where [a] is an ordered sequence of occurrences that have to happen “within the current moment”. Doing this safely, especially if you want to pass multiple of these Event [a] values to a program, seems tricky. You want to keep your “moments within the current moment” apart from other “moments within the current moment”, because there would not necessarily be a relationship between them.

Maybe one way to achieve it would be something like the following, for which you need a source of first-class sub-time values.

withMoments :: Integer -- How many sub-moments to generate
            -> forall t. [Time t]
                         -> Event t' (Map (Time t) a)
                         -> (Event t a -> Behavior t b)
                         -> Behavior t' b

(I didn’t think about this signature properly.)

Code metadata–version coupling

Another “what programming should be like” thought: I find it weird that TODOs/issues/… are not coupled to code. Some people consider TODO/FIXME notes in the code itself a smell, but they are metadata about the code and I’d like to see them while working on the code.

As far as I understand people are worried about:

  • Untrackability of TODO in code.
  • A kind of fake “technical debt accumulation” because it’s too easy to add TODOs.

On the other hand I think I’ve seen technical debt accumulation precisely because TODOs were not added. The same little niggles were present all over the code base but no one kept track of them and it kept making things worse.

I started working on an implementation of Reflex-like FRP in Haskell. I’ve long wondered how it all works and not understanding the internals has held me back from understanding what goes wrong when my mad experiments fail. I’m taking this as an opportunity to learn about haskell.nix as well.

Things I’d like to explore with this implementation:

  • A Now monad which has
        occurs :: Event t a -> Now (Maybe a)
    
  • Experiment with first-class timelines. (Needs keeping the flexibility of Reflex’ parameterized types.)
  • Maybe focus on commutative-first operations? I’ve long thought that sequential-first over commutative-first is a mistake in library design, and Haskell suffers from it. (Another way to see this is “sets over lists”.)
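
A rough sketch of what “commutative-first” could mean at the class level (my own marker class, the same one assumed in the newTellEvent signature further up): a subclass of Semigroup whose extra law, a <> b == b <> a, is what licenses merging simultaneous occurrences in any order.

import Data.Monoid (Sum)
import qualified Data.Set as Set

-- Law (not machine-checked): a <> b == b <> a.
class Semigroup a => CommutativeSemigroup a

instance Num a => CommutativeSemigroup (Sum a)
instance Ord a => CommutativeSemigroup (Set.Set a)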

I started reading Ink and Switch’s article on their Upwelling project. I’m really enjoying it so far and I’m excited to see other people are sharing my thoughts on how collaboration tools should work.

More thoughts and questions:

Smalltalk-y environments and distributed programming

Why does it feel like Smalltalk environments don’t do so well for distributed programming? (Pure ignorance here, maybe they do.) Does any “more than text” environment do well?

Concrete examples vs abstractions

I listened to the FoC podcast episode on Magic Ink. In it the thought comes up that design tools should work via concrete examples instead of starting with abstractions, like “normal” programming does. This may sound like it’s in opposition to the category-abstract-as-much-as-we-can vibes we have in the FP community, but I believe it’s actually extremely compatible, even necessary. The thing that’s missing is a database of concrete examples and a user friendly interface.

A programming environment should say “you have two numbers, here are the combinators that work on them”. It should then save the chosen combinator both as a concrete one and as a more abstract monoidal operation. If the code changes and you e.g. switch from numbers to matrices it should pick a new sensible operation or ask if multiple exist.

Undo via timeline-of-actions manipulation

My master’s thesis was about making timelines/events in FRP first class values which can be modified, and using that to implement things like undo. I think this is a concept which should be explored more, and which would render an FRP-based programming environment super useful, because you could ask “what-if” questions and have a much better interactive development/debugging experience.

Causality of actions, “why can this happen?”

When thinking of implementing undo functionality with the above timeline-of-actions concept one of the first problems which comes to mind is deleting an action which makes future actions meaningless. E.g. if in a drawing program you remove the action which created a circle, then later actions resizing that circle no longer make sense. Can we invent a programming language in which it is easy to ask “which parts of the future exist only because of this thing which happened in the past?”.

UI designs/programs should be testable

Could you record interactions with a UI and replay them in various contexts (resized windows etc.) so that they still work? This needs predictable identifiers for the UI components. What does a programming language which natively supports these predictable IDs look like?

In common OO programming object identity is not predictable, because the system is not pure and not replayable.

I think it would be good to have a list of the projects I’m working on (or considering) on my web site. Things I can think of right now:

  • Play around with making Obelisk composable (components with frontend, backend, and database)
  • Finish my “distributed FRP” paper? I got demotivated and burnt out on this, tbh.
  • Blog post about denotative distributed programming.
  • Trying to find an amazing implementation of Paxos in a toy distributed FRP implementation.

Thoughts I wrote down during my Fisherman’s Trail walk:

Debuggers should work on different semantic levels

For example, an FRP debugger should tell you “what is true at time t” (values of behaviors, any event occurrences). If, say, you wrote an infinite recursion in the non-FRP code, you should be able to drop down to a friendly execution model for that code.

Why couldn’t Smalltalk be a purely functional programming environment? Why not an FRP-inspired Smalltalk?

You’d need:

  • A model of how the program definition changes over time.
  • A precise, composable, and easy-to-understand model of where the state of objects comes from. Smalltalk OO gives identity to everything with state, but maybe that’s overkill and we can make identity explicit in a natural way.

When developing GUIs you should be able to ask questions and edit live

  • How is this element defined?
  • Where does it come from?
  • Why does it exist/am I seeing it?

I mean, you should be able to enter a special mode while developing which allows you to click on a button and be taken to its editable definition. Editing that definition should either change that button (like the DOM editor in your browser) or all buttons (but that might render the current state impossible to reach?). Another thing which would be cool is knowing the causality of the button’s existence, e.g. which other buttons were pressed in the past to get to where we are?

Why can’t state and code evolve together? (Is textual programming a dead end?)

I was recently helping a friend with a Python project and we had to use a variable to store some text that we’d modify later in the program. She was surprised to learn that this wouldn’t change the text in the source code as well. I had to tell her that this was a reasonable assumption but that the field of programming was not yet ready for her futuristic ideas :’).

This really raises the question “why can’t state and code evolve together?” The two are intimately linked. One version of the state might only make sense for one particular version of the code. The only reason I can think of for why carrying state between versions of a program tends to work is that we explicitly name bits of state, e.g. with database tables. These are usually globally unique identifiers, which kills composition.

To make code-state-coevolution feasible you’d have to be able to do operations like “start over with fresh state”.

This coevolution problem seems extremely hard to solve with a plain text editor. You really want to have program-and-state history as a branchable, rewritable, … history. Just like we do now with Git. I can’t see that happening unless the editor really understands what’s happening, and when you have that you might as well add a lot more UI to “code”.

How would you write a document editor in which the document is defined as an exportable program but is no different from code? In which you can define in-line editing interfaces for the document elements? This is just Smalltalk-like stuff IIRC. “Bring Smalltalk back?”

I’d really like to find out how a library like Reflex works. I also just found out that browsers support weak references and finalizers in JavaScript now, so I think that means Reflex Spider-style implementations can now be done in JS.

IIRC something that’s expensive in Reflex is knowing whether an Event (Event a) has a simultaneous occurrence. Another expensive thing is joining Dynamic (because of the previous simultaneous nested event occurrence thing?). Why are these so expensive in the current Reflex implementation?

Random thought: it would be cool if an implementation supported

occurs :: Event a -> Now (Maybe a)

A function like this might make writing code involving multiple events and behaviors much more ergonomic. Now you only have sample for behaviors, but not the equivalent for dynamics.
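
A hedged sketch of the ergonomics win, written against the same assumed interface (occurs, sample :: Behavior a -> Now a, and a Monad instance for Now): combining two events and a behavior reads as straight-line code.

-- Did both events fire in this moment, while the feature is enabled?
bothWhileEnabled :: Event a -> Event b -> Behavior Bool -> Now (Maybe (a, b))
bothWhileEnabled ea eb enabledB = do
  enabled <- sample enabledB
  ma <- occurs ea
  mb <- occurs eb
  pure (if enabled then (,) <$> ma <*> mb else Nothing)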