The thing to keep in mind as one begins a life of coding is that writing code that works is relatively easy. You are in the flow, you can get it to work. No problem, just a little cut-and-paste-and-modify. But maintenance grows ever more difficult.
To get a better maintenance experience, I am now experimenting with event programming. This is a practical necessity when coding asynchronously, i.e., when external stuff happens that your code needs to respond to. Both user interfaces and interactions with external programs (servers, file systems, databases, …) largely demand such features. This is event-driven programming, and an interesting perspective on it can be found at http://eventdrivenpgm.sourceforge.net/event_driven_programming.pdf
But it is easy enough to have events as the inputs to your program while coding in an otherwise sequential way. The question becomes: can we use events to replace procedural logic, and what do we gain from that?
Let us start with what is meant by an event here. An event is a string message, possibly with associated data, generated by an event emitter. Listeners are functions that respond to the various events that they subscribe to. This is the event model of node.js, at least to the best of my understanding.
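For concreteness, here is that model with node's built-in events module; the event name and data are just an illustration:

    var EventEmitter = require('events').EventEmitter;

    var emitter = new EventEmitter();

    // A listener subscribes to a string-named event.
    emitter.on('file loaded', function (contents) {
        console.log('got ' + contents.length + ' characters');
    });

    // The emitter announces the event, with associated data.
    emitter.emit('file loaded', 'hello world');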
My first attempt was to think of an event as a function call that fans out into multiple functions. This turned out to be a bad idea. I was writing events such as “send data to server” and then, upon return, “add history”. The problem is that the event message starts prescribing the flow, so the listeners that can be added are restricted by it. I was writing actions, which is not what an event should be.
A much better way is to think of events as the end of a function. It is a statement of completion. Instead of the above flow, events might read as “user clicked submit” which initiates sending data to the server, “server returned data” whose listener might be a processing function, “history processed” whose listener is the “add history” function.
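Here is a sketch of that flow with node's emitter; the listener bodies are stand-ins, and a real server call would of course be asynchronous:

    var EventEmitter = require('events').EventEmitter;
    var emitter = new EventEmitter();

    // Each listener reacts to a statement of completion
    // and announces its own completion when done.
    emitter.on('user clicked submit', function (form) {
        // stand-in for sending data to the server
        emitter.emit('server returned data', { history: form.text });
    });

    emitter.on('server returned data', function (reply) {
        // stand-in for processing the returned data
        emitter.emit('history processed', reply.history.trim());
    });

    emitter.on('history processed', function (item) {
        console.log('adding to history:', item);
    });

    emitter.emit('user clicked submit', { text: '  first entry  ' });
    // adding to history: first entry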
What this does is it causes one to think about what should be done before a function is called. I think to myself, “once the server has returned, then I can process the data. And once that is done, I add to the history”. You can see the events and listeners described above coming directly from that thought process.
So events are not calling functions. Rather, they give the news of the day, and functions choose to act or not based on that information. As a nice benefit, if there are no listeners, there are no errors, though we will see below how one can be notified of such an event.
One advantage of such a setup is that you can instantly see when a function is called and what it aims to do. Sure, you can write comments, but having the code itself say it means it is never out of sync. If you want to add more actions to an event, that is not a problem. You can also fairly easily pipe event messages and listener descriptions to a log system; your program can tell you exactly how it is executing at almost no additional cost. Finally, events allow one to separate code cleanly into different parts. The removal of a class from a browser object can happen at the same time as a server call, but the code for these two tasks never cares about the other; the two can live in separate files and never depend on each other.
An analogy that my lovely, brilliant wife came up with is that of an assembly line. Before functions, each program was essentially a one-off. It is the artist’s glass blowing shop. Then the functions gave a methodology. But it was still one person doing it. Each task was the focus, in a long series of tasks. But with events, it becomes a factory with assembly lines converging and diverging, as products flow in and out of the stations along the belts. The process is the focus of each station. They simply report when their task is done. And then whatever follows does its job.
And this is a flexible assembly line. Each listener can be added or removed at any time. One-time listeners can also be added: once fired, they are done. This allows one to avoid checking some condition to figure out what action to take: add the listeners as needed, let them fire, and remove them when they are no longer needed.
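With node's emitter, for example, a one-time listener looks like this:

    var EventEmitter = require('events').EventEmitter;
    var emitter = new EventEmitter();

    // Fires on the first 'connected' event, then removes itself.
    emitter.once('connected', function () {
        console.log('one-time setup');
    });

    emitter.emit('connected'); // listener fires and is gone
    emitter.emit('connected'); // nothing happens; no condition to check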
Now we come to the issue of data. Functions often use and modify data. One of the mottos of maintainable code is to make it very clear what a function depends on and what it affects. A good function uses only the arguments it is given and modifies only by returning. With JavaScript objects passed by reference, not copied, the temptation to modify an object in the middle of a function is very great. Resist. Even worse is reaching up through enclosing scopes to modify variables not declared inside the function. Resist that at all costs.
With events signaling completion, the passing of data seems less attractive. In a processing pipeline, it might pass on what was just created. But functions reacting to it will be limited to receiving the data sent. I initially coded it up using a data object associated with the emitter which was the sole argument passed to listeners. While this works, it is opaque and therefore not maintainable.
Then I realized that events need not send data, but rather the data can live in a central data object associated with the emitter. All data in it should be JSONable (see below for the other types). To understand how I pass arguments to the functions, we need to backtrack a little.
When defining listeners, I use strings for their names and put them in a big action object. That is, my functions are defined as

    "add history" : function () {/* add history */}

But to deal with arguments, we can use

    "add history" : [["date", "headline"], function (date, headline) {/* add history */}]

Then, when installing the function into my global action object, I first wrap it with another function that throws in the arguments from the global data object. And, of course, the wrapper can output the arguments being sent while debugging.
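Here is a minimal sketch of such a wrapper, assuming a plain data object and the [argument names, function] format above; makeListener is my name for the sketch, not necessarily what the library will use:

    // Hypothetical wrapper: pull named arguments from a central data
    // object so listeners need not receive data from the event itself.
    function makeListener(data, spec) {
        if (typeof spec === 'function') {
            return spec; // no arguments wanted
        }
        var names = spec[0];
        var fn = spec[1];
        return function () {
            var args = names.map(function (name) {
                return data[name];
            });
            console.log('calling with', args); // handy while debugging
            return fn.apply(null, args);
        };
    }

    var data = { date: '2012-06-01', headline: 'events!' };

    var actions = {
        'add history': [['date', 'headline'], function (date, headline) {
            console.log(date + ': ' + headline);
        }]
    };

    makeListener(data, actions['add history'])();
    // calling with [ '2012-06-01', 'events!' ]
    // 2012-06-01: events!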
Even more, I can replace "date" with a command object that states how to obtain a value. For example,

    {$$default : "Hi", $$get : "greeting"}

would try to return the “greeting” entry of the data object but, failing that, returns the default of “Hi”. One can also have transformations done, such as converting the date into a specific format, or validation checks. This allows some minor sanitizing and transforming of data before getting to the body of the function. It helps clarify the important part of the function and makes error tracking a little easier, since one can see what is being fed into the function. Is it the input data that is the problem, or the main body logic of the function? It is extremely nice to be able to see immediately which one is the issue.
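One way such a command object might be resolved inside the wrapper; this is a sketch, not the library's actual code:

    // Sketch: an argument spec is either a plain key or a command
    // object such as {$$default : "Hi", $$get : "greeting"}.
    function resolveArg(data, spec) {
        if (typeof spec === 'string') {
            return data[spec];
        }
        var value = data[spec.$$get];
        return (value === undefined) ? spec.$$default : value;
    }

    var data = { date: '2012-06-01' };

    console.log(resolveArg(data, 'date'));                                 // 2012-06-01
    console.log(resolveArg(data, { $$default: 'Hi', $$get: 'greeting' })); // Hi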
At the end of a function, what I do is borrow from MongoDB. MongoDB uses an update object to modify its database contents. For example,

    {$set : {name: "JT", blog: "mythiclogos"}, $inc : {coolfactor: 2} }

would set the name to JT and the blog to mythiclogos, and increment the coolfactor by 2. This object, which can be easily printed as JSON, describes what needs to be modified and how to do it. It is a perfect return object for expressing modifications.
Thus, I create such an object and return it at the end of any function that modifies data. But, not wanting to be limited to MongoDB's commands, I also introduce my own, such as $$emit, $$on, and $$once, which emit events or attach listeners. I use the $$ prefix to distinguish my commands from those that work with MongoDB. With this return object, debugging the modifications is again easy: you can see exactly what is being output from the function.
To implement this, I again relied on the wrapping function. It receives the return from the called function and then does the global modification of the data.
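A sketch of what that might look like, assuming $$emit holds a list of event names; applyUpdate is a hypothetical name, and it emits synchronously here for simplicity (the timing is refined below):

    var EventEmitter = require('events').EventEmitter;

    // Hypothetical: apply a MongoDB-style update object to the
    // central data object, plus the custom $$emit command.
    function applyUpdate(emitter, data, update) {
        if (!update) { return; }
        Object.keys(update.$set || {}).forEach(function (key) {
            data[key] = update.$set[key];
        });
        Object.keys(update.$inc || {}).forEach(function (key) {
            data[key] = (data[key] || 0) + update.$inc[key];
        });
        (update.$$emit || []).forEach(function (event) {
            emitter.emit(event);
        });
    }

    var emitter = new EventEmitter();
    var data = { coolfactor: 0 };

    // A side-effect-free function: it only returns an update object.
    function signBlog() {
        return {
            $set: { name: 'JT', blog: 'mythiclogos' },
            $inc: { coolfactor: 2 },
            $$emit: ['profile updated']
        };
    }

    emitter.on('profile updated', function () {
        console.log(data); // { coolfactor: 2, name: 'JT', blog: 'mythiclogos' }
    });

    applyUpdate(emitter, data, signBlog());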
The emitting of events can also come in two flavors. For the $$emit command I chose to use the node.js-flavored process.nextTick(fn) (really just setTimeout(fn, 0)) to emit only after the current level of listeners is done. That is, this is breadth-first eventing. The other approach (depth-first) is to emit an event immediately, preventing other listeners from acting until that chain is done. I implement this with $$emitnow.
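A sketch of the difference, using setTimeout and a hypothetical emitLater helper for the $$emit flavor:

    var EventEmitter = require('events').EventEmitter;
    var emitter = new EventEmitter();

    // Breadth-first: queue the event until the current listeners finish.
    function emitLater(event) {
        setTimeout(function () { emitter.emit(event); }, 0);
    }

    emitter.on('start', function () {
        console.log('start: before announcing "next"');
        emitLater('next');         // the $$emit flavor
        // emitter.emit('next');   // the $$emitnow flavor would run here
        console.log('start: after announcing "next"');
    });

    emitter.on('next', function () {
        console.log('next: runs only after the "start" listeners finish');
    });

    emitter.emit('start');
    // start: before announcing "next"
    // start: after announcing "next"
    // next: runs only after the "start" listeners finish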
Similarly, to be notified about events that have no listeners, we can add some code in the $$emit to check the listeners array.
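For instance, a sketch using node's listeners array:

    // Sketch: warn when an event goes out with no one listening.
    function emitChecked(emitter, event) {
        if (emitter.listeners(event).length === 0) {
            console.warn('no listeners for "' + event + '"');
        }
        emitter.emit(event);
    }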
And so it was that I used events to get back to the roots of functions with arguments and return statements, functions without side effects.
This was written from the perspective of browser coding. For server-side work, such as in node.js, a global emitter object may be dangerous. But one can create session-based emitters that act as the global object for that user/session. And one may still use a truly global object for site-wide data sharing, such as a highscores object that is attached not to users but to the site itself.
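A sketch of the session-based approach, with hypothetical names:

    var EventEmitter = require('events').EventEmitter;

    // One emitter and one data object per session, not per process.
    var sessions = {};

    function getSession(id) {
        if (!sessions[id]) {
            sessions[id] = { emitter: new EventEmitter(), data: {} };
        }
        return sessions[id];
    }

    // Listeners attached to one session never hear another session's events.
    getSession('alice').emitter.on('hand won', function () {
        console.log('alice won a hand');
    });
    getSession('bob').emitter.emit('hand won');   // silence for alice
    getSession('alice').emitter.emit('hand won'); // alice won a hand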
I intend to develop this more. You can see an example of its use at github.com/jostylr/goodloesolitaire, and the repository where I will be fleshing this out more is at github.com/jostylr/eventingfunctions
Update: I forgot to mention that for non-JSONable objects, I use a kludge. I have a store variable that I import into the function scope and keep such objects there. To record this, I use $$store, which takes a string or an array of strings indicating the key. The store should be used for functions and complicated external objects, such as the return value from an AJAX call. I am open to suggestions for improving that, but I very much like the JSONable data model.
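As a sketch of how $$store might be handled, assuming an array of strings names a nested path (the exact semantics are still up for grabs):

    // Sketch: keep non-JSONable values (functions, AJAX responses, ...)
    // in a separate store rather than in the JSONable data object.
    var store = {};

    function applyStore(store, keys, value) {
        if (typeof keys === 'string') { keys = [keys]; }
        var node = store;
        // walk/create the nested keys, placing the value at the end
        keys.slice(0, -1).forEach(function (k) {
            node = node[k] = node[k] || {};
        });
        node[keys[keys.length - 1]] = value;
    }

    applyStore(store, ['ajax', 'lastResponse'], { status: 200 });
    console.log(store.ajax.lastResponse.status); // 200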