Moved

I am transitioning over to tumblr. I don’t like self hosting and I like the idea of a social network based on content.

The new blog address is, for now, jostylr.tumblr.com, though I intend to use my own domain, jostylr.com, for it eventually.

I also intend to archive this blog at GitHub. And I will blog about that when I get around to doing it.

Ask and Ye Shall Receive

I have been teaching my course online for over a year now. Until this semester, I did not require typed-up solutions. I felt that I needed to give them guidance first, or at least an alternative to word-processor hell, as I like to call it.

Markdown was my path out. I thought it was simple enough, with embedded LaTeX, for them to use. I started writing up some guidelines and then started to write up my solutions that way. After an hour, I realized it was still too complicated.

But I had latched onto the idea of typed-up work. So I booted up NeoOffice and realized one can insert equations easily enough. This made me feel better, as OpenOffice is free and cross-platform. Thus any of my students could use it if they did not have MS Office already.

So I simply made it a requirement and went on with my life. My students all complied without a word of complaint. Done.

It looks better and they actually are writing more of an explanation than a chain of equations. Better for me and better for them.

Ask and ye shall receive.

Events in the Event Loop

The other day I wrote about the event loop in node.js. In particular, there is a nice trick of ceding control to the event loop using process.nextTick().

But a question arises. How are events related to the event loop? Amusingly, they are completely separate.

Naively, I thought that when an event is emitted, the resulting actions are queued up in the event loop. Not so. Events and their handlers are acted on immediately. And this is a good thing.

When are events emitted?

How do we see when events are emitted? Try this code:

/*globals require, console, process*/

var EvEm = require('events').EventEmitter;
var gcd = new EvEm();

gcd.on("hello", function () {
  console.log("Greetings!");
});

gcd.on("goodbye", function () {
  console.log("I must leave now.");
});

gcd.emit("hello");

console.log("Thanks for the greeting.");

process.nextTick(function () {
  gcd.emit("goodbye");
});

console.log("Can we say goodbye yet?");

And here is the output:

Greetings!
Thanks for the greeting.
Can we say goodbye yet?
I must leave now.

We start by registering the two event listeners we will use. Then we emit “hello”. This has the immediate effect of writing “Greetings!” to the console. After that action is done, control returns to where the emit happened. Thus, “Thanks for the greeting.” is written, as it is the next line of code. We have now arrived at process.nextTick. This loads a function into the queue that will, when its turn comes, emit “goodbye”. But we are still in the first tick of the event loop. Moving on to the next line, we get the question “Can we say goodbye yet?”. The first tick is done. The next tick comes and the “goodbye” event is emitted, leading to “I must leave now.”.

Emitting Later

The basic technique for queueing events should now be obvious: use process.nextTick. This is why there was no need to have emitted events automatically put in the next tick. It is easy to delay emission; but if events were delayed by default, it would be difficult to undo.

But why do we want to emit now? First, it gives us an immediate flow. Events become more like the function calls of old. This may or may not be a good thing, but it is a common need. Even more fundamentally, node is a server-side technology. This means that we could be dealing with a large number of requests. The queue could get quite large for incoming requests. If each of them were processed over and over, bit by bit, the memory overhead and time could get quite large.

But isn’t the whole point of node.js to do asynchronous logic? Yes and no. Think of it this way. Imagine a grocery store. We all have experiences where something goes wrong with the person in front of you and the line is held up. Imagine now that person being deftly put aside while other customers get serviced and the issue gets resolved by a manager. This is what node.js does. It takes the external bits, such as database calls, and makes it so that the server can continue to process other customers. When the database calls back, the customer gets back in line and is then dealt with. So while that particular customer has to wait a little extra, overall the experience is faster.

Imagine now if events were delayed at every opportunity. The analogue is that after each ringing up of an item, all the other customers get a chance for one item. And then it continues. This would cause greater delays, not less. And it would take the cash register many, many bits of data to store and correlate with.

So asynchronous for external calls and, if one wants, internal long running processes. But otherwise, the logic flows in sequence. And this is the efficient model.

If you have a regular need for emitting later, you can extend the prototype of EventEmitter to have an emitLater method:

EventEmitter.prototype.emitLater = function () {
  var self = this;
  var args = arguments;
  process.nextTick(function () {
    self.emit.apply(self, args);
  });
};

This is not optimized, but rather just the basic idea.

Here is the earlier example, modified:

/*globals require, console, process*/

var EvEm = require('events').EventEmitter;
EvEm.prototype.emitLater = function () {
  var self = this;
  var args = arguments;
  process.nextTick(function () {
    self.emit.apply(self, args);
  });
};


var gcd = new EvEm();

gcd.on("hello", function () {
  console.log("Greetings!");
});

gcd.on("goodbye", function () {
  console.log("I must leave now.");
});

gcd.emit("hello");

console.log("Thanks for the greeting.");

gcd.emitLater("goodbye");

console.log("Can we say goodbye yet?");

It has the same output as the first example.

You may want to use this technique if multiple actions respond to the same event and you want to ensure that all reactions to the first event are finished before processing any events emitted from those reactions.
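
For instance, here is a minimal, self-contained sketch of that situation. It repeats the emitLater extension from above; the emitter name demo and the messages are just illustrative.

/*globals require, console, process*/

var EvEm = require('events').EventEmitter;
EvEm.prototype.emitLater = function () {
  var self = this;
  var args = arguments;
  process.nextTick(function () {
    self.emit.apply(self, args);
  });
};

var demo = new EvEm();

// The first "hello" listener triggers "goodbye", but only on the next tick.
demo.on("hello", function () {
  console.log("first hello listener");
  demo.emitLater("goodbye");
});

demo.on("hello", function () {
  console.log("second hello listener");
});

demo.on("goodbye", function () {
  console.log("goodbye listener");
});

demo.emit("hello");

Both “hello” listeners run to completion before the “goodbye” listener, because that emit was deferred to the next tick.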

Events calling themselves

Since events are emitted immediately, if an event leads to actions that emit that very event, we enter recursive eventing. And just as with normal function recursion, we can exhaust the call stack. Observe:

/*globals require, console, process*/

var EvEm = require('events').EventEmitter;

var gcd = new EvEm();

gcd.on("hello", function (count, times) {
  count += 1;
  if (count < times) {
    gcd.emit("hello", count, times);
  } else {
    gcd.emit("done", count)    ;
  }
});

gcd.on("done", function (count) {
  console.log(count);
});

gcd.emit("hello", 0, 1e3);

gcd.emit("hello", 0, 1e6);

The first count goes well, but the second exceeds the maximum call stack size. Again, this is because each function does not finish executing until all subsequently emitted events and their actions have been resolved.

We can use the process.nextTick trick to avoid the call stack issue. But it places each event emission onto the queue, which slows the process down considerably.

/*globals require, console, process*/

var count = 0;
var times = 1e6;

var start = (new Date()).getTime();

while (count < times) {
  count += 1;
}

var diff = (new Date()).getTime() - start;

console.log("while diff: "+diff+" count: "+count);

var EvEm = require('events').EventEmitter;

var gcd = new EvEm();

gcd.on("hello", function (count, times, start) {
  count += 1;
  if (count < times) {
    process.nextTick(function () {
      gcd.emit("hello", count, times, start);      
    });
  } else {
    gcd.emit("done", count, start);
  }
});

gcd.on("done", function (count, start) {
  var diff = (new Date()).getTime() - start;
  console.log("event diff: "+diff+" count: "+count);  
});

gcd.emit("hello", 0, times,  (new Date()).getTime());

When I ran this, I saw that the while loop runs in 4 ms while the nextTicked/evented loop took 2000 ms. But it does avoid a call stack crash.

On Event Loops in node.js

The wonderful environment of node.js uses an event loop rather than threading to deal with multiple incoming requests and more. Threading is tricky, or so I have been told. Event loops are less tricky, or so I have been told. Why?

I think the key reason is that threads are running at their own pace, separately. In event loops, until the loop loops, it is a single execution of logic. The idea is that there is a queue that takes in requests to act. Each time the current logic ends its execution, the queue is checked and the next action is taken, if any.

To demonstrate this, consider the simple node.js server, saved in server.js:

//server.js
/*globals require, console, process*/
var http = require('http');
var server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
});
server.listen(1337, "127.0.0.1");
server.on('close', function () {
  console.log('server closed');
});
console.log('Server running at http://127.0.0.1:1337/');

// on exit, let us know.
process.on('SIGINT', function () {
  console.log('server told to shut down');
  server.close();
});

This server should work by running node server.js. You should be able to send an interrupt with ctrl-c, at least on a Mac, initiating the shutdown procedure. Without the process.on block, ctrl-c kills the server immediately.

Next we add a while loop:

//noserver.js
/*globals require, console, process*/
var http = require('http');
var server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
});
server.listen(1337, "127.0.0.1");
server.on('close', function () {
  console.log('server closed');
});
console.log('Server running at http://127.0.0.1:1337/');

var count = 0;
while (1) {
  if (count % 1000000 === 0) {
    console.log(count / 1000000 );
  }
  count += 1;
}

Running this will produce a non-functioning server. Why? Because the while loop never cedes control. The event loop is never reached. To make sure you can kill it with ctrl-c, we remove the process.on block first.1

To be explicit about loop access, we need to use process.nextTick():

//servertick.js
/*globals require, console, process*/
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, "127.0.0.1");
console.log('Server running at http://127.0.0.1:1337/'); 

var count = 0;
process.nextTick(function self () {
  if (count % 1000000 === 0) {
    console.log(count / 1000000 );
  }
  count += 1;
  process.nextTick(self);
});

Notice how there is no evident loop. The loop is the event loop itself. Every time the function self runs, it queues itself for the next tick. The function process.nextTick is a queueing agent. As far as I know, there is no way to cede direct control to the event loop. All one can do is queue an action explicitly and be done with the executing logic for the moment.

An analogue to nextTick in the browser is setTimeout(fn, 0), which is what browserify uses. But the docs in node.js claim that nextTick is much more efficient than that and, thus, actually different. Let’s test this by running the following code:

/*globals require, console, process*/

var times = 10e6;
var count = 0;

var start = (new Date()).getTime();

while (count < times) {
  count += 1;
}

var diff = (new Date()).getTime() - start;

console.log("while diff: "+diff+" count: "+count);

start = (new Date()).getTime();
count = 0;

process.nextTick(function self () {
  if (count < times) {
    count += 1;
    process.nextTick(self);
  } else {
    diff = (new Date()).getTime() - start;
    console.log("nextTick diff: "+diff+" count: "+count);
    start = (new Date()).getTime();
    count = 0;
    setTimeout(function setself () {
      if (count < times) {
        count += 1;
        setTimeout(setself, 0);
      } else {
        diff = (new Date()).getTime() - start;
        console.log("setTimeout diff: "+diff+" count: "+count);
        start = (new Date()).getTime();
        count = 0;
        process.nextTick(function lesstick () {
          var i;
          for (i = 0; i < 1e6; i += 1) {
            count += 1;
          }
          if (count < times) {
            process.nextTick(lesstick);
          } else {
            diff = (new Date()).getTime() - start;
            console.log("delayed nextTick diff: "+diff+" count: "+count);
          }
        });
      }
    }, 0);

  }
});

This is not written particularly well; a better style would be to define the functions separately and then set up the callbacks, or to use events, such as with eventingfunctions. But as you can see, we first do a while loop. This is fast. Then we use nextTick callbacks. And we wait. Our third trial is setTimeout. We read a book. We come back and see the fourth trial use nextTick, but it does a million computations each time before releasing control. This is fairly fast and still allows other stuff to happen. The results I obtained are2:

while diff: 34 count: 10000000
nextTick diff: 13840 count: 10000000
setTimeout diff: 109973 count: 10000000
delayed nextTick diff: 62 count: 10000000

As one can see, accessing the event loop is a costly procedure, but it is much better to use nextTick than setTimeout. For long-running computations, a separate process is, of course, preferred. But at the least, use the trick in the delayed nextTick.

The code can be found at github.com/jostylr


  1. I find ctrl-z, jobs -l, kill -s KILL # where # is the process number, works wonders.

  2. in milliseconds

Decorating the Truth with God

On a good day, I might say that religion decorates falsehoods with God. On a bad day, I might quote Richard Dawkins. Still, God is a resonant idea with many of us. And I believe that is because there is a God. But religion adds a lot of stuff to the core idea of God. Lots of crazy, often bad, stuff.

I propose that we use science as the backdrop for discussing God. That is, take the truth as we know it and decorate it with a bit more, namely the notion of God. As science changes, so too should the myth change. Instead of God-given knowledge as purported in religions, we have human-discovered knowledge about the world and God.

As an example, in quantum mechanics, there is the wavefunction, a single object that governs all aspects of the world. In the specific theory of Bohmian mechanics1, the wavefunction guides the particles that constitute our existence.

An infinite, guiding hand, instantly aware of everything2? That sounds like God. So why not claim this scientific object as God?

But what of the lack of consciousness? Well, here is where the payoff is. There is no consciousness to start with. But over time, the wavefunction and the universe evolve in such a way that there are indeed entities in it that are conscious. And remember, the wavefunction guides the particles. The particles have no back influence on the wavefunction. Thus, it seems to me, consciousness, whatever that is with its strange notion of self and choice, is in the wavefunction. Thus, I see our evolution into conscious beings as part of the evolving consciousness of God.

So rather than this image of a watch-maker winding up the universe and seeing where it goes, we have the image of this thing that slowly evolves into a conscious God. And we are the instruments of that appearance.

So what? Well, this idea can guide us. Think about it. If you get angry, God gets angry. Is that a good thing? I think not. The way we live, act, and feel, are all tied into the mind of God. Does it not seem reasonable to want to lead God into a good place of existence? We do that through being happy, forgiving, kind people.

All doctrines espouse such a way of life, including religions, spiritualists, and doctors. But they all tend to couch it in selfish terms: “It is a matter of eternity in heaven or in hell”; “Good things flow back to you”; “Your health and longevity are at stake”. All of these may be valid reasons, but they seem small-minded to me. Imagine this point of view: “Good actions lead to a good God.” Wow, that is power. Live the way you would want God to be. Do you want God to be smoking marijuana? Loafing on a couch watching TV? Being abusive? No. I think not. So don’t be that way either. Lead an interesting life, full of explorations, creativity, kindness. Be forgiving of others.

Do I believe in this? I think it is a wonderful notion. Why should we strive to be happy, kind, positive people? Because that is what God will become if we become that. If we succumb to anger and fear, so too does God.

But do I believe in this? I say that it is not a matter of belief. The facts are clear and science-based. What I have done is decorate the truth.


  1. Other interpretations also work, such as many worlds and GRW, but my preference is Bohmian mechanics for its simplicity and personal appeal. Note that standard quantum mechanics lacks a description of reality as it discusses experimental results exclusively. This is the origin of Schrödinger’s cat.

  2. Note that this somewhat contradicts relativity and that aspect is still being worked on.

The Role of the Infinite in the Finite

I am a big proponent of there being only a finite number of numbers. Yes, I know the argument about there being an infinite number of natural numbers: assume not, take the largest one, add one. And it really is that infinity which lets in all the rest, as far as I can tell.

Here is a counterexample: computer mathematics. I like to program. I like math. Putting them together is a sweet deal. But one has to admit that there is not an infinite number of numbers that can be represented on a computer, or even on all the computers in use. You can set up systems that can get rather large, perhaps arbitrarily large, but there will always be a finite largest number. And yet, we can model everything we need with the computer just fine. We do our calculations with these machines all the time.

So in some sense, embracing the computer leads us to contemplate a mathematical world of finite extent. Let’s start with what goes wrong with the above argument about infinity: it assumes we can know the largest number. And that is false on the computer system. There is a largest number that will ever be represented on a computer, at least with our current technology; I do not know of any speculative technology that disputes this idea either. If there are a finite number of particles in the universe, we are pretty much stuck. But that largest number we do not know. If we did, we could just add one to it, even on a computer. For the standard number representation on a computer, adding one to the largest (which is known for any machine/language) will either be an error or cycle to some other number. Notice that the computer model is not a model of all the arithmetic axioms. In particular, addition is not closed in the system and/or does not preserve the ordering. Also take note that we are free to represent numbers differently than the standard representation: we could use arrays to do numbers in base a million, we could use logarithms to represent a much larger range of numbers easily, we could use strings directly.

What brought this up for me? Newton’s method. I was coding it up, ignoring the calculus of taking derivatives: just approximate by fitting a secant to the function instead of the actual tangent line. So we need to take two points fairly close to each other. Aye, there’s the rub. They can only get so close. For example, if the computer can hold 4 digits for a number, then 3.124 and 3.125 are adjacent numbers. There is nothing in between. Adding .0001 to 3.124 will lead to 3.124. So the delta x is exactly 0. In reality, we can easily get something like 10 digits of accuracy, which is enough for anything. But still, there is a limit.
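
Here is a minimal sketch of that secant-style iteration in JavaScript. The function f, the step h, and the helper name newtonSecant are illustrative choices, not anything canonical; the point is that once h drops below the machine resolution, the secant slope collapses to zero, which is exactly the limit described above.

// A sketch of Newton's method with a secant (finite-difference) slope.
// f, h, and newtonSecant are illustrative names.
var f = function (x) { return x * x - 2; };

var newtonSecant = function (f, x0, h, maxSteps) {
  var x = x0, slope, i;
  for (i = 0; i < maxSteps; i += 1) {
    // Approximate the derivative with a secant through x and x + h.
    slope = (f(x + h) - f(x)) / h;
    if (slope === 0) { break; } // h is below the resolution: x + h rounds back to x
    x = x - f(x) / slope;
  }
  return x;
};

console.log(newtonSecant(f, 1, 1e-7, 20));  // close to Math.sqrt(2)
console.log(newtonSecant(f, 1, 1e-20, 20)); // slope is 0 on the first step; stuck at 1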

Calculus, on the other hand, allows us to compute the derivative exactly. We get a function which we can then use to get the slope and not encounter a problem of resolving the differences and quotients of the numerical derivative. So it is in this way that the infinite world of normal mathematics really does make a difference in the finite world. It is perfectly acceptable to treat it as an outside trick, a rule that works without an underpinning.

So do I believe in the infinite? Much like my view of God, I think of it being there as a useful guide, but without really committing myself.

Drafted on 6/15/11.

Working Out the Developer Flow

I like JavaScript. So that means I have chosen Node.js and MongoDB as my server language and database. After all, Node.js is JavaScript on the server and MongoDB uses JSON as its basic data format and also for its query objects.

Having just one language to work with is a complete joy. The browserify package allows me to use the same module system and even the same modules on the browser as on the server. The package query-engine allows me to use MongoDB query objects in my JavaScript code. And my eventingfunctions is an attempt to push the entire MongoDB object model into my flow.

So I have unity all along the coding front. But there is still the question of the workflow. I use TextMate. That is working for me. Fortunately, people write bundles for it which is what makes it very useful. With luck, that will continue to be the case.

I use git via GitHub, but mostly from a graphical client. I started with GitHub for Mac which was really easy to use. But I am currently liking GitBox as it is closer to the metal, as they say. I feel like I am seeing the command line commands and I do feel more comfortable with git command line. It also allows me to see the remote changes before I pull, which is a very nice feature.

Coding in node.js, I use NPM. Until recently, I was a basic consumer: npm install whatever. When I realized that I should extract a module from my project, I looked around for a way to do that. I tried git submodule. It seemed a reasonable idea. It met with failure. The general setup for node modules is npm now. But even more so, I find git itself to be a bit difficult at times, and submodules seem to be the difficult part. Removing them was complicated. I also was getting confused as to where to edit the submodules. Keeping the stand-alone clone up to date with the embedded one was a silly process.

I began to look again. And then I found it. npm has a command called link. Go to the directory of your stand-alone module and run npm link. This makes a symbolic link with the global install registry. In the project directory, now repeat link, but with the module name. So if eventingfunctions is my module, I first do npm link in the eventingfunctions directory. Next, let’s say that the project is webapp. Then go to webapp’s root directory and type npm link eventingfunctions. In the node_modules directory, eventingfunctions should appear. As you make changes in the stand-alone, they are instantly reflected in the linked directory; they are symbolic links, after all.

To set up webapp to install all its dependencies with a single command, use npm install. This will only work with a package.json present in the working directory. Use npm init to create a basic package.json and npm help json to get some guidance on that file. Getting the versioning correct does require some guidance.

Having cleaned up that mess1, I am now quite pleased with npm as a way of managing not only published projects, but my own.

I also learned about global versus local installs in npm. The recommendation is that command-line apps should be installed globally with npm install -g, while modules used inside a project via require() should be installed locally. If doing both, install it both ways.

I did, however, have to modify one of the files in one of the npm projects. I am not sure the best way to handle it. I guess I will just write a script to run after an update to make the changes.

My next challenge is to get my blogs out of WordPress and into GitHub. I am pretty sure I will be using markdown for the blog entries (this is written in markdown) via pandoc, and probably json templates for a templating engine. My plan is to make a static site generator using those tools and then publish my blogs on GitHub Pages. For images, I am thinking I will put them on my DropBox account and link to them from there.

The choice of markdown is easy since it is simple to learn and consistently chosen in the node community as well as GitHub. The choice of json templates is harder as it has little community wide acceptance. However, having read the author’s thoughts on why, I tend to agree. Namely, it is declarative, minimal, and JSON. I like all of that. But we shall see.


  1. git mergetool helped after I deleted the submodules without first pulling other conflicting changes

Eventing Functions

The thing to keep in mind as one begins a life of coding is that writing code that works is relatively easy. You are in the flow, you can get it to work. No problem, just a little cut-and-paste-and-modify. But maintenance grows ever more difficult.

To get a better maintenance experience, I am now experimenting with event programming. This is a practical necessity when coding asynchronously, i.e., external stuff happens that your code needs to respond to. Both user interfaces and interacting with external programs (servers, file systems, databases, …) largely demand such features. This is event-driven programming and an interesting perspective on it can be found at http://eventdrivenpgm.sourceforge.net/event_driven_programming.pdf

But it is easy enough to have events as the inputs to your program while coding in an otherwise sequential way. The question becomes, can we use events for replacing procedural logic and what do we gain from that?

Let us start with what is meant by an event here. An event is a string message, possibly with associated data, generated by an event emitter. Listeners are functions that respond to the various events that they subscribe to. This is the event model of node.js, at least to the best of my understanding.

My first attempt was to think of an event as a function call that turns into multiple functions. This turned out to be a bad idea. I was writing events such as “send data to server” and then, upon return, “add history”. But the problem with that is that one starts prescribing the flow with the event message. The listeners that can be added are then restricted by the event message. I was writing actions, which is not what an event should be.

A much better way is to think of events as the end of a function. It is a statement of completion. Instead of the above flow, events might read as “user clicked submit” which initiates sending data to the server, “server returned data” whose listener might be a processing function, “history processed” whose listener is the “add history” function.

What this does is it causes one to think about what should be done before a function is called. I think to myself, “once the server has returned, then I can process the data. And once that is done, I add to the history”. You can see the events and listeners described above coming directly from that thought process.

So events are not calling functions. Rather, they give the news of the day, and functions choose to act or not based on that information. As a nice benefit, if there are no listeners, there are no errors, though we will see how one can be notified of such an event.

One advantage of such a setup is that you can instantly understand when a function is called and what it aims to do. Sure, you can write comments, but having the code implement it means that it is never out of sync. If you want to add more actions to an event, that is not a problem. You can also fairly easily pipe event messages and listener descriptions to a log system. You can see your program tell you exactly how it is executing with almost no additional cost. Finally, events allow one to separate code cleanly into different parts. The removal of a class from a browser object can happen at the same time as a server call, but the code of these two never care about each other, can be in separate files, and never depend on each other.

An analogy that my lovely, brilliant wife came up with is that of an assembly line. Before functions, each program was essentially a one-off. It is the artist’s glass blowing shop. Then the functions gave a methodology. But it was still one person doing it. Each task was the focus, in a long series of tasks. But with events, it becomes a factory with assembly lines converging and diverging, as products flow in and out of the stations along the belts. The process is the focus of each station. They simply report when their task is done. And then whatever follows does its job.

And this is a flexible assembly line. Each listener can be added or removed at any time. And one time listeners can also be added. Once fired, they are done. This allows one to avoid having to check some condition in order to figure out what action to take: Add the listeners as needed, let them fire, and remove them when they are no longer needed.

Now we come to the issue of data. Functions often use data and modify data. One of the mottos of maintainable code is to make it very clear what a function depends on and what it affects. A good function is one that uses only the arguments it is given and modifies only by returning. With JavaScript objects being passed by reference, not copied, the temptation to modify an object in the middle of a function is very great. Resist. Even worse is reaching up through enclosing scopes to modify variables not declared inside the function. Resist at all costs.

With events signaling completion, the passing of data seems less attractive. In a processing pipeline, it might pass on what was just created. But functions reacting to it will be limited to receiving the data sent. I initially coded it up using a data object associated with the emitter which was the sole argument passed to listeners. While this works, it is opaque and therefore not maintainable.

Then I realized that events need not send data, but rather the data can live in a central data object associated with the emitter. All data in it should be JSONable (see below for the other types). To understand how I pass arguments to the functions, we need to backtrack a little.

When defining listeners, I use strings for their names and put them in a big action object. That is, my functions are defined with "add history" : function () {//add history}. But to deal with arguments, we can use "add history" : [["date", "headline"], function (date, headline) {//add history}]. Then, when installing the function into my global action object, I wrap it first with another function that can throw in the arguments from the global data object. And, of course, it can output the arguments being sent while debugging.
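
As a rough sketch of that wrapping idea (this is not the eventingfunctions implementation; the names data, wrap, and actions are just illustrative):

// Sketch only: pull named arguments out of a central data object before
// calling the listener. data, wrap, and actions are illustrative names.
var data = { date: "2011-11-01", headline: "Eventing Functions" };

var wrap = function (argNames, fn) {
  return function () {
    var args = argNames.map(function (name) {
      return data[name];
    });
    return fn.apply(null, args);
  };
};

var actions = {
  "add history": wrap(["date", "headline"], function (date, headline) {
    console.log("history:", date, headline);
  })
};

actions["add history"](); // history: 2011-11-01 Eventing Functions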

Even more, I can replace, “date” with a command object that states how to obtain a value. For example, {$$default : "Hi", $$get : "greeting"} would try to return the “greeting” data object, but failing that, it returns a default of “Hi”. One can also have transformations being done, such as converting the date into a specific format. Or validation checks. It allows one to do some minor sanitizing and transforming on data before getting to the body of the function. This can help clarify the important part of the function as well as making error tracking a little easier since one can see what is being fed into the function. Is it the input data that is a problem or the main body logic of the function? It is extremely nice to be able to see immediately which one is the issue.
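
A sketch of how such a command object might be resolved against the data object (the helper resolveArg is hypothetical, not the library's API):

// Hypothetical helper: resolve either a plain key or a command object
// such as {$$default: "Hi", $$get: "greeting"} against the data object.
var resolveArg = function (spec, data) {
  if (spec && typeof spec === "object" && spec.hasOwnProperty("$$get")) {
    var value = data[spec.$$get];
    return (value === undefined) ? spec.$$default : value;
  }
  return data[spec]; // plain string: straight lookup
};

console.log(resolveArg({$$default : "Hi", $$get : "greeting"}, {}));                // Hi
console.log(resolveArg({$$default : "Hi", $$get : "greeting"}, {greeting: "Yo"}));  // Yo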

At the end of a function, what I do is borrow from MongoDB. MongoDB uses an update object to modify its database contents. For example, {$set : {name: "JT", blog: "mythiclogos"}, $inc : {coolfactor: 2} } would set the name to be JT, the blog to mythiclogos and increment the coolfactor by 2. This object, which can be easily printed as JSON, describes what needs to be modified and how to do it. This is a perfect return object to do the modifications.

Thus, I create such an object and return it at the end of a function that modifies data. But, not wanting to be limited to MongoDB’s commands, I also introduce my own, such as $$emit, $$on, and $$once, which will emit events or attach listeners. I use the $$ to distinguish them from what works with MongoDB. With this return object, debugging the modifications is again easy. You can see what is being output from the function.

To implement this, I again relied on the wrapping function. It receives the return from the called function and then does the global modification of the data.
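
A rough sketch of what applying such a return object could look like; applyUpdate and the event name are illustrative, and only $set, $inc, and $$emit are handled here:

/*globals require, console, process*/

// Sketch only: apply a MongoDB-style update object, plus a $$emit command,
// to a central data object.
var EvEm = require('events').EventEmitter;
var gcd = new EvEm();
var data = { coolfactor: 0 };

var applyUpdate = function (update) {
  var key;
  for (key in update.$set || {}) { data[key] = update.$set[key]; }
  for (key in update.$inc || {}) { data[key] = (data[key] || 0) + update.$inc[key]; }
  if (update.$$emit) {
    process.nextTick(function () { gcd.emit(update.$$emit); });
  }
};

gcd.on("profile updated", function () {
  console.log(data); // { coolfactor: 2, name: 'JT', blog: 'mythiclogos' }
});

applyUpdate({
  $set: { name: "JT", blog: "mythiclogos" },
  $inc: { coolfactor: 2 },
  $$emit: "profile updated"
});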

The emitting of events can also come in two flavors. For the $$emit command, I chose to use the node.js-flavored process.nextTick(fn) (in the browser, setTimeout(fn, 0)) to emit only after the current level of listeners is done. That is, this is breadth-first eventing. The other approach (depth-first) is to emit an event immediately, preventing other listeners from acting until that chain is done. I implement this with $$emitnow.

Similarly, to be notified about events that have no listeners, we can add some code in the $$emit to check the listeners array.
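
A sketch of that check, using EventEmitter’s listeners() method (the wrapper name emitWithCheck is illustrative):

/*globals require, console, process*/

// Sketch only: warn when an event is emitted with no listeners attached.
var EvEm = require('events').EventEmitter;

var emitWithCheck = function (emitter, event) {
  if (emitter.listeners(event).length === 0) {
    console.log("no listeners for: " + event);
  }
  process.nextTick(function () {
    emitter.emit(event);
  });
};

emitWithCheck(new EvEm(), "lonely event"); // no listeners for: lonely event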

And so it was that I used events to get back to the roots of functions with arguments and return statements, functions without side effects.

This was written from the perspective of browser coding. For server-side, such as in node.js, a global emitter object may be dangerous. But one can create session-based emitters that can be the global object for that user/session. And one may use a global object for global data sharing, such as a highscores object which is not attached to users, but to the site itself.

I intend to develop this more. You can see an example of its use at github.com/jostylr/goodloesolitaire, and the repository where I will be fleshing this out more is at github.com/jostylr/eventingfunctions

Update: I forgot to mention that for non-JSONable objects, I use a kludge. I have a store variable that I import into the function scope and store the objects there. To record this, I use $$store which takes in a string or array of strings indicating the key. The store should be used for functions and complicated external objects such as the return value from an AJAX call. I am open to suggestions on improving that, but I very much like the JSON-able data model.

iPhone4S: The Low Light Challenge

I recently acquired an iPhone4S. It is a nice phone, as is the iPhone4. It has a voice-driven personal assistant, which is entertaining and possibly useful, but it was not my motivation for purchasing it. Rather, I take lots of baby photos. I have shot thousands of photos of my baby since she was born 6 months ago. All of that was on the iPhone4.

But I find that as fall deepens, the brilliant daylight of summer fades. In its place I have low light levels that make the iPhone4 slow to take a picture and poor in quality. The promise of the iPhone4S is quick photos of stunning quality even at low light levels. Does it deliver?

Before buying, I saw a street light comparison. It made my hopes go up. In the store, I tried picture taking quickly and it was noticeably different even in the brightly lit store. But the real test was at home, with baby.

And in my opinion, it delivers the goods. See my flickr set of iPhone 4S and some corresponding iPhone4 pictures. It is still a little slow to start, but taking multiple pictures is very fast. The coloring is amazingly life-like. And the low-light pictures, while still grainy, are much improved and one can make out the subject clearly. All in all, if the iPhone4 is your primary camera as it is mine, upgrade to the 4S and enjoy!

As a side note, this marks the completion of the Applification of my household: 2 iPhones, 1 iMac, MacBookAir, MacBook, iPad, iPod Touch, iPod Mini. Until now, my wife had a clamshell phone. Now she has my iPhone4. We are also now an AT&T house as we are both on the same phone plan which means we both get data plans and shared minutes, but at the same price as our two plans separately. I also have a discount with my employer which applies to the $25 data plan reducing it to almost the price of the $15 plan which has no discount. So we did that and we now have a lot of data streaming potential.

All in all, I am a very satisfied customer.


The rise and fall of The Falls by Google