
Spying on Functions January 8, 2018

Posted by Andor Saga in JavaScript, Programming.


I’ve been working through an insightful series of coding tutorials that walks developers through creating Super Mario in JavaScript from scratch. The episodes were created by Pontus Alexander and I highly recommend them.

While working through the videos, I came across a really interesting JavaScript pattern that Pontus demonstrates in episode 5. Essentially, we can spy on a given function or method and add custom logic every time it is called.

Let’s look at an example. We start with a simple class:

class Person {
  sayHi(name) {
    console.log(`Yo ${name}!`);
  }
}

A common use case is counting how many times a function is called. It’s overly simplistic, but it clearly demonstrates the idea, so let’s find out how many times sayHi is called.

Since functions/methods are just values, we can create a reference to the original method and then overwrite it.

let person = new Person();

// Save a reference to the original function
let sayHiOrig = Person.prototype.sayHi;

// Redefine sayHi. This allows us to add custom logic,
// at the same time leaving sayHi to do whatever it does.
let count = 0;
Person.prototype.sayHi = function(name) {
  console.log(`sayHi called ${++count} times.`);
  sayHiOrig.call(this, name);
};

We then call the function normally. The class itself is unaware of the spy and remains completely decoupled from it, which is super great.

person.sayHi("Pascal");
// > sayHi called 1 times.
// > Yo Pascal!
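The same trick generalizes into a small reusable helper. Here’s a minimal sketch of one way to do it; the `spyOn` name and the returned state object are my own, not from the episode:

```javascript
// Generic spy: replace a method on an object (e.g. a prototype)
// with a wrapper that counts calls, while still delegating to
// the original implementation.
function spyOn(obj, methodName) {
  const original = obj[methodName];
  const state = { calls: 0 };

  obj[methodName] = function (...args) {
    state.calls++;
    return original.apply(this, args);
  };

  return state;
}

class Person {
  sayHi(name) {
    console.log(`Yo ${name}!`);
  }
}

const spy = spyOn(Person.prototype, "sayHi");
new Person().sayHi("Pascal"); // prints "Yo Pascal!"
console.log(spy.calls);       // prints 1
```

Test frameworks like Sinon and Jest provide the same idea as `spy`/`mock` utilities, but it’s only a few lines to roll your own.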

Normal Mapping using PShaders in Processing.js April 4, 2015

Posted by Andor Saga in gfx, GLSL, JavaScript, Open Source, Processing.js.

Try my normal mapping PShader Demo:

Last year I made a very simple normal map demo in Processing.js and posted it on OpenProcessing. It was fun to write, but the performance bothered me: it was very slow, because it uses a 2D canvas and so gets no hardware acceleration.

Now, I have been working on adding PShader support to Processing.js in my spare time, so here and there I’ll make a few updates. After recently fixing a bug in my implementation, I had enough working to port my normal map demo over to shaders. Instead of having the lighting calculations in the sketch code, I could have them in GLSL shader code. I figured this should increase the performance quite a bit.

Converting the demo from Processing/Java code to GLSL was pretty straightforward, aside from working out a couple of annoying bugs. I got the demo to resemble what I originally had a year ago, but now the performance is much, much better 🙂 I’m no longer limited to a tiny 256×256 canvas and can use the full client area of the browser. Even with specular lighting, it runs at a solid 60 fps. Yay!

If you’re interested in the code, here it is. It’s also posted on github.

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 iResolution;
uniform vec3 iCursor;

uniform sampler2D diffuseMap;
uniform sampler2D normalMap;

void main(){
	vec2 uv = vec2(gl_FragCoord.xy / 512.0);
	uv.y = 1.0 - uv.y;

	vec2 p = vec2(gl_FragCoord);
	float mx = p.x - iCursor.x;
	float my = p.y - (iResolution.y - iCursor.y);
	float mz = 500.0;

	vec3 rayOfLight = normalize(vec3(mx, my, mz));
	vec3 normal = vec3(texture2D(normalMap, uv)) - 0.5;
	normal = normalize(normal);

	float nDotL = max(0.0, dot(rayOfLight, normal));
	vec3 reflection = normal * (2.0 * nDotL) - rayOfLight;

	vec3 col = vec3(texture2D(diffuseMap, uv)) * nDotL;

	// Specular lighting, toggled by the cursor state.
	if(iCursor.z == 1.0){
		float specIntensity = max(0.0, dot(reflection, vec3(.0, .0, 1.)));
		float specRaised = pow(specIntensity, 20.0);
		vec3 specColor = specRaised * vec3(1.0, 0.5, 0.2);
		col += specColor;
	}

	gl_FragColor = vec4(col, 1.0);
}

Wibbles! March 19, 2015

Posted by Andor Saga in 1GAM, Game Development, Games, JavaScript, Open Source.


I wanted to learn the basics of require.js and pixi.js, so I thought it would be fun to create a small game to experiment with these libraries. I decided to make a clone of a game I used to play: Nibbles. It was a QBASIC game that I played on my 80386.

Getting started with require.js was pretty daunting: there’s a bunch of documentation, but I found much of it confusing. Examples online helped some, but experimenting to see what worked and what didn’t helped me the most. On the other hand, pixi.js was very, very similar to Three.js, so much so that I found myself guessing the API and was, for the most part, right. It’s a fun 2D WebGL rendering library with a canvas fallback. It was overkill for what I was working on, but it was still a good learning experience.

Implementing PShader.set() October 5, 2013

Posted by Andor Saga in JavaScript, Processing, Processing.js, PShader.

I was in the process of writing ref tests for my implementation of PShader.set() in Processing.js when I ran into a nasty problem. PShader.set() can take a variety of types, including single floats and integers, to set uniform shader variables. For example, we can have the following:

pShader.set("i", 1);
pShader.set("f", 1.0);

If the second argument is an integer, we must call uniform1i on the WebGL context; otherwise uniform1f needs to be called. But in JavaScript, we can’t distinguish between 1.0 and 1. I briefly considered modifying the interface for this method, but the last thing I wanted was to change it. So I thought about it until I came up with an interesting solution: why not call both uniform1i and uniform1f, one right after the other? It turns out this works! One call will always fail and the other will succeed, leaving us with the proper uniform set.
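In rough JavaScript terms, the trick looks something like the sketch below. This is only an illustration of the idea, not the actual Processing.js source; `gl` and `program` stand in for the real WebGL context and shader program:

```javascript
// Sketch of the dual-call trick. A type-mismatched uniform call
// doesn't throw a JavaScript exception; it just generates a GL
// error (INVALID_OPERATION) and leaves the uniform untouched.
// So whichever call matches the uniform's declared GLSL type
// sticks, and the other is effectively a no-op.
function setUniform(gl, program, name, value) {
  const location = gl.getUniformLocation(program, name);
  gl.uniform1i(location, value); // sticks if the uniform is an int
  gl.uniform1f(location, value); // sticks if the uniform is a float
}
```

The cost is one wasted GL call (and a pending GL error) per set, which is negligible next to the draw calls themselves.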

Using WebGL readPixels? Turn on preserveDrawingBuffer August 1, 2011

Posted by Andor Saga in JavaScript, Open Source, point cloud, webgl, XB PointStream.

Since I’ve already written a few blog posts about WebGL’s readPixels, and because developers seem to find my page mostly via this keyword, I decided to help clarify a recent issue I found.

In some of my WebGL scripts I have a feature which allows users to convert 3D images to 2D (see here). The script does this simply by making a call to readPixels.

This used to work until browsers (namely WebKit and Chrome) began implementing the preserveDrawingBuffer option. This is an option set when the WebGL context is acquired and as its name suggests it preserves drawing buffers between frames.

What this means is that if preserveDrawingBuffer is false/off (which it is by default), the browser will not keep the depth and color buffers after each draw call. Trying to call readPixels in this state will result in an array of zeroed-out data.

If you’re planning on calling readPixels, you’ll need to turn on this option when you get your WebGL context.

var context = canvas.getContext("experimental-webgl", {preserveDrawingBuffer: true});

The WebGL spec states that this may cause a performance hit on some machines so only enable it if you really need to.
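Putting the two together, a minimal read-back helper might look like the sketch below; `captureFrame` is a name I made up for illustration, not part of any library:

```javascript
// Reads back the current frame as RGBA bytes (rows come back
// bottom-first). Assumes the context was created with
// preserveDrawingBuffer: true; without it, the array will
// simply come back full of zeroes on WebKit/Chrome.
function captureFrame(gl, width, height) {
  var pixels = new Uint8Array(width * height * 4);
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return pixels;
}
```

From there you can copy the bytes into a 2D canvas via ImageData to produce the downloadable 2D image.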

Defenestrating WebGL Shader Concatenation June 23, 2011

Posted by Andor Saga in JavaScript, Open Source, point cloud, webgl, XB PointStream.

When I began writing the WebGL shaders for XB PointStream, I placed the vertex and fragment shaders together in their own .js file, separate from the library. In each file I declared the two necessary variables. So, for the cartoon shader it looked something like this:

// Fragment shader ...
var cartoonFrag = 
"#ifdef GL_ES\n" +
"  precision highp float;\n" +
"#endif\n" +

"varying vec4 frontColor;" +
"void main(void){" +
"  gl_FragColor = frontColor;" +
"}";

// Vertex shader ...
var cartoonVert = "..."

If users wanted to render their point clouds with a cartoon effect, they would include the .js resource in their HTML page and tell XB PointStream to create a program object:

// ps is an instance of XB PointStream
progObj = ps.createProgram(cartoonVert, cartoonFrag);

The problem with this approach is that the GLSL code is very difficult to read and maintain. Yesterday I finally changed this. Instead of having users include the shader resources, users would call a function and pass in the path of the shader file. The function would then XHR the file and return a string with the file’s contents. I started by defining an interface:

var vertSrc = ps.getShaderStr("shaders/cartoon.vs");
var fragSrc = ps.getShaderStr("shaders/cartoon.fs");
cartoonProg = ps.createProgram(vertSrc, fragSrc);

I decided that would be simple enough. I then implemented the getShaderStr function:

this.getShaderStr = function(path){
  var XHR = new XMLHttpRequest();
  XHR.open("GET", path, false);
  XHR.send(null);
  if(XHR.status !== 200){
    this.println('Error reading file "' + path + '"');
  }
  return XHR.responseText;
};

You’ll notice I made the request synchronous. I did this for a couple of reasons. First, it keeps things simple on the user’s end. They aren’t forced to create callbacks and figure out when they can begin setting shader uniforms. Second, the data sets I’m dealing with (3MB – 40 MB) significantly outweigh a tiny request of 118 bytes. If any performance improvements are to be made in the library, they won’t be made by asynchronously XHR’ing shader code. However, I’m still open to suggestions. Leave a comment if you think this can be done more elegantly.
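For comparison, an asynchronous version might look something like the sketch below. This is not part of XB PointStream, and the callback shape is my own; it shows what users would be forced to write, namely deferring program creation until the callback fires:

```javascript
// Asynchronous variant: the caller supplies a callback and must
// move shader program creation (and uniform setup) inside it.
function getShaderStrAsync(path, callback) {
  var XHR = new XMLHttpRequest();
  XHR.open("GET", path, true);
  XHR.onreadystatechange = function () {
    if (XHR.readyState === 4) {
      // Pass null on failure so the caller can report the error.
      callback(XHR.status === 200 ? XHR.responseText : null);
    }
  };
  XHR.send(null);
}

// Hypothetical usage:
// getShaderStrAsync("shaders/cartoon.fs", function (src) {
//   cartoonProg = ps.createProgram(vertSrc, src);
// });
```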

With those changes:

// cartoon.fs file
#ifdef GL_ES
  precision highp float;
#endif

varying vec4 frontColor;
void main(void){
  gl_FragColor = frontColor;
}

Whew! Much, much cleaner! Notice I have not only removed the obtrusive quotation marks and plus signs, but I’ve also rid the fragment shader of the newline characters needed for the preprocessor definitions. Although this was a relatively small change in terms of the interface, it’s a huge win for the library.

Real-time debugging in Processing June 4, 2011

Posted by Andor Saga in Game Development, JavaScript, Open Source, Processing, Processing.js.

Yesterday I saw one of my colleagues working on a real-time interactive graphical sketch in Processing. I noticed he was using Processing’s println() function to debug. println() is great, but not when the state of sprites is changing every frame.

I think a better solution is to develop a simple class which can handle frequent state changes and overlay those states directly on top of the visualization or game, sort of like a HUD. This is what I usually create when working on a larger Processing or Processing.js game. It only takes a few minutes to write up a simple but extremely useful debugger.

Let’s start with an interface:

cDebugger gDebugger;
boolean gIsDebugOn = true;
int gFontSize = 16;

void setup(){
  size(400, 400);

  // We'll obviously need to allow changing the text color.
  gDebugger = new cDebugger();
  gDebugger.setColor(color(100, 255, 100));
}

void draw(){

  // Update the sprite states.
  // ...

  // In every frame, we'll tell the debugger the current state
  // of some variables we think are important.
  gDebugger.add("FPS: " + frameRate);
  gDebugger.add("mouse: [" + mouseX + "," + mouseY + "]");
  gDebugger.add("last key: " + key);

  // Draw world, sprites, etc.
  // ...

  // Now render the states on top of everything.
  gDebugger.draw();
}

void keyPressed(){

  // We should be able to toggle the debugger so
  // it doesn't consume resources.
  if(key == 'd'){
    gIsDebugOn = !gIsDebugOn;
    gDebugger.setOn(gIsDebugOn);
  }

  // The debugging lines can add up quickly.
  // One way to keep everything on screen is to allow
  // the user to adjust the font size.
  if(key == '+'){
    gFontSize++;
  }

  if(key == '-'){
    gFontSize--;
    if(gFontSize == 0){
      gFontSize = 1;
    }
  }

  gDebugger.setFont("verdana", gFontSize);
}

I named the debugger instance gDebugger rather than debugger because ‘debugger‘ is a JavaScript keyword and will break Processing.js sketches. On that note, Processing developers should shy away from all JavaScript keywords to keep their sketches Processing.js compatible. If you’re interested in how we’re planning to solve this tricky issue, take a look at our Lighthouse ticket.

Okay, now that we have a basic interface, we can focus on the implementation:

public class cDebugger{
  private ArrayList strings;
  private PFont font;
  private int fontSize;
  private color textColor;
  private boolean isOn;

  public cDebugger(){
    strings = new ArrayList();
    setFont("verdana", 16);
    setColor(color(255)); // default to white text
    isOn = true;
  }

  // If off, the debugger will ignore calls to add() and draw().
  public void setOn(boolean on){
    isOn = on;
  }

  public void add(String s){
    if(isOn){
      strings.add(s);
    }
  }

  public void setFont(String name, int size){
    fontSize = size <= 0 ? 1 : size;
    font = createFont(name, fontSize);
  }

  public void setColor(color c){
    textColor = c;
  }

  public void clear(){
    while(strings.size() > 0){
      strings.remove(0);
    }
  }

  public void draw(){
    if(isOn){
      textFont(font);
      fill(textColor);
      int y = fontSize;

      for(int i = 0; i < strings.size(); i++, y += fontSize){
        text((String)strings.get(i), 5, y);
      }

      // Remove the strings since they have been rendered.
      clear();
    }
  }
}
That’s it!

It’s just a simple bare bones real-time debugger. You can easily extend it to add more useful features such as ‘pages’ users can flip through or sprite categories. I’m sure you can think of many more ideas : )

XHR Browser Differences November 20, 2010

Posted by Andor Saga in JavaScript, Open Source.

In my last post, I wrote about an issue with Minefield which drastically slows down local XMLHttpRequests. My supervisor, Dave Humphrey, filed the bug with Mozilla and I had some help from Olli Pettay creating a work-around.

While writing the work-around, I found browsers had a bunch of XHR differences. In this blog, I list some of the things I found.

Property Differences

When I began noticing differences between browsers, I decided to take a step back and first analyze the properties of an XHR instance, so I ran the following code on four browsers (Minefield, Chrome, Opera and WebKit):

var XHR = new XMLHttpRequest();
var str = "";
for(var i in XHR){
  str += i + "\n";
}

I found Minefield, Chrome and WebKit all return at least the following:


On Minefield, you’ll also get these:


and Chrome and WebKit will give you these additional properties:


Opera is a strange case since it only has these properties:


onreadystatechange Differences

I noted these cases and continued playing around by creating a very simple XHR example which uses the onreadystatechange event:

<span id="debug"></span> 
var debug = document.getElementById('debug');

var XHR = new XMLHttpRequest();
XHR.onreadystatechange = function(){
  switch(XHR.readyState){
    case 1: debug.innerHTML += "opened.<br />";break;
    case 2: debug.innerHTML += "got headers.<br />";break;
    case 3: debug.innerHTML += "loading....<br />";break;
    case 4: debug.innerHTML += "done.<br />";break;      
  }
};

XHR.open("GET", "some_file.txt");
XHR.send(null);

If you run this code on Minefield, you’ll see something like this:

opened.
opened.
got headers.
loading....
done.

The larger your file, the more often you’ll see “loading….” printed. It may appear fewer times if the file is small or cached, but it will always appear at least once. You probably noticed “opened.” was printed twice. This isn’t the case on Chrome, Opera or WebKit, which only print it once.
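If the duplicate OPENED notification matters to your code, one defensive option is a tiny wrapper that ignores repeated states. A sketch (the helper name is mine):

```javascript
// Wraps a readyState handler so duplicate notifications for the
// same state are dropped (Minefield reports OPENED twice; the
// other browsers report it once).
function dedupeStateChanges(handler) {
  var lastState = -1;
  return function () {
    if (this.readyState === lastState) {
      return; // same state as last time; ignore the duplicate
    }
    lastState = this.readyState;
    handler.call(this, this.readyState);
  };
}

// Usage:
// XHR.onreadystatechange = dedupeStateChanges(function (state) {
//   // handle each distinct state once
// });
```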

onprogress Differences

Now I’ll get to the difference which prompted me to start experimenting with XHR in the first place. Here’s a longer example which uses event listeners:
(Note, I couldn’t run this in Opera since it doesn’t support addEventListener for XHR objects)

<span id="debug"></span>
var debug = document.getElementById('debug');

function progress(evt){
  debug.innerHTML += "'progress' called...<br />";
}

function abort(evt){
  debug.innerHTML += "'abort' called...<br />";
}

function error(evt){
  debug.innerHTML += "'error' called<br />";
}

function load(evt){
  debug.innerHTML += "'load' called<br />";
}

function loadstart(evt){
  debug.innerHTML += "'loadstart' called<br />";
}

var xhr = new XMLHttpRequest();

xhr.addEventListener("loadstart", loadstart, false);
xhr.addEventListener("progress", progress, false);  
xhr.addEventListener("load", load, false);
xhr.addEventListener("abort", abort, false);
xhr.addEventListener("error", error, false);

xhr.open("GET", "some_file.txt");  
xhr.send(null);

loadstart is called exactly once when the file starts loading, and load is called exactly once when the entire file has loaded. However, browsers handle the progress event differently. If the file is small enough, on Minefield you’ll see the following output:

'loadstart' called
'load' called

On the other hand, Chrome and WebKit will print out:

'loadstart' called
'progress' called...
'load' called

So Minefield fires the progress event zero or more times, while Chrome/WebKit fire it one or more times. Since I was using my work-around on Minefield and the file I was testing with was small, this caused me some frustration, and I had to add a bit of extra logic to handle this case.

onabort and onerror Differences

I kept experimenting with the various events until I got to onerror. I first tried to make the onerror event fire by requesting a non-existent file, but that only threw an exception. Then I remembered I could stop an XHR by pressing escape and wondered whether that would trigger onabort or onerror. I created a large file which gave me enough time to interrupt the request, and I found yet more differences.

After pressing escape, I found Minefield calls the onerror event while Chrome and WebKit call the onabort event—which I think is a fairly significant variation.

I’m sure there are more XHR differences between browsers; I’ve only listed a few here. One way to cope is simply to wrap the XMLHttpRequest object with your own object, which may reduce some headache.
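As a sketch of that wrapper idea (the `SimpleRequest` name and callback shape are hypothetical), the inconsistent abort/error events can be funneled into a single failure callback:

```javascript
// Minimal wrapper sketch: whichever of abort/error the browser
// fires, the caller sees a single onFail callback, and the
// per-browser progress differences stay hidden behind onDone.
function SimpleRequest(url, onDone, onFail) {
  var xhr = new XMLHttpRequest();
  var failed = false;

  function fail() {
    if (!failed) {      // collapse abort + error into one call
      failed = true;
      onFail();
    }
  }

  xhr.addEventListener("load", function () { onDone(xhr.responseText); }, false);
  xhr.addEventListener("abort", fail, false);
  xhr.addEventListener("error", fail, false);

  xhr.open("GET", url);
  xhr.send(null);
}
```

Note this still won’t help on Opera, which (as mentioned above) doesn’t support addEventListener on XHR objects; a wrapper there would have to fall back to the on* properties.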

Modularizing C3DL September 18, 2010

Posted by Andor Saga in C3DL, JavaScript, Open Source, webgl.

Over the Summer Matthew Postill has done some substantial work on C3DL. He’s fixed bugs, added collision detection and sped up rendering with frustum culling.

We’re expecting the library to continue to grow in size and features. The problem is that not all of these features will be wanted by every developer using the library. If a user only wants to render a teapot with C3DL, why force them to load the entire library, with a particle system, a bunch of shaders and collision detection? Cathy has suggested we tackle this problem by modularizing the library. This would impact the internals quite a bit, but it would also provide much more flexibility.

Firstly, it would allow developers to build a custom version of the library, a bit like jQuery: the user would select the components they need, and the build would contain only those.

Secondly, it would allow other developers to create their own components and hook them into C3DL. This means developers could write their own model parsing code instead of being forced to use ours.
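As a sketch of that hook-in idea (the names here are hypothetical, not actual C3DL API), the core could expose a small component registry that external code, such as a custom model parser, registers itself with:

```javascript
// Hypothetical component registry sketch: the core keeps a map
// of named components; external code registers its own, and the
// core looks components up by name when it needs them.
var c3dlComponents = {};

function registerComponent(name, component) {
  c3dlComponents[name] = component;
}

function getComponent(name) {
  if (!(name in c3dlComponents)) {
    throw new Error("Component not loaded: " + name);
  }
  return c3dlComponents[name];
}
```

The build step then reduces to concatenating the core plus whichever component files the user ticked off.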

Another related change would entail offering release and debug versions. The debug version could include parameter checking at the start of each function, which would be omitted from the release build.

I’m not going to kid myself: there’s a significant amount of work required to make this happen. But there’s also an immense payoff. So I’m excited to start working on this, though I know I’ll need some help. I’d love to hear from any developer who has experience implementing something along these lines.

Summer Reflections September 14, 2010

Posted by Andor Saga in Arius3D, C3DL, JavaScript, Open Source, Processing.js, XB PointStream.

During the Summer I had the opportunity to work with some highly motivated and intelligent developers at Seneca’s Centre for Development of Open Technology. For four months we cranked out code for several exciting technologies: C3DL, NexJ, the Fedora ARM project, XB PointStream, Popcorn and Processing.js.

This was the first time I had worked at CDOT with so many developers on so many projects. Almost all the projects dealt exclusively with JavaScript, but we also had to work with other libraries and standards like WebGL, JSON, and video. As the technologies varied, so did our challenges: documentation was scarce, standards and APIs changed, or our code simply didn’t work and we needed help.

What made working in the CDOT environment actually work was communication. Our days began with a morning Scrum meeting where we shared the problems we had stumbled into the day before, along with our success stories. The meetings were brief (only 10 minutes), but on a few occasions they were invaluable. As we stated our problems, our colleagues offered ideas and opinions: “Have you tried looking into something like this…?” or “You should read this blog…”. We didn’t always have the answers, but we were good at pointing someone in the right direction.

And sometimes we did have the answers. Our cubicles were close so it made sense to ask the regular expression expert a question which would save us half an hour and only steal a few minutes of their time. Other times we posed questions to other developers on IRC or we received extremely useful suggestions on our blog posts.

We also took the opportunity to meet face-to-face with our industry partners. Developers at Arius3D gave us guidance, tips and valuable resources. Down at the Toronto Mozilla office we were given a WebGL walkthrough and help with relevant WebGL tools. We also worked closely with Brett Gaylor–a filmmaker working with us on the video tag. Others of us met with developers from NexJ and Fedora.

All these forms of communication were important for the development of the technologies we worked on. It only reminds me how crucial these are for open source development.

Now that the Fall semester has started I’m back in Seneca taking classes but I’m still excited to be working at CDOT on XB PointStream, C3DL and Processing.js.