(Almost) Full Circle

Months ago, before I began working on Processing.js, I created a simple Processing sketch to get used to the syntax, and then I blogged about it. When writing the sketch, I tried to use as many functions as I could: spheres, points, transformations, lights, materials, text, etc. We have since ported most of those functions over to Pjs, along with many, many others.

Today I went back to that first Processing sketch and threw the source into my Pjs Web IDE. I admit I did have to remove a couple of functions we’re still working on (text, noise), but the final demo is pretty damn close to what I first wrote:

If you have a WebGL-enabled browser, you can run the live demo.

Vertex vs. Fragment

While working on lights and materials, I was reminded how much per-vertex lighting sucks compared to per-fragment lighting. With spotlights especially, you can get some awful artifacts. If you look closely at the screenshot above, you’ll see the shiny part isn’t all that smooth. That’s because our shaders emulate the fixed-function pipeline, which Processing uses. But since we are using WebGL, we have the option of writing shaders that do per-fragment lighting. This can be GPU-intensive, but it’s something we should think about.


Compensating for WebGL readPixels() Implementation Inconsistencies

UPDATE: This is old news. If you want to see example code of readPixels() using the newest WebGL spec, go to my newer post.

I’m working with Dave Humphrey, my professor at Seneca College, on a reference test tool he began for Processing.js. He started working on this tool because many of our tests require image-to-image comparisons. That is, our final rendered image should be identical to the same sketch rendered by Processing, and checking that manually would be painstakingly slow.

The tool comprises a few parts, one of which takes a Processing sketch, renders it, and dumps out the raw pixel values of the canvas into a web page. This data can then be saved to a text file and added to the batch list of tests to run. Since he already had the script dumping out the result of 2D sketches, he asked me to develop the same thing for 3D sketches.

We are using WebGL to do all the 3D rendering for Pjs. We already ported over some 3D functions from Processing such as points, lines, box and sphere. My job was to take the following sketch and get the values from the framebuffer.


So given the above code, I needed to produce something like this as our reference ‘image’:

//[100,100]33,66,99,255,33,66,99,255 .....

The first two values are the canvas dimensions (the arguments to size()); they are followed by the series of values extracted from the framebuffer (which also includes the alpha component). I understood what I needed to do and got to work.
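As a sketch of the idea (the helper name is mine, not the actual tool’s), the dump boils down to prefixing the dimensions and joining the bytes:

```javascript
// Hypothetical helper, not the actual tool's code: format a framebuffer
// dump in the reference-'image' layout shown above -- the canvas
// dimensions in brackets, then every RGBA byte separated by commas.
function dumpPixels(width, height, pixels) {
  var out = "[" + width + "," + height + "]";
  var values = [];
  for (var i = 0; i < pixels.length; i++) {
    values.push(pixels[i]);
  }
  return out + values.join(",");
}
```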

Since we are using WebGL, I knew we had to use readPixels. I went to the WebGL spec which had the declaration of the function.

// spec:
WebGLArray readPixels(GLint x, GLint y, 
                      GLsizei width, GLsizei height,
                      GLenum format, GLenum type) 
// example:
gl.readPixels(0, 0, 100, 100, gl.RGB, gl.UNSIGNED_BYTE);

The first four arguments are straightforward. The format defines which components you want returned and in what order; it can be RGB, RGBA, or ALPHA. The type argument specifies the type of the values returned. You can request things like UNSIGNED_SHORT_5_6_5, which packs per-component bit lengths, but the simplest option is just passing in UNSIGNED_BYTE.

After a bit more reading, I got down to hacking some code. I began working on this using WebKit, the nightly build of Safari’s rendering engine. I wanted to create a simple test case, but I got stuck trying to do anything with the result of the call. Safari kept throwing an exception.

var buff = gl.readPixels(0, 0, 100, 100,
                         gl.RGB, gl.UNSIGNED_BYTE);
// Something with buff...

// Result of expression 'buff' [undefined] is 
// not an object.

I struggled with this for some time. I kept thinking I needed to feed the return value of readPixels into a newly allocated WebGLUnsignedByteArray. I couldn’t understand why buff was undefined. But eventually I tried requesting RGBA as the format and it actually returned something!

gl.readPixels(0, 0, 100, 100, gl.RGBA, gl.UNSIGNED_BYTE);

So I learned it wasn’t actually my fault; something was wrong with WebKit. Unfortunately, I was still under the impression that I had to allocate a special WebGL buffer. The specification states “The specific subclass of WebGLArray returned depends on the passed type.” So if you pass in UNSIGNED_BYTE, you’ll get a WebGLUnsignedByteArray. But eventually I found out I could just use a regular JavaScript variable to hold the return value. No special allocation necessary.

Once I figured those two things out, I was able to make some progress. I assigned the return value from readPixels to a variable and queried its type. It was [object WebGLUnsignedByteArray], as expected. The subscript operator is defined for this object, since I was able to get the pixel values using []. The object also has a getter defined (which likely does the exact same thing):

getter GLubyte get(in unsigned long index);

So I was able to iterate over the elements and my code progressed to this:

var buff = gl.readPixels(0, 0, width, height,
                         gl.RGBA, gl.UNSIGNED_BYTE);
var pixels = [];

for (var i = 0; i < buff.length; i++) {
  pixels[i] = buff[i];
}

Finished! Or so I thought. When I tried to run the same code in Minefield, my complete output was:


Where were my pixels? It didn’t take me long to figure out my for loop wasn’t running. After playing with it for a while, I resorted to querying the type and found it to be [object Object]. Not exactly a WebGLUnsignedByteArray, is it? Just a regular JavaScript object. To get at the sweet goodies (property names), I wrote a simple for-in loop:

for (var i in buff) {
  // inspect each property name
}

This gave me the properties width, height and data. So a simple assignment was all I needed to get the data for this case!

pixels = buff['data'];

Since I knew Chromium’s XHR bug breaks our Pjs code, and Opera doesn’t yet support WebGL, I only had two browsers to consider, hence the extremely basic regex. But if you’re using readPixels, feel free to take whatever you need from my code and adapt it to your needs, be it support for Chromium, browsers for mobile devices, or whatever.

var agent = navigator.userAgent;
var isSafari = agent.match(/safari/i);
var gl = canvas.getContext("experimental-webgl");

// Safari returns undefined if format RGB is requested,
// so always request RGBA.
var buff = gl.readPixels(0, 0, width, height,
                         gl.RGBA, gl.UNSIGNED_BYTE);
var pixels = [];

if (isSafari) {
  // WebKit returns a WebGLUnsignedByteArray
  for (var i = 0; i < buff.length; i++) {
    pixels[i] = buff[i];
  }
}
else {
  // Minefield returns an object with width, height and data
  pixels = buff['data'];
}

I hope the return value of readPixels is standardized before the developers at Mozilla, Apple, Google, Microsoft, etc. release their complete WebGL implementations, and I have a feeling it will be.

Release 0.7

I just finished my 0.7 release for my DPS911 class at Seneca College. For this project I continued my work on Processing.js, using WebGL to add lighting functions. Check out my demos below. You’ll need a WebGL-compatible browser to run them: get Minefield, WebKit, or Chromium.

Mouse Directional Light

Bouncing Box



Processing.JS Web IDE

The demos above use the functions I added to the library which are:

  • noLights()
  • ambientLight()
  • directionalLight()
  • pointLight()

Not Quite Deja Vu

It might seem this release is just a repeat of my 0.3 release, but in truth that release just didn’t have the flexibility we have now to be useful. This code doesn’t have literals and hacks holding it together; it’s flexible, and very much programmable. This is possible thanks to proper camera and 3D matrix objects. No crappy hard-coding going on over here.

Lighting 2D and 3D Shapes

The last time I wrote these functions we didn’t have fill() and stroke(), so this time I had to figure out how Processing lights filled shapes and their outline strokes. It turns out the strokes aren’t lit, but the fill colors are. Since the lighting calculations are done in the vertex shader, I had to play around with my old code. While doing so, I found out that “2D” shapes such as lines and points aren’t lit either. At that point I had to decide whether we should continue using one shader for all rendering, or have a simpler one for rendering unlit vertices. After thinking about it for a while, I decided to write a simpler set of shaders for 2D geometry. Will having a simpler shader outweigh the cost of switching program objects and setting uniform variables twice? This is something I’m prepared to profile and fix as necessary.

Creating createProgramObject()

Because there are now two shaders, I had to abstract the process of creating a program object (a linked pair of vertex and fragment shaders). I created a createProgramObject() function for this. It returns a program object and reduces some redundancy in the code. The interface couldn’t be simpler:
createProgramObject( vertexSource, fragmentSource );
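A minimal sketch of what such a helper might look like; the actual Processing.js internals may differ, and I’ve written it here taking the WebGL context as an explicit parameter:

```javascript
// Sketch of a createProgramObject-style helper: compile both shaders,
// attach them to a new program, link it, and return the program object.
// Error handling via exceptions is my own choice, not necessarily Pjs's.
function createProgramObject(gl, vertexSource, fragmentSource) {
  function compile(type, source) {
    var shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader));
    }
    return shader;
  }

  var program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program));
  }
  return program;
}
```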

Silly Bugs

While working on this release I found myself making some very silly mistakes. I figure it’s worth mentioning them to remind myself not to do them again.

Because we now have two shaders, it is necessary to call useProgram() every time a uniform or attribute in a specific shader needs to be set. Calling uniformi(), uniformf(), etc. won’t work if the wrong program object is loaded. This means that in my light functions I needed to load the correct program object before setting the light uniforms, since the 2D program has no such variables. This is something I kept forgetting.
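The pattern boils down to something like this (helper name and signature are mine, for illustration only):

```javascript
// Illustrative sketch: a uniform can only be set on the currently loaded
// program, so load the right program before looking up and setting it.
function setUniformf(gl, program, name, value) {
  gl.useProgram(program);  // load the correct program object first
  var loc = gl.getUniformLocation(program, name);
  gl.uniform1f(loc, value);
}
```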

Next, what’s wrong with this bit of code?
uniformf( programObject, "lights["+count+"].type", 1.0 );
The problem is that the light type in the vertex shader is defined as an int, not a float, so the call should use uniformi() with an integer value. The result isn’t a thrown exception or a crash; the scene just renders incorrectly. It’s not the first time I’ve made this mistake, but it has become easier to spot each time. It also doesn’t help that uniformf() and uniformi() look so similar.

Box and Sphere Normals

When I started working on the lights, I assumed the normals of the box shape point ‘diagonally’ out, that is, not parallel to the 3D axes. This was simple to implement since the normals matched the vertices one for one. But when I ran my first test, I noticed my result differed from Processing. At first I thought it was the winding order of my vertices, and I wasn’t in the mood to start hacking at raw vertices. I switched to trying to light the sphere, but had problems there too. The normals seemed to be ‘off’ somehow, and my sphere didn’t look smooth. I knew the normals in this case should point in the same ‘out’ direction as their corresponding vertices. So I removed the normals array in the code and just passed in the vertices as my normal attribute to my shader, which fixed the problem.
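The reason this works can be shown in a few lines (function name mine, for illustration):

```javascript
// For a sphere centered at the origin, each vertex's normal points in the
// same 'out' direction as the vertex itself, so normalizing the vertex
// position yields its normal -- which is why passing the vertex array as
// the normal attribute gives correct results.
function sphereNormal(v) {
  var len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}
```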

I then went back to my box and realized Processing’s normals for box are probably not what I had thought them to be. I learned that each ‘face’ of the box has its normals pointing parallel to a 3D axis. So I had to go through all the vertices of the cube, find out which face each belonged to (front, back, sides), and create a static array with the corresponding correct normals. I used to have the box vertices in one long line of code; to make things easier in the future, I broke that line up into six lines, one per face, and labeled the order. I also believe in adhering to the 80-character standard, which just about matches what I have now.
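For an axis-aligned box, that static array holds one normal per face (the ordering below is mine for illustration, not necessarily Processing’s; in the real code each normal is repeated for every vertex of its face):

```javascript
// Axis-aligned face normals for a box: each face's normal is parallel to
// one of the 3D axes, unlike the 'diagonal' per-vertex normals I first
// assumed.
var boxFaceNormals = [
  [ 0,  0,  1],  // front
  [ 0,  0, -1],  // back
  [ 0,  1,  0],  // top
  [ 0, -1,  0],  // bottom
  [ 1,  0,  0],  // right
  [-1,  0,  0]   // left
];
```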

Saturated Colors

I’ve noticed the final rendered sketch doesn’t appear identical across browsers (Chrome, Firefox, Safari) in terms of color. On Chrome and Safari the colors seem saturated, and when compared against my reference images from Processing, the scene looks slightly ‘washed out’ depending on the browser. I suppose I’ll need to file this somewhere.

Wiki Update

I updated my wiki page which includes links to the Lighthouse tickets I completed, my GitHub branch, demos, tests and a diff which shows the work I did.


I also added the four demos I wrote for this release to the dropdown in my Processing.js Web IDE.

Light Sphere Deja Vu

I’m working on adding ambient lights and directional lights to Processing.js using WebGL. I had already implemented these before, but we (the Pjs team) couldn’t add them to the repository because too many pieces were missing, such as the camera and matrix objects. Now that those are in, I can finally land this code this release. However, the library has changed quite a bit since then: we now have stroke and fill colors, point and line primitives, and changes to the WebGL spec. Because of this, I had to go back and revise my implementations of the lights as well as my demos. While doing this I ran into some stupid bugs which set me back, but everything is in pretty good shape now (see screenshot, yay!). I’m still experiencing some issues with opacity in Minefield, but that’s about the biggest issue right now.

Processing.js 3D WebGL Measurements

I finally had some time today to take a closer look at the OpenGL Profiler on my Macbook. Since we (the Processing.js team) are using WebGL to render 3D objects in the Processing.js library, this tool provides a great way to see what OpenGL calls are made each frame.

After I figured out how to attach the profiler to a browser and extract some relevant data, I ran my Processing.js fill test. The test renders one hundred cubes every frame. I let it run for a few seconds then I dumped the timings to a file. This is what I found:

| GL Function           | # of Calls | Total Time (µsec) | Avg Time (µsec) | % GL Time | % App Time |
|-----------------------|-----------:|------------------:|----------------:|----------:|-----------:|
| glBindBuffer          | 53,662     | 14,069            | 0.26            | 0.72      | 0.07       |
| glClear               | 272        | 18,856            | 69.33           | 0.97      | 0.10       |
| glClearColor          | 272        | 140               | 0.52            | 0.01      | 0.00       |
| glDisable             | 26,423     | 4,968             | 0.19            | 0.25      | 0.03       |
| glDrawArrays          | 53,662     | 564,531           | 10.52           | 28.95     | 2.86       |
| glEnable              | 26,423     | 4,978             | 0.19            | 0.26      | 0.03       |
|                       | 53,662     | 7,301             | 0.14            | 0.37      | 0.04       |
| glFlush               | 272        | 28,194            | 103.66          | 1.45      | 0.14       |
|                       | 53,662     | 72,459            | 1.35            | 3.72      | 0.37       |
| glGetIntegerv         | 53,662     | 7,922             | 0.15            | 0.41      | 0.04       |
| glGetProgramiv        | 53,662     | 13,093            | 0.24            | 0.67      | 0.07       |
|                       | 135,380    | 224,110           | 1.66            | 11.49     | 1.14       |
| glLineWidth           | 27,239     | 3,181             | 0.12            | 0.16      | 0.02       |
| glPolygonOffset       | 26,423     | 2,971             | 0.11            | 0.15      | 0.02       |
| glReadPixels          | 272        | 827,596           | 3042.63         | 42.44     | 4.20       |
| glUniform4fvARB       | 53,662     | 55,449            | 1.03            | 2.84      | 0.28       |
| glUniformMatrix4fvARB | 81,717     | 88,474            | 1.08            | 4.54      | 0.45       |
|                       | 53,662     | 11,742            | 0.22            | 0.60      | 0.06       |

I had a few surprises after running this. First, why the heck is glGetUniform() so slow? I thought it would be much faster, so I want to look into that. Next, most of these calls we make explicitly, but I noticed some of them are made automatically by the browser, such as glFlush() and glReadPixels(). The call to glFlush() isn’t surprising, but I didn’t expect to see glReadPixels() there; it’s probably used for blitting. It’s slow, but we may not be able to change that. To improve performance, we’ll probably need to focus our attention on the weakest link that we call ourselves: glDrawArrays().

Another way to speed things up is to limit the number of calls that set WebGL state to a value it already has. For example, we keep calling glLineWidth(), but my demo only needs to set it once. This can be solved quickly by adding a conditional and a cached-state flag.
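The idea in sketch form (variable and function names are mine): remember the last value sent to WebGL and only call through when it actually changes.

```javascript
// Skip redundant state changes: cache the last line width we set and only
// touch the GL state when the requested value differs.
var curLineWidth = null;

function setLineWidth(gl, w) {
  if (w !== curLineWidth) {
    gl.lineWidth(w);
    curLineWidth = w;
  }
}
```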

Running the profiler was certainly enlightening and it does open the door to some interesting possibilities, like running profile tests between Safari, Minefield and Chrome. I’m also interested to see what kind of results I’ll get for C3DL. I’ll need to dig through the options of the applications a bit more and see what else I can learn.

Study Week Update

Reading week is over and, just like last semester, I didn’t do much reading, or studying for that matter. Last semester I indulged in a solitary week-long hack-a-thon, attempting to port Crayon Physics to the Web using Processing.js; you can find the final (unfinished) demo here.

This reading week I worked on C3DL. I fixed a few bugs, added support for up_axis for COLLADA models, and continued work on the beginnings of an RTS game using the library. You’ll need a WebGL-compatible browser to view the demo. If you don’t have one, you can take a look at a video I made.