Engage3D Hackathon Coming Soon!

A month ago, over Google chat, Bill Brock and I pitched our idea to develop an open source, web-based 3D videoconferencing system for the Mozilla Ignite Challenge. Will Barkis from Mozilla recorded and moderated the conversation and then sent it off to a panel of judges. At stake was a slice of the $85,000 being doled out to the winners of the Challenge.

After some anticipation, we got word that we were among the winners. We would receive $10,000 in funding to support the development of our prototype. Our funding will cover travel expenses, accommodations, the purchasing of additional hardware and the development of the application itself.

We will also take on two more developers and have a hackathon closer to the end of the month. Over the span of four days we will iterate on our original code and release something more substantial. The Company Lab in Chattanooga has agreed to provide us with a venue to hack and a place to plug into the network. Both Bill and I are extremely excited to get back to hacking on Engage3D and to get back to playing with the gig network.

We will keep you updated on our Engage3D progress, stay tuned!


Developing engage3D – Phase 1

A point cloud of Bill Brock rendered with WebGL

I am working with Bill Brock (a PhD student from Tennessee) to develop an open source 3D video conferencing system that we are calling engage3D. We are developing this application as part of Mozilla’s Ignite Challenge.

Check out the video!

During the past few days, Bill and I made some major breakthroughs in terms of functionality. Bill sent me Kinect depth and color data via a server he wrote. We then managed to render that data in the browser (on my side) using WebGL. We are pretty excited about this since we have been hacking away for quite some time!

There have been significant drawbacks to developing this application over commodity internet: I managed to download over 8 GB of data in a single day while experimenting with the code. Hopefully, we will soon be able to port this to the GENI resources in Chattanooga, TN for further prototyping and testing.

Even though we are still limited to a conventional internet connection, we want to do some research into data compression. We have also been struggling with calibrating the Kinect, which is something we hope to resolve soon.

Reflections on Hackanooga 2012

This past weekend, I had the opportunity to participate in Hackanooga, a 48-hour hackathon in Chattanooga, Tennessee. The event was hosted by Mozilla and US Ignite with the purpose of encouraging and bringing together programmers, designers, and entrepreneurs to hack together demos that make use of Chattanooga’s gigabit network.

We only had two days to hack something together, so I wanted to keep my goals simple and realistic. My plan was to set up a server on the gig network, upload dynamic point cloud data to the server, and then stream the data back in real time. If I got that working, I wanted to try to avoid disk access altogether and instead stream the data straight from a Kinect and render it using WebGL.

Before the hacking began, anyone who wasn’t assigned a project had the chance to join one. I was surprised that several developers from the University of Tennessee at Chattanooga approached my table to hear more about my project. We quickly formed a group of four and dove right into setting up the server, editing config files, checking out the super-fast gig connection, and making sure our browsers supported WebGL. After a lot of frustration we finally managed to ‘stream’ the dynamic point cloud from the server. Although the data seemed to stream, it took a while to kick in on the client side. It was almost as if the files were being cached on the server before being sent over the wire. This is still something I need to investigate.

Once we had that working, we moved on to the Kinect. We created a C++ server that read the Kinect data and transferred it to a Node server via sockets. The Node server then sent the data off to any connected clients. My browser was one of the clients rendering the data; however, the rendered point clouds were a mess. One of the applications we wrote had a bug that we didn’t manage to fix in time ): Even though we didn’t hammer out all the issues, I still had a lot of fun working with a team of developers.

Rumor has it that Hackanooga may become a yearly event. I’m already excited (:

Shadows in WebGL Part 1


In my last blog I wrote about an anaglyph demo I created for my FSOSS presentation in October. It was part of a series of delayed blogs which I only recently had time to write up. So, in this blog I’ll be proceeding with my next fun experiment: Shadows in WebGL.

Shadows are useful since they not only add realism, but can also provide additional visual cues in a scene. Having never implemented any type of shadows, I started by performing some preliminary research and found that there are numerous methods to achieve this effect. Some of the more common techniques include:

  • vertex projection
  • projected planar shadows
  • shadow mapping
  • shadow volumes

I chose vertex projection since it seemed very straightforward. After a few sketches, I had a fairly good grasp of the idea: given the positions of a light and a vertex, the shadow cast (for that vertex) will appear where the line passing through those two points intersects the x-axis. If we had the following values:

  • Light = [4, 4]
  • Vertex = [1, 2]

Our shadow would be drawn at [-2, 0]. Note that the y component is zero, and it would be zero for all other vertices as well, since we’re concentrating on planar shadows.

At this point, I understood the problem well; I just needed a simple formula to get this result. If you run a search for “vertex projection” and “shadows” you’ll find a snippet of code on GameDev.net which provides the formula for calculating the x and z components of the shadow. But if you actually try it for the x component:

Sx = Vx - \frac{Vy}{Ly} - Lx
Sx = 1 - \frac{2}{4} - 4
Sx = -3.5

It doesn’t work.

When I ran into this, I had to take a step back to think about the problem and review my graphs. I was convinced that I could contrive a working formula that would be just as simple as the one above. So I conducted additional research until I eventually found the point-slope equation of a line.

Point-Slope Equation

The point-slope equation of a line is useful for determining a point on a line given the slope and another point on the line. This is exactly the scenario we have!

y - y1 = m(x - x1)

m – The slope. This is known since we have two given points on the line: the vertex and the light.

[x1, y1] – A known point on the line. In this case: the light.

[x, y] – Another point on the line which we’re trying to figure out: the shadow.

Since the final 3D shadow will lie on the xz-plane, the y components will always be zero. We can therefore remove that variable which gives us:

-y1 = m(x - x1)

Now that the only unknown is x, we can start isolating it by dividing both sides by the slope:
-\frac{y1}{m} = \frac{m(x - x1)}{m}

Which gives us:
-\frac{y1}{m} = x - x1

And after rearranging we get our new formula, but is it sound?
x = x1 - \frac{y1}{m}

If we use the same values as above as a test:
x = 4 - \frac{4}{\frac{2}{3}}
x = -2

It works!

I now had a way to get the x component of the shadow, but what about the z component? So far I had only created a solution for shadows in two dimensions. But if you think about it, the 3D case breaks down into two 2D problems: we just need to use the z components of the light and vertex to get the z component of the shadow.
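To make the two-axis idea concrete, here is a small JavaScript sketch of the same projection. The function name and array layout are mine, purely for illustration:

```javascript
// Project a vertex onto the xz-plane (y = 0) by treating the x and z
// components as two independent 2D point-slope problems.
// l and v are [x, y, z] arrays for the light and the vertex.
function projectShadow(l, v) {
  var slopeX = (l[1] - v[1]) / (l[0] - v[0]);
  var slopeZ = (l[1] - v[1]) / (l[2] - v[2]);
  return [
    l[0] - l[1] / slopeX, // x = x1 - y1/m
    0,                    // the shadow lies on the xz-plane
    l[2] - l[1] / slopeZ
  ];
}

// Using the light [4, 4] and vertex [1, 2] from earlier (z values added):
console.log(projectShadow([4, 4, 4], [1, 2, 2])); // → [-2, 0, 0]
```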

Shader Shadows

The shader code is a bit verbose, but at the same time, very easy to understand:

void drawShadow(in vec3 l, in vec4 v){
  // Calculate the slopes in the xy and zy planes.
  float slopeX = (l.y-v.y)/(l.x-v.x);
  float slopeZ = (l.y-v.y)/(l.z-v.z);

  // Flatten the vertex onto the xz-plane by zeroing its y component.
  v.y = 0.0;
  v.x = l.x - (l.y / slopeX);
  v.z = l.z - (l.y / slopeZ);

  gl_Position = pMatrix * mVMatrix * v;
  frontColor = vec4(0.0, 0.0, 0.0, 1.0);
}

Double Trouble

The technique works, but its major issue is that objects need to be drawn twice. Since I’m using this technique for dense point clouds, it significantly affects performance. The graph below shows the crippling effect of rendering the shadow of a cloud consisting of 1.5 million points: performance is cut in half.

Fortunately, this problem isn’t difficult to address. Since detail is not an important property for shadows, we can simply render the object with a lower level of detail. I had already written a level-of-detail Python script which evenly distributes a cloud across multiple files. This script was used to produce a sparse cloud of about 10% of the original.
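My actual script was written in Python, but the idea is simple enough to sketch in a few lines of JavaScript. The function below illustrates even decimation and is not the original script:

```javascript
// Evenly subsample a flat [x, y, z, x, y, z, ...] vertex array,
// keeping roughly the given fraction of the points.
function decimate(vertices, fraction) {
  var step = Math.round(1 / fraction);
  var out = [];
  for (var i = 0; i < vertices.length; i += 3 * step) {
    out.push(vertices[i], vertices[i + 1], vertices[i + 2]);
  }
  return out;
}

// Keep ~10% of a dense cloud for the shadow pass:
// var sparse = decimate(dense, 0.1);
```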

Matrix Trick

It turns out that planar shadows can be alternatively rendered using a simple matrix.

void drawShadow(in vec3 l, in vec4 v){
  // Projected planar shadow matrix (column-major).
  mat4 sMatrix = mat4 ( l.y,  0.0,  0.0,  0.0,
                       -l.x,  0.0, -l.z, -1.0,
                        0.0,  0.0,  l.y,  0.0,
                        0.0,  0.0,  0.0,  l.y);

  gl_Position = pMatrix * mVMatrix * sMatrix * v;
  frontColor = vec4(0.0, 0.0, 0.0, 1.0);
}

This method doesn’t offer any performance increase over vertex projection, but the code is quite terse. More importantly, using a matrix opens up the potential for drawing shadows on arbitrary planes, which is done by adjusting the elements of the above matrix to match the plane’s equation.
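To convince yourself the matrix really does the same job as vertex projection, you can multiply it out on the CPU. Here’s a quick JavaScript check (remember GLSL’s mat4 constructor is column-major, so each group of four above is one column; the helper function is mine):

```javascript
// Multiply the planar shadow matrix by a homogeneous vertex [x, y, z, 1],
// then perform the perspective divide by w. l is the light position.
function shadowMatrixProject(l, v) {
  var x = l[1] * v[0] + (-l[0]) * v[1]; // ly*vx - lx*vy
  var y = 0.0;                          // flattened onto the xz-plane
  var z = (-l[2]) * v[1] + l[1] * v[2]; // ly*vz - lz*vy
  var w = -v[1] + l[1];                 // ly - vy
  return [x / w, y / w, z / w];
}

// Same light [4, 4] and vertex [1, 2] as before (z values added):
console.log(shadowMatrixProject([4, 4, 4], [1, 2, 2, 1])); // → [-2, 0, 0]
```

The divide by w = l.y - v.y is what makes the homogeneous trick work: it scales the projected point back into ordinary coordinates, landing on exactly the same spot that the point-slope formula produces.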

Future Work

Sometime in the future I’d like to experiment with implementing shadows for arbitrary planes. After that I can begin investigating other techniques such as shadow mapping and shadow volumes. Exciting! (:

Anaglyph Point Clouds

See me in 3D!

A couple of weeks ago I gave a talk at FSOSS on XB PointStream. For my presentation I wanted to experiment and see what interesting demos I could put together using point clouds. I managed to get a few decent demos complete, but I didn’t have a chance to blog about them at the time. So I’ll be blogging about them piecemeal for the rest of the month.

The first demo I have is an anaglyph rendering. Anaglyphs are one way to give 2D images a depth component. The same object is rendered at two slightly different perspectives using two different colors. Typically red and cyan (blue+green) are used.

The user wears anaglyph glasses, which have filters for both colours. A common standard is to use a red filter for the left eye and a cyan filter for the right eye. These filters ensure each eye only sees one of the superimposed perspectives. The mind then merges these two images into a single 3D object.


There are many ways to achieve this effect. One method which involves creating two asymmetric frustums can be found here. However, you can also create the effect by simply rotating or translating the object. It isn’t as accurate, but it’s very easy to implement:

// ps is the instance of XB PointStream
// ctx is the WebGL context

// Yaw the camera slightly for the first perspective.
// Create a lookAt matrix. Apply it to our model view matrix.
ps.multMatrix(M4x4.makeLookAt(cam.pos, V3.add(cam.pos, cam.dir), cam.up));

// Render the object as cyan by masking out the red channel.
ctx.colorMask(false, true, true, true);

// Preserve the colour buffer but clear the depth buffer
// so subsequent points are drawn over the previous points.
ctx.clear(ctx.DEPTH_BUFFER_BIT);

// Restore the camera's position for the other perspective.
ps.multMatrix(M4x4.makeLookAt(cam.pos, V3.add(cam.pos, cam.dir), cam.up));

// Render the object as red by masking out the green and blue channels.
ctx.colorMask(true, false, false, true);

Future Work

I hacked together the demo just in time for my talk at FSOSS, but I was left wondering how much better the effect would look if I had created two separate frustums instead. For this I would need to expose a frustum() method in the library. I can’t see a reason not to add it considering this is a perfect use case, so I filed it!

Fixing Ro.me’s Turbulent Point Cloud

Run me
Turbulent Point Cloud

A few days ago I noticed the turbulent point cloud demo for ro.me was no longer working in Firefox. Firefox now complains that the array being declared is too large. If you look at the source, you’ll see the entire point cloud is being stuffed into an array, all 6 megabytes of it. Since it no longer works in Firefox, I thought it would be neat to port the demo to XB PointStream to get it working again.

Stealing Some Data…

I looked at the source code and copied the array declaration into an empty text file.

var array = [1217,-218,40,1218,-218,37,....];

So I had the data, which was great, but I needed it to be in a format XB PointStream could read. I had to format the vertices to look something like this:

1217	-218	40
1218	-218	37


Using JavaScript to do the conversion made the most sense, but I first had to split up the file containing my array declaration so Firefox could actually load it. After some manual work, I had 6 files, each with its own smaller array literal.

I then wrote a JavaScript script which loaded each array and dumped the formatted text into a web page. I ran my script and copied the output several times until I had the entire file reconstructed as an .ASC file.
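The conversion itself boils down to chunking the flat array into rows of three. A sketch of the kind of script I used (the function name is illustrative, not the original code):

```javascript
// Convert a flat [x, y, z, x, y, z, ...] array into
// tab-separated lines suitable for an .ASC point cloud file.
function toASC(array) {
  var lines = [];
  for (var i = 0; i < array.length; i += 3) {
    lines.push(array[i] + "\t" + array[i + 1] + "\t" + array[i + 2]);
  }
  return lines.join("\n");
}

var array = [1217, -218, 40, 1218, -218, 37];
console.log(toASC(array));
// 1217	-218	40
// 1218	-218	37
```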

Adding Turbulence

Once I had the point cloud loaded in XB PointStream, I needed to add some turbulence. I could have used the shader which the original demo used, but I found a demo by Paul Lewis which I liked a bit better. The demo isn’t quite as nice as the original, but given more time I could incorporate the original shader as well to make it just as good.