Real-time WebGL Rendering of House of Cards

Watch the Video

I was reading over the WebGL around the net roundup this week when I saw Mikko Haapoja’s rendering of a frame of Radiohead’s House of Cards. I thought this was neat and wondered if I could render the frames in real-time using XB PointStream.

CSV Parser

First I downloaded the House of Cards data and saw it was in CSV format. XB PointStream already has the architecture set up for user-defined parsers, so I was able to write one without changing the library itself.
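To give a sense of what a parser like that does, here's the core idea sketched in Python. (The real parser is JavaScript plugged into XB PointStream, and the exact column layout of the House of Cards CSV may differ; assume here that each line holds x, y, z and an intensity value.)

```python
def parse_frame(csv_text):
    """Parse one frame of CSV point data into a flat position list
    and an intensity list, ready to hand to a renderer."""
    positions = []
    intensities = []
    for line in csv_text.strip().splitlines():
        x, y, z, intensity = [float(v) for v in line.split(",")]
        positions.extend([x, y, z])
        intensities.append(intensity)
    return positions, intensities

# Example frame with two points:
frame = "1.5,2.0,3.0,120\n4.0,5.0,6.0,80"
pos, inten = parse_frame(frame)
# pos -> [1.5, 2.0, 3.0, 4.0, 5.0, 6.0], inten -> [120.0, 80.0]
```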

User-defined Shader

To make things interesting I wrote a simple shader which changes the positions and colors of the points while the video plays. Again, I didn’t need to change the library since user-defined shaders are supported as well.

Performance Issues

When I first began rendering the video, I was using a MacBook Pro 3.1 (2 GHz, 2 GB RAM, GeForce 8600M GT 128 MB), but Firefox began chugging after about 400 frames. Luckily my supervisor (Cathy Leung) saved me by giving me a new MacBook Pro 8.2 (2 GHz, 8 GB RAM, AMD Radeon HD 6490M 256 MB). With this new system I was able to render it in real-time without any major issues.

There are 2100 frames of Thom Yorke singing, which total 880 MB, so you can’t stream it online :c However, I’ll put all my work on GitHub if you’d like to tinker with it. Keep an eye on my blog; I’ll post when it’s available.


LOD With XB PointStream

Run me

A simple way to increase performance when rendering point clouds is using levels of detail (LOD). If the camera is far from an object, a lower detailed version of that object can be rendered without much loss of visual detail. As the camera moves closer, higher fidelity versions can be drawn.

A while ago I thought about adding this functionality to XB PointStream and soon realized that the library already supports it! The library can load different point clouds in the same canvas, which allows users to split a cloud into a series of files and conditionally render them. When I had this idea I was too busy with other work, so I had to put it off.
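The selection logic itself is simple. As a rough illustration (my own sketch in Python, not XB PointStream code), you can decide how many of the split sub-clouds to render from the camera's distance to the object:

```python
def clouds_to_draw(distance, num_clouds, near, far):
    """Return how many of the shuffled sub-clouds to render.
    At or inside `near`, draw all of them; at `far` and beyond,
    draw only one; in between, drop clouds linearly."""
    if distance <= near:
        return num_clouds
    if distance >= far:
        return 1
    t = (distance - near) / (far - near)
    return max(1, int(round(num_clouds * (1 - t))))

# With 10 sub-clouds, a close camera draws all 10,
# a distant one draws just the first (coarsest) cloud.
print(clouds_to_draw(0.0, 10, 1.0, 100.0))    # 10
print(clouds_to_draw(200.0, 10, 1.0, 100.0))  # 1
```

Because each file is a random sample of the whole object, the first sub-cloud alone is already a coarse version of the entire model, and drawing more files just refines it.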

Yesterday I finally sat down and began working out the details to get a demo up and running. I needed two things. First, I needed to evenly distribute the points in a cloud. All of the clouds in my repository have been scanned linearly or in blocks, which doesn’t lend itself well to LOD: each sub-cloud needs to be a coarse version of the entire object. Second, I needed to split the cloud into several files.

I decided to start with a simple ASCII point cloud format, ASC. The file is organized something like this:

1.13 6.86 7.81 0 128 255
7.27 9.59 7.29 0 128 255
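Each line is a position (x y z as floats) followed by a color (r g b as 0–255 ints). Parsing a line is just a split; a minimal sketch of my own, not part of the library:

```python
def parse_asc_line(line):
    """Split an ASC line into a position tuple and an RGB color tuple.
    The first three fields are x y z floats, the last three are
    0-255 integer color channels."""
    fields = line.split()
    position = tuple(float(v) for v in fields[:3])
    color = tuple(int(v) for v in fields[3:6])
    return position, color

pos, col = parse_asc_line("1.13 6.86 7.81 0 128 255")
# pos -> (1.13, 6.86, 7.81), col -> (0, 128, 255)
```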

Using Some Python

I don’t know Python, but I knew it would be a good choice for this task. My plan was to load the input file into an array, randomly reorder the points, and write them out across the output files.

Soon after I got to work writing my script, I saw that Python’s random module has a shuffle() function for lists. This saved me quite a bit of work, so I was happy. I then hacked together the rest of the script. If you’re a Python developer, let me know if there are ways I can fix up the code.

"""
Andor Salga

This script will take an ASC file, evenly distribute the
points and separate the cloud into a series of files.
"""
import random
import sys

# Usage: python <script.py> pointCloud.asc outFileName numLevels
if len(sys.argv) < 4:
  print("Usage: python <script.py> pointCloud.asc outFileName numLevels")
  sys.exit(1)

inFileName = sys.argv[1]
outBaseFileName = sys.argv[2]
numFiles = int(sys.argv[3])

# Read every point into memory so we can shuffle them.
arr = []
with open(inFileName) as inFile:
  for line in inFile:
    if line.strip():
      arr.append(line)

# Shuffling evenly distributes the points, so each output
# file becomes a coarse sample of the entire cloud.
random.shuffle(arr)

# Find out how many points we are going to have per
# file. Don't worry about rounding issues. We will simply
# append the remaining points to the last cloud.
pointsPerFile = len(arr) // numFiles

nextFile = 0
outFile = open(outBaseFileName + "_0.asc", "w")

for lineNum, item in enumerate(arr):
  outFile.write(item.rstrip("\n") + "\n")
  if lineNum > 0 and lineNum % pointsPerFile == 0 and nextFile + 1 != numFiles:
    nextFile += 1
    outFile.close()
    outFile = open(outBaseFileName + "_" + str(nextFile) + ".asc", "w")
outFile.close()

I tested this first with a million points and didn’t see much performance gain. This was a bit disappointing and suggests that there are other bottlenecks in the library. I decided instead to try the largest file I have, the visible human, which is about 3.5 million points. I fed the cloud into my script and split it up into 10 files. When testing this I got a reasonable FPS gain: rendering all the clouds I get ~20 FPS, and when I zoom out so only one cloud is rendered, I get ~60 FPS.

In Conclusion

If you’re going to render a large data set with XB PointStream, consider using my script to split it into several files and render them conditionally for better performance.

Using WebGL readPixels? Turn on preserveDrawingBuffer

Since I’ve already written a few blog posts about WebGL’s readPixels, and because developers seem to find my page mostly via that keyword, I decided to help clarify a recent issue I found.

In some of my WebGL scripts I have a feature which allows users to convert 3D images to 2D (see here). The script does this simply by making a call to readPixels.

This used to work until browsers (namely WebKit-based ones such as Chrome) began implementing the preserveDrawingBuffer option. This option is set when the WebGL context is acquired and, as its name suggests, it preserves the drawing buffer between frames.

What this means is that if preserveDrawingBuffer is false/off (which it is by default), the browser is free to discard the depth and color buffers after each draw call. Calling readPixels in this state can return an array of zeroed-out data.

If you’re planning on calling readPixels, you’ll need to turn on this option when you get your WebGL context.

var context = canvas.getContext("experimental-webgl", {preserveDrawingBuffer: true});

The WebGL spec states that this may cause a performance hit on some machines, so only enable it if you really need it.