Building a Generic WebGL Camera Library

One problem with XB PointStream is its lack of a camera system. This gap exists because the WebGL library was built primarily for parsing and rendering point cloud files.

We’ve left it up to users to ‘create’ their own cameras by calling our low-level matrix transformation functions. However, as more projects begin to use the library (WebGL PhotoSynth Viewer, 3DTubeMe, 3DStream), it’s becoming apparent that some form of easy-to-use camera for XB PointStream is necessary.

I began tackling this issue in my spare time since it bothered me so much. First, I investigated which types of cameras are supported by Arius3D‘s 3DImageSuiteViewer. They have three which are fairly well known: Orbit, Free and ArcBall. I started by implementing something simple: an orbit camera I had already developed for C3DL. Orbit cameras are fairly simple to use but harder to explain, so just play with the working demo:

When I started developing this, my plan was to strip out the C3DL-specific code and get it working with XB PointStream. After a bit of tinkering, I realized the fixes I was making should be fed back upstream to C3DL. Taking this one step further, any WebGL library should be able to make use of this code, so I wrapped the entire camera in a closure.

Here’s how you can use it:

// Create an orbit camera halfway between the closest and farthest point about 
// the default orbit point [0,0,0].
var cam = new OrbitCam({closest:0, farthest:100, distance: 50});

// override the values.
cam.setFarthestDistance(80);
cam.setClosestDistance(10);
cam.setDistance(10);

// go back as far as possible (uses 80)
cam.goFarther(100);

// figure out how much the cursor moved...
var deltaX = ....
var deltaY = ....

// Yaw about global Y.
cam.yaw(deltaX);

// Pitch about orbit point.
cam.pitch(deltaY);

// matrix magic!
pointStream.multMatrix(M4x4.makeLookAt(cam.position, cam.direction, cam.up));
// negate the position element-wise (negating the array itself won't work)
pointStream.translate([-cam.position[0], -cam.position[1], -cam.position[2]]);

// done!
pointStream.render(pointCloud);

Okay, the demonstration isn’t all that impressive. What is really impressive is the encapsulated and shareable code. Any WebGL developer can take this and instantly have a working camera. The code takes care of making sure the camera doesn’t zoom out too far and always has the correct orientation. Neat, eh? Once I (hopefully) finish the other main cameras (ArcBall and Free), I’ll place them in a GitHub repository for other WebGL developers.
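For the curious, the zoom clamping boils down to something like the sketch below. This is only an illustration of the idea, not the library’s actual source; the method names match the example above, but the bodies are mine.

// Illustrative sketch of the clamping logic, not the actual OrbitCam source.
function OrbitCam(config) {
  this.closest = config.closest;
  this.farthest = config.farthest;
  this.distance = config.distance;
}

// Never zoom out past the farthest allowed distance.
OrbitCam.prototype.goFarther = function (amount) {
  this.distance = Math.min(this.distance + amount, this.farthest);
};

// Likewise, never zoom in past the closest allowed distance.
OrbitCam.prototype.goCloser = function (amount) {
  this.distance = Math.max(this.distance - amount, this.closest);
};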

Improving C3DL Performance

Last week Cathy asked me to come up with some ideas on how to speed up C3DL. We already have a few techniques which give us pretty good performance improvements, like using typed arrays when possible or culling objects with bounding sphere/frustum tests. But there’s still a way to go in terms of speed.

I spent some time researching the topic and thinking about different solutions to this problem, and then filed the following tickets, which I thought would decrease rendering time.

Tracing

One thing we can do is make sure our code stays on trace. There are a few places in our library which Minefield can’t compile, which means every rendered frame runs parts of the library as interpreted, ‘slow’ JavaScript. We need to review these bits of our code and fix them. This would be a definite win since users don’t need to do anything to gain the benefits.
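To illustrate the kind of change involved (this is a made-up example, not actual C3DL code): the tracing JIT handles plain loops over flat arrays well, but per-frame callbacks and allocations can leave a hot path stuck in the interpreter.

// Hypothetical hot path: allocates a callback and a fresh array every
// frame, a pattern the tracing JIT can struggle with.
function scaleVertsSlow(verts, s) {
  return verts.map(function (v) { return v * s; });
}

// Trace-friendly version: a plain loop writing into a preallocated
// (ideally typed) array, which can compile to a tight native loop.
function scaleVertsFast(verts, out, s) {
  for (var i = 0, len = verts.length; i < len; i++) {
    out[i] = verts[i] * s;
  }
  return out;
}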

Billboarding

We can help user scripts stay fast by adding billboarding support. If a user wants to render a high-quality model far from the camera, they may be better off drawing a textured quad which orients itself towards the camera. This shouldn’t be difficult; we already have this code in the particle system.
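The math is cheap, too. Here’s a sketch of the usual trick, assuming a column-major 4x4 view matrix like the ones M4x4 produces: the camera’s right and up vectors can be read straight out of the view matrix’s rotation rows, and a quad built from them always faces the camera.

// Build the four world-space corners of a camera-facing quad.
// 'view' is a column-major 4x4 view matrix; its first and second rows
// hold the camera's right and up vectors in world space.
function billboardCorners(center, size, view) {
  var right = [view[0], view[4], view[8]];
  var up = [view[1], view[5], view[9]];
  var signs = [[-1, -1], [1, -1], [1, 1], [-1, 1]];
  var corners = [];
  for (var i = 0; i < 4; i++) {
    var sx = signs[i][0] * size;
    var sy = signs[i][1] * size;
    corners.push([
      center[0] + right[0] * sx + up[0] * sy,
      center[1] + right[1] * sx + up[1] * sy,
      center[2] + right[2] * sx + up[2] * sy
    ]);
  }
  return corners; // upload these as the textured quad's vertices each frame
}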

Discrete Level of Detail

Next, we could add support for discrete level of detail. Why render a highly detailed object far from the camera when it will only occupy a few dozen pixels on the canvas? If the user could provide versions of their model which range in detail, the library could switch between them as the camera approaches the object. The trade-off would be the noticeable ‘pop’ as the models are switched. To address that, we could implement a form of continuous level of detail, but this isn’t something we should consider soon.
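The selection itself would be simple. Here’s a sketch of the idea (the data layout and model names are hypothetical):

// Each entry pairs a model version with the camera distance
// up to which it should be used.
var lodLevels = [
  { maxDistance: 10, model: "teapot_high" },
  { maxDistance: 50, model: "teapot_medium" },
  { maxDistance: Infinity, model: "teapot_low" }
];

// Pick the first level whose range covers the camera's distance.
function pickLOD(levels, cameraPos, objectPos) {
  var dx = cameraPos[0] - objectPos[0];
  var dy = cameraPos[1] - objectPos[1];
  var dz = cameraPos[2] - objectPos[2];
  var dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
  for (var i = 0; i < levels.length; i++) {
    if (dist <= levels[i].maxDistance) {
      return levels[i].model; // crossing a threshold causes the visible 'pop'
    }
  }
}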

Spatial Partitioning

Spatial partitioning can provide the library with useful high-level information about the scene to render. By splitting up the scene, we could perform more hidden surface removal. Also, since the scene would be sorted, we could reduce depth buffer reading and writing, improving performance… theoretically. I ran some tests for this, but I didn’t see any improvement.
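For reference, the sort I tested was along these lines (a sketch, not the exact test code): draw opaque objects front to back so the depth test can reject hidden fragments early.

// Sort opaque objects nearest-first by squared distance to the camera.
function sortFrontToBack(objects, cameraPos) {
  function squaredDist(p) {
    var dx = p[0] - cameraPos[0];
    var dy = p[1] - cameraPos[1];
    var dz = p[2] - cameraPos[2];
    return dx * dx + dy * dy + dz * dz;
  }
  objects.sort(function (a, b) {
    return squaredDist(a.position) - squaredDist(b.position);
  });
  return objects;
}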

Benchmarks

Finally, we need to update our performance benchmark script and integrate it into our release procedure. Keeping a log of performance results (and visualizing it with Processing.js?) will help us make sure future releases stay fast.
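The harness itself doesn’t need to be fancy; something along these lines would do (the function names here are hypothetical):

// Render a fixed number of frames and report timing stats that can be
// logged and compared across releases.
function benchmark(renderFrame, frameCount) {
  var start = new Date().getTime();
  for (var i = 0; i < frameCount; i++) {
    renderFrame();
  }
  var elapsed = new Date().getTime() - start;
  return {
    frames: frameCount,
    totalMs: elapsed,
    avgMs: elapsed / frameCount,
    fps: 1000 * frameCount / elapsed
  };
}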

Of course, we shouldn’t limit ourselves to only these tickets. These are just some things I thought we could implement. We can (and should) look into other techniques such as portals, antiportals, pre-lit geometry, instancing, displacement mapping, occlusion culling, and so on.

Modularizing C3DL

Over the Summer Matthew Postill has done some substantial work on C3DL. He’s fixed bugs, added collision detection and sped up rendering with frustum culling.

We’re expecting the library to continue to grow in size and features. The problem is that not all of these features will be wanted by every developer using the library. If a user only wants to render a teapot with C3DL, why force them to load the library in its entirety, with a particle system, a bunch of shaders and collision detection? Cathy has suggested we could tackle this problem by modularizing the library. This would impact the internals of the library quite a bit, but it would also provide much more flexibility.

Firstly, it would allow developers to build the library themselves, a bit like jQuery. The user would select the components they need and the library would be built containing only those selections.

Secondly, it would allow other developers to create their own components and hook them into C3DL. This means developers could write their own model parsing code instead of being forced to use ours.
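Nothing like this exists in C3DL yet, but a registration hook could look something like this sketch (all the names here are hypothetical):

// Let outside code register a parser for a model file extension.
var c3dl = c3dl || {};
c3dl.parsers = c3dl.parsers || {};
c3dl.registerParser = function (extension, parseFunc) {
  c3dl.parsers[extension] = parseFunc;
};

// A developer could then plug in their own model format:
c3dl.registerParser("ply", function (rawText) {
  // ...parse the text and return geometry in whatever structure
  // the renderer expects...
  return { vertices: [], normals: [] };
});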

Another related change would entail offering release and debug versions. The debug version could include parameter checking at the start of each function, which would be omitted from the release build.
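One way to get that cheaply is to wrap functions with validators in the debug build and export the bare functions in release. A rough sketch (again hypothetical, not existing C3DL code):

// Debug build: wrap a function so its arguments are validated first.
// The release build would export 'fn' directly and skip the checks.
function debugWrap(name, validate, fn) {
  return function () {
    validate.apply(null, arguments); // throws a descriptive error on bad input
    return fn.apply(this, arguments);
  };
}

// Example: a setter that must receive a positive number.
var setDistance = debugWrap("setDistance",
  function (d) {
    if (typeof d !== "number" || isNaN(d) || d <= 0) {
      throw new Error("setDistance: expected a positive number, got " + d);
    }
  },
  function (d) {
    /* ...the real work... */
  }
);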

I’m not going to kid myself. There’s a significant amount of work required to make this happen, but there’s also an immense payoff. So I’m excited to start working on this, but I know I’ll need some help. I’d love to hear from any developer who has had experience implementing something along these lines.

Summer Reflections

During the Summer I had the opportunity to work with some highly motivated and intelligent developers at Seneca’s Centre for Development of Open Technology. For four months we cranked out code for several exciting technologies such as C3DL, NexJ, the Fedora ARM project, XB PointStream, Popcorn and Processing.js.

This was the first time I had worked at CDOT with so many developers working on so many projects. Almost all the projects dealt exclusively with JavaScript, but we also had to work with other libraries and standards like WebGL, JSON and video. As the technologies we worked with varied, so did our challenges: documentation was scarce, standards and APIs changed, or our code simply didn’t work and we needed help.

What made working in the CDOT environment actually work was communication. Our days began with a morning Scrum meeting where we shared the problems we had stumbled into the day before along with our success stories. The meetings were brief (only 10 minutes), but on a few occasions they were invaluable. As we stated our problems, our colleagues offered their ideas and opinions: “Have you tried looking into something like this…?” or “You should read this blog…”. We didn’t always have the answers, but we were good at pointing someone in the right direction.

And sometimes we did have the answers. Our cubicles were close together, so it made sense to ask the resident regular expression expert a question that would save us half an hour while only stealing a few minutes of their time. Other times we posed questions to other developers on IRC or received extremely useful suggestions on our blog posts.

We also took the opportunity to meet face-to-face with our industry partners. Developers at Arius3D gave us guidance, tips and valuable resources. Down at the Toronto Mozilla office we were given a WebGL walkthrough and help with relevant WebGL tools. We also worked closely with Brett Gaylor, a filmmaker working with us on the video tag. Others of us met with developers from NexJ and Fedora.

All these forms of communication were important to the development of the technologies we worked on, and they remind me how crucial communication is for open source development.

Now that the Fall semester has started, I’m back at Seneca taking classes, but I’m still excited to be working at CDOT on XB PointStream, C3DL and Processing.js.

Initial Thoughts on High-Quality Surface Splats with WebGL

Last week Mickael and I met up with developers from Arius3D. They specialize in 3D imaging: scanning 3D objects, displaying them on screens and printing them out in 3D. They have a free ActiveX plug-in for IE called 3DImageSuite, which allows users to view these 3D objects.

The problem is that users are limited to a single browser to view this content, and 3DImageSuite is only available for Windows. So even though IE “dominates” the browser world, Arius3D is shutting out its non-IE users.

Because of our experience developing JavaScript libraries which specialize in 3D rendering, we have partnered with them to solve this problem. We’ll be using WebGL to do the rendering, not just because it’s what we specialize in, but also because it’s becoming a standard. Once WebGL is fully implemented in the other browsers and we complete this technology, their content will render cleanly without any add-ons. It will be IE users who are at a disadvantage when trying to view this content.

3DImageSuite displays point clouds which consist of XYZ and RGB data, so the problem sounds trivial, especially since we already have the capacity to render 3D points in both C3DL and Processing.js. The real problem is that the 3D scans are very dense; they are composed of millions of points, and neither of our libraries can practically render that much data.

To get started on this project, I began reviewing my meeting notes with the Arius3D developers and started brainstorming for ideas and thoughts on the solution.

• One of the first things I thought about was allowing the user to control the level of detail. This can be extremely valuable user input: if the user doesn’t require 100% detail, there is no point in wasting resources downloading and rendering it. This can be the case when they are on a limited mobile platform.

• We are in the process of trying to speed up Processing.js by making sure the browser can JIT the JavaScript, letting it run at closer to native speed. We’ll also be trying to push as much processing as we can onto the GPU.

• Use compression on server side and decompress data on the client side to reduce download times.

• Implement some form of spatial partitioning, such as an octree to cull splats.

• Stream the data so lower quality ‘versions’ of a model can be displayed quickly and progressively gain quality.

• Use static rendering. That is, render the model only once rather than in ‘real time’. If the user rotates the object, render a low-quality version while they are rotating it; once they stop, re-render 100% of the object (see the sketch after this list). I had seen this done before but forgot about it; luckily someone reminded me at OCE. Thanks!

• Because self-shadowing is something which Arius3D is considering, we may need to use ray tracing instead of simple Blinn-Phong shaders.
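Here is a sketch of that static rendering idea from the list above (the canvas, renderCloud function and detail fractions are hypothetical stand-ins; only the DOM events are real):

// While the user is dragging, draw a cheap fraction of the points;
// shortly after they stop, re-render the full cloud exactly once.
var idleTimer = null;
var dragging = false;

canvas.onmousedown = function () {
  dragging = true;
  clearTimeout(idleTimer);
};

canvas.onmousemove = function () {
  if (dragging) {
    renderCloud(0.1); // e.g. 10% of the points while rotating
  }
};

canvas.onmouseup = function () {
  dragging = false;
  // Give the user a moment, then do the one expensive full render.
  idleTimer = setTimeout(function () {
    renderCloud(1.0);
  }, 200);
};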

These are just some of the initial thoughts I’ve had, along with ideas from the developers. I’ve begun some research on surface splatting and spatial partitioning, and the problem seems very interesting and exciting. I think more research, development and talks with the developers will lead us in the right direction.

Study Week Update


Reading week is over and, just like last semester, I didn’t do much reading, or studying for that matter. Last semester I indulged in a solitary week-long hack-a-thon, attempting to port Crayon Physics to the Web using Processing.js; you can find the final (unfinished) demo here.

This reading week I worked on C3DL. I fixed a few bugs, added support for up_axis in COLLADA models, and continued work on the beginnings of an RTS game using the library. You’ll need a WebGL-compatible browser to view the demo. If you don’t have one, you can take a look at a video I made.

Rendering Blending Objects Last

This morning I added a static outline to a building in the C3DL RTS demo. I soon realized there was something wrong with the way the final scene looked. Here is a screenshot; can you see what’s wrong?

The problem is that the lines (which make up the circle) are being rendered after the particle system. Since the particle system uses blending, it needs to be rendered last: it takes whatever is in the color buffer and blends it with its own colors, rather than simply overwriting the color buffer fragments the way the lines do. Here is how it should look:

Obviously we didn’t have a demo before which used both particle systems and lines, which is why this little bug managed to slip its way into the library. The fix was simple, but I’ll need to add a ticket to our Lighthouse account before I can get the changes committed.
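For anyone curious, the fix boils down to the standard draw-order rule. In plain WebGL terms it looks something like this sketch (the draw functions are hypothetical stand-ins for the C3DL internals):

// Draw all opaque geometry first, then the blended particles last.
function renderScene(gl) {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  // Opaque pass: solid geometry and the outline lines write straight
  // into the color buffer with normal depth testing.
  gl.disable(gl.BLEND);
  drawBuildings(gl);
  drawOutlineCircle(gl);

  // Blended pass: the particles mix their colors with whatever is
  // already in the color buffer, so everything else must be there first.
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
  gl.depthMask(false); // transparent fragments shouldn't occlude each other
  drawParticleSystem(gl);
  gl.depthMask(true);
}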