Adding GIS Requirements to XB PointStream

GIS Data rendered using XB PointStream

I’m currently working with XB PointStream, a WebGL tool which streams and renders 3D images in the browser. When Mickael Medel and I started working on this, we were advised to keep the tool general, since the problem of streaming massive amounts of points is present in several industries. By developing a generic solution for our client (Arius3D), we would be helping other industries as well.

We got started by setting up a GitHub repository, a wiki page, and a Lighthouse account. We worked in the open, filing issues on Lighthouse as we went. However, Mike and I were almost the only ones commenting on and managing the tickets.

We were surprised when someone posted a comment stating that our library was no longer rendering their data. This was great news for two reasons. Firstly, we had another person interested in the library (yay!). Secondly, the data this user, Paul, is working with isn’t the conventional point cloud data we are used to rendering. Instead of a small 3D image, he’s working with GIS data.

Arius3D did provide us with GIS data (pictured above) and it serves as a good demo, but its point count is small compared to data from other systems. The scan above was probably done using a stationary LIDAR device. But what if the data was aerially scanned over a long distance and contained hundreds of millions of points? If a point cloud is never expected to fully download, our library would fail. This is the case with Paul’s data, and it introduces some new requirements.

New Requirements

To render potentially hundreds of millions of points in real time, our library needs not only to stream large amounts of data, but also to discard or swap chunks of points when they’re no longer needed.
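To make the idea concrete, here is a minimal sketch of how chunk swapping could work, using a least-recently-used eviction policy. The names here (`ChunkCache`, `put`, `touch`) are hypothetical illustrations, not part of the XB PointStream API:

```javascript
// Sketch of an LRU cache for point chunks. When the cache is full,
// the chunk that hasn't been used for the longest time is discarded.
function ChunkCache(maxChunks) {
  this.maxChunks = maxChunks;
  this.chunks = {};   // chunk id -> Float32Array of xyz coordinates
  this.order = [];    // chunk ids, least recently used first
}

// Mark a chunk as most recently used (e.g. it was just rendered).
ChunkCache.prototype.touch = function (id) {
  var i = this.order.indexOf(id);
  if (i !== -1) this.order.splice(i, 1);
  this.order.push(id);
};

// Insert a streamed-in chunk, evicting the stalest one if necessary.
ChunkCache.prototype.put = function (id, points) {
  if (!(id in this.chunks) && this.order.length >= this.maxChunks) {
    var evicted = this.order.shift();  // least recently used chunk
    delete this.chunks[evicted];       // (its GPU buffer would be freed here)
  }
  this.touch(id);
  this.chunks[id] = points;
};
```

In a real renderer, eviction would also delete the chunk’s vertex buffer on the GPU; the point of the sketch is just that the working set stays bounded no matter how many points the full cloud contains.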

Another interesting problem related to GIS data is how to color the points. Users may not just have colors for visible light; they may also have infrared data. How can we add a feature which allows users to swap between color modes? We’ll need to work on fleshing out a use case and requirements for this.
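One possible shape for this feature, sketched below under the assumption that each mode is just a separate per-point color array (the `ColorModes` name and methods are hypothetical, not an existing API):

```javascript
// Hypothetical color-mode registry: each mode (e.g. "rgb", "infrared")
// keeps its own per-point color array, and the renderer uploads
// whichever one is currently selected as the vertex color attribute.
function ColorModes() {
  this.modes = {};     // mode name -> Float32Array of per-point colors
  this.current = null; // first mode added becomes the default
}

ColorModes.prototype.add = function (name, colors) {
  this.modes[name] = colors;
  if (this.current === null) this.current = name;
};

ColorModes.prototype.select = function (name) {
  if (!(name in this.modes)) throw new Error("unknown color mode: " + name);
  this.current = name;
};

ColorModes.prototype.colors = function () {
  // On a mode change the renderer would re-upload this array with
  // gl.bufferData(gl.ARRAY_BUFFER, ...) before drawing.
  return this.modes[this.current];
};
```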

Working with GIS data will give us new perspectives on rendering point clouds and the opportunity to add interesting features, ones that are useful yet keep the library general.


Improving C3DL Performance

Last week Cathy asked me to come up with some ideas on how to speed up C3DL. We already have a few techniques which give us pretty good performance improvements, like using typed arrays when possible or culling objects using bounding sphere/frustum tests. But there’s still a way to go in terms of speed.
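For readers unfamiliar with the sphere/frustum test mentioned above, here is the core of it, sketched as standalone math (the function name and plane representation are my own, not C3DL’s):

```javascript
// Sphere-vs-frustum culling test. Each frustum plane is [a, b, c, d]
// with a unit normal (a, b, c) pointing inward, so a point is inside
// the half-space when a*x + b*y + c*z + d >= 0.
function sphereOutsideFrustum(center, radius, planes) {
  for (var i = 0; i < planes.length; i++) {
    var p = planes[i];
    var dist = p[0] * center[0] + p[1] * center[1] +
               p[2] * center[2] + p[3];
    // Completely behind one plane means the whole sphere is
    // invisible, so the object can be skipped this frame.
    if (dist < -radius) return true;
  }
  return false; // inside or intersecting: must be drawn
}
```

An object’s bounding sphere is tested against all six planes of the camera frustum; if it fails even one plane entirely, the object is never sent to the GPU.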

I spent some time researching topics and thinking about different solutions to this problem and then I filed the following tickets which I thought would decrease rendering time.


One thing we can do is make sure our code stays on trace. There are a few places in our library which Minefield can’t trace-compile. This means there are parts of the library where every rendered frame runs interpreted, ‘slow’ JavaScript. We need to review these bits in our code and fix them. This would be a definite win, since users don’t need to do anything to gain the benefits.


We can help user scripts stay fast by adding billboarding support. If a user wants to render a high-quality model far from the camera, they may be better off drawing a textured quad which orients itself towards the camera. This shouldn’t be difficult; we already have this code in the particle system.

Discrete Level of Detail

Next, we could add support for discrete level of detail. Why render a highly detailed object far from the camera when it will only occupy a few dozen pixels on the canvas? If the user could provide versions of their model which range in detail, the library could switch between them as the camera approaches the object. The trade-off would be a noticeable ‘pop’ as the objects are switched. We could eventually address that with a form of continuous level of detail, but that isn’t something we should consider soon.
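The selection logic itself is simple; here is one possible sketch, assuming the user supplies levels sorted from most to least detailed, each with a maximum usable camera distance (the structure and names are illustrative, not a proposed API):

```javascript
// Discrete LOD picker: returns the model for the first level whose
// distance threshold the camera is within. Levels must be sorted
// from most detailed (smallest maxDistance) to least detailed.
function pickLOD(levels, cameraPos, objectPos) {
  var dx = cameraPos[0] - objectPos[0];
  var dy = cameraPos[1] - objectPos[1];
  var dz = cameraPos[2] - objectPos[2];
  var dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
  for (var i = 0; i < levels.length; i++) {
    if (dist <= levels[i].maxDistance) return levels[i].model;
  }
  return levels[levels.length - 1].model; // farther than all: coarsest
}
```

The ‘pop’ mentioned above happens exactly when `dist` crosses one of the `maxDistance` thresholds between frames.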

Spatial Partitioning

Spatial partitioning can provide the library with useful high-level information about the scene to render. By splitting up the scene, we could perform more hidden surface removal. Also, since the scene would be sorted, we could reduce depth buffer reads and writes, improving performance, at least theoretically. I ran some tests for this, but I didn’t see any improvement.
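The sorting idea referred to above is front-to-back ordering: if near objects are drawn first, the depth test can reject hidden fragments before the fragment shader does any work. A minimal sketch of that sort (generic, not C3DL code):

```javascript
// Sort scene objects front-to-back relative to the camera, so that
// the depth test rejects occluded fragments as early as possible.
// Squared distance is enough for ordering, so the sqrt is skipped.
function sortFrontToBack(objects, cameraPos) {
  function dist2(o) {
    var dx = o.position[0] - cameraPos[0];
    var dy = o.position[1] - cameraPos[1];
    var dz = o.position[2] - cameraPos[2];
    return dx * dx + dy * dy + dz * dz;
  }
  return objects.slice().sort(function (a, b) {
    return dist2(a) - dist2(b);
  });
}
```

Whether this wins anything depends on how expensive the fragments are and how much overdraw the scene actually has, which may be why my tests showed no improvement.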


Finally, we need to update our performance benchmark script and integrate it into our release procedure. Keeping a log of performance results (and visualizing it with Processing.js?) will help us make sure future releases stay fast.
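The core of such a script can be as small as the following; this is a generic timing sketch, not our actual benchmark code, and a real run would render representative scenes rather than an empty callback:

```javascript
// Time a render callback over a fixed number of frames and report
// the average milliseconds per frame; logging this per release
// makes performance regressions visible.
function benchmark(renderFrame, frames) {
  var start = Date.now();
  for (var i = 0; i < frames; i++) {
    renderFrame();
  }
  return (Date.now() - start) / frames;
}
```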

Of course, we shouldn’t limit ourselves to only these tickets; these are just some things I thought we could implement. We can (and should) look into other techniques such as portals, antiportals, pre-lit geometry, instancing, displacement mapping, occlusion culling, and so on.