Normal Mapping using PShaders in Processing.js

Try my normal mapping PShader demo: normalMap

Last year I made a very simple normal map demo in Processing.js and posted it on OpenProcessing. It was fun to write, but the performance bothered me: it was very slow because it used a 2D canvas, so there was no hardware acceleration.

Now, I have been working on adding PShader support to Processing.js in my spare time, so here and there I'll make a few updates. After recently fixing a bug in my implementation, I had enough working to port my normal map demo over to shaders. Instead of having the lighting calculations in the sketch code, I could have them in GLSL shader code, which I figured should increase the performance quite a bit.

Converting the demo from Processing/Java code to GLSL was pretty straightforward, aside from working out a couple of annoying bugs. I got the demo to resemble what I originally had a year ago, but now the performance is much, much, much better 🙂 I'm no longer limited to a tiny 256×256 canvas; I can use the full client area of the browser. Even with specular lighting, it runs at a solid 60 fps. Yay!

If you’re interested in the code, here it is. It’s also posted on GitHub.

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 iResolution;
uniform vec3 iCursor;

uniform sampler2D diffuseMap;
uniform sampler2D normalMap;

void main(){
  // Texture coordinates; the divisor assumes 512x512 textures.
  vec2 uv = gl_FragCoord.xy / 512.0;
  uv.y = 1.0 - uv.y;

  // Vector from this fragment to the light, which follows the cursor.
  vec2 p = vec2(gl_FragCoord);
  float mx = p.x - iCursor.x;
  float my = p.y - (iResolution.y - iCursor.y);
  float mz = 500.0;

  vec3 rayOfLight = normalize(vec3(mx, my, mz));

  // Unpack the normal from [0, 1] to [-0.5, 0.5], then normalize.
  vec3 normal = vec3(texture2D(normalMap, uv)) - 0.5;
  normal = normalize(normal);

  // Diffuse (Lambert) term.
  float nDotL = max(0.0, dot(rayOfLight, normal));
  vec3 reflection = normal * (2.0 * nDotL) - rayOfLight;

  vec3 col = vec3(texture2D(diffuseMap, uv)) * nDotL;

  // Specular highlight, toggled by iCursor's z flag.
  if(iCursor.z == 1.0){
    float specIntensity = max(0.0, dot(reflection, vec3(0.0, 0.0, 1.0)));
    float specRaised = pow(specIntensity, 20.0);
    vec3 specColor = specRaised * vec3(1.0, 0.5, 0.2);
    col += specColor;
  }

  gl_FragColor = vec4(col, 1.0);
}

Wibbles!


I wanted to learn the basics of require.js and pixi.js, so I thought it would be fun to create a small game to experiment with these libraries. I decided to make a clone of a game I used to play: Nibbles, a QBASIC game I played on my 80386.

Getting started with require.js was pretty daunting. There’s a bunch of documentation, but I found much of it confusing. Examples online helped some, but experimenting to see what worked and what didn’t helped me the most. pixi.js, on the other hand, was very, very similar to Three.js, so much so that I found myself guessing the API and being right most of the time. It’s a fun 2D WebGL rendering library with a canvas fallback. It was overkill for what I was working on, but it was still a good learning experience.

Gomba 0.2


I’ve been busy nursing my cat back to health, so I missed blogging last Saturday 😦 He’s doing a bit better, so I’m trying to stay hopeful.

Today I did manage to find some time to catch up on my blogging, so here are the major changes on Gomba:

  • Fixed a major physics issue (running too quick & jumping was broken)
  • Added coinbox
  • Fixed kicking a sprite from a brick
  • Added render layers

Rendering Layers

The most significant change I added was rendering layers. This lets me specify a layer for each gameobject. Clouds and background objects must exist on lower layers, things like coins should be a bit higher, and the goombas, Mario, and other sprites higher still. You can think of each layer as one of those transparent sheets high school teachers use for overhead projectors. (Do they have digital projectors yet??) I can also change a gameobject’s layer at runtime, so when a goomba is ‘kicked’, I can move it to the very top layer (closest to the user) so that it appears as if the sprite is being removed from the world. Rendering it under the bricks would just look strange.

I used a binary tree to internally manage the rendering of the layers. This was probably overkill; I could have just used an array and dynamically resized it whenever a layer index was too high. Ah well. I plan to abstract the structure even further so the implementation is unknown to the scene; there’s a rough sketch of the idea below. I also need to fix tunnelling issues and x-collision issues too… maybe for next month.
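Gomba itself is written in Processing, but the layer idea is easy to sketch in C++. This is a hypothetical illustration, not my actual implementation: an ordered map (a balanced binary tree under the hood) keyed on layer index, with made-up names like LayerManager.

#include <cstddef>
#include <map>
#include <vector>

struct GameObject {
  virtual void draw() = 0;
  virtual ~GameObject() {}
};

class LayerManager {
  // Maps a layer index to the gameobjects on that layer. std::map iterates
  // keys in ascending order, so drawAll() renders back-to-front.
  std::map<int, std::vector<GameObject*> > layers;

public:
  void add(GameObject* obj, int layer){
    layers[layer].push_back(obj);
  }

  // Changing a layer at runtime (the 'kicked' goomba case) is a remove + add.
  void move(GameObject* obj, int from, int to){
    std::vector<GameObject*>& v = layers[from];
    for(std::size_t i = 0; i < v.size(); i++){
      if(v[i] == obj){
        v.erase(v.begin() + i);
        break;
      }
    }
    add(obj, to);
  }

  void drawAll(){
    for(auto& layer : layers){
      for(GameObject* obj : layer.second){
        obj->draw();
      }
    }
  }
};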

Gomba 0.15


Play demo

I’m releasing a 0.15 version of Gomba, a component-based Processing platform game. I’m trying to be consistent about releases, so that means making a release every 4 weeks. I didn’t get everything I wanted into this release, so it’s not quite a 0.2. In any event, here are some of the changes that did make it in:

– Added platforms!
– Added audio channels for sound manager
– Many of the same component type can now be added to a gameobject
– Added goombas & squashing functionality
– Added functionality to punch bricks
– Fixed requestAnimationFrame issue for smoother graphics

I’m excited that I now have a sprite that can actually jump on things. But adding this functionality also introduced a bunch of bugs I now have to address. I have a list of issues I’m going to be tackling for the next 4 weeks, which should be fun.

Gomba 0.1

Play demo

I was reading Daniel Shiffman’s Processing book Nature of Code and came to a section dealing with physics. I hadn’t written many sketches that use physics calculations, so I figured it would be fun to implement a simple runner/platformer game in Processing that uses forces, acceleration, velocity, etc.

I decided to use a component-based architecture and I found it surprisingly fun to create components and tack them on to game objects. So far, I only have a preliminary amount of functionality done and I still need to sort out most of the collision code, but progress is good.

This marks my 0.1 release. I still have quite a way to go, but it’s a start. You can take a look at the code on GitHub or play around with the demo.

I got a bunch of inspiration from Pomax. He’s already created a Processing.js game engine you can check out here.

BTW “gomba” in Hungarian is mushroom 🙂

Understanding Raycasting Step-by-Step


Introduction

For some time I’ve wanted to understand how raycasting works. Raycasting is a graphics technique for rendering 3D scenes and it was used in old video games such as Wolfenstein 3D and Doom. I recently had some time to investigate and learn how this was done. I read up on a few resources, digested the information, and here is my understanding of it, paraphrased.

The ideas here are not my own. The majority of the code is from the references listed at the bottom of this post. I wrote this primarily to test my own understanding of the techniques involved.

Raycasting vs. Raytracing

Raycasting is not the same as raytracing. Raycasting casts width number of rays into a scene; it is a 2D problem and is non-recursive. Raytracing, on the other hand, casts width*height number of rays into a 3D scene, and those rays can bounce off several objects before a final color is calculated. It is much more involved and isn’t typically used for real-time applications.

Framework

Having spent a few years playing with Processing and Processing.js, I felt it was time for me to move on to OpenFrameworks. I’ve always favored C++ over Java, so this was an excuse to make my first OpenFrameworks application. My first one! (:

Background Theory

The idea is that we draw a very simple 3D world by casting viewportWidth number of rays into a scene. These rays are based on the player’s position, direction, and field of view. Our scene is primitive: a 2D array populated with integers, where each nonzero value represents a different colored cell/block and 0 represents empty space.

We iterate from the left side of the viewport to the right, creating a ray for every column of pixels. Once a ray hits the edge of a cell, we calculate its length. Based on how far away the edge is, we draw a shorter or longer single-pixel vertical line centered in the viewport. This is how we achieve foreshortening.

Since our implementation does not involve any 3D geometry, this technique can be implemented on anything that supports a 2D context. This includes HTML5 canvas and even TI-83 devices.

What really excites me about this method is that the complexity is not dependent on the number of objects in the level, but instead on the number of horizontal pixels in the viewport! Very cool!!

Initial Setup

Inside an empty OpenFrameworks template, you will have an ofApp.h header file. In this file we declare a few variables: pos, right, dir, FOV, and rot all define properties of our player. The width and height variables are aliases, and worldMap defines our world.

#pragma once

#include "ofMain.h"

class ofApp : public ofBaseApp{
  private:
  ofVec2f pos;
  ofVec2f right;
  ofVec2f dir;
  float FOV;
  float rot;

  int width;
  int height;
    
  int worldMap[15][24] = {
    {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,3,3,3,0,0,0,0,0,3,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,3,0,0,3,0,0,0,0,0,0,3,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,2,2,0,0,0,2,2,0,0,0,0,0,0,1},
    {1,0,0,0,3,0,3,0,0,0,2,2,0,0,0,2,2,0,0,0,0,0,0,1},
    {1,0,0,0,3,3,3,0,0,0,2,2,0,0,0,2,2,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,3,0,3,0,3,0,3,0,3,0,3,0,3,0,3,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}
  };
    
  public:
  void setup();
  void update();
  void draw();
		
  void keyPressed(int key);
  void keyReleased(int key);
  
  ~ofApp();
};

In the setup method of our ofApp.cpp implementation file we assign initial starting values to our player. The position is where the player will be relative to the top left of the array/world. We don’t assign the direction and right vectors here since those are set in update() every frame.

void ofApp::setup(){
  pos.set(5, 5);
  FOV = 45;
  rot = PI;
  
  width = ofGetWidth();
  height = ofGetHeight();
}

Setting the Direction and Right Vectors

We need two crucial vectors, the player’s direction and right vector.

In update(), we calculate these vectors based on the scalar rot value. (The keyboard will later drive the rot value, thus rotating the view).

Once we have the direction, we can calculate the perpendicular vector using a perp operation. There is a method that performs this task for you, but I wanted to demonstrate how easy it is here: it just involves swapping the components and negating one.

If our rot value is initially PI, we end up looking toward the left inside the world. Since our world is defined by an array, y increases downward in our world coordinate system.

void ofApp::update(){
  // Rebuild the direction vector from the scalar rotation.
  dir.x = cos(rot);
  dir.y = sin(rot);
  
  // The perp operation: swap the components and negate one.
  right.x = -dir.y;
  right.y = dir.x;
}

Drawing a Background

We can now start getting into our render method.

To clear each frame we are going to draw 2 rectangles directly on top of the viewport. The top half of the viewport will be the sky and the bottom will be grass.

void ofApp::draw(){
  // Blue rectangle for the sky
  ofSetColor(64, 128, 255);
  ofRect(0, 0, width, height/2);
  
  // Green rectangle for the ground (height/2 tall, to cover the bottom half)
  ofSetColor(0, 255, 0);
  ofRect(0, height/2, width, height/2);

Calculating the Base Length of the Viewing Triangle

The FOV determines the angle from which the rays shoot from the player’s position. The exact center ray is our player’s direction. Perpendicular to this ray is a line which connects to the far left and far right rays. This all forms a little isosceles triangle, the apex being the player’s position. The base of the triangle is the camera line magnitude, which is what we are trying to calculate. We need it because it tells us how far left and right the rays will span: the greater the FOV, the wider the span.

Using some simple trigonometry, we can get this length. Note that we divide by two since we’ll be generating the rays in two parts: the left side of the direction vector and the right side. With FOV = 45, this works out to tan(22.5°) ≈ 0.414.

  float camPlaneMag = tan( FOV / 2.0f * (PI / 180.0) );

Generating the Rays

Before we start the main render loop, we declare rayDir which will be re-used for each loop iteration.

  ofVec2f rayDir;

Each iteration of this loop is responsible for rendering a single-pixel vertical line in the viewport, from the far left side of the screen to the far right.

  for(int x = 0; x < width; x++){

For each vertical slice, we need to generate a ray casting out into the scene. We do this by mapping the screen coordinates [0 to width-1] to [-1 to 1]: x = 0 maps to -1 and x = width-1 maps to just under +1. When the mapped value is 0 (the middle of the screen), the ray cast for intersection tests will be collinear with our direction vector.

currCamScale drives the direction of each ray by scaling the right vector.

    float currCamScale = (2.0f * x / float(width)) - 1.0f;

Here is where we generate our rays.

Our right vector is scaled depending on the base of our triangle viewing area. Then we scale it again depending on which slice we are currently rendering. If x = 0, currCamScale maps to -1 which means we are generating the farthest left ray.

Add the resultant vector to the direction vector and this will form our current ray shooting into our scene.

    rayDir.set(dir.x + (right.x * camPlaneMag) * currCamScale, 
               dir.y + (right.y * camPlaneMag) * currCamScale);

Calculating the Magnitude Between Edges

Each ray travels inside a 2D world defined by an array populated with integers. Most of the values are 0, representing empty space. Anything else denotes a wall, which is 1×1 unit in dimensions. The integers thus help define the ‘cell’ edges, and these edges will be used to perform intersection tests with the rays.

What we are trying to figure out is: for each ray, when does it hit an edge? To answer that question we’ll need to figure out some things:

1. What is the magnitude from the player’s position traveling along the current ray to the nearest x edge (and y edge)? Note I say magnitude because we will be working with SCALAR values. I got confused here trying to figure this part out because I kept thinking about vectors.

2. What is the magnitude from one x edge to the next x edge? (And y edge to the next y edge.) We aren’t trying to figure out the vertical/horizontal distance, that’s just a unit of 1. Instead, we need to calculate the hypotenuse of the triangle formed from the ray going from one edge to the other. Again, when calculating the magnitude for x, the horizontal base leg length of the triangle formed will be 1. And when calculating the magnitude for y, the vertical height of the triangle will be 1. Drawing this on paper helps.

Once we have these values we can start running the DDA algorithm: select the nearest x or y edge, test whether that edge has a filled block associated with it, and if not, increment some values. We’ll have two scalar values ‘racing’ one another.

Okay, let’s answer the second question first. Based on the direction, what is the magnitude/hypotenuse length from one x edge to another? We know horizontally, the length is 1 unit. That means we need to scale our ray such that the x component will be 1.

What value multiplied by x will result in 1? The answer is the inverse of x! So if we multiply the x component of the ray by 1/x, we’ll need to do the same for y to maintain the ray direction. This gives us a new vector:

v = [ rayDir.x * 1 / rayDir.x , rayDir.y * 1 / rayDir.x]

We already know the result of the calculation for the first component, so we can simplify:

v = [ 1, rayDir.y / rayDir.x ]

Then get the magnitude using the Pythagorean theorem.

    // Note: if rayDir.x or rayDir.y is 0, these divisions blow up;
    // a robust implementation should guard against that case.
    float magBetweenXEdges = ofVec2f(1, rayDir.y * (1.0 / rayDir.x)).length();
    float magBetweenYEdges = ofVec2f(rayDir.x * (1.0 / rayDir.y), 1).length();
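As a quick sanity check, suppose rayDir = (0.8, 0.6). Then:

magBetweenXEdges = |(1, 0.6/0.8)| = |(1, 0.75)| = 1.25
magBetweenYEdges = |(0.8/0.6, 1)| = |(1.33, 1)| = 1.67

So this ray covers 1.25 units of length for every 1 unit it travels horizontally, and 1.67 units of length for every 1 unit it travels vertically.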

Calculating the Starting Magnitude

For this we have to calculate the length from the player’s position to the nearest x and y edges. To do this, we get the player’s relative position within their current cell and use that as a scale for the magnitude between the edges. The current cell indices, worldIndexX and worldIndexY, start out as the integer parts of the player’s position.

Here we keep track of which direction the ray is going by setting dirStepX which will be used to jump from cell to cell in the correct direction.

For x, we have a triangle with a base horizontal length of 1 and some magnitude for its hypotenuse which we recently figured out. Now, based on the relative position of the user within a cell, how long is the hypotenuse of this triangle?

If the x position of the player was 0.2:

magToXedge = (0 + 1 - 0.2) * magBetweenXEdges
           = 0.8 * magBetweenXEdges

This means we are 80% away from the closest x edge, thus we need 80% of the hypotenuse. The same method is used to calculate the starting magnitude to the y edge.

    // Start in the cell containing the player.
    int worldIndexX = int(pos.x);
    int worldIndexY = int(pos.y);
    float magToXedge;
    float magToYedge;
    int dirStepX;
    int dirStepY;

    if(rayDir.x > 0){
      magToXedge = (worldIndexX + 1.0 - pos.x) * magBetweenXEdges;
      dirStepX = 1;
    }
    else{
      magToXedge = (pos.x - worldIndexX) * magBetweenXEdges;
      dirStepX = -1;
    }

    if(rayDir.y > 0){
      magToYedge = (worldIndexY + 1.0 - pos.y) * magBetweenYEdges;
      dirStepY = 1;
    }
    else{
      magToYedge = (pos.y - worldIndexY) * magBetweenYEdges;
      dirStepY = -1;
    }

Running the Search

Here x and y values ‘race’. If one length is less than the other, we increase the shortest, increment 1 unit in that direction and check again. We end the loop as soon as we find a non-empty cell edge.

Note that we keep track of not only the last cell we were working with, but also which kind of edge (x or y) we hit, using sideHit.

    int sideHit;

    do{
      if(magToXedge < magToYedge){
        magToXedge += magBetweenXEdges;
        worldIndexX += dirStepX;
        sideHit = 0;
      }
      else{
        magToYedge += magBetweenYEdges;
        worldIndexY += dirStepY;
        sideHit = 1;
      }
    }while(worldMap[worldIndexX][worldIndexY] == 0);

Selecting a Color & Lighting

We kept track of the non-empty index we hit, so we can use this to index into the map and get the associated color for the cell.

    ofColor wallColor;
    switch(worldMap[worldIndexX][worldIndexY]){
      case 1:wallColor = ofColor(255,255,255);break;
      case 2:wallColor = ofColor(0,255,255);break;
      case 3:wallColor = ofColor(255,0,255);break;
    }

If all the faces of the walls were a solid color, it would be difficult to determine edges and where the walls were in relation to each other. So, to provide a very rudimentary form of lighting, any time a ray hits a y edge (sideHit == 1), we darken the current color.

    wallColor = sideHit == 0 ? wallColor : wallColor/2.0f;
    ofSetColor(wallColor);

Preventing Distortion

The farther the wall is from the player, the smaller the vertical slice needs to be on the screen. Keep in mind that when viewing a wall/plane dead on, the rays farther out are longer, which would result in shorter ‘scanlines’ toward the screen edges. This isn’t desired since it would warp our representation of the world.

***Update***
After I posted this, I was asked by e-mail how this part works, so here’s an image to help clarify things: raycast distortion fix

Essentially, we take the base of the ‘lower’ triangle and divide it by the base of the ‘upper’ triangle. By similar triangles, that quotient is the ray’s travel parameter, and since the direction vector is unit length and perpendicular to the camera line, it equals the perpendicular distance from the camera line to the wall.

    float wallDist;

    // The (1.0 - dirStep) / 2.0 term adds 1 when stepping in the negative
    // direction, accounting for which side of the cell the ray entered.
    if(sideHit == 0){
      wallDist = fabs((worldIndexX - pos.x + (1.0 - dirStepX) / 2.0) / rayDir.x);
    }
    else{
      wallDist = fabs((worldIndexY - pos.y + (1.0 - dirStepY) / 2.0) / rayDir.y);
    }

Calculating Line Height

If a ray was 1 unit from the wall, we draw a line that fills the entire height of the screen.

To provide a proper FPS feel, we center the vertical lines in screen space along Y. The middle of the cells will be at eye height.

To calculate how long the vertical line needs to be on the screen, we divide the viewport height by the distance from the wall. So if the wall is 1 unit from the player, it fills the entire vertical slice.

At this point we need to center our line in the viewport. Imagine two vertical lines drawn side by side, the smaller one centered next to the larger one. To get the position of the top of the smaller one relative to the larger one, divide both heights in half and subtract the smaller half from the larger half. This gives us the starting point for the line we have to draw.

    float lineHeight = height/wallDist;
    float startY = height/2.0 - lineHeight/2.0;
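For example, with a 480-pixel-tall viewport and a wall 2 units away, lineHeight = 480/2 = 240 and startY = 240 - 120 = 120, so the slice sits vertically centered with a quarter of the viewport above and below it.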

Rendering a Slice

We finish our render loop by drawing a single-pixel-wide vertical line for this x scanline iteration, centered in the viewport. You could always clamp the lines to the bottom of the screen instead and make the player feel like a mouse in a maze (:

    ofLine(x, startY, x, startY + lineHeight);
  }// finish loop
}// finish draw()

Keyboard Control

I’m still learning OpenFrameworks, so my implementation for keyboard control works, but it’s a hack. I’m too embarrassed to post the code until I resolve the issue I’m dealing with. In the meantime, I leave the task of updating the position and rotation of the player up to the reader, though a rough sketch of one approach follows.
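To be clear, this is not my code, just a minimal sketch: rotSpeed and moveSpeed are hypothetical tuning constants, and a real version would want smooth, frame-rate-independent movement and key-repeat handling.

void ofApp::keyPressed(int key){
  const float rotSpeed = 0.05f;  // hypothetical radians per press
  const float moveSpeed = 0.2f;  // hypothetical cells per press

  // Left/right arrows change rot; update() rebuilds dir from rot each frame.
  if(key == OF_KEY_LEFT) rot -= rotSpeed;
  if(key == OF_KEY_RIGHT) rot += rotSpeed;

  // Up/down arrows move along the direction vector.
  if(key == OF_KEY_UP || key == OF_KEY_DOWN){
    float step = (key == OF_KEY_UP) ? moveSpeed : -moveSpeed;
    ofVec2f newPos = pos + dir * step;

    // Only step into empty cells, using the same [x][y] indexing as draw().
    if(worldMap[int(newPos.x)][int(newPos.y)] == 0){
      pos = newPos;
    }
  }
}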

References

[1] http://www.permadi.com/tutorial/raycast/index.html
[2] http://lodev.org/cgtutor/raycasting.html
[3] http://en.wikipedia.org/wiki/Ray_casting

Game 1 for 1GAM 2014 – Asteroids


Skip the blog and play Asteroids!

Back in November, I picked up a contract to develop Asteroids in Processing.js. After developing the game, I lost touch with my client, and with that, $150. Soon after, I went on vacation, and when I returned I decided to polish off what I had and release it as a 1GAM entry. I added some audio, gave it a more authentic look and feel, added more effects and the like. So, this is the official release of my first 1GAM game for 2014!

Game 2 for 1GAM: Tetrissing


Click to play!
View the source

I’m officially releasing Tetrissing for the 1GAM challenge. Tetrissing is an open source Tetris clone I wrote in Processing.

I began working on the game during Ludum Dare 26. There were a few developers hacking on LD26 at the Ryerson Engineering building, so I decided to join them. I was only able to stay for a few hours, but I managed to get the core mechanics done in that time.

After I left Ryerson, I did some research and found that most of the Tetris clones online lacked basic features and had almost no polish. I wanted to contribute something different from what was already available, so that’s when I decided to make this one of my 1GAM games. I spent the next 2 weeks fixing bugs and adding features, audio, art, and polish.

I’m fairly happy with what I have so far. My clone doesn’t rely on annoying keyboard key repeats, yet it still allows tapping the left or right arrow keys to move a piece 1 block. I added a ‘ghost’ piece feature and a kickback feature, plus pausing, restarting, audio, and art. There was nothing too difficult about any of this, but it did require work. So, in retrospect, I want to take on something a bit more challenging for my next 1GAM game.

Lessons Learned

One mistake I made when writing this was overcomplicating the audio code. I used Minim for the Processing version, but I had to write my own implementation for the Processing.js version, so I decided to look into the Web Audio API. After fumbling around with it, I did eventually manage to get it to work, but then the sound didn’t work in Firefox. Realizing that I had made a simple matter complex, I ended up scrapping the whole thing and resorting to audio tags, which took very little effort to get working. The SoundManager I have for JavaScript is now much shorter, easier to understand, and still gets the job done.

Another issue I ran into was a bug in the Processing.js library. When using tint() to color my ghost pieces, Pjs would refuse to render one of the blocks that composed a Tetris piece. I dove into the tint() code and tried fixing it myself, but I didn’t get too far. After taking a break, I realized I didn’t really have the time to invest in a Pjs fix and came up with a dead-simple workaround: since only the first block wasn’t rendering, I render that first ‘invisible’ block off screen, then re-render the same block on screen a second time. Fixing the issue in Pjs would have been nice, but that wasn’t my main goal.

Lastly, I was reminded how much time it takes to polish a game. I completed the core mechanics of Tetrissing in a few hours, but it took another 2 weeks to polish it!

If you like my work, please star or fork my repository on GitHub. Also, please post any feedback. Thanks!

BitCam.Me


Check this out: I created a WebRTC demo that pixelates your webcam video stream: BitCam.me.

I recently developed a healthy obsession with pixel art and began making some doodles in my spare time. Soon after I started doing this, I wondered what it would be like to generate pixel art programmatically. So I fired up Processing and made a sketch that did just that: it pixelated a PNG by taking the average color of each block of neighboring pixels.
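The demo itself runs in JavaScript, but the block-averaging technique is tiny. Here’s a sketch of the idea in C++ (a hypothetical pixelate() helper, not the demo’s actual code) operating on a packed RGB buffer:

#include <cstdint>
#include <vector>

// Average each cellSize x cellSize block of an RGB image, then flood the
// block with that average, e.g. pixelate(pixels, 640, 480, 8).
void pixelate(std::vector<uint8_t>& rgb, int w, int h, int cellSize){
  for(int by = 0; by < h; by += cellSize){
    for(int bx = 0; bx < w; bx += cellSize){
      int r = 0, g = 0, b = 0, count = 0;

      // Sum the colors inside this block.
      for(int y = by; y < by + cellSize && y < h; y++){
        for(int x = bx; x < bx + cellSize && x < w; x++){
          int i = (y * w + x) * 3;
          r += rgb[i]; g += rgb[i + 1]; b += rgb[i + 2];
          count++;
        }
      }
      r /= count; g /= count; b /= count;

      // Write the average back over the whole block.
      for(int y = by; y < by + cellSize && y < h; y++){
        for(int x = bx; x < bx + cellSize && x < w; x++){
          int i = (y * w + x) * 3;
          rgb[i] = r; rgb[i + 1] = g; rgb[i + 2] = b;
        }
      }
    }
  }
}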

After completing that sketch, I realized I could easily upgrade what I had written to use WebRTC instead of a static image. I thought it would be much more fun and engaging to use this demo if it was in real-time. I added the necessary JavaScript and I was pretty excited about it (:

I then found SuperPixelTime and saw it did something similar to what I had written. But unlike my demo, it had some nice options to change the color palette. I read the code and figured making those changes wouldn’t be difficult either and soon had my own controls for changing palettes.

I had a great time making the demo. Let me know what you think!

Enjoy!

Engage3D Hackathon Coming Soon!

A month ago, Bill Brock and I pitched our idea to develop an open source, 3D, web-based videoconferencing system for the Mozilla Ignite Challenge over Google Chat. Will Barkis from Mozilla recorded and moderated the conversation and then sent it off to a panel of judges. The pitch was for a slice of the $85,000 being doled out to the winners of the Challenge.

After some anticipation, we got word that we were among the winners. We would receive $10,000 in funding to support the development of our prototype. Our funding will cover travel expenses, accommodations, the purchasing of additional hardware and the development of the application itself.

We will also take on two more developers and have a hackathon closer to the end of the month. Over the span of four days we will iterate on our original code and release something more substantial. The Company Lab in Chattanooga has agreed to provide us with a venue to hack and a place to plug into the network. Both Bill and I are extremely excited to get back to hacking on Engage3D and to get back to playing with the gig network.

We will keep you updated on our Engage3D progress, stay tuned!