
Normal Mapping using PShaders in Processing.js April 4, 2015

Posted by Andor Saga in gfx, GLSL, JavaScript, Open Source, Processing.js.

Try my normal mapping PShader Demo:
normalMap

Last year I made a very simple normal map demo in Processing.js and posted it on OpenProcessing. It was fun to write, but the performance bothered me: it was very slow, because the demo uses a 2D canvas and gets no hardware acceleration.

Now, I have been working on adding PShader support to Processing.js in my spare time, so here and there I'll make a few updates. After recently fixing a bug in my implementation, I had enough working to port my normal map demo over to shaders. So, instead of having the lighting calculations in the sketch code, I could have them in GLSL shader code. I figured this should increase the performance quite a bit.

Converting the demo from Processing/Java code to GLSL was pretty straightforward, aside from working out a couple of annoying bugs. I got the demo to resemble what I originally had a year ago, but now the performance is much, much better 🙂 I'm no longer limited to a tiny 256×256 canvas; I can use the full client area of the browser. Even with specular lighting, it runs at a solid 60 fps. Yay!

If you're interested in the code, here it is. It's also posted on GitHub.

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 iResolution;
uniform vec3 iCursor;

uniform sampler2D diffuseMap;
uniform sampler2D normalMap;

void main(){
	// Map the fragment coordinate into texture space
	// (the 512.0 assumes 512x512 textures).
	vec2 uv = vec2(gl_FragCoord.xy / 512.0);
	uv.y = 1.0 - uv.y;

	// Build a ray from the current fragment to the light,
	// which sits at the cursor, 500 units off the canvas.
	vec2 p = vec2(gl_FragCoord);
	float mx = p.x - iCursor.x;
	float my = p.y - (iResolution.y - iCursor.y);
	float mz = 500.0;

	vec3 rayOfLight = normalize(vec3(mx, my, mz));

	// Decode the normal: the texture components are in [0,1],
	// so shift them to be centered around zero, then normalize.
	vec3 normal = vec3(texture2D(normalMap, uv)) - 0.5;
	normal = normalize(normal);

	// Diffuse term, plus the reflection vector for Phong specular.
	float nDotL = max(0.0, dot(rayOfLight, normal));
	vec3 reflection = normal * (2.0 * nDotL) - rayOfLight;

	vec3 col = vec3(texture2D(diffuseMap, uv)) * nDotL;

	// iCursor.z flags whether the specular highlight is on
	// (e.g. while the mouse is pressed).
	if(iCursor.z == 1.0){
		float specIntensity = max(0.0, dot(reflection, vec3(0.0, 0.0, 1.0)));
		float specRaised = pow(specIntensity, 20.0);
		vec3 specColor = specRaised * vec3(1.0, 0.5, 0.2);
		col += specColor;
	}

	gl_FragColor = vec4(col, 1.0);
}
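
For context, wiring a fragment shader like this into a sketch looks roughly like the following. This is a minimal sketch, not the demo's actual code: the file and image names are placeholders, though the uniform names match the shader above.

PShader nmShader;
PImage diffuse, normals;

void setup() {
  size(512, 512, P2D); // PShader needs a GL-backed renderer
  nmShader = loadShader("normalMap.glsl"); // placeholder file name
  diffuse = loadImage("diffuse.jpg");      // placeholder assets
  normals = loadImage("normals.jpg");
  nmShader.set("diffuseMap", diffuse);
  nmShader.set("normalMap", normals);
}

void draw() {
  float pressed = mousePressed ? 1.0 : 0.0; // toggles specular
  nmShader.set("iResolution", float(width), float(height));
  nmShader.set("iCursor", float(mouseX), float(mouseY), pressed);
  shader(nmShader);
  rect(0, 0, width, height);
}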

Gomba 0.2 November 24, 2014

Posted by Andor Saga in Game Development, Open Source, Processing, Processing.js.

gomba

I’ve been busy nursing my cat back to health, so I missed blogging last Saturday 😦 He’s doing a bit better, so I’m trying to stay hopeful.

Today I did manage to find some time to catch up on my blogging, so here are the major changes on Gomba:

  • Fixed a major physics issue (running too quickly & jumping was broken)
  • Added coinbox
  • Fixed kicking a sprite from a brick
  • Added render layers

Rendering Layers

The most significant change I added was rendering layers. This allows me to specify a layer for each gameobject. Clouds and background objects must exist on lower layers, then things like coins should be a bit higher, then the goombas, Mario and other sprites even higher. You can think of each layer as one of those transparent sheets high school teachers use for overhead projectors. Do they have digital projectors yet?? I can also change a gameobject's layer at runtime, so when a goomba is 'kicked', I can move it to the very top layer (closest to the user) so that it appears as if the sprite is being removed from the world. Rendering it under the bricks would just look strange.

I used a binary tree to internally manage the rendering of the layers. This was probably overkill, and I could have gotten away with an array, dynamically resizing it as needed if a layer index was too high. Ah well. I plan to abstract the structure even further so the implementation is unknown to the scene. I also need to fix tunnelling issues and x-collision issues too… maybe next month.
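
For illustration, here's a minimal sketch of the layer idea using plain ArrayLists rather than a binary tree. The class and method names are mine, not necessarily Gomba's:

// Hypothetical stand-in for Gomba's actual gameobject class.
class GameObject {
  void render() { /* draw the sprite here */ }
}

// Layers draw in order: index 0 (clouds, background) ends up
// behind higher indices (coins, then goombas, Mario and friends).
ArrayList<ArrayList<GameObject>> layers = new ArrayList<ArrayList<GameObject>>();

void setup() {
  // Start with, say, four layers.
  for (int i = 0; i < 4; i++) {
    layers.add(new ArrayList<GameObject>());
  }
}

void draw() {
  background(200);
  for (ArrayList<GameObject> layer : layers) {
    for (GameObject obj : layer) {
      obj.render();
    }
  }
}

// Move a 'kicked' goomba to the top layer so it draws over everything.
void moveToTopLayer(GameObject obj, int currentLayer) {
  layers.get(currentLayer).remove(obj);
  layers.get(layers.size() - 1).add(obj);
}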

Gomba 0.15 October 25, 2014

Posted by Andor Saga in Game Development, Open Source, Processing, Processing.js.

gomba_015

Play demo

I’m releasing a 0.15 version of Gomba, a component-based Processing platform game. I’m trying to be consistent about releases, so that means making a release every 4 weeks. I didn’t get everything I wanted into this release, so it’s not quite a 0.2. In any event, here are some of the changes that did make it in:

  • Added platforms!
  • Added audio channels for the sound manager
  • Multiple components of the same type can now be added to a gameobject
  • Added goombas & squashing functionality
  • Added functionality to punch bricks
  • Fixed a requestAnimationFrame issue for smoother graphics

I’m excited that I now have a sprite that can actually jump on things. But adding this functionality also introduced a bunch of bugs I now have to address. I have a list of issues I’m going to be tackling for the next 4 weeks, which should be fun.

Gomba 0.1 September 20, 2014

Posted by Andor Saga in Game Development, Open Source, Processing, Processing.js.

Play demo

I was reading the Processing book The Nature of Code by Daniel Shiffman and came upon a section dealing with physics. I hadn't written many sketches that use physics calculations, so I figured it would be fun to implement a simple runner/platformer game in Processing that uses forces, acceleration, velocity, etc.

I decided to use a component-based architecture and I found it surprisingly fun to create components and tack them on to game objects. So far, I only have a preliminary amount of functionality done and I still need to sort out most of the collision code, but progress is good.
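
To give a flavor of the pattern, here's a minimal sketch of what component-based gameobjects can look like. The names are illustrative, not Gomba's actual classes:

// A component adds one piece of behavior to its owner.
class Component {
  GameObject owner;
  void update() {}
}

// A gameobject is little more than a bag of components.
class GameObject {
  ArrayList<Component> components = new ArrayList<Component>();

  void addComponent(Component c) {
    c.owner = this;
    components.add(c);
  }

  void update() {
    for (Component c : components) {
      c.update();
    }
  }
}

// For example, a physics component that accumulates gravity.
class PhysicsComponent extends Component {
  PVector velocity = new PVector(0, 0);
  PVector gravity = new PVector(0, 0.5);

  void update() {
    velocity.add(gravity);
  }
}

void setup() {
  GameObject goomba = new GameObject();
  goomba.addComponent(new PhysicsComponent());
  goomba.update(); // runs each attached component once
}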

This marks my 0.1 release. I still have quite a way to go, but it's a start. You can take a look at the code on GitHub or play around with the demo.

I got a bunch of inspiration from Pomax. He's already created a Processing.js game engine you can check out here.

BTW “gomba” in Hungarian is mushroom 🙂

Game 1 for 1GAM 2014 – Asteroids January 14, 2014

Posted by Andor Saga in 1GAM, Game Development, Open Source, Processing, Processing.js.

Asteroids

Skip the blog and play Asteroids!

Back in November, I picked up a contract to develop Asteroids in Processing.js. After developing the game, I lost touch with my client, and thus $150. Soon after, I went on vacation, and when I returned, I decided to polish up what I had and submit it as a 1GAM entry. I added some audio, gave it a more authentic look and feel, added more effects and the like. So, this is my official release of my first 1GAM game for 2014!

Implementing PShader.set() October 5, 2013

Posted by Andor Saga in JavaScript, Processing, Processing.js, PShader.

I was in the process of writing ref tests for my implementation of PShader.set() in Processing.js, when I ran into a nasty problem. PShader.set() can take on a variety of types including single floats and integers to set uniform shader variables. For example, we can have the following:

pShader.set("i", 1);
pShader.set("f", 1.0);

If the second argument is an integer, we must call uniform1i on the WebGL context; otherwise, uniform1f needs to be called. But in JavaScript, we can't distinguish between 1.0 and 1. I briefly considered modifying the interface for this method, but that was the last thing I wanted, and I knew there was a better solution. So I just thought about it until I came up with an interesting one: why not call both uniform1i and uniform1f right after each other? What would happen? It turns out, it works! One will always fail and the other will succeed, leaving us with the proper uniform set!
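
For illustration, here's what that looks like from the sketch side, with the internal resolution described in comments. The shader file and uniform names are hypothetical:

PShader sh;

void setup() {
  size(256, 256, P2D);
  // Assume the fragment shader declares uniforms "int i" and "float f".
  sh = loadShader("myShader.glsl");

  sh.set("i", 1);   // must reach the WebGL context as uniform1i
  sh.set("f", 1.0); // must reach it as uniform1f; but once the sketch
                    // runs as JavaScript, 1 and 1.0 are the same value,
                    // so Processing.js issues both uniform1i and
                    // uniform1f and lets the mismatched call fail
}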

Game 2 for 1GAM: Tetrissing May 17, 2013

Posted by Andor Saga in 1GAM, Game Development, Open Source, Processing, Processing.js.

tetrissing

Click to play!
View the source

I'm officially releasing Tetrissing for the 1GAM challenge. Tetrissing is an open source Tetris clone I wrote in Processing.

I began working on the game during Ludum Dare 26. There were a few developers hacking on LD26 at the Ryerson Engineering building, so I decided to join them. I was only able to stay for a few hours, but I managed to get the core mechanics done in that time.

After I left Ryerson, I did some research and found that most of the Tetris clones online lacked basic features and had almost no polish. I wanted to contribute something different from what was already available, so that's when I decided to make this one of my 1GAM games. I spent the next 2 weeks fixing bugs, adding features, audio and art, and polishing the game.

I'm fairly happy with what I have so far. My clone doesn't rely on annoying keyboard key repeats, yet it still allows tapping the left or right arrow keys to move a piece 1 block. I added a 'ghost' piece feature and a kickback feature, plus pausing, restarting, audio and art. There was nothing too difficult about all this, but it did require work. So, in retrospect, I want to take on something a bit more challenging for my next 1GAM game.

Lessons Learned

One mistake I made when writing this was overcomplicating the audio code. I used Minim for the Processing version, but I had to write my own implementation for the Processing.js version, so I decided to look into the Web Audio API. After fumbling around with it, I did eventually manage to get it to work, but then the sound didn't work in Firefox. Realizing that I had made a simple matter complex, I ended up scrapping the whole thing and resorting to audio tags, which took very little effort to get working. The SoundManager I have for JavaScript is now much shorter, easier to understand, and still gets the job done.

Another issue I ran into was a bug in the Processing.js library. When using tint() to color my ghost pieces, Pjs would refuse to render one of the blocks that composed a Tetris piece. I dove into the tint() code and tried fixing it myself, but I didn't get too far. After taking a break, I realized I didn't really have the time to invest in a Pjs fix, and I came up with a dead-simple workaround: since only the first block wasn't rendering, I would render that first 'invisible' block off screen, then re-render the same block on screen a second time. Fixing the issue in Pjs would have been nice, but that wasn't my main goal.

Lastly, I was reminded how much time it takes to polish a game. I completed the core mechanics of Tetrissing in a few hours, but it took another 2 weeks to polish it!

If you like my work, please star or fork my repository on GitHub. Also, please post any feedback, thanks!

Sprite Sheet Guide Generator January 13, 2013

Posted by Andor Saga in Game Development, Pixel Art, Processing.js.

sprite sheet guide generator
Click the image above to get the tool.

I began using Pixen to start making pixel art assets for a few games I’m developing for the 1 game a month challenge.

While pixelating away, I found myself creating a series of sprite sheets for bitmapped fonts. I created one here, then another, and soon found myself running into the same problem: before I began drawing each glyph, I first had to make sure I had a nice grid to keep all the characters in line. Each font used a different number of pixels, so I had to start from scratch every time. You can imagine that counting rows and columns of pixels and drawing each line separating the glyphs is extremely tedious. I needed something to eliminate this from my workflow.

I decided to create a decent tool that took away this painful process. What I needed was a sprite sheet guide generator, a tool that created an image of a grid based on these inputs:

  • Number of sprites per row
  • Number of sprites per column
  • Width of the sprite
  • Height of the sprite
  • Border width and color

I used Processing.js to create the tool and I found the results to be quite useful. After almost finishing the tool, I realized I could alternate the sprite background colours to help me even more when I’m drawing down at the pixel level, so I implemented that as well.
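
As a rough illustration, a sketch along these lines can generate such a guide. The numbers below are example inputs, not the tool's defaults:

int cols = 16;     // sprites per row
int rows = 8;      // rows of sprites
int spriteW = 8;   // sprite width in pixels
int spriteH = 8;   // sprite height in pixels
int border = 1;    // border width in pixels

void setup() {
  // Canvas fits the sprites plus the borders between and around them:
  // cols*(spriteW+border)+border wide, rows*(spriteH+border)+border tall.
  size(145, 73);
  noStroke();
  background(0); // border color

  for (int x = 0; x < cols; x++) {
    for (int y = 0; y < rows; y++) {
      // Alternate the sprite backgrounds to make rows and columns
      // easier to tell apart down at the pixel level.
      fill((x + y) % 2 == 0 ? 255 : 230);
      rect(border + x * (spriteW + border),
           border + y * (spriteH + border),
           spriteW, spriteH);
    }
  }
}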

You can run the tool right here or you can click on the image at the start of this post.

BitCam.Me December 25, 2012

Posted by Andor Saga in Open Source, Pixel Art, Processing.js.

bitcam_me_asalga

Check this out: I created a WebRTC demo that pixelates your webcam video stream: BitCam.me.

I recently developed a healthy obsession with pixel art and began making some doodles in my spare time. Soon after I started doing this, I wondered what it would be like to generate pixel art programmatically. So I fired up Processing and made a sketch that did just that: it pixelated a PNG by taking the average color of the neighboring pixels in each block.
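
The core of that sketch boils down to something like this. It's a minimal version, with a placeholder file name and an example block size:

PImage src;
int blockSize = 8; // width/height of each output 'pixel'

void setup() {
  size(512, 512);
  noStroke();
  src = loadImage("photo.png"); // placeholder input image
  src.loadPixels();

  // For each block, average the colors of the pixels inside it,
  // then draw the whole block with that single averaged color.
  for (int bx = 0; bx < src.width; bx += blockSize) {
    for (int by = 0; by < src.height; by += blockSize) {
      float r = 0, g = 0, b = 0;
      int count = 0;
      for (int x = bx; x < min(bx + blockSize, src.width); x++) {
        for (int y = by; y < min(by + blockSize, src.height); y++) {
          color c = src.pixels[y * src.width + x];
          r += red(c);
          g += green(c);
          b += blue(c);
          count++;
        }
      }
      fill(r / count, g / count, b / count);
      rect(bx, by, blockSize, blockSize);
    }
  }
}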

After completing that sketch, I realized I could easily upgrade what I had written to use WebRTC instead of a static image. I thought it would be much more fun and engaging to use this demo if it was in real-time. I added the necessary JavaScript and I was pretty excited about it (:

I then found SuperPixelTime and saw that it did something similar to what I had written. But unlike my demo, it had some nice options for changing the color palette. I read the code, figured making those changes wouldn't be difficult either, and soon had my own controls for changing palettes.

I had a great time making the demo. Let me know what you think!

Enjoy!

Experimenting with Normal Mapping September 26, 2012

Posted by Andor Saga in Game Development, Open Source, Processing, Processing.js.


Click me!

** Update March 20 2014 **
The server where this sketch was being hosted went down. I recently made several performance improvements to the sketch and re-posted it on OpenProcessing.

Quick note about the demo above. I’m aware the performance on Firefox is abysmal and I know it’s wonky on Chrome. Fixes to come!

I’ve heard and seen the use of normal mapping many times, but I have never experimented with it myself, so I decided I should, just to learn something new. Normal mapping is a type of bump mapping. It is a way of simulating bumps on an object, usually in a 3D game. These bumps are simulated with the use of lights. To get a better sense of this technique, click on the image above to see a running demo. The example uses a 2D canvas and simulates Phong lighting.

So why use it and how does it work?

The great thing about normal mapping is that you can simulate the vertex detail of a simplified object without providing the extra vertices. By providing only the normals and then lighting the object, we make it seem like the object has more detail than it actually does. If we wanted to place this code in a 3D game, we would only need 4 vertices to define a quad (maybe a wall), and along with the normal map, we could render some awesome Phong illumination.

So, how does it work? Think of what a bitmap is: just a 2D map of bits, where each pixel contains the color components that make up the entire graphic. A normal map is also a 2D map. What makes normal maps special is how their data is interpreted. Instead of holding a 'color' value, each pixel actually stores a vector that defines where the corresponding part of the color image is 'facing', also known as our normal vector.

These normals need to be somehow encoded into an image. This can be easily done since we have three floating point components (x,y,z) that need to be converted into three 8 or 16 bit color components (r,g,b). When I began playing with this stuff, I wanted to see what the data actually looked like. I first dumped out all the color values from the normal map and found the range of the data:

Red (X) ranges from 0 to 255
Green (Y) ranges from 0 to 255
Blue (Z) ranges from 127 to 255

Why is Z different? When I first looked at this, it seemed to me that each component needs to have 127 subtracted from it, so the values map onto their corresponding negative number lines in a 3D coordinate system. However, Z will always point directly towards the viewer, never away. If you do a search for normal map images, you will see the images are blue in color, so it makes sense that the blue is pronounced: the normal is always pointing 'out' of the image. If Z ranged from 0 to 255, subtracting 127 would result in a negative Z, which doesn't make sense. So, after subtracting 127 from each:

X: -127 to 128
Y: -127 to 128
Z: 0 to 128

For example, the canonical 'flat' normal map color (128, 128, 255) becomes (1, 1, 128) after the subtraction, which normalizes to almost exactly (0, 0, 1): a normal pointing straight at the viewer.

The way I picture this is to imagine all the normals contained in a translucent semi-sphere whose base lies on the XY-plane. But since the Z range is half that of X and Y, it would look more like a squashed semi-sphere. This tells us the vectors aren't normalized, but that is easily solved with normalize(). Once normalized, they can be used in our lighting calculations. Now that we have some theoretical idea of how this rendering technique works, let's step through some code. I wrote a Processing sketch, but of course the technique can be used in other environments.

// Declare globals to avoid garbage collection

// colorImage is the original image the user wants to light
// targetImage will hold the result of blending the 
// colorImage with the lighting.
PImage colorImage, targetImage;

// normalMap holds our 2D array of normal vectors. 
// It will be the same dimensions as our colorImage 
// since the lighting is per-pixel.
PVector normalMap[][];

// shine will be used in specular reflection calculations
// The higher the shine value, the shinier our object will be
float shine = 40.0f;
float specCol[] = {255, 128, 50};

// rayOfLight will represent a vector from the light source
// (cursor coords) to the current pixel
PVector rayOfLight = new PVector(0, 0, 0);
PVector view = new PVector(0, 0, 1);
PVector specRay = new PVector(0, 0, 0);
PVector reflection = new PVector(0, 0, 0);

// These will hold our calculated lighting values
// diffuse will be white, so we only need 1 value
// Specular is orange, so we need all three components
float finalDiffuse = 0;
float finalSpec[] = {0, 0, 0};

// nDotL = Normal dot Light. This is calculated once
// per pixel in the diffuse part of the algorithm, but we may
// want to reuse it if the user wants specular reflection
// Define it here to avoid calculating it twice per pixel
float nDotL;

void setup(){
  size(256, 256);

  // Create targetImage only once
  colorImage = loadImage("data/colorMap.jpg");
  targetImage = createImage(width, height, RGB);

  // Load the normals from the normalMap into a 2D array to 
  // avoid slow color lookups and clarify code
  PImage normalImage =  loadImage("data/normalMap.jpg");
  normalMap = new PVector[width][height];
  
  // i indexes into the 1D array of pixels in the normal map
  int i;
  
  for(int x = 0; x < width; x++){
    for(int y = 0; y < height; y++){
      i = y * width + x;

      // Convert the RGB values to XYZ
      float r = red(normalImage.pixels[i]) - 127.0;
      float g = green(normalImage.pixels[i]) - 127.0;
      float b = blue(normalImage.pixels[i]) - 127.0;
      
      normalMap[x][y] = new PVector(r, g, b);
      
      // Normal needs to be normalized because Z
      // ranged from 127-255
      normalMap[x][y].normalize();
    }
  }
}

void draw(){
  // When the user is no longer holding down the mouse button, 
  // the specular highlights aren't used. So reset the values
  // every frame here and set them only if necessary
  finalSpec[0] = 0;
  finalSpec[1] = 0;
  finalSpec[2] = 0;
  
  // Per frame we iterate over every pixel. We are performing
  // per-pixel lighting.
  for(int x = 0; x < width; x++){
    for(int y = 0; y < height; y++){
      
      // Simulate a point light which means we need to
      // calculate a ray of light for each pixel. This vector
      // will go from the light/cursor to the current pixel.
      // Don't use PVector.sub() because that's too slow.
      rayOfLight.x = x - mouseX;
      rayOfLight.y = y - mouseY;

      // We only have two dimensions with the mouse, so we
      // have to create the third dimension ourselves.
      // Force the ray to point into 3D space down -Z.
      rayOfLight.z = -150;
      
      // Normalize the ray so it can be used in a dot product
      // operation to get sensible values (-1 to 1).
      // The normal will point towards the viewer
      // The ray will be pointing into the image
      rayOfLight.normalize();
      
      // We now have a normalized vector from the light
      // source to the pixel. We need to figure out the
      // angle between this ray of light and the normal
      // to calculate how much the pixel should be lit.

      // Say the normal is [0,1,0] and the light is [0,-1,0]
      // The normal is pointing up and the ray, directly down.
      // In this case, the pixel should be fully 100% lit
      // The angle would be PI

      // If instead the ray was [0,1,0] (pointing the same way
      // as the normal), it would not contribute light at all,
      // 0% lit. The angle would be 0 radians.

      // We can easily calculate the angle by using the
      // dot product and rearranging the formula.
      // Omitting magnitudes since they both equal 1
      // ray . normal = cos(angle)
      // angle = acos(ray . normal)

      // Taking the acos of the dot product returns
      // a value between 0 and PI, so we normalize
      // that and scale to 255 for the color amount     
      nDotL = rayOfLight.dot(normalMap[x][y]);
      finalDiffuse = acos(nDotL)/PI * 255.0;
      
      // Avoid extra processing by only calculating
      // specular lighting if the user wants it.
      // It is fairly processor intensive.
      if(mousePressed){
        // The next 5 lines calculate the reflection vector
        // using Phong specular illumination. I've written
        // a detailed blog about how this works: 
        // https://andorsaga.wordpress.com/2012/09/23/understanding-vector-reflection-visually/ 
        // Also, when we have to perform vector subtraction
        // as part of calculating the reflection vector,
        // do it manually since calling sub() is slow.
        reflection = new PVector(normalMap[x][y].x,
                                 normalMap[x][y].y,
                                 normalMap[x][y].z);
        reflection.mult(2.0 * nDotL);
        reflection.x -= rayOfLight.x;
        reflection.y -= rayOfLight.y;
        reflection.z -= rayOfLight.z;
        
        // The view vector (0, 0, 1) points directly towards
        // the viewer. The dot product of two normalized vectors
        // returns a value from -1 to 1. However, none of the
        // normal vectors point away from the viewer, so we don't
        // have to worry about a negative dot product and thus
        // a negative specular intensity.
        
        // Raise the result of that dot product value to the
        // power of shine. The higher shine is, the shinier
        // the surface will appear.        
        float specIntensity = pow(reflection.dot(view),shine);
        
        finalSpec[0] = specIntensity * specCol[0];
        finalSpec[1] = specIntensity * specCol[1];
        finalSpec[2] = specIntensity * specCol[2];
      }
      
      // Now that the specular and diffuse lighting are
      // calculated, they need to be blended together
      // with the original image and placed in the
      // target image. Since blend() is too slow, 
      // perform our own blending operation for diffuse.
      targetImage.set(x,y, 
        color(finalSpec[0] + (finalDiffuse *   
                            red(colorImage.get(x,y)))/255.0,

              finalSpec[1] + (finalDiffuse * 
                          green(colorImage.get(x,y)))/255.0,

              finalSpec[2] + (finalDiffuse *  
                         blue(colorImage.get(x,y)))/255.0));
    }
  }
  
  // Draw the final image to the canvas.
  image(targetImage, 0,0);
}

Whew! Hope that was a fun read. Let me know what you think!