Long story short, setting up OpenGL ES 2.0 is slightly different from WebGL, but the basic principle is similar.

Cocos2d-x provides a few ..uh, inconvenient wrappers.. that can help get you started with custom shaders, but if you don’t have a background in everything OpenGL it can get a bit hectic, and it’s hard to find a good reference.

OpenGL is a gigantic topic so I’ll explain just the basic parts (plus I’m not an expert either), so hopefully the notes below will help you a bit.

Add Vertex Shader (triangle.vsh)

attribute vec4 a_position;

void main(void) {
    gl_Position = a_position;
}

and Fragment Shader (triangle.fsh)

void main(void) {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

Add them to the compile resource list, like so
How to do some OpenGL ES

void HelloWorld::init() {

    // Creates a program object
    CCGLProgram *program = new CCGLProgram();

    // A (check below)
    bool loaded = program->initWithVertexShaderFilename("triangle.vsh", "triangle.fsh");

    // Check if everything is ok
    if (!loaded) {
        CCLOG("oh, god no");
    }

    // B (check below)
    program->addAttribute(kCCAttributeNamePosition, 0);

    // Link the program
    // C (check below)
    program->link();
    program->updateUniforms();

    // D (check below)
    CCLOG("Program Log %s", program->programLog());

    // Set the program to the current node
    setShaderProgram(program);

    // Release it from the memory pool
    program->release();

    // Set the clear color, not actually required.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}


void HelloWorld::draw() {
    // Clear the color buffer
    // E (check below)
    glClear(GL_COLOR_BUFFER_BIT);

    // Draw the triangle
    GLfloat vVertices[9] = {
        0.0f, 0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f,
        0.5f, -0.5f, 0.0f
    };

    // F (check below)
    CC_NODE_DRAW_SETUP();

    ccGLEnableVertexAttribs( kCCVertexAttribFlag_Position );
    glVertexAttribPointer(kCCVertexAttrib_Position, 3, GL_FLOAT, GL_FALSE, 0, vVertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    // Not required, only for stats
    CC_INCREMENT_GL_DRAWS(1);
}

These are the necessary elements that an OpenGL program needs, so let’s have a look:

bool loaded = program->initWithVertexShaderFilename("triangle.vsh","triangle.fsh");

initWithVertexShaderFilename, as the name suggests, loads the external shader files, compiles them, and attaches the shaders to the program.

program->addAttribute(kCCAttributeNamePosition, 0);

addAttribute simply binds the attribute vec4 a_position; to index 0. I’m not sure why Cocos2d-x had to make a function for it, because the actual call is just this:

glBindAttribLocation(programObject, index, attributeName);

During the compilation (step A), Cocos2d-x will bind its own uniforms into your shaders:

"uniform mat4 CC_PMatrix;n"
"uniform mat4 CC_MVMatrix;n"
"uniform mat4 CC_MVPMatrix;n"
"uniform vec4 CC_Time;n"
"uniform vec4 CC_SinTime;n"
"uniform vec4 CC_CosTime;n"
"uniform vec4 CC_Random01;n"

Notice that these declarations are prepended to the source, i.e. our custom shader.


Checking every step is absolutely necessary in OpenGL, since most of the time an error will not be reported; it will rather silently present you with a blank screen to stare into.


The program is linked and the built-in uniform values are set in step C.

ccGLEnableVertexAttribs( kCCVertexAttribFlag_Position );
glVertexAttribPointer(kCCVertexAttrib_Position, 3, GL_FLOAT, GL_FALSE, 0, vVertices);
glDrawArrays(GL_TRIANGLES, 0, 3);

What glVertexAttribPointer does is tell OpenGL how to read the vertex data from the array, so we need to describe the layout of that data; to understand it, have a look at an interleaved array basics explanation.

glDrawArrays(GL_TRIANGLES, 0, 3); draws the primitives. There are three families of primitives, triangles, lines, and points, and each has its own modes; explaining those in detail can have your mind tangled after a while, but in a nutshell GL_TRIANGLES simply means drawing one triangle per 3 given vertices (n/3).

ccGLEnableVertexAttribs( kCCVertexAttribFlag_Position ); basically just enables the vertex attribute array, although why it needs to be enabled explicitly is a bit puzzling.

And here’s the result

Not too exciting :p but you can plug in your own shaders now.


SIGGRAPH University : “An Introduction to OpenGL Programming”
OpenGL ES 2.0 programming guide

Been a bit busy these past couple of weeks with Kirin’s project, but I finally managed to do the characters! I’ve added a few and polished the map a bit.

Fruity Coins



I’ve tried to keep everything consistent with the initial concept, geometrical+cute+simple.

That’s enough for now, onward with the coding! I’ll return to the design later once some fundamental code is covered, or maybe to the background music.

Preliminary design! After a few weeks of mulling while showering/commuting/randomly loitering and other unwieldy cognitive activities, an idea has finally dawned on me.



There will be more details/modifications, of course! But its extrapolations will not veer off on an inconsistent tangent (apologies for the jargon; lack of sleep).

There are a few aspects that I could say about the conception, but not while revealing the core ideas, so I have to leave it as it is.

The design was inspired by Dieter Rams, Markus Persson, Jim Guthrie, and Asher Vollmer.


Starting on my own game! Although a brilliant idea is immensely dependent on how the stars are aligned (metaphorically speaking, obviously), I can’t disclose the exact concept since the indie game scene is notoriously brutal, but at a glimpse the basic premise is a maze with a few added twists.

Choosing the Framework

There were a few candidates:

  1. oF, because of Ridiculous Fishing.
  2. cocos2d, because of Badland.
  3. Native, because of iOS 7.
  4. And lastly cocos2d-x, because of its porting capabilities.

In the end I chose cocos2d-x because mobile != iOS, although in hindsight native was perhaps the better choice, since cocos2d-x is inferior in comparison with iOS 7: it’s laden with flaws such as the documentation, slow updates, and mainly performance. When dormant its CPU usage hovers around 16%, whereas native can go as low as 5%.

Although this is just my opinion, a diehard cocos2d-x user might argue au contraire.


The first phase is building the maze generator. There are a lot of maze algorithms out there, and some use fancy constructs such as Gabriel graphs, relative neighborhood graphs, or normal distributions, but I chose depth-first search since the game has to be simple and fast.


Build an empty maze

void MazeGenerator::addInit() {
    for (int i = 0; i < _total; i++) {
        // _w is the width of the stage in tiles
        int flag = EMPTY, nh = i/_w;
        if (i < _w || i > _total-_w-1 || i%_w == 0 || i%(_w*(nh+1)-1) == 0 || (i%2 != 0 && nh%2 != 0)) {
            flag = WALL;
        }
        if (_start < 0 && flag == EMPTY) {
            _start = i;
            // TODO randomize start position
            flag = START;
        }
        if (flag != EMPTY) _empty--;
        _stage->addObject(CCInteger::create(flag));
    }
}


Add the Depth-first search

void MazeGenerator::addDepthSearch() {
    CCArray* stack = CCArray::create();
    CCInteger* p;
    CCInteger* q;
    int s = _start;
    while (_path < _empty) {
        // Check for any surrounding empty tiles
        CCArray* r = checkSurrounding(_stage, s, EMPTY);
        // If exists
        if (r->count() > 0) {
            p = dynamic_cast<CCInteger *>(r->objectAtIndex(rand()%r->count()));
            _stage->replaceObjectAtIndex(p->getValue(), CCInteger::create( PATH ));
            s = p->getValue();
            stack->addObject(CCInteger::create( s ));
            _path++;
            // TODO randomize end position
            if (_end < s) _end = s;
            _deadend = 1;
        } else {
            // Add a cul-de-sac
            if (_deadend > 0) {
                q = dynamic_cast<CCInteger *>( stack->lastObject());
                _stage->replaceObjectAtIndex(q->getValue(), CCInteger::create( WALL ));
                _deadend = -1;
            }
            // if cul-de-sac reached, step back
            stack->removeLastObject();
            p = dynamic_cast<CCInteger *>( stack->lastObject());
            s = p->getValue();
        }
    }
}

And here’s the result

// 11x11 tiles
// 7 = Starting point
// 8 = Ending point
// 0 = Wall
// 6 = Path


Another thing is pathfinding. The first choice is of course A*, but I removed/modified several things:

  1. The past path-cost function, as it is not required.
  2. The open and closed lists, since I thought duplicating the scene array and tagging it with a different number would be faster; there’s no need for additional iterations nor added processing to deconstruct the extra arrays.
  3. As for the future path-cost function, I employed the Manhattan method, which goes something like this:

H = 10*(abs(currentX-targetX) + abs(currentY-targetY))

Simple enough; note the coefficient 10 is not necessary unless you're dealing with decimals, I used it just for the sake of it 凸(`0´)凸

void MazeGenerator::addPathfinding() {
    // Duplicate the stage
    _pfind = CCArray::createWithArray(_stage);
    CCArray* stack = CCArray::create();
    CCArray* r;
    CCInteger* p;
    int s = _start, pos = 0;
    while (_path > 0) {
        // Check surrounding for Path
        r = checkSurrounding(_pfind, s, PATH);
        // If path exists
        if (r->count() > 0) {
            int cost, lowest = INT_MAX;
            for (int i = 0; i < r->count(); i++) {
                p = dynamic_cast<CCInteger *>(r->objectAtIndex(i));
                cost = getCost(p->getValue());
                // keep the tile with the smallest cost
                if (cost < lowest) {
                    lowest = cost;
                    pos = p->getValue();
                }
            }
            // Mark the path with 1
            _pfind->replaceObjectAtIndex(pos, CCInteger::create( 1 ));
            stack->addObject(CCInteger::create( pos ));
            s = pos;
            // break if end reached
            CC_BREAK_IF(_end == pos);
        } else {
            // if cul-de-sac, step back
            stack->removeLastObject();
            p = dynamic_cast<CCInteger *>( stack->lastObject());
            s = p->getValue();
        }
    }
}


The Manhattan Method

int MazeGenerator::getCost(int p) {
    int x0 = _end%_w,
        y0 = _end/_w,
        x1 = p%_w,
        y1 = p/_w;
    return 10*(abs(x1-x0)+abs(y1-y0));
}

And the result is pretty good.

// 1 = Routes


Until next time.

It’s been a while since I did something in Flash; how nostalgic!

I’ve been experimenting with an eye tracking system in Flash lately; there are a few demos out there but none of them provide the source code.

Through Google I came across Fabian Timm and Erhardt Barth’s research, which basically explains the detection process as follows:

  • Applying the face detector and extracting the eye region.
  • Isolating the eye area by detecting the color gradient between the iris and the sclera.

It’s pretty simple, and I thought the BitmapData class would be robust enough to handle this type of algorithm.

Whilst tinkering I stumbled across Tomek’s blog, which describes a clever way of extracting color by juggling the threshold and blend methods; that rang a few bells.

So after a few custom adjustments I’ve made my own eye tracking code, nothing too fancy since the requirement isn’t too complicated.

First add BlendMode

public function addBlendMode(s:BitmapData):BitmapData {

	var r:BitmapData = new BitmapData(s.width, s.height);
	var r2:BitmapData = new BitmapData(s.width, s.height);
	var rect:Rectangle = new Rectangle(0, 0, s.width, s.height);
	var pt:Point = new Point(0, 0);

	// copy the source into both buffers
	r.copyPixels(s, rect, pt);
	r2.copyPixels(s, rect, pt);

	// multiply the image with itself to get more contrast
	r.draw(r2, new Matrix(), new ColorTransform(), BlendMode.MULTIPLY);
	return r;
}


Use the threshold method to convert the bitmap into 1-bit color.

// the threshold color is important
public function addThresholdColor(b:BitmapData, th:* = 0xff111111):BitmapData {
	var bmd2:BitmapData = new BitmapData(b.width, b.height);
	var pt:Point = new Point(0,0);
	var rect:Rectangle = new Rectangle(0, 0, b.width, b.height);
	var color:uint = 0xFF000000;

	bmd2.threshold(b, rect, pt, ">=", th, color, 0xFFFFFFFF, false);

	return bmd2;
}

Lastly, mark the eye. I’ve done it only for one eye, since the movement will be symmetrical anyway.

public function getBound(b:BitmapData):Rectangle {

	var maxBlobs:int = 40;
	var i:int = 0;
	var minX:int = 640; // video width
	var maxX:int = 0;

	var minY:int = 480; // video height
	var maxY:int = 0;

	while(i < maxBlobs) {

		var bound:Rectangle = b.getColorBoundsRect(0xffffffff, 0xffffffff);

		if(bound.isEmpty()) break;

		var bx:int = bound.x;
		var by:int = bound.y;
		var bwidth:int = bx + bound.width;
		var bheight:int = by + bound.height;

		if(bx < minX) minX = bx;
		if(bwidth > maxX) maxX = bwidth;

		if(by < minY) minY = by;
		if(bheight > maxY) maxY = bheight;

		// fill the blob's leftmost column so the next
		// getColorBoundsRect call skips it
		for(var y:uint = by; y < bheight; y++) {
			if(b.getPixel32(bx, y) == 0xffffffff) {
				b.floodFill(bx, y, 0xffff0000);
			}
		}

		i++;
	}

	return new Rectangle(minX, minY, maxX - minX, maxY - minY);
}


At the moment the algorithm is not perfect: I need to add an automatic light detector and a stabilizer, and due to the webcam resolution and environment lighting the gradient tracking seems impractical, though the same principle applies, more or less. I also haven’t tested it against a person with dark-colored skin or light-colored eyes, so I’ll post it later.


Beyond Reality Face Detection

This post is a bit long, so here’s the demo (or play the video below in case you can’t see it; WebGL has tons of quirks depending on the browser version/OS), and the source files are here.

I had my first experience with WebGL a few months ago, and with GLSL when I was part of the World Wide Maze team, but everything was done through three.js, which hides most of the nitty-gritty.

Since WebGL is based on the OpenGL specification, and currently most graphics processing uses OpenGL as its infrastructure (well, until OpenCL comes along), I thought it would be useful to know the inner workings of the system.

Here I employed the ping-pong technique, utilizing the framebuffer object architecture, which enables millions of particles to be calculated through textures. I’ve drawn a simple diagram to explain the process.

Hopefully that made some sense! This technique is also used for post-processing, by applying several passes over the texture.

For the algorithms I’ve used the Duffing oscillator, Lorenz system, and some randomized trigonometric calculation.

Quick tip: if you want to use complex algorithms, I suggest you read a lot of math books and try to understand at least the basics, because hundreds of mathematicians have published their work over the centuries. Algorithms like space partitioning, Fortune’s algorithm, etc. have been around for decades and you can easily find programming adaptations; all you have to do is decode/modify/combine them.

Tested on

– Chrome v29.0.1547.57
– Mac OSX 10.8.4, 3.2 Ghz Intel Core i3


Google I/O 2011: WebGL Techniques and Performance
Making WebGL Dance
How to write portable WebGL
Learning WebGL
Introduction to shaders
Iñigo Quílez’s blog
– Once you understand the concept of shaders just stare at these amazing shaders to get more in-depth



Just a bit of an intermezzo: I’ve built a Photoshop script that detects the position of an object and stores it in JSON.

How to do it

1. First, arrange your objects like so;

Notice the naming convention.

The object you want to retrieve the position from is nested inside a folder whose name is appended with “@”; this concept was taken from cutandslice.

2. Run the script.

and it’ll churn out JSON like below:

	"stage": {
		"item1": {
			"x": 277,
			"y": 618,
			"width": 287,
			"height": 646

So if your job involves endless toiling in Photoshop, script automation will greatly optimize your workflow, and you can use the ubiquitous JavaScript! I swear you can’t do any coding without bumping into JavaScript nowadays.

A smart example is of course cutandslice; it has become an inseparable tool for me.

If you’d like to make your own, start from Michael Chaize’s tutorial below:

Tested on

– Adobe Photoshop CS6
– Mac OSX 10.8.4


Source file

I’ve been thinking of sharpening my C++ skills for some time now, so a while ago I picked up openFrameworks.

I made a simple physics engine: just a hint of linear algebra and a few creative iterations, and you can have these.

Anyway have a peek at my code.


2D physics engine tutorial
Math and stuff
Physics stuff

The surge of mobile/desktop optimization has been overwhelmingly demanding lately, and maintaining consistent performance, weight, and look across all browsers/devices is excruciatingly painful, especially on mobile.

The one who got really close is Jongmin Kim’s website, which this demo was inspired by.

So, driven by curiosity, I made yet another hipster gallery employing the latest technologies and techniques: CSS3, Canvas, WebGL, and miscellaneous others.

It hasn’t been rigorously tested so expect incompatibility issues here and there, but it works pretty well on Chrome and Apple’s devices.

Have a peek at my code and feel free to use it, but please don’t remove my name.



0.1 (2013/04/17)
– First version

0.2 (2013/04/18)
– Fixed some css issues
– Removed hover for mobile
– Fixed CSS transform javascript
– Fixed a few implementations on FF
– Added version list

Tested on

Chrome 26.0, Firefox 20.0, Safari Version 6.0

iPad Retina iOS 6.3 Safari + Chrome
iPhone 4 and above, iOS 6 and above, Safari + Chrome

This is my first Leap Motion experiment! Thank you, Leap Motion, for delivering this wonderful device to me.

The art was heavily influenced by the talented Robert Hodgin, although it’s nowhere near as good as his work.

Leap Motion Experiment #1 from Rahmat Hidayat on Vimeo.

Leap Motion Experiment #2 from Rahmat Hidayat on Vimeo.

For experiment #2 I’ll update the World Matrix as soon as I can.

The frequency sometimes fluctuates haphazardly, which can break the interaction, but that can be compensated for with a simple noise reduction algorithm; I also suspect they’ll be improving the software with every update.

But overall, considering the price and the size, its performance is superb; the latency is almost non-existent, and it’s even faster than the Kinect.

I’m going to use Cinder for the next experiment; perhaps with C++ the performance can be enhanced.

If you want to try it on your own you can download it here; this is my first experiment, so expect loads of bugs. 🙂


Leap Motion SDK
Leap JS