Blog

I’m starting on my own game. Although a brilliant idea is immensely dependent on how the stars are aligned (metaphorically speaking, obviously), I can’t disclose the exact concept since the indie game scene is notoriously brutal, but at a glance the basic premise is a maze with a few added twists.
Well then, let’s go!

Choosing the Framework

There were a few candidates:

  1. oF, because of Ridiculous Fishing.
  2. cocos2d, because of Badland.
  3. Native, because of iOS 7.
  4. And lastly cocos2d-x, because of its porting capabilities.

In the end I chose cocos2d-x because mobile != iOS, although in hindsight native was perhaps the better choice: cocos2d-x is inferior compared with native iOS 7, laden with flaws such as poor documentation, slow updates, and above all performance. When dormant, the CPU usage hovers around 16%:

Screenshot
Built natively, it can go as low as 5%.

This is just my opinion, though; a diehard cocos2d-x user might argue au contraire.

Development

The first phase is building the maze generator. There are a lot of maze algorithms out there, and some use fancy constructs such as the Gabriel graph, the relative neighborhood graph, or a normal distribution, but I chose depth-first search since the game has to be simple and fast.

/**

Build an empty maze

*/
void MazeGenerator::addInit() {
    
    int i;
    
    for (i = 0; i < _total; i++) {
        
        // _w is the maze width in tiles, nh is the current row index
        int flag = EMPTY, nh = i/_w;
        
        if (i < _w || i > _total-_w-1 || i%_w == 0 || i%(_w*(nh+1)-1) == 0 || (i%2 != 0 && nh%2 != 0)) {
            flag = WALL;
        }
        
        if (_start < 0 && flag == EMPTY) {
            _start = i;
            
            // TODO randomize start position
            flag = START;
        }
        
        if (flag != EMPTY) _empty--;
        
        _stage->addObject(CCInteger::create(flag));
        
    }
}

/**

Add the Depth-first search

*/
void MazeGenerator::addDepthSearch() {
    
    CCArray* stack = CCArray::create();
    CCInteger* p;
    CCInteger* q;
    
    int s = _start;
    
    while (_path < _empty) {
        
        // Check for any surrounding empty tiles
        CCArray* r = checkSurrounding(_stage, s, EMPTY);
        
        // If exists
        if (r->count() > 0) {
            
            p = dynamic_cast<CCInteger *>(r->objectAtIndex(rand()%r->count()));
            
            _stage->replaceObjectAtIndex(p->getValue(), CCInteger::create( PATH ));
            s = p->getValue();
            
            // TODO randomize end position
            if (_end < s) _end = s;
            
            stack->addObject(p);
            
            _deadend = 1;
            
            _path++;
            
        } else {
            
            // Add a cul-de-sac
            if (_deadend > 0) {
                
                q = dynamic_cast<CCInteger *>( stack->lastObject());
                
                _stage->replaceObjectAtIndex(q->getValue(), CCInteger::create( WALL ));
                
                _deadend = -1;
            }
            
            // if cul-de-sac reached, step back 
            stack->removeLastObject();
            p = dynamic_cast<CCInteger *>( stack->lastObject());
            
            s = p->getValue();
        }
        
    }
    
}
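
By the way, checkSurrounding isn’t shown above; roughly speaking it just gathers the indices of the surrounding tiles of s that carry the requested flag. Here’s a minimal sketch of what it boils down to, assuming plain one-tile orthogonal neighbours (the exact neighbour step and edge guards may differ in the real thing):

CCArray* MazeGenerator::checkSurrounding(CCArray* stage, int s, int flag) {
    
    CCArray* result = CCArray::create();
    
    // left, right, up, down neighbours of tile s (sketch: one-tile steps assumed)
    int offsets[4] = { -1, 1, -_w, _w };
    
    for (int i = 0; i < 4; i++) {
        
        int n = s + offsets[i];
        
        // skip tiles outside the grid or wrapped around a row edge
        if (n < 0 || n >= _total) continue;
        if (offsets[i] == -1 && s % _w == 0) continue;
        if (offsets[i] == 1 && s % _w == _w - 1) continue;
        
        CCInteger* tile = dynamic_cast<CCInteger *>(stage->objectAtIndex(n));
        
        // collect the neighbour's index, which is what the callers above expect
        if (tile && tile->getValue() == flag) {
            result->addObject(CCInteger::create(n));
        }
    }
    
    return result;
}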

And here’s the result


// 11x11 tiles
// 7 = Starting point
// 8 = Ending point
// 0 = Wall
// 6 = Path

0,0,0,0,0,0,0,0,0,0,0,
0,7,0,0,0,0,0,0,0,0,0,
0,6,0,6,6,6,6,6,6,6,0,
0,6,0,0,0,0,0,0,0,6,0,
0,6,6,6,0,6,6,6,6,6,0,
0,0,0,6,0,6,0,0,0,6,0,
0,6,0,6,0,6,0,6,0,6,0,
0,6,0,6,0,6,0,6,0,6,0,
0,6,6,6,6,6,0,6,6,6,0,
0,0,0,6,0,0,0,0,0,8,0,
0,0,0,0,0,0,0,0,0,0,0,

Another thing is pathfinding. The first choice is of course A*, but I removed/modified several things:

  1. The past path-cost function, as it is not required.
  2. The open and closed lists, since I thought duplicating the scene array and tagging it with different numbers would be faster; there’s no need for additional iterations nor extra processing to deconstruct the extra arrays.
  3. As for the future path-cost function, I employed the Manhattan method, which goes something like this:
H = 10*(abs(currentX-targetX) + abs(currentY-targetY))

Simple enough. Note that the coefficient 10 is not necessary unless you’re dealing with decimals; I used it just for the sake of it 凸(`0´)凸


void MazeGenerator::addPathfinding() {
    
    // Duplicate the stage
    _pfind = CCArray::create();
    _pfind->addObjectsFromArray(_stage);
    
    CCArray* stack = CCArray::create();
    CCArray* r;
    CCInteger* p;
    
    int s = _start, pos = 0;
    
    while (_path > 0) {
        
        // Check surrounding for Path
        r = checkSurrounding(_pfind, s, PATH);
        
        // If path exists
        if (r->count() > 0) {
            
            int temp = 0;
            
            for (int i = 0; i < r->count(); i++) {
                
                p = dynamic_cast<CCInteger *>(r->objectAtIndex(i));
                
                temp = getCost(p->getValue());
                
                // to filter smallest cost
                if(temp < pos) pos = temp;
                
            }
            
            // Mark the path with 1
            _pfind->replaceObjectAtIndex(pos, CCInteger::create( 1 ));
            stack->addObject(CCInteger::create( pos ));
            
            // break if end reached
            CC_BREAK_IF(_end == pos);
            
            _path--;
            
        } else {
            
            // if cul-de-sac reached, step back
            stack->removeLastObject();
            p = dynamic_cast<CCInteger *>( stack->lastObject());
            
            s = p->getValue();
        }
 
    }
    
}

/**

The Manhattan Method

*/
int MazeGenerator::getCost(int p) {
    
    int x0 = _end%_w,
        y0 = _end/_w,
        x1 = p%_w,
        y1 = p/_w;
    
    return 10*(abs(x1-x0)+abs(y1-y0));
    
}

And the result is pretty good.


// 1 = Routes

0,0,0,0,0,0,0,0,0,0,0,
0,7,0,0,0,0,0,0,0,0,0,
0,1,0,6,6,6,6,6,6,6,0,
0,1,0,0,0,0,0,0,0,6,0,
0,1,1,1,0,1,1,1,1,1,0,
0,0,0,1,0,1,0,0,0,1,0,
0,6,0,1,0,1,0,6,0,1,0,
0,6,0,1,0,1,0,6,0,1,0,
0,6,6,1,1,1,0,6,6,1,0,
0,0,0,1,0,0,0,0,0,1,0,
0,0,0,0,0,0,0,0,0,0,0,

Until next time.

It’s been a while since I did something in Flash. How nostalgic!

I’ve been experimenting with an eye tracking system in Flash lately. There are a few demos out there, but none of them provide the source code.

Through Google I came across Fabian Timm and Erhardt Barth’s research, which basically explains the face detection process as follows:

  • Applying the face detector and extracting the eye region.
  • Isolating the eye area by detecting the color gradient between the iris and sclera.

It’s pretty simple, and I thought the BitmapData class was robust enough to handle this type of algorithm.

Whilst tinkering I stumbled across Tomek’s blog, which describes a clever way of extracting color by juggling the threshold and blend methods; that rang a few bells.

So after a few custom adjustments I made my own eye tracking code; nothing too fancy, since the requirements aren’t too complicated.

First, apply the blend mode:

public function addBlendMode(s:BitmapData):BitmapData {

	var r:BitmapData = new BitmapData(s.width,s.height);
	var r2:BitmapData = new BitmapData(s.width,s.height);
	r.draw(s);
	r2.draw(s);

	// to get more contrast
	r.draw(r2, new Matrix(), new ColorTransform(), BlendMode.MULTIPLY);
	return r;

}

Then use the threshold method to convert the bitmap into 1-bit color.

// the threshold color is important
public function addThresholdColor(b:BitmapData, th:* = 0xff111111):BitmapData {
			
	var bmd2:BitmapData = new BitmapData(b.width, b.height);
	var pt:Point = new Point(0,0);
	var rect:Rectangle = new Rectangle(0, 0, b.width, b.height);
	var color:uint = 0xFF000000;

	bmd2.threshold(b, rect, pt, ">=", th, color, 0xFFFFFFFF, false);

	return bmd2;
}

Lastly, mark the eye. I’ve done it for only one eye, since the movement will be symmetrical anyway.

public function getBound(b:BitmapData):Rectangle {

	var maxBlobs:int = 40;
	var i:int = 0;
	
	var minX:int = 640; // video width
	var maxX:int = 0;

	var minY:int = 480; // video height
	var maxY:int = 0;

	var hx:int;
	var hy:int;

	while(i < maxBlobs) {

		var bound:Rectangle = b.getColorBoundsRect(0xffffffff, 0xffffffff);  

		if(bound.isEmpty()) break;

		var bx:int = bound.x;
		var by:int = bound.y;
		var bwidth:int = bx + bound.width;
		var bheight:int = by + bound.height;

		if(bx < minX) minX = bx;
		if(bwidth > maxX) maxX = bwidth;

		if(by < minY) minY = by;
		if(bheight > maxY) maxY = bheight;

		for(var y:uint = by; y < bheight; y++) {
			
			if(b.getPixel32(bx,y) == 0xffffffff) {

				// fill color
				b.floodFill(bx,y, 0xffff0000);
			}
		}

		i++;
	}
	
	return new Rectangle(minX, minY, maxX - minX, maxY - minY);

}

At the moment the algorithm is not perfect; I need to add an automatic light detector and stabilizer. Due to the webcam resolution and environment lighting the gradient tracking seems impractical, but the same principle applies, more or less. I also haven’t tested it against a person with dark-colored skin or light-colored eyes, so I’ll post it later.

Resources

Beyond Reality Face Detection

This post is a bit long, so here’s the demo, or play the video below in case you can’t see it (WebGL has tons of quirks depending on the browser version/OS). The source files are here.

I had my first experience with WebGL a few months ago, and with GLSL when I was part of the World Wide Maze team, but everything was done through three.js, which hides most of the nitty-gritty.

WebGL is based on the OpenGL ES specification, and currently most graphics processing uses OpenGL as its infrastructure (well, until OpenCL comes along), so I thought it would be useful to know the inner workings of the system.

Here I employed the ping-pong technique, utilizing framebuffer objects, which enables millions of particles to be calculated through textures. I’ve drawn a simple diagram to explain the process.

Hopefully that made some sense! This technique is also used for post-processing, by applying several passes over the texture.
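
To make the ping-pong idea concrete, here’s a minimal sketch of the same setup in plain desktop OpenGL/C++ (the demo itself is WebGL/JavaScript, but the calls map almost one-to-one): two float textures, each attached to its own framebuffer, and every frame you read from one while rendering the update into the other, then swap. The texture format and the fullscreen-quad pass are assumptions about the setup, not the demo’s actual code.

#include <GL/glew.h> // or whichever GL loader you use

// Two textures + two FBOs; particle state (positions) lives in the textures.
GLuint fbo[2], tex[2];
int src = 0; // which texture holds the current state

void initPingPong(int w, int h) {
    glGenFramebuffers(2, fbo);
    glGenTextures(2, tex);
    for (int i = 0; i < 2; i++) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        // one texel per particle; RGBA float = xyz position + a spare channel
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex[i], 0);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void step(GLuint simulationShader) {
    int dst = 1 - src;
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);   // write into the other texture
    glUseProgram(simulationShader);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[src]);        // read the previous state
    // ... draw a fullscreen quad here so the fragment shader updates every texel ...
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    src = dst;                                     // swap roles for the next frame
}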

For the algorithms I’ve used the Duffing oscillator, the Lorenz system, and some randomized trigonometric calculations.
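
For reference, the Lorenz system is just three coupled differential equations; one explicit Euler step per particle looks something like the sketch below. In the demo this runs inside the fragment shader; the C++ here is only for illustration, using the classic parameter values.

struct Vec3 { float x, y, z; };

// One Euler step of the Lorenz system with the classic sigma/rho/beta values.
Vec3 lorenzStep(Vec3 p, float dt) {
    const float sigma = 10.0f, rho = 28.0f, beta = 8.0f / 3.0f;
    float dx = sigma * (p.y - p.x);
    float dy = p.x * (rho - p.z) - p.y;
    float dz = p.x * p.y - beta * p.z;
    return { p.x + dx * dt, p.y + dy * dt, p.z + dz * dt };
}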

Quick tip: if you want to use complex algorithms, I suggest you read a lot of math books and try to understand at least the basics, because hundreds of mathematicians have published their work over the centuries. Computer vision algorithms like space partitioning, Fortune’s algorithm, etc. have been around for decades, and you can easily find programming adaptations; all you have to do is decode/modify/combine them.

Tested on

– Chrome v29.0.1547.57
– Mac OS X 10.8.4, 3.2 GHz Intel Core i3

References

Google I/O 2011: WebGL Techniques and Performance
Making WebGL Dance
How to write portable WebGL
Learning WebGL
Introduction to shaders
Iñigo Quílez’s blog
– Once you understand the concept of shaders, stare at these amazing shaders to go more in-depth
WebGLStats

Resources

gl-matrix.js

Just a bit of an intermezzo: I’ve built a Photoshop script that detects the position of an object and stores it in JSON.

How to do it

1. First, arrange your objects like so:

Notice the naming convention.

The object whose position you want to retrieve is nested inside a folder whose name is appended with "@"; this concept was taken from cutandslice.

2. Run the script.

and it’ll churn out JSON like the below:

{
	"stage": {
		"item1": {
			"x": 277,
			"y": 618,
			"width": 287,
			"height": 646
		}
	}
}

So if your job involves endless toiling in Photoshop, script automation will greatly optimize your workflow, and you can use the ubiquitous JavaScript! I swear you can’t do any coding without bumping into JavaScript nowadays.

A smart example is of course cutandslice, which has become an indispensable tool for me.

If you’d like to make your own, start from Michael Chaize’s tutorial below:

Tested on

– Adobe Photoshop CS6
– Mac OS X 10.8.4

Resource

Source file

I’ve been thinking of sharpening my C++ skills for some time now, so a while ago I picked up openFrameworks.

I made a simple physics engine: just a hint of linear algebra and a few creative iterations, and you can have these.

Anyway have a peek at my code.
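
If you’re wondering what “a hint of linear algebra” boils down to, the heart of a simple 2D engine is usually just an integration step like the position-Verlet sketch below. This is the generic idea rather than my engine’s actual code; the Particle struct and the damping value are illustrative.

// Generic position-Verlet integration step for one particle.
struct Particle {
    float x, y;    // current position
    float px, py;  // previous position (encodes the velocity implicitly)
    float ax, ay;  // accumulated acceleration, e.g. gravity
};

void integrate(Particle& p, float dt) {
    const float damping = 0.99f;        // crude air resistance
    float vx = (p.x - p.px) * damping;
    float vy = (p.y - p.py) * damping;
    p.px = p.x;
    p.py = p.y;
    p.x += vx + p.ax * dt * dt;         // new position from old + velocity + forces
    p.y += vy + p.ay * dt * dt;
    p.ax = 0.0f;                        // clear the force accumulator for next frame
    p.ay = 0.0f;
}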

Resources

2D physics engine tutorial
Math and stuff
Physics stuff

The demand for mobile/desktop optimization has been overwhelming lately, and maintaining consistent performance, weight, and look across all browsers/devices is excruciatingly painful, especially on mobile.

The one that got really close is Jongmin Kim’s website, which this demo was inspired by.

So, driven by curiosity, I made yet another hipster gallery employing the latest technologies and techniques such as CSS3, Canvas, WebGL, and miscellaneous others.

It hasn’t been rigorously tested so expect incompatibility issues here and there, but it works pretty well on Chrome and Apple’s devices.

Have a peek at my code and feel free to use it, but please don’t remove my name.

Resources

http://requirejs.org/
http://ricostacruz.com/jquery.transit/
http://backbonejs.org/
http://underscorejs.org/
http://html5boilerplate.com/
https://github.com/fschaefer/Stately.js/
https://github.com/millermedeiros/requirejs-plugins
http://learnboost.github.io/stylus/
https://github.com/GoodBoyDigital/pixi.js
https://github.com/bebensiganteng/jQuery-Keyframes
http://www.greensock.com/get-started-js/
http://www.createjs.com/#!/PreloadJS
http://gruntjs.com/

Versions

0.1 (2013/04/17)
– First version

0.2 (2013/04/18)
– Fixed some css issues
– Removed hover for mobile
– Fixed CSS transform javascript
– Fixed a few implementations on FF
– Added version list

Tested on

Desktop
Chrome 26.0, Firefox 20.0, Safari Version 6.0

Mobile/Tablets
iPad Retina iOS 6.3 Safari + Chrome
iPhone > 4, > iOS 6 Safari + Chrome

This is my first Leap Motion experiment! Thank you, Leap Motion, for delivering this wonderful device to me.

The art was heavily influenced by the talented Robert Hodgin, although it’s nowhere as good as his work.

Leap Motion Experiment #1 from Rahmat Hidayat on Vimeo.

Leap Motion Experiment #2 from Rahmat Hidayat on Vimeo.

For experiment #2 I’ll update the World Matrix as soon as I can.

The frequency sometimes fluctuates haphazardly, which can break the interaction, but that can be compensated for by a simple noise reduction algorithm; I also suspect they’ll be improving the software with every update.
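
Something as simple as an exponential moving average already goes a long way; here’s a minimal sketch of the kind of filter I mean (the smoothing factor is an arbitrary pick, not a tuned value).

// Exponential smoothing for one noisy tracked coordinate.
// Smaller alpha = smoother but laggier; 0.2f is just a starting point.
float smooth(float previous, float raw, float alpha = 0.2f) {
    return previous + alpha * (raw - previous);
}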

But overall, considering the price and the size, its performance is superb; the latency is almost non-existent, and it’s even faster than the Kinect.

I’m going to use Cinder for the next experiment; perhaps with C++ the performance can be enhanced.

If you want to try it on your own you can download it here; this is my first experiment, so expect loads of bugs. 🙂

Resources

Leap Motion SDK
Three.js
Leap JS

It’s a bit late, but Happy New Year everyone! I’ve made a short montage as a New Year greeting video.

Please excuse the quality, since I’ve never made any video before.

Ignore the glitches such as the camera shake (invest in a camera stabiliser, folks!), inconsistent color tones, and the story veering off after a while. 🙂

I’ve borrowed the music from Lullatone – Leaves Falling; hopefully there won’t be any copyright issues :p

I’d like to thank Felix Turner for allowing me to steal.. euh.. copy.. I mean adapt his art. 🙂

I’ve learned a lot about WebGL and Three.js from him.

Check it out here.

My first experiment with WebGL, using Three.js and Tweenlite.

After almost 5 years of turbulent life in Dubai, I have finally moved to Japan, the land of the rising sun, as I wished 7 months ago.

Everything is just wonderful in Japan; its majestic nature, imbued with an immaculate culture, creates a very poetic life in every breath.

I am now working at Katamari and AID-DCC, below is a sneak preview of how ebullient the people are.

Synopsis: the story is about an ex-employee making inconspicuous cameos throughout the company’s outing; each appearance has a difficulty level and, if it goes unnoticed, it is rewarded.

And this is their video introduction.