
PocketPC: An Introduction

"Windows CE is a monstrous mess, and it's frighteningly difficult to use."
"Microsoft is going to dump it."
"Palm has the market all sewn up."


If you've been reading the trade articles of the past few years, this is probably the impression you got of Microsoft's handheld entries. There are some facts, though, that you might not know...
  • At the time of this writing, demand for Compaq's iPaq handheld has outstripped supply for nine months
  • PocketPC is far more capable than Palm of delivering a decent gaming experience
  • PocketPC has been gaining market share
This article explores what exactly Windows CE is, what this PocketPC thing is, some of the technical aspects that make it the finest platform for handheld gaming available today, and how to get started developing for one.


What is Windows CE and What is PocketPC?

Background

Unlike Microsoft's previous efforts at ROM-able Windows, Windows CE is a ground-up rewrite. While it may look more or less like plain old Windows, it is a completely different animal from Windows 3.x, Windows 9x, and Windows NT.

The designers of CE decided early on to focus on portability and small size, and it shows. Making a truly tiny machine based on a Pentium-class processor just wouldn't be practical; even though there are low-power Pentium-class processors for laptops, shrinking a Pentium machine to the size of a deck of cards and still getting reasonable battery life just wasn't going to happen. Hence, Windows CE was written to work with alternative processors with very low power consumption, like the StrongARM, SH3/SH4, and MIPS.

Thankfully, though, you don't have to worry about which processor you're writing for. Much like Microsoft's unsuccessful plans for Windows NT running on every desktop platform, you can write to a standard Windows API and get your application to work on all CE processors with a simple recompile.

. . .and this time, it actually works.

Devices

While Windows CE is not locked into any particular form-factor and is capable of running on anything from embedded microcontrollers to cell phones, two main form-factors have become predominant: the Handheld PC and the PocketPC.

Handheld PC 2000

This version should look instantly familiar. It looks and acts very similarly to the Windows you've come to know and tolerate.

Attached Image: shot1.gif (The H/PC 2000 desktop)

The only major difference is that the applications don't live in nice little overlapping panes. Since the screen is so small, apps automatically grow to the size of the screen, and the title and menubars are combined. Here's what the baby version of Excel looks like on the same machine.

Attached Image: shot2.gif (Excel running on an H/PC)

The market for Handheld PCs based on Windows CE is, unfortunately, in decline. While the platform started strongly a few years ago with high-quality offerings like the Philips Velo, HP Jornada, Sharp Mobilon, and Vadem Clio, only the HP survived until the release of Microsoft's Handheld PC 2000 software.

PocketPC

In response to the instant success of the Palm Computing platform, Microsoft introduced the Palm-size PC. Like the Handheld PC, it started out with several vendors, many of which (like Uniden and Philips) bailed after low sales. Despite the technical superiority of hardware like the Casio E-100 series, the Palm-size PC platform was savaged by critics for being overcomplicated and clumsy to use. The standard Windows interface, while it worked well on larger screens, was tight on Handhelds and downright difficult on a small 240x320 screen.

Attached Image: shot3.gif (Windows CE running on a P/PC)

About a year ago, Microsoft released a new version of the Windows CE interface, redubbed "PocketPC". While internally it was basically the same Windows CE as the earlier versions, the user-interface was retooled to work better on a tiny screen. The start menu was moved to a little icon in the corner. The menubar shrank and moved to the bottom. It took very good advantage of color. Best of all, Microsoft finally made available the baby-office apps that had previously only been available on the Handheld PC's. Critics, on the whole, have been warm to the changes, finding the interface faster and easier to use.

Attached Image: shot4.gif (The new PocketPC interface)


Capabilities

The best way to show the capabilities of the respective devices is to show them side-by-side. Here is a table showing the capabilities of the most popular PocketPC devices on the market. Also shown are a couple of popular Palm devices for comparison.

| Device | Processor | RAM | Screen | Sound | Price* |
| Casio E-125 | 150 MHz VR4122 | 32 MB | 320x240 16-bit color | 16-bit stereo, 44 kHz, MP3 | $482 |
| Compaq iPaq 3650 | 206 MHz Intel StrongARM | 32 MB | 320x240 12-bit color | 16-bit stereo, 44 kHz, MP3 | $482 |
| Compaq iPaq 3100 | 206 MHz Intel StrongARM | 32 MB | 320x240 4-bit grayscale | 16-bit stereo, 44 kHz, MP3 | $350 |
| HP Jornada 548 | 133 MHz Hitachi SH-3 | 32 MB | 320x240 12-bit color | 16-bit stereo, 44 kHz, MP3 | $450 |
| HP Jornada 545 | 133 MHz Hitachi SH-3 | 16 MB | 320x240 12-bit color | 16-bit stereo, 44 kHz, MP3 | $400 |
| Palm IIIc | 20 MHz Motorola Dragonball | 8 MB | 160x160 8-bit color | Monophonic tone generator (beep) | $282 |
| Palm VIIx | 20 MHz Motorola Dragonball | 8 MB | 160x160 2-bit grayscale | Monophonic tone generator (beep) | $338 |

* based on the lowest MySimon.com price at the time of writing

As you can see, the PocketPC machines are all more expensive than the Palms, but their capabilities are disproportionately greater. Numbers, though, can only give you part of the story. Here are some of the best PocketPC offerings next to the best color Palm titles.

Attached Image: shot5.gif (PocketQuake on the iPaq)
Attached Image: shot6.gif (Zio Golf on PocketPC, landscape mode)
Attached Image: shot7.gif (JimmyArk 2 on PocketPC)
Attached Image: shot8.gif (Karate Master for Palm)
Attached Image: shot9.gif (Race Fever for Palm)
Attached Image: shot10.gif (Biplane Ace for Palm)

How Do You Program for PocketPC?

Well, you're in luck. Not only are there some very capable developer tools for PocketPC, they're available for a song. You can purchase Microsoft's eMbedded Visual Tools 3.0 CD from Microsoft for only the cost of shipping and handling. The package includes eMbedded Visual C++, eMbedded Visual Basic, and emulators for the platforms mentioned above.

The tools are very mature and robust. In fact, they're almost identical to their Windows-only brethren, Visual C++ 6.0 and Visual Basic 6.0. The biggest difference is that they do not generate native x86 Windows applications. The C++ compiler cross-compiles to the aforementioned processors, while eMbedded Visual Basic produces files that are interpreted by a VBScript-style interpreter on the target device.

Let's dispense early on with Visual Basic for games, though. Since eMbedded Visual Basic produces programs that are interpreted by the PocketPC's rather rudimentary VBScript interpreter, it isn't really suitable for games. If you need a form-based data-collector or something to perform field calculations for you, it's ideal. For games, though, it's just not there. Let's concentrate on eMbedded Visual C++.

How the compiler works

There are two ways to develop an app for PocketPC, and you will very likely be using both methods interchangeably.

The first is by compiling your game for a connected device, then uploading, running, and debugging the game over the connection (usually serial). While this sounds complicated, it's actually quite simple. eMbedded Visual C++ works through MS's ActiveSync software, which is the software used to connect your PocketPC to your computer to exchange data with your address book and calendar. If your PocketPC is connected via ActiveSync, you've done all that's necessary to develop for your device. Simply choose the processor that your device has and press the "make" button just as if you were making a standard Windows application. The file will be compiled, linked, and sent over to the device. Press the "run" button, and your app will pop up on the device's screen. Set a breakpoint in your code, and the code will stop when it gets to that point. Examine a variable in memory, and Visual C++ will get the value and show it to you on your main screen. Neat, huh?

Attached Image: diagram1.gif

As you may have already figured out, though, there is one chief disadvantage to this approach -- speed. While compiling the file and sending it over to the device happens at a reasonable speed, debugging is downright glacial. The device and eMbedded Visual C++ are constantly having to update each other as to the status of your running program. You'll probably want to save debugging on the device as a last resort to fix bugs that don't show up under the next method -- compiling for the PocketPC emulator.

The second method is to develop for the on-screen PocketPC emulator. Calling it an emulator, though, is a bit of a misnomer. Rather than make processor-emulators for the various processors out there, MS simply built a version of Windows CE that runs on an x86 processor. To compile for the emulator, you set the target processor as the x86 CE Emulator, press the "make" button, and eMbedded Visual C++ will generate a Windows CE app that runs on the x86 processor. Pressing the "run" button will then run your app in the on-screen Windows CE emulator.

Attached Image: diagram2.gif

The principal advantage to this approach is speed. Speed speed speed. Using the on-screen emulator, you can run an application under the debugger and it will run just as quickly as if you were debugging a native Windows application. Copying the app to the on-screen emulator and running is almost instantaneous. Developing an app for the on-screen emulator will make you feel right at home if you're used to developing standard Windows apps under Visual C++ 6.0.

There are plenty of disadvantages, though, to this approach. For one, you're compiling for a processor that simply doesn't exist in the PocketPC world. If a particular device has some peccadilloes germane to its processor, you won't see them when it runs on the emulator. Also, the emulator takes on the capabilities of the compiling machine, so you'll likely be developing your app on a 24-bit screen even though there are no PocketPCs with 24-bit screens. Furthermore, your app will likely run much faster on the emulator than on the device, so you won't get a good feel for how your game plays if you develop solely for the emulator. Finally, the emulator only runs on Windows NT or Windows 2000; if you have Windows 95/98/ME, you're stuck with developing on the connected device.

Most PocketPC developers quickly find that they need to alternate between the two methods to develop a PocketPC game. For the bulk of development, the on-screen emulator is the way to go. It's a great way to get your app up and running quickly, and debugging is a breeze. From time to time, though, you'll need to compile for the device so you can ensure that the graphics look right, the game plays at a reasonable speed, and no bugs are creeping in that don't show up on the emulator. Thankfully, switching between one approach and the other is as simple as choosing the target processor on the toolbar and recompiling.

The PocketPC API

If this all looks great, and you're chomping at the bit to convert your large-scale DirectDraw-based isometric RPG title to the PocketPC, there's something you need to know.

The PocketPC API is different from the Win32 API

Sorry to throw a bucket of cold water on your plans, but if you have grand designs on simply recompiling your code and having it work, it's not going to happen that easily. A significant chunk of the Win32 API functions (around 90% of 'em, actually) aren't there.

Hey, where's GlobalAlloc?

Operating systems are evolutionary things. Every year a new version comes out with new capabilities, some of which supersede existing capabilities. The OS makers, though, often must leave antiquated function calls in place to keep from breaking old apps. Since there weren't going to be any old CE apps, the designers went through the Win32 API with a fine-toothed comb to prune it down to its bare essentials -- a library that would still allow you to create powerful apps, but without a lot of antiquated function calls. Hence, if you're looking for AddAtom() or GetWindowWord(), you won't find them.

After dumping the dead wood, the designers went through the list of similar, yet redundant functions. For the most part, if several functions did a similar job, they kept only the one or two that could best cover the capabilities of the rest. Hence, MoveTo() and LineTo() are gone, but Polyline() is still there. CreateFont() is gone, but CreateFontIndirect() is still there. DrawText() was better than TextOut(), so DrawText() got to stay. You get the idea. Just be prepared, when developing a PocketPC app, to hunt for updated versions of some of your favorite API functions.
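As a small illustration of the "batched survivor" pattern (a sketch, not code from the SDK -- the coordinates and font face are hypothetical):

#include <windows.h>

void DrawDemo(HDC hdc)
{
    // Desktop Win32 style -- MoveTo()/LineTo() chains are gone on CE.
    // CE keeps only the batched form: hand Polyline() the whole point array.
    POINT pts[3] = { {0, 0}, {50, 50}, {100, 0} };
    Polyline(hdc, pts, 3);

    // Similarly, CreateFont() is gone, but CreateFontIndirect() covers the
    // same ground via a LOGFONT structure.
    LOGFONT lf;
    memset(&lf, 0, sizeof(lf));
    lf.lfHeight = 16;                          // request a 16-pixel font
    lstrcpy(lf.lfFaceName, TEXT("Tahoma"));    // a face that ships on PocketPC
    HFONT hFont = CreateFontIndirect(&lf);
    SelectObject(hdc, hFont);
    DrawText(hdc, TEXT("Hello, CE"), -1, &(RECT){0}, DT_LEFT);
}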

You can ignore Unicode no longer

Something else you're going to have to get used to is text-handling. Since Unicode is the way of the world, and supporting both ANSI and Unicode would take up more space than necessary, ANSI strings got the boot. All of the PocketPC functions that take strings are expecting Unicode strings. So don't type:

MessageBox (hWnd, "This is my first CE app", "Hello World", MB_OK);
You'll just upset your compiler. What it wants to see is:

MessageBox (hWnd, TEXT("This is my first CE app"), TEXT("Hello World"), MB_OK);
The TEXT macro simply converts a string literal to Unicode format at compile time. This goes for every hard-coded string in your application, from window class names to filenames you pass to the file-handling commands. There are Unicode equivalents for all of your favorite string-handling functions, so don't get too upset. Just kiss the venerable old char * goodbye.
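A quick, hedged illustration of what that looks like in practice (these are standard Win32/CE calls; the buffer name is made up): the generic TCHAR type maps to a wide character on CE, and lstrlen() counts characters, not bytes.

#include <windows.h>

void FormatScore(void)
{
    TCHAR szScore[32];                           // TCHAR == WCHAR on Windows CE
    wsprintf(szScore, TEXT("Score: %d"), 100);   // format into a Unicode buffer
    int nChars = lstrlen(szScore);               // length in characters, not bytes
}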

Direct3D, DirectDraw, DirectInput, DirectPlay, and OpenGL

PocketPC doesn't support them. In fact, the API functions that aren't part of the Kernel, Window, and GDI modules of Windows probably aren't there. While some extensions like Winsock are still around (supporting IR communication, cool eh?), many of the latter-day add-ons to Windows are nowhere to be found.

Don't despair, though. There are a couple of game-related technologies available that will ease the pain of developing for the platform.


Game Technologies

While DirectX isn't there, you're not completely out of luck. There are some technologies that'll ease the pain of losing DirectX.

GAPI

GAPI, formerly GameX, is a technology that MS licensed that allows direct framebuffer access for games in a reasonably portable way. It's without a doubt the fastest way to throw pixels on the screen, but it's got a couple of drawbacks.

GAPI is simple. It's only ten function calls. After initializing GAPI with GXOpenDisplay(), you can call GXBeginDraw() to get a pointer to the framebuffer. GXGetDisplayProperties() returns a structure containing the properties of the display, including bits per pixel, width, height, X pitch, and Y pitch. The pitch values specify the distance between pixel values in the buffer, because a framebuffer is not necessarily a 240x320 array of 16-bit values. It's up to the hardware maker how the video memory is organized.

In addition to framebuffer access, GAPI gives you a relatively platform-neutral way of accessing the PocketPC's controls. This is important because while some handhelds like the Casio and the iPaq have nice little direction-pads, some, like the new HP PocketPC, clone the Palm's horrible four-horizontal-keys layout. GXGetDefaultKeys() returns a struct containing the standard key values that PocketPC supports, so you can easily check whether a key is down.
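To make that concrete, here is a rough sketch of a one-shot GAPI screen fill plus key setup. It is illustrative rather than production code: it assumes the gx.h header and gx.lib from the GAPI SDK, a 16-bit (565) display, and omits error handling.

#include <windows.h>
#include <gx.h>   // GAPI declarations; link against gx.lib

void FillScreenBlue(HWND hWnd)
{
    GXOpenDisplay(hWnd, GX_FULLSCREEN);        // take over the display
    GXDisplayProperties props = GXGetDisplayProperties();

    unsigned char *pFrame = (unsigned char *) GXBeginDraw();
    if (pFrame != NULL)
    {
        // Address pixels through the pitch values rather than assuming a
        // linear 240x320 layout -- the hardware maker decides the memory layout.
        for (int y = 0; y < (int) props.cyHeight; y++)
            for (int x = 0; x < (int) props.cxWidth; x++)
            {
                unsigned short *pPixel = (unsigned short *)
                    (pFrame + y * props.cbyPitch + x * props.cbxPitch);
                *pPixel = 0x001F;              // assumes 16-bit 565: pure blue
            }
        GXEndDraw();
    }

    // Portable key handling: fetch the device's standard key codes once,
    // then compare them against the wParam of WM_KEYDOWN messages.
    GXKeyList keys = GXGetDefaultKeys(GX_NORMALKEYS);
    // e.g. if (wParam == (WPARAM) keys.vkUp) { /* move the player up */ }

    GXCloseDisplay();
}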

While GAPI's strengths are that it is very simple and does the job it sets out to do, it has a couple of weaknesses. First off, it's proprietary. While there are GAPI DLLs available for Casio, Compaq, and HP, there aren't any such DLLs available for other models or form-factors, and you're beholden to Microsoft if any new PocketPC models come out.

Another problem with GAPI is that direct framebuffer access precludes all of the nice window commands. If you want to draw a line, some text, or stretch a bitmap, you're on your own.

Finally, GAPI doesn't run on the emulator, which negates some of the advantages of developing on-screen mentioned above. Thankfully, though, an intrepid hacker wrote a GAPI DLL that does work on the emulator. It is available here.

The DIBSection API and CEAnim

If you don't need to throw pixels at the screen at the highest speed possible, and you want very good speed without worrying about what new platforms are coming out and whether or not GAPI will support them, you should look at the Win32 DIBSection API. It's been around since Windows 95, and it works.

DIBSections are weird birds. Back in the days of Windows 3.1, there were only two ways to handle bitmaps: DDBs and DIBs. DDBs (Device Dependent Bitmaps) are owned by the video driver and are in whatever format the driver prefers. They are very fast to display but have one gigantic drawback -- you can't change the bits once you've created the bitmap. If you want to change the bits, you need to create a new bitmap, which makes DDBs far less than optimal for displaying frames of animation. The second type is the Device Independent Bitmap. A DIB's memory is owned by your application, but drawing the bitmap to the screen requires the video driver to convert it to screen format, which makes displaying DIBs much slower than DDBs.

In the latter days of Windows 3.1, Microsoft created the much-maligned WinG. WinG added a third bitmap type that combined the best of both worlds. You could modify the bits directly, and you could display them to the screen quickly. Several console games were ported using WinG, like Earthworm Jim and a few other scrolling platform-games.

The WinG API survived into Windows 95 as the DIBSection API, and it still exists today. Using CreateDIBSection(), you can create a buffer of memory that's shared between the application and video driver. You can change the bits as necessary and blit the buffer to the screen very quickly using the standard old BitBlt() command.
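Here is a minimal sketch of that pattern (error handling omitted; hdcScreen is assumed to come from GetDC(), and a 240x320 16-bit screen is assumed):

#include <windows.h>

void BlitFrame(HDC hdcScreen)
{
    BITMAPINFO bmi;
    memset(&bmi, 0, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = 240;
    bmi.bmiHeader.biHeight      = -320;      // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 16;
    bmi.bmiHeader.biCompression = BI_RGB;

    void    *pBits  = NULL;
    HBITMAP  hbm    = CreateDIBSection(hdcScreen, &bmi, DIB_RGB_COLORS,
                                       &pBits, NULL, 0);
    HDC      hdcMem = CreateCompatibleDC(hdcScreen);
    SelectObject(hdcMem, hbm);

    // Your app owns pBits, so you can scribble a frame straight into memory...
    memset(pBits, 0x00, 240 * 320 * 2);      // clear the back buffer to black

    // ...then blit the finished frame with plain old BitBlt().
    BitBlt(hdcScreen, 0, 0, 240, 320, hdcMem, 0, 0, SRCCOPY);
}

In a real game you would create the DIBSection once at startup and keep it around as a back buffer, calling BitBlt() once per frame.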

CEAnim is an extensive class library for Windows CE by Random Software (www.randomly.com). It leverages the DIBSection API to the hilt to provide all kinds of animation effects, including sprite animation, alpha blending, dirty-rectangle management, and palette management. On the whole, it's much more extensive and high-level than GAPI.

In addition to graphics, CEAnim includes a library of common data structures and memory management functions. Best of all, though, is that it addresses a problem caused by the loss of DirectSound -- wave mixing. There's a sound class that can play multiple sounds at once so you don't have to cripple the sounds in your game.

It's certainly worth a look. Download it at ftp://www.randomly.c.../ceanim_src.zip. If anything, check out the author's CE offerings at the web address above to see the kind of things you can do.

The DOOM and Quake Engines

Direct3D and OpenGL don't exist for PocketPC, but you're not out of luck if you're looking for 3D that fits in your pocket. Both Doom and Quake 1 have been ported to PocketPC, and you can use them to develop your own projects. Both engines are freely available under the GNU General Public License, so you can use them in your own projects -- even commercial ones, provided you comply with the license's terms!

QuakePPC source code and binaries are available here. DoomCE source is available here. The terms of Id's licenses are available here.


Conclusion

PocketPC's are cool. It's easy to develop for them. The development tools are robust and very inexpensive. While the whole of the Win32 API isn't there, there's enough to get around. And there are a few technologies that will help you develop games that rival what you see on the desktop.

Enjoy!

PocketPC Development Resources

Compilers

Microsoft Mobile Device Developer
PocketC (a third-party C development environment that actually runs on the device)

Books

Inside Microsoft Windows CE
Programming Windows CE
Essential Windows CE Application Programming
Windows CE 3.0 Application Programming

Discussion Groups

An active mailing list on YahooGroups is windowsce-dev

hpc.net has a Windows CE developer's mailing-list with over 1500 members!

Microsoft has several active newsgroups for CE/PocketPC development
microsoft.public.win32.programmer.ce
microsoft.public.pocketpc.developer
microsoft.public.windowsce.app.development

Hardware Sites

PocketPC.com
Compaq's iPaq
HP's Jornada
Casio's E-100 series

Games (be sure to check out your competition)

Jimmy's Windows CE Software
www.pocketgamer.org
ZIOSoft


The Clash of Mobile Platforms: J2ME, ExEn, Mophun and WGE

by Pedro Amaro

Abstract: The author gives a brief introduction to the development of cellphone games. Some of the specific characteristics of this market are analyzed, and the four primary free game development platforms are described, noting each one's advantages and disadvantages. This article is primarily targeted at amateur development teams who wish to turn professional in this market.

1. Introduction
2. The wireless gaming market
3. J2ME
4. ExEn
5. Mophun
6. WGE
7. Which should you choose?
8. Conclusions


1. Introduction
At the moment, most programmers enter the videogame world through the computer. Given the high licensing fees in the console market, it's extremely difficult to release games commercially on those systems (at least legally). Over the last two years, however (and especially in 2002), another gateway to professional game development has opened: the cellphone. The appearance of several free, device-independent development platforms allows amateur development teams to compete head-to-head with the sector's professionals without any notable disadvantage. To achieve this, one must choose the platform that best fits the intended objectives. To make that choice easier, this article introduces the four main free platforms available in the market: J2ME [1], ExEn [2], Mophun [3] and WGE [4][5]. The reader is first introduced to the specific characteristics of this market, to highlight how development here differs from other systems. This introduction is followed by an analysis of the strong and weak points of each of the platforms mentioned above. The final section contains a short summary of each platform, as well as an indication of what type of situation each one fits best.


2. The wireless gaming market
Unlike what happens on other systems, very few people buy a cellphone just to play games. Sometimes the choice is based purely on price, sometimes people buy whatever the mobile operator sells them, and some people choose a certain model just because their friends have one... there are plenty of reasons to choose a device, and the available games are usually seen only as a nice "side effect" of buying a cellphone.

With the appearance of the most recent cellphones, which have advanced graphics and sound capabilities, this market started expanding. Beyond those capabilities, the existence of free, device-independent development platforms played a major role in this expansion. Looking at the success of handheld consoles (especially Nintendo's Game Boy, which became the world's best-selling console in 1999), it becomes obvious that there's an excellent market to explore. After all, most people use their cellphone to be reachable anywhere, anytime, which means they almost always carry it with them. You can't say the same about a handheld console, whose users only carry it around when they're sure they'll face long waiting periods. The cellphone, on the other hand, is always there: at the bus stop, in the dentist's waiting room or in a boring class. Its main function (communication) is constantly in demand, which means its remaining functions (games included) are also always available.

Another important aspect is that the typical cellphone user doesn't care about the technology inside the device. "J2ME", "ExEn" and "Mophun", for example, are words most users don't know. Mention "Snake", though, and it's fairly certain that at least those who play on a cellphone will recognize the word. Even though, at the moment, the choice of a cellphone isn't influenced by its games, it's extremely likely that this will change in the next two to three years. You shouldn't expect common users to start checking whether the device they plan to buy supports a certain platform, however. What they will look for is whether that device supports a good number of quality games at reasonable prices... without forgetting that a cellphone's main function is communication.

A final thought goes to the game genres most likely to succeed in this market. Unlike users of other systems, cellphone users don't play for long stretches. Games that demand it (RPGs and platformers, for example) must reach an extremely high level of quality to win the user over. Puzzles are the most common genre: if they're easy to play, fun, fast-paced and don't demand much practice, success is almost assured. Another genre starting to make a name for itself is action: fighting games, shoot-'em-ups and beat-'em-ups are now entering the cellphone gaming market. When developing a game for this platform, don't forget that it will most likely be played for only five or ten minutes at a time. If it doesn't grab the player in that window, its commercial success will be quite limited.


3. J2ME
The "Java 2 Micro Edition" is usually considered what Java was originally supposed to be: a cross-platform language capable of working in devices with highly reduced capabilities. With that in consideration, it doesn't come as a surprise the similarities between J2SE and J2ME. As a matter of fact, J2ME is often considered a Standard Edition stripped to the essential.

Since it wasn't initially planned for games, its potential is quite limited compared with the other platforms created specifically for that purpose. Although MIDP 2.0 already comes with a GameAPI, the current version (MIDP 1.0) offers only the rudiments of what would be required to produce technically advanced games. For example, there's no support for resizing images, performing simple 2D rotations, or even playing sound. However, because it appeared first and acquired a solid base of supporters, J2ME became almost a market standard, and it's the platform with the most games on the most devices.

J2ME's development costs are extremely low. The SDK is freely available and there are no licensing fees, which means anyone can create a game and market it. However, unlike the other platforms created specifically for games, there is no J2ME business model. The developer must negotiate commercialization with three possible "partners": manufacturers, operators and distributors.

Negotiating a contract with a manufacturer is usually the most difficult option. Most of the time, it's cheaper for the manufacturer to create its own internal development team than to pay a third party to develop games to be included in all its devices. Besides, now that games can be downloaded to the cellphone, the number of titles pre-installed in the device's memory is a feature losing importance. More often than not, these built-in games are weaker than those the player can obtain through a simple download.

Negotiating directly with an operator is becoming the most common alternative. Most operators already have a service targeted at game developers, and current indicators suggest these services will expand. The profit margins in revenue sharing are usually the highest (around 80%). However, commercializing a game this way can be difficult. Most of these services require a trial period in which the game download is free; if the game is successful during this period, it moves on to a commercialization stage. The problem with this option lies in a simple fact: by the time the game enters the commercialization stage, the "new game" effect has worn off and the potential buyers have already played it while it was freely available. Another problem is the limitation of negotiating with just one operator. To release a game in more than one country, for example, negotiations with at least two operators are required, and the problem compounds when the developer wants continent-wide or worldwide distribution. Even so, it can sometimes be the best option (when, for example, due to localization difficulties, the developer wishes to target a single country and the operator does not demand a free trial period).

The third option, dealing with a distributor, is usually the most appealing when the developer wants large-scale distribution. It's quite common for distributors to have agreements with several operators. The downside is the lower profit margin. Operators usually take 20% of the revenue, and the remaining 80% is divided between the distributor and the developer. Although it's possible to obtain a revenue share between 20% and 70% (well above the 5% to 10% typical of other markets), the profit will never be as high as it would be negotiating directly with the operator. The developer also has to find a distributor interested in the application, which can be extremely difficult (although there are cases where the distributor contacts the development team). The main advantage is that the developer is spared the commercial worries, since both the operator negotiations and the marketing are the distributor's job.

Regarding J2ME's future, generally speaking it's excellent. Not only does it have an extensive list of manufacturers supporting it (making it almost a standard), but it has also overcome the early problems of JVMs that did not follow the specifications (a result of manufacturers rushing devices supporting this technology to market). In the gaming market, its future depends somewhat on MIDP 2.0. It certainly won't fade away, and it should keep the lead through 2003... but if one or more of the remaining contenders stays ahead technologically and manages to ship its engine on a number of devices similar to J2ME's, Sun's platform will have difficulty keeping the lead in this specific market.


4. ExEn
"Execution Engine" (also known as ExEn) was developed by In-Fusio to "fight" the limitations imposed by J2ME in game development. It's also interesting to notice that In-Fusio tried to overcome those limitations working together with Sun by presenting the proposal of a GameAPI for MIDP 2.0.

ExEn was the first mass-market downloadable game engine made available in Europe. This important head start allowed ExEn to achieve its current position as leader on that continent, making it the most used game engine (and therefore the one with the widest range of games).

In early November 2002, 18 handset models supported ExEn, which in Europe means around one million available cellphones. Although that's a modest number compared with the five million devices carrying J2ME technology, it's impressive for a "small" proprietary technology.

Nevertheless, compared with the remaining contenders, it would be incorrect to say this leadership is justified by technological capability. In both graphics and processing speed, ExEn is far from the lead. However, by supplying important additional game development functions (sprite zooming, parallax scrolling, raycasting, rotations), it easily beats J2ME. Add a virtual machine that, while not the fastest, can run around 30 times faster than a generic VM (though usually only 10 to 15 times) while leaving only a 5% footprint on the device's memory, and it's easy to see why this is the most widely chosen game engine.

Another important reason that led several developers to choose ExEn is In-Fusio's business model, which is divided into two levels: standard and premium. At the standard level (free subscription), In-Fusio offers the SDK, an emulator, online technical support and the possibility of later upgrading to the premium package. Developers who reach the premium level have their games marketed by In-Fusio, which promotes them to the operators whose devices support the engine.

Execution Engine's growth prospects are quite good. With a new version (2.1) released at the beginning of 2003, the support of several influential software houses (Handy Games and Iomo, for example) and an attractive business model for independent producers, the number of available games should increase considerably. In-Fusio has also started to enter the Chinese market, which should become one of the strongest (if not the strongest) in the next two to three years.


5. Mophun
Mophun is described by its creators (Synergenix) as a "software-based videogame console". Although its development began in late 1999, it only achieved serious market penetration in November 2002.

Its late arrival, combined with the fact that only three devices carry the engine (the Sony Ericsson T300, T310 and T610), made some developers discard the option of developing for this system. The somewhat biased market analyses released by Mophun's producers also scared away some interested developers... in one of those analyses, for example, Mophun is shown sharing leadership of the European market with J2ME, yet while the J2ME and ExEn figures dated back to October 2002, the values presented for Mophun were predictions for 2003. This gave the impression that something was wrong with Mophun at the operator and manufacturer support level.

Technically speaking, Mophun has no rivals. Tests performed by independent organizations showed that on a device where Mophun reaches 60 MIPS, J2ME managed only 400 KIPS (a performance gap of 150 times). Synergenix adds that on certain devices part of the VM code is translated directly into native code, making it possible to reach 90% of the device's maximum capability (for instance, 90 MIPS on a device that reaches 100 MIPS running native programs). Its remaining characteristics are similar to ExEn's.

Like ExEn and J2ME, Mophun is freely available. In some respects, Synergenix's business model resembles In-Fusio's: after the game is developed, Synergenix handles certification, distribution and marketing. However, since its current network isn't very extensive, it doesn't seem as appealing as ExEn's, which has led some developers to choose the theoretically weaker system.

Mophun's future is "semi-unknown". If Synergenix fails to quickly acquire additional support, Mophun is quite likely to be dropped in favour of less powerful but financially more appealing technologies. However, if the promises that several operators and manufacturers will shortly adopt Mophun are fulfilled, the system's technical superiority could make it the new leader.


6. WGE
The "Wireless Graphics Engine" is TTPCom's solution. Although it began being considered the main candidate for domination of the game engines' market, the lack of support by game developers ended up decreasing the initial appeal.

It's impossible to deny that, from a purely technical point of view, WGE has everything going for it. It may be slower than Mophun, but its several API modules make 2D and 3D programming easier (including tile management and collision detection), allow simple access to networking functions and provide sound support, among other capabilities.

As with its direct contenders, the SDK download is free and TTPCom has a business model aimed at attracting game development teams. On top of the usual revenue sharing from games sold by download, there's a "minimum income" from selling games directly to device manufacturers.

Unfortunately, despite the initial "fever", the lack of support from the major manufacturers ended up limiting WGE's success. Most software houses avoided it, which led small companies and independent developers to follow their example. The result is easy to see: the number of games available for WGE is barely over 30. This general lack of interest does bring one advantage for those who want to start developing for WGE: with so little internal competition, it's easier for a quality game to succeed. The disadvantage is the smaller pool of potential players, which may considerably limit the profits from a game's commercialization.

Although it would be wrong to call WGE's immediate future dark, its prospects have looked better. Given the strong competition that the current market fragmentation will bring over the next two years, if TTPCom can't attract more software houses to its catalogue, it will hardly win the support of additional manufacturers; and without additional manufacturers, it's extremely hard to attract more software houses. WGE's future depends on TTPCom's ability to break this cycle. If it manages to within the next three or four months, the growth prospects are quite positive. Otherwise, the end is almost unavoidable.


7. Which should you choose?
At this point, a question arises: which platform should a programmer choose? Due to the high fragmentation of this market, there isn't one answer that suits all situations. To choose the platform that best fits, the team must set the objectives of what it wants to produce and weigh the advantages and disadvantages of the available platforms.

When the objective is reaching a wide market and some performance compromises are acceptable, J2ME is the best option. If commercializing the game is also an objective, the team should expect to spend extra time negotiating distribution deals.

If the project requires more capability than J2ME offers and a smaller market is acceptable, or if the team wants a platform with a simple business model, ExEn should be selected.

When performance (in both speed and graphics) is the most critical aspect, Mophun emerges as one of the main choices. In this case, it's important to weigh the risk of choosing a platform that isn't yet widely deployed.

If a platform with a reduced market isn't a problem, if the objective is a high-performance game, and if Mophun isn't a satisfactory choice for any reason, WGE is the best option. Once again, it's advisable to study the choice carefully to keep expenses from outweighing the expected profits.

8. Conclusions
With this article, the author intended to give a brief introduction to the main wireless game development platforms, in the hope of aiding the platform choice of those who wish to enter this emerging market. The analysis was limited to the four main freely available platforms, to make the article especially useful to amateur development teams seeking an entrance into professional game development. Anyone who wishes to pass through that entrance, however, must remember that it will only be possible by producing quality products adapted to the specific needs of this market.



References

1. Sun Microsystems, J2ME Homepage, http://wireless.java.sun.com
2. In-Fusio, ExEn Homepage, http://developer.in-fusio.com
3. Synergenix, Mophun Homepage, http://www.mophun.com
4. TTPCom, TTPCom Homepage, http://www.ttpcom.com
5. 9Dots, WGE Support Page, http://www.9dots.net


Pedro Henrique Simões Amaro
Departamento de Engenharia Informática
Universidade de Coimbra
3030 Coimbra, Portugal
pamaro@student.dei.uc.pt
http://pedroamaro.pt.vu

An Introduction to Developing for Mobile Devices Using J2ME/MIDP (Part 1)

by Kim Daniel Arthur

The world of mobile gaming has never been as hot as it is today! Over the last couple of years the mobile gaming experience has grown from asynchronous black-and-white games to real-time colour multiplayer Java games. The introduction of colour, mass-market, gaming-enabled mobile phones has taken mobile gaming a giant step up the gaming food chain.


Hobbyist heaven
The current state of mobile game development much resembles that of the early '80s, when a single developer could lock himself in his bedroom for a month and come out with a fresh best-selling classic. Add the simple fact that you can get all the tools you need to make your own mobile game completely free (with a clear conscience), and you will understand why mobile game development is attracting so much interest from hobby developers and smaller game houses alike.

"So how do I get started?" I hear you ask. Well read on!


Overview
This 2-part article will introduce you to the world of mobile game development through J2ME and MIDP. The first part will give a general introduction to the platform and environment, familiarise you with the MIDP API and help you code, compile and run your first MIDP game.

The second part will take a more detailed look at important elements in MIDP development such as:

  • Device specific APIs
  • Tips and tricks on how to reduce your application size
  • Targeting multiple devices
  • Implementing sound and music
The different platforms
First, let's take a look at the main target platforms. The list of platforms is ever growing but can be split into two main categories: Java-based and C-based. Within these two categories there are several variations, each with its own characteristics and device base.

The most widely available/supported platforms are:

Java-based: MIDP (J2ME), ExEn, WGE, DoJa
C-based (non-Java): Mophun, Brew, .NET Compact Framework

It is important to consider which platforms have the widest device support and, more importantly, which devices are most popular among users. In some cases you might want to target a specific region or phone network operator, in which case the choice of target platform might not lie in your hands. As an example, Verizon (one of the largest network operators in the US) is pushing Brew-enabled handsets. So far J2ME/MIDP seems to be the most widely available platform, chosen by Nokia, Motorola, Siemens and Samsung, to mention a few.

This article will focus on the J2ME/MIDP platform and the supporting devices. If you wish to read more about the other platforms check out the resource section where you will find links to related information.


So what is all this J2ME and MIDP business anyways?
(4 letter acronym warning!)


J2ME
J2ME (Java 2 Micro Edition) is an optimised subset of J2SE (Java 2 Standard Edition), or "normal Java". J2ME itself defines a further set of configurations and profiles that are used to tailor the environment for low-end devices such as PDAs and mobile phones.


CLDC
CLDC (Connected Limited Device Configuration) is the J2ME configuration designed for devices with slow(er) processors and limited memory: typically devices with a 16- or 32-bit CPU and from 128 KB to 512+ KB of memory.


MIDP
The MIDP (Mobile Information Device Profile) profile (API) defines elements such as:

  • High level user interface elements
  • Application lifecycle management
  • Local data storage
  • Connectivity
And most importantly for us:

  • Low level Graphics APIs
  • Input handling
  • Media (Sound)
The MIDP profile, together with the CLDC configuration, runs on top of the KVM (Kilo Virtual Machine); together these make up the runtime environment that we will be developing for.


Devices
To whet your appetite a little, here is a list of some of the devices that you will be able to develop for when you become the master of MIDP!

Nokia: 3510i, 3300, 5100, 6650, 6100, 7210, 7650, 3650, N-Gage
Motorola: T720
Siemens: S55
Samsung: S100, A500

For a more detailed list, check out this link!


Environment / Tools
Before you get your coding fingers all warmed up, let's take a look at the basic MIDP development environment and process.


J2SE SDK
As you will be writing and compiling Java code you will need the standard J2SE SDK, which includes the basic tools needed to compile your code. So if you haven't already, download and install the J2SE SDK from here.


The Wireless Toolkit
As you might have suspected, you will be developing for a device that is not a "PC", so you will need something that emulates the target device or platform. Enter the J2ME Wireless Toolkit. The J2ME Wireless Toolkit, or WTK, will be your best friend (and sometimes enemy) in the time to come. The WTK includes these main features:

  • The MIDP API classes and documentation
  • Default set of device emulators
  • Example code and applications
  • Java class pre-verifier (more about this one later)
The WTK can be downloaded and installed from here.


Editor of choice
To write your code you can of course use the editor you are most comfortable with. My personal favourite is UltraEdit. There are also several Java IDEs that integrate well with the WTK.

Once everything is installed, let's get going on the fun stuff: writing code! (At last!)


MIDP the API

What's here, what's not?
As you start developing for MIDP you will soon see that the API is relatively small and compact. It consists of seven neatly organised packages:

  • java.io - Provides data stream classes that, among other things, are useful for reading from resources (level files, images, sounds...).
  • java.lang - Includes the basic Java classes derived from the J2SE API: all-important classes like Thread, the primitive wrapper classes (Byte, Short, Integer...) and a cut-down Math class.
  • java.util - A subset of the J2SE java.util package that holds a set of helpful utility classes like Random, Vector, Hashtable and the TimerTask class.
  • javax.microedition.io - This package contains all the networking-related classes and interfaces. Be aware that HTTP is the only mandatory network protocol in MIDP implementations.
  • javax.microedition.lcdui - The mightiest package of them all! Includes classes for both low- and high-level UI operations. The high-level components include Forms, Lists, TextFields and Commands, all important for handling user input and navigation, and most commonly used in games for menus and instruction screens. Among the low-level objects are Canvas, Graphics and Image, which provide typical game actions like drawing to the screen and catching user input.
  • javax.microedition.midlet - The midlet package defines the entry and exit point for MIDP applications (MIDlets). It contains a single class (yes, the MIDlet class) which is used by the AMS (Application Management Software) to control the lifecycle or state of your MIDlet. Every MIDP application you make must have a class that extends MIDlet to allow the AMS to (most importantly) start and stop your application.
  • javax.microedition.rms - The rms package provides mechanisms to store data persistently; nice to use for storing high scores, saved games and the like! The basic storage elements are referred to as records, which you can read and write via the RecordStore class. (You might be used to storing data in a local file, but in MIDP you write it to a RecordStore, as you have no access to a file system as such; see the sketch after this list.)
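As an example of the rms package mentioned above, here is a small sketch (the class and store names are hypothetical) of persisting a single high score:

import javax.microedition.rms.*;

public class HighScoreStore {
    // Save the score, creating the store and record on first use.
    public static void save(int score) {
        try {
            RecordStore rs = RecordStore.openRecordStore("highscore", true);
            byte[] data = Integer.toString(score).getBytes();
            if (rs.getNumRecords() == 0)
                rs.addRecord(data, 0, data.length);   // first run: record id 1
            else
                rs.setRecord(1, data, 0, data.length);
            rs.closeRecordStore();
        } catch (RecordStoreException e) {
            // A real game would report or log the failure.
        }
    }

    // Load the score, returning 0 if nothing has been stored yet.
    public static int load() {
        int score = 0;
        try {
            RecordStore rs = RecordStore.openRecordStore("highscore", true);
            if (rs.getNumRecords() > 0)
                score = Integer.parseInt(new String(rs.getRecord(1)));
            rs.closeRecordStore();
        } catch (RecordStoreException e) {
            // Fall through and return the default.
        }
        return score;
    }
}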
If you have previous experience with Java programming, perhaps having made an Applet or two, you will see quite a few similarities with the J2SE counterparts, and the basics of MIDlet development will be quicker to pick up. If you are new to Java, all these packages and classes might sound quite daunting, but once you have your MIDlet running you will soon enough grasp what is needed to complete your game project!


The MIDlet
MIDP applications are called MIDlets (similar to the well-known Applet). The files you can consider the MIDP executables are the Jad and Jar files.

The Jar file is an archive (a zip file) containing, most importantly, your game's class files and resources. It also includes a Java Manifest file which, along with the Jad (Java Application Descriptor) file, contains vital information about your MIDlet.

We will find out exactly what is included in these files a little later!


The building blocks
As mentioned, the basic building block and entry point of your application (OK, let's call it your game) is the MIDlet class. Also mentioned earlier was the AMS, the piece of software on the device that manages your game's lifecycle. When the user opens his list of games and decides to start yours, the AMS creates a new instance of your main class, the one that extends MIDlet, using that class's default (no-argument) constructor. If no error/Exception occurs when doing so, it calls the startApp() method on the new MIDlet instance. Your MIDlet is now in the "active" state; this is where you gain control and can start performing your magic!

Another important building block for your game is the Canvas, which defines the all-important methods for drawing to the screen and capturing user input. The Canvas class itself extends a class called Displayable, the base class for all objects that can be "placed onto" the device's display, such as Lists and Forms.

The Canvas class defines several important methods that we should take a look at now, so you will be mentally prepared for what is to come later! These are commonly referred to as event delivery methods; they deliver events that you can handle as needed in your game:

  • keyPressed( int keyCode ) - Indicates that a key has been pressed. Which key was pressed can be identified through the method's single int parameter.
  • paint( Graphics g ) - Called by the virtual machine when a scheduled repaint is performed. The Graphics parameter is the object used to render to the Canvas. NOTE: you should never call paint() manually; if you want the Canvas to be repainted, call repaint() on the Canvas!
  • keyReleased( int keyCode ) - Works the same way as keyPressed() but is triggered by the release of a key.
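Putting those methods together, a bare-bones Canvas might look like the following sketch (the class and field names are hypothetical):

import javax.microedition.lcdui.*;

public class PlayCanvas extends Canvas {
    private int x = 10;
    private int y = 10;    // position of our "player"

    protected void paint(Graphics g) {
        // Clear the whole screen, then draw the player as a white square.
        g.setColor(0x000000);
        g.fillRect(0, 0, getWidth(), getHeight());
        g.setColor(0xFFFFFF);
        g.fillRect(x, y, 8, 8);
    }

    protected void keyPressed(int keyCode) {
        // getGameAction() maps device-specific key codes to portable actions.
        int action = getGameAction(keyCode);
        if (action == LEFT)  x -= 2;
        if (action == RIGHT) x += 2;
        repaint();   // schedule a repaint -- never call paint() yourself
    }
}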
Important limitations and pitfalls
The last thing to do before we make your first MIDlet is to identify some all-important limitations and pitfalls. (Don't let these scare you!)

  • Do not believe the myth that there is no transparency support in MIDP. Most devices and emulators support transparent images (some require the PNGs to be saved in 24-bit rather than indexed mode). Some devices (Nokia) even support alpha transparency!
  • MIDP has no floating-point support (no double or float). But this needn't limit your possibilities, as fixed-point math will come to your rescue; see the sketch after this list.
  • No trigonometry functions. So once again it's time to dig out those lookup tables (also covered in the sketch below)!
  • No direct access to image data (pixels) through generic MIDP, so common tasks like get()'ing and set()'ing pixels are not possible (not quite true for set()'ing, as you can draw a one-pixel line or rectangle to do it). But some devices (Nokia) provide device-specific methods to access pixels and image data.
  • No generic support for rotating or scaling images, although some devices (Nokia) provide device-specific methods for this. Commonly, rotation in 90-degree increments and flipping both horizontally and vertically are implemented in device-specific libraries.
  • Graphics modes are not palette-based. 4096 is the most common colour count.
  • Most phones do not support multiple simultaneous key presses.
  • Watch your application size. Most devices have a defined maximum application size, ranging from 30 KB on low-end black-and-white phones to the more generous 180 KB limits on high-end colour phones. For colour games a good target is 64 KB, which is the lowest limit around for colour MIDP devices. (Remember to always check the application size limit for each phone you target!)
  • Try to keep the number of classes in your game to a minimum, as each class adds size and heap memory overhead. Sometimes you will even have to break common design rules to get around the size and speed limitations. (For example, accessor methods like getX() and setX() are considered unnecessary overhead.)
  • If you are planning to support a wide range of devices, don't put game logic in the class that extends Canvas, as on some devices (Nokia, for example ;) ) you might want to extend a device-specific Canvas class called FullCanvas instead. The less game logic you have in your Canvas class, the less unique code you need for different device versions of your game!
  • Remember to obfuscate your Class files! Not only does this reduce the size of your files, it makes it harder for others to decompile your game. (http://proguard.sourceforge.net/ , http://www.retrologi...guard-main.html)
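As promised in the fixed-point and trigonometry items above, here is one common workaround, sketched in 16.16 fixed point. The tiny sine table is precomputed offline (one entry per 15 degrees), since CLDC has no Math.sin(); a real game would use a finer table or interpolate.

public class FixedMath {
    public static final int FP_SHIFT = 16;            // 16.16 fixed point
    public static final int FP_ONE   = 1 << FP_SHIFT; // represents 1.0

    // Multiply two 16.16 values, widening to long to avoid overflow.
    public static int mul(int a, int b) {
        return (int) (((long) a * (long) b) >> FP_SHIFT);
    }

    // sin(0), sin(15), ... sin(90) in 16.16, precomputed offline.
    private static final int[] SIN = { 0, 16962, 32768, 46341, 56756, 63303, 65536 };

    // Sine for angles that are multiples of 15 degrees, using symmetry.
    public static int sin(int degrees) {
        degrees = ((degrees % 360) + 360) % 360;      // wrap into 0..359
        if (degrees <= 90)  return  SIN[degrees / 15];
        if (degrees <= 180) return  SIN[(180 - degrees) / 15];
        if (degrees <= 270) return -SIN[(degrees - 180) / 15];
        return -SIN[(360 - degrees) / 15];
    }
}

Positions and velocities are then kept in 16.16 form throughout the game loop and shifted right by FP_SHIFT only at draw time.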
OK, enough already! Let's go!


First MIDlet
To write our first MIDlet we will use the J2ME Wireless Toolkit and the tools it provides. The most important tool is the KToolbar; from within the KToolbar you can create and manage your MIDlet projects. It has features to compile, package, run and even obfuscate your MIDlet.

Using the KToolbar to manage your MIDlet build process will enable you to get into the development of your first MIDlet quickly. You won't have to worry about doing all the compiling and packaging on the command line; you can save that for later, when you are comfortable with the environment. As your projects grow in size, your need for a tailored build process will increase and you will most probably want more control. (More on this in part 2.)


Setting up the project
When you start the KToolbar, you will see the following window. Take a close look, this is your new friend!

[Image: /reference/programming/features/j2me1/image002.jpg]

To create your project, click (you guessed it) the New project button.

[Image: /reference/programming/features/j2me1/image004.jpg]

In the first field, enter the name of your project. The second field, "MIDlet Class Name", is where you define the name of your MIDlet's main class. This class will be the one that extends MIDlet and is instantiated by the AMS. The actual name you give it is not important, but it must be a valid Java class name; for now let's call it Startup. The next window holds all the properties that must be present in your MIDlet's Jad and Manifest files.

[Image: /reference/programming/features/j2me1/image006.jpg]

  • MIDlet-Jar-Size - This Jad attribute must always reflect the exact size in bytes of your MIDlet's Jar file. When using the KToolbar to package your MIDlet this is handled automatically. (If the Jad file holds the wrong value for your Jar file's size, the MIDlet will fail to install on the target device!)
  • MIDlet-Jar-URL - The URL of your MIDlet's Jar file. (Typically just the filename of the Jar file; this matters more for delivery of the MIDlet via over-the-air downloads.)
  • MIDlet-Name - The name of your MIDlet, which appears in the list of MIDlets in the device's AMS.
  • MIDlet-Vendor - The name of the MIDlet's vendor: you or your company.
  • MIDlet-Version - The current version of the MIDlet.
  • MicroEdition-Configuration - Mandatory field identifying which configuration the MIDlet uses.
  • MicroEdition-Profile - Mandatory field identifying which profile the MIDlet uses.
When the new project wizard is complete you will be prompted with the following (or similar):

Place Java source files in "c:\j2mewtk\apps\MyGame\src"
Place Application resource files in "c:\j2mewtk\apps\MyGame\res"

What has happened is that the KToolbar has created a directory structure for your project. KToolbar projects are placed in their own subdirectory of the "apps" folder, which is located in the folder where you chose to install the WTK.

As the KToolbar states, we should place all our source files in the newly created "src" folder.


Basic MIDlet
OK, we have the project set up and are ready to go. In the "src" folder, create a file called Startup.java (case sensitive). This will be the source file for our MIDlet's main class. The Startup class will extend javax.microedition.midlet.MIDlet, which has 3 abstract methods that need implementing. These are the all-important methods that control the lifecycle of our MIDlet.

Below is the source for our first and most basic MIDlet:

import javax.microedition.midlet.*;

public class Startup extends MIDlet
{
    /*
     * Default constructor used by AMS to create an instance
     * of our main MIDlet class.
     */
    public Startup()
    {
        //Print message to console when Startup is constructed
        System.out.println("Constructor: Startup()");
    }

    /*
     * startApp() is called by the AMS after it has successfully created
     * an instance of our MIDlet class. startApp() causes our MIDlet to
     * go into an "Active" state.
     */
    protected void startApp() throws MIDletStateChangeException
    {
        //Print message to console when startApp() is called
        System.out.println("startApp()");
    }

    /*
     * destroyApp() is called by the AMS when the MIDlet is to be destroyed
     */
    protected void destroyApp( boolean unconditional ) throws MIDletStateChangeException
    {
    }

    /*
     * pauseApp() is called by the AMS when the MIDlet should enter a paused
     * state. This is not a typical "game" pause, but rather an environment pause.
     * The most common example is an incoming phone call on the device,
     * which will cause the pauseApp() method to be called. This allows
     * us to perform the needed actions within our MIDlet.
     */
    protected void pauseApp()
    {
    }
}

example01.zip


Compiling, preverifying and running
To compile all the src files in your project (currently just Startup.java) press "Build".

What the KToolbar does now is to compile your source code against the MIDP and CLDC APIs (and any libraries in the /lib folder of the currently selected emulator) into a folder called tmpclasses. The command it executes would be similar to (relative to project folder):

javac -d tmpclasses -bootclasspath %wtk%\lib\midpapi.zip -classpath tmpclasses;classes src\*.java

The next step the KToolbar takes is to preverify your Java classes and place them in the "classes" folder in your project. Class files must be preverified before they can be run on a MIDP device.

The command line approach for preverifying: (preverify.exe is a tool provided with the WTK, located in the WTK's \bin folder)

preverify -classpath %wtk%\lib\midpapi.zip tmpclasses -d classes

Now that your MIDlet is compiled and preverified you can run it by pressing "Run". Since all our MIDlet does is print 2 strings to the console, nothing will actually happen on the emulator's display, but you should see the following in the WTK console:

Building "MyGame"
Build complete
Constructor: Startup()
startApp()

This shows us that the Startup class was constructed through its default constructor and then startApp() was called. Not very exciting, but important for our MIDlet to start :)


Using Forms and Commands
Forms and Commands are high-level UI components that come in handy for building menus and showing instructions in games. The Form class is a subclass of Displayable, which means we can display it directly on the device's Display.

The device's current Displayable can be set with the help of the Display class. To get a reference to a Display object for our MIDlet we call the static method getDisplay() on the Display class. The method takes one parameter, which is a reference to an instance of our MIDlet class:

Display display = Display.getDisplay( midletInstance );

We can now set the display to any object that extends Displayable by calling:

display.setCurrent(nextDisplayable );

Now that we know this, let's try to make a Form and display it!

First we need to create a Form; the most basic Form we can create is an empty one with a title:

Form basicForm = new Form("Form Title");

Let's add a String to the Form as well; this is done by appending it:

basicForm.append("My basic Form");

We now have a Display and a Form (which extends Displayable), so we have all we need to show the Form:

display.setCurrent( basicForm );

Put all this in our startApp() method and this will be the first thing that happens when our MIDlet launches.

Form and Display are both classes in the javax.microedition.lcdui package, so we must remember to import it in our source file.

example02.zip

Build and run!

Attached Image: /reference/programming/features/j2me1/image008.jpg

Now that we have the Form, let's add a Command so that we can get some high-level input from the user. Let's make a Command that allows the user to generate a random number and append it to the Form.

A Command is a component that triggers an action in our MIDlet. To capture the event triggered when the user activates the Command we need to use the CommandListener interface. The CommandListener interface defines one method that we must implement:

commandAction( Command c, Displayable d )

When the user triggers the Command, the implementation will call the commandAction method on the associated CommandListener. The CommandListener is set by the setCommandListener() method on the Displayable where the Command has been added.

appendCommand = new Command( "Random", Command.SCREEN, 0 );

The Command constructor takes 3 parameters: the String associated with the Command, the type of Command (which indicates to the implementation what type of action this Command will perform, and sometimes affects the positioning of the Command on the screen), and the Command's priority, or display order if you wish. We then add it to our Form (addCommand() is inherited from Displayable, so all Displayables can have Commands):

basicForm.addCommand( appendCommand );

We now have a Command added to our Form; the next step is to set the CommandListener of our Form. Let's make our Startup class implement CommandListener and implement the commandAction() method to perform our actions.

basicForm.setCommandListener( this );

As we want to generate a random number we will also need to add a random number generator to our app. For this we must import java.util.Random and create a new generator:

generator = new Random( System.currentTimeMillis() );

This creates a new Random generator and seeds it with the current time in milliseconds.

/*
 * Callback method for the CommandListener, notifies us of Command action events
 */
public void commandAction( Command c, Displayable d )
{
    //check if the commandAction event is triggered by our appendCommand
    if( c == appendCommand )
    {
        //append a String to the form with a random number between 0 and 49.
        basicForm.append("Random: " + ( Math.abs( generator.nextInt() ) % 50 ) + "\n");
    }
}

Build and run!

example03.zip

This was a brief introduction to the high-level components of the MIDP API. They provide a generic, portable way of displaying information and getting high-level user input. Next is where the heart of your game will take place, so get ready to paint() that Canvas!


Using the Canvas to draw and handle input
Maybe the most important thing in a game is being able to draw stuff to the screen, be it characters, items or backgrounds. To do this in MIDP we will be using the Canvas, Graphics and Image classes; these are the main classes you will use for your low-level graphics handling.

The Canvas is an abstract class and we must therefore subclass it to be able to use it, so let's make a new class called GameScreen that extends Canvas. As we have seen before, the Canvas class defines the abstract paint( Graphics g ) method; in our GameScreen class we will override this method, which will allow us to draw to the Graphics object passed to paint() by the Virtual Machine.

This leaves us with the following source for our GameScreen class:

import javax.microedition.lcdui.*;

public class GameScreen extends Canvas
{
    //Default constructor for our GameScreen class.
    public GameScreen()
    {
    }

    /*
     * called when the Canvas is to be painted
     */
    protected void paint( Graphics g )
    {
    }
}

Now that we have the basics we need to draw to the screen, let's get things set up to receive key events from the user. The Canvas class defines 3 methods that handle key events: keyPressed(), keyReleased() and keyRepeated(). The Canvas class has empty implementations of these methods, so it is up to us to override them and handle the events as we see fit.

/*
 * called when a key is pressed and this Canvas is the
 * current Displayable
 */
protected void keyPressed( int keyCode )
{
}

/*
 * called when a key is released and this Canvas is the
 * current Displayable
 */
protected void keyReleased( int keyCode )
{
}

As you can see, we have only implemented keyPressed() and keyReleased(), not keyRepeated(). You should try not to rely on keyRepeated() events, as the frequency of the calls to keyRepeated() varies a lot from device to device, and the behaviour keyRepeated() provides is not the optimal way to check whether the user has held a key down; a more portable approach is to track the key state yourself, as in the sketch below.
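For instance, a sketch of tracking a held key yourself (fireHeld is a hypothetical field of our Canvas subclass, not part of the example code):

//hypothetical field on our Canvas subclass
private boolean fireHeld = false;

/*
 * set the flag while the FIRE key is held
 */
protected void keyPressed( int keyCode )
{
    if( getGameAction( keyCode ) == FIRE )
    {
        fireHeld = true;
    }
}

protected void keyReleased( int keyCode )
{
    if( getGameAction( keyCode ) == FIRE )
    {
        fireHeld = false;
    }
}

Your game loop can then simply test fireHeld on every tick, independent of how the device delivers repeat events.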

OK, so we are now ready to receive input and draw to the screen. Before we go any further, let's make sure we know how to get the Canvas we have made displayed on the screen. Remember the Startup class we made earlier? Let's change this class so that its sole purpose is to serve as an entry point to our game, creating and displaying a new instance of our GameScreen class.

protected void startApp() throws MIDletStateChangeException
{
    Display display = Display.getDisplay( this );

    //GameScreen extends Canvas which extends Displayable so it can
    //be displayed directly
    display.setCurrent( new GameScreen() );
}

We are now creating a new GameScreen and displaying it. Next we can try out some of the primitive drawing methods available from the Graphics class.

//set the current colour of the Graphics context to a darkish blue
//0xRRGGBB
g.setColor( 0x000088 );

//draw a filled rectangle at x,y coordinates 0, 0 with a width
//and height equal to that of the Canvas itself
g.fillRect( 0, 0, this.getWidth(), this.getHeight() );

By setting the colour via setColor( int rgbColor ) we affect all subsequent rendering operations on this Graphics context. Hence our call to fillRect( x, y, width, height ) will draw a filled rectangle in our desired colour. This also introduces 2 quite important methods of the Canvas class, getWidth() and getHeight(); you will use these methods to obtain the total area available for you to draw to on the Canvas. These are important values when targeting multiple devices with varying screen sizes. Always obtain the values via getWidth() and getHeight(); don't be tempted to hardcode the values, as you will create a lot of extra work for yourself when you want to port your game. Try to make all your draws to the screen (where possible) relative to the width and height of the Canvas.

Build and run!

example04.zip


Input handling
Just to get the hang of handling key events, let's make the colour of our filled rectangle change when the user presses a key. For fun we can make the rectangle red when the user presses the LEFT key, green for RIGHT, black for UP, white for DOWN and blue for FIRE.

As you might have noticed, a key press event is represented by an int value reflecting the key code of the key the user pressed. This key code can be treated in 2 separate ways: either via its actual value (KEY_NUM0 to KEY_NUM9, KEY_STAR or KEY_POUND, which make up a standard telephone keypad) or via its game action value (UP, DOWN, LEFT, RIGHT, FIRE, GAME_A, GAME_B, GAME_C, GAME_D). Why have 2 approaches, you ask? As there are so many different keypad layouts and configurations, using the game action value of a key code allows us to identify keys by their game action in a portable fashion. To retrieve the game action mapped to a key code we use the getGameAction( int keyCode ) method of the Canvas class.

/*
 * called when the Canvas is to be painted
 */
protected void paint( Graphics g )
{
    //set the current colour of the Graphics context to the specified RRGGBB colour
    g.setColor( colour );

    //draw a filled rectangle at x,y coordinates 0, 0 with a width
    //and height equal to that of the Canvas itself
    g.fillRect( 0, 0, this.getWidth(), this.getHeight() );
}

/*
 * called when a key is pressed and this Canvas is the
 * current Displayable
 */
protected void keyPressed( int keyCode )
{
    //get the game action from the passed keyCode
    int gameAction = getGameAction( keyCode );

    switch( gameAction )
    {
        case LEFT:
            //set current colour to red
            colour = 0xFF0000;
            break;
        case RIGHT:
            //set current colour to green
            colour = 0x00FF00;
            break;
        case UP:
            //set current colour to black
            colour = 0x000000;
            break;
        case DOWN:
            //set current colour to white
            colour = 0xFFFFFF;
            break;
        case FIRE:
            //set current colour to blue
            colour = 0x0000FF;
            break;
    }

    //schedule a repaint of the Canvas after each key press as we
    //currently do not have any main game loop to do this for us.
    repaint();
}

Build and run!

example05.zip

If you look at the last line of the keyPressed() method you will see a call to repaint(). This schedules a repaint of the Canvas. Normally we would not do this from within the keyPressed() method, but at the end of our game loop. So now is a good time to get that main game loop going!


Game loop
The Thread class will be used to spawn our game thread, so we can use this for our main loop. Threads can be created in 2 different ways: either by subclassing Thread and overriding the run() method of the Thread class, or by implementing the Runnable interface. As multiple inheritance is not possible in Java (our GameScreen class is already extending Canvas) we will use the second approach and implement the run() method of the Runnable interface. This means we can spawn a new Thread by passing the instance of our GameScreen class (which implements Runnable) to the Thread's constructor.

//Default constructor for our GameScreen class.
public GameScreen()
{
    //create a new Thread on this Runnable and start it immediately
    new Thread( this ).start();
}

/*
 * run() method defined in the Runnable interface, called by the
 * Virtual Machine when a Thread is started.
 */
public void run()
{
}

Now when we construct our GameScreen it will create and start a new Thread which triggers a call to our run() method.

We want our main loop to be called at a fixed rate; for this example let's set the rate to 15 times per second. (Although it is impossible to give an exact performance figure that will apply to all games, 15fps is a reasonably indicative average to start off with.)

Within the run() method of our class we implement the timing logic for our main loop:

/*
 * run() method defined in the Runnable interface, called by the
 * Virtual Machine when a Thread is started.
 */
public void run()
{
    //set wanted loop delay to a 15th of a second
    int loopDelay = 1000 / 15;

    while( true )
    {
        //get the time at the start of the loop
        long loopStartTime = System.currentTimeMillis();

        //call our tick() function which will be our game's heartbeat
        tick();

        //get time at end of loop
        long loopEndTime = System.currentTimeMillis();

        //calculate the difference in time from start til end of loop
        int loopTime = (int)(loopEndTime - loopStartTime);

        //if the difference is less than what we want
        if( loopTime < loopDelay )
        {
            try
            {
                //then sleep for the time needed to fulfill our wanted rate
                Thread.sleep( loopDelay - loopTime );
            }
            catch( Exception e )
            {
            }
        }
    }
}

/*
 * our game's main loop, called at a fixed rate by our game thread
 */
public void tick()
{
}

To test our main loop, let's make it change the colour of the background to a random colour every frame. We can use this opportunity to move the repaint() call to within our game loop.

/*
 * Our game's main loop, called at a fixed rate by our game Thread
 */
public void tick()
{
    //get a random number within the RRGGBB colour range
    colour = generator.nextInt() & 0xFFFFFF;

    //schedule a repaint of the Canvas
    repaint();

    //forces any pending repaints to be serviced, and blocks until
    //paint() has returned
    serviceRepaints();
}

Build and run! (Don't stare too long at the screen, you will be mesmerized!)

example06.zip


Using Images
Images in MIDP are very easy to create. The easiest way is to call the static method of the Image class, createImage( String name ). The String passed to the method is the location of the image within the MIDlet's Jar file, so the first thing we need to do is make an image to use in our game. Generic MIDP only supports PNG images. When using the KToolbar to build our projects, all we need to do is place the image file in the "res" folder of our project. Create an image called "sprite.png" and put it in the res folder. Normally you should keep your image files as small as possible; there are several tricks, the obvious one being to save them as indexed-mode PNGs. To gain an extra few bytes you can optimise them with, for example, the XAT image optimiser or pngcrush; on average these tools save you 30% of your original image size! Note: for transparency to work in the default WTK emulators your files must be saved as 24bit PNGs; this is not true for most of the actual devices or device specific emulators.
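For example, a typical pngcrush run (assuming the tool is installed and on your path; the -brute switch tries all of its compression strategies and keeps the smallest output) looks like this:

pngcrush -brute sprite.png sprite_crushed.png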

try
{
    myImage = Image.createImage("/sprite.png");
}
catch( Exception e )
{
    e.printStackTrace();
}

There you go, image created. Images take time to create and require a lot of runtime memory, so keep your image creation within controlled areas of your game. Actual memory usage varies between implementations, but bear in mind that graphics modes are not palette based; a typical implementation will use 2 bytes of memory per pixel, so even a modest 100x100 image costs around 20KB of heap.

To draw the created Image to the screen, use the drawImage() method of the Graphics class.

g.drawImage( myImage, x, y, Graphics.TOP | Graphics.LEFT );

The Graphics.TOP | Graphics.LEFT parameter is called an anchor. This defines how the Image should be drawn relative to the x and y coordinates. The Graphics.TOP constant causes the TOP of the image to be at the y coordinate, and the Graphics.LEFT constant causes the LEFT of the image to be at the x coordinate. So if you wanted to draw an image at the center of the screen, the quickest way to do it is to set the anchor to the vertical and horizontal center of the Image:

g.drawImage( myImage, this.getWidth()/2, this.getHeight()/2, Graphics.VCENTER | Graphics.HCENTER );

Build and run! (I suggest we get rid of the random colour thingy before it causes any permanent damage to our brains...)

example07.zip


Double buffering
To avoid flicker when drawing to the screen we need to use the well-known double buffering technique, where everything is rendered to an off-screen buffer and later drawn to the visible screen. Some implementations will actually do the double buffering for us! Whether a device does so can be queried at runtime via the isDoubleBuffered() method on the Canvas. The advantage of not having to do the double buffering yourself is that it saves the runtime memory needed to store the off-screen buffer. We can easily write our code to automatically check for, and cater to, devices that need us to implement double buffering ourselves.

At the same time as we load and create all our needed Images we can create our off screen buffer if needed.

/*
 * Creates all the needed images, called upon creation
 * of our GameScreen class
 */
public void createImages()
{
    try
    {
        //if device doesn't do automatic double buffering
        if( !isDoubleBuffered() )
        {
            //create offscreen Image
            bufferImage = Image.createImage( getWidth(), getHeight() );

            //get a Graphics context so we can render onto the bufferImage
            buffer = bufferImage.getGraphics();
        }

        myImage = Image.createImage("/sprite.png");
    }
    catch( Exception e )
    {
        e.printStackTrace();
    }
}

We create a new empty Image by calling Image.createImage( width, height ). The Image should be exactly the same size as the Canvas's viewable area. In MIDP there are mutable and immutable Images, the difference being that immutable Images, the ones created from image files/data, cannot be modified once created. A mutable Image, normally created through Image.createImage( width, height ), can be modified by obtaining a Graphics context that will render to the Image itself. This is done by calling getGraphics() on the Image. This is what we have done for our back buffer!

With a small modification to our paint() method we can accommodate those devices that do not do the buffering for us.

/*
 * called when the Canvas is to be painted
 */
protected void paint( Graphics g )
{
    //cache a reference to the original Graphics context
    Graphics original = g;

    //if device doesn't do automatic double buffering
    if( !isDoubleBuffered() )
    {
        //change the g object reference to the back buffer Graphics context
        g = buffer;
    }

    //set the current colour of the Graphics context to the specified RRGGBB colour
    g.setColor( colour );

    //draw a filled rectangle at x,y coordinates 0, 0 with a width
    //and height equal to that of the Canvas itself
    g.fillRect( 0, 0, this.getWidth(), this.getHeight() );

    //draw an image to the centre of the screen
    g.drawImage( myImage, this.getWidth()/2, this.getHeight()/2,
                 Graphics.VCENTER | Graphics.HCENTER );

    if( !isDoubleBuffered() )
    {
        //draw the off screen Image to the original Graphics context
        original.drawImage( bufferImage, 0, 0, Graphics.TOP | Graphics.LEFT );
    }
}

This might be a little confusing at first. At the top of the paint() method we keep a reference to the original Graphics context that was passed as a parameter to the method. We then check whether we need to perform the double buffering; if so, we change the Graphics context that the g variable references to the Graphics context obtained from the buffer Image. At the end of the paint() method we again check if we needed to perform the double buffering, and draw the buffer Image to the original Graphics context we kept earlier.

Build and run!

Example08.zip

To wrap up, let's change our input handling so we can move our image around the screen by pressing the keys.

/*
 * called when a key is pressed and this Canvas is the
 * current Displayable
 */
protected void keyPressed( int keyCode )
{
    //get the game action from the passed keyCode
    int gameAction = getGameAction( keyCode );

    switch( gameAction )
    {
        case LEFT:
            //move image left
            imageDirection = LEFT;
            break;
        case RIGHT:
            //move image right
            imageDirection = RIGHT;
            break;
        case UP:
            //move image up
            imageDirection = UP;
            break;
        case DOWN:
            //move image down
            imageDirection = DOWN;
            break;
        case FIRE:
            //set current colour to a random colour
            colour = generator.nextInt() & 0xFFFFFF;
            break;
    }
}

/*
 * Our game's main loop, called at a fixed rate by our game Thread
 */
public void tick()
{
    int myImageSpeed = 4;

    switch( imageDirection )
    {
        case LEFT:
            myImageX -= myImageSpeed;
            break;
        case RIGHT:
            myImageX += myImageSpeed;
            break;
        case UP:
            myImageY -= myImageSpeed;
            break;
        case DOWN:
            myImageY += myImageSpeed;
            break;
    }

    //schedule a repaint of the Canvas
    repaint();

    //forces any pending repaints to be serviced, and blocks until
    //paint() has returned
    serviceRepaints();
}

Build and run!

Example09.zip
all.zip

That sums up the first part of the article; you should now be equipped with the tools and knowledge you need to make your first MIDP game. You know how to display and capture high-level information, you can create and draw to a Canvas, and you can even set up a game loop and handle key events, so now it is all up to you and your imagination!

The next part of the article will, among other things, focus on specific devices and their APIs, optimising the size of your MIDlet, and ways you can more easily target multiple devices.


Resources
Below is a random list of sites related to J2ME and mobile gaming, enjoy!

http://java.sun.com/j2me/ - Sun's J2ME website
http://midlet.org - a huge repository of MIDlets and the chance to make your game publicly available
http://www.billday.com/j2me/ - nice resource with mixed info on J2ME
http://www.midlet-review.com - Mobile game review site, see what games are around and how they rate
http://games.macrospace.com - excellent collection of commercial games
http://www.microjava.com - a good resource for J2ME related news, tutorials and articles
http://www.mophun.com - The biggest mophun resource around
http://wireless.ign.com - IGN's wireless gaming section
http://www.qualcomm.com/brew - All the info you will need to get started with Brew
http://www.kobjects.org/devicedb - list of devices and device specs
http://wireless.java.sun.com/device/ - Another detailed list of java devices
http://home.rocheste...ohommes/MathFP/ - easy to use publicly available fixed point library for J2ME
http://www.forum.nokia.com/main.html - Nokia's developer site, lots of news, tools and developer forums
http://www.motocoder.com - Motorola's developer site
http://archives.java...m-interest.html - Sun's KVM mailing list

Porting Mobile Games - Overcoming the hurdles

The market for wireless games and content is here and now according to recent announcements by industry pundits. Nokia has estimated that in 2003 there were more than 10,000,000 downloads of Java enabled games per month worldwide; Ovum claims over 250 million Java enabled devices are in the market today; and, mobile industry analysts Zelos Group recently estimated 2003 mobile content revenues at over $500 million globally. All suggest that this is just the beginning of a lucrative market for mobile entertainment products.

One of the biggest challenges facing the mobile games industry, however, is the sheer number of different devices and local market requirements. With more than 250 different J2ME enabled devices in the market, along with multi-language and other customization requirements from mobile operators, a game developer faces a challenge in tapping into this broad market opportunity. Porting games from one mobile device to another has become a thorn in the side of an otherwise successful industry.


Porting Choices
In the past, many game studios considered porting skills – whether it be from console to PC or across consoles – to be a key part of the value add they would bring as either a developer or publisher. However, with over 250 device platforms plus 80 different mobile operators, just collecting the information and guidelines on these devices and markets can prove to be overwhelming and very costly.

If, for example, a mid-size game publisher has 20 games in its portfolio and wants these available globally in multiple languages, it would have to create close to 5,000 different builds (20 games x 5 languages x 50 top devices). At an internal cost of approximately $2,500 per build it would need a budget of $12.5 million for mobile porting alone – something most mobile game budgets won’t support. Add to this specific requirements from mobile operators for billing APIs, game community APIs or operator branding, and the size of the problem gets even more immense.

Given this scenario, what are the choices? Today there are basically two ways to attack this problem as a mobile game developer / publisher: become an expert on mobile porting internally, or outsource to “porting houses” or service providers who have developed the expertise. Either way, the porting process can be done manually or by using automated porting tools to drive down costs and speed time to market.


Internal Porting
Developing the expertise to port across mobile devices internally is a risky approach. Firstly, strong relationships with the mobile operators and device manufacturers around the world are needed to ensure availability of the necessary information, as well as guidelines and devices to port the applications. Secondly, global testing facilities are needed to be able to load applications onto the actual devices and test them – the frequencies and network protocols of wireless networks in various parts of the world often differ from a local network. Thirdly, software and tools are required just to manage the immense numbers of source code builds, and staff trained on all of the devices and tools is critical.

As recently as a year ago, internal porting seemed doable for mobile game developers. There were only twenty or so devices and only a few mobile operators selling games. Now, as the market grows and matures, the challenges of scaling this business are a significant hurdle. Automated tools may be the saviors. With automation, publishers can rely on the tool providers for device information and for the necessary workflow engines and databases to perform the ports effectively and efficiently and to manage the growing number of builds.


The Outsourcing Alternative
As the pressure mounts to get more versions of games out faster, game publishers find themselves turning to outsourcing. Whether it is with small local shops or with larger, low cost, offshore software houses, the challenges remain substantial.

While a company that specializes in mobile game porting is more likely to have success in sourcing devices and mobile operator guidelines, they remain challenged with global testing facilities. But probably the biggest challenge, not unlike any outsourcing project, is maintaining quality. Often, to meet the strict deadlines imposed on them, outsourcing firms employ multiple employees to port the same application. The result is inconsistencies that are unacceptable to the publisher. No matter how clear the guidance, individuals will approach creating code differently, resulting in different end user experiences.

Again, porting partners that employ automation tools can provide time to market and consistency across a broader range of device builds. This is a key differentiator that must be looked for when selecting a porting partner.


Porting Strategies and Solutions
While the mobile market is quickly becoming a real opportunity for existing PC and console game titles, and the revenues are starting to roll in, the porting challenge can easily derail the opportunity to reach this massive market.

Through standardization and reuse of code elements, Sumea has streamlined its internal process to create significant efficiencies in porting. Other publishers such as Gameloft have developed teams on multiple continents to be able to handle local testing and relationships with mobile operators. Tira Wireless has developed an automated porting platform called Tira Jump that supports close to 100 J2ME devices and even can handle translation ports. Tira currently offers the Jump service to mobile game publishers, such as THQ Wireless, and plans to come out with a licensed version of the software for sale in the near future.

Whatever approach is taken, it is clear that game developers and publishers have to consider their porting strategies carefully. The market for mobile games is here and growing rapidly. The demographics of the market for wireless services and for console and PC games are very similar – young males – and thus there is a significant untapped opportunity to exploit existing titles on this new breed of mobile phones. It is also clear this market opportunity will only drive further advances in porting technologies and tools from which the whole industry will benefit.


Author Biography
Allen Lau, CTO and Co-founder, Tira Wireless

With more than 10 years technical and management experience, Allen Lau brings deep development expertise to the Tira Wireless team. Allen is the leader of the JSR 190 Standards Initiative and is considered an industry pioneer in code instrumentation, porting and digital rights management techniques related to Wireless Java. He also participates in a number of other JSRs related to Wireless Java standards.

Before joining Tira, Allen occupied senior development positions at Symantec. As Senior Development Manager at Symantec, Allen oversaw the development teams at the Toronto Research and Development facility. Prior to that, he fulfilled the role of Principal Software Engineer, spearheading the design and development of Symantec's premier product, WinFax PRO. Allen possesses the vision and skills necessary to design and develop industry-leading products from conception through to completion. He holds Bachelor's and Master's degrees in Electrical Engineering from the University of Toronto.

An Introduction to BREW and OpenGL ES


Introduction
It was only a matter of time until someone decided to put a 3d graphics API onto a phone handset. OpenGL has long been a graphics industry standard for 3d, and now OpenGL ES is fast becoming the standard for 3d on limited devices. Limited devices is an apt description, though: even a high-end phone might only have a 50MHz ARM processor and 1MB of memory. But don't be put off; even with these limitations you can still create some very impressive games.

Writing games for mobile phones is unlike writing for the PC. With the design more limited by the platform restrictions you don't need a huge team with multiple programmers and an army of artists; it's well within reason for a single person to turn out a quality title from the comfort of their bedroom.

This article will go from installing and setting up a BREW development environment and emulator, through to getting an OpenGL ES system up and running and displaying a single triangle. From there, existing OpenGL resources can take you further into the process of developing your 3d application.


Installing the BREW SDK
You really need to use Internet Explorer for this process. The BREW SDK is installed by an ActiveX control which only seems to work in Internet Explorer 6 or better. During this article I am going to assume you create a c:\BREW directory, and then install the BREW SDK into c:\BREW\BREW 3.0.1. If you want to install it somewhere else (like the default c:\program files\BREW 3.0.1), then just adapt the paths I mention as you proceed.

First, go here and register for a free BREW developer account, then install the BREW 3.0.1 SDK from here. It's a web-based installer; just start it going, give it a directory to install to and wait. At around 20MB it won't take too long to install, even on a 56k modem. Towards the end it will ask if you want it to set a BREWDIR environment variable. Say yes or various things won't work correctly.

From this page, install the Visual C++ addon, and download the BREW SDK Extension for OpenGL ES. Extract the OpenGL ES zip file, and:

  • Move all files from inc into c:\BREW\BREW 3.0.1\sdk\inc
  • Move all files from src into c:\BREW\BREW 3.0.1\sdk\src
  • Move the dll from BREW 3.x into c:\BREW\BREW 3.0.1\sdk\bin\modules
  • Move all files from devices into c:\BREW\BREW 3.0.1\sdk\devices
Directory structure
BREW is a bit tricky sometimes when it comes to where it expects to find various files, and tends to give the same cryptic error message for pretty much any case of missing or misplaced files. Below is how I have my machine set up. For this article I am assuming you have the same setup; again, if you installed BREW somewhere else just substitute your paths as appropriate.

  • c:\BREW\ - My BREW root directory
  • c:\BREW\BREW 3.0.1\ - The 3.0.1 SDK. You can have several SDKs installed at once and choose between them by setting an environment variable
  • c:\BREW\project1\ - A project directory
  • c:\BREW\project1.mif - The MIF file for project 1 (note that it's here and not inside the project1 directory, very important)
Create new project
  • Select the Brew App Wizard (under Visual c++ projects)
  • Set the "location" to c:\BREW
  • Enter a project name. Make it lower case with no spaces or special symbols; let's choose "test_project1" (this is what I'll refer to throughout the article as the project name)
  • Hit OK, and then Finish without making any changes on the Wizard dialog or running the MIF editor
  • It will probably say that the project has been modified outside Visual C++, so say OK to reload
MIF editor
  • Run the MIF editor. On the Visual C++ BREW toolbar (which should be on by default; if not, right click in the toolbar area and enable it) it's the third button
  • Click the new button (NOT File -> New)
  • Assuming you're not a licensed BREW developer, you will need to generate a class ID locally. Select that option.
  • Make up a class ID; I started at A0000001 and went up as I created more projects. Pick anything, but if you create more projects they must have unique class IDs
  • Enter your project name as the class name, so in our case "test_project1"
  • Click OK, and you will be prompted to save. Save into the project directory, so c:\BREW\test_project1\test_project1.bid
  • File->save, save into the PARENT of the project directory, so c:\BREW\test_project1.mif
  • Now compile the MIF by choosing Build -> compile MIF script. Click OK a couple of times and you're done.
Setting up to run and debug through Visual Studio
  • Right click on the project in solution explorer to get up project properties
  • Configuration properties -> Debugging -> Command, Select BREW_Simulator.exe in your BREW SDK bin directory (in my case, C:\BREW\BREW 3.0.1\sdk\bin\BREW_Simulator.exe)
  • Configuration properties -> Linker -> Debugging, Change "Generate debug info" to "Yes (/DEBUG)"
Compile and run the project. It should compile with no errors and start the emulator. If you get compile errors you probably didn't set your class name in the MIF editor to exactly match the project name.

Now if you set a break point in your code it will get triggered correctly when the emulator is running your dll.


The emulator
Select Blackcap16 as your emulator profile. From File -> Load Device browse to the devices directory of the SDK and select Blackcap16.qsc. It remembers which device you are using, so you will only have to do this the first time you run the emulator.

In the emulator, File -> Change applet directory. Set it to the directory that contains your .mif file and your project directory; for me that's c:\BREW.

You should now see the emulator with two icons, your project and Settings. Use the arrow keys to select which application you want to run, and Enter to start it. When you run your project it looks like nothing happens! That's because the app wizard only generates boilerplate startup code for you. You should see in the output window of Visual C++ a message saying that your dll was loaded; assuming it does and you get no errors, success!

If you get a message saying "This app was disabled to save space. Would you like to restore it now?", that's the cryptic message I mentioned earlier. It almost always means you have your files in the wrong places (probably the .dll in the wrong place.) Assuming you used the app wizard to generate your initial code, check you saved the .mif file into the right place.


The best laid plans...
Coding for limited devices like mobile phones can be a nightmare, especially if you come from a PC background and are used to luxuries like having more than a few hundred KB of heap, and more than a few hundred bytes of stack space.

Although it must be a Design Patterns advocate's wildest fantasy, there is no static or global data on BREW. Also, BREW is completely event driven. Unlike "normal" programming where you typically have a while(..) loop to do your stuff, with BREW you can only respond to events like key presses or timers going off. There's also no floating point math; GL ES expects its values in 16.16 fixed point format. I'll address each of these in turn.


Storage space
So with no global or static data, where do we store our variables? BREW will store a single struct for us, which must first contain an AEEApplet structure, but can then contain any other data we want. Check out the main .c file the app wizard created for you. Right at the top is a structure named after your application, and in the AEEClsCreateInstance function is a call to AEEApplet_New which allocates heap space for it. BREW will look after a pointer to this data for us, and will pass it to us as a parameter to most things. I am going to refer to this as "the global BREW structure".

Some people like to just put all their data straight into that structure. However, I prefer a slightly more OO approach.

Assuming you are going to write more than one BREW application, you want to structure your startup/shutdown/event handling code into a shell so you don't have to rewrite it for every single application you create. First, change your main .c file to a .cpp file so it compiles as C++ (else you will get errors using classes). Create a class called Game with functions boolean Create(), void Destroy() and void Tick(int timeElapsed). Add an instance of Game into your global BREW data structure, right after AEEApplet, and remove the other data from the struct. I have also added int mOldTime, which we will use later to track the elapsed time between frames.

// From test_project1.cpp
struct test_project1
{
    AEEApplet a;   // The compulsory applet structure
    Game mGame;    // Our game class
    int mOldTime;  // used to track the speed we are running at
};

// From Game.h
class Game
{
private:
    IShell * mShell;
    IDisplay * mDisplay;
    AEEDeviceInfo mDeviceInfo;

public:
    boolean Create(IShell * shell, IDisplay * display);
    void Destroy();
    void Tick(int timeElapsed);

    IShell * GetShell() { return mShell; }
    IDisplay * GetDisplay() { return mDisplay; }

    int GetWidth() { return mDeviceInfo.cxScreen; }
    int GetHeight() { return mDeviceInfo.cyScreen; }
};

The AEEDeviceInfo structure contains various information about the current phone and operating environment; most importantly for now, it contains the width and height of the screen. Given that virtually all phones have different sized screens, you should try to adapt to the screen size at run time. That way your program will have a chance to work on several phones without recompiling.

// From Game.cpp
boolean Game::Create(IShell * shell, IDisplay * display)
{
    mShell = shell;
    mDisplay = display;

    mDeviceInfo.wStructSize = sizeof(mDeviceInfo);
    ISHELL_GetDeviceInfo(mShell, &mDeviceInfo);

    DBGPRINTF(" *** Width %d, Height %d", GetWidth(), GetHeight());

    return TRUE;
}

void Game::Destroy()
{
}

void Game::Tick(int timeElapsed)
{
    // Uncomment this if you want proof the timer callback is working
    //DBGPRINTF("TICK! %d", timeElapsed);
}

The DBGPRINTF function outputs text, either to the Visual C++ output pane if you are running in the debugger, or to a window within the emulator. To get access to it you need to include AEEStdLib.h. For now Destroy() doesn't do anything; as you add more functionality you can use it to clean up any resources you allocate.

Now to wire these up. Replace the contents of test_project1_InitAppData with a call to Game::Create, and make a call to Game::Destroy in test_project1_FreeAppData. Both of these functions are passed a pointer to the global BREW data structure, so you have easy access to the instance of your Game class. The other parameters you need are available through the AEEApplet stored within the global BREW structure.


Timers
Everything in BREW is event based. If you were to try to remain in the startup function forever with a while loop, after a few seconds the phone would reboot. BREW detects applications that have stopped responding (in its opinion) and forces a full reboot to try to clear the problem.

To get an application to run in a style resembling a real-time game, we use a timer to repeatedly execute our game loop. BREW makes it really easy to set up a timer to call back a function of our choice with ISHELL_SetTimer. ISHELL_SetTimer takes four parameters: a pointer to the application's IShell (which is now contained in our Game class), the number of milliseconds in the future you want the function called, a pointer to a function to call, and finally a void * to some data you want passed to the callback.

The callback function needs to take a void pointer as a parameter and return void. I usually cast the address of the global BREW structure to a void * and use that as my user data; that way, in the callback function I can call Game::Tick(int timeElapsed). One thing to note is that timer callbacks are one-shot wonders: if you want the callback to happen again you need to set the timer again.

// From test_project1.cpp
void SetTimer(test_project1 * data);

void timer_tick(void * data)
{
    test_project1 * tp1 = static_cast< test_project1 * >(data);

    int TimeElapsed = GETUPTIMEMS() - tp1->mOldTime;
    tp1->mOldTime = GETUPTIMEMS();

    tp1->mGame.Tick(TimeElapsed);

    SetTimer(tp1);
}

void SetTimer(test_project1 * data)
{
    int result = ISHELL_SetTimer(data->mGame.GetShell(), TickTime,
                                 timer_tick, static_cast< void * >(data));
    if (result != SUCCESS)
    {
        DBGPRINTF(" *** SetTimer failed");
    }
}

GETUPTIMEMS() returns the number of milliseconds the phone has been on. TickTime is a constant that specifies how often (again in milliseconds) to call the main loop. It's calculated based on the FPS you want, like this:

// From test_project1.cpp
const int WantedFPS = 20;
const int TickTime = 1000 / WantedFPS;

The only thing that remains is to set the timer going for the first time. Do this from the event handler function in test_project1.cpp; the function is called test_project1_HandleEvent. Add a call to SetTimer(pMe); to the EVT_APP_START case. This will get called by BREW when (if) your create function has successfully completed.


Fixed Point Math
The ARM chips that power most BREW phones have no floating point units. Instead they use a format called 16.16 fixed point. The 16.16 refers to taking a 32 bit variable, using the first 16 bits for the whole part of a number, and the last 16 bits for the fractional part.

To convert an int to 16.16 format, simply shift it left 16 places. To convert it back, shift the other way. A full fixed point tutorial is outside the scope of this article, but there are plenty of resources on the internet. All you need for this article is a macro to convert numbers to fixed point.

// From Game.h
#define ITOFP(x) ((x)<<16)
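Purely as an illustration, and not needed for this article's sample code, multiplication and division of two 16.16 numbers require an extra shift to keep the binary point in place. These hypothetical helpers assume a 64 bit int64 type is available in your SDK headers:

//sketch only: the product of two 16.16 values carries 32 fractional bits,
//so multiply in 64 bits and shift back down by 16
#define FPMUL(a, b) ((int)(((int64)(a) * (b)) >> 16))

//sketch only: pre-shift the numerator so the quotient keeps 16 fractional bits
#define FPDIV(a, b) ((int)((((int64)(a)) << 16) / (b)))
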
Input
We receive an event to our event handler function when a key is pressed, and another when it is released. It's up to us to track which keys are down at any given time.

To make things slightly more interesting, the key codes used don't start at 0. They start at a constant called AVK_FIRST and end at AVK_LAST. AVK is the prefix for the key codes too, so the 3 key would be AVK_3, the direction keys are AVK_UP, AVK_DOWN, etc. Check out aeevcodes.h for a complete list.

Let's add an array to our Game class to track the state of keys, and two functions to be called when we receive key press and release events.

// From Game.h
class Game
{
    ...
    boolean mKeysDown[AVK_LAST - AVK_FIRST];
    ...
    void KeyPressed(int keyCode);
    void KeyReleased(int keyCode);
    ...

In Game::Create, loop through and set all the mKeysDown[..] entries to false so we start with a blank slate. The implementation of KeyPressed and KeyReleased is simple enough; just remember to take into consideration the key codes starting at AVK_FIRST, not 0.
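A minimal sketch of what those two implementations might look like (this is the obvious bookkeeping written out as an assumption, not code from the article's download):

//sketch: record key state, remembering that key codes start at AVK_FIRST
void Game::KeyPressed(int keyCode)
{
    if (keyCode >= AVK_FIRST && keyCode < AVK_LAST)
    {
        mKeysDown[keyCode - AVK_FIRST] = TRUE;
    }
}

void Game::KeyReleased(int keyCode)
{
    if (keyCode >= AVK_FIRST && keyCode < AVK_LAST)
    {
        mKeysDown[keyCode - AVK_FIRST] = FALSE;
    }
}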

In your startup file, in your event handler function test_project1_HandleEvent, in the switch statement replace the whole case EVT_KEY with the following to route key events into the Game class.

// From test_project1.cpp
...
case EVT_KEY_PRESS:
    pMe->mGame.KeyPressed(wParam);
    return TRUE;

case EVT_KEY_RELEASE:
    pMe->mGame.KeyReleased(wParam);
    return TRUE;
...

Now in game code you can test if (mKeysDown[AVK_UP - AVK_FIRST] == TRUE). Again, don't forget to take into account the offset of AVK_FIRST. It would probably be best to write a wrapper function for the test that handles the offset internally.
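Such a wrapper could be as simple as the sketch below (IsKeyDown is a hypothetical name, not from the article's source); game code then reads naturally as if (IsKeyDown(AVK_UP)) { ... }.

//sketch: hypothetical Game member that hides the AVK_FIRST offset
boolean IsKeyDown(int keyCode)
{
    return mKeysDown[keyCode - AVK_FIRST];
}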

If you compile and run now you should see the code from Game::Create printing out the width and height of the screen to the Visual C++ output pane.


OpenGL ES 1.0
At last we come to the actual topic of this article, OpenGL ES. It has taken a while to get here, mainly because for a lot of people this will be their first non-PC programming target.

If you have any experience writing OpenGL code you will find your knowledge translates almost directly to GL ES. There are, however, a few differences. The startup and shutdown sequence is different from PC-based OpenGL. Given that there are no floating point functions, things have slightly different names; most of the names simply replace the trailing f (for float) with an x (for fixed), so glTranslatef becomes glTranslatex, for example. The most significant difference is that GL ES does away with the immediate mode glVertexf interface. All rendering is done through the batched glDrawElements interface for improved efficiency.

To get access to the GL ES functions and data types you need to include IGL.h in your code. You will also need to add the file GL.c, which came with the GL ES SDK, to your project. It's located at c:\BREW\BREW 3.0.1\sdk\src\GL.c.

There's another header called AEEGL.h which is intended for (the few) people who'd prefer to use OpenGL ES in the same way other BREW features are used: through an interface. So instead of calling

glPushMatrix()

, you'd call

IGL_glPushMatrix(pIGL)

where pIGL is a pointer to an IGL interface.

This article sticks to the standard way of using OpenGL.


The Renderer class
Keeping with the OO theme, all the setup and shutdown code is gathered into a class called Renderer. Take a look at the class definition:

// From Renderer.h
class Renderer
{
private:
    IGL * mIGL;
    IEGL * mIEGL;

    EGLDisplay mDisplay;
    EGLConfig mConfig;
    EGLSurface mSurface;
    EGLContext mContext;

public:
    boolean Create(IShell * shell, IDisplay * display);
    void Destroy();
    void FlipToScreen();
};

IGL is an interface to GL, while IEGL is a platform specific layer that sits between IGL and the underlying architecture.

The other members are just as their names suggest. EGLDisplay is the graphics display, EGLConfig is the video mode (there is normally only one mode available, as opposed to a PC graphics card, which might have several to choose from), EGLSurface is the actual surface rendering operations write to, and EGLContext represents the current state of the GL environment that will be used when you execute commands.


Renderer::Create(..)
Throughout this function it's very important to check every function call for errors, and to clean up completely if anything goes wrong. On a PC it's maybe a bit annoying if a program spews garbage and you have to reboot, but I have heard several stories of phones locking up and having to be sent for repair after particularly nasty code errors.

// From Renderer.cpp
if (ISHELL_CreateInstance(shell, AEECLSID_GL, (void **)&mIGL) != SUCCESS)
{
    Destroy();
    return FALSE;
}

if (ISHELL_CreateInstance(shell, AEECLSID_EGL, (void **)&mIEGL) != SUCCESS)
{
    Destroy();
    return FALSE;
}

IGL_Init(mIGL);
IEGL_Init(mIEGL);

Using the ISHELL interface we get BREW to create the IGL and IEGL objects for us. The IGL_Init() and IEGL_Init() functions are part of a wrapper system that stores pointers to the IGL and IEGL so we can just call the more usual glClear(..) rather than IGL_glClear(mIGL, ...).

// From Renderer.cpp
mDisplay = eglGetDisplay(display);
if (mDisplay == EGL_NO_DISPLAY)
{
    Destroy();
    return FALSE;
}

Get the GL display, based on the current BREW display.

// From Renderer.cpp
EGLint major = 0;
EGLint minor = 0;
if (eglInitialize(mDisplay, &major, &minor) == FALSE)
{
    Destroy();
    return FALSE;
}
DBGPRINTF(" *** ES version %d.%d", major, minor);

Initialize GL ES, which also sets major and minor to the major and minor version numbers of the current GL ES implementation. At the moment that is always going to say 1.0, but version 1.1 is coming soon. In the future it will be worth checking this the same way you check for various extensions in GL, to be able to use more advanced features if they are available. If you really don't care, you can pass NULL for the last two parameters to not retrieve the version information.

// From Renderer.cpp
EGLint numConfigs = 1;
if (eglGetConfigs(mDisplay, &mConfig, 1, &numConfigs) == FALSE)
{
    Destroy();
    return FALSE;
}

Retrieve a valid configuration based on the display.

// From Renderer.cpp
IBitmap * DeviceBitmap = NULL;
IDIB * DIB = NULL;

if (IDISPLAY_GetDeviceBitmap(display, &DeviceBitmap) != SUCCESS)
{
    Destroy();
    return FALSE;
}

if (IBITMAP_QueryInterface(DeviceBitmap, AEECLSID_DIB, (void**)&DIB) != SUCCESS)
{
    IBITMAP_Release(DeviceBitmap);
    Destroy();
    return FALSE;
}

Using the BREW IDISPLAY interface, get the current device bitmap. From this, use the IBITMAP interface to query for a device dependent bitmap (a bitmap in the native phone format). This will be our front buffer.

// From Renderer.cpp
mSurface = eglCreateWindowSurface(mDisplay, mConfig, DIB, NULL);

IDIB_Release(DIB);
IBITMAP_Release(DeviceBitmap);

if (mSurface == EGL_NO_SURFACE)
{
    Destroy();
    return FALSE;
}

Create the surface we will be rendering to. This is our back buffer, which will be copied to the front buffer when we issue an eglSwapBuffers. We can release the bitmaps we acquired earlier; they have served their purpose.

// From Renderer.cpp
mContext = eglCreateContext(mDisplay, mConfig, NULL, NULL);
if (mContext == EGL_NO_CONTEXT)
{
    Destroy();
    return FALSE;
}

if (eglMakeCurrent(mDisplay, mSurface, mSurface, mContext) == FALSE)
{
    Destroy();
    return FALSE;
}

Create a context, and then lastly make our display, surface and context current so they are the target of any rendering we do.

Assuming we got this far with no errors, the basic GL ES system is up and ready to be used.


Renderer::Destroy
I have mentioned the importance of cleaning up correctly several times, so let's take a look at the Destroy function that takes care of shutting everything down.

// From Renderer.cpp
eglMakeCurrent(EGL_NO_DISPLAY, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);

if (mContext)
{
    eglDestroyContext(mDisplay, mContext);
    mContext = NULL;
}

if (mSurface)
{
    eglDestroySurface(mDisplay, mSurface);
    mSurface = NULL;
}

if (mDisplay)
{
    eglTerminate(mDisplay);
    mDisplay = NULL;
}

if (mIEGL)
{
    IEGL_Release(mIEGL);
    mIEGL = NULL;
}

if (mIGL)
{
    IGL_Release(mIGL);
    mIGL = NULL;
}

First we deactivate our display, surface and context, then take each in turn and destroy or release them depending on how they were created.


Renderer::FlipToScreen
We are nearly finished with the Renderer class now; let's take a look at the final function, FlipToScreen, and then move on to actually getting something on screen.

// From Renderer.cpp
void Renderer::FlipToScreen()
{
    eglSwapBuffers(mDisplay, mSurface);
}

That is the entire function; it just calls eglSwapBuffers to copy our back buffer to the screen.


A Spinning Triangle
Add an instance of Renderer to the Game class. Also add an int called mRotateAngle to record the current rotation of the triangle. In Game::Create, at the end, we have this:

mRenderer.Create(mShell, mDisplay);

// Enable the zbuffer
glEnable(GL_DEPTH_TEST);

// Set the view port size to the window size
glViewport(0, 0, GetWidth(), GetHeight());

// Setup the projection matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity();

// Disable lighting and alpha blending
glDisable(GL_LIGHTING);
glDisable(GL_BLEND);

// Set the frustum clipping planes
glFrustumx(ITOFP(-5), ITOFP(5), ITOFP(-5), ITOFP(5), ITOFP(10), ITOFP(100));

// Set the model view to identity
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Enable the arrays we want used when we glDrawElements(..)
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

mRotateAngle = 0;

This initializes our renderer with our stored ISHELL and IDISPLAY, and sets various initial GL states. Note the use of the ITOFP macro to convert values into 16.16 fixed point. The triangle starts with no rotation, facing the camera. Don't forget to add a matching call to Renderer::Destroy in Game::Destroy to clean up when the program exits.

The rendering itself goes in Game::Tick:

glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

glPushMatrix();
glLoadIdentity();
glTranslatex(0, 0, ITOFP(-15));

if (mKeysDown[AVK_LEFT - AVK_FIRST] == TRUE)
{
    mRotateAngle -= 3;
}
if (mKeysDown[AVK_RIGHT - AVK_FIRST] == TRUE)
{
    mRotateAngle += 3;
}

if (mRotateAngle < 0) mRotateAngle += 360;
if (mRotateAngle > 360) mRotateAngle -= 360;

glRotatex(ITOFP(mRotateAngle), ITOFP(0), ITOFP(1), ITOFP(0));

int FaceData[9] =
{
    -ITOFP(2), -ITOFP(2), ITOFP(0),  // First vertex position
     ITOFP(2), -ITOFP(2), ITOFP(0),  // Second vertex position
     ITOFP(0),  ITOFP(2), ITOFP(0)   // Third vertex position
};

int ColorData[12] =
{
    ITOFP(1), ITOFP(0), ITOFP(0), ITOFP(0),  // First vertex color
    ITOFP(0), ITOFP(1), ITOFP(0), ITOFP(0),  // Second vertex color
    ITOFP(0), ITOFP(0), ITOFP(1), ITOFP(0)   // Third vertex color
};

uint8 IndexData[3] = {0, 1, 2};

glVertexPointer(3, GL_FIXED, 0, FaceData);  // Set the vertex (position) data source
glColorPointer(4, GL_FIXED, 0, ColorData);  // Set the color data source
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, IndexData);  // Draw the triangle

glPopMatrix();

mRenderer.FlipToScreen();

As I said earlier, if you have any existing OpenGL knowledge this should look very familiar. Note again the use of the ITOFP macro to convert values to 16.16 fixed point, and the introduction of the new data type GL_FIXED as a parameter to various functions.


Adding Text
We can use the BREW functions to draw text to the screen, as long as we do it after ending GL rendering for the frame. At the end of Game::Tick, after the call to mRenderer.FlipToScreen(), add this:

AECHAR buffer[16];
WSPRINTF(buffer, 16, L"FPS: %d", 1000 / timeElapsed);
IDISPLAY_DrawText(mDisplay, AEE_FONT_BOLD, buffer, -1, 5, 5, NULL, 0);
IDISPLAY_Update(mDisplay);

The last call, IDISPLAY_Update, only needs to be made once at the very end, however much text or other data you want to put on screen. BREW is entirely Unicode (except for filenames), so we need to use the wide version of sprintf. To declare a string constant as a wide string, simply precede it with an L.


Run!
If you compile and run, you should have a triangle with three different colored corners. The FPS should be in the top-left corner, and pressing the left and right arrows (either use the keyboard, or click the emulator buttons) should rotate the triangle. Congratulations, you just wrote your first OpenGL ES program!


Conclusion
Hopefully, if you have been following along, you have managed to install the BREW SDK, set up the emulator, and build your first OpenGL ES program.

Once you have OpenGL ES up and running you can use nearly any existing OpenGL books or websites for information. Just bear in mind the restrictions of the hardware, and don't forget to convert all your values to 16.16 fixed point!


Source
The source code that accompanies this article, including a Visual C++ 2003 project file, is available here (10k).


Where now?
I hope you are aware of the great contest Gamedev.net and Qualcomm are running. This article provides enough information to get you started writing the contest winner, and the next cult-classic 3D game for mobile phones.


Further reading
Books
OpenGL ES Game Development by Dave Astle and Dave Durnil

Contest
Gamedev.net & Qualcomm OpenGL ES development contest

BREW
Register for a free BREW developer account
Install the BREW SDK
Get the BREW GL ES SDK (and Visual C++ addon)
BREW developer forums

OpenGL ES
Official OpenGL ES web site


OpenGL
OpenGL.org
NeHe OpenGL tutorials

Fixed Point Math
Fixed point math article
More fixed point math

"The road must be trod, but it will be very hard. And neither strength nor wisdom will carry us far upon it. This quest may be attempted by the weak with as much hope as the strong. Yet such is oft the course of deeds that move the wheels of the world: small hands do them because they must, while the eyes of the great are elsewhere." J.R.R. Tolkein, The Fellowship of the Ring


Developing iPhone Games for Longer Battery Life

Overview
In this article I will discuss the technique developed and implemented in our new game Armageddon Wars that has extended the potential life of the iPhone’s battery during gameplay. To do this, I will demonstrate the following:
  • Lowering the frame rate of a game increases the battery life by a significant amount.
  • Games do not need a high frame rate when displaying a static screen, or a simple 2D animation.
  • Our game actively throttles down the frame rate according to what animations are playing at any given time. This reduces the battery consumption by a small, but not insignificant amount.
  • Most commercial games do not make any attempt to lower the frame rate when displaying static screens, wasting battery life.
The final section explains how the technique was implemented.


Background
A lot of game developers bring the PC mind-set with them when they start developing for mobile devices. For PC games the target is to achieve the legendary 60 frames per second rate. This is fine when the hardware is connected to an electrical outlet, but power consumption becomes an important factor when your device is running from a battery.

Rather than thinking of speed alone, you need to think of entertainment value. Imagine two people on a 3 hour flight who are playing games on their iPhones to pass the time. One has a 60 FPS game, but the battery runs out mid journey. The other is playing a game that is capped at 30 FPS and the battery lasts the whole journey. Which of these two people gets the most entertainment value?

A lot of the time you don't even need 30 FPS. At certain points the game might be showing only a static menu or a simple animation – a blinking light, for instance – and you do not need 30 FPS for that.

To play an animation smoothly you need a frame rate that's twice the frame frequency of the animation. For instance, a simple 2D flag animation that has 3 frames and repeats every second only needs 6 FPS to play smoothly. Anything above that is overkill.
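That rule is easy to express in code. The following helper is purely illustrative (the name and the rule come from the paragraph above, not from any SDK):

#include <math.h>

/* Minimal FPS needed to play an animation smoothly, using the
   "twice the animation's frame frequency" rule described above.
   Illustrative helper only. */
static int requiredFps(int framesPerCycle, float cycleSeconds)
{
    float frameFrequency = framesPerCycle / cycleSeconds; /* animation frames shown per second */
    return (int)ceilf(2.0f * frameFrequency);
}

/* The 3-frame flag repeating every second: requiredFps(3, 1.0f) == 6. */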

Apple themselves consider battery consumption to be an extremely important issue. Their own developer guidelines advise the following:

Do not draw to the screen faster than needed. Drawing is an expensive operation when it comes to power. Do not rely on the hardware to throttle your frame rates. Draw only as many frames as your application actually needs.

But, we decided to take this idea one step further.


Frame rate throttling
The technique we developed works by adjusting the frame rate according to what is being displayed on the screen at each instant. Each object displayed on screen indicates what frame rate it requires and the game loop controller picks the largest frame rate requirement. The screen-shots below illustrate this more clearly:

[Screenshots: scenes and the frame rates they require]

After each frame is rendered, both the CPU and GPU of the device sleep until the next frame needs to be rendered. The lower the frame rate, the more the CPU and GPU sleep, and this is what saves battery life.
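The selection logic itself can be a simple max scan. This sketch uses hypothetical names rather than the game's actual classes, which are described under "Technical details" below:

/* Hypothetical sketch: every visible object reports the frame rate it
   needs, and the loop runs at the highest requirement. */
typedef struct {
    int requiredFps;   /* e.g. 1 for a static sprite, 33 for an explosion */
} DisplayableObject;

static int sceneRequiredFps(const DisplayableObject *objects, int count)
{
    int fps = 1;   /* floor: 1 FPS is enough for a fully static scene */
    for (int i = 0; i < count; ++i) {
        if (objects[i].requiredFps > fps)
            fps = objects[i].requiredFps;
    }
    return fps;    /* the game loop then sleeps roughly 1000/fps ms per frame */
}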


Test setup
The following devices were used:

iPhone 3GS, OS ver 3.1.3

iPod 1st Gen, OS ver 2.2.1

All measurements were done with the device fully charged, the auto-lock disabled and screen brightness at 80%. Only comparative measurements on the same device are valid, as the battery life varies widely between devices, depending on the age of the battery.


How much does the frame rate affect battery life?
We want to measure the effect of the frame rate on battery life, so we fully charged the devices and let them run until the 10% warning message popped up on screen.

Armageddon Wars normally runs at 33 FPS during battle sequences and then throttles down to 1 FPS when animation stops. We changed the game to hard-code the frame rate to various settings: 60, 33, 25, and 1 FPS. (Note: the 60 FPS test could only be run on the iPhone 3GS; the iPod touch is too slow.)

Device                 60 FPS    33 FPS    25 FPS    1 FPS
iPhone 3GS             3h 27m    4h 15m    5h 19m    8h 45m
iPod touch 1st gen     n/a       2h 29m    3h 5m     3h 51m



Real world tests
Next we tried measuring the actual battery consumption while playing the game.

Our game is a turn-based strategy. The user needs time to consider their next move, and while they are thinking the scene is usually static; only after they make their move are animations and special effects played: explosions, fire, smoke, etc. When the animations are playing the frame rate is at maximum, but when the scene becomes static again the frame rate is throttled down. So, the average frame rate will be below the maximum rate, depending on how long the player needs to make their move. The following diagram illustrates this more clearly:

[Diagram: frame rate peaking during animations and dropping while the player thinks]

We did an epic gameplay session of 2 hours on each device, both with and without the power save setting; the session had to be long for the measurements to be accurate. When power save is switched off, the game tries to maintain a constant 33 FPS (however, the iPod touch isn't quite able to reach that rate).

                       Power Save OFF                Power Save ON
Device                 Battery after 2h   Avg FPS    Battery after 2h   Avg FPS
iPhone 3GS             55%                33.0       60%                27.6
iPod touch 1st gen     15%                31.3       45%                24.9

So, on the iPhone 3GS, battery consumption was reduced by 11% (45% of the battery consumed versus 40%) and the average frame rate by 16%.

On the iPod touch, battery consumption was reduced by 35% (85% consumed versus 55%) and the average frame rate by 20%.

The reduction was more apparent on the iPod touch because 2 hours represents nearly its entire battery life.

Bear in mind that these figures depend on play style: the figures above are for an experienced player. For inexperienced players the average frame rate would be lower, because they would spend more time making their move.


Compared against other games
The power saving technique works best for strategy games, where there is a tendency to have more static screens, so we tested against some strategy games from the App Store. Each game was put on a static screen and left to run until the 10% warning message appeared on the screen.

Device                 Brain Challenge   Lemonade Tycoon      Monopoly          Armageddon Wars
                       (test screen)     (main game screen)   (manage screen)   (battle at 1 FPS)
iPhone 3GS             6h 18m            4h 48m               4h 30m            8h 45m
iPod touch 1st Gen     2h 47m            2h 35m               2h 14m            3h 51m

For every game the battery lasted a much shorter time than Armageddon Wars running at 1 FPS, indicating that their frame rates on static screens are higher than they need to be.


Technical details
I'd like to give you a brief overview of how the power save technique was implemented. These diagrams have been simplified for clarity.

Firstly, the game had the following class structure:

[Class diagram of the game]

As you see, the game consists of several screens, all inherited from the same base class. The member currentScreen points to the current foreground screen. Each screen is composed of several objects derived from DisplayableObject. These could be sprites, user interface items, or particle managers.

And this is the call diagram of the game loop, which shows how these objects interact:

[Call diagram of the game loop]

The main view renders the game and measures the time taken. It then queries the game object for the required frame time. The game object in turn calls the current screen's getReqFrameTime function. Each DisplayableObject knows what its required frame rate needs to be. The current screen returns the shortest required frame time of all DisplayableObjects. Then the main view calculates the sleep time as the difference between the required frame time and the time taken to do the render.
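Expressed as code, the end of each frame might look something like the following sketch. Every identifier here is hypothetical; the real implementation lives in the game's Objective-C main view:

/* Hedged sketch of the per-frame sleep computation described above. */
double currentTimeSeconds(void);   /* assumed monotonic clock helper */
void   renderFrame(void);          /* stands in for the real render call */
void   sleepSeconds(double s);     /* wraps the platform sleep */
double requiredFrameSeconds(void); /* shortest required frame time of all objects */

void runOneFrame(void)
{
    double frameStart = currentTimeSeconds();
    renderFrame();
    double renderTime = currentTimeSeconds() - frameStart;
    double sleepTime  = requiredFrameSeconds() - renderTime;
    if (sleepTime > 0.0)
        sleepSeconds(sleepTime);   /* the CPU and GPU idle here, saving battery */
}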


Changing the standard iPhone Game loop
The default OpenGL ES iPhone template uses an NSTimer object to update the game loop. The timer is set on initialization of the game to call the game loop once every 1/60th of a second. The OS is responsible for making NSTimer call the game loop. If a timer event triggers while the game loop is still running, that event is ignored, and the loop simply waits for the next NSTimer event. Because the timer events are not synchronized with the game loop, this method is somewhat imprecise.

For our power saving technique to work we need a more precise way to control the game loop. What we do is create a special game loop thread during app startup. Basically, the game loop looks like the following:

[Flow diagram of the modified game loop]

The green blocks are called by the main application thread, while the red blocks are called by the game loop thread. Because two threads are accessing the same information, we have to use a mutex to allow only one thread at a time to access the game object.
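As a minimal sketch, assuming POSIX threads (which iOS provides), the dedicated loop thread could look like this. Everything apart from the pthread and usleep calls is a hypothetical stand-in for the real game code:

#include <pthread.h>
#include <unistd.h>

void   updateAndRenderGame(void);   /* hypothetical: one update/render pass */
double requiredFrameSeconds(void);  /* hypothetical: from the current scene */
double lastRenderSeconds(void);     /* hypothetical: how long the pass took */

static pthread_mutex_t gGameMutex = PTHREAD_MUTEX_INITIALIZER;
static volatile int gGameRunning = 1;

static void *gameLoopThread(void *arg)
{
    (void)arg;
    while (gGameRunning) {
        pthread_mutex_lock(&gGameMutex);   /* the main thread takes this too */
        updateAndRenderGame();
        double sleepSec = requiredFrameSeconds() - lastRenderSeconds();
        pthread_mutex_unlock(&gGameMutex);
        if (sleepSec > 0.0)
            usleep((useconds_t)(sleepSec * 1e6));
    }
    return NULL;
}

/* At app startup: pthread_create(&threadId, NULL, gameLoopThread, NULL); */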


Conclusions
From the tests run on Armageddon Wars there is clear evidence of significant battery life savings. The exact figure is difficult to measure precisely, but appears to be in the region of 25%.

Armageddon Wars is an OpenGL ES game. Games that do not use the GPU would probably see a smaller power saving benefit because only the CPU is sleeping when the frame rate is throttled.

This technique works better for strategy games, or board games. Action games get a lesser benefit from this technique, because the player is not looking at static screens most of the time.


Cost of implementing the technique
As you may expect, it requires extra effort to make your game "battery friendly". Firstly, you need to modify the game loop as shown in the "Technical details" section, but this is a good idea in general, because it gives you better control over your game's frame rate. We implemented the technique at an early stage of development, and because of that most of the issues were ironed out well before release. However, some subtle bugs did creep in because certain game objects calculated their required frame rate incorrectly, and that took extra time to fix. Still, we think the extra effort justified the benefit to the user.

Learning iOS Game Programming

Excerpt from Learning iOS Game Programming: A Hands-On Guide to Building Your First iPhone Game, by Michael Daley.
Published by Addison-Wesley Professional
ISBN-10: 0-321-69942-4
ISBN-13: 978-0-321-69942-8

One of the most important elements of a game is the game loop. It is the heartbeat that keeps the game ticking. Every game has a series of tasks it must perform on a regular basis, as follows:

  • Update the game state
  • Update the position of game entities
  • Update the AI of game entities
  • Process user input, such as touches or the accelerometer
  • Play background music and sounds
This may sound like a lot, especially for a complex game that has many game elements, and it can take some time to process each of these stages. For this reason, the game loop not only makes sure that these steps take place, but it also ensures that the game runs at a constant speed.

This chapter shows you how to build the game loop for Sir Lamorak’s Quest. We take the OpenGL ES template app created in Chapter 3, “The Journey Begins,” and make changes that implement the game loop and the general structure needed to easily extend the game in later chapters.


Timing Is Everything
Let’s start out by looking at some pseudocode for a simple game loop, as shown in Listing 4.1.

Listing 4.1  A Simple Game Loop

BOOL gameRunning = true;

while (gameRunning) {
    updateGame;
    renderGame;
}

This example game loop will continuously update the game and render to the screen until gameRunning is false.

Although this code does work, it has a serious flaw: It does not take time into account. On slower hardware, the game runs slowly, and on fast hardware, the game runs faster. If a game runs too fast on fast hardware and too slow on slow hardware, your user’s experience with the game will be disappointing. There will either be too much lag in the game play, or the user won’t be able to keep up with what’s going on in the game.

This is why timing needs to be handled within your game loop. This was not such a problem for games written back in the 1980s, because the speed of the hardware on which games were written was known, and games would only run on specific hardware for which they were designed. Today, it is possible to run a game on many different types of hardware, as is the case with the iPhone. For example, the following list sorts the devices (from slowest to fastest) that run the iOS:

  • iPhone (first generation)
  • iPod Touch 1G
  • iPhone 3G
  • iPod Touch 2G
  • iPhone 3GS/iPod Touch 3G
  • iPad/iPhone 4
As a game developer, you need to make sure that the speed of your game is consistent. It’s not a good idea to have a game on the iPhone 3GS running so fast that the player can’t keep up with the action, or so slow on an iPhone 3G that the player can make a cup of tea before the next game frame is rendered.

There are two common components used to measure a game loop’s speed, as follows:

  • Frames Per Second (FPS): FPS relates to how many times a game scene is rendered to the screen per second. The maximum for the iPhone is 60 FPS, as that is the screen's maximum refresh rate. In Listing 4.1, this relates to how many times the renderGame method is called.
  • Update speed: This is the frequency at which the game entities are updated. In Listing 4.1, this relates to how many times the updateGame method is called.
Collision Detection
Timing is important for a number of reasons—not only the overall game experience, but, maybe more importantly, for functions such as collision detection. Identifying when objects in your game collide with each other is really important and is a basic game mechanic we need to use. In Sir Lamorak’s Quest, having the player able to walk through walls is not a great idea, and having the player’s sword pass through baddies with no effect is going to frustrate the player and keep them from playing the game.

Collision detection is normally done as part of the game's update function. Each entity has its AI and position updated as play progresses, and those positions are checked to see whether the entity has collided with anything. For example, Sir Lamorak could walk into (or collide with) a wall, or a ghost could collide with an axe. As you can imagine, the distance a game entity moves between each of these checks is important. If the entity moves too far during each game update, it may pass through another object before the next collision check.

Having entities move at a constant speed during each game update can help to reduce the chances of a collision being missed. There is, however, always a chance that a small, fast-moving entity could pass through another object or entity unless collision checks are implemented that don't rely solely on an entity's current position, but also on its projected path. Collision detection is discussed in greater detail in Chapter 15, "Collision Detection," which is where we implement it in the game.
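One simple way to take an entity's projected path into account is to test intermediate positions along the move rather than only the end point. This is a generic illustration of the idea, not the book's implementation (which arrives in Chapter 15):

/* Sampling along a projected path so a fast mover can't tunnel
   through a thin obstacle. Illustrative only. */
typedef struct { float x, y; } Vec2;

static int collidesAlongPath(Vec2 from, Vec2 to, Vec2 obstacle, float radius)
{
    const int steps = 8;   /* more steps = finer, but slower, check */
    for (int i = 0; i <= steps; ++i) {
        float t  = (float)i / (float)steps;
        float px = from.x + (to.x - from.x) * t;
        float py = from.y + (to.y - from.y) * t;
        float dx = px - obstacle.x;
        float dy = py - obstacle.y;
        if (dx * dx + dy * dy <= radius * radius)
            return 1;      /* hit somewhere along the move */
    }
    return 0;
}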


The Game Loop
The game loop is a key, if not the key, element in a game. I spent considerable time tweaking the game loop for Sir Lamorak’s Quest, and it was something that I revisited a number of times, even after I thought I had what was needed.

There are so many different approaches to game loops. These range from the extremely simple closed loops you saw earlier in Listing 4.1, up to multithreaded loops that handle things such as path finding and complex AI on different threads. I found that I actually started off with the very simple approach that then became more complex as the game developed (and as I ran into issues).

I won't review all the different game loops I tried through the development of Sir Lamorak's Quest. Instead, we'll focus on the game loop actually used in the game, rather than diving down some rabbit hole we'll never end up using.


Frame-Based
The easiest type of game loop is called a frame-based loop. This is where the game is updated and rendered once per game cycle, and is the result of the simple game loop shown in Listing 4.1. It is quick and easy to implement, which was great when I first started the game, but it does have issues.

The first of these issues is that the speed of the game is directly linked to the frame rate of the device on which the game is running—the faster the hardware, the faster the game; the slower the hardware, the slower the game. Although we are writing a game for a very small family of devices, there are differences in speed between them that this approach would highlight.

Figure 4.1 shows how frames are rendered more quickly on fast hardware and more slowly on slower hardware. I suppose that could be obvious, but it can often be overlooked when you start writing games, leaving the player open to a variable playing experience. Also remember that each of these frames is performing a single render and update cycle.

Figure 4.1: Entity following a curved path on slow and fast hardware.


Time-Based, Variable Interval
A time-based variable interval loop is similar to the frame-based approach, but it also calculates the elapsed time. This calculation is used to work out the milliseconds (delta) that have passed since the last game cycle (frame). This delta value is used during the update element of the game loop, allowing entities to move at a consistent speed regardless of the hardware's speed.

For example, if you wanted an entity to move at 1 unit per second, you would use the following calculation:

position.x += 1.0f * delta;

Although this gets over the problem of the game running at different speeds based on the speed of the hardware (and therefore the frame rate), it introduces other problems. While most of the time the delta should be relatively small and constant, it doesn't take much to upset things, causing the delta value to increase with some very unwanted side effects. For example, if a text message arrived on the iPhone while the user was playing, it could cause the game's frame rate to slow down. You would then see significantly larger delta values causing problems with elements such as collision detection.

Each game cycle causes an entity in Figure 4.2 to move around the arc. As you can see in the diagram, with small deltas, the entity eventually hits the object and the necessary action can be taken.

Figure 4.2: Frames using a small delta value.

However, if the delta value were increased, the situation shown in Figure 4.3 could arise. Here, the entity is moving at a constant speed, but the reduced frame rate (and, therefore, increased delta) has caused what should have been a collision with the object to be missed.

Figure 4.3: Frames using a large delta.

Don't worry—there is a reasonably easy solution, and that is to use a time-based, fixed interval system.


Time-Based, Fixed Interval
The key to this method is that the game’s state is updated a variable number of times per game cycle using a fixed interval. This provides a constant game speed, as did the time-based variable interval method, but it removes issues such as the collision problem described in the previous section.

You’ll remember that the previous methods tied the game’s update to the number of frames. This time, the game’s state could be updated more times than it is rendered, as shown in Figure 4.4. We are still passing a delta value to the game entities, but it’s a fixed value that is pre-calculated rather than the variable delta that was being used before (thus, the term fixed interval).

Figure 4.4: Variable numbers of updates per frame with a single render.

This system causes the number of game updates to be fewer when the frame rate is high, but it also increases the number of game updates when the frame rate is low. This increase in the number of game updates when the game slows down means that the distance entities travel per update stays constant. The benefit is that you are not losing the chance to spot a collision by jumping a large amount in a single frame.


Getting Started
The project that accompanies this chapter already contains the game loop and the other changes we are going to run through in the remainder of this chapter. You should now open the project CH04_SLQTSOR, and we'll walk through the changes and additions to the project since Chapter 3.

Note - This project should be compiled against version 3.1 or higher of the iPhone SDK. The CADisplayLink class used in this example is only available from version 3.1 of the iPhone SDK. If you compile this project using iPhone SDK 3.0 or less, it still works, but you will need to use NSTimer rather than CADisplayLink. Using iPhone SDK 3.0 or less will also generate warnings, as shown in Figure 4.5.

Figure 4.5: Errors generated in EAGLView.m when compiling against iPhone SDK version 3.0 or lower.

When you open the CH04_SLQTSOR project in Xcode, you see a number of new groups and classes in the Groups & Files pane on the left that have been added to the project since Chapter 3, including the following:

  • Group Headers: This group holds global header files that are used throughout the project.
  • Abstract Classes: Any abstract classes that are created are kept in this group. In CH04_SLQTSOR, it contains the AbstractScene class.
  • Game Controller: The game controller is a singleton class used to control the state of the game. We will see how this class is used and built later in this chapter.
  • Game Scenes: Each game scene we create (for example, the main menu or main game) will have its own class. These classes are kept together in this group.
Let’s start with the changes made to the EAGLView class.


Inside the EAGLView Class
The first change to EAGLView.h, inside the Classes group, is the addition of a forward declaration for the GameController class. This class does not exist yet, but we will create it soon:

@class GameController;

Inside the interface declaration, the following ivars have been added:

CFTimeInterval lastTime;
GameController *sharedGameController;

These instance variables will be used to store the last time the game loop ran and point to an instance of the GameController class, which we create later. No more changes are needed to the header file. Save your changes, and let's move on to the implementation file.


Inside the EAGLView.m File
In Xcode, select EAGLView.m and move to the initWithCoder: method. The changes in here center around the creation of the renderer instance. In the previous version, an instance of ES2Renderer was created. If this failed, an instance of ES1Renderer was created instead. We are only going to use OpenGL ES 1.1 in Sir Lamorak’s Quest, so we don’t need to bother with ES2Renderer.

Because we are not using ES2Renderer, the ES2Renderer.h and .m files have been removed from the project. The Shaders group and its contents have also been removed.

There is also an extra line that has been added to the end of the initWithCoder method, as shown here:

sharedGameController = [GameController sharedGameController];

The next change is the actual code for the game loop. We are going to have EAGLView running the game loop and delegating the rendering and state updates to the ES1Renderer instance called renderer. Just beneath the initWithCoder: method, you can see the game loop code, as shown in Listing 4.2.

Listing 4.2  EAGLView gameLoop: Method

#define MAXIMUM_FRAME_RATE 45
#define MINIMUM_FRAME_RATE 15
#define UPDATE_INTERVAL (1.0 / MAXIMUM_FRAME_RATE)
#define MAX_CYCLES_PER_FRAME (MAXIMUM_FRAME_RATE / MINIMUM_FRAME_RATE)

- (void)gameLoop {
    static double lastFrameTime = 0.0f;
    static double cyclesLeftOver = 0.0f;
    double currentTime;
    double updateIterations;

    currentTime = CACurrentMediaTime();
    updateIterations = ((currentTime - lastFrameTime) + cyclesLeftOver);

    if (updateIterations > (MAX_CYCLES_PER_FRAME * UPDATE_INTERVAL))
        updateIterations = (MAX_CYCLES_PER_FRAME * UPDATE_INTERVAL);

    while (updateIterations >= UPDATE_INTERVAL) {
        updateIterations -= UPDATE_INTERVAL;
        [sharedGameController updateCurrentSceneWithDelta:UPDATE_INTERVAL];
    }

    cyclesLeftOver = updateIterations;
    lastFrameTime = currentTime;

    [self drawView:nil];
}

When the game loop is called from either CADisplayLink or NSTimer, it first obtains the current time using CACurrentMediaTime(). This should be used instead of CFAbsoluteTimeGetCurrent(), because CFAbsoluteTimeGetCurrent() is synced with the time on the mobile network if you are using an iPhone, and changes to the time on the network would cause hiccups in game play. Apple therefore recommends CACurrentMediaTime(), which is based on the device's own monotonic clock.

Next, we calculate the number of updates that should be carried out during this frame and then cap the number of update cycles so we can meet the minimum frame rate.

The MAXIMUM_FRAME_RATE constant determines the frequency of update cycles, and MINIMUM_FRAME_RATE is used to constrain the number of update cycles per frame.

Capping the number of updates per frame causes the game to slow down should the hardware slow down while running a background task. When the background task has finished running, the game returns to normal speed.

Using a variable time-based approach in this situation would cause the game to skip updates with a larger delta value. The approach to use depends on the game being implemented, but skipping a large number of updates while the player has no ability to provide input can cause issues, such as the player walking into a baddie without the chance of walking around or attacking them.

I tried to come up with a scientific approach to calculating the maximum and minimum frame rate values, but in the end, it really was simple trial and error. As Sir Lamorak’s Quest developed and the scenes became more complex, I ended up tweaking these values to get the responsiveness I wanted while making sure the CPU wasn’t overloaded.

The next while loop then performs as many updates as are necessary based on updateIterations calculated earlier. updateIterations is not an actual count of the updates to be done, but an interval value that we use later:

while (updateIterations >= UPDATE_INTERVAL) {
    updateIterations -= UPDATE_INTERVAL;
    [sharedGameController updateCurrentSceneWithDelta:UPDATE_INTERVAL];
}

This loops around, reducing the interval in updateIterations by the fixed UPDATE_INTERVAL value and updating the game's state each pass. Once updateIterations is less than UPDATE_INTERVAL, the loop finishes, and we load any fraction of an update left in updateIterations into cyclesLeftOver. This means we don't lose fractions of an update cycle; we accumulate them and use them later.

With all the updates completed, we then render the scene:

[self drawView:nil];

The CADisplayLink or NSTimer now calls the game loop until the player quits or the battery runs out (which it could do, given how much they will be enjoying the game!).

This is not a complex game loop, although it may take a while to get your head around the calculations being done. I found that moving to this game loop reduced the CPU usage on Sir Lamorak’s Quest quite significantly and really smoothed out the game.

The final changes to EAGLView are within the startAnimation method. To get things ready for the first time we run through the gameLoop, we need to set the lastTime ivar. Because the gameLoop will not be called until the animation has started, we need to add the following line to the startAnimation method beneath the animating = TRUE statement:

lastTime = CFAbsoluteTimeGetCurrent();

The selectors used when setting up the CADisplayLink and NSTimer also need to be changed. The new selector name should be gameLoop instead of drawView.

Having finished with the changes inside EAGLView, we need to check out the changes to ES1Renderer. This class is responsible for setting up the OpenGL ES context and buffers, as noted in Chapter 3. However, we are going to extend ES1Renderer slightly so it sets up the OpenGL ES state we need for the game and renders the currently active scene.


ES1Renderer Class
When you look inside the ES1Renderer.h file, you see a forward declaration to the GameController class, which is in the next section, and an ivar that points to the GameController instance. The rest of the header file is unchanged.

In Xcode, open the ES1Renderer.m file. The GameController.h file is imported, followed by an interface declaration, as shown here:

@interface ES1Renderer (Private)
// Initialize OpenGL
- (void)initOpenGL;
@end

This interface declaration specifies a category of Private and is being used to define a method that is internal to this implementation. (I normally create an interface declaration such as this inside my implementations so I can then declare ivars and methods that are going to be private to this class.) There is only one method declared, initOpenGL, which is responsible for setting up the OpenGL ES states when an instance of this class is created.

Although Objective-C doesn’t officially support private methods or ivars, this is a common approach used to define methods and ivars that should be treated as private.

The next change comes at the end of the init method:

sharedGameController = [GameController sharedGameController];

This points the sharedGameController ivar to an instance of the GameController class. GameController is implemented as a singleton, a design pattern meaning there can be only one instance of the class. It exposes a class method called sharedGameController that returns a reference to an instance of GameController. You don't have to worry about whether an instance has already been created, because that is all taken care of inside the GameController class itself.

The next change is within the render method, shown in Listing 4.3. This is where the template initially inserted drawing code for moving the colored square. We will see the code used to draw the square again, but it’s not going to be in this method. If you recall, this method is called by the game loop and needs to call the render code for the currently active scene.

Listing 4.3  ES1Renderer render Method

- (void)render {
    glClear(GL_COLOR_BUFFER_BIT);

    [sharedGameController renderCurrentScene];

    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}

First of all, glClear(GL_COLOR_BUFFER_BIT) clears the color buffer, wiping the screen ready for the next scene to be rendered. For the rendering, the game controller is asked to render the currently active scene. This is how the render message is passed from the game loop to ES1Renderer, then on to the game controller, and eventually to the render method inside the currently active game scene. Figure 4.6 shows how a game scene fits in with the other classes we are reviewing.

Figure 4.6: Class relationships.

The last line in this method presents the render buffer to the screen. If you remember, this is where the image that has been built in the render buffer by the OpenGL ES drawing commands is actually displayed on the screen.

Having looked at the changes to the render method, we’ll move on to the resizeFromLayer: method. If you recall, the resizeFromLayer: method was responsible for completing the OpenGL ES configuration by assigning the renderbuffer created to the context (EAGLContext) for storage of the rendered image. It also populated the backingWidth and backingHeight ivars with the dimensions of the renderbuffer.

The following line of code has been added to this method that calls the initOpenGL method:

[self initOpenGL];

If this looks familiar, that's because this method was declared inside the Private category interface described earlier. As the resizeFromLayer: method assigns the render buffer to the context and finishes up the core setup of OpenGL ES, it makes sense to place this OpenGL ES configuration activity here, so we can set up the different OpenGL ES states needed for the game.

Now move to the bottom of the implementation and look at the initOpenGL method. This method sets up a number of key OpenGL ES states that we will be using throughout the game.

If you move to the bottom of the implementation, you can see the following implementation declaration:

@implementation ES1Renderer (Private)

You can tell this is related to the interface declaration at the top of the file because it's using the same category name in brackets. There is only one method declared in this implementation: initOpenGL.

At the start of the method, a message is output to the log using the SLQLOG macro defined in the Global.h header file. The next two lines should be familiar, as they were covered in Chapter 3. We are switching to the GL_PROJECTION matrix and then loading the identity matrix, which resets any transformations that have been made to that matrix.

The next line is new and something we have not seen before:

glOrthof(0, backingWidth, 0, backingHeight, -1, 1);

This command describes a transformation that produces an orthographic, or parallel, projection. Because we have set the matrix mode to GL_PROJECTION, the transformation is applied to the projection matrix. An orthographic projection is one that does not involve perspective (it's just a flat image).

Note - I could go on now about orthographic and perspective projection, but I won’t. It’s enough to know for our purposes that glOrthof is defining the clipping planes for width, height, and depth. This has the effect of making a single OpenGL ES unit equal to a single pixel in this implementation because we are using the width and height of the screen as the parameters.

As mentioned earlier, OpenGL ES uses its own units (that is, a single OpenGL ES unit by default does not equal a single pixel). This gives you a great deal of flexibility, as you can define how things scale as they’re rendered to the screen. For Sir Lamorak’s Quest, we don’t need anything that complex, so the previous function—which results in a unit equal to a pixel—is all we need.


Configuring the View Port
It is not common to make many changes to the GL_PROJECTION matrix apart from when initially setting up the projection.

As we are setting up the projection side of things, this is a good place to also configure the view port:

glViewport(0, 0, backingWidth, backingHeight);

The glViewport function specifies the dimensions and the orientation of the 2D window into which we are rendering. The first two parameters specify the coordinates of the bottom-left corner, followed by the width and height of the window in pixels. For the width and height, we are using the dimensions from the renderbuffer.

With the projections side set up, we then move onto setting up the GL_MODELVIEW matrix. This is the matrix that normally gets the most attention as it handles the transformations applied to the game’s models or sprites, such as rotation, scaling, and translation. As noted in Chapter 3, once the matrix mode has been switched to GL_MODELVIEW, the identity matrix is loaded so it can apply the transformations.

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

Next, we set the color to be used when we clear the screen and also disable depth testing. Because we are working in 2D and not using the concept of depth (that is, the z-axis), we don't need OpenGL ES to apply any tests to pixels to see if they are in front of or behind other pixels. Disabling depth testing in 2D games can really help improve performance on the iPhone.

Not using the depth buffer means that we have to manage z-indexing ourselves (that is, the scene needs to be rendered from back to front so objects at the back of the scene appear behind those at the front):

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glDisable(GL_DEPTH_TEST);

We finish up the OpenGL ES configuration by enabling more OpenGL ES functions. You may remember that OpenGL ES is a state machine. You enable or disable a specific state, and it stays that way until you change it back. We have done exactly that when disabling depth testing, which now stays disabled until we explicitly enable it again:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

The preceding states are used to tell OpenGL ES that we are going to be providing an array of vertices and an array of colors to be used when rendering to the screen. Other client states will be described and used later in Chapter 5, "Image Rendering."

That’s it for the configuration of OpenGL ES. A lot of the OpenGL ES configuration should have looked familiar to you. A number of the functions in there were present in the render code from the OpenGL ES template. In the template, the states were set each time we rendered. Although this is fine, it isn’t necessary to do that unless you are using state changes to achieve specific effects.

Tip - Keep the number of state changes being made within the game loop to a minimum, as some, such as switching textures, can be expensive in terms of performance. It really is worth creating your own state machine that stores the states set in OpenGL ES. These can then be checked locally to see if they need to change; there is no point in setting them if the values are the same.
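As a sketch of what that tip means in practice, a local cache can wrap a state change and skip redundant calls. This is illustrative only, with glBindTexture standing in as one example of an expensive state change:

#include <OpenGLES/ES1/gl.h>   /* iOS OpenGL ES 1.1 header */

/* Tiny client-side cache that avoids redundant texture binds.
   An illustrative sketch of the tip above, not code from the book. */
static GLuint cachedBoundTexture = 0;

static void bindTextureCached(GLuint textureName)
{
    if (textureName != cachedBoundTexture) {
        glBindTexture(GL_TEXTURE_2D, textureName);   /* only when it changes */
        cachedBoundTexture = textureName;
    }
}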

That completes all the changes that have been made to the ES1Renderer class. We have added a pointer to the game controller, so rendering can be delegated to an instance of that class. We have also added some core OpenGL ES configuration to the class as well, making it responsible for all OpenGL ES setup that gives us a single place to go when we need to change that setup.


Game Scenes and the Game Controller
Having looked at the changes that were needed within EAGLView and ES1Renderer, we need to now look at the game controller and game scenes. Because we have the game loop in place, we need to introduce new classes that will handle the introduction of other game elements. The game elements I’m talking about are the different scenes used in Sir Lamorak’s Quest, such as the following:

  • The main menu: This is where players are taken when they first launch Sir Lamorak’s Quest. The main menu provides the player with options to change the game settings, view credits, or start the game.
  • The game itself: This is where the (game) action takes place.
The idea is that a game scene is responsible for its own rendering and game logic updates. This helps to break up the game into manageable chunks. I’ve seen entire games containing multiple scenes coded in a single class. For me, this is just too confusing, and creating a separate class for each scene just seemed logical.

In addition to the game scenes, we need to create a game controller. We have already seen the game controller mentioned in the EAGLView and ES1Renderer classes, so let’s run through what it does.


Creating the Game Controller
Figure 4.6 shows the relationship between the classes (that is, EAGLView, ES1Renderer, and GameController), and the game scene classes.


The GameController Class
If we are going to have a number of scenes, and each scene is going to be responsible for its rendering and logic, we are going to need a simple way of managing these scenes and identifying which scene is active. Inside the game loop, we will be calling the game update and render methods on a regular basis. And because there will be multiple scenes, we need to know which of those scenes is currently active so the update and render methods are called on the right one. Remember from looking at EAGLView that inside the game loop, we were using the following code to update the game:

[sharedGameController updateCurrentSceneWithDelta:UPDATE_INTERVAL];

This line calls a method inside an instance of GameController. We are not telling the game controller anything about the scene that should be rendered, as we are expecting the GameController to already know.

Note - One important aspect of the game controller is that it is a singleton class. We don’t want to have multiple game controllers within a single game, each with their own view of the game’s state, current scene, and so forth.

Inside the Game Controller group, you find the GameController.h and GameController.m files. Open GameController.h and we’ll run through it.

Although it may sound complicated to make a class a singleton, it is well-documented within Apple's Objective-C documentation. To make this even easier, we use the SynthesizeSingleton macro created by Matt Gallagher. Matt's macro enables you to turn a class into a singleton class simply by adding a line of code to your header and another to your implementation.

At the top of the GameController.h file, add the following import statement to bring in this macro:

#import "SynthesizeSingleton.h"

Note - I won’t run through how the macro works, because all you need can be found on Matt’s website. For now, just download the macro from his site and import the SynthesizeSingleton.h file into the project.

Next is another forward declaration to AbstractScene, which is a class we will be looking at very shortly. This is followed by an interface declaration that shows this class is inheriting from NSObject and implements the UIAccelerometerDelegate protocol:

@interface GameController : NSObject <UIAccelerometerDelegate>

The UIAccelerometerDelegate protocol is used to define this class as the delegate for accelerometer events, and it supports the methods necessary to handle events from the accelerometer.

Within the interface declaration, we have just a couple of ivars to add. The first is as follows:

NSDictionary *gameScenes;

This dictionary will hold all the game scenes in Sir Lamorak's Quest. I decided to use a dictionary because it allows me to associate a key with each scene, making it easier to retrieve a particular scene:

AbstractScene *currentScene;

As you will see later, AbstractScene is an abstract class used to store the ivars and methods common between the different game scenes. Abstract classes don't get used to create class instances themselves. Instead, they are inherited by other classes that override methods which provide class-specific logic.

Note - Objective-C does not enforce abstract classes in the same way as Java or C++. It’s really up to the developer to understand that the class is meant to be abstract, and therefore subclassed—thus placing Abstract at the beginning of the class name.

This works well for our game scenes, as each scene will have its own logic and rendering code, but it will have ivars and methods, such as updateSceneWithDelta and renderScene, that all scenes need to have. We run through the AbstractScene class in a moment.

After the interface declaration, the next step is to create a single property for the currentScene ivar. This makes it so currentScene can be both read and updated from other classes.


Creating the Singleton
So far, this looks just like any other class. Now let’s add two extra lines of code to make this a singleton class:

+ (GameController *)sharedGameController;

This is a class method, identified by the + at the beginning of the method declaration. Because this is going to be a singleton class, this is important. We use this method to get a pointer to the one and only instance of this class that will exist in the code, which is why the return type from this method is GameController.

Next, we have two more method declarations; the first is as follows:

- (void)updateCurrentSceneWithDelta:(float)aDelta;

This method is responsible for asking the current scene to update its logic, passing in the delta calculated within the game loop. The next method is responsible for asking the current scene to render:

- (void)renderCurrentScene;

Now that the header is complete, open GameController.m so we can examine the implementation.


Inside GameController.m
To start, you can see that the implementation is importing a number of header files:

#import "GameController.h" #import "GameScene.h" #import "Common.h" GameScene is a new class; it inherits from the AbstractScene class. Because we will be initializing the scenes for our game in this class, each scene we created will need to be imported. Common.h just contains the DEBUG constant at the moment, but more will be added later.

Next, an interface is declared with a category of Private. This just notes that the methods defined in this interface are private and should not be called from outside the class. Objective-C does not enforce this; there really is no concept of a private method or ivar in Objective-C:

@interface GameController (Private)
- (void)initGame;
@end

As you can see from this code, we are using initGame to initialize the game scenes.

Next is the implementation declaration for GameController. This is a standard declaration followed by a synthesize statement for currentScene, so the necessary getters and setters are created. The next line is added to turn this class into a singleton class:

SYNTHESIZE_SINGLETON_FOR_CLASS(GameController);

The macro defined within the SynthesizeSingleton.h file adds all the code necessary to convert a class into a singleton. If you look inside the SynthesizeSingleton.h file, you see the code that gets inserted into the class when the project is compiled.

Notice that this class also has an init method. The init is used when the initial instance of this class is created. The formal approach to getting an instance of this class is to call the method sharedGameController, as we defined in the header file. This returns a pointer to an instance of the class. If it’s the first time that method has been called, it creates a new instance of this class and the init method is called.

Tip - The name of the class method defined in the header is important; it should be shared, followed by the name of the class (for example, sharedClassName). The class name is passed to the macro in the implementation, and it is used to create the sharedClassName method.

If an instance already exists, a pointer to that current instance will be returned instead, thus only ever allowing a single instance of this class to exist. If you tried to create an instance of this class using alloc and init, you will again be given a pointer to the class that already exists. The code introduced by the synthesize macro will stop a second instance from being created.

initGame is called within the init method and sets up the dictionary of scenes, as well as the currentScene.

If you move to the bottom of the file, you see the implementation for the private methods.

Inside the initGame method, we are writing a message to the console, before moving on to set up the dictionary. It’s good practice to make sure that all these debug messages are removed from your code before you create a release version. The next line creates a new instance of one of the game scenes, called GameScene:

AbstractScene *scene = [[GameScene alloc] init];

As you can see, GameScene inherits from AbstractScene. This means we can define *scene as that type. This enables you to treat all game scenes as an AbstractScene. If a game scene implements its own methods or properties that we need to access, we can cast from AbstractScene to the actual class the scene is an instance of, as you will see later.

Now that we have an instance of GameScene, we can add it to the dictionary:

[gameScenes setValue:scene forKey:@"game"];

This creates an entry in the dictionary that points to the scene instance and gives it a key of game. Notice that the next line releases scene:

[scene release];

Adding scene to the dictionary increased its retain count by one, so releasing it here takes the retain count back down from two to one. When the dictionary is released, or the object is removed from it, the retain count drops to zero and the object's dealloc method is called. If we didn't release scene after adding it to the dictionary, it would never be freed when the dictionary was released unless we remembered to send it another release, which is easy to forget. This is a standard approach for managing memory in Objective-C.

The last action of the method is to set the currentScene. This is a simple lookup in the gameScenes dictionary for the key game, which we used when adding the game scene to the dictionary. As additional game scenes are added later, we will retrieve them from the dictionary in the same way:

currentScene = [gameScenes objectForKey:@"game"];

We only have a few more methods left to run through in the GameController class. Next up is the updateCurrentSceneWithDelta: method, shown here:

- (void)updateCurrentSceneWithDelta:(float)aDelta {
    [currentScene updateSceneWithDelta:aDelta];
}

This takes the delta value calculated within the game loop and calls the updateSceneWithDelta: method inside the currentScene. Remember that we have set currentScene to point to an object in the gameScenes dictionary. These objects should all inherit from AbstractScene and therefore support the update method.

The same approach is taken with the render method, shown here:

- (void)renderCurrentScene {
    [currentScene renderScene];
}

The final method to review is accelerometer:didAccelerate:, shown here:

- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
}

This delegate method needs to be implemented because the class adopts the UIAccelerometerDelegate protocol. When the accelerometer is switched on, this method is passed UIAcceleration objects that can be used to find out how the iPhone is being moved. This can then be used to perform actions or control the player inside the game. We aren't using this method in Sir Lamorak's Quest, but it's useful to understand how this information could be obtained. More information on user input can be found in Chapter 12, "User Input."


AbstractScene Class
AbstractScene was mentioned earlier, and as the name implies, it is an abstract class. All the game scenes we need to create will inherit from this class.

Open AbstractScene.h in the Abstract Classes group, and we’ll take a look.

The header starts off by importing the OpenGL ES header files. This allows any class that inherits from AbstractScene.h to access those headers as well. The class itself inherits from NSObject, which means it can support operations such as alloc and init.

A number of ivars are defined within the interface declaration. Again, the ivars will be available to all classes that inherit from this class. The idea is to place useful and reusable ivars in this abstract class so they can be used by other game scenes. The ivars you will find here include the following:

  • screenBounds: Stores the dimensions of the screen as a CGRect.
  • sceneState: Stores the state of the scene. Later, we create a number of different scene states that can be used to track what a scene is doing (for example, transitioning in, transitioning out, idle, and running).
  • sceneAlpha: Stores the alpha value to be used when rendering to the screen. Being able to fade everything in and out would be cool, so storing an overall sceneAlpha value that we can use when rendering enables us to do this.
  • nextSceneKey: A string that holds the key to the next scene. If the GameController receives a request to transition out, the next scene specified in this ivar will become the current scene.
  • sceneFadeSpeed: Stores the speed at which the scene fades in and out.
After the interface declaration, two more properties are defined, as follows:

@property (nonatomic, assign) uint sceneState;
@property (nonatomic, assign) GLfloat sceneAlpha;

These simply provide getter and setter access to the sceneState and sceneAlpha ivars.

Next, a number of methods are defined to support the game scenes, including the update and render methods we have already discussed:

- (void)updateSceneWithDelta:(float)aDelta;
- (void)renderScene;

There are also a few new methods. The first relates to touch events. The EAGLView class receives touch events because it inherits from UIView, and these need to be passed to the currently active game scene. The active scene uses this touch information to work out what the player is doing. The following touch methods are used to accept the touch information from EAGLView and allow the game scene to act upon it:

- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event view:(UIView*)aView;
- (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event view:(UIView*)aView;
- (void)touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event view:(UIView*)aView;
- (void)touchesCancelled:(NSSet*)touches withEvent:(UIEvent*)event view:(UIView*)aView;

The next method declared is similar to the touch methods. Just as touches are fed to each game scene, accelerometer events also need to be fed down in the same way. We have already seen that GameController is the target for accelerometer events; therefore, GameController needs to pass down accelerometer event information to the current game scene:

- (void)updateWithAccelerometer:(UIAcceleration*)aAcceleration;

That's it for the header file. Now let's move to the implementation file by opening AbstractScene.m.

You may be surprised by what you find in the implementation file. When I said earlier that the abstract class doesn’t do anything, I really meant it. Apart from setting up the synthesizers for the two properties we declared, it just contains empty methods.

The idea is that the game scene that inherits from this class will override these methods to provide the real functionality.

That being the case, let’s jump straight to the final class to review in this chapter: the GameScene class.


GameScene Class
The GameScene class is responsible for implementing the game logic and rendering code for the scene. As described earlier, a game scene can be anything from the main menu to the actual game. Each scene is responsible for how it reacts to user input and what it displays onscreen.

For the moment, we have created a single game scene that we will use for testing the structure of Sir Lamorak’s Quest. You find the GameScene.h file inside the Game Scenes group in Xcode. When you open this file, you see that all we have defined is an ivar, called transY. We have no need to define anything else at the moment because the methods were defined within the AbstractScene class we are inheriting from.

Tip - When you inherit in this way, you need to make sure the header of the class you are inheriting from is imported in the interface declaration file (the .h file).

Because there is not much happening in the header file, open the GameScene.m file. This is where the magic takes place. All the logic for rendering something to the screen can be found in the GameScene.m file.

To keep things simple at this stage, we are simply implementing a moving box, just like you saw in Chapter 3 (refer to Figure 3.4). You may recall from the previous chapter that the logic to move the box, and the box rendering code itself, were all held within the render method. This has now been split up inside GameScene.

The updateSceneWithDelta: method is called a variable number of times within the game loop. Within that method, we simply increment the transY ivar we defined in the header:

- (void)updateSceneWithDelta:(float)aDelta {
    transY += 0.075f;
}

When the updating has finished, the game loop will render the scene. This render request is passed to the GameController, which then asks the currently active scene to render. That request ends with the next method, renderScene.

The renderScene method is where the code to actually render something to the screen is held. As mentioned earlier, we are just mimicking the moving box example from the previous chapter, so the first declaration within this method is to set up the vertices for the box:

static const GLfloat squareVertices[] = {
    50,  50,
    250, 50,
    50,  250,
    250, 250,
};

Note - Do you notice anything different between the data used in this declaration and the one used in the previous project? Don’t worry if you can’t spot it; it’s not immediately obvious.

The vertex positions in the previous example were defined using values that ranged from -1.0 to 1.0. This time, the values are much bigger.

This is down to the OpenGL ES configuration we defined earlier in the initOpenGL method (located inside the ES1Renderer class). Because we configured an orthographic projection that matches the view port, OpenGL ES now renders using pixel coordinates, so the square’s vertices can be defined in pixels rather than the normalized -1.0 to 1.0 values used previously.

Going forward, this will make our lives much easier, as we can more easily position items on the screen and work out how large they will be.

Having defined the vertices for the square, we can define the colors to be used within the square. This is exactly the same as in the previous example.
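For reference, squareColors holds one RGBA value per vertex, defined with byte components; the exact values below are a plausible sketch rather than the book’s listing:

static const GLubyte squareColors[] = {
    255, 255,   0, 255,    // vertex 1: opaque yellow
      0, 255, 255, 255,    // vertex 2: opaque cyan
      0,   0,   0,   0,    // vertex 3: fully transparent
    255,   0, 255, 255,    // vertex 4: opaque magenta
};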

Next, we perform a translation that moves the point at which the square is rendered. As before, we are not changing the vertices of the square, but instead moving the drawing origin in relation to where the rendering takes place:

glTranslatef(0.0f, (GLfloat)(sinf(transY)/0.15f), 0.0f);

Once the translation has finished, we point the OpenGL ES vertex pointer to the squareVertices array and the color pointer to the squareColors array:

glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, squareColors);

If you have a photographic memory, you may notice that there are a couple of lines missing from this section of code that were present in the previous example. When we last configured the vertex and color pointers, we enabled a couple of client states in OpenGL ES, which told OpenGL ES that we wanted it to use vertex and color arrays. There is no need to do that this time because we already enabled those client states inside ES1Renderer in the initOpenGL method. Remember that OpenGL ES is a state machine, and it remembers those settings until they are explicitly changed.

Having pointed OpenGL ES at the vertices and colors, it’s now time to render to the screen:

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

At this point, our lovely colored box is rendered, but it isn’t on the screen just yet. When this method is finished, it passes control back to the GameController and then up to the render method in EAGLView, whose next task is to present the renderbuffer to the screen. It is at that point that you actually see the square on the display.
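That final presentation step inside EAGLView typically comes down to a single call on the EAGL context, along these lines (the context ivar name is assumed):

// Ask Core Animation to display the renderbuffer we just drew into.
[context presentRenderbuffer:GL_RENDERBUFFER_OES];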


Summary
We have created a number of new classes that give us a structure on which we can build going forward. We now have the ability to create any number of different game scenes, each of which will perform its own logic and rendering, all being controlled by the GameController class.

We use this structure throughout this book and build upon it to create Sir Lamorak’s Quest.

In the next chapter, we go more in-depth on how to render images to the screen. This involves looking at OpenGL ES in greater detail and creating a number of classes that make the creation, configuration, and rendering of images in Sir Lamorak’s Quest much easier.


Exercises
If you run the project for this chapter, you see the colored box moving up and down the screen. If you want the project to do more, try making some changes to the project, such as the following:

  • Create a new game scene called TriangleScene and change the rendering code so that it draws a triangle rather than a square.
  • Hint - Rather than drawing two triangles that make a square, which GL_TRIANGLE_STRIP is for, you only need a single triangle; GL_TRIANGLES is great for that. Remember, a triangle only has three points, not four.

  • After you create your new class, initialize it in the GameController initGame method and add it to the dictionary with a key.
  • Hint - Don’t forget to make your new scene the current scene.

    If you get stuck, you can open the CH04_SLQTSOR_EXERCISE project file to see what you need to do.
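As a nudge in the right direction, the vertex data the first hint describes has just three points and is drawn with GL_TRIANGLES, something like:

static const GLfloat triangleVertices[] = {
    150, 250,    // apex
     50,  50,    // bottom left
    250,  50,    // bottom right
};

// ...after pointing glVertexPointer at triangleVertices:
glDrawArrays(GL_TRIANGLES, 0, 3);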

Footnotes

  • The game loop code used is based on a tutorial by Alex Diener at http://sacredsoftwar...html.
  • SynthesizeSingleton; see http://cocoawithlove.com/2008/11/singletons-appdelegates-and-top-level.html.

© Copyright Pearson Education. All rights reserved.
Reprinted with permission

Learning Android Development with Unity



Introducing design3

Hey there! We’re design3, a game development portal that streams award-winning HD training videos to studios, 3D artists, and indie developers. Our Training Center offers over 1,000 videos that cover a variety of 3D tools, game engines, and middleware, all intended to help you master the tools and techniques to make games. We’re thrilled to feature our exclusive Android Development with Unity series here on GameDev.net, and we hope you enjoy it.



Casey Noland
Chief Executive Officer, design3


I'm very happy to feature our first-ever embedded media article in association with design3 and let our community have a look at the kind of content they are working on. Having met and chatted with design3 at a recent conference and learned more about what they are all about, I'm looking forward to crafting a deeper relationship between them and GDNet that will benefit members of both sites and work towards our common goal: helping people learn how to make games.

Drew Sikora
Executive Producer, GameDev.net


Android Development with Unity

If you’re interested in learning how to make games for Android devices, our first piece of advice is to utilize the Unity engine. Unity is an ideal platform for Android development because it is flexible, easy to learn, and it supports multiple scripting languages (JavaScript and C#). In the following design3 video tutorial series, a Unity engineer will walk you through the process of Android development. Lastly, we’ll show you where you can learn how to make your own mobile games.

Chapter 1 - Introduction & Setup

Learn how to set up your developer account, the Android SDK, and your mobile device for development builds. The necessary support files are included.



Chapter 2 - Troubleshooting

Utilize useful resources for troubleshooting problems and issues specific to Android development.



Chapter 3 - Submitting To App Market

Learn how to prepare and submit your games to the Android App Market.


Additional Mobile Game Development

Not very taken with the Android platform? design3 also has tutorials on other mobile game development. Mobile Skater, a fun Unity mobile game that has been downloaded for free from the App Store over 10,000 times, is the focus of its own series, where you’ll learn how to script accelerometer and multi-touch input with JavaScript. You’ll also use raycasting, animations, and mobile input classes to implement movement, tricks, and functionality. Downloadable project files are included to give you the assets and code to make this cool mobile game.


Attached Image: mobile skater.png

Conclusion

We hope you enjoyed the video tutorials on Android Development with Unity, as well as a glimpse of the other mobile development options available. As Unity’s Authorized Training Partner, we have an expansive Unity training library. In addition, we have expert training for UDK, Source, Maya, 3ds Max, Softimage, Photoshop, Mixamo, and Allegorithmic Substance, with more tutorials being added frequently. To view the full Training Center, click here.

With hundreds of hours of video tutorials accessible for only a $20/month subscription, design3 is ideal for both industry veterans keeping up with new tools and trends and aspiring developers searching for professional training, expert advice, and career tips. Follow @design_3 for instant updates.

More Information
>>>Learn about the design3 Training Center
>>>Learn about the design3 Development Community
>>>Learn about design3’s Unity Training
>>>Play Mobile Skater
>>>Download Unity Web Player (free)





Excerpt: Game Development Essentials - Mobile Game Development

Chapter 5: Art for the Small Screen
Painting angels on the head of a pin


Key Chapter Questions

  • What are some key restrictions placed on art for mobile devices?
  • How can screen size and resolution affect a game’s visual design?
  • What art asset requirements are associated with different mobile devices?
  • What are some effective character design techniques used by mobile game artists?
  • What are the benefits and disadvantages of 2D vs. 3D for a mobile game?
You might think that art crammed onto a screen smaller than your hand is not much to speak of—but with a bit of innovation and some practical tips and tricks, the art in a mobile game can turn out to be quite dynamic and engaging. With the advent of more screen real estate, greater color depth, and expanded memory with an eye toward capturing video and photos, mobile devices are fully capable of delivering a visually delightful game experience.


Art for Mobile

Art for mobile games derives from the same “old school” roots as art for more modern PC and console titles. Back when games were being pushed to mobile, scale and scope restrictions made the days of the Atari 2600 look positively epic by comparison. Two-color LCD screens the size of a thumb, memory restrictions, load requirements: These problems had been solved before, but they represented brand new ways of thinking to the current crop of game developers. The visuals, much like early classic games, were often generated programmatically rather than developed by an artist. In fact, many of the early mobile titles were direct knockoffs of classics such as Caterpillar and Tetris—games already associated with tight restrictions that served as examples of “what to do” as this new market began to take off. As mobile games grew more powerful (with photo and video capabilities, music, and larger and more dynamic color screens), they were able to grow and adapt with equal speed—rapidly expanding into 3D and including social features such as leaderboards and player matching.

Attached Image: 05-01 TetrisWorlds.jpg
Attached Image: 05-02 ResidentEvil-TheMercenaries3D.jpg

Quality of art has clearly improved from older mobile games (Tetris Worlds, left, for GBA)
to newer games (Resident Evil: The Mercenaries, right, for 3DS)
Courtesy: THQ & Capcom

There is a tendency, particularly among those who are technologically inclined, to focus on the new and nifty—to focus only on the highest end systems available. With thousands of mobile units in the marketplace that are either older or equipped with different features, there are many potential customers that developers would ignore if they only paid attention to the newest devices. Considering the fact that many carriers still want that depth of coverage, developers ignoring these other devices are looking at a sea of lost potential. The advent of app stores that focus on a single technology and manufacturer has made it easy to pay attention to that one device. However, it is worth noting that the publishers (the companies that are making considerable profit on mobile games) still maintain a focus on carrier decks and creating versions of games that are playable on many different devices—not just the shiny new toys.

Standard Software

A good copy of Photoshop can go a long way in game design; it has become the ubiquitous “do everything” program that nearly every game artist uses on a regular basis. However, while Photoshop may be superior for the task of creating and editing images for mobile, Equilibrium’s DeBabelizer has the upper hand when it comes to file compression. For example, a file saved out of Photoshop as a .png can often be a kilobyte or two larger than a .png file saved out of DeBabelizer—and when it comes to mobile, every kilobyte counts!

For 3D, both Maya and 3ds Max will do the trick—since most programs and programmers are familiar with how they handle the data, and importing/exporting Max or Maya files to game engines or custom coded games is standard practice. There are less expensive options available as well—such as MilkShape, which is a popular tool among indie developers.

Attached Image: 05-03 DeBabelizerPro6.jpg
Attached Image: 05-04 MilkShape3D.jpg

The 2D art tool, DeBabelizer (left), is superior to Photoshop when it comes to file compression—while
MilkShape (right) is an affordable alternative to the 3D standards, 3ds Max and Maya.
Courtesy: Equilibrium & Mete Ciragan, chUmbaLum sOft


Screen Sizes & Resolution Issues


For mobile phones, there is no official overall standard when it comes to screen size—but as the industry has matured, manufacturers have begun to adopt similar parameters. However, screens associated with older devices are smaller, with more variety and less standardization. On phones that are still more “phone” than PDA (which are still the most common units available), the current standard is around 176 x 220, with 12 pixels of that height being taken up by phone-centric elements (e.g., battery, signal)—leaving an effective screen space of 176 x 208. The newest generation of 3G and 4G mobile devices such as iOS- and Android-based phones and tablets not only possess larger screen resolutions (anywhere from 320 x 240 to 1024 x 768), but this entire space is available for development. Take a look at the different styles of mobile devices shown in the images on the next page for a comparison of screen sizes.

Attached Image: 05-05 LG LX400.jpg

Courtesy: LG

Attached Image: 05-06 SamsungReplenish.jpg

Courtesy: Samsung

Attached Image: 05-09 iPhone4.jpg

Courtesy: Apple Inc.

Attached Image: 05-08 SamsungGalaxyTab.jpg

Courtesy: Samsung

Attached Image: 05-40 3DS.jpg

Courtesy: Nintendo

Attached Image: 05-07 KyoceraEcho.jpg

Courtesy: Kyocera

Attached Image: 05-39 PSVita.jpg

Courtesy: Sony Computer Entertainment America

Mobile devices come in many different sizes. Devices shown above include the LG LX400, Samsung
Replenish, Apple iPhone 4, Samsung Galaxy Tab, 3DS, Kyocera Echo, and Sony PlayStation Vita.

Even with the larger resolution, there are still extra considerations when it comes to the art and design aspects of a mobile title. In AAA console titles, it’s all about the hyper-real—including photorealistic textures, skin tones, particle effects, shadows, and lip synching. Photorealistic graphics shrunk down to the size of a screen that is as small as the palm of a player’s hand will be muddy and difficult to see; due to this, the screen size and resolution should play a major part in any decision regarding the visual look and feel of the game.
Art certainly needs to be smaller and readable at high pixel densities. One of the reasons we like iOS is the quality of the displays on Apple devices. We can make small art that still really pops with detail.

—Quinn Dunki (Chief Sarcasm Officer; One Girl, One Laptop Productions)

Visual Style
Games cover a wide variety of possible visual styles; in fact, the overall look and feel of the game will set the tone and expectations for players before they even have a chance to read the title screen. Best practices suggest that the overall style should match the game’s genre (e.g., a noir look for a mystery game or a cartoony look for a carnival shooter). However, we often see successful examples of games with visuals that clash with the gameplay—such as sword-and-sorcery games with a bright, cartoony palette or murder mysteries couched in a sunny suburban backdrop.

Attached Image: 05-10 AssassinsCreedAltaïrsChronicles.jpg
Attached Image: 05-11 GameDevStory.jpg
Attached Image: 05-12 ZenBound2.jpg

There are almost as many different visual styles as there are game genres. Examples shown here include Assassin’s
Creed: Altaïr’s Chronicles
(left, for DS), Game Dev Story (center, for iPhone), and Zen Bound 2 (right, for iPad).
Courtesy: Ubisoft & Kairosoft Co., Ltd & Secret Exit



Outwitters: Creating 2D Art for iOS

For our most recent iOS project, Outwitters, the characters and environments I’ve designed are simply being scaled by a certain percentage between iPhone, iPad, and retina sizes. The final asset generation has been automated with Photoshop scripts. I just make one file and prefix it with what kind of sizes I need spit out (e.g., iPhone, iPad, universal). The character designs in particular were created in vector format in the same way I used to create logos—periodically zooming way out to ensure that the important details would hold up at smaller sizes. Scale had to be determined for each device early on and double-checked for each character before anything was animated. The user interface (UI) for iPhone and for iPad is designed separately, out of necessity. On iOS, screen sizes between phones are consistent—so we don’t have to worry about accommodating weird, “red-headed step children” sizes. We just have to adjust the menus between phone and tablet.

—Adam Stewart (Co-Owner, One Man Left Studios)

Attached Image: 05-13 Outwitters.jpg

Courtesy: One Man Left




Character Design

All games should have a memorable character—and we mean this in the broadest sense of the word. There should be a core element that sets a visual tone and style—whether it’s a distinct character, an iconic game piece, or a significant recurring background component. The goal is for the player to see, instantly recognize, and identify with this element.

Silhouette

One of the precepts of character design both in traditional art (e.g., animation, fine art, comic books) and in games is the use of a strong silhouette. The game’s main character or character class needs to be clearly recognizable against the game background, and it needs to stand out against the other characters in the scene. This is one of the reasons background or “filler” characters look somewhat generic in comparison to the hero of the scene. It is also a sure-fire way to help players identify one set of characters over another. Human beings are designed to recognize patterns; it’s hard-wired into our brains. If members of the Bad Guy Squad wear hats with horns on them—and members of the Good Guy Squad have funny dome-shaped helmets—then it’s a quick and easy task to tell them apart when the rockets are flying. Silhouettes are often based on bold, primary shapes (e.g., squares, triangles, circles), and each shape tends to evoke a certain idea—in part because we have already been trained by all media to recognize a certain silhouette as a powerful “hero” type, or a brick-like, impenetrable “warrior” type.

Silhouette Check

A quick way to see if you have an effective design is to do what is called a “silhouette check.” Shade your characters completely black and look to see if you can tell them apart just from the silhouette they show.

Attached Image: 05-14 BusterSword.jpg
Attached Image: 05-15 SonicSilhouette.jpg

Do you recognize these characters by just looking at their silhouettes?
Courtesy: Square Enix & Sega

When you have a strong silhouette to start with, it becomes a simpler thing to reduce that character down to a miniscule size while still maintaining those distinctive features that make them stand out from a distance. This is the reason the “bobblehead” character design is so popular on smaller screen sizes. Reducing the body down to a generalized stick figure, and focusing on the defining characteristics of the face, sets up a clearly recognizable form with which the player will be able to identify in an instant. The same goes for large-scale or exotic weaponry; visually distinctive elements such as the “buster” swords from the Final Fantasy franchise become part of that character silhouette and should be taken into account as such.

Color

With the smallest screen sizes, having a distinctive silhouette simply isn’t enough. There’s barely enough room to determine one eight-pixel tall warrior from another. This is when the internal design of the character comes into play—such as using unified colors to represent a particular group of characters, or coloring the main character in a different palette than the rest of the world (e.g., placing a flaming orange t-shirt on a character when all the backgrounds have been colored in shades of blue). These design precepts can be applied to non-character elements that are important for players to observe and identify—such as vehicles, structures, props, spaceships, giant stone obelisks, or sliding puzzle pieces.

Attached Image: 05-19 AngryBirds.jpg

Courtesy: Rovio Ltd.

Attached Image: 05-18 TheIncredibles.jpg

Courtesy: THQ

Attached Image: 05-17 Link (LegendofZeldaOcarinaofTime3D).jpg

Courtesy: Nintendo

The color palette and costuming of game characters can often be as great an identifier as their overall body shapes and silhouettes (Angry Bird, The Incredibles, and Link, shown).


A Dimension’s Worth of Difference

The real distinction between 2D and 3D games is the programming that lies beneath—or the game’s engine. In console and PC titles, the game engine comprises core bases of code that can be used to build a whole range of different games. In a 3D game, the code handles polygons—either created on the fly or imported from an external program such as 3ds Max or Maya—while in a 2D game, it relies on sprites. (A sprite is a “cutout” image—whereas a polygon is an actual piece of geometry, defined by the placement of three corner points called vertices.) Even if the end result seems to be 2D, it is entirely possible that a 3D code base is being used; this might seem like a waste, but it has been and is still being done. Many times, the use of a 3D engine can allow designers greater freedom and flexibility than they might have with a 2D engine—not only within the confines of a single game, but as a production house as well. The 2.5D “hybrid child” (discussed on the following page) that is beloved by the role-playing game (RPG) genre is actually a function of the visual design rather than a defining characteristic of the underlying programming. In all cases, however, the graphics must be designed to take advantage of the strengths and weaknesses of each type of base code.

2D

Although 2D might be considered the “older” way of doing things, most games for mobile—particularly those for the older smart and feature phones—are built in 2D. In many cases, the graphics are entirely drawn and animated by the underlying programming. In the case of mid-grade to higher-end smartphones, the art is created separately by the artist. Flat, two-dimensional animated sprites are moved around on painted backgrounds. (“Flat” does not mean lifeless and stiff; in fact, there are a number of 2D games with spectacular painted or pre-rendered backgrounds and innovative gameplay.) An illusion of 3D depth is created by applying various perspective and parallax tricks, but the code underlying them is geared toward moving 2D animated elements around in the x and y axes only. Any illusion of depth is created in the art itself, rather than being handled by the programming.

Attached Image: 05-20 PocketGod.jpg

Simple parallaxing and creative visuals in games such as Pocket God
can give depth to a game without moving to a true 3D environment.
Courtesy: Bolt Creative

Speed is a feature. The advantage of 2D is that games are running at 60fps and faster—something harder to achieve with detailed 3D graphics. Smaller screens require instantly understandable user interfaces (UIs).

—Chris Ulm (Chief Executive Officer; Appy Entertainment, Inc)

3D
The rule of thumb has been that unless some aspect of 3D is needed to make the gameplay function, it’s better to go with 2D. It’s important to carefully consider the cost of a larger game size against the effect it will have on the gameplay of the mobile game. At the moment, 3D is reserved for leading-edge smartphones with larger screen sizes. In 3D, there are several different solutions; for example, some games are technically 3D—using polygons to allow the movement of a 2D sprite in all three axes—but they still heavily employ painted 2D backgrounds and sprites. In the case of 3D, the programming has been designed to handle polygons and to work with an x, y, and z axis space (rather than just the x and y axes of 2D). The polygons are either created by the programming or modeled and animated in another program and brought into the game engine, where they function very much like sprites; the programming slides them sideways, up, and down—but the animation (limbs moving, guns firing) is all created by the artist.

Attached Image: 05-21 Agiliste.jpg

Games such as The Agiliste rely on 3D geometry to deliver a deeper visual experience.
Courtesy: Bushi-go, Inc.

2.5D

Pseudo 3D or 2.5D is an old and established way of giving a 3D look and feel to what is essentially a 2D game. Especially popular in “sim” type farming games and “plate-spinning” style puzzlers, 2.5D involves backgrounds that have been painted in perspective and sprites that are scaled (sized up and down) by the underlying game programming in order to give the illusion of moving forward and backward in space. Now that smartphones are capable of full 3D, many games incorporate painted 2.5D backgrounds with animated 3D characters—or tiled 2D backgrounds with 3D animated characters—to help push the illusion even farther while still keeping file sizes to a minimum.

Attached Image: 05-22 MafiaWars-Yakuza.jpg

Pseudo 3D or 2.5D, used in games such as Mafia Wars: Yakuza, is an excellent
way to add depth to a game environment without pushing a game into true 3D.
Courtesy: Digital Chocolate, Inc.



Layering Program Graphics & Art

One area that doesn’t get explored very often is the layering of program-generated art with painted 2D art and sprites, which keeps the game size small while still allowing for larger gameplay areas. A good example of this is Digital Chocolate’s 3D Beach Mini Golf—which uses programmed art for the backgrounds and animation associated with the waves, sky, sand, and green. However, more detailed elements such as golfers and mini golf obstacles were created as 2D sprites and layered on top of the programmed background art—which served to significantly reduce the size of the game while still giving a very effective, pseudo 3D look. Sometimes, the effect is reversed—and programmed game art is layered on top of painted 2D art to enable random generation of different elements. For example, in 3D Rollercoaster Rush, the tracks of the rollercoaster are generated by the underlying programming. There is no “right” way to create many of these games. Layering 2D, 3D, and programmatic elements can be an effective way of building the visuals in a game while still keeping the file sizes as small as possible.

Attached Image: 05-23 Beach Mini Golf.jpg
Attached Image: 05-24 3DRollercoasterRush.jpg

In 3D Beach Mini Golf (left), 2D characters and props are layered over programmed art backgrounds,
while in 3D Rollercoaster Rush (right), programmed art is layered over 2D art.
Courtesy: Digital Chocolate, Inc.



Why 3D?

I’m working on one project right now that is trying to push the limits of 3D rendering. In the case of mobile development, I sometimes think 2D or 2.5D is best—since having a truly 3D experience with the current technology can really impede other areas of the game (CPU power, performance, controls and playability) and create a game with a very large data size. While I appreciate the team’s passion and drive to create something amazing, sometimes less is more. You have to draw a line in the sand to re-evaluate whether having 3D is really worth all of the costs and limitations placed on other parts of the game.

—Nathan Madsen (Composer & Sound Designer, Madsen Studios LLC)

“It’s All About the Gameplay”
I’ve been in the game industry for so long that mobile art seems more like a throwback than a restriction. I actually like mobile art a lot; instead of sweating over graphic power and amazing shaders, it’s all about the gameplay. In a small screen size, you can make things really pretty without having to go crazy on the poly count. Whether 3D or 2D, mobile gaming is about the experience—not full immersion.

—Alex Bortoluzzi (Chief Executive Officer, Xoobis)


Pixel Art vs. Vector Graphics

Pixel art is somewhat of a misnomer, since practically all still images are made up of pixels. Early game artists were referred to as “pixel painters” because their work consisted primarily of drawing characters and backgrounds pixel by pixel. As the memory available for graphics expanded, and the tools for drawing and painting digitally became more advanced, pixel painting has given way to more advanced techniques. With the advent of mobile games, however, a new market for an old skillset emerged—and the term “pixel art” stuck. The big problem with what is known as “pixel art” is that it’s limited by the information contained in the pixels; “bitmap art” is more fitting. When the art is scaled up, the new space must be filled by a (somewhat sophisticated) guess: The program takes the existing pixels (e.g., a black and white pixel side by side) and averages out the color in a technique known as anti-aliasing to create the new pixels. The larger a bitmap image is scaled up, the blurrier it gets. However, this technique works well for reducing graphics down to their smallest possible size; manipulating the image on the pixel level, choosing which pixel to cut, and determining whether to make the pixel 80% black or 60% black will provide the greatest degree of control while minimizing the size of the game.

Attached Image: 05-25 Bloons.jpg

The smooth, vector-based 2D images in Bloons are scaled up and down to fit the screen—and they
can be pulled and used in print and web advertising for the game without a loss in quality.
Courtesy: Kiwi Ninja

Vector graphics comprise entirely different types of images. In layman’s terms, vector graphics are generated by a mathematical function or series of functions. Since the images are created on the fly by the program, based on a set of numbers, they can be scaled up and down with only a bare minimum of loss. Programs such as Flash and Illustrator use vector graphics to great effect for clean, easily scaled graphics for web and print use. Web-based games use vector graphics so heavily in part because the image file size is based on the vector file—not the size of the end graphic. This is a key component in many games developed using Flash for delivery on web and mobile platforms.

As efficient as this all sounds, it is far more common in mobile for vector graphics to be used in the initial creation of the visuals, which are then rendered out into a bitmap or pixel-based format for use in a mobile game. Utilizing the original art as a vector file opens up a wealth of opportunities for the creation of high-end print, web, and broadcast marketing materials.
Use Your Space Wisely

As memory space continues to expand, there is a tendency to let best practices slip and file sizes bloat to fill the space. Artists and programmers should keep in mind that space is precious—and every bit wasted could have been used to add value to the game.
Palette Tricks

Palettizing is perhaps the oldest trick in the book. Early game developers used it to reduce the file size for an entire level’s worth of images, dramatically cutting down the information that needed to be stored within those files. The fewer individual colors available in any given image, the smaller the file size for that image. For example, an image with a 128-color palette is smaller than an image with a 256-color palette—and if the colors can be cut down to 32, 16 or even 8, a large amount of space will be saved.
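To put rough numbers on this: an uncompressed 176 x 208 image needs one byte per pixel with a 256-color palette (176 x 208 = 36,608 bytes, roughly 36 KB), but only four bits per pixel with a 16-color palette (roughly 18 KB), ignoring the palette data itself and before any compression is even applied.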

Attached Image: 05-26 BlueFish.jpg

full-color image, 549kb / 256-color image, 184kb / 32-color image, 92kb / 8-color image, 52kb
Courtesy: KU


Attached Image: 05-26 BlueFish.jpg

full-color image, 549kb / 256-color image, 143kb / 32-color image, 86kb / 8-color image, 39kb
Courtesy: KU

Reducing the number of colors used to create an image also reduces the file size (top row). Hand-retouching images to consolidate colors as they are reduced in the palette can also help to reduce file size (bottom row).

The images in the top row above show adjustments to the palette, allowing the program (Photoshop in this case) to handle all of the decision-making during this process. Note the big differences in file size as the palette is reduced to eight colors. Notice how the image gets more “speckled” as it is reduced to fewer and fewer colors? Each of these images reflects a reduction to a smaller palette directly from the original image; for example, the final image reflects a 24-bit color image that has been reduced to eight colors in a single action.

Now for a little hand-retouching: A process known as walking down has been applied to the images in the bottom row. Starting with a single, 24-bit color image and reducing its number of colors results in a file size reduction but preserves some of the quality as well. Although this is a much more time-consuming method of adjusting images, the final product looks far superior. Adding the step of hand-retouching each image before reducing the palette results in a higher level of quality.

This time, the reduction in file size comes not only from storing fewer colors but from the location data as well; the file stores the location and color of each individual pixel, but when those colors are consolidated—making the image more cartoony—the file can record the information for an entire swatch of color rather than the color of every pixel individually (which can be far less data). This same trick can be applied to characters and other smaller sprites. In the case of animation sequences, an even greater decrease in file size can be achieved by combining all the animation frames into a single file with a single palette; this reduces the number of colors and the overall number of palettes that need to be saved.

Masking

When mobile graphics are set up with an element of transparency, the end result is commonly referred to as masking—which involves trimming or blocking out areas of the bitmap that should not be visible—much like using masking tape to cover up parts of the wall you don’t want to get paint on. While this may not seem like such a big deal at the outset, this technique is at the core of many visual tricks that can be used to enhance both 2D and 3D games without doing too much damage to the file size.

There are two ways to handle the edges on a masked object; the method used will depend on the mobile device and associated programming. Older smartphones will be restricted to hard-edged, aliased masking. The instance in the accompanying image depicts a hard-edged mask; a stair-step effect results where the square pixel of the character’s color backs up against the square pixel of the background color (which will ultimately be invisible). This is called aliasing—and the larger the image, the less noticeable the effect. However, for mobile game sprites as small as 8 x 8 pixels, this effect will be very noticeable and will help to define the end result of the art.

Attached Image: 05-28 Masked01.jpg


Attached Image: 05-29 Masked02.jpg

The pink-colored portions of the above images will be transparent when they are used in the game.
Setting a rarely used color offsets the risk that your game will show false transparencies.

Courtesy: KU

The color used for the background or masked color can be set in a number of ways. Sometimes, the color is set in the code—and it will be necessary to check with the programmers to see what was used; this happens quite a bit on smaller smartphones. When using a custom-built game engine, it’s often possible to choose the color. In this case, we recommend a color that is not used in the game (such as RGB 236, 16, 137—a particularly deep shade of pink); this will ensure that the color isn’t accidentally used elsewhere, and it will also make it easier to spot in more complex background images.
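Conceptually, a hard-edged mask is nothing more than a per-pixel comparison against that key color; here is a minimal sketch (illustrative only; real engines and file formats each handle this in their own way):

#include <stddef.h>

typedef struct { unsigned char r, g, b, a; } Pixel;

/* Any pixel that exactly matches the mask color (RGB 236, 16, 137)
   becomes fully transparent; every other pixel stays opaque. */
void applyColorKey(Pixel *pixels, size_t count) {
    for (size_t i = 0; i < count; i++) {
        if (pixels[i].r == 236 && pixels[i].g == 16 && pixels[i].b == 137) {
            pixels[i].a = 0;
        } else {
            pixels[i].a = 255;
        }
    }
}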

To choose a color, access the palette in Photoshop and manually set the first color in the palette to that color. When a file is set to use masking, Photoshop will automatically assign that first color slot to be transparent. When the file is saved, Photoshop will ask if transparency should be used. (Double-check this with the programmers, since sometimes the masking information saved into the file by the graphics editing programs will clash with what the programmers are doing.)

If the file will be saved in .png format (the most common compressed file type currently used in mobile development), there will also be an option of creating an “anti-aliased” edge on the graphic. Since animated sprite sizes are so small, it is not usually recommended to use anti-aliasing on characters—but it helps to smooth out the transition for backgrounds and larger animated images. In this case, the file is being anti-aliased to a transparent background rather than a background color.



Fun with Sprites

A quick way to handle transparency colors in Photoshop is to create a sprite in layers, but be sure to leave the background layer empty. After creating the sprite, delete the background layer under the “Layers” tab to the right. The resulting sprite will be on a background that looks like a soft grey checkerboard. When indexing the file, be sure “transparent” is checked—and it will save with the anti-aliasing intact. Note that this trick will not work when using a masked color; the program will only mask out the specific color assigned and leave any blended colors behind.

Attached Image: 05-41 PhotoShop Index window.jpg

Courtesy: KU

Many programs will automatically use the first position in a 256-color palette as the “transparent” color. Checking the “transparency” box during indexing will ensure that no color ends up in that first slot; it stays empty.


Scrolling Backgrounds & Parallax Motion

Let’s be blunt, shall we? The 176 x 208 resolution still reflects a tiny screen size. Odds are that the entire game won’t fit on a single screen. It’s big enough for classic puzzle games such as Tetris. However, for larger games such as RPGs, more space will be needed—and this means utilizing scrolling backgrounds. The classic form of a scrolling background is an image that is long enough so that it won’t be very noticeable when it repeats—and where the front and back edges have been matched up so it can tile seamlessly from end to end.

Attached Image: 05-30 GhostTrainRide.jpg

This background (from Ghost Train Ride—a Halloween-themed version of Rollercoaster Rush)
has been matched up edge to edge so that it can be looped seamlessly.
Courtesy: Digital Chocolate, Inc.

Remember those old Hanna-Barbera cartoons where the same door and end table showed up over and over again in the background as one character chased another down a never-ending hallway? This is the same sort of thing. The real key is to make the background seamless enough so that it doesn’t distract the player from the game.

Attached Image: 05-31 TiledMap.jpg

This background is composed of different tiles that can be matched up
edge to edge, like puzzle pieces, to create a complete image.

Courtesy: KU

It’s also possible to have a background that scrolls in all directions—up and down, side to side—but in this case, there is usually a finite edge rather than a looped background. A smartphone screen operates like a little window onto the larger playfield. These large backgrounds are often constructed of tiles—small image squares that can be repeated by the program where needed. The advantage is that the game only needs one copy of each of these tiles, which can then be repeated as often as they are needed to fill the spaces defined by the level designer.
Like a Puzzle

There are a number of free or inexpensively licensed tile editors such as Tile Ed and Mappy that will allow developers to load in 32 x 32 tiles and use them to lay out a complete map. Once the map is designed, the tile location data can be exported in a format the programmers can use to build the map in the game.
The trick that makes these backgrounds come to life is called parallax motion, which relies heavily on masking to give the illusion of realistic depth of field to a scene. Screen elements are layered and moved at slightly different speeds in order to add this illusion of depth. Parallax motion has been used in games since the 1980s to achieve additional depth of field. It can work equally well on side-scrolling backgrounds or the larger format tiling backgrounds just discussed.

Attached Image: 05-32 Parallax01.jpg

Different layers scroll past the player at varying speeds
in order to give an illusion of depth.

Courtesy: KU

In the background image above, each of the elements shown is a looping, side-scrolling background. As the player moves along the screen, each of these backgrounds scrolls at a different rate; the one in front is usually fastest, while the one furthest back is the slowest. Another option might be to overlay layers on top of a tiled background. The motion will be different, and it is likely that a subtler hand will be needed than with side-scrolling—but this can add a great deal of vertical depth.
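In code, the whole effect reduces to scaling a single scroll position by a per-layer factor; a minimal sketch (the names and factors here are illustrative, not from any particular engine):

#include <math.h>

/* Layers further back use smaller factors, so they scroll more slowly. */
static const float layerFactor[3] = { 1.0f, 0.5f, 0.2f };

/* Wrap each layer's offset to that layer's width so the loop tiles seamlessly. */
float offsetForLayer(int layer, float cameraX, float layerWidth) {
    return fmodf(cameraX * layerFactor[layer], layerWidth);
}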

Sprites & How to Make Them

Animated characters and objects are usually handled by creating sprites. The actual movement of the object on the screen is done by the programming, but the internal movement (e.g., the back and forth movement of a character’s legs, the unfurling of wings, the swinging of swords, the folding and unfolding of a puzzle piece) is part of the animated sequence of frames.

Depending on the programming for a game, sprites can be handled in several ways. First, there could be a list of individual frames; each file is named in sequence (e.g., walk001.png, walk002.png, walk003.png), and the program calls a particular sequence when it needs it.

Attached Image: 05-33a Trumpeter01.jpg


Attached Image: 05-33b Trumpeter02.jpg


Attached Image: 05-33c Trumpeter03.jpg


Attached Image: 05-33d Trumpeter04.jpg

Sprite animation sequences can be saved as separate files with sequential
filenames (e.g., Trumpet001.bmp, Trumpet002.bmp, Trumpet003.bmp).
Courtesy: KU


Secondly, a separate filmstrip can be used for each animation. It will be necessary to discuss this with the game’s programmers; some code for a specific frame width (each frame of the animation is presumed to be a specific size), while others code so that specific colors can be used to indicate the location of the center point of the sprite. The latter is helpful for clever secondary animation tricks where the program needs to know where the edges of each frame are (which is especially useful when there are animation frames of different sizes).

Attached Image: 05-34 Trumpeter_Strip.jpg

Sprite animation sequences can be saved as a single strip of images. Note the green-colored,
one-pixel wide markers, telling the program where one frame starts and the other ends.
Courtesy: KU

Thirdly, in a holdover from RPGs on handheld devices, an animation block can be used—a single file in which every animation frame for a specific character is placed. The end result is a large block of animation frames that is on the unwieldy side, particularly for mobile titles—but it allows the file size to be reduced by palettizing all the frames at once.

Attached Image: 05-35 Trumpeter_Block.jpg

Sprite animation sequences can be saved as a large block of images. Note the same green-colored, one-pixel wide markers;
these tell the program where one frame starts and the other ends. Blocks can be unwieldy and hard to use for mobile.
Courtesy: KU

Processing power and data throughput are always going to be concerns when developing for mobile platforms. We try to create our art in as efficient a manner as possible, and we heavily manage our data.

—Gary Gattis (Chief Executive Officer; Spacetime Studios)

Break It Up

For a very large sprite with only portions of the character animated (e.g., a big boss), it can be beneficial to break the image up into individual animated pieces and reassemble them in-game rather than trying to animate the entire character over a series of overly large sprite images.

Christopher Onstad on Mobile Art Asset Restrictions

Attached Image: Onstad, Christopher MGD.jpg

Christopher P. Onstad (Lead 3D Modeler & Texture Artist, Mega Pickle Entertainment)

Courtesy: CO

Christopher Onstad is a 2D/3D artist and game developer who loves all aspects of production. Currently a freelancer, Christopher has worked as a lead modeler, production artist, graphic artist, illustrator, and web designer. He lives in San Francisco, California with his wife, Kerri, and his son, Warren.

There are numerous restrictions on art assets when developing for mobile platforms—far more than for games developed for consoles or the PC. First and foremost, there’s file size. Whether you’re using Unity to develop a 3D game for iOS, or Flash/HTML5 for a heavily stylized 2D game, file sizes must be kept to the absolute minimum—as small as possible without lessening the quality of the images produced. In some cases (and in my own experience), even the best-looking designs need to be redesigned occasionally in order to maintain a fast frame rate. No player wants to encounter choppy frames while navigating the world or engaging in a battle with an enemy. The bottom line is playability. A great-looking game on the iPhone 4 will only be as successful as its gameplay and “hook.” Great art enhances the experience, but it also has the potential to hamper it. Artistic designers understand and take technical limitations into consideration while they develop graphics, 3D models, and texture maps, so that redesigns are done merely to adjust aesthetics.

Screen size and resolution directly influence the design direction of any game. Moreover, the more pixels one has to play with, the more one can push the limits of the aesthetics. The iPhone 4, for instance, has more pixels on its display than prior generation iPhones and smartphones—so more details can be seen on textured 3D models and 2D graphics look crisper, sharper, and cleaner. This means that as an artist you can add more details to your assets and only worry about the pure processing capabilities rather than final output of displayed images. With 3D on mobile platforms, there’s the advantage of developing full-blown first-person shooters (FPSs) and third-person games that closely mirror those played on the PS2 and Xbox. Mid-core games, as they are now being called, are on the rise—and they offer compelling full-blown stories to accompany highly developed art assets in three dimensions. The only technical limitation is on the device itself (processing power), which requires artists and designers to produce a lower level of detail (LOD) in order to achieve playable frame rates.

The advantage of 2D art is that it takes much less time to revise and animate, since you’re only working in two dimensions on any device. What you see is what you get, literally. Every pixel needs to be accounted for—and players are less forgiving of poorly executed 2D graphics than they are with a seam in a 3D textured object.


3D Art Options

There is 3D and then there’s 3D. On the older or less game-oriented phones, 3D simply isn’t an option. On the higher-end smartphones, 3D needs to be handled with custom programming and is often restricted to primitive shapes rather than the detailed worlds we are used to thinking of when we hear “3D.” On the highest-end units, 2D and 3D game engines are emerging much like those found in game console development—as well as custom application programming interfaces (APIs) for mobile development environments such as Java ME. Working in 3D for mobile requires a different set of constraints, but many of the tricks for 2D discussed in this chapter cross-apply to 3D. Animated 2D sprites can be applied to flat, moving polygons to help provide added visual effects without going through the trouble of creating 3D models.

Attached Image: 05-36 RealRacing2HD.jpg

In games such as Real Racing 2, 2D masked images and sprites can be combined with 3D to add extra depth and detail.
Courtesy: Firemint

At the right end of the accompanying screenshot, there are crowds of people watching the player put the pedal to the metal and rip down the track. Rather than individual characters, however, the player sees a flat plane with a masked image of a crowd placed on it. This gives the player the feel of a crowded space without the cost of modeling 20 or 30 people. Remember, just because the game is 3D doesn’t mean every little bit of it has to be a 3D mesh.
Mobile Art Guidelines

A good rule of thumb is that a mobile game must look like a higher-end 2D SNES game, at the very least—something that would sell for at least $30 but realistically only costs a buck or two. Design and layout must be at the forefront of aesthetic considerations, and elements must all work on a tiny screen without looking excessively cluttered. Generally, 2D implies “easier to jump into and control,” whereas 3D might mean “early PlayStation gaming”—which implies more value if done with a budget. Higher budget 3D often looks quite stellar and can command higher price points, even (and often) at the expense of gameplay depth.

—Ron Alpert (Co-Founder, Headcase Games)

Powers of Two
One of the key restrictions of dealing with a 3D engine (as opposed to custom-coded 3D) is that the graphics need to be developed in powers of two. This can result in a little extra waste on the sprites, but it will allow the opportunity for extra space savings on the textures used for the environment. Numbers that are powers of two (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024) are the ideal measurements for most computer memory to process, and mobile units are no different.

Attached Image: 05-37 PowersofTwo.jpg

“Powers of two” refers to the number of pixels on a side. Image sizes can be square as well as rectangular.
Courtesy: Diagram by Per Olin

It is worth noting that the dimensions of the file must be in powers of two, but these can be mixed and matched to use rectangular as well as square files. Stretch or squash the textures to fit so that a texture of a door that might be 128 x 300 could be squashed down to fit onto a 128 x 256 texture, then stretched back out to fit when applied to the in-game 3D object. By its very nature, the stretch and squash distorts the image—so it’s better to place a masked sprite into a file with larger dimensions to retain the integrity of the hard-edged mask.
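If you want to sanity-check texture dimensions in an asset pipeline, a tiny helper like this hypothetical one covers it:

/* Round a texture dimension up to the next power of two.
   nextPow2(300) == 512, which is why the 128 x 300 door above is
   squashed onto a 128 x 256 texture (or padded up to 128 x 512). */
unsigned int nextPow2(unsigned int n) {
    unsigned int p = 1;
    while (p < n) p <<= 1;
    return p;
}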
Limited Palette

Be sure to palettize textures before applying them to 3D models. This will help ensure that they are as small as possible before the files are sent to the game engine.
Batching Files

Let’s say that each of 100 animation frames needs to be reduced to the exact same 64-color palette, and each needs to have the background color (currently a lurid green) replaced with the pepto-pink that has been coded by the programmers. Sounds pretty tedious, doesn’t it? Without planning ahead (and let’s face it: planning ahead isn’t always in the plan!), each individual file will most likely need to be opened, modified, and re-saved. Fortunately, this problem was solved a long time ago by art programs such as Photoshop and DeBabelizer, which have the ability to batch files. Batching involves setting up a macro to execute a certain set of instructions and then asking the program to execute that set of instructions on an entire folder full of files. Photoshop makes this easy by allowing a set of actions to be recorded as they are executed. For example:

Open File → Select Color → Delete Color → Reduce File to 32 Colors → Save File with a New Name → Close File

Attached Image: 05-38 PhotoShop batch window.jpg

The ability to “batch” files—making the exact same alteration to many images—through a program
such as DeBabelizer or Photoshop, can shave hours off a product’s development time.
Courtesy: KU

Art generation for mobile devices is almost universal across platforms. First and foremost, mobile game artists must take into account the resolution (pixels per inch) of the target device—which means that a single set of high-resolution art assets can be created and then adjusted for different handheld devices, rather than having to create a custom set of assets for each iteration.


Something Old, Something New

Developing art assets for a mobile title is a blend of both old and new techniques. Tricks that worked for developers back when Pac-Man was king are still just as valid today and can help go a long way toward keeping file sizes as low as possible. There are new uses for old ideas in game art development being put forth all the time; the openness of mobile games allows a lot of room for innovation—for mixing and matching and trying new ways of building games while still staying within the constraints of file and screen size.

Now that we’ve taken a look at the wants and needs of the artistic development side of the mobile game process, it’s time to tackle the programming. Chapter 6 focuses on engineering development tools and techniques associated with different operating systems and mobile devices.

Adobe Flash 11 Stage3D: Setting Up Our Tools

Adobe's Stage3D (previously codenamed Molehill) is a set of 3D APIs that has brought 3D to the Flash platform. Because it is a completely new technology, there were almost no resources to get you acquainted with this revolutionary platform, until now.

In this article by Christer Kaitila, author of Adobe Flash 11 Stage3D (Molehill) Game Programming, we will:
  • Obtain Flash 11 for your browser
  • Get all the tools ready to compile Stage3D games
  • Initialize 3D graphics in Flash
  • Send mesh and texture data to the video card
  • Animate a simple 3D scene
Before we begin programming, there are two simple steps required to get everything ready for some amazing 3D graphics demos. Step 1 is to obtain the Flash 11 plugin and the Stage3D API. Step 2 is to create a template AS3 project and test that it compiles to a working Flash SWF file.

Once you have followed these two steps, you will have properly "equipped" yourself. You will truly be ready for the battle. You will have ensured that your tool-chain is set up properly. You will be ready to start programming a 3D game.

Step 1: Downloading Flash 11 (Molehill) from Adobe

Depending on the work environment you are using, setting things up will be slightly different. The basic steps are the same regardless of which tool you are using, but in some cases you will need to copy files to a particular folder and set a few program options to get everything running smoothly.

If you are using tools that came out before Flash 11 went live, you will need to download some files from Adobe which instruct your tools how to handle the new Stage3D functions. The directions to do so are outlined below in Step 1.

In the near future, of course, Flash will be upgraded to include Stage3D. If you are using CS5.5 or another new tool that is compatible with the Flash 11 plugin, you may not need to perform the steps below. If this is the case, then simply skip to Step 2.

Assuming that your development tool-chain does not yet come with Stage3D built in, you will need to gather a few things before we can start programming. Let's assemble all the equipment we need in order to embark on this grand adventure, shall we?

Time for action – getting the plugin

It is very useful to be running the debug version of Flash 11, so that your trace statements and error messages are displayed during development. Download Flash 11 (content debugger) for your web browser of choice.

At the time of writing, Flash 11 is in beta and you can get it from the following URL:

http://labs.adobe.co...shplayer11.html

Naturally, you will eventually be able to obtain it from the regular Flash download page:

http://www.adobe.com.../downloads.html

On this page, you will be able to install either the ActiveX (IE) version or the Plugin (Firefox, and so on) version of the Flash player. This page also has links to an uninstaller if you wish to go back to the old version of Flash, so feel free to have some fun and don't worry about the consequences for now.

Finally, if you want to use Chrome for debugging, you need to install the plugin version and then turn off the built-in version of Flash by typing about:plugins in your Chrome address bar and clicking on Disable on the old Flash plugin, so that the new one you just downloaded will run.

We will make sure that you installed the proper version of Flash before we continue.

To test that your browser of choice has the Stage3D-capable incubator build of the Flash plugin installed, simply right-click on any Flash content and ensure that the bottom of the pop-up menu lists Version 11,0,1,60 or greater, as shown in the following screenshot. If you don't see a version number in the menu, you are running the old Flash 10 plugin.

Attached Image: image001.gif


Additionally, in some browsers, the 3D acceleration is not turned on by default. In most cases, this option will already be checked. However, just to make sure that you get the best frame rate, right-click on the Flash file and go to options, and then enable hardware acceleration, as shown in the following screenshot:

Attached Image: image002.gif


You can read more about how to set up Flash 11 at the following URL:

http://labs.adobe.co...flashplayer11/

Time for action - getting the Flash 11 profile for CS5

Now that you have the Stage3D-capable Flash plugin installed, you need to get Stage3D working in your development tools. If you are using a tool that came out after this book was written that includes built-in support for Flash 11, you don't need to do anything—skip to Step 2.

If you are going to use Flash IDE to compile your source code and you are using Flash CS5, then you need to download a special .XML file that instructs it how to handle the newer Stage3D functionality. The file can be downloaded from the following URL:

http://download.macr...ile_022711.zip

If the preceding link no longer works, do not worry. The files you need are included in the source code that accompanies this book. Once you have obtained and unzipped this file, you need to copy some files into your CS5 installation.
  • FlashPlayer11.xml goes in:
    Adobe Flash CS5\Common\Configuration\Players
  • playerglobal.swc goes in:
    Adobe Flash CS5\Common\Configuration\ActionScript 3.0\FP11
Restart Flash Professional after that and then select 'Flash Player 11' in the publish settings. It will publish to a SWF version 13 file.

As you are not using Flex to compile, you can skip all of the following sections regarding Flex. As simple as that!

Time for action – upgrading Flex

If you are going to use pure AS3 (by using FlashDevelop or Flash Builder), or even basic Flex without any IDE, then you need to compile your source code with a newer version of Flex that can handle Stage3D.

At the time of writing, the best version to use is build 19786. You can download it from the following URL:

http://opensource.ad...nload+Flex+Hero

Remember to change your IDE's compilation settings to use the new version of Flex you just downloaded.

For example, if you are using Flash Builder as part of the Adobe Flex SDK, create a new ActionScript project: File | New | ActionScript project. Open the project Properties panel (right-click and select Properties). Select ActionScript Compiler from the list on the left. Use the Configure Flex SDKs option in the upper-right corner to point the project to Flex build 19786 and then click on OK.

Alternately, if you are using FlashDevelop, you need to instruct it to use this new version of Flex by going into Tools | Program Settings | AS3 Context | Flex SDK Location and browsing to your new Flex installation folder.

Time for action – upgrading the Flex playerglobal.swc

If you use FlashDevelop, Flash Builder, or another tool such as FDT, all ActionScript compiling is done by Flex. In order to instruct Flex about the Stage3D-specific code, you need a small file that contains definitions of all the new AS3 that is available to you.

It will eventually come with the latest version of these tools and you won't need to manually install it as described in the following section. During the Flash 11 beta period, you can download the Stage3D-enabled playerglobal.swc file from the following URL:

http://download.macr...bal_071311.swc

Rename this file to playerglobal.swc and place it into an appropriate folder. Instruct your compiler to include it in your project. For example, you may wish to copy it to your Flex installation folder, in the flex/frameworks/libs/player/11 folder.

In some code editors, there is no option to target Flash 11 (yet). By the time you read this book, upgrades may have enabled it. However, at the time of writing, the only way to get FlashDevelop to use the SWC is to copy it over the top of the one in the flex/frameworks/libs/player/10.1 folder and target this new "fake" Flash 10.1 version.

Once you have unzipped Flex to your preferred location and copied playerglobal.swc to the preceding folder, fire up your code editor. Target Flash 11 in your IDE, or whatever version number is associated with the folder you used as the location for playerglobal.swc. Be sure that your IDE will compile this particular SWC along with your source code.

In order to do so in Flash Builder, for example, simply select "Flash Player 11" in the Publish Settings. If you use FlashDevelop, then open a new project and go into the Project | Properties | Output Platform Target drop-down list.

Time for action – using SWF Version 13 when compiling in Flex

Finally, Stage3D is considered part of the future "Version 13" of Flash and therefore, you need to set your compiler options to compile for this version. You will need to target SWF Version 13 by passing in an extra compiler argument to the Flex compiler: -swf-version=13.

1. If you are using Adobe Flash CS5, then you already copied an XML file which contains all of these changes, as outlined above, so this is done automatically for you.

2. If you are using Flex on the command line, then simply add the preceding setting to your compilation build script's command-line parameters (see the example after this list).

3. If you are using Flash Builder to compile Flex, open the project Properties panel (right-click and choose Properties). Select ActionScript Compiler from the list on the left. Add to the Additional compiler arguments input: -swf-version=13. This ensures the outputted SWF targets SWF Version 13. If you compile on the command line and not in Flash Builder, then you need to add the same compiler argument.

4. If you are using FlashDevelop, then click on Project | Properties | Compiler Options | Additional Compiler Options, and add -swf-version=13 in this field.
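For example, a minimal command-line build (step 2 above) of this article's Stage3dGame.as using the Flex compiler (mxmlc) might look like the following sketch; the output file name is just an assumption:

mxmlc -swf-version=13 -output Stage3dGame.swf Stage3dGame.as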

Time for action – updating your template HTML file

You probably already have a basic HTML template for including Flash SWF files in your web pages. You need to make one tiny change to enable hardware 3D.

Flash will not use hardware 3D acceleration if you don't update your HTML file to instruct it to do so. All you need to do is to always remember to set wmode=direct in your HTML parameters.

For example, if you use JavaScript to inject Flash into your HTML (such as SWFObject.js), then just remember to add this parameter in your source. Alternately, if you include SWFs using basic HTML object and embed tags, your HTML will look similar to the following:


<object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000"
width="640" height="480"><param name="src" value="Molehill.swf" />
<param name="wmode" value="direct" /><embed type="application/
x-shockwave-flash" width="640" height="480" src="Molehill.swf"
wmode="direct"></embed></object>


The only really important parameter is wmode=direct (and the name of your swf file)— everything else about how you put Flash into your HTML pages remains the same.

In the future, if you are running 3D demos and the frame rate seems really choppy, you might not be using hardware 3D acceleration. Be sure to view-source of the HTML file that contains your SWF and check that all mentions of the wmode parameter are set to direct.



Stage3D is now set up!

That is it! You have officially gone to the weapons store and equipped yourself with everything you require to explore the depths of Flash 11 3D graphics. That was the hard part. Now we can dive in and get to some coding, which turns out to be the easy part.

Step 2: Start coding

In Step 1, you downloaded and installed everything you need to get the Stage3D source code to compile. Whether you use Adobe CS5, Flash Builder, FlashDevelop, or you compile Flex from the command line, the actual AS3 source code required to get Stage3D working is exactly the same.

This is the demo that you will program in this article:

http://www.mcfunkypa...chapter_3_demo/

If you are the impatient type, then you can download the final source code project that you would create if you followed the steps below from the following URL:

http://www.packtpub....ers-guide/book

For the sake of learning, however, why not go through the following steps, so that you actually understand what each line does? It is a poor warrior who expects to hone their skills without training.

Time for action – creating an empty project

Your next quest is simply to prepare an empty project in whatever tool-set you like.

For the sake of matching the following source code, create a class named Stage3dGame that extends Sprite, which is defined in an as3 file named Stage3dGame.as.

How you do so depends on your tool of choice.

Flash veterans and artists often prefer to work within the comfortable Adobe Flash IDE environment, surrounded by their old friends, the timeline and the library palette. Create a brand new .FLA file, and prepare your project and stage as you see fit. Don't use any built-in Flash MovieClips for now. Simply create an empty .FLA that links to an external .as file, as shown in the following screenshot:

Attached Image: image003.gif


Some game developers prefer to stick with pure AS3 projects, free of any bloat related to the Flash IDE or Flex that uses MXML files. This technique results in the smallest source files, and typically involves use of open source (free) code editors such as FlashDevelop. If this is your weapon of choice, then all you need to do is set up a blank project with a basic as3 file that is set to compile by default.

No matter what tool you are using, once you have set up a blank project, your ultra-simplistic as3 source code should look something like the following:


package
{
  import flash.display.Sprite;

  [SWF(width="640", height="480", frameRate="60", backgroundColor="#FFFFFF")]

  public class Stage3dGame extends Sprite
  {
  }

}


What just happened?

As you might imagine, the preceding source code does almost nothing. It is simply a good start, an empty class, ready for you to fill in with all sorts of 3D game goodness. Once you have a "blank" Flash file that uses the preceding class and compiles without any errors, you are ready to begin adding the Stage3D API to it.

Time for action – importing Stage3D-specific classes

In order to use vertex and fragment programs (shaders), you will need some source code from Adobe. The files AGALMiniAssembler.as and PerspectiveMatrix3D.as are included in the project source code that comes with this book. They belong in your project's source code folder in the subdirectory com/adobe/utils/, so they can be included in your project.

Once you have these new .as files in your source code folder, add the following lines of code which import various handy functions immediately before the line that reads "public class Stage3dGame extends Sprite".


import com.adobe.utils.*;
import flash.display.*;
import flash.display3D.*;
import flash.display3D.textures.*;
import flash.events.*;
import flash.geom.*;
import flash.utils.*;


What just happened?

In the lines of the preceding code, you instruct Flash to import various utility classes and functions that you will be using shortly. They include 3D vector math, Stage3D initializations, and the assembler used to compile fragment and vertex programs used by shaders.

Try to compile the source. If you get all sorts of errors that mention unknown Molehill-related classes (such as display3D), then your compiler is most likely not set up to include the playerglobal.swc we downloaded earlier. You will need to check your compiler settings or Flex installation to ensure that the brand new Stage3D-capable playerglobal.swc is being used as opposed to an older version.

If you get errors complaining about missing com.adobe.utils, then you may not have unzipped the AGALMiniAssembler.as and PerspectiveMatrix3D.as code into the correct location. Ensure these files are in a subfolder of your source directory called com/adobe/utils/.

If your code compiles without errors, you are ready to move on.

Time for action – initializing Molehill

The next step is to actually get the Stage3D API up and running. Add the following lines of code to your project inside the empty class you created by updating the empty Stage3dGame function and adding the init function below it as follows:


public function Stage3dGame()
{
  if (stage != null)
    init();
  else
    addEventListener(Event.ADDED_TO_STAGE, init);
}

private function init(e:Event = null):void
{
  if (hasEventListener(Event.ADDED_TO_STAGE))
    removeEventListener(Event.ADDED_TO_STAGE, init);

  // set up the stage
  stage.scaleMode = StageScaleMode.NO_SCALE;
  stage.align = StageAlign.TOP_LEFT;

  // and request a context3D from Stage3D
  stage.stage3Ds[0].addEventListener(
    Event.CONTEXT3D_CREATE, onContext3DCreate);
  stage.stage3Ds[0].requestContext3D();
}


What just happened?

This is the constructor for your Stage3dGame class, followed by a simple init function that is run once the game has been added to the stage.

The init function instructs Flash how to handle the stage size and then requests a Context3D to be created. As this can take a moment, an event is set up to instruct your program when Flash has finished setting up your 3D graphics.

Time for action – defining some variables

Next, your demo is going to need to store a few variables. Therefore, we will define these at the very top of your class definition, above any of the functions, as follows:


// constants used during inits

private const swfWidth:int = 640;
private const swfHeight:int = 480;
private const textureSize:int = 512;

// the 3d graphics window on the stage

private var context3D:Context3D;

// the compiled shader used to render our mesh

private var shaderProgram:Program3D;

// the uploaded vertexes used by our mesh

private var vertexBuffer:VertexBuffer3D;

// the uploaded indexes of each vertex of the mesh

private var indexBuffer:IndexBuffer3D;

// the data that defines our 3d mesh model

private var meshVertexData:Vector.<Number>;

// the indexes that define what data is used by each vertex

private var meshIndexData:Vector.<uint>;

// matrices that affect the mesh location and camera angles

private var projectionMatrix:PerspectiveMatrix3D = new PerspectiveMatrix3D();
private var modelMatrix:Matrix3D = new Matrix3D();
private var viewMatrix:Matrix3D = new Matrix3D();
private var modelViewProjection:Matrix3D = new Matrix3D();

// a simple frame counter used for animation

private var t:Number = 0;



What just happened?

The demo you are writing needs to store things such as the current camera angle, the vertex, and fragment programs that we are about to create, and more. By defining them here, each of the functions we are about to write can access them.

Time for action – embedding a texture

Before we start creating the functions that perform all the work, let's also define a texture. Copy any 512x512 jpeg image into your source folder where you are putting all the files for this demo.

If you are using Flex or a pure AS3 programming environment such as Flash Builder or FlashDevelop, then you don't need to do anything further. If you are using Adobe Flash CS5, then you will need to open the Library palette (F11) and drag-and-drop the jpg file, so that the image is part of your .FLA file's library, as shown in the following screenshot:

Attached Image: image004.gif


Once this texture is in the library, right-click on it and open properties. Click on the Advanced button to view more options and turn on the check mark that enables Export for ActionScript and give the new class the name myTextureBitmapData. This will be used below.

If you are using Flash CS5, then add the following code just after the other variables you recently defined:


private var myBitmapDataObject:myTextureBitmapData = new myTextureBitmapData(textureSize, textureSize);
private var myTextureData:Bitmap = new Bitmap(myBitmapDataObject);

// The Molehill Texture that uses the above myTextureData
private var myTexture:Texture;


If you are using Flex or a pure AS3 environment, you do not have a "library"; instead, you can embed assets using a line of code. Instead of the preceding code, define your texture in the following way:


[Embed (source = "texture.jpg")] private var myTextureBitmap:Class;
private var myTextureData:Bitmap = new myTextureBitmap();

// The Molehill Texture that uses the above myTextureData
private var myTexture:Texture;



What just happened?

The code you just entered embeds the JPG image you selected for use as a texture. This texture will eventually be drawn on the mesh we are about to define.

Time for action – defining the geometry of your 3D mesh

For the purposes of this simple demo, all we need to define is a "quad" (a square). We will define it now as follows:

private function initData():void
{

  // Defines which vertex is used for each polygon
  // In this example a square is made from two triangles

  meshIndexData = Vector.<uint>
  ([
	0, 1, 2, 0, 2, 3,
  ]);

  // Raw data used for each of the 4 vertexes

  // Position XYZ, texture coordinate UV, normal XYZ

  meshVertexData = Vector.<Number>
  ( [
	  //X, Y, Z, U, V, nX, nY, nZ
	 -1, -1, 1, 0, 0, 0, 0, 1,
	  1, -1, 1, 1, 0, 0, 0, 1,
	  1, 1, 1, 1, 1, 0, 0, 1,
	 -1, 1, 1, 0, 1, 0, 0, 1
  ]);

}

What just happened?

The preceding function fills a couple of variables with numerical data. This data is eventually sent to the video card and is used to define the locations of each vertex in your 3D mesh. For now, all we have defined is a simple square, which is made up of two triangles that use a total of four vertexes. Eventually, your models will be complex sculptures made up of thousands of polys.

Anything from a sword to an entire city can be constructed by listing the x,y,z locations in space of each of a mesh's vertexes. For now, a simple square will serve as a proof-of-concept. Once we can get a square spinning around in 3D, adding more detail is a trivial process.
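As a quick illustration of how the index list reuses vertex data, the following hypothetical values (not part of this demo) would describe two squares using eight vertex rows and twelve indexes; the second square simply repeats the two-triangle pattern with the next four vertexes:

// hypothetical: two quads built from eight vertex rows
meshIndexData = Vector.<uint>
([
  0, 1, 2, 0, 2, 3, // first square (vertexes 0-3)
  4, 5, 6, 4, 6, 7  // second square (vertexes 4-7)
]);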

Time for action – starting your engines

Recall that the init() function requests a Context3D object. An event handler was set up that Flash will run when your video card has prepared itself and is ready to receive data. Let's define this event handler.

The perfect place for this new snippet of code is just below the init() function:


private function onContext3DCreate(event:Event):void
{
  // in case it is not the first time this event fired

  removeEventListener(Event.ENTER_FRAME,enterFrame);

  // Obtain the current context

  var t:Stage3D = event.target as Stage3D;
  context3D = t.context3D;

  if (context3D == null)
  {
	// Currently no 3d context is available (error!)

	return;
  }

  // Disabling error checking will drastically improve performance.

  // If set to true, Flash will send helpful error messages regarding

  // AGAL compilation errors, uninitialized program constants, etc.

  context3D.enableErrorChecking = true;

  // Initialize our mesh data

  initData();

What just happened?

Inside the onContext3DCreate event handler, all your Stage3D inits are performed. This is the proper moment for your game to upload all the graphics that will be used during play.

The reasons you cannot upload data during the constructor you already wrote are:

  • It can take a fraction of a second before your device drivers, 3D card, operating system, and Flash have prepared themselves for action.
  • Occasionally, in the middle of your game, the Context3D can become invalidated. This can happen, for example, if the user's computer goes to "sleep", or if they hit Ctrl-Alt-Delete.
For this reason, it is entirely possible that during play, the mesh and texture data will need to be re-sent to your video RAM. As this can happen more than once, this event handler will take care of everything whenever it is needed.

If you read the comments, you will be able to follow along. Firstly, as the event might fire more than once, any animation is turned off until all data has been re-sent to the video card. A Context3D object is obtained, and we remember it by assigning it to one of the variables we defined earlier. We turn on error checking, which is handy during development. Once we are finished with our game, we will turn this off in order to get a better frame rate.
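One simple pattern, shown here as a sketch using a hypothetical DEBUG_MODE constant, is to key error checking off a single flag so that release builds cannot accidentally ship with it enabled:

// hypothetical compile switch: set to false for release builds
private static const DEBUG_MODE:Boolean = true;

// then, inside onContext3DCreate:
context3D.enableErrorChecking = DEBUG_MODE;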

Time for action – adding to the onContext3DCreate function

The next thing we need to do in our onContext3DCreate function is to define the size of the area we want to draw to and create a simple shader that instructs Stage3D how to draw our mesh. Continue adding to the function as follows:


// The 3d back buffer size is in pixels

context3D.configureBackBuffer(swfWidth, swfHeight, 0, true);

// A simple vertex shader which does a 3D transformation

var vertexShaderAssembler:AGALMiniAssembler = new AGALMiniAssembler();

vertexShaderAssembler.assemble
(
  Context3DProgramType.VERTEX,

  // 4x4 matrix multiply to get camera angle

  "m44 op, va0, vc0\n" +

  // tell fragment shader about XYZ

  "mov v0, va0\n" +

  // tell fragment shader about UV
  "mov v1, va1\n"
);

// A simple fragment shader which will use
// the vertex position as a color

var fragmentShaderAssembler:AGALMiniAssembler = new AGALMiniAssembler();

fragmentShaderAssembler.assemble
(
  Context3DProgramType.FRAGMENT,
  // grab the texture color from texture fs0
  // using the UV coordinates stored in v1
  "tex ft0, v1, fs0 <2d,repeat,miplinear>\n" +
  // move this value to the output color
  "mov oc, ft0\n"
);

// combine shaders into a program which we then upload to the GPU
shaderProgram = context3D.createProgram();

shaderProgram.upload(vertexShaderAssembler.agalcode,
  fragmentShaderAssembler.agalcode);


What just happened?

A back-buffer is set up, which is a temporary bitmap in the video RAM where all the drawing takes place. As each polygon is rendered, this back-buffer slowly becomes the entire scene, which when completed is presented to the user.

Two AGALMiniAssembler objects are created, and a string containing AGAL (Adobe Graphics Assembly Language) is turned into compiled byte code. Don't worry too much about the specific AGAL code for now; we will dive into fragment and vertex programs later. Essentially, these AGAL commands instruct your video card exactly how to draw your mesh.

We will continue working with the Context3DCreate function.

Time for action – uploading our data

In order to render the mesh, Stage3D needs to upload the mesh and texture data straight to your video card. This way, they can be accessed repeatedly by your 3D hardware without having to make Flash do any of the "heavy lifting".


// upload the mesh indexes

indexBuffer = context3D.createIndexBuffer(meshIndexData.length);
indexBuffer.uploadFromVector(meshIndexData, 0, meshIndexData.length);

// upload the mesh vertex data
// since our particular data is
// x, y, z, u, v, nx, ny, nz
// each vertex uses 8 array elements

vertexBuffer = context3D.createVertexBuffer(
meshVertexData.length/8, 8);

vertexBuffer.uploadFromVector(meshVertexData, 0,
meshVertexData.length/8);

// Generate mipmaps

myTexture = context3D.createTexture(textureSize, textureSize,
  Context3DTextureFormat.BGRA, false);

var ws:int = myTextureData.bitmapData.width;
var hs:int = myTextureData.bitmapData.height;
var level:int = 0; var tmp:BitmapData;
var transform:Matrix = new Matrix();
tmp = new BitmapData(ws, hs, true, 0x00000000);

while ( ws >= 1 && hs >= 1 ) {

  tmp.draw(myTextureData.bitmapData, transform, null, null,
	null, true);

  myTexture.uploadFromBitmapData(tmp, level);
  transform.scale(0.5, 0.5); level++; ws >>= 1; hs >>= 1;

  if (hs && ws) {

	tmp.dispose();
	tmp = new BitmapData(ws, hs, true, 0x00000000);

  }
}

tmp.dispose();

What just happened?

In the preceding code, our mesh data is uploaded to the video card. A vertex buffer and an index buffer are sent, followed by your texture data. A short loop creates successively smaller versions of your texture and uploads each one. This technique is called MIP mapping. By uploading a 512x512 image, followed by one that is 256x256, then 128x128, and so on down to 1x1 (ten mip levels in all), the video card has a set of textures that can be used depending on how far away or how acutely angled the texture is to the camera. MIP mapping ensures that you don't get any "jaggies" or moiré patterns and increases the quality of the visuals.

Time for action – setting up the camera

There is one final bit of code to add in our onContext3DCreate() function. We simply need to set up the camera angle and instruct Flash to start the animation. We do this as follows:


// create projection matrix for our 3D scene
projectionMatrix.identity();

// 45 degrees FOV, 640/480 aspect ratio, 0.01=near, 100=far
projectionMatrix.perspectiveFieldOfViewRH(
45.0, swfWidth / swfHeight, 0.01, 100.0);

// create a matrix that defines the camera location
viewMatrix.identity();

// move the camera back a little so we can see the mesh
viewMatrix.appendTranslation(0,0,-4);

// start animating
addEventListener(Event.ENTER_FRAME,enterFrame);

}


What just happened?

A set of matrices are defined that are used by your shader to calculate the proper viewing angle of your mesh, as well as the specifics related to the camera, such as the field of a view (how zoomed in the camera is) and the aspect ratio of the scene.

Last but not least, now that everything is set up, an event listener is created that runs the enterFrame function every single frame. This is where our animation will take place.

That is it for the Stage3D setup. We are done programming the onContext3DCreate function.

Time for action – let's animate

The enterFrame function is run every frame, over and over, during the course of your game. This is the perfect place to change the location of your meshes, trigger sounds, and perform all game logic.


private function enterFrame(e:Event):void
{
  // clear scene before rendering is mandatory

  context3D.clear(0,0,0);
  context3D.setProgram ( shaderProgram );

  // create the various transformation matrices

  modelMatrix.identity();
  modelMatrix.appendRotation(t*0.7, Vector3D.Y_AXIS);
  modelMatrix.appendRotation(t*0.6, Vector3D.X_AXIS);
  modelMatrix.appendRotation(t*1.0, Vector3D.Y_AXIS);
  modelMatrix.appendTranslation(0.0, 0.0, 0.0);
  modelMatrix.appendRotation(90.0, Vector3D.X_AXIS);

  // rotate more next frame

  t += 2.0;

  // clear the matrix and append new angles

  modelViewProjection.identity();
  modelViewProjection.append(modelMatrix);
  modelViewProjection.append(viewMatrix);
  modelViewProjection.append(projectionMatrix);

  // pass our matrix data to the shader program

  context3D.setProgramConstantsFromMatrix(
  Context3DProgramType.VERTEX,
	0, modelViewProjection, true );



What just happened?

In the preceding code, we first clear the previous frame from the screen. We then select the shader (program) that we defined in the previous function, and set up a new modelMatrix. The modelMatrix defines the location in our scene of the mesh. By changing the position (using the appendTranslation function), as well as the rotation, we can move our mesh around and spin it to our heart's content.
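Purely as an experiment (not part of the demo), you could replace the appendTranslation line with something like the following sketch to make the mesh slide to one side and bob up and down as it spins:

// hypothetical tweak: offset the mesh and bob it using the frame counter t
modelMatrix.appendTranslation(1.0, 0.5 * Math.sin(t * 0.05), 0.0);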

Time for action – setting the render state and drawing the mesh

Continue adding to the enterFrame() function by instructing Stage3D which mesh we want to work with and which texture to use.

// associate the vertex data with current shader program
// position

context3D.setVertexBufferAt(0, vertexBuffer, 0,
Context3DVertexBufferFormat.FLOAT_3);

// tex coord

context3D.setVertexBufferAt(1, vertexBuffer, 3,
Context3DVertexBufferFormat.FLOAT_3);

// which texture should we use?

context3D.setTextureAt(0, myTexture);

// finally draw the triangles

context3D.drawTriangles(indexBuffer, 0, meshIndexData.length/3);

// present/flip back buffer

context3D.present();

}



What just happened?

Once you have moved objects around and prepared everything for the next frame (by instructing Stage3D which vertex buffer to draw and what texture to use), the new scene is rendered by calling drawTriangles and is finally presented on the screen.

In the future, when we have a more complex game, there will be more than one mesh, with multiple calls to drawTriangles and with many different textures being used. For now, in this simple demo, all we do each frame is spin the mesh around a little and then draw it.
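As a rough sketch of that future render loop, each additional mesh repeats the same set-buffers-then-draw pattern before the single present() call. The house* names below are hypothetical placeholders for a second mesh's buffers and texture:

// hypothetical second mesh: same pattern, different buffers and texture
context3D.setVertexBufferAt(0, houseVertexBuffer, 0,
  Context3DVertexBufferFormat.FLOAT_3);
context3D.setVertexBufferAt(1, houseVertexBuffer, 3,
  Context3DVertexBufferFormat.FLOAT_3);
context3D.setTextureAt(0, houseTexture);
context3D.drawTriangles(houseIndexBuffer, 0, houseIndexData.length/3);

// present() is still called only once, after every mesh has been drawn
context3D.present();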

Quest complete – time to reap the rewards

Now that the entire source code is complete, publish your .SWF file. Use your web browser to view the published HTML file. You should see something similar to the following:

Attached Image: image005.gif


If you see nothing on the screen when you view the HTML file that contains your new SWF, then your Flash incubator plugin is probably not being used. With fingers crossed, you will see a fully 3D textured square spinning around in space. Not much to look at yet, but it proves that you are running in the hardware accelerated 3D mode.

Congratulations!

You have just programmed your first Flash 11 Stage3D (Molehill) demo! It does not do much, but already you can see the vast possibilities that lay ahead of you. Instead of a simple square spinning around, you could be rendering castles, monsters, racing cars, or spaceships, along with all the particle systems, eye-candy, and special effects you could imagine.

For now, be very proud that you have overcome the hardest part—getting a working 3D demo that compiles. Many before you have tried and failed, either because they did not have the proper version of Flash, or did not have the correct tools setup, or finally could not handle the complex AS3 source code required.

The fact that you made it this far is a testament to your skill and coding prowess. You deserve a break for now. Rest easy in the satisfaction, as you just reached a major milestone toward the goal of creating an amazing 3D game.

The entire source code

All the code that you entered earlier should go in a file named Stage3dGame.as alongside your other project files. For reference, or to save typing, all source and support files are available at the following URL:

http://www.mcfunkypa...source_code.zip

The final demo can be run from the following URL:

http://www.packtpub....ers-guide/book

Your folder structure should look similar to the one shown in the following screenshot. You might have used different file names for your html files or texture, but this screenshot may be helpful to ensure you are on the right track:

Attached Image: image006.gif


Have a go hero – a fun side quest

In order to really hone your skills, it can be a good idea to challenge yourself for some extra experience. There is a side quest with which you can experiment. It is completely optional. Just like grinding in an RPG, challenges such as these are designed to make you stronger, so that when you forge ahead, you are more than ready for the next step in your main quest.

Your side quest this time is to experiment with the mesh data. Doing so will help you understand what each number stored in the meshVertexData variable (which is defined in the initData function) means.

Play with all these numbers. Notice that each line of eight numbers is the data used for one vertex. See what happens if you give them crazy values.

For example, if you change the first vertex position to -3, your square will change shape and will become lopsided with one corner sticking out:

meshVertexData = Vector.<Number>
  ( [

		//X, Y, Z, U, V, nX, nY, nZ
	   -3, -1, 1, 0, 0, 0, 0, 1,
		1, -1, 1, 1, 0, 0, 0, 1,
		1, 1, 1, 1, 1, 0, 0, 1,
		-1, 1, 1, 0, 1, 0, 0, 1

  ]);

If you tweak the U or V texture coordinates, then you can make the rock texture more zoomed in or tiled multiple times. If you change the vertex normals (the last three numbers of each line above), then nothing will happen. Why? The reason is that the ultra-simplistic shader that we are using does not use normal data.
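For instance, here is a sketch (not part of the demo) that takes advantage of the repeat flag in the fragment shader's sampler: U and V values of 2.0 tile the texture twice across the square:

meshVertexData = Vector.<Number>
( [
    //X, Y, Z, U, V, nX, nY, nZ
   -1, -1, 1, 0, 0, 0, 0, 1,
    1, -1, 1, 2, 0, 0, 0, 1,
    1,  1, 1, 2, 2, 0, 0, 1,
   -1,  1, 1, 0, 2, 0, 0, 1
] );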

If all this seems a bit low level, then do not worry. Eventually, you won't be defining mesh data by hand using lists of numbers entered in the code: in the near future, we will upgrade our game engine to parse the model data that we can export from a 3D art program, such as 3D Studio Max, Maya, or Blender.

For now, however, this simple "side quest" is a great way to start becoming familiar with what a vertex buffer really is.

Summary

We were victorious in our first "boss battle" on the way to creating our own 3D video game in Flash. It was tricky, but we managed to achieve the following milestones: we learned how to obtain Flash 11 for our browser, got all the tools ready to compile Stage3D games, learned how to initialize the Stage3D API, uploaded mesh and texture data to the video card, and animated a simple 3D scene.

Now that we have created a simple template 3D demo, we are ready to add more complexity to our project. What was merely a tech demo will soon grow into a fully-fledged video game.

Cocos2d: Working with Sprites

Cocos2d is first and foremost a rich graphical API which allows a game developer easy access to a broad range of functionality. In this article, we will take a look at the basic uses of sprites.

In this article by Nathan Burba, author of Cocos2d for iPhone 1 Game Development Cookbook, we will cover the following topics:
  • Drawing sprites
  • Coloring sprites
  • Animating sprites


Drawing sprites
The most fundamental task in 2D game development is drawing a sprite. Cocos2d provides the user with a lot of flexibility in this area. In this recipe, we will cover drawing sprites using CCSprite, spritesheets, CCSpriteFrameCache, and CCSpriteBatchNode. We will also go over mipmapping. The scene in this recipe shows Alice from Through The Looking Glass.



Getting ready
Please refer to the project RecipeCollection01 for the full working code of this recipe.

How to do it...
Execute the following code:

@implementation Ch1_DrawingSprites
-(CCLayer*) runRecipe {
  /*** Draw a sprite using CCSprite ***/
  CCSprite *tree1 = [CCSprite spriteWithFile:@"tree.png"];

  //Position the sprite using the tree base as a guide (y anchor point = 0)
  [tree1 setPosition:ccp(20,20)];
  tree1.anchorPoint = ccp(0.5f,0);
  [tree1 setScale:1.5f];
  [self addChild:tree1 z:2 tag:TAG_TREE_SPRITE_1];

  /*** Load a set of spriteframes from a PLIST file and draw one by name ***/

  //Get the sprite frame cache singleton
  CCSpriteFrameCache *cache = [CCSpriteFrameCache
sharedSpriteFrameCache];

  //Load our scene sprites from a spritesheet
  [cache addSpriteFramesWithFile:@"alice_scene_sheet.plist"];

  //Specify the sprite frame and load it into a CCSprite
  CCSprite *alice = [CCSprite spriteWithSpriteFrameName:@"alice.png"];

  //Generate Mip Maps for the sprite
  [alice.texture generateMipmap];
  ccTexParams texParams = { GL_LINEAR_MIPMAP_LINEAR, GL_LINEAR,
    GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE };
  [alice.texture setTexParameters:&texParams];

  //Set other information.
  [alice setPosition:ccp(120,20)];
  [alice setScale:0.4f];
  alice.anchorPoint = ccp(0.5f,0);

  //Add Alice with a zOrder of 2 so she appears in front of other sprites
  [self addChild:alice z:2 tag:TAG_ALICE_SPRITE];

  //Make Alice grow and shrink.
  [alice runAction: [CCRepeatForever actionWithAction:
   [CCSequence actions:[CCScaleTo actionWithDuration:4.0f scale:0.7f],
    [CCScaleTo actionWithDuration:4.0f scale:0.1f], nil] ] ];

  /*** Draw a sprite CGImageRef ***/
  UIImage *uiImage = [UIImage imageNamed: @"cheshire_cat.png"];
  CGImageRef imageRef = [uiImage CGImage];
  CCSprite *cat = [CCSprite spriteWithCGImage:imageRef
    key:@"cheshire_cat.png"];
  [cat setPosition:ccp(250,180)];
  [cat setScale:0.4f];
  [self addChild:cat z:3 tag:TAG_CAT_SPRITE];

  /*** Draw a sprite using CCTexture2D ***/
  CCTexture2D *texture = [[CCTextureCache sharedTextureCache]
addImage:@"tree.png"];
  CCSprite *tree2 = [CCSprite spriteWithTexture:texture];
  [tree2 setPosition:ccp(300,20)];
  tree2.anchorPoint = ccp(0.5f,0);
  [tree2 setScale:2.0f];
  [self addChild:tree2 z:2 tag:TAG_TREE_SPRITE_2];

  /*** Draw a sprite using CCSpriteFrameCache and CCTexture2D ***/
  CCSpriteFrame *frame = [CCSpriteFrame frameWithTexture:texture
rect:tree2.textureRect];
  [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFrame:
frame name:@"tree.png"];
  CCSprite *tree3 = [CCSprite spriteWithSpriteFrame:
    [[CCSpriteFrameCache sharedSpriteFrameCache]
      spriteFrameByName:@"tree.png"]];
  [tree3 setPosition:ccp(400,20)];
  tree3.anchorPoint = ccp(0.5f,0);
  [tree3 setScale:1.25f];
  [self addChild:tree3 z:2 tag:TAG_TREE_SPRITE_3];

  /*** Draw sprites using CCSpriteBatchNode ***/

  //Clouds
  CCSpriteBatchNode *cloudBatch = [CCSpriteBatchNode
batchNodeWithFile:@"cloud_01.png" capacity:10];
  [self addChild:cloudBatch z:1 tag:TAG_CLOUD_BATCH];
  for(int x=0; x<10; x++){
   CCSprite *s = [CCSprite spriteWithBatchNode:cloudBatch
    rect:CGRectMake(0,0,64,64)];
   [s setOpacity:100];
   [cloudBatch addChild:s];
   [s setPosition:ccp(arc4random()%500-50, arc4random()%150+200)];
  }

  //Middleground Grass
  int capacity = 10;
  CCSpriteBatchNode *grassBatch1 = [CCSpriteBatchNode
batchNodeWithFile:@"grass_01.png" capacity:capacity];
  [self addChild:grassBatch1 z:1 tag:TAG_GRASS_BATCH_1];
  for(int x=0; x<30; x++){ //two grass batches of 30 = the 60 described below
   CCSprite *s = [CCSprite spriteWithBatchNode:grassBatch1
    rect:CGRectMake(0,0,64,64)];
   [s setOpacity:255];
   [grassBatch1 addChild:s];
   [s setPosition:ccp(arc4random()%500-50, arc4random()%20+70)];
  }

  //Foreground Grass
  CCSpriteBatchNode *grassBatch2 = [CCSpriteBatchNode
batchNodeWithFile:@"grass_01.png" capacity:10];
  [self addChild:grassBatch2 z:3 tag:TAG_GRASS_BATCH_2];
  for(int x=0; x<30; x++){
   CCSprite *s = [CCSprite spriteWithBatchNode:grassBatch2
    rect:CGRectMake(0,0,64,64)];
   [s setOpacity:255];
   [grassBatch2 addChild:s];
   [s setPosition:ccp(arc4random()%500-50, arc4random()%40-10)];
  }

  /*** Draw colored rectangles using a 1px x 1px white texture ***/

  //Draw the sky using blank.png
  [self drawColoredSpriteAt:ccp(240,190)
    withRect:CGRectMake(0,0,480,260) withColor:ccc3(150,200,200) withZ:0];

  //Draw the ground using blank.png
  [self drawColoredSpriteAt:ccp(240,30)
withRect:CGRectMake(0,0,480,60) withColor:ccc3(80,50,25) withZ:0];

  return self;
}

-(void) drawColoredSpriteAt:(CGPoint)position withRect:(CGRect)rect
withColor:(ccColor3B)color withZ:(float)z {
  CCSprite *sprite = [CCSprite spriteWithFile:@"blank.png"];
  [sprite setPosition:position];
  [sprite setTextureRect:rect];
  [sprite setColor:color];
  [self addChild:sprite];

  //Set Z Order
  [self reorderChild:sprite z:z];
}

@end

How it works...
This recipe takes us through most of the common ways of drawing sprites:
  • Creating a CCSprite from a file:
    First, we have the simplest way to draw a sprite. This involves using the CCSprite class method as follows:
    +(id)spriteWithFile:(NSString*)filename;
    

    This is the most straightforward way to initialize a sprite and is adequate for many situations.
  • Other ways to load a sprite from a file:
    After this, we will see examples of CCSprite creation using UIImage/CGImageRef, CCTexture2D, and a CCSpriteFrame instantiated using a CCTexture2D object. CGImageRef support allows you to tie Cocos2d into other frameworks and toolsets. CCTexture2D is the underlying mechanism for texture creation.
  • Loading spritesheets using CCSpriteFrameCache:
    Next, we will see the most powerful way to use sprites, the CCSpriteFrameCache class. Introduced in Cocos2d-iPhone v0.99, the CCSpriteFrameCache singleton is a cache of all sprite frames. Using a spritesheet and its associated PLIST file we can load multiple sprites into the cache. From here we can create CCSprite objects with sprites from the cache:
    +(id)spriteWithSpriteFrameName:(NSString*)filename;
    
  • Mipmapping:
    Mipmapping allows you to scale a texture or to zoom in or out of a scene without aliasing your sprites. When we scale Alice down to a small size, aliasing will inevitably occur. With mipmapping turned on, Cocos2d dynamically generates lower resolution textures to smooth out any pixelation at smaller scales. Go ahead and comment out the following lines:
    [alice.texture generateMipmap];
      ccTexParams texParams = { GL_LINEAR_MIPMAP_LINEAR, GL_LINEAR,
    GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE };
      [alice.texture setTexParameters:&texParams];
    

    Now you should see this pixelation as Alice gets smaller.
  • Drawing many derivative sprites with CCSpriteBatchNode:
    The CCSpriteBatchNode class, added in v0.99.5, introduces an efficient way to draw and re-draw the same sprite over and over again. A batch node is created with the following method:
    CCSpriteBatchNode *cloudBatch = [CCSpriteBatchNode
    batchNodeWithFile:@"cloud_01.png" capacity:10];
    

    Then, you create as many sprites as you want using the follow code:
    CCSprite *s = [CCSprite spriteWithBatchNode:cloudBatch
    rect:CGRectMake(0,0,64,64)];
      [cloudBatch addChild:s];
    

    Setting the capacity to the number of sprites you plan to draw tells Cocos2d to allocate that much space. This is yet another tweak for extra efficiency, though it is not absolutely necessary that you do this. In these three examples we draw 10 randomly placed clouds and 60 randomly placed bits of grass.
  • Drawing colored rectangles:
    Finally, we have a fairly simple technique that has a variety of uses. By drawing a sprite with a blank 1px by 1px white texture and then coloring it and setting its textureRect property we can create very useful colored bars:
    CCSprite *sprite = [CCSprite spriteWithFile:@"blank.png"];
    [sprite setTextureRect:CGRectMake(0,0,480,320)];
    [sprite setColor:ccc3(255,128,0)];
    

    In this example we have used this technique to create very simple ground and sky backgrounds.




Coloring sprites
In the previous recipe we used colored rectangles to draw both the ground and the sky. The ability to set texture color and opacity is a simple tool which, if used properly, can create very cool effects. In this recipe we will create a cinematic scene where two samurai face each other with glowing swords.



Getting ready
Please refer to the project RecipeCollection01 for full working code of this recipe. Also, note that some code has been omitted for brevity.

How to do it...
Execute the following code:

#import "CCGradientLayer.h

@implementation Ch1_ColoringSprites

-(CCLayer*) runRecipe {
  [self initButtons];

  //The Fade Scene Sprite
  CCSprite *fadeSprite = [CCSprite spriteWithFile:@"blank.png"];
  [fadeSprite setOpacity:0];
  [fadeSprite setPosition:ccp(240,160)];
  [fadeSprite setTextureRect:CGRectMake(0,0,480,320)];
  [self addChild:fadeSprite z:3 tag:TAG_FADE_SPRITE];

  //Add a gradient below the mountains
  //CCGradientDirectionT_B is an enum provided by CCGradientLayer
  CCGradientLayer *gradientLayer = [CCGradientLayer
    layerWithColor:ccc4(61,33,62,255) toColor:ccc4(65,89,54,255)
    withDirection:CCGradientDirectionT_B width:480 height:100];
  [gradientLayer setPosition:ccp(0,50)];
  [self addChild:gradientLayer z:0 tag:TAG_GROUND_GRADIENT];

  //Add a sinister red glow gradient behind the evil samurai
  CCGradientLayer *redGradient = [CCGradientLayer
    layerWithColor:ccc4(0,0,0,0) toColor:ccc4(255,0,0,100)
    withDirection:CCGradientDirectionT_B width:200 height:200];
  [redGradient setPosition:ccp(280,60)];
  [redGradient setRotation:-90];
  [self addChild:redGradient z:2 tag:TAG_RED_GRADIENT];

  // Make the swords glow
  [self glowAt:ccp(230,280) withScale:CGSizeMake(3.0f, 11.0f)
withColor:ccc3(0,230,255) withRotation:45.0f withSprite:goodSamurai];
  [self glowAt:ccp(70,280) withScale:CGSizeMake(3.0f, 11.0f)
withColor:ccc3(255,200,2) withRotation:-45.0f withSprite:evilSamurai];

  return self;
}

-(void) initButtons {
  [CCMenuItemFont setFontSize:16];

  //'Fade To Black' button
  CCMenuItemFont* fadeToBlack = [CCMenuItemFont
    itemFromString:@"FADE TO BLACK" target:self
    selector:@selector(fadeToBlackCallback:)];
  CCMenu *fadeToBlackMenu = [CCMenu menuWithItems:fadeToBlack, nil];
   fadeToBlackMenu.position = ccp( 180 , 20 );
   [self addChild:fadeToBlackMenu z:4 tag:TAG_FADE_TO_BLACK];
}

/* Fade the scene to black */
-(void) fadeToBlackCallback:(id)sender {
  CCSprite *fadeSprite = [self getChildByTag:TAG_FADE_SPRITE];
  [fadeSprite stopAllActions];
  [fadeSprite setColor:ccc3(0,0,0)];
  [fadeSprite setOpacity:0.0f];
  [fadeSprite runAction:
  [CCSequence actions:[CCFadeIn actionWithDuration:2.0f], [CCFadeOut
actionWithDuration:2.0f], nil] ];
}

/* Create a glow effect */
-(void) glowAt:(CGPoint)position withScale:(CGSize)size
withColor:(ccColor3B)color withRotation:(float)rotation
withSprite:(CCSprite*)sprite {
  CCSprite *glowSprite = [CCSprite spriteWithFile:@"fire.png"];
  [glowSprite setColor:color];
  [glowSprite setPosition:position];
  [glowSprite setRotation:rotation];
  [glowSprite setBlendFunc: (ccBlendFunc) { GL_ONE, GL_ONE }];
  [glowSprite runAction: [CCRepeatForever actionWithAction:
   [CCSequence actions:[CCScaleTo actionWithDuration:0.9f
scaleX:size.width scaleY:size.height], [CCScaleTo
actionWithDuration:0.9f scaleX:size.width*0.75f scaleY:size.
height*0.75f], nil] ] ];
  [glowSprite runAction: [CCRepeatForever actionWithAction:
   [CCSequence actions:[CCFadeTo actionWithDuration:0.9f
opacity:150], [CCFadeTo actionWithDuration:0.9f opacity:255], nil] ]
];
  [sprite addChild:glowSprite];
}

@end

How it works...
This recipe shows a number of color based techniques.
  • Setting sprite color:
    The simplest use of color involves setting the color of a sprite using the following method:
    -(void) setColor:(ccColor3B)color;
    

    Setting sprite color effectively reduces the range of colors you can display, but it allows some programmatic flexibility in drawing. In this recipe we use setColor for a number of things, including drawing a blue sky, a yellow sun, black "dramatic movie bars", and more.
    ccColor3B is a C struct which contains three GLubyte variables. Use the following helper macro to create ccColor3B structures:
    ccColor3B ccc3(const GLubyte r, const GLubyte g, const GLubyte
    b);
    

    Cocos2d also specifies a number of pre-defined colors as constants. These include the following:
    ccWHITE, ccYELLOW, ccBLUE, ccGREEN, ccRED,
    ccMAGENTA, ccBLACK, ccORANGE, ccGRAY
    
  • Fading to a color:
    To fade a scene to a specific color we use the blank.png technique we went over in the last recipe. We first draw a sprite as large as the screen, then color the sprite to the color we want to fade to, and then finally run a CCFadeIn action on the sprite to fade to that color:
    [fadeSprite setColor:ccc3(255,255,255)];
    [fadeSprite setOpacity:0.0f];
    [fadeSprite runAction: [CCFadeIn actionWithDuration:2.0f] ];
    
  • Using CCGradientLayer:
    Using the CCGradientLayer class we can programmatically create gradients. To make the mountains in the background fade into the ground the two samurai are standing on we created a gradient using this method:
  CCGradientLayer *gradientLayer = [CCGradientLayer
    layerWithColor:ccc4(61,33,62,255) toColor:ccc4(65,89,54,255)
    withDirection:CCGradientDirectionT_B width:480 height:100];
      [gradientLayer setPosition:ccp(0,50)];
      [self addChild:gradientLayer z:0 tag:TAG_GROUND_GRADIENT];
    

    Because CCGradientLayer lets you control opacity as well as color, it has many uses. As you can see there is also a sinister red glow behind the evil samurai.
  • Making a sprite glow: To make the swords in the demo glow we use subtle color manipulation, additive blending, and fading and scaling actions. First we load the fire.png sprite supplied by Cocos2d. By changing its X and Y scale independently we can make it thinner or fatter. Once you have the desired scale ratio (in this demo we use x:y 3:11 because the sword is so thin) you can constantly scale and fade the sprite in and out to give some life to the effect. You also need to set the blend function to { GL_ONE, GL_ONE } for additive blending. Finally, this effect sprite is added to the actual sprite to make it seem like it glows.
    CCSprite *glowSprite = [CCSprite spriteWithFile:@"fire.png"];
      [glowSprite setColor:color];
      [glowSprite setPosition:position];
      [glowSprite setRotation:rotation];
      [glowSprite setBlendFunc: (ccBlendFunc) { GL_ONE, GL_ONE }];
      [glowSprite runAction: [CCRepeatForever actionWithAction:
      [CCSequence actions:[CCScaleTo actionWithDuration:0.9f
    scaleX:size.width scaleY:size.height], [CCScaleTo
    actionWithDuration:0.9f scaleX:size.width*0.75f scaleY:size.
    height*0.75f], nil] ] ];
      [glowSprite runAction: [CCRepeatForever actionWithAction:
      [CCSequence actions:[CCFadeTo actionWithDuration:0.9f
    opacity:150], [CCFadeTo actionWithDuration:0.9f opacity:255], nil]
    ] ];
      [sprite addChild:glowSprite];
    








Animating sprites
Now it is time to add some animation to our sprites. One thing that should be stressed about animation is that it is only as complicated as you make it. In this recipe we will use very simple animation to create a compelling effect. We will create a scene where bats fly around a creepy looking castle. I've also added a cool lightning effect based on the technique used to make the swords glow in the previous recipe.



Getting ready
Please refer to the project RecipeCollection01 for full working code of this recipe. Also note that some code has been omitted for brevity.

How to do it...
Execute the following code:

//SimpleAnimObject.h
@interface SimpleAnimObject : CCSprite {
  int animationType;
  CGPoint velocity;
}
@end

@interface Ch1_AnimatingSprites {
  NSMutableArray *bats;
  CCAnimation *batFlyUp;
  CCAnimation *batGlideDown;
  CCSprite *lightningBolt;
  CCSprite *lightningGlow;
  int lightningRemoveCount;
}

-(CCLayer*) runRecipe {
  //Add our PLIST to the SpriteFrameCache
  [[CCSpriteFrameCache sharedSpriteFrameCache] 
addSpriteFramesWithFile:@"simple_bat.plist"];

  //Add a lightning bolt
  lightningBolt = [CCSprite spriteWithFile:@"lightning_bolt.png"];
  [lightningBolt setPosition:ccp(240,160)];
  [lightningBolt setOpacity:64];
  [lightningBolt retain];

  //Add a sprite to make it light up other areas.
  lightningGlow = [CCSprite spriteWithFile:@"lightning_glow.png"];
  [lightningGlow setColor:ccc3(255,255,0)];
  [lightningGlow setPosition:ccp(240,160)];
  [lightningGlow setOpacity:100];
  [lightningGlow setBlendFunc: (ccBlendFunc) { GL_ONE, GL_ONE }];
  [lightningBolt addChild:lightningGlow];

  //Set a counter for lightning duration randomization
  lightningRemoveCount = 0;

  //Bats Array Initialization
  bats = [[NSMutableArray alloc] init];

  //Add bats using a batch node.
  CCSpriteBatchNode *batch1 = [CCSpriteBatchNode
batchNodeWithFile:@"simple_bat.png" capacity:10];
  [self addChild:batch1 z:2 tag:TAG_BATS];

  //Make them start flying up.
  for(int x=0; x<10; x++){
   //Create SimpleAnimObject of bat
   SimpleAnimObject *bat = [SimpleAnimObject
spriteWithBatchNode:batch1 rect:CGRectMake(0,0,48,48)];
  [batch1 addChild:bat];
  [bat setPosition:ccp(arc4random()%400+40, arc4random()%150+150)];

  //Make the bat fly up. Get the animation delay (flappingSpeed).
  float flappingSpeed = [self makeBatFlyUp:bat];

  //Base y velocity on flappingSpeed.
  bat.velocity = ccp((arc4random()%1000)/500 + 0.2f, 0.1f/
flappingSpeed);

  //Add a pointer to this bat object to the NSMutableArray
  [bats addObject:[NSValue valueWithPointer:bat]];
  [bat retain];

  //Set the bat's direction based on x velocity.
  if(bat.velocity.x > 0){
   bat.flipX = YES;
  }
 }

  //Schedule physics updates
  [self schedule:@selector(step:)];

  return self;
}

-(float)makeBatFlyUp:(SimpleAnimObject*)bat {
  CCSpriteFrameCache * cache = [CCSpriteFrameCache
sharedSpriteFrameCache];

  //Randomize animation speed.
  float delay = (float)(arc4random()%5+5)/80;
  CCAnimation *animation = [[CCAnimation alloc]
    initWithName:@"simply_bat_fly" delay:delay];

  //Randomize animation frame order.
  int num = arc4random()%4+1;
  for(int i=1; i<=4; i++){
   [animation addFrame:[cache spriteFrameByName:[NSString
stringWithFormat:@"simple_bat_0%i.png",num]]];
   num++;
   if(num > 4){ num = 1; }
  }

  //Stop any running animations and apply this one.
  [bat stopAllActions];
  [bat runAction:[CCRepeatForever actionWithAction: [CCAnimate 
actionWithAnimation:animation]]];

  //Keep track of which animation is running.
  bat.animationType = BAT_FLYING_UP;

  return delay; //We return how fast the bat is flapping.
}

-(void)makeBatGlideDown:(SimpleAnimObject*)bat {
  CCSpriteFrameCache * cache = [CCSpriteFrameCache
sharedSpriteFrameCache];

  //Apply a simple single frame gliding animation.
  CCAnimation *animation = [[CCAnimation alloc]
    initWithName:@"simple_bat_glide" delay:100.0f];
  [animation addFrame:[cache spriteFrameByName:@"simple_bat_01.png"]];

  //Stop any running animations and apply this one.
  [bat stopAllActions];
  [bat runAction:[CCRepeatForever actionWithAction: [CCAnimate 
actionWithAnimation:animation]]];

  //Keep track of which animation is running.
  bat.animationType = BAT_GLIDING_DOWN;
}

-(void)step:(ccTime)delta {
  CGSize s = [[CCDirector sharedDirector] winSize];

for(id key in bats){
  //Get SimpleAnimObject out of NSArray of NSValue objects.
  SimpleAnimObject *bat = [key pointerValue];

  //Make sure bats don't fly off the screen
  if(bat.position.x > s.width){
   bat.velocity = ccp(-bat.velocity.x, bat.velocity.y);
   bat.flipX = NO;
  }else if(bat.position.x < 0){
   bat.velocity = ccp(-bat.velocity.x, bat.velocity.y);
   bat.flipX = YES;
  }else if(bat.position.y > s.height){
   bat.velocity = ccp(bat.velocity.x, -bat.velocity.y);
   [self makeBatGlideDown:bat];
  }else if(bat.position.y < 0){
   bat.velocity = ccp(bat.velocity.x, -bat.velocity.y);
   [self makeBatFlyUp:bat];
  }

  //Randomly make them fly back up
  if(arc4random()%100 == 7){
   if(bat.animationType == BAT_GLIDING_DOWN){ [self
makeBatFlyUp:bat]; bat.velocity = ccp(bat.velocity.x, -bat.
velocity.y); }
   else if(bat.animationType == BAT_FLYING_UP){ [self
makeBatGlideDown:bat]; bat.velocity = ccp(bat.velocity.x, -bat.
velocity.y); }
  }

  //Update bat position based on direction
  bat.position = ccp(bat.position.x + bat.velocity.x, bat.position.y
+ bat.velocity.y);
  }

  //Randomly make lightning strike
  if(arc4random()%70 == 7){
   if(lightningRemoveCount < 0){
    [self addChild:lightningBolt z:1 tag:TAG_LIGHTNING_BOLT];
   lightningRemoveCount = arc4random()%5+5;
   }
  }

  //Count down
  lightningRemoveCount -= 1;

  //Clean up any old lightning bolts
  if(lightningRemoveCount == 0){
   [self removeChildByTag:TAG_LIGHTNING_BOLT cleanup:NO];
  }
}

@end

How it works...
This recipe shows us how to structure animation based classes through the use of SimpleAnimObject:
  • Animated object class structure:
    When switching from one animation to another it is often important to keep track of what state the animated object is in. In our example we use SimpleAnimObject, which keeps an arbitrary animationType variable. We also maintain a velocity variable that has a Y scalar value that is inversely proportional to the animation frame delay:
    @interface SimpleAnimObject : CCSprite {
      int animationType;
      CGPoint velocity;
    }
    

    Depending on how in-depth you want your animation system to be, you may want to maintain more information, such as a pointer to the running CCAnimation instance, frame information, and physical bodies.
There's more...
As you get more involved with Cocos2d game development, you will become more and more tempted to use asynchronous actions for gameplay logic and AI. Derived from the CCAction class, these actions can be used for everything from moving a CCNode using CCMoveBy to animating a CCSprite using CCAnimate. When an action is run, an asynchronous timing mechanism is maintained in the background. First-time game programmers often over-rely on this feature, and the extra overhead required by this technique can multiply quickly when multiple actions are being run. In the following example we use a simple integer timer that regulates how long lightning lasts onscreen:

//Randomly make lightning strike
if(arc4random()%70 == 7){
 if(lightningRemoveCount < 0){
  [self addChild:lightningBolt z:1 tag:TAG_LIGHTNING_BOLT];
  lightningRemoveCount = arc4random()%5+5;
 }
}

//Count down
lightningRemoveCount -= 1;

//Clean up any old lightning bolts
if(lightningRemoveCount == 0){
 [self removeChildByTag:TAG_LIGHTNING_BOLT cleanup:NO];
}

Synchronous timers like the one shown in the preceding code snippet are often, but not always, preferable to asynchronous actions. Keep this in mind as your games grow in size and scope.

Summary
In this article we took a look at the basic uses of sprites.

Introducing Xcode Tools for iPhone Development

In this article by Steven F. Daniel, author of Xcode 4 iPhone Development, we shall:
  • Learn about the features and components of the Xcode development tools.
  • Learn about Xcode, Cocoa, Cocoa-Touch, and Objective-C.
  • Take a look at each of the iOS technology layers and their components.
  • Take a look at what comprises the Xcode developer set of tools.
  • Take a look at the new features within the iOS 4 SDK.
There is a lot of fun stuff to cover, so let's get started.

Development using the Xcode Tools
If you are running Mac OS X 10.5, chances are your machine already has Xcode installed; the tools are located within the /Developer/Applications folder. Apple also makes them freely available through the Apple Developer Connection at http://developer.apple.com/.

The iPhone SDK includes a suite of development tools to assist you with developing your iPhone and other iOS device applications. These are described below.

iPhone SDK Core Components:
  • Xcode: the main Integrated Development Environment (IDE) that enables you to manage, edit, and debug your projects.
  • Dashcode: enables you to develop web-based iPhone and iPad applications and Dashboard widgets.
  • iPhone Simulator: a Cocoa-based application that provides a software simulator to simulate an iPhone or iPad on your Mac OS X machine.
  • Instruments: the analysis tools that help you optimize your applications and monitor for memory leaks in real time.

The Xcode tools require an Intel-based Mac running Mac OS X version 10.6.4 or later in order to function correctly.


Inside Xcode, Cocoa, and Objective-C
Xcode 4 is a complete toolset for building Mac OS X (Cocoa-based) and iOS applications. The new single-windowed development interface has been redesigned to be a lot easier and even more helpful to use than in previous releases. It can now identify mistakes in both syntax and logic, and will even offer to fix your code for you.

It provides tools that speed up your development process, making you more productive, and it also takes care of the deployment of both your Mac OS X and iOS applications.

The Integrated Development Environment (IDE) allows you to do the following:
  • Create and manage projects, including specifying platforms, target requirements, dependencies, and build configurations.
  • Write source code with syntax coloring and automatic indenting.
  • Navigate and search through the components of a project, including header files and documentation.
  • Build and run your project.
  • Debug your project locally, run it within the iOS Simulator, or debug remotely within a graphical source-level debugger.
Xcode incorporates many new features and improvements beyond the redesigned user interface; it features a new and improved compiler toolchain built on LLVM (Low Level Virtual Machine), along with a debugger that has been supercharged to run faster and more efficiently.

This new compiler is the next-generation compiler technology designed for high-performance projects, and it fully supports C, Objective-C, and now C++. It is incorporated into the Xcode IDE, compiles twice as fast as GCC, and your applications will run faster too.

The following list includes the many improvements made in this release.
  • The interface has been completely redesigned and features a single-window integrated development interface.
  • Interface Builder has now been fully integrated within the Xcode development IDE.
  • Code Assistant opens in a second window that shows you the file that you are working on, and can automatically find and open the corresponding header file(s).
  • Fix-it checks the syntax of your code and validates symbol names as you type. It highlights any errors that it finds and can even fix them for you.
  • The new Version Editor works with GIT (free, open-source) version control software or Subversion. It shows you the file's entire SCM (software configuration management) history and can compare any two versions of the file.
  • The new LLVM 2.0 compiler includes full support for C, Objective-C, and C++.
  • The LLDB debugger has been improved to be even faster while using less memory than the GDB debugging engine.
  • The new Xcode 4 development IDE lets you work on several interdependent projects within the same window. It automatically determines dependencies so that it builds the projects in the right order.
Xcode allows you to customize an unlimited number of build and debugging tools, as well as executable packaging. It supports several source-code management tools, namely CVS and Subversion (version control software is an important component of Source Configuration Management, or SCM), which allow you to add files to a repository, commit changes, get updated versions, and compare versions using the Version Editor tool.

The iPhone Simulator
The iPhone Simulator is a very useful tool that enables you to test your applications without using your actual device, whether that is an iPhone or any other iOS device. You do not need to launch this application manually; it is launched when you build and run your application within the Xcode Integrated Development Environment (IDE), and Xcode installs your application on the iPhone Simulator for you automatically.

The iPhone Simulator can also simulate different versions of the iPhone OS, which can become extremely useful if your application needs to be installed on different iOS platforms, as well as for testing and debugging errors reported in your application when run under different versions of iOS.

While the iPhone Simulator acts as a good test bed for your applications, it is recommended to test your application on the actual device rather than relying on the iPhone Simulator alone. The iPhone Simulator can be found at the following location: /Developer/Platforms/iPhoneSimulator.Platform/Developer/Applications.


Layers of the iOS Architecture
Apple describes the set of frameworks and technologies currently implemented within the iOS operating system as a series of layers. Each of these layers is made up of a variety of different frameworks that can be used and incorporated into your applications.

Layers of the iOS Architecture


Posted Image


We shall now explain each of the different layers of the iOS architecture in detail; this will give you a better understanding of what each of the core layers covers.

The Core OS Layer
This is the bottom layer of the hierarchy and provides the foundation of the operating system that the other layers sit on top of. This important layer is in charge of managing memory (allocating and releasing it once it is finished with), taking care of file system tasks, handling networking, and performing other operating system tasks. It also interacts directly with the hardware.

The Core OS Layer consists of the following components:

The Core Services Layer
The Core Services layer provides an abstraction over the services provided in the Core OS layer. It provides fundamental access to the iPhone OS services. The Core Services Layer consists of the following components:

The Media Layer
The Media Layer provides Multimedia services that you can use within your iPhone, and other iOS devices. The Media Layer is made up of the following components:

The Cocoa-Touch Layer
The Cocoa-Touch layer provides an abstraction layer that exposes the various libraries for programming the iPhone and other iOS devices. Cocoa-Touch sits at the top of the hierarchy, in part due to its support for Multi-Touch capabilities. The Cocoa-Touch Layer is made up of the following components:

Understanding Cocoa, the language of the Mac
Cocoa is defined as the development framework used for the development of most native Mac OS X applications. Good examples of Cocoa applications are Mail and TextEdit.

This framework consists of a collection of shared object-code libraries known as the Cocoa frameworks, together with a runtime system and a development environment. These frameworks provide you with a consistent and optimized set of prebuilt code modules that will speed up your development process.

Cocoa provides you with a rich layer of functionality, as well as a comprehensive object-oriented structure and APIs on which you can build your applications. Cocoa uses the Model-View-Controller (MVC) design pattern.

What are Design Patterns?
Design patterns represent specific solutions to problems that arise when developing software within a particular context. A pattern can be either a description or a template for how to solve a problem in a variety of different situations.

What is the difference between Cocoa and Cocoa-Touch?
Cocoa-Touch is the application framework that drives user interaction on iOS. It uses technology derived from the Cocoa framework, redesigned to handle multi-touch capabilities. The power of the iPhone and its user interface are available to developers through the Cocoa-Touch frameworks.

Cocoa-Touch is built upon the Model-View-Controller structure and provides a solid, stable foundation for creating applications. Using the Interface Builder developer tool, developers will find the drag-and-drop method of designing their next great masterpiece on iOS both easy and fun.

The Model-View-Controller
The Model-View-Controller (or MVC) pattern comprises a logical way of dividing up the code that makes up the GUI (Graphical User Interface) of an application. Object-oriented languages such as Java and the .NET languages have adopted the MVC design pattern.

The MVC model comprises three distinct categories:
  • Model: This part defines your application's underlying data engine and is responsible for maintaining the integrity of that data.
  • View: This part defines the user interface for your application and has no explicit knowledge of the origin of the data displayed in that interface. It is made up of windows, controls, and other elements that the user can see and interact with.
  • Controller: This part acts as a bridge between the model and the view and facilitates updates between them. It binds the Model and View together, and its application logic decides how to handle the user's inputs.





What is Object-Oriented Programming?
Object-Oriented Programming (formally known as "OOP") provides an abstraction layer over the data on which you operate. It provides a concrete relationship between the data and the operations you perform on it, in effect giving the data behavior.

By using the power of Object-Oriented Programming, we can create classes and later extend their characteristics to incorporate additional functionality. Members within a class can be protected to prevent them from being exposed; this is called "data hiding".


If you are interested in learning more about Object-Oriented Programming, please consult the Apple Developer documentation or via the following link https://developer.ap...cles/ooOOP.html.


What is Objective-C?
When I first started to develop for the iPhone, I realised that I needed to learn Objective-C, as this is the development language for Mac and iOS, and I found it to be one of the strangest-looking languages I had ever come across. Today I really enjoy developing and working with it, and so will you.

Objective-C is an object-oriented programming language used by Apple primarily for programming Mac OS X, iPhone, and other iOS applications. It is an extension of the C programming language. If you have not done any OOP before, I would seriously recommend that you read the OOP document from the Apple Developer website.

On the other hand, if you have used and are familiar with C, .NET, or Java, learning Objective-C should be relatively easy for you.

Objective-C consists of two types of files:
  • .h: These files are called 'header' or 'interface' files.
  • .m: These files contain your program's code logic and make use of the header files. They are also referred to as 'implementation' files.
Most Object-Oriented development environments consist of several parts:
  • An Object-Oriented programming language.
  • An extensive library consisting of objects.
  • A development suite of developer tools.
  • A runtime environment.
For example, here is a piece of code written in Objective-C:

-(int)method:(int)i {
	return [self square_root: i];
}

If we were to compare this to how the same code would be written in C, it would look like this:

int function(int i) {
	return square_root(i);
}

Let's investigate the code
Now, let's examine the code line by line to understand what is happening.

-(int)method:(int)i {

We declare a method called method, which accepts an integer parameter i and returns an integer result.

return [self square_root: i];

We then pass the value i to a method called square_root, and the calculated result is returned.

Directives
In C/C++, we use directives to include any other header files that our application will need to access; this is done using #include. In Objective-C, we use the #import directive. If you observe the content of the MyClass.h file, you will notice that at the top of the file is a #import statement.

#import <Foundation/Foundation.h>
@interface MyClass : NSObject {
}
@end

The #import statement is known as a "pre-processor directive". As I mentioned previously, in C/C++ you use the #include pre-processor directive to include a file's content within the current source file. In Objective-C, you use the #import statement to do the same, with the exception that the compiler ensures that the file is only included once.

To import a header file from one of the framework libraries, you specify the header filename using angle brackets (< >) within the #import statement. If you want to import one of your own header files into your project, you use double quotes (" "), as you can see in our code file, MyClass.m.

#import "MyClass.h"
@implementation MyClass
@end

Objective-C Classes
A class can simply be defined as a representation of a type of object; think of it as a blueprint that describes the object. Just as a single blueprint can be used to build multiple versions of a car engine, a class can be used to create multiple copies of an object. In Objective-C, you will spend most of your time dealing with classes and class objects. An example of a class is the NSObject class. NSObject is the root class of most Objective-C classes. It defines the basic interface of a class and contains methods common to all classes that inherit from it.

@interface
To declare a class, you use the @interface compiler directive, as follows:

@interface MyClass : NSObject {
}
@end

@implementation
To implement a class declared within a header file, you use the @implementation compiler directive, as follows:

#import "MyClass.h"
@implementation MyClass
@end

Class Instantiation
In Objective-C, to create an instance of a class you typically use the alloc method to allocate memory for the object, followed by init to initialize it, and assign the result to a variable of the class type. This is shown in the following example:

MyClass *myClass = [[MyClass alloc] init];

Class Access Privileges
In OOP, when you are defining your classes, bear in mind that by default the access privilege of all fields within a class is @protected. These fields can also be defined as @public or @private.

The following shows the various access privileges that class members can have:
  • @public: the member is made visible to all classes that instantiate this class.
  • @protected: the member is made visible to the class that declares it, as well as to other classes that inherit from it. This is the default.
  • @private: the member is visible only to the class that declares it.

We have only covered a small part of the Objective-C programming concepts. If you are interested in reading more about this area, please refer to the following website: http://developer.apple.com/documentation/Cocoa/Conceptual/ObjectiveC/ObjC.pdf.


Introducing the Xcode Developer set of Tools
The Xcode developer set of tools comprises the Xcode Development Environment (IDE), Interface Builder, the iPhone Simulator, and Instruments for performance analysis. These tools have been designed to integrate and work harmoniously together.

Introducing the Core Tools
The Xcode IDE is a complete, full-featured development environment that has been redesigned around a smoother development workflow. With the integration of the GUI designer (Interface Builder), it offers a better way to edit source code and to build, compile, and debug your projects.

The Interface Builder is an easy-to-use GUI designer that enables you to design every aspect of your application's UI, for both Mac OS X and iOS applications.

All of your form objects are stored within one or more resource files, and these files contain the associated relationships between the objects. Any changes that you make to the form design are automatically synchronized back to your code.

The iPhone Simulator provides you with a means of testing your application to see how it will appear on the actual device. The Simulator makes it easy to ensure that your user interface works and behaves the way you intended, and it simplifies debugging. The iPhone Simulator does have some limitations, however, and cannot be used to test certain features, so it is always better to also deploy your app to a real iOS device.

The Welcome to Xcode Screen
To launch Xcode, double-click the Xcode icon located in the /Developer/Applications folder. Alternatively, you can use Spotlight: simply type Xcode into the search box and Xcode should be displayed at the top of the list.

When Xcode is launched, you should see the Welcome to Xcode screen, as shown in the following screenshot. From this screen you can create new projects, check out existing projects from SCM, and modify those files within the Xcode integrated development environment. It also contains some information about learning Xcode, as well as Apple Developer resources.

The panel on the right-hand side of the screen displays any recent projects that you have opened. These can be opened and loaded into the IDE by clicking on them.


Posted Image


The Xcode – Integrated Development Environment
The Xcode Integrated Development Environment is what you will use to code your iPhone applications.

This consists of a single-window user interface comprising the project window, the jump and navigation bars, and the newly integrated Interface Builder designer.


Attached Image: 1307_01_03(2).png




Features of the iPhone Simulator
The "iPhone Simulator" simulates various features of a real iOS device. It is, however, just a simulator, and it does come with some limitations.

The following is a list of some of the features you can test using the iPhone Simulator. The screenshot below displays the iPhone 4 Simulator.


Posted Image


Please bear in mind that, being the "iPhone Simulator", it is just a simulator. There are some features it cannot handle at all, which are as follows:

  • Making Phone calls
  • Accessing the Accelerometer/Gyroscope
  • Sending and Receiving SMS messages
  • Installing applications from the App Store
  • Access to the Camera
  • Use of the Microphone
  • Several Core OpenGL ES Features
Companion Tools and Features
These companion tools are classified as profiling tools and instruments, and they handle the following:
  • Performance and Power Analysis Tools
  • Unit testing tools
  • Source Code Management (SCM) / Subversion
  • Version Comparison Tool
Instruments
The Xcode instruments allow you to dynamically trace and profile the performance of your Mac OSX, iPhone, and iPad applications. You can also create your own Instruments using DTrace and the Instruments custom builder.

Through the use of Instruments, you can:
  • Perform stress tests on your applications.
  • Monitor your applications for memory leaks, which can cause unexpected results.
  • Gain a deeper understanding of the execution behavior of your applications.
  • Track down difficult-to-reproduce problems in your applications.


    Posted Image

  • The following figure displays the Instruments environment, where you can create a robust test harness for your application to ensure that memory leaks and resource-intensive tasks are rectified, avoiding problems later when your users download your app.


    Attached Image: 1307_01_06(2).png

If you are interested in learning more about the instruments that are included with Xcode and the iOS 4 SDK, please consult the Apple Developer documentation.


iPhone OS4 SDK New Features
The iOS 4 SDK comes jam-packed with as many as 1,500 APIs. It contains some high-quality enhancements and improvements that open up endless possibilities for developers to create stunning applications.
  • Multitasking: Perhaps the feature everyone has been waiting for. It is an assortment of seven different services: audio, VoIP, location, local and push notifications, task completion, and fast app switching, which together make it possible and simple to use many applications at the same time. Audio can play continuously, calls can be received while your device is locked or other apps are being used, location-based applications will continue to guide you, alerts can be received without the app running, and the app can finish its task even when the customer leaves in the middle of it.
  • Folders: This feature allows you to drag an icon on top of another one, and a new folder is automatically created, named according to the category the particular icon or application comes from.
  • Mail: This has been vastly improved to allow an organized approach of grouping various mails into one folder according to the conversation thread, with added support for more than one Exchange account, which can also be opened through many third-party applications.
  • Game Center: Provides social networking services, where you can take part in leaderboards and participate in other online activities with other players.
  • iAd: A mobile advertising platform that allows developers to incorporate advertisements into their applications. It is currently supported on iPhone, iPod Touch, and iPad.

The previous list contains some of the important new features of iOS 4. If you are interested in a more detailed listing of all of the features in each release, check out the following: http://en.wikipedia....Version_History.


Summary
In this article, you have hopefully gained a good understanding of Xcode, its development tools, and the new and improved single-windowed development IDE. We have also covered some of the basics of Object-Oriented Programming and Objective-C. It will soon become apparent why Objective-C was chosen as the language of choice for developing Mac OS X and iOS applications.

How we Built an iOS game on PC

This article chronicles Catch the Monkey from ideation to sale worldwide in the App Store.

You can find out more about Mirthwerx and our projects at our website.

Intro
Many people want to get into making games, specifically mobile games. Well, we were one of you! This series is for anyone who wants to jump in and do it. Our goal is twofold:
1) To demonstrate that it is possible
2) To share lessons we learned that will hopefully benefit those starting out

Posted Image

About Us
We at Mirthwerx are a team of two: Thomas, the self-taught programmer, and Alex, the artist who studied classical animation at Sheridan. We met 20 years ago in high school and have been trying to make a game ever since.

Before we embarked on this project, I had been writing business web/mobile software with Microsoft technologies for about 15 years. With this background, we knew how to build software properly (OOP, design specs, usability concerns), but you will see later how we failed to apply it.

Design and Prototyping

Technology
From day one, we knew we wanted two things:

1) Android is the future, but iPhone is the now. We will build for both.

2) We want to build on a Windows platform with a familiar environment and tools.

I started investigating the Mac platform and Xcode by buying a Mac mini. After spending a day with Objective-C, I knew I did not want to work in that language at all; it would drive me batty. Fortunately, we could address both goals with one solution: Marmalade (formerly called Airplay, before Apple started calling everything AirPlay).

Posted Image
Here you can see VS2008 C++ debugging and tracing in real time with the iOS simulator

Marmalade allows the user to write once in Visual Studio C++ and run anywhere (iOS, Android, BlackBerry, Windows Phone, Bada, and more). The simulator is excellent, with all the performance monitoring you'd expect, so finding this technology was a total win. The pricing for independent developers is also very reasonable.

Design
Given this was our first title, we wanted to keep the design of the app simple. The initial concept was this:
The player swipes their finger to tickle monkeys in a farmer's field. The monkeys come faster and in greater numbers in each level. The end.

It seemed so simple at the time, and there were only two of us, so we thought we didn't need a proper specification document. Instead we used Xmind (free!) to mind-map all our ideas and kept "the design" in there. The game was intentionally art-heavy, as our artist was able to work full time on this project while I was only able to work evenings and weekends.

Posted Image
Mind mapping is a powerful way to capture ideas quickly and organize them well for later reference. Xmind is a free, open-source tool for mind mapping.

Prototype
In business software an initial prototype for the users is critical; it removes all the guesswork that comes from reading and interpreting a Word document.

Rather than programming a prototype for real, we used an extremely powerful and inexpensive ($40 for a registered version!) game-making tool called GameMaker 8. This allowed us to throw together the graphics that had already been drawn with a few play mechanics and see if we had something fun or not. All in, the first prototype took about 20 hours. Since it was running on a Windows screen, there was no way to test the actual touch/swipe mechanic, so we resorted to clicking, each click simulating a swipe. So the big question: is it fun?

Posted Image
First prototype of Catch the Monkey made in Game Maker 8

No. It was not fun. We changed several variables (speed of the monkeys, clicks to make them laugh, number of monkeys at one time) but it was just too simple. There wasn't enough to do. We couldn't see playing it for more than 2 minutes. We had no desire to make a "gag game", so we went back to the drawing board.

In our design brainstorming session we came up with the idea of using different kinds of tools to interact with the monkeys. Tickling was just the initial tool, a feather; later you could get other tools. This seemed to have some promise. So we thought up several types of tools, narrowed them down to a few that were easy to put into a prototype, and then made prototype 2. In this version the player had an inventory of each tool. When one ran out, the farmer would call his wife for a refill, which would appear a few moments later. It made the player think about which tool to use when. We also gave the player control of the farmer: they could direct the farmer to walk to certain areas or pick up a certain monkey. Finally, we added the concept of catching stars. Every so often a star would pop out, and the player would have to click it to catch it. Stars would be used later for upgrades, though we never built that into the prototype. So: is it fun?

Posted Image
Prototype 2 Made with Game Maker, notice the inventory counts for the differing tools

Yes and no. There was a kernel of fun in there trying to get out, but many things were still blocking it. We knew choosing (essentially strategizing) between tools was fun, and catching stars was fun (it was spontaneous, different, and difficult). We dropped controlling the farmer (too cumbersome) and dropped the refill concept (too complex and arbitrary). We needed a game mechanic that let the player strategize and manage resources.

I must note that when we prototyped, we didn't just test amongst ourselves but with others who were not involved in the project, to get their honest feedback. Those working on a project are too biased to give a proper perspective on what they are testing. You'll see later how this came back to bite us.

Conclusion
At this stage we sat down for our third and final all-day brainstorming session. We went through many concepts before considering the mana/cooldown mechanic from WoW. In WoW the player can't just cast all the spells they want; a mana bar limits the number that can be used in a short period of time. But some spells are so powerful that, while they do use up mana, they must also cool down for a long time (several minutes), so they cannot be reused within a single battle. We felt that if every tool drew from a common energy pool but had varying cooldowns, we could strike the strategic balance we were looking for. With enough variables we could keep things fresh and interesting for the player, keeping them engaged and having fun. A minimal sketch of this rule appears below.
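To make the mechanic concrete, here is a minimal sketch of the shared-energy-plus-cooldown rule, in C++ since that is what the game was built in. All names and numbers here are illustrative assumptions, not the game's actual code:

struct Tool {
    float energyCost;  // drawn from the common energy pool on use
    float cooldownMs;  // per-tool lockout after each use
    float readyAtMs;   // earliest time this tool may fire again
};

// Returns true and applies the costs if the tool can be used right now.
bool TryUseTool(Tool& tool, float& energyPool, float nowMs) {
    if (nowMs < tool.readyAtMs) return false;       // still cooling down
    if (energyPool < tool.energyCost) return false; // player too drained
    energyPool -= tool.energyCost;                  // spend from the shared pool
    tool.readyAtMs = nowMs + tool.cooldownMs;       // start this tool's cooldown
    return true;
}

The design point is that cheap, short-cooldown tools stay spammable while powerful tools remain rationed, even though both draw from the same pool.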

One additional thing we decided was to create Star Powers. Stars to this point were used only as currency to purchase upgrades, but a Star Power is a special ability that can help you right now, mid-level, for a certain cost in stars. Making stars dual-purpose, and facing the player with a choice between a momentary benefit now and a long-term payoff later, is a great mechanic we tried to bring to other aspects of the game. It became a fun challenge for us as designers to make Star Powers that were really, really useful, but that the smart player uses only sparingly so they can still get all the upgrades.

Posted Image
The final toolbelt design.

With the design phase basically finished (design happens all the way through), we proceeded into building the core game.

Building the Core

Intro
In the first article, we covered how Catch the Monkey went from an initial simple concept, through the technology we chose, to the prototyping phase. At the end of prototyping we had a greatly expanded design but, despite knowing better, we didn't document it thoroughly. We knew we had 12 tools to create, 10 types of monkeys, and some vague concept of a store which would allow the purchase of upgrades. How many upgrades, and what they would do, was not finalized. It was time to start coding!

This article is longer than the previous one; I have tried to keep it to a reasonable length by highlighting only the most interesting aspects of the core construction phase. If you have a specific question, just post a comment and I'll respond.

We Going to Do this or Not?!
As mentioned in the first article, the artist was working full time, but I, as the programmer, was only able to work part time, as I was required by other aspects of the business. The project dragged. It finally reached the point where the project would have been cancelled due to lack of progress. Instead, I mapped out the time remaining to build the game: about 6 weeks (50 hrs x 6 = 300 hrs) should do it. I made an extreme decision: I booked a 6-week hiatus from work to go to my cottage and focus 100% on the game. While my wife was less than thrilled, she was supportive of seeing me get the game done. It was time to go "all in". Hindsight confirms this was the right way to recover the project.

Our Single Biggest Mistake
Not having a properly defined design document would appear to be our largest mistake, but we made one that completely dwarfed it.

If you study the zombies in Plants vs Zombies, you will see there are many types of zombies, but they are made up of several graphical parts (head, body, arm, arm, legs) and several optional decorators (pylon, helmet, paper). By reusing and varying these components you can have many different types of zombie with minimal memory requirements. We wanted a similar approach with many kinds of monkeys each with varying abilities and weaknesses.

Posted Image

However, as we painfully learned later, if you want this kind of reuse you have to lay down very specific rules about what the characters can and cannot do. Notice in Plants vs. Zombies that the zombies always face the camera (like the 2D South Park animation). No matter what they do, they never turn away from the camera to a profile view.

Well, early on in our animation and prototyping we decided that when the monkey arrives at a plant he will plop down, TURN HIS BACK, and begin digging. Then when he gets a potato, he will TURN BACK and proceed to eat it. We completed all the artwork for the regular monkey before we discovered what a problem this was. When we wanted to have a hat monkey, we thought we would just create a separate hat object, attach it to the monkey, and off we go. As we did it, we realized the hat (or vest, or sunglasses) has to turn with the monkey as he turns away from the camera. This requires one decorator frame per monkey frame and pixel-perfect alignment, which means a whole host of painstakingly researched coordinates per frame to get it all to look right. It was so much work, and we didn't want to redraw the digging animation, so we made an expedient decision: just duplicate all the frames for the regular monkey to the hat monkey with the hat pasted right into the frame. The artist went ahead and did this for each of the 6 additional types of monkeys.

Here is the math of why this was such a problem later:
1 monkey has a set of interaction sprite sheets (fear, ducky, laughing, walking, climbing, etc.) taking about 20 MB of VRAM.
7 monkey types x 20 MB = 140 MB of VRAM.
The iPhone 3GS (iPod 3+) only has ~55 MB of VRAM available (with a 15 MB heap) before it starts crashing.
We had initially wanted to target the iPod Touch 2+, but it has only 30 MB of VRAM, which made that impossible. So we raised the system requirements to iPod 3+ and scrambled to get the VRAM usage down. We'll talk more about this in the next article.

So the lesson is: always map out memory requirements during the design phase, before you build, rather than in the middle or after. Had we known the ramifications of the monkey turning away from the camera, we would have gone a different direction with the art and the game wouldn't have been noticeably different.

Cute Monkeys in a Nasty Real-Time World
Many business developers I know avoid writing multi-threaded solutions when they can. Why? Because the race conditions that can occur between two separate threads doing their own thing are a nightmare for testing. There are so many permutations of what could be happening simultaneously in the application that if it crashes, it is difficult to reproduce, never mind fix permanently.

Posted Image

When it comes to games, they are already real-time in the sense that the Update() loop executes every so many milliseconds no matter what. There is no concept of "blocking" calls like there is in Windows Forms development. This is just the way games are, but it is not what I'm referring to.

I’m talking about a real-time game verses a turn based game. A turn based game waits for user input, then responds accordingly; while waiting for user interaction there may be things happening on screen, nice effects and such, but the actual state of the game doesn’t change. In a real-time system the game state is constantly changing regardless of player interaction.

For our first time game, we NEVER should have chosen to do a real-time game.

Catch the Monkey took an incredible amount of effort to make everything work in a constantly changing environment. The number of testing scenarios is probably 20 times greater than for a turn-based system, and the ability to replicate scenarios is extremely difficult, even with specific unit tests programmed to occur. There was a point late in the construction phase when I wasn't sure I could ever get it to stop crashing. Fortunately Marmalade has some amazing memory-monitoring tools built in, and with them I was able to find all the issues (I think!).

We learned this lesson so bitterly that the next title we are currently working on is turn-based.

Object Hierarchy
Obviously the power of OOP is the ability to build small, focused, encapsulated objects and then work with them at a higher level. My goal was to create an object hierarchy that knew how to instantiate, move, and render itself.

There was a time in my career when I didn't do modelling. Once someone showed me Rational Rose, UML, and modelling, I never went back. I always model my code, even personal projects no one will ever see, because I find it the best way to think through the problems before the code gets in the way. Rational Rose (or any proper modelling tool) helps you think through the design as you design. I used Rational Rose for several years, but when I went out on my own I couldn't afford the $2,000/seat license. Fortunately the open-source community came to the rescue with StarUML, a powerful free object modelling tool. It is virtually identical to Rational Rose (at least to the last version I used in 2003).

Posted Image
Looking at the class design diagram, notice the two fundamental objects: GameObject and UIObject. Both of these inherit from Graphic. Graphic encapsulates all the Marmalade 2D API interaction, and therefore is necessary for rendering whether it is a monkey, a story slide, or a text object.

A GameObject is an object used in a GameScene (which is a level you play). It manages its own state, sprite sheets, depth calculation, scaling (based on depth), click handling, and hit detection. All play objects inherit from GameObject. UIObject is similar to GameObject, but is more lightweight and designed for non-play scenes, such as text, buttons, and images in the store or tool selection screens.

Design Patterns
We used GoF design patterns as necessary. For example:
  • We used the Factory pattern for our Level class; feed in a week and day, and it spits out a formatted level object, complete with any necessary tutorials.
  • We used two singletons for caching image files and sound files, called GraphicManager and SoundManager; even though each object is responsible for loading/unloading its assets, it does so through these caches to minimize the actual memory used (a sketch of this cache follows the list).
  • We used a singleton for player state (number of stars, current progress, which tutorials have fired, upgrades purchased). This made it extremely simple to serialize/deserialize player progress.
  • We used the Decorator pattern for adding graphical effects to any GameObject, such as fade in, fade out, flashing, etc.
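As a concrete illustration of the cache idea, here is a minimal sketch of what a GraphicManager-style image cache might look like. The class name matches the article, but the members and the reference counting are assumptions about one reasonable implementation:

#include <map>
#include <string>
#include "Iw2D.h" // Marmalade 2D module: CIw2DImage, Iw2DCreateImageResource

class GraphicManager {
public:
    static GraphicManager* Instance() {
        static GraphicManager instance; // lazily created singleton
        return &instance;
    }

    // Returns a cached image, loading it on first request.
    CIw2DImage* GetImage(const std::string& name) {
        Entry& e = m_cache[name];
        if (e.image == NULL)
            e.image = Iw2DCreateImageResource(name.c_str());
        e.refCount++;
        return e.image;
    }

    // Objects release images through the cache instead of deleting them.
    void ReleaseImage(const std::string& name) {
        std::map<std::string, Entry>::iterator it = m_cache.find(name);
        if (it != m_cache.end() && --it->second.refCount <= 0) {
            delete it->second.image; // frees the texture from VRAM
            m_cache.erase(it);
        }
    }

private:
    struct Entry {
        CIw2DImage* image;
        int refCount;
        Entry() : image(NULL), refCount(0) {}
    };
    std::map<std::string, Entry> m_cache;
};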
One of the early conceptual struggles I had was how to bring all the different types of screens (a store, a tool selection, story modes, title screens, option/menu screens, game modes) together into a nice organized OOP paradigm. While researching I found two excellent articles by iPhone game maker rivermanmedia:
The Scene System
The GUI Stack

I knew this paradigm was the way forward, not just for this game but probably for all future games.
Posted Image
The scene system breaks the game down into a series of scenes. In Catch the Monkey I ended up with 19, such as SceneTitle and SceneDialog. Each of these inherits a common interface from Scene: Init(), Update(), Render(), and Shutdown(). I created a SceneManager singleton that contains all the logic related to scene creation, shutdown, and transition. Now my code can be blissfully unaware of what else is going on at a higher level. If I want a scene to end and a new scene to begin, I call:

SM->ChangeScene(new SceneShop());
If I want the new scene to be focused and on top of the current scene, I call:

SM->AddScene(new SceneOptions());
The SceneManager knows if other scenes are currently involved, winding them down appropriately, removing their assets from memory, doing a fading transition, then initializing and firing up the new scene. With this in place, the real-time game behaves more like a Windows Forms application, with dialogs able to call dialogs, leaving the "OS" to worry about sorting it all out. A sketch of the Scene interface and the transition logic follows.
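Here is a minimal sketch of how that interface and the ChangeScene path might look. The class and method names follow the article, while the bodies are assumptions about one reasonable implementation:

#include <cstddef>

class Scene {
public:
    virtual ~Scene() {}
    virtual void Init() = 0;     // load assets, set up state
    virtual void Update(int dtMs) = 0;
    virtual void Render() = 0;
    virtual void Shutdown() = 0; // release assets
};

class SceneManager {
public:
    static SceneManager* Instance() {
        static SceneManager instance;
        return &instance;
    }

    // Wind down the current scene and start the new one in its place.
    void ChangeScene(Scene* next) {
        if (m_current != NULL) {
            m_current->Shutdown();
            delete m_current;
        }
        m_current = next;
        m_current->Init();
    }

private:
    SceneManager() : m_current(NULL) {}
    Scene* m_current;
};

#define SM SceneManager::Instance() // matches the SM-> shorthand used above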

Posted Image

The second key concept is the GUI stack. The GUI stack sits inside the SceneManager and replicates the "focus" of a scene, just like Windows does for forms and dialogs. By pushing and popping scenes onto the stack, I can control which scene has its Update() and Render() code called. If a scene doesn't receive the Update() call, it is effectively frozen in time (paused). In pure form, the top scene is the only one to have its Update() called, while all scenes in the stack have their Render() called; later, in testing, I stopped calling Render() on every scene in the stack as a performance improvement. For scenes that require a background scene (such as a dialog window appearing over top of the game screen) I instead take a screenshot of the current state, then display that as a backdrop to whatever the current scene is. A sketch of the stack dispatch follows.
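A sketch of that stack dispatch, in its pure form (before the Render() optimization) and assuming the Scene interface from the previous sketch, could look like this:

#include <vector>

class SceneStack {
public:
    void Push(Scene* scene) { scene->Init(); m_stack.push_back(scene); }

    void Pop() {
        if (m_stack.empty()) return;
        m_stack.back()->Shutdown();
        delete m_stack.back();
        m_stack.pop_back();
    }

    // Called once per frame: only the top scene advances (everything below
    // is effectively paused), while every scene still draws, bottom-up.
    void Tick(int dtMs) {
        if (m_stack.empty()) return;
        m_stack.back()->Update(dtMs);
        for (size_t i = 0; i < m_stack.size(); ++i)
            m_stack[i]->Render();
    }

private:
    std::vector<Scene*> m_stack;
};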

Using Marmalade in 2D
As previously mentioned, we were targeting both iPhone and Android simultaneously with C++ in Visual Studio 2008. While Marmalade is a 3D framework, we knew we were making a 2D title and therefore focused on the Iw2D APIs. I’ll highlight the fundamentals of 2D animation with Marmalade’s 2D API.

As you you’ll see, Marmalade works at a pretty low level. This isn’t GameSalad here, and that is one of the reasons I chose it. Given the choice, I prefer the flexibility and power of a low level API rather than being limited to what a framework designer decided I should be able to do (or not!).

Marmalade works its magic by using a custom make file called an MKB. This file allows you to define the Marmalade libraries to pull into the project, source code, assets (sounds), fonts, and texture groups.

Marmalade has a resource manager that allows the management of image groups (texture groups) by defining them like this in the MKB file:

# Provide access to resource objects via IDE
		  ["Resources"]
		  (data)		
		  fonts.group
		  templates.itx
		  UI.group
		  Loading.group
		  Title.group
You then define all your images in custom group files:

UI.GROUP
CIwResGroup
{
		  name "UI"  
		  shared true
		  useTemplate "image"	"image_template"
		
		  "./accountbuttons.png"
		  "./account1.png"
		  "./account2.png"
		  "./account3.png"
		  "./black.png"
		  "./bluestarbg.png"
		  "./pause.png"
Within the code you can test whether a resource group is already loaded into memory, and then load or unload it through two simple function calls:

if (IwGetResManager()->GetGroupNamed("farm", IW_RES_PERMIT_NULL_F) == NULL)
IwGetResManager()->LoadGroup("farm.group");
Or:

 		 IwGetResManager()->DestroyGroup("farm");
Images are loaded (and automatically uploaded to OpenGL VRAM) by asking for the image by name (without the .png extension):

CIw2DImage* img = Iw2DCreateImageResource(name);		
Once you have an image in memory, it can be rendered simply by calling the image drawing routine with the image you want and the 2D vector position.

Iw2DDrawImage(img, CIwSVec2(x,y));
Marmalade automatically queues up all of the drawing calls in the order in which you make them, so you can control layering by issuing your background draw calls first. So before running through my Render() routine, I would sort all my objects by depth (lowest to highest) and then draw them in that sequence; a sketch of that pass follows.
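A sketch of that sorted draw pass might look like the following. GameObject here is a cut-down stand-in with just the fields the pass needs; the real class described earlier is much richer:

#include <algorithm>
#include <vector>
#include "Iw2D.h"

struct GameObject {
    int depth;         // lower = further back
    int16 x, y;        // screen position
    CIw2DImage* image; // current frame / sheet
};

static bool ByDepth(const GameObject* a, const GameObject* b) {
    return a->depth < b->depth; // background objects draw first
}

void RenderAll(std::vector<GameObject*>& objects) {
    std::sort(objects.begin(), objects.end(), ByDepth);
    for (size_t i = 0; i < objects.size(); ++i)
        Iw2DDrawImage(objects[i]->image, CIwSVec2(objects[i]->x, objects[i]->y));
}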

To complete the rendering I call these two routines, which tell Marmalade I'm finished, show it to the world:

Iw2DFinishDrawing();
Iw2DSurfaceShow();
That’s it. Call those drawing routines each frame and you’ve got yourself a game.

Simplifying Sprite Sheets
The game comprises over 4,000 frames of hand-drawn animation, most of it for the monkeys interacting with their world. To manage all these images, we put them into sprite sheets. Two issues needed to be considered (a helper sketch for the power-of-2 sizing follows the list):
  • No dimension of the sprite sheet could be larger than 1024 (the iPhone doesn't like textures bigger than this, and Marmalade started fuzzing them)
  • Sprite sheet dimensions needed to be powers of 2 (32, 64, 128, 256, 512, 1024) for the graphics card. If they weren't, the graphics card would pad them out to powers of 2 anyway.
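A sketch of the padding computation such a tool needs (our actual sprite sheet program is not shown here) might be:

// Round a sheet dimension up to the next power of two, capped at 1024.
int NextPow2(int n) {
    int p = 32;               // smallest dimension we used
    while (p < n && p < 1024) // 1024 is the iPhone texture cap noted above
        p <<= 1;
    return p;                 // if n > 1024, the sheet must be split instead
}

For example, NextPow2(300) returns 512 and NextPow2(120) returns 128, so a 300x120 strip would be padded out to a 512x128 sheet.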
Posted Image
An example of a GameMaker strip.

Photoshop does not have an easy way to make a sprite sheet where each frame is uniform in height/width. So we discovered a trick that saved us dozens of hours:
  • Save each frame as a PNG from Photoshop
  • Create a sprite in Game Maker by dragging and dropping each PNG frame from the file system for a given animation
  • Export the sprite from Game Maker as an animation strip, which is each frame appended into one long horizontal PNG with the number of frames appended to the file name
  • Run our custom sprite sheet program on the strip, which breaks it out into a rectangle of the smallest power-of-2 dimensions as a PNG
While it sounds involved, we could go from a collection of PNG frames to a squared sprite sheet in under 2 minutes. Just for its ability to create sprite strips, GameMaker is well worth having!

Posted Image
The final sprite sheet, power-of-2 sized. You can't tell, but we also dropped the color depth from 32-bit to 16-bit to save memory.

Conclusion
Who is the harshest critic, the audience or the musician? The musician, for they have the double burden of knowing every note they missed and how much better they played during practice. So while creators are extremely biased and forgiving of their creations, there is a harsh gap between what they intended and what they ended up making. I would say the music is always much sweeter in the imagination than on the page.

At the end of the 6 weeks I had finished building the core of the game, coming in at around 340 hours. Knowing I have a personal bias, having played the game over a thousand times during build cycles, I concluded the game was actually fun. There was something magical about trying to entertain 3-5 monkeys simultaneously. Because I was away at the cottage, the artist had to take my word for it; since I hadn't yet figured out how to deploy it, he had no way to play it other than on my laptop. But knowing we had a good core, something we were proud of, gave us the determination for the toughest fight yet: polishing.

Balancing and Polishing

Intro
At this point we had a working game, around 90% feature complete. The player could start a new game, play each level, interact with all 7 monkeys, use all 10 tools, save up stars, buy 28 upgrades in the store, use all 4 star powers, and save/resume their game. We declared ourselves feature complete. If we had created a more detailed design document we would have realized how untrue that statement actually was!

The Last 10% takes 90% of the Time
We didn’t know how long it would take to establish a publisher relationship, so we started showing an early prototype to some publishing agents. One commented that the game core was good, we definitely have a quality AAA title here, but we still have a lot of work left for polishing. We thought that was a rather silly statement, and proceeded to finishing off the loose ends and game balance figuring we would be in the store in about 2 weeks.

This is where games are vastly different from business software. In business software, when we are feature complete with all unit testing completed, 80-90% of the work really has been done. Integration testing reveals its issues, but they are generally just misunderstandings between developers and the spec that need to be resolved. In a real-time game, integration testing is about 50% of the work because of the layered interactivity and dependencies of the game elements on each other.

Getting it on a Device
Up until this point, the game could only be played in the Marmalade PC simulator. We still had no idea whether performance would be an issue (memory or FPS) on real devices. It also severely hampered unit testing, as the artist couldn't test the game at all. It was time to deploy to a device.

Posted Image
Marmalade Deployment Tool for making the IPA file

Apple is extremely careful about what can be put onto its devices. This is good, as it cuts down on piracy, but it also creates a whole series of hoops you must jump through to sign your code and get device IDs and certificates for deploying test builds to a device.

If you are using Xcode on a Mac (especially the latest version of Xcode), it is a relatively straightforward process, and Xcode takes care of most of it for you; all of Apple's documentation tells you step by step how to do it with Xcode. If you are on a PC, well, prepare for some hassle.

There are two things you must do: 1) set up your machine for iOS deployments and 2) set up your project for iOS distribution. Fortunately, the documentation in Marmalade 5.2 on how to create distribution builds is much improved over previous versions. There is a walkthrough that explains how to create a certificate, which you then upload to Apple, and how to download the Apple certificate(s) and where to put them.

With the PC set up, the project must be configured and signed for distribution. The Apple dev portal is used to assign the UDIDs of devices allowed to run an application, and Apple provides a provisioning certificate used to sign your project. Marmalade has a deployment tool that appears when making a release ARM build of the project. You enter provisioning and OS-specific options into this tool (it saves the settings to the custom MKB file) and it does the magic of making you an IPA that can be deployed through iTunes to your iOS device.

All in, I was able to get the game onto my iPod in about 10 hours. This was a vital step, because we needed the touch screen to test our gestures.

Getting Jiggy with Gestures
If you recall, our core design had the player using their finger to tickle a monkey by swiping. After some prototyping, we needed other tools to break up the monotony of constantly swiping back and forth. One of our early influences was the Gameloft game Bailout Wars: the player flicks bankers to their doom, but you also have to make other motions as well.

Posted Image

We studied numerous games and came up with this list:
  • Tap
  • Tap & hold
  • Swipe Horizontal
  • Swipe Down
  • Flick Up
  • Circle (clockwise or counter clockwise)
We chose Tap, Swipe Horizontal, Swipe Down, and Flick Up (we also had Tap & Hold, but cut it later). We tied these gestures to tools where they made sense: the paper bag, for instance, is placed on a monkey's head, so the player swipes the bag down onto the head.

iOS and Android support multi-touch (up to 10 points), but we decided to stick to single touch. A touch is nothing more than a pointer event, so we inspect the s3ePointerEvent in Marmalade to capture it into a global touch variable like this:

void SingleTouchButtonCB(s3ePointerEvent* event)
{
		g_Touches[0].active = event->m_Pressed != 0;
		g_Touches[0].x = event->m_x;
		g_Touches[0].y = event->m_y;
		g_Touches[0].when = (int32)s3eTimerGetMs();
		g_Touches[0].handled = false;
}
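For this callback to fire, it has to be registered with the pointer subsystem at startup. That registration step is not shown in the article; standard s3ePointer usage looks roughly like this:

// Register for press/release events. A similar callback registered for
// S3E_POINTER_MOTION_EVENT would keep the x/y fields current while the
// finger moves across the screen.
s3ePointerRegister(S3E_POINTER_BUTTON_EVENT, (s3eCallback)SingleTouchButtonCB, NULL);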
While it is nice to know where a finger currently is, how do you know whether the user is making a gesture or not? The answer is that you have to work that out yourself.

A gesture begins when the finger touches the screen. From then on, the current position of that touch must be tracked at a regular interval until the finger is lifted. The difference in origin, the progression across the sampled points, and the exit point must be analyzed to determine what kind of gesture was made. I used the Strategy pattern in a generic Tool class, with the child class for each tool implementing its own gesture recognition.

So while this explains the technical side of how to do gestures, a lot of refinement was necessary to get them to "feel" right. It is amazing how differently each person performs a simple left/right swipe. Some people do very gentle little swipes of 10 pixels, while others go all the way across the screen. Some swipe straight across, others on a diagonal. Some do it so slowly it didn't register, and some so fast it didn't register (we found the "right" granularity for the timing interval of each point to be 50 ms). At the end of the day, we went from a very strict gesture system, where a horizontal swipe couldn't have more than 20 pixels of vertical movement, to a very loose one where just about anything goes! A sketch of such a loose check follows.
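As an illustration of how loose the final checks became, here is a sketch of a forgiving horizontal-swipe test over touch samples taken every 50 ms. The thresholds are illustrative, not our shipped values:

#include <cstdlib>
#include <vector>

struct TouchSample { int x, y, whenMs; };

// True if the sampled touch path reads as a left/right swipe.
bool IsHorizontalSwipe(const std::vector<TouchSample>& samples) {
    if (samples.size() < 2) return false;
    int dx = samples.back().x - samples.front().x;
    int dy = samples.back().y - samples.front().y;
    // Forgiving: any mostly-horizontal movement of real length counts,
    // no matter how slow, fast, short, or diagonal the stroke was.
    return std::abs(dx) > 10 && std::abs(dx) > std::abs(dy);
}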

Saving the Story To Last
Catch the Monkey is an action game. While we needed some sort of story to set the context for the player's actions, we knew we didn't need Crime and Punishment here. From the very beginning, we had a rough story outline:

A farmer in South Africa has a potato farm. As he sits down to lunch with his wife, a monkey is spotted in the field. He goes outside to take care of the monkey, but more and more keep coming. Eventually he overcomes all the monkeys and returns to a cold lunch. Fin.

Ok, so we aren’t winning any Oscars for screenplay writing, but it was enough to get going so we focused on building the game and then would circle back to the story.

BIG MISTAKE!

Posted Image

When it came time to do the story sequences, I decided to ask my friend Rob for help. We sat down one evening to hash out the story, and he started asking basic background questions for which I had no answers:
  • How well does the farmer do, is he poor or rich?
  • How’s his marriage, good or strained?
  • What’s his demeanor: happy or surly?
  • How long have they lived in this location?
This may seem like silly, fluffy stuff, but it isn't. I knew from research into how to write fiction that before you have a story, you have to have characters. It is the characters that drive the story, and we didn't have characters. So Rob and I had to define the characters first.

Throughout the game the player unlocks new tools. How are these communicated to the player? We decided the farmer's wife "finds" them and makes them available to the farmer in his tool shed, and we chose to use a dialog sequence to put each tool's arrival into some context. Well, you cannot write effective dialog (even simple monkey-catching dialog) without knowing the character's voice, so these "fluffy" questions had to be answered before we could write a single line of dialog.

We then had to answer the two big questions:

1) Why are the monkeys coming to the farm in the first place? (Why now and not a year ago, or why not a year from now?)
2) How is it that the player stops the monkeys from coming (by resolving #1)?

We went through a lot of ideas that night, but everything good we came up with changed the flow of the game, or introduced new characters (like a boss monkey), and we just couldn’t afford to do all those changes this late in the game development. It was overwhelmingly evident we should have had this meeting in the first week of the game, not the last.

Before you can write a line of dialog, you have to know the character's voice

We wrote the best story we could without changing the game or requiring a lot of new art assets. The first rule of writing is “write what you know”, so in the end I based the farmer and his wife on myself and my wife. The problem the farmer faces of trying to get rid of the monkeys, which seems so simple at the beginning, becomes overwhelming and takes over his whole life. This is actually a metaphor for what the monkey game became to me. When the farmer bemoans the monkeys never ending, that’s how I felt about the amount of work the game kept requiring. But in the background, unfazed, is the wife: helping where she can, encouraging when needed. Many of the lines in the game are verbatim what my wife said. When the farmer’s wife goes away on a girls’ trip in the middle of it all, that happened in real life too.

Of course, all this is probably far too sophisticated for a simple action monkey-catching game, but it is in there nonetheless.

Levels and the Game Master
I’ll admit this up front: sometimes I’m just plain lazy. But sometimes laziness is the mother of invention.

When we built the second prototype, each level had scripted events: when a monkey is released down a tree, the type of monkey, the size of a wave of monkeys at one time. Most of these events were time driven. This is how classic action games, like Capcom’s 1942, are made. Each playthrough is the same.

It was a lot of work scripting each level with all the events, and frankly I didn’t want to do it again in the real game. There are also problems with scripting: how do you know if the player is bored or sufficiently challenged? Releasing a monkey every 10 seconds may be fun for me, but too easy/hard for you.

So we tried to think of an alternative: what if we had a Game Master (to borrow the RPG term) that determines when, where, and what type of monkey to release based on how well the player is doing? If a player is doing poorly, the game won’t become ridiculously intense, and if they are doing well, it won’t get boring. We would define rules for the GM to follow and vary those rules from level to level. For instance, in some levels the GM would be fast and furious, in others a slow build-up. Even better, the GM can monitor things in real time, like the player’s energy level, and make smart decisions at the moment of knowledge rather than guessing with scripting.

This seemed like a radical idea to us, so we weren’t sure what the negatives would be to building this dynamic AI “level designer” just so I didn’t have to do all that scripting.

I don’t remember why, but for some reason I felt I should play Valve’s Left 4 Dead, the FPS zombie game. I generally don’t like zombie shooters so I had never played it before. I got it off Steam and noticed somewhere in the description “The Director”, which is essentially a level AI that responds in real time to the players to give a different experience each playthrough. Once I saw Valve did it, I knew we were on the right path!

If Valve did it, so can we!

It took a few days to build the Level GM, but once it was done it was a total win. No scripting required, all we had to do was define for the GM the resources it has (types of monkeys, total number of each monkey allowed at a time) and the level of intensity we want (earlier levels are easier than later ones).

In the book Andrew Rollings and Ernest Adams on Game Design (some of it is free on Google eBooks) they explain how it is more fun to have waves of intensity followed by relief, rather than constant intensity. Plants vs. Zombies follows this formula perfectly by providing big waves of zombies followed by few zombies so you can rebuild. We implemented this concept by adding “moods” to the GM (sketched below). Based on many factors, the GM will “go evil” on you and break the intensity rules we set out. But at other times it will “go nice” and give you a chance to catch your breath.
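As an illustration of the approach only (every name here is hypothetical; our actual GM tracks more state than this), the core of such a Game Master can be a small decision tick fed with live game data instead of a script:

// Hypothetical sketch of a Game Master decision tick, not the shipped code.
struct LevelRules {
    int maxMonkeysOnField;  // the monkey "budget" for this level
    float baseIntensity;    // 0..1; higher means more frequent releases
};

enum class Mood { Normal, Evil, Nice };

class GameMaster {
public:
    explicit GameMaster(const LevelRules& rules) : m_rules(rules) {}

    // Called every frame with live game state instead of a fixed script.
    void Update(float playerEnergy, int monkeysOnField, float dtSeconds) {
        m_timer += dtSeconds;
        float intensity = m_rules.baseIntensity;
        if (m_mood == Mood::Evil) intensity *= 2.0f;   // break the rules
        if (m_mood == Mood::Nice) intensity *= 0.25f;  // a chance to breathe
        if (playerEnergy < 0.3f) intensity *= 0.5f;    // ease up on strugglers

        const float releaseInterval = 10.0f / (intensity + 0.01f);
        if (m_timer >= releaseInterval &&
            monkeysOnField < m_rules.maxMonkeysOnField) {
            ReleaseMonkey();
            m_timer = 0.0f;
        }
    }

private:
    void ReleaseMonkey() { /* pick the tree and monkey type here */ }
    LevelRules m_rules;
    Mood m_mood = Mood::Normal;
    float m_timer = 0.0f;
};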
We’ve detailed plenty of our mistakes in these articles, so we’re proud to say this is one where we nailed it! We’ll be using this approach in our future titles.

Game Balancing: Redeeming Features
Game balancing is tough stuff! There are no strict rules as to what makes something fun, especially in combinations. In the end we had to go with our personal play experience, and then test on others.

It took 6 weeks to make the core game, and another 10 weeks to balance it. As I look over my work logs, up until the last day we were changing cooldowns, upgrade costs, and energy costs. I could give many examples from our time of play balancing, but I believe the most valuable lesson I can share is how we took something that wasn’t good and made it great. This at the core is what the balancing phase entails. So here is the brief story of the Paper Bag:


A book I read on fiction writing stated that every character thinks they are the hero of the story, so give them a chance to shine. Think of how in Lord of the Rings Sam, a tag-along character for much of the story, gets to be the hero when he carries Mr. Frodo up Mount Doom. I think this applies to game features: in our case, every tool should get to be the hero and have a legitimate chance of being a player’s favorite.

Each tool has its own purpose. Some are for taking out individual monkeys, some are for dealing with groups. Some are to prevent them from coming in, some are for dealing with them as they come in. Some require the player’s attention and dexterity, some are great because they don’t require any attention.

The paper bag is a high cost high impact tool that completely paralyzes one monkey (intended for a high threat monkey) for a long period of time. It also has an Area of Effect (AOE) which causes other monkeys to be distracted and laugh at the silly monkey stumbling around.

In the first iteration, once the bag was placed onto a monkey’s head it immediately made all monkeys in range laugh. Play testing revealed players preferring other tools over the bag. It’s not that it was bad, but it wasn’t good.

We played around with the cost, cooldown, and range. At one point it made all the monkeys in the field laugh. But still the other tools were better.

Then we thought: what if, instead of a one-time AOE effect, it had a continual AOE? For the entire time the target monkey stumbled around, any monkeys in range would be influenced. This changed the best time to use a paper bag from where a lot of monkeys are currently, to where a lot of monkeys WOULD BE. By making the AOE effect continuous we now had a very effective crowd-control AOE tool while still incapacitating one target monkey.
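The gameplay difference boils down to where the area check happens: once at placement versus on every update tick. A sketch, assuming invented names (Monkey, PaperBag, and the constants are illustrative only, not the shipped code):

#include <vector>

struct Monkey {
    float x, y;
    void Paralyze(float seconds) { /* stumble around blindly */ }
    void Laugh(float seconds)    { /* distracted, stops advancing */ }
};

class PaperBag {
public:
    PaperBag(Monkey* target, std::vector<Monkey>* field)
        : m_target(target), m_field(field) {
        m_target->Paralyze(kDuration);
        // Old behavior would fire a single AOE pulse right here.
    }

    // New behavior: re-apply the AOE every tick while the bag is active,
    // so it affects where the monkeys WILL BE, not just where they were.
    void Update(float dt) {
        m_remaining -= dt;
        if (m_remaining <= 0.0f) return;
        for (Monkey& m : *m_field) {
            float dx = m.x - m_target->x, dy = m.y - m_target->y;
            if (dx * dx + dy * dy < kRange * kRange) m.Laugh(0.5f);
        }
    }

private:
    static constexpr float kDuration = 8.0f;
    static constexpr float kRange = 120.0f;
    Monkey* m_target;
    std::vector<Monkey>* m_field;
    float m_remaining = kDuration;
};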


Now the paper bag is fantastic!

During game balancing some diamonds are in the rough. Some are rougher than others. It requires determination to cut into them to let the inner brilliance shine forth.

Features: Knowing when and how to say NO!
A game can be built forever. Should I even mention the obvious example of Duke Nukem Forever? When the goal is to create a rich enjoyable play experience, the temptation is to keep adding more and more features, because more = better, right?

Sometimes no. In our case we had to ship the game, not just for financial reasons but for psychological ones. After 2 years it is hard to keep up the enthusiasm. As we played and balanced, many ideas would come up. For instance:
  • We needed to add an easy (casual) mode for non-gamers or younger children
  • We needed a visual and audio warning to the player that they just lost a plant (seems obvious now, but it wasn’t added until the second-to-last day)
  • We needed more sound effects
  • We changed the “level complete” requirement from reaching a certain number of monkeys, to reaching a certain number AND clearing the whole field
All of the above were implemented, but they were unscheduled tasks and our release date kept being pushed further and further out.

To decide what HAD to be done we asked ourselves these two simple questions:

1) Is the game broken without it? Meaning it is too hard to play, doesn’t make sense to the average player, or is boring/repetitive.

2) Would we be embarrassed to release without it? Meaning it is obvious we cut a corner.

The second question may sound a little strange. Personal pride and brand reputation are at stake when releasing a game. If we tried our best and failed, that is ok. Sometimes that is life. But if we cut corners, thereby reducing our chance of success, and we failed, that is just being lazy or foolish.

To reach our timeline some things had to give:
  • Cut a type of monkey
  • Cut two tools
  • We wanted level ranking and Game Center integration, but we had to cut them
Watergun didn't make the cut. Maybe there will be a sequel we can put it into!

Finally, the hardest thing we had to do was what I’ll call “compress” the game. Our goal had always been to have 50 levels, each of about 3 minutes (earlier ones are shorter, later ones are longer) for 150 minutes of perfect play.

Upon play balancing we saw there were fun levels and not-as-fun levels. Fun levels generally had the player getting something new (a new tool, star power, monkey type). So we decided to compress the game by removing the low-fun content: 13 levels. It hurt to see all that work go away. But in the end what the player now experiences is peak-to-peak fun with no valleys.

Conclusion
After spending twice as much time polishing the game as it took to build it, we had a game that was truly feature complete. It was fun, it flowed well, and we are proud of it. It takes some going, but once you hit the sweet spot midway through the game it is a ton of fun. We were finally ready for testing with others.

Testing, Release, Marketing

Intro
We had a game! Hurray! It worked, it played well, it has a beginning, middle, and ending. We were ready to get the sucker out the door. But before release, we had to go through the final stage of development: testing. Wow, what an eye opener!

No one Knows How to Play Your Game, and They Don’t Care to Learn!
I submit I may be the strangest person alive.

I grew up in the 80’s and early 90’s. This is where many of my early gaming habits formed. Back then, when I bought a new computer game such as Civilization, Ultima, or Wing Commander, I would sit down and read the entire manual cover to cover before attempting to play the game. And this was when manuals were works of art: the Civilization manual was over 100 pages and filled with fascinating sidebar historical facts. I continued this practice, though I’ve stopped now that manuals are nothing more than epilepsy warnings and diagrams telling me where the square button is on my controller. (Fortunately board games still have amazing manuals, so I can enjoy those.)

The good old days when games were hard and manuals were thicker than your arm!

Apparently no one else read manuals, so the gaming industry moved away from them altogether. Players want to play, not learn how to play. I sort of knew this; I had read about it in game design books, but I didn’t have the experiential knowledge of it. That knowledge quickly came.

At our pre-release party (unfortunately 3 months before the actual release, but who’s counting) we had several iOS devices with a 4-level demo version of Catch the Monkey installed for people to play. We watched people pick up the game and play it for the first time. We learned two things: people enjoyed playing with the monkeys, but they had no clue how to do it. So while they had fun, they were frustrated at not knowing how to use the various tools.

It seems obvious as I write this, but we discovered our need for tutorials built into the game. All I can say is that when you work on something closely for 2 years you lose track of what is “intuitive” and what isn’t. Observe strangers and reality will come crashing in. So, we used our character dialog system to retrofit in a tutorial system.

First iteration of tutorials. Nobody reads 'em.

Weeks before release, we tested with a focus group of teenage girls (our demographic) to see how they enjoyed the game. I squirmed in my seat as I watched them tap “next, next, next” and completely bypass the tutorial to get on to the game. Once there, they didn’t know how to use certain features, started losing, and became frustrated.

Final version of tutorials. If you can't follow that, there's no helping you!

We learned valuable lessons as we went through three iterations of tutorials:
  • People assume they already know how to play your game. I can’t for the life of me figure out why they come with this belief, but they do. Work with it, not against it.
  • People don’t want to learn (because of the above), they want to play. So teach them one basic thing and set them off playing
  • When teaching, we observed the average player’s patience is two: Two screens of slides, two steps of interaction, two dialog boxes, then they don’t care anymore and want to skip forward
  • After playing, people want to learn. There is a correlation between how long they play and how interested they are in learning to play. In the first 2 minutes, they have 0% interest, at 5 minutes they have 10% interest, at 10 minutes they have 50% interest. You have to space your lessons appropriately
  • Remove all flowery “in character” text from the tutorial; players want to learn as quickly and efficiently as possible and couldn’t care less if a character starts each sentence with “<gwok>”
  • They don’t want to read, they want to do. Make the tutorial visual and interactive
  • Pre-plan your tutorials into the main story/progression of the game. Don’t do what we did and try to retrofit it in, it was a lot of work after the fact
Testing with TestFlight
As previously mentioned, iOS locks down the devices on which you can install a binary. This requires the unique UDID of each device, registering it through the Apple portal, including the provisioning profile in the binary at compile time, then copying a provisioning profile to the device through iTunes (which provides zero feedback on whether it was done successfully), and finally copying the binary to the device through iTunes. I can’t think of a process more antithetical to the Apple “it just works” mantra. So, you can do all that OR you can use testflightapp.com.

TestFlight for build distribution during testing; epic win!

With TestFlight, you send testers (family, friends, enemies) an email link and they open it on their device. TestFlight takes care of finding the device UDID, OS version, and make/model of the device, sending them to you the developer, and installing the provisioning certificate. As a developer, all you need to do is register the device ID to your binary. Now you can upload a build (with release notes) to TestFlight, and everyone you authorize is sent an email with a link to download it. It bypasses all the silly iTunes file copying. TestFlight’s reporting allows you to see who has what installed. Valuable when they start reporting problems, as you can definitively say “Oh, that’s because you’re on a build that is so yesterday! That build was terrible! Install the NEW build, it’s wonderful.”

Testing with Non-Gamers
Would you rather know about an issue during development, or once it’s released? Of course during development!

Testing with people who play similar games as the one you are making is very helpful. You can be sure you are meeting your demographics’ demands and they can often make suggestions or give articulate feedback on issues as they have a frame of reference.

However, we found great value in testing with people who have never played an electronic game in their life. You know who I’m talking about: your mother-in-law whose only game experience is Yahtzee on the dining room table; the friend who didn’t know Brick Breaker was pre-installed on his BlackBerry.

The official term is blackbox testing. These people can confirm if your tutorials work, but more importantly they do things you never in a million years would do. But make sure you watch them closely, they won’t be able to tell you what they did or did not do. Here is an example:


We finished our final, bug-free, never-crashes build on Dec 15. Over the Christmas holidays I showed a non-gamer friend the finished game. Within 2 minutes of playing he crashed it.

How? He never once lifted his finger from the screen. If you recall from part 3, our gesture system tracks the current finger position every 50ms. Well, if the player never lifts their finger from the screen, it becomes one giant gesture of 2,400 points after 2 minutes (hurting performance). Even worse, the initial target object of the gesture may be destroyed while waiting for the gesture to finish, resulting in a NULL reference and therefore a crash.

It was relatively easy to replicate and fix once I saw what he did, but I have to admit I never imagined someone not lifting their finger!
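Two defensive measures fix this class of bug; the sketch below uses made-up names (GestureTracker, FindObjectById) to show the idea, not our exact fix: cap the sample buffer, and re-validate the gesture's target every tick instead of holding a raw pointer across updates.

#include <cstddef>
#include <vector>

struct TouchSample { int x, y, whenMs; };
struct GameObject { /* ... */ };

// Hypothetical lookup: returns nullptr if the object no longer exists.
GameObject* FindObjectById(int id);

class GestureTracker {
public:
    void AddSample(const TouchSample& s) {
        // Cap the buffer so a never-lifted finger can't grow it forever.
        if (m_samples.size() >= kMaxSamples)
            m_samples.erase(m_samples.begin());
        m_samples.push_back(s);
    }

    void Update() {
        // Re-validate the target; it may have been destroyed while the
        // gesture was still in progress.
        GameObject* target = FindObjectById(m_targetId);
        if (target == nullptr) {
            m_samples.clear();  // cancel instead of dereferencing NULL
            return;
        }
        // ... continue gesture processing against a valid target ...
    }

private:
    static const std::size_t kMaxSamples = 64;
    std::vector<TouchSample> m_samples;
    int m_targetId = -1;
};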

How Do You Know When You are Done Testing?
This may seem like a silly question, especially coming from a business software background. In business software you would already have all your test cases written beforehand (right?) and execute them. When they all pass, you know you are good to release. Typically we would then have the customer test the application in a pilot project, and that is the “real world” test. If it’s good, we release.

Well in games, it’s different. There is no “customer” to sign off and take responsibility that the application is good, you just have to decide at some point: it’s done.

Release too early, and the game will crash or misbehave for customers. Release too late, and you’ve squandered valuable effort that could have gone into another title.

I was listening to the MIT OpenCourseWare lectures on game design, and they asked this very question: how do you know testing is done?

Their answer: When you are out of time.

When you can't take another step forward, you might be done testing.

That seemed like a cheeky answer, but having now lived it, I agree. Now, of course, we made certain all features worked, it was as fun as we could make it, and it didn’t crash. (Of course, as I write this we’ve heard reports of a bug in one of our tutorials, oops!) The game will never be perfect; there is always more to add, more to test. There comes a point where you have to draw a line in the sand and ship it. For perfectionists, this is a very difficult thing to do. I am fortunate that I wasn’t working solo on this project: the artist and I together were able to agree the game was ready to go out. That gave me confidence I wasn’t deluding myself or just fed up. For those soloing it, I recommend you ask a friend to be your “quality control” and help give you the thumbs up for releasing.

Judging the Difficulty
I read in the book Level Up: Guide to Great Game Design that the game makers are the worst people for judging the difficulty. So, I knew this going in, but it is still difficult in practice. Obviously the first people that need to be happy are the ones making the game, that is your first quality control gate.

We also tested on young children (3, 8, and 9). Why? Because they’ll test for free all day long and they have nothing better to do. (And they are the artist’s children.) We found that young kids love to play Catch the Monkey, but the mid game was too hard for them. So, thinking this was a secondary market, we created an Easy mode that gives the player more energy and reduces the maximum number of monkeys in the field at a single time. The kids loved the easy mode and were able to finish the game, so we were happy.


Later, when doing focus group testing, two of the teenage girls couldn’t get past a certain level and were getting frustrated. We recommended they restart the game in easy mode. As soon as they started playing in easy mode they said “Oh wow, this is much more fun.”

At this point we had a dilemma: do we make easy mode the normal mode, and normal mode the hard mode? We did, and made all the code changes to reflect this.

Then, a few days later, we fixed a bug in the Level GM AI and found it was working much better than previously. So we flipped it all back to Easy and Normal rather than Normal and Hard.

Big mistake.

Now that we have released and friends/family have been buying it, the most common complaint we hear from casual gamers is that it is too hard. When we tell them through Facebook to try it on easy mode, they always come back with “Oh wow, this is much more fun.” Even post release we’re still learning things the hard way!

Here are the key lessons we’ve learned:
  • Make the game too easy, rather than too hard. Too easy can still be enjoyable, too hard never is.
  • Casual gamers are not looking for a challenge, they are looking to pass the time. Easy fits within this expectation.
  • Don’t make the game hard with the ability for the player to opt into easy. Make it easy with the ability to opt into hard.
Releasing to the App Store
After making it through the arduous testing phase, we were ready to release this sucker. This has several steps:


1) Making a build signed for the App Store, as opposed to an ad-hoc provisioned build

2) Uploading the binary to Apple

3) Waiting for approval

It was time to fire up Visual Studio one last time and create an App Store build. It was relatively easy to do: I simply copied the deployment options from my TestFlight build in Marmalade and off I went. Now here’s the thing: you cannot test your App Store build before you upload it. Why? Because it can only be installed on a device through the App Store. So better get it right!

And we didn’t.

iTunes Connect is how you control your app in the App Store

I made two blunders when doing the final build. The first was that somehow I didn’t copy the proper icon image settings from the internal build to the final build, so it went to the App Store with the default app icon. Doh! The second was far, far worse.

We knew our game didn’t work on anything below the iPhone 3GS or iPod 3. I saw other games show this requirement in the App Store along the left column. I didn’t see any way to set this through Marmalade deployment, so I assumed it was done in the App Store itself.

Well, when you upload an app, especially your first, iTunes Connect walks you through a wizard. The answers you provide can never be changed; you get one shot to do it right. These we did right. Again, I didn’t see anywhere I could set the requirements, so I figured it came after I uploaded the binary. I uploaded the binary and it went into the queue for review and approval.

Well, 8 days later, it was approved. But there was still no way to set the system requirements. It wasn’t until later that I found out you need to modify the Info.plist file to require OpenGL ES 2.0 in order to target only the devices we wanted.
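For anyone hitting the same wall: the standard iOS mechanism for this is the UIRequiredDeviceCapabilities key in Info.plist. To the best of my knowledge, requiring OpenGL ES 2.0 support looks like the snippet below (which excludes pre-3GS devices):

<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>opengles-2</string>
</array>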

No biggie: I modified the Info.plist file and proceeded to upload the new binary.
ERROR!

Apple does not allow a developer to set narrower restrictions on an application update than the app first had. So in short: if your initial version says it will run on the iPod 2, you can’t later ship an update that makes it no longer run on the iPod 2. We screwed up the one thing you cannot correct through an app update. After much back and forth with Apple support we had to resort to putting the requirement right into the app description. We already know people don’t read, but it’s the best we can do at this point.

And one final thing to know about releasing to the App Store when you make your app on a PC: you REQUIRE a Mac to upload the binary to Apple.

iTunes Connect used to allow you to upload through HTTP, but no longer. There is a binary uploader program in the iOS SDK that checks and uploads your binary to Apple, and that uploader only works on a Mac. So while you can build and test the entire app on a PC, you need a Mac for 10 minutes to upload your final binary. I’ve seen someone suggest just going to an Apple store and using a demo Mac to upload. In my case I already had the Mac mini, so it wasn’t terribly inconvenient, but it was a real surprise.

Releasing to the World
By default, all apps uploaded to Apple are released to every iTunes Store in the world, unless you specifically turn a store off.

There be a lot of iTunes Stores. Fortunately you only need 6 translations to cover them all.

We had always intended to release to multiple countries, so we tried to minimize the amount of text in game and use symbols where we could (the monkey story sequences use icons rather than text for this reason, although it actually made more sense conceptually too: how do you write “monkey speak” anyway?!).

The key is to get your app description translated from your native language into the various App Store languages. It cost about $100 per language to have a translation service translate our app description. Of course, the difficulty is we have no way to judge how good the translations are!

Initial Marketing
As I write this fourth part, our game has been out in the world for 22 days. Sales haven’t skyrocketed, so we’re in no position to advise on how to market a game. However, there are two things we can share.

First, Apple controls the App Store, and they make their decisions based on volume. Sections like “What’s Hot” and “New and Noteworthy” are driven by volume. The more volume you can drive in the initial days, the more likely you are to appear in those sections. Obviously the key is to get into the “Top 100 Free” or “Top 100 Paid”; the only way you get there is through volume.

We've made it to #45 in the Family What's Hot. Go little monkey! Go!

Secondly, we knew review sites are important to get initial buzz going, but how do you find all the review sites out there? A Google search will return some of the biggies, but also blogs that haven’t been updated in 2 years. So we devised a clever way to build a short list of review sites: most games put their reviews or quotes from review sites in the top part of their app description. By clicking through about 20 apps we were able to compile a list of 41 respectable mobile game review sites.

Most if not all review sites work from a backlog of about 3-4 weeks. And they all want a promo code to get the game for free, they won’t pay money for your app. Apple allows you 50 promo codes per release. Once you make a new release, the unused promo codes are invalidated.

In the 3 weeks we’ve been waiting, we’ve had 1 review come back. Fortunately it was a good one.

For our next title we’ll be doing more on the pre-release marketing side to get the game buzz out before release. As we were taking so long on Catch the Monkey and we didn’t really know if or when we would be done, we had to forgo pre-release marketing.

Conclusion
Well, there you have it: a summary of our ups and downs over roughly 2 years of trying to make an iPhone game.

We set a goal, and despite great difficulties, achieved it. Beyond this, three things have brought great satisfaction:

1. Our first review came in:
We received 4/5 stars and an editor’s choice award from the family mobile gaming site famigo.com. It’s nice to know someone objective thinks what we made is good!

2. A review someone wrote on the US store:
How fun can catching monkeys be? Hours of fun! I love this game because it's something for my kids to do that's different from princess games and phonics—and it’s something that I can do when I’m commuting, waiting for my next appointment, or just to relax. This has got to be one of the best non-violent games I’ve ever seen. Great graphics, good story, and entertaining for everyone.

3. The popularity of these articles.
When we first set out to talk about our experience, we didn’t know who would be interested. Over 3,000 reads and counting on the first article makes all this writing effort worthwhile! Thanks!

What’s next for Mirthwerx?
We’re currently working on a few things:
  • Playbook version (taking full advantage of having used Marmalade)
  • Android version (ditto)
  • Free version (different from a lite version, it’s a different but similar game)
  • And our second title which is a puzzle game for the masses (remember, turn based!)
I’ve enjoyed writing these articles, I hope they’ve been of benefit. I have some ideas for maybe doing an “encore” 5th article next week, but I’d be looking for questions or comments from people before I decide to do it.

Until next we meet,
Lord Yabo

Detecting Ultrabook Sensors

Ultrabook and Tablet Windows 8* Sensors Development Guide


Hackathon: iOS game in 7 days

In this article I would like to share with you the story of creating our company’s first iOS game at a hackathon event, using the wonderful 2D graphics engine cocos2d. It covers some technical problems we bumped into while developing the game, as well as the process of creating the actual gameplay. The resulting app can be found here.

Attached Image: image06.gif

How We Did It


It was about 6 o’clock in the morning in Munich when I met with my colleagues Anton and Valentine to work on an idea for an in-company hackathon, which has turned into something of a monthly event at Empatika. None of us had any serious game development experience, but we thought it would be cool to develop a game, since we had all been tied up in regular app projects for so long and wanted to try something new and exciting.

The initial idea we chose was a pie slicer game, where you had a nice round pie which you had to vigorously cut into small pieces in a limited amount of time. The pieces were to be moved by some kind of physics engine, so it all wouldn’t look too boring. After some research and poking around, we found out that we would be most productive with cocos2d (since Anton and I are both iOS devs) and box2d (since it’s free and plays nicely with cocos2d), and if we limited ourselves to the iOS platform.

The core for the project was found in the nice tutorial by Allen Tan, so we didn’t have to go all hardcore on the implementation of cutting and triangulation algorithms. The tutorial relies on the PRKit library, which allows drawing of a convex textured polygon, and extends its PRFilledPolygon class to provide some additional functionality like syncing with the box2d physics body. We decided to borrow this extended class and build our implementation on top of it.

In spite of the hardest part already being written for us, the first complications came soon. After the initial project setup and a couple of test runs we found out about the famous 8-vertices-per-shape limitation of box2d. In order to use the example code and the provided libraries, the pie had to be a polygon, because box2d doesn’t allow a shape to be a segment of a circle (which we would get after cutting the initial shape into multiple pieces). So since the pie had to be at least relatively round and cuttable at the same time, we had to compose it from an array of 8-vertex shapes. This created some minor texturing problems, since the initial tutorial only went into detail about texturing whole bodies. However, after some fiddling, we managed to overcome this difficulty by feeding PRFilledPolygon an array of vertices composing the outer body edge.

Everything seemed to be fine and dandy so far: our pie was floating in zero gravity in the unpromising blackness of the iPad screen:

Attached Image: image03.png

However, the initial cutting algorithm for sprites had to be modified to support bodies composed of multiple shapes. After some thinking we decided to overcome this difficulty by simply increasing box2d’s 8-vertices-per-shape limit. So we bumped that number up to 24 vertices (which would definitely be too crazy for any relatively serious project). Profiling showed that in our use case it didn’t make a huge difference whether the pieces were composed of 8 or 24 vertices. However, there was another problem: when the number of small cut pieces got close to 200, the frame rate dropped to about 10 FPS, which made it pretty much impossible to play the game. Part of that was collision calculation (about 20% of the processor time), and another part was drawing and animating all the mini-pieces bumping into each other after each cut.
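For reference, in box2d 2.x that limit is a compile-time constant in b2Settings.h, so raising it is a one-line change (with the performance caveat noted above):

// In box2d's b2Settings.h: the maximum vertices per polygon shape.
// The default is 8; we raised it so each pie piece could be rounder.
// Definitely too crazy for a physics-heavy project.
#define b2_maxPolygonVertices   24   // was: 8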

The decision came quickly: as soon as a piece turned small enough, we turned off the collision calculation for it (see the sketch after the picture below). The game was still pretty slow, which pushed us to slightly change the gameplay: the small pieces would be removed from the screen and added to the player’s “jar”. The size of the cleared area determined the performance of the player. Some degree of linear and angular damping was also applied to the pieces, so they wouldn’t fly around the screen in a crazy manner:

Attached Image: image08.png
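For the collision cutoff mentioned above, one way in box2d to stop a body from colliding while letting it keep moving is to zero its fixtures' collision mask. A sketch; the threshold and helper name are ours, not the project's actual code:

#include <Box2D/Box2D.h>

// Hypothetical helper: once a slice's area drops below a threshold,
// stop it from colliding with anything. The body still moves, but the
// solver no longer spends time on its contacts.
void DisableCollisionsIfTiny(b2Body* body, float area, float minArea) {
    if (area >= minArea) return;
    for (b2Fixture* f = body->GetFixtureList(); f; f = f->GetNext()) {
        b2Filter filter = f->GetFilterData();
        filter.maskBits = 0;  // collide with nothing
        f->SetFilterData(filter);
    }
}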

By this time Valentine had drawn a nice-looking pie picture. It looked awesome but seemed too realistic for such an oversimplified cutting process. So we decided to change it to a simply drawn pizza (the credit for the textures goes to their original rights owners):

Attached Image: image10.png

However, that also felt too unnatural, and at this point it was clear the design had to be changed to something not as realistic as a pie or a pizza. Cutting simple geometric primitives seemed like the way to go. Since the redesign was easy and played nicely with the chosen technology (PRFilledPolygon basically allowed us to do exactly that), we implemented it pretty quickly. Every cut polygon was also stroked, which was done by adding a CCDrawNode to each slice and feeding it an array of vertices shaping the outer body of the polygon. It turned out to be pretty slow, but still faster and nicer-looking than using the standard ccDraw methods:

Attached Image: image07.png

The game started to take the right direction, but the gameplay wasn’t quite there yet. It definitely lacked some challenge. And what makes a better challenge than some obstacles and enemies? So we introduced a simple enemy: a red dot that would interfere with the cutting of the primitive. Good, but it could be better. How about some moving lasers? Done. The implementation was simple and only involved calculating the point-line distance to the user’s touch point.
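That hit test is the classic point-to-segment projection; a self-contained sketch (names and the struct are ours):

#include <cmath>

struct Vec2 { float x, y; };

// Distance from touch point p to the laser segment a-b. A touch closer
// than the beam's half-width counts as crossing the laser.
float PointSegmentDistance(Vec2 p, Vec2 a, Vec2 b) {
    float abx = b.x - a.x, aby = b.y - a.y;
    float apx = p.x - a.x, apy = p.y - a.y;
    float len2 = abx * abx + aby * aby;
    // Project p onto the segment, clamping t to the endpoints.
    float t = (len2 > 0.0f) ? (apx * abx + apy * aby) / len2 : 0.0f;
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    float cx = a.x + t * abx - p.x;
    float cy = a.y + t * aby - p.y;
    return std::sqrt(cx * cx + cy * cy);
}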

Attached Image: image02.png

With the game design and enemies down, we wrote a world-based level system. All the levels were stored in separate .plist files and described the shape, texturing rules, enemy positions, level duration and some other parameters. The game-objects tree was populated from the .plists using standard Objective-C KVC. For example:

//......
- (void)setValue:(id)value forKey:(NSString *)key{

   if([key isEqualToString:@"position"] && [value isKindOfClass:[NSString class]]){
       CGPoint pos = CGPointFromString(value);
       self.position = pos;
   }
   else if([key isEqualToString:@"laserConditions"]){
       NSMutableArray *conditions = [NSMutableArray array];
       for(NSDictionary *conditionDescription in value){
           LaserObstacleCondition *condition = [[LaserObstacleCondition alloc] init];
           [condition  setValuesForKeysWithDictionary:conditionDescription];
           [conditions addObject:condition];
       }
       [super setValue:conditions forKey:key];
   }
   else{
       [super setValue:value forKey:key];
   }
}
//......

//Afterwards, the values get set with the dictionary read from the plist file:
[self setValuesForKeysWithDictionary: dictionary];

To represent the world-level system, we used the standard CCMenu with some additions to it: CCMenu+Layout (lets you lay out the items in a grid with proper padding) and CCMenuAdvanced (adds scrolling). So Valentine got busy with the level design, and Anton and I took off to write some effects.

For the visual effects part we gladly borrowed CCBlade, which animates the user’s touches, and powered it with some cool Star Wars-like sounds. The other effect we did was the disappearing of the small pieces. Cutting them without any interface feedback was too boring, so we decided to make them fade out with a small plus sign over them.

The fade-out part involved adopting the CCLayerRGBA protocol in PRFilledPolygon. To do that we changed the default shader program to kCCShader_PositionTexture_uColor:

-(id) initWithPoints:(NSArray *)polygonPoints andTexture:(CCTexture2D *)fillTexture usingTriangulator:(id<PRTriangulator>)polygonTriangulator{
    if( (self = [super init]) ) {
        // Change the default shader program to kCCShader_PositionTexture_uColor
        self.shaderProgram = [[CCShaderCache sharedShaderCache] programForKey:kCCShader_PositionTexture_uColor];
    }
    return self;
}

and passed the color uniform to it:

// First we configure the color (RGBA, normalized to 0..1) in the color setter:
GLfloat colors[4] = {_displayedColor.r / 255.f,
                     _displayedColor.g / 255.f,
                     _displayedColor.b / 255.f,
                     _displayedOpacity / 255.f};

// Then we look up the uniform's location in the shader program once:
colorLocation = glGetUniformLocation(_shaderProgram.program, "u_color");

// ...and pass the color as a uniform on each draw:
-(void) draw {
    //...
    [_shaderProgram setUniformLocation:colorLocation with4fv:colors count:1];
    //...
}

It looked kind of nice, but with the stroke and the other effects the FPS dropped pretty low, especially when cutting through a bulk of pieces, which involved a lot of animations. A quick googling didn’t really give us anything, so we decided to move on by simply increasing the minimum area of a piece that could still be present on the screen. This allowed a smaller number of pieces to be simultaneously drawn and animated, which boosted the FPS. The fade-out effect was also removed, and all the plus-sign sprites were moved into a batch node (which was dumb of us not to use in the first place):
Attached Image: image05.png

The sound effects were done by writing a small convenience wrapper around SimpleAudioEngine. While implementing it, we bumped into a format problem: the .wav files we used had to be converted into 8- or 16-bit PCM. Otherwise they either wouldn’t play at all or played with a noticeable cracking sound.

With all of that done, we finally implemented the shop, where a user could buy stars if they hadn’t earned enough of them while progressing through the game worlds, or share a picture on one of the social networks to get stars for free:
Attached Image: image01.png
Attached Image: image04.png

At this point the competition’s time pressure was getting high and it was time to release the game to the public. Frantically fixing some late-found bugs, we uploaded the binary to the App Store in the hope of it passing its first review.

Once again, the resulting app can be found here.

From Python to Android

I have published this article on my blog before, but because the blog is not frequently visited and I did not find much advice on this subject elsewhere, I am sharing the same article here.

This article is about porting a game written in Python with pygame to Android. To do so we will use the pygame subset for Android, which can be obtained here. The pygame subset for Android ports a subset of the pygame architecture to the Android architecture, allowing us to easily program games in Python and port them to an Android application. This article will explain how to do so using the breakout game programmed by myself, obtainable here: Attached File: break0ut.zip (39.08 KB)

In the end we will have a breakout game on our Android device which can be controlled by swipe gestures or by tilting the android device itself.

If you want to rebuild what we will build during this article you will need the following programs:
  • A Java Development Kit e.g. via Oracle or OpenJDK
  • Python 2.7 and pygame obtainable here and here
  • Device drivers for your Android device if you are using Windows, and a little help for Linux here
  • pygame subset for Android (PGS4A), obtainable here
These programs are more or less needed if you want to run the breakout game itself and later port it to your Android device. If you plan to skip running the game on your local PC, the pygame library is not needed. The whole porting and programming is just one more click away.

Setting everything up


The first three items just need to be installed, either via a package management system if you are using Linux or by downloading and installing them if you are using Windows. PGS4A just needs to be extracted into a folder of your choice. My project setup looks like the following and can be viewed on GitHub:

./pgs4a/ directly containing all PGS4A stuff
./pgs4a/breakout/ containing all the python source code and our main.py
./pgs4a/breakout/data/ containing all images and so on for our game

The structure needs to be like this because PGS4A will only work this way. Now almost everything is set up except PGS4A itself, so we will start with that. First you should test whether everything is up and running for PGS4A by executing:

cd pgs4a
./android.py test


This should return green text stating All systems go! This basically means that we met all prerequisites. After this we need to install the corresponding Android SDK, which can be done by executing

./android.py installsdk

This should start the installation of the needed Android SDK. During this installation you are asked to accept the TOS of the Android SDK and whether you would like to create an application signing key, which is needed if you want to publish your application on the Play Store. You should answer both questions with yes. The key is needed later on for signing purposes and the installation process. But be warned: if you want to sell the application on the Play Store you will need to recreate the key, because the key created during this process uses no password. If you finish successfully you will be rewarded with It looks like you're ready to start packaging games.

Important: At this point you must make sure that you actually installed an SDK! You may get an error or a warning that the filter android-8 is not accepted. If you receive such an error you need to manually install the Android API 8 SDK. This can be done by running

./android-sdk/tools/android


Attached Image: androidSDKManager.png


Now the Android SDK Manager should come up. You may need to update Tools before you actually see the same content as in the window above. From there you can install Android API 8, which is needed to port pygame to Android. This manual installation is needed because PGS4A has not been updated in a long time. After this we are nearly ready to start porting our breakout pygame to Android.

Adding Android to breakout


In this part we are going to actually add the Android-specific code to our game. For this purpose I recommend you download and read through the source code. You should now be capable of running the code within your Python environment and playing the breakout game. If not, make sure you have done everything right regarding the setup of your Python and pygame.

All modifications, and there are not many, are performed in main.py, which includes the main function of our game. The modifications include importing the PGS4A Android module, initializing the Android subsystem, mapping Android-specific keys and adding some Android-specific handling. Afterwards we should be capable of playing without further modifications to our game.

Importing the Android package should be done below our standard imports in the main file. Hence we need to change the header with the imports to something which looks like this:

import pygame, sys, os, random
from pygame.locals import * 
from breakout.data.Ball import Ball
from breakout.data.Bar import Bar
from breakout.data.Block import Block

# Import the android module. If we can't import it, set it to None - this
# lets us test it, and check to see if we want android-specific behavior.
try:
    import android
except ImportError:
    android = None

Now we have imported the Android-specific commands and are able to check whether we can access them via the android variable. The next step is to initialize the Android environment and map some keys; we do this after initializing pygame and setting some parameters:

def main():
        """This function is called when the program starts.
        It initializes everything it needs, then runs in
        a loop until the function returns."""
        # Initialize everything
        width = 800
        height = 600
        pygame.init()
        screen = pygame.display.set_mode((width, height))
        pygame.display.set_caption('break0ut')
        pygame.mouse.set_visible(0)

        background = pygame.Surface(screen.get_size())
        background = background.convert()
        background.fill((0, 0, 0))

        # Initialize the android subsystem and map the back button to the escape key.
        if android:
                android.init()
                android.map_key(android.KEYCODE_BACK, pygame.K_ESCAPE)

At the bottom of the above code we check whether the Android package was loaded; if so, we initialize the Android subsystem and map the back key of the Android device to the Escape key. So if we wanted to add a menu to our application that is normally opened with the Escape key, the back key would open that menu. In this particular breakout example the Escape key exits the game.

Next we have to react to some Android-specific events, which is needed in every game. The game may be put into pause mode, e.g. when the user switches applications or goes back to the home screen. To react to this behavior we have to add the following code to our game loop:

# main game loop
        while 1:
                clock.tick(60)

                # Android-specific:
                if android:
                        if android.check_pause():
                                android.wait_for_resume()

This waits for a resume of our application if the game was put into pause mode. Thanks to the checks on the android variable we can play the same game on the PC as well as on our Android device. If you want to start the PC version, simply run main.py and you are ready to go. With these last additions to our main source code, we are now capable of porting our breakout game to an Android application while still developing on the PC using our standard setup.

The porting process


Now we have reached the point where we want to put everything we have created so far onto our Android device. To do so we need to configure our setup, which is easily done by running the following command:

./android.py configure breakout

Make sure that the breakout folder exists before you execute the command, otherwise you will get errors. After you have executed this command you will be asked several questions, including the name of the application and the package. As far as the package is concerned, make sure not to use any special characters (including - and _), otherwise you will get errors later in the process. For the rest of the questions I stuck to the default answers. After you have finished you should see a file called .android.json in the breakout folder, which should look like this:

{"layout": "internal", "orientation": "landscape", "package": "net.sc.breakout", "include_pil": false, "name": "breakout", "icon_name": "breakout", "version": "0.1", "permissions": ["INTERNET", "VIBRATE"], "include_sqlite": false, "numeric_version": "1"}

The next and last step before we can play our game on our Android device is the compilation and installation of the APK on our phone. Both are handled by the following command line:

./android.py build breakout release install

Before you execute the command, make sure you have created an assets folder. If you have not, the execution will fail and you will not be able to compile the application. I guess the creator of PGS4A forgot to implement folder creation. Also, as mentioned before, PGS4A is an old tool and therefore may not work perfectly.

If you have done everything correctly you should be able to play the game on the Android device or emulator you have connected or started. It should look very similar to the picture below. You should now be capable of porting any game created with pygame to Android.


Attached Image: breakoutOnAndroid.png

Limits of Developing a Web-Based Hidden Object Game for Learning Languages

As production of "Pavel Piezo - Trip to the Kite Festival" draws to a close later this year, I reviewed the material I collected for the Postmortem and found it too much and too diverse to put into one huge article. So I identified the topics that stand very well on their own, which are not limited to this specific game or production, and decided to write three short(er) posts in advance of the Postmortem.

Setting Up Production


Earlier this year, after the game design and concept for "Pavel Piezo - Trip to the Kite Festival" were almost done and funding was secured, the crucial question had to be answered: Which game engine or programming system should be used for production?

Within our company, intolabs GmbH, the core team for this production consists only of two people, Sven Toepke as core programmer and me as game designer, producer, additional programmer, auxiliary art director and everything else. Sure, we would have external, outsourced help with artwork, audio, marketing and so on, but the core development and production would be split up between us two.

The game is to be released for tablets with iOS, later for tablets with Android and Windows and after that for Windows and Mac desktop.

A specific game engine springs to mind? Yes, it does.

But as we had virtually no budget we chose a different solution. We both had successfully done various projects with HTML5, Javascript, jQuery, Cordova / Phonegap etc. and earlier this year Adobe committed to the CreateJS suite to deal with canvas, sound, tweens, preloading and such. Since we had done some prototyping with this combination already, the decision was made to use this as the "game engine".

After all, it's just big static pictures with some sprites for items, a few animations and sounds, right? Well, yes and no.

Although the game does run well, even on our minimum-specification test device, we did hit some limits and road bumps that are worth noting for anyone who plans to delve into creating games with HTML5/JS/canvas.

Missing or Insufficient Libraries or Functions


This one's the easiest to convey as you may be already aware if you dabble in HTML5/JS. In particular, we missed particle effects and a more elaborate animation library. Sure, you can solve the problems yourself or use additional third-party libraries but although you can overcome all problems described in this article, it adds to the complexity, memory usage, programming effort and production time.

Particle Effects

Since we only needed one nice effect, instead of using an additional library for particle effects we opted to use sprite-animations with semi-transparent sprites. This is a valid solution and works well, but adds to memory management and overdraw (see below). No biggie, but using a particle system would have been easier.

Animations

Again, we opted to work with what is provided within the described combination of systems. In our case we were missing circular animations (clarification: I meant automatic animations along a curved path, we needed to animate along a circle). Sven programmed these himself, which is very doable with Javascript. However having this function within, say, CreateJS/TweenJS would have saved time. (You may find more elaborate functions for animations in libraries like melonJS, CAAT, Canvas Engine or impactJS.)

2D Overdraw in canvas

While this one wasn't bothering us too much, we could see the effect in early performance tests and countered its drag on frame rate at the very beginning. In case you are not familiar with the problem of overdraw, you can do a quick search here on gamedev.net, on Gamasutra, on Stack Overflow or simply via a search engine, but overdraw in 2D sprite-based games is quickly explained:

If the graphics of your sprites (objects) are not a perfectly filled rectangle, which they seldom are, you'll have transparent pixels around your sprite's graphic filling it out to the closest bounding rectangle. This rectangle is the sprite (object) that you place and move around. Now, if you have objects which overlap in the transparent areas, the renderer still has to calculate the visibility of every transparent pixel. If you have multiple overlapping objects/sprites, this has to be calculated for every pixel at each Z (depth) level individually. It adds up, especially when animating. CreateJS/EaselJS allows you to group objects together in a container object, which in turn can be cached, but again, this adds to complexity and programming effort.

Right after seeing the issues in our first tests we decided to preemptively cut background-layers with huge transparent areas into smaller objects and arrange them together on stage. Additionally, we changed sprites that were originally positioned partially behind a "background" object to be a rectangle that includes the pixels of the "background". That then allowed the sprite to be placed in front of the background layer, switching the sprite in question to a version with transparency only for animation.

Again, extra work.

Advanced 2D game engines provide additional, better techniques, such as putting sprites on non-rectangular meshes or using specific, well-adapted renderers. These were not available in any HTML5/JS/canvas library we looked at. That may change with time, but I found it's due to constraints in canvas itself and its implementations in different browsers.

Sound and FX

"Pavel Piezo - Trip to the Kite Festival" is relying heavily on sound. Each level has two different background loops, which are played out of alignment, sound effects for the GUI and game-events, plus there are a few different short spoken phrases for every item and active area in each level / picture.

For instance, if you see in the GUI that you are to look for sunglasses, tapping on those in the GUI will play one of two or three short sentences, like "You need to find the sunglasses". Tapping on them in the picture (finding them), will again play one of two or three sentences, like "Nice! You found the sunglasses." Tapping on an item, which you are not to find (yet), in the picture will also play one of two or three sentences, like "Very useful, a toothbrush." Remember, Pavel Piezo is a game for getting to know foreign languages, so we maximize exposition to the vocabulary and phrases as well as to the association between the picture of an item and the spoken word.

The biggest problem we had with handling all these sound files is described in the passage about memory below, but there were additional quirks and annoyances. In the end, our finding was that we had to cheat our way around obvious negligence in the implementations of sound across different versions of browsers, webkit, or even the underlying OS or hardware.

When preloading all sounds for a level proved to be too much for the memory, we switched to streaming the bigger and less frequently used files. However, depending on the system, that added up to 500 ms of latency before the sound started playing. This was not limited (or not only limited) to the least powerful system we tested with. The latency varied, seemingly at random, between OS versions, etc.

The most bizarre glitch we found was that one target system didn't seem to like some of our MP3s and would cut them a few hundred milliseconds short upon playing. We tried re-encoding, checking the files meta-data as well as several other methods to identify the culprit. It wasn't the longest sounds, it wasn't ones with a specific length, it wasn't something we could identify in the meta-data; we didn't find the reason within our limited time. We could however reproduce the glitch, it was always the same ones that got cut short.

In the end we just checked the playback of all audio files and extended the ones that got cut with half a second of silence... more work.

Memory Management


"But Carsten, you are developing for mobile devices, you have to be aware of memory constraints right from the start!" you say and prepare to skip this paragraph.

“Yes”, I say, “we know”, I say, “we were”.

What we were not completely prepared for is how much developing a webkit-based application tightens the corset even further, despite having developed similar applications in the past. The webkit on our minimum-spec test device left us with little more than 200 MB of usable memory, of which Cordova and the Javascript-libraries ate a good 100 MB from the start.

We spent a good deal of time and effort optimizing the graphics, handling preloading and streaming, finding the highest compression for sounds while preserving the desired quality, cutting graphics to save on transparent areas (which eat up memory when displayed on stage) and utilizing other tricks mobile developers are well acquainted with.
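As a sketch of the kind of housekeeping this meant between levels (the names are invented, and it assumes the EaselJS/SoundJS setup from the earlier examples):

// Free a finished level's assets before preloading the next one.
function unloadLevel(levelContainer, soundIds) {
    levelContainer.uncache();           // drop the off-screen cache canvas
    stage.removeChild(levelContainer);  // detach from the display list
    soundIds.forEach(function (id) {
        createjs.Sound.removeSound(id); // release the decoded audio data
    });
}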

While time consuming, it was still very doable, of course, as you should optimize the heck out of your application for mobile devices, regardless. But even with our min-spec system, using a game engine that does not run on top of webkit would have granted us more than double the memory to use.

Conclusion


All the small problems aside, HTML5/JS/canvas can be a very viable combination for your development, and with Cordova/Phonegap there are very few other ways to make your application cross-platform capable with so little effort.

Just be aware of the constraints that are still in place today.

Until full-fledged game engines like Unreal Engine or Unity3D become available to run on top of canvas, there'll be a bunch of extra work, and there are still the additional memory constraints to keep in mind. We feel that we have reached some limits of what is possible with "Pavel Piezo", especially on older devices that are still widely used. It's clear that we could still optimize further with much more tinkering. From a production perspective, though, it's simply more feasible to use a full-fledged system for cross-platform game development.

Still, in our opinion the combination HTML5/JS/jQuery/createJS/Cordova/Phonegap/... is the choice for making nifty, good-looking cross-platform apps in record time. Just as with modern HTML/CSS, the application doesn't have to look "html-y", and if the application consists mostly of logic, "screens" and some slick animations for transitions, popups, slide-ins etc., you can't beat the speed and ease of cross-platform development for a production of a certain scope. As we relied heavily on sound and tried to use as much animation as possible in "Pavel Piezo - Trip to the Kite Festival", we did hit a point where it became clear that if we pushed further in coming releases, the effort for optimization would be too high compared to using a full-fledged game system.

Maybe that will change (again) in the future but for now we have a very good idea of the boundaries that exist, when to use HTML5 and when to use purpose-built tools and engines.

The Clash of Mobile Platforms: J2ME, ExEn, Mophun and WGE

by Pedro Amaro

Abstract: The author gives a brief introduction to the development of cellphone games. Some of the specific characteristics of this market are analyzed and the four primary free game development platforms are described, mentioning each one's advantages and disadvantages. This article is primarily targeted at amateur development teams who wish to evolve into professionals in this market.

1. Introduction
2. The wireless gaming market
3. J2ME
4. ExEn
5. Mophun
6. WGE
7. Which should you choose?
8. Conclusions


1. Introduction
At the moment, most programmers enter the videogame world "through" the computer. Taking into consideration the high licensing fees in the console market, it's extremely difficult to commercially release games on those systems (at least in a legal way). However, in the last two years (and especially in 2002), another "gateway" to professional game development has opened: the cellphone. The appearance of several free and device-independent development platforms allows amateur development teams to compete head-to-head with this sector's professionals without any notable disadvantages. To achieve this, one must choose the platform that best fits the intended objectives. To make that choice easier, this article introduces the four main free platforms available in the market: J2ME [1], ExEn [2], Mophun [3] and WGE [4][5]. First, the reader is introduced to the specific characteristics of this market, to help highlight the development differences compared with other systems. This introduction is followed by an analysis of the strong and weak points of each of the platforms mentioned above. The last section contains a short summary of each platform, as well as an indication of which type of situation each one fits best.


2. The wireless gaming market
Unlike what happens with other systems, the number of people who buy a cellphone just to play games is extremely small. Sometimes the choice is purely based on price, sometimes people buy whatever the mobile operator sells them, and there are also people who choose a certain model just because their friends have one… there are plenty of reasons to choose a device, and the available games are usually only seen as a nice "side effect" of buying a cellphone.

With the appearance of the most recent cellphones, which have advanced graphics and sound capabilities, this market started expanding. Apart from the mentioned capabilities, the existence of free device-independent development platforms played a major role in this expansion. Looking at the success of handheld consoles (especially Nintendo's Game Boy, which became the world's best-selling console in 1999), it becomes obvious that there's an excellent market to explore. After all, most people use their cellphone to be reachable anywhere, anytime. This means that they usually take the cellphone with them at all times. You can't say the same about a handheld console, since its users only carry it around when they're sure they'll go through long waiting periods. The cellphone, on the other hand, is always there: at the bus stop, at the dentist's waiting room or in a boring class. Its main function (communication) is constantly being requested, which means that its remaining functions (games included) are also always available.

Another important aspect lies in the fact that the typical cellphone user doesn't care about the technology available in his device. "J2ME", "ExEn" and "Mophun", for example, are words that most users don't know. But if the selected term is "Snake", it's pretty certain that at least those who usually play on a cellphone will recognize the word. Even though, at the moment, the choice of a cellphone isn't influenced by its games, it's extremely likely that this situation will change in the next 2 to 3 years. However, you shouldn't expect common users to start checking whether the device they're planning to buy supports a certain platform. What they will look for is whether that device supports a good number of quality games at reasonable prices… without forgetting that a cellphone's main function is communication.

A final thought goes to the game genres with the highest probability of achieving success in this market. Unlike what's usual on other systems, cellphone users don't play for long periods of time. Games which demand that (like, for example, RPGs and platformers) must have an extremely high quality level to "conquer" the user. Puzzles are the most common genre: if they're easy to play, fun, fast-paced and don't demand a lot of training time, success is almost assured. Another genre which is starting to gain ground is action: fighting games, shoot'em ups and beat'em ups are now starting to enter the cellphone gaming market. When developing a game for this kind of system, you shouldn't forget that there's a high probability of it being played only for 5 or 10 minutes at a time. If it doesn't "grab" the player in this time period, its commercial success will be quite limited.


3. J2ME
The "Java 2 Micro Edition" is usually considered what Java was originally supposed to be: a cross-platform language capable of working in devices with highly reduced capabilities. With that in consideration, it doesn't come as a surprise the similarities between J2SE and J2ME. As a matter of fact, J2ME is often considered a Standard Edition stripped to the essential.

Since it wasn't initially planned for games, its potential is quite limited compared with the other platforms created specifically for that purpose. Although MIDP 2.0 already comes with a GameAPI, the current version (MIDP 1.0) only has a few rudiments of what would be required to produce technically advanced games. For example, there's no support for resizing images, performing simple 2D rotations or even playing sound. However, due to the fact that it appeared first and managed to acquire a good number of supporters, J2ME became almost a market standard and is the platform that carries the most games on the most devices.

J2ME's development costs are extremely low. The SDK is freely available and there are no licensing expenses, which means that anyone can create a game and market it. However, unlike the other platforms created specifically for games, there is no J2ME business model. The developer must negotiate commercialization with three possible "partners": manufacturers, operators and distributors.

Negotiating a contract with a manufacturer is usually the most difficult option. Most of the time, it's cheaper for the manufacturer to create its own internal development team than to pay a third party to develop games to be included in all its devices. Besides, considering that it's already possible to download games to the cellphone, the number of titles initially available in the device's memory is a feature losing relevance. Most of the time, these games are weaker than those the player can obtain through a simple download.

Negotiating directly with an operator is becoming the most common alternative. Most operators already have a service targeted at game developers, and current indicators suggest that these services will expand. The profit margins in revenue sharing are usually the highest (around 80%). However, commercializing a game this way can sometimes be quite difficult. Most of these services require a test period in which the game download is free. If the game is successful in this period, it moves on to a commercialization stage. The problem with this option lies in a simple fact: when the game enters the commercialization stage, the "new game" effect has worn off and the potential buyers have already played it while it was freely available. Another problem lies in the limitation of negotiating with just one operator. For example, to release a game in more than one country, negotiations with at least two operators are required. This problem worsens when the developer wants continent-wide or worldwide distribution. Even so, sometimes it can be the best option (when, for example, due to localization difficulties, the developer wishes to target a single country and the operator does not demand a free download trial period).

The third option, dealing with a distributor, is usually the most appealing when the developer wants large-scale distribution. It's quite common for distributors to have agreements with several operators. The downside lies in the lower profit margins. Usually operators take 20% of the profits, while the remaining 80% is divided between the distributor and the developer. Although it's possible to obtain a revenue share between 20% and 70% (which is above the 5% to 10% of other markets), the profit will never be as high as it would be if negotiated directly with the operator. Apart from this disadvantage, the developer also has to find a distributor interested in his application, which can sometimes be extremely difficult (although there are cases where it's the distributor that contacts the development team). The main advantage lies in the lack of commercial worries for the developer, since both the operator negotiations and the marketing are the distributor's tasks.

Regarding J2ME's future, generally speaking you could say it's excellent. Not only does it have an extensive list of manufacturers supporting it (making it almost a standard), but it also managed to overcome the problems of JVMs that did not follow the specifications (which occurred due to the manufacturers' "rush" to release devices supporting this technology). In the gaming market, its future is somewhat dependent on MIDP 2.0. It's certain that it won't fade away and it should keep the leadership during 2003… but if one or more of the remaining contenders stays ahead technologically and manages to get its engine into a number of devices similar to J2ME's, Sun's platform will face some difficulties in keeping the leadership in this specific market.


4. ExEn
"Execution Engine" (also known as ExEn) was developed by In-Fusio to "fight" the limitations imposed by J2ME in game development. It's also interesting to notice that In-Fusio tried to overcome those limitations working together with Sun by presenting the proposal of a GameAPI for MIDP 2.0.

ExEn was the first mass-market downloadable game engine to be made available in Europe. This was an important first step that allowed ExEn to achieve its current position of leader on this continent, making it the most used game engine (which also means that it's the one with the widest range of games).

In early November 2002, there were 18 models which supported ExEn. From a European perspective, this means around one million available cellphones. Although it's a somewhat small number compared with the five million devices carrying J2ME technology, it's an impressive amount for a "small" proprietary technology.

Nevertheless, compared with the remaining contenders, it's incorrect to say that such leadership is justified by technological capabilities. Both in graphical and processing speed terms, ExEn is far from the lead. However, by supplying additional important game development functions (sprite zooming, parallax scrolling, raycasting, rotations), it easily overcomes J2ME. Add to this a virtual machine that, despite not being the fastest, can be around 30 times faster than a generic VM (although usually only 10 to 15 times) and leaves only a 5% footprint on the device's memory, and it's easy to see why this is the most widely chosen game engine.

Another important reason that led several developers to choose ExEn is In-Fusio's business model. This is divided into two levels: standard and premium. At the standard level (free subscription), In-Fusio offers the SDK, an emulator, online technical support and the possibility of later upgrading to the premium package. The developers that achieve the premium level have their games marketed by In-Fusio, which promotes them to the operators whose devices support this engine.

Execution Engine's growth prospects are quite good. With a new version (2.1) released at the beginning of 2003, the support of several influential software houses (Handy Games and Iomo, for example) and an attractive business model for independent producers, the number of available games should increase considerably. In-Fusio has also started to enter the Chinese market, which should become one of the strongest (if not the strongest) in the next 2 to 3 years.


5. Mophun
Mophun is described by its creators (Synergenix) as a "software-based videogame console". Although its development began in late 1999, its market presence only became significant in November 2002.

Its late appearance, allied with the fact that only three devices carry this engine (Ericsson T300, T310 and T610), made some developers discard the option of developing for this system. The somewhat biased market analyses performed by Mophun's producers also "scared away" some interested developers… for example, in one of those analyses, Mophun is shown sharing the leadership of the European market with J2ME. However, while the J2ME and ExEn figures dated back to October 2002, the values presented for Mophun were predictions for 2003. This gave the impression that something had gone wrong with Mophun at the operator and manufacturer support level.

Technically speaking, Mophun has no rivals. Tests performed by independent organizations showed that, on a device where Mophun reaches 60 MIPS, J2ME only went as far as 400 KIPS (a performance 150 times higher). Synergenix also adds that, on certain devices, part of the VM code is directly translated into native code, meaning that it's possible to achieve 90% of the device's maximum capability (for instance, reaching 90 MIPS on a device that reaches 100 MIPS when running native programs). The remaining characteristics are similar to ExEn's.

Like ExEn and J2ME, Mophun is also freely available. In some respects, Synergenix's business model resembles In-Fusio's: after the game is developed, Synergenix handles certification, distribution and marketing. However, since its current network isn't very extensive, it doesn't seem to be as appealing as ExEn's, which has made some developers choose the theoretically weaker system.

Mophun's future is "semi-unknown". If Synergenix fails to quickly acquire additional support, it's quite likely that Mophun will be dropped in favour of less powerful but financially more appealing development technologies. However, if the promises that several operators and manufacturers will shortly adopt Mophun are fulfilled, this system's advanced technical capabilities could make it the new leader.


6. WGE
The "Wireless Graphics Engine" is TTPCom's solution. Although it began being considered the main candidate for domination of the game engines' market, the lack of support by game developers ended up decreasing the initial appeal.

It's impossible to deny that, from a purely technical point of view, WGE has everything it takes to win. It may be slower than Mophun, but its several API modules make 2D and 3D programming easier (including tile management and collision detection functionality), allow simple access to networking functions and provide sound support, among other capabilities.

As with its direct contenders, the SDK download is free and TTPCom has a business model aimed at attracting game development teams. On top of the usual revenue share from games sold on a download basis, there's a "minimum income" resulting from selling games directly to device manufacturers.

Unfortunately, despite the initially generated "fever", the lack of support from the primary manufacturers ended up limiting WGE's success. Most software houses avoided it, which led small companies and independent developers to follow their example. The result is easy to see: the number of games available for WGE is slightly over 30. This lack of interest from the majority does bring an advantage for those who want to start developing for WGE: with such small internal competition, it's easier for a quality game to succeed. The disadvantage lies in the lower number of potential players, which may considerably limit the profits obtained from the game's commercialization.

Although it would be wrong to say that WGE's immediate future is dark, its prospects have looked better. Considering the strong competition that the current market fragmentation will bring in the next two years, if TTPCom isn't able to bring more software houses into its catalogue, it'll hardly get the support of additional manufacturers. On the other hand, without the support of additional manufacturers, it's extremely hard to attract more software houses. WGE's future depends on TTPCom's ability to break this cycle. If it manages to within the next 3 or 4 months, the growth prospects are quite positive. Otherwise, the end is almost unavoidable.


7. Which should you choose?
At this point, a question arises: which platform should a programmer choose? Due to the high fragmentation of this market, there isn't one answer that suits all situations. To choose the platform that best fits the situation, it's necessary to set the objectives of what the team wants to produce and analyze the advantages and disadvantages of the several available platforms.

When the objective involves reaching a wide market and it's possible to compromise somewhat on performance, J2ME is the best option. If commercializing the game is also an objective, the team must expect to spend some extra time negotiating distribution deals.

If the project requires more capabilities than those offered by J2ME and choosing a smaller market is an option, or if the team wishes to choose a platform that offers a simple business model, ExEn should be selected.

When performance (both in speed and graphical terms) is the most critical aspect, Mophun appears as one of the main choices. In this case, it's important to check whether taking the risk of choosing a not yet widely spread platform is a possibility.

If opting for a platform with a small market isn't a problem, if the objective is the creation of a high-performance game, and if Mophun isn't a satisfactory choice for any reason, WGE is the best option. Once again, it's advisable to study the choice well in order to prevent expenses that are excessive compared to the expected profits.

8. Conclusions
With this article, the author intended to give a brief introduction to the main wireless game development platforms. It is hoped that this may aid the choice of platform by those who wish to enter this emerging market. This analysis was limited to the four main freely available platforms, in order to make this article especially useful for the amateur development teams who seek an entrance into professional game development. However, all those who wish to pass through that entrance must remember that it will only be possible with the production of quality products adapted to the specific needs of this market.



References

1. Sun Microsystems, J2ME Homepage, http://wireless.java.sun.com
2. In-Fusio, ExEn Homepage, http://developer.in-fusio.com
3. Synergenix, Mophun Homepage, http://www.mophun.com
4. TTPCom, TTPCom Homepage, http://www.ttpcom.com
5. 9Dots, WGE Support Page, http://www.9dots.net


Pedro Henrique Simões Amaro
Departamento de Engenharia Informática
Universidade de Coimbra
3030 Coimbra, Portugal
pamaro@student.dei.uc.pt
http://pedroamaro.pt.vu

How To Setup a BlackBerry 10 Development Environment to Build Cascades Apps

This is a step-by-step instructional guide on how to set up a BlackBerry 10 (BB10) development environment. This article includes instructions for downloading all the Cascades tools, installing them, and setting them up. You will learn how to get the BB10 simulator up and running and then how to connect it to the Momentics IDE to run and test code. You'll also learn how to connect to a physical BB10 device so that you can run code on real hardware.

A Little Background Information


Prior to BB10, app development for BlackBerry was done using Java or HTML5. You'll find references to the older development environment as "Developing for BlackBerry OS".

BlackBerry purchased QNX in 2010 and adopted its technology to build the BB10 environment. Before BB10 was ready, the BlackBerry PlayBook tablet was released running the new QNX OS. To build applications for the PlayBook you were given the following options to choose from:
  • Native (C/C++, OpenGL)
  • HTML5
  • Adobe AIR
  • Android Runtime
About a year after the PlayBook was introduced the first BB10 device (Z10) was released. When developing apps for BB10, you have many different options to choose from:
  • Native Cascades (C/C++, Cascades QML)
  • Native Core (C/C++, OpenGL)
  • HTML5
  • Adobe AIR
  • Android Runtime
  • Appcelerator (BlackBerry Platform Partner)
  • Cordova HTML5 framework (BlackBerry Platform Partner)
  • dojo HTML5 framework (BlackBerry Platform Partner)
  • jQuery Mobile HTML5 UI framework (BlackBerry Platform Partner)
  • Marmalade cross platform development environment (BlackBerry Platform Partner)
  • Qt (BlackBerry Platform Partner)
  • Sencha Touch HTML5 framework (BlackBerry Platform Partner)
  • Unity Game Engine (BlackBerry Platform Partner)

The Native Cascades Development Environment


When building apps for BB10, the preferred route is to write your code using Cascades in the Momentics IDE. Cascades is a framework, developed by The Astonishing Tribe (TAT), that extends Qt. In this article I describe how to set up a Windows PC to make BB10 applications using Cascades.

Get the Momentics IDE


To start you will need to download the Momentics IDE. You can download the current version of the software from the BlackBerry developer site located here.

Choose your platform from the website, download the installer and run it. At the time that I am writing this article, the current version of the Momentics IDE is version 2.0. I'm using Windows 7 to set everything up so I see the following when the installation program starts.


Attached Image: 00-momenticsInstall.png


Click Next through a series of dialogs, accept the license agreement and choose an installation directory (C:/bbndk is a good default). After doing all of that, find the new icon on your desktop and launch the Momentics IDE for BlackBerry.

First Run of the Momentics IDE


When the Momentics IDE starts up for the first time you will see the following dialog appear.


Attached Image: 01-workspace.png


This dialog is asking you for the location where you want to place all the source code that you will be using to make BlackBerry applications. Either accept the default path that it gives you or choose another location. I also recommend enabling the checkbox that reads "Use this as the default and do not ask again" to prevent this dialog from popping up in the future when you start the IDE.

After you click the OK button, the IDE will start up and notice that you are missing some SDK files. At this point you should see the following dialog appear.


Attached Image: 02-initDialog.png


We will come back to setting up the SDK files in a minute, so right now just click the Cancel button in the bottom right corner. You should now see the main screen in the IDE that looks something like this:


Attached Image: 03-initScreen.jpg


The IDE is now installed, so next we will set up a BB10 simulator. For now just close the IDE.

BlackBerry 10 Simulator


You can find the latest version of the BB10 simulator to download here.

Download the installer and then run it to install the simulator on your computer. Click Next through the welcome dialog and accept the license agreement. You will then be asked where you want to install the simulator's virtual machine (VM) files.


Attached Image: 04-vmDir.png


Remember where you install the VM because you'll need this location in some of the steps below. The simulator runs in a virtual environment which tries to simulate the behaviour of the BB10 hardware on your PC. For additional information about the BB10 simulator have a look here.

To run a VM, make sure that your PC has the minimum system requirements listed here.

You will need some additional software installed on your PC to run the BB10 simulator so I will be installing the VMware Player on my Windows PC. You can click the link found on the system requirements webpage or head down to https://www.vmware.com/tryvmware/?p=player&lp=1 to get a copy of the free VM Player.

After you install the VMware player, you should see a new icon on your desktop like this:


Attached Image: 05-vmWare0.jpg


Launch the VMware Player to see the initial screen where you can select a VM to run.


Attached Image: 06-vmWare1.png


Select the "Open a Virtual Machine" option shown in the image above and then navigate to the directory where you installed the BB10 simulator.


Attached Image: 07-vmWare2.png


Select the BlackBerry10Simulator.vmx file and click Open.


Attached Image: 08-vmWare3.png


You should now see the BB10 Simulator listed in the VMware Player as shown in the above image. Select the BB10 Simulator in the list on the left and then you can edit the VM settings by clicking on the "Edit virtual machine settings" option found in the bottom right. I will leave my BB10 simulator with the default settings so just click the "Play virtual machine" option to start the simulator.

The BB10 simulator will start to load and a screen will appear asking you which BB10 device you would like to simulate.


Attached Image: 09-vmWare4.png


I'm running the BB10_2_0X.1791 version of the simulator so I see six options. Option 1 lets me simulate a Z10 device, Option 2 simulates a Q10/Q5 device, Option 3 simulates a Z30 device. The last three options are the same as the above three but just with Safe Mode enabled.

If you don't choose an option, one will be selected for you automatically after a few seconds.

Select an option or wait for the default to be selected and after a few minutes, the simulator will finish loading and you will see the BB10 interface on the screen. The BB10 OS is heavily based around swipe gestures to navigate between different screens. To produce a swipe in the simulator press the left mouse button down on your mouse, move the mouse in the direction you want to swipe, and then release the left mouse button.


Attached Image: 10-vmWare5.jpg


Now let's take a quick look at how you can control different options in the simulator. If your mouse focus is inside the simulator, press Ctrl+Alt to free it from the VMware Player. Click on your Windows Start button, navigate to Programs > BlackBerry 10 Simulator, and launch the Controller program.

The controller is an external program used to control different options in the BB10 simulator. If the BB10 simulator is running when the controller is started, it should automatically find and connect to it. You will know when this happens because the bottom left hand corner of the controller shows the IP address of the simulator it is controlling.

Using the controller you can simulate NFC transactions, phone calls, sensor input and so on. For example, open the Battery category in the controller. Have a look in the top left hand corner of the BB10 simulator to see what the current battery icon looks like.

Now in the controller click the Charging checkbox to simulate the effect of the BB10 device being plugged in to charge the battery. You should now see in the BB10 simulator that the battery icon has a lightning bolt through it as shown in the image below. You can also use the sliders here to control how much charge the "simulated" battery has in your BB10 Simulator.


Attached Image: 11-vmWare6.jpg


Play around with the other controller options to see what things you can manipulate in the simulator.

BB10 Simulator Setup in the IDE


Now that we have the BB10 simulator installed and running in the background, let's get the IDE hooked up to it.

Launch the Momentics IDE again and when it starts up, close the initial start up dialog if it appears. Click on the drop down menu that says "No Device Selected" to open it, and select "Manage Devices..."


Attached Image: 12-setup0.png


In the new dialog that appears, select the simulators tab at the top of the dialog, and then click on the "Begin Simulator Setup" button.


Attached Image: 13-setup1.png


You will be presented with a list of BB10 Simulators that you can install. Since we already downloaded and installed a simulator we don't have to do it from these options. Instead, notice that in the bottom left hand corner of the dialog there is a link to "Pair a simulator downloaded from another source".


Attached Image: 14-setup2.png


When you click the link to pair a simulator with the IDE, a dialog will open asking you to select the vmx file that you previously installed. Navigate to the location where you installed the BB10 simulator VM earlier, and select BlackBerry10Simulator.vmx.


Attached Image: 15-setup3.png


The Device Manager dialog will update and ask you for the IP address of the BB10 simulator that you want to connect to. You can either manually enter the IP address (which you can find in the bottom left hand corner of the running VM) or just click the "Auto-Detect" button in the IDE to have it find the simulator for you. Make sure your simulator is up and running. Once you have an IP address entered, click the Pair button.


Attached Image: 16-setup4.png


The IDE will connect to the simulator and realize that you are missing some debug symbols, which are needed when you try to debug apps. Click "Yes" in the dialog to download the symbols from the internet.


Attached Image: 17-setup5.png


Once the debug symbols are downloaded, you should see your simulator listed here along with the version of the BB10 OS it currently supports. If you want to change your simulator or install a new one, you can do that from this Device Manager dialog.

Notice that you can launch the BB10 Simulator controller directly from this dialog by clicking the "Open Controller" button rather than having to find the Controller program in your start menu.


Attached Image: 18-setup6.png


Updating the BB10 API Level


Anytime that BlackBerry releases a new version of their OS, a new Application Programming Interface (API) will come with it. That means if you want to take advantage of the new features, you'll need to download the new API. This also means that you'll need to download a new simulator that supports the new features.

To install a BB10 API level in the IDE, click on the Help menu, and choose Update API Levels...


Attached Image: 19-api.png


A dialog will appear showing which API levels you currently have installed (if any) and which ones you can choose to download and install. Also notice that you can choose "Beta" versions of new APIs using the tab at the top of the dialog. Beta APIs give you a glimpse of what is coming in the next version; however, be careful with them, as there may be bugs, or the APIs may change before they are fully released.

Choose the latest API level (and the one that is supported by your simulator) from this dialog by clicking the appropriate install button on the right.


Attached Image: 20-sample10.png


When the API is done downloading and installing you will see it listed at the top of the dialog as shown below.


Attached Image: 21-sample11.png


Anytime you install a new API level, make sure you close the IDE and restart it.

Installing a BB10 Sample App


There are lots of BB10 sample apps that you can download and try, to learn how to program and make BB10 applications. I'll step you through the process of downloading and installing a sample app next.

When the momentics IDE starts up, you will see the Welcome tab (unless you turned it off) that contains links to sample apps. This isn't the only way to get sample apps!


Attached Image: 22-sample0.png


Head down to the BlackBerry developer site found here: http://developer.blackberry.com/native/sampleapps/

On this webpage you will see a listing of many different Cascades apps that you can download and try. Go and download the app called "Pull My Beard" which is found in the UI section.

Click on the icon on the webpage to open a description of the app, and then select the Download source code button.


Attached Image: 23-sampleApp.jpg


Once the zip file is downloaded to your computer, go back to the IDE and right click inside the Project Explorer tab on the left side. Make sure you right click in the white space and not on the BlackBerry10Simulator target which is also listed here.

Now choose Import... from the menu.


Attached Image: 24-import.png


The import dialog will appear on the screen. Open the General folder, and choose "Existing Project into Workspace". Then click Next.


Attached Image: 25-sample6.png


In the next dialog select the "Select archive file:" radio button option and click the Browse... button. Navigate to your download folder, and choose the zip file that you just downloaded from the BlackBerry developer website. Click the "Open" button.


Attached Image: 26-fromArchive.png


The IDE should recognize that the Pull My Beard application is inside the zip file so it will list it in the projects section. Click the Finish button to copy the Pull My Beard app out of the zip file and into your workspace.


Attached Image: 27-finish.png


Take a look on your hard drive in the workspace folder (default location is C:\Users\yourName\momentics-workspace) to see that you should now have a new folder named pullmybeard that has all the source files for the app.

The IDE has also updated to show the pullmybeard app in the Project Explorer tab.

Running a BB10 Sample App in the Simulator


Once you have a BB10 app in the IDE, you can choose to run it on a Simulator or on a real BB10 device. Let's first look at how to run it in the Simulator.

In the first drop down menu, make sure you have "Run" selected. Here you can also choose Debug, which lets you step through the code while the app is running in case you need to figure out in detail what is going on.

Using the second drop down menu at the top in the IDE, select the pullmybeard app you want to run.

The third drop down menu is used to select where you want to run the app. Make sure you still have BlackBerry10Simulator selected.


Attached Image: 28-selectApp.jpg


With the above options selected, click the first blue hammer button found at the top left of the IDE to build the application for the simulator. The console should output some messages and finally say "Build Finished".

Notice that in the Project Explorer you will now have a new section under pullmybeard that is named "Binaries". This is where your built app code is found that can be installed in the simulator to run.


Attached Image: 29-build.png


To run the app after the code has finished building, click the second green triangle button found in the top left of the IDE. This will cause the IDE to transfer the appropriate built files to the simulator and launch the application. You should now see the app running in the simulator.


Attached Image: 30-runningInSimulator.jpg


Turn your PC speakers ON and pull the beard using your mouse in the simulator. Press the left mouse button when your cursor is on the beard, drag your mouse down and then release the mouse button.

You can stop a running app by either clicking the third red square button in the IDE, or by minimizing the app and closing it in the simulator. To minimize an app in the BB10 OS, swipe up from the bottom of the screen and then click the X beside the minimized app name.

After an app is installed on the simulator it will remain there until you delete it. You can delete it by pressing and holding the left mouse button on the app icon for a few seconds. The app grid will start to pulsate and a delete icon will appear in the corner of the icon. Clicking the delete icon will delete the app from the simulator.

Note that if you make some code changes in the IDE and tell it to rebuild and run the same app, the old version will be overwritten with the new version of the app before it runs.

If you only want to run the app (and you are not interested in debugging it), you can restart the app by clicking on the app icon in the app grid.


Attached Image: 31-appAvailable.jpg


Analyzing a Running Application


The IDE can be used for more than just building and running applications on BB10: you can also use it to navigate the BB10 file system, copy and move files around, and watch how resources and memory are used on the simulator or a real device.

All of these functionalities are available through a QNX perspective in the IDE. To open a new perspective, select Window > Open Perspective > Other...


Attached Image: 32-perspective.png


In the new dialog that appears, select QNX System Information and then click OK.


Attached Image: 33-qnx.png


In the top right hand corner of the IDE you will now have a new perspective available to toggle between. When the QNX System Information perspective is selected, the bottom portion of the IDE will list a number of tabs including: System Summary, Process Information, Memory Information, Malloc Information, Target File System.

On the left side of the IDE, in the Target Navigator, you will see a listing of apps and services that are running on your simulator/device. You can click on one of these options and then choose one of the tabs (Memory Information) to see how much memory the selected application is using.

Another handy thing to know about applications is that anything that you install will become available in the Sandboxes folder. See image below.


Attached Image: 34-target.png


After installing the Pull My Beard application, you can see that there is a new entry there that contains the app executable and a config folder used to store settings, data, db, logs, etc. If your app generates log messages, you will be able to find the log file in the log folder. The app code that you run when you launch the application is found in the app directory. There you can view and modify the QML code if you want to.

Running an App on a Real BB10 Device


So far you have seen how to setup the BB10 Simulator and run a sample application on it. Now let's build the Pull My Beard application for a real BB10 device and run it there.

A simulator is good to get you started with development, but there are certain things that can be done much better or easier on a real device. For this reason, it is always better to try to run your app on a real device before releasing it in BlackBerry World.

If you still have the BB10 simulator running, close the VM Player as you won't need it and it can take up a lot of PC resources.

Also if you are still in the QNX System Information perspective from the last section, return back to the C/C++ perspective by clicking on the icon to the left of the QNX System Perspective icon in the top right hand corner of the IDE.

Open the Device drop down menu, and you will see that it lists "No Device (USB) Detected". Choose the Manage Devices... option to setup the IDE for a real device.


Attached Image: 35-deviceSetup.png


In the Device Manager dialog, select "Devices" in the top and choose the "Set Up New BlackBerry 10 Device" button.


Attached Image: 36-setup.png


On the next screen you need to pair your device to the IDE. First of all you need to make sure that your device is running in "Development Mode". To put your device into development mode, swipe down from the top of the screen on your BB10 device to open the quick settings panel.


Attached Image: 37a-devMode.jpg


Click on the Settings button in the top left corner to open the system settings app.


Attached Image: 37b-devMode.png


Scroll down until you see the "Security and Privacy" option and click it.


Attached Image: 37c-devMode.png


Scroll down to the very bottom where you will see the "Development Mode" option and click it.


Attached Image: 37d-devMode.png


At the top of the screen there is a toggle button. Turn development mode ON by clicking on the toggle button. The device will ask you to set a device password if you don't have one set already.

Now that your device is ready in development mode, connect your BB10 device to your PC using a USB cable.

Back in the IDE, type in your device password and click the Next button to begin the pairing process.


Attached Image: 38-pair.png


After your device is paired with the IDE, you need to sign into your BlackBerry ID account so that you can generate a debug token. Fill in the input fields provided, and click the Get token button.


Attached Image: 39-token.png


A new dialog appears asking you to sign into your BlackBerry ID. If you don't have a BlackBerry ID account you can create a new one using the link at the bottom.


Attached Image: 40-login.png


After you sign into your account, you will be brought back to the IDE when your BlackBerry ID token has been successfully downloaded. Click the Next button.


Attached Image: 42-debug.png


You'll be back at the Device Manager dialog, with your BB10 device now listed as "usb_1". The IDE will detect if you are missing any debug symbols and ask you to download them from the Internet. Click "Yes" and wait for them to download.


Attached Image: 43-symbols.png


If you go back to your BB10 device and look in the Development Mode section, you should see that a debug token has been installed on your device.


Attached Image: 44-deviceTokenDetails.png


A debug token is good for 10 days. This means if you install and run an app from the IDE on to your device, you will be able to use the app for only 10 days. After that, the app will become disabled. The IDE automatically updates the token on your device every time you run an app through the IDE. When you finish creating an app and you are ready to distribute it, you will want to create a final release which will be signed by your BlackBerry signing keys. Devices that don't have your debug token installed will not be able to run your app.

OK: once your BlackBerry device is paired with the IDE and you have the debug symbols installed (note that it is always a good idea to restart the IDE after installing new symbols or a new API level), you can run your app from the IDE on your BB10 device.

In the IDE, select usb_1 from the drop down list.


Attached Image: 45-selectDevice.png


Now press the blue build button to build the app code, and then click the green run button to install and run the app on your device.

To see what happens when your debug token expires, go back to the Developer Mode options and click the "Remove Debug Token" button.

Now try to run the Pull My Beard app from the app grid on your BB10 device and you'll notice that the app fails to launch.

Creating a Release Run


After you've fully tested your app and cleaned up any bugs you are ready to create a release version of your app. This is the version that you can upload to BlackBerry World to distribute to people around the world. The release version is signed by your BlackBerry signing keys and it doesn't depend on the debug token that expires after 10 days.

To create a release build, use the drop down menu and choose "Release Run".


Attached Image: 46-release.png


Select your app in the second drop down menu and click the build button. After the app is finished building you will see a new category under your app name in the Project Explorer called "BAR Packages". This is the bar file that contains your BB10 installation files that can be installed on anyone's BB10 device.

A bar file is just a zip file containing all the files that are needed to run your app. You can open a bar file in any zip file extractor program if you are interested in seeing what is inside it.


Attached Image: 47-bar.png


Conclusion


There you have it: I covered how to install the Momentics IDE on a PC and how to get it up and running for development using Cascades. We downloaded and installed the BlackBerry 10 simulator and saw how to control it using a separate Controller application. We then looked at how to download a sample application and import it into the IDE. We learned how to build and run a BB10 Cascades application on both the BB10 Simulator and on a real BB10 device. Along the way I hope you learned a few tips on how to view the BB10 file system, monitor the memory usage of applications running on your device, and work with debug tokens and release builds.

If you are looking for detailed documentation on Cascades, the framework used for native BB10 development, then have a look here: https://developer.blackberry.com/native/documentation/cascades/

If you have any questions or comments about this lesson send me an email, I'm interested to hear what you thought of it. Also if you have suggestions for additional BB10 lessons send them along to me!


Article Update Log


18 Feb 2014: Initial release