The current stage of development doesn't have a public playable demo, and barely has a presentation demo. This was primarily due to my lack of knowledge about the engines being used. If more research had been put into how well the engines could process a lot of points, I wouldn't be at this point right now.
= Setbacks, the engine =
When I started the project, I was very comfortable with the jPCT-AE engine, since I had recently used it on my previous project. I had noticed it had a problem processing a lot of objects, but those objects also had a lot of vertices each. Since this project would only have a minimal display for each object (a colored square), I figured the problem wouldn't exist... not the case.
By the time I finished the base code and had a presentable program, the best I was getting out of the engine with 200 objects was 14 fps (out of a possible 60). This was pretty pathetic, considering far more graphically intense games run at higher frame rates.
After some research, I found the Rajawali engine, which boasted of being able to process 2,000 objects, each with its own location/rotation/texture, in a fluid running environment. It managed this by storing all the objects in a single object and manipulating the vertices for each "object" being shown. This was the exact fix I needed to pull the game out of the <20 fps ditch it was falling into.
= Overcoming the setback (part 1) =
So, I hunted for code in the jPCT engine that would let me interact with the vertex points of a single object. The best thing I found was a class called "GenericVertexController", which was essentially a handle to the object. Every onDraw call, I would call the controller's apply function, which would run a pre-defined set of commands to adjust the vertices.
Research then went into how to effectively use this controller. I ended up making a loop in the apply function that would go through all the particle objects (objects that now only hold information about the particle) and apply their data (position) to the vertices... and nothing really happened after apply was called. But that's what I get for using incomplete code found in forums.
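For what it's worth, stripped of the jPCT specifics, the apply loop I was going for boils down to something like this. All names here (Particle, ParticleBatch) are my own illustration, not engine API:

```java
// Sketch of the "one mesh, many particles" idea: every particle owns
// one vertex slot in a shared position array, and a per-frame apply()
// pass copies particle state into that array -- roughly what a
// GenericVertexController-style apply() is supposed to do.
class Particle {
    float x, y, z;
    Particle(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
}

class ParticleBatch {
    final Particle[] particles;
    final float[] vertices; // x,y,z triplets, one slot per particle

    ParticleBatch(Particle[] particles) {
        this.particles = particles;
        this.vertices = new float[particles.length * 3];
    }

    // Called once per frame, before the single draw call.
    void apply() {
        for (int i = 0; i < particles.length; i++) {
            vertices[i * 3]     = particles[i].x;
            vertices[i * 3 + 1] = particles[i].y;
            vertices[i * 3 + 2] = particles[i].z;
        }
    }
}
```

One guess at why nothing happened in my case: as far as I can tell, a controller like this only fills a destination array, and the engine still has to be told to re-read it every frame. I never did confirm that.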
There was another, equally large problem too. Since all the particles were contained inside the single object, I could no longer use the convenient "setColor(int r, int g, int b)" and "setAlpha(float a)" functions the object provided; using them would change the color and alpha of every particle. Fine if all the particles are meant to be the same color/transparency, but in this case they're not.
So, after talking to the developer, he said I would need a vertex shader to change the colors, and I could just send an array of colors to the shader to be applied to the objects... but the engine currently did not have this feature, because of some problem the developer himself was unsure of. Not the most reassuring thing to hear.
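The color-array idea he was describing would look roughly like this on the Java side: one RGBA entry per particle, kept in a flat array and fed to the shader as a vertex attribute. Again, a sketch with made-up names, not a real engine API:

```java
// Per-particle color without setColor(): each particle owns four
// consecutive floats (r, g, b, a) in one shared array that gets
// uploaded as a color attribute alongside the positions.
class ColorBatch {
    final float[] colors; // r,g,b,a per particle

    ColorBatch(int particleCount) {
        colors = new float[particleCount * 4];
    }

    void setParticleColor(int i, float r, float g, float b, float a) {
        colors[i * 4]     = r;
        colors[i * 4 + 1] = g;
        colors[i * 4 + 2] = b;
        colors[i * 4 + 3] = a;
    }
}
```

The point is that changing one particle only touches its own four floats, so the rest of the batch keeps its colors.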
= Overcoming the setback (part 2) =
Since jPCT-AE wasn't able to give me more power/functionality, I decided to swap it out for Rajawali. This new engine showed up front the ability to interact with vertex points, vertex colors, normals, and texture coordinates in a casual, open way. But to use it, I had to replace many functions in the game, and re-write how the game ran due to the dependencies required to make the engine run.
After about 3 days of configuring objects to be compatible with the new engine, while also keeping the convenient functions I had used in the old engine, I was finally able to test the game... well, nearly.
= Shaders, it's the new (graphical) view =
To display the particles with this new engine, I would need to write my own graphics shaders to properly color the vertex points. Good thing DeVry was keeping up with technology and taught us how to write shaders... HA, no. Shader technology came out in 2004 and DeVry still didn't have any classes on it, though apparently there are rumors of a class being introduced in mid-2013 to teach shaders (which would be after my graduation).
So, I was on my own to learn this mysterious shader magic. After searching around the web, I found a tutorial by NeHe which covered just about everything about what shaders do and how to use them. I spent a few hours reading the site (there were many pages) and came away with a nicely optimized vertex and fragment shader. I was now ready to test out my game and behold the flying particles.
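For illustration, here's a minimal vertex/fragment pair of the sort that tutorial material builds up to, written the usual Android way as Java strings. The uniform/attribute names (uMVPMatrix, aPosition, aColor) are my own, not the tutorial's exact code:

```java
// Minimal GLSL ES shader pair for colored point particles.
// The vertex stage transforms each point and passes its color
// through; the fragment stage just paints that color.
class ParticleShaders {
    static final String VERTEX =
        "uniform mat4 uMVPMatrix;\n" +
        "attribute vec4 aPosition;\n" +
        "attribute vec4 aColor;\n" +
        "varying vec4 vColor;\n" +
        "void main() {\n" +
        "  vColor = aColor;\n" +                    // per-vertex color to fragment stage
        "  gl_Position = uMVPMatrix * aPosition;\n" +
        "  gl_PointSize = 8.0;\n" +                 // draw each particle as a point sprite
        "}\n";

    static final String FRAGMENT =
        "precision mediump float;\n" +
        "varying vec4 vColor;\n" +
        "void main() {\n" +
        "  gl_FragColor = vColor;\n" +              // color comes from the vertex stage
        "}\n";
}
```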
= Color? We don't need that stuff =
Swap out an engine for a new one, learn the language of the GPU, re-configure/re-arrange the game to support the new particle hierarchy, and expect there to be no problems? Something had to give, and in this case, it was color. I beat my head against this problem for nearly a week. I did everything the other tutorials did.
Create an object and have it extend Object3D, call the constructor while creating my new object, set up the vertex information, pass this data to the parent object so it can be prepared for rendering, create a material, add the shaders to this material, start rendering... and yet my object was black.
After looking through hundreds of lines of the engine's code (because there was no documentation for the engine), I found that a light source was required (not actually used, just required) for the color data to be sent to the shader.
Take 2, and still no color. After more research and code reading, I thought "what the hell, let's see if my shaders even work", so in the fragment shader I told it to color every fragment red... and it worked. I then told my vertex shader (on its last line) to do the same thing, adjusted the fragment shader to use the vertex shader's color... and it worked (quite the head-scratcher, isn't it?).
Turns out my fog code was being run and was turning the color black. Which was strange, because it was nested inside an if statement that needed the variable "fogEnabled" to be true for the code to run. So, more hunting, and I added some "#ifdef FOG_ENABLED" guards like the ones used in the fog demo, which the shader program compiler in the engine backs up. My program finally had color.
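The pattern that fixed it, sketched below. This is illustrative, not Rajawali's actual fog demo code; the fog names (uFogColor, vFogFactor) are my own. The trick is that the guarded block compiles away entirely unless the engine's shader builder prepends a "#define FOG_ENABLED" line:

```java
// Fragment shader with the fog math behind a preprocessor guard,
// so disabled fog is removed at compile time rather than trusted
// to a runtime boolean uniform.
class FogGuardShader {
    static final String FRAGMENT =
        "precision mediump float;\n" +
        "varying vec4 vColor;\n" +
        "#ifdef FOG_ENABLED\n" +
        "uniform vec3 uFogColor;\n" +       // declarations sit behind the same guard
        "varying float vFogFactor;\n" +
        "#endif\n" +
        "void main() {\n" +
        "  vec4 color = vColor;\n" +
        "#ifdef FOG_ENABLED\n" +
        "  color.rgb = mix(uFogColor, color.rgb, vFogFactor);\n" +
        "#endif\n" +
        "  gl_FragColor = color;\n" +
        "}\n";
}
```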
= Nothing better to fix a problem than another problem =
And just when I thought "Hey, I might actually get the game out on time", the program threw another laugh at me. The problem I had in the first few weeks, converting a touch on the screen into a point in the virtual world, was back. And to add more humor, the world scaling was off and I had no easy access to the jPCT implementation of this 2D->3D conversion (well, I "acquired" some source code, but it was very heavily tied into the rest of the engine. Too much for an easy copy/paste).
The code Rajawali had for the 2D->3D conversion was all weird. It either gave values so small that they hovered around the 0,0,0 point, or a point which wasn't even within range of the virtual world.
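For reference, here's the kind of conversion I'm hunting for, heavily simplified: assume a perspective camera sitting at the origin looking down -Z, turn the touch into normalized device coordinates, and scale the resulting view-space ray out to a chosen depth plane. A generic sketch under those assumptions, not Rajawali's or jPCT's actual code:

```java
// Simplified 2D->3D touch unprojection for a camera at the origin
// looking down -Z with a vertical field of view of fovYDegrees.
class Unproject {
    // Returns {x, y, z}: the touch projected onto the plane z = -depth.
    static float[] touchToWorld(float touchX, float touchY,
                                int screenW, int screenH,
                                float fovYDegrees, float depth) {
        // Screen -> NDC: x right in [-1,1], y up in [-1,1]
        float ndcX = 2f * touchX / screenW - 1f;
        float ndcY = 1f - 2f * touchY / screenH; // screen y grows downward
        float aspect = (float) screenW / screenH;
        float halfFov = (float) Math.tan(Math.toRadians(fovYDegrees) / 2.0);
        // View-space pick ray direction, scaled so it lands exactly
        // on the z = -depth plane (no normalization needed).
        float dirX = ndcX * aspect * halfFov;
        float dirY = ndcY * halfFov;
        return new float[] { dirX * depth, dirY * depth, -depth };
    }
}
```

A sanity check of the math: the center of the screen should always land on the camera axis at {0, 0, -depth}, and touches on the left half should give negative x.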
I looked online for how to implement the function... and this is where the project is at the moment. The game screen has 2 points (one red and one green, both for testing, that show the bounds of the screen... and for some reason, they are not going away despite their data being over-written) and another point which flickers between multiple colors (I set the life of the particle very low and made it change colors when being revived). The game is not appealing to play.
Although the code is very sexy and well organized, it means nothing without the 2D->3D conversion to tell where the particles will be emitted from. Sure, there's the problem of the world coordinates being smaller than in the jPCT engine, but that's nothing compared to the lack of a conversion function.
= Class specific questions =
1. Did the project teach you about emerging technologies?
- Yes and no. Yes, in that I finally got my hands on learning how to use shaders (thank you again, DeVry, for being in line with the new technology). But no, because this technology came out 9 years ago and is no longer "emerging technology". Sure, this was also my first attempt at implementing gravimeter (gravity + acceleration) data in a program, but that's old technology too. The only somewhat-new "emerging technology" used was that I was coding a project compatible with Android 4.0.3, which came out in early 2012... damn, I thought it was sooner than that. Correction: I didn't learn anything about emerging technologies.
2. Did it help you manage your time in relation to a project involving emerging technologies?
- I don't know what the "it" is referring to, but this project was the first time I used a time sheet to record my programming times and what I did during each session. It seems my average coding session lasted 4.5 hours (as shown in the first picture; I stop coding to eat, then resume coding). Comparing this to another project I was working on at the same time (in Flash AS2), visible features start appearing, relative to how much has been coded, around the 45-50 hour mark.
![]()
*Majority of the coding was done between midnight and 10 am*
![]()
*Weekend coding marathons?*
3. Did it help you decide how you might apply your skills in games and simulation programming to the world of ideas and technologies discussed in the class?
- To be blunt, I would have gotten an extra 40% farther on this program, and have a public alpha, if I wasn't taking this class. Although I read the lectures, read the discussion threads, joined in them a bit, and wrote papers about emerging technologies (well, more like how we got to that tech; the show Fringe showed more emerging technologies than the class)... the class's info about emerging technologies didn't help at all with this program, or with how to apply my skills.
- What did help me decide where to apply my skills, based on what I picked up during my weeks in this class, is that I should really invest more time in writing my own graphics engine. The current engines out there have too much padding between the developer and the data, which limits performance (jPCT-AE), are too obnoxious to use (I'm talking about you, jMonkey. 10 MB for a mobile engine, what the hell?!?), or have so many hoops to jump through to make the engine work that it becomes a pain (Rajawali). When I make my engine, I'd make data easy to change, provide convenient functions for the high-level stuff, and preserve access to the performance-gaining stuff. If I pass this class, I'll finally be done with school and have time to work on this engine.
= Project wrap-up =
Yes, the project ended on a sad note: there is no working demo. The demo I do have is riddled with performance issues (I archived the jPCT project code and made a new project for the Rajawali code), and the current demo is nothing more than a black screen with a flickering dot.
Although this class is over, I do plan to continue this project and make a publicly playable game which can be downloaded from the Android Market. Last I checked, there were no particle-based games with particles flying around in a 3D environment. It's not like that feat is impossible; it just requires more work than a 2D one would.
To finish what I have planned for this project, a new graphics engine would need to be created: one which allows easy creation of objects, easy alteration of an object's geometry/color, and provides as much performance as the OpenGL code allows. I don't know a lot about graphics engines, and my previous attempt to make one failed (thank you, DeVry, for knowing how to learn me well). But after leaving DeVry, and not having to worry about online discussion posting requirements or weekly paper deadlines, I'll be able to focus on engine development with a clear mind and get more games rolled out and pushed to the mobile market (to start making some money, because loans are f-ing heavy).
= Here be dragons =
For those interested in testing what I do have, here are the links to the apk files. If you don't know how to install an apk file on your phone, ask Google.
The first attempt, using jPCT. Up to 8 fingers can be used, each one creating particles which interact with the orientation of the phone. Each added finger/particle emitter WILL slow down the phone. If you add 8 fingers, expect the program to run very slowly.
The second attempt, using Rajawali. Adding multiple fingers still works (up to 10 now, if your device supports it) and it won't give your phone a heart attack like the above link... but the only thing visible is some flickering dots hovering around the center of the screen. There are also 1 red and 1 green dot; I'm still not sure why they won't leave (yes, I put them there, but their data is over-written, so they should leave... but don't).
= Below this heading is intentionally left blank =







