Friday, January 25, 2013

Late week 2: tangled in code

Damn, this week was long. I had to rehash what I knew about multi-touch (and how the device interprets touches), re-learn which parameters were required and which were optional in the graphics engine I was using, and work out how the particle system will work with each user (I still haven't gotten it working with a central particle database).

Discovering multi-touch
In all my previous projects, I was either dealing with one touch or touch gestures. Nothing used more than two fingers, and even then the two fingers (gestures) were interpreted as a single user.
After a lot of research and testing, I found that each touch is added to a touch index list (which is more of an abstract list of touches). The most important part is the touch ID, which is assigned to each touch and doesn't change as touches are added or removed.

Example:
Add 3 fingers
touch 1: (index 0), (id 0)
touch 2: (index 1), (id 1)
touch 3: (index 2), (id 2)

Remove the second finger
touch 1: (index 0),  (id 0)
touch 2: (index 1),  (id 2)

Add 1 finger
touch 1: (index 0),  (id 0)
touch 2: (index 1),  (id 2)
touch 3: (index 2),  (id 1)

The ID staying the same is exactly what I need for the per-touch settings, so that was good news.
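To make sure I actually understood it, here's a minimal, self-contained sketch of that behavior in plain Java (no Android classes; `TouchList` and its methods are my own made-up names, just mimicking the index/ID rules described above):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper mimicking how touches get tracked: the index is just
// a position in the list, but the ID is the lowest unused number and stays
// with a touch until that finger is lifted.
class TouchList {
    private final List<Integer> ids = new ArrayList<>(); // index -> id

    // A new touch receives the lowest ID not currently in use.
    public int add() {
        int id = 0;
        while (ids.contains(id)) id++;
        ids.add(id);
        return id;
    }

    // Removing a touch shifts later indexes down; IDs are untouched.
    public void removeByIndex(int index) {
        ids.remove(index);
    }

    public int idAt(int index) { return ids.get(index); }

    public int count() { return ids.size(); }
}
```

Running the same add-3 / remove-second / add-1 sequence from the example reproduces the table: the new finger ends up at the last index but gets the recycled ID 1.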

Graphics
Adding the graphics engine (jPCT-AE) wasn't that hard....figuring out which options to configure so it would display a smooth environment with no shading (but still use depth), not so easy. When I turned off the fog and shading, the objects seemed...fake. Probably because they were just a few gray boxes with no perception of depth. Ideally, I want them to be transparent and colored, which won't happen until week 3 when I buckle down on the coding and give each particle class its own object....although I recall an article from the Rajawali graphics engine where the developer added all the objects to a single array. This allowed the simulation to run a large number of simulated objects (each with its own texture, rotation, and movement) without slowing down the app....and that demo was running 2,000 objects ( http://www.rozengain.com/blog/2012/05/03/rajawali-tutorial-22-more-optimisation/ ). This makes me worry about the speed of my app when I have 4,000 small cubes flying around.
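For reference, the kind of setup I'm talking about looks roughly like this in jPCT-AE. This is a sketch from memory, not the exact code I'm running; the cube size, color, and transparency values are placeholders:

```java
import com.threed.jpct.Object3D;
import com.threed.jpct.Primitives;
import com.threed.jpct.RGBColor;
import com.threed.jpct.World;

// Rough scene setup: fog off, full ambient light (so no directional
// shading), depth testing still on by default.
World world = new World();
world.setFogging(World.FOGGING_DISABLED);
world.setAmbientLight(255, 255, 255);

Object3D cube = Primitives.getCube(10);               // placeholder size
cube.setAdditionalColor(new RGBColor(0, 128, 255));   // colored, not gray
cube.setTransparency(8);  // low values = more transparent
cube.build();
world.addObject(cube);
```

This would still need a FrameBuffer and a GLSurfaceView renderer around it to draw anything, which I'm leaving out here.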


Particles and what they can do for you

What I thought was a great way to design the particle system turned out to be horrible for multiple "users". I started with an initial particle which has the basics (x, y, z, vx, vy, vz, size, color, randomness, and life). Then I made the particle system containing them. All good so far...then I thought "where am I going to put these particle systems, and how will each particle know who its parent is while still being the same as all the other particles in the update/render phase?" I tried making an array of users (starts at 10 and grows when more touches are added), with each user holding a particle system. The action of touching the screen would then be distributed to each user. But with multiple systems, some of which aren't even activated, it's just too many random reads and too much redundant memory usage.
So then I thought "why not just make 1 particle system, store the settings for the users in a static array, and have each created particle remember who its user is?" If a user's settings have the particle change colors, those settings will be configured upon creation (color array linked rather than copied), and then the particle gets pushed into the update bin with the rest.
So far, I haven't tried that. While working on one thing, I keep remembering "Oh, I need the graphics engine to do X" or "The user needs to have these things then", or some other distraction. Although important and required, they're still enough to keep me from completing the particle system.
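Even untried, the shape of that second design is simple enough to sketch in plain Java (all class names here are my own placeholders): one particle pool, a static array of user settings, and each particle just remembering its user's slot. Note the color array is shared by reference, never copied:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical per-user settings; the color array is linked, not copied.
class UserSettings {
    float gravity;
    int[] colors;          // this user's color cycle

    UserSettings(float gravity, int[] colors) {
        this.gravity = gravity;
        this.colors = colors;
    }
}

class Particle {
    float x, y, z, vx, vy, vz;
    int user;              // index into ParticleSystem.users
    int[] colors;          // reference to the owning user's array

    Particle(int user) {
        this.user = user;
        this.colors = ParticleSystem.users[user].colors; // linked, not copied
    }
}

class ParticleSystem {
    // Static settings array: starts at 10, would grow with more touches.
    static final UserSettings[] users = new UserSettings[10];

    // One update/render bin for every particle, regardless of user.
    final List<Particle> pool = new ArrayList<>();

    Particle emit(int user) {
        Particle p = new Particle(user);
        pool.add(p);
        return p;
    }
}
```

The payoff of the shared reference is that changing a user's color array later affects every live particle belonging to that user, with no per-particle copying.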

Conclusion
So, by the end of week 2 (or by Monday), I need to have:
- A basic graphics system working
- A unified particle system which takes users into account
- A multi-touch system which understands each user and sends the touches to the profiles accordingly

Currently, the only visual thing I have is a gray box in a purple void....and it's not even the correct size cube. As for the code in the image, it's what is preventing me from adding a second box (damn multi-touch and my lack of knowledge of how to use it).



Saturday, January 12, 2013

Page 1 - pen or pencil?

First blog post for my new class, GSP-475 (Emerging Technology). For this class, I was assigned to work on a project which contains one of the following:
(a) a touch screen application
(b) a sensor-based device
(c) a video game level using off-the-shelf development tools
(d) a PowerPoint presentation describing the evolution of one of the newer entertainment or technology forms.

Although I was once into electronics, and pretty good at it too, I've chosen to go with software since it's obsessively on my mind, while electronics was more of a hobby (cuts out option b). Option (d) didn't make much sense until I saw the course project info on the class's page. It involved Second Life or another online community of that sort, and I'm not a fan of those (cuts out option d).

Which leaves options (a) and (c)...so I figured "why not merge both and make a touch-based game for the phone?" Sure, I've done such things in the past, but those were games with touch, not a touch-based game with a game added.....yeah, that probably sounds confusing.
Anyway, after talking with the teacher about some ideas for this project, one was settled on. And in case this blog's URL didn't make enough sense, the chosen game will involve water and touch.


Premise:
The user (or the user and friends) touches the screen for the main interaction. Everywhere there is a touch, a little jet of water will come from the touched point and flow into 3D space. To add a sense of realism, the water will take the phone's gyroscope into account, so regardless of what orientation the phone is in, the water will fall downward (except when told to flow another way; more details later on that).
In the settings, each touch point will have its own settings: color, gravity effect, splash effect, drop shape/size, emission count (how much water to emit every second), air friction, nozzle size, and emission force (how fast to shoot the water out).
After getting the initial settings worked out, I might try adding global effects. Blur, time-lapse blur, time-z blur (old frames are pushed away from the screen and blurred), or blur horizontally/ vertically (each dot, not the world).


How it will be done:
All great ideas sound great on paper, but they're useless if they can't be implemented. So my timeline/goals to create this app/game...game, will include the following (most likely in this order):
- create a particle which can represent all the needed features (x/y, velocity, color)
- a particle system to create the particles, move them, and change color (if set to do so)
- apply the phone's gravity to the particles
- create a menu to change the settings (this would be a nice time to try out the new "Holo" theme which Android is raving about)
- create "user" settings for each touch (max number of "users" = max touches the screen can support, sounds convenient enough)
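The gravity step from that list is the part I can already sketch: feed a gravity vector (eventually derived from the phone's sensors) into each particle's velocity every frame. Plain Java with made-up names and a simple Euler step with a fixed timestep:

```java
// Minimal gravity integration for one water particle. gx/gy/gz would come
// from the phone's gravity sensor; here they are plain parameters.
class WaterDrop {
    float x, y, z;    // position
    float vx, vy, vz; // velocity

    // Euler step: accelerate along the gravity vector, then move.
    void update(float gx, float gy, float gz, float dt) {
        vx += gx * dt;
        vy += gy * dt;
        vz += gz * dt;
        x += vx * dt;
        y += vy * dt;
        z += vz * dt;
    }
}
```

Because the gravity vector is a parameter, rotating the phone just changes which components of (gx, gy, gz) are non-zero; the particle code itself never needs to know the orientation.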

After these have been done and there is still time, then post-effects will be coded.


What has been done:
- I created a particle class and particle system (made from the basis of my previous game). Not exactly


What is being done now:
- Figuring out which engine to use. I've tried writing my own engine before; I suck at it. My options are: jPCT-AE (which was used in my previous game), a new 3D engine, or a new 2D engine with faked 3D. Most likely I'll stick with true 3D; less complex coding in the long run.