Discovering multi-touch
In all my previous projects, I was dealing with either a single touch or touch gestures. Nothing more than two fingers, and even then the two fingers (a gesture) were interpreted as one user. After a lot of research and testing, I found that each touch is added to a touch index list (really an abstract list of touches). The most important part of a touch is its ID, which is assigned when the touch begins and doesn't change as other touches are added or removed.
Example:
Add 3 fingers
touch 1: (index 0), (id 0)
touch 2: (index 1), (id 1)
touch 3: (index 2), (id 2)
Remove the second finger
touch 1: (index 0), (id 0)
touch 2: (index 1), (id 2)
Add 1 finger
touch 1: (index 0), (id 0)
touch 2: (index 1), (id 2)
touch 3: (index 2), (id 1)
The ID staying the same is exactly what I need for mapping touches to per-user settings, which is good.
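The index/ID behavior above can be modeled with a small sketch. This is plain Java with hypothetical names, not the actual Android API: indices stay contiguous (later touches shift down when one is lifted), while IDs stay stable for a touch's lifetime and the lowest free ID is reused for new touches.

```java
import java.util.ArrayList;
import java.util.List;

// Models MotionEvent-style pointer bookkeeping: contiguous indices,
// stable per-touch IDs, lowest free ID reused. (Hypothetical sketch.)
class TouchList {
    private final List<Integer> ids = new ArrayList<>();

    // Press a new finger: assign the lowest ID not currently in use.
    int add() {
        int id = 0;
        while (ids.contains(id)) id++;
        ids.add(id); // appended at the end, so it gets the highest index
        return id;
    }

    // Lift the finger at a given index: later touches shift down one index.
    void removeAt(int index) { ids.remove(index); }

    int idAt(int index) { return ids.get(index); }
    int count() { return ids.size(); }
}
```

Running the example from the post through this sketch reproduces the table: after adding three fingers, removing the second, and adding one more, the new finger sits at index 2 but gets the recycled ID 1.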
Graphics
Adding the graphics engine (jPCT-AE) wasn't that hard... figuring out which options to configure so it would display a smooth environment with no shading (but still use depth), not so easy. When I turned off the fog and shading, the objects seemed... fake. Probably because it was just a few boxes with no perception of depth, and they were all gray. Ideally, I want them to be transparent and colored, which won't happen until week 3, when I buckle down on the coding and give each particle class its own object... although I recall an article from the Rajawali graphics engine, where the developer added all the objects to a single array. This allowed the simulation to run a large number of simulated objects (each with its own texture, rotation, and movement) without slowing down the app... and that app was using 2,000 objects ( http://www.rozengain.com/blog/2012/05/03/rajawali-tutorial-22-more-optimisation/ ). This makes me worry about the speed of my app when I have 4,000 small cubes flying around.
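The "single array" idea from that Rajawali article can be sketched in plain Java (this is not jPCT-AE or Rajawali API code, just the data layout): keep all cube state in flat parallel arrays and advance everything in one tight loop, instead of paying per-object overhead for thousands of scene-graph nodes.

```java
// Flat-array batch for thousands of cubes: one contiguous pass per frame,
// no per-cube allocation. (Illustrative sketch, not engine code.)
class CubeBatch {
    final float[] x, y, z;    // positions
    final float[] vx, vy, vz; // velocities
    final int count;

    CubeBatch(int count) {
        this.count = count;
        x = new float[count]; y = new float[count]; z = new float[count];
        vx = new float[count]; vy = new float[count]; vz = new float[count];
    }

    // Advance every cube by one timestep in a single loop.
    void update(float dt) {
        for (int i = 0; i < count; i++) {
            x[i] += vx[i] * dt;
            y[i] += vy[i] * dt;
            z[i] += vz[i] * dt;
        }
    }
}
```

With 4,000 cubes this is just 4,000 array reads and writes per component per frame, which is the kind of cache-friendly loop that keeps a mobile GPU/CPU from choking.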
Particles and what they can do for you
What I thought was a great way to design the particle system turned out to be horrible for multiple "users". I started with a basic particle holding the essentials (x, y, z, vx, vy, vz, size, color, randomness, and life), then made a particle system to contain them. All good so far... then I thought, "Where am I going to put these particle systems, and how will each particle know who its parent is while still being treated the same as all the other particles in the update/render phase?" I tried making an array of users (starting at 10 and growing as more touches are added), each with its own particle system, and each touch on the screen would be distributed to its user. But with multiple systems, some of which aren't even active, it's just too many random reads and too much redundant memory usage.
So then I thought, "Why not make just one particle system, store the users' settings in a static array, and have each created particle record which user it belongs to?" If a user's settings make the particle change colors, those settings are applied at creation time (the color array is linked rather than copied), and the particle is then pushed into the update bin with the rest.
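That single-system design can be sketched like this (the class and field names are hypothetical, not from my actual code): one particle system, a static table of per-user settings, and each particle remembering which user spawned it, with the color array shared by reference so a settings change retints that user's particles immediately.

```java
import java.util.ArrayList;
import java.util.List;

// Per-user settings stored once; particles link to them. (Sketch only.)
class UserSettings {
    float[] color; // shared, not copied
    UserSettings(float[] color) { this.color = color; }
}

class Particle {
    float x, y, z, vx, vy, vz, life;
    final int userId;    // which user spawned this particle
    final float[] color; // reference to that user's color array

    Particle(int userId, UserSettings s) {
        this.userId = userId;
        this.color = s.color; // linked rather than copied
        this.life = 1.0f;
    }
}

class ParticleSystem {
    // Static settings table: starts at 10 users, would grow as needed.
    static final UserSettings[] USERS = new UserSettings[10];
    final List<Particle> particles = new ArrayList<>();

    // Every particle lands in the same update bin, regardless of user.
    Particle spawn(int userId) {
        Particle p = new Particle(userId, USERS[userId]);
        particles.add(p);
        return p;
    }
}
```

The update/render loop never has to care who owns a particle; the user ID and linked settings are only consulted when per-user behavior (like color cycling) kicks in.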
So far, I haven't tried that. While working on one thing, I keep realizing "Oh, I need the graphics engine to do X" or "The user needs to have these things, then", or I hit some other distraction. All important and required, but still enough to keep me from finishing the particle system.
Conclusion
So, by the end of week 2 (or by Monday), I need to have:
A basic graphics system working
A unified particle system which takes users into account
A multi-touch system which understands each user and sends the touches to the profiles accordingly

