So, for the last few days (most of the time went to other programming work), I've been converting all my code from jPCT-AE to Rajawali.
The new engine lets me create objects and interact directly with the vertex points, vertex indices, and vertex attributes, and feed in custom shader data/functions (which I'm still not sure how to do with jPCT).
The Exchange
At first when converting my code to the new engine, I decided to use the source code and just copy what was needed and skip the rest (Rajawali provides up-to-date source along with a compiled library). The idea behind this was that I would have a smaller file size and more control over everything that was happening.
...I ended up spending 4 hours JUST on importing files, renaming things to work with my program, and trying to figure out "do I need this section of code, or can I cut it out?", which just ended up being a tiring process and a waste of good programming time.
The NEW Exchange
After a night's worth of rest, I came to the realization: "just use the damn library. It comes in a convenient package, and if tweaking is necessary, you can make a new engine when not under a heavy programming workload"....So, I ripped out the jPCT engine and installed Rajawali (which was the easy part)....configuring my code to use the new engine, not quite so easy.
The way jPCT works:
= Setting up the game
1. Make an Activity
2. Make your main game code
3. Make a Renderer and pass in the main game code (so the onDrawFrame code knows where to pass the frame buffer reference)
4. Make a Touch Surface and pass in the main game code and renderer. The game code is so it knows where to send the touch commands; the renderer is so the app can set up the OpenGL environment and set the phone's renderer to your renderer.
5. Set the content view to your Touch Surface: "setContentView(touchSurface);". This tells the Android system to bring touchSurface into view and start running the code.
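A minimal sketch of that wiring, assuming hypothetical class names (MyGame, MyRenderer, and TouchSurface stand in for whatever your own classes are called), could look roughly like this:
public class GameActivity extends android.app.Activity
{
    private MyGame game;               // hypothetical: your main game code
    private MyRenderer renderer;       // hypothetical: implements GLSurfaceView.Renderer, owns the jPCT FrameBuffer
    private TouchSurface touchSurface; // hypothetical: a GLSurfaceView subclass that forwards touches

    @Override
    protected void onCreate(android.os.Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        game = new MyGame();                                    // step 2: main game code
        renderer = new MyRenderer(game);                        // step 3: renderer knows the game
        touchSurface = new TouchSurface(this, game, renderer);  // step 4: surface hooks up input and sets the renderer internally
        setContentView(touchSurface);                           // step 5: show it and start running
    }
}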
= Running the game
1. The renderer in
public void onDrawFrame(GL10 gl)
{
    if (paused)
        return;
    fb.clear(back);    // clear the frame buffer to the background color
    screen.Update();   // let the game code update its state
    screen.Render(fb); // let the game code draw its world into the frame buffer
    fb.display();      // show the finished frame on screen
}
calls your game code to update and render, and your game does as it's told. You never interact with the frame buffer directly; it is passed to your game code's world, which has all your objects/lights contained inside.
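For reference, inside something like screen.Render(fb) the jPCT-AE calls are typically just the world's own render pair; a sketch, where GameScreen is a hypothetical stand-in for my game code class:
import com.threed.jpct.FrameBuffer;
import com.threed.jpct.World;

public class GameScreen
{
    private final World world = new World(); // holds all the objects and lights

    public void Update()
    {
        // move objects, advance animations, etc.
    }

    public void Render(FrameBuffer fb)
    {
        world.renderScene(fb); // let jPCT transform and light everything in the world
        world.draw(fb);        // draw the result into the frame buffer the renderer passed in
    }
}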
2. Your touch surface receives input and sends it to your game code. My first game used a queue system; now it uses a static reference system (seems more efficient).
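By "static reference system" I just mean the touch surface writes straight into static fields that the game code reads on its next update; a minimal sketch (class and field names are hypothetical):
import android.view.MotionEvent;

// latest touch state: written by the touch surface, read by the game code in Update()
public class TouchInput
{
    public static volatile float x;
    public static volatile float y;
    public static volatile boolean down;

    public static void onTouch(MotionEvent e)
    {
        x = e.getX();
        y = e.getY();
        down = e.getAction() != MotionEvent.ACTION_UP;
    }
}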
The way Rajawali works:
= Setting up the game
1. Make an activity but have it extend RajawaliActivity instead of Activity (Android native)
2. Make a renderer and have it extend RajawaliRenderer instead of Renderer (Android native)
3. Make your main game code and give it a reference to your renderer (so it can interact with the renderer)
4. Tell the renderer about the main game code (for its onDrawFrame code, which says when to update and render)
...and that's it (for the setup). The rest is covered by Rajawali....which isn't the most comforting...BUT, it does make up for it when running the game.
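A sketch of that setup, with GameActivity, GameRenderer, and MyGame as hypothetical names; setRenderer(...) is how the version of RajawaliActivity I'm using hands the renderer over, so adjust if your version differs:
public class GameActivity extends rajawali.RajawaliActivity
{
    private GameRenderer renderer; // hypothetical: extends RajawaliRenderer
    private MyGame game;           // hypothetical: main game code

    @Override
    protected void onCreate(android.os.Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        renderer = new GameRenderer(this); // step 2: renderer extends RajawaliRenderer
        game = new MyGame(renderer);       // step 3: game gets a reference to the renderer
        renderer.setGame(game);            // step 4: hypothetical setter so onDrawFrame can update/render the game
        setRenderer(renderer);             // hand the renderer to RajawaliActivity (assumed API for this version)
    }
}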
= Running the game
1. (...skip the boring "setting up the world view") CREATE OBJECTS...well, currently it still looks like a copy/paste of the code as seen here (and that's because it is), but that's because I've been working on something more important to the game: GLSL shaders. ;-)
Making Shaders
To begin with, and put bluntly: Holy Crap, this stuff is confusing. And I blame the confusion on how "things" are processed. In conventional programming, there is a loop which cycles through all of the objects, and there is a defined variable which says "this is the ##th object, let's do something with it". In shader code (which took me so long to understand, and I didn't fully understand it till reading NeHe's tutorial), the "objects" are ALL processed at the same-ish time.
When passing the data to the shader, it is sent in buffers that contain: how much information there is, the type of info, the info itself, and how many items are in each "group" (although I may be confusing standard usage with VBOs on this item). Anyway, the data is all passed to the shader, it gets cut into chunks, and each chunk gets independently processed in the vertex shader.
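Rajawali does this wiring internally, but done by hand on Android it comes down to the raw GLES20 calls; a sketch of hooking one attribute up to my shader's aPosition (the FloatBuffer setup is standard NIO boilerplate, the class name is made up):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import android.opengl.GLES20;

public class AttributeUpload
{
    // pack the raw floats into a native-order buffer the GPU can read
    public static FloatBuffer toBuffer(float[] data)
    {
        FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        fb.put(data).position(0);
        return fb;
    }

    // point the shader's aPosition attribute at the buffer: 4 floats per vertex
    public static void bindPositions(int program, FloatBuffer positions)
    {
        int handle = GLES20.glGetAttribLocation(program, "aPosition");
        GLES20.glEnableVertexAttribArray(handle);
        GLES20.glVertexAttribPointer(handle, 4, GLES20.GL_FLOAT, false, 0, positions);
    }
}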
The end result of the vertex shader is a large pile of items containing the end-result info about the point processed. Specifically, where the point is (saved to gl_Position) and any other information which needs to be known about this point (like the end-result color).
ALL the fancy lighting effects which are done on the object (including your 10+ light sources, light source color, material glossiness, diffuse and specular properties), the fog amount, fog color, ambient world color, per-vertex color, and alpha amount....all compressed down to an RGBA value (known in shader code as a vec4, a vector of four floats).
The vertex shader data then gets sent to the fragment shader, which is....kind of abstract. The vertex shader processes each vertex (x/y/z) point, while the fragment shader processes fragments: roughly pixel-sized pieces of the triangles that end up on screen. For each fragment, the outputs from the triangle's vertices get blended (interpolated) based on where the fragment sits between them, so multiple vertex points become one "point". Which explains how the machine is able to process so much data and display it on a small screen.
The process of me learning the shaders, un-doing the confusion (set by how shaders work), and making my own shaders (yay)...ended up eating up a good 5-6 hours. Yea...it's that rough. Would have been nice if there was someone I could talk to to help me understand it faster/better. But Devry isn't exactly in the business of teaching, and I'm the only person I know who is this deep in programming....or even programs, for that matter (pretty sad).....
MY SHADERS!!
I haven't yet tested them, so there is a good chance they are not fully correct (I'm mostly worried about the fragment shader), but I am mostly positive that the vertex shader is correct. My only confusion is how to send the alpha values for each vertex...but this is because I haven't done anything with the game code yet.
Vertex Shader:
uniform mat4 uMVPMatrix;       // model-view-projection matrix
uniform vec4 uCamPos;          // camera position
uniform vec4 ambiantColor;     // "air" color applied to everything
uniform float fogStart;        // distance where fog begins (-1.0 means no fog)
uniform float fogEnd;          // distance where fog is at full strength
uniform vec3 fogColor;         // color applied after fog start
attribute vec4 aPosition;      // vertex x,y,z
attribute vec4 aPlanePosition; // triangle x,y,z
attribute vec4 aColor;         // rgb
attribute float alpha;         // alpha value
varying vec4 vertexColor;      // color of the vertex point
const vec4 WHITE = vec4(1.0, 1.0, 1.0, 1.0);
void main()
{
    vec4 vertexPos = uCamPos * (aPosition + aPlanePosition);
    float fogWeight;       // how heavily the fog color will be applied
    vec3 fogVertexColor;   // color from the fog
    // make fog
    if (fogStart != -1.0) {
        fogWeight = clamp((-vertexPos.z - fogStart) / (fogEnd - fogStart), 0.0, 1.0);
        fogVertexColor = fogColor * fogWeight;
    } else {
        fogWeight = -1.0;
    }
    // make the color
    vertexColor = vec4(min(WHITE, ambiantColor).xyz, alpha);
    vertexColor *= aColor;
    // apply the fog
    if (fogWeight > -0.9) {
        vertexColor.xyz = (1.0 - fogWeight) * vertexColor.xyz + fogVertexColor;
    }
    gl_Position = uMVPMatrix * (aPosition + aPlanePosition);
}
Fragment Shader:
precision mediump float;  // GLSL ES fragment shaders need a default float precision
varying vec4 vertexColor; // interpolated color from the vertex shader
void main()
{
    gl_FragColor = vertexColor;
}
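Rajawali compiles and links these for you, but for reference, getting the two stages onto the GPU by hand is the usual GLES20 compile-and-link dance; a sketch with no error checking and a made-up class name:
import android.opengl.GLES20;

public class ShaderLoader
{
    // compile one shader stage (GL_VERTEX_SHADER or GL_FRAGMENT_SHADER) from source text
    public static int compile(int type, String source)
    {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, source);
        GLES20.glCompileShader(shader);
        return shader;
    }

    // link the two stages into a program the renderer can later glUseProgram()
    public static int link(String vertexSource, String fragmentSource)
    {
        int program = GLES20.glCreateProgram();
        GLES20.glAttachShader(program, compile(GLES20.GL_VERTEX_SHADER, vertexSource));
        GLES20.glAttachShader(program, compile(GLES20.GL_FRAGMENT_SHADER, fragmentSource));
        GLES20.glLinkProgram(program);
        return program;
    }
}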
.....and now that that mess is solved (been confused about shaders since mid-development of my CCMaze project), I get to jump on some Java code and make some dynamic particles. If everything goes as planned, I could pump 10K particles into the world, each with their own color and alpha value, and the phone would run them all without breaking a sweat.