So now my fragment shader says "precision mediump float;" on its first line... but technically, this isn't "running code" because it's outside of the "void main()" function... so my fragment shader still has only 1 line of code and is still valid :-)
A new kind of "point"
After checking out the GLSL tutorials a bit more, I found a better way to draw the particles.
Instead of using the default GL_TRIANGLES, which draws an independent triangle for every 3 vertices (so I'd need 2 triangles to make a particle), I could use GL_POINTS instead. This makes every vertex a particle (square-shaped by default). The only cost of the switch was that I had to change... a good large section of my vertex shader code.
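To put numbers on that difference, here's a small Java sketch (the class and method names are mine, not from the actual renderer): a quad built from two triangles costs 6 vertices per particle, while GL_POINTS costs exactly 1.

```java
// Sketch of the vertex-count difference between the two draw modes.
// Names here are hypothetical illustrations, not project code.
public class DrawModeSketch {
    static final int VERTS_PER_QUAD = 6;   // GL_TRIANGLES: 2 triangles * 3 vertices
    static final int VERTS_PER_POINT = 1;  // GL_POINTS: 1 vertex per particle

    static int vertexCount(int particles, boolean usePoints) {
        return particles * (usePoints ? VERTS_PER_POINT : VERTS_PER_QUAD);
    }

    public static void main(String[] args) {
        // 1000 particles: 6000 vertices as triangle quads, 1000 as points.
        System.out.println(vertexCount(1000, false)); // 6000
        System.out.println(vertexCount(1000, true));  // 1000
        // In the real renderer, the draw call just switches its mode argument:
        // GLES20.glDrawArrays(GLES20.GL_POINTS, 0, particleCount);
    }
}
```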
new point = new shader
In case you aren't able to read GLSL code, just check out the comment lines. The biggest change, besides a re-ordering of the lines, was the inclusion of "gl_PointSize". This is an OpenGL variable that is used when drawing with GL_POINTS and defines how large to draw the vertex... I don't know much beyond that.

uniform mat4 uMVPMatrix;// model view projection matrix
uniform vec4 uCamPos;// where the camera is (x,y,z)
uniform float uPointSize;// base of how big to draw the particle
uniform vec4 ambiantColor;// "air" color applied to everything
uniform bool fogEnabled;
uniform float fogStart;
uniform float fogEnd;
uniform vec3 fogColor;// color applied after fog start
attribute vec4 aPosition;// vertex x,y,z
attribute vec4 aColor;// rgba
varying vec4 vertexColor;// color of the vertex point
const vec4 WHITE = vec4(1,1,1,1);
void main()
{
// normalize colors given in a 0-255 range down to 0.0-1.0
vec4 nColor = aColor;
if(nColor.x > 1.0)// red
nColor.x /= 255.0;
if(nColor.y > 1.0)// green
nColor.y /= 255.0;
if(nColor.z > 1.0)// blue
nColor.z /= 255.0;
if(nColor.w > 1.0)// alpha
nColor.w /= 255.0;
// make the color
vertexColor = vec4(min(WHITE, ambiantColor).xyz, nColor.w);
vertexColor.xyz *= nColor.xyz;
// make fog
if (fogEnabled)
{
vec4 vertexPos = uCamPos * aPosition;
float fogWeight;// how heavy the fog color will be applied
vec3 fogVertexColor;// color from the fog
fogWeight = clamp((-vertexPos.z - fogStart) / (fogEnd - fogStart), 0.0, 1.0);
fogVertexColor = fogColor * fogWeight;
// apply the fog
if (fogWeight > -0.9)
vertexColor.xyz = (1.0-fogWeight) * vertexColor.xyz + fogVertexColor;
}
gl_Position = uMVPMatrix * aPosition;
vec3 cp = vec3(uCamPos);
cp.x *= -1.0;
float pdist = length(cp - aPosition.xyz);// figures out how far the particle is from the camera
// then adjusts the vertex's drawn size
gl_PointSize = uPointSize / sqrt(3.0 * pow(pdist, 3.0));
}
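The fog and point-size formulas in that shader can be checked on the CPU side. Here's a Java sketch of the same math (the class and method names are mine, invented for illustration): the fog weight ramps linearly from 0 at fogStart to 1 at fogEnd, and the point size shrinks with distance from the camera.

```java
// CPU-side re-statement of the shader's fog and point-size formulas,
// so they can be sanity-checked outside of OpenGL. Names are hypothetical.
public class ShaderMath {
    static float clamp(float v, float lo, float hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    // Linear fog factor: 0 before fogStart, 1 past fogEnd, ramp in between.
    // viewZ is the view-space z of the vertex (negative in front of the camera).
    static float fogWeight(float viewZ, float fogStart, float fogEnd) {
        return clamp((-viewZ - fogStart) / (fogEnd - fogStart), 0.0f, 1.0f);
    }

    // Point size falls off with distance, mirroring the gl_PointSize line.
    static float pointSize(float basePointSize, float dist) {
        return (float) (basePointSize / Math.sqrt(3.0 * Math.pow(dist, 3.0)));
    }

    public static void main(String[] args) {
        // A vertex 10 units in front, fog from 5 to 25: weight = (10-5)/20 = 0.25
        System.out.println(fogWeight(-10.0f, 5.0f, 25.0f)); // 0.25
        // Base size 100 at distance 4: 100 / sqrt(3 * 64) ≈ 7.22
        System.out.println(pointSize(100.0f, 4.0f));
    }
}
```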
COMPLETED UPDATES
I was finally able to update all my code from jPCT-AE to Rajawali. This includes a complete re-modeling of how the particles are created. When the user touches down on the screen, it creates a ParticleSystem (only if one hasn't been made for this point already).
When the ParticleSystem is created, it makes an array of ParticleBuckets, which hold the particle geometry and data about each particle. The bucket also controls the geometry and data array sizes, calls update on all the particles, and is the only thing connecting the particle data to the data that gets sent to the renderer... Why this important system is called "ParticleBucket", I'm still not sure. It just sounded good at the time, and I haven't found a better name yet.
When the ParticleBucket is created, it makes a ParticleContainer (the holder of the particle geometry data) and an array of Particles (each one containing information about one particle: its position, velocity, color, and life).
To speed up making particles, each Particle starts with blank data and requires a call to Create() to initialize the particle's data.
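That "allocate blank, initialize later" idea is a pooling pattern. Here's a minimal Java sketch of it; the field names and the lowercase create() are my assumptions, not the actual Particle class from the project.

```java
// Hypothetical pooled particle: allocated blank up front, then filled in
// via create() so no object allocation happens at spawn time.
public class Particle {
    float x, y, z;        // position
    float vx, vy, vz;     // velocity
    float r, g, b, a;     // color
    float life;           // remaining lifetime
    boolean alive;

    // Reuse the blank slot instead of allocating a new object.
    void create(float px, float py, float pz, float lifetime) {
        x = px; y = py; z = pz;
        vx = vy = vz = 0f;
        r = g = b = a = 1f;
        life = lifetime;
        alive = true;
    }

    public static void main(String[] args) {
        Particle[] pool = new Particle[4];
        for (int i = 0; i < pool.length; i++) pool[i] = new Particle(); // blank
        pool[0].create(1f, 2f, 3f, 5f);   // only slot 0 is initialized
        System.out.println(pool[0].alive); // true
        System.out.println(pool[1].alive); // false (still blank)
    }
}
```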
ParticleContainer has arrays of vertices, normals, colors, and indices whose lengths match the maximum number of particles each ParticleBucket can hold. If the ParticleContainer can hold a maximum of 200 particles and only 150 are requested, the container still allocates space for all 200.
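Roughly, that fixed-maximum allocation looks like the sketch below. The class name and per-element float counts are assumptions for illustration (3 floats per vertex position, 4 per RGBA color), not the actual ParticleContainer.

```java
// Sketch of a container that always allocates for the maximum particle count,
// regardless of how many particles are actually requested. Names are hypothetical.
public class ParticleContainerSketch {
    static final int FLOATS_PER_VERTEX = 3; // x, y, z
    static final int FLOATS_PER_COLOR = 4;  // r, g, b, a

    final float[] vertices;
    final float[] colors;
    final int maxParticles;

    ParticleContainerSketch(int maxParticles) {
        this.maxParticles = maxParticles;
        // Allocated once at max size: asking for 150 of 200 still gets 200 slots.
        this.vertices = new float[maxParticles * FLOATS_PER_VERTEX];
        this.colors = new float[maxParticles * FLOATS_PER_COLOR];
    }

    public static void main(String[] args) {
        ParticleContainerSketch c = new ParticleContainerSketch(200);
        System.out.println(c.vertices.length); // 600
        System.out.println(c.colors.length);   // 800
    }
}
```

Trading a bit of memory for a fixed allocation like this avoids resizing (and re-uploading) the GPU-bound arrays every time the live particle count changes.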
And then the problem hits
After all the code updates and shader coding... now I've reached a point where the program always crashes in the same place. While creating the ParticleContainer, after the initial data is created, setData() is called in its base class (BaseObject3D), which creates the buffer arrays that are passed to the graphics card.
Every time now, it crashes on the same line, "BufferUtil.copy(vertices, mVertices, vertices.length, 0);", which is a call to a class in Rajawali that copies the data (stored in vertices) into a FloatBuffer (mVertices).
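For context, here's what copying a float[] into a FloatBuffer looks like in plain Java. This is a sketch of the general technique, not Rajawali's BufferUtil; and the capacity-mismatch failure noted in the comments is only one way such a copy can crash, not a confirmed diagnosis of this bug.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class BufferCopySketch {
    // Allocate a direct, native-order FloatBuffer sized for the data,
    // which is what OpenGL ES uploads expect.
    static FloatBuffer makeBuffer(int floatCount) {
        return ByteBuffer.allocateDirect(floatCount * 4) // 4 bytes per float
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
    }

    public static void main(String[] args) {
        float[] vertices = {0f, 1f, 2f, 3f, 4f, 5f};
        FloatBuffer mVertices = makeBuffer(vertices.length);
        mVertices.put(vertices, 0, vertices.length); // same job as the copy call
        mVertices.position(0);
        System.out.println(mVertices.get(5)); // 5.0
        // A buffer allocated too small (e.g. makeBuffer(vertices.length - 1))
        // would throw BufferOverflowException on the same put() call.
    }
}
```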
I even tried to use data which was in one of the Rajawali tutorials, and it still crashes.
So I wrote up a report of what my program does and sent it off to the Rajawali developer, along with the source code for my program. Until I get a reply back from him or figure out the problem myself, the project is at a standstill. I could go back to jPCT, since I have a backup, but then I'd be stuck with the same low-performance issues.