
OpenGL-ES Texture Mapping for iPhone / Oolong / PowerVR



By paul ~ June 12th, 2009. Filed under: Resources.

Sorry it’s been so long, but I’ve been busy learning and applying texture mapping for the upcoming iPunt update.  As the old saw goes, "The more I learn, the more I realize I don’t know!"  The learning process has been pretty fun, and I hope that what I share here helps keep it fun for someone else.

I want to apologize in advance for the lack of polish in this post.  It’s kind-of a hodge-podge of information, but it’s been in the womb too long.  Time to get it out, even if it’s rough!

I’ve had a lot of trouble getting my head around texturing.  I know what I want to do, and I understand the "how to" do it for OpenGL, but there are a bunch of issues that make translating the theory to practice on the iPhone challenging:

  • The iPhone uses OpenGL-ES 1.x, which is not the same as desktop OpenGL when it comes to texturing.  (News flash: the iPhone 3G S will support OpenGL-ES 2.0.)
  • The PowerVR MBX chipset doesn’t support all of the OpenGL-ES functionality (particularly the stencil buffer.)  It will be interesting to see what the new SGX chipset provides.
  • If you’re using Blender for your modeling, as I’m trying to do, making the translation from a texturing strategy in Blender to something that will work on the device may not be obvious.

I found myself doing a lot of googling and reading books, then trying things out only to find that the methods provided required an API call or technique that doesn’t work on the platform.  So, I’ve compiled a little list of resources and hints that will perhaps make the going easier for those who are also trying to climb the mountain.

The Basics of Texturing

There are some great tutorials on OpenGL texturing out there.  Here are a few to get you started, but there are many more that are worth mentioning:

These and others will get you the basics.  Please don’t waste time writing loaders for different graphics file types, generating textures and so on.  Oolong and PowerVR have a solution for that which is much quicker and better.  Simply load your texture bitmap (in whatever format is convenient) into the PVRTexTool and generate a PVRTC (.pvr) compressed texture file.  A few easy calls and the .pvr file is loaded into your app and ready to use.  I’ll provide a more complete recipe for this a little later in this post.
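For the curious, here is a rough sketch of what those calls boil down to at the GL level.  This is not the Oolong loader itself, just the raw OpenGL-ES path, and the function name uploadPVRTC4bpp is mine; it assumes you have already parsed the .pvr header yourself, since in practice the SDK’s texture loading helper does that parsing and makes this call for you.

// Hedged sketch: uploading a PVRTC 4bpp payload with raw OpenGL-ES 1.x calls.
// The caller passes the compressed payload; the Oolong/PowerVR texture loading
// helper normally parses the .pvr header and does all of this for you.
GLuint uploadPVRTC4bpp(const void* pvrData, GLsizei dataSize, GLsizei width, GLsizei height) {
  GLuint texName = 0;
  glGenTextures(1, &texName);
  glBindTexture(GL_TEXTURE_2D, texName);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                         GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG,  // or the 2bpp / RGBA variants
                         width, height, 0, dataSize, pvrData);
  // If you exported mipmaps, each level gets its own glCompressedTexImage2D call.
  return texName;
}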

One of the things that I struggled with in a big way was understanding and generating texture coordinates.  This is something that all of the tutorials give short shrift.  Most of them provide a static array of 0s and 1s and leave it at that.  Those that go a bit deeper (the Red Book and Super Bible go into more detail) rely heavily on glTexGen().  Well, guess what: there is no glTexGen() in OpenGL-ES, and I have found very little help in working around that.  The one bright light is this article on "The Mathematics of glTexGen()" from the OpenGL wiki.  With this information, you can write your own procedural texture coordinate generation code.  One of the best explanations of what each of these generation "modes" does is found here, from the OpenGL Super Bible.

More than likely, the main reason that these resources don’t help much with procedural texture mapping is that most texture mapping is performed in a modeling tool.  The funny thing is that, with all my searching, it took a while before I discovered how to do this in Blender.  There is actually a very good, comprehensive tutorial in the Blender manual that explains it all: Blender UV Texture Mapping.  The relevant documentation is several web pages long, so stick with it to the end.  Note that in the currently available iPhones (the PowerVR MBX chip) there are only two texture units, so don’t plan on deep multi-texturing for this device.
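If you would rather check that limit at runtime than hard-code an assumption about the chipset, it can be queried directly; a minimal sketch:

// Ask the GL ES 1.x driver how many fixed-function texture units it exposes.
// The MBX in the current iPhones reports 2.
GLint maxTextureUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &maxTextureUnits);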

The Blender->PVR->Oolong Procedure At A Glance

Here’s a high-level overview of the entire procedure at a glance.  Note that I have left out many details that are covered in the Blender documentation, including only the steps that are structural or for which I have additional details.

In Blender:

  1. Create your mesh.
  2. Select the Material window configuration which puts the UV/Image Editor and 3D View up simultaneously (see image below.)
  3. Put the 3D window in texture mapped mode so that you can see the effects of texturing in real-time.
  4. Create or select a material for the object or faces.
  5. Create a UVTex mapping for the object or faces.
  6. Unwrap the mesh.  This will probably take some experimentation with the many unwrapping algorithms Blender provides, along with some manual editing of the faces in the UV window.
  7. Export the face layout to a .tga file using the Blender script UVs->Scripts->Save UV Face Layout.  Be sure to set the size to the final size of the texture (for a 256×256 texture, set the size to 256), and be sure the size is a power of 2.

In GIMP or the image editing / paint tool of your choice…

  1. Open the .tga file and copy the layout to a semi-transparent, top layer (overlay).
  2. "Color" the texture on layers underneath it.
  3. Turn off the overlay and export to .png format.

In Blender

  1. Load the texture image and assign to the mesh faces to which it applies.
  2. Use texture paint to touch up any areas where appropriate.  (Be sure to load the changed version into your image editor if you go back!)
  3. Adjust vertices and mapping as necessary and iterate.

For the most part, the result on the device will look the same as in Blender, so you can just stay in Blender and GIMP until you have it looking right.  I noticed that I occasionally had to flip normals on some faces in Blender, particularly when I manually added faces or reduced the face count during optimization.  Faces with the normal facing in the wrong direction will look transparent in Blender.

Note that the mapping is extremely flexible.  It is not necessary for faces in the UV window to have proportional area to the actual faces on the mesh.  In fact, it is a monumental waste of pixels to map faces that are a solid color or have low resolution data to large UV areas.  Devote most of the area in your texture to the high resolution content and put low resolution faces in smaller areas.  Also, remember that it’s OK if UV faces overlap as long as the result is what you want.  I sometimes just create a color splotch and put all the faces that are that color higgledy-piggledy in the splotch.

When the object looks the way you want, use the PVRTexTool to convert the texture image to .pvr format…

  1. Open the texture image produced above in the PVRTexTool (comes with the PowerVR SDK.)
  2. Generate all MipMaps.  They should automatically show up.  If they don’t, your texture may not be square and a power of 2.
  3. Use Save As to save the texture as a .pvr file.
  4. In the encoding dialog that pops up, select 2bpp or 4bpp as needed, turn YFlip OFF, and export all mipmaps.  Many other encodings are supported, but I have found that the PVRTC 2bpp and 4bpp encodings work very well with the Oolong kit.  Use 4bpp where you need better image quality, 2bpp where the quality is not as important.  Experiment to get what you need.

You can now load the POD and texture and apply the texturing during rendering in Oolong, and it should look the same as it did in Blender.  I used the Shadows example to get me started.
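In case it helps, here is roughly what the render-time side looks like with the GL-ES 1.x fixed-function calls.  This is only a sketch under my own naming: pMesh is an SPODMesh* and texName is the texture created when the .pvr was loaded; it also assumes a non-interleaved POD with float UVs (check psUVW[0].eType and use the pInterleaved offset otherwise).

// Hedged sketch of per-mesh texture setup during rendering (GL ES 1.x fixed function).
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texName);

glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(pMesh->psUVW[0].n, GL_FLOAT,
                  pMesh->psUVW[0].nStride,
                  pMesh->psUVW[0].pData);  // non-interleaved; interleaved PODs use pInterleaved + offset

// ... set up GL_VERTEX_ARRAY / GL_NORMAL_ARRAY and draw as usual ...

glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);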

Here are some examples from iPunt:

Apache helicopter being textured in Blender.

Textured helicopter as rendered on device.

Procedural Texturing

Once I figured out how to use Blender to map an image to an object by "unwrapping" it, I didn’t have as much need for procedural texturing, but for certain things (like cube mapping and projections) procedural texturing is still useful.  Here’s what I learned.

One of the conceptual things that I struggled with is that I kept trying to think of wrapping my texture around an object.  The way these algorithms work is backwards from that, and much easier.  They work backwards from the point on the object to be textured to the pixel in the texture bitmap to be used.  I don’t know if that helps anyone else, but it was an epiphany for me!

One little trick that you need to be aware of is that you’ll need to "adjust" your texture coordinates after the chosen glTexGen() algorithm has generated them.  The algorithm will generate values in some range, such as -n .. +n and you’ll want to re-map that range to 0..1 or 0..r (for repeated textures.)  Otherwise, you’ll get garbage.  I call this "re-boxing" the coordinates instead of "normalization" since normalization tends to refer to reducing vectors to unit-length in OpenGL.  Here are some code fragments that do all of the above in an Oolong environment:

Here’s an implementation of glTexGen() OBJECT_LINEAR:

// Note that these algorithms come from:
// http://www.opengl.org/wiki/Mathematics_of_glTexGen

void texGenObjectLinear(GLfloat* buf, unsigned int stride, const btVector3 &sPlane, const btVector3 &tPlane) {
  // Loop through the vertices and operate on each one
  unsigned char* p = (unsigned char*)buf;

  for (unsigned int i=0; i < numVertices(); i++, p+=stride) {
    buf = (GLfloat*)p;
    const btVector3 vtx(vertex(i)[0], vertex(i)[1], vertex(i)[2]);
    buf[0] = vtx.dot(sPlane);
    buf[1] = vtx.dot(tPlane);
  }
} // texGenObjectLinear()

As you can see, this is just a direct translation of the pseudo-code in the Mathematics of glTexGen() article, using the Bullet vector dot product.

Here is the implementation of the vertex() method for SPOD meshes:

virtual GLfloat* vertex(unsigned int i) {
  // nStride is a byte stride in both cases; for interleaved PODs, pData holds the
  // byte offset of the vertex attribute within pInterleaved.
  if (_pMesh->pInterleaved != NULL)
    return (GLfloat*)(_pMesh->pInterleaved+(size_t)_pMesh->sVertex.pData+i*_pMesh->sVertex.nStride);
  else
    return (GLfloat*)((unsigned char*)_pMesh->sVertex.pData+i*_pMesh->sVertex.nStride);
} // vertex()
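The texture-coordinate loop above also calls a numVertices() helper that I haven’t shown.  For a POD mesh it is just a thin wrapper around the vertex count in the mesh structure; a sketch, assuming the usual SPODMesh field:

virtual unsigned int numVertices() {
  // SPODMesh carries the vertex count directly; nothing to compute.
  return _pMesh->nNumVertex;
} // numVertices()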

If you’ve loaded your meshes from a POD file, you can just call this as below (note that _pMesh is an SPODMesh* and texGenObjectLinearInPlace() will compute the new texcoords in-place, over-writing the texture coords in the SPOD data structure.)

GLfloat* uvArray() {
  if (_pMesh->pInterleaved != NULL)
    return (GLfloat*)(_pMesh->pInterleaved+(size_t)_pMesh->psUVW[0].pData);
  else
    return (GLfloat*)_pMesh->psUVW[0].pData;
} // uvArray()

// Recompute tex coords in-place
void texGenObjectLinearInPlace(const btVector3 &sPlane, const btVector3 &tPlane) {
  this->texGenObjectLinear(uvArray(), _pMesh->psUVW[0].nStride, sPlane, tPlane);
} // texGenObjectLinearInPlace()

I ended up writing a general-purpose routine to "re-box" vertices.  Not pretty, and my brain tells me that it could be more elegant, but I haven’t figured out how yet.  Perhaps after I’m rested up…

void BMWDrawableObject::reBoxVertices(GLfloat* buf, unsigned int stride, unsigned int numVtx,
       unsigned int coordsPerVert, const btVector4 &min, const btVector4 &max) {
  // Calculate (in-place) the new vertices based on the min and max vertices passed in.
  GLfloat* pStart = buf;
  btVector4 minAccum(FLT_MAX, FLT_MAX, FLT_MAX, FLT_MAX);
  btVector4 maxAccum(-FLT_MAX, -FLT_MAX, -FLT_MAX, -FLT_MAX);  // FLT_MIN is the smallest positive float, not the most negative

  // Compute the current nD bounding box
  unsigned char* p = (unsigned char*)pStart;
  for (unsigned int i=0; i < numVtx; i++, p+=stride) {
    buf = (GLfloat*)p;
    for (unsigned int j=0; j < coordsPerVert; j++) {
      minAccum[j] = (buf[j] < minAccum[j]) ? buf[j] : minAccum[j];
      maxAccum[j] = (buf[j] > maxAccum[j]) ? buf[j] : maxAccum[j];
    }
  }

  // Compute scale and offset to be used on each vertex.  Guard against a
  // zero-width dimension so a constant coordinate doesn't divide by zero.
  btVector4 offset(min[0]-minAccum[0], min[1]-minAccum[1], min[2]-minAccum[2], min[3]-minAccum[3]);
  btVector4 oD(maxAccum[0]-minAccum[0], maxAccum[1]-minAccum[1], maxAccum[2]-minAccum[2], maxAccum[3]-minAccum[3]);
  btVector4 nD(max[0]-min[0], max[1]-min[1], max[2]-min[2], max[3]-min[3]);
  btVector4 scale(1.0f, 1.0f, 1.0f, 1.0f);
  for (unsigned int j=0; j < coordsPerVert; j++)
    scale[j] = (oD[j] != 0.0f) ? nD[j]/oD[j] : 1.0f;

  // Loop through and set the values.
  p = (unsigned char*)pStart;
  for (unsigned int i=0; i < numVtx; i++, p+=stride) {
    buf = (GLfloat*)p;
    for (unsigned int j=0; j < coordsPerVert; j++)
      buf[j] = (buf[j]+offset[j])*scale[j];
  }
} // reBoxVertices()
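Written out, the remap that reBoxVertices() applies to each coordinate is just an offset followed by a scale.  For a target range of 0..r it works out to s' = (s - min_s) * r / (max_s - min_s), where min_s and max_s are the observed minimum and maximum of that coordinate.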

Putting it all together, imagine that I want to map a texture onto a hemisphere.  The texture is projected from the XY plane.  Here’s how I would call the above routines to accomplish that:

    // Generate new texcoords based on OBJECT_LINEAR algorithm
    // We are going to put the data right back into the Mesh's UV array.
    // (pDef is a pointer to an object with the SPODMesh* _pMesh.)
    pDef->texGenObjectLinearInPlace(btVector3(1,0,0), btVector3(0,1,0));
    // Normalize the UV coords such that the texture wraps all the way around.
    reBoxVertices(pDef->uvArray(), pDef->mesh()->psUVW[0].nStride, pDef->numVertices(),
                                   pDef->mesh()->psUVW[0].n, btVector4(0,0,0,0), btVector4(1,1,1,1));
    // Re-load the VBO since we've changed the data
    pDef->loadVBO();

Additional Texturing Resources

Cube Mapping

http://developer.nvidia.com/object/cube_map_ogl_tutorial.html (Uses OpenGL extensions not available to us, but explains the process.)

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=247684

Spherical Mapping

http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=23

Bump Mapping Tutorial

http://www.paulsprojects.net/tutorials/simplebump/simplebump.html

Sky Boxes in Blender

http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Build_a_skybox

Texture Sources

http://www.cgtextures.com/

Responses to OpenGL-ES Texture Mapping for iPhone / Oolong / PowerVR

  1. gregj

    and what about the mesh? how do you export that to something Oolong can load and use in an iPhone app?

    To be fair, I would love to see a full tutorial on how to create a scene and a few (possibly animated) objects in Blender, and then create an iPhone app using them. If you can describe that to me, even with little detail, then once I get around to doing it and finish it successfully, I would love to publish it.

  2. paul

    Exporting meshes from Blender to Oolong is covered here:

    http://www.blumtnwerx.com/blog/2009/03/blender-to-pod-for-oolong/

    paul

  3. accessoires

    I got into developing iPhone apps not long ago, and oddly enough the current iPhone project was actually born on the Windows platform. Game engine code is mostly designed to be platform neutral from the onset, in the hope of porting the project to other embedded devices. If you are starting out and want to approach your projects in a similar way, you may want to download the OpenGL ES 1.x and 2.0 SDK/emulators for both Windows and Linux from ImgTec.
