The OpenGL Tutorial — Part IV

Porting to Android

Roger Boesch
8 min read · Sep 18, 2024
Pong running in Android Studio Emulator

Update 2024: Over six years have passed since I originally wrote this article. When you try out the Android example, you will see that it no longer runs with the newest APIs and the latest version of Android Studio. I have therefore created an update using GameActivity and, at the same time, OpenGL ES 3. You can find the updated code here, along with the article that describes the refactoring. The nice part is that it needs no changes outside the boilerplate code. This is expected, and it is the reason why I have separated platform and game-related code from the beginning. But now let’s continue with the original article, which I still recommend reading to understand the basics …

Original Article 2018

The first three parts of the tutorial were related to Apple platforms (macOS and iOS). Now we go multi-platform and create an Android version!

Important: This part already uses OpenGL ES 2.0, with a programmable graphics pipeline and shaders.

Android NDK

“Normal” Android apps are written in Java or Kotlin and are based on the Java SDK. Of course, we want to use C++ for this tutorial, so we need the Android NDK (Native Development Kit). For more information on installing and using it, see the links below.

Boilerplate code for Android

On macOS and iOS, we created the boilerplate code in the platform’s native language, Objective-C. (Swift would also be possible, of course.)

On Android, this part is written in Java. It’s also boilerplate code and includes an Activity and the OpenGL context.

It’s also responsible for setting up JNI, which allows calling C/C++ code from Java. But let’s take it step by step.

GLActivity.java

The activity class is where it all starts. It mainly creates three views and lays them out on the screen:

GL2JNIView mView;
GLTouchView mLeftView;
GLTouchView mRightView;

The first is the OpenGL view; the other two views form the touch-based interface, similar to the iOS implementation (see GLTouchView.java).

@Override
protected void onCreate(Bundle icicle) {
    super.onCreate(icicle);

    setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);

    DisplayMetrics dm = new DisplayMetrics();
    this.getWindow().getWindowManager().getDefaultDisplay().getMetrics(dm);
    int width = dm.widthPixels;
    int height = dm.heightPixels;

    RelativeLayout layout = new RelativeLayout(this);
    layout.setLayoutParams(new RelativeLayout.LayoutParams(width, height));
    setContentView(layout);

    mView = new GL2JNIView(getApplication());
    mView.setLayoutParams(new RelativeLayout.LayoutParams(width, height));
    layout.addView(mView);

    mLeftView = new GLTouchView(this);
    mLeftView.tag = 1;
    mLeftView.setLayoutParams(new RelativeLayout.LayoutParams(width/2, height));
    mLeftView.setBackgroundColor(GLActivity.TRANSPARENT);
    layout.addView(mLeftView);

    RelativeLayout.LayoutParams rightLayoutParams = new RelativeLayout.LayoutParams(width/2, height);
    rightLayoutParams.leftMargin = width/2;
    mRightView = new GLTouchView(this);
    mRightView.tag = 2;
    mRightView.setLayoutParams(rightLayoutParams);
    mRightView.setBackgroundColor(GLActivity.TRANSPARENT);
    layout.addView(mRightView);
}

There is not much magic; I use a relative layout and first place the OpenGL view in it, then the two touch views (with a transparent background color), one on the left side and one on the right.

GLTouchView.java

This is the class responsible for the user input and is quite easy to understand:

@Override
public boolean onTouchEvent(MotionEvent event) {
    mX = event.getX();
    mY = event.getY();

    switch (event.getAction()) {
        case MotionEvent.ACTION_DOWN:
            GL2JNILib.touch(tag, 1, (int)mX, (int)mY);
            break;
        case MotionEvent.ACTION_MOVE:
            break;
        case MotionEvent.ACTION_UP:
            GL2JNILib.touch(tag, 0, (int)mX, (int)mY);
            break;
        default:
            return false;
    }

    return true;
}

In this custom view (derived from android.view.View), we override onTouchEvent() and send the event information straight away (via JNI, see GL2JNILib.java) to the native part in RBRender.cpp.

The tag property differentiates between the left and right touch views (as used in this example), and the second parameter of GL2JNILib.touch() is either 1 (touch down) or 0 (touch up). A straightforward approach, right?

GL2JNIView.java

This class is taken 1:1 from the NDK example *hello-gl2*. It is the view responsible for setting up a valid OpenGL ES 2.0 context by deriving a view class from GLSurfaceView, and I use it as is!

Its renderer is also what drives the native side: in the hello-gl2 sample, onSurfaceChanged() calls GL2JNILib.init() and onDrawFrame() calls GL2JNILib.step() once per frame.

GL2JNILib.java

GL2JNILib is the bridge class between the Java part and the C++ code.

public class GL2JNILib {
    static {
        System.loadLibrary("gl2jni");
    }

    /**
     * @param width the current view width
     * @param height the current view height
     */
    public static native void init(int width, int height);
    public static native void step();

    /**
     * @param tag ID of the touch view (useful when there are multiple)
     * @param down 1 if pressed, 0 if released
     * @param x x position of touch
     * @param y y position of touch
     */
    public static native void touch(int tag, int down, int x, int y);
}

As you can see, it’s just a class with static methods that declare the native methods and make them accessible from Java. The first method, init(), is called after the OpenGL context is initialized, and the step() method is called once per frame, similar to renderLoop() in the iOS version.

While both of these calls are already part of the NDK sample, I’ve added a touch() method, which delivers the touch information to the native functions implemented in RBRender.cpp.

RBRender.cpp

So far, so good, and not so different from the iOS side, except that it’s implemented in Java, which we now leave behind… The entry point, or middle layer, between the Java boilerplate code and our game engine is RBRender.cpp. It’s not C++ but C code. Calling C++ classes directly would require another wrapping layer, which I avoided here. So what’s done in RBRender.cpp? Mainly three things:

  1. JNI declaration
  2. Create the shaders
  3. Call the methods in *Game.cpp* (our engine)

JNI declaration

extern "C" {
JNIEXPORT void JNICALL Java_com_android_gl2jni_GL2JNILib_init(JNIEnv * env, jobject obj, jint width, jint height);
JNIEXPORT void JNICALL Java_com_android_gl2jni_GL2JNILib_step(JNIEnv * env, jobject obj);
JNIEXPORT void JNICALL Java_com_android_gl2jni_GL2JNILib_touch(JNIEnv * env, jobject obj, jint tag, jint down, jint x, jint y);
};

JNIEXPORT void JNICALL Java_com_android_gl2jni_GL2JNILib_init(JNIEnv * env, jobject obj, jint width, jint height) {
setupGL(width, height);
}

JNIEXPORT void JNICALL Java_com_android_gl2jni_GL2JNILib_step(JNIEnv * env, jobject obj) {
renderFrame();
}

JNIEXPORT void JNICALL Java_com_android_gl2jni_GL2JNILib_touch(JNIEnv * env, jobject obj, jint tag, jint down, jint x, jint y) {
userInput(tag, down, x, y);
}

Besides the somewhat curious declarations, this JNI definition is needed to make the calls from Java possible. The function names follow a fixed pattern: Java_, followed by the package, class, and method name, so Java_com_android_gl2jni_GL2JNILib_init() is the native counterpart of GL2JNILib.init(). If you want to know more about the specific syntax, see the JNI-related link below. The important thing to understand is that we are now in C code and can implement all the rest in C/C++.

Create the shader

As you can see, we need a bit more code before we finally get down to RBDrawRect(). This is because we call in from Java and, more importantly, because we use OpenGL ES 2.0.

This brings one significant difference: We no longer use fixed-function calls on the CPU to define the projection matrix, color, etc., but a shader program (code that runs on the GPU).

Shaders are written in a C-like language, and a shader program consists of a:

  • Vertex shader (handles the processing of individual vertices) and a
  • Fragment shader (processes a fragment generated by rasterization into a set of colors).

(For more on shader programming, see the links section below)

But let’s look at the code (we define the shader source in code and pass it to the GPU later):

attribute vec4 vPosition;
uniform float fWidth;
uniform float fHeight;

void main() {
    mat4 projectionMatrix = mat4(2.0/fWidth, 0.0, 0.0, -1.0,
                                 0.0, 2.0/fHeight, 0.0, -1.0,
                                 0.0, 0.0, -1.0, 0.0,
                                 0.0, 0.0, 0.0, 1.0);
    gl_Position = vPosition;
    gl_Position *= projectionMatrix;
}

The first three lines declare the parameters we pass in later from our code. Then, in the main() function, I create the projection matrix. This essentially replaces the glMatrixMode(GL_PROJECTION) setup we used with OpenGL ES 1.0.

Next, we take the vertex we passed in and multiply it by the projection matrix. This might look complicated at first, but it gives a lot of flexibility and also a performance boost.
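Concretely, with game coordinates running from 0 to fWidth and 0 to fHeight, the matrix above boils down to two simple mappings:

x_clip = 2 * x / fWidth - 1
y_clip = 2 * y / fHeight - 1

So (0, 0) ends up at (-1, -1) and (fWidth, fHeight) at (1, 1) in OpenGL’s clip space. (One subtlety: the mat4() constructor fills the matrix column by column, which is why this works together with the vector-times-matrix order of gl_Position *= projectionMatrix.)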

But that’s not all yet. We also need a fragment shader, which defines the color of each fragment (pixel) instead of processing vertices.

In our case, it’s simple: just a black color. This will change later and, more importantly, become more dynamic.

But for now, that’s just fine.

precision mediump float; // required in OpenGL ES 2.0 fragment shaders

void main() {
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // opaque black
}

The next two functions are helper methods that compile the shaders and load the program onto the GPU:

GLuint loadShader(GLenum shaderType, const char* pSource);
GLuint createProgram(const char* pVertexSource, const char* pFragmentSource);
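In case you don’t have the hello-gl2 sample at hand, loadShader() is essentially the standard compile-and-check routine. Here is a condensed sketch of it (the real sample additionally prints the shader info log when compilation fails):

GLuint loadShader(GLenum shaderType, const char* pSource) {
    // shaderType is GL_VERTEX_SHADER or GL_FRAGMENT_SHADER
    GLuint shader = glCreateShader(shaderType);
    if (shader) {
        glShaderSource(shader, 1, &pSource, NULL);
        glCompileShader(shader);

        GLint compiled = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        if (!compiled) {
            // Compilation failed: clean up and return 0
            glDeleteShader(shader);
            shader = 0;
        }
    }
    return shader;
}

createProgram() does the same dance at the program level: it compiles both shaders via loadShader(), attaches them with glAttachShader(), links with glLinkProgram(), and checks GL_LINK_STATUS.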

Let me focus on the next functions, which are called directly through the JNI exports we declared earlier.

This is the last layer of our boilerplate code. After this, we are in our platform-independent game engine.

bool setupGL(int w, int h) {
    gProgram = createProgram(gVertexShader, gFragmentShader);
    if (!gProgram) {
        return false;
    }

    gPosition = glGetAttribLocation(gProgram, "vPosition");
    gWidth = glGetUniformLocation(gProgram, "fWidth");
    gHeight = glGetUniformLocation(gProgram, "fHeight");
    glViewport(0, 0, w, h);

    gGame->OnInit((float)w, (float)h);

    return true;
}

This is the setup code that is called once after the OpenGL context is created. First, we create the shader program (which happens just once); this compiles our shader code and loads it onto the GPU. The following lines fetch handles to the parameters in the shader code so we can use them later. Finally, we call the OnInit() method of the Game class (like we do on iOS and macOS).

Render the frame

As on the other platforms, we have a method that is called once per frame. Because we loaded the shader during setup, we now only need to activate it by calling glUseProgram(gProgram).

Then, we pass the width and height of the game screen to the shader so that the shader can calculate and use the actual projection matrix.

Last but not least, we call the game engine. That’s all :)

Note: As you can see, the frame time is not measured but passed in as a fixed 1/60; we will fix that later.

void renderFrame() {
    glUseProgram(gProgram);

    // Set width and height
    RBVector2 size = gGame->GetGamesSize();
    glUniform1f(gWidth, size.width);
    glUniform1f(gHeight, size.height);

    gGame->OnUpdate(1.0/60.0);
    gGame->OnRender();
}

User input

Last but not least, we need to handle user input. This is done in userInput(). This method converts the arguments passed from GLTouchView.java into the keyboard-based events we use inside the game engine.
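For completeness, here is a minimal sketch of what such a mapping could look like. The KEY_* constants and the Game::OnKey() call are assumptions made for this illustration, not the actual engine API:

// Sketch only: KEY_LEFT/KEY_RIGHT and Game::OnKey() are illustrative
// assumptions, not the real engine interface.
enum { KEY_LEFT = 1, KEY_RIGHT = 2 };

void userInput(int tag, int down, int x, int y) {
    // tag 1 = left touch view, tag 2 = right touch view (see GLActivity.java)
    int key = (tag == 1) ? KEY_LEFT : KEY_RIGHT;

    // down is 1 on touch down and 0 on touch up (see GLTouchView.java);
    // the position (x, y) could additionally be used to pick a direction.
    gGame->OnKey(key, down == 1);
}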

OpenGL calls

Because we already did much of the work when setting up the shader, this part is now much simpler.

To draw a rectangle, we have to implement the following code:

void RBDrawRect(float x, float y, float width, float height, RBColor color) {
    GLfloat vertices[] = {
        x,       y+height, 0.0f, // Upper left
        x+width, y+height, 0.0f, // Upper right
        x,       y,        0.0f, // Lower left
        x+width, y,        0.0f  // Lower right
    };

    glVertexAttribPointer(gPosition, 3, GL_FLOAT, GL_FALSE, 0, vertices);
    glEnableVertexAttribArray(gPosition);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

It is quite similar to what we have done on the iOS side :) Note that the color parameter is not used yet; for now, the fragment shader hard-codes black.
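To make the connection to the engine concrete: in a Pong-style OnRender(), everything on screen reduces to a few of these calls. The names, coordinates, and the RBColor initializer below are made up for illustration:

// Hypothetical render pass; all names and values are illustrative.
RBColor black = { 0.0f, 0.0f, 0.0f, 1.0f };
RBDrawRect(20.0f, paddleLeftY, 10.0f, 80.0f, black);              // left paddle
RBDrawRect(gameWidth - 30.0f, paddleRightY, 10.0f, 80.0f, black); // right paddle
RBDrawRect(ballX, ballY, 10.0f, 10.0f, black);                    // ball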

Summary

At this point, we have a little “game engine” that is truly platform-independent and already runs on macOS, iOS, and Android. It may have needed more boilerplate code on each platform than you expected, but remember one thing: this must be done only once, and now we can create a variety of (2D) games that will run on all three platforms with almost no change.

And as always, the source can be found on GitHub.

Useful links

This article is part of an OpenGL tutorial I wrote six years ago. It’s now again on Medium, and I will eventually add some more chapters, especially on how to create 3D games. The following article is about refactoring the current code and adapting it to the upcoming chapters about 3D.

Check out my newsletter, ‘The Spatial Projects’ on Substack, where I write about Spatial Computing on Apple’s Vision Pro.

Written by Roger Boesch

Software Engineering Manager who has worked for Magic Leap, Microsoft, and NeXT Computer; 8 years of experience in spatial computing.
