The OpenGL Tutorial — Part III
Creating a version for iOS
In the first two parts of this tutorial, we laid the groundwork for creating multiplatform games. So far, we have stayed on macOS, but that changes now. This part is about porting the game to iOS and, in Part IV, to Android.
Porting to iOS
The way we port our “engine” to another platform stays mostly the same:
- Create the boilerplate code for iOS.
- Implement the low-level calls to OpenGL (mainly RBDrawRect()).
- Implement user input (in this case, touch-based input).
Boilerplate code
The boilerplate for iOS consists of three classes:
AppDelegate: Similar to the macOS version, this delegate class opens a window on iOS and assigns a view controller to it.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    self.viewController = [[ViewController alloc] initWithNibName:nil bundle:nil];
    self.window.rootViewController = self.viewController;
    [self.window makeKeyAndVisible];
    return YES;
}
There isn't much to explain here. We just create a window and set a view controller as its root.
ViewController: The view controller is (as the name says) responsible for adding the views and sizing them to match the display. All the “important” things happen in EAGLView.
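For reference, a minimal version of that view controller could look like the sketch below; the exact setup is an assumption based on the rest of this part (the touch views for input are added later).

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    // The OpenGL view fills the whole screen; the TouchViews for
    // user input are added later in this part.
    EAGLView *glView = [[EAGLView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:glView];
}

@end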
EAGLView: On iOS, the easiest way to use OpenGL is to create a custom view (a subclass of UIView) backed by a CAEAGLLayer, a Core Animation layer capable of displaying OpenGL ES content. In addition, we create an EAGLContext, which provides an OpenGL ES rendering context.
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = YES;
        eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
            kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];

        context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
        if (!context || ![EAGLContext setCurrentContext:context]) {
            return nil;
        }

        renderInterval = 1.0 / 60.0;
    }
    return self;
}
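One detail the snippet above relies on: casting self.layer to CAEAGLLayer only works if the view tells UIKit to back it with that layer class, which is done by overriding +layerClass in EAGLView:

// Back this view with a CAEAGLLayer instead of a plain CALayer.
+ (Class)layerClass {
    return [CAEAGLLayer class];
}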
After that, you have to create a framebuffer and a renderbuffer, which I won't explain in detail here. Last, we set up a timer that fires 60 times per second (the lazy approach) and call OnUpdate() and OnRender() of our C++ engine code from it. So that was the boilerplate part :)
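If you're curious what that looks like, here is a rough sketch of the buffer setup and the timer-driven render loop. The ivar names (framebuffer, renderbuffer, context, renderInterval) and the global gGame pointer are assumptions based on the snippets in this part, and the file has to be compiled as Objective-C++ (.mm) so it can call into the C++ engine.

// Sketch only: create a framebuffer whose color renderbuffer is backed
// by the CAEAGLLayer, then drive the engine from an NSTimer.
- (void)createBuffers {
    glGenFramebuffersOES(1, &framebuffer);
    glGenRenderbuffersOES(1, &renderbuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, renderbuffer);
    // Let the EAGL layer provide the storage for the color buffer.
    [context renderbufferStorage:GL_RENDERBUFFER_OES
                    fromDrawable:(CAEAGLLayer *)self.layer];
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                 GL_RENDERBUFFER_OES, renderbuffer);
}

- (void)startRenderLoop {
    [NSTimer scheduledTimerWithTimeInterval:renderInterval
                                     target:self
                                   selector:@selector(drawFrame)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)drawFrame {
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

    gGame->OnUpdate();   // advance the game state
    gGame->OnRender();   // issues the RBDrawRect() calls

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, renderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}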
OpenGL calls
Because our game so far draws its content with rectangles only, the OpenGL calls are straightforward to port. In fact, it's just the function RBDrawRect() in the file RBRenderHelper.m.
void RBDrawRect(float x, float y, float width, float height, RBColor color) {
    GLfloat vertices[] = {
        x,         y + height, 0.0f,  // Upper left
        x + width, y + height, 0.0f,  // Upper right
        x,         y,          0.0f,  // Lower left
        x + width, y,          0.0f,  // Lower right
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(color.r, color.g, color.b, 1.0f);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
The main difference here is that GL_QUADS does not exist in OpenGL ES, so we have to use GL_TRIANGLE_STRIP to build a rectangle.
Also, the immediate-mode API (glBegin()/glEnd()) doesn't exist on iOS, which is why we pass vertex arrays to glDrawArrays() here. The modern solution goes even further: OpenGL ES 2.0 and later (the mobile counterpart of the desktop Core Profile) removes the old fixed-pipeline calls entirely and replaces them with buffer objects and shaders, which you only update when needed. It is a bit more complicated, but you can do more cool stuff with it, and it's much more performant.
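Just to illustrate the buffer-object idea (this is not part of the tutorial code, and shaders are left out), OpenGL ES 1.1 already lets us upload the rectangle into a vertex buffer object once and reuse it every frame; vertices is the array from RBDrawRect() above.

// Sketch only: create the buffer once, at load time.
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// Later, per frame: draw straight from the buffer.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);    // offset 0 into the bound buffer
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindBuffer(GL_ARRAY_BUFFER, 0);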
User input
At this point, we can already run the game on iOS, and it works correctly. There is just one “pain point”: we have no user input…
Of course, on iOS, we have no keyboard attached, so we use the touch screen to control our paddles.
For this tutorial, I implemented a simple approach:
- Create a custom view class, TouchView (see the sketch at the end of this section).
- In the ViewController class, I instantiate two of these views, one on each side of the screen.
- Whenever the user touches the upper part, gGame->OnKey(KeyUp, true) is called. When the user lifts their finger, gGame->OnKey(KeyUp, false) is called.
In other words, I let the game engine think the user has pressed the Up/Down keys (or W/S for the second player). This allows all the game code to remain the same.
This is a commonly used approach: the engine has no idea what kind of input device is available; it always receives the same commands.
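Here is a minimal sketch of such a TouchView. The upKey/downKey properties and the upper/lower-half mapping within each view are assumptions, and the file again needs to be compiled as Objective-C++ (.mm) to reach the gGame pointer.

// A minimal sketch of TouchView. The upKey/downKey properties would be
// set to KeyUp/KeyDown for one paddle and to the engine's W/S constants
// for the other.
@interface TouchView : UIView
@property (nonatomic) int upKey;
@property (nonatomic) int downKey;
@end

@implementation TouchView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [self handleTouches:touches pressed:true];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [self handleTouches:touches pressed:false];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [self handleTouches:touches pressed:false];
}

- (void)handleTouches:(NSSet *)touches pressed:(bool)pressed {
    for (UITouch *touch in touches) {
        CGPoint p = [touch locationInView:self];
        // Upper half of the view acts as "up", lower half as "down".
        if (p.y < self.bounds.size.height / 2.0) {
            gGame->OnKey(self.upKey, pressed);
        } else {
            gGame->OnKey(self.downKey, pressed);
        }
    }
}

@end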
And as always, the source can be found on GitHub.
Platform independence: the advantage
Maybe you already realise the beauty of multiplatform programming with OpenGL and C++. We only have to touch a very small part of the code to make the game run on a different platform; all the game logic remains the same and can be used as-is on every platform.
What's next?
So far we have only been on Apple platforms, but in Part IV I will finally show how to port the game to Android.
Check out my newsletter, ‘The Spatial Projects’ on Substack, where I write about Spatial Computing on Apple’s Vision Pro.