All posts by Philip Schneider

mtetris: Tetris-like Game in X11/Motif

My first job out of school was with Digital Equipment Corporation, in Palo Alto, California. I worked there from 1987 to 1992, during the heyday (last hurrah?) of UNIX workstations and the X Window System.

Being workaholic young engineers, some of my colleagues and I probably spent too much time fiddling around with interactive X11 programs: collecting them, modifying them, writing some of our own, and so on. I covered the catclock version of xclock in my first post to this blog.

At some point I came across dwtetris, an X11/Motif implementation of the famous (infamous?) Tetris® game. As I recall, it was written for DECwindows by a DEC engineer in Japan, but now, nearly 40 years later, I don’t remember the details. I’ve spent some time combing the web, but can’t find any trace of the application. If anyone with better search skills can track down the origin of this program, that would be greatly appreciated. In any case, I made some minimal modifications so that it would build and run under DEC’s UNIX OS, ULTRIX, and renamed it mtetris. As with the catclock program, I’ve managed to keep it compiling and running, and even have it working on macOS via XQuartz. It’s available in this GitHub repository.

The implementation is surprisingly full-featured: it includes a number of optionally displayed windows for the score, piece statistics, next piece, and UI help.

For computer history nerds: that round, three-button mouse was actually used with DECstation computers:

And yes, it was rather awful to use, as the top-mounted cord’s stiffness could cause the mouse to rotate a bit whenever you took your hand off, and your next mouse operation would not go in the direction you intended. But at least it had three buttons, making it convenient for interacting with 3D applications.

X11/Motif programs were written without the benefit of GUI builders like Xcode’s Interface Builder, so all of the UI component construction and layout were done in code. The X11/Motif APIs required quite a bit of typing, so the code base for even a small application like this consisted largely of UI creation code. For example, consider what it took to implement a single push button.
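
Something along these lines (a representative sketch using the standard Motif calls; the widget name, label, and callback are illustrative, not lifted verbatim from the repository):

#include <stdlib.h>
#include <Xm/Xm.h>
#include <Xm/PushB.h>

/* Callback invoked when the button is activated. */
static void quit_cb(Widget w, XtPointer client_data, XtPointer call_data)
{
    exit(0);
}

/* Create and manage a push button as a child of 'parent'. */
static Widget create_quit_button(Widget parent)
{
    Widget   button;
    Arg      args[2];
    int      n = 0;
    XmString label = XmStringCreateLocalized("Quit");

    XtSetArg(args[n], XmNlabelString, label); n++;
    button = XmCreatePushButton(parent, "quitButton", args, n);
    XtAddCallback(button, XmNactivateCallback, quit_cb, NULL);
    XtManageChild(button);

    XmStringFree(label);
    return button;
}

Multiply that by every label, menu, scale, and dialog in the application, and you can see where most of the code goes.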

In any case, interested parties are welcome to peruse the git repository and see a dandy example of 1980s-90s state-of-the-art programming. Enjoy!

OpenGL on iOS: Device Orientation Changes Without All The Stretching

Once you get your first OpenGL view working on an iPhone or iPad, one of the first things you’ll likely notice is that when you rotate your device, the rendered image in that view stretches during the animated rotation of the display. This post explains why this happens, and what you can do to deal with it. Example code can be found at https://github.com/codefromabove/OpenGLDeviceRotation.

Continue reading OpenGL on iOS: Device Orientation Changes Without All The Stretching

Programmatically Adding an Icon to a Folder or File

I recently needed to programmatically add an icon to a folder in OS X, with the source being a PNG image file. This arose while setting up an app installer, which needed to set custom Finder icons on several folders. Another context in which a programmatic solution would be useful is a running desktop application that creates new folders needing custom icons (or that needs to modify existing icons for some reason).

Doing this programmatically should be trivially easy, but the solutions commonly offered for it have problems. This post discusses some of those solutions and their shortcomings, and presents a usable workaround or two. It’s also a plea (or two) for help…

Continue reading Programmatically Adding an Icon to a Folder or File

Cocoa: Dynamically Loading Resources From an “External” Bundle

In a typical OS X application, UI elements are often created in Interface Builder and stored inside the application bundle in the form of one or more nib files. Bundles are central to Apple’s application ecosystem, and the documentation on them is extensive: The Bundle Programming Guide and Code Loading Programming Topics, for example, describe how to create frameworks and application plug-ins, how to load code and resources, and on and on. Impressive, but rather daunting, and the examples they provide often obfuscate the basics. This post shows a very simple example of how an “external” bundle can be loaded on demand and can provide functionality and UI elements to the main program. It’s a detailed, step-by-step tutorial that explains a fairly simple use of bundles, so the intended audience is Cocoa programmers who have enough experience to create custom window or view controllers, but who have not yet dealt with bundles that are created separately from the application.

Code for the projects can be found on GitHub.

Continue reading Cocoa: Dynamically Loading Resources From an “External” Bundle

AV Foundation: Saving a Sequence of Raw RGB Frames to a Movie

An application may generate a sequence of images that are intended to be viewed as a movie, outside of that application. These images may be created by, say, a software 3D renderer, a procedural texture generator, etc. In a typical OS X application, these images may be in the form of a CGImage or NSImage. In such cases, there are a variety of approaches for dumping such objects to a movie. However, in some cases the image is stored simply as an array of RGB (or ARGB) values. This post discusses how to create a movie from a sequence of such “raw” (A)RGB data.

Continue reading AV Foundation: Saving a Sequence of Raw RGB Frames to a Movie

NSSavePanel: Adding an Accessory View

Cocoa’s NSSavePanel lets you programmatically add essentially arbitrary interface elements and functionality to it, in the form of an accessory view. In this post, I show a very simple accessory view example: allowing the user to control the file type (that is, suffix) of the file to be saved. I’ll present this in two contexts: first, in purely Objective-C usage; and second, using an NSSavePanel inside a C/C++ function. In the latter case, I show how to use a selector in a separate object to handle “callbacks”. This post is aimed at novice Cocoa programmers; experienced programmers looking to add a file type selection are encouraged to check out JFImageSavePanel or JAMultiTypeSavePanelController. Apple’s Customizing NSSavePanel shows other uses for the accessory view.

Continue reading NSSavePanel: Adding an Accessory View

OS X: Launching Another Application Programmatically

Occasionally an application may require that another application be run. This other application may be some behind-the-scenes “helper” or auxiliary app, or it may be necessary for the user as part of a larger workflow. In this post, I go over some techniques for launching an application programmatically, and in the process describe a general method for passing parameters to a bundled AppleScript.

An Xcode project for this is available on GitHub.

Continue reading OS X: Launching Another Application Programmatically

iOS Bézier Clock

I recently stumbled on Jack Frigaard’s Bézier Clock web page, which demonstrates his use of Processing.js to show an animated “digital” clock. He links to another page containing his JavaScript code.

[Update: Jack’s original web pages are MIA, but can be found via the Internet Archive Wayback Machine here and here.]

I thought it would be fun to see if I could translate this into an iOS app; this project is the result of that effort. In truth, this is more of a transliteration than a proper translation… I converted it to Objective-C by creating equivalents to Jack’s classes, adding some UIViewControllers and UIViews, and pasting his code in. My goal was to keep his code and algorithms as intact as possible while still writing fairly “proper” Objective-C. So the resulting code is probably not quite what one would do if starting from scratch on iOS.

Continue reading iOS Bézier Clock

FFmpeg: convert RGB(A) to YUV

I recently had a need to convert a series of rendered images generated in an application to a movie file. The images are rasters of raw 32-bit RGBA values. A typical solution to this problem would be to dump the images to disk, and then use a command-line program, such as ffmpeg, to convert them to the desired movie format (in this case, MPEG-2).

In my particular usage scenario, this simple solution was not an option for various reasons (nearly unbounded disk usage, user interface issues, etc.). Another option is to use the FFmpeg API to encode each frame’s raw RGBA data and dump it to a movie file. Unfortunately, I was unable to find a codec that would directly convert from the raw data to the desired movie format.

A web search turned up a potential solution: http://stackoverflow.com/questions/16667687/how-to-convert-rgb-from-yuv420p-for-ffmpeg-encoder

It turns out you can convert RGB or RGBA data into YUV using FFmpeg itself (libswscale), and the converted frames can then be encoded and written to a movie file. The basics are just a few lines: first, create an SwsContext that specifies the image size and the source and destination pixel formats:

#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG2VIDEO);
AVCodecContext *c = avcodec_alloc_context3(codec);
// ...set up c's params
AVFrame *frame = av_frame_alloc();
// ...set up frame's params and allocate image buffer
struct SwsContext *ctx = sws_getContext(c->width, c->height,
                                        AV_PIX_FMT_RGBA,
                                        c->width, c->height,
                                        AV_PIX_FMT_YUV420P,
                                        0, 0, 0, 0);

And then apply the conversion to each RGBA frame (the rgba32Data pointer) as it’s generated:

uint8_t *inData[1]     = { rgba32Data };
int      inLinesize[1] = { 4 * c->width };
sws_scale(ctx, inData, inLinesize, 0, c->height, 
          frame->data, frame->linesize);

One important point to note: if your input data has padding at the end of the rows, be sure to set inLinesize to the actual number of bytes per row, not simply 4 times the width of the image.
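
For example, if each row of your RGBA data happened to be padded out to a 16-byte boundary (a hypothetical stride; use whatever your image source actually produces), the conversion would use the stride rather than the width:

// Hypothetical case: each RGBA row is padded to a 16-byte boundary.
int      bytesPerRow   = (4 * c->width + 15) & ~15;
uint8_t *inData[1]     = { rgba32Data };
int      inLinesize[1] = { bytesPerRow };   // actual stride, not 4 * c->width
sws_scale(ctx, inData, inLinesize, 0, c->height,
          frame->data, frame->linesize);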

If you’re familiar with the FFmpeg API, this info should be sufficient to get you going. The FFmpeg API is quite extensive and a bit arcane, and even something as functionally simple as dumping animation frames to a movie file is not completely trivial. Fortunately, the FFmpeg folks have provided some nice example files, including one that demonstrates some basic audio and video encoding and decoding: https://www.ffmpeg.org/doxygen/2.1/decoding__encoding_8c.html

I took the source for the video encoding function and hacked it up to incorporate the required RGBA to YUV conversion. The code performs all the steps needed to set up and use the FFmpeg API, start to finish, to convert a sequence of raw RGBA data to a movie file. As with the original version of the code, it synthesizes each frame’s data (an animated ramp image) and dumps it to a file. It should be easy to change the code to use real image data generated in your application. I’ve made this available on GitHub at:

https://github.com/codefromabove/FFmpegRGBAToYUV

For Mac programmers, I’ve included an Xcode 6 project that creates a single-button Cocoa app. The non-app code is separated out cleanly, so it should be easy for Linux or Windows users to make use of it.

Other input formats

The third argument to sws_getContext describes the format/packing of your data. There are a huge number of pixel formats defined in FFmpeg (see pixfmt.h), so if your raw data is not RGBA you can most likely just specify a matching format rather than change how your images are generated. Be sure to compute the correct line width (inLinesize in the code snippets) when you change the input format specification. I don’t know which input formats are supported by sws_scale (all, most, just a few?), so it would be wise to do a little experimentation.

For example, if your data is packed 24-bit RGB, and not 32-bit RGBA, then the code would look like this:

struct SwsContext *ctx = sws_getContext(c->width, c->height,
                                        AV_PIX_FMT_RGB24,
                                        c->width, c->height,
                                        AV_PIX_FMT_YUV420P,
                                        0, 0, 0, 0);
uint8_t *inData[1]     = { rgb24Data };
int      inLinesize[1] = { 3 * c->width };
sws_scale(ctx, inData, inLinesize, 0, c->height, 
          frame->data, frame->linesize);