The data encoded and transmitted by the MSP430 LaunchPad will be received and decoded by an Android smartphone running a dedicated app.
The app basically captures camera frames in preview mode and analyzes the data.
In this post, I will talk a little about the basics of Android camera development; in the next post I will cover the algorithms implemented to decode the data sent through visible light.
Before starting development on an application with the Camera API, you should make sure your manifest has the appropriate declarations to allow use of camera hardware and other related features.
- Camera Permission - Your application must request permission to use a device camera.
<uses-permission android:name="android.permission.CAMERA"/>
- Camera Features - Your application must also declare use of camera features, for example:
<uses-feature android:name="android.hardware.camera"/>
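As a side note, besides the manifest declarations it can be handy to verify at runtime that the device actually has a camera. This is not required for the declarations above to work; it is just a minimal sketch of such a check using PackageManager:

// Optional runtime check: returns true if the device reports any camera hardware.
private boolean hasCameraHardware(Context context) {
    return context.getPackageManager()
            .hasSystemFeature(PackageManager.FEATURE_CAMERA);
}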
Creating a preview class
For users to effectively take pictures or video, they must be able to see what the device camera sees. A camera preview class is a SurfaceView
that can display the live image data coming from a camera, so users can frame and capture a picture or video.
I created a basic camera preview class that can be included in a View layout. This class implements SurfaceHolder.Callback
in order to capture the callback events for creating and destroying the view, which are needed for assigning the camera preview input.
Here is the constructor of the class:
public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {

    private SurfaceHolder mHolder;
    private Camera mCamera;

    public CameraPreview(Context context, Camera camera) {
        super(context);
        mCamera = camera;
        // Install a SurfaceHolder.Callback so we get notified when the
        // underlying surface is created and destroyed.
        mHolder = getHolder();
        mHolder.addCallback(this);
    }
The first method to implement is surfaceCreated, which is invoked when the surface where the preview image will be shown has been created. Here I look for the available cameras: the front camera is preferred, and the back camera is used as a fallback. In this phase, the custom preview callback object (see below for details) is also set.
public void surfaceCreated(SurfaceHolder holder) {
    int numberOfCameras = Camera.getNumberOfCameras();
    CameraInfo cameraInfo = new CameraInfo();
    Log.d(TAG, "Number of cameras: " + numberOfCameras);
    for (int i = 0; i < numberOfCameras; i++) {
        Camera.getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
            defaultBackCameraId = i;
        } else if (cameraInfo.facing == CameraInfo.CAMERA_FACING_FRONT) {
            defaultFrontCameraId = i;
        }
    }
    if (defaultFrontCameraId != -1) {
        // The front camera is the preferred one.
        try {
            mCamera = Camera.open(defaultFrontCameraId);
            mCamera.setPreviewCallback(_previewCallback);
        } catch (Exception e) {
            Log.d(TAG, "Error setting camera preview: " + e.getMessage());
        }
        return;
    } else if (defaultBackCameraId != -1) {
        // Fall back to the back camera.
        try {
            mCamera = Camera.open(defaultBackCameraId);
            mCamera.setPreviewCallback(_previewCallback);
        } catch (Exception e) {
            Log.d(TAG, "Error setting camera preview: " + e.getMessage());
        }
        return;
    }
}
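The companion surfaceDestroyed callback is not listed in this post; a minimal sketch of what it could look like, simply releasing the camera so other applications can use it (the actual app may handle this in the activity's onPause() instead):

public void surfaceDestroyed(SurfaceHolder holder) {
    // Stop the preview and release the camera when the surface goes away.
    if (mCamera != null) {
        mCamera.setPreviewCallback(null);
        mCamera.stopPreview();
        mCamera.release();
        mCamera = null;
    }
}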
When the surface changes (at app startup or when, for example, it is resized or the orientation changes), the surfaceChanged method is invoked. In my case, I request the preferred preview size (800 x 480, or the resolution that best matches it), the preferred frame rate (30 fps) and the focus mode (I used a fixed focus, to free up system resources that would otherwise be used for autofocus).
Finally, the preview display is bound to the surface holder and the preview is started by calling startPreview():
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    if (mHolder.getSurface() == null) {
        // Preview surface does not exist.
        return;
    }

    Camera.Parameters parameters = mCamera.getParameters();
    List<Size> sizes = parameters.getSupportedPreviewSizes();
    Log.d(TAG, "getSupportedPreviewSizes: " + sizes.toString());

    // Pick the supported size closest to 800 x 480.
    Size optimalSize = getOptimalPreviewSize(sizes, /*w*/ 800, /*h*/ 480);
    w = optimalSize.width;
    h = optimalSize.height;
    width = w;
    height = h;

    try {
        parameters.setPictureSize(w, h);
        mCamera.setParameters(parameters);
    } catch (Exception e) {
        parameters = mCamera.getParameters();
    }
    try {
        parameters.setPreviewSize(w, h);
        // The fps range is expressed in units of 1/1000 fps, so 30000 means 30 fps.
        parameters.setPreviewFpsRange(30000, 30000);
        mCamera.setParameters(parameters);
    } catch (Exception e) {
        parameters = mCamera.getParameters();
    }
    try {
        // A fixed focus frees resources that continuous autofocus would use.
        parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_FIXED);
        mCamera.setParameters(parameters);
    } catch (Exception e) {
        parameters = mCamera.getParameters();
    }

    // Start the preview with the new settings.
    try {
        mCamera.setPreviewDisplay(mHolder);
        mCamera.startPreview();
    } catch (Exception e) {
        Log.d(TAG, "Error starting camera preview: " + e.getMessage());
    }

    int[] range = new int[2];
    parameters.getPreviewFpsRange(range);
    Log.d(TAG, String.format("Actual camera settings: size: %d x %d, fps: %d,%d",
            parameters.getPreviewSize().width, parameters.getPreviewSize().height,
            range[0], range[1]));
}
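The getOptimalPreviewSize helper used above is not shown in the post; the sketch below is one common way to implement it (so the version actually used may differ): among the supported sizes it prefers those with a similar aspect ratio and picks the one whose height is closest to the requested one.

private Size getOptimalPreviewSize(List<Size> sizes, int w, int h) {
    final double ASPECT_TOLERANCE = 0.1;
    double targetRatio = (double) w / h;
    if (sizes == null) return null;

    Size optimalSize = null;
    double minDiff = Double.MAX_VALUE;

    // First pass: keep only sizes with a close aspect ratio, then match the height.
    for (Size size : sizes) {
        double ratio = (double) size.width / size.height;
        if (Math.abs(ratio - targetRatio) > ASPECT_TOLERANCE) continue;
        if (Math.abs(size.height - h) < minDiff) {
            optimalSize = size;
            minDiff = Math.abs(size.height - h);
        }
    }

    // Second pass: ignore the aspect ratio if nothing matched.
    if (optimalSize == null) {
        minDiff = Double.MAX_VALUE;
        for (Size size : sizes) {
            if (Math.abs(size.height - h) < minDiff) {
                optimalSize = size;
                minDiff = Math.abs(size.height - h);
            }
        }
    }
    return optimalSize;
}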
A custom preview callback is implemented so that the camera frames can be processed by the algorithm devised to detect VLC data:
private final PreviewCallback _previewCallback = new PreviewCallback() {
    public void onPreviewFrame(final byte[] data, Camera camera) {
        Log.d(TAG, "--> onPreviewFrame: " + data.length + " bytes");
        previewData = data;
        appendSamples(data, width, height);
        mHandler.post(new Runnable() {
            public void run() {
                updateUI();
            }
        });
        framesCounter++;
        if ((framesCounter % NUM_FRAMES_TO_CAPTURE) == 0) {
            mHandler.post(new Runnable() {
                public void run() {
                    decodeSamples();
                    numSamples = 0;
                }
            });
        }
    }
};
Three important pieces are involved here:
- appendSamples: samples the preview image and enqueues the values in an array for later processing (a hypothetical sketch is given after this list)
- updateUI: shows some information about the processing being performed; it is not invoked directly but through a post, because UI elements cannot be updated from a different thread
- decodeSamples: processes the data enqueued by appendSamples
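These three functions belong to the decoding logic that will be described in the next post, so the following is only a hypothetical sketch of what appendSamples could look like: it averages the luminance (Y) values of a frame and stores one sample per frame (the samples buffer and the MAX_SAMPLES constant are assumptions made for the example).

// Hypothetical sketch, not the actual implementation described in the next post.
private static final int MAX_SAMPLES = 1024;          // assumption for the example
private final double[] samples = new double[MAX_SAMPLES];
private int numSamples = 0;

private void appendSamples(byte[] data, int width, int height) {
    // In the 420 semi-planar preview format the first width*height bytes
    // are the Y (luminance) plane, which is all we need for light intensity.
    int numPixels = width * height;
    long sum = 0;
    for (int i = 0; i < numPixels; i++) {
        sum += (data[i] & 0xFF);   // bytes are signed in Java, mask to 0..255
    }
    if (numSamples < samples.length) {
        samples[numSamples++] = (double) sum / numPixels;
    }
}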
The data parameter contains (according to Google's documentation for Android developers) the image in a format named YCrCb 420 semi-planar. I googled around to understand how the pixels are encoded, but I got a bit confused. Luckily, Android is an open source project, so I browsed through the source code of the YUVImage class and found the answer:
bool YUVImage::initializeYUVPointers() {
    int32_t numberOfPixels = mWidth * mHeight;

    if (mYUVFormat == YUV420Planar) {
        mYdata = (uint8_t *)mBuffer;
        mUdata = mYdata + numberOfPixels;
        mVdata = mUdata + (numberOfPixels >> 2);
    } else if (mYUVFormat == YUV420SemiPlanar) {
        // U and V channels are interleaved as VUVUVU.
        // So V data starts at the end of Y channel and
        // U data starts right after V's start.
        mYdata = (uint8_t *)mBuffer;
        mVdata = mYdata + numberOfPixels;
        mUdata = mVdata + 1;
    } else {
        ALOGE("Format not supported");
        return false;
    }
    return true;
}
According to this code, the data has the Y component first, followed by the V and U components interleaved as VUVUVU…
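As a concrete example of how a single pixel can be read out of such a buffer, here is a small sketch (assuming the usual 4:2:0 subsampling, where each V/U pair covers a 2x2 block of pixels):

// Read the Y, V and U values of pixel (x, y) from a 420 semi-planar buffer:
// the Y plane is width*height bytes, followed by V and U bytes interleaved
// at quarter resolution (one V/U pair per 2x2 block of pixels).
private int[] getYvuAt(byte[] data, int width, int height, int x, int y) {
    int yValue = data[y * width + x] & 0xFF;
    int uvIndex = width * height + (y / 2) * width + (x / 2) * 2;
    int vValue = data[uvIndex] & 0xFF;
    int uValue = data[uvIndex + 1] & 0xFF;
    return new int[] { yValue, vValue, uValue };
}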
Placing preview in a layout
The camera preview class can now be placed in the layout of an activity along with other user interface controls for taking a picture or video.
The following layout code provides a very basic view that can be used to display a camera preview. In this example, the FrameLayout
element is meant to be the container for the camera preview class.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="horizontal"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    >
    <FrameLayout
        android:id="@+id/camera_preview"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:layout_weight="1"
        />
    <Button
        android:id="@+id/button_capture"
        android:text="Capture"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center"
        />
</LinearLayout>
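The preview class is then attached to the FrameLayout from the activity. A minimal sketch of the wiring (the activity and layout names are placeholders; in my case the camera is opened inside surfaceCreated, so no Camera object is passed here):

public class MainActivity extends Activity {

    private CameraPreview mPreview;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // Create the preview view and add it to the FrameLayout container.
        mPreview = new CameraPreview(this, null);
        FrameLayout container = (FrameLayout) findViewById(R.id.camera_preview);
        container.addView(mPreview);
    }
}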