The camera APIs are designed to control the video stream from the DRC camera. After the camera on the DRC is started, it generates a compressed video stream at a regular (isochronous) rate. The camera library allows the user to decompress the stream using the Universal Video Decode (UVD) engine in Cafe and obtain frames suitable for rendering.
The compressed stream generated by the DRC travels to the console wirelessly, and the compression algorithm is adapted for this transmission. As a result, unless a transmission error is observed, all frames except the first are "inter" frames. The camera library must therefore decode every frame in sequence to correctly decode the latest one, and cannot skip frames while decoding.
Another noteworthy feature of the camera library is that it does not use the user's surfaces to reference previous frames while decoding the latest one. Instead, the application allocates a large chunk of work memory up front when initializing the library, and the video decoder hardware maintains a local copy of the reference frames in this work memory. The benefit is that users can free the surfaces or modify them in any way they want (such as for post-processing or camera effects) without first copying them to another framebuffer to avoid corrupting the decoding process.
The driver is meant to be initialized once by calling CAMInit. At that time, work memory is provided for the driver's own use; the minimum required size of this memory can be queried from the driver. This memory should be freed only after calling CAMExit. At the time of CAMInit, the application should also provide a callback function that the driver can use to inform the application about any events.
After streaming is started, the application needs to submit empty surfaces to the driver. When the driver has a surface ready for rendering, it notifies the application with the CAMERA_DECODE_DONE event. The event handler function lets the application know which framebuffer is ready for rendering. This framebuffer does not necessarily belong to the last submitted surface, because surfaces submitted by the application go into a FIFO and are pulled out by a separate thread in the driver.
If the DRC detaches from the Wii U for any reason, the camera driver closes itself automatically and reports the CAMERA_DRC_DETACHES event. If the DRC attaches again at a later time, the application needs to call CAMOpen before it can use the camera again.
The camera driver also closes itself automatically when the camera process is released from the foreground, whether by a HOME Button press or for some other reason. When the camera process returns to the foreground, it does not resume camera operation automatically; the application needs to call CAMOpen explicitly to start streaming video again.
If the application wants to reinitialize the camera driver, it must first call CAMExit.
Camera input from the DRC is decoded by hardware. The decoded frame consists of a Y plane and a UV interleaved plane; the Y plane is located at the beginning of the frame and the UV plane follows it. The developer needs to convert the frame to RGB format using a GPU shader. One option is to fetch the Y buffer as an 8-bit single-channel 640x480 texture and the UV buffer as an 8-bit two-channel 320x240 texture.
YUV to RGB conversion is assumed to be performed in gamma space.
A sample shader is stored at
The current camera gamma setting is 2.2, and the YUV-to-RGB conversion math is based on gamma space. If linear-space effects are applied to the camera output, modifications are necessary.
2013/05/08 Automated cleanup pass.
2012/08/02 Cleanup Pass.
2011/10/27 Initial version.