This application can play video/audio on a monitor that has an HDMI input terminal and a built-in speaker. However, operation is currently tested against only the several MP4 files we created. Also, of the test data mentioned in 5, only 5 and 6 were used to verify operation during 2-screen playback, because the file system (FSReadFileAsync) does not function properly during 2-screen simultaneous playback.
Because file input/output between the production machine and the PC is performed via the following folders, check the folder structure before running the application.
In this application, the following 5 threads are generated:
Input buffers to store video/audio bitstreams: memory regions that hold the per-frame bitstream data for video and audio separated by the MP4 demultiplexer.
Video: 60 buffers (assuming a maximum of 60 fps), i.e., one second's worth of buffers.
Audio: 300 buffers (assuming a maximum of 48 kHz). The audio buffer count assumes the video may run as low as 1 fps. Because of the library specification, the video decoder starts outputting image data only after decoding of 5 frames has completed, so it is actually necessary to hold (5+1) video frames' worth of audio data; MP4Demux outputs (48000/1024) audio frames per video frame at 1 fps. Formula: (48000/1024) x (5+1) = 282 (allocated as 300 with margin).
Start decoding. Video: decoding starts after half a second's worth of frames, relative to the frame rate, has accumulated (at 30 fps, 15 of the 60 buffers). Audio: decoding starts after half a second's worth of audio frames, relative to the sampling frequency, has accumulated (at 48 kHz, (48000/1024/2) of the 300 buffers).
Start drawing/audio playback. Video: half a second's worth of frames has accumulated in the output buffer (at 30 fps, 15 of the 60 buffers). Audio: half a second's worth of audio frames has accumulated in the output buffer (at 48 kHz, (48000/1024/2) of the 300 buffers). When both conditions are satisfied, drawing/audio playback starts.
Sync control. The time when drawing/audio playback started in step 4 is retained as the start time; the timestamp of each decoding result is compared against the elapsed time, and drawing starts once that timestamp has been reached. Video: the image in the corresponding framebuffer is specified as the texture region; after drawing, the region's Active state is cancelled and it is reset to Unused. Audio: the mixer plays audio fixed at 48 kHz. The playback position is updated regularly by interrupts; the application monitors the updated position, cancels the Active state of each region that has finished playing, and resets it to Unused.
AXRegisterDeviceFinalMixCallback is used to start/stop the mixer. After calling the API, interrupts do not seem to work properly unless a wait of approximately 50 ms is inserted. Normally, at 48 kHz, the
AXPBOFFSET structure should be updated every 21 ms; however, the interval becomes "0, 42, 63..." and the update does not take place at the 21 ms mark. Due to this issue, data that has already been played but is still left in the buffer ends up being output for 50 ms or so after all decoding results have been played. For this reason, the buffer is cleared (silenced) after decoding completes.
How to run the application in the Cafe SDK environment is shown below. Building at Step 3 generates an executable file in
Run the file by using the command below. The current directory assumes
% caferun mp4player.elf "TV input file" "DRC input file"
* The path on the dev machine should be used for "input files".
For example, when using the following in the demo:
"./vol/content/codecdemo/mp4/DRCtest.MP4" should be used as "input files".
% caferun mp4demux_movie.elf ./vol/content/codecdemo/mp4/TVtest.MP4 ./vol/content/codecdemo/mp4/DRCtest.MP4
The following compile switches are available for the Makefile.
||Instead of passing the file names as arguments, the files defined at the beginning of the main file are read. When passing arguments, 2 input file names must be specified on the command line at execution: the 1st is the file name for TV output, and the 2nd is the file name for DRC output.|
||The respective decoding results for H.264/AAC are output to files. The file names must be defined at the beginning of the main file. When outputting to a file, the file-access wait increases, which results in frame skipping. To avoid frame skipping, it is necessary to comment out the synchronization processing.|
||Enable profiling. By enabling this switch and then adding
In this sample code, by modifying the following parameters in the main file (
mp4demux_main.c), the user can change the 2-screen simultaneous playback setting, the allowable delay time, and the size of the input file that is read into memory.
||Allowable processing delay time: the default is 70 ms (approximately 2 frames' worth of input data at 30 fps). If the PTS shows that more than the specified time has already elapsed relative to the current time on the display thread, video decoding is skipped until the next key frame. In this case the display stops until the next key frame, but the audio continues playing without interruption. * However, jumpiness due to processing delay caused by file reads may not be avoidable.|
||The maximum file size to read into memory. By reading input data into memory ahead of time, processing delay caused by a file-access bottleneck can be avoided. * Normal operation has been verified with the 512 MB setting. When a smaller size is specified and data is supplied via file input, frame skipping occurs during 2-screen simultaneous playback.|
||Setting for 2-screen simultaneous playback switch (only when the compile switch:
1 : Play 1 stream, and display images on TV.
2 : Play 1 stream, and display images on DRC.
Other than 1 or 2: Play 2 streams at the same time, and display images on TV and DRC.
||mp4player source code|
2013/10/07 Added a note that the mp4player demo cannot decode 5.1ch sound.
2013/08/30 5.1ch surround supported.
2013/05/08 Automated cleanup pass.