
== API regression testing ==

'''Description:''' FATE is FFmpeg's automated testing suite. However, it only exercises the ffmpeg command-line executable, not the public API of the libraries. This means that library users can hit bugs that the ffmpeg executable happens to hide.

'''Expected results:''' Build a suite of tests for API users, including advanced options such as draw_horiz_band.

'''Prerequisites:''' Good C coding skills, basic familiarity with git.

'''Qualification Task:''' Write a test for a libavcodec decoder using the API.

'''Mentor:''' Kieran Kunhya (''kierank'' in #ffmpeg-devel on Freenode IRC, kieran[at]kunhya.com)

'''Backup Mentor:'''


== directshow digital video capture ==

'''Description:''' FFmpeg today supports Windows DirectShow capture by creating its own dshow graphs, an internal capture sink, etc. Support was recently added for analog capture devices and analog TV tuners, but digital TV tuner capture devices (ATSC etc.) are still missing: https://msdn.microsoft.com/en-us/library/windows/desktop/dd695354(v=vs.85).aspx

The first step will be supporting either DVB or ATSC, then the other, until the work encompasses all the various digital capture options and all parameters for the same.

The next step will be creating a libavfilter filter that can "pass through" frames to be encoded by a Windows DirectShow encoder (for instance, some encoders, like Lagarith, are typically only available on Windows as dshow filters).

Another step would be allowing for "DV capture" (i.e. from a live webcam), see https://msdn.microsoft.com/en-us/library/windows/desktop/dd373388(v=vs.85).aspx

It would also be nice to refactor the DirectShow code so that it can take multiple inputs instead of just the two it handles today. Input would look like "-i video=name=Webcam:show_input_video_options=true:framerate=25:audio=name=Audio Device:rate=44100:video=name=Webcam 2:show_input_options=false" etc.

Finally, it would be nice to implement the libavdevice "enumerate devices" API to at least show which devices exist on the system. Initially return just the devices; then, as a follow-on, return the devices along with any options they have, like "name=Capture device:input_crossbar_device_number=3" (one entry for each option).

'''Expected results:''' The ability to capture video and audio from digital TV tuner devices. This basically involves setting up the right filter graph, sending it a tuning request, and exposing the output to FFmpeg with the right codecs presented.

'''Prerequisites:''' C coding skills, basic familiarity with git, desire to learn, access to a native Windows box, and eventually, a digital capture device.

'''Qualification Task:''' Study all links on digital video capture graphs: http://stackoverflow.com/questions/14150210/having-trouble-capturing-digital-tv-using-directshow and create some test graphs using GraphEdit that capture digital video successfully. Also add an IPersistStream option to and from file for the dshow video base filter (basically, you can call this to "serialize" its settings to a file after setting them, and then read them back from the file to restore them). This also involves a new command-line option.

'''Mentor:''' Roger Pack (rogerdpack@gmail.com), the dshow module maintainer.

'''Backup Mentor:''' Ramiro Polla (''ramiro'' in #ffmpeg-devel on Freenode IRC, ramiro DOT polla AT gmail DOT com)


== Postproc optimizations ==

[[Image(wiki:SponsoringPrograms/GSoC/2014:PostProc.jpg, 240, right, nolink)]]


'''Description:''' FFmpeg contains libpostproc, which is used to postprocess 8x8 DCT-MC based video and images (jpeg, mpeg-1/2/4, H.263 among others). Postprocessing removes blocking (and other) artifacts from low-bitrate / low-quality images and videos. The code, though, was written a long time ago, and its SIMD optimizations need to be updated to what modern CPUs support (SSE2+ and AVX2).

'''Expected results:'''
- Convert all gcc inline asm in libpostproc to YASM.
- Restructure the code so that it works with block sizes compatible with modern SIMD.
- Add integer SSE2 and AVX2 optimizations for each existing MMX/MMX2/3dnow optimization in libpostproc.

'''Prerequisites:''' C coding skills, good x86 assembly coding skills, basic familiarity with git.

'''Qualification Task:''' Convert 1 or 2 MMX2 functions to SSE2 and AVX2.

'''Mentor:''' Michael Niedermayer (''michaelni'' in #ffmpeg-devel on Freenode IRC, michaelni@gmx.at)

'''Backup Mentor:'''

== MPEG-4 Audio Lossless Coding (ALS) encoder ==

[[Image(wiki:SponsoringPrograms/GSoC/2014:showwaves_green.png, 240, right, nolink)]]

'''Description:'''
An MPEG-4 ALS decoder was implemented several years ago, but an encoder is still missing from the official codebase. A rudimentary encoder has already been written and is available on [https://github.com/justinruggles/FFmpeg-alsenc.git github]. For this project, that encoder is first to be updated to fit into the current codebase of FFmpeg and tested for conformance using the [http://www.nue.tu-berlin.de/menue/forschung/projekte/beendete_projekte/mpeg-4_audio_lossless_coding_als/parameter/en/#230252 reference codec and specifications]. Second, the encoder is to be brought through the usual reviewing process so that it hits the codebase at the end of the project.

'''Expected results:'''

- Update the existing encoder to fit into the current codebase.
- Verify conformance of the encoder against the reference codec and generate a test case for [http://ffmpeg.org/fate.html FATE].
- Ensure the FFmpeg decoder processes all generated files without warnings.
- Enhance the rudimentary feature set of the encoder.

'''Prerequisites:''' C coding skills, basic familiarity with git. An interest in audio coding and/or knowledge of the FFmpeg codebase would be beneficial.

'''Qualification Task:''' Add floating-point support to the MPEG-4 ALS decoder.

'''Mentor:''' Thilo Borgmann (thilo DOT borgmann AT mail DOT de)

'''Backup Mentor:''' Paul B Mahol (''durandal_1707'' in #ffmpeg-devel on Freenode IRC, onemda@gmail.com), Stefano Sabatini (''saste'' in #ffmpeg-devel on Freenode IRC, stefasab AT gmail DOT com)


== Hardware Acceleration API Software / Tracing Implementation ==

[[Image(wiki:SponsoringPrograms/GSoC/2014:hwaccel.jpg, right, nolink)]]

'''Description:''' Our support for hardware-accelerated decoding remains basically untested. This is partly because FFmpeg implements only part of the required steps, and partly because testing requires specific operating systems and hardware.

The idea would be to start with a simple stub implementation of an API such as VDPAU that provides only the most core functions. These would then serialize the function calls and data out to allow easy comparison and regression testing. Improvements to this approach include adding basic input validation and replay capability, to allow testing regression data against real hardware. This would be similar to what [https://github.com/apitrace/apitrace apitrace] does for OpenGL.

A further step would be to actually add support for decoding in software, so that full testing, including visual inspection, is possible without the need for special hardware.

'''Prerequisites:''' C coding skills, basic familiarity with git.

'''Qualification Task:''' Anything related to the hardware acceleration code, though producing first ideas and code pieces for this task would also be reasonable.

'''Mentor:''' Reimar Doeffinger (''reimar'' in #ffmpeg-devel on Freenode IRC, but since I'm rarely there, better email me first: Reimar.Doeffinger [at] gmx.de)

'''Backup Mentor:''' Stefano Sabatini (''saste'' in #ffmpeg-devel on Freenode IRC, stefasab AT gmail DOT com)


== MXF Demuxer Improvements ==

'''Description:''' The MXF demuxer needs a proper, compact way to map !EssenceContainer ULs to !WrappingKind. See [https://trac.ffmpeg.org/ticket/2776 ticket #2776] in our bug tracker; [https://trac.ffmpeg.org/ticket/1916 ticket #1916] contains additional relevant information.

Essence in MXF is typically stored in one of two ways: as an audio/video interleave, or with each stream in one huge chunk (such as 1 GiB of audio followed by 10 GiB of video). Previous ways of telling these apart have been technically wrong, but worked in practice, and we lack samples demonstrating the contrary.

'''Expected results:''' The sample in [https://trac.ffmpeg.org/ticket/2776 ticket #2776] should demux correctly. Add a test case in [http://ffmpeg.org/fate.html FATE]. The solution should grow libavformat by no more than 32 KiB.

'''Prerequisites:''' C coding skills, basic familiarity with git. Knowledge of MXF would be useful.

'''Qualification Task:''' Investigate whether there is a compact way of representing the UL -> !WrappingKind mapping specified in the [http://www.smpte-ra.org/mdd/RP224v10-publication-20081215.xls official RP224 Excel document]. The table takes up about half a megabyte verbatim, which is unacceptable in a library as large as libavformat.

'''Mentor:''' Tomas Haerdin (''thardin'' in #ffmpeg-devel on Freenode IRC, tomas.hardin a codemill.se)

'''Backup Mentor:''' Stefano Sabatini (''saste'' in #ffmpeg-devel on Freenode IRC, stefasab AT gmail DOT com)


== Basic servers for network protocols ==

'''Description:''' libavformat contains clients for various network protocols used in multimedia streaming: HTTP, RTMP, MMS, RTSP. Your work will be to implement the server side for one or several of these protocols.

The libavformat framework is not designed for building general-purpose server applications serving several clients, and nothing similar to the configuration features of real servers like Apache is expected, but libavformat should be able to stream a single predefined bytestream to/from a single client.

Note: server support is already implemented for the receiving side of RTSP.

'''Expected results:''' basic servers for network protocols capable of interoperating with third-party clients.

'''Prerequisites:''' C coding skills, basic familiarity with git, network programming.

'''Qualification Task:''' a proof-of-concept server for one of the protocols, capable of interacting with a particular client in controlled circumstances; or anything network-related, e.g. fixing a ticket in our [https://trac.ffmpeg.org/ bug tracker].

'''Mentor:''' Nicolas George (george ad nsup dot org)

'''Backup mentor:''' Reynaldo Verdejo (''reynaldo'' in #ffmpeg-devel on Freenode IRC, R Verdejo on g mail)


== HTTP/2 ==

'''Description:''' the [https://www.mnot.net/blog/2015/02/18/http2 final draft for the HTTP/2 protocol] has been published. It contains various new features that will probably be used to enhance distribution of multimedia content. Therefore, FFmpeg needs an implementation.

'''Expected results:''' an HTTP/2 client over TLS and TCP for reading and writing, capable of interacting with stock servers, including using the same connection for simultaneous requests.

'''Prerequisites:''' C coding skills, basic familiarity with git, network programming.

'''Qualification Tasks:'''

* Rework the current HTTP/1 client code to make it input-driven and support non-blocking mode.

* Implement the WebSocket protocol on top of the HTTP/1 client code.

'''Mentor:''' Nicolas George (george ad nsup dot org)

'''Backup mentor:''' TBA


== TrueHD encoder ==

'''Description:''' FFmpeg currently does not support encoding to TrueHD, one of the lossless audio formats used on Blu-ray discs. A nearly functional Meridian Lossless Packing (MLP) encoder has already been written and is available on [https://github.com/ramiropolla/soc/tree/master/mlp github]. The MLP codec is the basis for TrueHD. For this project, that encoder is first to be updated to fit into the current codebase of FFmpeg and tested for conformance against Surcode's MLP encoder. Second, the encoder is to be extended with TrueHD functionality, allowing it to losslessly encode audio for playback on hardware devices capable of TrueHD decoding. Finally, the encoder is to be brought through the usual reviewing process so that it hits the codebase at the end of the project.

'''Expected results:''' a TrueHD encoder that losslessly encodes audio, playable on hardware devices capable of TrueHD decoding, with a competitive compression rate.

'''Prerequisites:''' C coding skills, basic familiarity with git.

'''Qualification Task:''' Update the MLP encoder so that it produces a valid bitstream that FFmpeg can decode to silence without errors. Find out how to validate the generated bitstream other than with FFmpeg itself.

'''Mentor:''' Ramiro Polla (''ramiro'' in #ffmpeg-devel on Freenode IRC, ramiro DOT polla AT gmail DOT com)

'''Backup mentor:''' Stefano Sabatini (''saste'' in #ffmpeg-devel on Freenode IRC, stefasab AT gmail DOT com)

== Implement full support for 3GPP Timed Text (movtext, QuickTime) subtitles ==

'''Description:''' The standard subtitle format used in MP4 containers is 3GPP Timed Text, as defined in [http://www.3gpp.org/DynaReport/26245.htm 3GPP TS 26.245]. It is the only subtitle format supported in Apple's media players on OS X and iOS, and the only format that is part of the MPEG-4 standard. As such, it is important for FFmpeg to support the format as fully as possible. Currently, it supports a limited subset of the format, without any rich text formatting or the ability to position text on the screen. For this project, the goal would be to implement complete support for these features and have the implementation fully reviewed and merged into FFmpeg.

'''Expected Results:'''
- A display window for subtitles can be specified by the user when encoding or transcoding subtitles.
- A default window size based on the primary video stream will be implemented.
- As much text formatting metadata as can be expressed in ASS will be supported for both transcoding to and from Timed Text, including positional metadata.
- The OS X QuickTime player should be used to evaluate the behaviour of formatting metadata. (This appears to be the most feature-complete player with respect to the formatting features of 3GPP Timed Text.)
- Subtitle merging for overlapping subtitles will be implemented.

'''Prerequisites:''' C coding skills, basic familiarity with git, access to the OS X QuickTime player for playback verification.

'''Qualification Task:''' Implement support for transcoding bold, italic, and underline formatting to and from FFmpeg's internal ASS format. (This will require temporary patches, which already exist, for display window sizing/positioning.)

'''Mentor:''' Philip Langdale (''philipl'' in #ffmpeg-devel on Freenode IRC, philipl AT overt DOT org)

'''Backup mentor:''' Carl Eugen Hoyos (''cehoyos'' in #ffmpeg-devel on Freenode IRC, ce AT hoyos.ws)

== Improve Selftest coverage ==

'''Description:''' FFmpeg contains many self-tests, yet more code is untested than tested, so more such tests are needed to ensure that regressions and platform-specific bugs are quickly detected. Examples of existing self-tests can be found under #ifdef TEST in various files.

'''Expected results:''' Significantly improve self-test code coverage.

'''Prerequisites:''' Good C coding skills, basic familiarity with git.

'''Qualification Task:''' Improve self-test code coverage by at least 1% in two of the main libraries (libavcodec, libavformat, libavdevice, libavfilter, libavutil, libswresample, libswscale), as listed at http://coverage.ffmpeg.org/index.html

'''Mentor:''' Michael Niedermayer (''michaelni'' in #ffmpeg-devel on Freenode IRC, michaelni@gmx.at)

'''Backup Mentor:'''