Nvidia Develops an Incredibly Smart Slow-motion Video System


This new method converts regular-speed video into slow motion using artificial intelligence. It promises particular advantages in hardware demands and processing time for clips recorded with smartphones.

Slow motion is certainly one of the more demanding features on today's smartphone cameras. Thanks to Nvidia's work, you may be able to record slow-motion video on your phone without those constraints. Recording at 120 to 240 frames per second is quite common today, although it is not simple in terms of technology, and a dedicated camera capable of the same frame rates for slow motion alone would cost a significant amount of money.

The reason is that high-speed capture requires serious hardware performance and consumes a lot of energy, so you should not be surprised if your battery is nearly empty and your device runs hot after use. Nvidia is trying to make slow motion easier by shifting the work to software and artificial intelligence, and the results are more than good.

The human eye perceives a sequence of images as continuous motion at roughly 24 frames per second. Cinema films play at that speed, and the television standard is 25 to 30 frames per second. Footage captured at a higher rate must therefore be slowed back down to the standard speed in post-production. Logically, a larger number of frames means more information, and if we play those frames back over a longer interval, we get a smoother, clearer view of a given moment.
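The arithmetic behind this can be made concrete. The slowdown you get "for free" is just the ratio of the capture rate to the playback rate; a minimal sketch (the function name is illustrative, not from any particular API):

```python
def slowdown_factor(capture_fps: float, playback_fps: float) -> float:
    """How many times slower footage appears when every captured
    frame is played back at the standard playback rate."""
    return capture_fps / playback_fps

# 240 fps footage played back at 30 fps appears 8x slower.
print(slowdown_factor(240, 30))  # 8.0

# Standard 24 fps footage has no surplus frames to stretch (factor 1.0),
# which is why intermediate frames must be synthesized instead.
print(slowdown_factor(24, 24))  # 1.0
```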

For this reason, footage shot at the standard speeds of 24, 25 or 30 fps cannot be slowed down acceptably without visible artifacts. However, research conducted by Nvidia attempts to overcome this limitation with the help of machine learning. Professional video editing programs have long tried to achieve better results by blending neighboring frames, and this works best when the software can recognize what is in the video.

Here a deep learning algorithm enters the scene: it recognizes objects from the information collected and synthesizes additional intermediate frames, redistributing pixels smoothly between the originals. This can be a great help in post-production, because it allows footage to be slowed down even when it was not captured at a high frame rate. On smartphones, it would enable slow-motion shots while saving processor performance, processing time and memory space.
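The general idea of synthesizing in-between frames can be illustrated with the crudest possible baseline: a pixel-wise cross-fade between two neighboring frames. This is a hypothetical stand-in for the concept, not Nvidia's method, which uses a trained network with learned motion estimation:

```python
import numpy as np

def blend_intermediate(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Naive in-between frame at time t in [0, 1]: a pixel-wise cross-fade.
    Learned interpolators instead estimate per-pixel motion, so moving
    objects are shifted rather than ghosted."""
    return ((1.0 - t) * frame_a + t * frame_b).astype(frame_a.dtype)

def interpolate_clip(frames: list, factor: int) -> list:
    """Insert (factor - 1) blended frames between each neighboring pair,
    turning an n-frame clip into roughly n * factor frames."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for k in range(1, factor):
            out.append(blend_intermediate(a, b, k / factor))
    out.append(frames[-1])
    return out
```

On fast motion, cross-fading produces visible ghosting, which is exactly the artifact a learned, content-aware interpolator is meant to avoid.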

This research is only at an initial stage. Because it is a deep learning system, we will have to wait before the technology reaches users. Training alone is a long and demanding process: the software must learn from millions of compositions, objects and situations, while accounting for external factors such as camera movement, changing light conditions and the image stabilization used on smartphones. Still, it is hard to deny that this is a very interesting project.

