Imagine it's 2007 again. Anarchy is our mother, and our photos are noisy 0.6-megapixel JPEGs shot on a skateboard. That's roughly when we first feel the irresistible urge to slap presets all over them to hide the wretchedness of mobile sensors. Let's not deny ourselves.
Math and Instagram
With the release of Instagram, everyone became obsessed with filters. As someone who once reverse-engineered X-Pro II, Lo-Fi, and Valencia for, of course, strictly research purposes, I still remember that they consisted of three components:
Color settings (hue, saturation, lightness, contrast, levels, etc.) are simple numeric coefficients, exactly like the presets photographers have used since time immemorial.
Tone mapping curves are a vector of values, each of which says something like "red with value 128 must become value 240." They were often stored as a one-pixel-tall gradient image, something like this example for the X-Pro II filter.
The overlay is a translucent image with dust, grain, a vignette, and anything else that can be laid on top to get the totally-not-clichéd old-film look. Not always present.
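The three stages above can be sketched in a few lines. This is a toy illustration, not reverse-engineered Instagram math: the S-curve and opacity values here are made up, and real filters use a separate curve per channel.

```python
# Minimal sketch of the classic filter anatomy: a 256-entry tone-mapping
# LUT (the "one-pixel image"), applied per channel, plus an optional
# translucent overlay blended on top. Pixels are 8-bit (r, g, b) tuples.

def make_contrast_lut(strength=0.3):
    """Build a 256-entry S-curve lookup table (the tone-mapping 'map')."""
    lut = []
    for i in range(256):
        x = i / 255.0
        s = x * x * (3 - 2 * x)           # smoothstep-style S-curve
        # Blend identity with the S-curve by `strength`.
        lut.append(round(255 * ((1 - strength) * x + strength * s)))
    return lut

def apply_filter(pixel, lut, overlay_pixel=None, opacity=0.2):
    """Apply the LUT per channel, then blend an optional overlay
    (dust/grain/vignette) with the given opacity."""
    out = [lut[c] for c in pixel]
    if overlay_pixel is not None:
        out = [round((1 - opacity) * c + opacity * o)
               for c, o in zip(out, overlay_pixel)]
    return tuple(out)

lut = make_contrast_lut()
print(apply_filter((128, 200, 40), lut))
```

Note how the LUT leaves black and white alone, pushes shadows down, and lifts highlights; that S-shape is exactly what the one-pixel gradient image encodes.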
Modern filters haven't strayed far from this trio, though the math has gotten a bit harder. With the arrival of hardware shaders and OpenCL on smartphones, filters were quickly rewritten for the GPU, which was considered wildly awesome. For 2012, of course. Today any student can do the same in CSS and still won't get lucky at prom.
Filter progress hasn't stopped there, though. The folks at Dehanser, for example, have gotten really good at non-linear filters: instead of proletarian tone mapping, they use more complex non-linear transformations, which, they say, opens up far more possibilities. Follow their blog if you're interested.
Non-linear transformations can do a lot of things, but they're incredibly complex, and we humans are incredibly stupid. Whenever science runs into non-linear transformations, we prefer to retreat to numerical methods and cram neural nets everywhere so they write the masterpieces for us. The same thing happened here.
Automation and dreams of a "masterpiece" button
Once everyone got used to filters, we started building them straight into cameras. History is murky about which manufacturer was first, but just to gauge how long ago it was: iOS 5.0, released back in 2011, already had a public API for auto-enhancing images. Jobs alone knows how long it had been in use before being opened to the public.
The automation did the same things each of us does when opening a photo in an editor: pulled up the dips in highlights and shadows, piled on the saturation, removed red eyes, and fixed skin tone. Users didn't even realize that the "dramatically improved camera" in their new smartphone was merely the work of a couple of new shaders. Five more years remained before the release of the Google Pixel and the start of the computational photography hype.
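A toy version of that auto-enhance pass looks like the sketch below. This assumes nothing about Apple's actual implementation; it's just the two classic moves — stretch the levels so the histogram fills the full 0–255 range, then nudge saturation up — using only the standard library's `colorsys`.

```python
# Toy auto-enhance: levels stretch ("pull up the dips") + saturation
# boost. Pixels are 8-bit (r, g, b) tuples.

import colorsys

def auto_enhance(pixels, sat_boost=1.2):
    # Levels: find the darkest and brightest channel values actually used
    # and stretch them to 0 and 255.
    lo = min(min(p) for p in pixels)
    hi = max(max(p) for p in pixels)
    scale = 255 / (hi - lo) if hi > lo else 1.0

    out = []
    for pixel in pixels:
        # Stretched values, normalized to 0..1 for colorsys.
        r, g, b = ((c - lo) * scale / 255 for c in pixel)
        # Saturation: bump S in HSV space, clamped to 1.
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        r, g, b = colorsys.hsv_to_rgb(h, min(1.0, s * sat_boost), v)
        out.append(tuple(round(c * 255) for c in (r, g, b)))
    return out

flat = [(40, 60, 50), (120, 130, 110), (180, 200, 190)]
print(auto_enhance(flat))
```

Red-eye removal and skin-tone correction need actual detection logic, so they're left out; the point is that the "dramatically improved camera" is a handful of operations this size, just running in a shader.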
Today the battle for the "masterpiece" button has moved into machine-learning territory. Having played with tone mapping long enough, everyone rushed to train CNNs and GANs to move the sliders instead of the user: that is, given an input image, to find the set of parameters that would bring it closest to some subjective notion of "good photography." It's implemented in Pixelmator Pro and other editors. It works, as you can guess, not great and not always. Via the links below you can read the papers, grab the datasets, and practice yourself.
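Structurally, the whole ML approach reduces to a two-step pipeline: image in, predicted slider values out, then ordinary filter math applies them. In the sketch below a trivial brightness heuristic stands in for the CNN/GAN; the "exposure" slider and the interface around it are assumptions for illustration, not any real editor's API.

```python
# image -> predicted edit parameters -> ordinary slider application.
# Pixels are 8-bit (r, g, b) tuples.

def predict_sliders(pixels):
    """Stand-in for the trained model: map an image to edit parameters.
    Here: push mean brightness toward mid-gray (128)."""
    mean = sum(sum(p) for p in pixels) / (3 * len(pixels))
    return {"exposure": (128 - mean) / 255}

def apply_sliders(pixels, params):
    """The ordinary editor side: apply the predicted parameters."""
    shift = round(params["exposure"] * 255)
    clamp = lambda c: max(0, min(255, c + shift))
    return [tuple(clamp(c) for c in p) for p in pixels]

img = [(30, 40, 50), (60, 70, 80)]  # an underexposed "photo"
print(apply_sliders(img, predict_sliders(img)))
```

The trained networks replace only `predict_sliders`; everything downstream is the same coefficient-and-curve machinery the filters have used all along.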