Today, no smartphone presentation is complete without lavish praise for its camera. Every month we hear about another triumph of mobile cameras: Google teaches the Pixel to shoot in the dark, Huawei zooms like binoculars, Samsung adds a lidar, and Apple makes the roundest corners in the world. Few fields see innovation flow this boldly.
DSLRs, meanwhile, seem to be treading water. Sony showers everyone with new sensors every year, while camera makers lazily bump the last digit of the version number and keep lazily snorting coke on the sidelines. I have a $3,000 DSLR on my desk, but when I travel I take an iPhone. Why?
As the classic said, I went online with this question. People there debate "algorithms" and "neural networks" without any idea how exactly they affect a photo. Journalists loudly read out megapixel counts, bloggers churn out paid unboxings, and aesthetes drone on about the "sensual perception of the sensor's color palette". Business as usual.
I had to sit down, spend half my life on it, and figure it out myself. In this article, I'll tell you what I learned.
What is computational photography?
Everywhere, including Wikipedia, you'll find roughly this definition: computational photography is any image capture and processing technique that uses digital computation instead of optical processes. Everything about it is fine except that it's crap. It covers even autofocus, yet excludes plenoptics, which has already brought us plenty of useful things. The vagueness of the official definitions hints that we still have no idea what we're talking about.
The pioneer of computational photography, Stanford professor Marc Levoy (he now leads the camera work on the Google Pixel), offers another definition: a set of computational imaging techniques that enhance or extend the capabilities of digital photography, producing an ordinary photograph that could not technically have been taken with this camera in the traditional way. In this post, that's the one I stick to.
So, smartphones are to blame for it all.
Smartphones had no choice but to give life to a new kind of photography: computational.
Their small, noisy sensors and tiny, slow lenses should, by all the laws of physics, have brought nothing but pain and suffering. And they did, until their developers figured out how to use their strengths to overcome their weaknesses: fast electronic shutters, powerful processors, and software.
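The core trick behind those "strengths" can be sketched in a few lines: a fast electronic shutter grabs many short, noisy frames, and software averages them, so noise drops roughly as one over the square root of the frame count. A minimal simulation (the scene, sensor model, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "scene": a smooth gradient standing in for the true image.
scene = np.linspace(0.2, 0.8, 64 * 64).reshape(64, 64)

def capture_frame(scene, noise_sigma=0.1):
    """Simulate one short, noisy exposure from a small sensor."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

# A fast electronic shutter lets us grab many frames almost instantly...
frames = np.stack([capture_frame(scene) for _ in range(16)])

# ...and software averages them into one cleaner image.
stacked = frames.mean(axis=0)

noise_single = (frames[0] - scene).std()
noise_stacked = (stacked - scene).std()
print(f"single frame noise: {noise_single:.3f}")
print(f"stacked noise:      {noise_stacked:.3f}")
```

With 16 frames the residual noise comes out roughly four times lower than in any single frame, which is exactly the kind of win a small sensor can buy with computation instead of glass.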
Most of the high-profile research in computational photography dates to 2005-2015, which in science counts as just yesterday. Right now, before our eyes and in our pockets, a new field of knowledge and technology is developing that has never existed before.
Computational photography is not just selfies with neural bokeh. The recent photograph of a black hole would not have existed without its methods. To take that photo with an ordinary telescope, we would have had to build one the size of the Earth. Instead, by combining the data of eight radio telescopes at different points on our globe and writing some Python scripts, we got the world's first photograph of the event horizon. It'll do for selfies, too.
account…