The future of photography is code, part 1

Today, no smartphone launch goes by without drooling over its camera. Every month we hear about another triumph of mobile photography: Google teaches the Pixel to shoot in the dark, Huawei zooms like a pair of binoculars, Samsung adds a lidar, and Apple makes the roundest corners in the world. Few fields see innovation flow this freely.

DSLRs, meanwhile, seem to be treading water. Sony showers everyone with new sensors every year, while camera makers lazily bump the last digit of the version number and keep snorting coke on the sidelines. I have a $3000 DSLR on my desk, but when I travel I take an iPhone. Why?

As the classics advise, I took this question to the internet. There, people discuss some "algorithms" and "neural networks" without knowing how exactly they affect the photo. Journalists loudly read out megapixel counts, bloggers churn out paid unboxings in unison, and aesthetes smear themselves with the "sensual perception of the sensor's color palette". Business as usual.

I had to sit down, spend half my life on it, and figure it out myself. In this article I'll tell you what I learned.

What is computational photography?

Everywhere, including Wikipedia, you'll find roughly this definition: computational photography is any image capture and processing technique that uses digital computation instead of optical processes. Everything about it is fine, except that it's crap. It covers even autofocus, yet it leaves out plenoptic photography, which has already brought us plenty of useful things. The vagueness of the official definitions hints that we have no idea what we're talking about.

Marc Levoy, a Stanford professor and pioneer of computational photography (he is now in charge of the camera in the Google Pixel), offers another definition: a set of computational imaging techniques that enhance or extend the capabilities of digital photography, producing an ordinary photograph that could not technically have been captured by this camera in the traditional way. That's the definition I stick to in this post.

So, smartphones were to blame for all of it.

Smartphones had no choice but to give birth to a new kind of photography: computational.

Their small, noisy sensors and tiny, slow lenses should, by all the laws of physics, have brought nothing but pain and suffering. And they did, until their developers figured out how to cleverly use their strengths to overcome their weaknesses: fast electronic shutters, powerful processors, and software. A toy sketch of that trade-off follows below.
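To make the idea concrete, here is a minimal sketch (my own illustration, not a description of any vendor's actual pipeline) of the simplest trick a fast electronic shutter enables: instead of one long, noisy exposure, grab a burst of short frames and average them in software, trading shutter speed and compute for the light the tiny lens can't gather. The noise level, frame count, and synthetic "scene" are all made-up parameters.

```python
import numpy as np

def simulate_noisy_frame(scene, noise_sigma=0.08, rng=None):
    """Simulate one short exposure from a small, noisy sensor (toy model)."""
    rng = rng or np.random.default_rng()
    return np.clip(scene + rng.normal(0.0, noise_sigma, scene.shape), 0.0, 1.0)

def stack_frames(frames):
    """Average a burst of frames: noise drops roughly as 1/sqrt(N)."""
    return np.mean(frames, axis=0)

# A made-up "scene": a smooth gradient standing in for a real image.
scene = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(256, axis=0)

rng = np.random.default_rng(42)
burst = [simulate_noisy_frame(scene, rng=rng) for _ in range(16)]
result = stack_frames(burst)

print(f"mean error, single frame:  {np.abs(burst[0] - scene).mean():.4f}")
print(f"mean error, 16-frame stack: {np.abs(result - scene).mean():.4f}")
```

Real mobile pipelines align the frames before merging and do far more than a plain average, but the principle of trading many fast, noisy captures for one clean result is the same.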

Most of the high-profile research in computational photography dates from 2005-2015, which in science counts as just yesterday. Right now, before our eyes and in our pockets, a new field of knowledge and technology is taking shape, one that never existed before.

Computational photography isn't just selfies with neural bokeh. The recent photograph of a black hole would never have come into existence without computational photography techniques. To take such a photo with an ordinary telescope, we would have had to build one the size of the Earth. However, by combining data from eight radio telescopes at different points of our little globe and writing a bit of Python, we got the world's first photograph of an event horizon. It'll do for selfies, too.
