High-quality images on Google Pixel phones are not thanks to the camera alone!
The camera in Google's Pixel phones scored 89 points on the DxOMark benchmark, a score no other smartphone camera had reached before, making it the best on the market for the time being according to tests and the verdicts of specialists in the field.
To be technically precise, the Pixel and Pixel XL are fitted with Sony's IMX378 sensor, the same sensor used in the Xiaomi Mi 5S, yet the difference in output quality between the two devices is unmistakable. It is worth noting here that the image sensors in 99% of smartphones are manufactured by Sony as well.
Some might think that a sensor with high-end specifications would be enough on its own. Sometimes it is, but not always, especially for companies like Google, Samsung, and Apple. These sensors can indeed capture highly detailed images, but the hunger of these companies to offer the best has pushed them to develop very sophisticated systems capable of processing the sensor's output within fractions of a second.
The default shooting mode on Pixel phones, called HDR+, captures several frames in rapid succession, then analyzes their details internally, adjusts the tones, and finally merges the results of all these operations into a single image of very high accuracy and detail.
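As a rough illustration, the pipeline can be thought of as "merge a burst, then tone-map the result." The sketch below is a minimal approximation in Python, assuming a static scene and a simple global gamma curve; Google's actual HDR+ pipeline aligns frames first and tone-maps locally.

```python
import numpy as np

def hdr_plus_sketch(frames):
    """Toy HDR+-style pipeline: merge a burst, then tone-map.
    `frames` is a list of same-sized RGB arrays (float32 in [0, 1])."""
    # Merging the burst averages out sensor noise (a static scene is
    # assumed here; the real pipeline aligns frames before merging).
    merged = np.mean(np.stack(frames), axis=0)
    # A gamma curve below 1 lifts shadows while preserving highlights,
    # standing in for HDR+'s far more sophisticated local tone mapping.
    return np.clip(np.power(merged, 1 / 2.2), 0.0, 1.0)
```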
In addition, Google offers the Smart Burst feature, which analyzes 30 frames per second while the user is capturing. Roughly speaking, the system keeps about 10 shots, judged by their tonal balance and by the faces detected within each frame, then saves them to the photo album and proposes one as the best after analyzing all the details and picking the sharpest of them all.
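A toy version of that selection step might score each frame on sharpness, tonal balance, and detected faces, then keep the highest scorer. The weights and criteria below are illustrative assumptions, not Google's actual scoring:

```python
import numpy as np

def pick_best_frame(frames, face_counts):
    """Score burst frames and return the index of the best one.
    `frames`: grayscale arrays in [0, 1]; `face_counts`: faces per frame."""
    def sharpness(img):
        # Blurry frames have weak edges, so gradient variance is a
        # cheap sharpness proxy.
        gy, gx = np.gradient(img.astype(np.float64))
        return np.var(gx) + np.var(gy)

    def tonal_balance(img):
        # Penalize frames whose mean brightness drifts far from mid-gray.
        return -abs(float(img.mean()) - 0.5)

    scores = [sharpness(f) + 0.1 * tonal_balance(f) + 0.05 * n
              for f, n in zip(frames, face_counts)]
    return int(np.argmax(scores))
```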
The camera's light sensor is also very large, which lets it gather a greater amount of light during capture. That is very useful in night photography, but Google has put it to work for image stabilization as well.
When capturing an image, Google chose to make the time between the shutter opening and closing very short, which allows many frames to be captured in rapid succession before they are all merged together, producing a sharp image with clear details. In other words, the user presses the shutter button once, but the camera captures the scene several times, analyzes its elements, and merges the frames; this also lets it capture the best possible color tones and combine them for the best result.
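The payoff of merging many short exposures is easy to demonstrate: averaging N noisy frames of the same scene cuts the noise by roughly a factor of sqrt(N). A quick numerical check with synthetic frames:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))          # stand-in for the true scene
noise_sigma = 0.05
frames = [scene + rng.normal(0, noise_sigma, scene.shape) for _ in range(9)]

merged = np.mean(np.stack(frames), axis=0)
print(np.std(frames[0] - scene))      # per-frame noise, ~0.05
print(np.std(merged - scene))         # ~0.05 / sqrt(9) = ~0.017
```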
Google also benefits from algorithms developed to keep the image stable and free of shake, even when the phone itself shakes. The camera is not equipped with the hardware stabilizer known as Optical Image Stabilization, or OIS, yet the system is able to overcome the problem with ease.
What the system really does is take advantage of the detailed image provided by the sensor, together with the fast shutter that allows several frames of the same scene to be captured, plus algorithms that check whether the elements stayed in place or shifted as a result of shake; the system then corrects the problem if it finds one.
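One classic way to detect such a shift between two frames is phase correlation: estimate the translation, undo it, then merge. The sketch below is an assumed textbook stand-in, not the Pixel's actual alignment code:

```python
import numpy as np

def align_and_merge(frames):
    """Estimate each frame's translation against the first frame via
    phase correlation, undo it, and average the aligned stack."""
    ref = frames[0].astype(np.float64)
    merged = ref.copy()
    f_ref = np.fft.fft2(ref)
    for frame in frames[1:]:
        f = np.fft.fft2(frame)
        cross_power = f_ref * np.conj(f)
        corr = np.fft.ifft2(cross_power / (np.abs(cross_power) + 1e-12))
        # The peak of the correlation surface gives the shift that maps
        # this frame back onto the reference.
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        merged += np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
    return merged / len(frames)
```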
When shooting video, however, Google connected the camera to the phone's gyroscope and accelerometer: the Pixel's imaging system reads the phone's orientation and state 200 times per second, which in turn keeps the recorded scene free of shake during filming as well.
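Conceptually, the gyroscope samples are integrated into a rotation per video frame, and each frame is then shifted the opposite way to cancel it. The model below is a deliberately simplified sketch; the 200 Hz sampling rate comes from the article, while the frame rate and focal length are assumed for illustration:

```python
import numpy as np

GYRO_HZ = 200      # gyro sample rate cited in the article
FPS = 30           # assumed video frame rate
FOCAL_PX = 1500.0  # assumed focal length in pixels

def stabilization_offsets(gyro_rates):
    """Turn (yaw, pitch) angular velocities in rad/s into a per-frame
    pixel offset that counteracts the measured rotation."""
    dt = 1.0 / GYRO_HZ
    samples_per_frame = GYRO_HZ // FPS
    angle = np.zeros(2)   # accumulated (yaw, pitch) in radians
    offsets = []
    for i in range(0, len(gyro_rates), samples_per_frame):
        for rate in gyro_rates[i:i + samples_per_frame]:
            angle += np.asarray(rate, dtype=np.float64) * dt
        # Small-angle model: theta radians of rotation moves the image
        # about FOCAL_PX * theta pixels; shift the opposite way to cancel.
        offsets.append((-FOCAL_PX * angle[0], -FOCAL_PX * angle[1]))
    return offsets
```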
The integration Google built between the camera and the system does not stop there; thanks to Sony's fast sensor and the image-processing chip Google paired with it, the Pixel's camera takes imaging even further.
When an image is captured, more than one frame is captured at a time, as mentioned earlier. The camera automatically communicates with the system to focus on the elements within the frame and to identify objects moving toward or away from the camera, which is what allows every element to appear in full detail without any part being neglected.
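A crude cue for "moving toward or away" is how a tracked subject's apparent size changes between frames: growth means it is approaching, so focus should move closer. This is an illustrative heuristic, not the Pixel's actual focus-tracking logic:

```python
def focus_cues(subject_areas):
    """Map a tracked subject's bounding-box area over successive frames
    to a coarse focus instruction."""
    cues = []
    for prev, curr in zip(subject_areas, subject_areas[1:]):
        ratio = curr / prev
        if ratio > 1.05:
            cues.append("approaching -> pull focus closer")
        elif ratio < 0.95:
            cues.append("receding -> push focus farther")
        else:
            cues.append("steady -> hold focus")
    return cues

# Example: a subject walking toward the camera, then stopping.
print(focus_cues([1000, 1120, 1260, 1265]))
```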
All of this requires a large amount of light reflected off the objects in the scene so the camera can see them. But what about low-light situations? Google solved that problem with a laser beam.
Looking at the camera on the back of the phone, the user will notice two small openings next to the microphone: the first houses the laser emitter, and the second houses the receiver that picks up the reflected beam.
The system emits a cone-shaped infrared laser, and like any beam it is reflected when it strikes an object in front of it; the receiver reads this reflection and works out what the object is and where it sits.
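The distance measurement itself rests on a simple time-of-flight relation: the beam travels to the object and back, so the distance is half the round trip multiplied by the speed of light. A quick worked example:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_m(round_trip_seconds):
    """Time-of-flight distance: the pulse covers the gap twice."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A round trip of ~6.67 nanoseconds puts the object about 1 meter away.
print(distance_m(6.67e-9))  # ~1.0
```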
All of the above shows the scale of the effort Google has poured into the Pixel's camera in terms of algorithms and system design, with all of these techniques running in mere fractions of a second without the user ever noticing. To give the other companies their due, the cameras of the iPhone 7 and the Galaxy S7 edge are no less capable, and their image-processing systems are also very powerful, carrying out a wide range of operations in fractions of a second; the Pixel simply stands out in the fine details and in how it handles and solves common problems.