How do recent smartphones obtain higher-resolution pictures through pixel binning?
First of all, I should say that I have no particular knowledge of photography, and I am not qualified to judge the technical details.
However, I recently came across some debates about phones that have only 12 MP sensors yet take 48 MP shots (the Redmi Note 7, for example). Reading up on this subject, which seemed interesting, I found people mentioning the "pixel binning" technique. From what I understood, pixel binning should REDUCE the image resolution.
Hence my question: how do recent smartphones obtain higher-resolution pictures through "pixel binning"?
I haven't been able to find a concrete answer to this question yet and hope someone can explain it to me in simple terms. Thanks in advance.
image-processing smartphone
asked May 17 at 8:16 by rangerh (183)
2 Answers
According to this source, the Redmi Note 7 Pro uses an IMX586 sensor; Sony's press release about the sensor is available here. The sensor actually has 48 megapixels, not 12. The page explains the technology quite well, but I'll try to make it even clearer.
Normally, camera sensors have something called a Bayer filter on top of the actual "pixels": essentially a colored grid that enables the sensor to measure different colors. A neat illustration is provided in the Wikipedia article.
The trick in this sensor is that the filter is arranged so that the red, blue, and green elements form 2×2 same-color blocks instead of the traditional configuration. (This results in slightly less accurate colors; see EDIT below.)
This allows the sensor to combine each single-color block into one bigger pixel that is more sensitive and has greater dynamic range. This is called pixel binning. Each bigger block is used to calculate a single output pixel value, so the resolution is only 1/4 of the original: 12 megapixels.
When shooting in daylight conditions, an algorithm instead calculates the values the sensor would have produced if the Bayer filter were "normal", and so 48 MP images are obtained.
(To actually answer the question in the title: they don't. Pixel binning is used to obtain more accurate values by sacrificing resolution.)
EDIT:
The following image is speculation about how the 48 MP images are made using the modified Bayer filter, based on the discussion in this thread. Sony doesn't reveal the full details of how it's actually done. It will probably result in decreased color accuracy: https://news.ycombinator.com/item?id=17601471
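As an aside (not part of the original answer), the binning step itself is easy to sketch: average each 2×2 block of raw pixel values into one output pixel, trading 4× resolution for lower noise. The tiny 4×4 array below is an illustrative assumption, not real sensor data:

```python
import numpy as np

# Hypothetical raw readout, shrunk from 48 MP to a tiny 4x4 example.
raw = np.array([
    [10, 12, 200, 202],
    [11, 13, 201, 203],
    [50, 52,  90,  92],
    [51, 53,  91,  93],
], dtype=np.float64)

def bin2x2(a):
    """Average each 2x2 block into one pixel: 1/4 the resolution, less noise."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

binned = bin2x2(raw)
print(binned)  # [[ 11.5 201.5]
               #  [ 51.5  91.5]]
```

Averaging four readings roughly halves the random noise per output pixel, which is why the binned 12 MP mode does better in low light.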
edited May 17 at 10:49; answered May 17 at 9:18 by vide (1,165)
Thank you, good sir, this is a very clear answer. I must say, however, that I didn't quite understand the last sentence: "When shooting in daylight conditions, an algorithm calculates the values as if the Bayer filter were 'normal' and so 48 MP images are obtained." So the Bayer filter is useful for pixel binning, and the algorithm allows the sensor NOT to use that technique, as if it didn't have the filter? If I understood correctly, I guess this confusion might be why I've seen people debating whether the 48 MP sensor is fake and only for marketing.
– rangerh
May 17 at 9:43
@rangerh I added an edit to the answer with speculation on how the 48 MP images are made. You can see how the "smaller blocks" outlined in yellow can still be formed even though the physical Bayer filter is arranged differently. The main point is that there really are 48 MP, and Sony just uses a clever arrangement of the filter to read out larger groups when necessary (low light). This will probably result in lower color accuracy, though. See more technical details here: news.ycombinator.com/item?id=17601471
– vide
May 17 at 10:47
Re: the speculation - no, they are not algorithmically recreating the hypothetical tiny Bayer pattern; why should they? The goal is to obtain pixel colors from the information, which does not always come in the form of the well-known pattern. Just use what is available; shuffling the pixel readings in the hope of getting a Bayer pattern is not a good idea.
– szulat
May 17 at 16:11
You don't get high resolution through binning; binning decreases the resolution. So the question should be: how do they achieve high resolution despite binning?
And the answer is simple: by not using it. That's the whole point of the binning: it can be enabled and disabled on demand, so the same sensor can run in a low-resolution, high-sensitivity mode in low light and still deliver the highest resolution when lighting conditions allow.
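Not part of the original answer, but the "on demand" switch can be sketched as a trivial decision rule. The lux threshold and the function name are illustrative assumptions, not any real camera API:

```python
# Hypothetical camera-pipeline sketch: pick the sensor readout mode
# from the measured scene brightness.

def choose_readout_mode(scene_lux, low_light_threshold=50):
    """Return (mode, megapixels) for a given scene brightness in lux."""
    if scene_lux < low_light_threshold:
        # 2x2 binning: four photosites per output pixel, cleaner 12 MP frame
        return ("binned", 12)
    # Good light: remosaic the quad-Bayer readout into a full 48 MP frame
    return ("full", 48)

print(choose_readout_mode(10))   # ('binned', 12)
print(choose_readout_mode(800))  # ('full', 48)
```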
answered May 17 at 9:20 by szulat (4,268)