How do recent smartphones obtain higher-resolution pictures through pixel binning?




















First of all, I should say that I have no particular background in photography, so please bear with me on the technical details.

However, I recently came across some debates about phones having only 12 MP sensors yet taking 48 MP shots (the Redmi Note 7, for example). Reading up on the subject, I saw people mention the "pixel binning" technique. From what I understood, pixel binning should REDUCE the image resolution.

Hence this question: how do recent smartphones obtain higher-resolution pictures through "pixel binning"?

I haven't been able to find a concrete answer to this yet, and I hope someone can explain it to me in simple terms. Thanks in advance.










image-processing smartphone






asked May 17 at 8:16
– rangerh






















2 Answers






































According to this source, the Redmi Note 7 Pro uses an IMX586 sensor. Sony's press release about the sensor is available here. The sensor actually has 48 megapixels, not 12. That page explains the technology quite well, but I'll try to make it even clearer.



Normally camera sensors have something called a Bayer filter on top of the actual "pixels", which is essentially a colored grid. This enables the sensor to measure different colors. A neat image is provided in the Wikipedia article.



The trick in this sensor is that the filter is arranged so that the red, green and blue pixels sit in 2×2 same-color blocks instead of the traditional configuration. (This results in slightly less accurate colors; see the EDIT below.)



This allows the sensor to combine each single-color block into one bigger effective pixel, which is more sensitive and has greater dynamic range. This is called pixel binning. Each block is read out as a single resulting pixel value, so the resolution is only 1/4 of the original, i.e. 12 megapixels.
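To make the binning step concrete, here is a minimal Python/NumPy sketch of 2×2 binning on such a block-arranged (Quad Bayer) mosaic. It is only an illustration of the idea described above: the averaging step, the array layout and the function names are my own assumptions, not the sensor's actual on-chip readout.

    import numpy as np

    def quad_bayer_pattern(h, w):
        """Color labels laid out in 2x2 same-color blocks (the 'block arrangement')."""
        tile = np.array([["R", "R", "G", "G"],
                         ["R", "R", "G", "G"],
                         ["G", "G", "B", "B"],
                         ["G", "G", "B", "B"]])
        return np.tile(tile, (h // 4 + 1, w // 4 + 1))[:h, :w]

    def bin_2x2(raw):
        """Combine each 2x2 same-color block into one pixel.

        `raw` is a single-channel mosaic of shape (H, W) with even H and W.
        The result has shape (H/2, W/2), i.e. 1/4 of the pixel count
        (48 MP -> 12 MP), and follows a conventional RGGB Bayer layout.
        Averaging four same-color samples improves the signal-to-noise
        ratio of the combined pixel."""
        h, w = raw.shape
        return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    # Toy example: an 8x8 "sensor" becomes a 4x4 binned mosaic.
    rng = np.random.default_rng(0)
    raw = rng.integers(0, 1024, size=(8, 8)).astype(float)
    print(quad_bayer_pattern(8, 8))
    print(bin_2x2(raw).shape)  # (4, 4)

Real sensors may perform the combination in the analog domain rather than in software, but the resolution trade-off is the same.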



When shooting in daylight conditions, an algorithm instead calculates the values as if the Bayer filter had the "normal" arrangement, and so 48 MP images are obtained.



[Images: traditional Bayer filter arrangement / block (Quad Bayer) arrangement]



          (To actually answer the question in the title: they don't. Pixel binning is used to obtain more accurate values by sacrificing resolution.)





EDIT:
The following image is speculation on how the 48 MP images are made using the modified Bayer filter, based on discussion in this thread. Sony doesn't reveal the full details of how it's actually done. It will probably result in decreased color accuracy: https://news.ycombinator.com/item?id=17601471



[Image: speculated rearrangement of the Quad Bayer samples into a conventional Bayer pattern]
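For what it's worth, the speculated rearrangement can be sketched in the same NumPy style: inside every 4×4 tile, both layouts contain exactly four R, eight G and four B samples, so the readings could in principle be shuffled onto a conventional RGGB grid. This is purely a toy illustration of the speculation above (the function names and the row-major assignment order are my own choices); as the comments below argue, a real remosaic algorithm would more likely interpolate the values rather than just move them around.

    import numpy as np

    QUAD = np.array([["R", "R", "G", "G"],
                     ["R", "R", "G", "G"],
                     ["G", "G", "B", "B"],
                     ["G", "G", "B", "B"]])
    RGGB = np.array([["R", "G", "R", "G"],
                     ["G", "B", "G", "B"],
                     ["R", "G", "R", "G"],
                     ["G", "B", "G", "B"]])

    def remosaic_tile(tile):
        """Shuffle one 4x4 Quad Bayer tile onto a conventional RGGB layout."""
        out = np.empty_like(tile)
        for color in "RGB":
            # Values measured at this color's Quad Bayer positions are
            # dropped onto the same color's positions in the RGGB grid,
            # in row-major order (an arbitrary choice for this sketch).
            out[RGGB == color] = tile[QUAD == color]
        return out

    def remosaic(raw):
        """Apply the tile-wise shuffle to a whole mosaic (H, W divisible by 4)."""
        out = raw.copy()
        for y in range(0, raw.shape[0], 4):
            for x in range(0, raw.shape[1], 4):
                out[y:y+4, x:x+4] = remosaic_tile(raw[y:y+4, x:x+4])
        return out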






answered May 17 at 9:18, edited May 17 at 10:49
– vide


























• Thank you good sir, this is a very clear answer. I must however say that I didn't quite understand the last sentence: "When shooting in daylight conditions, an algorithm instead calculates the values as if the Bayer filter had the 'normal' arrangement, and so 48 MP images are obtained." So the Bayer filter is useful for pixel binning, and the algorithm allows the phone to NOT use that technique and use the sensor as if it didn't have the filter? If I understood correctly, I guess this confusion might be why I've seen people debating whether the 48 MP sensor is fake and only for marketing.

– rangerh, May 17 at 9:43











• @rangerh I added an edit to the answer with speculation on how the 48 MP images are made. You can see how the "smaller blocks" outlined in yellow can still be formed even though the physical Bayer filter is arranged differently. The main point is that there really are 48 MP, and Sony just uses a clever arrangement of the filter so it can use larger groups when necessary (low light). This will probably result in lower color accuracy, though. See more technical details here: news.ycombinator.com/item?id=17601471

– vide, May 17 at 10:47











• Re: the speculation - no, they are not algorithmically recreating the hypothetical tiny Bayer pattern - why should they? The goal is to obtain pixel colors from the information, which does not always come in the form of the well-known pattern. Just use what is available; shuffling the pixel readings in the hope of getting a Bayer layout is not a good idea.

– szulat, May 17 at 16:11

































You don't get high resolution through binning; binning decreases the resolution. So the question should be: how do they achieve high resolution despite binning?



And the answer is simple: by not using it. That's the whole point of binning: it can be enabled and disabled on demand, so the same sensor can be used in a low-resolution, high-sensitivity mode in low light and still deliver the highest resolution when lighting conditions allow.
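As a bare-bones sketch of that decision (hypothetical names and threshold, not any real camera driver's API), the camera software simply picks a readout mode based on how much light is available:

    LOW_LIGHT_LUX = 50  # arbitrary threshold, chosen for illustration only

    def choose_readout_mode(scene_lux: float) -> str:
        """Return the sensor mode: bin in low light, full resolution otherwise."""
        if scene_lux < LOW_LIGHT_LUX:
            return "binned_12mp"   # trade resolution for sensitivity
        return "full_48mp"         # use every photosite individually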






answered May 17 at 9:20
– szulat























