Creating thinned models during the dropout process


Applying dropout to a neural network amounts to sampling a “thinned” network from it. The thinned network consists of all the units that survived dropout. A neural net with $n$ units can be seen as a collection of $2^n$ possible thinned neural networks.




Source: Dropout: A Simple Way to Prevent Neural Networks from Overfitting, p. 1931.



How are we getting these $2^n$ models?










Tags: machine-learning, deep-learning, dropout






asked 2 days ago by ashirwad
2 Answers

Answer by usεr11852 (score 4):
The statement is a bit of an oversimplification, but the idea is that, assuming we have $n$ nodes and each of these nodes might be "dropped", we have $2^n$ possible thinned neural networks. Obviously, dropping an entire layer would alter the whole structure of the network, but the idea is straightforward: we ignore the activations/information from certain randomly selected neurons, thus encouraging the network to learn redundant representations and discouraging over-fitting on very specific features.
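To make the counting concrete, here is a minimal sketch of my own (not from the answer or the paper), assuming standard per-unit Bernoulli dropout: each of the $n$ units is independently either kept or dropped, so there are $2^n$ distinct masks, i.e. $2^n$ possible thinned sub-networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_thinned_mask(n_units, p_drop=0.5):
    """Sample one 'thinned' network: each of the n_units is independently
    dropped (0) with probability p_drop, otherwise kept (1)."""
    return tuple((rng.random(n_units) >= p_drop).astype(int))

n = 4
masks = {sample_thinned_mask(n) for _ in range(10_000)}
print(len(masks))  # approaches 2**n = 16 distinct thinned networks
```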



The same idea has also been employed in gradient boosting machines, where instead of "ignoring neurons" we "ignore trees" at random (see Rashmi & Gilad-Bachrach (2015), DART: Dropouts meet Multiple Additive Regression Trees).



Minor edit: I just saw Djib2011's answer (+1), which shows specifically why the statement is somewhat of an over-simplification. If we assume that we can drop any subset of the neurons (including all of them, or none), we have $2^n$ possible networks.






Answer by Djib2011 (score 0):
I too haven't understood their reasoning; I always assumed it was a typo or something...



The way I see it, if we have $n$ hidden units in a neural network with a single hidden layer and we apply dropout keeping $r$ of those, we'll have:



$$
\frac{n!}{r! \cdot (n - r)!}
$$



            possible combinations (not $2^n$ as the authors state).




            Example:



            Assume a simple fully connected neural network with a single hidden layer with 4 neurons. This means the hidden layer will have 4 outputs $h_1, h_2, h_3, h_4$.



            Now, you want to apply dropout to this layer with a 0.5 probability (i.e. half of the outputs will be dropped).



            Since 2 out of the 4 outputs will be dropped, at each training iteration we'll have one of the following possibilities:



            1. $h_1, h_2$

            2. $h_1, h_3$

            3. $h_1, h_4$

            4. $h_2, h_3$

            5. $h_2, h_4$

            6. $h_3, h_4$

            or by applying the formula:



$$
\frac{4!}{2! \cdot (4 - 2)!} = \frac{24}{2 \cdot 2} = 6
$$
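As a quick check of the arithmetic (a snippet of my own, not part of the answer), the same count can be enumerated directly:

```python
from itertools import combinations
from math import comb

units = ["h1", "h2", "h3", "h4"]
kept = list(combinations(units, 2))  # every way to keep exactly 2 of the 4 outputs
print(len(kept), kept)               # 6 pairs: ('h1', 'h2'), ('h1', 'h3'), ...
print(comb(4, 2))                    # 6, i.e. 4! / (2! * (4 - 2)!)
```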






Comments:

• I do not think that most implementations of dropout work by saying: "if there are 100 neurons and the probability is 0.05, I have to disable exactly 5 neurons chosen at random." Instead, each neuron is disabled with probability 0.05, independently of what happens with the rest. Hence, the cases where all or no neurons are disabled, while unlikely, are possible. – Daniel López, 2 days ago

• @DanielLópez: I think both you and Djib2011 (+1 both) are factually correct on this; the statement is clearly oversimplifying things. You also need to take into account that most of the networks this paper is concerned with have thousands of neurons, so it is generally accepted that no layer will be switched off entirely. – usεr11852, 2 days ago

• Agree, but I believe the above example transmits the idea that exactly $n \cdot \text{prob}$ units are disabled with dropout, where $\text{prob}$ is the dropout probability. And this is not how dropout works. – Daniel López, 2 days ago

• Well... LLN is our friend. :) – usεr11852, 2 days ago

• The flaw in the reasoning presented here is that dropout sets weights to 0 independently, with some fixed probability. This implies that the number of zeroed weights at each step has a binomial distribution, because dropout has the three defining characteristics of a binomial distribution: (1) dichotomous outcomes (weights are on or off), (2) a fixed number of trials (the number of weights in the model doesn't change), and (3) a success probability that is fixed and independent for each trial. – Sycorax, 2 days ago
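A small simulation (a sketch of my own, assuming standard per-unit Bernoulli dropout) illustrates the point made in the comments: the number of dropped units fluctuates around $n \cdot \text{prob}$ rather than equaling it exactly, and the extreme cases occur with small but nonzero probability.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_drop, trials = 100, 0.05, 100_000

# Each unit is dropped independently with probability p_drop, so the
# number of dropped units per step is Binomial(n, p_drop), not a constant.
dropped = rng.random((trials, n)) < p_drop
counts = dropped.sum(axis=1)

print(counts.mean())         # ~5.0: n * p_drop on average
print((counts == 5).mean())  # exactly 5 dropped only ~18% of the time
print((counts == 0).mean())  # no unit dropped: ~(1 - p_drop)**n ≈ 0.006
```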










