Intuition behind eigenvalues of an adjacency matrix





I am currently working to understand the Cheeger bound and Cheeger's inequality, and their use in spectral partitioning, conductance, expansion, etc., but I still struggle to develop the beginnings of an intuition for the second eigenvalue of the adjacency matrix.

Usually, in graph theory, most of the concepts we come across are quite simple to intuit, but in this case I can't even come up with what kind of graphs would have a very low, or very high, second eigenvalue.

I've been reading similar questions asked here and there on the SE network, but they usually refer to eigenvalues in different fields (multivariate analysis, Euclidean distance matrices, correlation matrices, ...), not to spectral partitioning and graph theory.

Can someone share their intuition/experience of this second eigenvalue in the case of graphs and adjacency matrices?

























  • Are you familiar with the connection between the spectrum of the adjacency matrix and the convergence of random walks on the graph? – Yuval Filmus, May 28 at 14:14










  • @YuvalFilmus Not at all, despite being really familiar with random walks, and somewhat familiar with the spectrum of an adjacency matrix. So I'm interested in your view indeed :) – m.raynal, May 28 at 15:00


















graph-theory adjacency-matrix






asked May 28 at 13:36 by m.raynal
edited May 29 at 9:57 by Glorfindel

























3 Answers


































The second (in magnitude) eigenvalue controls the rate of convergence of the random walk on the graph. This is explained in many lecture notes, for example the lecture notes of Luca Trevisan. Roughly speaking, the $L_2$ distance to uniformity after $t$ steps can be bounded by $\lambda_2^t$.
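
To make this concrete, here is a minimal numpy sketch (my own illustration, not part of the original answer): it builds a small graph, runs the random walk, and prints the $L_2$ distance to the stationary distribution alongside the predicted decay $\lambda_2^t$, where $\lambda_2$ is the second-largest eigenvalue in magnitude of the transition matrix. The graph and constants are arbitrary choices.

```python
# Illustrative sketch (not from the answer): random-walk convergence vs. the
# second eigenvalue. The graph (a 20-cycle plus one chord) is an arbitrary
# choice; for a non-regular graph the walk converges to the stationary
# distribution d/sum(d), which is uniform when the graph is regular.
import numpy as np

n = 20
A = np.zeros((n, n))
for i in range(n):                                # cycle edges
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
A[0, n // 2] = A[n // 2, 0] = 1                   # chord: breaks bipartiteness

d = A.sum(axis=1)
P = A / d[:, None]                                # random-walk transition matrix
pi = d / d.sum()                                  # stationary distribution

lam2 = np.sort(np.abs(np.linalg.eigvals(P)))[-2]  # second-largest |eigenvalue|

p = np.zeros(n); p[0] = 1.0                       # walk started at vertex 0
for t in range(1, 31):
    p = p @ P
    print(t, np.linalg.norm(p - pi), lam2 ** t)   # distance shrinks ~ lam2^t
```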



Another place where the second eigenvalue shows up is the planted clique problem. The starting point is the observation that a random $G(n,1/2)$ graph contains a clique of size $2\log_2 n$, but the greedy algorithm only finds a clique of size $\log_2 n$, and no better efficient algorithm is known. (The greedy algorithm just picks a random node, throws away all non-neighbors, and repeats.)
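
As an aside, the greedy procedure is easy to simulate; the following sketch (my own, with arbitrary constants) typically returns a clique of size close to $\log_2 n$ on a $G(n,1/2)$ sample:

```python
# Illustrative sketch (my own) of the greedy heuristic just described: pick a
# random remaining vertex, keep only its neighbors, repeat. On G(n, 1/2) the
# surviving set roughly halves each round, giving a clique of size ~ log2(n).
import numpy as np

rng = np.random.default_rng(3)
n = 1024
U = np.triu(rng.random((n, n)) < 0.5, 1)
A = U | U.T                                     # adjacency of a G(n, 1/2) sample

alive = list(range(n))
clique = []
while alive:
    v = int(rng.choice(alive))                  # pick a random remaining vertex
    clique.append(v)
    alive = [u for u in alive if u != v and A[v, u]]

print(len(clique), "vs log2(n) =", int(np.log2(n)))  # about 10 for n = 1024
```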



This suggests planting a large clique on top of $G(n,1/2)$. The question is: how big does the clique need to be so that we can find it efficiently? If we plant a clique of size $C\sqrt{n\log n}$, then we can identify the vertices of the clique just by their degree; but this method only works for cliques of size $\Omega(\sqrt{n\log n})$. We can improve on this using spectral techniques: if we plant a clique of size $C\sqrt{n}$, then the second eigenvector encodes the clique, as Alon, Krivelevich and Sudakov showed in a classic paper.
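
Here is a rough sketch (my own illustration, with ad hoc constants; the actual Alon–Krivelevich–Sudakov algorithm includes a cleanup step) of the spectral idea: plant a clique of size about $C\sqrt{n}$ and look at the large-magnitude coordinates of the second eigenvector.

```python
# Illustrative sketch of the spectral idea behind planted-clique recovery:
# clique vertices tend to have the largest coordinates (in absolute value)
# of the second eigenvector of the adjacency matrix. Constants are ad hoc.
import numpy as np

rng = np.random.default_rng(0)
n, k = 400, 60                          # k = 3*sqrt(n) planted clique
U = np.triu(rng.random((n, n)) < 0.5, 1).astype(float)
A = U + U.T                             # symmetric G(n, 1/2) sample
clique = rng.choice(n, size=k, replace=False)
A[np.ix_(clique, clique)] = 1.0         # plant the clique
np.fill_diagonal(A, 0)

vals, vecs = np.linalg.eigh(A)          # eigenvalues in ascending order
v2 = vecs[:, -2]                        # eigenvector of second-largest eigenvalue

guess = np.argsort(-np.abs(v2))[:k]     # k largest coordinates in magnitude
print(len(set(guess.tolist()) & set(clique.tolist())), "of", k, "clique vertices found")
```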



More generally, the first few eigenvectors are useful for partitioning the graph into a small number of clusters. See, for example, Chapter 3 of Luca Trevisan's lecture notes, which describes higher-order Cheeger inequalities.






– Yuval Filmus, answered May 28 at 15:15
















(Disclaimer: This answer is about eigenvalues of graphs in general, not the second eigenvalue in particular. I hope it is helpful nevertheless.)

An interesting way of thinking about the eigenvalues of a graph $G = (V, E)$ is by taking the vector space $\mathbb{R}^n$ where $n = |V|$ and identifying each vector with a function $f\colon V \to \mathbb{R}$ (i.e., a vertex labeling). An eigenvector of the adjacency matrix, then, is an element $f \in \mathbb{R}^n$ such that there is $\lambda \in \mathbb{R}$ (i.e., an eigenvalue) with $A f = \lambda f$, $A$ being the adjacency matrix of $G$. Note that $A f$ is the vector associated with the map which sends every vertex $v \in V$ to $\sum_{u \in N(v)} f(u)$, $N(v)$ being the set of neighbors of (i.e., vertices adjacent to) $v$. Hence, in this setting, the eigenvector property of $f$ corresponds to the property that summing the function values (under $f$) over the neighbors of a vertex yields the same result as multiplying the function value of that vertex by the constant $\lambda$.
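
One can check this numerically on a small graph; the following sketch (my own illustration, not part of the original answer) verifies that $A f$ computes neighbor sums, and that for an eigenvector those sums equal $\lambda$ times the vertex value:

```python
# Small numerical check (my own sketch): on the 4-cycle, (A f)(v) equals the
# sum of f over the neighbors of v, and for an eigenvector that sum equals
# lambda * f(v) at every vertex.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)       # 4-cycle 0-1-2-3-0

f = np.array([1.0, -2.0, 3.0, 0.5])             # an arbitrary vertex labeling
Af = A @ f
for v in range(4):
    nbrs = np.nonzero(A[v])[0]
    assert np.isclose(Af[v], f[nbrs].sum())     # (A f)(v) = sum over N(v)

lam, vecs = np.linalg.eigh(A)                   # eigenvalues in ascending order
v_top = vecs[:, -1]                             # eigenvector for lambda = 2
print(np.allclose(A @ v_top, lam[-1] * v_top))  # neighbor sums = lambda * value
```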






– dkaeae, answered May 28 at 14:03

  • Thanks a lot, I had never 'seen' that the eigenvector multiplied by lambda has the value of the sum of the function values of the neighbors (even if it comes straight from the definition). – m.raynal, May 28 at 14:09






  • Me neither :) I found it by chance in a syllabus on eigenvalues of graphs. – dkaeae, May 28 at 14:14



































I think for most things it's more productive to look at the Laplacian of the graph $G$, which is closely related to the adjacency matrix. Here you can use it to relate the second eigenvalue to a "local vs. global" property of the graph.



For simplicity, let's suppose that $G$ is $d$-regular. Then the normalized Laplacian of $G$ is $L = I - \frac1d A$, where $I$ is the $n \times n$ identity and $A$ is the adjacency matrix. The nice thing about the Laplacian is that, writing vectors as functions $f\colon V \to \mathbb{R}$ as @dkaeae does, and using $\langle \cdot, \cdot \rangle$ for the usual inner product, we have this very nice expression for the quadratic form given by $L$:
$$
\langle f, Lf\rangle = \frac{1}{d} \sum_{(u,v) \in E}{(f(u) - f(v))^2}.
$$
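
A quick numerical check of this identity (my own sketch; the graph $K_{3,3}$ is an arbitrary 3-regular example, and each edge is counted once in the sum):

```python
# Quick numerical check (my own sketch) of the quadratic-form identity on
# K_{3,3}, a 3-regular graph; each edge appears once in the edge list.
import numpy as np

n, d = 6, 3
A = np.zeros((n, n))
A[:3, 3:] = 1
A[3:, :3] = 1                                   # adjacency matrix of K_{3,3}
L = np.eye(n) - A / d                           # normalized Laplacian I - A/d

f = np.random.default_rng(1).normal(size=n)     # an arbitrary f: V -> R
lhs = f @ L @ f
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if A[u, v]]
rhs = sum((f[u] - f[v]) ** 2 for u, v in edges) / d
print(np.isclose(lhs, rhs))                     # True
```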



The largest eigenvalue of $A$ is $d$, and it corresponds to the smallest eigenvalue of $L$, which is $0$; the second largest eigenvalue $\lambda_2$ of $A$ corresponds to the second smallest eigenvalue of $L$, which is $1 - \frac{\lambda_2}{d}$. By the min-max principle, we have

$$
1 - \frac{\lambda_2}{d} = \min\left\{\frac{\langle f, Lf\rangle}{\langle f, f\rangle} : \sum_{v \in V}{f(v)} = 0,\ f \neq 0\right\}.
$$



Notice that $\langle f, Lf\rangle$ does not change when we shift $f$ by the same constant for every vertex. So, equivalently, you can define, for any $f\colon V \to \mathbb{R}$, the "centered" function $f_0$ by $f_0(u) = f(u) - \frac{1}{n}\sum_{v \in V}{f(v)}$, and write

$$
1 - \frac{\lambda_2}{d} = \min\left\{\frac{\langle f, Lf\rangle}{\langle f_0, f_0\rangle} : f \text{ not constant}\right\}.
$$



Now a bit of calculation shows that $\langle f_0, f_0\rangle = \frac{1}{n}\sum_{\{u,v\} \in \binom{V}{2}}{(f(u) - f(v))^2}$, and substituting above and dividing numerator and denominator by $\frac{n}{2}$, we have

$$
1 - \frac{\lambda_2}{d} = \min\left\{\frac{\frac{2}{nd} \sum_{(u,v) \in E}{(f(u) - f(v))^2}}{\frac{2}{n^2}\sum_{\{u,v\} \in \binom{V}{2}}{(f(u) - f(v))^2}} : f \text{ not constant}\right\}.
$$



What this means is that, if we place every vertex $u$ of $G$ on the real line at the point $f(u)$, then the average squared distance between two independent random vertices in the graph (the denominator) is at most $\frac{d}{d - \lambda_2}$ times the average squared distance between the endpoints of a random edge in the graph (the numerator). So in this sense, a large spectral gap means that what happens across a random edge of $G$ (local behavior) is a good predictor of what happens across a random, uncorrelated pair of vertices (global behavior).
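
The following sketch (my own, on an arbitrary 3-regular example, the triangular prism) checks this bound numerically: for any placement $f$, the average squared distance over two independent random vertices is at most $\frac{d}{d-\lambda_2}$ times the average over a random edge.

```python
# Illustrative check (my own sketch) of the local-vs-global bound on the
# triangular prism graph (3-regular, 6 vertices). For any labeling f:
#   avg over independent vertex pairs of (f(u)-f(v))^2
#     <= d/(d - lambda_2) * avg over edges of (f(u)-f(v))^2
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (0, 3), (1, 4), (2, 5)]
n, d = 6, 3
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1

lam2 = np.sort(np.linalg.eigvalsh(A))[-2]       # second-largest eigenvalue of A
factor = d / (d - lam2)

f = np.random.default_rng(2).normal(size=n)     # an arbitrary placement
local = sum((f[u] - f[v]) ** 2 for u, v in edges) / len(edges)
pair_sum = sum((f[u] - f[v]) ** 2
               for u, v in itertools.combinations(range(n), 2))
glob = 2 * pair_sum / n ** 2                    # includes u = v pairs, which add 0
print(glob <= factor * local + 1e-12)           # True for every f
```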






– Sasho Nikolov, answered May 29 at 5:04, edited May 30 at 3:55
















