What is the intuition behind uniform continuity?













74














There’s another post asking for the motivation behind uniform continuity. I’m not a huge fan of it since the top-rated comment spoke about local and global interactions of information, and frankly I just did not get it.



Playing with the definition, I want to say uniform continuity implies there’s a maximum “average rate of change”: not literally a derivative, but the average rate of change between any two points is bounded across the domain. I’m aware that this is essentially Lipschitz continuity, and that Lipschitz continuity implies uniform continuity. Since that implication only goes one way, there must be more to uniform continuity than just having a bounded average rate of change.



And also, how is it that $f(x)=x$ is uniformly continuous yet $g(x)=f(x)\cdot f(x)=x^2$ is not? I understand why it isn’t; I can prove it. But I just don’t understand the motivation and importance of uniform continuity.
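To make that contrast concrete, here is a small numerical sketch (Python; the helper `worst_error` is just for illustration, not a standard routine): for a single input tolerance $\delta$, the worst-case output error of $f(x)=x$ stays at $\delta$, while for $g(x)=x^2$ it grows with the domain.

```python
def worst_error(f, delta, xs):
    # Worst output error |f(x + delta) - f(x)| over the sample points xs,
    # i.e. how badly a single input tolerance delta can fail us.
    return max(abs(f(x + delta) - f(x)) for x in xs)

xs = [i * 0.1 for i in range(10001)]  # samples of [0, 1000]
delta = 1e-3

err_f = worst_error(lambda x: x, delta, xs)      # stays at delta everywhere
err_g = worst_error(lambda x: x * x, delta, xs)  # ~ 2 * 1000 * delta at the far end
```

No single $\delta$ works for $g$ on all of its domain, which is exactly the failure of uniform continuity.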































  • 2




    $begingroup$
    Related: math.stackexchange.com/q/2729618/279515
    $endgroup$
    – Brahadeesh
    May 27 at 10:07






  • 5




    $begingroup$
    For a Lipschitz-continuous function the choice of $\delta$ has to be proportional to $\varepsilon$; for uniform continuity it can be any function of $\varepsilon$. Consider $x\mapsto\sqrt{x}$ on the positive reals, which is uniformly continuous but not Lipschitz continuous.
    $endgroup$
    – Christoph
    May 27 at 10:07






  • 2




    $begingroup$
    As for motivation and importance, have you read any of the applications of uniform continuity? For example, the proof that every continuous function on a closed interval is Riemann integrable?
    $endgroup$
    – Lee Mosher
    May 27 at 12:03






  • 4




    $begingroup$
    Could you please link the post/comment explaining about local and global interactions of information, for the benefit of readers of this question? Thank you
    $endgroup$
    – ignis
    May 28 at 10:26








  • 3




    $begingroup$
    I changed it Daniel. And Ignis, I can't find the post. :( I don't understand, it was usually the first post to crop up when I googled "what is the motivation behind uniform continuity".
    $endgroup$
    – Spencer Kraisler
    May 28 at 20:50






















real-analysis intuition uniform-continuity lipschitz-functions






edited May 29 at 14:50 by Acccumulation (8,125 rep, 2 gold, 9 silver, 20 bronze badges)










asked May 27 at 9:52 by Spencer Kraisler (893 rep, 4 silver, 13 bronze badges)











































6 Answers


















121















The real "gist" of continuity, in its various forms, is that it's the "property that makes calculators and measurements useful". Calculators and measurements are fundamentally approximate devices which contain limited amounts of precision. Special functions, like those which are put on the buttons of a calculator, then, if they are to be useful, should have with them some kind of "promise" that, if we only know the input to a limited amount of precision, then we will at least know the output to some useful level of precision as well.



Simple continuity is the weakest form of this. It tells us that if we want to know the value of a target function $f$ to within some tolerance $\epsilon$ at a target value $x$, using an approximating value $x'$ of limited precision instead of the true value $x$ (which we may not have access to, or may not know to unlimited precision), i.e. we want



$$|f(x) - f(x')| < \epsilon$$



then we can achieve this provided we make our measurement of $x$ suitably accurate, i.e. provided we can make



$$|x - x'| < \delta$$



for some $\delta > 0$ which may or may not be the same for every $\epsilon$ and $x$.



Uniform continuity is stronger. It tells us that not only do we have the above property, but in fact the same $\delta$ threshold on $x'$'s accuracy will be sufficient to get $\epsilon$ worth of accuracy in the approximation of $f$ no matter what $x$ is. Basically, if the special function I care about is uniformly continuous, and I want 0.001 accuracy, and the max $\delta$ required for that is, say, 0.0001, then by measuring to that same tolerance I am assured of 0.001 accuracy in the output no matter what $x$ I am measuring. If, on the other hand, the function were merely continuous but not uniformly so, I could perhaps measure at one value of $x$ with 0.0001 accuracy and that accuracy would be sufficient to get 0.001 accuracy in the function output, but if I am measuring at another, such a tolerance might give me only 0.5 accuracy - terrible!
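One way to watch this happen numerically: fix the output tolerance $\epsilon$ and ask how large an input tolerance we can afford at each $x$. The sketch below (the bisection helper `max_delta` is made up for illustration, not a library call) shows that for $f(x)=x^2$ the usable $\delta$ shrinks roughly like $\epsilon/(2x)$, so no single $\delta$ serves every $x$.

```python
def max_delta(f, x, eps, hi=0.5, steps=60):
    # Bisection: approximately the largest d <= hi such that both
    # |f(x + d) - f(x)| and |f(x - d) - f(x)| stay within eps.
    ok = lambda d: abs(f(x + d) - f(x)) <= eps and abs(f(x - d) - f(x)) <= eps
    lo, hi_end = 0.0, hi
    for _ in range(steps):
        mid = (lo + hi_end) / 2
        if ok(mid):
            lo = mid
        else:
            hi_end = mid
    return lo

eps = 1e-3
d1 = max_delta(lambda t: t * t, 1.0, eps)      # roughly eps / 2
d100 = max_delta(lambda t: t * t, 100.0, eps)  # roughly eps / 200, far smaller
```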



Lipschitz continuity is even better: it tells us that the max error in approximating $f$ is proportional to that in approximating $x$, i.e. $\epsilon \propto \delta$, so that if we make our measurement 10 times more accurate, say (i.e. one more significant figure), we are assured 10 times more accuracy in the function (i.e. gaining a significant figure in the measurement lets us gain one in the function result as well).



And in fact, all the functions on your real-life calculator (the real-analytic ones, not combinatorial functions like nCr and the like) are at least locally Lipschitz continuous. So while the proportionality factor (effectively, how many sig figs you get out for a given number of sig figs in) may not be the same everywhere, you can still be assured that, in relative terms, adding 10x the precision to your measurement, i.e. one more significant figure, will always make the approximation returned by your calculator (however good or bad it actually is) 10x more accurate, i.e. accurate to one more significant figure.



And to top it all off, all these forms of continuity - at least in their local variants, that is, over any bounded interval - are implied by differentiability.




























  • 22




    $begingroup$
    This is how it should be explained in beginners' books. This explanation is extraordinarily clear.
    $endgroup$
    – rubik
    May 28 at 10:36






  • 11




    $begingroup$
    @rubik : I agree. I think that the way that so much of this stuff is commonly handled in elementary texts is terrible. Moreover, knowing how the error behaves in approximations is something I think is vital for work in any scientific or technical field where real-life measurements will be taken and put through complicated formulas.
    $endgroup$
    – The_Sympathizer
    May 28 at 10:51






  • 3




    $begingroup$
    @Teepeemm : Yeah ... on an interval containing zero, not so much. But leaving off zero :) is what I was thinking about.
    $endgroup$
    – The_Sympathizer
    May 28 at 14:25








  • 3




    $begingroup$
    My calculator actually has a cube root function that works on negative numbers. (Though it is not real-analytic, but that's kind of a cop-out condition: all the functions on the calculator that are extremely nice are also somewhat nice).
    $endgroup$
    – Henning Makholm
    May 28 at 18:53








  • 6




    $begingroup$
    "1 part in 1000" is a bad wording: $\epsilon$ is an upper bound on absolute error, not relative, so it's not 1 part in 1000, but rather an absolute error of $0.001$, whatever the actual value of the function is. E.g. if $f(x)=10^{-8}$, then the relative error of $f(x')$ could be $10^5$ parts in $1$, and that's still within the bounds.
    $endgroup$
    – Ruslan
    May 28 at 20:48



















43















While I really like The_Sympathizer's answer, none of the answers describe my intuition for how I think about uniform continuity.



Uniform continuity is about horizontal shifts not changing the graph too much



In precalculus we learn how to move graphs around. If we have a function $f(x)$, then we can shift the graph of the function to the right by an increment $\Delta$ by graphing the function $f(x-\Delta)$.



Then let's take a look at the definition of uniform continuity. $f$ is uniformly continuous if for all $\epsilon > 0$, there is some $\delta$ such that for all $x,x'$, $$|f(x)-f(x')| < \epsilon$$ if $|x-x'|<\delta$.



Another way to say this is to let $x' = x-\Delta$, and say that when $|\Delta| < \delta$, then $|f(x)-f(x-\Delta)|<\epsilon$.



Intuitively, $f$ is uniformly continuous if, when we bump the graph of $f$ left or right by a small enough amount, then the vertical distance between the shifted graph and the original graph will also be small.



Here's an example of how this works on Desmos. The slider controls how much we shift the graph by. The function in the fourth slot measures the vertical distance between the graphs. Unless we make the shift zero, the vertical distance between the shifted graph and the original graph always goes off to infinity, and is never bounded, no matter how small the shift is. In other words, $f(x)=x^2$ is not uniformly continuous, because no matter how small the left or right shift is, the graph of the shifted function gets really far away from the graph of the original function.



Alternative view: Uniform continuity is about the difference between horizontal and vertical shifts



Another (basically equivalent) way to say this is by comparing to vertical shifts.



Imagine the region bounded by the graph of $f$ shifted up by $\epsilon$ and the graph of $f$ shifted down by $\epsilon$. Do small horizontal shifts of the original graph stay in this region?



If the answer is yes (sufficiently small horizontal shifts stay in the region), then $f$ is uniformly continuous. If the answer is no (no nonzero horizontal shift remains in the region), then $f$ is not uniformly continuous.



Here's a Desmos (again with $x^2$) for this view point.
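The shift picture is easy to test numerically as well. In this sketch (the helper `shift_gap` is hypothetical, and $\sin$ is my stand-in for a uniformly continuous function on an unbounded domain), a horizontal shift of $0.01$ moves the graph of $\sin$ by at most $0.01$ vertically, while the graph of $x^2$ drifts arbitrarily far from its shifted copy:

```python
import math

def shift_gap(f, delta, xs):
    # Largest vertical distance between the graph of f and the graph
    # of f shifted right by delta, over the sample points xs.
    return max(abs(f(x) - f(x - delta)) for x in xs)

xs = [i * 0.5 for i in range(4001)]  # samples of [0, 2000]

gap_sin = shift_gap(math.sin, 0.01, xs)        # <= 0.01: |sin'| <= 1
gap_sq = shift_gap(lambda x: x * x, 0.01, xs)  # keeps growing as the domain widens
```

Widening the sample interval leaves `gap_sin` capped while `gap_sq` grows without bound, matching the picture above.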




























  • 2




    $begingroup$
    I love this. So you're saying if $f$ is uniformly continuous, then $|f(x)-f(x-\Delta)|$ is bounded for some small $\Delta$?
    $endgroup$
    – Spencer Kraisler
    May 28 at 20:48








  • 2




    $begingroup$
    @SpencerKraisler Yes, exactly, at least for small shifts, and if you take small enough shifts you can make that bound as small as you like.
    $endgroup$
    – jgon
    May 28 at 20:49






  • 2




    $begingroup$
    I'm going to have to play around with the math later today to understand it for myself. I never thought of it that way though. Thanks for the Desmos.
    $endgroup$
    – Spencer Kraisler
    May 28 at 20:51








  • 2




    $begingroup$
    @SpencerKraisler Glad you found it helpful :) Also, I edited to add a very closely related, but slightly different point of view with a second Desmos.
    $endgroup$
    – jgon
    May 28 at 21:00






  • 1




    $begingroup$
    To be honest, I never understood even the definition of uniform continuity. With your answer, however, I was able to. Thanks a lot.
    $endgroup$
    – onurcanbektas
    May 29 at 8:48



















22















I'd like to point out one misconception in the problem statement:




... the rate of change between two points is bounded in the domain




This is incorrect: the function $f:[0,\infty) \to [0,\infty)$ defined by



$$f(x)=\sqrt{x}$$



is uniformly continuous over the whole domain $[0,\infty)$, despite having an unbounded derivative near $0$. For any given $\epsilon > 0$, we can choose $\delta=\epsilon^2$, which fulfills the uniform continuity condition:



$$|x_1 - x_2| \le \delta \Rightarrow |\sqrt{x_1}-\sqrt{x_2}| \le \epsilon$$



The difference from cases like $y=x^2$ or $y=\tan(x)$ is that $f$ is itself bounded around the point where the derivative is unbounded.
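As a quick sanity check of the choice $\delta=\epsilon^2$, we can sample many random pairs with $|x_1-x_2|\le\delta$ and confirm the square-root gap never exceeds $\epsilon$ (a sketch; the interval $[0,100]$ and the sample count are arbitrary choices):

```python
import math
import random

random.seed(0)
eps = 0.01
delta = eps ** 2  # the delta chosen above

# Sample pairs at most delta apart (clamped to the domain) and
# record the worst gap between their square roots.
worst = 0.0
for _ in range(10000):
    x1 = random.uniform(0.0, 100.0)
    x2 = max(0.0, x1 + random.uniform(-delta, delta))
    worst = max(worst, abs(math.sqrt(x1) - math.sqrt(x2)))
```

The bound holds because $|\sqrt{x_1}-\sqrt{x_2}|^2 \le |\sqrt{x_1}-\sqrt{x_2}|\,(\sqrt{x_1}+\sqrt{x_2}) = |x_1-x_2| \le \epsilon^2$.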






































    15















    Continuity means that for every $x$ in the domain of $f$ and every $\varepsilon>0$, there is a $\delta>0$ such that $$\lvert y-x\rvert<\delta\implies\bigl\lvert f(y)-f(x)\bigr\rvert<\varepsilon.$$ By this definition, $\delta$ may depend on both $x$ and $\varepsilon$.

    Uniform continuity is when we can pick $\delta$ depending only on $\varepsilon$, but not on $x$.






































      7















      Uniform continuity simply means the turning of the graph is uniform. More intuitively, the sharpness of the turns is somewhat limited.



      [image: $\varepsilon$-$\delta$ boxes sliding along the graph of a function]
      If you properly understand the meaning of the definition of continuity, then look at the intersection of the boxes. For continuity, at each point you get a delta, which may change if you change your point of interest. That means the size of the box changes as you move along the curve. But if your function is uniformly continuous, then you can move the box along the curve without changing its size, and still the off-diagonal endpoints will be on the curve.
      (image source: https://www.geeksforgeeks.org/mathematics-limits-continuity-differentiability/)






































        4















        One interpretation that I love is the one using non-standard analysis:




        Let $f : E \subseteq \mathbb{R} \to \mathbb{R}$ be a function. Then the following are equivalent:





        1. $f$ is uniformly continuous.

        2. Its hyperreal version ${}^*\!f : {}^*\!E \subseteq {}^*\mathbb{R} \to {}^*\mathbb{R}$ is continuous, i.e., whenever $x, y \in {}^*\!E$ are infinitely close, ${}^*\!f(x)$ and ${}^*\!f(y)$ are infinitely close as well.




        Here, the 'hyperreal version' refers to the $*$-transform of $f$. The catch is that the domain ${}^*\!E$ of ${}^*\!f$ contains hyperreal numbers which are either infinitely close to $E$ or infinitely large. So continuity needs to be tested around those numbers as well to establish uniform continuity.



        This statement can be recast in terms of real numbers only:




        Let $f : E \subseteq \mathbb{R} \to \mathbb{R}$ be a function. Then the following are equivalent:




        1. $f$ is uniformly continuous.


        2. For any sequences $(a_n)$ and $(b_n)$ in $E$ such that $|a_n - b_n| \to 0$, we have $|f(a_n) - f(b_n)| \to 0$.





        In this version, the role of infinitely close hyperreal numbers is played by a pair of sequences which become arbitrarily close. Again, we see that uniform continuity is really about enforcing continuity infinitesimally beyond $E$. Here are some examples that demonstrate this idea:



        Example 1. Let $f(x) = 1/x$ on $(0, \infty)$. If we pick two 'infinitesimals' $a_n = \frac{1}{n}$ and $b_n = \frac{1}{n+1}$, then they become arbitrarily close but $|f(a_n) - f(b_n)| = 1 \not\to 0$. So $f$ is not uniformly continuous.



        Example 2. Let $f(x) = x^2$ on $\mathbb{R}$. If we pick two 'infinitely large' sequences $a_n = n$ and $b_n = n+\frac{1}{n}$, then $a_n$ and $b_n$ become arbitrarily close, but $|f(a_n) - f(b_n)| = 2 + \frac{1}{n^2} \not\to 0$. So $f$ is not uniformly continuous.
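Example 2's sequence test is easy to replay numerically (a sketch; the particular value of $n$ is an arbitrary choice):

```python
def f(x):
    return x * x

n = 1000
a, b = float(n), n + 1.0 / n  # the pair a_n = n, b_n = n + 1/n

gap_in = abs(a - b)         # 1/n, which tends to 0
gap_out = abs(f(a) - f(b))  # 2 + 1/n^2, which tends to 2, not 0
```

Taking $n$ larger shrinks `gap_in` toward $0$ while `gap_out` stays pinned near $2$, witnessing the failure of uniform continuity.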

































          6 Answers
          6






          active

          oldest

          votes








          6 Answers
          6






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          121














          $begingroup$

          The real "gist" of continuity, in its various forms, is that it's the "property that makes calculators and measurements useful". Calculators and measurements are fundamentally approximate devices which contain limited amounts of precision. Special functions, like those which are put on the buttons of a calculator, then, if they are to be useful, should have with them some kind of "promise" that, if we only know the input to a limited amount of precision, then we will at least know the output to some useful level of precision as well.



          Simple continuity is the weakest form of this. It tells us that if we want to know the value of a target function $f$ to within some tolerance $epsilon$ at a target value $x$, but using an approximating value $x'$ with limited precision instead of the true value $x$ to which we may not have access or otherwise know to unlimited precision, i.e. we want



          $$|f(x) - f(x')| < epsilon$$



          then we will be able to have that if we can make our measurement of $x$ suitably accurate, i.e. we can make that



          $$|x - x'| < delta$$



          for some $delta > 0$ which may or may not be the same for every $epsilon$ and $x$.



          Uniform continuity is stronger. It tells us that not only do we have the above property, but in fact the same $delta$ threshold on $x'$'s accuracy will be sufficient to get $epsilon$ worth of accuracy in the approximation of $f$ no matter what $x$ is. Basically, if the special function I care about is uniform continuous, and I want 0.001 accuracy, and the max $delta$ required for that is, say, 0.0001, by measuring to that same tolerance I am assured to always get 0.001 accuracy in the output no matter what $x$ I am measuring. If, on the other hand, it were the case that the function is merely continuous but not uniformly so, I could perhaps measure at one value of $x$ with 0.0001 accuracy and that accuracy would be sufficient to get 0.001 accuracy in the function output, but if I am measuring at another, such a tolerance might give me only 0.5 accuracy - terrible!



          Lipschitz continuity is even better: it tells us that the max error in approximating $f$ is proportional to that in approximating $x$, i.e. $epsilon propto delta$, so that if we make our measurement 10 times more accurate, say (i.e. one more significant figure), we are assured 10 times more accuracy in the function (i.e. gaining a significant figure in the measurement lets us gain one in the function result as well).



          And in fact, all the functions (that are real-analytic, not combinatorial functions like nCr and what not) on your real-life calculator are at least locally Lipschitz continuous, so that while this proportionality factor (effectively, absolutely how many sig figs you get for a given number of such in the input) may not be the same everywhere, you can still be assured that in relative terms, adding 10x the precision to your measurements, i.e. one more significant figure, will always make the approximation (however good or not it actually is) returned by your calculator 10x more accurate, i.e. also to one more significant figure.



          And to top it all off, all these forms of continuity - at least in their local variants, that is, over any bounded interval - are implied by differentiability.






          share|cite|improve this answer











          $endgroup$











          • 22




            $begingroup$
            This is how it should be explained in beginners' books. This explanation is extraordinarily clear.
            $endgroup$
            – rubik
            May 28 at 10:36






          • 11




            $begingroup$
            @rubik : I agree. I think that the way that so much of this stuff is commonly handled in elementary texts is terrible. Moreover, knowing how the error behaves in approximations is something I think is vital for work in any scientific or technical field where that real-life measurements will be treated and put through complicated formulas.
            $endgroup$
            – The_Sympathizer
            May 28 at 10:51






          • 3




            $begingroup$
            @Teepeemm : Yeah ... on an interval containing zero , not so much. But if you leave off zero :) is what I was thinking about.
            $endgroup$
            – The_Sympathizer
            May 28 at 14:25








          • 3




            $begingroup$
            My calculator actually has a cube root function that works on negative numbers. (Though it is not real-analytic, but that's kind of a cop-out condition: all the functions on the calculator that are extremely nice are also somewhat nice).
            $endgroup$
            – Henning Makholm
            May 28 at 18:53








          • 6




            $begingroup$
            "1 part in 1000" is a bad wording: $epsilon$ is upper bound on absolute error, not relative, so it's not 1 part in 1000, but rather absolute error of $0.001$, whatever the actual value of the function is. E.g. if $f(x)=10^{-8}$, then relative error of $f(x')$ could be $10^5$ parts in $1$, and that's still within the bounds.
            $endgroup$
            – Ruslan
            May 28 at 20:48
















          121














          edited Jun 15 at 1:07
          answered May 27 at 10:26
          The_Sympathizer
          11.3k 3 gold badges 32 silver badges 56 bronze badges












          43














          $begingroup$

          While I really like The_Sympathizer's answer, none of the answers describes my intuition for uniform continuity.



          Uniform continuity is about horizontal shifts not changing the graph too much



          In precalculus we learn how to move graphs around. If we have a function $f(x)$, then we can shift the graph of the function to the right by an increment $\Delta$ by graphing the function $f(x-\Delta)$.



          Then let's take a look at the definition of uniform continuity. $f$ is uniformly continuous if for all $\epsilon > 0$, there is some $\delta > 0$ such that for all $x,x'$, $$|f(x)-f(x')| < \epsilon$$ whenever $|x-x'|<\delta$.



          Another way to say this is to let $x' = x-\Delta$, and say that when $|\Delta| < \delta$, then $|f(x)-f(x-\Delta)|<\epsilon$.



          Intuitively, $f$ is uniformly continuous if, when we bump the graph of $f$ left or right by a small enough amount, then the vertical distance between the shifted graph and the original graph will also be small.



          Here's an example of how this works on Desmos. The slider controls how much we shift the graph by. The function in the fourth slot measures the vertical distance between the graphs. Unless we make the shift zero, the vertical distance between the shifted graph and the original graph always goes off to infinity, and is never bounded, no matter how small the shift is. In other words, $f(x)=x^2$ is not uniformly continuous, because no matter how small the left or right shift is, the graph of the shifted function gets really far away from the graph of the original function.
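The same experiment can be reproduced without Desmos. A small Python sketch (my own hypothetical grid-sampling setup, with a finite window standing in for the whole real line):

```python
# Largest vertical gap between f and its horizontal shift f(x - Delta),
# sampled on [0, xmax].  For f(x) = x^2 the gap grows like 2 * xmax * Delta,
# so no nonzero shift keeps the gap bounded over the whole real line.

def max_gap(f, shift, xmax, n=10_000):
    pts = [xmax * i / n for i in range(n + 1)]
    return max(abs(f(x) - f(x - shift)) for x in pts)

shift = 0.001
for xmax in (10, 100, 1000):
    print(max_gap(lambda x: x * x, shift, xmax))   # ~0.02, ~0.2, ~2
```

Compare $f(x)=x$: there the gap is always exactly the shift itself, no matter how wide the window, which is the uniform-continuity behavior.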



          Alternative view: Uniform continuity is about the difference between horizontal and vertical shifts



          Another (basically equivalent) way to say this is by comparing to vertical shifts.



          Imagine the region bounded by the graph of $f$ shifted up by $\epsilon$ and the graph of $f$ shifted down by $\epsilon$. Do small horizontal shifts of the original graph stay in this region?



          If the answer is yes, i.e. all sufficiently small horizontal shifts stay in the region, then $f$ is uniformly continuous. If the answer is no, i.e. no nonzero horizontal shift remains in the region, then $f$ is not uniformly continuous.



          Here's a Desmos (again with $x^2$) for this viewpoint.
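The band test can likewise be checked numerically (a hypothetical Python sketch; the finite window $[0, 1000]$ stands in for the whole line):

```python
# Does the graph of f shifted right by `shift` stay inside the band
# between f - eps and f + eps over the sampled window?

def stays_in_band(f, shift, eps, xmax, n=10_000):
    pts = [xmax * i / n for i in range(n + 1)]
    return all(abs(f(x - shift) - f(x)) < eps for x in pts)

eps, shift = 0.01, 0.001
print(stays_in_band(lambda x: x, shift, eps, 1000))      # True
print(stays_in_band(lambda x: x * x, shift, eps, 1000))  # False
```

For $f(x)=x$ the shifted graph never leaves the band, while for $f(x)=x^2$ it escapes once $x$ is large enough, however small the nonzero shift.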

















          $endgroup$











          • 2




            $begingroup$
            I love this. So you're saying if $f$ is uniformly continuous, then $|f(x)-f(x-\Delta)|$ is bounded for some small $\Delta$?
            $endgroup$
            – Spencer Kraisler
            May 28 at 20:48








          • 2




            $begingroup$
            @SpencerKraisler Yes, exactly, at least for small shifts, and if you take small enough shifts you can make that bound as small as you like.
            $endgroup$
            – jgon
            May 28 at 20:49






          • 2




            $begingroup$
            I'm going to have to play around with the math later today to understand it for myself. I never thought of it that way though. Thanks for the Desmos.
            $endgroup$
            – Spencer Kraisler
            May 28 at 20:51








          • 2




            $begingroup$
            @SpencerKraisler Glad you found it helpful :) Also, I edited to add a very closely related, but slightly different point of view with a second Desmos.
            $endgroup$
            – jgon
            May 28 at 21:00






          • 1




            $begingroup$
            To be honest, I never understood even the definition of uniform continuity. With your answer, however, I was able to. Thanks a lot.
            $endgroup$
            – onurcanbektas
            May 29 at 8:48
















          43














          $begingroup$

          While I really like The_Sympathizer's answer, none of the answers describe my intuition for how I think about uniform continuity.



          Uniform continuity is about horizontal shifts not changing the graph too much



          In precalculus we learn how to move graphs around. If we have a function $f(x)$, then we can shift the graph of the function to the right by an increment $Delta$ by graphing the function $f(x-Delta)$.



          Then let's take a look at the definition of uniform continuity. $f$ is uniformly continuous if for all $epsilon > 0$, there is some $delta$ such that for all $x,x'$, $$|f(x)-f(x')| < epsilon$$ if $|x-x'|<delta$.



          Another way to say this is to let $x' = x-Delta$, and say that when $|Delta| < delta$, then $|f(x)-f(x-Delta)|<epsilon$.



          Intuitively, $f$ is uniformly continuous if, when we bump the graph of $f$ left or right by a small enough amount, then the vertical distance between the shifted graph and the original graph will also be small.



          Here's an example of how this works on Desmos. The slider controls how much we shift the graph by. The function in the fourth slot measures the vertical distance between the graphs. Unless we make the shift zero, the vertical distance between the shifted graph and the original graph always goes off to infinity, and is never bounded, no matter how small the shift is. In other words, $f(x)=x^2$ is not uniformly continuous, because no matter how small the left or right shift is, the graph of the shifted function gets really far away from the graph of the original function.



          Alternative view: Uniform continuity is about the difference between horizontal and vertical shifts



          Another (basically equivalent) way to say this is by comparing to vertical shifts.



          Imagine the region bounded by the graph of $f$ shifted up by $epsilon$ and the graph of $f$ shifted down by $epsilon$. Do small horizontal shifts of the original graph stay in this region?



          If the answer is yes, that sufficiently small horizontal shifts stay in the region, then $f$ is uniformly continuous. If the answer is no, no nonzero horizontal shift remains in the region, then $f$ is not uniformly continuous.



          Here's a Desmos (again with $x^2$) for this view point.






          share|cite|improve this answer











          $endgroup$











          • 2




            $begingroup$
            I love this. So you're saying if $f$ is uniformly continuous, then $|f(x)-f(x-Delta)|$ is bounded for some small $Delta$?
            $endgroup$
            – Spencer Kraisler
            May 28 at 20:48








          • 2




            $begingroup$
            @SpencerKraisler Yes, exactly, at least for small shifts, and if you take small enough shifts you can make that bound as small as you like.
            $endgroup$
            – jgon
            May 28 at 20:49






          • 2




            $begingroup$
            I'm going to have to play around with the math later today to understand it for myself. I never thought of it that way though. Thanks for the Desmos.
            $endgroup$
            – Spencer Kraisler
            May 28 at 20:51








          • 2




            $begingroup$
            @SpencerKraisler Glad you found it helpful :) Also, I edited to add a very closely related, but slightly different point of view with a second Desmos.
            $endgroup$
            – jgon
            May 28 at 21:00






          • 1




            $begingroup$
            To be honest, I never understood even the definition of uniform continuity. With your answer, however, I was able to. Thanks a lot.
            $endgroup$
            – onurcanbektas
            May 29 at 8:48














          43














          43










          43







          $begingroup$

          While I really like The_Sympathizer's answer, none of the answers describe my intuition for how I think about uniform continuity.



          Uniform continuity is about horizontal shifts not changing the graph too much



          In precalculus we learn how to move graphs around. If we have a function $f(x)$, then we can shift the graph of the function to the right by an increment $Delta$ by graphing the function $f(x-Delta)$.



          Then let's take a look at the definition of uniform continuity. $f$ is uniformly continuous if for all $epsilon > 0$, there is some $delta$ such that for all $x,x'$, $$|f(x)-f(x')| < epsilon$$ if $|x-x'|<delta$.



          Another way to say this is to let $x' = x-Delta$, and say that when $|Delta| < delta$, then $|f(x)-f(x-Delta)|<epsilon$.



          Intuitively, $f$ is uniformly continuous if, when we bump the graph of $f$ left or right by a small enough amount, then the vertical distance between the shifted graph and the original graph will also be small.



          Here's an example of how this works on Desmos. The slider controls how much we shift the graph by. The function in the fourth slot measures the vertical distance between the graphs. Unless we make the shift zero, the vertical distance between the shifted graph and the original graph always goes off to infinity, and is never bounded, no matter how small the shift is. In other words, $f(x)=x^2$ is not uniformly continuous, because no matter how small the left or right shift is, the graph of the shifted function gets really far away from the graph of the original function.



          Alternative view: Uniform continuity is about the difference between horizontal and vertical shifts



          Another (basically equivalent) way to say this is by comparing to vertical shifts.



          Imagine the region bounded by the graph of $f$ shifted up by $epsilon$ and the graph of $f$ shifted down by $epsilon$. Do small horizontal shifts of the original graph stay in this region?



          If the answer is yes, that sufficiently small horizontal shifts stay in the region, then $f$ is uniformly continuous. If the answer is no, no nonzero horizontal shift remains in the region, then $f$ is not uniformly continuous.



          Here's a Desmos (again with $x^2$) for this view point.






          share|cite|improve this answer











          $endgroup$



          While I really like The_Sympathizer's answer, none of the answers describe my intuition for how I think about uniform continuity.



          Uniform continuity is about horizontal shifts not changing the graph too much



          In precalculus we learn how to move graphs around. If we have a function $f(x)$, then we can shift the graph of the function to the right by an increment $Delta$ by graphing the function $f(x-Delta)$.



          Then let's take a look at the definition of uniform continuity. $f$ is uniformly continuous if for all $epsilon > 0$, there is some $delta$ such that for all $x,x'$, $$|f(x)-f(x')| < epsilon$$ if $|x-x'|<delta$.



          Another way to say this is to let $x' = x-Delta$, and say that when $|Delta| < delta$, then $|f(x)-f(x-Delta)|<epsilon$.



          Intuitively, $f$ is uniformly continuous if, when we bump the graph of $f$ left or right by a small enough amount, then the vertical distance between the shifted graph and the original graph will also be small.



          Here's an example of how this works on Desmos. The slider controls how much we shift the graph by. The function in the fourth slot measures the vertical distance between the graphs. Unless we make the shift zero, the vertical distance between the shifted graph and the original graph always goes off to infinity, and is never bounded, no matter how small the shift is. In other words, $f(x)=x^2$ is not uniformly continuous, because no matter how small the left or right shift is, the graph of the shifted function gets really far away from the graph of the original function.



          Alternative view: Uniform continuity is about the difference between horizontal and vertical shifts



          Another (basically equivalent) way to say this is by comparing to vertical shifts.



          Imagine the region bounded by the graph of $f$ shifted up by $\epsilon$ and the graph of $f$ shifted down by $\epsilon$. Do small horizontal shifts of the original graph stay in this region?



          If the answer is yes (every sufficiently small horizontal shift stays in the region), then $f$ is uniformly continuous. If the answer is no (shifts as small as you like still leave the region), then $f$ is not uniformly continuous.



          Here's a Desmos (again with $x^2$) for this viewpoint.
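The shift test is also easy to probe numerically. Here is a small Python sketch (the grid, the domain $[0,100]$, and the shift size are arbitrary choices for illustration): it compares the largest vertical gap between $f$ and its shifted copy for $f(x)=x$ and $f(x)=x^2$.

```python
# Numerical sketch of the "horizontal shift" test for uniform continuity.
# For f(x) = x the vertical gap |f(x) - f(x - d)| equals d everywhere,
# so it shrinks uniformly with the shift. For f(x) = x^2 the gap
# |x^2 - (x - d)^2| = d * |2x - d| grows without bound in x, so no single
# small shift keeps the two graphs uniformly close on an unbounded domain.

def max_gap(f, d, xs):
    """Largest vertical distance between f and its right-shift by d on the grid xs."""
    return max(abs(f(x) - f(x - d)) for x in xs)

xs = [i * 0.1 for i in range(1001)]  # grid on [0, 100]
d = 0.001                            # a small horizontal shift

gap_linear = max_gap(lambda x: x, d, xs)       # stays close to d itself
gap_square = max_gap(lambda x: x * x, d, xs)   # roughly 2 * 100 * d on this grid

print(gap_linear, gap_square)
```

Extending the grid to larger $x$ makes `gap_square` as large as you like while `gap_linear` stays put, which is exactly the shift-based picture above.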







          edited Jun 1 at 18:09 by Ross Shulman

          answered May 28 at 19:39 by jgon











          • 2




            $begingroup$
            I love this. So you're saying if $f$ is uniformly continuous, then $|f(x)-f(x-Delta)|$ is bounded for some small $Delta$?
            $endgroup$
            – Spencer Kraisler
            May 28 at 20:48








          • 2




            $begingroup$
            @SpencerKraisler Yes, exactly, at least for small shifts, and if you take small enough shifts you can make that bound as small as you like.
            $endgroup$
            – jgon
            May 28 at 20:49






          • 2




            $begingroup$
            I'm going to have to play around with the math later today to understand it for myself. I never thought of it that way though. Thanks for the Desmos.
            $endgroup$
            – Spencer Kraisler
            May 28 at 20:51








          • 2




            $begingroup$
            @SpencerKraisler Glad you found it helpful :) Also, I edited to add a very closely related, but slightly different point of view with a second Desmos.
            $endgroup$
            – jgon
            May 28 at 21:00






          • 1




            $begingroup$
            To be honest, I never understood even the definition of uniform continuity. With your answer, however, I was able to. Thanks a lot.
            $endgroup$
            – onurcanbektas
            May 29 at 8:48














          I'd like to point out one misconception in the problem statement:




          ... the rate of change between two points is bounded in the domain




          This is incorrect: the function $f:[0,\infty) \to [0,\infty)$ defined by



          $$f(x)=\sqrt{x}$$



          is uniformly continuous over the whole domain $[0,\infty)$, despite having an unbounded derivative near $0$. For any given $\epsilon > 0$, we can choose $\delta=\epsilon^2$, which fulfills the uniform continuity condition:



          $$|x_1 - x_2| \le \delta \Rightarrow |\sqrt{x_1}-\sqrt{x_2}| \le \epsilon$$



          The difference from cases like $y=x^2$ or $y=\tan(x)$ is that $f$ itself is bounded around the point where the derivative becomes unbounded.
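A quick numerical spot-check of the choice $\delta=\epsilon^2$ (a random-sampling sketch in Python, not a proof; the domain cutoff and sample count are arbitrary):

```python
import math
import random

# Sketch: sample pairs with |x1 - x2| <= delta = eps**2 and confirm that
# |sqrt(x1) - sqrt(x2)| <= eps. This follows from the algebraic fact
# |sqrt(x1) - sqrt(x2)| <= sqrt(|x1 - x2|) for x1, x2 >= 0.

random.seed(0)
eps = 0.01
delta = eps ** 2

ok = True
for _ in range(100_000):
    x1 = random.uniform(0, 1000)
    x2 = min(1000.0, x1 + random.uniform(0, delta))  # |x1 - x2| <= delta
    if abs(math.sqrt(x1) - math.sqrt(x2)) > eps:
        ok = False
        break

print(ok)  # True: no counterexample in this sample
```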






          answered May 27 at 11:30 by Ingix


























                  Continuity means that for every $x$ in the domain of $f$ and every $\varepsilon>0$, there is a $\delta>0$ such that$$\lvert y-x\rvert<\delta\implies\bigl\lvert f(y)-f(x)\bigr\rvert<\varepsilon.$$By this definition, $\delta$ may depend on both $x$ and $\varepsilon$.



                  Uniform continuity is when we can pick $\delta$ depending only on $\varepsilon$, but not on $x$.
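To see the $x$-dependence concretely, take $f(x)=x^2$: for $x \ge 0$ the largest workable $\delta$ at a point $x$ is $\sqrt{x^2+\epsilon}-x$, which shrinks to $0$ as $x$ grows. A short Python sketch (the sample points are arbitrary):

```python
import math

# Sketch: for f(x) = x^2 with x >= 0 and a fixed eps, the binding constraint
# is on the right of x, so the largest delta that works at x is
# delta(x) = sqrt(x**2 + eps) - x. It tends to 0 as x grows, so no single
# delta serves every x: f is continuous but not uniformly continuous on R.

def best_delta(x, eps):
    return math.sqrt(x * x + eps) - x

eps = 0.1
deltas = [best_delta(x, eps) for x in (1, 10, 100, 1000)]
print(deltas)  # strictly decreasing toward 0
```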






                      answered May 27 at 10:01 by José Carlos Santos


























                          Uniform continuity simply means that the turning of the graph is uniform: intuitively, the sharpness of its turns is limited in the same way everywhere.



                          (image omitted; source: https://www.geeksforgeeks.org/mathematics-limits-continuity-differentiability/)

                          If you properly understand the definition of continuity, look at the intersection of the boxes in such a picture. For ordinary continuity, each point gets its own $\delta$, which may change as you change your point of interest; that is, the size of the box changes as you move along the curve. If the function is uniformly continuous, you can move a box of one fixed size along the curve, and its off-diagonal corners still land on the curve.






                              answered May 27 at 10:25 by skylark


























                                   One interpretation that I love is the one using non-standard analysis:




                                   Let $f : E \subseteq \mathbb{R} \to \mathbb{R}$ be a function. Then the following are equivalent:




                                   1. $f$ is uniformly continuous.

                                   2. Its hyperreal version ${{}^*\!f} : {{}^*\!E} \subseteq {}^*\mathbb{R} \to {}^*\mathbb{R}$ is continuous, i.e., whenever $x, y \in {{}^*\!E}$ are infinitely close, ${{}^*\!f}(x)$ and ${{}^*\!f}(y)$ are infinitely close as well.




                                   Here, the 'hyperreal version' refers to the $*$-transform of $f$. The catch is that the domain ${{}^*\!E}$ of ${{}^*\!f}$ contains hyperreal numbers which are either infinitely close to $E$ or infinitely large. So continuity needs to be tested around those numbers as well to establish uniform continuity.



                                   This statement can be recast in terms of real numbers only:




                                   Let $f : E \subseteq \mathbb{R} \to \mathbb{R}$ be a function. Then the following are equivalent:




                                   1. $f$ is uniformly continuous.

                                   2. For any sequences $(a_n)$ and $(b_n)$ in $E$ such that $|a_n - b_n| \to 0$, we have $|f(a_n) - f(b_n)| \to 0$.




                                   In this version, the role of infinitely close hyperreal numbers is played by a pair of sequences which become arbitrarily close. Again, we see that uniform continuity is really about enforcing continuity infinitesimally beyond $E$. Here are some examples that demonstrate this idea:



                                   Example 1. Let $f(x) = 1/x$ on $(0, \infty)$. If we pick two 'infinitesimal' sequences $a_n = \frac{1}{n}$ and $b_n = \frac{1}{n+1}$, then they become arbitrarily close but $|f(a_n) - f(b_n)| = 1 \not\to 0$. So $f$ is not uniformly continuous.



                                   Example 2. Let $f(x) = x^2$ on $\mathbb{R}$. If we pick two 'infinitely large' sequences $a_n = n$ and $b_n = n+\frac{1}{n}$, then $a_n$ and $b_n$ become arbitrarily close, but $|f(a_n) - f(b_n)| = 2 + \frac{1}{n^2} \not\to 0$. So $f$ is not uniformly continuous.
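The two examples can be replayed numerically with the sequence criterion; a Python sketch (the index values are arbitrary):

```python
# Sketch of the sequence criterion: if |a_n - b_n| -> 0 while
# |f(a_n) - f(b_n)| does not -> 0, then f is not uniformly continuous.

def gaps(f, a, b, ns):
    """Pairs (|a_n - b_n|, |f(a_n) - f(b_n)|) along the index list ns."""
    return [(abs(a(n) - b(n)), abs(f(a(n)) - f(b(n)))) for n in ns]

ns = [10, 100, 1000, 10000]

# Example 1: f(x) = 1/x with a_n = 1/n, b_n = 1/(n+1). The inputs converge
# to each other, but the output gap is exactly 1 for every n.
ex1 = gaps(lambda x: 1 / x, lambda n: 1 / n, lambda n: 1 / (n + 1), ns)

# Example 2: f(x) = x^2 with a_n = n, b_n = n + 1/n. The inputs converge
# to each other, but the output gap 2 + 1/n^2 tends to 2, not 0.
ex2 = gaps(lambda x: x * x, lambda n: n, lambda n: n + 1 / n, ns)

print(ex1)
print(ex2)
```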






                                       answered Jun 1 at 19:26 by Sangchul Lee