Formal Definition of Dot Product


























In most textbooks, the dot product between two vectors is defined as:



$$\langle x_1,x_2,x_3\rangle \cdot \langle y_1,y_2,y_3\rangle = x_1 y_1 + x_2 y_2 + x_3 y_3$$



I understand how this definition works most of the time. However, in this definition there is no reference to a coordinate system (i.e. no basis is included for the vector components). So, if I had two vectors in two different coordinate systems:



$$x_1 \vec{e_1} + x_2 \vec{e_2} + x_3 \vec{e_3}$$
$$y_1 \vec{e_1'} + y_2 \vec{e_2'} + y_3 \vec{e_3'}$$



How would I compute their dot product? In particular, is there a more formal/abstract/generalized definition of the dot product (one that would allow me to compute $\vec{e_1} \cdot \vec{e_1'}$ without converting the vectors to the same coordinate system)? Even if I did convert the vectors to the same coordinate system, why do we know that the result will be the same if I multiply the components in the primed system versus in the unprimed system?










vectors coordinate-systems linear-algebra

asked May 13 at 0:08 by dts, edited May 13 at 1:56 by Gilbert








  • However, in this definition, there is no reference to coordinate system (i.e. no basis is included for the vector components). But I think that it IS always strongly IMPLIED that the 2 vector component sets are obtained with respect to the same orthonormal basis.
    – Trunk, May 13 at 16:03

  • What you'd need is the change-of-basis matrix for the relationship between $\hat{e}_i$ and $\hat{e}_j'$; you should be able to go from there. But as it stands, technically this question could be even better answered on Mathematics Stack Exchange, as it's purely mathematical in nature.
    – Triatticus, May 13 at 18:20

  • @Evpok: In hindsight, I'm wondering how I got the cross product and dot product mixed up, especially given the definition in the question itself. Let's blame Mondays.
    – MSalters, May 14 at 7:06
















8 Answers



















Your top-line question can be answered at many levels. Setting aside issues of forms and covariant/contravariant components, the answer is:




The dot product is the product of the magnitudes of the two vectors, times the cosine of the angle between them.




No matter what basis you compute that in, you have to get the same answer because it's a physical quantity.



The usual "sum of products of orthonormal components" is then a convenient computational approach, but as you've seen it's not the only way to compute them.



The dot product's properties include linearity, commutativity, distributivity, etc. So when you expand the dot product

$$(a_x \hat{x}+a_y \hat{y} + a_z \hat{z}) \cdot (b_x \hat{X}+b_y \hat{Y} + b_z \hat{Z})$$

you get nine terms like $(a_x b_x\, \hat{x}\cdot\hat{X}) + (a_x b_y\, \hat{x}\cdot\hat{Y}) + \cdots$ In the usual orthonormal basis, the same-axis factors like $\hat{x}\cdot\hat{X}$ just become 1, while the different-axis factors like $\hat{x}\cdot\hat{Y}$ are zero. That reduces to the formula you know.



In a non-orthonormal basis, you have to figure out what those basis products are. To do that, you refer back to the definition: The product of the size of each, times the cosine of the angle between. Once you have all of those, you're again all set to compute. It just looks a bit more complicated...
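
As an illustration of that recipe, here is a minimal Python sketch (the 30-degree rotation relating the two bases and the component values are made up for the example): the nine basis products $\hat{e}_i \cdot \hat{e}_j'$ are filled in from lengths and angles, and the expansion is summed directly.

    import math

    theta = math.radians(30)  # assume the primed basis is the unprimed one rotated 30 deg about e_3

    # basis_dot[i][j] = e_i . e_j' = |e_i| |e_j'| cos(angle between them);
    # for unit vectors related by a rotation, these are the rotation-matrix entries.
    c, s = math.cos(theta), math.sin(theta)
    basis_dot = [[c, -s, 0.0],
                 [s,  c, 0.0],
                 [0.0, 0.0, 1.0]]

    a = [1.0, 2.0, 3.0]  # components of the first vector in the unprimed basis
    b = [4.0, 5.0, 6.0]  # components of the second vector in the primed basis

    # Expand (sum_i a_i e_i) . (sum_j b_j e_j') into the nine terms.
    dot = sum(a[i] * b[j] * basis_dot[i][j] for i in range(3) for j in range(3))
    print(dot)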






answered May 13 at 0:26 by Bob Jacobsen, edited May 13 at 1:36









  • I don't think the dot product is associative.
    – eyeballfrog, May 13 at 1:18

  • "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." Only if you have a physical vector. If we're speaking mathematically, vectors can be abstract objects, and the "angle" is not defined. In fact, generally speaking, if "angle" is defined, it's defined in terms of the dot product, making your definition circular.
    – Acccumulation, May 13 at 17:14

  • @Acccumulation This is Physics Stack Exchange.
    – Bob Jacobsen, May 13 at 17:18

  • @Bob Jacobsen Yes, but physics also has abstract Hilbert spaces. Consider, for example, Quantum Mechanics.
    – scaphys, May 13 at 18:36

  • "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." What about two bases which are not related by an orthogonal transformation? For example $\hat e_i = 2 \hat f_i$.
    – Display Name, May 13 at 18:54




















Dot products, or inner products, are defined axiomatically, or abstractly. An inner product on a vector space $V$ over $\mathbb{R}$ is a pairing $V\times V\to \mathbb{R}$, denoted by $\langle u,v\rangle$, with properties $\langle u,v\rangle=\langle v,u\rangle$, $\langle u+cw,v\rangle= \langle u,v\rangle+c\langle w,v\rangle$, and $\langle u,u\rangle\gt0$ if $u\ne0$. In general, a vector space can be endowed with an inner product in many ways. Notice here there is no reference to a basis/coordinate system.



Using what is called the Gram-Schmidt process, one can then construct a basis $\{e_1,\cdots, e_n\}$ for $V$ in which the inner product takes the computational form which you stated in your question.



In your question, you are actually starting with what is called an orthonormal basis for an inner product. The coordinate-free approach is to state the postulates that an inner product should obey, then after being given an explicit inner product, construct an orthonormal basis in which to do computations.



In general, an orthonormal basis $\{e_1,e_2,e_3\}$ for one inner product on $V$ will not be an orthonormal basis for another inner product on $V$.
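
To make the Gram-Schmidt step concrete, here is a small Python sketch for the standard inner product on $\mathbb{R}^3$ (the starting vectors are arbitrary examples of my own):

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def gram_schmidt(vectors):
        # Orthonormalise linearly independent vectors with respect to dot().
        basis = []
        for v in vectors:
            w = list(v)
            for e in basis:
                c = dot(w, e)                      # projection coefficient <w, e>
                w = [wi - c * ei for wi, ei in zip(w, e)]
            n = dot(w, w) ** 0.5                   # norm of what is left
            basis.append([wi / n for wi in w])
        return basis

    # Arbitrary input; the output satisfies e_i . e_j = delta_ij.
    e = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])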






answered May 13 at 2:32 by user52817






















    The dot product can be defined in a coordinate-independent way as



    $$\vec{a}\cdot\vec{b}=|\vec{a}|\,|\vec{b}|\cos\theta$$



    where $\theta$ is the angle between the two vectors. This involves only lengths and angles, not coordinates.



    To use your first formula, the coordinates must be in the same basis.



    You can convert between bases using a rotation matrix, and the fact that a rotation matrix preserves vector lengths is sufficient to show that it preserves the dot product. This is because



    $$\vec{a}\cdot\vec{b}=\frac{1}{2}\left(|\vec{a}+\vec{b}|^2-|\vec{a}|^2-|\vec{b}|^2\right).$$



    This formula is another purely-geometric, coordinate-free definition of the dot product.
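
    A quick numerical check of that last identity (a sketch of my own, with arbitrary example vectors), using only lengths on the right-hand side:

        import math

        a = (1.0, 2.0, 3.0)
        b = (-2.0, 0.5, 4.0)

        def length(v):
            return math.sqrt(sum(c * c for c in v))

        lhs = sum(ai * bi for ai, bi in zip(a, b))   # component formula
        s = tuple(ai + bi for ai, bi in zip(a, b))   # the vector a + b
        rhs = 0.5 * (length(s) ** 2 - length(a) ** 2 - length(b) ** 2)
        print(lhs, rhs)                              # the two values agree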






    answered May 13 at 0:14 by G. Smith, edited May 13 at 0:23













  • Thank you! That makes sense. But what happens if you are dealing with a non-orthonormal system? Is the dot product's value preserved in making the coordinate transformation?
    – dts, May 13 at 0:21

  • Yes, the value is preserved, but the coordinate-based formula in a non-orthonormal basis is more complicated than your first formula.
    – G. Smith, May 13 at 0:31

  • "You can convert between bases using a rotation matrix": I strongly disagree. Only if the basis vectors are normalised, but that needn't be the case. However, there exists a matrix $A$ such that $e_i^\prime = A e_i$, where $e_i$ is to be understood as the $i$th basis vector (not the component).
    – infinitezero, May 13 at 16:33






















    The coordinate-free definition of a dot product is:

    $$\vec a \cdot \vec b = \frac{1}{4}\left[(\vec a + \vec b)^2 - (\vec a - \vec b)^2\right]$$

    It's up to you to figure out what the norm is:

    $$\|\vec a\| = \sqrt{(\vec a)^2}$$



    Here is a reference for this viewpoint:
    http://www.pmaweb.caltech.edu/Courses/ph136/yr2012/1202.1.K.pdf
    Section 2.3
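
    As a numerical sanity check (my own sketch; the example vectors are arbitrary), this quarter-difference formula recovers the usual dot product from the norm alone:

        import math

        def norm(v):                          # the externally supplied norm ||v||
            return math.sqrt(sum(c * c for c in v))

        def dot(a, b):
            # a . b = ( ||a+b||^2 - ||a-b||^2 ) / 4, using only the norm
            plus  = [ai + bi for ai, bi in zip(a, b)]
            minus = [ai - bi for ai, bi in zip(a, b)]
            return (norm(plus) ** 2 - norm(minus) ** 2) / 4.0

        print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))   # 32.0, matching x1*y1 + x2*y2 + x3*y3

    As the comments below note, a norm yields an inner product this way only if it satisfies the parallelogram identity.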















  • This is a circular definition, as the norm is defined via the dot product.
    – Winther, May 13 at 9:01

  • @Winther You've got to input something: the dot product cannot be derived only from the underlying vector space structure. The norm seems a reasonable choice here, for geometric intuition.
    – Denis Nardin, May 13 at 11:06

  • This will only define an inner product iff the norm satisfies the parallelogram identity $2\|x\|^2+2\|y\|^2=\|x+y\|^2+\|x-y\|^2$.
    – Jannik Pitt, May 13 at 11:50

  • Yes, you have to input something: either define a norm, or define an inner product and have the norm be induced by it. However, my point was that you seem to define the norm via $\|a\|=\sqrt{a\cdot a}$, which is why I said it was circular. On second reading, it does look like you say you need to specify the norm externally, so then this would be fine. However, doesn't the definition of the norm then require you to specify a coordinate system, so that it's not really coordinate-free?
    – Winther, May 13 at 13:48

  • @Winther Well, it depends on how your vector space is given to you. If your vectors are a bunch of coordinates (like in the usual description of $\mathbb{R}^n$), of course every definition you give will be coordinate dependent (coordinates are all you have!), but if your vector space is composed of something more exotic (e.g. the space of solutions of a certain ODE) then you can hope to write down a definition of the norm using something else. (And yes, indeed a Banach space is Hilbert iff the norm satisfies the parallelogram identity, plus some added condition if over $\mathbb{C}$.)
    – Denis Nardin, May 13 at 18:10




















    Computing the following matrix product will give you the dot product: $$\begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}\begin{bmatrix} e_1\cdot e'_1 & e_1\cdot e'_2 & e_1\cdot e'_3 \\ e_2\cdot e'_1 & e_2\cdot e'_2 & e_2\cdot e'_3 \\ e_3\cdot e'_1 & e_3\cdot e'_2 & e_3\cdot e'_3\end{bmatrix}\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}$$ If we transform the coordinates of a vector, only the components and the basis of the vector change; the vector itself remains unchanged. Thus the dot product remains unchanged even if we compute it between primed and unprimed vectors.
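
    A direct transcription of that matrix product in Python (my sketch; the overlap matrix $G$ below assumes, purely for the example, that the primed basis is the unprimed one rotated 30 degrees about $e_3$):

        import numpy as np

        x = np.array([1.0, 2.0, 3.0])   # components in the unprimed basis
        y = np.array([4.0, 5.0, 6.0])   # components in the primed basis

        theta = np.radians(30.0)
        G = np.array([[np.cos(theta), -np.sin(theta), 0.0],   # G[i, j] = e_i . e'_j
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0,            0.0,           1.0]])

        print(x @ G @ y)                # the row-vector, matrix, column-vector product above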















  • I like this because it provides a prior motivation for representing inner products with a metric tensor in relativity.
    – dmckee, May 16 at 16:28




















    A vector space (or linear space) is a set and two operations, which are vector addition and scalar multiplication, and some rules (spelled out in the Definition section of this Wikipedia article). The net result of this definition is that vectors behave like little arrows or ordered tuples under addition and scalar multiplication.



    This is good, but often more structure is needed. (See the Vector Spaces with Additional Structure section of the link above.)



    For example, a norm can be defined on a vector space. This defines a magnitude or length for each vector. Again there are some rules: no magnitude can be negative; only the $\vec{0}$ vector can have a magnitude of $0$; and the triangle inequality, $\lvert a+b\rvert \le \lvert a\rvert + \lvert b\rvert$, must hold.



    Likewise an inner product can be defined on a vector space. It adds enough structure to support the ideas of orthogonality and projection. For spaces where it makes sense, this leads to the idea of angle.



    The formal definition of an inner product is that it is a function that associates two vectors with a number, subject to some rules. See this for the details.





    These are general definitions which work on all vector spaces. The links above give examples of vector spaces that may not be familiar. E.g., the set of all functions of the form $y = ax^2 + bx + c$ is a 3-dimensional vector space.



    The most familiar vector spaces are $N$-dimensional Euclidean spaces. These are normed vector spaces, where the norm matches the everyday definition of distance.



    The dot product is the inner product on these spaces that matches the everyday definition of orthogonality and angle. See this Wikipedia article.
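
    To make the polynomial example concrete, here is a small sketch (my addition; the choice $\langle p,q\rangle = \int_0^1 p(x)\,q(x)\,dx$ is just one of the many possible inner products on that space):

        from fractions import Fraction

        def inner(p, q):
            # <p, q> = integral_0^1 p(x) q(x) dx, where p = (a, b, c) means ax^2 + bx + c.
            a1, b1, c1 = p
            a2, b2, c2 = q
            # Coefficients of p(x) * q(x), from x^4 down to the constant term.
            prod = [a1 * a2, a1 * b2 + b1 * a2, a1 * c2 + b1 * b2 + c1 * a2,
                    b1 * c2 + c1 * b2, c1 * c2]
            # integral_0^1 x^k dx = 1 / (k + 1)
            return sum(Fraction(coef, 5 - k) for k, coef in enumerate(prod))

        p, q = (1, 0, 0), (0, 0, 1)     # p(x) = x^2, q(x) = 1
        print(inner(p, q))              # 1/3; also inner(p, p) > 0, as the axioms require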





























      How would I compute their dot product?




      You pretty much have to convert them to the same basis system. You can multiply them out and get nine different terms, and then find the dot product in terms of the nine dot products of the basis vectors, but the math is pretty much the same as converting to the same coordinate system.




      In particular, is there a more formal/abstract/generalized definition of the dot product (that would allow me to compute $\vec{e_1} \cdot \vec{e_1'}$ without converting the vectors to the same coordinate system)?




      The value of $\vec{e_1} \cdot \vec{e_1'}$ is an empirical value. You can't calculate it simply from a definition.




      Even if I did convert the vectors to the same coordinate system, why do we know that the result will be the same if I multiply the components in the primed system versus in the unprimed system?




      Given a physical system in which "length" and "angle" are defined, the dot product is invariant under rotations and reflections, i.e. orthonormal transformations. So given two coordinate systems, as long as the axes are orthogonal to each other within each coordinate system, and the two coordinate systems have the same origin and the same scale (one unit is the same length, regardless of which direction or coordinate system), dot products will be the same.



      In that case, the change of basis can be represented with a matrix $U$ such that $U^*U=I$. (For real numbers, $U^*$ is just the transpose, so I'll use that for the rest, since presumably you're asking about vectors over the real numbers.) The dot product of two vectors $x$ and $y$ is $x^Ty$. If $x'=Ux$ and $y'=Uy$, then the dot product of $x'$ and $y'$ is $x'^Ty'=(Ux)^T(Uy)=x^TU^TUy=x^TIy=x^Ty$.
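
      A quick numerical version of that argument (my sketch; the rotation angle is arbitrary):

          import numpy as np

          theta = 0.7                              # any angle gives an orthogonal U
          U = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                        [np.sin(theta),  np.cos(theta), 0.0],
                        [0.0,            0.0,           1.0]])
          assert np.allclose(U.T @ U, np.eye(3))   # U^T U = I for an orthonormal change of basis

          x = np.array([1.0, 2.0, 3.0])
          y = np.array([4.0, 5.0, 6.0])
          print(x @ y, (U @ x) @ (U @ y))          # same value: x^T y = x'^T y'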




























        The formula



        $$\langle x_1,x_2,x_3\rangle \cdot \langle y_1,y_2,y_3\rangle = x_1 y_1 + x_2 y_2 + x_3 y_3$$



        is just a start and, as you go further in physics, will need quite a few generalizations. The assumptions here are that your vectors are (a) real, (b) three-dimensional, (c) tuples, (d) written in a "standard basis". There are points at which any of these is broken: for example, one of the first things you learn in the special theory of relativity(*) is how to work with (b') four-dimensional vectors that (d') don't even allow a standard basis at all, so you get a different formula (of which this is a special case). Similarly, in quantum mechanics, depending on the text, you need to grasp (a') complex vector spaces of (b'') infinite-dimensional things that (c') may not be tuples at all (although they often can be written so, again allowing a formula of which this is a special case).



        You just figured out yourself that (d) will not always be the case, and that's a splendid job on your part.



        Before any of those generalizations take place, the assumptions (a - d) are taken for granted. That is, we are working in a basis
        $$e_1 \equiv \langle 1,0,0 \rangle \\
        e_2 \equiv \langle 0,1,0 \rangle \\
        e_3 \equiv \langle 0,0,1 \rangle$$

        and
        $$e_1 \cdot e_1 = 1,\quad e_1 \cdot e_2 = 0,\quad e_1 \cdot e_3 = 0,\ \text{etc.}$$
        If a triple of numbers is written, it is in this basis. While there are other bases, they just represent concrete triples which you have to multiply by the corresponding coefficients and sum up, effectively transforming to $(e_1, e_2, e_3)$, if you insist on applying the scalar product formula above.



        The generalization to taking vectors not as triples of numbers, but as combinations of some abstract $e'_1$, $e'_2$, $e'_3$, then requires specifying what $e'_i \cdot e'_j$ is for all $i$, $j$, as other answers have already said in plenty of ways. If $(e_i)$ and $(e'_i)$ are two different bases, and you know the scalar product in one, the scalar product in the other can be computed from the relations between the basis vectors. And so can a formula for taking scalar products of two vectors, one in each of the two bases.



        The basic idea remains, though, and it is a good idea to get oneself familiarized with all the aspects of the above as deeply as possible: to understand the relation between scalar product and norm, orthogonality, expression of geometrical properties and relations (length, angle, distance), etc., before things get too abstract. That's why many texts just hold on to the simplest formula as long as they can.





        To actually answer your question: let



        $$\vec{x} = x_1 \vec{e_1} + x_2 \vec{e_2} + x_3 \vec{e_3}$$
        $$\vec{y} = y_1 \vec{e_1'} + y_2 \vec{e_2'} + y_3 \vec{e_3'}$$

        such that $(\vec{e_1}, \vec{e_2}, \vec{e_3})$ is the standard basis. Let further

        $$\vec{e_i'} = \sum_{j=1}^3 E_{i,j} \vec{e_j},$$

        so using distributivity and linearity it holds that

        $$\vec{e_i'} \cdot \vec{e_k}
        = \left( \sum_{j=1}^3 E_{i,j} \vec{e_j} \right) \cdot \vec{e_k}
        = \sum_{j=1}^3 E_{i,j} \left( \vec{e_j} \cdot \vec{e_k} \right)
        = \sum_{j=1}^3 E_{i,j} \delta_{jk} \ (**)
        = E_{i,k}$$

        (also $\vec{e_k} \cdot \vec{e_i'} = E_{i,k}$), so

        $$\vec{x} \cdot \vec{y}
        = \left( \sum_{i=1}^3 x_i \vec{e_i} \right) \cdot \left( \sum_{j=1}^3 y_j \vec{e_j'} \right)
        = \sum_{i=1}^3 \sum_{j=1}^3 x_i y_j \left( \vec{e_i} \cdot \vec{e_j'} \right)
        = \sum_{i=1}^3 \sum_{j=1}^3 x_i y_j E_{j,i}.$$

        You can use this formula for taking dot products of two vectors in different bases.
        I'm not sure if this counts as not converting to the same basis or not: you will need the conversion matrix $(E_{i,j})$ anyway. You won't need to explicitly write $\vec{y}$ in the $(\vec{e_i})$ basis beforehand, though.





        (*) Mathematically speaking, special relativity does not use an actual 'scalar product'. But for my example this suffices without further details.



        (**) $\delta_{jk}$ is shorthand for "one when $j=k$ and zero otherwise".
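
        A literal transcription of the final formula (my sketch; the matrix $E$ below is an invented example, since the answer leaves it unspecified):

            import numpy as np

            E = np.array([[1.0, 0.0, 0.0],   # made-up change-of-basis matrix:
                          [1.0, 1.0, 0.0],   # e'_i = sum_j E[i, j] e_j
                          [0.0, 0.0, 2.0]])

            x = np.array([1.0, 2.0, 3.0])    # components of x in the (e_i) basis
            y = np.array([4.0, 5.0, 6.0])    # components of y in the (e'_i) basis

            # x . y = sum_{i,j} x_i y_j E[j, i]
            direct = sum(x[i] * y[j] * E[j, i] for i in range(3) for j in range(3))

            # Sanity check: convert y to the (e_i) basis first, then use the plain formula.
            y_unprimed = E.T @ y             # y's components in (e_i)
            assert np.isclose(direct, x @ y_unprimed)
            print(direct)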






        share|cite|improve this answer











        $endgroup$














          Your Answer








          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "151"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fphysics.stackexchange.com%2fquestions%2f479656%2fformal-definition-of-dot-product%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown

























          8 Answers
          8






          active

          oldest

          votes








          8 Answers
          8






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          16












          $begingroup$

          Your top-line question can be answered at many levels. Setting aside issues of forms and covariant/contravariant, the answer is:




          The dot product is the product of the magnitudes of the two vectors, times the cosine of the angle between them.




          No matter what basis you compute that in, you have to get the same answer because it's a physical quantity.



          The usual "sum of products of orthonormal components" is then a convenient computational approach, but as you've seen it's not the only way to compute them.



          The dot product's properties includes linear, commutative, distributive, etc. So when you expand the dot product



          $$(a_x hat{x}+a_y hat{y} + a_z hat{z}) cdot (b_x hat{X}+b_y hat{Y} + b_z hat{Z})$$



          you get nine terms like $( a_x b_x hat{x}cdothat{X}) + (a_x b_y hat{x}cdothat{Y})+$ etc. In the usual orthonormal basis, the same-axis $hat{x}cdothat{X}$ factors just become 1, while the different-axis $hat{x}cdothat{Y}$ et al factors are zero. That reduces to the formula you know.



          In a non-orthonormal basis, you have to figure out what those basis products are. To do that, you refer back to the definition: The product of the size of each, times the cosine of the angle between. Once you have all of those, you're again all set to compute. It just looks a bit more complicated...






          share|cite|improve this answer











          $endgroup$









          • 13




            $begingroup$
            I don't think the dot product is associative.
            $endgroup$
            – eyeballfrog
            May 13 at 1:18






          • 4




            $begingroup$
            "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." Only if you have a physical vector. If we're speaking mathematically, vectors can be abstract objects, and the "angle" is not defined. In fact, generally speaking, if "angle" is defined, it's defined in terms of the dot product, making your definition circular.
            $endgroup$
            – Acccumulation
            May 13 at 17:14






          • 3




            $begingroup$
            @Acccumulation This is Physics Stack Exchange.
            $endgroup$
            – Bob Jacobsen
            May 13 at 17:18






          • 4




            $begingroup$
            @Bob Jacobsen Yes, but physics also has abstract Hilbert spaces. Consider, for example, Quantum Mechanics.
            $endgroup$
            – scaphys
            May 13 at 18:36






          • 2




            $begingroup$
            "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." What about two bases which are not related by an orthogonal transformation? For example $hat e_i = 2 hat f_i$.
            $endgroup$
            – Display Name
            May 13 at 18:54
















          16












          $begingroup$

          Your top-line question can be answered at many levels. Setting aside issues of forms and covariant/contravariant, the answer is:




          The dot product is the product of the magnitudes of the two vectors, times the cosine of the angle between them.




          No matter what basis you compute that in, you have to get the same answer because it's a physical quantity.



          The usual "sum of products of orthonormal components" is then a convenient computational approach, but as you've seen it's not the only way to compute them.



          The dot product's properties includes linear, commutative, distributive, etc. So when you expand the dot product



          $$(a_x hat{x}+a_y hat{y} + a_z hat{z}) cdot (b_x hat{X}+b_y hat{Y} + b_z hat{Z})$$



          you get nine terms like $( a_x b_x hat{x}cdothat{X}) + (a_x b_y hat{x}cdothat{Y})+$ etc. In the usual orthonormal basis, the same-axis $hat{x}cdothat{X}$ factors just become 1, while the different-axis $hat{x}cdothat{Y}$ et al factors are zero. That reduces to the formula you know.



          In a non-orthonormal basis, you have to figure out what those basis products are. To do that, you refer back to the definition: The product of the size of each, times the cosine of the angle between. Once you have all of those, you're again all set to compute. It just looks a bit more complicated...






          share|cite|improve this answer











          $endgroup$









          • 13




            $begingroup$
            I don't think the dot product is associative.
            $endgroup$
            – eyeballfrog
            May 13 at 1:18






          • 4




            $begingroup$
            "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." Only if you have a physical vector. If we're speaking mathematically, vectors can be abstract objects, and the "angle" is not defined. In fact, generally speaking, if "angle" is defined, it's defined in terms of the dot product, making your definition circular.
            $endgroup$
            – Acccumulation
            May 13 at 17:14






          • 3




            $begingroup$
            @Acccumulation This is Physics Stack Exchange.
            $endgroup$
            – Bob Jacobsen
            May 13 at 17:18






          • 4




            $begingroup$
            @Bob Jacobsen Yes, but physics also has abstract Hilbert spaces. Consider, for example, Quantum Mechanics.
            $endgroup$
            – scaphys
            May 13 at 18:36






          • 2




            $begingroup$
            "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." What about two bases which are not related by an orthogonal transformation? For example $hat e_i = 2 hat f_i$.
            $endgroup$
            – Display Name
            May 13 at 18:54














          16












          16








          16





          $begingroup$

          Your top-line question can be answered at many levels. Setting aside issues of forms and covariant/contravariant, the answer is:




          The dot product is the product of the magnitudes of the two vectors, times the cosine of the angle between them.




          No matter what basis you compute that in, you have to get the same answer because it's a physical quantity.



          The usual "sum of products of orthonormal components" is then a convenient computational approach, but as you've seen it's not the only way to compute them.



          The dot product's properties includes linear, commutative, distributive, etc. So when you expand the dot product



          $$(a_x hat{x}+a_y hat{y} + a_z hat{z}) cdot (b_x hat{X}+b_y hat{Y} + b_z hat{Z})$$



          you get nine terms like $( a_x b_x hat{x}cdothat{X}) + (a_x b_y hat{x}cdothat{Y})+$ etc. In the usual orthonormal basis, the same-axis $hat{x}cdothat{X}$ factors just become 1, while the different-axis $hat{x}cdothat{Y}$ et al factors are zero. That reduces to the formula you know.



          In a non-orthonormal basis, you have to figure out what those basis products are. To do that, you refer back to the definition: The product of the size of each, times the cosine of the angle between. Once you have all of those, you're again all set to compute. It just looks a bit more complicated...






          share|cite|improve this answer











          $endgroup$



          Your top-line question can be answered at many levels. Setting aside issues of forms and covariant/contravariant, the answer is:




          The dot product is the product of the magnitudes of the two vectors, times the cosine of the angle between them.




          No matter what basis you compute that in, you have to get the same answer because it's a physical quantity.



          The usual "sum of products of orthonormal components" is then a convenient computational approach, but as you've seen it's not the only way to compute them.



          The dot product's properties includes linear, commutative, distributive, etc. So when you expand the dot product



          $$(a_x hat{x}+a_y hat{y} + a_z hat{z}) cdot (b_x hat{X}+b_y hat{Y} + b_z hat{Z})$$



          you get nine terms like $( a_x b_x hat{x}cdothat{X}) + (a_x b_y hat{x}cdothat{Y})+$ etc. In the usual orthonormal basis, the same-axis $hat{x}cdothat{X}$ factors just become 1, while the different-axis $hat{x}cdothat{Y}$ et al factors are zero. That reduces to the formula you know.



          In a non-orthonormal basis, you have to figure out what those basis products are. To do that, you refer back to the definition: The product of the size of each, times the cosine of the angle between. Once you have all of those, you're again all set to compute. It just looks a bit more complicated...







          share|cite|improve this answer














          share|cite|improve this answer



          share|cite|improve this answer








          edited May 13 at 1:36

























          answered May 13 at 0:26









          Bob JacobsenBob Jacobsen

          6,2201021




          6,2201021








          • 13




            $begingroup$
            I don't think the dot product is associative.
            $endgroup$
            – eyeballfrog
            May 13 at 1:18






          • 4




            $begingroup$
            "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." Only if you have a physical vector. If we're speaking mathematically, vectors can be abstract objects, and the "angle" is not defined. In fact, generally speaking, if "angle" is defined, it's defined in terms of the dot product, making your definition circular.
            $endgroup$
            – Acccumulation
            May 13 at 17:14






          • 3




            $begingroup$
            @Acccumulation This is Physics Stack Exchange.
            $endgroup$
            – Bob Jacobsen
            May 13 at 17:18






          • 4




            $begingroup$
            @Bob Jacobsen Yes, but physics also has abstract Hilbert spaces. Consider, for example, Quantum Mechanics.
            $endgroup$
            – scaphys
            May 13 at 18:36






          • 2




            $begingroup$
            "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." What about two bases which are not related by an orthogonal transformation? For example $hat e_i = 2 hat f_i$.
            $endgroup$
            – Display Name
            May 13 at 18:54














          • 13




            $begingroup$
            I don't think the dot product is associative.
            $endgroup$
            – eyeballfrog
            May 13 at 1:18






          • 4




            $begingroup$
            "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." Only if you have a physical vector. If we're speaking mathematically, vectors can be abstract objects, and the "angle" is not defined. In fact, generally speaking, if "angle" is defined, it's defined in terms of the dot product, making your definition circular.
            $endgroup$
            – Acccumulation
            May 13 at 17:14






          • 3




            $begingroup$
            @Acccumulation This is Physics Stack Exchange.
            $endgroup$
            – Bob Jacobsen
            May 13 at 17:18






          • 4




            $begingroup$
            @Bob Jacobsen Yes, but physics also has abstract Hilbert spaces. Consider, for example, Quantum Mechanics.
            $endgroup$
            – scaphys
            May 13 at 18:36






          • 2




            $begingroup$
            "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." What about two bases which are not related by an orthogonal transformation? For example $hat e_i = 2 hat f_i$.
            $endgroup$
            – Display Name
            May 13 at 18:54








          13




          13




          $begingroup$
          I don't think the dot product is associative.
          $endgroup$
          – eyeballfrog
          May 13 at 1:18




          $begingroup$
          I don't think the dot product is associative.
          $endgroup$
          – eyeballfrog
          May 13 at 1:18




          4




          4




          $begingroup$
          "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." Only if you have a physical vector. If we're speaking mathematically, vectors can be abstract objects, and the "angle" is not defined. In fact, generally speaking, if "angle" is defined, it's defined in terms of the dot product, making your definition circular.
          $endgroup$
          – Acccumulation
          May 13 at 17:14




          $begingroup$
          "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." Only if you have a physical vector. If we're speaking mathematically, vectors can be abstract objects, and the "angle" is not defined. In fact, generally speaking, if "angle" is defined, it's defined in terms of the dot product, making your definition circular.
          $endgroup$
          – Acccumulation
          May 13 at 17:14




          3




          3




          $begingroup$
          @Acccumulation This is Physics Stack Exchange.
          $endgroup$
          – Bob Jacobsen
          May 13 at 17:18




          $begingroup$
          @Acccumulation This is Physics Stack Exchange.
          $endgroup$
          – Bob Jacobsen
          May 13 at 17:18




          4




          4




          $begingroup$
          @Bob Jacobsen Yes, but physics also has abstract Hilbert spaces. Consider, for example, Quantum Mechanics.
          $endgroup$
          – scaphys
          May 13 at 18:36




          $begingroup$
          @Bob Jacobsen Yes, but physics also has abstract Hilbert spaces. Consider, for example, Quantum Mechanics.
          $endgroup$
          – scaphys
          May 13 at 18:36




          2




          2




          $begingroup$
          "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." What about two bases which are not related by an orthogonal transformation? For example $hat e_i = 2 hat f_i$.
          $endgroup$
          – Display Name
          May 13 at 18:54




          $begingroup$
          "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." What about two bases which are not related by an orthogonal transformation? For example $hat e_i = 2 hat f_i$.
          $endgroup$
          – Display Name
          May 13 at 18:54











          18












          $begingroup$

          Dot products, or inner products are defined axiomatically, or abstractly. An inner product on a vector space $V$ over $R$ is a pairing $Vtimes Vto R$, denoted by $ langle u,vrangle$, with properties $langle u,vrangle=langle v,urangle$, $langle u+cw,vrangle= langle u,vrangle+clangle w,vrangle$, and $ langle u,uranglegt0$ if $une0$. In general, a vector space can be endowed with an inner product in many ways. Notice here there is no reference to a basis/coordinate system.



          Using what is called the Gram-Schmidt process, one can then construct a basis ${e_1,cdots e_n}$ for $V$ in which the inner product takes the computational form which you stated in your question.



          In your question, you are actually starting with what is called an orthonormal basis for an inner product. The coordinate-free approach is to state the postulates that an inner product should obey, then after being given an explicit inner product, construct an orthonormal basis in which to do computations.



          In general, an orthonormal basis ${e_1,e_2,e_3}$ for one inner product on $V$ will not be an orthonormal basis for another inner product on $V$.






          share|cite|improve this answer









          $endgroup$


















            18












            $begingroup$

            Dot products, or inner products are defined axiomatically, or abstractly. An inner product on a vector space $V$ over $R$ is a pairing $Vtimes Vto R$, denoted by $ langle u,vrangle$, with properties $langle u,vrangle=langle v,urangle$, $langle u+cw,vrangle= langle u,vrangle+clangle w,vrangle$, and $ langle u,uranglegt0$ if $une0$. In general, a vector space can be endowed with an inner product in many ways. Notice here there is no reference to a basis/coordinate system.



            Using what is called the Gram-Schmidt process, one can then construct a basis ${e_1,cdots e_n}$ for $V$ in which the inner product takes the computational form which you stated in your question.



            In your question, you are actually starting with what is called an orthonormal basis for an inner product. The coordinate-free approach is to state the postulates that an inner product should obey, then after being given an explicit inner product, construct an orthonormal basis in which to do computations.



            In general, an orthonormal basis ${e_1,e_2,e_3}$ for one inner product on $V$ will not be an orthonormal basis for another inner product on $V$.






            share|cite|improve this answer









            $endgroup$
















              18












              18








              18





              $begingroup$

              Dot products, or inner products are defined axiomatically, or abstractly. An inner product on a vector space $V$ over $R$ is a pairing $Vtimes Vto R$, denoted by $ langle u,vrangle$, with properties $langle u,vrangle=langle v,urangle$, $langle u+cw,vrangle= langle u,vrangle+clangle w,vrangle$, and $ langle u,uranglegt0$ if $une0$. In general, a vector space can be endowed with an inner product in many ways. Notice here there is no reference to a basis/coordinate system.



              Using what is called the Gram-Schmidt process, one can then construct a basis ${e_1,cdots e_n}$ for $V$ in which the inner product takes the computational form which you stated in your question.



              In your question, you are actually starting with what is called an orthonormal basis for an inner product. The coordinate-free approach is to state the postulates that an inner product should obey, then after being given an explicit inner product, construct an orthonormal basis in which to do computations.



              In general, an orthonormal basis ${e_1,e_2,e_3}$ for one inner product on $V$ will not be an orthonormal basis for another inner product on $V$.






              share|cite|improve this answer









              $endgroup$



              Dot products, or inner products are defined axiomatically, or abstractly. An inner product on a vector space $V$ over $R$ is a pairing $Vtimes Vto R$, denoted by $ langle u,vrangle$, with properties $langle u,vrangle=langle v,urangle$, $langle u+cw,vrangle= langle u,vrangle+clangle w,vrangle$, and $ langle u,uranglegt0$ if $une0$. In general, a vector space can be endowed with an inner product in many ways. Notice here there is no reference to a basis/coordinate system.



              Using what is called the Gram-Schmidt process, one can then construct a basis ${e_1,cdots e_n}$ for $V$ in which the inner product takes the computational form which you stated in your question.



              In your question, you are actually starting with what is called an orthonormal basis for an inner product. The coordinate-free approach is to state the postulates that an inner product should obey, then after being given an explicit inner product, construct an orthonormal basis in which to do computations.



              In general, an orthonormal basis ${e_1,e_2,e_3}$ for one inner product on $V$ will not be an orthonormal basis for another inner product on $V$.







              share|cite|improve this answer












              share|cite|improve this answer



              share|cite|improve this answer










              answered May 13 at 2:32









              user52817user52817

              3313




              3313























                  9












                  $begingroup$

                  The dot product can be defined in a coordinate-independent way as



                  $$vec{a}cdotvec{b}=|vec{a}||vec{b}|costheta$$



                  where $theta$ is the angle between the two vectors. This involves only lengths and angles, not coordinates.



                  To use your first formula, the coordinates must be in the same basis.



                  You can convert between bases using a rotation matrix, and the fact that a rotation matrix preserves vector lengths is sufficient to show that it preserves the dot product. This is because



                  $$vec{a}cdotvec{b}=frac{1}{2}left(|vec{a}+vec{b}|^2-|vec{a}|^2-|vec{b}|^2right).$$



                  This formula is another purely-geometric, coordinate-free definition of the dot product.






                  share|cite|improve this answer











                  $endgroup$













                  • $begingroup$
                    Thank you! That makes sense. But what happens if you are dealing with a non-orthonormal system? Is the dot product's value preserved in making the coordinate transformation?
                    $endgroup$
                    – dts
                    May 13 at 0:21










                  • $begingroup$
                    Yes, the value is preserved, but the coordinate-based formula in a non-orthonormal basis is more complicated than your first formula.
                    $endgroup$
                    – G. Smith
                    May 13 at 0:31






                  • 1




                    $begingroup$
                    "You can convert between bases using a rotation matrix", I strongly disagree. Only if the base vectors are normalised, but that needn't be the case. However there exists a Matrix $A$ such that $e_i^prime = A e_i$ where $e_i$ is to be understood at the ith basic vector (not the component).
                    $endgroup$
                    – infinitezero
                    May 13 at 16:33


















                  9












                  $begingroup$

                  The dot product can be defined in a coordinate-independent way as



                  $$vec{a}cdotvec{b}=|vec{a}||vec{b}|costheta$$



                  where $theta$ is the angle between the two vectors. This involves only lengths and angles, not coordinates.



                  To use your first formula, the coordinates must be in the same basis.



                  You can convert between bases using a rotation matrix, and the fact that a rotation matrix preserves vector lengths is sufficient to show that it preserves the dot product. This is because



                  $$vec{a}cdotvec{b}=frac{1}{2}left(|vec{a}+vec{b}|^2-|vec{a}|^2-|vec{b}|^2right).$$



                  This formula is another purely-geometric, coordinate-free definition of the dot product.






                  share|cite|improve this answer











                  $endgroup$













                  • $begingroup$
                    Thank you! That makes sense. But what happens if you are dealing with a non-orthonormal system? Is the dot product's value preserved in making the coordinate transformation?
                    $endgroup$
                    – dts
                    May 13 at 0:21










                  • $begingroup$
                    Yes, the value is preserved, but the coordinate-based formula in a non-orthonormal basis is more complicated than your first formula.
                    $endgroup$
                    – G. Smith
                    May 13 at 0:31






                  • 1




                    $begingroup$
                    "You can convert between bases using a rotation matrix", I strongly disagree. Only if the base vectors are normalised, but that needn't be the case. However there exists a Matrix $A$ such that $e_i^prime = A e_i$ where $e_i$ is to be understood at the ith basic vector (not the component).
                    $endgroup$
                    – infinitezero
                    May 13 at 16:33
















                  9












                  9








                  9





                  $begingroup$

                  The dot product can be defined in a coordinate-independent way as



                  $$vec{a}cdotvec{b}=|vec{a}||vec{b}|costheta$$



                  where $theta$ is the angle between the two vectors. This involves only lengths and angles, not coordinates.



                  To use your first formula, the coordinates must be in the same basis.



                  You can convert between bases using a rotation matrix, and the fact that a rotation matrix preserves vector lengths is sufficient to show that it preserves the dot product. This is because



                  $$vec{a}cdotvec{b}=frac{1}{2}left(|vec{a}+vec{b}|^2-|vec{a}|^2-|vec{b}|^2right).$$



                  This formula is another purely-geometric, coordinate-free definition of the dot product.






                  share|cite|improve this answer











                  $endgroup$



                  The dot product can be defined in a coordinate-independent way as



                  $$vec{a}cdotvec{b}=|vec{a}||vec{b}|costheta$$



                  where $theta$ is the angle between the two vectors. This involves only lengths and angles, not coordinates.



                  To use your first formula, the coordinates must be in the same basis.



                  You can convert between bases using a rotation matrix, and the fact that a rotation matrix preserves vector lengths is sufficient to show that it preserves the dot product. This is because



                  $$vec{a}cdotvec{b}=frac{1}{2}left(|vec{a}+vec{b}|^2-|vec{a}|^2-|vec{b}|^2right).$$



                  This formula is another purely-geometric, coordinate-free definition of the dot product.







                  share|cite|improve this answer














                  share|cite|improve this answer



                  share|cite|improve this answer








                  edited May 13 at 0:23

























                  answered May 13 at 0:14









                  G. SmithG. Smith

                  14.3k12349




                  14.3k12349












Thank you! That makes sense. But what happens if you are dealing with a non-orthonormal system? Is the dot product's value preserved in making the coordinate transformation?
– dts, May 13 at 0:21

Yes, the value is preserved, but the coordinate-based formula in a non-orthonormal basis is more complicated than your first formula.
– G. Smith, May 13 at 0:31

"You can convert between bases using a rotation matrix", I strongly disagree. That works only if the basis vectors are normalised, but that needn't be the case. However, there exists a matrix $A$ such that $e_i' = A e_i$, where $e_i$ is to be understood as the $i$th basis vector (not the component).
– infinitezero, May 13 at 16:33
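
As a quick numerical sanity check of this answer's two claims, the polarization identity and invariance under rotations, here is a minimal NumPy sketch; the random vectors and the particular rotation are illustrative choices, not part of the answer:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# Polarization identity: a·b = (|a+b|^2 - |a|^2 - |b|^2) / 2
polarized = 0.5 * (np.linalg.norm(a + b)**2
                   - np.linalg.norm(a)**2 - np.linalg.norm(b)**2)
assert np.isclose(polarized, a @ b)

# A rotation about the z-axis preserves lengths, hence dot products.
t = 0.7  # an arbitrary angle
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
assert np.isclose((R @ a) @ (R @ b), a @ b)
```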













The coordinate-free definition of a dot product is:

$$ \vec a \cdot \vec b = \frac{1}{4}\left[(\vec a + \vec b)^2 - (\vec a - \vec b)^2\right]$$

It's up to you to figure out what the norm is:

$$ \|\vec a\| = \sqrt{(\vec a)^2}$$

Here is a reference for this viewpoint:
http://www.pmaweb.caltech.edu/Courses/ph136/yr2012/1202.1.K.pdf
Section 2.3

answered May 13 at 1:30, edited May 13 at 3:50 – JEB

This is a circular definition as the norm is defined via the dot product.
– Winther, May 13 at 9:01

@Winther You've got to input something: the dot product cannot be derived only from the underlying vector space structure. The norm seems a reasonable choice here, for geometric intuition.
– Denis Nardin, May 13 at 11:06

This will only define an inner product if the norm satisfies the parallelogram identity $2\|x\|^2+2\|y\|^2=\|x+y\|^2+\|x-y\|^2$.
– Jannik Pitt, May 13 at 11:50

Yes, you have to input something: either define a norm, or define an inner product and have the norm be induced by this. However, my point was that you seem to define the norm via $\|a\|=\sqrt{a\cdot a}$, which is why I said it was circular. On second reading it does look like you say you need to specify the norm externally, so then this would be fine. However, doesn't the definition of the norm then require you to specify a coordinate system, so it's not really coordinate-free?
– Winther, May 13 at 13:48

@Winther Well, it depends on how your vector space is given to you. If your vectors are a bunch of coordinates (like in the usual description of $\mathbb{R}^n$), of course every definition you give will be coordinate-dependent (coordinates are all you have!), but if your vector space is composed of something more exotic (e.g. the space of solutions of a certain ODE) then you can hope to write down a definition of the norm using something else. (And yes, indeed a Banach space is Hilbert iff the norm satisfies the parallelogram identity, plus some added condition if over $\mathbb{C}$.)
– Denis Nardin, May 13 at 18:10
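
A minimal sketch of this quarter-polarization formula, with the Euclidean norm standing in for the norm the answer leaves unspecified (the test vectors are illustrative):

```python
import numpy as np

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3,  4.0, -1.0])
norm = np.linalg.norm  # plays the role of ||a|| = sqrt((a)^2)

# a·b = ( ||a+b||^2 - ||a-b||^2 ) / 4
assert np.isclose((norm(a + b)**2 - norm(a - b)**2) / 4, a @ b)
```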
















Computing the following matrix product will give you the dot product: $$\begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}\begin{bmatrix} e_1\cdot e'_1 & e_1\cdot e'_2 & e_1\cdot e'_3 \\ e_2\cdot e'_1 & e_2\cdot e'_2 & e_2\cdot e'_3 \\ e_3\cdot e'_1 & e_3\cdot e'_2 & e_3\cdot e'_3\end{bmatrix}\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}$$ If we transform the coordinates of a vector, only the components and basis of the vector change; the vector itself remains unchanged. Thus the dot product remains unchanged even if we compute it between primed and unprimed vectors.






answered May 13 at 1:13, edited May 17 at 2:34 – walber97

I like this because it provides a prior motivation for representing inner products with a metric tensor in relativity.
– dmckee, May 16 at 16:28
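
Here is a minimal NumPy sketch of this matrix formula; the primed basis (a rotated copy of the standard one) and the component values are illustrative assumptions:

```python
import numpy as np

e = np.eye(3)                       # unprimed basis vectors e_1, e_2, e_3 (as rows)
t = np.pi / 6                       # an illustrative rotation angle
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
e_prime = e @ R.T                   # primed basis vectors e_i' = R e_i (as rows)

x = np.array([1.0, 2.0, 3.0])       # components of x in the unprimed basis
y = np.array([0.5, -1.0, 2.0])      # components of y in the primed basis

G = np.array([[ei @ ej for ej in e_prime] for ei in e])  # G[i, j] = e_i · e_j'
dot_mixed = x @ G @ y

# Cross-check: rewrite y in the unprimed basis first, then dot normally.
y_unprimed = sum(yj * ej for yj, ej in zip(y, e_prime))
assert np.isclose(dot_mixed, x @ y_unprimed)
```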











A vector space (or linear space) is a set and two operations, vector addition and scalar multiplication, together with some rules (spelled out in the Definition section of this Wikipedia article). The net result of this definition is that vectors behave like little arrows or ordered tuples under addition and scalar multiplication.

This is good, but often more structure is needed. (See the Vector Spaces with Additional Structure section of the link above.)

For example, a norm can be defined on a vector space. This defines a magnitude or length for each vector. Again there are some rules: no magnitude can be negative, only the $\vec0$ vector can have a magnitude of $0$, and the triangle inequality $\lvert a+b\rvert \le \lvert a\rvert + \lvert b\rvert$ must hold.

Likewise an inner product can be defined on a vector space. It adds enough structure to support the ideas of orthogonality and projection. For spaces where it makes sense, this leads to the idea of angle.

The formal definition of an inner product is that it is a function that associates two vectors with a number, subject to some rules. See this for the details.

These are general definitions which work on all vector spaces. The links above give examples of vector spaces that may not be familiar. E.g. the set of all functions of the form $y = ax^2 + bx + c$ is a 3-dimensional vector space.

The most familiar vector spaces are $N$-dimensional Euclidean spaces. These are normed vector spaces, where the norm matches the everyday definition of distance.

The dot product is the inner product on these spaces that matches the everyday definition of orthogonality and angle. See this Wikipedia article.






answered May 13 at 3:03 – mmesser314
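
To make the quadratics example concrete: one possible inner product on that space is $\langle f,g\rangle = \int_0^1 f(x)\,g(x)\,dx$; this particular choice is an assumption for illustration, since the answer deliberately leaves it open. A minimal sketch:

```python
import numpy as np

def inner(p, q):
    """<f, g> = integral of f(x) g(x) over [0, 1], exact for polynomial coefficients.

    p and q are coefficient sequences, highest power first (numpy convention).
    """
    prod = np.polymul(p, q)   # coefficients of the product f*g
    n = len(prod)
    # prod[i] multiplies x^(n-1-i), and the integral of x^k over [0, 1] is 1/(k+1)
    return sum(c / (n - i) for i, c in enumerate(prod))

p = np.array([1.0, 0.0, -1.0])   # x^2 - 1
q = np.array([0.0, 2.0,  3.0])   # 2x + 3
r = np.array([0.5, 1.0,  0.0])   # 0.5 x^2 + x

# Symmetry and linearity, two of the rules an inner product must satisfy.
assert np.isclose(inner(p, q), inner(q, p))
assert np.isclose(inner(2 * p + r, q), 2 * inner(p, q) + inner(r, q))
```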























How would I compute their dot product?

You pretty much have to convert them to the same basis system. You can multiply them out and get nine different terms, and then find the dot product in terms of the nine dot products of the basis vectors, but the math is pretty much the same as converting to the same coordinate system.

In particular, is there a more formal/abstract/generalized definition of the dot product (that would allow me to compute $\vec{e_1} \cdot \vec{e_1'}$ without converting the vectors to the same coordinate system)?

The value of $\vec{e_1} \cdot \vec{e_1'}$ is an empirical value. You can't calculate it simply from a definition.

Even if I did convert the vectors to the same coordinate system, why do we know that the result will be the same if I multiply the components in the primed system versus in the unprimed system?

Given a physical system in which "length" and "angle" are defined, the dot product is invariant under rotations and reflections, i.e. orthogonal transformations. So given two coordinate systems, as long as the axes are orthogonal to each other within each coordinate system, and the two coordinate systems have the same origin and the same scale (one unit is the same length, regardless of which direction or coordinate system), dot products will be the same.

In that case, the change of basis can be represented with a matrix $U$ such that $U^*U=I$. (For real numbers, $U^*$ is just the transpose, so I'll be using that for the rest, since presumably you're asking about vectors over the real numbers.) The dot product of two vectors $x$ and $y$ is $x^Ty$. If $x'=Ux$ and $y'=Uy$, then the dot product of $x'$ and $y'$ is $x'^Ty'=(Ux)^T Uy=x^T U^T U y=x^T I y=x^T y$.






answered May 13 at 18:09 – Acccumulation
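
A minimal sketch of that last computation, using a reflection rather than a rotation to emphasize that any matrix with $U^TU=I$ works (the particular reflection is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# A Householder reflection: a convenient source of matrices with U^T U = I.
v = rng.standard_normal(3)
v /= np.linalg.norm(v)
U = np.eye(3) - 2.0 * np.outer(v, v)
assert np.allclose(U.T @ U, np.eye(3))

x, y = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose((U @ x) @ (U @ y), x @ y)   # (Ux)^T (Uy) = x^T y
```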























The formula

$$\langle x_1,x_2,x_3\rangle \cdot \langle y_1,y_2,y_3\rangle = x_1 y_1 + x_2 y_2 + x_3 y_3$$

is just a start and, as you go further down in physics, will need quite a few generalizations. The assumptions here are that your vectors are (a) real, (b) three-dimensional, (c) tuples, (d) written in a "standard basis". There are points at which any of these are broken: for example, one of the first things you learn in the special theory of relativity(*) is how to work with (b') four-dimensional vectors that (d') don't even allow a standard basis at all, so you get a different formula (of which this is a special case). Similarly, what you need to grasp in quantum mechanics, depending on the text, are (a') complex vector spaces of (b'') infinite-dimensional things that (c') may not be tuples at all (although they often can be written so, again allowing a formula of which this is a special case).

You just figured out yourself that (d) will not always be the case, and that's a splendid job on your part.

Before any of those generalizations take place, the assumptions (a - d) are taken for granted. That is, we are working in a basis
$$e_1 \equiv \langle 1,0,0 \rangle \\
e_2 \equiv \langle 0,1,0 \rangle \\
e_3 \equiv \langle 0,0,1 \rangle$$
and
$$e_1 \cdot e_1 = 1,\quad e_1 \cdot e_2 = 0,\quad e_1 \cdot e_3 = 0,\ \text{etc.}$$
If a triple of numbers is written, it is in this basis. While there are other bases, they just represent concrete triples which you have to multiply by the corresponding coefficients and sum up, effectively transforming to $(e_1, e_2, e_3)$, if you insist on applying the scalar product formula above.

The generalization to taking vectors not as triples of numbers, but as combinations of some abstract $e'_1$, $e'_2$, $e'_3$, then requires specifying what $e'_i \cdot e'_j$ is for all $i$, $j$, as other answers have already said in plenty of ways. If $(e_i)$ and $(e'_i)$ are two different bases, and you know the scalar product in one, the scalar product in the other can be computed from the relations between the basis vectors. And so can a formula for taking scalar products of two vectors, one in each of the two bases.

The basic idea remains, though, and it is a good idea to get oneself familiarized with all the aspects of the above as deeply as possible: to understand the relation between scalar product and norm, orthogonality, expression of geometrical properties and relations (length, angle, distance), etc., before things get too abstract. That's why many texts just hold on to the simplest formula as long as they can.

To actually answer your question: let

$$\vec{x} = x_1 \vec{e_1} + x_2 \vec{e_2} + x_3 \vec{e_3}$$
$$\vec{y} = y_1 \vec{e_1'} + y_2 \vec{e_2'} + y_3 \vec{e_3'}$$

such that $(\vec{e_1}, \vec{e_2}, \vec{e_3})$ is the standard basis. Let further

$$\vec{e_i'} = \sum_{j=1}^3 E_{i,j} \vec{e_j},$$

so using distributivity and linearity it holds that

$$\vec{e_i'} \cdot \vec{e_k}
= \left( \sum_{j=1}^3 E_{i,j} \vec{e_j} \right) \cdot \vec{e_k}
= \sum_{j=1}^3 E_{i,j} \left( \vec{e_j} \cdot \vec{e_k} \right)
= \sum_{j=1}^3 E_{i,j} \delta_{jk} \ (**)
= E_{i,k},$$

(also $\vec{e_k} \cdot \vec{e_i'} = E_{i,k}$), so

$$\vec{x} \cdot \vec{y}
= \left( \sum_{i=1}^3 x_i \vec{e_i} \right) \cdot \left( \sum_{j=1}^3 y_j \vec{e_j'} \right)
= \sum_{i=1}^3 \sum_{j=1}^3 x_i y_j \left( \vec{e_i} \cdot \vec{e_j'} \right)
= \sum_{i=1}^3 \sum_{j=1}^3 x_i y_j E_{j,i}.$$

You can use this formula for taking dot products of two vectors in different bases.
I'm not sure if this counts as not converting to the same basis or not: you will need the conversion matrix $(E_{i,j})$ anyway. You won't need to explicitly write $\vec{y}$ in the $(\vec{e_i})$ basis beforehand, though.

(*) Mathematically speaking, special relativity does not use an actual 'scalar product'. But for my example this suffices without further details.

(**) $\delta_{jk}$ is shorthand for "one when $j=k$ and zero otherwise".
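
The derivation above can be checked numerically; a minimal sketch follows, where the random matrix $E$ and the components are illustrative, and $E$ need not be orthogonal since only the unprimed basis was assumed orthonormal:

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.standard_normal((3, 3))   # e_i' = sum_j E[i, j] e_j; any invertible E works

x = rng.standard_normal(3)        # components of x in the standard basis (e_i)
y = rng.standard_normal(3)        # components of y in the primed basis (e_i')

# The mixed-basis formula derived above: x·y = sum_ij x_i y_j E[j, i]
dot_mixed = sum(x[i] * y[j] * E[j, i] for i in range(3) for j in range(3))

# Cross-check by first rewriting y in the standard basis: y_std_k = sum_j y_j E[j, k]
y_std = E.T @ y
assert np.isclose(dot_mixed, x @ y_std)
```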






                                  share|cite|improve this answer











                                  $endgroup$


















                                    1












                                    $begingroup$

                                    The formula



                                    $$langle x_1,x_2,x_3rangle cdot langle y_1,y_2,y_3rangle = x_1 y_1 + x_2 y_2 + x_3 y _3$$



                                    is just a start and, as you go further down in physics, will need quite a few generalizations. The assumptions here are that your vectors are (a) real (b) three-dimensional (c) tuples (d) written in a "standard basis". There are points at which either of these are broken: for example, one of the first things you learn in special theory of relativity(*) is how to work with (b') four-dimensional vectors that (d') don't even allow a standard basis at all, so you get a different formula (of which this is a special case). Similarly, in quantum mechanics, depending on the text, you need to grasp in quantum mechanics are (a') complex vector spaces of (b'') infinite-dimensional things that (c') may not be tuples at all (although often can be written so, again allowing a formula of which this is a special case).



                                    You just yourself figured out that (d) will not always be the case, and that's a splendid job on your part.



                                    Before any of those generalizations take place, the assumptions (a - d) are taken for granted. That is, we are working in a basis
                                    $$e_1 equiv langle 1,0,0 rangle \
                                    e_2 equiv langle 0,1,0 rangle \
                                    e_3 equiv langle 0,0,1 rangle$$

                                    and
                                    $$e_1 cdot e_1 = 1, e_1 cdot e_2 = 0, e_1 cdot e_3 = 0 text{etc.}$$
                                    If a triple of numbers is written it is in this basis. While there are other bases, they just represent concrete triples which you have to multiply by the corresponding coefficients and sum up, effectively transforming to $(e_1, e_2, e_3)$, if you insist on applying the scalar product formula above.



                                    The generalization to taking vectors not as triples of numbers, but as combinations of some abstract $e'_1$, $e'_2$, $e'_3$, then requires specifying what $e'_i cdot e'_j$ is for all $i$, $j$, as other answers have already said in a plenty of ways. If $(e_i)$ and $(e'_i)$ are two different bases, and you know the scalar product in one, the scalar product in the other can be computed from the relations between the basis vectors. And so can a formula for taking scalar products of two vectors, one in each of the two bases.



                                    The basic idea remains, though, and it is a good idea to get oneself familiarized with all the aspects of the above as deeply as possible: to understand the relation between scalar product and norm, orthogonality, expression of geometrical properties and relations (length, angle, distance), etc., before things get too abstract. That's why many texts just hold on to the simplest formula as long as they can.





                                    To actually answer your question: let



                                    $$vec{x} = x_1 vec{e_1} + x_2 vec{e_2} + x_3 vec{e_3}$$
                                    $$vec{y} = y_1 vec{e_1'} + y_2 vec{e_2'} + y_3 vec{e_3'}$$



                                    such that $(vec{e_1}, vec{e_2}, vec{e_3})$ is the standard basis. Let further



                                    $$vec{e_i'} = sum_{j=1}^3 E_{i,j} vec{e_j},$$



                                    so using distributivity and linearity it holds that



                                    $$vec{e_i'} cdot vec{e_k}
                                    = left( sum_{j=1}^3 E_{i,j} vec{e_j} right) cdot vec{e_k}
                                    = sum_{j=1}^3 E_{i,j} left( vec{e_j} cdot vec{e_k} right)
                                    = sum_{j=1}^3 E_{i,j} delta_{jk} (**)
                                    = E_{i,k},$$



                                    (also $vec{e_k} cdot vec{e_i'} = E_{i,k}$), so



                                    $$vec{x} cdot vec{y}
                                    = left( sum_{i=1}^3 x_i vec{e_i} right) cdot left( sum_{j=1}^3 y_j vec{e_j'} right)
                                    = sum_{i=1}^3 sum_{j=1}^3 x_i y_j left( vec{e_i} cdot vec{e_j'} right)
                                    = sum_{i=1}^3 sum_{j=1}^3 x_i y_j E_{j,i}.$$



                                    You can use this formula for taking dot products of two vertices in different bases.
                                    I'm not sure if this counts as not converting to the same basis or not: you will need the conversion matrix $(E_{i,j})$ anyway. You won't need to explicitly write $vec{y}$ in the $(vec{e_i})$ basis beforehand, though.





                                    (*) Mathematically speaking, special relativity does not use an actual 'scalar product'. But for my example this suffices without further details.



                                    (**) $delta_{jk}$ is shorthand for "one when $j=k$ and zero otherwise".






                                    share|cite|improve this answer











                                    $endgroup$
















                                      1












                                      1








                                      1





                                      $begingroup$

                                      The formula



                                      $$langle x_1,x_2,x_3rangle cdot langle y_1,y_2,y_3rangle = x_1 y_1 + x_2 y_2 + x_3 y _3$$



                                      is just a start and, as you go further down in physics, will need quite a few generalizations. The assumptions here are that your vectors are (a) real (b) three-dimensional (c) tuples (d) written in a "standard basis". There are points at which either of these are broken: for example, one of the first things you learn in special theory of relativity(*) is how to work with (b') four-dimensional vectors that (d') don't even allow a standard basis at all, so you get a different formula (of which this is a special case). Similarly, in quantum mechanics, depending on the text, you need to grasp in quantum mechanics are (a') complex vector spaces of (b'') infinite-dimensional things that (c') may not be tuples at all (although often can be written so, again allowing a formula of which this is a special case).



                                      You just yourself figured out that (d) will not always be the case, and that's a splendid job on your part.



                                      Before any of those generalizations take place, the assumptions (a - d) are taken for granted. That is, we are working in a basis
                                      $$e_1 equiv langle 1,0,0 rangle \
                                      e_2 equiv langle 0,1,0 rangle \
                                      e_3 equiv langle 0,0,1 rangle$$

                                      and
                                      $$e_1 cdot e_1 = 1, e_1 cdot e_2 = 0, e_1 cdot e_3 = 0 text{etc.}$$
                                      If a triple of numbers is written it is in this basis. While there are other bases, they just represent concrete triples which you have to multiply by the corresponding coefficients and sum up, effectively transforming to $(e_1, e_2, e_3)$, if you insist on applying the scalar product formula above.



                                      The generalization to taking vectors not as triples of numbers, but as combinations of some abstract $e'_1$, $e'_2$, $e'_3$, then requires specifying what $e'_i cdot e'_j$ is for all $i$, $j$, as other answers have already said in a plenty of ways. If $(e_i)$ and $(e'_i)$ are two different bases, and you know the scalar product in one, the scalar product in the other can be computed from the relations between the basis vectors. And so can a formula for taking scalar products of two vectors, one in each of the two bases.



                                      The basic idea remains, though, and it is a good idea to get oneself familiarized with all the aspects of the above as deeply as possible: to understand the relation between scalar product and norm, orthogonality, expression of geometrical properties and relations (length, angle, distance), etc., before things get too abstract. That's why many texts just hold on to the simplest formula as long as they can.





                                      To actually answer your question: let



                                      $$vec{x} = x_1 vec{e_1} + x_2 vec{e_2} + x_3 vec{e_3}$$
                                      $$vec{y} = y_1 vec{e_1'} + y_2 vec{e_2'} + y_3 vec{e_3'}$$



                                      such that $(vec{e_1}, vec{e_2}, vec{e_3})$ is the standard basis. Let further



                                      $$vec{e_i'} = sum_{j=1}^3 E_{i,j} vec{e_j},$$



so, using distributivity and linearity, it holds that



                                      $$vec{e_i'} cdot vec{e_k}
                                      = left( sum_{j=1}^3 E_{i,j} vec{e_j} right) cdot vec{e_k}
                                      = sum_{j=1}^3 E_{i,j} left( vec{e_j} cdot vec{e_k} right)
                                      = sum_{j=1}^3 E_{i,j} delta_{jk} (**)
                                      = E_{i,k},$$



                                      (also $vec{e_k} cdot vec{e_i'} = E_{i,k}$), so



                                      $$vec{x} cdot vec{y}
                                      = left( sum_{i=1}^3 x_i vec{e_i} right) cdot left( sum_{j=1}^3 y_j vec{e_j'} right)
                                      = sum_{i=1}^3 sum_{j=1}^3 x_i y_j left( vec{e_i} cdot vec{e_j'} right)
                                      = sum_{i=1}^3 sum_{j=1}^3 x_i y_j E_{j,i}.$$



You can use this formula for taking dot products of two vectors given in different bases.
I'm not sure whether this counts as "not converting to the same basis": you will need the conversion matrix $(E_{i,j})$ anyway. You won't need to explicitly rewrite $vec{y}$ in the $(vec{e_i})$ basis beforehand, though.
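To see the mixed-basis formula in action, here is a small sketch (once more assuming NumPy; the rotation angle and components are arbitrary). Taking $E$ to be a rotation makes the primed basis orthonormal as well, so the plain component formula applies after conversion and serves as the reference:

    import numpy as np

    theta = 0.3                                     # arbitrary angle
    E = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])  # row i: e'_i = sum_j E[i, j] e_j

    x = np.array([1.0, 2.0, 3.0])                   # components of x in (e_i)
    y = np.array([4.0, 5.0, 6.0])                   # components of y in (e'_i)

    # Mixed-basis formula: x . y = sum_{i,j} x_i y_j E[j, i]
    mixed = np.einsum('i,j,ji->', x, y, E)

    # Reference: convert y to the standard basis first, then use the plain formula.
    # The k-th standard component of y is sum_j y_j E[j, k], i.e. E^T y.
    y_std = E.T @ y
    assert np.isclose(mixed, np.dot(x, y_std))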





(*) Mathematically speaking, special relativity does not use an actual 'scalar product': the Minkowski form is not positive definite. But for my example this suffices without further details.



                                      (**) $delta_{jk}$ is shorthand for "one when $j=k$ and zero otherwise".

















                                      $endgroup$



                                      edited May 14 at 7:35

























                                      answered May 14 at 7:28









The Vee





























