How to get the last not-null value in an ordered column of a huge table?


Score: 9















I have the following input:



 id | value
----+-------
  1 |   136
  2 |  NULL
  3 |   650
  4 |  NULL
  5 |  NULL
  6 |  NULL
  7 |   954
  8 |  NULL
  9 |   104
 10 |  NULL


I expect the following result:



 id | value
----+-------
  1 |   136
  2 |   136
  3 |   650
  4 |   650
  5 |   650
  6 |   650
  7 |   954
  8 |   954
  9 |   104
 10 |   104


The trivial solution would be to join the table to itself on an inequality, then select the MAX(id) in a GROUP BY:



WITH tmp AS (
    SELECT t2.id, MAX(t1.id) AS lastKnownId
    FROM t t1, t t2
    WHERE t1.value IS NOT NULL
      AND t2.id >= t1.id
    GROUP BY t2.id
)
SELECT tmp.id, t.value
FROM t, tmp
WHERE t.id = tmp.lastKnownId;


However, a trivial execution of this code would internally create the square of the input table's row count (O(n^2)). I expected T-SQL to optimize this away: at the block/record level the task is very easy and linear, essentially a for loop (O(n)).



In my experiments, however, even the latest MS SQL 2016 can't optimize this query, which makes it impossible to execute for a large input table.



Furthermore, the query has to run quickly, which makes a similarly simple (but very different) cursor-based solution infeasible.



Using a memory-backed temporary table could be a good compromise, but I am not sure whether it would run significantly faster, considering that my example query using subqueries didn't work.



I am also thinking of digging some windowing function out of the T-SQL docs that could be tricked into doing what I want. For example, a cumulative sum does something very similar, but I couldn't trick it into giving the latest non-null element rather than the sum of the preceding elements.
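To illustrate the dead end (a minimal sketch against the same table t as above): a running SUM does carry information forward across the NULL rows, but it accumulates every earlier non-null value instead of keeping only the latest one.

SELECT id, value,
    SUM(value) OVER (ORDER BY id ROWS UNBOUNDED PRECEDING) AS running_sum
FROM t
ORDER BY id;
-- At id = 4 this gives 786 (136 + 650), not the desired 650.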



The ideal solution would be a fast query without procedural code or temporary tables. Alternatively, a solution with temporary tables is also acceptable, but iterating the table procedurally is not.










Tags: sql-server, t-sql, null, window-functions, running-totals






edited Apr 3 at 3:17 by Paul White
asked Mar 31 at 17:19 by peterh
          3 Answers

















Score: 11














          A common solution to this type of problem is given by Itzik Ben-Gan in his article The Last non NULL Puzzle:



DROP TABLE IF EXISTS dbo.Example;

CREATE TABLE dbo.Example
(
    id integer PRIMARY KEY,
    val integer NULL
);

INSERT dbo.Example
    (id, val)
VALUES
    (1, 136),
    (2, NULL),
    (3, 650),
    (4, NULL),
    (5, NULL),
    (6, NULL),
    (7, 954),
    (8, NULL),
    (9, 104),
    (10, NULL);

-- Pack (id, val) into a single binary value per row. Rows with a NULL val
-- produce a NULL concatenation, so the running MAX ignores them and always
-- holds the packed value of the latest non-null row. SUBSTRING then unpacks
-- the val part (bytes 5-8).
SELECT
    E.id,
    E.val,
    lastval =
        CAST(
            SUBSTRING(
                MAX(CAST(E.id AS binary(4)) + CAST(E.val AS binary(4))) OVER (
                    ORDER BY E.id
                    ROWS UNBOUNDED PRECEDING),
                5, 4)
            AS integer)
FROM dbo.Example AS E
ORDER BY
    E.id;


          Demo: db<>fiddle
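Not part of the original answer: if the columns were bigint rather than integer (as in the test table used in another answer here), the same pattern would presumably widen to binary(8), relying on non-negative values so that the byte-wise MAX matches the numeric order. A sketch under that assumption:

-- Assumes id and val are declared as bigint and are non-negative.
SELECT
    E.id,
    E.val,
    lastval =
        CAST(
            SUBSTRING(
                MAX(CAST(E.id AS binary(8)) + CAST(E.val AS binary(8))) OVER (
                    ORDER BY E.id
                    ROWS UNBOUNDED PRECEDING),
                9, 8)
            AS bigint)
FROM dbo.Example AS E
ORDER BY E.id;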






answered Apr 1 at 0:30 by Paul White






























Score: 9















    "I expected T-SQL to optimize this away: at the block/record level the task is very easy and linear, essentially a for loop (O(n))."

That's not the query you wrote, and it may not be equivalent to it, depending on otherwise minor details of the table schema. You're expecting too much from the query optimizer.



            With the right indexing you can get the algorithm that you seek through the following T-SQL:



SELECT t1.id, ca.[VALUE]
FROM dbo.[BIG_TABLE(FOR_U)] t1
CROSS APPLY (
    SELECT TOP (1) [VALUE]
    FROM dbo.[BIG_TABLE(FOR_U)] t2
    WHERE t2.ID <= t1.ID AND t2.[VALUE] IS NOT NULL
    ORDER BY t2.ID DESC
) ca; --ORDER BY t1.ID ASC


For each row, the query processor traverses the index backwards and stops when it finds a row with a non-null [VALUE]. On my machine this finishes in about 90 seconds for 100 million rows in the source table. The query runs longer than necessary because some time is wasted on the client discarding all of those rows.



            It's not clear to me if you need ordered results or what you plan on doing with such a large result set. The query can be adjusted to meet the actual scenario. The biggest advantage of this approach is that it does not require a sort in the query plan. That can help for larger result sets. One disadvantage is that performance will not be optimal if there are a lot of NULLs in the table because many rows will be read from the index and discarded. You should be able to improve performance with a filtered index that excludes NULLs for that case.
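As a sketch of that last suggestion (the index name and column choices are assumptions, not part of the original answer), a filtered index containing only the known values would keep the per-row probe from wading through long NULL runs:

-- Hypothetical filtered index: only rows with a known [VALUE] are stored,
-- so the backward scan inside the CROSS APPLY touches no NULL rows.
CREATE NONCLUSTERED INDEX IX_BIG_TABLE_VALUE_NOT_NULL
ON dbo.[BIG_TABLE(FOR_U)] (ID)
INCLUDE ([VALUE])
WHERE [VALUE] IS NOT NULL;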



            Sample data for the test:



DROP TABLE IF EXISTS #t;

CREATE TABLE #t (
    ID BIGINT NOT NULL
);

INSERT INTO #t WITH (TABLOCK)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
OPTION (MAXDOP 1);

DROP TABLE IF EXISTS dbo.[BIG_TABLE(FOR_U)];

CREATE TABLE dbo.[BIG_TABLE(FOR_U)] (
    ID BIGINT NOT NULL,
    [VALUE] BIGINT NULL
);

INSERT INTO dbo.[BIG_TABLE(FOR_U)] WITH (TABLOCK)
SELECT 10000 * t1.ID + t2.ID, CASE WHEN (t1.ID + t2.ID) % 3 = 1 THEN t2.ID ELSE NULL END
FROM #t t1
CROSS JOIN #t t2;

CREATE UNIQUE CLUSTERED INDEX ADD_ORDERING ON dbo.[BIG_TABLE(FOR_U)] (ID);





answered Mar 31 at 22:31 by Joe Obbish























• Thanks for the answer! I have missing data in my input; I need to approximate it based on the last known value. The ratio of non-NULLs is between 0.1% and 1%, I have around 100 million records, recent hardware, and MS SQL 2016. – peterh, Apr 1 at 2:57

• I used simple naive queries, without any (indexed) temporary tables, with WITH ... AS constructs. I am happy that MS SQL can optimize it if it gets enough hints. I am still experimenting. – peterh, Apr 1 at 13:04

• @peterh: To restate, in your real data about 99% of the rows have a NULL for [value]? – Joe Obbish, Apr 1 at 22:49

• Yes, maybe 99.9%. In the real data, the "value"s also grow with the ids, if that helps. – peterh, Apr 2 at 6:11


















Score: 7














One method, using MAX() and COUNT() with OVER(), based on this source, could be:



SELECT ID, MAX(value) OVER (PARTITION BY Value2) AS value
FROM
(
    -- COUNT(value) ignores NULLs, so the running count increments only at
    -- non-NULL rows; every row is thereby labelled with the group of its
    -- most recent non-NULL value.
    SELECT ID, value,
        COUNT(value) OVER (ORDER BY ID) AS Value2
    FROM dbo.HugeTable
) a
ORDER BY ID;


            Result



Id  UpdatedValue
 1  136
 2  136
 3  650
 4  650
 5  650
 6  650
 7  954
 8  954
 9  104
10  104



Another method, based on this source and closely related to the first example:



;WITH CTE AS
(
    SELECT value,
        Id,
        COUNT(value) OVER (ORDER BY Id) AS Value2
    FROM dbo.HugeTable
),
CTE2 AS
(
    SELECT Id,
        value,
        FIRST_VALUE(value) OVER (PARTITION BY Value2 ORDER BY Id) AS UpdatedValue
    FROM CTE
)
SELECT Id, UpdatedValue
FROM CTE2;





answered Mar 31 at 17:54 by Randi Vertongen (edited Apr 1 at 13:13)




















• Consider adding details about how these approaches perform with a "huge table". – Joe Obbish, Mar 31 at 22:32

• Thanks! This is what I finally did, although not with COUNT() ... OVER but with MAX(). It became much faster, but it is still slow. Be back soon. – peterh, Apr 1 at 3:02











