
Rebuild or Reorganise on 6TB database


I have a 6 TB database. Index rebuilds or reorganizations take more than two days and block all incoming connections. Right now I'm using Ola Hallengren's maintenance script.



The problems:


  1. Index fragmentation will be very high, because we delete 4 .ndf files every week (old data; we don't keep historical data) and add 4 new .ndf files.

  2. We receive at least 2 GB of incoming data per hour (and there are a number of manipulations going on).


Is there any other script that can do online index maintenance without blocking incoming connections?
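
For context, a fully online rebuild requires Enterprise edition. A minimal sketch of what an online rebuild of a single index could look like (the table and index names here are placeholders, not from the original post):

    -- ONLINE = ON requires Enterprise edition; SORT_IN_TEMPDB shifts the
    -- sort work out of the user database.
    ALTER INDEX IX_BigTable_SomeKey
        ON dbo.BigTable
        REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON, MAXDOP = 4);

    -- REORGANIZE is always online and can be interrupted without losing
    -- progress, but it is single-threaded and slower on very large indexes.
    ALTER INDEX IX_BigTable_SomeKey
        ON dbo.BigTable
        REORGANIZE;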



We used to run the maintenance once a week, but that is no longer working for us. We need to run it daily, whatever it takes, to speed up data retrieval.



We use only 2 tables frequently. Rebuilding the index on one of these tables takes 2 hours, blocks all incoming connections, and the log file grows to 150 GB.
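
For what it's worth, a quick way to watch log usage while such a rebuild runs (a minimal sketch; no object names required):

    -- Reports log size and percent of log space used for every database on
    -- the instance. Under the FULL recovery model an index rebuild is fully
    -- logged, which is consistent with the 150 GB growth described above.
    DBCC SQLPERF(LOGSPACE);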



Every day the fragmentation level shows as 99.9%, and just retrieving the fragmentation statistics takes 4 hours.
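
As an aside, a 4-hour fragmentation check suggests a detailed scan; a cheaper check is possible with the LIMITED mode of sys.dm_db_index_physical_stats, restricted to a single table (a minimal sketch; the table name is a placeholder):

    -- LIMITED mode reads only the non-leaf/IAM pages, so it is much cheaper
    -- than SAMPLED or DETAILED on very large tables.
    SELECT  OBJECT_NAME(ips.object_id)  AS table_name,
            i.name                      AS index_name,
            ips.avg_fragmentation_in_percent,
            ips.page_count
    FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'),
                                           NULL, NULL, 'LIMITED') AS ips
    JOIN    sys.indexes AS i
            ON  i.object_id = ips.object_id
            AND i.index_id  = ips.index_id
    WHERE   ips.page_count > 1000;      -- ignore trivially small indexes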



We keep only 15 days of data, and each .ndf file has a fixed size of 100 GB. If the data exceeds 100 GB, it is automatically routed to a newly created .ndf file. Incoming data is segregated into 2 file types, so 4 new .ndf files are created each day.










sql-server index index-maintenance






edited Mar 23 '17 at 9:05 by Paul White
asked Aug 2 '16 at 10:34 by Rateesh reddy



  • Have you established that fragmentation is actually causing significant problems and is something that needs to be fixed at all in your case? – Martin Smith, Aug 2 '16 at 10:48

  • Have you also checked your stats? – Jason B., Aug 2 '16 at 13:26

  • @Rateeshreddy I think you should seriously drop the idea of running an index rebuild daily, and on a 6 TB database at that; you are bound to face issues. What you can do is figure out which indexes are frequently fragmented and rebuild only those. – Shanky, Aug 2 '16 at 14:15

  • That hasn't answered the question at the top. Are you sure fragmentation is causing a negative performance impact that couldn't be mitigated by simply running UPDATE STATISTICS with a reasonable SAMPLE size? – Randolph West, Aug 4 '16 at 3:29
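
Picking up on the UPDATE STATISTICS suggestion in the last comment, a minimal sketch of what that might look like on one of the two frequently used tables (the table and index names are placeholders):

    -- Updating statistics is far cheaper than a rebuild and often addresses
    -- the query-performance symptom being attributed to fragmentation.
    UPDATE STATISTICS dbo.HotTable1 WITH SAMPLE 10 PERCENT;

    -- Or target a single statistics object with a full scan:
    UPDATE STATISTICS dbo.HotTable1 IX_HotTable1_SomeKey WITH FULLSCAN;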














1 Answer
I think your best option here is to pick the tables that are most important to you and rebuild/reorganize the indexes on those tables first. You can put that into a separate job and run it regularly, and handle the other tables on a different schedule.


By the way, why are you adding and removing .ndf files? Can't you just drop the tables and re-create them, and remove/re-create the indexes as part of that process?


More context around your process would help us understand it and see whether we can suggest improvements.
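
A minimal sketch of what that separate, targeted job could execute, assuming the documented @Databases/@Indexes/@TimeLimit parameters of Ola Hallengren's IndexOptimize procedure (the database and table names are placeholders):

    -- Touch only the two frequently used tables, prefer online operations,
    -- and stop issuing new commands after two hours.
    EXECUTE dbo.IndexOptimize
        @Databases           = 'MyBigDb',
        @Indexes             = 'MyBigDb.dbo.HotTable1, MyBigDb.dbo.HotTable2',
        @FragmentationLow    = NULL,
        @FragmentationMedium = 'INDEX_REORGANIZE',
        @FragmentationHigh   = 'INDEX_REBUILD_ONLINE',
        @FragmentationLevel1 = 5,
        @FragmentationLevel2 = 30,
        @TimeLimit           = 7200,
        @LogToTable          = 'Y';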






answered Aug 2 '16 at 12:40 by entDba