mysqldump from AWS RDS slowing down

I'm taking a DB backup from an AWS RDS (MySQL) slave, onto an EC2 instance (m5.large, 2 vCPU / 8 GB) in the same AZ, using:



mysqldump -hxx -u xx -p --extended-insert  --single-transaction  --max_allowed_packet=1G db > db.sql


The backup dies at around 60-70 minutes with the error: mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table xx at row: 1743503




  1. The row number in the error changes between runs, but mysqldump always errors out on the same table

  2. The table has ~4M rows and is around 1.5 GB

  3. max_allowed_packet on the source is 4 MB (although I believe this is inconsequential)


So I tried a single-table backup with pv enabled:



mysqldump -hxx -u xx -p --extended-insert  --single-transaction  --max_allowed_packet=1G db tbl_xx | pv > db_tbl_xx.sql


The pv stream was interesting: I saw ~10 MiB/s for some time, then it froze to 0 B/s for a few seconds. When it resumed, it hovered around 700-800 KiB/s for the rest of the run until the backup finished (1.79 GB in 35 minutes). Only when dumping the entire DB does the backup never finish; it dies with "Error 2013".



Interestingly, a backup of the same table from the SQLyog GUI finished in around 5 minutes over the internet.



I'm unsure whether the mysqldump parameters can't handle large backups or some network-level configuration is the problem here.
Pointers on what to check on either side would be great, thanks!
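A back-of-the-envelope check on the rates quoted above (both numbers come from the pv observation; the sustained rate is approximated at ~750 KiB/s) shows the throttled rate alone roughly accounts for the 35-minute single-table run, and would stretch a whole-DB dump well past the 60-70 minute mark at which it dies:

```shell
# Time to stream the 1.79 GiB table at the sustained post-burst rate.
# 1.79 GiB is approximated as 1831 MiB; the ~750 KiB/s rate is an assumption
# taken from the pv output above.
table_bytes=$(( 1831 * 1024 * 1024 ))   # ~1.79 GiB
rate=$(( 750 * 1024 ))                  # ~750 KiB/s sustained
echo $(( table_bytes / rate / 60 ))     # minutes at the sustained rate; prints 41
```

The observed 35 minutes is consistent with a short ~10 MiB/s burst followed by throttling for the remainder.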










  • It is more likely to be a timeout.

    – Rick James
    Nov 28 '18 at 2:47











  • a timeout should terminate the thread as soon as it happens. But this process keeps crawling for a good duration before stopping/timing out.

    – gnyanendra
    Dec 6 '18 at 9:19











  • Timing out the client does nothing for stopping the server since the server won't communicate until it is ready to.

    – Rick James
    Dec 6 '18 at 17:41
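A sketch of how the timeout theory above could be checked and worked around (host/user placeholders as in the question; the parameter-group name is hypothetical). Error 2013 partway through a dump, on the same table but at varying rows, is the classic signature of the server-side net_write_timeout expiring while the outbound stream stalls:

```shell
# Inspect the current server-side network timeouts (placeholder credentials).
mysql -h xx -u xx -p -e "SHOW GLOBAL VARIABLES LIKE 'net_%_timeout'"

# RDS does not permit SET GLOBAL for these; raise net_write_timeout in the
# instance's DB parameter group instead (group name is hypothetical).
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-mysql-params \
  --parameters "ParameterName=net_write_timeout,ParameterValue=3600,ApplyMethod=immediate"
```

This is a CLI fragment against a live RDS instance, not something runnable standalone; the point is that the timeout lives in the parameter group, not in mysqldump's flags.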
















mysql mysqldump amazon-rds






asked Nov 27 '18 at 10:24









gnyanendra

1 Answer
In a re:Invent talk I once watched, an AWS engineer suggested increasing the instance type when running high-IOPS workloads. My suggestion would be to change your instance type to db.m5.4xlarge, wait 20 minutes, and then try the dump again.


Chances are that it will work like a charm!

– Leonidas
Nov 28 '18 at 11:56






  • Shouldn't mysqldump consume the resources of the machine it is initiated on, rather than the remote host's? For the remote host, I would expect only disk reads to be a bottleneck, not CPU or memory. Also, resizing the instance for backups doesn't sound like a convincing solution.

    – gnyanendra
    Dec 6 '18 at 9:23
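If resizing the instance is off the table, one way to sidestep the whole-DB failure while keeping the client-side approach is to dump table by table, so each connection stays short and a stall on one table cannot time out the rest. A sketch, reusing the question's placeholder host/user/db names:

```shell
# Each table gets its own short-lived connection and a compressed output file.
# -N suppresses the column-header line so the loop sees only table names.
for t in $(mysql -h xx -u xx -p -N -e "SHOW TABLES" db); do
  mysqldump -h xx -u xx -p --single-transaction db "$t" \
    | gzip > "db_${t}.sql.gz"
done
```

The trade-off: --single-transaction then only guarantees a consistent snapshot per table, not across the whole database, so this only suits databases where cross-table consistency isn't required (or where the slave can be briefly stopped for the duration).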










