MySQL: Merge the Same Database from Several Locations into a Single Location


I have a registration form which is exactly the same for several locations. Each location's table has a REG column that is unique for every inserted row; in other words, each submission gets its own auto-incremented REG number.

Each location's form writes to that location's own table, and the table structure is identical at every location.

However, we have one central database that pulls in all the data from the several locations. This central database is responsible for manipulating and populating the data from all the locations' databases.

Let's say I have 3 locations: A, B and C. All of them use the same database structure to store their own data.

What I need help with is how to change or configure the table (or its columns) so that I can "restore" each individual location's database into one single database at the central location. Because the structure is identical, previously loaded data currently gets replaced by the new data whenever a backup is restored/added into the central database.

I have a column called CODE which stores a hard-coded identifier for each location; it is just a single character, A, B or C, depending on which location the database belongs to.

So, at a given time, the form table at each of the 3 locations is backed up and sent to the central database, where it is restored or appended so that the central table holds the data from all 3 locations.

Any ideas or help would be appreciated. Thanks in advance.

Here is the database layout I would like to end up with:
[Layout image]

The engine is InnoDB.

Tags: mysql, innodb, merge

asked Aug 13 '14 at 6:29 by dhicom
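For concreteness, the schema below is a minimal sketch of the kind of per-location table the question describes; everything except the REG and CODE columns and the InnoDB engine is an assumption. Making the primary key the composite (CODE, REG), rather than REG alone, is one common way to let rows from all locations coexist in the central copy without colliding.

    -- Hypothetical sketch: per-location registration table (payload columns are assumptions)
    CREATE TABLE registration (
        CODE     CHAR(1)      NOT NULL,                  -- location identifier: 'A', 'B' or 'C'
        REG      INT UNSIGNED NOT NULL AUTO_INCREMENT,   -- per-row registration number
        reg_date DATETIME     NOT NULL,                  -- example payload column (assumption)
        PRIMARY KEY (CODE, REG),
        KEY (REG)      -- InnoDB requires the AUTO_INCREMENT column to lead some index
    ) ENGINE=InnoDB;

A composite key like this also keeps the partitioning option mentioned in the first answer open, since MySQL requires the partitioning column (here CODE) to be part of every unique key.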





























          2 Answers














Let me first suggest an approach that may not work for you, but that seems ideal in terms of saving bandwidth and resources:



Multi-source replication is a relatively new feature, available in MariaDB 10 and MySQL 5.7 (not yet released at the time of writing). It exists precisely for the case you are describing: merging data from different servers into one, typically for analytical purposes.



Here is an overview of how it works for MariaDB 10 and for MySQL 5.7. If the tables have the same name and you set the proper replication filters, the central table can stay almost up to date in real time, without running exports and imports every single time (which become more and more inefficient as the data grows). Even if you do not want continuous replication, simply starting and stopping replication for each synchronisation would still be easier than re-importing, unless the tables are completely rewritten between synchronisations.
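As an illustration of what such a setup looks like, here is a minimal multi-source sketch in MariaDB 10 syntax; connection names, hosts and credentials are placeholders, and MySQL 5.7 expresses the same idea with CHANGE MASTER TO ... FOR CHANNEL 'name'.

    -- Define one named replication connection per location (placeholders throughout)
    CHANGE MASTER 'location_a' TO
        MASTER_HOST='a.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret',
        MASTER_PORT=3306, MASTER_USE_GTID=slave_pos;
    CHANGE MASTER 'location_b' TO
        MASTER_HOST='b.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret',
        MASTER_PORT=3306, MASTER_USE_GTID=slave_pos;

    -- Replication filters (e.g. replicate_do_table) can limit this to just the form table.
    START ALL SLAVES;
    SHOW ALL SLAVES STATUS;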



Since writing into the same table could endanger data integrity, depending on the kind of DML queries executed at each location, you could instead replicate into separate tables and use a VIEW to present them as a single table. Whether that is better for performance depends on the queries executed at the central location.
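A minimal sketch of the separate-tables-plus-view variant; the per-location table names are assumptions.

    -- One replicated table per location, unified for reading at the central server
    CREATE VIEW registration_all AS
        SELECT * FROM registration_a
        UNION ALL
        SELECT * FROM registration_b
        UNION ALL
        SELECT * FROM registration_c;

Note that MySQL evaluates a view containing UNION with the TEMPTABLE algorithm, which is one reason performance can go either way, as the answer says.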



If this does not work for you (for example, because you cannot use those versions of MySQL; after all, this is a relatively new feature), I can suggest alternative methods.



          EDIT:



As you do not seem to have a stable internet connection available (although that would not necessarily be a problem: replication is asynchronous and keeps working even if the link drops frequently), I will assume you want to transfer the data on a USB drive.



My second recommendation would be to back up the table or tables as CSV if the data is not too big, and in binary format if it is relatively big.



          For CSV format, you can use mysqldump --tab or mydumper and then import them back with mysqlimport/myloader.
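For example, a sketch only; database, table and directory names are placeholders, and --tab writes its files on the database server host itself (see secure_file_priv).

    # At location A: one .sql file (table definition) and one .txt file (tab-separated rows)
    mysqldump --tab=/tmp/dump_a --single-transaction regdb registration

    # At the central server, once the table exists there
    # (mysqlimport derives the table name from the file name):
    mysqlimport --local regdb_central /tmp/dump_a/registration.txt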



If the CSV route is too slow, I would recommend something like Percona XtraBackup, which, combined with InnoDB, lets you export tables in binary format and re-import them individually (this requires Percona Server or MySQL 5.6 on the importing node, plus innodb_file_per_table). You can import each location's table separately and then merge them into a single table as partitions with EXCHANGE PARTITION. That lets you do the export/import at nearly disk speed.
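A sketch of what the EXCHANGE PARTITION step can look like, assuming the central table is LIST-partitioned by CODE (table names are placeholders; EXCHANGE PARTITION needs MySQL 5.6+, and the partitioning column must be part of the primary key, as in the (CODE, REG) sketch above).

    -- Central table: one partition per location
    ALTER TABLE registration_all
        PARTITION BY LIST COLUMNS (CODE) (
            PARTITION pA VALUES IN ('A'),
            PARTITION pB VALUES IN ('B'),
            PARTITION pC VALUES IN ('C')
        );

    -- After importing location A's table (same columns, not partitioned, all rows with CODE = 'A'):
    ALTER TABLE registration_all EXCHANGE PARTITION pA WITH TABLE registration_a;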






answered Aug 13 '14 at 15:04 by jynus (last edited Aug 15 '14 at 8:39)


























• Thanks for the information, jynus; I really appreciate it and will look into that new feature. However, in our case it is not that we want replication as such: the other "client" locations are remote and have no adequate internet connection, so we just want to "pull" the databases from each location into a single central location in order to work with the data further. That is why we need to manually take a backup at each location and send it to the central site, where it is restored / added. – dhicom, Aug 14 '14 at 1:45











• @dhicom I have updated the answer with an alternative method. – jynus, Aug 15 '14 at 8:40











• The problem for me is that when such a backup is restored into the single database, it triggers an error because the id already exists there. I need to know whether it is possible to back up without the primary key column (id). I have been reading around for the last few days to figure this out; it seems that is my only option for now. Do you have any idea how to do it in a simple way? What I have read suggests I should "clone" or "dump" the database and then remove the primary key from each table, but since I have many tables like that, doing it manually every time would not be practical. – dhicom, Aug 15 '14 at 12:51
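One way around the duplicate-id problem described in this comment, without stripping the column from the dump files by hand, is to restore each backup into a staging table and re-insert while omitting the auto-increment column; a hypothetical sketch (all table and column names are assumptions).

    -- Fresh ids are generated at the central server, so no duplicate-key error occurs
    INSERT INTO central_registration (CODE, reg_date, other_col)
    SELECT CODE, reg_date, other_col
    FROM   staging_registration;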

































Given that the other answer might be a problem for you, I would suggest you try pt-archiver instead, specifically using the --columns option; in combination with the --source and --dest options you may be able to pull this off. Good luck!
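A hypothetical sketch of what such a pt-archiver invocation could look like; hosts, database, table and column names are placeholders, and the exact option set should be checked against your version of the tool.

    # --no-delete keeps the source rows; --columns controls which columns are copied,
    # so the auto-increment id can be left out and regenerated at the destination.
    pt-archiver \
      --source h=location-a,D=regdb,t=registration \
      --dest   h=central,D=regdb_central,t=registration \
      --where  "1=1" \
      --columns CODE,reg_date,other_col \
      --no-delete --limit 1000 --commit-each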






answered Jul 28 '16 at 10:26 by jerichorivera






















