6.4 Maintenance


  • Stop Service

    Services such as the Switcher can be stopped simply by killing the process.

    Applications that integrate the Zdb service, such as Auany, GlobalId, and Provider, can be stopped in two ways.

    1. After stopping the Switcher, wait for more than one Zdb checkpoint period before killing the process, to ensure that all data have been flushed to the underlying database.

    2. Correctly configure the JmxServer parameter of the service, then execute the stop sub-command of the jmxtool, which guarantees that the Zdb data are flushed to the underlying database and the services are stopped in the correct order. For example, to stop the auany service, execute the command below:


    java -jar limax.jar jmxtool stop -c "service:jmx:rmi://localhost:10202/jndi/rmi://localhost:10201/jmxrmi"
    

    This command accepts an extra -d delay parameter; the delay is given in milliseconds, and the server stops once the delay expires.
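    An operation script might issue this stop request programmatically. The sketch below only composes the command line described above; the JMX url and the 60-second delay in the usage comment are example values.

```java
import java.util.List;

// Sketch: compose the jmxtool stop command line described above. The JMX
// service url and the delay are supplied by the caller; launching it is
// left to ProcessBuilder, e.g.
//   new ProcessBuilder(StopCommand.build(url, 60000)).inheritIO().start();
public final class StopCommand {
    public static List<String> build(String jmxUrl, long delayMillis) {
        return List.of("java", "-jar", "limax.jar", "jmxtool", "stop",
                "-c", jmxUrl,
                "-d", Long.toString(delayMillis)); // stop once the delay elapses
    }
}
```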


  • Backup and Recovery

    The mode of the backup and recovery depends on the engine used by the underlying database.

    • The backup based on the EDB engine

      Use the backup sub-command of the jmxtool to perform the backup. Two modes are supported: full backup and incremental backup.

      The backup sub-command accepts the following parameters:

      1. -d <backupdirectory> specifies the backup directory

      2. -i <true or false> true performs an incremental backup; false performs a full backup

      This backup differs from that of databases such as SQL Server, which require a full backup to exist before an incremental backup can be taken, so an incremental backup must follow the pattern dump database, (dump transaction)+. The limax implementation is different: the first time the backup sub-command is executed as an incremental backup, a full backup is performed, and afterwards the database log is automatically copied to the backup directory on each logrotate. This simplifies the design of backup rules. For example, to implement a daily incremental backup, it is enough to execute one incremental backup at a fixed time each day.

      For example, to incrementally back up the zdb database of auany, execute the command below.


      java -jar limax.jar jmxtool backup -c "service:jmx:rmi://localhost:10202/jndi/rmi://localhost:10201/jmxrmi" -d c:\temp\backup -i true
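      The once-a-day rule above can be sketched in Java. The scheduling code below is an illustration, not part of limax; the 03:00 start time in the usage note is an assumed example, and the actual backup command is the jmxtool invocation shown above.

```java
import java.time.Clock;
import java.time.Duration;
import java.time.LocalTime;
import java.time.ZonedDateTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch (assumed policy, not part of limax): trigger one incremental
// backup per day at a fixed local time by running the jmxtool backup
// sub-command shown above.
public final class DailyBackup {
    // milliseconds from "now" (per the given clock) until the next occurrence of `at`
    static long millisUntil(LocalTime at, Clock clock) {
        ZonedDateTime now = ZonedDateTime.now(clock);
        ZonedDateTime next = now.with(at);
        if (!next.isAfter(now))
            next = next.plusDays(1); // today's slot has passed: use tomorrow's
        return Duration.between(now, next).toMillis();
    }

    // `backup` would launch: java -jar limax.jar jmxtool backup -c <url> -d <dir> -i true
    static void scheduleDaily(Runnable backup, LocalTime at) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(backup,
                millisUntil(at, Clock.systemDefaultZone()),
                TimeUnit.DAYS.toMillis(1), TimeUnit.MILLISECONDS);
    }
}
```

      Calling scheduleDaily(runBackup, LocalTime.of(3, 0)) would then fire the backup once a day at 03:00 local time.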
      

    • The recovery based on the EDB engine

      To recover from a full backup, simply copy the backup directory as the zdb directory.

      To recover from an incremental backup, follow the steps below:

      1. Rename the zdb directory as zdb.old

      2. Copy the backup directory as the zdb directory

      3. Copy the files of the zdb.old/log directory into zdb/log, overwriting existing files. The log files under zdb.old/log may contain the latest checkpoints that have not yet been committed to the backup directory.
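      The three steps above can be automated with java.nio.file, as in the sketch below. The directory names follow the description; error handling is minimal, and in practice an operator would often perform these steps by hand or in a script.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

// Sketch: automate the three incremental-recovery steps described above.
public final class EdbRecover {
    // recursively copy a directory tree, overwriting existing files
    static void copyTree(Path src, Path dst) throws IOException {
        try (Stream<Path> paths = Files.walk(src)) {
            for (Path p : (Iterable<Path>) paths::iterator) {
                Path target = dst.resolve(src.relativize(p).toString());
                if (Files.isDirectory(p))
                    Files.createDirectories(target);
                else
                    Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    public static void recover(Path zdb, Path backup) throws IOException {
        Path old = zdb.resolveSibling(zdb.getFileName() + ".old");
        Files.move(zdb, old);                     // 1. rename zdb -> zdb.old
        copyTree(backup, zdb);                    // 2. copy backup -> zdb
        Path oldLog = old.resolve("log");
        if (Files.isDirectory(oldLog))
            copyTree(oldLog, zdb.resolve("log")); // 3. overwrite zdb/log with zdb.old/log
    }
}
```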

      To recover to a selected point in time, run the EDB interactive tool edbtool from the command line in the zdb directory after the copy finishes.


      java -jar limax.jar edbtool
      #help
      rescue <src dbpath> <dst dbpath>
      list checkpoint <dbpath>
      recover checkpoint <dbpath> <recordNumber>
      out <filename> <charset> #default System.out UTF-8
      exit
      quit
      #list checkpoint c:\temp\backup
      0 : 2015-04-18 16:15:36.210
      #
      

      There are two key commands: the list checkpoint command lists all the checkpoint time points of the current database, numbered from 0; the recover checkpoint command is then executed with the number of the expected time point selected from that list. For a database with a large amount of data and many time points, this command takes longer to execute. When it finishes, the database is restored to the state of the selected time point.


    • The backup and recovery based on the MySQL engine

      Directly use the backup and recovery strategy of MySQL.


  • Conversion of the data format

    After an application is upgraded, the data format stored in Zdb may need to be converted, a task similar to ALTER TABLE in an SQL database. Most of this task should be done by the developers, who provide the corresponding conversion program to the operation staff, who in turn execute the conversion in the production environment.

    The Limax framework provides support for this conversion. The zdb description from the previous example is used here to introduce the conversion method.

    • The example of the conversion

      1. zdb description

      The zdb description of the previous example is reused; note that at this point MyXbean.var0 is declared as int:


      <xbean name="MyXbean">
      	<variable name="var0" type="int" />
      	<variable name="slist" type="vector" value="string" />
      </xbean>
      <table name="mytable" key="long" value="MyXbean" autoIncrement="true"/>


      2. add a new record


      import limax.util.Pair;
      import limax.util.Trace;
      import limax.zdb.DBC;
      import limax.zdb.Procedure;
      import limax.zdb.Zdb;
      import limax.zdb.tool.DataWalker;
      
      public final class ConvertTest {
      	public static void main(String[] args) throws Exception {
      		new java.io.File("zdb").mkdir();
      		Trace.set(Trace.ERROR);
      		limax.xmlgen.Zdb meta = limax.xmlgen.Zdb.loadFromClass();
      		meta.setDbHome("zdb");
      		Zdb.getInstance().start(meta);
      		Procedure.call(() -> {
      			Pair<Long, xbean.MyXbean> pair = table.Mytable.insert();
      			pair.getValue().setVar0(123);
      			pair.getValue().getSlist().add("name123");
      			return true;
      		});
      		Zdb.getInstance().stop();
      		DBC.start();
      		DBC dbc = DBC.open(meta);
      		DataWalker.walk(dbc.openTable("mytable"), kv -> {
      			System.out.println(kv.getKey() + ", " + kv.getValue());
      			return true;
      		});
      		DBC.stop();
      	}
      }
      

      Run this program and get the result below:

      4096, {var0:123, slist:["name123", ], }

      3. Update the xml description of the zdb


      <xbean name="MyXbean">
      	<variable name="var0" type="short" />
      	<variable name="slist" type="vector" value="string" />
      </xbean>
      <table name="mytable" key="long" value="MyXbean" autoIncrement="true"/>
      

      Note that the type of MyXbean.var0 is changed from int to short.

      4. Regenerate the source code and refresh the Eclipse project. The code now has a compile error; change pair.getValue().setVar0(123); to pair.getValue().setVar0((short) 123);

      5. Run the program again and get the following runtime error:

      Exception in thread "main" limax.zdb.XError: convert needed: {mytable=MANUAL}

      This result indicates that the current version of the Zdb database is not compatible with the previous one: the mytable table must be converted manually for compatibility.

      6. Run the conversion tool to generate the source code.

      Create the zdbcov directory in the current directory of the application, then execute the command below.


      java -cp <path to limax.jar>;bin limax.zdb.tool.DBTool -e "convert zdb zdbcov"
      

      Note that there are two classpath entries: one is limax.jar and the other is the bin directory of the current application.

      The following output is obtained:

      mytable MANUAL

      -----COV.class not found, generate-----

      make dir [cov]

      make dir [cov\convert]

      generating cov\convert\Mytable.java

      generating cov\COV.java

      This output indicates that the mytable table needs manual conversion. The cov directory is created and the conversion framework code is placed in it.

      Refresh the Eclipse project, open the project properties, and add the cov directory as a source directory.

      Look for the //TODO line in the Mytable.java file; this is where the manual conversion code is filled in.


      // TODO var0 = s.var0;
      

      Suppose this line is modified as below:


      var0 = (short) -s.var0;
      

      7. Run the conversion tool again.


      java -cp <path to limax.jar>;bin limax.zdb.tool.DBTool -e "convert zdb zdbcov"
      

      The following result is obtained:

      2015-05-12 17:56:14.283 INFO <main> limax.zdb.DBC start ...

      mytable MANUAL

      -----COV.class found, manual convert start-----

      copying... _sys_

      converting... mytable

      2015-05-12 17:56:14.390 INFO <main> limax.zdb.DBC stop begin

      2015-05-12 17:56:14.397 INFO <main> limax.zdb.DBC stop end

      -----manual convert end-----

      The new directory zdbcov is created here to store the converted database. The mytable table is converted, and the _sys_ table, which needs no conversion, is copied directly.

      Verify the conversion result.

      Rename the zdb directory of the previous version as zdb.old, and rename the zdbcov directory as zdb.

      Re-execute ConvertTest.java and obtain the result below:

      4096, {var0:-123, slist:["name123", ], }

      8192, {var0:123, slist:["name123", ], }

      Note that in the key=4096 line, var0 has changed to -123, which is exactly what the conversion code does. The conversion succeeds.


    • The detail information related to the conversion

      The conversion type of the mytable table in the previous example is reported as MANUAL, and that of _sys_ as SAME. In total there are four conversion types provided by the system, SAME, AUTO, MAYBE_AUTO, and MANUAL, defined in limax.zdb.tool.ConvertType.

      The meanings of the conversion types are as follows:

      • SAME

        The formats are identical, so the table is copied directly during the conversion.

      • AUTO

        The conversion does not lose accuracy, for example widening an integer from short to long, or removing fields from a bean that is not used as the key of any map. This kind of conversion is performed automatically, without user intervention.

      • MAYBE_AUTO

        The conversion may lose accuracy, for example converting an integer to a float, or adding fields to a bean that need to be initialized. This kind of conversion can be performed automatically, but the user may also intervene.

      • MANUAL

        All other cases; the conversion can be performed only with user intervention.
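      The AUTO versus MAYBE_AUTO distinction can be illustrated with plain Java numeric conversions. This is only an illustration of the classification, not code from the conversion tool:

```java
// Illustration (not limax code): why widening short -> long is lossless
// (AUTO), while long -> float may lose precision (MAYBE_AUTO).
public final class ConvertTypeDemo {
    // every short value survives a round trip through long
    public static boolean shortToLongLossless(short s) {
        long widened = s;
        return (short) widened == s;
    }

    // a long survives a round trip through float only while it fits
    // in float's 24-bit mantissa
    public static boolean longToFloatPreserves(long v) {
        float f = v;
        return (long) f == v;
    }
}
```

      For example, 16777216 (2^24) survives the long-to-float round trip, but 16777217 does not, which is why such a conversion is only MAYBE_AUTO.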

      If launching the new version of the application fails, the conversion must be executed. First run the command below:


      java -cp <path to limax.jar>;bin limax.zdb.tool.DBTool -e "convert zdb zdbcov"
      

      This reports the conversion types. If a conversion of type MANUAL or MAYBE_AUTO exists, the command generates the framework source code. Fill in the relevant TODOs, compile, then rerun the conversion tool.

      If the result contains only MAYBE_AUTO and no MANUAL, and the loss of accuracy is acceptable, the generated cov directory can be deleted and the command executed directly:


      java -cp <path to limax.jar>;bin limax.zdb.tool.DBTool -e "convert zdb zdbcov true"
      

      For example, change the var0 type of the above example to float, then run the above command and get the following result:

      mytable MAYBE_AUTO

      -----no need generate!, auto convert start-----

      2015-05-12 23:40:52.532 INFO <main> limax.zdb.DBC start ...

      mytable MAYBE_AUTO

      copying... _sys_

      auto converting... mytable

      2015-05-12 23:40:52.623 INFO <main> limax.zdb.DBC stop begin

      2015-05-12 23:40:52.630 INFO <main> limax.zdb.DBC stop end

      -----auto convert end-----

      As shown above, the MAYBE_AUTO table is converted automatically.

      In fact, the parameter format of the convert command is as follows:


      convert [fromDB [toDB [autoConvertWhenMaybeAuto [generateSolver]]]]
      

      fromDB specifies the source of the conversion; the default value is zdb.

      toDB specifies the destination of the conversion; the default value is zdbcov.

      autoConvertWhenMaybeAuto specifies whether tables of the MAYBE_AUTO type are converted directly; the default value is false.

      generateSolver specifies whether the merge code is generated; the default value is false. Database merging is introduced in the next section.


    • The summary of the conversion

      1. For the application developer:

      After the development of the new version is finished, run the application against the zdb database of the previous version. If an error reports that data conversion is needed, create the zdbcov directory and run the command below in the current directory of the project.


      java -cp <path to limax.jar>;bin limax.zdb.tool.DBTool -e "convert zdb zdbcov"
      

      If conversion source code is generated, the developer implements the conversion of the relevant records as required, packages the new version of the application together with the conversion tool, and, after the tests pass, submits it to the operation staff for execution in the production system.

      2. For the operation environment:

      After obtaining the new version of the application, if the conversion is necessary, execute the command below.


      java -cp limax.jar;application.jar limax.zdb.tool.DBTool -e "convert zdb zdbcov"
      

      After the command is executed, the operation staff backs up the original zdb directory and renames the zdbcov directory as zdb.

      Finally, the operation staff launches the new version application.

      3. If the fromDB or toDB parameter of convert is a MYSQL url, the jar of the MySQL Connector/J must be appended to the java -cp parameter.

      4. The user must create the conversion destination zdbcov manually: a directory for the EDB engine, or the relevant database for the MYSQL engine. If MYSQL is used, the dbhome in the configuration file can be pointed directly at the new database after the conversion, avoiding a copy of the database.


    • Matters need attention

      For a relational database, an ALTER TABLE operation on a large table is very time-consuming. Similarly, converting a large zdb database is very time-consuming. In practice, if a large database must be converted, we suggest using a backup database to measure how long the conversion takes and to establish the maximum acceptable pause. If the pause is unacceptable, the only solution is to design the application specifically so that the conversion is performed gradually while the application is running.


  • The merge of the database

    Applications of this kind, operated as separate instances in the beginning, may need to merge their databases some time later. Limax provides a series of methods to support this.

    • The support for the merge

      1. When the GlobalId service is used, applications in the same GlobalId domain obtain unique ids through the GlobalId service. A table keyed by such an id can be merged safely, without key conflicts.

      2. With the auto-increment key configuration of Zdb, autoKeyInitValue and autoKeyStep, separately operated instances of the same application configure the same autoKeyStep and different autoKeyInitValue from the beginning. Then all tables with auto-increment keys can be merged safely, without key conflicts.

      3. In fact, merging databases is a special kind of format conversion. The difference is that in an ordinary format conversion the destination database is empty, while in a merge it already exists. For a table of the same name, if the same key is found in both the source and the destination database, there is a conflict. In this case, the relevant framework source code can be generated so that the user resolves the conflict.
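      Point 2 can be sketched as follows. The class below only models the resulting key sequences; autoKeyInitValue and autoKeyStep are Zdb configuration items, not this API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch (illustrative, not the Zdb API): two separately operated
// deployments configured with the same step but different initial
// values generate disjoint auto-increment key sequences, so their
// tables can later be merged without key conflicts.
public final class AutoKeyDemo {
    private final AtomicLong next;
    private final long step;

    public AutoKeyDemo(long initValue, long step) {
        this.next = new AtomicLong(initValue);
        this.step = step;
    }

    public long nextKey() {
        return next.getAndAdd(step); // initValue, initValue+step, initValue+2*step, ...
    }
}
```

      A deployment configured with (1, 2) yields 1, 3, 5, … while one configured with (2, 2) yields 2, 4, 6, …, so the two key sets never intersect.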


    • The operation of the merge

      1. Back up the destination database, zdb -> zdb.bak

      2. Prepare the source database, assumed here to be zdbsrc

      3. Execute the merge command


      java -cp limax.jar;application.jar limax.zdb.tool.DBTool -e "convert zdbsrc zdb"
      

      4. If there is no conflict, the merge finishes. If there is a conflict, execute the command below.


      java -cp limax.jar;application.jar limax.zdb.tool.DBTool -e "convert zdbsrc zdb false true"
      

      The last parameter of convert being true indicates that the source code for resolving the conflicts must be generated; the generated cov directory is then submitted to the application.

      5. The application uses the cov directory as a source directory. Compared with the data format conversion described earlier, the cov directory contains an extra solver package, which includes the source code for resolving the conflicts of every conflicting table. The method in the source code is as follows:


      public OctetsStream solve(OctetsStream sourceValue, OctetsStream targetValue, OctetsStream key)
      

      Fill in the necessary TODO code and re-package the application.

      6. Deploy the application and re-execute step 3.
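      As an illustration of what a filled-in solver might do, the sketch below has the same shape as the generated solve(...) method, with byte[] standing in for limax's OctetsStream. The keep-the-target policy shown here is only one possible way to resolve a key conflict:

```java
// Illustration (not generated code): a conflict resolver shaped like the
// generated solve(...) method. byte[] stands in for limax's OctetsStream;
// the policy of keeping the destination database's value is one simple
// choice among many.
public final class SolverSketch {
    public static byte[] solve(byte[] sourceValue, byte[] targetValue, byte[] key) {
        return targetValue; // keep the value already stored in the destination
    }
}
```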


    • Matters need attention

      1. We suggest using GlobalId and correctly configuring the auto-increment keys of Zdb, so that conflicts requiring manual resolution are avoided.

      2. Possible conflicts should be foreseen at design time. The process described above, in which a merge conflict appears in the operation phase and source code must be generated to resolve it, should normally not occur; if it does, it should be considered a design defect.

      3. If the application runs on the MYSQL engine and it is certain that the merge has no conflicts, all tables except _meta_ and _sys_ can be merged with SQL commands on MYSQL. For the _meta_ and _sys_ tables, keeping the destination database's version is acceptable.

      4. convert can also transform between EDB databases and MYSQL databases.

      convert can generate the format-conversion source code and the conflict-resolution source code at the same time, which means format conversion and merging can be processed together. To avoid confusion, we suggest performing the conversion first and the merge second, step by step.

