Storage Manager: Disaster recovery plan
This guide describes a basic disaster-recovery strategy for the storage cluster in a production environment. The recovery concerns the files visible in the "workspace" component. File metadata is managed by the JackRabbit server and includes the remote location of each file in the storage cluster. Files that are missing from the storage cluster but still present on the JackRabbit server can be restored from a storage backup server, since the production storage cluster is covered by an incremental backup strategy.
Components
To restore the lost files, the following components are needed:
- storage-transfer component: a Java command-line component able to transfer files from the backup storage to the production storage cluster
- backup server: the backup server that is assumed to hold the lost files
- csv file: a file listing the remote path, and other information, of each lost file
Storage transfer component
The storage-transfer library is a command-line interface able to transfer files from a backup server to the production environment. The library reads a csv file that describes all the files to transfer; the csv format is described in the "csv file" section below. The input parameters are:
- ip of backup server
- the csv file path
The library's Maven coordinates are:

<dependency>
  <groupId>org.gcube.contentmanagement</groupId>
  <artifactId>storage-transfer</artifactId>
  <version>1.0.0-SNAPSHOT</version>
</dependency>
You can download it from the Nexus repository.
Backup server
The backup server is a MongoDB server restored from a backup of a production node. Once the machine is ready, configure it and then start MongoDB. For the configuration, edit the MongoDB configuration file, /etc/mongodb.conf, and disable replica-set membership by commenting out the "replSet" line as follows:
old:

# in replica set configuration, specify the name of the replica set
replSet=storage

new:

# in replica set configuration, specify the name of the replica set
#replSet=storage
After that, you can start mongo:
/usr/bin/mongod --config /etc/mongodb.conf &
If the connection fails with an error of this type:
Error: couldn't connect to server 127.0.0.1 shell/mongo.js:84
exception: connect failed
you have to repair the database. First, delete the MongoDB lock file, if present:
sudo rm /data/mongo_home/mongod.lock
Then run the repair procedure (this may take a long time):
mongod --repair
If no errors are reported, it is now possible to connect to the server.
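A quick way to verify that the repaired server accepts connections, assuming the legacy mongo shell is installed and the server listens on the default port, is a ping command:

```shell
mongo 127.0.0.1 --eval "db.runCommand({ ping: 1 })"
```

If the server is healthy, the command returns { "ok" : 1 }.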
csv file
The fields of the csv file must be the following:
- JackRabbit path: the path in the JackRabbit tree, e.g. /Home/roberto.cirillo/Workspace/xxxx/yyyy/Period 1/1 Networking Activities/Project.pdf
- Remote path in the storage, e.g. Home/roberto.cirillo/Workspace/xxxx/yyyy/Deliverables/Period 1/1 Networking Activities16394176-0c8e-4476-b3b5-64b386773954
The separator must be: "###"
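For example, a csv line combining the two fields can be assembled and split as follows (a minimal, self-contained sketch; the paths are shortened versions of the examples above, and only the separator handling is shown):

```java
public class CsvLineDemo {
    // the separator required by the storage-transfer csv format
    private static final String DELIMITER = "###";

    public static void main(String[] args) {
        String line = "/Home/roberto.cirillo/Workspace/Project.pdf"
                + DELIMITER
                + "Home/roberto.cirillo/Workspace/16394176-0c8e-4476-b3b5-64b386773954";
        // String.split takes a regex, but "###" contains no regex metacharacters
        String[] fields = line.split(DELIMITER);
        System.out.println("JackRabbit path: " + fields[0]);
        System.out.println("remote path:     " + fields[1]);
    }
}
```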
The csv file can be created by retrieving the storage remote path and the JackRabbit path for each node and checking whether the remote path is still valid; if it is not, the line is written to the csv file. An example:
// CONTENT, REMOTE_STORAGE_PATH, delimiter, reportFile and the JCR session
// are assumed to be defined elsewhere
public static BufferedWriter writer;

public static void main(String[] args) throws IOException {
    try {
        File file = new File(reportFile);
        FileWriter fw = new FileWriter(file);
        writer = new BufferedWriter(fw);
        getItems("/Home/username/Workspace/folder");
        writer.flush();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (writer != null)
            writer.close();
        session.logout();
    }
}

private static void getItems(String path) throws RepositoryException, InternalErrorException, ItemNotFoundException {
    Node myNode = session.getNode(path);
    String remotePath = null;
    try {
        Node contentNode = myNode.getNode(CONTENT);
        if (contentNode.hasProperty(REMOTE_STORAGE_PATH)) {
            remotePath = contentNode.getProperty(REMOTE_STORAGE_PATH).getString();
        }
    } catch (RepositoryException e) {
        // fall back to the property on the node itself
        try {
            if (myNode.hasProperty(REMOTE_STORAGE_PATH)) {
                remotePath = myNode.getProperty(REMOTE_STORAGE_PATH).getString();
            }
        } catch (Exception e1) {
            e1.printStackTrace();
        }
    }
    if (remotePath != null) {
        try {
            // a negative size means the file is missing from the storage cluster
            long size = GCUBEStorage.getRemoteFileSize(remotePath);
            if (size < 0) {
                if (myNode.getName().contains("jcr:content")) {
                    writeToFile(myNode.getParent().getPath() + delimiter + remotePath);
                } else {
                    writeToFile(myNode.getPath() + delimiter + remotePath);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    // recursive descent, skipping the system, policy and other jcr: subtrees
    NodeIterator children = myNode.getNodes();
    while (children.hasNext()) {
        Node child = children.nextNode();
        try {
            if (child.hasNodes()
                    && !child.getPath().equals("/jcr:system")
                    && !child.getPath().equals("/rep:policy")
                    && !child.getName().startsWith("jcr:")) {
                getItems(child.getPath());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private static void writeToFile(String line) {
    try {
        writer.write(line);
        writer.newLine();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Getting started
Now it is possible to use the storage-transfer library to restore the lost files.
There are two ways to restore files with the storage-transfer CLI:
- DataTransfer class: restore files from a csv file to the production storage
- FileTransfer class: restore a file from localPath to the production storage
DataTransfer
The DataTransfer class is able to restore one or more files listed in a csv file.
An example of use:
java -cp storage-transfer-1.0.0-SNAPSHOT-jar-with-dependencies.jar org.gcube.contentmanagement.storage.data.transfer.DataTransfer arg1 arg2
where
- arg1: the ip of the backup storage server
- arg2: the csv file path
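Conceptually, the restore reads the csv line by line, splits each line on the "###" separator, and transfers the file found at the remote path from the backup server to production. The sketch below illustrates only this csv-driven loop; `transferFile` and the class name are hypothetical placeholders, not the library's actual API:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class RestoreLoopSketch {
    private static final String DELIMITER = "###";
    static int restoredCount = 0;

    public static void restoreFromCsv(String csvPath, String backupServerIp) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(csvPath))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(DELIMITER);
                if (fields.length != 2) continue; // skip malformed lines
                String jackRabbitPath = fields[0];
                String remotePath = fields[1];
                transferFile(backupServerIp, remotePath, jackRabbitPath);
            }
        }
    }

    // hypothetical placeholder for the actual transfer call
    private static void transferFile(String backupIp, String remotePath, String jrPath) {
        System.out.println("restoring " + remotePath + " from " + backupIp);
        restoredCount++;
    }
}
```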
FileTransfer
The FileTransfer class is able to restore only one file from a local path.
An example of use:
java -cp storage-transfer-1.0.0-SNAPSHOT-jar-with-dependencies.jar org.gcube.contentmanagement.storage.data.transfer.FileTransfer arg1 arg2
where
- arg1: the ip of the backup storage server
- arg2: the local file path