Bulk Automation Guide
This user guide explains how to use the Automation Enabler Script for bulk creation of Backup sets, Storage, Schedule plans, and Sources.
Introduction
This user guide explains how to use the Automation Enabler Script, the 'python creator.py' utility. It supports bulk creation and configuration of Backup sets, Sources, and other Zmanda entities: sources are attached to backup sets, and backup sets are created and associated with schedules and storage. The utility expedites setup and backup for users running Zmanda at scale.
- Python 3.6 or higher. Run the following command to check whether Python is installed:
python3 --version
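The same check can be done from Python itself; this short guard mirrors the 3.6 requirement above and is purely illustrative:

```python
import sys

# The utility requires Python 3.6+; this mirrors the manual
# `python3 --version` check programmatically.
MIN_VERSION = (3, 6)
version_ok = sys.version_info >= MIN_VERSION

if not version_ok:
    raise SystemExit(
        f"Python {MIN_VERSION[0]}.{MIN_VERSION[1]}+ required, "
        f"found {sys.version_info.major}.{sys.version_info.minor}"
    )
```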
- Before running the utility’s operations, ensure you have created and populated the input file using the exact name detailed in each operation.
You can create the following Zmanda elements using the Automation enabler script:
- Backup sets (create and update)
- Sources (create)
- Storage (create)
- Schedule plans (create)
- Read lists of Backup sets, Sources, Storages, and Schedules (read)
Please note that you must fill in the corresponding CSV files to use the script.
The following use cases can be simplified using this utility by running the operations detailed in this document.

- 1.Fresh Zmanda instance set-up
- 1.Run the script and select option 10 to create backup servers.
- 2.Select option 9 to create the storage.
- 3.Select option 11 to create schedules (optional; much simpler through ZMC).
- 4.Select option 2 to create backup sets.
- 5.Select option 3 to create sources.
At this point, you’re ready to let your backups run based on the schedules created (if the optional third step was done). You can also trigger an ad-hoc backup through ZMC or the utility’s operation #6 – Backup Now (you can also change the default backup configuration through #4 – Backup How before running Backup Now).

Caution: Many of the inputs required for these steps are generated by the previous steps, so it is recommended to run the fetch operations for existing entities. For example, when you create a backup set, its id is required when creating Sources; to get that id, run operation #1 – Fetch Backup sets.

- 2.Update an existing Zmanda instance by adding large numbers of clients/servers to be protected
- 1.First, fetch the existing backup sets to capture the IDs to be used for the new clients/sources that will be added. Run operation #1 – Fetch Backup sets.
- 2.Second, decide whether an existing backup set will be used to map the clients/sources to be added. If you want to bind these to a new backup set, run operation #2 – Create Backup sets.
- 3.Third, create the new clients/sources by running operation #3 – Create Sources. At this point, you can let the sources be backed up automatically through the schedule selected for the backup set the clients/sources are mapped to. Alternatively, you can run an ad-hoc backup with operation #6 – Backup Now (you can also change the default backup configuration through #4 – Backup How before running Backup Now).
This operation covers backup set creation as well as configuration of the backup sets up to Backup Where. The process is shown in the flowchart below.

Backup sets creation and configuration
- 1.You begin by filling in the CSV sheet with the backup set data. The columns in the CSV file are discussed shortly.
- 2.It is then fed to the python3 creator.py script.
- 3.Please authenticate yourself by providing your credentials to proceed. You will also need to provide the ZMC IP address and ZMC port.
- 4.Select option 2 to create backup sets.
- 5.The script begins to read the data from the CSV file and performs asynchronous API calls for creating the backup sets. For the backup sets created, the backup where configuration is also completed. The storage device bundled with the backup set will be used for the configuration.
You can evaluate the outcome of the script operation via the following reports compiled under the results directory:
- results_backupset_create.csv: Backup set creation report
- results_backupset_where.csv: Backup where configuration report
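To triage a large run, these reports can be summarized with a few lines of Python. The column names used below (`backupset_name`, `status`) are assumptions for illustration; adjust them to the actual report headers:

```python
import csv
import io

# Hypothetical report content; the real file lives at
# results/results_backupset_create.csv and its column names may differ.
sample_report = io.StringIO(
    "backupset_name,status\n"
    "set_alpha,success\n"
    "set_beta,failed\n"
    "set_gamma,success\n"
)

rows = list(csv.DictReader(sample_report))
failures = [r["backupset_name"] for r in rows if r["status"] != "success"]
print(f"{len(rows) - len(failures)} succeeded, {len(failures)} failed: {failures}")
```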

results_backupset_create.csv report

CSV file for backup set creation
Caution: The input file must be named “backupsets_create.csv” for the script to process it successfully.
Column name | Description | Values | Mandatory |
---|---|---|---|
backupset_name | This allows you to specify a name for the backup set you want to create. | String · Minimum 5 characters. · Not more than 64 characters. · It can have only letters, numbers, '-', '.' and '_'. · Cannot start or end with special characters or begin with zmc_test. | Yes |
backupset_desc | Description is optional and intended to serve as a reminder of what type of backups are included or where it is configured. | String Only alphanumeric characters, '-', '.' and '_' are allowed. | No |
comments | Comments are optional and intended to serve as a reminder as to why the backup set was created. | String Only alphanumeric characters, '-', '.' and '_' are allowed. | No |
storage_device | This allows you to specify a storage id of the storage that you want to link with the backup set. To get the information of existing storage and their corresponding ids, choose 8 in the options and get a CSV at results/storages_information.csv, which contains the existing storage devices. | Integer Provide the storage id. | Yes |
schedule | This allows you to specify the schedule id with which you want to link your backup set. Ex: In the above table, opendrives_set_10 is linked to schedule id 2. To get the information of existing schedules, select 5 in the options and get a CSV at results/schedule_information.csv, which contains the existing schedules. | Integer Schedule id of the existing schedule else, leave it blank. | No |
aebackup_server | Allows you to specify the server id to which the storage is linked. To get the information of existing servers and their corresponding ids, choose 7 in the options and get a CSV at results/servers_information.csv, which contains the existing servers. | Integer Backupserver id | Yes |
active | If the schedule is not specified (blank), set it as FALSE. If a schedule is linked to the backup set and you wish to run the backups as per that schedule, set it as TRUE. | Boolean Values: TRUE, FALSE | Yes |
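As a worked example, the table above maps to a backupsets_create.csv generated with Python’s csv module. The column names come from the table; the values, and the assumption that the script accepts a header row, are illustrative:

```python
import csv

# Column names follow the table above; sample values are illustrative.
FIELDS = ["backupset_name", "backupset_desc", "comments",
          "storage_device", "schedule", "aebackup_server", "active"]

row = {
    "backupset_name": "weekly-linux-set",   # 5-64 chars, no leading/trailing specials
    "backupset_desc": "Weekly_Linux_backups",
    "comments": "Created_by_bulk_script",
    "storage_device": 1,                    # storage id from storages_information.csv
    "schedule": 2,                          # schedule id, or "" to leave unscheduled
    "aebackup_server": 1,                   # server id from servers_information.csv
    "active": "TRUE",                       # TRUE only when a schedule is linked
}

with open("backupsets_create.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow(row)
```

Whether the script expects a header row is not stated here; match the sample files shipped with the utility.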
The backup how defines the key internal parameters that control how the backup set will run after it has been activated. Mail can be configured to receive the backup and restore notifications via emails.
Column name | Description | Values | Mandatory |
---|---|---|---|
backupset | This allows you to specify the backup set id. | Integer | Yes |
taperalgo | This allows you to specify the algorithm that determines the order in which the completed backup images are moved from the holding disk to the backup media. | String Options: first, firstfit, largest, largestfit, smallest, last Description · "first": First in - first out. · "firstfit": The first backup image that will fit on the current media volume. · "largest": The largest backup image first. · "largestfit": The largest backup image that will fit on the current media volume. · "smallest": The smallest backup image first. · "last": Last in - first out. | Yes |
inparallel (server parallel backups) | This allows you to set the number of parallel data backups that are performed from the Amanda clients to the holding disk in a backup run. | Integer | Yes |
maxdumps (client parallel backups) | This allows you to set the number of parallel backups performed from an Amanda client. The default value is 1, which specifies that all Sources on a client are backed up sequentially. | Integer Example: 3 | Yes |
dumporder | This allows you to assign priorities to each of the parallel backup processes. | s -> smallest size first S -> biggest size first t -> smallest time first T -> biggest time first b -> smallest bandwidth first B -> biggest bandwidth first A string like "sssS" which represents the priority order for four parallel backups indicates that three dumpers will seek the smallest size hosts while one dumper will seek the biggest size host to backup. | Yes |
taper_parallel_write | This allows you to specify the maximum number of dumpers that will fetch data from the client. This parameter is unaffected by the inparallel and maxdumps column values. [verified only for disk] | Integer | Yes |
reserved_tcp_port | This allows you to specify the port range to use on the Amanda server to connect to clients. Amanda will use all the ports in the range that are not bound by another process and not reserved in the /etc/services file. Increase the range if you increase the value of the inparallel column. | String Example: 800-850 | Yes |
send_amreport_on | This allows you to specify the type of notifications you want to receive via email. | String Values: all, error, never · all - Email recipients are notified about the success/failure of the backup with a detailed report, the same as in the Management Console. · error - Email recipients are notified only when the backup fails, with error details. · never - No notification emails are sent. | Yes |
mailto | This allows you to specify the email addresses to which the emails are to be sent if the mail server is configured. | String | Yes |
etimeout | This allows you to specify the magnitude of time for limiting the time ZMC will spend on the Planning phase of the backup process. The Amanda backup process first estimates the backup size for each Source configured in the backup set. Estimation for all Sources in the backup set is done in parallel. | Integer | Yes |
etimeout_display | This allows you to specify the unit of the etimeout value (minutes/seconds/hours). | String Values · minutes · seconds · hours | Yes |
ctimeout | This allows you to specify the magnitude of time within which the client verification should be completed. | Integer | Yes |
ctimeout_display | This allows you to specify the unit of the ctimeout value (minutes/seconds/hours). | String Values · minutes · seconds · hours | Yes |
dtimeout | This allows you to specify the time limit that the Amanda server will wait (during the backup data transfer phase) for a client to begin to respond to a backup request. | Integer | Yes |
dtimeout_display | This allows you to specify the unit of the dtimeout value (minutes/seconds/hours). | String Values · minutes · seconds · hours | Yes |
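Because the dumporder string is easy to get wrong, a small pre-flight check can catch invalid rows before the script runs. This helper is not part of the utility; it assumes, as the "sssS" example above suggests, one priority letter per parallel dumper:

```python
# Documented dumporder priority letters:
# s/S = smallest/biggest size, t/T = smallest/biggest time,
# b/B = smallest/biggest bandwidth.
VALID = set("sStTbB")

def check_dumporder(dumporder: str, inparallel: int) -> bool:
    """Return True if the string has one documented letter per dumper.

    The one-letter-per-dumper rule is an assumption based on the
    "sssS" example for four parallel backups.
    """
    return len(dumporder) == inparallel and set(dumporder) <= VALID

ok = check_dumporder("sssS", 4)    # three smallest-first, one biggest-first
bad = check_dumporder("sxsS", 4)   # 'x' is not a documented priority letter
```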
To configure the Backup How section of the backup sets, you need to fill in the backupsets_how CSV file.
Columns within backupsets_how.csv file

CSV file for Backup How configuration
Caution: The input file must be named “backupsets_how.csv” for the script to process it successfully.
This operation covers source creation. The process is shown in the flowchart below. A Linux source type is used for illustration.

Source creation flowchart
- 1.You begin by filling in the sources_create.csv sheet with the source data. The columns in the CSV file are discussed shortly.
- 2.It is then fed to the python3 creator.py script.
- 3.Select option 3 to create sources.
- 4.The script begins to read the data from the CSV file and performs asynchronous API calls for creating the sources.
You can evaluate the outcome of the script operation via the results_sources_create.csv file for the creation of the source, which provides the status of the operation. The report is generated under the results directory.

CSV file for source creation
All of the columns are mandatory.
Caution: The input file must be named “sources_create.csv” for the script to process it successfully.
Column name | Description | Values |
---|---|---|
category_type | This allows you to specify the category id for the source type. The category id for Linux type source is 1. | Integer Values Filesystem 1: Linux |
hostname | This allows you to specify or select a hostname (or IP address) to back up. An error is displayed if you attempt to add a host/directory combination that already exists in the backup set. | String Ex: 192.168.53.187 |
directory_path | This allows you to specify the directory path you want to backup. There is a limit of 255 characters for a directory name. | String |
data_deduplication | This allows you to turn data deduplication on or off. If it is set to TRUE, the encrypt_strategy value is overridden to NONE. | Boolean field TRUE FALSE |
encrypt_strategy | This allows you to specify whether encryption should be enabled and, if yes, where it should be performed. Backup encryption can be performed on the server or on the client. It is important to store encryption key passphrases and certificates securely. Backups cannot be retrieved if the passphrase files or certificates are lost. | String Accepted values: · SERVER · CLIENT · NONE |
compress_strategy | This allows you to specify whether compression should be enabled and, if yes, what type of compression should be performed. Compression of the data can be done on the Amanda server or client. · Fast compression provides a smaller backup window. · Best compression likely provides a smaller backup size. · Custom compression provides an option to specify a different compression command, which you should specify in the custom_compress column. | String Accepted values: · NONE · CLIENT FAST · CLIENT BEST · CLIENT CUSTOM · SERVER FAST · SERVER BEST · SERVER CUSTOM |
exclude_path | It allows you to specify the list of file paths to be excluded from the backup. Specify the file paths inside []. | String Example: ["/home/john/mybackup2/","/home/john/mybackup3/"] In the above example, the directories specified in the list will be excluded from the backup. If you do not want to exclude anything, specify the value as [] |
apply_global_exclusion | A Boolean value which specifies whether the global exclusion policy should be applied for the source or not. | Boolean field TRUE FALSE |
custom_compress | It is used if the user specifies CLIENT CUSTOM or SERVER CUSTOM in the compress_strategy field. It specifies the path to the compression program. | String Example bin/pigz |
backuptool | The Linux/Solaris/Mac OS X filesystems use the GNU-tar utility, which supports exclude patterns. | String gtar |
backupsets | This allows you to specify the list of backup set ids you want to link to the source. | Example [1,2] Here the backupset whose id is 1 and 2 are linked to the source. |
comments | This allows you to include any comments on the Source. | String |
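The source table above can likewise be turned into a sources_create.csv sketch. Column names come from the table; the values, and the assumption of a header row, are illustrative:

```python
import csv

# Column order follows the table above; values are illustrative.
FIELDS = ["category_type", "hostname", "directory_path", "data_deduplication",
          "encrypt_strategy", "compress_strategy", "exclude_path",
          "apply_global_exclusion", "custom_compress", "backuptool",
          "backupsets", "comments"]

row = {
    "category_type": 1,                           # 1 = Linux filesystem
    "hostname": "192.168.53.187",
    "directory_path": "/home/john",
    "data_deduplication": "FALSE",
    "encrypt_strategy": "NONE",
    "compress_strategy": "SERVER FAST",
    "exclude_path": '["/home/john/mybackup2/"]',  # [] to exclude nothing
    "apply_global_exclusion": "FALSE",
    "custom_compress": "",                        # only for CLIENT/SERVER CUSTOM
    "backuptool": "gtar",
    "backupsets": "[1,2]",                        # backup set ids to link
    "comments": "bulk-created source",
}

with open("sources_create.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow(row)
```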
The backup now configuration allows you to trigger the backups for the backup sets specified in the CSV file.
- 1.You begin by filling in the backupsets_now.csv file. It contains the following columns.
- 2.It is then fed to the python3 creator.py script.
- 3.Select option 6 to configure Backup now.

CSV file for triggering backups
All of the columns are mandatory.
Caution: The input file must be named “backupsets_now.csv” for the script to process it successfully.
Column name | Description | Values |
---|---|---|
backup_level | This allows you to specify the level of backup to run. | String Values · full · incremental · smart |
backupset | This allows you to specify the backup set names for which the backup is to be performed for the sources associated with the backup set. | String Ex: opendrives_set_5 |
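A minimal backupsets_now.csv can be generated in the same way as the other input files. The rows below are illustrative, and whether the script expects a header line should be checked against the sample files:

```python
import csv

# One row per backup set to trigger; values are illustrative.
rows = [
    {"backup_level": "full", "backupset": "opendrives_set_5"},
    {"backup_level": "incremental", "backupset": "opendrives_set_10"},
]

with open("backupsets_now.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["backup_level", "backupset"])
    writer.writeheader()
    writer.writerows(rows)
```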
The script begins to read the data from the CSV file and performs asynchronous API calls for triggering the backups for the specified backup sets.
To fetch information about existing backup sets, execute the script and choose option “1”. The information of all the backup sets is fetched and placed at “results/results_all_backupsets.csv”.

Screenshot of results_all_backupsets.csv
To fetch information about existing servers, execute the script and choose option “7”. The information on servers is fetched and placed at “results/servers_information.csv”.

Screenshot of servers_information.csv
To fetch information about existing schedule plans, you should execute the script and choose option “5”. The information on schedules is fetched and put in the path “results/schedule_information.csv”.
The main purpose of using this operation is to get the mapping between the schedule name and schedule id so that the end user can attach the schedule id during backup set creation.

Screenshot of schedule_information.csv
To fetch information about existing storage, execute the script and choose option “8”. The information on storage is fetched and placed at “results/storages_information.csv”.
The main purpose of using this operation is to get the mapping between the storage name and storage id so that the end user can attach the storage id during backup set creation.

Screenshot of storages_information.csv
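Once storages_information.csv is available, the name-to-id mapping the text mentions can be extracted with a few lines of Python. The column names used here (`storage_id`, `storage_name`) are assumptions; match them to the actual report headers:

```python
import csv
import io

# Hypothetical excerpt of results/storages_information.csv;
# real column names may differ from those assumed below.
sample = io.StringIO(
    "storage_id,storage_name\n"
    "1,store1\n"
    "4,vtape_store\n"
)

# Map storage name -> id, ready to fill the storage_device column
# of backupsets_create.csv.
storage_ids = {r["storage_name"]: int(r["storage_id"]) for r in csv.DictReader(sample)}
```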
This operation covers the schedule creation. The process progresses as indicated in the below flowchart.

Schedule plan creation flowchart
- 1.You begin by filling in the schedule_create.csv sheet with the schedule data. The columns in the CSV file are discussed shortly.
- 2.It is then fed to the python3 creator.py script.
- 3.Please authenticate yourself by providing your credentials to proceed. You will also need to provide the ZMC IP address and ZMC port.
- 4.Select option 11 to create the schedule.
- 5.The script begins to read the data from the CSV file and performs synchronous API calls for creating the schedule.
You can evaluate the outcome of the operation using the results_schedule_create.csv report, which provides the status of the operation. The report is generated under the results directory.

Screenshots of results_schedule_create.csv
To fetch all previously added schedules, use option 5. The report is generated as schedule_information.csv under the results directory.

Screenshots of schedule_information.csv

Screenshot of schedule_create.csv file
Caution: The input file must be named “schedule_create.csv” for the script to process it successfully.
Column name | Description | Values | Mandatory |
---|---|---|---|
schedule_name | This allows you to specify a name for the schedule you want to create. | String · Minimum 7 characters. · Not more than 64 characters. · It can have only letters, numbers, '-' and '_'. · Cannot start or end with special characters. Example: Week-test | Yes |
schedule_start_time | This allows you to specify the start time for the schedule. | String Format: yyyy-mm-ddThh:mm:ssZ Example: 2022-06-01T11:36:38Z. T is the separator that the ISO 8601 combined date-time format requires. Z stands for the Zero time zone, Coordinated Universal Time (UTC). Example: to set the time as 2:30 p.m. IST, enter 9:00 a.m. UTC. | Yes |
scheduled_zone | This allows you to specify a time zone for your schedule. | String Example: Asia/Calcutta | Yes |
start_time_minute | This allows you to specify the start time in minutes in UTC. It will be automatically converted into the time zone you specified earlier. | Integer Example: 11 (UTC time) | Yes |
start_time_hours | This allows you to specify the start time in hours in UTC. It will be automatically converted into the time zone you specified earlier. | Integer Example: 12 (UTC time) | Yes |
schedule_type | This allows you to specify the schedule type for your schedule. | String Choose between Daily, Weekly, Monthly_Weeks, and Monthly_Days. | Yes |
full_back_up_time_specified | This allows you to specify whether a separate start time for full backups is wanted. | String Either "true" or "false" | Yes |
full_back_up_start_time_hours | This allows you to specify the start time in hours for full backups (used only when full_back_up_time_specified is set to true). | Integer Example: 11 (UTC time) | No (conditional) |
full_back_up_start_time_minute | This allows you to specify the start time in minutes for full backups (used only when full_back_up_time_specified is set to true). | Integer Example: 30 (UTC time) | No (conditional) |
weeks_to_backup | This allows you to specify the weeks in which you want backups to happen. [Note: only for the Weekly and Monthly_Weeks schedule types] | Integer or string of integers. Permitted values: 1 to 5. For a single value, enter a bare integer, e.g. 2. For multiple values, use a quoted string, e.g. "1,2,5". | Yes |
months_to_backup | This allows you to specify the months in which you want backups to happen. [Note: only for the Monthly_Weeks and Monthly_Days schedule types] | String Permitted values: jan to dec (first 3 characters of each month). Example 1: "jan". Example 2: "jan,nov,oct,aug". | Yes |
backup_type_full | For Daily, Weekly, and Monthly_Weeks: this allows you to specify the days of the week on which the backup type should be FULL. | String Permitted values: mon to sun (first 3 characters of each day). Example 1: "mon". Example 2: "mon,wed". | Yes (only if you want the backup to be full) |
(Continued from the previous cell) | For Monthly_Days: this allows you to specify the days of the month on which the backup type should be FULL. | Integer or string of integers. Permitted values: 1 to 31. For a single value, enter a bare integer, e.g. 2. For multiple values, use a quoted string, e.g. "19,15". | Yes (only if you want the backup to be full) |
backup_type_incremental | For Daily, Weekly, and Monthly_Weeks: this allows you to specify the days of the week on which the backup type should be INCREMENTAL. | String Permitted values: mon to sun (first 3 characters of each day). Example 1: "mon". Example 2: "mon,wed". | Yes (only if you want the backup to be incremental) |
(Continued from the previous cell) | For Monthly_Days: this allows you to specify the days of the month on which the backup type should be INCREMENTAL. | Integer or string of integers. Permitted values: 1 to 31. For a single value, enter a bare integer, e.g. 2. For multiple values, use a quoted string, e.g. "19,15". | Yes (only if you want the backup to be incremental) |
backup_type_nobackup | For Daily, Weekly, and Monthly_Weeks: this allows you to specify the days of the week on which the backup type should be NO BACKUP. | String Permitted values: mon to sun (first 3 characters of each day). Example 1: "mon". Example 2: "mon,wed". | Yes (only if you want no backup) |
(Continued from the previous cell) | For Monthly_Days: this allows you to specify the days of the month on which the backup type should be NO BACKUP. | Integer or string of integers. Permitted values: 1 to 31. For a single value, enter a bare integer, e.g. 2. For multiple values, use a quoted string, e.g. "19,15". | Yes (only if you want no backup) |
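The schedule_start_time value and the UTC-based start_time_hours / start_time_minute columns can be derived from a local time with the standard library. This sketch reproduces the 2:30 p.m. IST example above (IST is UTC+5:30); a fixed offset is used here purely for illustration:

```python
from datetime import datetime, timedelta, timezone

# Fixed offset for Asia/Calcutta (IST = UTC+5:30), used for illustration.
IST = timezone(timedelta(hours=5, minutes=30))

local = datetime(2022, 6, 1, 14, 30, tzinfo=IST)  # 2:30 p.m. IST
utc = local.astimezone(timezone.utc)              # -> 9:00 a.m. UTC

# ISO 8601 combined date-time with the trailing Z, as the column expects.
schedule_start_time = utc.strftime("%Y-%m-%dT%H:%M:%SZ")
start_time_hours, start_time_minute = utc.hour, utc.minute
```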
This operation covers the storage creation. The process progresses as indicated in the below flowchart.

Storage creation flowchart
- 1.You begin by filling in the storage_create.csv sheet with the storage data. The columns in the CSV file are discussed shortly.
- 2.It is then fed to the python3 creator.py script.
- 3.Please authenticate yourself by providing your credentials to proceed. You will also need to provide the ZMC IP address and ZMC port.
- 4.Select option 9 to create storage.
- 5.The script begins to read the data from the CSV file and performs asynchronous API calls for creating the storage.
You can evaluate the outcome of the operation using the storage_results.csv report for the creation of storage, which provides the status of the operation. The report is generated under the results directory.

Screenshot of storage_create.csv
Caution: The input file must be named “storage_create.csv” for the script to process it successfully.
All of the columns are mandatory.
Column name | Description | Values |
---|---|---|
ae_backup_servers | This allows you to specify the id of the backup server being used. | Integer Ex: 1 |
storage_device_name | This allows you to specify a name for the storage. If you attempt to create storage that already exists, an error is displayed. | String Ex: store1 |
root path | This allows you to specify the root path you want to create the storage in. It is the directory where the backup images will be stored. This is normally specified when the device is configured in the Storages page. | String Ex: /var/lib/amanda/disk |
comments | This allows you to specify comments. Comments are optional and are intended to serve as a reminder as to why the backup set was created. | String Only alphanumeric characters, '-', '.' and '_' are allowed. |
category_type | This allows you to specify the category type of the storage device (simple disk, vtape). | Integer Ex: 5 Values Simple disk: 5 Vtape: 4 |
output_buffer_abbr | This allows you to specify the unit of storage for the below output buffer size | String Values KiB: k MiB: m |
output_buffer_size | This allows you to specify the amount of memory used by Amanda to hold data as it is read from the network or disk before it is written to the output device. | Integer |
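The storage table above maps to a storage_create.csv like the following sketch. Values are illustrative, and the assumption of a header row should be checked against the sample files:

```python
import csv

# Column names follow the table above, including the "root path"
# column written with a space as documented; values are illustrative.
FIELDS = ["ae_backup_servers", "storage_device_name", "root path",
          "comments", "category_type", "output_buffer_abbr", "output_buffer_size"]

row = {
    "ae_backup_servers": 1,
    "storage_device_name": "store1",
    "root path": "/var/lib/amanda/disk",
    "comments": "bulk-created_storage",
    "category_type": 5,          # 5 = simple disk, 4 = vtape
    "output_buffer_abbr": "m",   # m = MiB, k = KiB
    "output_buffer_size": 64,    # illustrative value
}

with open("storage_create.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow(row)
```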
This operation covers the server creation. The process progresses as indicated in the below flowchart.

Creation of server flowchart
- 1.You begin by filling in the server_create.csv sheet with the server data. The columns in the CSV file are discussed shortly.
- 2.It is then fed to the python3 creator.py script.
- 3.Please authenticate yourself by providing your credentials to proceed. You will also need to provide the ZMC IP address and ZMC port.
- 4.Select option 10 to create the server.
- 5.The script begins to read the data from the CSV file and performs asynchronous API calls for creating the server.
You can evaluate the outcome of the operation using the server_results.csv report, which provides the status of the operation. The report is generated under the results directory.

Screenshot of server_create.csv
All of the columns are mandatory.
Caution: The input file must be named “server_create.csv” for the script to process it successfully.
Column name | Description | Values |
---|---|---|
ae_version | This allows you to specify the version of the ae server. | Integer Ex: 1 |
port | This allows you to specify the port number of the machine being used. | Integer Ex: 8008 |
server_hostname | This allows you to specify the name of the host server | String Only alphanumeric characters, '-', '.' and '_' are allowed. |
server_ip | This allows you to specify the IP address of the ae server. The same server cannot be added multiple times. | String Only numbers and '.' are allowed. Ex: 192.168.52.96 |
server_name | This allows you to specify custom name for server. This cannot be edited once created. | String Only alphanumeric characters, '-', '.' and '_' are allowed. |
server_port | This allows you to specify the port that is used to access the ae server. | Integer Ex: 8002 |
server_region | This allows you to specify place where the backup server is located. | String Only alphanumeric characters, '-', '.' and '_' are allowed. Ex: India |
zmc_ip | This allows you to specify the IP address of the machine being used to access the ZMC console. | String Only numbers and '.' are allowed. Ex: 192.168.52.113 |
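Since server_ip and zmc_ip accept only dotted IPv4 addresses, a quick pre-flight check with the standard ipaddress module can catch bad rows before the script runs. This helper is illustrative and not part of the utility:

```python
import ipaddress

def is_valid_ipv4(value: str) -> bool:
    """Return True if value is a dotted IPv4 address (numbers and '.')."""
    try:
        ipaddress.IPv4Address(value)
        return True
    except ipaddress.AddressValueError:
        return False

ok = is_valid_ipv4("192.168.52.96")
bad = is_valid_ipv4("backup-server.local")  # hostnames are not accepted here
```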

Screenshot of backupset_schedule_update.csv
Caution: The input file must be named “backupset_schedule_update.csv” for the script to process it successfully.
- 1.The data is fed into the backupset_schedule_update.csv file. The csv file contains the following columns.
- 2.The following table gives an overview of the columns, their description, and the values expected to be entered in that specific column.
- 3.Execute the script using python3 creator.py. Provide the username, password, ZMC IP address, and ZMC port. Enter option 12 to update the backup set with the new schedule id to which it is to be linked.
Column name | Description | Values |
---|---|---|
backupset_id | Specify backupset id you want to update | Integer Ex: 1 |
schedule | Specify the schedule id you want the backupset to link to | Integer Ex: 1 |
active | Activate the specified schedule or deactivate. Enter TRUE to activate and FALSE to deactivate. | Boolean Ex: TRUE, FALSE |
You can evaluate the outcome of the operation using the results_backupset_update.csv report which provides the status of the operation. The report is generated under the results directory.
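The three columns above can be filled programmatically as well; this sketch writes a single illustrative row (whether a header line is expected should be checked against the sample files):

```python
import csv

# One row per backup set to re-link; values follow the table above.
with open("backupset_schedule_update.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["backupset_id", "schedule", "active"])
    writer.writeheader()
    writer.writerow({"backupset_id": 1, "schedule": 2, "active": "TRUE"})
```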
This operation covers dedupe source creation. The process is shown in the flowchart below. A Linux source type is used for illustration.

- 1.You begin by filling in the sources_create.csv sheet with the source data. The columns in the CSV file are discussed shortly.
- 2.The CSV file is then fed to the python3 creator.py script.
- 3.Select option 3 to create sources.
- 4.The script begins to read the data from the CSV file and performs asynchronous API calls for creating the sources.
You can evaluate the outcome of the script operation via the results_sources_create.csv file for the creation of the source, which provides the operation status. The report is generated under the results directory.
Columns within sources_create.csv file

CSV file for source creation. All the columns are mandatory.
Caution: Please note that the input file should be named “sources_create.csv” for the script to process it successfully.
Column name | Description | Values |
---|---|---|
category_type | This allows you to specify the category id for the source type. The category id for Linux type source is 1. | Integer Values Filesystem 1: Linux |
hostname | This allows you to specify or select a hostname (or IP address) to back up. An error is displayed if you attempt to add a host/directory combination that already exists in the backup set. | String Ex: 192.168.53.187 |
directory_path | This allows you to specify the directory path you want to backup. There is a limit of 255 characters for a directory name. | String |
data_deduplication | This allows you to turn data deduplication on or off. If it is set to TRUE, the encrypt_strategy value is overridden to NONE. | Boolean field TRUE FALSE |
encrypt_strategy | This allows you to specify whether encryption should be enabled and, if yes, where it should be performed. Backup encryption can be performed on the server or the client. It is important to store encryption key passphrases and certificates securely. Backups cannot be retrieved if the passphrase files or certificates are lost. | String Accepted values: · SERVER · CLIENT · NONE |
compress_strategy | This allows you to specify whether compression should be enabled and, if yes, what type of compression should be performed. Compression of the data can be done on the Amanda server or client. · Fast compression provides a smaller backup window. · Best compression likely provides a smaller backup size. · Custom compression provides an option to specify a different compression command, which you should specify in the custom_compress column. | String Accepted values: · NONE · CLIENT FAST · CLIENT BEST · CLIENT CUSTOM · SERVER FAST · SERVER BEST · SERVER CUSTOM |
exclude_path | It allows you to specify the list of file paths to be excluded from the backup. Specify the file paths inside []. | String Example: ["/home/john/mybackup2/","/home/john/mybackup3/"] In the above example, the directories specified in the list will be excluded from the backup. If you do not want to exclude anything, specify the value as [] |
apply_global_exclusion | A Boolean value specifies whether the global exclusion policy should be applied for the source or not. | Boolean field TRUE FALSE |
custom_compress | It is used if the user specifies CLIENT CUSTOM or SERVER CUSTOM in the compress_strategy field. It is used to specify the path to the compression program. | String Example bin/pigz |
backuptool | The Linux/Solaris/Mac OS X filesystems use the GNU-tar utility, which supports exclude patterns. | String gtar |
backupsets | This allows you to specify the list of backup set ids you want to link to the source. | Example: [1,2] Here the backupset whose id is 1 and 2 are linked to the source. |
comments | This allows you to include any comments on the Source. | String |
This operation recreates all the storage from your prior version in your new version.

- 1.Run the python creator.py script by executing the command: python3 creator.py
- 2.Provide ZMC IP address and ZMC port.
- 3.Select option 14 to fetch all the information of storage devices in your current version.
- 4.The script begins to perform asynchronous API calls for fetching the storage and storing the storage data in the CSV file.
- 5.Once you have upgraded to a new version of ZMC, you can select option 9 to create the storage.
- 6.The script begins to read the data from the CSV file that was generated previously and performs asynchronous API calls to create the storage.
You can evaluate the outcome of the operation using the storage_results.csv report, which provides the status of the operation. This report is generated under the results directory.