This job was previously called: Import Data into Related Data Set.

This job imports data from an XML or CSV file into a related data set. Depending on your selection for rd-import-multiple-files, one or more files can be imported. If you select the option Loop through the files, the matching files are imported in order, one after the other. If the source is a zip archive, however, only one file from the archive is imported. For this reason, we recommend against importing zip archives that contain multiple files.

When a data set is imported from a CSV file, the columns (structure) must already exist in the related data set. With an XML file, both the structure (i.e. the columns of the table) and the data can be added during import.

For more information, see Structure of XML and CSV Files (Related Data)​.

Please note that Related Data text columns have an upper limit of 2,000 characters. The searchable string (for use in Segmentation) is limited to 200 characters.

Basic Settings​

​Name​

This input field defines the name of the automation. The name is used to identify the automation in the system. It appears on the ​Time-based Automations Overview​ window. The maximum length is 100 characters.

​Description​

This input field defines a description of the automation. The description makes it easier to identify the automation in the system.

​Job​

This input field defines the job. For a list of all available jobs, see All Available Time-Based Jobs​.

Define Job​

This area defines the specific parameters of the job.

  • Related Data Set: This drop-down list defines the related data set into which the data is imported.

  • Import Mode: This drop-down list defines how the records are imported from the source file. The following options are available:

    • Add: This option saves new records from the import file to the data set without changing any existing records. If the data set is created with unique keys, records with keys that match existing records are not imported.

    • Replace: This option overwrites all of the existing records in the data set with the records from the import file.

    • Update: This option replaces existing records in the data set with the corresponding record from the import file. Only existing records which have a key that matches a record in the import file are replaced. Existing records in the data set which do not correspond to any records in the import file remain unchanged. This option does not add records with new keys to the data set. For non-unique data sets, multiple records can be added or deleted if the value in the key column of the import file matches an existing record.

    • Update and Add: This option replaces existing records in the data set with the corresponding record from the import file and adds new records to the data set. Existing records in the data set which do not correspond to any records in the import file remain unchanged. Records with new keys are added to the data set.

The update modes ​Update​ and ​Update and Add​ replace all existing records in the related data set with records in the import file that contain matching key column values. In non-unique data sets, this means that the data set can actually contain fewer records for a specific key value after import.

For example, a data set called ​Purchase​ has five purchase records for a key value ​Customer_ID​. The import file only contains three records for the same key value. The update modes ​Update​ and ​Update and Add​ delete the original five records and leave only the three new records from the import file in the data set.
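The replacement behavior for non-unique key values can be sketched as follows. This is an illustrative model only, not Engage's actual implementation; the record structure and the apply_update helper are assumptions for the example.

```python
# Hypothetical sketch of the "Update" / "Update and Add" modes for a
# non-unique data set: all existing records for a key that appears in the
# import file are dropped and replaced by that key's records from the file.

def apply_update(existing, import_records, add_new_keys=False):
    """Both arguments are lists of dict records keyed on "Customer_ID"."""
    import_keys = {r["Customer_ID"] for r in import_records}
    existing_keys = {r["Customer_ID"] for r in existing}
    # Every existing record whose key appears in the import file is removed...
    result = [r for r in existing if r["Customer_ID"] not in import_keys]
    # ...and replaced by the records for that key from the import file.
    # "Update and Add" (add_new_keys=True) also accepts records with new keys.
    for rec in import_records:
        if add_new_keys or rec["Customer_ID"] in existing_keys:
            result.append(rec)
    return result

# Five purchases for one customer in the data set, three in the import file:
existing = [{"Customer_ID": 42, "item": f"old-{i}"} for i in range(5)]
imported = [{"Customer_ID": 42, "item": f"new-{i}"} for i in range(3)]
print(len(apply_update(existing, imported)))  # 3 — the five originals are gone
```

Note that the result holds fewer records than before the import, exactly as in the Purchase example above.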

  • Synchronization Mode: These options refine how the import mode functions when one of the two update options is selected. The following options are available:

    • Overwrite existing rows completely: This option deletes the entire row of a record whose key column value matches a record in the import file. The row is replaced with values from the corresponding record in the import file. As a result, previously existing column values which are not included in the import file are deleted.
      For non-unique data sets containing unique non-key columns, the combination of key column and unique column values is used to find matching records. This is used to update single records inside a non-unique data set. For example, when a related data set contains multiple orders for a customer (identified by email address as the key column), the record for a specific order is identified by the combination of email address and the unique order ID.

    • Overwrite columns only: This option is only available for unique data sets. In this case, a record whose key column value matches a record in the import file is deleted. The column values, however, are stored temporarily. The new record is added to the data set. Columns that do not contain values in the import file are populated with values from the deleted record.
      This import mode is used to update related data sets from multiple data sources without merging the data into a single import file.
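The difference between the two synchronization modes can be sketched for a unique data set. This is a simplified model under assumed dict-based records keyed by an "email" column, not Engage's actual implementation.

```python
# Hypothetical sketch of the two synchronization modes for a unique data set.
# Records are dicts; both data set and import file are keyed by "email".

def overwrite_rows(existing, imported):
    """Overwrite existing rows completely: the imported record replaces the
    whole row, so columns missing from the import file are lost."""
    merged = dict(existing)
    merged.update(imported)
    return merged

def overwrite_columns(existing, imported):
    """Overwrite columns only: columns missing from the imported record are
    populated from the previously existing record."""
    merged = dict(existing)
    for key, new_rec in imported.items():
        old_rec = merged.get(key, {})
        merged[key] = {**old_rec, **new_rec}  # new values win, old ones fill gaps
    return merged

existing = {"a@example.com": {"email": "a@example.com", "city": "Berlin", "score": 7}}
imported = {"a@example.com": {"email": "a@example.com", "score": 9}}
print(overwrite_rows(existing, imported)["a@example.com"])     # the "city" column is lost
print(overwrite_columns(existing, imported)["a@example.com"])  # "city" kept, "score" updated
```

This mirrors the multi-source use case: each source file can update only its own columns without erasing columns maintained by the other sources.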

  • Error Mode: This drop-down list defines how errors in the source file are handled during data import. The following options are available:

    • Skip Value: If an error is encountered in a certain row of the import file, only the value that contains the error is not imported. The other values in the row are imported.

    • Skip Row: If an error is encountered in a certain row in the import file, the entire row which contains the error is not imported.
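The two error modes can be illustrated on a single parsed row. The validate function below is a stand-in for Engage's per-value checks and is an assumption for the example.

```python
# Hypothetical sketch of the two error modes applied to one parsed CSV row.
# validate() stands in for Engage's per-value checks (e.g. type or length).

def import_row(row, validate, error_mode="skip_value"):
    errors = {col for col, val in row.items() if not validate(col, val)}
    if not errors:
        return row
    if error_mode == "skip_row":
        return None  # the entire row is dropped
    # skip_value: drop only the erroneous values, keep the rest of the row
    return {c: v for c, v in row.items() if c not in errors}

# For illustration, assume text values are limited to 5 characters:
validate = lambda col, val: len(val) <= 5
row = {"name": "Alice", "note": "far too long"}
print(import_row(row, validate))              # {'name': 'Alice'}
print(import_row(row, validate, "skip_row"))  # None
```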

  • System Name: This drop-down list selects the Engage system. If more than one system is available, the name of each system is displayed.

  • Encoding: This drop-down list defines the character encoding of the source file. The character encoding ensures that the imported text is displayed correctly. A symbol next to the drop-down list opens a pop-up window where it is possible to select from a list of all available types of character encoding.

  • Source File: This input field defines the name of the data file to be imported. This input field can also be used to identify different files uploaded to the same location. In this case, a file pattern is specified using the asterisk as a wildcard character. For example, the file pattern *.xml matches any XML file uploaded to the location specified in the Address input field. Complex patterns can be specified with a regular expression.
    A zip archive can also be used as the source file. When a zip file is used, the type of compression must be specified with an additional parameter. For more information, see the Additional Parameters section below.
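The two pattern styles behave differently, as the following sketch shows. The file names are made up for illustration; Engage's own matching engine may differ in detail.

```python
# Hypothetical illustration of Source File patterns: "*" wildcards behave
# like shell globs, while complex patterns use regular expression syntax.
import fnmatch
import re

uploads = ["orders_2024.xml", "orders_2024.csv", "readme.txt"]

# Glob-style pattern, e.g. *.xml:
print(fnmatch.filter(uploads, "*.xml"))           # ['orders_2024.xml']

# Regular expression for files named orders_<year>.csv:
pattern = re.compile(r"orders_\d{4}\.csv$")
print([f for f in uploads if pattern.search(f)])  # ['orders_2024.csv']
```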

Automations that upload or import files from a server will process the specified files at the scheduled time, even if the source file is incomplete. For this reason, it is recommended to use a different file name or extension when transferring files to the server and rename the file with a script when transfer is complete.
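The recommended transfer pattern can be sketched as below. The host, credentials, and the ".part" suffix are placeholders; any temporary name that the import job's file pattern does not match will do.

```python
# Hypothetical sketch of a safe transfer: upload the file under a temporary
# name that the import job's file pattern does not match, then rename it
# once the transfer is complete.
import ftplib

def partial_name(remote_name, suffix=".part"):
    """Temporary name used during transfer; a *.csv pattern will not match it."""
    return remote_name + suffix

def safe_upload(host, user, password, local_path, remote_name):
    with ftplib.FTP(host, user, password) as ftp, open(local_path, "rb") as fh:
        ftp.storbinary(f"STOR {partial_name(remote_name)}", fh)
        # Rename only after the upload has finished, so a scheduled import
        # never processes a half-transferred file.
        ftp.rename(partial_name(remote_name), remote_name)
```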

  • File Type: This drop-down list selects the file type of the import data (XML or CSV).

  • Address: This input field defines the location where the import file is stored. A connection can be established with the server over FTP (FTP, PFTP (passive FTP), AFTP (active FTP), SFTP, FTPS), SCP, and HTTP (HTTP, HTTPS). The address has the following format: protocol://user:password@host:port/url-path.
    Example: ​ftp://joesample:1password@ftp.mapp.com/import​
    The server account credentials can be encrypted with ​Transfer Security​ records. The user name, password or complete user account are specified with placeholders for Transfer Security records. For more information, see Credentials Options​.

Additional Parameters​

Custom parameters can be used to modify how the job is processed. The first input field identifies the parameter to modify. The second input field defines the value to use for the parameter.

  • One button creates additional input fields for a custom parameter.

  • Another button removes the input fields for a custom parameter.

Enter the parameters exactly as listed here. The entries are case-sensitive.

The following parameters are available:

  • separator: Changes the separator in a CSV file. A standard CSV export has either a comma or a tab as separator. The parameter separator defines a different separator.
    Value: the desired separator. Example: ; (semicolon).

  • source-compression: Unzips zip and gzip files. If the file to be imported is in a compressed format, the type of compression must be specified.
    Value: the compression type. Example: gzip or zip.

  • admin-report-email: Sends an XML version of the import report to a specific email address.
    Value: the email address.

  • source-backup-url: Saves a backup copy of the imported data in the data system or on a server. As a general rule, Engage deletes the imported file as soon as the import is complete. If the imported data cannot be reproduced from a CRM or shop system, it is a good idea to make a backup copy when the import is finished. When this parameter is used, a default number of five backup files are saved at the specified address.
    Value: the FTP address or the URL of the backup directory.

  • source-backup-files: Defines the number of backups. This parameter can only be used together with source-backup-url. It sets a limit on the number of backups saved. If the number of saved files exceeds the limit, Engage replaces the oldest file. No other files should be saved in the specified directory, because all the files located there are managed by the backup process.
    Value: the number of backup files to keep.
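The oldest-file replacement implied by source-backup-files can be sketched as follows. This is an illustration of the rotation idea only, not Engage's actual implementation.

```python
# Hypothetical sketch of backup rotation: before a new backup is written, the
# oldest files are removed so the directory never exceeds the configured limit.
import os

def rotate_backups(directory, limit=5):
    """Remove the oldest files until there is room for one more backup."""
    backups = sorted(
        (os.path.join(directory, name) for name in os.listdir(directory)),
        key=os.path.getmtime,  # oldest modification time first
    )
    while len(backups) >= limit:
        os.remove(backups.pop(0))  # the oldest backup is replaced first
    # ...the new backup file is then written into the directory
```

Because every file in the directory takes part in this rotation, unrelated files stored there would eventually be deleted, which is why the directory must be reserved for the backup process.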

Schedule

​Schedule

This button saves your changes and opens the ​Schedule Time-based Automation​ window. Here you define the date and time of execution. For more information, see Schedule Time-Based Automation (window)​.