WANdisco announced the launch of LiveData Migrator, an automated, self-service solution that democratizes cloud data migration at any scale by enabling companies to start migrating Hadoop data from on-premises environments to Amazon Web Services (AWS) within minutes, even while the source data sets are under active use. Available as a free trial for up to five terabytes, the product lets businesses migrate HDFS data without relying on engineers or other consultants, and it can be deployed immediately to support companies’ digital transformations. LiveData Migrator works without any production system downtime or business disruption, ensuring the migration is complete and continuous and that any ongoing data changes are replicated to the target cloud environment.

LiveData Migrator moves unstructured data into cloud storage, where it can then be used with machine-learning (ML) powered cloud analytics services such as Amazon EMR, Databricks or Snowflake. LiveData Migrator also enables the transition to a hybrid architecture, in which on-premises and cloud environments are kept consistent through active-active replication, and it lays the foundation for a future multi-cloud architecture. LiveData Migrator capabilities:

  • Complete and Continuous Data Migration
    Migrates any changes made to the source data sets, allowing applications to continue to modify the source system’s data without causing divergence between source and target.
  • Rapid Availability
    Enables data to become available for use in the target environment as soon as it has been migrated, without having to wait for all data set migrations to complete.
  • Any Scale
    Migrates any volume of data, from terabytes to exabytes, to cloud storage without requiring changes to the source data to stop during migration.
  • Hadoop & Object Storage Conversion
    Migrates HDFS data to other Hadoop-compatible file systems and cloud storage, including the ongoing changes made to that data before, throughout, and after migration.
  • Selective Migration
    Allows selection of which data sets should be migrated, and can selectively exclude data from migration to specific clusters in the new environment.