
Datometry Invites Teradata and Oracle Customers to Shift to the Cloud Without Changing Code

Teradata is notable for its advanced SQL engine, designed to handle extremely complex work such as recursive queries, its own unique syntax, and custom logic for parallelizing workloads; it is also known for scalability and performance. As a result, Teradata has skillfully positioned itself for organizations with the most challenging analytic problems, and it has finally embraced the cloud aggressively.

Meanwhile, the modern cloud warehouses, the Redshifts, Synapses, Snowflakes, and BigQueries, offer hyperscale alternatives with pay-as-you-go pricing that is more cost-efficient than legacy Teradata platforms. Unfortunately, for many organizations, functionality gaps and required source code and/or schema changes have been roadblocks to migration.

Datometry says the solution is not data virtualization but database virtualization. Its technique is to insert a runtime that acts as a buffer between your Teradata SQL statements and the target cloud environment. The goal is to let Teradata customers run their Teradata queries on diverse targets without modifying or rewriting existing SQL programs. Its Hyper Q product is now adding Oracle to the list of supported database sources.
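As a rough sketch of that idea (not Datometry's actual implementation; the translation rule, class, and function names below are hypothetical), the runtime can be pictured as a proxy that intercepts each Teradata statement and rewrites it for the target before forwarding it:

```python
# Hypothetical sketch of a database-virtualization runtime: the application
# still emits Teradata SQL, and a proxy layer translates each statement into
# the target warehouse's dialect before forwarding it. Illustration only.
import re


def translate_teradata(sql: str) -> str:
    """Rewrite one well-known Teradata-ism into ANSI SQL."""
    # Teradata allows SEL as shorthand for SELECT. A regex is naive here;
    # a real translator would work on a full parse tree, not patterns.
    return re.sub(r"\bSEL\b", "SELECT", sql, flags=re.IGNORECASE)


class VirtualizationProxy:
    """Buffer between the app's Teradata SQL and the target engine."""

    def __init__(self, target_connection):
        # target_connection: any DB-API-style client for the cloud target.
        self.target = target_connection

    def execute(self, teradata_sql: str):
        translated = translate_teradata(teradata_sql)
        # The application never changes: it still submits Teradata SQL.
        return self.target.execute(translated)
```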

At the center of Datometry's approach is its trademarked hypervisor, which emulates SQL database calls on the fly. It breaks those complex calls, stored procedures, and/or macros down into atomic operations the target environment can understand. Because some of these operations are time-critical, it offers policy-based queueing that syncs with the existing policies running on the source. It also provides JDBC and ODBC APIs for BI and ETL tools.
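In practice, that means a BI or ETL job keeps its standard driver stack and its Teradata SQL, and only the connection endpoint changes. A minimal illustration using pyodbc, with a placeholder driver name, host, and credentials (none of these reflect real Hyper Q configuration):

```python
# Hypothetical client-side view: an ETL job keeps its standard ODBC access
# path and its Teradata SQL; only the endpoint moves from the Teradata host
# to the virtualization layer. Driver, server, and credentials are made up.
import pyodbc

# Before: connect straight to Teradata.
# conn = pyodbc.connect("DRIVER={Teradata};DBCNAME=teradata-prod;UID=etl;PWD=...")

# After: point the same driver stack at the virtualization endpoint.
conn = pyodbc.connect(
    "DRIVER={Hyper Q};SERVER=hyperq-gateway.example.com;UID=etl;PWD=secret"
)

cursor = conn.cursor()
# The statement is still Teradata SQL; the runtime rewrites it for the
# target engine behind the scenes.
cursor.execute("SEL customer_id, SUM(amount) FROM sales.orders GROUP BY 1")
for row in cursor.fetchall():
    print(row)
```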

Certainly, Datometry is not the first to promise no changes to existing code. SQL translators exist, but Datometry claims they are not fully effective: by its estimate, code converters handle only about 60-70% of all workloads. The traditional approach was to add non-SQL code to the application to compensate for the differences between Teradata SQL and the target database's SQL. Likewise, the schema migration tools of cloud databases often miss custom data types and structures.
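To make that gap concrete, consider one well-known Teradata-ism: the QUALIFY clause, which filters directly on a window function and which many target engines reject. A converter, or a runtime like Datometry's, must emit an equivalent subquery. The table and column names below are invented for illustration:

```python
# One concrete dialect gap, as a sketch. Teradata accepts QUALIFY:
TERADATA_SQL = """
SELECT customer_id, order_ts, amount
FROM sales.orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) = 1
"""

# Equivalent ANSI SQL a translator must produce for engines without QUALIFY:
ANSI_SQL = """
SELECT customer_id, order_ts, amount
FROM (
    SELECT customer_id, order_ts, amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
    FROM sales.orders
) t
WHERE rn = 1
"""
```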

But can Datometry handle all the eccentricities of Teradata SQL? The company claims 99% coverage of Teradata workloads. Invariably, there is a cost: Datometry's virtualization layer adds 1-2% of overhead, so a query that runs in 100 seconds natively would take roughly 101-102 seconds through the layer. Though, as with EPA mileage ratings, your mileage will vary depending on the workload, that is a small price to pay compared to the cost of maintaining converted SQL code and schema conversion tools.

About four years ago, Datometry performed its initial proof of concept (POC) with SQL Server on HPE Superdome machines on-premises. It has since pivoted to support Azure Synapse and Google BigQuery in the cloud, and, as noted above, it has just announced a preview for Oracle. Notably, Datometry has not yet targeted Amazon Redshift or Snowflake, so it still has work to do.
