All About Hybrid Cloud Environment Before Workload Migration

Sayantani Bhattacharya | June 11, 2021

Hybrid Cloud Environment
Modern organizations need fast-paced solutions to scale their services and integrate their applications for a digital foothold, driving a shift of their ecosystems from on-premise to the cloud. Selecting the right cloud architecture for the organizational ecosystem is therefore the need of the hour; a poor choice can directly impact business growth. Hence, many companies opt for a hybrid cloud solution, which allows gradual migration of workloads without affecting daily operations and provides enhanced security by segregating critical and general workloads to private and public clouds, respectively.

Many businesses across the globe are weighing whether to adopt a public, private, or hybrid cloud platform. According to Gartner, over 75% of midsize and large organizations will adopt a hybrid or multi-cloud strategy by 2021. Today, organizations that have managed IT resources and workloads in both public and private cloud environments increasingly see a hybrid platform as the way forward.

A hybrid cloud combines the storage, computing, and services environments of on-premise infrastructure, a private cloud, and a public cloud. It creates a single platform spanning on-premises and private resources as well as public cloud resources, such as those offered by Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP), and it allows seamless sharing of data and applications as computing and processing demands vary within the organization. A hybrid ecosystem establishes interconnectivity initially through data virtualization, and then via connection tools and protocols such as application programming interfaces (APIs), virtual private networks (VPNs), and/or wide area networks (WANs). The goal is a collaborative, automated, secured, and organized environment in which integrating these applications with one another and with on-premises systems is easy and fast.

How Is a Hybrid Platform Different from Others?

A hybrid deployment describes an organizational ecosystem in which on-premise, public, and private cloud environments converge into a secure and unified platform. Hybrid cloud platforms let you leverage the utilities of both the private and the public cloud ecosystem.

A private cloud is infrastructure that isolates an organization's vital business data behind firewalls on non-shared servers, making it a perfect fit for businesses dealing with confidential and sensitive information. Public cloud platforms, by contrast, are shared spaces that offer massive amounts of storage capacity and other resources and are highly scalable. In addition, the shared architecture makes public clouds affordable, with security and backups largely provided by the provider's data center.

Hybrid platforms are secure, scalable, and cost-effective, giving you the flexibility to separate workloads and benefit from both private and public cloud platforms.

Benefits of Hybrid Cloud

According to a Mordor Intelligence research report, the hybrid cloud market was valued at USD 52.16 billion in 2020 and is projected to reach USD 145 billion by 2026, a CAGR of 18.73% over the 2021–2026 forecast period. Factors such as flexibility, reliability, scalability, cost-effectiveness, security, and rapidity drive this growth.

Flexibility and Reliability

One of the prime benefits of a hybrid environment is its flexibility: you can utilize the ecosystem as your needs dictate. A hybrid cloud architecture can combine traditional on-premise systems with the latest cloud technology, with or without registering with a third-party host. Organizations with hybrid infrastructure can migrate workloads and move information between their traditional systems and the public/private cloud whenever necessary, without any service interruption.

Scalability

A hybrid cloud infrastructure allows you to utilize both a public cloud and a private cloud. The public cloud services supply additional resources, enabling organizations to expand their storage capacity and computing power. Migrating workloads to a hybrid ecosystem thus makes it easier to provision, implement, and scale resources whenever demand exceeds the capacity of the on-premise infrastructure.
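As a rough sketch, the "bursting" decision a hybrid setup enables can be as simple as the following Python snippet. The capacity figure and function name here are purely illustrative, not part of any real cloud API:

```python
# Illustrative cloud-bursting logic: serve demand from on-premise/private
# capacity first, then overflow ("burst") the remainder to the public cloud.
ON_PREM_CAPACITY = 100  # hypothetical units of compute the private side can serve

def place_workload(demand: int) -> dict:
    """Split demand between private capacity and a public-cloud burst."""
    on_prem = min(demand, ON_PREM_CAPACITY)
    burst = max(demand - ON_PREM_CAPACITY, 0)
    return {"private": on_prem, "public_burst": burst}

# Normal load stays private; a surge spills into the public cloud.
print(place_workload(80))   # everything fits on-premise
print(place_workload(130))  # 30 units burst to the public cloud
```

In a real deployment the same decision is made by an autoscaler rather than application code, but the economics are identical: the private side is sized for the baseline, and the public cloud absorbs the peaks.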

Cost-Effectiveness

Comprising both private and public cloud architectures, a hybrid cloud lets you own and operate data center infrastructure, a significant capital expense, while the public cloud side offers resources and services accounted for as variable, operational expenses. Hybrid cloud users can therefore run each workload on whichever infrastructure is most affordable for it. Moreover, they can absorb surges in business demand and increase capacity without additional capital cost. As cost-saving is one of the prime aims of organizations, a hybrid infrastructure offers a cost-effective approach with no compromise on your ecosystem's scalability, flexibility, security, and agility. According to a study by IT management solution provider Flexera, 76% of organizations use cost efficiency and savings to measure cloud progress.
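To see why the capex/opex split can pay off, here is a deliberately simplified cost model in Python. The per-unit costs are hypothetical and exist only to illustrate the shape of a hybrid bill: a steady baseline on owned private capacity, plus pay-as-you-go public capacity for whatever exceeds it:

```python
def monthly_cost(baseline_units: float, peak_units: float,
                 private_unit_cost: float, public_unit_cost: float) -> float:
    """Hypothetical hybrid bill: the steady baseline runs on owned (private)
    capacity at a low amortized cost, while only the variable peak above the
    baseline is paid per use in the public cloud."""
    burst_units = max(peak_units - baseline_units, 0)
    return baseline_units * private_unit_cost + burst_units * public_unit_cost

# 100 baseline units at an amortized 2.0/unit, plus a 50-unit peak
# bought on demand at 5.0/unit: cheaper than owning capacity for the peak.
hybrid = monthly_cost(100, 150, private_unit_cost=2.0, public_unit_cost=5.0)
all_private_for_peak = 150 * 2.0 + 50 * 2.0  # owning idle peak capacity year-round
print(hybrid)
```

The point of the sketch is the asymmetry: capacity you use constantly is cheapest to own, while capacity you need occasionally is cheapest to rent.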

Security

Securing business-critical information is always a challenge in any network-enabled ecosystem. A public cloud is typically more susceptible to data breaches and leakage: while cloud service providers make exhaustive efforts to protect their clients' data, public cloud infrastructure remains at higher risk because of its open, shared framework. In a private cloud ecosystem, by contrast, companies hold direct control over the stored data and can establish strict access protocols, since the data stored there is generally highly critical and confidential. With a hybrid service, companies can combine the security of a private cloud with the flexibility and benefits of a public cloud: selected data can be moved from the private ecosystem to the public cloud for operations, analytics, and applications, with strong encryption applied to keep it as secure as possible. Hybrid cloud security, then, is simply the protection of the data, applications, and infrastructure across multiple cloud environments that share a degree of workloads, consistency, and management.

Rapidity

Speed is one of the noteworthy attributes of network-enabled applications. Although a hybrid platform is not fundamentally quicker than a public cloud platform, it allows network optimization that curtails latency and simplifies data migration. In addition, hybrid cloud storage places non-critical workloads on the public cloud and critical workloads on the private cloud, optimizing the network to streamline traffic. As a result, you can work faster and increase your productivity.

A Few Setbacks to Keep in Mind

Despite the many benefits, you should consider some hybrid cloud challenges before workload migration. Flexera's 2020 State of the Cloud Report notes that the complexity and dynamic nature of the hybrid/multi-cloud environment bring many challenges, such as assessing the suitability of on-premises apps for migration to the cloud. Analyzing and noting these limitations, however, makes it easier for organizations to utilize their hybrid ecosystem optimally.

Latency and Compatibility

Because a hybrid cloud strategy combines private and public cloud platforms, compatibility issues can arise between them owing to their distinct properties. Moreover, analyzing the compatibility of on-premise applications before migrating them to the cloud requires considerable effort and bandwidth. For example, the private cloud component of a hybrid application may not respond as fast as the front-end public cloud component, causing operational latency and other complexities.
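When evaluating such latency gaps, a practical first step is simply to time calls to each component. Below is a minimal, illustrative Python helper; the "component" is any callable standing in for a private or public cloud service, so the names are assumptions for the sketch:

```python
import time

def measure_latency(component, payload):
    """Time a single call to a (private or public) cloud component.

    Returns the component's result together with the elapsed wall-clock
    time in seconds, so the two sides of a hybrid app can be compared.
    """
    start = time.perf_counter()
    result = component(payload)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Compare a stand-in "fast public front end" against a stand-in
# "slower private back end" (simulated with a short sleep).
fast = lambda p: p.upper()
slow = lambda p: (time.sleep(0.01), p.upper())[1]

_, t_fast = measure_latency(fast, "order")
_, t_slow = measure_latency(slow, "order")
print(f"public front end: {t_fast:.4f}s, private back end: {t_slow:.4f}s")
```

Collecting these measurements per component, before migration, is what turns "the private side feels slow" into a concrete compatibility finding.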

Temporary Risk Due to Data Transfers

For many organizations, data transfers across a hybrid cloud platform involve a third party (the public cloud host), which can introduce unnecessary and unacceptable security risks. Data leakage is also a common concern while shifting on-premise applications to cloud infrastructure. Therefore, organizations must encrypt all traffic to protect the network and avoid temporary security risks for data in transit.
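Encrypting all traffic in practice usually means enforcing TLS at every hybrid boundary. As one small, concrete illustration using Python's standard `ssl` module, a client context can be configured to require server certificates, verify hostnames, and refuse anything older than TLS 1.2 before any data crosses to the public cloud host:

```python
import ssl

# A default client context already enables certificate checking and
# hostname verification, which protects data in transit from tampering
# and impersonation.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# Additionally refuse legacy protocol versions for traffic that crosses
# the hybrid boundary to a third-party (public cloud) host.
context.minimum_version = ssl.TLSVersion.TLSv1_2

print("TLS context ready:", context.minimum_version)
```

Any socket wrapped with this context (for example via `context.wrap_socket(...)`) will then fail closed rather than fall back to an unencrypted or weakly encrypted channel, which is exactly the behavior you want for data in transit between the private and public sides.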

IAM Complexities

Applying Identity and Access Management (IAM) protocols consistently across private and public clouds requires a highly synchronized effort to meet security and compliance requirements, and it is essential to ensure no gaps exist in the hybrid cloud design. Organizations dealing with confidential data, such as those in the healthcare or finance industries, may face compliance setbacks. In addition, knowing where data actually resides and who has access to it can be a real challenge in a hybrid environment. Hence, organizations should adopt single sign-on applications and grant authorizations only when critical and necessary.
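One way to keep IAM consistent across both clouds is to drive every authorization check, private or public, from a single shared policy table. The following is a deliberately simplified Python sketch; the roles and action names are invented for illustration and do not correspond to any real cloud provider's IAM model:

```python
# One shared policy table consulted for both environments, so an identity
# has the same effective permissions everywhere - no per-cloud drift.
POLICIES = {
    "data-scientist": {"public:read", "public:write"},
    "dba":            {"public:read", "private:read", "private:write"},
}

def is_allowed(role: str, action: str) -> bool:
    """Single authorization check used for private and public resources.

    Unknown roles get an empty permission set, so the check fails closed.
    """
    return action in POLICIES.get(role, set())

print(is_allowed("dba", "private:write"))        # permitted
print(is_allowed("data-scientist", "private:read"))  # denied: no private access
```

Real deployments express the same idea through federated identity (for example SSO plus role mapping into each cloud), but the principle is the one shown: one source of truth for who may do what, evaluated identically on both sides of the hybrid boundary.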

Initial Implementation Cost

In a traditional hybrid cloud architecture, the on-premise or private cloud component requires substantial investment, maintenance, and operational skill. Implementing additional software, though necessary, further adds to the private cloud's initial cost, while proper data planning, security tools, employee training, and cloud certifications can raise the initial investment for the public cloud.

Is a Hybrid Cloud Environment the Right Choice for Your Business?

Advanced companies align their business transformation strategy with the orchestration of their cloud platforms to achieve a 'GenX' business model: an automated and agile organization, empowered by data and directed by AI insights. IBM's Voice of the Enterprise Digital Pulse report by 451 Research says that 3 out of 5 of the approximately 1,000 organizations surveyed have implemented a hybrid ecosystem with integrated on- and off-premises cloud services.

Here are some hybrid cloud use cases to help you analyze whether a hybrid infrastructure fits your organizational ecosystem.

Dynamic Workloads

Hybrid cloud is particularly effective for dynamic workloads. For example, a trading company's order-entry system that experiences significant demand surges is an ideal candidate for hybrid cloud infrastructure. Using an easily scalable public cloud for your dynamic workloads while keeping more sensitive workloads in a private cloud or on-premises data center increases your operational efficiency without hampering the security of your critical data.

Segregation of Critical and Non-Critical Workloads

When your company leverages several SaaS applications, it needs to identify and segregate their workloads so that they perform optimally while keeping data security a high priority. Hybrid cloud storage diversification allows you to move business-critical workloads to the private cloud, protected by access control mechanisms, and to shift non-critical applications to a public environment where they can be used for business analytics. Under a hybrid ecosystem, both platforms share information under the same data management yet remain distinct.
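The segregation decision itself can be captured in a small routing rule. A purely illustrative Python sketch, where the sensitivity flags are hypothetical attributes a workload record might carry:

```python
def target_environment(workload: dict) -> str:
    """Route a workload by sensitivity: critical data stays on the
    private side; everything else goes to the scalable public side.

    The flags below (contains_pii, business_critical) are illustrative
    classification attributes, not a standard schema.
    """
    if workload.get("contains_pii") or workload.get("business_critical"):
        return "private"
    return "public"

# Customer records stay behind the firewall; anonymized analytics
# jobs run on cheap, elastic public capacity.
print(target_environment({"name": "customer-db", "contains_pii": True}))
print(target_environment({"name": "clickstream-analytics"}))
```

In practice this classification usually lives in a data governance catalog rather than code, but making the rule explicit, and defaulting anything flagged as sensitive to the private side, is the core of workload segregation.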

Periodical Migration to Cloud Environment

Suppose you are planning to upgrade your operational infrastructure but are unsure about its operability. In that case, you can migrate a portion of your critical workload to the private cloud and general applications to the public environment of the hybrid cloud platform, then analyze the performance. You can continue expanding your cloud presence as needed through periodic migration of workloads, assessing the platform against both your current requirements and potential future growth.

Big Data Management

Implementing a hybrid cloud strategy lets you run a portion of your big data analytics on a highly scalable public cloud platform, while retaining your confidential big data behind the firewall of the private cloud to ensure its security.

Capacity Management

A hybrid cloud allows you to assign public cloud resources to short-term projects at a much lower cost than the on-premise data center. You can thus invest efficiently by avoiding over-expenditure on equipment you may require only temporarily.

Additionally, utilizing a hybrid environment to supplement your on-premise infrastructure is a better choice for projects that require multi-user collaboration or significant data storage and may hinder your current network performance or surpass your network capacity.

Multiple Business Requirements

Assume your organization needs to fulfill several business requirements: some critical needs can be met only by a private cloud, while other essentials suit a public cloud. Under such circumstances, a hybrid solution is a perfect fit, providing the benefits of both for your business.

Conclusion

A well-integrated and balanced hybrid approach gives your business the best of both worlds. It allows you to leverage public cloud and private cloud services without completely offloading your data to a third-party data center. Hybrid cloud computing can be an ideal solution for your business.

With the extensive features and benefits of the hybrid ecosystem, your organization can take a step into the world of modern technology and see the advantage of combining the security and control of private infrastructure with the scalability and versatility of public cloud computing.

FAQs

How do hybrid clouds work?

The hybrid cloud creates a single platform for the on-premise, private, and public cloud data, allowing gradual migration of the workloads without disturbing the daily transactions by establishing interconnectivity between different platforms. In addition, the hybrid cloud provides the security of the private ecosystem and the flexibility of the public infrastructure. 

Are hybrid clouds secure?

Hybrid clouds allow the migration of sensitive workloads to their private infrastructure, which you can protect behind the company's firewall. Thus, you can secure all confidential data within the hybrid cloud infrastructure. The remaining non-critical workloads can be shifted to the public portion of the hybrid ecosystem and utilized for various computational purposes.

Why hybrid cloud?

A hybrid cloud provides the security of the private environment and the scalability and flexibility of the public environment, which makes it one of the most preferred infrastructures for many organizations.

What is hybrid cloud storage?

Hybrid cloud storage is a method of managing cloud storage that uses both local and off-site resources. With hybrid cloud storage, businesses can shift their workloads between on-premises or private clouds and the public cloud. In addition, hybrid clouds help organizations get the most out of containers, which simplify shifting workloads among clouds.


Second, training costs are a big component of our spending, and the promise of up to 40% improvement in price performance offers potentially substantial benefit to our bottom line.” Leidos is recognized as a top 10 health IT provider delivering a broad range of customizable, scalable solutions to hospitals and health systems, biomedical organizations, and every U.S. federal agency focused on health. “One of the numerous technologies we are enabling to advance healthcare today is the use of machine learning and deep learning for disease diagnosis based on medical imaging data. Our massive data sets require timely and efficient training to aid researchers seeking to solve some of the most urgent medical mysteries,” said Chetan Paul, CTO Health and Human Services at Leidos. “Given Leidos’ and its customers’ need for quick, easy, and cost-effective training for deep learning models, we are excited to have begun this journey with Intel and AWS to use Amazon EC2 DL1 instances based on Habana Gaudi AI processors. Using DL1 instances, we expect an increase in model training speed and efficiency, with a subsequent reduction in risk and cost of research and development.” Fractal is a global leader in artificial intelligence and analytics, powering decisions in Fortune 500 companies. “AI and deep learning are at the core of our healthcare imaging business, enabling customers to make better medical decisions. In order to improve accuracy, medical datasets are becoming larger and more complex, requiring more training and retraining of models, and driving the need for improved computing price performance,” said Srikanth Velamakanni, Group CEO of Fractal. 
“The new Amazon EC2 DL1 instances promise significantly lower cost training than GPU-based EC2 instances, which can help us contain costs and make AI decision-making more accessible to a broader array of customers.” About Amazon Web Services For over 15 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud offering. AWS has been continually expanding its services to support virtually any cloud workload, and it now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 81 Availability Zones (AZs) within 25 geographic regions, with announced plans for 24 more Availability Zones and eight more AWS Regions in Australia, India, Indonesia, Israel, New Zealand, Spain, Switzerland, and the United Arab Emirates. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs. About Amazon Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Amazon strives to be Earth’s Most Customer-Centric Company, Earth’s Best Employer, and Earth’s Safest Place to Work. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Career Choice, Fire tablets, Fire TV, Amazon Echo, Alexa, Just Walk Out technology, Amazon Studios, and The Climate Pledge are some of the things pioneered by Amazon.

Read More

CLOUD APP DEVELOPMENT

Wipro Partners With National Grid to Drive Data Center Consolidation and Implement Next Generation Hybrid Cloud Architecture

Wipro | October 22, 2021

Wipro Limited, a leading global information technology, consulting and business process services company, has signed a multi-year global strategic IT and digital deal with London-headquartered National Grid, a leading multinational electric and gas utility provider, to accelerate its digital innovation journey. As part of this engagement, Wipro, through its Boundaryless Enterprise solutions, will facilitate National Grid's continued digital transformation, the integration of its managed services, and the consolidation of multiple data centers across the UK and US into next-generation hosting services. These sustainable data centers will allow for enhanced program governance, as well as heightened consolidation and the migration of all server and application functions from traditional data centers. Wipro will also help with mainframe migration and the transition to managed services, including the eventual implementation of a hybrid cloud solution for National Grid.

Shannon Soland, Chief Technology Officer, National Grid, said, "As a strategic partner, Wipro will help us accelerate our digital journey as we work to achieve next generation capabilities in infrastructure hosting services. Wipro's expertise will be instrumental as we work to improve our operating model to align with our Net Zero carbon commitment."

"Our data center consolidation efforts will allow us to realize an over 60% reduction in our data center footprint as well as a 40% reduction in our data center CO2 emissions. Additionally, this transformational program, in conjunction with Wipro, will position our IT capabilities to enable modernized SDDC techniques, technologies, and operating models to accelerate our own digital transformation as National Grid continues to build the future of energy." Daniel Jablonski, Head of Cloud and Hosting Services, National Grid

As part of the collaboration, Wipro, through its innovative solutions and expertise, will deliver a flexible, scalable, and resilient digital transformation journey for National Grid. Geoffrey Jue, Vice President - ENU Sector Head, Wipro Limited said, "National Grid is one of the world's largest utility companies, and Wipro is excited to be named as a strategic partner. This new collaboration builds on the successful two-decade-old partnership between the two companies. Wipro will employ standardized tools and processes to provide cloud services that will strengthen National Grid's infrastructure services and support its strategic business objectives."

About Wipro Limited
Wipro Limited is a leading global information technology, consulting and business process services company. We harness the power of cognitive computing, hyper-automation, robotics, cloud, analytics and emerging technologies to help our clients adapt to the digital world and make them successful. A company recognized globally for its comprehensive portfolio of services, strong commitment to sustainability and good corporate citizenship, we have over 220,000 dedicated employees serving clients across six continents. Together, we discover ideas and connect the dots to build a better and a bold new future.


CLOUD APP MANAGEMENT

NetApp Expands Hybrid Cloud Solutions Portfolio to Unlock Best of Cloud

NetApp | October 21, 2021

Today at INSIGHT 2021, NetApp®, a global cloud-led, data-centric software company, announced new additions and enhanced capabilities across its hybrid cloud portfolio to help organizations modernize their IT infrastructures and accelerate digital transformation. Delivering new secure ways to consume and operate data services on-premises and in the cloud, NetApp hybrid cloud solutions make it simpler for enterprise customers to put their data to work — wherever and whenever they need it. As the only solutions provider with native integrations for the world's largest public clouds, NetApp's industry-leading ONTAP® software continues to serve as the foundation for hybrid cloud. With the latest release of ONTAP, NetApp is introducing enhanced protection against ransomware, expanded data management capabilities, and NVMe/TCP support for accelerated performance. The company is also announcing new digital wallet capabilities for NetApp Cloud Manager and enhanced data services for simplified administration across a hybrid cloud, more flexible consumption options to better control costs, as well as new Professional Services offerings to help customers unlock the full value of on-premises and hybrid cloud resources.

"The promised benefits of migrating to the cloud may be profound, but many IT departments are still working to overcome on-premises challenges, like managing the complexity and costs of moving data, protecting against ransomware, and ensuring reliable performance for critical applications. As the hybrid cloud specialist, NetApp can help enterprises move their digital transformation efforts forward to deliver business results faster and within budget—whether they are still developing a strategy or in the middle of executing large-scale migrations." Brad Anderson, Executive Vice President, Hybrid Cloud Group at NetApp

"IDC's research shows that approximately 70% of enterprise IT customers plan to modernize their storage infrastructures in the next two years to support next-generation workloads. But the key operational advantage will be in optimizing workload placement across traditional on-premises and cloud environments," said Eric Burgener, Research Vice President, Infrastructure Systems Group at IDC. "As an industry leader with years of innovation and expertise delivering hybrid cloud solutions, NetApp is uniquely positioned to help enterprises transition to hybrid cloud models to achieve the scalability and flexibility they need to deliver critical data services and workload capabilities that drive business value."

"Formula One racing has always been about finding the competitive edge, and with Aston Martin Cognizant's return to the F1™ grid this year, we're embracing an ambitious data-centric strategy to maximize our performance both on and off the track as we seek pole position," said Otmar Szafnauer, Chief Executive Officer and Team Principal at Aston Martin Cognizant Formula One Team.
"By partnering with NetApp to build our data fabric and standardize operations with its world-class hybrid cloud solutions, we're working to ensure that everything we do—from capturing real-time data on car and component performance to how we streamline factory and engineering operations—is focused on constant improvement and driving the team forward."

NetApp's latest portfolio innovations announced today include:

ONTAP Data Management Software Enhancements: The latest release of ONTAP enables enterprises to autonomously protect against ransomware attacks based on machine learning, with integrated preemptive detection and accelerated data recovery. The new release also delivers enterprise-grade performance for SAN and modern workloads with NVMe/TCP support, expanded object storage capabilities, and simplified management. In addition, this latest ONTAP release will power the upcoming NetApp AFF A900, the next-generation high-resiliency all-flash system for business-critical workloads.

Enhanced Data Services: With new digital wallet capabilities available in NetApp Cloud Manager, customers can benefit from greater mobility and more visibility into usage of data service licenses across a hybrid cloud, with prepayment of credits enabling streamlined deployment to avoid procurement hassles. Additional updates include enhancements to NetApp Cloud Backup and Cloud Data Sense services, simplified deployment of Cloud Volumes ONTAP with new customer-ready templates, fully embedded Active IQ, and deeper integrations with NetApp Cloud Insights and ONTAP software to support Kubernetes workloads.

More Flexible Consumption Options: NetApp Keystone Flex Subscription, an on-premises storage-as-a-service offering with native cloud integration, continues to gain momentum with customers. The offering is now supported on four continents, encompassing petabytes of capacity within just under one year of availability. NetApp is also introducing a new freemium service tier for Cloud Volumes ONTAP, providing customers with access to a fully featured, perpetual license to use ONTAP in the cloud for workloads needing less than 500 GB of storage. This consumption flexibility gives organizations the freedom to use enterprise-grade data services for small workloads, such as Kubernetes clusters, at no initial cost; an organization only needs to convert to a subscription when the workload matures and scales.

"As a leading IT consultancy specializing in cloud infrastructure and services, we are increasingly working with clients to reduce CAPEX spending by taking advantage of cloud-like consumption models for their on-premises environments," said Kent Christensen, Virtual Practice Director for cloud and data center transformation at Insight. "NetApp Keystone helps us provide a truly flexible consumption model for enterprises, serving as a platform to provide business-critical data services across the entire hybrid cloud data fabric, which will be a huge boon for our growing client base."

More Accessible Hybrid Cloud Expertise: NetApp is also introducing new Support and Professional Services offerings that make it even easier for customers to access experts for step-by-step guidance as they transition to hybrid cloud. With SupportEdge Advisor for Cloud, NetApp is extending its data center support model to cloud services with rapid, direct access to trained specialists. NetApp Flexible Professional Services (FlexPS) is also available for customers that require on-demand and ongoing support as they transition to a hybrid cloud. With this new subscription-based offering, organizations can get the professional help they need to design and build a data fabric strategy, implement solutions, and optimize their hybrid cloud with predictable costs and without procurement delays.
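The freemium-to-subscription decision described above amounts to a simple capacity threshold. The sketch below is purely illustrative: the function and constant names are hypothetical and are not part of any NetApp API; only the 500 GB figure comes from the announcement.

```python
# Illustrative sketch of the Cloud Volumes ONTAP freemium-tier decision.
# The 500 GB limit is from the announcement; everything else here
# (names, structure) is hypothetical, not a NetApp interface.

FREEMIUM_LIMIT_GB = 500

def requires_subscription(workload_gb: float) -> bool:
    """Return True once a workload outgrows the free tier (< 500 GB)."""
    return workload_gb >= FREEMIUM_LIMIT_GB

# A small Kubernetes cluster stays on the free tier...
print(requires_subscription(120))   # False
# ...and converts only when the workload matures and scales.
print(requires_subscription(2000))  # True
```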
About NetApp
NetApp is a global cloud-led, data-centric software company that empowers organizations to lead with data in the age of accelerated digital transformation. The company provides systems, software and cloud services that enable them to run their applications optimally from data center to cloud, whether they are developing in the cloud, moving to the cloud, or creating their own cloud-like experiences on premises. With solutions that perform across diverse environments, NetApp helps organizations build their own data fabric and securely deliver the right data, services and applications to the right people—anytime, anywhere.


CLOUD APP MANAGEMENT

AWS Announces General Availability of Amazon EC2 DL1 Instances

Amazon | October 27, 2021

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, announced the general availability of Amazon Elastic Compute Cloud (Amazon EC2) DL1 instances, a new instance type designed for training machine learning models. DL1 instances are powered by Gaudi accelerators from Habana Labs (an Intel company) and provide up to 40% better price performance for training machine learning models than the latest GPU-powered Amazon EC2 instances. With DL1 instances, customers can train their machine learning models faster and more cost-effectively for use cases like natural language processing, object detection and classification, fraud detection, recommendation and personalization engines, intelligent document processing, business forecasting, and more. DL1 instances are available on demand via a low-cost pay-as-you-go usage model with no upfront commitments.

Machine learning has become mainstream as customers have realized tangible business impact from deploying machine learning models at scale in the cloud. To use machine learning in their business applications, customers start by building and training a model to recognize patterns by learning from sample data, and then apply the model to new data to make predictions. For example, a machine learning model trained on large numbers of contact center transcripts can make predictions to provide real-time personalized assistance to customers through a conversational chatbot. To improve a model's prediction accuracy, data scientists and machine learning engineers are building increasingly larger and more complex models. To maintain prediction accuracy and high quality, these engineers need to tune and retrain their models frequently, which requires a considerable amount of high-performance compute resources and drives up infrastructure costs.
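The train-then-predict workflow described above can be sketched in a deliberately tiny, framework-free form. Real training on DL1 instances would use TensorFlow or PyTorch; the 1-nearest-neighbor "model" below, with made-up contact-center-style features, only illustrates the split between learning from labeled samples and applying the model to new data.

```python
# Minimal sketch of the ML workflow: train on labeled sample data,
# then apply the model to new data to make predictions.
import math

def train(samples):
    """'Training' here simply memorizes labeled examples: (features, label)."""
    return list(samples)

def predict(model, features):
    """Predict by returning the label of the closest memorized example."""
    return min(model, key=lambda s: math.dist(s[0], features))[1]

# Toy data: (avg call length in minutes, sentiment score) -> escalate?
model = train([((2.0, 0.9), "no"), ((9.0, 0.1), "yes"), ((8.0, 0.2), "yes")])
print(predict(model, (8.5, 0.15)))  # prints "yes" — nearest the escalation examples
```

The point of the sketch is the workflow shape, not the model: as the text notes, production models are far larger, which is what makes frequent retraining compute-intensive.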
These costs can be prohibitive for customers that need to retrain their models frequently to maintain high-accuracy predictions, and they pose an obstacle to customers that want to begin experimenting with machine learning. The new DL1 instances use Gaudi accelerators built specifically to accelerate machine learning model training by delivering higher compute efficiency at a lower cost than general-purpose GPUs. DL1 instances feature up to eight Gaudi accelerators, 256 GB of high-bandwidth memory, 768 GB of system memory, 2nd generation Amazon custom Intel Xeon Scalable (Cascade Lake) processors, 400 Gbps of networking throughput, and up to 4 TB of local NVMe storage. Together, these innovations translate to up to 40% better price performance than the latest GPU-powered Amazon EC2 instances for training common machine learning models.

Customers can quickly and easily get started with DL1 instances using the included Habana SynapseAI SDK, which is integrated with leading machine learning frameworks (e.g., TensorFlow and PyTorch), helping customers seamlessly migrate existing machine learning models currently running on GPU-based or CPU-based instances onto DL1 instances with minimal code changes. Developers and data scientists can also start with reference models optimized for Gaudi accelerators available in Habana's GitHub repository, which includes popular models for diverse applications, including image classification, object detection, natural language processing, and recommendation systems.

"The use of machine learning has skyrocketed. One of the challenges with training machine learning models, however, is that it is computationally intensive and can get expensive as customers refine and retrain their models. AWS already has the broadest choice of powerful compute for any machine learning project or application. The addition of DL1 instances featuring Gaudi accelerators provides the most cost-effective alternative to GPU-based instances in the cloud to date. Their optimal combination of price and performance makes it possible for customers to reduce the cost to train, train more models, and innovate faster," said David Brown, Vice President of Amazon EC2 at AWS.

Customers can launch DL1 instances using AWS Deep Learning AMIs, or using Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS) for containerized applications. For a more managed experience, customers can access DL1 instances through Amazon SageMaker, making it even easier and faster for developers and data scientists to build, train, and deploy machine learning models in the cloud and at the edge. DL1 instances benefit from the AWS Nitro System, a collection of building blocks that offload many traditional virtualization functions to dedicated hardware and software to deliver high performance, high availability, and high security while reducing virtualization overhead. DL1 instances are available for purchase as On-Demand Instances, with Savings Plans, as Reserved Instances, or as Spot Instances, and are currently available in the US East (N. Virginia) and US West (Oregon) AWS Regions.

Seagate Technology has been a global leader offering data storage and management solutions for over 40 years. Seagate's data science and machine learning engineers have built an advanced deep learning (DL) defect detection system and deployed it globally across the company's manufacturing facilities. In a recent proof-of-concept project, Habana Gaudi exceeded the performance targets for training one of the DL semantic segmentation models currently used in Seagate's production.
"We expect the significant price performance advantage of Amazon EC2 DL1 instances, powered by Habana Gaudi accelerators, could make them a compelling future addition to AWS compute clusters," said Darrell Louder, Senior Engineering Director of Operations, Technology and Advanced Analytics at Seagate. "As Habana Labs continues to evolve and enables broader coverage of operators, there is potential for expanding to additional enterprise use cases, and thereby harnessing additional cost savings."

Intel has created 3D Athlete Tracking technology that analyzes athlete-in-action video in real time to inform performance training processes and enhance audience experiences during competitions. "Training our models on Amazon EC2 DL1 instances, powered by Gaudi accelerators from Habana Labs, will enable us to accurately and reliably process thousands of videos and generate associated performance data, while lowering training cost," said Rick Echevarria, Vice President, Sales and Marketing Group, Intel. "With DL1 instances, we can now train at the speed and cost required to productively serve athletes, teams, and broadcasters of all levels across a variety of sports."

Riskfuel provides real-time valuations and risk sensitivities to companies managing financial portfolios, helping them increase trading accuracy and performance. "Two factors drew us to Amazon EC2 DL1 instances based on Habana Gaudi AI accelerators," said Ryan Ferguson, CEO of Riskfuel. "First, we want to make sure our banking and insurance clients can run Riskfuel models that take advantage of the newest hardware. We found migrating our models to DL1 instances to be simple and straightforward—really, it was just a matter of changing a few lines of code. Second, training costs are a big component of our spending, and the promise of up to 40% improvement in price performance offers a potentially substantial benefit to our bottom line."

Leidos is recognized as a top 10 health IT provider delivering a broad range of customizable, scalable solutions to hospitals and health systems, biomedical organizations, and every U.S. federal agency focused on health. "One of the numerous technologies we are enabling to advance healthcare today is the use of machine learning and deep learning for disease diagnosis based on medical imaging data. Our massive data sets require timely and efficient training to aid researchers seeking to solve some of the most urgent medical mysteries," said Chetan Paul, CTO Health and Human Services at Leidos. "Given Leidos' and its customers' need for quick, easy, and cost-effective training for deep learning models, we are excited to have begun this journey with Intel and AWS to use Amazon EC2 DL1 instances based on Habana Gaudi AI processors. Using DL1 instances, we expect an increase in model training speed and efficiency, with a subsequent reduction in the risk and cost of research and development."

Fractal is a global leader in artificial intelligence and analytics, powering decisions in Fortune 500 companies. "AI and deep learning are at the core of our healthcare imaging business, enabling customers to make better medical decisions. In order to improve accuracy, medical datasets are becoming larger and more complex, requiring more training and retraining of models, and driving the need for improved computing price performance," said Srikanth Velamakanni, Group CEO of Fractal.
"The new Amazon EC2 DL1 instances promise significantly lower cost training than GPU-based EC2 instances, which can help us contain costs and make AI decision-making more accessible to a broader array of customers."

About Amazon Web Services
For over 15 years, Amazon Web Services has been the world's most comprehensive and broadly adopted cloud offering. AWS has been continually expanding its services to support virtually any cloud workload, and it now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 81 Availability Zones (AZs) within 25 geographic regions, with announced plans for 24 more Availability Zones and eight more AWS Regions in Australia, India, Indonesia, Israel, New Zealand, Spain, Switzerland, and the United Arab Emirates. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs.

About Amazon
Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Amazon strives to be Earth's Most Customer-Centric Company, Earth's Best Employer, and Earth's Safest Place to Work. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Career Choice, Fire tablets, Fire TV, Amazon Echo, Alexa, Just Walk Out technology, Amazon Studios, and The Climate Pledge are some of the things pioneered by Amazon.
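As a concrete sketch of the self-managed launch path described in the announcement above (AWS Deep Learning AMIs on EC2), the snippet below builds the parameters for an EC2 `run_instances` call targeting the DL1 instance type. It assumes boto3 is installed and credentials are configured; the AMI ID is a placeholder, so look up a current Deep Learning AMI for your Region before launching, and note that the actual API call is left commented out.

```python
# Sketch of launching a DL1 instance via the EC2 API (boto3 assumed).
# The request is built as plain data so it can be inspected; the AMI ID
# and key name below are placeholders, not real resources.

def dl1_launch_request(ami_id: str, key_name: str) -> dict:
    """Build keyword arguments for EC2 run_instances targeting dl1.24xlarge."""
    return {
        "ImageId": ami_id,                # e.g. an AWS Deep Learning AMI
        "InstanceType": "dl1.24xlarge",   # 8 Gaudi accelerators per instance
        "MinCount": 1,
        "MaxCount": 1,
        "KeyName": key_name,
    }

params = dl1_launch_request("ami-0123456789abcdef0", "my-key")
# import boto3
# boto3.client("ec2", region_name="us-east-1").run_instances(**params)
print(params["InstanceType"])  # dl1.24xlarge
```

For containerized or fully managed workflows, the equivalent step would instead go through EKS/ECS node groups or a SageMaker training job, as the announcement notes.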

Read More

CLOUD APP DEVELOPMENT

Wipro Partners With National Grid to Drive Data Center Consolidation and Implement Next Generation Hybrid Cloud Architecture

Wipro | October 22, 2021

Wipro Limited a leading global information technology, consulting and business process services company has signed a multi-year global strategic IT and digital deal with London - headquartered National Grid, a leading multinational electric and gas utility provider to accelerate their digital innovation journey.As part of this engagement, Wipro through its Boundaryless Enterprise solutions will facilitate National Grid’s continued digital transformation, integration of its managed services and consolidation of multiple data centers across UK and US to next generation hosting services. These sustainable data centers will allow for enhanced program governance, as well as heightened consolidation and the migration of all server and application functions from traditional data centers. Wipro will also help with mainframe migration and transition to managed services, including the eventual implementation of a hybrid cloud solution for National Grid. Shannon Soland, Chief Technology Officer, National Grid said, “As a strategic partner, Wipro will help us accelerate our digital journey as we work to achieve next generation capabilities in infrastructure hosting services. Wipro’s expertise will be instrumental as we work to improve our operating model to align with our Net Zero carbon commitment.” “Our data center consolidation efforts will allow us to realize an over 60% reduction in our data center footprint as well as realize a 40% reduction in our data center CO2 emissions. 
Additionally this transformational program, in conjunction with Wipro, will position our IT capabilities to enable modernized SDDC techniques, technologies, and operating model to accelerate our own digital transformation as National Grid continues to build the future of energy.” Daniel Jablonski, Head of Cloud and Hosting Services, National Grid As part of the collaboration with National Grid, Wipro through its innovative solutions and expertise will deliver a flexible, scalable and resilient digital transformation journey for National Grid. Geoffrey Jue, Vice President - ENU Sector Head, Wipro Limited said, “National Grid is one of the world’s largest utility companies, and Wipro is excited to be named as a strategic partner. This new collaboration builds on the successful two-decade-old partnership between the two companies. Wipro will employ standardized tools and processes to provide cloud services that will strengthen National Grid’s infrastructure services, and support its strategic business objectives. About Wipro Limited Wipro Limited is a leading global information technology, consulting and business process services company. We harness the power of cognitive computing, hyper-automation, robotics, cloud, analytics and emerging technologies to help our clients adapt to the digital world and make them successful. A company recognized globally for its comprehensive portfolio of services, strong commitment to sustainability and good corporate citizenship, we have over 220,000 dedicated employees serving clients across six continents. Together, we discover ideas and connect the dots to build a better and a bold new future.



NetApp Expands Hybrid Cloud Solutions Portfolio to Unlock Best of Cloud

NetApp | October 21, 2021

Today at INSIGHT 2021, NetApp®, a global cloud-led, data-centric software company, announced new additions and enhanced capabilities across its hybrid cloud portfolio to help organizations modernize their IT infrastructures and accelerate digital transformation. Delivering new secure ways to consume and operate data services on-premises and in the cloud, NetApp hybrid cloud solutions make it simpler for enterprise customers to put their data to work, wherever and whenever they need it.

As the only solutions provider with native integrations for the world's largest public clouds, NetApp's industry-leading ONTAP® software continues to serve as the foundation for hybrid cloud. With the latest release of ONTAP, NetApp is introducing enhanced protection against ransomware, expanded data management capabilities, and NVMe/TCP support for accelerated performance. The company is also announcing new digital wallet capabilities for NetApp Cloud Manager, enhanced data services for simplified administration across a hybrid cloud, more flexible consumption options to better control costs, and new Professional Services offerings to help customers unlock the full value of on-premises and hybrid cloud resources.

"The promised benefits of migrating to the cloud may be profound, but many IT departments are still working to overcome on-premises challenges, like managing the complexity and costs of moving data, protecting against ransomware, and ensuring reliable performance for critical applications. As the hybrid cloud specialist, NetApp can help enterprises move their digital transformation efforts forward to deliver business results faster and within budget, whether they are still developing a strategy or in the middle of executing large-scale migrations." Brad Anderson, Executive Vice President, Hybrid Cloud Group at NetApp

"IDC's research shows that approximately 70% of enterprise IT customers plan to modernize their storage infrastructures in the next two years to support next-generation workloads. But the key operational advantage will be in optimizing workload placement across traditional on-premises and cloud environments," said Eric Burgener, Research Vice President, Infrastructure Systems Group at IDC. "As an industry leader with years of innovation and expertise delivering hybrid cloud solutions, NetApp is uniquely positioned to help enterprises transition to hybrid cloud models to achieve the scalability and flexibility they need to deliver critical data services and workload capabilities that drive business value."

"Formula One racing has always been about finding the competitive edge, and with Aston Martin Cognizant's return to the F1™ grid this year, we're embracing an ambitious data-centric strategy to maximize our performance both on and off the track as we seek pole position," said Otmar Szafnauer, Chief Executive Officer and Team Principal at Aston Martin Cognizant Formula One Team. "By partnering with NetApp to build our data fabric and standardize operations with its world-class hybrid cloud solutions, we're working to ensure that everything we do, from capturing real-time data on car and component performance to how we streamline factory and engineering operations, is focused on constant improvement and driving the team forward."

NetApp's latest portfolio innovations announced today include:

ONTAP Data Management Software Enhancements: The latest release of ONTAP enables enterprises to autonomously protect against ransomware attacks through machine-learning-based preemptive detection and accelerated data recovery. The new release also delivers enterprise-grade performance for SAN and modern workloads with NVMe/TCP support, expanded object storage capabilities, and simplified management. In addition, this latest ONTAP release will power the upcoming NetApp AFF A900, the next-generation high-resiliency all-flash system for business-critical workloads.

Enhanced Data Services: With new digital wallet capabilities available in NetApp Cloud Manager, customers gain greater mobility and more visibility into the usage of data service licenses across a hybrid cloud, with prepayment of credits enabling streamlined deployment and avoiding procurement hassles. Additional updates include enhancements to the NetApp Cloud Backup and Cloud Data Sense services, simplified deployment of Cloud Volumes ONTAP with new customer-ready templates, fully embedded Active IQ, and deeper integrations with NetApp Cloud Insights and ONTAP software to support Kubernetes workloads.

More Flexible Consumption Options: NetApp Keystone Flex Subscription, an on-premises storage-as-a-service offering with native cloud integration, continues to gain momentum with customers. The offering is now supported on four continents, encompassing petabytes of capacity within just under one year of availability. NetApp is also offering a new freemium service tier for Cloud Volumes ONTAP, providing customers with a fully featured, perpetual license to use ONTAP in the cloud for workloads needing less than 500GB of storage. This flexibility lets organizations use enterprise-grade data services for small workloads, such as Kubernetes clusters, at no initial cost; an organization only needs to convert to a subscription when the workload matures and scales.

"As a leading IT consultancy specializing in cloud infrastructure and services, our clients are increasingly working with us to reduce CAPEX spending by taking advantage of cloud-like consumption models for their on-premises environments," said Kent Christensen, Virtual Practice Director for cloud and data center transformation at Insight. "NetApp Keystone helps us provide a truly flexible consumption model for enterprises, serving as a platform to provide business-critical data services across the entire hybrid cloud data fabric, which will be a huge boon for our growing client base."

More Accessible Hybrid Cloud Expertise: NetApp is also introducing new Support and Professional Services offerings that make it even easier for customers to access experts for step-by-step guidance as they transition to hybrid cloud. With SupportEdge Advisor for Cloud, NetApp is extending its data center support model to cloud services with rapid, direct access to trained specialists. NetApp Flexible Professional Services (FlexPS) is also available for customers that require on-demand, ongoing support as they transition to a hybrid cloud. With this new subscription-based offering, organizations can get the professional help they need to design and build a data fabric strategy, implement solutions, and optimize their hybrid cloud, with predictable costs and without procurement delays.

About NetApp
NetApp is a global cloud-led, data-centric software company that empowers organizations to lead with data in the age of accelerated digital transformation. The company provides systems, software and cloud services that enable organizations to run their applications optimally from data center to cloud, whether they are developing in the cloud, moving to the cloud, or creating their own cloud-like experiences on premises. With solutions that perform across diverse environments, NetApp helps organizations build their own data fabric and securely deliver the right data, services and applications to the right people, anytime, anywhere.

