How to maximise business objectives using hybrid cloud architecture

April 2, 2020

The cloud has had a major impact on broadcast workflows, with media companies moving their infrastructure to the cloud or private data centres in a bid to simplify workflows and reduce costs. Many broadcasters are now looking to a hybrid cloud approach, which leverages a mixed computing, storage, and services environment combining on-premises infrastructure (and/or private cloud services) with a public cloud such as Amazon Web Services (AWS) or Microsoft Azure, with orchestration between these platforms.


OTHER ARTICLES
CLOUD SECURITY

Vulnerable AWS Lambda Function – Initial Access in Cloud Attacks

Article | July 13, 2022

Our security research team walks through a real attack scenario, from both a black box and a white box perspective, showing how a vulnerable AWS Lambda function can be used by attackers as initial access into your cloud environment. Finally, we show the best practices to mitigate this vector of attack.

Serverless is becoming mainstream in business applications as a way to achieve scalability, performance, and cost efficiency without managing the underlying infrastructure. These workloads can scale to thousands of concurrent requests per second, and AWS Lambda is one of the most widely used serverless services in cloud environments. One essential element of taking an application to production is security: an error in code or a lack of user input validation may cause the function to be compromised and could let attackers gain access to your cloud account.

About AWS Lambda function

AWS Lambda is an event-driven, serverless compute service that executes code written in a range of programming languages and automates actions inside a cloud environment. One of the main benefits of this approach is that Lambda runs our code on highly available compute infrastructure managed directly by AWS. The cloud provider takes care of all the administrative activities related to the infrastructure underneath, including server and operating system maintenance, automatic scaling, patching, and logging. The user simply implements their code, and the function is ready to go.

Security, a shared pain

From a security perspective, because the service is managed by the cloud provider but still configured by the user, the security concerns and risks are likewise shared between the two parties. Since the user has no control over the infrastructure behind a specific Lambda function, the security risks of that underlying infrastructure are managed directly by the cloud provider. Using AWS IAM, the user can restrict access to, and the permitted actions of, the Lambda function and its components. Misconfigured permissions on IAM roles or on objects used by the Lambda function can cause serious damage, leading attackers inside the cloud environment. Even more importantly, the code implemented in the Lambda function is under the user's control and, as we will see in the next sections, if there are security holes in that code, the function can be used to access the cloud account and move laterally.

Attack Scenarios

We will walk through two attack scenarios using two different testing approaches: black box and white box testing, two of the main approaches used in penetration testing to assess the security posture of an infrastructure, application, or function. Looking at the Lambda function from different perspectives helps build a better overall picture of its security posture, along with a better understanding of the possible attacks and the related risks.

Black box vs white box

In black box testing, whoever is attacking the environment has no information about the environment itself or the internal workings of the software system. The attacker has to make assumptions about the logic behind a specific feature and keep testing those assumptions to find a way in. For our scenario, the black box attacker has no access to the cloud environment and no internal information about the functions and roles available in the account.
In white box testing, the attacker already has internal information which can be used during the attack to achieve their goals. In this case, the attacker has all the information needed to find possible vulnerabilities and security issues; for this reason, white box testing is considered the most exhaustive approach. In our scenario, the attacker has read-only initial access to the cloud environment and can use it to assess what is already deployed and to better target the attack.

In this attack scenario, the attacker finds a misconfigured S3 bucket open to the public, containing various files owned by the company. The attacker is able to upload files into the bucket and check each file's configuration once uploaded. A Lambda function is used to calculate the tags for each uploaded file, although the attacker knows nothing about the code implemented in it. We can be fairly confident there is an AWS Lambda function behind those tag values: the function appears to be triggered whenever a new object is created in the bucket, and the two tags, Path and Size, seem to be calculated dynamically for each file, perhaps by executing OS commands to retrieve the information. We can assume the file name is used both to look up the file in the OS and to calculate the file size. In other words, the file name may be user input that ends up inside the OS command that produces the tags. Missing user input validation may allow an attacker to submit unwanted input or execute arbitrary commands on the machine. In this case, we can try to inject other commands into the file name to achieve remote code execution. Concatenating commands with a semicolon is a common way to append arbitrary commands to user input, so that the code executes them if the input isn't properly sanitized.
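The original article does not reproduce the function's source, but a minimal sketch of what such a vulnerable tagging handler might look like, together with a safer variant, is shown below. The command strings, the /tmp path, and the helper names are assumptions for illustration, not the actual code behind the scenario:

```python
import re
import subprocess

def lambda_handler(event, context):
    # The S3 "object created" notification carries the key (file name)
    # chosen by whoever uploaded the object.
    key = event["Records"][0]["s3"]["object"]["key"]

    # VULNERABLE: the attacker-controlled key is interpolated into a shell
    # command. A key like "report.txt; env" makes the shell run "env" after
    # the lookup, leaking the function's environment, which includes the
    # temporary AWS credentials of the Lambda's execution role.
    path = subprocess.check_output(f"find / -name {key}", shell=True, text=True)
    size = subprocess.check_output(f"stat -c %s /tmp/{key}", shell=True, text=True)
    return {"Path": path.strip(), "Size": size.strip()}

# Safer variant: validate the key against an allow-list and pass arguments
# as a list, so no shell ever parses the user input.
SAFE_KEY = re.compile(r"^[\w.\-]+$")

def safe_size(key: str) -> str:
    if not SAFE_KEY.match(key):
        raise ValueError("rejected suspicious object key")
    out = subprocess.run(["stat", "-c", "%s", f"/tmp/{key}"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()
```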
Mitigation

We have seen the attack scenario from the black box and white box perspectives, but what can we do to mitigate it? The scenario involved different AWS components, such as S3 buckets and AWS Lambda, in which some security aspects had been neglected. To successfully mitigate this scenario, we can act at different levels and on different features. In particular, we could:

- Disable public access for the S3 bucket, so that it is accessible only from inside the account and only to users who are authenticated into it.
- Check the code used inside the Lambda function, making sure it contains no security bugs and that all user inputs are correctly sanitized, following the guidelines for writing secure code.
- Apply the principle of least privilege to all the AWS IAM roles attached to cloud resources, to avoid unwanted actions or possible privilege escalation paths inside the account.

Let's look in detail at how we can enforce these mitigations.

Disable the public access for the S3 bucket

An S3 bucket is one of the key storage components in AWS, and S3 buckets are frequent targets for attackers who want to break into cloud accounts. It is critical to keep S3 buckets as secure as possible, applying all the available security settings and preventing unwanted access to our data or files. In this specific scenario, the bucket was publicly open, and unauthorized users were able to read and write objects in it. To avoid this behavior, we need to make sure the bucket is reachable only privately, applying security settings that restrict access.
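The article's screenshots of those settings are not reproduced here. As a sketch, the corresponding control in AWS is the S3 Block Public Access feature, which can be enforced per bucket, for example with boto3 (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Enable all four S3 Block Public Access settings on the bucket, so that
# public ACLs and public bucket policies are both ignored and rejected.
s3.put_public_access_block(
    Bucket="example-tagging-bucket",  # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```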

CLOUD SECURITY

IBM's Cloudburst: A Credible Step Forward in the Cloud Computing Arena

Article | July 8, 2022

IBM CloudBurst is a ready-to-use, self-contained service delivery platform that can be deployed quickly and efficiently in a data center. It enables the data center to create service platforms for a wide range of workload types with a high degree of integration, flexibility, and resource optimization, resulting in a better request-driven user experience while also reducing costs and expediting time to market. IBM CloudBurst also includes the GTS installation service, making it a comprehensive bundle of hardware, software, and services that is ready to run in your environment right away.

To integrate cloud computing quickly, IBM CloudBurst combines the necessary hardware, software, and service components. As a single solution, it simplifies the complexities of establishing a cloud computing architecture, de-mystifying the cloud computing model so that businesses can grasp the benefits and financial potential of a dynamic infrastructure more rapidly. As a cloud computing quickstart, IBM CloudBurst also allows businesses to demonstrate the benefits of the delivery model in a specific area of their data center or for a single internal project.

Building a Dynamic Infrastructure With IBM CloudBurst

IBM's Service Delivery Manager solution for x86 and Power systems is based on a pre-integrated, software-only stack. It is installed as a set of virtual images that automate the deployment of IT services, and it enables resource monitoring, cost management, and service provisioning in the cloud. Similarly, depending on your company's needs, you can choose how much capacity to employ by adding cloud bursting to your LSF cluster. When time is money, the cloud is ready; when demand is low, the meter stops and the cloud waits.

Computing Heavy Workloads With Resource Planning

It is challenging to strike a delicate balance between the cost of computing resources and the cost of delayed decisions. With the introduction of cloud bursting, however, a new level of flexibility has become available to break the impasse. When space in your data center is limited, you can now extend your existing IBM Spectrum LSF cluster to the IBM Cloud, where you can access almost unlimited resources and pay only for what you use. "Automating IT resources to support new applications is critical because at most companies, a business user typically must wait weeks to get access to new IT resources due to the manual processes required to set up resources," said Lauren States, vice president of Tivoli Cloud Computing for IBM. This automation not only simplifies the initial creation of a proof-of-concept cluster but also provides the basic toolset for the rapid provisioning and takedown of resources that defines cloud bursting.

Conclusion

One of the many advantages of the cloud is that you are never tied to a specific piece of hardware. If you choose a storage cluster and later require more capacity or performance, you can always rebuild it with more resources. Today's businesses can pick from a variety of storage technologies, which is why it is critical to understand the different alternatives, their usefulness, and the appropriate use cases for the various storage mechanisms. To meet today's modern business needs, IBM offers a variety of storage options.

CLOUD SECURITY

Elastic releases security solution for the cloud

Article | August 4, 2022

Elastic announced the launch of Elastic Security for Cloud, extending the capabilities of the existing Elastic Security offering, which included SIEM and endpoint protection, to incorporate cloud risk and posture management and cloud workload protection. The new solution allows enterprises to manage their security posture for cloud-native and hybrid environments with infrastructure detection and response (IDR) and a machine learning offering that can detect known and unknown threats in cloud environments. This approach means that enterprises can detect and respond to malicious activity in the cloud as soon as possible, minimizing the risk and damage caused by intrusions.

Addressing cloud complexity

The release comes as modern enterprises struggle to secure cloud environments. Research shows that not only are 76% of organizations using two or more cloud providers, but also that 35% of organizations have more than 50% of their workloads in the cloud. For many organizations, then, protecting the cloud attack surface is now vital to maintaining operational stability. In other words, the complexity of cloud deployments has created a need for solutions with cloud detection and response capabilities, so that enterprises can respond to threat actors who are targeting this new attack surface.

"The world has rapidly transitioned to the cloud and chosen operations over security. Cloud infrastructure is stood up and torn down at a blazing rate and many different teams are deploying these cloud instances. Just answering the question, 'Am I good?' is an increasingly complex question to answer for CIO/CISOs," said Mike Nichols, vice president of product for Elastic Security. Elastic's answer to monitoring these environments is a cloud posture management and IDR solution that can improve security teams' visibility over external threats.

The cloud security market

Elastic Security competes in the global cloud security market, which researchers valued at $40.8 billion in 2021 and anticipate will grow to $77.5 billion by 2026 as enterprises attempt to keep up with the increasing sophistication of cyberattacks and the rise of bring your own device (BYOD) and choose your own device (CYOD) policies. In the realm of cloud security, one of Elastic's main competitors is Splunk, a cloud and observability monitoring platform that can monitor public clouds, apps, services, on-premise data centers, and edge services. Last year, Splunk announced it had received a $1 billion investment from Silver Lake. Another competitor in the market is Wazuh, an XDR and SIEM tool designed for protecting endpoints and cloud workloads with vulnerability detection, log data analysis, workload protection, and container security. Wazuh currently has more than 10 million downloads per year and is used by a range of companies including Verifone, Walgreens, Rappi, Grubhub, Intuit, and more. One of Elastic Security's key differentiators, though, is that the solution is built on the Elastic Search Platform, with analytics, SIEM, endpoint protection, XDR, and cloud security all offered as part of one single offering.

CLOUD APP DEVELOPMENT

Database Management in the Cloud Computing Era

Article | May 20, 2022

A cloud computing database is, ideally, a service that is built, deployed, and delivered via a cloud platform. A cloud platform-as-a-service (PaaS) delivery model allows organizations, end users, and applications to store, manage, and retrieve data using the cloud. Seen from a structural and design perspective, a cloud database is not very different from one that runs on an organization's own on-premises servers. However, ever since big data entered the space, database management has become a little more complex: in addition to all the traditional, structured data, we also have semi-structured and unstructured data coming in from almost all directions.

In recent times there has been significant adoption of cloud platforms, infrastructures, and services, and the blend of cloud technology with database services has seen growing demand on the management side. Cloud databases, also widely termed Database-as-a-Service (DBaaS), offer organizations various added options to choose from. With the current rate of adoption, experts strongly expect that DBaaS, just like any other "as-a-service", will become the standard solution for all highly sensitive and mission-critical data.

How Can Cloud Database Management Help Your Business?

Every organization has a constant need to manage its data as efficiently as possible. A cloud database effectively fulfills all of an organization's needs with respect to data, from keeping information secure, accurate, and consistent to resource utilization and optimal performance. Cloud data management is constantly changing the way organizations think about data. The cloud helps bring the required versatility, security, and professional data management assistance. For any business to survive and succeed, it should ensure that its data is healthy, so that everyone in the organization has access to the data they need, when they need it.

"Line-of-business leaders everywhere are bypassing IT departments to get applications from the cloud (also known as software as a service, or SaaS) and pay for them similar to a magazine subscription. And when the service is no longer required, they can cancel that subscription with no equipment left unused in the corner." - Daryl Plummer, Gartner analyst

Effective Strategies for Database Management in Cloud Computing

Using a cloud-based database makes it easy for your database to grow along with your needs and requirements, in addition to scaling up or down on demand to accommodate peak-workload phases. Ideally, before procuring a cloud data management system, it is essential to have a solid strategy that fits your organization's ecosystem and, at the same time, helps you get the most out of the system you select.

Exploring the Best Practices for DBM in Cloud Computing

To effectively develop, monitor, and manage database infrastructure, organizations can draw on various methods, including choosing between a ready-made database management system and a solution tailored to their requirements. Keeping all these aspects in mind, it is also essential that organizations adopt established best practices to ensure optimum results. Some of the best practices for database management in cloud computing are mentioned below, followed by a brief sketch of the automation point:

- Before moving to the cloud, build a robust data management architecture.
- Give cloud data integration requirements first priority.
- Govern data comprehensively, regardless of its platform or location.
- Use encryption and VPNs to protect data in transit.
- Automate database management tasks to keep track of them.
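As an illustration of the automation point, a minimal sketch of a scheduled management task, here a manual snapshot of an Amazon RDS database taken with boto3, might look like this. The instance and snapshot names are placeholders, and the same idea applies to any provider's database service:

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

# Stamp the snapshot name so scheduled runs never collide.
stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H-%M")

rds.create_db_snapshot(
    DBInstanceIdentifier="example-orders-db",           # placeholder instance
    DBSnapshotIdentifier=f"example-orders-db-{stamp}",  # unique snapshot name
)
```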
Why Is Cloud DBMS the C-Suite's Pick?

While it is more than evident that data management is one of the most crucial components of any organization today, it is also squarely in the value-addition territory of an organization's top executives. Every industry relies heavily on data and its management, and with the ongoing shift towards cloud services, many organizations and their c-suites are now evaluating what cloud services bring to the table, especially on the data front. Once the core pain points are assessed and the c-suite sees how a cloud database management system would make the organization more efficient, there isn't any other obvious choice.

Microsoft Consulting Services, along with Tata Consultancy Services (TCS), partnered with Walgreens to create the Information, Data, and Insights (IDI) platform powered by Azure. When patients or medical providers initiate the prescription fulfilment process, Azure Databricks and Azure Synapse Analytics move hundreds of related data points through the IDI. Pharmacists can access information through a centralized dashboard interface and request data visualizations. Pharmacists gain real-time insights, and the system can scale as needed to meet current demand: during peak times, the platform can handle more than 40,000 transactions per second. Compared to Walgreens' previous solution, the private cloud-based architecture saves an enormous amount of time on every transaction; prescription data that once took about 48 hours to reach its data warehouse can now be handled by Walgreens within minutes.

Conclusion

With data and the cloud being the next big things, how can an amalgamation of both be wrong? That said, everyone will have their own share of concerns and questions. But with what is being built and the functions being worked on, this space is bound to bring in numerous opportunities. So, as the world slowly makes this shift, it's time you reconsidered, if you haven't already, before you get overwhelmed at the helm of data!

FAQ

What Are the Best Features Available in Cloud Database Management?

While there are numerous features one can leverage using cloud database management, the top five are: elasticity, scalability, high availability, easy data reduction, and redundancy.

Which Is the Most Popular Cloud Database?

One of the most popular and widely used cloud databases is the Microsoft Azure cloud database, which provides services in computing, networking, databases, analytics, artificial intelligence, and the Internet of Things.

In Cloud Computing, What Is Database Management?

Cloud data management is a technique for managing data across cloud platforms, either in combination with or instead of on-premises storage. Disaster recovery, backup, and long-term archiving can benefit from using the cloud as a data storage tier. In addition, resources can be acquired as needed using cloud data management.



Related News

CLOUD APP MANAGEMENT

AWS Announces General Availability of Amazon EC2 DL1 Instances

Amazon | October 27, 2021

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, announced general availability of Amazon Elastic Compute Cloud (Amazon EC2) DL1 instances, a new instance type designed for training machine learning models. DL1 instances are powered by Gaudi accelerators from Habana Labs (an Intel company) and provide up to 40% better price performance for training machine learning models than the latest GPU-powered Amazon EC2 instances. With DL1 instances, customers can train their machine learning models faster and more cost-effectively for use cases like natural language processing, object detection and classification, fraud detection, recommendation and personalization engines, intelligent document processing, business forecasting, and more. DL1 instances are available on demand via a low-cost pay-as-you-go usage model with no upfront commitments. To get started with DL1 instances, see the launch options described below.

Machine learning has become mainstream as customers have realized tangible business impact from deploying machine learning models at scale in the cloud. To use machine learning in their business applications, customers start by building and training a model to recognize patterns by learning from sample data, and then apply the model to new data to make predictions. For example, a machine learning model trained on large numbers of contact center transcripts can make predictions to provide real-time personalized assistance to customers through a conversational chatbot. To improve a model's prediction accuracy, data scientists and machine learning engineers are building increasingly larger and more complex models, and to maintain prediction accuracy and quality they need to tune and retrain those models frequently. This requires a considerable amount of high-performance compute resources, resulting in increased infrastructure costs. These costs can be prohibitive for customers who need to retrain their models frequently to maintain high-accuracy predictions, and they pose an obstacle to customers who want to begin experimenting with machine learning.

New DL1 instances use Gaudi accelerators built specifically to accelerate machine learning model training by delivering higher compute efficiency at a lower cost than general-purpose GPUs. DL1 instances feature up to eight Gaudi accelerators, 256 GB of high-bandwidth memory, 768 GB of system memory, 2nd generation Amazon custom Intel Xeon Scalable (Cascade Lake) processors, 400 Gbps of networking throughput, and up to 4 TB of local NVMe storage. Together, these innovations translate to up to 40% better price performance than the latest GPU-powered Amazon EC2 instances for training common machine learning models.

Customers can quickly and easily get started with DL1 instances using the included Habana SynapseAI SDK, which is integrated with leading machine learning frameworks (e.g. TensorFlow and PyTorch), helping customers seamlessly migrate their existing machine learning models currently running on GPU-based or CPU-based instances onto DL1 instances, with minimal code changes. Developers and data scientists can also start with reference models optimized for Gaudi accelerators, available in Habana's GitHub repository, which includes popular models for diverse applications such as image classification, object detection, natural language processing, and recommendation systems.
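As an illustration of the "minimal code changes" claim, a hedged sketch of the PyTorch migration pattern Habana documents for Gaudi, moving work to the "hpu" device where "cuda" would normally go, looks roughly like this (details vary by SynapseAI version):

```python
import torch
import habana_frameworks.torch.core as htcore  # Habana's PyTorch bridge

device = torch.device("hpu")  # Gaudi accelerator, in place of "cuda"

# A tiny training step; a real model would be migrated the same way.
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)

loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
htcore.mark_step()  # flush the lazily accumulated graph to the device
```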
“The use of machine learning has skyrocketed. One of the challenges with training machine learning models, however, is that it is computationally intensive and can get expensive as customers refine and retrain their models. AWS already has the broadest choice of powerful compute for any machine learning project or application. The addition of DL1 instances featuring Gaudi accelerators provides the most cost-effective alternative to GPU-based instances in the cloud to date. Their optimal combination of price and performance makes it possible for customers to reduce the cost to train, train more models, and innovate faster,” said David Brown, Vice President of Amazon EC2 at AWS.

Customers can launch DL1 instances using AWS Deep Learning AMIs, or using Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS) for containerized applications. For a more managed experience, customers can access DL1 instances through Amazon SageMaker, making it even easier and faster for developers and data scientists to build, train, and deploy machine learning models in the cloud and at the edge. DL1 instances benefit from the AWS Nitro System, a collection of building blocks that offload many of the traditional virtualization functions to dedicated hardware and software to deliver high performance, high availability, and high security while also reducing virtualization overhead. DL1 instances are available for purchase as On-Demand Instances, with Savings Plans, as Reserved Instances, or as Spot Instances. DL1 instances are currently available in the US East (N. Virginia) and US West (Oregon) AWS Regions.
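As a hedged sketch of the most direct route, a DL1 instance can be launched like any other EC2 instance type, for example with boto3 (the AMI ID below is a placeholder for a Deep Learning AMI in your region):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # DL1 regions: us-east-1, us-west-2

# Launch a single dl1.24xlarge (eight Gaudi accelerators) from a Deep Learning AMI.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: a Habana Deep Learning AMI
    InstanceType="dl1.24xlarge",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```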
Seagate Technology has been a global leader in data storage and management solutions for over 40 years. Seagate's data science and machine learning engineers have built an advanced deep learning (DL) defect detection system and deployed it globally across the company's manufacturing facilities. In a recent proof-of-concept project, Habana Gaudi exceeded the performance targets for training one of the DL semantic segmentation models currently used in Seagate's production. “We expect the significant price performance advantage of Amazon EC2 DL1 instances, powered by Habana Gaudi accelerators, could make a compelling future addition to AWS compute clusters,” said Darrell Louder, Senior Engineering Director of Operations, Technology and Advanced Analytics at Seagate. “As Habana Labs continues to evolve and enables broader coverage of operators, there is potential for expanding to additional enterprise use cases, and thereby harnessing additional cost savings.”

Intel has created 3D Athlete Tracking technology that analyzes athlete-in-action video in real time to inform performance training processes and enhance audience experiences during competitions. “Training our models on Amazon EC2 DL1 instances, powered by Gaudi accelerators from Habana Labs, will enable us to accurately and reliably process thousands of videos and generate associated performance data, while lowering training cost,” said Rick Echevarria, Vice President, Sales and Marketing Group, Intel. “With DL1 instances, we can now train at the speed and cost required to productively serve athletes, teams, and broadcasters of all levels across a variety of sports.”

Riskfuel provides real-time valuations and risk sensitivities to companies managing financial portfolios, helping them increase trading accuracy and performance. “Two factors drew us to Amazon EC2 DL1 instances based on Habana Gaudi AI accelerators,” said Ryan Ferguson, CEO of Riskfuel. “First, we want to make sure our banking and insurance clients can run Riskfuel models that take advantage of the newest hardware. We found migrating our models to DL1 instances to be simple and straightforward—really, it was just a matter of changing a few lines of code. Second, training costs are a big component of our spending, and the promise of up to 40% improvement in price performance offers potentially substantial benefit to our bottom line.”

Leidos is recognized as a top 10 health IT provider delivering a broad range of customizable, scalable solutions to hospitals and health systems, biomedical organizations, and every U.S. federal agency focused on health. “One of the numerous technologies we are enabling to advance healthcare today is the use of machine learning and deep learning for disease diagnosis based on medical imaging data. Our massive data sets require timely and efficient training to aid researchers seeking to solve some of the most urgent medical mysteries,” said Chetan Paul, CTO Health and Human Services at Leidos. “Given Leidos’ and its customers’ need for quick, easy, and cost-effective training for deep learning models, we are excited to have begun this journey with Intel and AWS to use Amazon EC2 DL1 instances based on Habana Gaudi AI processors. Using DL1 instances, we expect an increase in model training speed and efficiency, with a subsequent reduction in risk and cost of research and development.”

Fractal is a global leader in artificial intelligence and analytics, powering decisions in Fortune 500 companies. “AI and deep learning are at the core of our healthcare imaging business, enabling customers to make better medical decisions. In order to improve accuracy, medical datasets are becoming larger and more complex, requiring more training and retraining of models, and driving the need for improved computing price performance,” said Srikanth Velamakanni, Group CEO of Fractal. “The new Amazon EC2 DL1 instances promise significantly lower cost training than GPU-based EC2 instances, which can help us contain costs and make AI decision-making more accessible to a broader array of customers.”

About Amazon Web Services

For over 15 years, Amazon Web Services has been the world's most comprehensive and broadly adopted cloud offering. AWS has been continually expanding its services to support virtually any cloud workload, and it now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 81 Availability Zones (AZs) within 25 geographic regions, with announced plans for 24 more Availability Zones and eight more AWS Regions in Australia, India, Indonesia, Israel, New Zealand, Spain, Switzerland, and the United Arab Emirates. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, trust AWS to power their infrastructure, become more agile, and lower costs.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Amazon strives to be Earth's Most Customer-Centric Company, Earth's Best Employer, and Earth's Safest Place to Work. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Career Choice, Fire tablets, Fire TV, Amazon Echo, Alexa, Just Walk Out technology, Amazon Studios, and The Climate Pledge are some of the things pioneered by Amazon.


CLOUD APP DEVELOPMENT

Wipro Partners With National Grid to Drive Data Center Consolidation and Implement Next Generation Hybrid Cloud Architecture

Wipro | October 22, 2021

Wipro Limited, a leading global information technology, consulting, and business process services company, has signed a multi-year global strategic IT and digital deal with London-headquartered National Grid, a leading multinational electric and gas utility provider, to accelerate its digital innovation journey. As part of this engagement, Wipro, through its Boundaryless Enterprise solutions, will facilitate National Grid's continued digital transformation, the integration of its managed services, and the consolidation of multiple data centers across the UK and US into next-generation hosting services. These sustainable data centers will allow for enhanced program governance, as well as heightened consolidation and the migration of all server and application functions from traditional data centers. Wipro will also help with mainframe migration and the transition to managed services, including the eventual implementation of a hybrid cloud solution for National Grid.

Shannon Soland, Chief Technology Officer, National Grid, said, “As a strategic partner, Wipro will help us accelerate our digital journey as we work to achieve next generation capabilities in infrastructure hosting services. Wipro’s expertise will be instrumental as we work to improve our operating model to align with our Net Zero carbon commitment.”

“Our data center consolidation efforts will allow us to realize an over 60% reduction in our data center footprint as well as a 40% reduction in our data center CO2 emissions. Additionally, this transformational program, in conjunction with Wipro, will position our IT capabilities to enable modernized SDDC techniques, technologies, and operating models to accelerate our own digital transformation as National Grid continues to build the future of energy,” said Daniel Jablonski, Head of Cloud and Hosting Services, National Grid.

As part of the collaboration with National Grid, Wipro, through its innovative solutions and expertise, will deliver a flexible, scalable, and resilient digital transformation journey for National Grid. Geoffrey Jue, Vice President - ENU Sector Head, Wipro Limited, said, “National Grid is one of the world’s largest utility companies, and Wipro is excited to be named as a strategic partner. This new collaboration builds on the successful two-decade-old partnership between the two companies. Wipro will employ standardized tools and processes to provide cloud services that will strengthen National Grid’s infrastructure services and support its strategic business objectives.”

About Wipro Limited

Wipro Limited is a leading global information technology, consulting, and business process services company. We harness the power of cognitive computing, hyper-automation, robotics, cloud, analytics, and emerging technologies to help our clients adapt to the digital world and make them successful. A company recognized globally for its comprehensive portfolio of services, strong commitment to sustainability, and good corporate citizenship, we have over 220,000 dedicated employees serving clients across six continents. Together, we discover ideas and connect the dots to build a better and a bold new future.


CLOUD APP MANAGEMENT

NetApp Expands Hybrid Cloud Solutions Portfolio to Unlock Best of Cloud

NetApp | October 21, 2021

Today at INSIGHT 2021, NetApp®, a global cloud-led, data-centric software company, announced new additions and enhanced capabilities across its hybrid cloud portfolio to help organizations modernize their IT infrastructures and accelerate digital transformation. Delivering new secure ways to consume and operate data services on-premises and in the cloud, NetApp hybrid cloud solutions make it simpler for enterprise customers to put their data to work, wherever and whenever they need it.

As the only solutions provider with native integrations for the world's largest public clouds, NetApp's industry-leading ONTAP® software continues to serve as the foundation for hybrid cloud. With the latest release of ONTAP, NetApp is introducing enhanced protection against ransomware, expanded data management capabilities, and NVMe/TCP support for accelerated performance. The company is also announcing new digital wallet capabilities for NetApp Cloud Manager and enhanced data services for simplified administration across a hybrid cloud, more flexible consumption options to better control costs, and new Professional Services offerings to help customers unlock the full value of on-premises and hybrid cloud resources.

“The promised benefits of migrating to the cloud may be profound, but many IT departments are still working to overcome on-premises challenges, like managing the complexity and costs of moving data, protecting against ransomware, and ensuring reliable performance for critical applications. As the hybrid cloud specialist, NetApp can help enterprises move their digital transformation efforts forward to deliver business results faster and within budget—whether they are still developing a strategy or in the middle of executing large-scale migrations,” said Brad Anderson, Executive Vice President, Hybrid Cloud Group at NetApp.

“IDC’s research shows that approximately 70% of enterprise IT customers plan to modernize their storage infrastructures in the next two years to support next-generation workloads. But the key operational advantage will be in optimizing workload placement across traditional on-premises and cloud environments,” said Eric Burgener, Research Vice President, Infrastructure Systems Group at IDC. “As an industry leader with years of innovation and expertise delivering hybrid cloud solutions, NetApp is uniquely positioned to help enterprises transition to hybrid cloud models to achieve the scalability and flexibility they need to deliver critical data services and workload capabilities that drive business value.”

“Formula One racing has always been about finding the competitive edge, and with Aston Martin Cognizant’s return to the F1™ grid this year, we’re embracing an ambitious data-centric strategy to maximize our performance both on and off the track as we seek pole position,” said Otmar Szafnauer, Chief Executive Officer and Team Principal at Aston Martin Cognizant Formula One Team. “By partnering with NetApp to build our data fabric and standardize operations with its world-class hybrid cloud solutions, we’re working to ensure that everything we do—from capturing real-time data on car and component performance to how we streamline factory and engineering operations—is focused on constant improvement and driving the team forward.”

NetApp's latest portfolio innovations announced today include:

ONTAP Data Management Software Enhancements: The latest release of ONTAP enables enterprises to autonomously protect against ransomware attacks based on machine learning, with integrated preemptive detection and accelerated data recovery. The new release also delivers enterprise-grade performance for SAN and modern workloads with NVMe/TCP support, expanded object storage capabilities, and simplified management. In addition, this latest ONTAP release will power the upcoming NetApp AFF A900, the next-generation high-resiliency all-flash system for business-critical workloads.

Enhanced Data Services: With new digital wallet capabilities available in NetApp Cloud Manager, customers can benefit from greater mobility and more visibility into usage of data service licenses across a hybrid cloud, with prepayment of credits enabling streamlined deployment and avoiding procurement hassles. Additional updates include enhancements to NetApp Cloud Backup and Cloud Data Sense services, simplified deployment of Cloud Volumes ONTAP with new customer-ready templates, fully embedded Active IQ, and deeper integrations with NetApp Cloud Insights and ONTAP software to support Kubernetes workloads.

More Flexible Consumption Options: NetApp Keystone Flex Subscription, an on-premises storage-as-a-service offering with native cloud integration, continues to gain momentum with customers. The offering is now supported on four continents, encompassing petabytes of capacity within just under one year of availability. NetApp is also offering a new freemium service tier for Cloud Volumes ONTAP, providing customers with a fully featured, perpetual license to use ONTAP in the cloud for workloads needing less than 500 GB of storage. This consumption flexibility gives organizations the freedom to use enterprise-grade data services for small workloads such as Kubernetes clusters at no initial cost; an organization only needs to convert to a subscription when the workload matures and scales. “As a leading IT consultancy specializing in cloud infrastructure and services, our clients are increasingly working with us to reduce CAPEX spending by taking advantage of cloud-like consumption models for their on-premises environments,” said Kent Christensen, Virtual Practice Director for cloud and data center transformation at Insight. “NetApp Keystone helps us provide a truly flexible consumption model for enterprises, serving as a platform to provide business-critical data services across the entire hybrid cloud data fabric, which will be a huge boon for our growing client base."

More Accessible Hybrid Cloud Expertise: NetApp is also introducing new Support and Professional Services offerings that make it even easier for customers to access experts for step-by-step guidance as they transition to hybrid cloud. With SupportEdge Advisor for Cloud, NetApp is extending its data center support model to cloud services with rapid, direct access to trained specialists. NetApp Flexible Professional Services (FlexPS) is also available for customers that require on-demand and ongoing support as they transition to a hybrid cloud. With this new subscription-based offering, organizations can get the professional help they need to design and build a data fabric strategy, implement solutions, and optimize their hybrid cloud, with predictable costs and without procurement delays.

About NetApp

NetApp is a global cloud-led, data-centric software company that empowers organizations to lead with data in the age of accelerated digital transformation. The company provides systems, software, and cloud services that enable organizations to run their applications optimally from data center to cloud, whether they are developing in the cloud, moving to the cloud, or creating their own cloud-like experiences on premises. With solutions that perform across diverse environments, NetApp helps organizations build their own data fabric and securely deliver the right data, services, and applications to the right people, anytime, anywhere.

Read More

CLOUD APP MANAGEMENT

AWS Announces General Availability of Amazon EC2 DL1 Instances

Amazon | October 27, 2021

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company announced general availability of Amazon Elastic Compute Cloud (Amazon EC2) DL1 instances, a new instance type designed for training machine learning models. DL1 instances are powered by Gaudi accelerators from Habana Labs (an Intel company) to provide up to 40% better price performance for training machine learning models than the latest GPU-powered Amazon EC2 instances. With DL1 instances, customers can train their machine learning models faster and more cost effectively for use cases like natural language processing, object detection and classification, fraud detection, recommendation and personalization engines, intelligent document processing, business forecasting, and more. DL1 instances are available on demand via a low-cost pay-as-you-go usage model with no upfront commitments. To get started with DL1 instances, Machine learning has become mainstream as customers have realized tangible business impact from deploying machine learning models at scale in the cloud. To use machine learning in their business applications, customers start by building and training a model to recognize patterns by learning from sample data, and then apply the model on new data to make predictions. For example, a machine learning model trained on large numbers of contact center transcripts can make predictions to provide real-time personalized assistance to customers through a conversational chatbot. To improve a model's prediction accuracy, data scientists and machine learning engineers are building increasingly larger and more complex models. To maintain prediction accuracy and high quality of the models, these engineers need to tune and retrain their models frequently. This requires a considerable amount of high-performance compute resources, resulting in increased infrastructure costs. These costs can be prohibitive for customers to retrain their models at the frequency they need to maintain high-accuracy predictions, while also posing an obstacle to customers that want to begin experimenting with machine learning. New DL1 instances use Gaudi accelerators built specifically to accelerate machine learning model training by delivering higher compute efficiency at a lower cost compared to general purpose GPUs. DL1 instances feature up to eight Gaudi accelerators, 256 GB of high-bandwidth memory, 768 GB of system memory, 2nd generation Amazon custom Intel Xeon Scalable (Cascade Lake) processors, 400 Gbps of networking throughput, and up to 4 TB of local NVMe storage. Together, these innovations translate to up to 40% better price performance than the latest GPU-powered Amazon EC2 instances for training common machine learning models. Customers can quickly and easily get started with DL1 instances using the included Habana SynapseAI SDK, which is integrated with leading machine learning frameworks (e.g. TensorFlow and PyTorch), helping customers to seamlessly migrate their existing machine learning models currently running on GPU-based or CPU-based instances onto DL1 instances, with minimal code changes. Developers and data scientists can also start with reference models optimized for Gaudi accelerators available in Habana’s GitHub repository, which includes popular models for diverse applications, including image classification, object detection, natural language processing, and recommendation systems. “The use of machine learning has skyrocketed. 
One of the challenges with training machine learning models, however, is that it is computationally intensive and can get expensive as customers refine and retrain their modelsAWS already has the broadest choice of powerful compute for any machine learning project or application. The addition of DL1 instances featuring Gaudi accelerators provides the most cost-effective alternative to GPU-based instances in the cloud to date. Their optimal combination of price and performance makes it possible for customers to reduce the cost to train, train more models, and innovate faster.” David Brown, Vice President, of Amazon EC2, at AWS Customers can launch DL1 instances using AWS Deep Learning AMIs or using Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS) for containerized applications. For a more managed experience, customers can access DL1 instances through Amazon SageMaker, making it even easier and faster for developers and data scientists to build, train, and deploy machine learning models in the cloud and at the edge. DL1 instances benefit from the AWS Nitro System, a collection of building blocks that offload many of the traditional virtualization functions to dedicated hardware and software to deliver high performance, high availability, and high security while also reducing virtualization overhead. DL1 instances are available for purchase as On-Demand Instances, with Savings Plans, as Reserved Instances, or as Spot Instances. DL1 instances are currently available in the US East (N. Virginia) and US West (Oregon) AWS Regions. Seagate Technology has been a global leader offering data storage and management solutions for over 40 years. Seagate’s data science and machine learning engineers have built an advanced deep learning (DL) defect detection system and deployed it globally across the company’s manufacturing facilities. In a recent proof of concept project, Habana Gaudi exceeded the performance targets for training one of the DL semantic segmentation models currently used in Seagate’s production. “We expect the significant price performance advantage of Amazon EC2 DL1 instances, powered by Habana Gaudi accelerators, could make a compelling future addition to AWS compute clusters,” said Darrell Louder, Senior Engineering Director of Operations, Technology and Advanced Analytics, at Seagate. “As Habana Labs continues to evolve and enables broader coverage of operators, there is potential for expanding to additional enterprise use cases, and thereby harnessing additional cost savings.” Intel has created 3D Athlete Tracking technology that analyzes athlete-in-action video in real time to inform performance training processes and enhance audience experiences during competitions. “Training our models on Amazon EC2 DL1 instances, powered by Gaudi accelerators from Habana Labs, will enable us to accurately and reliably process thousands of videos and generate associated performance data, while lowering training cost,” said Rick Echevarria, Vice President, Sales and Marketing Group, Intel. “With DL1 instances, we can now train at the speed and cost required to productively serve athletes, teams, and broadcasters of all levels across a variety of sports.” Riskfuel provides real-time valuations and risk sensitivities to companies managing financial portfolios, helping them increase trading accuracy and performance. “Two factors drew us to Amazon EC2 DL1 instances based on Habana Gaudi AI accelerators,” said Ryan Ferguson, CEO of Riskfuel. 
“First, we want to make sure our banking and insurance clients can run Riskfuel models that take advantage of the newest hardware. We found migrating our models to DL1 instances to be simple and straightforward—really, it was just a matter of changing a few lines of code. Second, training costs are a big component of our spending, and the promise of up to 40% improvement in price performance offers potentially substantial benefit to our bottom line.” Leidos is recognized as a top 10 health IT provider delivering a broad range of customizable, scalable solutions to hospitals and health systems, biomedical organizations, and every U.S. federal agency focused on health. “One of the numerous technologies we are enabling to advance healthcare today is the use of machine learning and deep learning for disease diagnosis based on medical imaging data. Our massive data sets require timely and efficient training to aid researchers seeking to solve some of the most urgent medical mysteries,” said Chetan Paul, CTO Health and Human Services at Leidos. “Given Leidos’ and its customers’ need for quick, easy, and cost-effective training for deep learning models, we are excited to have begun this journey with Intel and AWS to use Amazon EC2 DL1 instances based on Habana Gaudi AI processors. Using DL1 instances, we expect an increase in model training speed and efficiency, with a subsequent reduction in risk and cost of research and development.” Fractal is a global leader in artificial intelligence and analytics, powering decisions in Fortune 500 companies. “AI and deep learning are at the core of our healthcare imaging business, enabling customers to make better medical decisions. In order to improve accuracy, medical datasets are becoming larger and more complex, requiring more training and retraining of models, and driving the need for improved computing price performance,” said Srikanth Velamakanni, Group CEO of Fractal. “The new Amazon EC2 DL1 instances promise significantly lower cost training than GPU-based EC2 instances, which can help us contain costs and make AI decision-making more accessible to a broader array of customers.” About Amazon Web Services For over 15 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud offering. AWS has been continually expanding its services to support virtually any cloud workload, and it now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 81 Availability Zones (AZs) within 25 geographic regions, with announced plans for 24 more Availability Zones and eight more AWS Regions in Australia, India, Indonesia, Israel, New Zealand, Spain, Switzerland, and the United Arab Emirates. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs. About Amazon Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Amazon strives to be Earth’s Most Customer-Centric Company, Earth’s Best Employer, and Earth’s Safest Place to Work. 
Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Career Choice, Fire tablets, Fire TV, Amazon Echo, Alexa, Just Walk Out technology, Amazon Studios, and The Climate Pledge are some of the things pioneered by Amazon.

Read More

CLOUD APP DEVELOPMENT

Wipro Partners With National Grid to Drive Data Center Consolidation and Implement Next Generation Hybrid Cloud Architecture

Wipro | October 22, 2021

Wipro Limited a leading global information technology, consulting and business process services company has signed a multi-year global strategic IT and digital deal with London - headquartered National Grid, a leading multinational electric and gas utility provider to accelerate their digital innovation journey.As part of this engagement, Wipro through its Boundaryless Enterprise solutions will facilitate National Grid’s continued digital transformation, integration of its managed services and consolidation of multiple data centers across UK and US to next generation hosting services. These sustainable data centers will allow for enhanced program governance, as well as heightened consolidation and the migration of all server and application functions from traditional data centers. Wipro will also help with mainframe migration and transition to managed services, including the eventual implementation of a hybrid cloud solution for National Grid. Shannon Soland, Chief Technology Officer, National Grid said, “As a strategic partner, Wipro will help us accelerate our digital journey as we work to achieve next generation capabilities in infrastructure hosting services. Wipro’s expertise will be instrumental as we work to improve our operating model to align with our Net Zero carbon commitment.” “Our data center consolidation efforts will allow us to realize an over 60% reduction in our data center footprint as well as realize a 40% reduction in our data center CO2 emissions. Additionally this transformational program, in conjunction with Wipro, will position our IT capabilities to enable modernized SDDC techniques, technologies, and operating model to accelerate our own digital transformation as National Grid continues to build the future of energy.” Daniel Jablonski, Head of Cloud and Hosting Services, National Grid As part of the collaboration with National Grid, Wipro through its innovative solutions and expertise will deliver a flexible, scalable and resilient digital transformation journey for National Grid. Geoffrey Jue, Vice President - ENU Sector Head, Wipro Limited said, “National Grid is one of the world’s largest utility companies, and Wipro is excited to be named as a strategic partner. This new collaboration builds on the successful two-decade-old partnership between the two companies. Wipro will employ standardized tools and processes to provide cloud services that will strengthen National Grid’s infrastructure services, and support its strategic business objectives. About Wipro Limited Wipro Limited is a leading global information technology, consulting and business process services company. We harness the power of cognitive computing, hyper-automation, robotics, cloud, analytics and emerging technologies to help our clients adapt to the digital world and make them successful. A company recognized globally for its comprehensive portfolio of services, strong commitment to sustainability and good corporate citizenship, we have over 220,000 dedicated employees serving clients across six continents. Together, we discover ideas and connect the dots to build a better and a bold new future.

Read More

CLOUD APP MANAGEMENT

NetApp Expands Hybrid Cloud Solutions Portfolio to Unlock Best of Cloud

NetApp | October 21, 2021

Today at INSIGHT 2021, NetApp®, a global cloud-led, data-centric software company, announced new additions and enhanced capabilities across its hybrid cloud portfolio to help organizations modernize their IT infrastructures and accelerate digital transformation. Delivering new, secure ways to consume and operate data services on-premises and in the cloud, NetApp hybrid cloud solutions make it simpler for enterprise customers to put their data to work, wherever and whenever they need it.

As the only solutions provider with native integrations for the world's largest public clouds, NetApp's industry-leading ONTAP® software continues to serve as the foundation for hybrid cloud. With the latest release of ONTAP, NetApp is introducing enhanced protection against ransomware, expanded data management capabilities, and NVMe/TCP support for accelerated performance. The company is also announcing new digital wallet capabilities for NetApp Cloud Manager and enhanced data services for simplified administration across a hybrid cloud, more flexible consumption options to better control costs, as well as new Professional Services offerings to help customers unlock the full value of on-premises and hybrid cloud resources.

"The promised benefits of migrating to the cloud may be profound, but many IT departments are still working to overcome on-premises challenges, like managing the complexity and costs of moving data, protecting against ransomware, and ensuring reliable performance for critical applications. As the hybrid cloud specialist, NetApp can help enterprises move their digital transformation efforts forward to deliver business results faster and within budget, whether they are still developing a strategy or in the middle of executing large-scale migrations."

Brad Anderson, Executive Vice President, Hybrid Cloud Group at NetApp

"IDC's research shows that approximately 70% of enterprise IT customers plan to modernize their storage infrastructures in the next two years to support next-generation workloads. But the key operational advantage will be in optimizing workload placement across traditional on-premises and cloud environments," said Eric Burgener, Research Vice President, Infrastructure Systems Group at IDC. "As an industry leader with years of innovation and expertise delivering hybrid cloud solutions, NetApp is uniquely positioned to help enterprises transition to hybrid cloud models to achieve the scalability and flexibility they need to deliver critical data services and workload capabilities that drive business value."

"Formula One racing has always been about finding the competitive edge, and with Aston Martin Cognizant's return to the F1™ grid this year, we're embracing an ambitious data-centric strategy to maximize our performance both on and off the track as we seek pole position," said Otmar Szafnauer, Chief Executive Officer and Team Principal at Aston Martin Cognizant Formula One Team. "By partnering with NetApp to build our data fabric and standardize operations with its world-class hybrid cloud solutions, we're working to ensure that everything we do, from capturing real-time data on car and component performance to how we streamline factory and engineering operations, is focused on constant improvement and driving the team forward."

NetApp's latest portfolio innovations announced today include:

ONTAP Data Management Software Enhancements: The latest release of ONTAP enables enterprises to autonomously protect against ransomware attacks, using machine learning with integrated preemptive detection and accelerated data recovery. The new release also delivers enterprise-grade performance for SAN and modern workloads with NVMe/TCP support, expanded object storage capabilities, and simplified management. In addition, this latest ONTAP release will power the upcoming NetApp AFF A900, the next-generation high-resiliency all-flash system for business-critical workloads.

Enhanced Data Services: With new digital wallet capabilities available in NetApp Cloud Manager, customers can benefit from greater mobility and more visibility into the usage of data service licenses across a hybrid cloud, with prepayment of credits enabling streamlined deployment that avoids procurement hassles. Additional updates include enhancements to the NetApp Cloud Backup and Cloud Data Sense services, simplified deployment of Cloud Volumes ONTAP with new customer-ready templates, fully embedded Active IQ, and deeper integrations with NetApp Cloud Insights and ONTAP software to support Kubernetes workloads.

More Flexible Consumption Options: NetApp Keystone Flex Subscription, an on-premises storage-as-a-service offering with native cloud integration, continues to gain momentum with customers. The offering is now supported on four continents, encompassing petabytes of capacity within just under one year of availability. NetApp is also offering a new freemium service tier for Cloud Volumes ONTAP, providing customers with a fully featured, perpetual license to use ONTAP in the cloud for workloads needing less than 500GB of storage. This consumption flexibility gives organizations the freedom to use enterprise-grade data services for small workloads, such as Kubernetes clusters, at no initial cost; an organization only needs to convert to a subscription when the workload matures and scales (see the sketch after this announcement).

"As a leading IT consultancy specializing in cloud infrastructure and services, we are increasingly working with clients to reduce CAPEX spending by taking advantage of cloud-like consumption models for their on-premises environments," said Kent Christensen, Virtual Practice Director for cloud and data center transformation at Insight. "NetApp Keystone helps us provide a truly flexible consumption model for enterprises, serving as a platform to provide business-critical data services across the entire hybrid cloud data fabric, which will be a huge boon for our growing client base."

More Accessible Hybrid Cloud Expertise: NetApp is also introducing new Support and Professional Services offerings to make it even easier for customers to access experts for step-by-step guidance as they transition to hybrid cloud. With SupportEdge Advisor for Cloud, NetApp is extending its data center support model to cloud services with rapid, direct access to trained specialists. NetApp Flexible Professional Services (FlexPS) is also available for customers that require on-demand and ongoing support as they transition to a hybrid cloud. With this new subscription-based offering, organizations can get the professional help they need to design and build a data fabric strategy, implement solutions, and optimize their hybrid cloud with predictable costs while avoiding procurement delays.

About NetApp
NetApp is a global cloud-led, data-centric software company that empowers organizations to lead with data in the age of accelerated digital transformation. The company provides systems, software and cloud services that enable organizations to run their applications optimally from data center to cloud, whether they are developing in the cloud, moving to the cloud, or creating their own cloud-like experiences on premises. With solutions that perform across diverse environments, NetApp helps organizations build their own data fabric and securely deliver the right data, services and applications to the right people, anytime, anywhere.
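The Cloud Volumes ONTAP freemium rule described in the announcement above reduces to a simple capacity check: below 500GB a workload can run on the free, perpetual license, and it converts to a paid subscription once it outgrows that limit. The following is a minimal sketch of that stated licensing logic; the required_tier helper and its output strings are hypothetical illustrations, not a NetApp API.

```python
# Hypothetical illustration of the Cloud Volumes ONTAP freemium rule:
# workloads under 500GB run on the free tier, and a workload converts
# to a paid subscription once it scales past that limit.
# This is a sketch of the stated licensing logic, not a NetApp API.

FREEMIUM_LIMIT_GB = 500  # stated ceiling for the freemium tier

def required_tier(provisioned_gb: float) -> str:
    """Return the license tier implied by a workload's provisioned capacity."""
    if provisioned_gb < FREEMIUM_LIMIT_GB:
        return "freemium (fully featured, perpetual, no initial cost)"
    return "paid subscription (convert once the workload scales past 500GB)"

if __name__ == "__main__":
    # e.g. a small Kubernetes cluster growing into a scaled workload
    for size_gb in (120, 499, 500, 2048):
        print(f"{size_gb:>5} GB -> {required_tier(size_gb)}")
```

The point of the model is that small workloads, such as trial Kubernetes clusters, incur no initial licensing cost, and the commercial conversion happens only when growth makes it worthwhile.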

Read More

Events