Strategize Your Cloud Migration and Attain Digital Presence

Sayantani Bhattacharya | November 18, 2021

In the post-Covid-19 era, businesses across the world are going through rapid transformation. Digitization is now the foremost aim for every business that wants to scale, succeed, and stay ahead of the competition. Building a digital presence is crucial for businesses to expand and gain market attention, and cloud migration is one of the most effective ways to accelerate that digital transformation.

Cloud computing is a disruptive technology that drives innovation and increases the efficiency and effectiveness of business processes. Moving to the cloud makes resource sharing seamless, boosts the agility of your business ecosystem, and scales compute and storage capacity with demand spikes. As a metered service, the cloud also helps control initial capital investment. This article explains cloud migration, its different tactics, and how industry leaders are using it to embrace digital transformation.

According to IDC, digitally transformed organizations are estimated to contribute more than half of global GDP by 2023, valued at approximately $53.3 trillion. When a business transforms digitally, it enhances its proficiency, increases customer value, manages risk efficiently, and explores innovative avenues for revenue generation.

“Cloud computing is the third wave of the digital revolution.”

Lowell McAdam, CEO of Verizon


There are two ways you can adopt the cloud for your business: cloud migration and application modernization.

Cloud Migration: A Significant Method to Fortify Digital Transformation

According to Flexera, 94% of enterprises already use cloud services. Cloud and digital transformation are closely correlated: migrating from legacy infrastructure to the cloud helps reform your business and opens the avenue to seamless digital transformation, bringing the innovation and agility needed to attract market attention.

Cloud migration is the process of shifting workloads, data, and applications from on-premises servers to cloud servers. It is often called the "lift and shift" process when explicit workloads and applications are moved to the cloud largely unchanged. The move leverages cloud features such as automatic scaling, enhanced efficiency, prompt responsiveness, agile performance, robust security, ample storage, fast computation, and networking.

Vital Tactics for an Effective Cloud Migration

According to research conducted by Multisoft, primarily among start-ups and SMBs, the advantages of cloud computing are almost instantaneous: 80% of the surveyed companies report process improvements within the first few months of adopting the cloud.

An effective migration strategy is one of the best enablers of business growth, and it empowers an organization to stay aligned with its digitalization objectives. As a result, businesses are driven to implement migration plans from on-premises to the cloud, with the strategy shaped by business vision, goals, and interests. According to Markets and Markets, the cloud migration services market is expected to grow at a Compound Annual Growth Rate (CAGR) of 24.5% through 2022. Here are some crucial steps to consider for an efficient, effective shift to the cloud.

Knowing Your Goal

It is critical to have a clear idea of your business goal before making your migration plan. Identifying the goal of shifting to the cloud is as much about uplifting the organization as about the technology itself. Business and the supporting IT services used to work separately; now they work collaboratively, because every business needs technical fortification to survive the competition. Adopting cloud platforms is one of the most effective ways to leverage technology for your organization, and it lets you scale efficiently by using services tailored to your needs.

Strategizing the Shift

Before reaping the benefits of cloud migration, it is crucial to plan the shift. First, identify what you need to move, and determine the cloud-readiness of your current infrastructure and applications. Next, consider the foundational metrics of your infrastructure to align workloads with your assets and applications. These metrics help you establish key performance indicators (KPIs) for the migration, such as page load time, response times, resource availability, CPU utilization, memory usage, and conversion rates. Your migration strategy should be well planned, prioritizing business objectives and including a cost analysis for both before and after the shift.
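As a rough illustration of how baseline metrics become migration KPIs, the sketch below compares post-migration measurements against pre-migration baselines. The metric names, baseline values, and tolerance are hypothetical examples, not figures from this article.

```python
# Hypothetical sketch: flagging KPIs that degraded after a cloud migration.
# Baseline values and the 10% tolerance are illustrative assumptions.

BASELINE = {
    "page_load_ms": 1200,
    "response_ms": 350,
    "cpu_utilization_pct": 75,
}

def kpi_regressions(post_migration: dict, tolerance: float = 0.10) -> list:
    """Return the KPIs that worsened by more than `tolerance` versus baseline."""
    regressions = []
    for name, baseline in BASELINE.items():
        measured = post_migration.get(name)
        # "Worse" here means a higher value (latency, utilization, etc.).
        if measured is not None and measured > baseline * (1 + tolerance):
            regressions.append(name)
    return regressions

# Example: page load slowed beyond the tolerance after the shift.
print(kpi_regressions({"page_load_ms": 1500, "response_ms": 340}))
```

A check like this can run continuously during a phased migration, so a degraded KPI is caught before the next workload moves.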

Selecting the Right Migration Strategy

According to Unisys, one in three cloud migrations worldwide fails because the cloud is not part of the core business strategy or the move is not planned correctly. Selecting the right migration strategy is therefore one of the integral steps toward business transformation. The 6Rs of migration strategy are: Rehosting, Replatforming, Repurchasing, Refactoring, Retiring, and Retaining. A critical success factor is identifying whether the applications to be migrated are ready to perform and produce the desired business value in the new environment; if not, the migration cannot be considered a success.
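In practice, each application in a portfolio is assessed and assigned one of the 6Rs. The sketch below shows one hypothetical way to encode such an assessment; the application names and decision flags are invented for illustration, not a standard rubric.

```python
# Illustrative sketch: tagging each application with one of the 6R strategies.
# The assessment flags and precedence order below are assumptions.

def choose_strategy(app: dict) -> str:
    """Pick a 6R migration strategy from simple assessment flags."""
    if app.get("end_of_life"):
        return "retire"            # decommission rather than migrate
    if app.get("compliance_locked"):
        return "retain"            # must stay on-premises for now
    if app.get("saas_alternative"):
        return "repurchase"        # replace with a SaaS product
    if app.get("needs_redesign"):
        return "refactor"          # re-architect for cloud-native value
    if app.get("minor_changes_only"):
        return "replatform"        # small optimizations during the move
    return "rehost"                # default "lift and shift"

portfolio = [
    {"name": "legacy-crm", "saas_alternative": True},
    {"name": "billing", "minor_changes_only": True},
    {"name": "intranet", "end_of_life": True},
]
plan = {app["name"]: choose_strategy(app) for app in portfolio}
print(plan)
```

The precedence order matters: checking "retire" first avoids wasting migration effort on applications that should simply be decommissioned.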

Choosing the Ideal Platform

Digital transformation and the cloud go hand in hand: cloud computing has become the basic foundation of the technologies that enable digital transformation. Based on your business requirements, you need to identify, select, and set up the right cloud platform, deciding whether a public, private, hybrid, or multi-cloud environment is the right course of action. Choosing among eminent cloud vendors such as Microsoft Azure, AWS, and Google Cloud Platform is an equally integral step toward a successful migration.

Making your Cloud Infrastructure Ready for the Shift

Once you decide on a specific platform, you need to make it ready to host your on-premises applications, and select the applications and cloud services suitable for the new platform. The major cloud service providers offer a wide range of choices that are close to on-premises offerings, and some third-party providers offer an amalgamation of on-premises and multi-cloud platforms, such as Databricks and Snowflake on AWS and Azure. The platform and services you opt for will also determine how much code revision is required to make your applications cloud-ready.

Emphasizing Cloud Governance

Governance defines the processes, policies, and criteria for effective decision-making; it also helps measure success, track milestones, and manage scope. For a successful large-scale application transformation to the cloud, defining a governance model helps cover security, financial, performance, collaboration, and communication concerns.

Defining Cloud KPIs

While you strategize your migration plan, defining Key Performance Indicators (KPIs) helps you measure how applications perform against your expectations. It is crucial to verify whether your pre-defined KPIs remain accurate in the cloud environment. The best KPIs for a cloud migration reflect migration status and surface problems within your application; ideally, they also help you determine when the migration is complete and successful.

Maintaining Security During Data Transfer

Understand your organization's current security requirements, analyze the gaps, and establish cloud security best practices that leverage the security controls offered by cloud providers and improve the security posture based on application requirements. Aligning security with the Cloud Controls Matrix (CCM) framework provides essential security principles for controlling overall security risk across cloud platforms and service models. It strengthens information security controls, identifies consistent security threats and vulnerabilities in the cloud, and provides means to reduce them through standardized security and operational risk management.

Implementing the Movement

Once you have followed all the steps meticulously, you are ready for the shift. Your migration tactics will depend on the complexity of your applications and infrastructure, and your strategy must include an assessment of the risks that can arise from operational disruption. The primary focus should be implementing the migration quickly, at reduced cost, while realizing the benefits of the cloud. You can shift your entire application at once and analyze how it functions, or take a more granular approach, moving gradually until the entire application is in the cloud.

How Spotify Moved to GCP to Become the World's Leading Music Service

We all know Spotify. Founded in 2008, it is today one of the most significant market drivers in the music industry. Spotify originally used legacy data centers to host its storage and infrastructure, but scaling requirements and growing competition in the audio streaming market motivated it to plan a migration to Google Cloud Platform (GCP) in 2016. As a result, it migrated 1,200 online services, its data processing DAGs (directed acyclic graphs), and 20,000 tracks.

The three primary reasons for Spotify to migrate to GCP were instant and virtually unlimited scaling, better collaboration and problem-solving capacity, and innovative tools for big data processing.


It was vital for Spotify to have a responsive product that delivers customer satisfaction through advanced features such as recommendations, music discovery, and social connection. The platform also helps listeners find new songs and podcasts and helps artists connect and team up with fans.

Today, Spotify has leveraged the role of cloud migration in digital transformation to become the most popular global audio streaming service, with 248 million users, including 113 million subscribers, across 79 markets, making it one of the key players in the music industry.

Final Thoughts

A cloud migration strategy needs vigilant planning, assessment, and resourcing, because a migration process carries many dependencies. A study by McAfee revealed that 97% of the businesses surveyed use a cloud service in their daily operations. Business leaders will therefore have to develop a formal strategy that aligns individual cloud decisions with the enterprise's strategic goals. Most importantly, secure your business and operations from the very beginning to meet those objectives. If your migration strategy ensures the security of your data and resources while leveraging cutting-edge cloud computing technologies, you are well placed to attain cloud-driven digital transformation.

Frequently Asked Questions


What is a Cloud Migration Plan?

A cloud migration plan is an organization's strategy for moving its data and applications from on-premises architecture to the cloud. Not all workloads are cloud-ready or benefit from running on cloud infrastructure, so it is crucial to plan the most efficient way to select and migrate applications.


What is the Right Time for Cloud Migration?

The ideal time to plan a move to the cloud is when your legacy data center is due for renewal, typically when your hardware is three or more years old. At that point, investing in cloud migration usually delivers more benefit than replacing the hardware in your legacy data center.
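One way to sanity-check this timing is a simple break-even calculation comparing a hardware refresh with a migration. The sketch below is a rough model with entirely made-up dollar figures; real comparisons involve many more cost factors (licensing, staffing, egress, and so on).

```python
# Hypothetical sketch: months until migrating beats a hardware refresh.
# All inputs are illustrative numbers, not real market costs.

def breakeven_months(hw_refresh_cost: float, onprem_monthly: float,
                     migration_cost: float, cloud_monthly: float) -> float:
    """Return how many months it takes the cloud's lower running cost
    to pay back its higher upfront cost versus refreshing hardware."""
    monthly_saving = onprem_monthly - cloud_monthly
    if monthly_saving <= 0:
        return float("inf")  # the cloud never pays back at these rates
    upfront_gap = migration_cost - hw_refresh_cost
    return max(0.0, upfront_gap / monthly_saving)

# Example: migration costs $30k more upfront but saves $2.5k per month.
print(breakeven_months(50_000, 10_000, 80_000, 7_500))
```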


Can Cloud Migration Help in Digital Transformation?

Cloud migration allows organizations to scale infrastructure capacities as needed to support changing business requirements. Further, it optimizes resource utilization and allows easy access to data from any location. It also helps in global expansion, with increased efficiency and improved decision-making powered by digital transformation.

Spotlight

Structured Communication Systems

Structured is an award-winning solution provider delivering secure, cloud-connected digital infrastructure. For nearly 30 years, we’ve helped clients through all phases of digital transformation by securely bridging people, business and technology. Customers trust us to provide valuable insight throughout the process of selecting and implementing secure and scalable IT strategies, platforms, and processes that meet modern expectations and drive measurable improvements throughout the enterprise.

OTHER ARTICLES
CLOUD SECURITY

Green Cloud Computing: The Future of Sustainable Cloud Computing

Article | August 4, 2022

Energy consumption has risen sharply as a result of the increasing demand for cloud infrastructure. This desire for power has substantially impacted the environment's carbon footprint. The exponential expansion of data centers, with their tens of thousands of servers and other infrastructure, is mostly to blame for the steadily rising energy demand.

Environmental Effects of Green Cloud Computing

Remote work results in lower carbon footprints
Reduced paper use saves the environment
Reduced power use cuts energy consumption
Dematerialization reduces greenhouse gas emissions

Green cloud computing entails creating, using, and designing digital spaces in a way that has a lower environmental impact. A green cloud solution can drastically lower business operational expenses while saving energy, which makes it crucial to the development of enterprise cloud computing. It enables users to enjoy the advantages of cloud storage while reducing the negative environmental effects that ultimately affect human wellbeing.

According to a European Commission study on energy-efficient cloud computing technologies, EU data centers accounted for 2.7 percent of electricity demand in 2018, and if demand continues on its current trend, this is predicted to increase to 3.21 percent by 2030. Data centers use a substantial amount of energy, and as more businesses migrate to the cloud and data center growth continues, this demand will increase, even though the 2018 EU rate is already higher than the global average.

The advantages offered by cloud computing technology have come a long way. In addition to providing ease, flexibility, scalability, and cost savings, it has evolved into a tool to innovate processes and operations without worsening the impending and expanding environmental problems experienced worldwide.
By relying on green cloud computing, your organization can improve staff productivity, develop new business processes, and contribute to a cleaner environment.

CLOUD SECURITY

Cloud Cryptography: Using Encryption to Protect Data

Article | July 8, 2022

Even privacy specialists agree that encryption is a fundamental technology and the cornerstone of security, but cloud encryption can be daunting, and small and medium-sized enterprises can be confused by the sheer quantity of encryption techniques.

Cloud cryptography encrypts cloud-based data. It allows users to securely use shared cloud services while the cloud provider's data is encrypted, safeguarding sensitive information without slowing exchange. Cloud encryption makes it possible to secure sensitive data outside your organization's corporate IT infrastructure, when that data is no longer under your control. Companies utilize a variety of cryptographic key types for cloud security; three kinds of algorithms are used for cloud data encryption:

Symmetric Algorithm

One key both encrypts and decrypts data. It requires little computing power and excels at encryption. Two-way keys ensure verification and approval in symmetric algorithms, and encrypted information in the cloud cannot be deciphered unless the client possesses the key.

Asymmetric Algorithm

Encryption and decryption use distinct keys: data is encrypted with the recipient's public key, and only the recipient's private key can decrypt it. This is the most secure approach, since accessing specific data requires the correct key pair.

Hashing

Hashing is key to blockchain security. In a blockchain, data is stored in blocks linked by cryptographic protocols, and a code, or hash, is assigned to each block added to the chain. Hashing helps organize and recover data.

Businesses need to adopt a data-centric approach in this complex and evolving world of virtualization, cloud services, and mobility to protect sensitive information from contemporary threats. Companies should deploy data security solutions that secure sensitive data consistently, including cloud data encryption and key management.
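The hashing idea described above can be shown with Python's standard-library hashlib. This is a minimal sketch of integrity checking, not a blockchain implementation; the payload bytes are an arbitrary example.

```python
# Minimal sketch of hash-based integrity checking using stdlib hashlib.
# The payloads below are hypothetical example data.
import hashlib

def fingerprint(block: bytes) -> str:
    """Return a SHA-256 digest that can index and verify a data block."""
    return hashlib.sha256(block).hexdigest()

original = b"customer-records-v1"
stored_hash = fingerprint(original)

# Any change to the block changes the digest, so tampering is detectable.
print(fingerprint(b"customer-records-v1") == stored_hash)  # unchanged block
print(fingerprint(b"customer-records-v2") == stored_hash)  # modified block
```

Because SHA-256 is deterministic and collision-resistant, comparing digests is a cheap way to detect modified data, which is exactly how blocks in a chain reference and validate one another.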
Comprehensive cloud security and encryption platform should include robust access controls and key management to help enterprises use encryption successfully and cost-efficiently.

CLOUD SECURITY

Is It Time For Your Organization To Adopt Cloud Computing?

Article | July 11, 2022

The potential of cloud computing is becoming increasingly apparent to various businesses, and it is still growing. AWS, Microsoft Azure, and Google Cloud Platform are just a few of the numerous cloud service providers accessible globally, and you can choose from a variety of migration strategies to move from local servers to cloud servers. Many businesses are considering shifting to the cloud. What are the indications that you are prepared, and why should you move?

There's a chance your company is already utilizing an on-premise solution. Since it has been in use for a while, the organization is accustomed to it. But the need for greater flexibility has grown exponentially now that the shift to digital has accelerated in recent years.

Threats to On-premise

On-premise software has various drawbacks. Updates aren't usually frequent, and they're not always supported, which means firms won't always have access to the most recent features and capabilities. If you require a feature right away, a custom build is much more time-consuming than getting it added to quarterly updates. The program an organization is using may someday be phased out entirely, leaving the organization stuck with a solution that receives no more updates. In addition, as hardware ages, current operating systems may be unable to execute older programs. In the meantime, rivals will have switched to cutting-edge, affordable cloud-based technologies that let them run their businesses and provide a much smoother client experience.

Why Choose the Cloud?

Moving to the cloud applies to every aspect of your business. Real-time data enables far more precise decision-making, and automating routine manual chores streamlines operations and frees up team members' time for activities they enjoy.
It is also perfect for emerging forms of working, like remote and hybrid work, because it can be accessed from anywhere, on any device, at any time.

CLOUD SECURITY

Managing Multi-Cloud Complexities for a Seamless Experience

Article | July 6, 2022

Introduction

The early 2000s were milestone moments for the cloud: Amazon Web Services (AWS) entered the market in 2006, while Google revealed its first cloud service in 2007. Fast forward to 2020, when the pandemic boosted digital transformation efforts by around seven years (according to McKinsey), and the cloud has become a commercial necessity. It not only facilitated the swift transition to remote work, but also remains critical to maintaining company sustainability and creativity. Many argue that the large-scale transition to the cloud in the 2010s was necessary to enable the digital-first experiences that remote workers and decentralized businesses need today.

Multi-cloud and hybrid cloud setups are now the norm. According to Gartner, most businesses today use a multi-cloud approach to reduce vendor lock-in or to take advantage of more flexible, best-of-breed solutions. However, managing multi-cloud systems increases cloud complexity and IT concerns, frequently slowing rather than accelerating innovation. According to 2022 research by IntelligentCIO, the average multi-cloud system includes five platforms, including AWS, Microsoft Azure, Google Cloud, and IBM Red Hat, among others.

Managing Multi-Cloud Complexities Like a Pro

Your multi-cloud strategy should satisfy your company's requirements while also laying the groundwork for managing various cloud deployments. Creating a proactive plan for managing multi-cloud setups is one of the practices that can distinguish your company. The strategies for handling multi-cloud complexity are outlined below.

Managing Data with AI and ML

AI and machine learning can help manage the enormous quantities of data in multi-cloud environments. AI simulates human decision-making and at times performs tasks as well as humans or even better; machine learning is a type of artificial intelligence that learns from data, recognizes patterns, and makes decisions with minimal human interaction. Together they help discover the most important data, reducing big data and multi-cloud complexity while enabling simpler, better data control.

Integrated Management Structure

Keeping up with the growing number of cloud services from several providers requires a unified management structure; juggling and correlating infrastructure alternatives across multiple clouds takes IT time, resources, and technology. Routinely monitor your cloud resources and service settings, manage apps, clouds, and people globally, and ensure you have the technology and infrastructure to handle several clouds.

Developing Security Strategy

Operating multiple clouds requires a security strategy and seamless integration of security capabilities. There is no single right answer, since vendors have varied policies and cybersecurity methods. Storing data across several cloud deployments prevents data loss, so handling backups and safety copies of your data is crucial. Regularly examine your multi-cloud network's security: the cyber threat environment will change as infrastructure and software do, and multi-cloud strategies must safeguard both data and applications.

Skillset Management

Multi-cloud complexity requires skilled operators. Do you have the appropriate IT personnel to handle multi-cloud, and if not, can you use managed or cloud services? These specialists are in charge of teaching the organization how each cloud deployment helps the company accomplish its goals, and of ensuring that all cloud entities work properly by utilizing cloud technologies.

Closing Lines

Traditional cloud monitoring solutions are incapable of dealing with dynamic multi-cloud setups, but automated intelligence excels at getting to the heart of cloud performance and security concerns. To begin with, businesses require end-to-end observability in order to see the overall picture.
Add automation and causal AI to this capacity, and teams can obtain the accurate answers they require to better optimize their environments, freeing them up to concentrate on increasing innovation and generating better business results.



Related News

CLOUD SECURITY

Rapidly Evolving Hybrid Cloud Security Requirements are Driving the Need for Deep Observability

Gigamon | September 28, 2022

Gigamon, the leading deep observability company, is guiding the industry forward today, bringing application and network-level intelligence together for the first time to help network, security, and cloud IT operations teams eliminate security blind spots and deliver defense in depth across their highly distributed hybrid and multi-cloud infrastructure.

Leading market intelligence firm the 650 Group forecasts the deep observability market's CAGR to grow over 60 percent to reach $2 billion by 2026, and predicts Gigamon will take a commanding lead with 68 percent market share in the first half of 2022. Together with an expanding ecosystem of technology alliance partners, Gigamon harnesses actionable network-level intelligence that amplifies the power of cloud, security, and observability tools, ultimately empowering large organizations to achieve the transformational promise of the cloud.

A recent IDC global survey of 900 large organization IT executives and managers* revealed that "strengthened cybersecurity posture and practices" is the number one benefit of deep observability intelligence and insights. To overcome their concerns about security vulnerabilities, 79 percent of respondents indicate they have made good-to-excellent progress in leveraging network intelligence and performance metrics for security insights. When asked specifically where the alignment of NetOps and SecOps efforts and tools has improved security management, a strong majority of respondents cited the following: complete visibility into on-premises systems and cloud services, reduced false positives, improved speed and accuracy of triage, and validated remediation.

To further underscore the urgency for organizations to address security blind spots, a recent Vitreous World State of Ransomware for 2022 and Beyond survey revealed that more than 95 percent of the more than one thousand global respondents, consisting of large organization IT and security executives, had experienced ransomware attacks in the past year. The research also revealed that 89 percent of global security leaders surveyed agree deep observability is an important element of cloud security, with 50 percent of global CISOs/CIOs strongly agreeing with this statement.

"As a cloud-first dental support organization, we are continuously seeking new ways to fortify our security posture and equip our supported owner doctors with the latest, proven technology and highly skilled support staff, so they can focus on providing patients with an extraordinary, differentiated care experience," said Nemi George, vice president of IT and information security officer and IT service operations at Pacific Dental Services. "With the deep observability we gain from Gigamon, we can eliminate security blind spots at the network layer of our hybrid cloud infrastructure, deliver defense in depth, and confidently scale our operations."

A New Frontier: Deep Observability

The Gigamon Hawk Deep Observability Pipeline harnesses actionable network-level intelligence to amplify the power of cloud, security, and observability tools, enabling IT organizations to assure security and compliance governance, speed root-cause analysis of performance bottlenecks, and lower the operational overhead associated with managing today's highly distributed hybrid and multi-cloud infrastructure. Gigamon extends the value of these tools with real-time network intelligence derived from packets, flows, and application metadata to deliver defense in depth and performance management.

Gigamon has an extensive ecosystem of technology alliance partners that includes leading observability vendors Dynatrace, New Relic, and Sumo Logic.

"We are proud to partner with Gigamon and integrate their network-level intelligence with the Dynatrace platform's full-stack observability, application security, and AIOps capabilities to enable our joint customers to innovate faster and more securely," said Bob Wambach, vice president of product marketing at Dynatrace. "Large organizations continue to embrace hybrid-cloud, multi-cloud, and cloud-native technologies as the foundation for their digital services and innovation. As a result, applications have become increasingly complex and distributed. The combination of Dynatrace and Gigamon gives customers unprecedented abilities to simplify cloud complexity. The actionable, network-level intelligence of the Gigamon deep observability pipeline provides additional network-security context to the precise answers and intelligent automation delivered by Dynatrace."

"At Trace3 we help our customers design, move, and re-architect workloads to the cloud. One of the challenges we face is maintaining visibility into key applications regardless of cloud architecture pattern, in alignment with guidance from well-architected frameworks," said Chris Nicholas, vice president of cloud and cloud solutions group at Trace3. "Gigamon Hawk helps us deliver actionable network-level intelligence against many advanced security and observability use cases. The built-in performance tools help us accelerate troubleshooting while lowering operational costs."

"IT organizations are navigating an unprecedented increase in cyber threats across all vectors of their hybrid and multi-cloud infrastructure, and the underlying complexity and disparity of tools used to manage these environments introduces blind spots that can expose their organizations to risk. Gigamon is at the right place at the right time to capitalize on this high-growth market and deliver more value to our customers by extending the value of tools they have already deployed and empowering them with actionable network-level intelligence for the hybrid cloud so they can run fast, stay secure, and accelerate innovation," said Shane Buckley, president and CEO of Gigamon.

About Gigamon

Gigamon offers a deep observability pipeline that harnesses actionable network-level intelligence to amplify the power of observability tools. This powerful combination helps IT organizations assure security and compliance governance, speed root-cause analysis of performance bottlenecks, and lower the operational overhead associated with managing hybrid and multi-cloud IT infrastructures. The result: modern enterprises realize the full transformational promise of the cloud. Gigamon serves more than 4,000 customers worldwide, including over 80 percent of Fortune 100 enterprises, 9 of the 10 largest mobile network providers, and hundreds of government and educational organizations worldwide.


CLOUD DEPLOYMENT MODELS

Red Hat Drives Greater Consistency and Management Across the Hybrid Cloud with Latest Version of OpenShift Platform Plus

Red Hat | August 17, 2022

Red Hat Inc., the world's leading provider of open source solutions, today announced a new iteration of Red Hat OpenShift Platform Plus, with new features and capabilities that go beyond the base Kubernetes platform to encompass storage, management and more. This further extends Red Hat OpenShift Platform Plus as a singular Kubernetes platform to span the breadth of enterprise IT scenarios, whether a traditional datacenter, distributed edge operations or multiple public cloud environments. In the Gartner® report, The Innovation Leader’s Guide to Navigating the Cloud-Native Container Ecosystem, the research firm recommends that, “organizations strive to standardize on a consistent platform, to the extent possible across use cases.”1 As organizations grow application landscapes to meet evolving needs, Kubernetes-powered cloud platforms need to not only span open hybrid cloud infrastructure footprints, but also the variety of workloads and applications running on this foundation. Red Hat OpenShift Platform Plus is engineered to provide a more consistent foundation for organizations to drive transformational IT standardization. The newest offering includes the necessary tools to more simply build, protect and manage applications throughout the software lifecycle and across Kubernetes clusters. These underlying technology updates include: A comprehensive platform for workloads that span the hybrid cloud As organizations continue to scale operating environments, the need for greater consistency across these heterogeneous footprints has grown as well. Red Hat OpenShift 4.11, based on Kubernetes 1.24 and CRI-O 1.24 runtime interface, is designed to make it easier to consume enterprise Kubernetes however and wherever needed across the open hybrid cloud. The latest version of Red Hat OpenShift enables organizations to install OpenShift directly from major public cloud marketplaces, including AWS marketplace and Azure marketplace. 
This provides even greater flexibility in how an enterprise chooses to run OpenShift and enables IT teams to better meet dynamic technology requirements. New features and capabilities in Red Hat OpenShift 4.11 include:

- Pod Security Admission integration, which enables users to define different isolation levels for Kubernetes pods to help enforce clearer, more consistent pod behaviors.
- Installer-provisioned infrastructure (IPI) support for Nutanix, enabling users to employ the IPI process for fully automated, integrated, one-click installation of OpenShift on supported Nutanix virtualized environments.
- Additional architectures for sandboxed containers, including the ability to run sandboxed containers on AWS as well as on single-node OpenShift. Sandboxed containers provide an optional additional layer of isolation for workloads, even at the far reaches of the network’s edge.

Enhanced oversight and compliance across hybrid environments

Managing disparate workloads can frequently require additional oversight and governance. To help users better manage ever-growing container fleets at the edge, Red Hat Advanced Cluster Management 2.6, as part of Red Hat OpenShift Platform Plus, adds new features aimed at improving availability in high-latency, low-bandwidth use cases. A single Red Hat Advanced Cluster Management hub cluster can now deploy and manage up to 2,500 single-node OpenShift clusters, which can be deployed and managed at the edge through zero-touch provisioning. Additionally, Red Hat Advanced Cluster Management 2.6 provides edge metrics collectors designed specifically for single-node and small workloads, allowing for greater observability of remote operations. Red Hat Advanced Cluster Management also offers new integrations with key tools, providing users with the flexibility to continue using existing workflows.
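Pod Security Admission builds on the upstream Kubernetes Pod Security Standards, which are applied as labels on a namespace. As a minimal sketch (the namespace name is an invented example, not a detail from the announcement), a namespace can enforce the restricted profile while auditing and warning at the same level:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: edge-workloads   # example namespace name (assumption)
  labels:
    # Reject pods that do not satisfy the "restricted" profile
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.24
    # Surface, but do not block, violations of the same profile
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Setting `enforce`, `warn` and `audit` independently is what allows the "different isolation levels" mentioned above: a team can enforce a looser baseline while warning against the stricter profile it plans to adopt.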
Key integrations include:

- Automatic fleet-wide visibility of applications, including wider visibility on the application topology, displaying applications created straight through OpenShift.
- Cluster management directly from Red Hat Ansible Automation Platform, available as a technology preview, which enables Ansible users to interact with Red Hat Advanced Cluster Management natively.
- Integration with Kyverno PolicySet, available as a technology preview, which provides users more options to keep pace with Kubernetes policy landscapes.

Data services and persistent storage designed for modern workloads

As organizations move their systems to the hybrid cloud, resilience is often a critical concern. To help minimize data loss and business disruption in the event of a failure, Red Hat OpenShift Data Foundation 4.11 includes the OpenShift API for Data Protection. The operator-based application programming interface (API) can be used to back up and restore applications and data specifics, natively or by using existing data protection applications across the hybrid cloud. Additionally, Red Hat OpenShift Data Foundation now provides multicluster monitoring capabilities via Red Hat Advanced Cluster Management. This allows for a single view of cluster data management health across multiple clusters and can help reduce operational costs by consolidating cluster management across environments through a single tool.

“As organizations turn to modern applications to deliver better, more dynamic user experiences, they need a platform that delivers consistency, whether it’s a traditional application in a datacenter or containerized workloads spanning the edge and multiple public clouds.
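The OpenShift API for Data Protection is delivered as an operator whose custom resources are based on the upstream Velero project. A hedged sketch of a backup request, assuming the operator runs in its default openshift-adp namespace and that an application namespace called my-app exists (both names are assumptions, not details from the announcement):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-app-backup          # example backup name (assumption)
  namespace: openshift-adp     # default OADP operator namespace (assumption)
spec:
  includedNamespaces:
    - my-app                   # application namespace to back up (assumption)
  ttl: 720h0m0s                # retain the backup for 30 days
```

Because the request is itself a Kubernetes resource, the same declarative workflow works natively on the cluster or through existing data protection tooling layered on top of it, as the paragraph above describes.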
With today’s updates, Red Hat OpenShift Platform Plus remains positioned to provide this consistent foundation along with a comprehensive set of integrated tools for enhanced management, improved data resiliency, and a stronger security posture.”

Joe Fernandes, vice president and general manager, Platforms Business Group, Red Hat

About Red Hat, Inc.

Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.


CLOUD SECURITY

New Spectro Cloud Palette Edge Platform Brings World-Class Security and Operational Efficiencies to Kubernetes at the Edge

Spectro Cloud | September 30, 2022

Spectro Cloud, a leader in modern Kubernetes (K8s) management software, today announced a major new release of its Palette Edge platform. Kubernetes at the edge has spurred the interest of businesses around the world as they seek to enhance competitiveness and agility. To date, however, K8s at the edge has failed to realize its true potential. Why? A study by Dimensional Research found that 72% of Kubernetes users effectively said: “It’s too challenging to deploy and manage Kubernetes on edge devices.”

The Palette Edge platform, first launched in March 2022 and central to Spectro Cloud’s recognition as a 2022 Gartner Cool Vendor in Edge Computing, solves this problem, enabling organizations to redefine how cost-efficiently they can deploy and manage edge K8s clusters at scale, including at locations with small form factor devices, no on-site IT skills and marginal connectivity. Palette Edge delivers remote troubleshooting, zero-downtime rolling upgrades and patch management, even in single-server edge deployments, thanks to its unique A/B OS partition, multi-node failsafe design and support for both ARM and x86 architectures, including Intel’s Trusted Platform Module (TPM).

Palette Edge derives its functionality from Spectro Cloud’s core Palette platform, which enables organizations to consistently manage K8s clusters across their full lifecycle, across public clouds, virtualized or bare metal data centers, as well as edge locations. Through a unique extension of the Cloud Native Computing Foundation’s (CNCF) Cluster API, Palette enables IT teams to model their full Kubernetes stacks from the OS to the application in a true declarative model, creating project-curated, reusable Cluster Profiles while providing a choice of operating systems, K8s distributions and tools from the broad K8s ecosystem.
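Upstream Cluster API, which Palette extends, already expresses a cluster declaratively as Kubernetes resources that a controller reconciles against real infrastructure. A minimal upstream-style sketch of that model (the resource names and the Docker infrastructure provider are illustrative assumptions, not Palette's own Cluster Profile format):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster                 # example name (assumption)
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  # References to provider-specific resources that declare
  # the desired control plane and underlying infrastructure
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster              # example provider used for local testing
    name: demo-docker-cluster
```

Swapping the `infrastructureRef` provider is how the same declarative description targets public clouds, bare metal or edge locations; Palette's Cluster Profiles extend this idea up the stack to the OS, distribution and application layers.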
Palette is architected to scale, delivering centralized and automated management combined with decentralized orchestration and policy enforcement, together enabling virtually infinite scale, from a few to tens of thousands of clusters. Extending this core Palette foundation, Palette Edge today adds unique security, visibility and usability capabilities, setting a new industry standard for deploying and centrally managing edge K8s at scale and dramatically lowering total cost of ownership and risk for organizations of any size expanding to the edge. Palette Edge is purpose-built to support key industry use cases including Internet of Things device management and orchestration, data ingestion, streaming, analytics and AI inference.

“For us, edge is an enabler to help clinicians deliver better patient outcomes by deploying technology closer to the user,” said Vignesh Shetty, SVP & GM Edison AI and Platform at GE Healthcare Digital. “The need for a secure, cost-effective approach to manage Kubernetes at the edge at scale is more relevant than ever before.”

The new Palette Edge delivers on the key priorities for edge K8s users with:

Tamperproof security for Kubernetes at the edge: Spectro Cloud research found that security is the #1 concern when adopting edge Kubernetes. Edge Kubernetes devices deployed in remote, unmonitored locations are particularly vulnerable to deliberate tampering and unintentional configuration drift, where their operating system, distribution and other software elements move out of compliance through ad hoc configuration changes. Palette Edge now enables operations teams to build highly secure configurations for edge devices, including their preferred Kubernetes distribution and the underlying OS, which once deployed become immutable, read-only and unmodifiable by the application user, just like the firmware on a smartphone. The now-immutable stack also enables zero-downtime rolling upgrades, due to a failsafe deployment design.
Palette eXtended Kubernetes Edge (PXK-E): This new edge-optimized version of Spectro Cloud’s CNCF-upstream Kubernetes distribution is available now to all Palette customers. PXK-E incorporates Palette’s new immutability capability, along with NIST-800 security hardening. It is certified for more than 50 open source and commercial cloud native integrations and provides high availability and zero-downtime rolling upgrades even in single-server configurations. With Palette Edge, businesses can choose the PXK-E distribution or Palette-optimized versions of any other K8s distribution, verified and supported by Spectro Cloud.

A powerful NOC-like dashboard: Organizations scaling to thousands or tens of thousands of edge devices now have the power to manage their fleet more easily and with greater control than ever before. Palette Edge’s Network Operations Center-like (NOC) dashboard provides a highly intuitive user experience with live status for key events, plus advanced capabilities to filter, tag and drill down to clusters by location, status or other attributes. Importantly, operators can define powerful workflows for managing clusters, with almost infinite possibilities: for example, they can phase deployments of cluster updates by location for canary testing, or schedule patching to follow the sun.

Ultra-simple edge device onboarding: In edge Kubernetes projects, organizations can find the act of deploying new devices in remote locations incredibly problematic; often, costly field engineering truck rolls are needed. Palette Edge makes it easy for non-specialist staff to quickly power up and onboard a new device into a managed cluster using a variety of methods: through Palette Edge’s user interface, via its open API, with the Spectro Cloud Terraform provider, or by simply scanning a QR code on the edge device itself.

The features delivered in this new Palette Edge release reflect real customer requirements of K8s at the edge.
To address them and also contribute to the broader cloud native community, Spectro Cloud is now leading a unique open source project which delivers failsafe immutability at the edge: Kairos. Version 1.0 of Kairos is now generally available with extended community support, and is free to download and use. For more information, visit www.kairos.io. This is another example demonstrating Spectro Cloud’s continued commitment to foster innovation as a member of the CNCF and Linux Foundation, contributing to major Kubernetes ecosystem projects such as Cluster API and the Cluster API Provider for Canonical MAAS.

These major new features are available today in Spectro Cloud’s Palette Edge edition and further position Palette as the first choice for organizations running Kubernetes at the edge at scale, enabling them to bring modern applications and data close to their end users. Customers of Palette Edge are already realizing significant benefits by avoiding otherwise necessary field engineering visits at edge locations, which can result in up to a 90% reduction in operational costs.

“A key use case for 5G Edge compute is mission-critical, ultra-low-latency workloads. That means cyber-security is a foundational principle for Edge and not an afterthought. Spectro Cloud is delivering a customer solution for deploying modern apps to the Edge that can integrate readily into end-to-end Zero Trust architectures,” said Dr. Ken Urquhart, Global Vice-President, 5G at Zscaler.

“This brand new set of capabilities is making edge K8s locations as easy as a cloud for our customers. With a platform that can scale to tens of thousands of edge locations, requirements like security, resiliency and ease-of-use can be game changers, and this has been our focus in the latest release.
At Spectro Cloud we are committed champions of the innovation coming out of the open source community, and we couldn’t be more excited to collaborate with some of the most interesting projects to deliver some of those new capabilities.”

Tenry Fu, co-founder and CEO, Spectro Cloud

About Spectro Cloud

Spectro Cloud uniquely enables organizations to deploy and manage Kubernetes in production, at scale. Its Palette enterprise Kubernetes management platform gives IT Operations and DevOps engineering teams effortless control of the full Kubernetes lifecycle even across multiple clouds, data centers, bare metal and edge environments. Ops teams are empowered to support their developers with curated Kubernetes stacks and tools based on their specific needs, with granular governance and enterprise-grade security.


CLOUD SECURITY

Rapidly Evolving Hybrid Cloud Security Requirements are Driving the Need for Deep Observability

Gigamon | September 28, 2022

Gigamon, the leading deep observability company, is guiding the industry forward today, bringing application- and network-level intelligence together for the first time to help network, security, and cloud IT operations teams eliminate security blind spots and deliver defense in depth across their highly distributed hybrid and multi-cloud infrastructure. Leading market intelligence firm the 650 Group forecasts the deep observability market to grow at a CAGR of over 60 percent, reaching $2 billion by 2026, and reports that Gigamon took a commanding lead with 68 percent market share in the first half of 2022. Together with an expanding ecosystem of technology alliance partners, Gigamon harnesses actionable network-level intelligence that amplifies the power of cloud, security, and observability tools, ultimately empowering large organizations to achieve the transformational promise of the cloud.

A recent IDC global survey of 900 large-organization IT executives and managers* revealed that 'strengthened cybersecurity posture and practices' is the number one benefit of deep observability intelligence and insights. And to overcome their concerns about security vulnerabilities, 79 percent of respondents indicate they have made good-to-excellent progress in leveraging network intelligence and performance metrics for security insights. When asked specifically where the alignment of NetOps and SecOps efforts and tools has improved security management, a strong majority of respondents cited the following: complete visibility into on-premises systems and cloud services, fewer false positives, improved speed and accuracy of triage, and validated remediation.
To further underscore the urgency for organizations to address security blind spots, a recent Vitreous World State of Ransomware for 2022 and Beyond survey revealed that more than 95 percent of the more than one thousand global respondents, consisting of large-organization IT and security executives, had experienced ransomware attacks in the past year. The research also revealed that 89 percent of global security leaders surveyed agree deep observability is an important element of cloud security, with 50 percent of global CISOs/CIOs strongly agreeing with this statement.

“As a cloud-first dental support organization, we are continuously seeking new ways to fortify our security posture and equip our supported owner doctors with the latest, proven technology and highly skilled support staff, so they can focus on providing patients an extraordinary, differentiated care experience,” said Nemi George, vice president of IT and information security officer and IT service operations at Pacific Dental Services. “With the deep observability we gain from Gigamon, we can eliminate security blind spots at the network layer of our hybrid cloud infrastructure, deliver defense in depth, and confidently scale our operations.”

A New Frontier: Deep Observability

The Gigamon Hawk Deep Observability Pipeline harnesses actionable network-level intelligence to amplify the power of cloud, security, and observability tools, enabling IT organizations to assure security and compliance governance, speed root-cause analysis of performance bottlenecks and lower the operational overhead associated with managing today’s highly distributed hybrid and multi-cloud infrastructure. Gigamon extends the value of these tools with real-time network intelligence derived from packets, flows and application metadata to deliver defense in depth and performance management.
Gigamon has an extensive ecosystem of technology alliance partners that includes leading observability vendors Dynatrace, New Relic and Sumo Logic. “We are proud to partner with Gigamon and integrate their network-level intelligence with the Dynatrace platform’s full-stack observability, application security, and AIOps capabilities to enable our joint customers to innovate faster and more securely,” said Bob Wambach, vice president of product marketing at Dynatrace. “Large organizations continue to embrace hybrid-cloud, multi-cloud, and cloud-native technologies as the foundation for their digital services and innovation. As a result, applications have become increasingly complex and distributed. The combination of Dynatrace and Gigamon gives customers unprecedented abilities to simplify cloud complexity. The actionable, network-level intelligence of the Gigamon deep observability pipeline provides additional network-security context to the precise answers and intelligent automation delivered by Dynatrace.” “At Trace3 we help our customers design, move and re-architect workloads to the Cloud. One of the challenges we face is maintaining visibility into key applications regardless of cloud architecture pattern, in alignment with guidance from well-architected frameworks,” said Chris Nicholas, vice president cloud and cloud solutions group at Trace3. “Gigamon Hawk helps us deliver actionable network-level intelligence against many advanced security and observability use-cases. 
The built-in performance tools help us accelerate troubleshooting while lowering operational costs.”

“IT organizations are navigating an unprecedented increase in cyber threats across all vectors of their hybrid and multi-cloud infrastructure, and the underlying complexity and disparity of tools used to manage these environments introduces blind spots that can expose their organizations to risk. Gigamon is at the right place at the right time to capitalize on this high-growth market and deliver more value to our customers by extending the value of tools they have already deployed and empowering them with actionable network-level intelligence for the hybrid cloud so they can run fast, stay secure, and accelerate innovation.”

Shane Buckley, president and CEO of Gigamon

About Gigamon

Gigamon offers a deep observability pipeline that harnesses actionable network-level intelligence to amplify the power of observability tools. This powerful combination helps IT organizations to assure security and compliance governance, speed root-cause analysis of performance bottlenecks, and lower operational overhead associated with managing hybrid and multi-cloud IT infrastructures. The result: modern enterprises realize the full transformational promise of the cloud. Gigamon serves more than 4,000 customers worldwide, including over 80 percent of Fortune 100 enterprises, 9 of the 10 largest mobile network providers, and hundreds of governments and educational organizations worldwide.

Read More

CLOUD DEPLOYMENT MODELS

Red Hat Drives Greater Consistency and Management Across the Hybrid Cloud with Latest Version of OpenShift Platform Plus

Red Hat | August 17, 2022

Red Hat Inc., the world's leading provider of open source solutions, today announced a new iteration of Red Hat OpenShift Platform Plus, with new features and capabilities that go beyond the base Kubernetes platform to encompass storage, management and more. This further extends Red Hat OpenShift Platform Plus as a singular Kubernetes platform to span the breadth of enterprise IT scenarios, whether a traditional datacenter, distributed edge operations or multiple public cloud environments. In the Gartner® report, The Innovation Leader’s Guide to Navigating the Cloud-Native Container Ecosystem, the research firm recommends that, “organizations strive to standardize on a consistent platform, to the extent possible across use cases.”1 As organizations grow application landscapes to meet evolving needs, Kubernetes-powered cloud platforms need to not only span open hybrid cloud infrastructure footprints, but also the variety of workloads and applications running on this foundation. Red Hat OpenShift Platform Plus is engineered to provide a more consistent foundation for organizations to drive transformational IT standardization. The newest offering includes the necessary tools to more simply build, protect and manage applications throughout the software lifecycle and across Kubernetes clusters. These underlying technology updates include: A comprehensive platform for workloads that span the hybrid cloud As organizations continue to scale operating environments, the need for greater consistency across these heterogeneous footprints has grown as well. Red Hat OpenShift 4.11, based on Kubernetes 1.24 and CRI-O 1.24 runtime interface, is designed to make it easier to consume enterprise Kubernetes however and wherever needed across the open hybrid cloud. The latest version of Red Hat OpenShift enables organizations to install OpenShift directly from major public cloud marketplaces, including AWS marketplace and Azure marketplace. 
This provides even greater flexibility in how an enterprise chooses to run OpenShift and enables IT teams to better meet dynamic technology requirements. New features and capabilities in Red Hat OpenShift 4.11 include: Pod Security Admission integration, which enables users to define different isolation levels for Kubernetes pods to help enforce clearer, more consistent pod behaviors. Installer provisioned infrastructure (IPI) support for Nutanix for users to employ the IPI process for fully automated, integrated, one-click installation of OpenShift on supported Nutanix virtualized environments. Additional architectures for sandboxed containers, including the ability to run sandboxed containers on AWS as well as on single node OpenShift. Sandboxed containers provide an optional additional layer of isolation for workloads, even at the far reaches of the network’s edge. Enhanced oversight and compliance across hybrid environments Managing disparate workloads can frequently require additional oversight and governance. To help users better manage ever-growing container fleets at the edge, Red Hat Advanced Cluster Management 2.6, as part of Red Hat OpenShift Platform Plus, adds new features aimed at improving availability in high latency, low bandwidth use cases. A single Red Hat Advanced Cluster Management hub cluster can now deploy and manage up to 2,500 single-node OpenShift clusters, which can be deployed and managed at the edge through zero touch provisioning. Additionally, Red Hat Advanced Cluster Management 2.6 provides edge metrics-collectors designed specifically for single-node and small workloads, allowing for greater observibility of remote operations. Red Hat Advanced Cluster Management also offers new integrations with key tools, providing users with the flexibility to continue using existing workflows. 
Key integrations include: Automatic fleet wide visibility of applications, including wider visibility on the application topology, displaying applications created straight through OpenShift. Cluster management directly from Red Hat Ansible Automation Platform, available as a technology preview, enables Ansible users to interact with Red Hat Advanced Cluster Management natively. Integration with Kyverno PolicySet, available as a technology preview, provides users more options to keep pace with Kubernetes policy landscapes. Data services and persistent storage designed for modern workloads As organizations move their systems to the hybrid cloud, resilience is often a critical concern. To help minimize data loss and business disruption in the event of a failure, Red Hat OpenShift Data Foundation 4.11 includes OpenShift API for data protection. The operator-based application programming interface (API) can be used to backup and restore applications and data specifics, natively or by using existing data protection applications across the hybrid cloud. Additionally, Red Hat OpenShift Data Foundation now provides multicluster monitoring capabilities via Red Hat Advanced Cluster Management. This allows for a single view of cluster data management health across multiple clusters and can help reduce operational costs by consolidating cluster management across environments through a single tool. “As organizations turn to modern applications to deliver better, more dynamic user experiences, they need a platform that delivers consistency, whether it’s a traditional application in a datacenter to containerized-workloads spanning the edge and multiple public clouds. 
With today’s updates, Red Hat OpenShift Platform Plus remains positioned to provide this consistent foundation along with a comprehensive set of integrated tools for enhanced management, improved data resiliency, and a stronger security posture.” Joe Fernandes, vice president and general manager, Platforms Business Group, Red Hat About Red Hat, Inc. Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

Read More

CLOUD SECURITY

New Spectro Cloud Palette Edge Platform Brings World-Class Security and Operational Efficiencies to Kubernetes at the Edge

Spectro Cloud | September 30, 2022

Spectro Cloud, a leader in modern Kubernetes (K8s) management software, today announced a major new release of its Palette Edge platform. Kubernetes at the edge has spurred the interest of businesses around the world as they seek to enhance competitiveness and agility. To date, however, K8s at the edge has failed to realize its true potential. Why? A study by Dimensional Research found 72% of Kubernetes users effectively said: “It’s too challenging to deploy and manage Kubernetes on edge devices.” The Palette Edge platform, first launched in March 2022, earned Spectro Cloud recognition as a 2022 Gartner Cool Vendor in Edge Computing solves this problem, enabling organizations to re-define how cost-efficiently they can deploy and manage edge K8s clusters at scale, including at locations with small form factor devices, no on-site IT skills and marginal connectivity. Palette Edge delivers remote troubleshooting, zero-downtime rolling upgrades and patch management, even in single-server edge deployments, due to its unique A/B OS partition, multi-node failsafe design and support for both ARM and x86 architectures, including Intel’s Trusted Platform Module (TPM). Palette Edge derives its functionality from Spectro Cloud’s core Palette platform, which enables organizations to consistently manage K8s clusters across their full lifecycle, across public clouds, virtualized or bare metal data centers, as well as edge locations. Through a unique extension of Cloud Native Computing Foundation (CNCF’s) Cluster API, Palette enables IT teams to model their full Kubernetes stacks from the OS to the application in a true declarative model, creating project-curated, reusable Cluster Profiles while providing a choice of operating systems, K8s distributions and tools from the broad K8s ecosystem. 
Palette is architected to scale, delivering centralized and automated management combined with decentralized orchestration and policy enforcement — together enabling a virtually infinite scale from few to tens of thousands of clusters. Extending this core Palette foundation, Palette Edge today adds unique security, visibility and usability capabilities, setting a new industry standard for deploying and centrally managing edge K8s at scale, dramatically lowering total cost of ownership and risk for organizations of any size expanding to the edge. Palette Edge is purpose-built to support key industry use cases including Internet of Things device management and orchestration, data ingestion, streaming, analytics and AI inference. “For us, edge is an enabler to help clinicians deliver better patient outcomes by deploying technology closer to the user,” said Vignesh Shetty, SVP & GM Edison AI and Platform at GE Healthcare Digital. “The need for a secure, cost-effective approach to manage Kubernetes at the edge at scale is more relevant than ever before.” The new Palette Edge delivers on the key priorities for edge K8s users with: Tamperproof security for Kubernetes at the edge: Spectro Cloud research found that security is the #1 concern when adopting edge Kubernetes. Edge Kubernetes devices deployed in remote, unmonitored locations are particularly vulnerable to deliberate tampering and unintentional configuration drift, where their operating system, distribution and other software elements move out of compliance through ad hoc configuration changes. Palette Edge now enables operations teams to build highly secure configurations for edge devices, including their preferred Kubernetes distribution and the underlying OS, which once deployed become immutable, read-only and unmodifiable by the application user, just like the firmware on a smartphone. The now-immutable stack also enables zero-downtime rolling upgrades, due to a failsafe deployment design. 
Palette eXtended Kubernetes Edge (PXK-E): This new edge-optimized Kubernetes distribution version of Spectro Cloud’s CNCF-upstream Kubernetes distribution is available now to all Palette customers. PXK-E incorporates Palette’s new immutability capability, along with NIST-800 security hardening. It is certified for more than 50 open source and commercial cloud native integrations and provides high availability and zero-downtime rolling upgrades even in single-server configurations. With Palette Edge, businesses can choose the PXK-E distribution or Palette-optimized versions of any other K8s distribution, verified and supported by Spectro Cloud. A powerful NOC-like dashboard: Now organizations scaling to thousands or tens of thousands of edge devices have the power to manage their fleet more easily and with greater control than ever before. Palette Edge’s Network Operations Center-like (NOC) dashboard provides a highly intuitive user experience with live status for key events, plus advanced capabilities to filter, tag and drill down to clusters by location, status or other attribute. Importantly, operators can define powerful workflows for managing clusters, with almost infinite possibilities: for example, they can phase deployments of cluster updates by location for canary testing, or schedule patching to follow the sun. Ultra-simple edge device onboarding: In edge Kubernetes projects, organizations can find the act of deploying new devices in remote locations incredibly problematic; often, costly field engineering truck rolls are needed. Palette Edge makes it easy for non-specialist staff to quickly power up and onboard a new device into a managed cluster, using a variety of methods, such as through Palette Edge’s user interface, leveraging its open API, the Spectro Cloud Terraform provider, or by simply scanning a QR code on the edge device itself. The features delivered in this new Palette Edge release reflect real customer requirements of K8s at the edge. 
To address them and also contribute to the broader cloud native community, Spectro Cloud is now leading a unique open source project that delivers failsafe immutability at the edge: Kairos. Version 1.0 of Kairos is now generally available with extended community support and is free to download and use. For more information, visit www.kairos.io. This is another example of Spectro Cloud’s continued commitment to fostering innovation as a member of the CNCF and the Linux Foundation, contributing to major Kubernetes ecosystem projects such as Cluster API and the Cluster API Provider for Canonical MAAS.

These major new features are available today in Spectro Cloud’s Palette Edge edition and further position Palette as the first choice for organizations running Kubernetes at the edge at scale, enabling them to bring modern applications and data close to their end users. Customers of Palette Edge are already realizing significant benefits by avoiding otherwise necessary field engineering visits to edge locations, which can result in up to a 90% reduction in operational costs.

“A key use case for 5G edge compute is mission-critical, ultra-low-latency workloads. That means cybersecurity is a foundational principle for the edge and not an afterthought. Spectro Cloud is delivering a customer solution for deploying modern apps to the edge that can integrate readily into end-to-end Zero Trust architectures,” said Dr. Ken Urquhart, Global Vice-President, 5G at Zscaler.

“This brand-new set of capabilities makes edge K8s locations as easy as a cloud for our customers. With a platform that can scale to tens of thousands of edge locations, requirements like security, resiliency and ease of use can be game changers, and this has been our focus in the latest release.
At Spectro Cloud we are committed champions of the innovation coming out of the open source community, and we couldn’t be more excited to collaborate with some of the most interesting projects to deliver some of those new capabilities,” said Spectro Cloud co-founder and CEO Tenry Fu.

About Spectro Cloud

Spectro Cloud uniquely enables organizations to deploy and manage Kubernetes in production, at scale. Its Palette enterprise Kubernetes management platform gives IT operations and DevOps engineering teams effortless control of the full Kubernetes lifecycle, even across multiple clouds, data centers, bare metal and edge environments. Ops teams are empowered to support their developers with curated Kubernetes stacks and tools based on their specific needs, with granular governance and enterprise-grade security.
