Edge computing and deployment strategies for communication service providers

Article | February 26, 2020

Communication Service Providers (CSPs) are looking for new revenue sources to grow their businesses, especially in the enterprise area, which will be increasingly important in the future. According to the Ericsson report “5G for business: a 2030 market compass”, by 2030 up to USD 700 billion of the 5G-enabled, business-to-business value could be addressed by CSPs. With the introduction of 5G and edge computing, they are now in a better position to provide new offerings both to enterprises that need to automate industrial processes and to consumers who require improved user experiences for online gaming. Edge computing provides distributed computing and storage resources closer to the locations where they are needed and targets new business opportunities that support specific application use cases. Some example use case areas are augmented and virtual reality, manufacturing, and automotive. The innovation rate in this part of the application ecosystem will be significant going forward. The edge opportunity should be seen in the larger context of the enterprise opportunity, where edge computing will be an enabler for many broader use cases, for example within the Internet of Things (IoT), and potentially bundled with other enterprise offerings such as 5G private networks.

Spotlight

OffsiteDataSync

"OffsiteDataSync was founded in Rochester, NY in 2002 on the belief that data retention and disaster recovery requires a unique expertise, solid technology and guaranteed service. Our mission is to deliver highly reliable solutions, at a competitive cost, that are unmatched in performance while drastically reducing maintenance costs. Today we provide data retention, disaster recovery, and hosted cloud computing services supported by three strategically located data centers. Further strengthening our leadership role in the DRaaS and IaaS industry, we are a 2014 Veeam Cloud Connect Launch Partner."

OTHER ARTICLES

How to Master Multi-Cloud Data Complexities

Article | May 20, 2020

The current patterns of cloud migration include simple “lift and shift,” which moves applications and data with as little rework as possible, and refactoring, which redoes the applications and data so they work more efficiently on a cloud-based platform. More and more migrations involve multiple clouds, which introduces new data complexity issues. When leveraging multi-cloud architectures, IT leaders and cloud professionals need to rethink how they deal with data complexity. If businesses are tasked with this massive and growing data management problem, they ought to get their IT house in order across a vast heterogeneity of systems, deployments, and data types in order to master the data equation for their lines of business applications and services.

Table of Contents

What is multi-cloud?
Why use multiple clouds?
How to manage multi-cloud data complexities?

What is multi-cloud?

Multi-cloud is the use of two or more cloud computing services, including any combination of public, private, and hybrid clouds. The end result is the capacity to orchestrate resources across multiple private or public cloud platforms spanning multiple vendors, accounts, availability zones, regions, or premises.

Why use multiple clouds?

The four most important benefits of utilizing multiple clouds are:

High availability – Multi-cloud protects an organization’s data storage against threats. If one cloud is unavailable, the other clouds remain online to run applications.

Flexibility – Multi-cloud gives businesses the option to select the “best” of each cloud to suit their particular needs based on economics, location, and timing.

Avoid vendor lock-in – The application, workload, and data can run in any cloud based on business or technical requirements at any given time.
Cost effectiveness – Multi-cloud enables businesses to control their costs by optimizing public cloud usage and choosing infrastructure vendors based on price. Public cloud services deliver functionality without having to hire personnel.

Multi-cloud allows you to choose the right platform for your application and customers while using the best features from each cloud service provider. This gives companies the flexibility to select the “best” of each cloud for their particular needs based on economics, location, and timing. Multi-cloud also provides protection against the failure of a single cloud platform. Large enterprises may also be able to maximize the benefits of different infrastructure vendors competing on price for their business (smaller companies won’t have this luxury).

“Cloud is very different from your internal IT stuff — the way you program it, the way you develop applications. It has a wonderful cost proposition, at least initially. But now, of course, these companies have to deal with all of this complexity.”

- Martin Hingley, President and Market Analyst, IT Candor Limited

Read more: How To Derive Data Insights In Hybrid Cloud Model And Drive Innovation

How to manage multi-cloud data complexities?

The reasons for the rising data complexity issues are fairly well known and include the following:

The rising use of unstructured data that doesn’t have native schemas; schemas are typically defined at access.
The rising use of streaming data that many businesses employ to gather information as it happens and then process it in flight.
The rise of IoT devices that spin off massive amounts of data.
The changing nature of transactional databases, moving to NoSQL and other non-relational models.
The continued practice of binding single-purpose databases to applications.
Finally, and most importantly, the rise of as-a-service, cloud-based and cloud-only databases, such as those now offered by all major cloud providers, which are emerging as the preferred databases for applications built both inside and outside of the public clouds. Moreover, heterogeneous distributed databases are increasingly the preferred choice within multi-cloud architectures.

The challenge of multi-cloud

For the most part, those who build today’s data systems just try to keep up rather than get ahead of data complexity issues. The migration of data to net-new systems in multi-clouds is more about tossing money and database technology at the problem than solving it. Missing is core thinking about how data complexity should be managed, along with data governance and data security. We’re clearly missing the new approaches and enabling technologies within multi-cloud deployments that would remove the core drawbacks of data complexity.

“The challenge is that you need a single version of the truth. Lots of IT organizations don’t have that. Data governance is hugely important; it’s not nice to have, it’s essential to have.”

- Martin Hingley, President and Market Analyst, IT Candor Limited

The core issue is to move toward application architectures that decouple the database from the applications, or even toward collections of services, so you can deal with the data at another layer of abstraction. The use of abstraction is not new, but we haven’t had the required capabilities until the last few years. These capabilities include master data management (MDM), data service enablement, and the ability to deal with the physical databases through a configuration mechanism that confines volatility and complexity to a single domain.

Virtual databases are a feature of database middleware services that technology suppliers provide. They drive a configurable structure and management layer over existing physical databases where such a layer is required.
This means that you can alter the way the databases are accessed. You can create common access mechanisms that are changeable within the middleware and do not require risky and expensive changes to the underlying physical databases.

Moving up the stack, we have data orchestration and data management. These layers provide enterprise data management services such as MDM, recovery, access management, and performance management as core services that sit on top of the physical or virtual databases, whether in the cloud or local. At the next layer, we have the externalization and management of core data services or microservices. These are managed, governed, and secured under common governance and security layers that can track, provision, control, and provide access to any number of requesting applications or users.

Act Now

Most enterprises are ignoring the rapid increase of data, as well as that of data complexity. Many hope that something magical, such as standards, will solve the problem for them. The rapid rise of multi-cloud means that your data complexity issues will be multiplied by the number of public cloud providers that end up being part of your multi-cloud. So, we’ll see complexity evolve from a core concern into a major hindrance to making multi-cloud deployment work effectively for the business.

What’s needed now is to understand that a problem exists, and then to think through potential solutions and approaches. Once you do that, the technology to employ is rather easy to figure out. Don’t make the mistake of tossing tools at the problem; tools alone won’t deal with the core issues of complexity. Considering the discussion above, you can accomplish this in two steps. First, define a logical data access layer that can leverage any type of back-end database storage system. Second, define metadata management with systemic use of both security and governance.
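The two steps above — a single logical access layer over heterogeneous back ends, plus governance applied in one place — can be sketched in a few lines of Python. This is an illustrative toy, not a reference implementation: the `InMemoryStore` back ends stand in for real cloud databases, and the routing and ACL rules (key prefixes, role names) are invented for the example.

```python
from abc import ABC, abstractmethod

class DataStore(ABC):
    """Back-end-agnostic contract: applications code against this,
    never against a specific cloud database API."""
    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def put(self, key, value): ...

class InMemoryStore(DataStore):
    """Stand-in for one physical database on one cloud."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

class GovernedAccessLayer(DataStore):
    """Step 1: one logical access layer routing to any back end.
    Step 2: metadata-driven governance enforced in a single place."""
    def __init__(self, backends, acl):
        self._backends = backends   # name -> DataStore
        self._acl = acl             # key prefix -> set of allowed roles

    def _check(self, key, role):
        # Governance check happens here, not in every application.
        for prefix, roles in self._acl.items():
            if key.startswith(prefix) and role not in roles:
                raise PermissionError(f"role {role!r} may not access {key!r}")

    def _route(self, key):
        # Routing policy (which cloud holds which data) also lives here.
        name = "eu" if key.startswith("customer/eu/") else "us"
        return self._backends[name]

    def get(self, key, role="analyst"):
        self._check(key, role)
        return self._route(key).get(key)

    def put(self, key, value, role="analyst"):
        self._check(key, role)
        self._route(key).put(key, value)

layer = GovernedAccessLayer(
    backends={"us": InMemoryStore(), "eu": InMemoryStore()},
    acl={"customer/": {"analyst", "admin"}},
)
layer.put("customer/eu/42", {"name": "Anna"}, role="admin")
print(layer.get("customer/eu/42"))
```

Swapping one of the in-memory stands-ins for a real cloud database would mean writing a new `DataStore` adapter; the applications and the governance rules stay untouched, which is exactly the decoupling the two steps aim at.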
The solution occurs at the conceptual level, not with the introduction of another complex array of technology on top of already complex arrays of technology. It’s time to realize that we’re already in a hole. Stop digging.

Read more: Flexible building blocks for the new cloud and data-driven world


Top 8 Kubernetes Best Practices to Get the Most out of It

Article | June 23, 2021

With the growth of cloud technology, many enterprises are now aware of the benefits of adopting hybrid and multi-cloud models. However, when applications shift between different cloud environments, enterprises face a set of challenges in ensuring reliable performance. This is where containers in cloud computing come into play. Containerization is simply the packaging of an application and all its components into a single, portable unit. With containerization rapidly gaining popularity in cloud computing, leading providers, including Amazon Web Services (AWS), Azure, and Google, offer container services and orchestration tools for container creation and deployment.

According to Forrester's 2020 Container Adoption Survey, about 65% of respondents are already using or planning to implement container orchestration platforms as part of their IT transformation approach. Kubernetes, developed by Google, is one of the best-known and most future-ready portable container orchestration platforms. It is a scalable, reliable, robust, and secure platform that can manage and accommodate high traffic for cloud applications and microservices. For optimal performance, implementing Kubernetes best practices and following a tailored configuration model is essential to achieve the platform efficiency your enterprise requires. This article highlights the top eight Kubernetes best practices that will help you orchestrate, scale, control, and automate your enterprise applications. But before we start, let us cover the basic concept of Kubernetes.

What is Kubernetes?

Kubernetes (a.k.a. k8s or "Kube") is Google's open-source container management platform, spanning public, private, and hybrid clouds, that automates many of the manual processes involved in deploying, scaling, and managing containerized applications.
Kubernetes is an ideal platform for hosting cloud-native applications that require quick scaling, like real-time data streaming through Apache Kafka. In simple words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you manage those clusters quickly and efficiently.

Gartner forecasts that worldwide container management revenue will grow strongly from a small base of $465.8 million in 2020 to reach $944 million by 2024. The popularity of Kubernetes across enterprises makes that forecast look achievable. Using Kubernetes, outsourcing data centers to public cloud service providers or providing web hosting to optimize software development processes becomes manageable. Moreover, websites and mobile applications with intricate custom code can deploy Kubernetes on commodity hardware for a cost-effective solution. Kubernetes also lets you fully implement and rely on a container-based infrastructure in production environments, where you need to manage the containers that run the applications and ensure zero downtime. For example, if a container goes down and another container needs to start, Kubernetes handles the situation efficiently through its distributed system framework.

Reasons Behind the Popularity of Kubernetes

Kubernetes is in the headlines, and we hear about it on social media, at user groups, and at conferences. So, what is the reason behind its popularity? According to Kubernetes service providers, it has become the standard container management platform because it offers several advantages:

Scalability: It offers easy scaling of containers across many servers in a cluster through the auto-scaler service, maximizing resource utilization with a simple command, with a UI, or automatically based on CPU utilization.
Flexibility: Kubernetes lets applications operate consistently and efficiently irrespective of the complexity of the requirement.

Storage Orchestration: The open-source nature of Kubernetes gives you the liberty to take advantage of storage orchestration across different cloud environments and shift workloads effortlessly to their destinations.

Automation: Using Kubernetes, you can automatically place containers according to their resource requirements without availability concerns. It helps combine critical and best-effort workloads to drive up utilization and save resources.

Health Checks and Self-Healing: Kubernetes performs health checks and self-heals your containers with auto-replacement, auto-restart, auto-replication, and auto-scaling.

Reliability and Security: Kubernetes offers fault tolerance and clustering, bringing stability and reliability to a project. Built-in data encryption, vulnerability scanning, and similar services enhance its security.

Self-Discovery: Kubernetes enables self-discovery by giving containers their own IP addresses and providing a single DNS name to a group of containers.

Rollout and Rollback Automation: Kubernetes gradually rolls out changes to your application or its configuration while monitoring the application's health to ensure it does not kill all your instances at the same time. Kubernetes rolls back the changes in case of any discrepancy.

8 Kubernetes Best Practices for Efficient Deployment

According to Red Hat's "The State of Enterprise Open Source" report, 85% of the interviewees agree that Kubernetes is key to cloud-native application strategies. Kubernetes evolved from the code that Google used to manage its data centers at scale. Nowadays, organizations use Kubernetes for complete data center outsourcing, web/mobile applications, SaaS support, cloud web hosting, and high-performance computing.
For any platform to operate at its optimum capacity, there are certain best practices you should consider. Below are a few Kubernetes best practices that can improve the efficiency of your production environment.

Use the Latest Version and Enable RBAC

Kubernetes delivers new features, bug fixes, and platform upgrades with each version update. As a rule, you should always use the latest version to make sure your Kubernetes deployment stays optimized. By upgrading to the newest release, you get technical support and a host of advanced security features that control potential threats while fixing reported vulnerabilities.

Enabling RBAC (Role-Based Access Control) helps you control the access granted to users and applications on the system or network. RBAC, stable since Kubernetes 1.8, lets you create authorization policies using the rbac.authorization.k8s.io API group, allowing Kubernetes to grant access to users, add or remove permissions, set up rules, and so on.

Organize with Kubernetes Namespaces

A namespace is a kind of virtual cluster that helps keep your Kubernetes environment organized, secure, and performant. It enables you to create logical partitions of your resources and restrict the scope of user permissions, which makes it useful in a multi-user environment spanning multiple teams or projects. Namespaces cannot be nested inside one another, and each namespaced Kubernetes resource can be in only one namespace. However, it is not necessary to use multiple namespaces to separate slightly different resources, such as different releases of the same software: use labels to separate resources within the same namespace.

Consider Small Container Images

Using large base images may pull in unnecessary additional packages and libraries.
Hence it is worthwhile to use smaller container images, which help you build and ship a high-performing platform quickly. As one of the Kubernetes best practices, consider Alpine Linux images, which are much smaller than typical base images. Alpine images have access to a package repository with the necessary add-ons, so you can install just the packages and libraries your application requires. Smaller container images are also less vulnerable to security threats, as they present smaller attack surfaces.

Setting Up Health Checks

Managing extensive distributed systems can be complex, especially when things are not running perfectly. The primary reason is that multiple components must work together for the system to function, so in case of any discrepancy, the system has to identify and fix it automatically. Kubernetes health checks are a simple way to ensure that application instances are working. If an instance is non-operational or has failed, other services should not access or communicate with it; instead, the system should divert requests to other ready and operational instances and bring your app back to a healthy state.

Kubernetes provides two types of health checks, and it is important to recognize their differences and uses. Readiness probes let Kubernetes identify whether the application is ready to serve traffic before permitting traffic to a pod (the smallest deployable Kubernetes object). A readiness probe fundamentally signals the pod's availability to accept workload traffic and respond to requests; if it fails, Kubernetes halts traffic to the pod until the probe succeeds. The liveness probe lets Kubernetes perform a health check to verify whether the application is operating as desired.
If it fails, Kubernetes removes the pod and starts a replacement.

Setting Kubernetes Resource Usage (Requests and Limits)

Kubernetes uses requests and limits to control resources such as CPU and memory. If a container requests a resource, Kubernetes will only schedule it on a node that can provide that resource. Limits, in turn, ensure a container never exceeds a specified value; the container is automatically restricted if it goes beyond its limit. To get the total resource value of a Kubernetes pod comprising one or more containers, add up the requests and limits of each container. While your Kubernetes cluster might be operational without setting requests and limits, you will start getting stability issues as the workloads scale. Adding requests and limits helps you get the optimal benefit from Kubernetes.

Discovering External Services on Kubernetes

If you want to discover and access services living outside your Kubernetes cluster, you can do so by using the external service endpoint or ConfigMaps directly in your code. Even if you are unwilling to identify those today, you may be compelled to do so tomorrow. Mapping your external services to internal ones gives you the flexibility to bring these services into the cluster in the future while minimizing recoding efforts. Additionally, it helps you easily manage and understand the external services your organization is using.

Running a Database: Whether to Consider It or Not?

Running a database on Kubernetes can bring some benefits, given the automation Kubernetes provides to keep the database application active. However, you should analyze the trade-offs before you start. Failure incidents can occur because the pods (database app containers) are more susceptible to failure than a traditionally hosted or fully managed database.
Databases with concepts like sharding, failover elections, and replication built into their DNA are easier to run on Kubernetes. Simple questions like the following will help you draw up a Kubernetes strategy and decide whether to run a database on it or not:

Are the features of the database Kubernetes-friendly?
Are the workloads of the database compatible with the Kubernetes environment?
What level of Ops workload is acceptable in the Kubernetes environment?

If the answers to all the questions are affirmative, your external database is ready to run in the Kubernetes environment. Otherwise, you should consider other platforms such as a managed DB or a VM.

Termination Practices

Failures are inevitable in distributed systems. Kubernetes helps handle failures by using controllers that keep an eye on the state of your system and restart services that have halted. However, Kubernetes can also forcibly terminate your application as part of the normal operation of the system. It can terminate Kubernetes objects for various reasons, so enabling your application to handle these terminations gracefully is essential to keep the system steady and provide a great user experience.

Winding Up

The CNCF 2020 survey report highlights the progressive inclination toward adopting the Kubernetes platform. Of the 1,324 responses received in 2020, 91% of respondents reported using Kubernetes, 83% of them in production, a steady upsurge from 78% the previous year and 58% in 2018. Adhering to the Kubernetes best practices will give you an opportunity to take your production environment to the next level and meet business requirements. In addition, it will have a positive impact on the Kubernetes market size. Several top marketers and service providers are doing their best to ensure their customers get the desired benefits of Kubernetes deployment in production.
Moreover, they also pitch services on Kubernetes and help businesses gain the most out of it. In a recent interview with Media 7, Red Hat's Director of Product Marketing, Irshaad Raihan, says, "We look to inspire great ideas and help our customers reach for the impossible. Once we have buy-in into the "why," we arm customers with the most relevant data points to help them make a purchase decision around product and vendor."

FAQs

What exactly is Kubernetes?

Kubernetes is an open-source, portable, and scalable platform for container orchestration, automating several manual tasks involved in managing containerized workloads. Kubernetes allows clustering of hosts running Linux containers and helps you manage those clusters quickly and efficiently.

Why is Kubernetes so popular?

Kubernetes has become one of the most efficient container management systems because it offers several advantages, such as easy scaling of containers across many servers in a cluster. It simplifies moving workloads between different types of environments. It also offers high fault tolerance, which contributes to the stability and reliability of workloads, and has built-in security tools that enhance the safety of the platform.

What is an example of Kubernetes?

One of the most popular Kubernetes use cases is the game Pokemon Go. Its developer, Niantic Inc., witnessed more than 500 million downloads with 20 million daily active users. Pokemon Go's parent company was not expecting this kind of traffic, and as a solution they opted for Google Container Engine, powered by Kubernetes.

Where is Kubernetes used?

You can use Kubernetes to manage microservice architectures. Kubernetes simplifies various facets of running a service-oriented application infrastructure. For instance, it can control the allocation of resources and drive traffic for cloud applications and microservices.
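Several of the practices above (a dedicated namespace, a small Alpine-based image, readiness and liveness probes, explicit resource requests and limits, and an ExternalName mapping for an outside dependency) come together in an ordinary Deployment manifest. The sketch below builds one as a plain Python dict and prints it as JSON; in practice such manifests are usually written in YAML, and all the concrete names here (`checkout-api`, the `shop` namespace, the `payments.example.com` endpoint, the probe paths and ports) are hypothetical examples, not values from any real cluster.

```python
import json

# A Deployment combining several of the best practices discussed:
# own namespace, small Alpine-based image, both probe types, and
# explicit requests/limits so the scheduler can place the pod.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "checkout-api", "namespace": "shop"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "checkout-api"}},
        "template": {
            "metadata": {"labels": {"app": "checkout-api"}},
            "spec": {
                "containers": [{
                    "name": "checkout-api",
                    "image": "example.registry/checkout-api:1.4.2-alpine",
                    "readinessProbe": {   # gate traffic until the pod is ready
                        "httpGet": {"path": "/ready", "port": 8080},
                        "initialDelaySeconds": 5,
                    },
                    "livenessProbe": {    # restart the container if it hangs
                        "httpGet": {"path": "/healthz", "port": 8080},
                        "periodSeconds": 10,
                    },
                    "resources": {
                        "requests": {"cpu": "250m", "memory": "128Mi"},
                        "limits":   {"cpu": "500m", "memory": "256Mi"},
                    },
                }],
            },
        },
    },
}

# Mapping an outside dependency to an in-cluster name: if the payments
# service later moves into the cluster, only this Service changes, not
# the application code that resolves the "payments" name.
external_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "payments", "namespace": "shop"},
    "spec": {"type": "ExternalName", "externalName": "payments.example.com"},
}

print(json.dumps(deployment, indent=2))
print(json.dumps(external_service, indent=2))
```

Applied with a command like `kubectl apply -f`, manifests of this shape are what the practices boil down to day to day: the probes drive the health-check behavior described above, and the requests/limits give the scheduler the information it needs to place and constrain the pod.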


Closing Digital Gaps With The Cloud

Article | April 9, 2020

People complain about Germany’s digital backwardness: lack of broadband expansion, poor network coverage, no widespread use of cloud computing, and so on. However, in companies the situation is not as bad as some might think: according to the Bitkom Digital Office Index 2018, a representative survey of 1,108 companies with 20 or more employees, as many as 67 percent of companies are up to date when it comes to the digital office. However, this also means that one in three still has some catching up to do, mostly suffering from processes with points of interruption.


Tufin is the First and Only Vendor to Provide Unified Security Policy Management for the Hybrid Cloud

Article | February 12, 2020

Today we announced Tufin SecureCloud™, the newest addition to the Tufin Orchestration Suite, establishing Tufin as the first and only vendor to unify security policy management across on-premises, cloud-native, and hybrid cloud environments. SecureCloud, a cloud-native SaaS solution, enables organizations to set and automatically apply consistent security policy and micro-segmentation to any application or workload, at any scale, across the hybrid cloud environment. It leverages our knowledge of Kubernetes plus our deep experience with security policy management and our broad integration with all leading firewall and router brands.


