Top 8 Kubernetes Best Practices to Get the Most out of It

SAYANTANI BHATTACHARYA | June 23, 2021

With the growth of cloud technology, many enterprises now recognize the benefits of adopting hybrid and multi-cloud models. However, when workloads shift between different cloud environments, these enterprises face a set of challenges in ensuring reliable application performance. This is where containers come into play. Containerization is simply the packaging of an application and all its components into a single, portable unit. With containerization rapidly gaining popularity, leading cloud providers, including Amazon Web Services (AWS), Azure, and Google, offer container services and orchestration tools for container creation and deployment. According to Forrester's 2020 Container Adoption Survey, about 65% of enterprises are already using or planning to implement container orchestration platforms as part of their IT transformation strategy. Kubernetes, developed by Google, is one of the best-known and most future-ready portable container orchestration platforms. It is a scalable, reliable, robust, and secure platform that can manage high traffic for cloud applications and microservices. To get the most out of it, implementing Kubernetes best practices and following a tailored configuration model is essential to achieving the platform efficiency your enterprise requires.

This article will highlight the top eight Kubernetes best practices that will help you orchestrate, scale, control, and automate your enterprise applications. But before we start, let us review the basic concept of Kubernetes.

What is Kubernetes?

Kubernetes (a.k.a. k8s or "Kube") is Google's open-source container management platform, spanning public, private, and hybrid clouds, that automates many of the manual processes involved in scaling, deploying, and managing containerized applications. Kubernetes is an ideal platform for hosting cloud-native applications that require quick scaling, like real-time data streaming through Apache Kafka. In simple words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you manage those clusters quickly and efficiently. Gartner forecasts that worldwide container management revenue will grow steadily from a small base of $465.8 million in 2020 to reach $944 million by 2024. The popularity of Kubernetes across enterprises makes that forecast look achievable.

Kubernetes makes it manageable to outsource data centers to public cloud service providers or to provide web hosting at scale while optimizing software development processes. Teams running websites and mobile applications with intricate custom code can also deploy Kubernetes on commodity hardware as a cost-effective solution. Moreover, it lets you fully implement and rely on a container-based infrastructure in production environments, where you need to manage the containers that run your applications and ensure zero downtime. For example, if a container goes down and another container needs to start, Kubernetes handles the situation efficiently through its distributed system framework.

Reasons Behind the Popularity of Kubernetes Strategy

Kubernetes is in the headlines, and we hear about it on social media, at user groups, and at conferences. So, what is the reason behind its popularity? According to Kubernetes service providers, it has become the standard container management platform because it offers several advantages:
  • Scalability: Kubernetes makes it easy to scale containers across the many servers in a cluster through its autoscaling services, maximizing resource utilization with a simple command, through a UI, or automatically based on CPU utilization.
  • Flexibility: Kubernetes lets applications operate consistently and efficiently regardless of the complexity of the requirement.
  • Storage Orchestration: The open-source nature of Kubernetes gives you the freedom to orchestrate storage from different cloud environments and shift workloads effortlessly to their destinations.
  • Automation: Kubernetes automatically places containers based on their resource requirements without compromising availability. It can mix critical and best-effort workloads to drive up utilization and save resources.
  • Health Checks and Self-Healing: Kubernetes lets you health-check and self-heal your containers with auto-replacement, auto-restart, auto-replication, and auto-scaling.
  • Reliability and Security: Kubernetes tolerates significant failures through clustering, bringing stability and reliability to a project. Built-in data encryption, vulnerability scanning, and similar services enhance its security.
  • Service Discovery: Kubernetes provides service discovery by giving containers their own IP addresses and assigning a single DNS name to a group of containers.
  • Rollout and Rollback Automation: Kubernetes gradually rolls out changes to your application or its configuration while monitoring the application's health to ensure it does not kill all your instances at the same time, and it rolls back the changes if something goes wrong.

8 Kubernetes Best Practices for Efficient Deployment

According to Red Hat's "The State of Enterprise Open-Source Report," 85% of respondents agree that Kubernetes is key to cloud-native application strategies. Kubernetes evolved from the code that Google used to manage its data centers at scale. Nowadays, organizations use Kubernetes for complete data center outsourcing, web and mobile applications, SaaS support, cloud web hosting, and high-performance computing. For any platform to perform at its optimum capacity, there are certain best practices you should consider. In this article, we will discuss a few Kubernetes best practices that can improve the efficiency of your production environment.

Use The Latest Version and Enable RBAC

Kubernetes ships new features, bug fixes, and platform upgrades with each version update. As a rule, you should run the latest supported version to keep your Kubernetes deployment optimized. Upgrading to the newest release gives you technical support and a host of advanced security features that control potential threats while fixing reported vulnerabilities.

Enabling RBAC (Role-Based Access Control) helps you control which users and applications can access what on the system or network. RBAC, which became stable in Kubernetes 1.8, lets you create authorization policies through the rbac.authorization.k8s.io API group. With it, Kubernetes can grant a user access, add or remove permissions, set up rules, and so on.
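As a minimal sketch of what such a policy looks like, a Role and its RoleBinding might be defined as follows; the pod-reader role, the demo namespace, and the user jane are all hypothetical names:

```yaml
# Role: grants read-only access to Pods in the "demo" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo
  name: pod-reader
rules:
- apiGroups: [""]            # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# RoleBinding: binds the Role above to a specific user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
- kind: User
  name: jane                 # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Applying these manifests with kubectl apply lets jane list and watch pods in the demo namespace, and nothing more.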

Organize With Kubernetes Namespaces

A namespace is a kind of virtual cluster that helps you organize and secure your Kubernetes environment. It counts among the Kubernetes best practices because it lets you create logical partitions, separate your resources, and restrict the scope of user permissions. This makes it well suited to a multi-user environment spanning multiple teams or projects.

Namespaces cannot be nested inside one another, and each namespaced Kubernetes resource can belong to only one namespace. However, you do not need multiple namespaces to distinguish slightly different resources, such as different releases of the same software: use labels to separate resources within the same namespace.
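For example, a single team namespace can hold several releases of the same application, distinguished by labels rather than by separate namespaces; the team-a namespace, myapp image, and canary label below are placeholders:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a              # hypothetical namespace for one team
---
# Within the namespace, a label (not another namespace)
# distinguishes this release of the app from others
apiVersion: v1
kind: Pod
metadata:
  name: myapp-canary
  namespace: team-a
  labels:
    app: myapp
    release: canary
spec:
  containers:
  - name: myapp
    image: myapp:1.1        # hypothetical image
```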

Consider Small Container Images

Full base images often pull in unnecessary packages and libraries. Using smaller container images instead helps you build a high-performing platform quickly. As one of the Kubernetes best practices, consider Alpine Linux images, which are many times smaller than common base images. Alpine images have access to a package repository with the necessary add-ons, so you can install only the packages and libraries your application requires. Smaller container images are also less vulnerable to security threats because they expose a smaller attack surface.
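A sketch of a small Alpine-based image, assuming a Python application for illustration; the package names, file paths, and entry point are placeholders:

```dockerfile
# Build on Alpine instead of a full distribution base image
FROM alpine:3.18
# apk is Alpine's package manager; add only what the app needs,
# and --no-cache avoids leaving the package index in the image
RUN apk add --no-cache python3 py3-pip
WORKDIR /app
COPY . .
RUN pip3 install --no-cache-dir -r requirements.txt
CMD ["python3", "main.py"]
```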

Setting Up Health Checks

Managing extensive distributed systems can be complex, especially when things go wrong. The main source of complexity is that multiple components must work together for the system to function, so when something fails, the system has to identify and fix it automatically. Kubernetes health checks are a simple way to ensure that application instances are working.

Health checks are an effective Kubernetes best practice for determining whether your system is operational. If an instance has failed, other services should not access or communicate with it; instead, the system should divert requests to other ready, operational instances and bring the failed instance back to a healthy state. Kubernetes provides two main types of health checks, and it is important to recognize their differences and uses.

Readiness probes let Kubernetes identify whether the application is ready to serve traffic before routing traffic to a pod (the smallest deployable Kubernetes object). They fundamentally indicate a pod's availability to accept workload traffic and respond to requests. If the readiness probe fails, Kubernetes stops sending traffic to the pod until the probe succeeds.

The liveness probe lets Kubernetes perform a health check to verify whether the application is operating as desired. If it fails, Kubernetes kills the affected container and restarts it.
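As a sketch, both probes can be declared on a container; the /ready and /healthz HTTP endpoints and port 8080 here are assumptions about the application, not Kubernetes defaults:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:1.0          # hypothetical image
    readinessProbe:           # gate traffic until the app can serve it
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:            # restart the container if it hangs
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```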

Setting Kubernetes Resource Usage (Requests and Limits)

Kubernetes uses requests and limits to control resource usage such as CPU and memory. If a container requests a resource, Kubernetes will only schedule it on a node that can provide that resource, while limits ensure a container never exceeds a specified value. A container that goes over its CPU limit is throttled; one that goes over its memory limit is terminated.

To get the total resource value of a Kubernetes pod, which can comprise one or multiple containers, add up the limits and requests of each container. While your Kubernetes cluster might run without resource requests and limits set, you will start hitting stability issues as your workloads scale. Adding requests and limits helps you get the optimal benefit from Kubernetes.
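A minimal sketch of per-container requests and limits; the values are illustrative, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:1.0       # hypothetical image
    resources:
      requests:            # used by the scheduler to pick a node
        cpu: "250m"        # a quarter of a CPU core
        memory: "256Mi"
      limits:              # hard caps at runtime
        cpu: "500m"        # throttled above this
        memory: "512Mi"    # OOM-killed above this
```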

Discovering External Services on Kubernetes

If you want to discover and access services living outside your Kubernetes cluster, you can use the external service endpoint or ConfigMaps directly in your code. Even if you have no need to map those services today, you may well need to tomorrow. Mapping external services to internal ones gives you the flexibility to move those services into the cluster in the future with minimal recoding, and it makes the external services your organization uses easier to manage and understand.
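One common pattern for this mapping is a Service of type ExternalName; this sketch maps a hypothetical external database hostname to an internal service name:

```yaml
# Map an external database to an in-cluster DNS name.
# "orders-db" and "db.example.com" are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: orders-db
spec:
  type: ExternalName
  externalName: db.example.com   # hypothetical external endpoint
```

Pods then resolve orders-db like any in-cluster service; if the database later moves into the cluster, only this Service definition needs to change, not the application code.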

Running a Database: To Consider or Not?

Running a database on Kubernetes can bring benefits from the automation Kubernetes provides to keep the database application up. However, you should analyze those benefits before you start. Failure incidents can occur because pods (the database application's containers) are more susceptible to failure than a traditionally hosted or fully managed database. Databases with concepts like sharding, failover elections, and replication built into their DNA are easier to run on Kubernetes.

Thus, simple questions like the following will help you draw up a Kubernetes strategy and decide whether to run a database on it.
  • Are the features of the database Kubernetes-friendly?
  • Are the workloads of the database compatible with the Kubernetes environment?
  • How much Ops workload is acceptable in the Kubernetes environment?
If the answers to all the questions are affirmative, your database is ready to run in the Kubernetes environment. Otherwise, you should consider other platforms, such as a managed database or a VM.

Termination Practices

Handling failures is inevitable in distributed systems. Kubernetes helps by using controllers that watch the state of your system and restart halted services. However, Kubernetes can also forcibly terminate your application as part of the system's regular operation. It can terminate Kubernetes objects for various reasons, so enabling your application to handle these terminations gracefully is essential to keeping the system steady and providing a great user experience.
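As a sketch, a pod can be given extra drain time and a preStop hook that runs before the container receives SIGTERM; the sleep command here stands in for a real drain step, and the 60-second grace period is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  terminationGracePeriodSeconds: 60   # default is 30 seconds
  containers:
  - name: myapp
    image: myapp:1.0                  # hypothetical image
    lifecycle:
      preStop:
        exec:
          # placeholder for a real connection-drain command
          command: ["/bin/sh", "-c", "sleep 5"]
```

If the container has not exited by the end of the grace period, Kubernetes sends SIGKILL, so the application should finish in-flight work within that window.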

Winding Up

The CNCF 2020 survey report highlights the growing inclination toward adopting the Kubernetes platform. The survey received 1,324 responses in 2020; 91% of respondents report using Kubernetes, 83% of them in production, a steady rise from 78% the previous year and 58% in 2018. Adhering to the Kubernetes best practices will give you an opportunity to take your production environment to the next level and meet your business requirements. In addition, it will have a positive impact on the Kubernetes market size.

Several top vendors and service providers are doing their best to ensure their customers get the desired benefits from Kubernetes deployments in production. They also pitch services on Kubernetes that allow businesses to gain the most from it. In a recent interview with Media 7, Red Hat's Director of Product Marketing, Irshaad Raihan, says, "We look to inspire great ideas and help our customers reach for the impossible. Once we have buy-in into the "why," we arm customers with the most relevant data points to help them make a purchase decision around product and vendor."

FAQs

What exactly is Kubernetes?

Kubernetes is an open-source, portable, and scalable container orchestration platform that automates many of the manual tasks of managing containerized workloads. It lets you cluster hosts running Linux containers and manage those clusters quickly and efficiently.

Why is Kubernetes so popular?

Kubernetes has become one of the most efficient container management systems because it offers several advantages, such as easy scaling of containers across many servers in a cluster. It simplifies moving workloads between different types of environments. It also offers high fault tolerance, which contributes to the stability and reliability of workloads, and has built-in security tools that make the platform safer.

What is an example of Kubernetes?

One of the most popular Kubernetes use cases is the game Pokemon Go. Its developer, Niantic Inc., saw more than 500 million downloads and 20 million daily active users, far more traffic than the company expected. To cope, Niantic opted for Google Container Engine, powered by Kubernetes.

Where is Kubernetes used?

You can use Kubernetes to manage a microservice architecture. Kubernetes simplifies various facets of running a service-oriented application infrastructure; for instance, it can control resource allocation and direct traffic for cloud applications and microservices.

