Vulnerable AWS Lambda Function – Initial Access in Cloud Attacks

June 10, 2022

cloudsecurityalliance
Our security research team has prepared a real attack scenario, explained from both the black box and the white box perspective, showing how a vulnerable AWS Lambda function can be used by attackers as initial access into your cloud environment. Finally, we show the best practices for mitigating this attack vector.

Serverless is becoming mainstream in business applications as a way to achieve scalability, performance, and cost efficiency without managing the underlying infrastructure. These workloads can scale to thousands of concurrent requests per second. One of the most widely used serverless compute services in cloud environments is AWS Lambda.

One essential element of bringing an application to production is security. An error in the code or a lack of user input validation may allow the function to be compromised and could let attackers gain access to your cloud account.

About the AWS Lambda function
AWS Lambda is an event-driven, serverless compute service that lets you run code written in different programming languages and automate actions inside a cloud environment.

One of the main benefits of this approach is that Lambda runs our code on highly available compute infrastructure managed directly by AWS. The cloud provider takes care of all the administrative activities related to the underlying infrastructure, including server and operating system maintenance, automatic scaling, patching, and logging.

The user simply implements their code in the service, and the function is ready to go.
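
As a quick illustration, a Python Lambda function is just a handler that receives the triggering event and a runtime context; a minimal sketch looks like this:

```python
import json

def lambda_handler(event, context):
    # 'event' carries the payload of whatever triggered the function
    # (an S3 notification, an API Gateway request, a scheduled rule, ...).
    print(json.dumps(event))   # written to CloudWatch Logs automatically
    return {"statusCode": 200, "body": "ok"}
```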

Security, a shared pain
From a security perspective, because the service is managed by the cloud provider but still configured by the user, the security concerns and risks are shared between the two parties.

Since the user has no control over the infrastructure behind a specific Lambda function, the security risks of that underlying infrastructure are handled directly by the cloud provider.

Using AWS IAM, the user can restrict the access and the permitted actions of the Lambda function and its components. A misconfiguration of the permissions on the IAM roles or objects used by the Lambda function might cause serious damage, leading attackers inside the cloud environment. Even more importantly, the code implemented in the Lambda function is under the user's control and, as we will see in the next sections, if there are security holes in the code, the function might be used to access the cloud account and move laterally.
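
As an illustration of this point, a least-privilege inline policy for the execution role of a file-tagging function would grant only the S3 and CloudWatch Logs actions the code actually needs. The sketch below is hypothetical: the role, policy, and bucket names are made up.

```python
import json
import boto3

# Hypothetical least-privilege policy: read objects and write tags on one
# bucket, plus the permissions needed to write CloudWatch Logs.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObjectTagging"],
            "Resource": "arn:aws:s3:::example-upload-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:*:*:*",
        },
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="tagging-lambda-role",              # hypothetical role name
    PolicyName="least-privilege-s3-tagging",
    PolicyDocument=json.dumps(policy),
)
```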

Attack Scenarios
We will walk through two attack scenarios using two different testing approaches: black box and white box testing, two of the main approaches used in penetration testing to assess the security posture of an infrastructure, application, or function.

Looking at the Lambda function from different perspectives helps create a better overall picture of its security posture and a better understanding of the possible attacks and the related risks.

Black box vs white box
In black box testing, whoever is attacking the environment has no information about the environment itself or about the internal workings of the software system. In this approach, the attacker needs to make assumptions about the logic behind a specific feature and keep testing those assumptions to find a way in. In our scenario, the attacker has no access to the cloud environment and no internal information about the cloud account or the functions and roles available in it.

In white box testing, the attacker already has internal information which can be used during the attack to achieve their goals. In this case, the attacker has all the information needed to find possible vulnerabilities and security issues.

For this reason, white box testing is considered the most exhaustive way of testing. In our scenario, the attacker has initial read-only access to the cloud environment and can use it to assess what is already deployed and better target the attack.
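
With that read-only access, the natural first step is enumeration: listing the deployed Lambda functions, their runtimes, and the roles they assume, and pulling down the code for review. A minimal boto3 sketch of this reconnaissance (illustrative only, not tied to any specific account) might look like:

```python
import boto3

lam = boto3.client("lambda")

for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        print(fn["FunctionName"], fn.get("Runtime"), fn["Role"])
        # get_function also returns a pre-signed URL for the deployment
        # package, so the function's code can be downloaded and reviewed.
        code_url = lam.get_function(FunctionName=fn["FunctionName"])["Code"]["Location"]
```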

In this attack scenario, the attacker finds a misconfigured S3 bucket open to the public that contains various files owned by the company.

The attacker is able to upload files into the bucket and check each file's configuration once uploaded. A Lambda function is used to calculate the tags for each uploaded file, although the attacker knows nothing about the code implemented in the Lambda.

We can be pretty confident there is an AWS Lambda function behind those tag values. The function appears to be triggered when a new object is created in the bucket. The two tags, Path and Size, seem to be calculated dynamically for each file, perhaps by executing OS commands to retrieve the information.

We can assume the file name is used to look up the file inside the OS and to calculate its size. In other words, the file name might be user input that ends up in the OS command which retrieves the information to put in the tags. Missing user input validation might allow an attacker to submit unwanted input or execute arbitrary commands on the machine.
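
We cannot see the real code in a black box test, but a hypothetical, intentionally vulnerable version of such a tagging function might look like the sketch below, where the object key flows straight into a shell command:

```python
import subprocess
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    local_path = f"/tmp/{key}"
    s3.download_file(bucket, key, local_path)

    # DANGEROUS: the user-controlled file name is interpolated into a shell command
    size = subprocess.check_output(f"wc -c {local_path}", shell=True).decode().split()[0]

    s3.put_object_tagging(
        Bucket=bucket,
        Key=key,
        Tagging={"TagSet": [{"Key": "Path", "Value": local_path},
                            {"Key": "Size", "Value": size}]},
    )
```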

In this case, we can try to inject other commands into the file name to achieve remote code execution. Concatenating commands with a semicolon is a common way to append arbitrary commands to the user input, so that the code executes them if the input isn't properly sanitized.
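
Continuing the hypothetical sketch above, a crafted file name shows how the semicolon splits what the shell executes:

```python
key = "report.txt; env"                      # attacker-chosen object name
command = f"wc -c /tmp/{key}"
print(command)                               # wc -c /tmp/report.txt; env
# With shell=True, the shell runs "wc -c /tmp/report.txt" and then "env",
# leaking the function's environment, which includes the temporary AWS
# credentials of the Lambda execution role.
```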

Mitigation
We have seen the attack scenario from the black box and white box perspectives, but what can we do to mitigate it? The proposed scenario involved different AWS components, such as S3 buckets and AWS Lambda, in which some security aspects had been neglected.

In order to successfully mitigate this scenario, we can act on different levels and different features. In particular, we could:

Disable public access for the S3 bucket, so that it is reachable only from inside the account and only by users who are authenticated in the cloud account.
Check the code used inside the Lambda function, making sure there are no security bugs in it and that all user inputs are correctly sanitized, following the guidelines for writing secure code (see the sketch after this list).
Apply the principle of least privilege to all the AWS IAM roles attached to cloud resources, to avoid unwanted actions or possible privilege escalation paths inside the account.
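
For the second point, here is a hedged sketch of the same tagging logic without a shell, rewriting the hypothetical handler shown earlier: the object key is validated against an allow-list and the size comes from os.stat, so a crafted file name can no longer smuggle in extra commands.

```python
import os
import re
import boto3

s3 = boto3.client("s3")
SAFE_KEY = re.compile(r"[\w.\-/]{1,255}")    # letters, digits, _ . - / only

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Reject anything outside the allow-list instead of trying to escape it.
    if not SAFE_KEY.fullmatch(key):
        raise ValueError(f"rejected suspicious object key: {key!r}")

    local_path = os.path.join("/tmp", os.path.basename(key))
    s3.download_file(bucket, key, local_path)
    size = os.stat(local_path).st_size        # no shell involved

    s3.put_object_tagging(
        Bucket=bucket, Key=key,
        Tagging={"TagSet": [{"Key": "Path", "Value": local_path},
                            {"Key": "Size", "Value": str(size)}]},
    )
```
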
Let's look at each of the points above in detail and see how we can enforce those mitigations.

Disable public access for the S3 bucket
An S3 bucket is one of the key storage components in AWS, and buckets are a frequent target for attackers who want to break into cloud accounts.

It's critical to keep S3 buckets as secure as possible, applying all the available security settings and preventing unwanted access to our data or files.

For this specific scenario, the bucket was publicly open, and unauthorized users were able to read and write objects in it. To avoid this behavior, we need to make sure the bucket is available only privately, applying the following security settings to restrict access.
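
For example, the bucket-level Block Public Access settings can be enforced with the AWS SDK; a minimal boto3 sketch (the bucket name is hypothetical) looks like this:

```python
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-upload-bucket",          # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

With all four flags enabled, public ACLs and public bucket policies are blocked or ignored, so the objects are reachable only by principals that are authenticated in the account and explicitly granted access through IAM.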
