Protect Your Cloud Apps From These 5 Common API Security … – ITPro Today

APIs barely existed two decades ago, but they've now become the glue that holds the world of cloud computing together. APIs play a central role in enabling cloud applications to interface with each other and with the various cloud resources they need to do their jobs.

But APIs have a downside: When they're poorly managed, they can become low-hanging fruit for attackers.

That's why it's critical to ensure that you use APIs securely in the cloud. This article unpacks common API security mistakes that IT organizations run into in order to highlight what not to do if you want to make the most of APIs while also maximizing security.

In many ways, insecure APIs are a DDoS attacker's dream: by issuing repeated calls to an API, attackers can overwhelm the servers hosting it and render the applications that depend on it unusable.

Fortunately, there's a simple way to prevent this type of API attack: throttling. API throttling lets admins limit the number of requests that each client can make to an API in a given time period. Throttling doesn't totally prevent abuse of APIs (it's still possible to launch a DDoS-style attack using a botnet that consists of a large number of independent clients), but it goes a long way toward stopping or mitigating API attacks designed to disrupt application availability.
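
To make the idea concrete, here is a minimal sketch of per-client throttling using a token bucket, written in Python. The rate, burst capacity, and client identifier are illustrative assumptions, not values from the article or from any particular gateway product.

```python
import time
from collections import defaultdict


class TokenBucket:
    """Allow roughly `rate` requests per second per client, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: float(capacity))  # per-client tokens remaining
        self.updated = defaultdict(time.monotonic)          # per-client last refill time

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.updated[client_id]
        self.updated[client_id] = now
        # Refill tokens earned since the last request, capped at the bucket capacity.
        self.tokens[client_id] = min(self.capacity, self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False  # the caller should answer with HTTP 429 Too Many Requests


# Hypothetical policy: at most 5 requests per second per client, bursts of up to 10.
bucket = TokenBucket(rate=5, capacity=10)
if not bucket.allow("client-abc"):
    print("429 Too Many Requests")
```

In a real deployment this logic usually lives in the API gateway or load balancer rather than in application code, but the principle is the same.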

Unless all of the data available through an API is 100% public, the API should require authentication in order to respond to requests. Otherwise, attackers can use the API to access data that should not be available to them, as one attacker did when scraping data from about 700 million LinkedIn users, for example.

The LinkedIn API hack was a bit complicated because the data the attacker scraped was semi-public. It was available on LinkedIn profiles to other LinkedIn users who had access to those profiles. But it wasn't supposed to be available to a random, unauthenticated client making API requests. Basic API authentication would have prevented the abuse that took place in this incident.
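
As a rough illustration of "basic API authentication," the sketch below rejects any request that does not carry a known bearer token before any data is returned. The token store and handler are hypothetical; a production system would use an identity provider or signed tokens rather than a hard-coded set.

```python
import hmac

# Hypothetical store of issued API tokens; in practice, use a database or an
# identity provider, and store hashes rather than raw tokens.
VALID_TOKENS = {"s3cr3t-token-for-client-abc"}


def authenticate(headers: dict) -> bool:
    """Return True only if the request carries a valid bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Compare in constant time to avoid leaking information via timing.
    return any(hmac.compare_digest(token, valid) for valid in VALID_TOKENS)


def handle_profile_request(headers: dict) -> tuple:
    if not authenticate(headers):
        return 401, "Unauthorized"  # never fall through to the data
    return 200, '{"profile": "..."}'


print(handle_profile_request({"Authorization": "Bearer s3cr3t-token-for-client-abc"}))  # (200, ...)
print(handle_profile_request({}))                                                       # (401, 'Unauthorized')
```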

Another API security mistake that can subject your business to an API attack is to assume that just because you don't advertise your API endpoints publicly, no one can find them and you therefore don't need to worry about securing your APIs.

This strategy, which amounts to what security folks call "security by obscurity," is akin to publishing sensitive data on a website but choosing not to share the URL in the hope that no one finds it.

There are situations where you may choose not to advertise an API's location (for example, if the API isn't used by the public, you might share endpoint information only internally). But even so, you should invest just as much in securing the API as you would if it were a fully public API.

From a security standpoint, the fewer APIs you expose and use, the better. Unnecessary APIs are like extraneous libraries on an operating system or abandoned code within an application: They give attackers more potential ways to wreak havoc while offering no value to your business.

So, before you publish a new API, make sure you have a good reason to do so. And be sure, as well, to deprecate APIs that are no longer necessary, rather than leaving them active.

A one-size-fits-all security model often does not work well for APIs. Different API users may have different needs and require different security controls. For example, users who are internal to your business may require a higher level of data access via an API than your customers or partners.

For this reason, it's a best practice to define and enforce API access controls in a granular way. Using an API gateway, establish varying levels of access for different users, whom you could differentiate based on their network locations (requests that originate from your VPN should be treated differently from those coming from the internet, for example) or based on authentication schemes.
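
The sketch below illustrates that kind of granularity: permission scopes vary with whether a request arrives over the VPN and whether it is authenticated. The subnet, scope names, and policy are invented for the example and would come from your gateway's configuration in practice.

```python
import ipaddress

# Hypothetical VPN subnet; internal callers get broader scopes than internet callers.
VPN_RANGE = ipaddress.ip_network("10.8.0.0/16")


def scopes_for(source_ip: str, authenticated: bool) -> set:
    internal = ipaddress.ip_address(source_ip) in VPN_RANGE
    if internal and authenticated:
        return {"read:public", "read:internal", "write:internal"}
    if authenticated:
        return {"read:public", "read:customer"}
    return {"read:public"}  # unauthenticated clients see only public data


def authorize(source_ip: str, authenticated: bool, required_scope: str) -> bool:
    return required_scope in scopes_for(source_ip, authenticated)


print(authorize("10.8.3.7", True, "write:internal"))     # True: VPN client, authenticated
print(authorize("203.0.113.5", True, "write:internal"))  # False: internet client
```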

APIs make it easy to share resources in a cloud environment. But too much sharing via APIs is a bad thing. APIs must be secured with throttling, authentication, and granular access controls in order to keep data and applications secure against attackers looking for ways to abuse APIs.

Most businesses hope cloud will be the catalyst to net zero – TechRadar

A recent AWS-backed survey into the way that companies and business leaders manage decarbonization efforts to reach net zero in Europe by 2050 has found that cloud technology may hold the key (or one of many keys) to success.

The study consulted 4,000 businesses in the UK, France, Germany, and Spain, 96% of which have set emissions reduction targets.

Around three quarters of business leaders believed that technology like cloud hosting would accelerate their journey to net zero by at least two years, helping them to achieve their target by 2048 at the latest.

With that in mind, around 20% claim that they lack the appropriate technology to achieve their net zero goals, and one in five were yet to go cloud-first. Among a number of obstacles holding businesses back was the impact of rising costs and economic uncertainty on a global scale.

Despite the challenges, three quarters of business leaders feel confident in their ability to control greenhouse gas emissions. This is in stark contrast to the just one in ten that measure Scope 3 emissions, which cover indirect emissions occurring in a company's value chain. Just over half of the companies in question were measuring Scope 1 (direct emissions from owned or controlled sources) and Scope 2 (indirect emissions from electricity, heating, cooling, and so on).

"What I think is so interesting here is that business leaders who have already engaged cloud services think they are more successful in delivering carbon reductions," noted Chris Wellise, AWS Director of Sustainability. "The data backs up this view, as cloud offers nearly any company or public body a less carbon-intensive way of managing their IT."

ACE Recognized as the 2022 Most Innovative Cloud Solutions … – openPR

Pompano Beach, FL, December 13, 2022 (PR.com) - Ace Cloud Hosting, a leading cloud computing solutions provider, announced that it has been recognized as the Most Innovative Cloud Solutions Provider for 2022 by the Global Business Awards. ACE received the award in the technology category for delivering continuous innovation and consistent quality in the cloud solutions realm.

The Global Business Awards are coveted awards that celebrate enterprises demonstrating their most authentic and best work in specific categories. Every year, the awards' esteemed panel carefully scrutinizes numerous applications and portfolios to shortlist finalists. This year, the award program focused sharply on recognizing organizations that brought a cohesive mix of innovation, technology, and humanization to the forefront along with their digital transformation solutions.

ACH will share the stage with many top Indian players, such as OLA Cars, GoDaddy, Zomato, EdgeVerve Systems Limited (an Infosys company), Quick Heal, Infosys, Leeway Hertz, and Bureau, in the technology category. "It's gratifying to see our cloud computing expertise, knowledge, inventiveness, and adaptability recognized. This award is a testament to our ability to deliver the highest level of service and create value for partner clients. In addition, it also personifies the dedication and rigor that our teams put into delivering highly successful results for our customers," said Managing Director Vinay Chhabra.

ACH has a strong commitment to building and implementing intelligent cloud solutions that address the most pressing needs of high-growth enterprises. Commenting on the win, Dr. Bindu Rathore, Director (VP-Sales & Marketing), said, "This award is a hallmark of our excellence. ACH is leveraging its investments in innovation, deep technologies, and a talented workforce to help clients accelerate their growth and transformation journey. This momentous award is a reaffirmation of our commitment to consistently deliver differentiated and transformational results."

Dr. Sangeeta Chhabra, Executive Director, RTDS, said, "We are extremely proud to receive this award. ACH has earned it through its steadfast commitment to process innovation, quality, and industry expertise. We are unique in the cloud computing space: our deep industry knowledge, capabilities, and rich portfolio of services set us apart."

ACH has nearly 15 years of experience in solving complex cloud challenges through unconventional business solutions and a commitment to benchmark best practices. The organization is a firm believer in creating out-of-the-box strategies to address the strategic requirements of companies across diverse industries, with minimal impact on their present IT ecosystem. Recently, the organization also received two CPA Practice Advisor Readers' Choice Awards, in the Best Hosted Solution Provider and Best Outsourced Technology Services categories. The winners will be announced on 10th December 2022 at the Waldorf Astoria Dubai Palm Jumeirah.

About ACE

ACH offers business-critical cloud computing solutions that provide vibrant pathways to transcend operations, foster innovation, and create value for partner organizations. The organization enables a conducive IT ecosystem that empowers businesses to work smoothly from anywhere, at any time, in a secure manner. ACH has over 15 years of experience in creating, deploying, and scaling dynamic cloud infrastructure for high-growth enterprises and enabling real-world foundations to support their business growth. Leading organizations are harnessing ACH's Cloud Computing, QuickBooks Hosting, Virtual Desktop Infrastructure, and Managed Security Solutions to challenge the status quo, breaking their previous molds and clearing the groundwork for business success.

What is a Data Center? Working & Best Practices Explained – Spiceworks News and Insights

A data center is defined as a room, a building, or a group of buildings used to house backend computer systems (without a user interface) and supporting systems like cooling capabilities, physical security, networking appliances, and more. This article defines and describes the workings of a data center, including its architecture, types, and best practices.

A data center is a room, a building, or a group of buildings used to house back-end computer systems (without a user interface) and supporting systems like cooling capabilities, physical security, networking appliances, and more. Remote data centers power all cloud infrastructure.

A data center is a physical facility providing the computing power to operate programs, storage to process information, and networking to link people to the resources they need to do their tasks and support organizational operations.

Due to their dense concentration of servers, which are often placed in tiers, data centers are sometimes called server farms. They provide essential services like data storage, backup and recovery, data management, and networking.

Almost every company and government agency needs either its own data center or access to third-party facilities. Some construct and operate data centers in-house, others rent servers from colocation facilities, and others still leverage public cloud-based services from hosts such as Google, Microsoft, and Amazon Web Services (AWS).

In general, there are four recognized tiers of data centers. The numerical tiers represent increasing levels of redundancy in infrastructure, power, and cooling systems, and they define the uptime that customers can expect, with Tier 4 being the highest.

The storage and computing capabilities for apps, information, and content are housed in data centers. Access to this data is a major issue in this cloud-based, application-driven world. Using high-speed packet-optical communication, Data Center Interconnect (DCI) technologies join two or more data centers across short, medium, or long distances.

Further, a hyper-converged data center is built on hyper-converged infrastructure (HCI), a software architecture that consolidates compute, networking, and storage on commodity hardware. Merging software and hardware components into a single data center streamlines processing and management, with the added perk of lowering an organization's IT infrastructure and management costs.

The working of a data center rests on the successful execution of data center operations: the systems and processes that maintain the data center on a daily basis.

Data center operations consist of establishing and managing network resources, assuring data center security, and monitoring power and cooling systems. Different kinds of data centers, differing in size, dependability, and redundancy, are defined by the IT needs of enterprises that operate data centers. The expansion of cloud computing is driving their modernization, including automation and virtualization.

Data centers comprise real or virtual servers linked externally and internally via communication and networking equipment to store, transport, and access digital data. Each server is comparable to a home computer in that it contains a CPU, storage space, and memory but is more powerful. Data centers use software to cluster computers and divide the load among them. To keep all of this up and running, the data center uses the following key elements:

Availability in a data center refers to components being operational at all times. Systems are maintained periodically to guarantee that future activities run smoothly. To increase redundancy, you may arrange a failover in which a server switches duties to a distant server. In IT infrastructure, redundant systems reduce the risk of a single point of failure.
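
As a minimal illustration of failover, the sketch below routes traffic to a standby endpoint whenever the primary stops answering health checks. The URLs and timeout are placeholders; real data centers implement this in load balancers, DNS, or cluster managers rather than in a script.

```python
import urllib.error
import urllib.request

# Placeholder health-check endpoints for a primary server and its standby.
PRIMARY = "https://primary.example.internal/healthz"
STANDBY = "https://standby.example.internal/healthz"


def healthy(url: str, timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def choose_endpoint() -> str:
    """Serve from the primary while it is healthy; otherwise fail over to the standby."""
    return PRIMARY if healthy(PRIMARY) else STANDBY


print("Serving from:", choose_endpoint())
```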

A network operations center (NOC) is a workspace (physical or virtual) for employees or dedicated workers tasked with monitoring, administering, and maintaining the computing resources in a data center. A NOC provides visibility into all of the data center's information and activities, and the staff responsible can view and control visualizations of the networks being monitored.

Unquestionably, power is the most critical aspect of a data center. Colocation equipment or web hosting servers use a dedicated power supply inside the data center. Every data center needs power backups to ensure its servers are continually operational and that overall service availability is maintained.

A safe data center requires the implementation of security mechanisms. First, identify the weaknesses in the DC's infrastructure. Multi-factor authentication, monitoring across the whole building, metal detectors, and biometric systems are a few measures that can be taken to ensure the highest level of security. On-site security personnel are also necessary in a data center.

Power and cooling are equally crucial in a data center. The colocation equipment and web-hosting servers need sufficient cooling to prevent overheating and guarantee their continued operation. A data center should be constructed so that there is enough airflow and the systems are always kept cool.

Uninterruptible power supply (UPS) units, as well as generators, are components of backup systems. During power disruptions, a generator may be configured to start automatically. As long as the generators have fuel, they will remain on during a blackout. UPS systems should provide redundancy so that a failed module does not compromise the overall system's capacity. Regular maintenance of the UPS and its batteries decreases the likelihood of failure during a power outage.

A computerized maintenance management system (CMMS) is among the most effective ways to monitor, measure, and enhance your maintenance plan. This software enables data center management to track the progress of maintenance work performed on their assets and the associated costs, helping to lower maintenance costs and boost internal efficiency.

In a modern data center, artificial intelligence (AI) also plays an essential role. AI enables algorithms to fulfill conventional Data Center Infrastructure Management (DCIM) tasks by monitoring energy distribution, cooling capacity, server traffic, and cyber threats in real time and adjusting for efficiency automatically. AI can shift workloads to underused resources, identify possible component faults, and balance pooled resources, all with minimal human intervention.
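
The snippet below is not AI, but it shows the balancing idea in its simplest form: pick the least-utilized resource for new work. The server names and utilization figures are made up for the example.

```python
# Hypothetical snapshot of per-server utilization (fraction of capacity in use).
utilization = {"rack1-srv01": 0.92, "rack1-srv02": 0.34, "rack2-srv07": 0.58}


def place_workload(util: dict) -> str:
    """Pick the least-utilized server, the simplest form of the balancing an
    AI-driven DCIM system would perform continuously and automatically."""
    return min(util, key=util.get)


print("Schedule new workload on", place_workload(utilization))  # rack1-srv02
```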

The different types of data centers include:

Organizations construct and own these private data centers for their own end users. They may be located on-site or off-site and serve a single organization's IT processes and essential apps. An organization may place a data center off-site to isolate business activities from data center operations during a natural catastrophe, or it may construct its data center in a cooler climate to reduce energy consumption.

Multi-tenant data centers (also called colocation data centers) provide data center space to organizations that want to host their computing gear and servers remotely.

These spaces for rent inside colocation centers are the property of other parties. The renting company is responsible for providing the hardware, while the data center offers and administers the infrastructure, which includes physical area, connectivity, ventilation, and security systems. Colocation is attractive for businesses that want to avoid the high capital costs involved with developing and running their own data centers.

The desire for immediate connection, the expansion of the Internet of Things (IoT), and the requirement for insights and robotics are driving the emergence of edge technologies, which enable processing to take place closer to actual data sources. Edge data centers are compact facilities that tackle the latency issue by being located nearer to the network's edge and data sources.

These data centers are tiny and placed close to the users they serve, allowing for low-latency connections with smart devices. By processing multiple services as near to end users as feasible, edge data centers enable businesses to decrease communication delays and enhance the customer experience.

Hyperscale data centers are intended to host IT infrastructure on a vast scale. These hyperscale computing infrastructures, synonymous with large-scale providers like Amazon, Meta, and Google, optimize hardware density while reducing the expense of cooling and administrative overhead.

Hyperscale data centers, like business data centers, are owned and maintained by the organization they serve, although on a considerably larger scale, for cloud computing platforms and big data retention. The minimum requirements for a hyperscale data center are 5,000 servers, 500 cabinets, and 10,000 square feet of floor space.

These dispersed data centers are operated by third-party or public cloud providers like AWS, Microsoft Azure, and Google Cloud. The leased infrastructure, predicated on an infrastructure-as-a-service approach, enables users to establish a virtual data center within minutes. Remember that, to the cloud provider managing it, a cloud data center operates like any other physical data center.

A modular data center is a module or physical container bundled with ready-to-use, plug-and-play data center elements: servers, storage, networking hardware, UPS, stabilizers, air conditioners, etc. Modular data centers are used on building sites and disaster zones (to take care of alternate care sites during the pandemic, for example). In permanent situations, they are implemented to make space available or to let an organization develop rapidly, such as installing IT equipment to support classrooms in an educational institution.

In a managed data center, a third-party provider provides enterprises with processing, data storage, and other associated services to aid in managing their IT operations. This data center type is deployed, monitored, and maintained by the service provider, who offers the functionalities via a controlled platform.

You may get managed data center services through a colocation facility, cloud-based data centers, or a fixed hosting location. A managed data center might be entirely or partly managed, but these are not multi-tenant by default, unlike colocation.

The modern data center design has shifted from an on-premises infrastructure to one that mixes on-premises hardware with cloud environments, wherein networks, apps, or workloads are virtualized across multiple private and public clouds. This shift has revolutionized the design of data centers, since components are no longer all co-located and may be accessible only over the internet.

Generally speaking, there are four kinds of data center architecture: three- or multi-tier, mesh, mesh point of delivery, and super spine mesh. Let us start with the most common. The multi-tier structure, which consists of the foundation, aggregation, and access layers, has emerged as the most popular architectural approach for corporate data centers.

The mesh data center architecture follows next. The mesh model refers to a topology in which data is exchanged between components through interconnected switches. It can provide basic cloud services thanks to its dependable capacity and minimal latency. Moreover, because of its distributed network topology, the mesh configuration can quickly establish any connection and is less costly to construct.

The mesh point of delivery (PoD) architecture comprises several leaf switches connected inside the PoDs. It is a repeatable design pattern whose components improve the data center's modularity, scalability, and manageability. Consequently, data center managers may rapidly add new PoDs to their existing three-tier topology to meet the extremely low-latency data flows of new cloud apps.

Finally, super spine architecture is suitable for large-scale, campus-style data centers. This kind of data center architecture handles vast volumes of data through east-west data corridors.

In all of these architectural alternatives, a data center comprises a facility and its internal infrastructure. The facility is where the data center is physically located: a big, open space in which infrastructure is installed. Virtually any space is capable of housing IT infrastructure.

Infrastructure is the extensive collection of IT equipment installed inside a facility. This refers to the hardware responsible for running applications and providing business and user services. A traditional IT infrastructure includes, among other elements, servers, storage, computer networks, and racks.

There are no obligatory criteria for designing or building a data center; a data center is intended to satisfy the organization's unique needs. However, the fundamental purpose of any standard is to offer a consistent foundation for best practices. Several modern data center specifications exist, and a company may embrace a few or all of them.

When designing, managing, and optimizing a data center, here are the top best practices to follow:

When developing a data center, it is crucial to provide space for growth. To save costs, data center designers may seek to limit facility capacity to the organization's present needs; nevertheless, this might be a costly error in the long run. Having room available for new equipment is vital as your needs change.

You cannot manage what you do not measure, so monitor energy usage to understand the efficiency of your data center. Power usage effectiveness (PUE), the ratio of total facility energy to the energy delivered to IT equipment, is the standard metric for tracking non-computing energy use such as cooling and power distribution. Measuring PUE frequently is required for optimization, and since seasonal weather variations greatly influence PUE, gathering energy data for the whole year is even more essential.
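
A quick worked example of the PUE calculation, with illustrative (not measured) monthly figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by IT equipment energy.
    A value of 1.0 would mean every kilowatt-hour goes to computing; real facilities
    sit above that because of cooling, lighting, and power-distribution losses."""
    return total_facility_kwh / it_equipment_kwh


# Illustrative numbers: 1.5 GWh consumed by the whole facility, 1.0 GWh by IT gear.
print(round(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000), 2))  # 1.5
```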

Inspections and preventive maintenance are often performed at fixed time intervals to prevent the breakdown of components and systems. Nonetheless, this technique disregards actual operating conditions. Analytics and intelligent monitoring technologies can change maintenance procedures: a powerful analytics platform with machine learning capabilities can forecast maintenance needs.

Even with the declining price of storage, archiving data globally incurs billions of dollars annually. By deleting data that is no longer needed and retaining only what must be kept, the IT infrastructure of data centers is freed of its burden, resulting in decreased cooling expenses and energy consumption and more effective allocation of computing resources and storage.

For data centers, creating backup pathways for networked gear and communication channels in the event of a failure is a big challenge. These redundancies offer a backup system that allows personnel to perform maintenance and execute system upgrades without disrupting service, or to transition to the backup system when the primary system fails. Tier systems within data centers, numbered from one to four, define the uptime that customers may expect (four being the highest).

Data centers are the backbone of modern-day computing. Not only do they house information, but they also support resource-heavy data operations like analysis and modeling. By investing in your data center architecture, you can better support IT and business processes. A well-functioning data center is one with minimal downtime and scalable capacity that keeps costs at an optimum.

Did this article help you understand how data centers work? Tell us on Facebook, Twitter, and LinkedIn. We'd love to hear from you!

Alibaba partners with Avalanche to host nodes – CryptoTvplus

Alibaba Cloud, the cloud computing service and subsidiary of the Alibaba Group, has announced an integration with validators on the Avalanche blockchain. Avalanche announced the partnership, which will see the expansion of Web3 services onto Web2 infrastructure.

Details of the partnership show that users of the Avalanche blockchain can launch validator nodes using the Alibaba Cloud system, including storage and distribution of resources on the largest cloud infrastructure in Asia.

To incentivize validators, Alibaba Cloud is giving Avalanche validators credits for its services via coupons.

Alibaba Cloud, which was launched in 2009 as the cloud service for the Alibaba Group, serves 85 zones in 28 regions globally. It is also tagged as the third largest cloud computing service in the world as businesses around the world rely on its IaaS (Infrastructure as a Service) model to grow and scale.

Elastic computing, network virtualization, database, storage, security, management, and application services are some of the utilities available in the Alibaba cloud system.

Avalanche is an eco-friendly blockchain launched in 2020 with fast finality, on which scalable applications and services can be designed for institutional and individual purposes. Top projects that use Avalanche include BENQi, Aave, Chainlink, Curve, and Sushi. It currently has over 1,000 validators and handles more than 4,500 transactions per second.

Alibaba Cloud is not the only Web2 infrastructure that's integrating and supporting the hosting of blockchain nodes. In November, Google Cloud announced its partnership with Solana to enable validators to use its cloud services for hosting nodes.

For dApp builders, Alibaba has published a guide on how to set up Avalanche nodes using the Alibaba Cloud.

Pentagon Awards $9B Cloud Contract to Amazon, Google, Microsoft, Oracle – Nextgov

The Pentagon on Wednesday announced the awardees of the Joint Warfighting Cloud Capability, or JWCC, contract, with Amazon Web Services, Google, Microsoft and Oracle each receiving an award.

Through the contract, which has a $9 billion ceiling, the Pentagon aims to bring enterprisewide cloud computing capabilities to the Defense Department across all domains and classification levels, with the four companies competing for individual task orders.

Last year, the Defense Department had named the four companies as contenders for the multi-cloud, multi-vendor contract.

"The purpose of this contract is to provide the Department of Defense with enterprise-wide, globally available cloud services across all security domains and classification levels, from the strategic level to the tactical edge," the Defense Department said in a Wednesday announcement.

The awards come after a years-long effort to provide enterprisewide cloud computing across the department, with a significant delay in March as the DOD conducted due diligence with the four vendors.

All four companies issued statements the day after the award.

"We are honored to have been selected for the Joint Warfighting Cloud Capability contract and look forward to continuing our support for the Department of Defense," said Dave Levy, Vice President, U.S. Government, Nonprofit, and Healthcare at AWS. "From the enterprise to the tactical edge, we are ready to deliver industry-leading cloud services to enable the DoD to achieve its critical mission."

"Oracle looks forward to continuing its long history of success with the Department of Defense by providing our highly performant, secure, and cost-effective cloud infrastructure," Ken Glueck, Executive Vice President, Oracle, said in a statement. "Built to enable interoperability, Oracle Cloud Infrastructure will help drive the DoD's multicloud innovation and ensure that our defense and intelligence communities have the best technology available to protect and preserve our national security."

"The selection is another clear demonstration of the trust the DoD places in Microsoft and our technologies," Microsoft Federal President Rick Wagner said in a blog post. "Our work on JWCC will build on the success of our industry-leading cloud capabilities to support national security missions that we have developed and deployed across the department and service branches."

"We are proud to be selected as an approved cloud vendor for the JWCC contract," Karen Dahut, CEO of Google Public Sector, said in a statement.

JWCC itself was announced in July 2021 following the failure and cancellation of the Joint Enterprise Defense Infrastructure, or JEDI, contract, DOD's previous effort aimed at providing commercial cloud capabilities to the enterprise.

Conceptualized in 2017, JEDI was designed to be the Pentagon's war cloud, providing a common and connected global IT fabric at all levels of classification for customer agencies and warfighters. A single-award contract worth up to $10 billion, JEDI would have put a single cloud service provider in charge of hosting and analyzing some of the military's most sensitive data. JEDI was delayed for several years by numerous lawsuits that ultimately caused the Pentagon to reconsider its plan, opting for a multi-cloud approach more common in the private sector.

For many years, Amazon Web Services, by virtue of its 2013 contract with the Central Intelligence Agency, was the only commercial cloud provider with the security accreditations allowing it to host the DOD's most sensitive data. In the interim, however, Microsoft has achieved top-secret accreditation, and Oracle and Google both achieved Impact Level 5, or IL5, accreditation, allowing the two companies to host the department's most sensitive unclassified data in their cloud offerings. Oracle has also achieved top-secret accreditation.

JWCC is just one of several multibillion-dollar cloud contracts the government has awarded over the past few years. In late 2020, the CIA awarded its Commercial Cloud Enterprise, or C2E, contract to five companies: AWS, Microsoft, Google, Oracle and IBM. The contract could be worth tens of billions of dollars, according to contracting documents, and the companies will compete for task orders issued by various intelligence agencies.

Last April, the National Security Agency re-awarded its $10 billion cloud contract, codenamed Wild and Stormy, to AWS following a protest from losing bidder Microsoft. The contract is part of the NSA's modernization of its Hybrid Compute Initiative, which will move some of the NSA's crown jewel intelligence data from internal servers to AWS' air-gapped cloud.

Editor's note: This story was updated to include statements from all four cloud service providers.

Rackspace Incident Highlights How Disruptive Attacks on Cloud Providers Can Be – DARKReading

A Dec. 2 ransomware attack at Rackspace Technology, which the managed cloud hosting company took several days to confirm, is quickly becoming a case study on the havoc that can result from a single well-placed attack on a cloud service provider.

The attack has disrupted email services for thousands of mostly small and midsize organizations. The forced migration to a competitor's platform left some Rackspace customers frustrated and desperate for support from the company. It has also already prompted at least one class-action lawsuit and pushed the publicly traded Rackspace's share price down nearly 21% over the past five days.

"While it's possible the root cause was a missed patch or misconfiguration, there's not enough information publicly available to say what technique the attackers used to breach the Rackspace environment," says Mike Parkin, senior technical engineer at Vulcan Cyber. "The larger issue is that the breach affected multiple Rackspace customers here, which points out one of the potential challenges with relying on cloud infrastructure." The attack shows how if threat actors can compromise or cripple large service providers, they can affect multiple tenants at once.

Rackspace first disclosed something was amiss at 2:20 a.m. EST on Dec. 2 with an announcement it was looking into "an issue" affecting the company's Hosted Exchange environment. Over the next several hours, the company kept providing updates about customers reporting email connectivity and login issues, but it wasn't until nearly a full day later that Rackspace even identified the issue as a "security incident."

By that time, Rackspace had already shut down its Hosted Exchange environment citing "significant failure" and said it did not have an estimate for when the company would be able to restore the service. Rackspace warned customers that restoration efforts could take several days and advised those looking for immediate access to email services to use Microsoft 365 instead. "At no cost to you, we will be providing access to Microsoft Exchange Plan 1 licenses on Microsoft 365 until further notice," Rackspace said in a Dec. 3 update.

The company noted that Rackspace's support team would be available to help administrators configure and set up accounts for their organizations in Microsoft 365. In subsequent updates, Rackspace said it had helped and was helping thousands of its customers move to Microsoft 365.

On Dec. 6, more than four days after its first alert, Rackspace identified the issue that had knocked its Hosted Exchange environment offline as a ransomware attack. The company described the incident as isolated to its Exchange service and said it was still trying to determine what data the attack might have affected. "At this time, we are unable to provide a timeline for restoration of the Hosted Exchange environment," Rackspace said. "We are working to provide customers with archives of inboxes where available, to eventually import over to Microsoft 365."

The company acknowledged that moving to Microsoft 365 is not going to be particularly easy for some of its customers and said it has mustered all the support it can get to help organizations. "We recognize that setting up and configuring Microsoft 365 can be challenging and we have added all available resources to help support customers," it said. Rackspace suggested that as a temporary solution, customers could enable a forwarding option, so mail destined to their Hosted Exchange account goes to an external email address instead.

Rackspace has not disclosed how many organizations the attack has affected, whether it received any ransom demand or paid a ransom, or whether it has been able to identify the attacker. The company did not respond immediately to a Dark Reading request seeking information on these issues. In a Dec. 6 SEC filing, Rackspace warned the incident could cause a loss in revenue for the company's nearly $30 million Hosted Exchange business. "In addition, the Company may have incremental costs associated with its response to the incident."

Messages on Twitter suggest that many customers are furious at Rackspace over the incident and the company's handling of it so far. Many appear frustrated at what they perceive as Rackspace's lack of transparency and the challenges they are encountering in trying to get their email back online.

One Twitter user and apparent Rackspace customer wanted to know about their organization's data. "Guys, when are you going to give us access to our data," the user posted. "Telling us to go to M365 with a new blank slate is not acceptable. Help your partners. Give us our data back."

Another Twitter user suggested that the Rackspace attackers had also compromised customer data in the incident based on the number of Rackspace-specific phishing emails they had been receiving the last few days. "I assume all of your customer data has also been breached and is now for sale on the dark web. Your customers aren't stupid," the user said.

Several others expressed frustration over their inability to get support from Rackspace, and others claimed to have terminated their relationship with the company. "You are holding us hostages. The lawsuit is going to take you to bankruptcy," another apparent Rackspace customer noted.

Davis McCarthy, principal security researcher at Valtix, says the breach is a reminder why organizations should pay attention to the fact that security in the cloud is a shared responsibility. "If a service provider fails to deliver that security, an organization is unknowingly exposed to threats they cannot mitigate themselves," he says. "Having a risk management plan that determines the impact of those known unknowns will help organizations recover during that worst case scenario."

Meanwhile, the lawsuit, filed by California law firm Cole & Van Note on behalf of Rackspace customers, accused the company of "negligence and related violations" around the breach. "That Rackspace offered opaque updates for days, then admitted to a ransomware event without further customer assistance is outrageous," a statement announcing the lawsuit noted.

No details are publicly available on how the attackers might have breached Rackspace's Hosted Exchange environment. But security researcher Kevin Beaumont has said his analysis showed that just prior to the intrusion, Rackspace's Exchange cluster had versions of the technology that appeared vulnerable to the "ProxyNotShell" zero-day flaws in Exchange Server earlier this year.

"It is possible the Rackspace breach happened due to other issues," Beaumont said. But the breach is a general reminder why Exchange Server administrators need to apply Microsoft's patches for the flaws, he added. "I expect continued attacks on organizations via Microsoft Exchange through 2023."

St. Cloud Hosting Summit on the Vision for Its Downtown – KVSC-FM News

By Zac Chapman / Assistant News Director

The city of St. Cloud is bringing in strategists to examine the vision for the future of its downtown with a goal of rebooting the historic area.

The summit is exploring lessons learned from American downtowns throughout COVID-19. The event is bringing community partners, businesses, and others together with the goal of improving the quality of the downtown's vision.

Four speakers are giving presentations at the summit.

Chris Leinberger, named one of the Top 100 Most Influential Urbanists, is a downtown strategist and investor.

Tobias Peter is Assistant Director of the American Enterprise Institute's Housing Center and is focusing on housing market trends and policy.

Mayor Dave Kleis and CentraCare CEO Ken Holmen are speaking about St. Cloud's unique opportunity to create an active, walkable downtown through strategic investment in housing and workforce amenities.

The downtown summit is Monday, Dec. 12, at 6 p.m. at St. Cloud's River Edge Convention Center. The room will open at 5:30 p.m. for a pre-event social and networking. It is free to attend.

Alibaba Cloud Pros and Cons – ITPro Today

Should you use Alibaba Cloud?

That's a question that you may not even think to ask yourself, given that Alibaba's cloud computing services tend to receive much less attention in the media than those of the "Big Three" cloud providers, meaning Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

But when it comes to the diversity of the cloud services available, as well as pricing and geographic coverage, Alibaba Cloud is in many cases a worthy competitor with the other, better-known cloud platforms.

To understand when it does and doesn't make sense to consider Alibaba Cloud, let's examine Alibaba Cloud's pros and cons as a cloud computing platform.

Alibaba Cloud is a public cloud platform. It's owned by Alibaba, a China-based multinational business that is also a major player in the e-commerce and retail industries. Currently, Alibaba Cloud is the fourth-largest public cloud provider globally, after AWS, Azure, and GCP.

Like other major public clouds, Alibaba Cloud offers a broad set of cloud services, such as:

Alibaba Cloud also provides a variety of native management and monitoring tools, the equivalents of solutions like AWS CloudWatch and IAM.

The breadth of Alibaba Cloud's services is one factor that sets Alibaba Cloud apart from "alternative cloud" providers, many of which specialize only in certain types of services (like storage).

Compared with other major public clouds, Alibaba Cloud offers a few notable advantages:

In short, Alibaba Cloud offers the same large selection of core services as AWS, Azure, and GCP. In some cases, Alibaba's services cost less. And when it comes to geographical presence in Asia, Alibaba Cloud beats the Big Three clouds hands-down.

On the other hand, there are common reasons why businesses may opt not to use Alibaba Cloud, including:

To be sure, Alibaba Cloud is still evolving rapidly, and it's possible that these disadvantages will abate as it grows. But for now, Alibaba Cloud remains most heavily invested in the Asia-Pacific region, which means its support for workloads, tools, and engineers that are based in other parts of the world is limited.

There's also not a lot of reason to believe that Alibaba Cloud will be expanding its presence in North American or European markets in the near future. It hasn't added data centers in those regions since the mid-2010s, although it has continued expanding its footprint in Asia. And when Alibaba Cloud talks about engaging the North American market, it's usually in the context of working with North American companies seeking to expand operations in Asia, rather than landing customers that don't have a presence in Alibaba's backyard.

In general, then, it seems that Alibaba Cloud's business strategy is focused on owning the Asia-Pacific market and leaving the rest of the world to AWS, Azure, and GCP, rather than going head-to-head with the Big Three clouds.

To sum up, whether Alibaba Cloud is a good fit for hosting your workloads depends on:

These considerations may change as Alibaba Cloud continues to evolve, especially if the company invests more heavily in markets outside of Asia-Pacific. But at present, Alibaba Cloud's appeal for businesses that don't have a strong presence in Asia remains limited.

Last Year’s Predictions and 4 Kubernetes and Edge Trends to Watch | – Spiceworks News and Insights

The edge computing landscape is evolving fast. How can enterprises best prepare to ride the upcoming trends? In this article, Stewart McGrath, CEO and co-founder of Section, reviews the predictions from last year about Kubernetes and the edge and examines four key trends to look forward to.

As this year draws to a close, I thought it would be a good time to throw out a few predictions about what 2023 holds for the Kubernetes, container orchestration and edge computing landscape. But first, I'd like to hold ourselves accountable and look back on the predictions we made this time last year. In retrospect, how did we score?

1. The use of containers at the edge will continue to grow

The Internet of Things, online gaming, video conferencing and a whole host of emerging use cases mean the use of containers at the edge will continue to grow. Moreover, as usage increases, so too will organizational expectations. Companies will demand more from edge platform providers in terms of support to help ease deployment and ongoing operations.

This one is tough to measure, as there's little data available. This outcome seems inevitable, and anecdotal evidence from conversations with analysts, customers and others in the industry indicates it is, in fact, happening. That said, without hard evidence, I have to give us an N/A on the score check here.

2. Kubernetes will become central to edge computing

Hosting and edge platforms built to support Kubernetes will have a competitive advantage in flexibly supporting modern DevOps teams' requirements. Edge platform providers who can ease integration with Kubernetes-aware environments will attract attention from the growing cloud-native community; for example, leveraging Helm charts to allow application builders to hand over their application manifest and rely on an intelligent edge orchestration system to deploy clusters accordingly.

How about 7.5 out of 10 on this one? The overall ecosystem developing around Cloud Native Computing Foundation (CNCF) technologies is growing quickly and extensively. CNCF projects like KubeVirt, Knative, WASM, Krustlet, Dapr and others indicate the growing acceptance of Kubernetes as an operating system of choice for not only containers but also virtual machines and serverless workloads. Providers of limited distributions for Kubernetes clusters, such as VMware's Tanzu, Rafay Systems and Platform9, continue to build and help customers run on multi-location, always-on footprints, while our location-aware global Kubernetes platform as a service grew substantially in its ability to help customers instantly run Kubernetes workloads in the right place at the right time.

3. CDN attempts to reinvent themselves will gain pace

In the year ahead, content delivery networks (CDNs) will increasingly recognize the need to diversify away from the steadily declining margins of large object (e.g., video and download) delivery. In addition to reinventing themselves as application security platforms, CDNs will continue to lean into the application hosting market. Cloudflare and Fastly have built on their existing infrastructure to deliver distributed serverless. We expect other CDNs will enter and/or expand offerings focused on the application hosting market as they seek to capitalize on their investment in building distributed networks.

I am going to take a 10 out of 10 here. Akamai indicated a major shift when it spent nearly $1 billion acquiring Linode to plunge headlong into the application hosting space and recently announced its investment in data network company Macrometa. Fastly and Cloudflare have continued to expand their Edge offerings and, at recent conferences, reinforced the importance of their Edge compute plays for the future of their companies.

4. Telcos will rise

Telcos will start developing more mature approaches to application hosting and leverage their unique differentiation of massively distributed networks to deliver hosting options at the edge. Additionally, more partnerships will emerge to facilitate the connection between developers and telcos' 5G and edge infrastructure to solve their lack of expertise in this space.

We were too optimistic, so I'll give this one a 5 out of 10. The telcos do seem to be moving in this direction but are moving at a typical telco pace. While players like Lumen have continued to roll out hosting infrastructure in distributed footprints, we did not see a monumental shift released by any telco during 2022.

Overall, I'd give ourselves 22.5 out of 30, or 75% (having removed the N/A score). Definitely a passing mark, but some headroom for excellence this year!

Kubernetes environments allow for the dynamic scheduling of non-related workloads in a single cluster. With the development of greater levels of Kubernetes abstraction and the hardening of security and observability, I can see a world where providers of Kubernetes clusters will announce the availability of their clusters to a general global pool of available resources on which a developer could deploy workloads.

Each cluster will be able to describe its attributes (location, capacity, compliance, etc.), and devs will be able to let an overall orchestration system match workload requirements to the underlying attributes of contributed clusters (e.g., needs GPU, PCI DSS, specific always-on locations, etc.). This will be the next evolution of cloud computing: a dynamic cloud of clusters.
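
A toy sketch of that matching, with entirely hypothetical cluster names and attributes: each cluster advertises what it offers, and a workload is placed only on clusters that satisfy every requirement.

```python
# Hypothetical attribute advertisements from contributed clusters.
clusters = {
    "tokyo-a":   {"region": "apac", "gpu": True,  "pci_dss": False},
    "frankfurt": {"region": "eu",   "gpu": False, "pci_dss": True},
    "virginia":  {"region": "us",   "gpu": True,  "pci_dss": True},
}


def eligible_clusters(requirements: dict, clusters: dict) -> list:
    """Return the clusters whose advertised attributes satisfy every requirement."""
    return [name for name, attrs in clusters.items()
            if all(attrs.get(key) == value for key, value in requirements.items())]


# A workload that needs a GPU and PCI DSS compliance.
print(eligible_clusters({"gpu": True, "pci_dss": True}, clusters))  # ['virginia']
```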

The Kubernetes ecosystem has continued to demonstrate remarkable growth over the past 12 months. I have no doubt we'll see further evolution in the coming year, as the demand for better automation of deployment, scaling and management of containerized applications is clear.

What's your take on the trends predicted? Share your thoughts with us on Facebook, Twitter, and LinkedIn.
