Categories
Cloud Hosting

Most businesses hope cloud will be the catalyst to net zero – TechRadar

A recent AWS-backed survey into the way that companies and business leaders manage decarbonization efforts to reach net zero in Europe by 2050 has found that cloud technology may hold the key (or one of many keys) to success.

The study consulted 4,000 businesses in the UK, France, Germany, and Spain, 96% of which have set emissions reduction targets.

Around three quarters of business leaders believed that technology like cloud hosting would accelerate their journey to net zero by at least two years, helping them to achieve their target by 2048 at the latest.

At the same time, around 20% claim that they lack the appropriate technology to achieve their net zero goals, and one in five have yet to go cloud-first. Among a number of obstacles holding businesses back was the impact of rising costs and economic uncertainty on a global scale.

Despite the challenges, three quarters of business leaders feel confident in their ability to control greenhouse gas emissions. This is in stark contrast to the just one in ten that measure scope 3 emissions, which cover indirect emissions occurring in a company's value chain. Just over half of the companies in question were measuring scopes 1 (direct emissions from owned or controlled sources) and 2 (indirect emissions from purchased electricity, heating, cooling, and so on).

"What I think is so interesting here is that business leaders who have already engaged cloud services think they are more successful in delivering carbon reductions," noted Chris Wellise, AWS Director of Sustainability. "The data backs up this view, as cloud offers nearly any company or public body a less carbon-intensive way of managing their IT."

Read more from the original source:

Most businesses hope cloud will be the catalyst to net zero - TechRadar

Categories
Cloud Hosting

Protect Your Cloud Apps From These 5 Common API Security … – ITPro Today

APIs barely existed two decades ago, but they've now become the glue that holds the world of cloud computing together. APIs play a central role in enabling cloud applications to interface with each other and with the various cloud resources they need to do their jobs.

But APIs have a downside: When they're poorly managed, they can become low-hanging fruit for attackers.

Related: Why APIs Are the Foundation of Modern Software Development

That's why it's critical to ensure that you use APIs securely in the cloud. This article unpacks common API security mistakes that IT organizations run into, highlighting what not to do if you want to make the most of APIs while also maximizing security.

In many ways, insecure APIs are a DDoS attacker's dream. The reason is that by issuing repeated calls to APIs, attackers can overwhelm the servers hosting them and render applications that depend on the APIs unusable.

Fortunately, there's a simple way to prevent this type of API attack: throttling. API throttling lets admins limit the number of requests that each client can make to an API in a given time period. Throttling doesn't totally prevent abuse of APIs (it's still possible to launch a DDoS-style attack using a botnet that consists of a large number of independent clients), but it goes a long way toward stopping or mitigating API attacks designed to disrupt application availability.
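
To make the idea concrete, here is a minimal sketch of a fixed-window throttle in Python. The limit, window length, and client identifiers are arbitrary assumptions for the example; in production this would typically be enforced by an API gateway or load balancer rather than in application code.

```python
import time
from collections import defaultdict

# Minimal fixed-window rate limiter: each client may make at most
# `limit` requests per `window_seconds`. Real API gateways expose this
# as configuration, but the underlying logic looks roughly like this.
class Throttle:
    def __init__(self, limit: int = 100, window_seconds: int = 60):
        self.limit = limit
        self.window = window_seconds
        self.counters = defaultdict(lambda: [0, 0.0])  # client_id -> [count, window_start]

    def allow(self, client_id: str) -> bool:
        now = time.time()
        count, start = self.counters[client_id]
        if now - start >= self.window:
            # New window: reset the counter for this client.
            self.counters[client_id] = [1, now]
            return True
        if count < self.limit:
            self.counters[client_id][0] += 1
            return True
        return False  # Over the limit: a real API would answer HTTP 429.

throttle = Throttle(limit=5, window_seconds=60)
for i in range(7):
    print(i, throttle.allow("client-a"))  # The 6th and 7th requests are rejected.
```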

Unless all of the data available through an API is 100% public, the API should require authentication in order to respond to requests. Otherwise, attackers can use the API to access data that should not be available to them, as one attacker did when scraping data from about 700 million LinkedIn users.

The LinkedIn API hack was a bit complicated because the data the attacker scraped was semi-public. It was available on LinkedIn profiles to other LinkedIn users who had access to those profiles. But it wasn't supposed to be available to a random, unauthenticated client making API requests. Basic API authentication would have prevented the abuse that took place in this incident.

Another API security mistake that can subject your business to an API attack is to assume that just because you don't advertise your API endpoints publicly, no one can find them and you therefore don't need to worry about securing your APIs.

This strategy, which amounts to what security folks call "security by obscurity," is akin to publishing sensitive data on a website but choosing not to share the URL in the hope that no one finds it.

There are situations where you may choose not to advertise an API's location (for example, if the API isn't used by the public, you might share endpoint information only internally). But even so, you should invest just as much in securing the API as you would if it were a fully public API.

From a security standpoint, the fewer APIs you expose and use, the better. Unnecessary APIs are like extraneous libraries on an operating system or abandoned code within an application: They give attackers more potential ways to wreak havoc while offering no value to your business.

So, before you publish a new API, make sure you have a good reason to do so. And be sure, as well, to deprecate APIs that are no longer necessary, rather than leaving them active.

A one-size-fits-all security model often does not work well for APIs. Different API users may have different needs and require different security controls. For example, users who are internal to your business may require a higher level of data access via an API than your customers or partners.

For this reason, it's a best practice to define and enforce API access controls in a granular way. Using an API gateway, establish varying levels of access for different users, whom you could differentiate based on their network locations (requests that originate from your VPN should be treated differently from those coming from the internet, for example) or based on authentication schemes.
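
A rough sketch of that kind of tiered policy follows. The network ranges, tiers, and scopes are illustrative assumptions; in practice these rules would live in API gateway configuration rather than application code.

```python
import ipaddress

# Hypothetical tiered access policy: internal (VPN) clients get broader
# scopes than partners or the public internet.
INTERNAL_NETWORKS = [ipaddress.ip_network("10.0.0.0/8")]

SCOPES_BY_TIER = {
    "internal": {"read", "write", "admin"},
    "partner":  {"read", "write"},
    "public":   {"read"},
}

def tier_for_request(source_ip: str, auth_scheme: str) -> str:
    ip = ipaddress.ip_address(source_ip)
    if any(ip in net for net in INTERNAL_NETWORKS):
        return "internal"
    if auth_scheme == "mutual-tls":   # e.g., partners presenting client certificates
        return "partner"
    return "public"

def is_allowed(source_ip: str, auth_scheme: str, requested_scope: str) -> bool:
    return requested_scope in SCOPES_BY_TIER[tier_for_request(source_ip, auth_scheme)]

print(is_allowed("10.1.2.3", "bearer", "admin"))     # True  (VPN client)
print(is_allowed("203.0.113.7", "bearer", "admin"))  # False (internet client)
```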

APIs make it easy to share resources in a cloud environment. But too much sharing via APIs is a bad thing. APIs must be secured with throttling, authentication, and granular access controls in order to keep data and applications secure against attackers looking for ways to abuse APIs.


See the rest here:

Protect Your Cloud Apps From These 5 Common API Security ... - ITPro Today

Categories
Cloud Hosting

What is a Data Center? Working & Best Practices Explained – Spiceworks News and Insights

A data center is defined as a room, a building, or a group of buildings used to house backend computer systems (without a user interface) and supporting systems like cooling capabilities, physical security, networking appliances, and more. This article defines and describes the workings of a data center, including its architecture, types, and best practices.

A data center is a room, a building, or a group of buildings used to house back-end computer systems (without a user interface) and supporting systems like cooling capabilities, physical security, networking appliances, and more. Remote data centers power all cloud infrastructure.

A data center is a physical facility providing the computing power to operate programs, storage to process information, and networking to link people to the resources they need to do their tasks and support organizational operations.

Due to a dense concentration of servers, which are often placed in tiers, data centers are sometimes called server farms. They provide essential services like information storage, backup and recovery, information management, and networking.

Almost every company and government agency needs either its own data center or access to third-party facilities. Some construct and operate them in-house, others rent servers from colocation facilities, and still others leverage public cloud-based services from hosts such as Google, Microsoft, and Amazon Web Services (AWS).

In general, there are four recognized tiers of data centers. The numerical tiers allocated to these data centers represent their redundant infrastructure, power, and cooling systems, and each tier is commonly associated with a defined set of availability values and functionalities.

The storage and computing capabilities for apps, information, and content are housed in data centers. Access to this data is a major issue in this cloud-based, application-driven world. Using high-speed packet-optical communication, Data Center Interconnect (DCI) technologies join two or more data centers across short, medium, or long distances.

Further, a hyper-converged data center is built on hyper-converged infrastructure (HCI), a software architecture that consolidates compute, network, and storage on commodity hardware. The merging of software and hardware components into a single data center streamlines processing and management, with the added perk of lowering an organization's IT infrastructure and management costs.

See More: Want To Achieve Five Nines Uptime? 2 Keys To Maximize Data Center Performance

A data center works through the successful execution of data center operations: the systems and processes that keep the facility running on a daily basis.

These operations include establishing and managing network resources, ensuring data center security, and monitoring power and cooling systems. The kind of data center an enterprise operates, varying in size, dependability, and redundancy, is defined by its IT needs. The expansion of cloud computing is driving data center modernization, including automation and virtualization.

Data centers comprise real or virtual servers linked externally and internally via communication and networking equipment to store, transport, and access digital data. Each server is comparable to a home computer in that it contains a CPU, storage space, and memory but is more powerful. Data centers use software to cluster computers and divide the load among them. To keep all of this up and running, the data center uses the following key elements:

Availability in a data center refers to components that are operational at all times. Systems are maintained periodically to guarantee that future activities run smoothly. You may arrange a failover, in which a server switches duties to a distant server, to increase redundancy. In IT infrastructure, redundant systems reduce the risk of a single point of failure.
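
As a simple illustration of the failover idea, the sketch below checks a primary endpoint's health and routes to a standby when the primary stops responding. The endpoint URLs are placeholders, and real deployments would rely on a load balancer or DNS failover rather than a script like this.

```python
import urllib.request

# Minimal health-check failover sketch: route traffic to the primary
# endpoint while it responds, otherwise fall back to a remote standby.
# Endpoint URLs are placeholders, not real services.
PRIMARY = "https://primary.example.internal/health"
STANDBY = "https://standby.example.internal/health"

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def active_endpoint() -> str:
    # Prefer the primary; switch to the standby only when the primary fails.
    return PRIMARY if is_healthy(PRIMARY) else STANDBY

print("Routing traffic to:", active_endpoint())
```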

A Network Operations Center (NOC) is a workspace (or virtual workplace) for employees or dedicated workers tasked with monitoring, administering, and maintaining the computer resources in a data center. A NOC can supply all of the data center's information and updates on all activities. The responsible person at a NOC can view and control the network visualizations being monitored.

Unquestionably, power is the most critical aspect of a data center. Colocation equipment or web hosting servers use a dedicated power supply inside the data center. Every data center needs power backups to ensure its servers are continually operational and that overall service availability is maintained.

A safe data center requires the implementation of security mechanisms. The first step is identifying the weaknesses in the data center's infrastructure. Multi-factor identification, monitoring across the whole building, metal detectors, and biometric systems are a few measures that can be taken to ensure the highest level of security. On-site security personnel are also necessary.

Power and cooling are equally crucial in a data center. The colocation equipment and web-hosting servers need sufficient cooling to prevent overheating and guarantee their continued operation. A data center should be constructed so that there is enough airflow and the systems are always kept cool.

Uninterruptible power supply (UPS) units and generators are components of backup systems. During power disruptions, a generator may be configured to start automatically. As long as the generators have fuel, they will remain on during a blackout. UPS systems should provide redundancy so that a failed module does not compromise the overall system's capability. Regular maintenance of the UPS and batteries decreases the likelihood of failure during a power outage.

A computerized maintenance management system (CMMS) is among the most effective methods to monitor, measure, and enhance your maintenance plan. This program enables data center management to track the progress of maintenance work performed on their assets and the associated costs. It helps lower maintenance costs and boost internal efficiency.

In a modern data center, artificial intelligence (AI) also plays an essential role. AI enables algorithms to fulfill conventional Data Center Infrastructure Management (DCIM) tasks by monitoring energy distribution, cooling capacity, server traffic, and cyber threats in real time and automatically adjusting for efficiency. AI can shift workloads to underused resources, identify possible component faults, and balance pooled resources, all with minimal human intervention.
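
The sketch below illustrates, in a deliberately simplified form, the kind of workload rebalancing such a system might automate: moving load from heavily utilized servers to underused ones. The server names, utilization figures, and thresholds are made up for the example.

```python
# Minimal sketch of automated rebalancing a DCIM/AI layer might perform:
# shift workload from heavily utilized servers to underused ones.
servers = {"rack1-srv1": 0.92, "rack1-srv2": 0.35, "rack2-srv1": 0.88, "rack2-srv2": 0.20}

HIGH, LOW = 0.85, 0.40  # utilization thresholds that trigger a move

def rebalance(utilization: dict[str, float]) -> list[tuple[str, str]]:
    """Pair each overloaded server with the least-loaded underused server."""
    moves = []
    hot = sorted((s for s, u in utilization.items() if u > HIGH), key=utilization.get, reverse=True)
    cold = sorted((s for s, u in utilization.items() if u < LOW), key=utilization.get)
    for src, dst in zip(hot, cold):
        moves.append((src, dst))  # in practice: migrate a VM or container from src to dst
    return moves

for src, dst in rebalance(servers):
    print(f"migrate workload: {src} -> {dst}")
```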

See More: What Is Enterprise Data Management (EDM)? Definition, Importance, and Best Practices

The different types of data centers include:

Organizations construct and own these private data centers for their own end users. They may be located on-site or off-site and serve a single organization's IT processes and essential apps. An organization may site its data center away from its business activities to insulate operations from a natural catastrophe, or it may construct its data center in a cooler climate to reduce energy consumption.

Multi-tenant data centers (also called colocation data centers) provide data center space to organizations that want to host their computing hardware and servers remotely.

These spaces for rent inside colocation centers are the property of other parties. The renting company is responsible for providing the hardware, while the data center offers and administers the infrastructure, which includes physical area, connectivity, ventilation, and security systems. Colocation is attractive for businesses that want to avoid the high capital costs involved with developing and running their own data centers.

The desire for immediate connection, the expansion of the Internet of Things (IoT), and the requirement for insights and robotics are driving the emergence of edge technologies, which enable processing to take place closer to actual data sources. Edge data centers are compact facilities that tackle the latency issue by being located nearer to the network's edge and data sources.

These data centers are small and placed close to the users they serve, allowing for low-latency connections with smart devices. By processing multiple services as near to end users as feasible, edge data centers enable businesses to decrease communication delays and enhance the customer experience.

Hyperscale data centers are intended to host IT infrastructure on a vast scale. These hyperscale computing infrastructures, synonymous with large-scale providers like Amazon, Meta, and Google, optimize hardware density while reducing the expense of cooling and administrative overhead.

Hyperscale data centers, like business data centers, are owned and maintained by the organization they serve, although on a considerably broader scale, for cloud computing platforms and big data retention. The minimum requirements for a hyperscale data center are 5,000 servers, 500 cabinets, and 10,000 square feet of floor space.

These dispersed data centers are operated by third-party or public cloud providers like AWS, Microsoft Azure, and Google Cloud. The leased infrastructure, based on an infrastructure-as-a-service model, enables users to establish a virtual data center within minutes. Remember that cloud data centers operate like any other physical data center type for the cloud provider managing them.

See More: What Is a Data Catalog? Definition, Examples, and Best Practices

A modular data center is a module or physical container bundled with ready-to-use, plug-and-play data center elements: servers, storage, networking hardware, UPS, stabilizers, air conditioners, etc. Modular data centers are used on building sites and in disaster zones (for example, to support alternate care sites during the pandemic). In permanent situations, they are deployed to make space available or to let an organization expand rapidly, such as installing IT equipment to support classrooms in an educational institution.

In a managed data center, a third-party provider provides enterprises with processing, data storage, and other associated services to aid in managing their IT operations. This data center type is deployed, monitored, and maintained by the service provider, who offers the functionalities via a controlled platform.

You may get managed data center services through a colocation facility, cloud-based data centers, or a fixed hosting location. A managed data center might be entirely or partly managed, but these are not multi-tenant by default, unlike colocation.

See More: What Is Data Modeling? Process, Tools, and Best Practices

The modern data center design has shifted from an on-premises infrastructure to one that mixes on-premises hardware with cloud environments wherein networks, apps, or workloads are virtualized across multiple private and public clouds. This innovation has revolutionized the design of data centers since all components are no longer co-located and may only be accessible over the Internet.

Generally speaking, there are four kinds of data center architectures: mesh, three-tier (or multi-tier), mesh point of delivery, and super spine mesh. Let us start with the most common. The multi-tier structure, which consists of the core, aggregation, and access layers, has emerged as the most popular architectural approach for corporate data centers.

The mesh data center architecture follows next. The mesh network model refers to a topology in which data is exchanged between components through interconnected switches. It can provide basic cloud services thanks to its dependable capacity and minimal latency. Moreover, because of its distributed network topology, the mesh configuration can quickly establish any connection and is less costly to construct.

The mesh point of delivery (PoD) architecture comprises several leaf switches connected inside PoDs. It is a recurring design pattern whose components improve the data center's modularity, scalability, and manageability. Consequently, data center managers can rapidly add new data center architecture to their existing three-tier topology to meet the extremely low-latency data flows of new cloud apps.

Finally, super spine architecture is suitable for large-scale, campus-style data centers. This kind of data center architecture handles vast volumes of data through east-west data corridors.

In each of these architectural alternatives, the data center comprises a facility and its internal infrastructure. The facility is where the data center is physically located: a big, open space in which infrastructure is installed. Virtually any space is capable of housing IT infrastructure.

Infrastructure is the extensive collection of IT equipment installed inside a facility. This refers to the hardware responsible for running applications and providing business and user services. A traditional IT infrastructure includes, among other elements, servers, storage, computer networks, and racks.

There are no obligatory criteria for designing or building a data center; a data center is intended to satisfy the organization's unique needs. However, the fundamental purpose of any standard is to offer a consistent foundation for best practices. Several modern data center specifications exist, and a company may embrace a few or all of them.

See More: What Is Kubernetes Ingress? Meaning, Working, Types, and Uses

When designing, managing, and optimizing a data center, here are the top best practices to follow:

When developing a data center, it is crucial to provide space for growth. To save costs, data center designers may seek to limit facility capacity to the organization's present needs; nevertheless, this can be a costly error in the long run. Having room available for new equipment is vital as your needs change.

You cannot regulate what you do not measure, so monitor energy usage to understand the efficiency of your data center. Power usage effectiveness (PUE), the ratio of total facility energy to the energy consumed by IT equipment, is the standard metric for tracking non-computing energy use such as cooling and power transmission. Measuring PUE frequently is required for optimal use, and since seasonal weather variations greatly influence PUE, gathering energy data for the whole year is even more essential.
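
A quick worked example of the PUE calculation is shown below. The monthly readings are invented for illustration; real figures would come from facility and IT power meters.

```python
# Power usage effectiveness (PUE) = total facility energy / IT equipment energy.
# A value of 1.0 would mean every kilowatt-hour goes to computing; real
# facilities land above that because of cooling and power losses.
monthly_readings = [
    # (month, total_facility_kwh, it_equipment_kwh) -- illustrative values
    ("Jan", 520_000, 340_000),
    ("Jul", 610_000, 345_000),   # summer cooling load pushes PUE up
    ("Oct", 540_000, 342_000),
]

for month, total, it in monthly_readings:
    print(f"{month}: PUE = {total / it:.2f}")

annual_total = sum(t for _, t, _ in monthly_readings)
annual_it = sum(i for _, _, i in monthly_readings)
print(f"Aggregate PUE over sampled months = {annual_total / annual_it:.2f}")
```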

Inspections and preventative maintenance are often performed at fixed time intervals to prevent the breakdown of components and systems. Nonetheless, this approach disregards actual operating conditions. Utilizing analytics and intelligent monitoring technologies can change maintenance procedures: a powerful analytics platform with machine learning capabilities can forecast maintenance needs.

Even with the declining price of computer storage, global archiving costs billions of dollars annually. By deleting unneeded data and retaining only what is necessary, the IT infrastructure of data centers is freed of its burden, resulting in decreased cooling expenses and energy consumption and more effective allocation of computing resources and storage.

For data centers, creating backup pathways for networked gear and communication channels in the event of a failure is a big challenge. These redundancies offer a backup system that allows personnel to perform maintenance and execute system upgrades without disturbing service or to transition to the backup system when the primary system fails. Tier systems within data centers, numbered from one to four, define the uptime that customers may expect (4 being the highest).
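
As a worked example of what those tiers imply, the snippet below converts commonly cited tier availability percentages into hours of downtime per year. The exact figures vary by standard and are included here only for illustration.

```python
# Translate an availability percentage into allowed downtime per year.
# The tier availability figures below are commonly cited values and are
# included purely for illustration.
HOURS_PER_YEAR = 365 * 24

commonly_cited_availability = {
    "Tier I": 0.99671,
    "Tier II": 0.99741,
    "Tier III": 0.99982,
    "Tier IV": 0.99995,
}

for tier, availability in commonly_cited_availability.items():
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{tier}: ~{downtime_hours:.1f} hours of downtime per year")
```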

See More: Why the Future of Database Management Lies In Open Source

Data centers are the backbone of modern-day computing. Not only do they house information, but they also support resource-heavy data operations like analysis and modeling. By investing in your data center architecture, you can better support IT and business processes. A well-functioning data center is one with minimal downtime and scalable capacity, while keeping costs at an optimal level.

Did this article help you understand how data centers work? Tell us on Facebook, Twitter, and LinkedIn. We'd love to hear from you!

Read more here:

What is a Data Center? Working & Best Practices Explained - Spiceworks News and Insights

Categories
Cloud Hosting

Alibaba partners with Avalanche to host nodes – CryptoTvplus

Alibaba Cloud, the cloud computing service and subsidiary of the Alibaba Group, has announced an integration with validators on the Avalanche blockchain. Avalanche announced the partnership, which will see the expansion of Web3 services into Web2 infrastructure.

Details of the partnership show that users of the Avalanche blockchain can launch validator nodes using the Alibaba Cloud system, including storage and distribution of resources on the largest cloud infrastructure in Asia.

To incentivize validators, Alibaba Cloud is giving Avalanche validators credit via coupons for its services.

Alibaba Cloud, which was launched in 2009 as the cloud service for the Alibaba Group, serves 85 zones in 28 regions globally. It is also ranked as the third-largest cloud computing service in the world, as businesses around the world rely on its Infrastructure as a Service (IaaS) model to grow and scale.

Elastic computing, network virtualization, database, storage, security, management, and application services are some of the utilities available in the Alibaba cloud system.

Avalanche is an eco-friendly blockchain launched in 2020 with fast finality, on which scalable applications and services can be designed for institutional and individual purposes. Top projects that use Avalanche include BENQi, Aave, Chainlink, Curve, and Sushi. It currently has over 1,000 validators and handles more than 4,500 transactions per second.

Alibaba Cloud is not the only Web2 infrastructure that's integrating and supporting the hosting of blockchain nodes. In November, Google Cloud announced its partnership with Solana to enable validators to use its cloud services for hosting nodes.

For dApp builders, Alibaba has published a guide on how to set up Avalanche nodes using the Alibaba Cloud.


Visit link:

Alibaba partners with Avalanche to host nodes - CryptoTvplus

Categories
Cloud Hosting

Pentagon Awards $9B Cloud Contract to Amazon, Google, Microsoft, Oracle – Nextgov

The Pentagon on Wednesday announced the awardees of the Joint Warfighting Cloud Capability, or JWCC, contract, with Amazon Web Services, Google, Microsoft and Oracle each receiving an award.

Through the contract, which has a $9 billion ceiling, the Pentagon aims to bring enterprisewide cloud computing capabilities to the Defense Department across all domains and classification levels, with the four companies competing for individual task orders.

Last year, the Defense Department had named the four companies as contenders for the multi-cloud, multi-vendor contract.

"The purpose of this contract is to provide the Department of Defense with enterprise-wide, globally available cloud services across all security domains and classification levels, from the strategic level to the tactical edge," the Defense Department said in a Wednesday announcement.

The awards come after a years-long effort to provide enterprisewide cloud computing across the department, with a significant delay in March as the DOD conducted due diligence with the four vendors.

All four companies issued statements the day after the award.

"We are honored to have been selected for the Joint Warfighting Cloud Capability contract and look forward to continuing our support for the Department of Defense," said Dave Levy, Vice President, U.S. Government, Nonprofit, and Healthcare at AWS. "From the enterprise to the tactical edge, we are ready to deliver industry-leading cloud services to enable the DoD to achieve its critical mission."

"Oracle looks forward to continuing its long history of success with the Department of Defense by providing our highly performant, secure, and cost-effective cloud infrastructure," Ken Glueck, Executive Vice President, Oracle, said in a statement. "Built to enable interoperability, Oracle Cloud Infrastructure will help drive the DoD's multicloud innovation and ensure that our defense and intelligence communities have the best technology available to protect and preserve our national security."

"The selection is another clear demonstration of the trust the DoD places in Microsoft and our technologies," Microsoft Federal President Rick Wagner said in a blog post. "Our work on JWCC will build on the success of our industry-leading cloud capabilities to support national security missions that we have developed and deployed across the department and service branches."

"We are proud to be selected as an approved cloud vendor for the JWCC contract," Karen Dahut, CEO of Google Public Sector, said in a statement.

JWCC itself was announced in July 2021 following the failure and cancellation of the Joint Enterprise Defense Infrastructure, or JEDI, contract, DOD's previous effort aimed at providing commercial cloud capabilities to the enterprise.

Conceptualized in 2017, JEDI was designed to be the Pentagon's war cloud, providing a common and connected global IT fabric at all levels of classification for customer agencies and warfighters. A single-award contract worth up to $10 billion, JEDI would have put a single cloud service provider in charge of hosting and analyzing some of the military's most sensitive data. Ultimately, JEDI was delayed for several years by numerous lawsuits that eventually caused the Pentagon to reconsider its plan, opting for a multi-cloud approach more common in the private sector.

For many years, Amazon Web Services, by virtue of its 2013 contract with the Central Intelligence Agency, was the only commercial cloud provider with the security accreditations allowing it to host the DOD's most sensitive data. In the interim, however, Microsoft has achieved top-secret accreditation, and Oracle and Google have both achieved Impact Level 5, or IL5, accreditation, allowing the two companies to host the department's most sensitive unclassified data in their cloud offerings. Oracle has also achieved top-secret accreditation.

JWCC is just one of several multibillion-dollar cloud contracts the government has awarded over the past few years. In late 2020, the CIA awarded its Commercial Cloud Enterprise, or C2E, contract to five companies: AWS, Microsoft, Google, Oracle and IBM. The contract could be worth tens of billions of dollars, according to contracting documents, and the companies will compete for task orders issued by various intelligence agencies.

Last April, the National Security Agency re-awarded its $10 billion cloud contract, codenamed Wild and Stormy, to AWS following a protest from losing bidder Microsoft. The contract is part of the NSA's modernization of its Hybrid Compute Initiative, which will move some of the NSA's crown-jewel intelligence data from internal servers to AWS' air-gapped cloud.

Editor's note: This story was updated to include statements from all four cloud service providers.

Read the original post:

Pentagon Awards $9B Cloud Contract to Amazon, Google, Microsoft, Oracle - Nextgov

Categories
Cloud Hosting

Alibaba Cloud Pros and Cons – ITPro Today

Should you use Alibaba Cloud?

That's a question that you may not even think to ask yourself, given that Alibaba's cloud computing services tend to receive much less attention in the media than those of the "Big Three" cloud providers, meaning Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

Related: Why Public Cloud Vendors Must Get Serious About eBPF Now

But when it comes to the diversity of the cloud services available, as well as pricing and geographic coverage, Alibaba Cloud is in many cases a worthy competitor with the other, better-known cloud platforms.

To understand when it does and doesn't make sense to consider Alibaba Cloud, let's examine Alibaba Cloud's pros and cons as a cloud computing platform.

Related: How the Cloud Made Computing Harder, Not Easier

Alibaba Cloud is a public cloud platform. It's owned by Alibaba, a China-based multinational business that is also a major player in the e-commerce and retail industries. Currently, Alibaba Cloud is the fourth-largest public cloud provider globally, after AWS, Azure, and GCP.

Like other major public clouds, Alibaba Cloud offers a broad set of cloud services, such as:

Alibaba Cloud also provides a variety of native management and monitoring tools, the equivalents of solutions like AWS CloudWatch and IAM.

The breadth of Alibaba Cloud's services is one factor that sets Alibaba Cloud apart from "alternative cloud" providers, many of which specialize only in certain types of services (like storage).

Compared with other major public clouds, Alibaba Cloud offers a few notable advantages:

In short, Alibaba Cloud offers the same large selection of core services as AWS, Azure, and GCP. In some cases, Alibaba's services cost less. And when it comes to geographical presence in Asia, Alibaba Cloud beats the Big Three clouds hands-down.

On the other hand, there are common reasons why businesses may opt not to use Alibaba Cloud, including:

To be sure, Alibaba Cloud is still evolving rapidly, and it's possible that these disadvantages will abate as it grows. But for now, Alibaba Cloud remains most heavily invested in the Asia-Pacific region, which means its support for workloads, tools, and engineers that are based in other parts of the world is limited.

There's also not a lot of reason to believe that Alibaba Cloud will be expanding its presence in North American or European markets in the near future. It hasn't added data centers in those regions since the mid-2010s, although it has continued expanding its footprint in Asia. And when Alibaba Cloud talks about engaging the North American market, it's usually in the context of working with North American companies seeking to expand operations in Asia, rather than landing customers that don't have a presence in Alibaba's backyard.

In general, then, it seems that Alibaba Cloud's business strategy is focused on owning the Asia-Pacific market and leaving the rest of the world to AWS, Azure, and GCP, rather than going head-to-head with the Big Three clouds.

To sum up, whether Alibaba Cloud is a good fit for hosting your workloads depends on:

These considerations may change as Alibaba Cloud continues to evolve, especially if the company invests more heavily in markets outside of Asia-Pacific. But at present, Alibaba Cloud's appeal for businesses that don't have a strong presence in Asia remains limited.


Read more here:

Alibaba Cloud Pros and Cons - ITPro Today

Categories
Cloud Hosting

St. Cloud Hosting Summit on the Vision for Its Downtown – KVSC-FM News

By Zac Chapman / Assistant News Director

The city of St. Cloud is bringing in strategists to examine the vision for the future of its downtown with a goal of rebooting the historic area.

The summit is exploring lessons learned from American downtowns throughout COVID-19. The event is bringing community partners, businesses and others together with the goal of increasing the quality of the downtown's vision.

Four speakers are giving presentations at the summit.

Chris Leinberger, named a Top 100 Most Influential Urbanist, is a downtown strategist and investor.

Tobias Peter is Assistant Director of the American Enterprise Institute's Housing Center and is focusing on housing market trends and policy.

Mayor Dave Kleis and CentraCare CEO Ken Holmen are speaking about St. Cloud's unique opportunity to create an active, walkable downtown through strategic investment in housing and workforce amenities.

The downtown summit is Monday, Dec. 12 at 6 p.m. at St. Cloud's River Edge Convention Center. The room will open at 5:30 p.m. for a pre-event social and networking. It is free to attend.

Go here to see the original:

St. Cloud Hosting Summit on the Vision for Its Downtown - KVSC-FM News

Categories
Cloud Hosting

Rackspace Incident Highlights How Disruptive Attacks on Cloud Providers Can Be – DARKReading

A Dec. 2 ransomware attack at Rackspace Technology, which the managed cloud hosting company took several days to confirm, is quickly becoming a case study on the havoc that can result from a single well-placed attack on a cloud service provider.

The attack has disrupted email services for thousands of mostly small and midsize organizations. The forced migration to a competitor's platform left some Rackspace customers frustrated and desperate for support from the company. It has also already prompted at least one class-action lawsuit and pushed the publicly traded Rackspace's share price down nearly 21% over the past five days.

"While it's possible the root cause was a missed patch or misconfiguration, there's not enough information publicly available to say what technique the attackers used to breach the Rackspace environment," says Mike Parkin, senior technical engineer at Vulcan Cyber. "The larger issue is that the breach affected multiple Rackspace customers here, which points out one of the potential challenges with relying on cloud infrastructure." The attack shows how if threat actors can compromise or cripple large service providers, they can affect multiple tenants at once.

Rackspace first disclosed something was amiss at 2:20 a.m. EST on Dec. 2 with an announcement it was looking into "an issue" affecting the company's Hosted Exchange environment. Over the next several hours, the company kept providing updates about customers reporting email connectivity and login issues, but it wasn't until nearly a full day later that Rackspace even identified the issue as a "security incident."

By that time, Rackspace had already shut down its Hosted Exchange environment citing "significant failure" and said it did not have an estimate for when the company would be able to restore the service. Rackspace warned customers that restoration efforts could take several days and advised those looking for immediate access to email services to use Microsoft 365 instead. "At no cost to you, we will be providing access to Microsoft Exchange Plan 1 licenses on Microsoft 365 until further notice," Rackspace said in a Dec. 3 update.

The company noted that Rackspace's support team would be available to help administrators configure and set up accounts for their organizations in Microsoft 365. In subsequent updates, Rackspace said it had helped, and was continuing to help, thousands of its customers move to Microsoft 365.

On Dec. 6, more than four days after its first alert, Rackspace identified the issue that had knocked its Hosted Exchange environment offline as a ransomware attack. The company described the incident as isolated to its Exchange service and said it was still trying to determine what data the attack might have affected. "At this time, we are unable to provide a timeline for restoration of the Hosted Exchange environment," Rackspace said. "We are working to provide customers with archives of inboxes where available, to eventually import over to Microsoft 365."

The company acknowledged that moving to Microsoft 365 is not going to be particularly easy for some of its customers and said it has mustered all the support it can get to help organizations. "We recognize that setting up and configuring Microsoft 365 can be challenging and we have added all available resources to help support customers," it said. Rackspace suggested that as a temporary solution, customers could enable a forwarding option, so mail destined to their Hosted Exchange account goes to an external email address instead.

Rackspace has not disclosed how many organizations the attack has affected, whether it received any ransom demand or paid a ransom, or whether it has been able to identify the attacker. The company did not respond immediately to a Dark Reading request seeking information on these issues. In a Dec. 6 SEC filing, Rackspace warned the incident could cause a loss in revenue for the company's nearly $30 million Hosted Exchange business. "In addition, the Company may have incremental costs associated with its response to the incident."

Messages on Twitter suggest that many customers are furious at Rackspace over the incident and the company's handling of it so far. Many appear frustrated at what they perceive as Rackspace's lack of transparency and the challenges they are encountering in trying to get their email back online.

One Twitter user and apparent Rackspace customer wanted to know about their organization's data. "Guys, when are you going to give us access to our data," the user posted. "Telling us to go to M365 with a new blank slate is not acceptable. Help your partners. Give us our data back."

Another Twitter user suggested that the Rackspace attackers had also compromised customer data in the incident based on the number of Rackspace-specific phishing emails they had been receiving the last few days. "I assume all of your customer data has also been breached and is now for sale on the dark web. Your customers aren't stupid," the user said.

Several others expressed frustration over their inability to get support from Rackspace, and others claimed to have terminated their relationship with the company. "You are holding us hostages. The lawsuit is going to take you to bankruptcy," another apparent Rackspace customer noted.

Davis McCarthy, principal security researcher at Valtix, says the breach is a reminder why organizations should pay attention to the fact that security in the cloud is a shared responsibility. "If a service provider fails to deliver that security, an organization is unknowingly exposed to threats they cannot mitigate themselves," he says. "Having a risk management plan that determines the impact of those known unknowns will help organizations recover during that worst case scenario."

Meanwhile, the lawsuit, filed by California law firm Cole & Van Note on behalf of Rackspace customers, accused the company of "negligence and related violations" around the breach. "That Rackspace offered opaque updates for days, then admitted to a ransomware event without further customer assistance is outrageous," a statement announcing the lawsuit noted.

No details are publicly available on how the attackers might have breached Rackspace's Hosted Exchange environment. But security researcher Kevin Beaumont has said his analysis showed that just prior to the intrusion, Rackspace's Exchange cluster had versions of the technology that appeared vulnerable to the "ProxyNotShell" zero-day flaws in Exchange Server earlier this year.

"It is possible the Rackspace breach happened due to other issues," Beaumont said. But the breach is a general reminder why Exchange Server administrators need to apply Microsoft's patches for the flaws, he added. "I expect continued attacks on organizations via Microsoft Exchange through 2023."

Link:

Rackspace Incident Highlights How Disruptive Attacks on Cloud Providers Can Be - DARKReading

Categories
Cloud Hosting

LastPass cloud breach involves ‘certain elements’ of customer information – SC Media

LastPass on Wednesday reported that it detected unusual activity within a third-party cloud service that's shared by LastPass and its GoTo affiliate, an event that was the company's second reported breach in three months.

In an update blog to customers, LastPass CEO Karim Toubba said the unauthorized party, using information obtained in the earlier August 2022 incident, gained access to "certain elements" of customer information.

Toubba said LastPass launched an investigation, hired Mandiant, and alerted law enforcement.

"We are working diligently to understand the scope of the incident and identify what specific information has been accessed," wrote Toubba. "In the meantime, we can confirm that LastPass products and services remain fully functional."

"It's concerning to hear that LastPass experienced another security incident following a previous one that was made public back in August," said Chris Vaughan, vice president of technical account management, EMEA, at Tanium. Vaughan said the earlier attack involved source code and technical information being taken through unauthorized access to a third-party storage service the company uses.

"The new breach is more severe because customer information has been accessed, which wasn't the case previously," Vaughan said. "The intruder has done this by leveraging data exposed in the previous incident to gain access to the LastPass IT environment. The company says that passwords remain safely encrypted and that it is working to better understand the scope of the incident and identify exactly what data has been taken. You can bet that the IT security team is working around the clock on this, and their visibility of the network and the devices being connected to it will be severely tested."

Vaughan added that password managers are a challenging, but attractive target for a threat actor, as they can potentially unlock a treasure trove of access to accounts and sensitive customer data in an instant if they are breached.

"However, the benefits of using a secure password management solution often far outweigh the risks of a potential breach," said Vaughan. "When layered with the other security recommendations, it's still one of the best solutions to prevent credential theft and associated attacks. We just have to hope that customer confidence has not been impacted too much by these recent attacks."

Lorri Janssen-Anessi, director, external cyber assessments at BlueVoyant, added that there's a notion of security with cloud hosting, and while that's somewhat true, organizations must still stay aware of the attack surface that exists on cloud-hosted networks, services, or applications.

"Companies must still minimize user privileges, patch vulnerable software, be conscious of what assets are actively hosted, and make sure to have secure configurations, to include the cloud security settings," said Janssen-Anessi.

"Be thoughtful about what you choose to host in the cloud, and don't put critical data or operationally necessary applications that could affect your business continuity in the cloud, as you are at the mercy of the hosting provider and their continuity of services," said Janssen-Anessi. "Like any third-party connection, cloud hosting also needs to be thoughtfully included and secured within your ecosystem."

Read more:

LastPass cloud breach involves 'certain elements' of customer information - SC Media

Categories
Cloud Hosting

Focus on cost and agility to ensure your cloud migration success – CIO

When businesses migrate to public cloud, they expect to enjoy greater agility, resiliency, scalability, security, and cost-efficiency. But while some organizations undergo a relatively smooth journey, others can find themselves embarked on a bumpy trek fraught with time-wasting detours and lurking money pits and with that glowing cloud promise still beyond their reach.

Where do they go awry? Too often, impetuosity and a diminished focus on key business drivers can result in a loss of direction, reports Chris DePerro, SVP, Global Professional Services at NTT.

"When assembling the case for a move to public cloud, organizations tend to overload stakeholder expectations and lose sight of the main imperatives behind the initiative, namely supporting business agility and cost-efficiency," DePerro says. "When a cloud strategy team has those chief objectives nailed down, they can plan supporting considerations such as security, resiliency, and scalability around them more effectively."

The Multicloud Business Impact Brief by 451 Research summarizes the findings of its Voice of the Enterprise: Cloud, Hosting & Managed Services, Budgets & Outlook 2022 survey and identifies costs as a key driver and desired outcome of cloud transformation, as well as a key limiting factor in the use of some of these resources. Indeed, 39% of those surveyed cited concerns about controlling costs.

But cost-efficiencies from public-cloud adoption can be undermined if organizations overspend to get there. Public-cloud services can be a tremendous resource if proper care is taken to plan and optimize the environment rather than just pushing the entire estate to the cloud as is. If optimization isn't done, clients are often left with a larger bill and fall short of their cloud aspirations.

In many instances, the problems can be traced to inexperience in and insufficient understanding of cloud-migration best practices, and a lack of proper planning. Too often, organizations set off with project plans that do not take account of the full gamut of challenges.

Why do organizations fast-forward cloud migration, even if it might result in headaches afterwards?

"Common missteps are not taking time to understand and remediate as many issues as possible within the existing IT estate before migration occurs," says DePerro. "It's crucial that the best migration approaches are selected based on solid discovery for each workload. This determines the approach best suited to an organization's specific applications."

DePerro adds: "Without a thorough pre-migration assessment of its IT estate, an organization might shift its existing inefficiencies into the cloud, where they become even more of a budgetary and performance burden by bumping up the cloud's operational costs."

The capacity to innovate and respond to changing market conditions in a rapid manner is even more vital. Cost-efficiency and expectations of agility should be integral to a properly orchestrated cloud-migration program.

"Agility is not always well understood," explains DePerro. "Increasingly, it's about using the cloud to give organizations the facility and flexibility to achieve their business objectives faster, rather than necessarily having numerous added features from the onset. We are seeing more and more customers trying to modernize in an incremental nature so as not to get bogged down in overly complicated transformation. Often, expediency overrides functionality when it comes to getting apps to market fast so that value can be derived ASAP."

This requirement plays into the increased adoption of multicloud models. The business benefits of multicloud are compelling: organizations want to develop and run their applications in the cloud environment that's best suited to their needs, whether private, public, edge, or hybrid.

This in turn enlarges the complexities of managing workloads across multiple platforms.

"Working with a managed cloud service provider is a proven way to mitigate those complexities," DePerro says, "especially if its cloud reach extends across both multivarious cloud platforms and business industries, as NTT's does. This enables us to share both technical knowledge and cross-sector insight."

Increasing multicloud take-up demonstrates again how rapidly cloud opportunities are evolving, leaving migration roadmaps outdated.

"Ultimately, many organizations will have to go multicloud because it's the only way they will achieve their business objectives," DePerro believes. "For many, multicloud is the new reality." And as with any journey into uncharted territory, being accompanied by a knowledgeable guide such as NTT, which has helped organizations complete their cloud journeys, will help navigate the twists and turns ahead.

Visit NTT's website now to find out how to start your cloud journey with experts who understand the pitfalls and how to overcome them.

The rest is here:

Focus on cost and agility to ensure your cloud migration success - CIO