Categories
Dedicated Server

Bar Talk: PARC Aspen off to tasty start – The Aspen Times

Just over a year after beloved local gathering spot L'Hosteria closed in Aspen, another restaurant that wants to be the hub of the local dining scene has opened in its place.

PARC Aspen officially opened Dec. 16 but enjoyed a soft opening on Dec. 13.

The below-street-level eatery has been fully renovated in a farmhouse style, the walls whitewashed and the space heavily brightened. Gone are the days of the dark corners and dim lighting of PARC's predecessor.

According to the PARC Aspen website, the new restaurant is focused on revitalizing the local dining scene in Aspen with a menu of locally sourced, seasonally curated dishes.

The idea of locally sourced and seasonal (or, as our server put it, "Aspen seasonal") also extends to the cocktail menu, which highlights a few Roaring Fork Valley and Colorado-based distilleries, plus a section dedicated to winter flavors.

The selection of cocktails is broken down into five sections, starting with PARC Aspen specialties. Just about every mainstream spirit option is represented in the specialties section, and the liquor is paired with some more unusual ingredients, such as a tequila drink called the Spiral Incline that has tomatillo in it, and the Newbury Park, which is the drink that caught my eye first because of its flavors.

The Newbury Park is made with Roaring Fork Vodka, raspberry, beet, Domaine De Canton Ginger Liqueur and topped with Fever Tree. The result is a beautiful and bright raspberry-colored drink that you might be able to trick your mind into thinking is just a healthy juice, thanks to the beets. The taste is slightly beet-forward on the first sip, but the finish is all tart raspberry. The warming and spicy presence of the ginger is there but not overwhelming, and I believe the bubbles from the Fever Tree help dilute the ginger flavor, a contribution to keeping the drink light. Vodka, although not my preferred liquor by any means, was definitely the correct spirit choice for the cocktail because it's easily hidden and doesn't take away from the fresh, fun flavor of the drink. I would honestly rate this a 10 out of 10 and can see it becoming an easy go-to drink.

Next up on the PARC Aspen cocktail list is a short section of zero-proof cocktails, two options to be exact, followed by two hot alcoholic drinks (620 Toddy and Mulled Wine), and then a nice selection of classic cocktails, including an Old Fashioned, Jimmy's style, and the spirit-forward Vesper.

My next, and final, cocktail of the evening came from the seasonal Winter Specialties section.

Two of the drinks in this section sound like good, creamy, dessert cocktails, while the other two seem more appropriate for sipping with your meal.

I chose A White Winter, a cocktail made again with Roaring Fork Vodka, St-Germain Elderflower Liqueur, white cranberry, and egg white.

The drink that arrived was totally different from what I expected, but not in a bad way.

It's served in a martini glass and is indeed the color of a white winter, with a nice amount of foam from the egg whites and garnished with a dash of bright-blue sugar crystals. It tastes like you're drinking a cloud, both in texture and flavor. Somehow all the ingredients, when mixed together, seem to cancel each other out. There are floral notes of the St-Germain on the nose, but the first flavor on the tongue is somehow cotton candy-esque. Although the drink starts off white, like a wintery day in Aspen, by the end, it turns an icy blue, thanks to the sugar crystals on top.

I won't go into the food because this is a drink column, after all, but, safe to say, it did not disappoint, and it was full of flavor.

I already have my eyes on some other cocktails I want to try from the menu, such as the Blueberry Glacier from the Winter Specialties section, and I look forward to seeing how the restaurant changes with the seasons as it finds its foothold in Aspen.

Cheers, PARC Aspen, you're off to a great start!

Follow this link:

Bar Talk: PARC Aspen off to tasty start - The Aspen Times

Categories
Cloud Hosting

Managing Ediscovery In The Cloud: Practical, Ethical and Technical … – JD Supra

In this excerpt from our white paper on managing ediscovery in the cloud, we explain the basics of the cloud and its biggest benefits in ediscovery. Click here to download the full white paper.

As an early pioneer of cloud computing in legal tech, Nextpoint has always made the cloud an integral part of its business model. Now, many providers are making the switch to the cloud, and more and more law firms are embracing ediscovery in the cloud. Even if you've used cloud services for a long time, you may never have stopped to consider why the cloud is the best solution. And if you're looking to adopt cloud technology or switch to a new provider, it's important to understand the fundamentals of the cloud and why it's the only ediscovery solution for modern litigators.

The greatest upside is that cloud providers like Amazon, Microsoft, and Rackspace can invest billions of dollars each year in research and development of cloud platforms, providing more robust services and security than any company or law firm can hope to provide. Thanks to those investments, SaaS ediscovery systems cost about 35 percent less than solutions that are hosted in-house.

Nearly 60% of businesses transitioned to the cloud in 2022, and this trend is expected to continue. The benefits that are enticing businesses to adopt cloud computing include:

That's the power of cloud computing, but it is also part of the challenge cloud computing poses for law firms. So much data is being created in today's networked, super-massive computing environments that it can quickly overwhelm litigation teams. Law firms struggle to process and review gigabytes of data, while many types of litigation routinely involve multiple terabytes of information. The cloud is creating a tsunami of digital evidence, but it is also the only cost-effective way to meet the challenge it has created.

Why Cloud Ediscovery?

Ediscovery is ideally suited to maximize the benefits of cloud computing. The volume of electronic data is such that when a legal matter arises, a law firm or corporate counsel can suddenly be faced with a mountain of electronic data, which can cost hundreds of thousands of dollars to process in-house or with the service of outside consultants. Then there are licensing fees, software installation, hardware costs, and consulting fees, all of which make ediscovery costs spiral out of control. As law firms and their clients become increasingly distressed by these kinds of bills, the Software-as-a-Service model promises to cut many of these needless costs by providing an all-in-one processing, stamping, review, and production platform.

The bottom line is that litigation software built for local networks simply cannot cope with exploding volumes of digital data. The right ediscovery cloud platform offers low-cost data hosting, built-in processing functionality, and levels of security no on-premise solution can match.

Security: The Real Danger is Doing it Yourself

In considering on-premise versus cloud solutions, firms that host sensitive client data on-premise are likely to find that they themselves are the greatest security risk. A network hosted on-premise can afford very little in the way of network security beyond what can be found in an off-the-shelf network appliance. Even more problematic, on-premise systems (and private cloud systems in a single facility) offer nothing in the way of physical security or environmental controls beyond what is found in a typical office building. The fact is, many local networks are managed from a supply closet or backroom that anyone with access to an office can enter.

Organizations that rely on local, on-premise solutions often have to fall back on unsecured and archaic mechanisms to move and share data, including mailing it on disks. And depending on the size of an organization, on-premise networks lack redundant storage and backup; if a disaster strikes, data is likely lost forever. The largest and most reputable cloud providers have redundant data centers with robust physical security dispersed across the country, or even the planet.

For example, Amazon Web Services' data centers have extensive setbacks and military-grade perimeter control berms, as well as other natural boundary protection. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff using video surveillance, state-of-the-art intrusion detection systems, and other electronic means.

Now compare that to the security of on-premise servers, your typical hosting provider's server room (private cloud), or that of any other company whose primary business is not data security. The safest bet for your clients' data would be to utilize one of the leading cloud infrastructure providers when moving ediscovery data to the cloud. But whichever ediscovery provider you choose, be sure to do some hard comparative research.

Cloud platforms give users control over large data sets, including permission-based access and security roles that are supported by the highest levels of security. That's because large cloud providers have built-in encryption and security protocols backed up by large teams of security experts with real-time monitoring tools. When considering a cloud ediscovery service, find out the levels of security your provider has in place. Make sure they are taking advantage of the cloud platform in all phases of transmission and data storage, including:

Scalability: Big Data is Here

In the 1970s, Bill Gates was telling people, "No one will need more than 637 kilobytes of memory for a personal computer." Today, personal computers ship with 2-terabyte hard drives.

Organizations today love data. Modern businesses are finding new and interesting ways to generate and use it. The growth of data is clobbering business IT environments, our federal government, federal court system, and pretty much any data-driven business. For example, it took Nextpoint 13 years to reach our first petabyte of data. (That's 1,000 terabytes.) After that, it only took two years to add a second petabyte, and the exponential growth has only continued.

In special circumstances, like a data-intensive ediscovery matter, the computational requirements grow exponentially with the amount of data. This is particularly true in heavy processing, indexing, and analytics-based tasks like native file processing, near-dupe identification, and search functionality. Because cloud computing platforms have virtually unlimited ability to scale up for big jobs, reviewers can use advanced analytic tools to analyze data that would break most computer systems.

Law firms may be tempted to throw more hardware at large data challenges, but when clients that used to provide several gigabytes of data for discovery are now delivering terabytes of structured and unstructured data for review, a few new computers cannot address the problem. Thanks to cloud computing, computing power is now a commodity that can be accessed as needed.

Accessibility: Multi-Party Case Management

Hosting documents in the cloud makes it possible to effectively review huge data collections with reviewers working simultaneously in locations around the world. Data is easily kept organized and there is more control over the review process.

Many matters today involve similar documents and data sets. The cloud gives companies the ability to store a document set, along with the appropriate privilege codes, redactions, and stamping so that it can be accessed in future matters that may arise. They can allow data sets to be reused and accessed by new parties as appropriate.

Cloud platforms offer the ability to reduce duplicative efforts by multiple parties on cases with similar issues, facts, discovery, and relevant case law. With so many actors involved in multidistrict litigation, spread across different jurisdictions and with differing internal technology environments, it is critical that the solution selected encourages collaboration among co-counsel.

Mobility: Working on the Road

There was a time when a lot of companies pretended BYOD (Bring Your Own Device) was just a fad, and that employees should remain tied to applications and data stored on their desktop in a cubicle. The pandemic upended this mentality, and the cloud allowed applications and data to be device independent, freeing the workforce to work wherever and however they needed.

With SaaS services, users can securely access the data from anywhere an internet connection is available. When selecting a cloud platform, make sure it is natively accessible via all devices and OSs, including Macs, PCs, iPads, iPhones, and Android mobile devices.

The Cloud is the Only Answer for Ediscovery

These are the considerations to take into account when assessing the cloud for ediscovery. According to a 2022 report from ACEDS, 38% of firms still use on-premises technology for ediscovery, while 14% use a hybrid cloud solution, and 43% are fully in the cloud. A huge percentage of firms are moving to the cloud each year, but there is still a sizable number of attorneys working with technology not equipped for today's information-rich litigation environment.

There are obvious ethical obligations and technical issues to take into account when moving client data to a cloud repository or transitioning to a new cloud provider. Check back for our next post on cloud-based ediscovery to see all the questions you need to ask when interviewing potential vendors. If a vendor can satisfy these demands, your firm will be able to deliver data processing power, data security, and cost savings that old-school review software cannot hope to match.

Read more:

Managing Ediscovery In The Cloud: Practical, Ethical and Technical ... - JD Supra

Categories
Cloud Hosting

Protect Your Cloud Apps From These 5 Common API Security … – ITPro Today

APIs barely existed two decades ago, but they've now become the glue that holds the world of cloud computing together. APIs play a central role in enabling cloud applications to interface with each other and with the various cloud resources they need to do their jobs.

But APIs have a downside: When they're poorly managed, they can become low-hanging fruit for attackers.

Related: Why APIs Are the Foundation of Modern Software Development

That's why it's critical to ensure that you use APIs securely in the cloud. This article unpacks common API security mistakes that IT organizations run into in order to highlight what not to do if you want to make the most of APIs while also maximizing security.

In many ways, insecure APIs are a DDoS attacker's dream. By issuing repeated calls to APIs, attackers can overwhelm the servers hosting them and render the applications that depend on those APIs unusable.

Fortunately, there's a simple way to prevent this type of API attack: throttling. API throttling lets admins limit the number of requests that each client can make to an API in a given time period. Throttling doesn't totally prevent abuse of APIs (it's still possible to launch a DDoS-style attack using a botnet that consists of a large number of independent clients), but it goes a long way toward stopping or mitigating API attacks designed to disrupt application availability.
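To make the throttling idea concrete, here is a minimal token-bucket sketch in Python. It is an illustration of the technique, not any particular gateway's implementation; the per-client limit, the client identifier, and the HTTP 429 response are assumptions.

```python
import time
from collections import defaultdict

# Hypothetical limit: each client may make up to 60 requests per minute.
RATE = 60 / 60.0          # tokens refilled per second
BURST = 60                # maximum bucket size (burst allowance)

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_id: str) -> bool:
    """Return True if the client still has budget, False if it should be throttled."""
    bucket = _buckets[client_id]
    now = time.monotonic()
    # Refill tokens in proportion to the time elapsed since the last request.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False  # caller should answer with HTTP 429 Too Many Requests

# Example: reject the request once the budget is exhausted.
if not allow_request("client-42"):
    print("429 Too Many Requests")
```

A real gateway applies the same logic per API key or source address and usually layers it with global limits, but the bucket-and-refill loop above is the core of the idea.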

Unless all of the data available through an API is 100% public, the API should require authentication in order to respond to requests. Otherwise, attackers can use the API to access data that should not be available to them, as one attacker did when scraping data from about 700 million LinkedIn users.

The LinkedIn API hack was a bit complicated because the data the attacker scraped was semi-public. It was available on LinkedIn profiles to other LinkedIn users who had access to those profiles. But it wasn't supposed to be available to a random, unauthenticated client making API requests. Basic API authentication would have prevented the abuse that took place in this incident.
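As a rough illustration of basic API authentication (and not a description of LinkedIn's actual systems), the sketch below refuses to serve data to any request that does not carry a recognized bearer token. The token values and the profile handler are hypothetical.

```python
import hmac

# Hypothetical tokens; in practice they would be issued and validated by an
# identity provider or API gateway, never hard-coded like this.
VALID_TOKENS = {"token-for-analytics-service", "token-for-mobile-app"}

def authenticate(headers: dict) -> bool:
    """Reject any request that does not carry a recognized bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking information through timing.
    return any(hmac.compare_digest(token, valid) for valid in VALID_TOKENS)

def handle_profile_request(headers: dict) -> tuple[int, str]:
    if not authenticate(headers):
        return 401, "Unauthorized"          # no data for anonymous clients
    return 200, "{...profile data...}"      # placeholder payload

print(handle_profile_request({"Authorization": "Bearer token-for-mobile-app"}))
```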

Another API security mistake that can subject your business to an API attack is to assume that just because you don't advertise your API endpoints publicly, no one can find them and you therefore don't need to worry about securing your APIs.

This strategy, which amounts to what security folks call "security by obscurity," is akin to publishing sensitive data on a website but choosing not to share the URL in the hope that no one finds it.

There are situations where you may choose not to advertise an API's location (for example, if the API isn't used by the public, you might share endpoint information only internally). But even so, you should invest just as much in securing the API as you would if it were a fully public API.

From a security standpoint, the fewer APIs you expose and use, the better. Unnecessary APIs are like extraneous libraries on an operating system or abandoned code within an application: They give attackers more potential ways to wreak havoc while offering no value to your business.

So, before you publish a new API, make sure you have a good reason to do so. And be sure, as well, to deprecate APIs that are no longer necessary, rather than leaving them active.

A one-size-fits-all security model often does not work well for APIs. Different API users may have different needs and require different security controls. For example, users who are internal to your business may require a higher level of data access via an API than your customers or partners.

For this reason, it's a best practice to define and enforce API access controls in a granular way. Using an API gateway, establish varying levels of access for different users, whom you could differentiate based on their network locations (requests that originate from your VPN should be treated differently from those coming from the internet, for example) or based on authentication schemes.
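Here is one way such tiered access might be expressed, as a hedged Python sketch rather than any specific gateway's configuration language. The VPN address range, role names, and scopes are assumptions for illustration.

```python
import ipaddress

# Hypothetical policy: internal VPN clients get broader access than internet clients.
VPN_RANGE = ipaddress.ip_network("10.8.0.0/16")   # assumed internal address range

ROLE_SCOPES = {
    "internal": {"read", "write", "export"},
    "partner":  {"read", "export"},
    "customer": {"read"},
}

def allowed_scopes(source_ip: str, role: str) -> set:
    """Return the set of API scopes this caller may use."""
    scopes = set(ROLE_SCOPES.get(role, set()))
    if ipaddress.ip_address(source_ip) not in VPN_RANGE:
        # Requests from the public internet lose the most sensitive scope.
        scopes.discard("export")
    return scopes

def authorize(source_ip: str, role: str, required_scope: str) -> bool:
    return required_scope in allowed_scopes(source_ip, role)

print(authorize("10.8.3.7", "internal", "export"))    # True: VPN origin, internal role
print(authorize("203.0.113.9", "partner", "export"))  # False: internet origin
```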

APIs make it easy to share resources in a cloud environment. But too much sharing via APIs is a bad thing. APIs must be secured with throttling, authentication, and granular access controls in order to keep data and applications secure against attackers looking for ways to abuse APIs.


See the rest here:

Protect Your Cloud Apps From These 5 Common API Security ... - ITPro Today

Categories
Cloud Hosting

Most businesses hope cloud will be the catalyst to net zero – TechRadar

A recent AWS-backed survey into the way that companies and business leaders manage decarbonization efforts to reach net zero in Europe by 2050 has found that cloud technology may hold the key (or one of many keys) to success.

The study consulted 4,000 businesses in the UK, France, Germany, and Spain, 96% of which have set emissions reduction targets.

Around three quarters of business leaders believed that technology like cloud hosting would accelerate their journey to net zero by at least two years, helping them to achieve their target by 2048 at the latest.

Even so, around 20% claim that they lack the appropriate technology to achieve their net zero goals, and one in five were yet to go cloud-first. Among the obstacles holding businesses back was the impact of rising costs and economic uncertainty on a global scale.

Despite the challenges, three quarters of business leaders feel confident in their abilities to control greenhouse gas emissions. This is in stark contrast to the just one in ten that measure scope 3 emissions, which cover indirect emissions occurring in the company's value chain. Just over half of the companies in question were measuring scopes 1 (direct emissions from owned or controlled sources) and 2 (indirect emissions from electricity, heating, cooling, and so on).

"What I think is so interesting here is that business leaders who have already engaged cloud services think they are more successful in delivering carbon reductions," noted Chris Wellise, AWS Director of Sustainability. "The data backs up this view, as cloud offers nearly any company or public body a less carbon-intensive way of managing their IT."

Read more from the original source:

Most businesses hope cloud will be the catalyst to net zero - TechRadar

Categories
Cloud Hosting

ACE Recognized as the 2022 Most Innovative Cloud Solutions … – openPR

Pompano Beach, FL, December 13, 2022 --(PR.com)--Ace Cloud Hosting, a leading cloud computing solutions provider, announced that it has been recognized as the Most Innovative Cloud Solutions Provider for 2022 by Global Business Awards. ACE received the award in the technology category for delivering continuous innovation and consistent quality in the cloud solutions realm.

Global Business Awards are coveted awards that celebrate enterprises demonstrating authentic and best work in specific categories. Every year, the esteemed panel of Global Business Awards carefully scrutinizes applications and portfolios to shortlist finalists. This year, the award programs were sharply focused on recognizing organizations that brought a cohesive mix of innovation, technology, and humanization to the forefront along with their digital transformation solutions.

ACH will share the stage with many top Indian players like OLA Cars, GoDaddy, Zomato, EdgeVerve Systems Limited - An Infosys Company, Quick Heal, Infosys, Leeway Hertz, Bureau, etc., in the technology category. "It's gratifying to see our cloud computing expertise, knowledge, inventiveness, and adaptability recognized. This award is a testament to our ability to deliver the highest level of service and create value for partner clients. In addition, it also personifies the dedication and rigor that our teams put into delivering highly successful results for our customers," said Managing Director Vinay Chhabra.

ACH has a strong commitment to building and implementing intelligent cloud solutions to address the most pressing needs of high-growth enterprises. Commenting on the win, Dr. Bindu Rathore, Director (VP-Sales & Marketing), said, "This award is a hallmark of our excellence. ACH is leveraging its investments in innovation, deep technologies, and a talented workforce to help clients accelerate their growth and transformation journey. This momentous award is a reaffirmation of our commitment to consistently deliver differentiated and transformational results."

Dr. Sangeeta Chhabra, Executive Director, RTDS, said, "We are extremely proud to receive this award. ACH has earned the award through its steadfast commitment to process innovation, quality, and industry expertise. We are unique in the cloud computing space: our deep industry knowledge, capabilities, and rich portfolio of services set us apart."

ACH has nearly 15 years of experience in solving complex cloud challenges through unconventional business solutions and a commitment to benchmark best practices. The organization is a firm believer in creating out-of-the-box strategies to address the strategic requirements of companies across diverse industries, with minimal impact on their present IT ecosystem. Recently, the organization also received two CPA Practice Advisor Readers' Choice Awards in the Best Hosted Solution Provider and Best Outsourced Technology Services categories. The winners will be announced on Dec. 10, 2022, at the Waldorf Astoria Dubai Palm Jumeirah.

About ACE
ACH offers business-critical cloud computing solutions that provide vibrant pathways to transcend operations, foster innovation, and create value for partner organizations. The organization enables a conducive IT ecosystem that empowers businesses to smoothly work from anywhere, at any time, in a secure manner. ACH has over 15 years of experience in creating, deploying, and scaling the dynamic cloud infrastructure of high-growth enterprises and enabling real-world foundations to support their business growth. Leading organizations are harnessing ACH's Cloud Computing, QuickBooks Hosting, Virtual Desktop Infrastructure, and Managed Security Solutions to challenge the status quo, breaking their previous molds and clearing the groundwork for business success.

Read more here:

ACE Recognized as the 2022 Most Innovative Cloud Solutions ... - openPR

Categories
Cloud Hosting

What is a Data Center? Working & Best Practices Explained – Spiceworks News and Insights

A data center is defined as a room, a building, or a group of buildings used to house backend computer systems (without a user interface) and supporting systems like cooling capabilities, physical security, networking appliances, and more. This article defines and describes the workings of a data center, including its architecture, types, and best practices.

A data center is a room, a building, or a group of buildings used to house back-end computer systems (without a user interface) and supporting systems like cooling capabilities, physical security, networking appliances, and more. Remote data centers power all cloud infrastructure.

A data center is a physical facility providing the computing power to operate programs, storage to process information, and networking to link people to the resources they need to do their tasks and support organizational operations.

Due to a dense concentration of servers, which are often placed in tiers, data centers are sometimes called server farms. They provide essential services like information storage, recovery and backup, information management, and networking.

Almost every company and government agency needs either its own data center or access to third-party facilities. Some construct and operate them in-house, others rent servers at co-location facilities, and still others leverage public cloud-based services from hosts such as Google, Microsoft, and Amazon Web Services (AWS).

In general, there are four recognized levels of data centers. The numerical tiers allocated to these data centers represent the redundant infrastructure, power, and cooling systems. Commonly assigned to these levels are the following values or functionalities:

The storage and computing capabilities for apps, information, and content are housed in data centers. Access to this data is a major issue in this cloud-based, application-driven world. Using high-speed packet-optical communication, Data Center Interconnect (DCI) technologies join two or more data centers across short, medium, or long distances.

Further, a hyper-converged data center is built on hyper-converged infrastructure (HCI), a software architecture that consolidates compute, network, and storage commodity hardware. Merging software and hardware components into a single data center streamlines processing and management, with the added perk of lowering an organization's IT infrastructure and management costs.

See More: Want To Achieve Five Nines Uptime? 2 Keys To Maximize Data Center Performance

A data center works through the successful execution of data center operations: the systems and processes that maintain the facility on a daily basis.

Data center operations consist of establishing and managing network resources, assuring data center security, and monitoring power and cooling systems. Different kinds of data centers, differing in size, dependability, and redundancy, are defined by the IT needs of enterprises that operate data centers. The expansion of cloud computing is driving their modernization, including automation and virtualization.

Data centers comprise real or virtual servers linked externally and internally via communication and networking equipment to store, transport, and access digital data. Each server is comparable to a home computer in that it contains a CPU, storage space, and memory but is more powerful. Data centers use software to cluster computers and divide the load among them. To keep all of this up and running, the data center uses the following key elements:

Availability in a data center refers to components that are operational at all times. Periodically, systems are maintained to guarantee future activities run smoothly. You may arrange a failover in which a server switches duties to a distant server to increase redundancy. In IT infrastructure, redundant systems reduce the risk of single-point failure.

A Network Operations Center (NOC) is a workspace (or virtual workplace) for employees or dedicated workers tasked with monitoring, administering, and maintaining the computing resources in a data center. A NOC can supply all of the data center's information and update all activities. The responsible person at a NOC may see and control network visualizations that are being monitored.

Unquestionably, power is the most critical aspect of a data center. Colocation equipment or web hosting servers use a dedicated power supply inside the data center. Every data center needs power backups to ensure its servers are continually operational and that overall service availability is maintained.

A safe data center requires the implementation of security mechanisms. One must first identify the weaknesses in the data center's infrastructure. Multi-factor identification, monitoring across the whole building, metal detectors, and biometric systems are a few measures that one may take to ensure the highest level of security. On-site security personnel are also necessary for a data center.

Power and cooling are equally crucial in a data center. The colocation equipment and web-hosting servers need sufficient cooling to prevent overheating and guarantee their continued operation. A data center should be constructed so that there is enough airflow and the systems are always kept cool.

Uninterruptible power supply (UPS) units, as well as generators, are components of backup systems. During power disruptions, a generator may be configured to start automatically. As long as the generators have fuel, they will remain on during a blackout. UPS systems should provide redundancy so that a failed module does not compromise the overall system's capability. Regular maintenance of the UPS and batteries decreases the likelihood of failure during a power outage.

A computerized maintenance management system (CMMS) is among the most effective methods to monitor, measure, and enhance your maintenance plan. This program enables data center management to track the progress of maintenance work performed on their assets and the associated costs. It will aid in lowering maintenance costs and boosting internal efficiency.

In a modern data center, artificial intelligence (AI) also plays an essential role. AI enables algorithms to fulfill conventional Data Center Infrastructure Management (DCIM) tasks by monitoring energy distribution, cooling capacity, server traffic, and cyber threats in real time and automatically adjusting for efficiency. AI can shift workloads to underused resources, identify possible component faults, and balance pooled resources, all with minimal human intervention.
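Production DCIM tools rely on learned models and far richer telemetry, but a toy rule-based sketch like the one below illustrates the underlying workload-shifting idea: watch utilization, then pair overloaded servers with underused ones. The server names, utilization figures, and thresholds are made up for illustration.

```python
# Hypothetical utilization readings (percent) per server, as a monitoring
# system might report them each minute.
utilization = {"rack1-srv01": 92, "rack1-srv02": 35, "rack2-srv01": 88, "rack2-srv02": 20}

HIGH, LOW = 85, 40   # assumed thresholds for "overloaded" and "underused"

def rebalance(util: dict) -> list[tuple[str, str]]:
    """Pair each overloaded server with an underused one to receive shifted work."""
    hot = sorted((s for s, u in util.items() if u >= HIGH), key=util.get, reverse=True)
    cold = sorted((s for s, u in util.items() if u <= LOW), key=util.get)
    return list(zip(hot, cold))

for src, dst in rebalance(utilization):
    print(f"shift workload from {src} -> {dst}")
```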

See More: What Is Enterprise Data Management (EDM)? Definition, Importance, and Best Practices

The different types of data centers include:

Organizations construct and own these private data centers for their end customers. They may be located on- or off-site and serve a single organization's IT processes and essential apps. An organization might isolate its business activities from data center operations to protect them in a natural catastrophe, or it might construct its data center in a cooler climate to reduce energy consumption.

Multi-tenant data centers (also called colocation data centers) provide data center space to organizations that want to host their computer gear and servers remotely.

These spaces for rent inside colocation centers are the property of other parties. The renting company is responsible for providing the hardware, while the data center offers and administers the infrastructure, which includes physical area, connectivity, ventilation, and security systems. Colocation is attractive for businesses that want to avoid the high capital costs involved with developing and running their own data centers.

The desire for immediate connection, the Internet of Things (IoT) expansion, and the requirement for insights and robotics are driving the emergence of edge technologies, which enable processing to take place closer to actual data sources. Edge data centers are compact facilities that tackle the latency issue by being located nearer to the networks edge and data sources.

These data centers are tiny and placed close to the users they serve, allowing for low-latency connection with smart devices. By processing multiple services as near-to-end users as feasible, edge data centers enable businesses to decrease communication delays and enhance the customer experience.

Hyperscale data centers are intended to host IT infrastructure on a vast scale. These hyperscale computing infrastructures, synonymous with large-scale providers like Amazon, Meta, and Google, optimize hardware density while reducing the expense of cooling and administrative overhead.

Hyperscale data centers, like business data centers, are owned and maintained by the organization they serve, although on a considerably broader level for platforms for cloud computing and big data retention. The minimum requirements for a hyperscale data center are 5,000 servers, 500 cabinets, and 10,000 square feet of floor space.

These dispersed data centers are operated by third-party or public cloud providers like AWS, Microsoft Azure, and Google Cloud. The leased infrastructure, predicated on an infrastructure-as-a-service approach, enables users to establish a virtual data center within minutes. Remember that a cloud data center operates like any other physical data center for the cloud provider managing it.

See More: What Is a Data Catalog? Definition, Examples, and Best Practices

A modular data center is a module or physical container bundled with ready-to-use, plug-and-play data center elements: servers, storage, networking hardware, UPS, stabilizers, air conditioners, etc. Modular data centers are used on building sites and disaster zones (to take care of alternate care sites during the pandemic, for example). In permanent situations, they are implemented to make space available or to let an organization develop rapidly, such as installing IT equipment to support classrooms in an educational institution.

In a managed data center, a third-party provider provides enterprises with processing, data storage, and other associated services to aid in managing their IT operations. This data center type is deployed, monitored, and maintained by the service provider, who offers the functionalities via a controlled platform.

You may get managed data center services through a colocation facility, cloud-based data centers, or a fixed hosting location. A managed data center might be entirely or partly managed, but these are not multi-tenant by default, unlike colocation.

See More: What Is Data Modeling? Process, Tools, and Best Practices

The modern data center design has shifted from an on-premises infrastructure to one that mixes on-premises hardware with cloud environments wherein networks, apps, or workloads are virtualized across multiple private and public clouds. This innovation has revolutionized the design of data centers since all components are no longer co-located and may only be accessible over the Internet.

Generally speaking, there are four kinds of data center structures: mesh, three-tier (or multi-tier), mesh point of delivery, and super spine mesh. Let us start with the most common instance. The multi-tier structure, which consists of the core, aggregation, and access layers, has emerged as the most popular architectural approach for corporate data centers.

The mesh data center architecture follows next. The mesh network model refers to a topology in which data is exchanged between components through interconnected switches. It can provide basic cloud services thanks to its dependable capacity and minimal latency, and because of its distributed network topology, the mesh configuration can quickly materialize any connection and is less costly to construct.

The mesh point of delivery (PoD) design comprises several leaf switches connected inside each PoD. It is a recurring design pattern wherein components improve the data center's modularity, scalability, and administration. Consequently, data center managers may rapidly add new data center architecture to their existing three-tier topology to meet the extremely low-latency data flow of new cloud apps.

Finally, super spine architecture is suitable for large-scale, campus-style data centers. This kind of data center architecture handles vast volumes of data through east-west data corridors.

In all of these architectural alternatives, the data center comprises a facility and its internal infrastructure. The facility is where the data center is physically located: a big, open space in which infrastructure is installed. Virtually any space is capable of housing IT infrastructure.

Infrastructure is the extensive collection of IT equipment installed inside a facility. This refers to the hardware responsible for running applications and providing business and user services. A traditional IT infrastructure includes, among other elements, servers, storage, computer networks, and racks.

There are no obligatory or necessary criteria for designing or building a data center; a data center is intended to satisfy the organization's unique needs. However, the fundamental purpose of any standard is to offer a consistent foundation for best practices. Several modern data center specifications exist, and a company may embrace a few or all of them.

See More: What Is Kubernetes Ingress? Meaning, Working, Types, and Uses

When designing, managing, and optimizing a data center, here are the top best practices to follow:

When developing a data center, it is crucial to provide space for growth. To save costs, data center designers may seek to limit facility capacity to the organization's present needs; nevertheless, this might be a costly error in the long run. Having room available for new equipment is vital as your needs change.

You cannot regulate what you do not measure, so monitor energy usage to understand the system efficiency of your data center. Power usage effectiveness (PUE), the ratio of total facility energy to the energy consumed by IT equipment, is the standard statistic for reducing non-computing energy use, like cooling and power transmission. Measuring PUE frequently is required for optimal use, and since seasonal weather variations greatly influence PUE, gathering energy information for the whole year is even more essential.
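For a concrete sense of the metric, PUE is simply total facility energy divided by the energy the IT equipment itself consumed, so a quick calculation (with hypothetical monthly readings) looks like this:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly readings: 1,500,000 kWh for the whole facility,
# 1,000,000 kWh of which went to the IT equipment itself.
print(round(pue(1_500_000, 1_000_000), 2))  # 1.5 -- lower (closer to 1.0) is better
```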

Inspections and preventative maintenance are often performed at time-based intervals to prevent the breakdown of components and systems. Nonetheless, this technique disregards actual operating conditions. Utilizing analytics and intelligent monitoring technologies may alter maintenance procedures. A powerful analytics platform with machine learning capabilities can forecast maintenance needs.

Even with the declining price of computer memory, global archiving costs billions of dollars annually. By deleting data that no longer needs to be retained, the IT infrastructure of data centers is freed of its burden, resulting in decreased conditioning expenses and energy consumption and more effective allocation of computing resources and storage.

For data centers, creating backup pathways for networked gear and communication channels in the event of a failure is a big challenge. These redundancies offer a backup system that allows personnel to perform maintenance and execute system upgrades without disturbing service or to transition to the backup system when the primary system fails. Tier systems within data centers, numbered from one to four, define the uptime that customers may expect (4 being the highest).

See More: Why the Future of Database Management Lies In Open Source

Data centers are the backbone of modern-day computing. Not only do they house information, but they also support resource-heavy data operations like analysis and modeling. By investing in your data center architecture, you can better support IT and business processes. A well-functioning data center is one with minimal downtime and scalable capacity while maintaining costs at an optimum.

Did this article help you understand how data centers work? Tell us on Facebook, Twitter, and LinkedIn. We'd love to hear from you!

Read more here:

What is a Data Center? Working & Best Practices Explained - Spiceworks News and Insights

Categories
Cloud Hosting

Alibaba partners with Avalanche to host nodes – CryptoTvplus

Alibaba Cloud, the cloud computing service and subsidiary of the Alibaba Group, has announced an integration with validators on the Avalanche blockchain. Avalanche made the announcement, which will see the expansion of Web3 services into Web2 infrastructure.

Details of the partnership show that users of the Avalanche blockchain can launch validator nodes using the Alibaba Cloud system, including storage and distribution of resources on the largest cloud infrastructure in Asia.

As a way to incentivize validators, Alibaba Cloud is giving Avalanche validators credit via coupons for its services.

Alibaba Cloud, which was launched in 2009 as the cloud service for the Alibaba Group, serves 85 zones in 28 regions globally. It is also ranked as the third-largest cloud computing service in the world, as businesses around the world rely on its IaaS (Infrastructure as a Service) model to grow and scale.

Elastic computing, network virtualization, database, storage, security, management, and application services are some of the utilities available in the Alibaba cloud system.

Avalanche is an eco-friendly blockchain launched in 2020 with fast finality, on which scalable applications and services can be designed for institutional and individual purposes. Top projects that use Avalanche include BENQi, Aave, Chainlink, Curve, and Sushi. It currently has over 1,000 validators and handles more than 4,500 transactions per second.

Alibaba Cloud is not the only Web2 infrastructure that's integrating and supporting the hosting of blockchain nodes. In November, Google Cloud announced its partnership with Solana to enable validators to use its cloud services for hosting nodes.

For dApp builders, Alibaba has published a guide on how to set up Avalanche nodes using the Alibaba Cloud.


Visit link:

Alibaba partners with Avalanche to host nodes - CryptoTvplus

Categories
Cloud Hosting

Pentagon Awards $9B Cloud Contract to Amazon, Google, Microsoft, Oracle – Nextgov

The Pentagon on Wednesday announced the awardees of the Joint Warfighting Cloud Capability, or JWCC, contract, with Amazon Web Services, Google, Microsoft and Oracle each receiving an award.

Through the contract, which has a $9 billion ceiling, the Pentagon aims to bring enterprisewide cloud computing capabilities to the Defense Department across all domains and classification levels, with the four companies competing for individual task orders.

Last year, the Defense Department had named the four companies as contenders for the multi-cloud, multi-vendor contract.

"The purpose of this contract is to provide the Department of Defense with enterprise-wide, globally available cloud services across all security domains and classification levels, from the strategic level to the tactical edge," the Defense Department said in a Wednesday announcement.

The awards come after a years-long effort to provide enterprisewide cloud computing across the department, with a significant delay in March as the DOD conducted due diligence with the four vendors.

All four companies issued statements the day after the award.

"We are honored to have been selected for the Joint Warfighting Cloud Capability contract and look forward to continuing our support for the Department of Defense," said Dave Levy, Vice President, U.S. Government, Nonprofit, and Healthcare at AWS. "From the enterprise to the tactical edge, we are ready to deliver industry-leading cloud services to enable the DoD to achieve its critical mission."

"Oracle looks forward to continuing its long history of success with the Department of Defense by providing our highly performant, secure, and cost-effective cloud infrastructure," Ken Glueck, Executive Vice President, Oracle, said in a statement. "Built to enable interoperability, Oracle Cloud Infrastructure will help drive the DoD's multicloud innovation and ensure that our defense and intelligence communities have the best technology available to protect and preserve our national security."

"The selection is another clear demonstration of the trust the DoD places in Microsoft and our technologies," Microsoft Federal President Rick Wagner said in a blog post. "Our work on JWCC will build on the success of our industry-leading cloud capabilities to support national security missions that we have developed and deployed across the department and service branches."

"We are proud to be selected as an approved cloud vendor for the JWCC contract," Karen Dahut, CEO of Google Public Sector, said in a statement.

JWCC itself was announced in July 2021 following the failure and cancellation of the Joint Enterprise Defense Infrastructure, or JEDI, contract, DOD's previous effort aimed at providing commercial cloud capabilities to the enterprise.

Conceptualized in 2017, JEDI was designed to be the Pentagon's war cloud, providing a common and connected global IT fabric at all levels of classification for customer agencies and warfighters. A single-award contract worth up to $10 billion, JEDI would have put a single cloud service provider in charge of hosting and analyzing some of the military's most sensitive data. JEDI was delayed for several years by numerous lawsuits that ultimately caused the Pentagon to reconsider its plan and opt for a multi-cloud approach more common in the private sector.

For many years, Amazon Web Services, by virtue of its 2013 contract with the Central Intelligence Agency, was the only commercial cloud provider with the security accreditations allowing it to host the DOD's most sensitive data. In the interim, however, Microsoft has achieved the top-secret accreditation, and Oracle and Google have both achieved Impact Level 5, or IL5, accreditation, allowing the two companies to host the department's most sensitive unclassified data in their cloud offerings. Oracle has also achieved top-secret accreditation.

JWCC is just one of several multibillion-dollar cloud contracts the government has awarded over the past few years. In late 2020, the CIA awarded its Commercial Cloud Enterprise, or C2E, contract to five companies: AWS, Microsoft, Google, Oracle and IBM. The contract could be worth tens of billions of dollars, according to contracting documents, and the companies will compete for task orders issued by various intelligence agencies.

Last April, the National Security Agency re-awarded its $10 billion cloud contract, codenamed Wild and Stormy, to AWS following a protest from losing bidder Microsoft. The contract is part of the NSA's modernization of its Hybrid Compute Initiative, which will move some of the NSA's crown jewel intelligence data from internal servers to AWS' air-gapped cloud.

Editor's note: This story was updated to include statements from all four cloud service providers.

Read the original post:

Pentagon Awards $9B Cloud Contract to Amazon, Google, Microsoft, Oracle - Nextgov

Categories
Cloud Hosting

Rackspace Incident Highlights How Disruptive Attacks on Cloud Providers Can Be – DARKReading

A Dec. 2 ransomware attack at Rackspace Technology, which the managed cloud hosting company took several days to confirm, is quickly becoming a case study on the havoc that can result from a single well-placed attack on a cloud service provider.

The attack has disrupted email services for thousands of mostly small and midsize organizations. The forced migration to a competitor's platform left some Rackspace customers frustrated and desperate for support from the company. It has also already prompted at least one class-action lawsuit and pushed the publicly traded Rackspace's share price down nearly 21% over the past five days.

"While it's possible the root cause was a missed patch or misconfiguration, there's not enough information publicly available to say what technique the attackers used to breach the Rackspace environment," says Mike Parkin, senior technical engineer at Vulcan Cyber. "The larger issue is that the breach affected multiple Rackspace customers here, which points out one of the potential challenges with relying on cloud infrastructure." The attack shows how if threat actors can compromise or cripple large service providers, they can affect multiple tenants at once.

Rackspace first disclosed something was amiss at 2:20 a.m. EST on Dec. 2 with an announcement it was looking into "an issue" affecting the company's Hosted Exchange environment. Over the next several hours, the company kept providing updates about customers reporting email connectivity and login issues, but it wasn't until nearly a full day later that Rackspace even identified the issue as a "security incident."

By that time, Rackspace had already shut down its Hosted Exchange environment citing "significant failure" and said it did not have an estimate for when the company would be able to restore the service. Rackspace warned customers that restoration efforts could take several days and advised those looking for immediate access to email services to use Microsoft 365 instead. "At no cost to you, we will be providing access to Microsoft Exchange Plan 1 licenses on Microsoft 365 until further notice," Rackspace said in a Dec. 3 update.

The company noted that Rackspace's support team would be available to help administrators configure and set up accounts for their organizations in Microsoft 365. In subsequent updates, Rackspace said it had helped, and was continuing to help, thousands of its customers move to Microsoft 365.

On Dec. 6, more than four days after its first alert, Rackspace identified the issue that had knocked its Hosted Exchange environment offline as a ransomware attack. The company described the incident as isolated to its Exchange service and said it was still trying to determine what data the attack might have affected. "At this time, we are unable to provide a timeline for restoration of the Hosted Exchange environment," Rackspace said. "We are working to provide customers with archives of inboxes where available, to eventually import over to Microsoft 365."

The company acknowledged that moving to Microsoft 365 is not going to be particularly easy for some of its customers and said it has mustered all the support it can get to help organizations. "We recognize that setting up and configuring Microsoft 365 can be challenging and we have added all available resources to help support customers," it said. Rackspace suggested that as a temporary solution, customers could enable a forwarding option, so mail destined to their Hosted Exchange account goes to an external email address instead.

Rackspace has not disclosed how many organizations the attack has affected, whether it received any ransom demand or paid a ransom, or whether it has been able to identify the attacker. The company did not respond immediately to a Dark Reading request seeking information on these issues. In a Dec. 6 SEC filing, Rackspace warned the incident could cause a loss in revenue for the company's nearly $30 million Hosted Exchange business. "In addition, the Company may have incremental costs associated with its response to the incident."

Messages on Twitter suggest that many customers are furious at Rackspace over the incident and the company's handling of it so far. Many appear frustrated at what they perceive as Rackspace's lack of transparency and the challenges they are encountering in trying to get their email back online.

One Twitter user and apparent Rackspace customer wanted to know about their organization's data. "Guys, when are you going to give us access to our data," the user posted. "Telling us to go to M365 with a new blank slate is not acceptable. Help your partners. Give us our data back."

Another Twitter user suggested that the Rackspace attackers had also compromised customer data in the incident based on the number of Rackspace-specific phishing emails they had been receiving the last few days. "I assume all of your customer data has also been breached and is now for sale on the dark web. Your customers aren't stupid," the user said.

Several others expressed frustration over their inability to get support from Rackspace, and others claimed to have terminated their relationship with the company. "You are holding us hostages. The lawsuit is going to take you to bankruptcy," another apparent Rackspace customer noted.

Davis McCarthy, principal security researcher at Valtix, says the breach is a reminder why organizations should pay attention to the fact that security in the cloud is a shared responsibility. "If a service provider fails to deliver that security, an organization is unknowingly exposed to threats they cannot mitigate themselves," he says. "Having a risk management plan that determines the impact of those known unknowns will help organizations recover during that worst case scenario."

Meanwhile, the lawsuit, filed by California law firm Cole & Van Note on behalf of Rackspace customers, accused the company of "negligence and related violations" around the breach. "That Rackspace offered opaque updates for days, then admitted to a ransomware event without further customer assistance is outrageous," a statement announcing the lawsuit noted.

No details are publicly available on how the attackers might have breached Rackspace's Hosted Exchange environment. But security researcher Kevin Beaumont has said his analysis showed that just prior to the intrusion, Rackspace's Exchange cluster had versions of the technology that appeared vulnerable to the "ProxyNotShell" zero-day flaws in Exchange Server earlier this year.

"It is possible the Rackspace breach happened due to other issues," Beaumont said. But the breach is a general reminder why Exchange Server administrators need to apply Microsoft's patches for the flaws, he added. "I expect continued attacks on organizations via Microsoft Exchange through 2023."

Link:

Rackspace Incident Highlights How Disruptive Attacks on Cloud Providers Can Be - DARKReading

Categories
Cloud Hosting

St. Cloud Hosting Summit on the Vision for Its Downtown – KVSC-FM News

By Zac Chapman / Assistant News Director

The city of St. Cloud is bringing in strategists to examine the vision for the future of its downtown with a goal of rebooting the historic area.

The summit is exploring lessons learned from American downtowns throughout COVID-19. The event is bringing community partners, businesses and others together with the goal of increasing the quality of downtown's vision.

Four speakers are giving presentations at the summit.

Chris Leinberger, named a Top 100 Most Influential Urbanist, is a downtown strategist and investor.

Tobias Peter is Assistant Director of the American Enterprise Institute's Housing Center and is focusing on housing market trends and policy.

Mayor Dave Kleis and CentraCare CEO Ken Holmen are speaking about St. Cloud's unique opportunity to create an active, walkable downtown through strategic investment in housing and workforce amenities.

The downtown summit is Monday, Dec. 12, at 6 p.m. at St. Cloud's River's Edge Convention Center. The room will open at 5:30 p.m. for a pre-event social and networking. It is free to attend.

Go here to see the original:

St. Cloud Hosting Summit on the Vision for Its Downtown - KVSC-FM News