Categories
Dedicated Server

How Musk may reinvent the internet without even trying – The Hill

Billionaire entrepreneur and innovator Elon Musk might have just opened a new chapter in the history of the internet, albeit unintentionally. His new Twitter policies and the digital refugees he created, most fleeing to the heretofore obscure Twitter-like platform Mastodon, could give birth to a very new type of social media experience.

Upon buying Twitter, Musk, a self-described free speech absolutist, reinstated accounts belonging to former President Donald Trump, the right-wing satire site Babylon Bee and the occasionally crude left-wing comedian Kathy Griffin. This was coupled with removing verification requirements (which have since been updated) while adding a monthly fee, as well as mass layoffs at the company.

More recently, Twitter has suspended several journalists who reported on information about Musk's jet.

Disgruntled with the changes and controversy, some users stampeded to other services, such as the far smaller European Twitter alternative Mastodon, the brainchild of free-speech advocate and German software engineer Eugen Rochko. But can Mastodon compete with Twitter's reach? Although Mastodon's 1 million users pale in comparison to Twitter's 238 million users, Mastodon's secret weapon is that it is more than a mere site. It is a federation of sites that can maintain their autonomy while exchanging information with each other. Mastodon uses an open and free social media protocol, ActivityPub, which allows any social medium to connect to any other, as long as they are open and transparent with each other. Several platforms, such as the YouTube-like PeerTube, the Instagram alternative Pixelfed and the social networking platform Friendica, already do it.

The shift from Twitter to Mastodon and ActivityPub could be an epoch-making digital revolution, comparable with the invention of the web itself. ActivityPub may restore the web and its most sophisticated layer, social media, to the open and universally connectable vision of the internet itself. Our most popular social media platforms, Facebook, Twitter and TikTok, remain walled gardens, only allowing users to exchange information within apps under the same ownership or to create apps within each platform. This drawback does not apply to ActivityPub-empowered sites. Far from walled gardens, they are fields connected by open roads.

What is ActivityPub and how does it work? At the simplest level, it is a method (protocol) for social media servers to talk to each other even if they are owned by different entities and dedicated to different purposes. Imagine that CBS News, BBC, National Review and Fox News create their own social media servers using the Mastodon user interface and ActivityPub as a server-to-server protocol. All the owners of these sites must do to connect to each other is add each other's server addresses to a list of federated sites.
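
To make the idea concrete, here is a minimal, illustrative sketch (in Python, with hypothetical server domains and account URLs) of what one federated server might send another: an ActivityStreams "Follow" activity delivered by HTTP POST to the target account's inbox. Real ActivityPub deployments also sign these requests with HTTP Signatures, which is omitted here.

```python
# Minimal sketch of an ActivityPub "Follow" being delivered between two
# hypothetical federated servers. Real deployments also sign requests
# (HTTP Signatures) and negotiate JSON-LD details; both are omitted here.
import requests

follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://news-a.example/activities/123",      # hypothetical activity URL
    "type": "Follow",
    "actor": "https://news-a.example/users/alice",      # follower on server A
    "object": "https://news-b.example/users/newsdesk",  # account on server B
}

# Deliver the activity to the target actor's inbox on the other server.
resp = requests.post(
    "https://news-b.example/users/newsdesk/inbox",      # hypothetical inbox URL
    json=follow_activity,
    headers={"Content-Type": "application/activity+json"},
    timeout=10,
)
print(resp.status_code)  # a typical implementation answers 200/202 on success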

Immediately, the users of the sites that talk to each other will be able to follow other users' reposts or comments across server boundaries. This has several advantages. The first and most important is that social media owners have a direct and immediate relationship with their users. The owners do not need to be corporate, by the way. Independent media organizations, nonprofits or user co-ops can create their own media servers. They can develop their content policies, privacy protection methods and financial support methods, from advertising (which they control) to subscription or donation-based support. Social media owners can also decide when and how to open access to other federation members. This may include trial periods or suspension of communication.

Finally, any company or nonprofit organization can use a social media interface of its own, not Mastodon, and still be able to talk to other sites using ActivityPub. The interface can include new tools, such as a "Trust" button to replace the "like" or "favorite" buttons. My colleagues and I created the TrustFirst social media server powered by ActivityPub and Mastodon. On it, machine learning analyzes the content you are about to reshare or like and advises you whether the content is trustworthy. A new button invites you to express trust or not, and that trust value is used to disseminate the content more or less widely.
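
The article does not describe TrustFirst's internals, so the following is only an illustrative sketch of the general idea: a stand-in scoring function rates a post's trustworthiness and the result gates the reshare prompt. The scoring heuristic, threshold, and messages are placeholders, not the actual TrustFirst model.

```python
# Illustrative sketch only: a stand-in trust scorer gating a reshare prompt.
# `score_trust` is a placeholder for whatever ML model a real system would use.
def score_trust(text: str) -> float:
    """Return a trust score in [0, 1]; a real system would call a trained model."""
    suspicious = ["miracle cure", "you won't believe", "100% guaranteed"]
    hits = sum(phrase in text.lower() for phrase in suspicious)
    return max(0.0, 1.0 - 0.3 * hits)

def advise_before_share(text: str, threshold: float = 0.7) -> str:
    score = score_trust(text)
    if score >= threshold:
        return f"Looks trustworthy (score {score:.2f}). Trust and reshare?"
    return f"Low trust score ({score:.2f}). Are you sure you want to reshare?"

print(advise_before_share("Miracle cure doctors don't want you to know about!"))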

More intriguingly, Musk could also implement ActivityPub on Twitter, as Tumblr did. He would ensure the site's long-term reach while Twitter users would have their cake (be on Twitter) and eat it too (not be beholden to its rules).

The genius of the internet is that it allowed and should allow independently owned and managed sites to talk to each other. This is reflected in the very name of the internet, which is a network of networks (inter-net), not one integrated network. The closed social media detour in the history of communication might just come to a very interesting swerve. Watch out for tight turns!

Sorin Matei, Ph.D., is the College of Liberal Arts associate dean of research and graduate education and a professor of communication at Purdue University, where he studies the relationship between information technology, group behavior and social structures in a variety of contexts. He is a senior research fellow at the Krach Institute for Tech Diplomacy at Purdue.

View original post here:

How Musk may reinvent the internet without even trying - The Hill

Categories
Cloud Hosting

Managing Ediscovery In The Cloud: Practical, Ethical and Technical … – JD Supra

In this excerpt from our white paper on managing ediscovery in the cloud, we explain the basics of the cloud and its biggest benefits in ediscovery. Click here to download the full white paper.

As early pioneers of cloud computing in legal tech, we have always made the cloud an integral part of Nextpoint's business model. Now, many providers are making the switch to the cloud, and more and more law firms are embracing ediscovery in the cloud. Even if you've used cloud services for a long time, you may never have stopped to consider why the cloud is the best solution. And if you're looking to adopt cloud technology or switch to a new provider, it's important to understand the fundamentals of the cloud and why it's the only ediscovery solution for modern litigators.

The greatest upside is that cloud providers like Amazon, Microsoft, and Rackspace can invest billions of dollars each year in research and development of cloud platforms, delivering more robust services and security than any individual company or law firm could hope to provide. Thanks to those investments, SaaS ediscovery systems cost about 35 percent less than solutions that are hosted in-house.

Nearly 60% of businesses transitioned to the cloud in 2022, and this trend is expected to continue. The benefits that are enticing businesses to adopt cloud computing include:

That's the power of cloud computing, but it is also part of the challenge cloud computing poses for law firms. So much data is being created in today's networked and super-massive computing environments that it can quickly overwhelm litigation teams. Law firms struggle to process and review gigabytes of data, while many types of litigation routinely involve multiple terabytes of information. The cloud is creating a tsunami of digital evidence, but it is also the only cost-effective solution to meet the challenge it has created.

Why Cloud Ediscovery?

Ediscovery is ideally suited to maximize the benefits of cloud computing. The volume of electronic data is such that when a legal matter arises, a law firm or corporate counsel can suddenly be faced with a mountain of electronic data, which can cost hundreds of thousands of dollars to process in-house or with the services of outside consultants. Then there are licensing fees, software installation, hardware costs, and consulting fees, all of which make ediscovery costs spiral out of control. As law firms and their clients become increasingly distressed by these kinds of bills, the Software-as-a-Service model promises to cut many of these needless costs by providing an all-in-one processing, stamping, reviewing, and production platform.

The bottom line is that litigation software built for local networks simply cannot cope with exploding volumes of digital data. The right ediscovery cloud platform offers low-cost data hosting, built-in processing functionality, and levels of security no on-premise solution can match.

Security: The Real Danger is Doing it Yourself

In considering on-premise versus cloud solutions, firms that host sensitive client data on-premise are likely to find that they themselves are the greatest security risk. A network hosted on-premise can afford very little in the way of network security beyond what can be found in an off-the-shelf network appliance. Even more problematic, on-premise systems (and private cloud systems in a single facility) offer nothing in the way of physical security or environmental controls beyond what is found in a typical office building. The fact is, many local networks are managed from a supply closet or backroom that anyone with access to an office can enter.

Organizations that rely on local, on-premise solutions often have to fall back on unsecured and archaic mechanisms to move and share data, including mailing it on disks. And depending on the size of an organization, on-premise networks lack redundant storage and backup; if a disaster strikes, data is likely lost forever. The largest and most reputable cloud providers have redundant data centers with robust physical security dispersed across the country, or even the planet.

For example, Amazon Web Services' data centers have extensive setbacks and military-grade perimeter control berms, as well as other natural boundary protection. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff using video surveillance, state-of-the-art intrusion detection systems, and other electronic means.

Now compare that to the security of on-premise servers, your typical hosting provider's server room (private cloud), or that of any other company whose primary business is not data security. The safest bet for your clients' data would be to utilize one of the leading cloud infrastructure providers when moving ediscovery data to the cloud. But whichever ediscovery provider you choose, be sure to do some hard comparative research.

Cloud platforms give users control over large data sets, including permission-based access and security roles that are supported by the highest levels of security. That's because large cloud providers have built-in encryption and security protocols backed up by large teams of security experts with real-time monitoring tools. When considering a cloud ediscovery service, find out the levels of security your provider has in place, and make sure they are taking advantage of the cloud platform in all phases of data transmission and storage.
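
Those phases typically cover data both in transit and at rest. As one hedged illustration (not any particular vendor's workflow), the sketch below uploads a review document to Amazon S3 over TLS and asks the service to encrypt the stored object; the bucket, key, and file names are hypothetical.

```python
# Hedged illustration: uploading a review document with TLS in transit
# (the default for boto3 endpoints) and server-side encryption at rest
# requested on the object. Bucket, key, and file names are hypothetical.
import boto3

s3 = boto3.client("s3")  # assumes AWS credentials are configured locally

with open("deposition_exhibit_001.pdf", "rb") as f:    # hypothetical file
    s3.put_object(
        Bucket="ediscovery-review-bucket",              # hypothetical bucket
        Key="matters/acme-v-example/exhibit_001.pdf",   # hypothetical key
        Body=f,
        ServerSideEncryption="AES256",  # ask S3 to encrypt the object at rest
    )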

Scalability: Big Data is Here

In the 1970s, Bill Gates was telling people, "No one will need more than 637 kilobytes of memory for a personal computer." Today, personal computers ship with 2-terabyte hard drives.

Organizations today love data. Modern businesses are finding new and interesting ways to generate and use it. The growth of data is clobbering business IT environments, the federal government, the federal court system, and pretty much any data-driven business. For example, it took Nextpoint 13 years to reach our first petabyte of data. (That's 1,000 terabytes.) After that, it only took two years to add a second petabyte, and the exponential growth has only continued.

In special circumstances, like a data-intensive ediscovery matter, the computational requirements grow exponentially with the amount of data. This is particularly true in heavy processing, indexing, and analytics-based tasks like native file processing, near-dupe identification, and search functionality. Because cloud computing platforms have virtually unlimited ability to scale up for big jobs, reviewers can use advanced analytic tools to analyze data that would break most computer systems.
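
To illustrate the kind of computation involved, here is a simplified sketch of near-duplicate identification: documents are broken into overlapping word "shingles" and compared by Jaccard similarity. Production ediscovery platforms use far more scalable variants (MinHash, locality-sensitive hashing), so treat this purely as a conceptual example.

```python
# Conceptual sketch of near-duplicate identification: compare documents by the
# overlap (Jaccard similarity) of their word "shingles". Real platforms use
# more scalable techniques (MinHash/LSH) over millions of documents.
def shingles(text: str, k: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

doc1 = "The parties agree to settle all claims arising from the contract."
doc2 = "The parties agree to settle all claims arising under the contract."
sim = jaccard(shingles(doc1), shingles(doc2))
print(f"similarity: {sim:.2f}")  # a high score flags these as near-duplicates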

Law firms may be tempted to throw more hardware at large data challenges, but when clients that used to provide several gigabytes of data for discovery are now delivering terabytes of structured and unstructured data for review, a few new computers cannot address the problem. Thanks to cloud computing, computing power is now a commodity that can be accessed as needed.

Accessibility: Multi-Party Case Management

Hosting documents in the cloud makes it possible to effectively review huge data collections with reviewers working simultaneously in locations around the world. Data is easily kept organized and there is more control over the review process.

Many matters today involve similar documents and data sets. The cloud gives companies the ability to store a document set, along with the appropriate privilege codes, redactions, and stamping so that it can be accessed in future matters that may arise. They can allow data sets to be reused and accessed by new parties as appropriate.

Cloud platforms offer the ability to reduce duplicative efforts by multiple parties on cases with similar issues, facts, discovery, and relevant case law. With so many actors involved in multidistrict litigation across different jurisdictions, each with differing internal technology environments, it is critical that the selected solution encourages collaboration among co-counsel.

Mobility: Working on the Road

There was a time when a lot of companies pretended BYOD (Bring Your Own Device) was just a fad, and that employees should remain tied to applications and data stored on their desktop in a cubicle. The pandemic upended this mentality, and the cloud allowed applications and data to be device independent, freeing the workforce to work wherever and however they needed.

With SaaS services, users can securely access the data from anywhere an internet connection is available. When selecting a cloud platform, make sure it is natively accessible via all devices and operating systems, including Macs, PCs, iPads, iPhones, and Android mobile devices.

The Cloud is the Only Answer for Ediscovery

These are the considerations to take into account when assessing the cloud for ediscovery. According to a 2022 report from ACEDS, 38% of firms still use on-premises technology for ediscovery, while 14% use a hybrid cloud solution, and 43% are fully in the cloud. A huge percentage of firms are moving to the cloud each year, but there is still a sizable number of attorneys working with technology not equipped for today's information-rich litigation environment.

There are obvious ethical obligations and technical issues to take into account when moving client data to a cloud repository or transitioning to a new cloud provider. Check back for our next post on cloud-based ediscovery to see all the questions you need to ask when interviewing potential vendors. If a vendor can satisfy these demands, your firm will be able to deliver data processing power, data security, and a cost savings that old-school review software cannot hope to match.

Read more:

Managing Ediscovery In The Cloud: Practical, Ethical and Technical ... - JD Supra

Categories
Cloud Hosting

ACE Recognized as the 2022 Most Innovative Cloud Solutions … – openPR

Pompano Beach, FL, December 13, 2022 --(PR.com)-- Ace Cloud Hosting, a leading cloud computing solutions provider, announced that it has been recognized as the Most Innovative Cloud Solutions Provider for 2022 by Global Business Awards. ACE received the award in the technology category for delivering continuous innovation and consistent quality in the cloud solutions realm.

Global Business Awards are the most coveted awards that celebrate enterprises that demonstrate authentic and best work in specific categories. Every year the esteemed panel of Global Business Awards carefully scrutinizes several applications and portfolios to decisively shortlist a finalist. This year the pinnacle award programs were sharply focused on recognizing organizations that brought a cohesive mix of innovation, technology, and humanization to the forefront along with their digital transformation solutions.

ACH will share the stage with many top Indian players like OLA Cars, GoDaddy, Zomato, EdgeVerve Systems Limited - An Infosys Company, Quick Heal, Infosys, Leeway Hertz, Bureau, etc., in the technology category. "It's gratifying to see our cloud computing expertise, knowledge, inventiveness, and adaptability recognized. This award is a testament to our ability to deliver the highest level of service and create value for partner clients. In addition, it also personifies the dedication and rigor that our teams put into delivering highly successful results for our customers," said Managing Director Vinay Chhabra.

ACH has a strong commitment to building and implementing intelligent cloud solutions to address the most pressing needs of high-growth enterprises. Commenting on the win, Dr. Bindu Rathore, Director (VP-Sales & Marketing), said, "This award is a hallmark of our excellence. ACH is leveraging its investments in innovation, deep technologies, and a talented workforce to help clients accelerate their growth and transformation journey. This momentous award is a reaffirmation of our commitment to consistently deliver differentiated and transformational results."

Dr. Sangeeta Chhabra, Executive Director, RTDS, said, "We are extremely proud to receive this award. ACH has earned the award through its steadfast commitment to process innovation, quality, and industry expertise. We are unique in the cloud computing space: our deep industry knowledge, capabilities, and rich portfolio of services set us apart."

ACH has nearly 15 years of experience in solving complex cloud challenges through unconventional business solutions and a commitment to benchmark best practices. The organization is a firm believer in creating out-of-the-box strategies to address the strategic requirements of companies across diverse industries, with minimal impact on their present IT ecosystem. Recently, the organization also received two CPA Practice Advisor Readers' Choice Awards in the Best Hosted Solution Provider and Best Outsourced Technology Services categories. The winners will be announced on 10th December 2022 at the Waldorf Astoria Dubai Palm Jumeirah.

About ACE

ACH offers business-critical cloud computing solutions that provide vibrant pathways to transcend operations, foster innovation, and create value for partner organizations. The organization enables a conducive IT ecosystem that empowers businesses to smoothly work from anywhere, at any time, in a secure manner. ACH has more than 15 years of experience in creating, deploying, and scaling dynamic cloud infrastructure for high-growth enterprises and enabling real-world foundations to support their business growth. Leading organizations are harnessing ACH's Cloud Computing, QuickBooks Hosting, Virtual Desktop Infrastructure, and Managed Security Solutions to challenge the status quo, break their previous molds, and lay the groundwork for business success.

Read more here:

ACE Recognized as the 2022 Most Innovative Cloud Solutions ... - openPR

Categories
Cloud Hosting

Most businesses hope cloud will be the catalyst to net zero – TechRadar

A recent AWS-backed survey into the way that companies and business leaders manage decarbonization efforts to reach net zero in Europe by 2050 has found that cloud technology may hold the key (or one of many keys) to success.

The study consulted 4,000 businesses in the UK, France, Germany, and Spain, 96% of which have set emissions reduction targets. Around three quarters of business leaders believed that technology like cloud hosting would accelerate their journey to net zero by at least two years, helping them to achieve their target by 2048 at the latest.

Even so, around 20% claim that they lack the appropriate technology to achieve their net zero goals, and one in five were yet to go cloud-first. Among a number of obstacles holding businesses back was the impact of rising costs and economic uncertainty on a global scale.

Despite the challenges, three quarters of business leaders feel confident in their abilities to control greenhouse gas emissions. This is in stark contrast to the just one in ten that measure scope 3 emissions, which cover indirect emissions that occur in the company's value chain. Just over half of the companies in question were measuring scopes 1 (direct emissions from owned or controlled sources) and 2 (indirect emissions from electricity, heating, cooling, and so on).

"What I think is so interesting here is that business leaders who have already engaged cloud services think they are more successful in delivering carbon reductions," noted Chris Wellise, AWS Director of Sustainability. "The data backs up this view, as cloud offers nearly any company or public body a less carbon-intensive way of managing their IT."

Read more from the original source:

Most businesses hope cloud will be the catalyst to net zero - TechRadar

Categories
Cloud Hosting

Protect Your Cloud Apps From These 5 Common API Security … – ITPro Today

APIs barely existed two decades ago, but they've now become the glue that holds the world of cloud computing together. APIs play a central role in enabling cloud applications to interface with each other and with the various cloud resources they need to do their jobs.

But APIs have a downside: When they're poorly managed, they can become low-hanging fruit for attackers.

Related: Why APIs Are the Foundation of Modern Software Development

That's why it's critical to ensure that you use APIs securely in the cloud. This article unpacks common API security mistakes that IT organizations run into in order to highlight what not to do if you want to make the most of APIs while also maximizing security.

In many ways, insecure APIs are a DDoS attacker's dream. The reason is that by issuing repeated calls to APIs, attackers can overwhelm the servers hosting them and render applications that depend on the APIs unusable.

Fortunately, there's a simple way to prevent this type of API attack: throttling. API throttling lets admins limit the number of requests that each client can make to an API in a given time period. Throttling doesn't totally prevent abuse of APIs (it's still possible to launch a DDoS-style attack using a botnet that consists of a large number of independent clients), but it goes a long way toward stopping or mitigating API attacks designed to disrupt application availability.
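
As a rough sketch of the idea (not any specific gateway product's configuration), the following fixed-window rate limiter caps how many requests a given client can make per minute; the limit and window are illustrative values.

```python
# Minimal sketch of per-client API throttling (fixed window). Managed API
# gateways expose the same idea as configuration rather than code.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100   # illustrative limit

_request_log = defaultdict(list)  # client_id -> recent request timestamps

def allow_request(client_id: str) -> bool:
    """Return True if this client is under its quota for the current window."""
    now = time.time()
    recent = [t for t in _request_log[client_id] if now - t < WINDOW_SECONDS]
    _request_log[client_id] = recent
    if len(recent) >= MAX_REQUESTS_PER_WINDOW:
        return False              # throttle: reject or queue the request
    _request_log[client_id].append(now)
    return True

if allow_request("client-42"):
    pass  # handle the API call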

Unless all of the data available through an API is 100% public, the API should require authentication in order to respond to requests. Otherwise, attackers can use the API to access data that should not be available to them, as one attacker did when scraping data from about 700 million LinkedIn users, for example.

The LinkedIn API hack was a bit complicated because the data the attacker scraped was semi-public. It was available on LinkedIn profiles to other LinkedIn users who had access to those profiles. But it wasn't supposed to be available to a random, unauthenticated client making API requests. Basic API authentication would have prevented the abuse that took place in this incident.
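
A minimal sketch of that kind of basic authentication check is shown below, assuming tokens are issued to known clients out of band; the token values and handler are hypothetical, and a real API would layer this onto a proper framework and token store.

```python
# Sketch of basic API authentication: every request must present a bearer
# token that maps to a known client before any data is returned.
# Token values and client names here are hypothetical.
VALID_TOKENS = {"s3cr3t-token-abc": "partner-app-1"}  # issued out of band

def authenticate(headers: dict):
    """Return the client name for a valid bearer token, else None."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return None
    return VALID_TOKENS.get(auth[len("Bearer "):])

def handle_profile_request(headers: dict):
    client = authenticate(headers)
    if client is None:
        return 401, "authentication required"   # no anonymous scraping
    return 200, f"profile data for {client}"

print(handle_profile_request({"Authorization": "Bearer s3cr3t-token-abc"}))
print(handle_profile_request({}))  # rejected with 401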

Another API security mistake that can subject your business to an API attack is to assume that just because you don't advertise your API endpoints publicly, no one can find them and you therefore don't need to worry about securing your APIs.

This strategy, which amounts to what security folks call "security by obscurity," is akin to publishing sensitive data on a website but choosing not to share the URL in the hope that no one finds it.

There are situations where you may choose not to advertise an API's location (for example, if the API isn't used by the public, you might share endpoint information only internally). But even so, you should invest just as much in securing the API as you would if it were a fully public API.

From a security standpoint, the fewer APIs you expose and use, the better. Unnecessary APIs are like extraneous libraries on an operating system or abandoned code within an application: They give attackers more potential ways to wreak havoc while offering no value to your business.

So, before you publish a new API, make sure you have a good reason to do so. And be sure, as well, to deprecate APIs that are no longer necessary, rather than leaving them active.

A one-size-fits-all security model often does not work well for APIs. Different API users may have different needs and require different security controls. For example, users who are internal to your business may require a higher level of data access via an API than your customers or partners.

For this reason, it's a best practice to define and enforce API access controls in a granular way. Using an API gateway, establish varying levels of access for different users, whom you could differentiate based on their network locations (requests that originate from your VPN should be treated differently from those coming from the internet, for example) or based on authentication schemes.
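
A simplified sketch of that kind of granular, location-aware tiering follows; the VPN address range and tier names are assumptions for illustration, and in practice an API gateway would enforce the equivalent policy.

```python
# Sketch of granular, per-client access tiers: requests from an assumed
# corporate VPN range get broader access than internet clients.
# The CIDR block and tier names are hypothetical.
import ipaddress

VPN_RANGE = ipaddress.ip_network("10.8.0.0/16")   # assumed internal VPN block

def access_tier(client_ip: str, authenticated: bool) -> str:
    ip = ipaddress.ip_address(client_ip)
    if ip in VPN_RANGE:
        return "internal"        # full data access for internal users
    if authenticated:
        return "partner"         # reduced access for customers/partners
    return "public"              # read-only, heavily throttled

print(access_tier("10.8.3.17", authenticated=False))   # internal
print(access_tier("203.0.113.9", authenticated=True))  # partner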

APIs make it easy to share resources in a cloud environment. But too much sharing via APIs is a bad thing. APIs must be secured with throttling, authentication, and granular access controls in order to keep data and applications secure against attackers looking for ways to abuse APIs.


See the rest here:

Protect Your Cloud Apps From These 5 Common API Security ... - ITPro Today

Categories
Cloud Hosting

What is a Data Center? Working & Best Practices Explained – Spiceworks News and Insights

A data center is defined as a room, a building, or a group of buildings used to house backend computer systems (without a user interface) and supporting systems like cooling capabilities, physical security, networking appliances, and more. This article defines and describes the workings of a data center, including its architecture, types, and best practices.

Remote data centers power all cloud infrastructure.

A data center is a physical facility providing the computing power to operate programs, storage to process information, and networking to link people to the resources they need to do their tasks and support organizational operations.

Due to a dense concentration of servers, which are often placed in tiers, data centers are sometimes called server farms. They provide essential services like information storage, recovery and backup, information management, and networking.

Almost every company and government agency needs either its own data center or access to third-party facilities. Some construct and operate them in-house, others rent servers from co-location facilities, and others still leverage public cloud-based services from hosts such as Google, Microsoft, and Amazon Web Services (AWS).

In general, there are four recognized levels of data centers. The numerical tiers allocated to these data centers represent the redundant infrastructure, power, and cooling systems. Commonly assigned to these levels are the following values or functionalities:

The storage and computing capabilities for apps, information, and content are housed in data centers. Access to this data is a major issue in this cloud-based, application-driven world. Using high-speed packet-optical communication, Data Center Interconnect (DCI) technologies join two or more data centers across short, medium, or long distances.

Further, a hyper-converged data center is built on hyper-converged infrastructure (HCI), a software architecture consolidating the compute, network, and storage commodity hardware. The merging of software and hardware components into a single data center streamlines the processing and management process, with the added perk of lowering an organization's IT infrastructure and management costs.

See More: Want To Achieve Five Nines Uptime? 2 Keys To Maximize Data Center Performance

The working of a data center is based on the successful execution of data center operations: the systems and processes that maintain the data center on a daily basis.

Data center operations consist of establishing and managing network resources, assuring data center security, and monitoring power and cooling systems. Different kinds of data centers, differing in size, dependability, and redundancy, are defined by the IT needs of enterprises that operate data centers. The expansion of cloud computing is driving their modernization, including automation and virtualization.

Data centers comprise real or virtual servers linked externally and internally via communication and networking equipment to store, transport, and access digital data. Each server is comparable to a home computer in that it contains a CPU, storage space, and memory but is more powerful. Data centers use software to cluster computers and divide the load among them. To keep all of this up and running, the data center uses the following key elements:

Availability in a data center refers to components that are operational at all times. Periodically, systems are maintained to guarantee future activities run smoothly. You may arrange a failover in which a server switches duties to a distant server to increase redundancy. In IT infrastructure, redundant systems reduce the risk of single-point failure.

A Network Operations Center (NOC) is a workspace (or virtual workplace) for employees or dedicated workers tasked with monitoring, administering, and maintaining the computer resources in a data center. A NOC can supply all of the data center's information and updates on all activities. The responsible person at a NOC may see and control network visualizations that are being monitored.

Unquestionably, power is the most critical aspect of a data center. Colocation equipment or web hosting servers use a dedicated power supply inside the data center. Every data center needs power backups to ensure its servers are continually operational and that overall service availability is maintained.

A safe data center requires the implementation of security mechanisms. One must first identify the weaknesses in the DC's infrastructure. Multi-factor identification, monitoring across the whole building, metal detectors, and biometric systems are a few measures that one may take to ensure the highest level of security. On-site security personnel are also necessary to a data center.

Power and cooling are equally crucial in a data center. The colocation equipment and web-hosting servers need sufficient cooling to prevent overheating and guarantee their continued operation. A data center should be constructed so that there is enough airflow and the systems are always kept cool.

Uninterruptible power supply (UPS) units, as well as generators, are components of backup systems. During power disruptions, a generator may be configured to start automatically. As long as the generators have fuel, they will remain on during a blackout. UPS systems should provide redundancy so that a failed module does not compromise the overall system's capability. Regular maintenance of the UPS and batteries decreases the likelihood of failure during a power outage.

A computerized maintenance management system (CMMS) is among the most effective methods to monitor, measure, and enhance your maintenance plan. This program enables the data center management to track the progress of maintenance work performed on their assets and the associated costs. It will aid in lowering maintenance costs and boosting internal efficiency.

In a modern data center, artificial intelligence (AI) also plays an essential role. AI enables algorithms to fulfill conventional Data Center Infrastructure Management (DCIM) tasks by monitoring energy distribution, cooling capacity, server traffic, and cyber threats in real time and automatically adjusting for efficiency. AI can shift workloads to underused resources, identify possible component faults, and balance pooled resources. It accomplishes this with minimal human intervention.

See More: What Is Enterprise Data Management (EDM)? Definition, Importance, and Best Practices

The different types of data centers include:

Organizations construct and own these private data centers for their end customers. They may be placed both on-site and off-site and serve a single organization's IT processes and essential apps. An organization may isolate business activities from data center operations in the event of a natural catastrophe. Or, it may construct its data center in a cooler environment to reduce energy consumption.

Multi-tenant data centers (also called colocation data centers) provide data center space to organizations desiring to host their computer gear and servers remotely.

These spaces for rent inside colocation centers are the property of other parties. The renting company is responsible for providing the hardware, while the data center offers and administers the infrastructure, which includes physical area, connectivity, ventilation, and security systems. Colocation is attractive for businesses that want to avoid the high capital costs involved with developing and running their own data centers.

The desire for immediate connection, the expansion of the Internet of Things (IoT), and the requirement for insights and robotics are driving the emergence of edge technologies, which enable processing to take place closer to actual data sources. Edge data centers are compact facilities that tackle the latency issue by being located nearer to the network's edge and to data sources.

These data centers are tiny and placed close to the users they serve, allowing for low-latency connection with smart devices. By processing multiple services as near-to-end users as feasible, edge data centers enable businesses to decrease communication delays and enhance the customer experience.

Hyperscale data centers are intended to host IT infrastructure on a vast scale. These hyperscale computing infrastructures, synonymous with large-scale providers like Amazon, Meta, and Google, optimize hardware density while reducing the expense of cooling and administrative overhead.

Hyperscale data centers, like business data centers, are owned and maintained by the organization they serve, although on a considerably broader level for platforms for cloud computing and big data retention. The minimum requirements for a hyperscale data center are 5,000 servers, 500 cabinets, and 10,000 square feet of floor space.

These dispersed data centers are operated by third-party or public cloud providers like AWS, Microsoft Azure, and Google Cloud. The leased infrastructure, predicated on an infrastructure-as-a-service approach, enables users to establish a virtual data center within minutes. Remember that, for the cloud provider managing it, a cloud data center operates like any other physical data center.

See More: What Is a Data Catalog? Definition, Examples, and Best Practices

A modular data center is a module or physical container bundled with ready-to-use, plug-and-play data center elements: servers, storage, networking hardware, UPS, stabilizers, air conditioners, etc. Modular data centers are used on building sites and in disaster zones (to stand up alternate care sites during the pandemic, for example). In permanent situations, they are implemented to make space available or to let an organization expand rapidly, such as installing IT equipment to support classrooms in an educational institution.

In a managed data center, a third-party provider provides enterprises with processing, data storage, and other associated services to aid in managing their IT operations. This data center type is deployed, monitored, and maintained by the service provider, who offers the functionalities via a controlled platform.

You may get managed data center services through a colocation facility, cloud-based data centers, or a fixed hosting location. A managed data center might be entirely or partly managed, but these are not multi-tenant by default, unlike colocation.

See More: What Is Data Modeling? Process, Tools, and Best Practices

The modern data center design has shifted from an on-premises infrastructure to one that mixes on-premises hardware with cloud environments wherein networks, apps, or workloads are virtualized across multiple private and public clouds. This innovation has revolutionized the design of data centers since all components are no longer co-located and may only be accessible over the Internet.

Generally speaking, there are four kinds of data center structures: multi-tier, mesh, mesh point of delivery, and super spine mesh. Let us start with the most common. The multi-tier structure, which consists of the core, aggregation, and access layers, has emerged as the most popular architectural approach for corporate data centers.

The mesh data center architecture follows next. The mesh network model refers to a topology in which data is exchanged between components through interconnected switches. It can provide basic cloud services thanks to its dependable capacity and minimal latency. Moreover, because of its distributed network topology, the mesh configuration can quickly materialize any connection and is less costly to construct.

The mesh point of delivery (PoD) architecture comprises several leaf switches connected inside each PoD. It is a recurrent design pattern wherein components improve the data center's modularity, scalability, and administration. Consequently, data center managers may rapidly add new data center architecture to their existing three-tier topology to meet the extremely low-latency data flow of new cloud apps.

Finally, super spine architecture is suitable for large-scale, campus-style data centers. This kind of data center architecture handles vast volumes of data through east-west data corridors.

In each of these architectural alternatives, the data center comprises a facility and its internal infrastructure. The site is where the data center is physically located. A data center is a big, open space in which infrastructure is installed. Virtually any place is capable of housing IT infrastructure.

Infrastructure is the extensive collection of IT equipment installed inside a facility. This refers to the hardware responsible for running applications and providing business and user services. A traditional IT infrastructure includes, among other elements, servers, storage, computer networks, and racks.

There are no obligatory or necessary criteria for designing or building a data center; a data center is intended to satisfy the organization's unique needs. However, the fundamental purpose of any standard is to offer a consistent foundation for best practices. Several modern data center specifications exist, and a company may embrace a few or all of them.

See More: What Is Kubernetes Ingress? Meaning, Working, Types, and Uses

When designing, managing, and optimizing a data center, here are the top best practices to follow:

When developing a data center, it is crucial to provide space for growth. To save costs, data center designers may seek to limit facility capacities to the organization's present needs; nevertheless, this might be a costly error in the long run. Having room available for new equipment is vital as your needs change.

You cannot regulate what you do not measure; thus, monitor energy usage to understand the system efficiency of your data center. Power usage effectiveness (PUE) is a metric used to track and reduce non-computing energy use, like cooling and power transmission. Measuring PUE frequently is required for optimal use. Since seasonal weather variations greatly influence PUE, it is even more essential to gather energy information for the whole year.
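
For reference, PUE is simply the ratio of total facility energy to the energy consumed by IT equipment alone, so a quick back-of-the-envelope calculation looks like this (the figures are illustrative, not measurements from any real facility):

```python
# PUE (power usage effectiveness) = total facility energy / IT equipment energy.
# A value of 1.0 would mean zero overhead; the figures below are illustrative.
it_equipment_kwh = 1_000_000     # servers, storage, and network gear
total_facility_kwh = 1_450_000   # IT load plus cooling, power distribution, lighting

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}")        # 1.45: every IT kWh costs another 0.45 kWh of overhead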

Inspections and preventative maintenance are often performed at fixed time intervals to prevent the breakdown of components and systems. Nonetheless, this technique disregards actual operating conditions. Utilizing analytics and intelligent monitoring technologies may alter maintenance procedures. A powerful analytics platform with machine learning capabilities can forecast maintenance needs.

Even with the declining price of computer memory, global archiving incurs billions of dollars in costs annually. By deleting unneeded data and retaining only what is required, the IT infrastructure of data centers is freed of its burden, resulting in decreased conditioning expenses and energy consumption and more effective allocation of computing resources and storage.

For data centers, creating backup pathways for networked gear and communication channels in the event of a failure is a big challenge. These redundancies offer a backup system that allows personnel to perform maintenance and execute system upgrades without disturbing service or to transition to the backup system when the primary system fails. Tier systems within data centers, numbered from one to four, define the uptime that customers may expect (4 being the highest).
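
The arithmetic behind those uptime expectations is straightforward: an availability percentage translates into a budget of allowed downtime per year, as the short sketch below shows (the percentages are illustrative inputs rather than official figures for any specific tier standard).

```python
# Converting an availability percentage into allowed downtime per year.
# The percentages are illustrative inputs, not official tier definitions.
HOURS_PER_YEAR = 24 * 365

for availability in (99.67, 99.98, 99.995):
    downtime_hours = (1 - availability / 100) * HOURS_PER_YEAR
    print(f"{availability}% available -> about {downtime_hours:.1f} hours of downtime per year")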

See More: Why the Future of Database Management Lies In Open Source

Data centers are the backbone of modern-day computing. Not only do they house information, but they also support resource-heavy data operations like analysis and modeling. By investing in your data center architecture, you can better support IT and business processes. A well-functioning data center is one with minimal downtime and scalable capacity while maintaining costs at an optimum.

Did this article help you understand how data centers work? Tell us on Facebook, Twitter, and LinkedIn. We'd love to hear from you!

Read more here:

What is a Data Center? Working & Best Practices Explained - Spiceworks News and Insights

Categories
Cloud Hosting

Alibaba partners with Avalanche to host nodes – CryptoTvplus

Alibaba Cloud, the cloud computing service and subsidiary of the Alibaba Group, has announced an integration with validators on the Avalanche blockchain. The Avalanche blockchain made the announcement, which will see the expansion of Web3 services into Web2 infrastructure.

Details of the partnership show that users of the Avalanche blockchain can launch validator nodes using the Alibaba Cloud system, including storage and distribution of resources on the largest cloud infrastructure in Asia.

As part of a way to incentivize validators, Alibaba Cloud is giving Avalanche validators credit via coupons for their services.

Alibaba Cloud, which was launched in 2009 as the cloud service for the Alibaba Group, serves 85 zones in 28 regions globally. It is also tagged as the third largest cloud computing service in the world as businesses around the world rely on its IaaS (Infrastructure as a Service) model to grow and scale.

Elastic computing, network virtualization, database, storage, security, management, and application services are some of the utilities available in the Alibaba cloud system.

Avalanche is an eco-friendly blockchain launched in 2020 with fast finality, on which scalable applications and services can be designed for institutional and individual purposes. Several top projects that use Avalanche include BENQi, Aave, Chainlink, Curve, and Sushi. It currently has over 1,000 validators and a throughput of more than 4,500 transactions per second.

Alibaba Cloud is not the only Web2 infrastructure that's integrating and supporting the hosting of blockchain nodes. In November, Google Cloud announced its partnership with Solana to enable validators to use its cloud services for hosting nodes.

For dApp builders, Alibaba has published a guide on how to set up Avalanche nodes using the Alibaba Cloud.


Visit link:

Alibaba partners with Avalanche to host nodes - CryptoTvplus

Categories
Cloud Hosting

Pentagon Awards $9B Cloud Contract to Amazon, Google, Microsoft, Oracle – Nextgov

The Pentagon on Wednesday announced the awardees of the Joint Warfighting Cloud Capability, or JWCC, contract, with Amazon Web Services, Google, Microsoft and Oracle each receiving an award.

Through the contract, which has a $9 billion ceiling, the Pentagon aims to bring enterprisewide cloud computing capabilities to the Defense Department across all domains and classification levels, with the four companies competing for individual task orders.

Last year, the Defense Department had named the four companies as contenders for the multi-cloud, multi-vendor contract.

"The purpose of this contract is to provide the Department of Defense with enterprise-wide, globally available cloud services across all security domains and classification levels, from the strategic level to the tactical edge," the Defense Department said in a Wednesday announcement.

The awards come after a years-long effort to provide enterprisewide cloud computing across the department, with a significant delay in March as the DOD conducted due diligence with the four vendors.

All four companies issued statements the day after the award.

"We are honored to have been selected for the Joint Warfighting Cloud Capability contract and look forward to continuing our support for the Department of Defense," said Dave Levy, Vice President, U.S. Government, Nonprofit, and Healthcare at AWS. "From the enterprise to the tactical edge, we are ready to deliver industry-leading cloud services to enable the DoD to achieve its critical mission."

"Oracle looks forward to continuing its long history of success with the Department of Defense by providing our highly performant, secure, and cost-effective cloud infrastructure," Ken Glueck, Executive Vice President, Oracle, said in a statement. "Built to enable interoperability, Oracle Cloud Infrastructure will help drive the DoD's multicloud innovation and ensure that our defense and intelligence communities have the best technology available to protect and preserve our national security."

"The selection is another clear demonstration of the trust the DoD places in Microsoft and our technologies," Microsoft Federal President Rick Wagner said in a blog post. "Our work on JWCC will build on the success of our industry-leading cloud capabilities to support national security missions that we have developed and deployed across the department and service branches."

"We are proud to be selected as an approved cloud vendor for the JWCC contract," Karen Dahut, CEO of Google Public Sector, said in a statement.

JWCC itself was announced in July 2021 following the failure and cancellation of the Joint Enterprise Defense Infrastructure, or JEDI, contract, DOD's previous effort aimed at providing commercial cloud capabilities to the enterprise.

Conceptualized in 2017, JEDI was designed to be the Pentagon's war cloud, providing a common and connected global IT fabric at all levels of classification for customer agencies and warfighters. A single-award contract worth up to $10 billion, JEDI would have put a single cloud service provider in charge of hosting and analyzing some of the military's most sensitive data. JEDI was delayed for several years over numerous lawsuits that ultimately caused the Pentagon to reconsider its plan, opting for a multi-cloud approach more common in the private sector.

For many years, Amazon Web Services, by virtue of its 2013 contract with the Central Intelligence Agency, was the only commercial cloud provider with the security accreditations allowing it to host the DOD's most sensitive data. In the interim, however, Microsoft has achieved the top-secret accreditation, and Oracle and Google both achieved Impact Level 5, or IL5, accreditation, allowing the two companies to host the department's most sensitive unclassified data in their cloud offerings. Oracle has also achieved top-secret accreditation.

JWCC is just one of several multibillion-dollar cloud contracts the government has awarded over the past few years. In late 2020, the CIA awarded its Commercial Cloud Enterprise, or C2E, contract to five companies: AWS, Microsoft, Google, Oracle and IBM. The contract could be worth tens of billions of dollars, according to contracting documents, and the companies will compete for task orders issued by various intelligence agencies.

Last April, the National Security Agency re-awarded its $10 billion cloud contract, codenamed Wild and Stormy, to AWS following a protest from losing bidder Microsoft. The contract is part of the NSA's modernization effort, the Hybrid Compute Initiative, which will move some of the NSA's crown jewel intelligence data from internal servers to AWS' air-gapped cloud.

Editor's note: This story was updated to include statements from all four cloud service providers.

Read the original post:

Pentagon Awards $9B Cloud Contract to Amazon, Google, Microsoft, Oracle - Nextgov

Categories
Cloud Hosting

Alibaba Cloud Pros and Cons – ITPro Today

Should you use Alibaba Cloud?

That's a question that you may not even think to ask yourself, given that Alibaba's cloud computing services tend to receive much less attention in the media than those of the "Big Three" cloud providers, meaning Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

Related: Why Public Cloud Vendors Must Get Serious About eBPF Now

But when it comes to the diversity of the cloud services available, as well as pricing and geographic coverage, Alibaba Cloud is in many cases a worthy competitor with the other, better-known cloud platforms.

To understand when it does and doesn't make sense to consider Alibaba Cloud, let's examine Alibaba Cloud's pros and cons as a cloud computing platform.

Related: How the Cloud Made Computing Harder, Not Easier

Alibaba Cloud is a public cloud platform. It's owned by Alibaba, a China-based multinational business that is also a major player in the e-commerce and retail industries. Currently, Alibaba Cloud is the fourth-largest public cloud provider globally, after AWS, Azure, and GCP.

Like other major public clouds, Alibaba Cloud offers a broad set of cloud services, such as:

Alibaba Cloud also provides a variety of native management and monitoring tools, the equivalents of solutions like AWS CloudWatch and IAM.

The breadth of Alibaba Cloud's services is one factor that sets Alibaba Cloud apart from "alternative cloud" providers, many of which specialize only in certain types of services (like storage).

Compared with other major public clouds, Alibaba Cloud offers a few notable advantages:

In short, Alibaba Cloud offers the same large selection of core services as AWS, Azure, and GCP. In some cases, Alibaba's services cost less. And when it comes to geographical presence in Asia, Alibaba Cloud beats the Big Three clouds hands-down.

On the other hand, there are common reasons why businesses may opt not to use Alibaba Cloud, including:

To be sure, Alibaba Cloud is still evolving rapidly, and it's possible that these disadvantages will abate as it grows. But for now, Alibaba Cloud remains most heavily invested in the Asia-Pacific region, which means its support for workloads, tools, and engineers that are based in other parts of the world is limited.

There's also not a lot of reason to believe that Alibaba Cloud will be expanding its presence in North American or European markets in the near future. It hasn't added data centers in those regions since the mid-2010s, although it has continued expanding its footprint in Asia. And when Alibaba Cloud talks about engaging the North American market, it's usually in the context of working with North American companies seeking to expand operations in Asia, rather than landing customers that don't have a presence in Alibaba's backyard.

In general, then, it seems that Alibaba Cloud's business strategy is focused on owning the Asia-Pacific market and leaving the rest of the world to AWS, Azure, and GCP, rather than going head-to-head with the Big Three clouds.

To sum up, whether Alibaba Cloud is a good fit for hosting your workloads depends on:

These considerations may change as Alibaba Cloud continues to evolve, especially if the company invests more heavily in markets outside of Asia-Pacific. But at present, Alibaba Cloud's appeal for businesses that don't have a strong presence in Asia remains limited.


Read more here:

Alibaba Cloud Pros and Cons - ITPro Today

Categories
Cloud Hosting

St. Cloud Hosting Summit on the Vision for Its Downtown – KVSC-FM News

By Zac Chapman / Assistant News Director

The city of St. Cloud is bringing in strategists to examine the vision for the future of its downtown with a goal of rebooting the historic area.

The summit is exploring lessons learned from American downtowns throughout COVID-19. The event is bringing community partners, businesses and others together with the goal of increasing the quality of the downtown's vision.

Four speakers are giving presentations at the summit.

Chris Leinberger, named a Top 100 Most Influential Urbanist, is the downtown strategist and investor.

Tobias Peter is Assistant Director of the American Enterprise Institute's Housing Center and is focusing on housing market trends and policy.

Mayor Dave Kleis and CentraCare CEO Ken Holmen are speaking about St. Cloud's unique opportunity to create an active, walkable downtown through strategic investment in housing and workforce amenities.

The downtown summit is Monday, Dec. 12 at 6 p.m. at St. Cloud's River's Edge Convention Center. The room will open at 5:30 p.m. for a pre-event social and networking. It is free to attend.

Go here to see the original:

St. Cloud Hosting Summit on the Vision for Its Downtown - KVSC-FM News